# Submitting Results

## Setting API Credentials
```
touca login --help
usage: touca login [--api-url API_URL]

Set API credentials from the CLI

options:
  --api-url API_URL  URL to Touca server API
```
You will need your API Key to submit test results to the Touca server. You can
obtain your API Key from the Touca web interface and manually set it in your
configuration profile. Alternatively, you can use `touca login`, which
automates much of this process: it opens a browser window to the Touca server
to help you log into your account, then automatically finds your API Key and
stores it in your configuration profile.

If you are self-hosting the Touca server, pass the URL to your Touca server
API using `--api-url`.
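For example, the two common flows look like this (the self-hosted URL below is
a hypothetical placeholder for illustration):

```
# opens a browser window to the Touca server, then finds and
# stores your API Key in your configuration profile
$ touca login

# self-hosted deployments: point the CLI at your own server's API
# (hypothetical URL shown for illustration)
$ touca login --api-url https://touca.example.com/api
```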
## Capturing Files
```
touca check --help
usage: touca check --suite SUITE [--testcase TESTCASE] src

Submit external test output to Touca server

positional arguments:
  src                  path to file or directory to submit

options:
  --suite SUITE        name of the suite to associate with this output
  --testcase TESTCASE  name of the testcase to associate with this output
```
You can use `touca check` to submit any external files that may have been
generated as part of the test run.
```
$ touca config set api-key=a66fe9d2-00b7-4f7c-95d9-e1b950d0c906
$ touca config set team=tutorial-509512
$ touca check ./output.file --suite=my-suite
```
Since we did not specify the testcase, `touca check` will infer it from
`./output.file` as `output-file`.
You can also submit an entire directory, in which case every file is treated
as a separate testcase. You can pass `--testcase` to submit them all as part
of one testcase.
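For instance, given a directory `./output-dir` (a hypothetical path used here
for illustration), the two modes would look like this:

```
# every file in ./output-dir becomes a separate testcase
$ touca check ./output-dir --suite=my-suite

# all files in ./output-dir are submitted under a single testcase
$ touca check ./output-dir --suite=my-suite --testcase=nightly-run
```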
Another use case for `touca check` is submitting the standard output of any
given process, to be treated as a test result.
```
$ echo "hello" | touca check --suite=my-suite --testcase=my-testcase
```
Note that in the above case, setting `--testcase` was mandatory since there is
no filename to infer it from.
## Running Tests Locally
```
touca test --help
usage: touca test [-h] ...

Run your Touca tests

options:
  -h, --help            show this help message and exit
  --testdir TESTDIR     path to regression tests directory
  --api-key API_KEY     Touca API Key
  --api-url API_URL     Touca API URL
  --team TEAM           Slug of team to which test results belong
  --suite SUITE         Slug of suite to which test results belong
  --revision VERSION    Version of the code under test
  --offline             Disables all communications with the Touca server
  --save-as-binary      Save a copy of test results on local filesystem in binary format
  --save-as-json        Save a copy of test results on local filesystem in JSON format
  --output-directory OUTPUT_DIRECTORY
                        Path to a local directory to store result files
  --overwrite           Overwrite result directory for testcase if it already exists
  --testcase TESTCASES [TESTCASES ...], --testcases TESTCASES [TESTCASES ...]
                        One or more testcases to feed to the workflow
  --filter WORKFLOW_FILTER
                        Name of the workflow to run
  --log-level {debug,info,warn}
                        Level of detail with which events are logged
  --no-color            Do not use color in standard output
  --config-file CONFIG_FILE
                        Path to a configuration file
```
The Touca CLI makes it convenient to locally execute Touca tests written using
the Touca Python SDK. Simply navigate to any directory and run `touca test`
with your preferred options to execute all the Touca workflows in that
directory.
```
$ git clone git@github.com:trytouca/trytouca.git
$ cd trytouca/examples/python/02_python_main_api/
$ touca config set api-key=a66fe9d2-00b7-4f7c-95d9-e1b950d0c906
$ touca config set team=tutorial-509512
$ touca test
```
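The options listed in the help output compose as you would expect. As a sketch
(the version slug and testcase names below are hypothetical), you could run
two specific testcases and keep a local JSON copy of the results:

```
# run two testcases against version v1.2 and store results locally
$ touca test --revision v1.2 --testcase alice bob \
    --save-as-json --output-directory ./results
```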
## Running Tests on a Test Machine
```
touca run --help
usage: touca run -p PROFILE [-r REVISION]

Run tests on a dedicated test server

options:
  -p PROFILE, --profile PROFILE
                        path to the profile file
  -r REVISION, --revision REVISION
                        specific version to run
```
`touca test` is convenient for running Touca tests on your own machine or as
part of a CI workflow. But these two environments have important limitations:

- Running Touca tests at scale with a large set of test cases could be
  time-consuming, leading to longer build times.
- Our test workflows may require access to input data that could be large in
  size and difficult to provision on a build server.
- Performance benchmarks obtained on a build server are prone to noise.
To allow testing mission-critical software workflows at scale, the Touca CLI
provides the `touca run` command, designed to be used on a dedicated test
machine. This command takes a recipe describing your software delivery
pipeline that specifies how to download and deploy new releases of your
software and how to run your tests against them.
Let's imagine that we want to automatically run Touca tests that are packaged
and deployed to JFrog Artifactory as part of our build pipeline. Our objective
is to set up a testing pipeline that periodically checks Artifactory for new
versions of our artifact, pulls each new version, installs it, executes the
Touca tests, and archives the generated test results.
```
$ touca run --profile ./sample.json --revision 1.2.3
```
Here, `sample.json` is a test profile that describes our test setup:
```json
{
  "artifactory": {
    "artifact_url": "https://touca.jfrog.io/artifactory/{repo}/{group}/{name}/{name}-{version}.tar.gz",
    "repo": "example-repo-local",
    "group": "io.touca",
    "name": "sample_app"
  },
  "install": {
    "installer": "sample_app/install.sh",
    "destination": "./deployed"
  },
  "execution": {
    "executable": "sample_app",
    "output-directory": "./output",
    "suite": "sample_app"
  },
  "archive": {
    "dir": "./archive"
  }
}
```
Check out our JSON schema file for a full list of supported fields and scenarios.
We can use Linux cron jobs or Windows Task Scheduler to automate the execution
of the above command. If we intend to run Touca tests for every version of our
code, we can use a short polling interval such as 5 minutes. The CLI
gracefully exits if there are no new versions of our software available in
Artifactory.
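As a sketch, a crontab entry with a 5-minute polling interval might look like
this (the profile path below is hypothetical; adjust it to your environment):

```
# poll Artifactory every 5 minutes and run tests for any new version
*/5 * * * * touca run --profile /home/tester/profiles/sample.json
```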
For Touca enterprise customers, we provide the test infrastructure and tooling
required to remotely execute Touca tests at scale, based on custom
requirements.
## Submission Mode
To provide quick feedback, the Touca CLI commands `touca check` and
`touca test`, as well as the test runners for all SDKs, run in synchronous
mode by default. In this mode, as test cases are executed, the test output
includes whether the test results submitted for a given test case match the
baseline.
If you prefer to submit the captured results for a test without waiting for
the comparison results to complete, set the configuration option
`submit_async`.
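Assuming that `submit_async` can be set through the same configuration profile
mechanism used for `api-key` and `team` earlier in this section (an assumption
based on those examples, not confirmed here), enabling it would look like
this:

```
# assumption: submit_async is set like the other configuration options
$ touca config set submit_async=true
```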