API Reference

Client Library

Entry-point to the Touca SDK for Python.

You can install this SDK via pip install touca and import it in your code via:

import touca

Alternatively, you can import individual functions, which may be useful in the rare cases when you want to call them from production code:

from touca import check

If you are just getting started with Touca, we generally recommend that you install the SDK as a development-only dependency.

touca.configure(**kwargs) bool

Configures the Touca client. Must be called before declaring test cases and adding results to the client. Should be regarded as a potentially expensive operation and called only from your test environment.

configure() takes a variety of configuration parameters, documented below. All of these parameters are optional. It is possible to call this function without any parameters: the client can still capture behavior and performance data and store them on the local filesystem, but it will not be able to post them to the Touca server.

In most cases, you will need to pass the API Key and API URL during configuration. The code below shows the common pattern in which the API URL is given in long format (it includes the team slug and the suite slug) while the API Key and the version of the code under test are specified via the environment variables TOUCA_API_KEY and TOUCA_TEST_VERSION, respectively:

touca.configure(api_url='https://api.touca.io/@/acme/students')

When the API Key and API URL of the Touca server are known to the client, it attempts to authenticate with the Touca server. In rare cases, you can explicitly disable this communication by setting the configuration option offline to True.

You can call configure() any number of times. The client preserves the configuration parameters specified in previous calls to this function.

Parameters
  • api_key (str, optional) – API Key issued by the Touca server that identifies who is submitting the data. Since the value should be treated as a secret, we recommend that you pass it as the environment variable TOUCA_API_KEY instead.

  • api_url (str, optional) – URL to the Touca server API, either in long format like https://api.touca.io/@/myteam/mysuite/version or in short format like https://api.touca.io. If the long format specifies the team, suite, or version, you do not need to specify them separately.

  • team (str, optional) – slug of your team on the Touca server.

  • suite (str, optional) – slug of the suite on the Touca server that corresponds to your workflow under test.

  • version (str, optional) – version of your workflow under test.

  • offline (bool, optional) – determines whether the client should connect to the Touca server during configuration. Defaults to False when api_url or api_key is provided.

  • concurrency (bool, optional) – determines whether the scope of test case declaration is bound to the thread performing the declaration or shared across all threads. Defaults to True. If set to True, when a thread calls declare_testcase(), all other threads also have their most recent test case changed to the newly declared one, and any subsequent call to a data-capturing function such as check() affects the newly declared test case.

touca.is_configured() bool

Checks if previous call(s) to configure() have set the right combination of configuration parameters to enable the client to perform expected tasks.

We recommend that you perform this check after client configuration and before calling other functions of the library:

if not touca.is_configured():
    print(touca.configuration_error())
    sys.exit(1)

At a minimum, the client is considered configured if it can capture test results and store them locally on the filesystem. A single call to configure() without any configuration parameters is enough to reach this state. However, if a subsequent call to configure() sets the parameter api_url in short form without specifying parameters such as team, suite, and version, the client configuration is incomplete: we infer that the user intends to submit results, but the provided configuration parameters are not sufficient to perform this operation.

Returns

True if the client is properly configured

Return type

bool

See also

configure()

touca.configuration_error() str

Provides the most recent error, if any, that is encountered during client configuration.

Returns

short description of the most recent configuration error

Return type

str

touca.declare_testcase(name: str)

Declares the name of the test case to which all subsequent results will be submitted, until a new test case is declared.

If the configuration parameter concurrency is set to True, when a thread calls declare_testcase(), all other threads also have their most recent test case changed to the newly declared one. Otherwise, each thread submits to its own test case.

Parameters

name – name of the testcase to be declared

touca.forget_testcase(name: str)

Removes all logged information associated with a given test case.

This information is removed from memory, such that switching back to an already-declared or already-submitted test case behaves as if that test case were being declared for the first time. The information is removed for all threads, regardless of the configuration option concurrency. Information already submitted to the server is not removed from the server.

This operation is useful in long-running regression test frameworks, after a test case has been submitted to the server, when the memory consumed by the client library is a concern or when a future test case with a similar name may be executed.

Parameters

name – name of the testcase to be removed from memory

Raises

ToucaError – when called with the name of a test case that was never declared

touca.check(key: str, value: Any, *, rule: Optional[touca._rules.ComparisonRule] = None)

Captures the value of a given variable as a data point for the declared test case and associates it with the specified key.

Parameters
  • key – name to be associated with the captured data point

  • value – value to be captured as a test result

  • rule – comparison rule for this test result

touca.check_file(key: str, path: str)

Captures an external file as a data point for the declared test case and associates it with the specified key.

Parameters
  • key – name to be associated with captured file

  • path – path to the external file to be captured

touca.assume(key: str, value: Any)

Logs a given value as an assertion for the declared test case and associates it with the specified key.

Parameters
  • key – name to be associated with the logged test result

  • value – value to be logged as a test result

touca.add_array_element(key: str, value: Any)

Appends a given value to the list of results associated with the specified key for the declared test case.

This is a helper utility function, particularly useful for logging a list of items as they are found:

for number in numbers:
    if is_prime(number):
        touca.add_array_element("prime numbers", number)
        touca.add_hit_count("number of primes")

This pattern is syntactic sugar for the following alternative:

primes = []
for number in numbers:
    if is_prime(number):
        primes.append(number)
if primes:
    touca.check("prime numbers", primes)
    touca.check("number of primes", len(primes))

The items added to the list are not required to be of the same type. The following code is acceptable:

touca.add_array_element("prime numbers", 42)
touca.add_array_element("prime numbers", "forty three")

Raises

RuntimeError – if specified key is already associated with a test result which was not iterable

Parameters
  • key – name to be associated with the logged test result

  • value – element to be appended to the array

See also

check()

touca.add_hit_count(key: str)

Increments the value of the specified key every time it is called, creating the key with an initial value of one if it does not exist.

This is a helper utility function, particularly useful for tracking variables whose values are determined in loops with an indeterminate number of iterations:

for number in numbers:
    if is_prime(number):
        touca.add_array_element("prime numbers", number)
        touca.add_hit_count("number of primes")

This pattern is syntactic sugar for the following alternative:

primes = []
for number in numbers:
    if is_prime(number):
        primes.append(number)
if primes:
    touca.check("prime numbers", primes)
    touca.check("number of primes", len(primes))

Raises

RuntimeError – if specified key is already associated with a test result which was not an integer

Parameters

key – name to be associated with the logged test result

See also

check()

touca.add_metric(key: str, milliseconds: int)

Adds an already-obtained measurement to the list of captured performance benchmarks.

Useful for logging a metric that is measured without using this SDK.

Parameters
  • key – name to be associated with this performance benchmark

  • milliseconds – duration of this measurement in milliseconds

touca.start_timer(key: str)

Starts timing an event with the specified name.

Measurement of the event is only complete when function stop_timer() is later called for the specified name.

Parameters

key – name to be associated with the performance metric

touca.stop_timer(key: str)

Stops timing an event with the specified name.

Expects function start_timer() to have been called previously with the specified name.

Parameters

key – name to be associated with the performance metric

touca.add_serializer(datatype: Type, serializer: Callable[[Any], Dict])

Registers custom serialization logic for a given custom data type.

Calling this function is rarely needed. The library already handles custom data types by serializing all of their properties. Custom serializers allow you to exclude a subset of an object's properties during serialization.

Parameters
  • datatype – type to be serialized

  • serializer – function that converts any object of the given type to a dictionary.

touca.save_binary(path: str, cases: List[str] = [])

Stores test results and performance benchmarks in binary format in a file at the specified path.

Touca binary files can be submitted at a later time to the Touca server.

As a general practice, we do not recommend that regression test tools store their test results locally. This feature may be helpful in special cases, such as when regression test tools must run in environments with no access to the Touca server (e.g. with no network access).

Parameters
  • path – path to file in which test results and performance benchmarks should be stored

  • cases – names of test cases whose results should be stored. If this parameter is not specified or is empty, results for all test cases are stored in the specified file.

touca.save_json(path: str, cases: List[str] = [])

Stores test results and performance benchmarks in JSON format in a file at the specified path.

This feature may be helpful during development of regression test tools for quick inspection of the test results and performance metrics being captured.

Parameters
  • path – path to file in which test results and performance benchmarks should be stored

  • cases – names of test cases whose results should be stored. If this parameter is not specified or is empty, results for all test cases are stored in the specified file.

touca.post()

Submits all test results recorded so far to the Touca server.

It is possible to call post() multiple times during the runtime of the regression test tool. Test cases already submitted to the server whose test results have not changed will not be resubmitted. It is also possible to add test results to a test case after it is submitted to the server; any subsequent call to post() will resubmit the modified test case.

Raises

ToucaError – when called on a client that is not configured to communicate with the Touca server.

touca.seal()

Notifies the Touca server that all test cases have been executed for this version and no further test results are expected. Expected to be called by the test tool once all test cases are executed and all test results are posted.

Sealing the version is optional. The Touca server automatically performs this operation once a certain amount of time has passed since the last test case was submitted. This duration is configurable from the “Settings” tab in “Suite” Page.

Raises

ToucaError – when called on a client that is not configured to communicate with the Touca server.

Test Framework

Touca Test Runner for Python is designed to make writing regression test workflows easy and straightforward. The test framework abstracts away common features such as logging, error handling, and reporting test progress.

The following example demonstrates how to use this framework:

import touca
from code_under_test import find_student, calculate_gpa

@touca.workflow
def test_students(testcase: str):
    student = find_student(testcase)
    touca.assume("username", student.username)
    touca.check("fullname", student.fullname)
    touca.check("birth_date", student.dob)
    touca.check("gpa", calculate_gpa(student.courses))

if __name__ == "__main__":
    touca.run()

It is uncommon to run multiple regression test workflows as part of a single suite. However, the pattern above allows introducing multiple workflows by defining multiple functions, each decorated with @touca.workflow.

touca._runner.workflow(method=None, testcases=None)

Registers the decorated function as a regression test workflow to be executed once for each test case.

The following example demonstrates how to use this decorator:

@touca.workflow
def test_students(testcase: str):
    student = find_student(testcase)
    touca.assume("username", student.username)
    touca.check("fullname", student.fullname)
    touca.check("birth_date", student.dob)
    touca.check("gpa", calculate_gpa(student.courses))

touca._runner.run(**options)

Runs registered workflows, one by one, for available test cases.

This function is intended to be called once from the main module as shown in the example below:

if __name__ == "__main__":
    touca.run()

Raises

SystemExit – When configuration options specified as command line arguments, environment variables, or in a configuration file, have unexpected values or are in conflict with each other. Capturing this exception is not required.