
Quick Start

Welcome! If you are new to Touca, this is the right place to be! Our main objective here is to introduce Touca without taking too much of your time.

Revisiting Unit Testing

Let's assume that we want to test a piece of software that checks whether a given number is prime.

01_python_minimal/is_prime.py
def is_prime(number: int):
    for i in range(2, number):
        if number % i == 0:
            return False
    return 1 < number
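To see the function in action, we can call it by hand with a few inputs (the function below is copied from the listing above so the snippet is self-contained; 51 is included because it looks prime but is not):

```python
def is_prime(number: int):
    for i in range(2, number):
        if number % i == 0:
            return False
    return 1 < number

print(is_prime(13))  # → True
print(is_prime(51))  # → False, since 51 = 3 * 17
print(is_prime(1))   # → False, since primes are greater than 1
```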

We can use unit testing in which we hard-code a set of input numbers and list our expected return value for each input.

from code_under_test import is_prime

def test_is_prime():
    assert is_prime(-1) == False
    assert is_prime(1) == False
    assert is_prime(2) == True
    assert is_prime(13) == True

With unit testing:

  • For each input, we need to specify the corresponding expected output, as part of our test logic.
  • As our software requirements evolve, we may need to go back and change our expected outputs.
  • When we find other interesting inputs, we may need to go back and include them in our set of inputs.

In our example, the input and output of our code under test are a number and a boolean. If we were testing a video compression algorithm, they may have been video files. In that case:

  • Describing the expected output for a given video file would be difficult.
  • When we make changes to our compression algorithm, accurately reflecting those changes in our expected values would be time-consuming.
  • We would need a large number of input video files to gain confidence that our algorithm works correctly.

Introducing Touca

Touca makes it easier to continuously test workflows of any complexity and with any number of test cases.

01_python_minimal/is_prime_test.py
import touca
from is_prime import is_prime

@touca.workflow
def is_prime_test(testcase: str):
    touca.check("output", is_prime(int(testcase)))

This is slightly different from a typical unit test:

  • Touca tests do not use expected values.
  • Touca tests do not hard-code input values.

With Touca, we define how to run our code under test for any given test case. We can capture the values of interesting variables and the runtime of important functions to describe the behavior and performance of our workflow for that test case. Touca SDKs submit this description to a remote Touca server, which compares it against the description submitted for a trusted version of our code. The server visualizes any differences and reports them in near real-time.
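To make this comparison concrete, here is a minimal pure-Python sketch of what the server conceptually does. The `compare_descriptions` helper is hypothetical and for illustration only; it is not part of the Touca SDK or server API. Each "description" maps captured keys (the first argument of `touca.check`) to captured values.

```python
# Hypothetical sketch of server-side comparison, for illustration only.
def compare_descriptions(baseline: dict, candidate: dict) -> dict:
    """Return the keys whose captured values differ between two versions."""
    keys = set(baseline) | set(candidate)
    return {
        key: (baseline.get(key), candidate.get(key))
        for key in keys
        if baseline.get(key) != candidate.get(key)
    }

# Descriptions captured for test case "51" by two versions of is_prime:
v1_0 = {"output": False}  # trusted baseline: 51 = 3 * 17 is not prime
v1_1 = {"output": True}   # a regression slipped into the new version

print(compare_descriptions(v1_0, v1_1))  # → {'output': (False, True)}
```

If the two descriptions match, the diff is empty and the test case passes; otherwise the differing keys point directly at the regression.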

We can run Touca tests with any number of inputs from the command line:

git clone git@github.com:trytouca/trytouca.git
cd trytouca/examples/python
python -m venv .env
source .env/bin/activate
pip install touca
cd 01_python_minimal
touca config set api-key=<TOUCA_API_KEY>
touca config set api-url=<TOUCA_API_URL>
touca test --revision v1.0 --testcase 13 17 51

You can obtain your API Key and URL from the Touca server at app.touca.io or from your own self-hosted instance.

This command produces the following output:


Touca Test Framework
Suite: is_prime_test/v1.0

1. SENT 13 (127 ms)
2. SENT 17 (123 ms)
3. SENT 51 (159 ms)

Tests: 3 submitted, 3 total
Time: 0.57 s

✨ Ran all test suites.

Now if we make changes to our workflow under test, we can rerun this test and rely on Touca to check if our changes affected the behavior or performance of our software.

touca test

Touca Test Framework
Suite: is_prime_test/v1.1

1. PASS 13 (109 ms)
2. PASS 17 (152 ms)
3. PASS 51 (127 ms)

Tests: 3 passed, 3 total
Time: 0.55 s

✨ Ran all test suites.

Unlike integration tests, we are not bound to checking only the output of our workflow. We can capture any number of data points, from anywhere within our code. This is especially useful if our workflow has multiple stages: we can capture the output of each stage without publicly exposing its API. When any stage changes behavior in a future version of our software, our captured data points will help us find the root cause more easily.
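As a sketch of this idea, suppose our workflow had two internal stages. The example below is hypothetical: the `normalize` and `count_words` stages are invented for illustration, and a plain `captured` dictionary stands in for the `touca.check` calls that would record each data point in a real Touca workflow. The point is that the intermediate value is captured even though it never appears in the workflow's public return value.

```python
# Pure-Python sketch: capturing per-stage data points in a two-stage workflow.
# The `captured` dictionary stands in for touca.check calls; it is
# illustrative, not part of the Touca SDK.

def normalize(text: str) -> str:
    """Stage 1: canonicalize the input."""
    return text.strip().lower()

def count_words(text: str) -> int:
    """Stage 2: compute the final output."""
    return len(text.split())

def word_count_workflow(testcase: str) -> dict:
    """Run both stages, capturing each stage's output as a data point."""
    captured = {}
    normalized = normalize(testcase)
    captured["normalized"] = normalized               # intermediate value
    captured["word_count"] = count_words(normalized)  # final output
    return captured

print(word_count_workflow("  Hello Touca World  "))
# → {'normalized': 'hello touca world', 'word_count': 3}
```

If a future version changes only the `normalized` data point, we know the regression originated in stage 1, without inspecting stage 2 at all.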

Summary

Touca is very effective in addressing common problems in the following situations:

  • When we need to test our workflow with a large number of inputs.
  • When the output of our workflow is too complex or too difficult to describe in our unit tests.
  • When interesting information to check for regression is not exposed through the interface of our workflow.

The highlighted design features of Touca can help us test these workflows at any scale.

  • Decoupling our test input from our test logic can help us manage our long list of inputs without modifying the test logic. Managing that list on a remote server accessible to all members of our team can help us add notes to each test case, explain why they are needed, and track how their performance changes over time.
  • Submitting our test results to a remote server, instead of storing them in files, can help us avoid the mundane tasks of managing and processing those results. The Touca server retains test results and makes them accessible to all members of the team. It compares test results using their original data types and reports discovered differences in real-time to all interested members of our team. It allows us to audit how our software evolves over time and provides high-level information about our tests.