
Your First Touca Test

You've made it this far. Great! 👍🏼

We assume you have followed our Setup Your Account tutorial to create an account on Touca. In this document, we will show you how to write and run a simple regression test to submit your first test results to the Touca server.

This is a hands-on tutorial. It's only fun if you follow along. 👨🏻‍💻

Code Under Test

Let us imagine we are building a profile database software that retrieves personal information of students based on their username.

def find_student(username: str) -> Student:

Where type Student could be defined as follows:

class Student:
    username: str
    fullname: str
    dob: datetime.date
    gpa: float

Clone the Touca repository to a directory of your choice.

git clone

Navigate to the examples directory for your preferred programming language. Each example serves as a standalone hands-on tutorial.

Let's focus on the Main API example examples/<lang>/02_<lang>_main_api which includes two modules: students and students_test. The students module represents our code under test: the production code for our profile database software. Our code under test can have any complexity. It may call various nested functions, connect to a database, or scrape the web to return information about a student, given their username.

Check out the students module for a possible "current" implementation:

def find_student(username: str) -> Student:
    data = next((k for k in students if k[0] == username), None)
    if not data:
        raise ValueError(f"no student found for username: {username}")
    return Student(data[0], data[1], data[2], calculate_gpa(data[3]))
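The backing data store is not shown above. As a minimal, self-contained sketch, the following stands in for the students module; the records, the grade format, and the calculate_gpa helper here are hypothetical placeholders, not the repository's actual implementation:

```python
from dataclasses import dataclass
from datetime import date
from typing import List

@dataclass
class Student:
    username: str
    fullname: str
    dob: date
    gpa: float

# Hypothetical in-memory records: (username, fullname, date of birth, grades)
students = [
    ("alice", "Alice Anderson", date(2006, 3, 1), [3.9, 4.0, 3.8]),
    ("bob", "Bob Brown", date(2004, 7, 12), [3.2, 3.5]),
]

def calculate_gpa(grades: List[float]) -> float:
    # average of the recorded grades; 0 for a student with no courses
    return sum(grades) / len(grades) if grades else 0

def find_student(username: str) -> Student:
    data = next((k for k in students if k[0] == username), None)
    if not data:
        raise ValueError(f"no student found for username: {username}")
    return Student(data[0], data[1], data[2], calculate_gpa(data[3]))

print(round(find_student("alice").gpa, 2))  # 3.9
```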

Writing a Touca Test

With Touca, we call our workflow under test with various inputs and try to describe the behavior and performance of our implementation by capturing values of variables and runtime of functions as results and metrics. While this is similar to unit testing, there are fundamental differences:

  • Instead of hard-coding inputs to our code under test, we pass them via the testcase parameter to our Touca test workflow.
  • Instead of hard-coding expected outputs for each test case, we use Touca data capturing functions to record the actual values of important variables.
  • Instead of being bound to checking the output value of our code under test, we can track the value of any variable and the runtime of any function in our code under test.

These differences in approach stem from a difference in objective. Unlike unit testing, our goal is not to verify that our code behaves correctly. We want to check that it behaves and performs as well as before. This way, we can start changing our implementation without causing regressions in our overall software.
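For contrast, here is what a conventional unit test for this workflow might look like; the Student type and the lookup table in this sketch are hypothetical stand-ins used only to make the example runnable:

```python
# A conventional unit test hard-codes both the input ("alice") and the
# expected output, so every new test case means new code and new
# hand-maintained expected values.
from dataclasses import dataclass

@dataclass
class Student:
    username: str
    fullname: str

def find_student(username: str) -> Student:
    # stand-in for the real code under test
    return Student(username, {"alice": "Alice Anderson"}.get(username, ""))

def test_find_student():
    student = find_student("alice")              # hard-coded input
    assert student.fullname == "Alice Anderson"  # hard-coded expected output

test_find_student()
```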

Here is a possible implementation for our first Touca test code:

import touca
from students import find_student

@touca.workflow
def students_test(username: str):
    student = find_student(username)
    touca.assume("username", student.username)
    touca.check("fullname", student.fullname)
    touca.check("birth_date", student.dob)
    touca.check("gpa", student.gpa)

if __name__ == "__main__":
    touca.run()
Notice the absence of hard-coded inputs and expected outputs. Each Touca workflow takes a short, unique, and URL-friendly test case name, maps it to a corresponding input, and passes that input to our code under test. In the above code snippet, once we receive the output of our find_student workflow, we use check to track various characteristics of that output. Touca notifies us if these characteristics change in a future version of our find_student workflow.

We can track any number of variables in each Touca test workflow. More importantly, we can track important variables that are not necessarily exposed through the interface of our code under test. In our example, our software computes the GPA of a student from their courses, using an internal function calculate_gpa. With Touca, we can check this function for regressions by tracking both the calculated GPA and the list of courses, without creating a separate test workflow.

from typing import List

def calculate_gpa(courses: List[Course]):
    touca.check("courses", courses)
    return sum(k.grade for k in courses) / len(courses) if courses else 0

Notice that we are using Touca's check function inside our production code. Touca data capturing functions are no-ops in the production environment. When executed by a Touca workflow in a test environment, they start capturing values and associating them with the active test case.

Lastly, Touca helps us track changes in the performance of different parts of our code, for any number of test cases. While there are various patterns and facilities for capturing performance benchmarks, the most basic are start_timer and stop_timer for measuring the runtime of a given piece of code, along with the scoped_timer context manager shown below.

import touca
from students import find_student

@touca.workflow
def students_test(username: str):
    with touca.scoped_timer("find_student"):
        student = find_student(username)
    touca.assume("username", student.username)
    touca.check("fullname", student.fullname)
    touca.check("birth_date", student.dob)
    touca.check("gpa", student.gpa)
    touca.add_metric("external_source", 1500)

There is so much more that we can cover, but for now, let us accept the above code snippet as the first version of our Touca test code and proceed with running this test.

Running a Touca Test

Let us now run the test we just wrote, using one of the Touca SDKs, to help us detect future changes in the overall behavior or performance of our profile database software.

Navigate to the python/02_python_main_api directory in the examples repository and create a virtual environment using Python v3.6 or newer.

python -m venv .env
source .env/bin/activate

Install the Touca SDK as a third-party dependency:

pip install touca

We can run a Touca test from the command line, passing the following information as command-line arguments:

  • API Key: to authenticate with the Touca server
  • API URL: to specify where test results should be submitted to
  • Revision: to specify the version of our code under test
  • Testcases: to specify what inputs should be given to our workflow under test

We can find our API Key and API URL on the Touca server. We can use any string value for Revision. Most importantly, we can pass any number of test cases to the code under test without ever changing our test logic.

touca config set api-key=<TOUCA_API_KEY>
touca config set api-url=<TOUCA_API_URL>
touca test --revision v1.0 --testcase alice bob charlie

In real-world scenarios, we may have too many test cases to specify as command line arguments. We can write our test cases to a file and pass the path to that file using the --testcase-file option. Alternatively, we can add our test cases directly to the Touca server. When test cases are not provided via --testcase or --testcase-file options, Touca SDKs attempt to retrieve them from the Touca server.
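For example, we could list test cases in a plain-text file, one per line, and point the test runner at it. The file name below is arbitrary, and the touca invocation is shown commented out since it requires a configured API Key and API URL:

```shell
# Write test cases to a plain-text file, one per line.
printf 'alice\nbob\ncharlie\n' > testcases.txt
cat testcases.txt

# Then pass the file to the test runner:
# touca test --revision v1.1 --testcase-file testcases.txt
```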

The above command produces the following output.

Touca Test Framework
Suite: students_test/v1.0

1. SENT alice (127 ms)
2. SENT bob (123 ms)
3. SENT charlie (159 ms)

Tests: 3 submitted, 3 total
Time: 0.57 s

✨ Ran all test suites.

At this point, we should see the results of our test on the Touca server. This is a big milestone. Congratulations! 🎉

Notice that this version is shown with a star icon to indicate that it is the baseline version of our Suite. Touca will compare subsequent versions of our software against the test results submitted for this version.

In the next section, we will see how to use Touca to understand the differences between different versions of our software, investigate their root cause, communicate our findings with our team members, and update the baseline version.