Setting Test Cases
Programmatic Testcase Declaration
In addition to passing test cases as command-line arguments or letting the SDK fetch and reuse the set of test cases already submitted to the baseline version, you may sometimes want to declare your test cases in code or infer them at test runtime: from the files or datasets in a given directory, the rows of a given column in a CSV file, a nested array in the JSON response of an API endpoint, and so on.
In Python, you could use the testcases parameter of the @touca.workflow decorator, which accepts a list, a generator, or a generator function.
@touca.workflow(testcases=["alice", "bob", "charlie"])
def students_test(username: str):
    student = code_under_test.find_student(username)
    touca.check("gpa", student.gpa)
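The testcases parameter also accepts a plain generator, such as a generator expression. Here is a minimal sketch, reusing the same hypothetical code_under_test module as the snippet above:

# Passing a generator expression; equivalent to the list form above.
@touca.workflow(testcases=(username for username in ["alice", "bob", "charlie"]))
def students_test(username: str):
    student = code_under_test.find_student(username)
    touca.check("gpa", student.gpa)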
Here's the same snippet if we were to use a generator function.
def find_testcases():
    for username in ['alice', 'bob', 'charlie']:
        yield username

@touca.workflow(testcases=find_testcases)
def students_test(username: str):
    student = code_under_test.find_student(username)
    touca.check("gpa", student.gpa)
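The generator function is also a convenient place to infer test cases at runtime, as mentioned earlier. The sketch below assumes, purely for illustration, that each test case corresponds to a JSON file in a local profiles directory; the directory name, file extension, and code_under_test module are hypothetical:

import os

import touca

def find_testcases():
    # Derive test case names from the JSON files in a local "profiles"
    # directory (an illustrative assumption, not an SDK requirement).
    for filename in sorted(os.listdir("profiles")):
        if filename.endswith(".json"):
            yield os.path.splitext(filename)[0]

@touca.workflow(testcases=find_testcases)
def students_test(username: str):
    student = code_under_test.find_student(username)
    touca.check("gpa", student.gpa)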
In C++, you could pass a third parameter to the test runner function touca::workflow to specify the set of test cases to use for that workflow.
touca::workflow(
    "students_test",
    [](const std::string& username) {
      const auto& student = find_student(username);
      touca::check("gpa", student.gpa);
    },
    [](touca::WorkflowOptions& w) {
      w.testcases = {"alice", "bob", "charlie"};
    });
You can use lambda captures to pass in test cases retrieved by your custom logic.
std::vector<std::string> usernames = {"alice", "bob", "charlie"};

touca::workflow(
    "students_test",
    [](const std::string& username) {
      const auto& student = find_student(username);
      touca::check("gpa", student.gpa);
    },
    [&usernames](touca::WorkflowOptions& w) {
      w.testcases = usernames;
    });
In TypeScript, you could pass a third parameter to the touca.workflow function, setting its testcases field to either a list or a callback function.
touca.workflow(
  "students_test",
  async (username: string) => {
    const student = await find_student(username);
    touca.check("gpa", student.gpa);
  },
  { testcases: ["alice", "bob", "charlie"] }
);
In Java, we use the @Touca.Workflow annotation to declare a workflow. Since annotations are processed at compile time, setting testcases as an annotation parameter could be limiting. Instead, the Java SDK provides Touca.setWorkflowOptions, which can be called within the main function prior to calling Touca.run.
public final class StudentsTest {
  @Touca.Workflow
  public void findStudent(final String username) {
    Student student = Students.findStudent(username);
    Touca.check("gpa", student.gpa);
  }

  public static void main(String[] args) {
    Touca.setWorkflowOptions("findStudent", x -> {
      x.testcases = new String[] { "alice", "bob", "charlie" };
    });
    Touca.run(StudentsTest.class, args);
  }
}
Best Practices
Identifying the code under test is the first step in developing any test tool. We recommend that you choose your code under test such that it constitutes a pure function (in the mathematical sense): a function whose return value remains the same for the same arguments and whose evaluation has no side effects. This way, for any implementation of our code, a given input always yields a specific output. If a subsequent implementation yields a different output for the same input, we can argue that the changes in the implementation have introduced regressions.
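As an illustration of this guideline, consider the following sketch; the parse_profile function and its input format are hypothetical and stand in for your own code under test:

def parse_profile(raw_record: dict) -> dict:
    # A pure function: the output depends only on the input, and nothing
    # outside the function is read or modified.
    return {
        "name": raw_record["name"].strip().title(),
        "gpa": round(float(raw_record["gpa"]), 2),
    }

Side effects such as database writes, logging, or reliance on the current time are better kept in the calling code, so that identical inputs always produce identical captured results.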
The type and definition of the input to our code under test are arbitrary and restricted only by the requirements of our workflow. But Touca tests always take a short, unique, and URL-friendly string as the test case. As an example, a possible test case for our Touca test may be the name of a locally stored file or directory in which the input for the code under test is stored. It is up to us to load the actual input to our code under test based on the filename provided as the test case.
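Here is a hedged sketch of that pattern, reusing the hypothetical parse_profile function sketched above: the test case is the name of a locally stored JSON file, and the workflow loads the actual input from that file before exercising the code under test. The profiles directory is again an assumption for illustration:

import json

import touca

@touca.workflow
def parse_profile_test(testcase: str):
    # Treat the test case as a filename and load the real input from disk.
    with open(f"profiles/{testcase}.json") as file:
        raw_record = json.load(file)
    # parse_profile is the hypothetical pure function sketched earlier.
    profile = parse_profile(raw_record)
    touca.check("name", profile["name"])
    touca.check("gpa", profile["gpa"])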
The effectiveness of a Touca test depends, in part, on the variety of its test cases. Ideally, we should have enough test cases to cover all execution branches of our code under test. This may be difficult to achieve in real-world applications where the code under test may be arbitrarily complicated. As a more practical alternative, we recommend that the test cases be chosen such that they represent the range of typical inputs given to our software in production environments.