If you are not using Robot Framework, see the Python documentation for information about integrating with the lower-level library.
The Tesults Python library and Tesults Listener for Robot Framework are hosted on PyPI and are compatible with Python 2 and 3.
First install tesults:
pip install tesults
Then install robot-tesults, which depends on the tesults package above:
pip install robot-tesults
targetRequired
A target token is required to push results to Tesults. If this arg is not provided, robot-tesults does not attempt an upload, effectively disabling it. Get your target token from the configuration menu in the Tesults web interface.
Inline method
robot --listener TesultsListener:target=eyJ0eXAiOiJ... robot.tests
In this case, the target token is supplied inline in the command-line args. This is the simplest approach.
Note that Robot Framework expects listener args to be separated by colons (:). That is:
robot --listener TesultsListener:arg1:arg2:arg3 robot.tests
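The colon-splitting convention can be illustrated with a short Python sketch. This is illustrative only: Robot Framework itself performs this parsing before passing args to the listener, and the function name here is hypothetical.

```python
def parse_listener_args(arg_string):
    # Robot Framework splits listener arguments on colons; each piece
    # is treated here as a key=value pair, as in target=... or config=...
    args = {}
    for part in arg_string.split(":"):
        key, _, value = part.partition("=")
        args[key] = value
    return args

print(parse_listener_args("target=token1:config=tesults.config"))
# {'target': 'token1', 'config': 'tesults.config'}
```

One consequence of the colon convention is that values containing colons cannot be passed this way, which is one reason the config-file method below exists.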
Key method
robot --listener TesultsListener:target=target1:config=configFile robot.tests
In this case, robot-tesults automatically looks up the token value in a configuration file using the key provided (here, target1).
Here is the corresponding tesults.config file:
[tesults]
target1 = eyJ0eXAiOiJ...
target2 = ...
target3 = ...
target4 = ...
Or something more descriptive about the targets:
[tesults]
web-qa-env = eyJ0eXAiOiJ...
web-staging-env = ...
web-prod-env = ...
ios-qa-env = eyJ0eXAiOiK...
ios-staging-env = ...
ios-prod-env = ...
android-qa-env = eyJ0eXAiOiL...
android-staging-env = ...
android-prod-env = ...
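The config file uses standard INI syntax, so the lookup can be sketched with Python's built-in configparser. This is an assumption about how robot-tesults reads the file, not its actual implementation; the truncated token is the placeholder from the example above.

```python
import configparser

# Write a sample tesults.config matching the example above
# (token truncated for illustration; use your real target token).
sample = """[tesults]
target1 = eyJ0eXAiOiJ...
"""
with open("tesults.config", "w") as f:
    f.write(sample)

# robot-tesults can then resolve target=target1 to the stored token.
config = configparser.ConfigParser()
config.read("tesults.config")
token = config["tesults"]["target1"]
print(token)  # eyJ0eXAiOiJ...
```

Keeping tokens in a config file like this also keeps them out of shell history and CI logs, which is the main advantage over the inline method.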
configOptional
Provide the path, including the file name, to a .config file. Args can be provided in a config file instead of on the robot command line.
robot --listener TesultsListener:config=/path-to-config/tesults.config
At this point, robot-tesults will push results to Tesults when you run your robot command with TesultsListener. The robot-tesults plugin requires that the target arg be supplied to indicate which target to use, as described above.
robot --listener TesultsListener:target=token robot.tests
filesOptional
Provide the top-level directory where files generated during the running test run are saved. Files, including logs, screen captures, and other artifacts, will be automatically uploaded.
robot --listener TesultsListener:files=/Users/admin/Desktop/temporary
This is one area where robot-tesults is opinionated: it requires that files generated during a test run be saved temporarily within a specific local directory structure. Store all files in a temporary directory as your tests run. After the Tesults upload is complete, delete the temporary directory or let it be overwritten on the next test run.
Also be aware that if you provide build files, the build suite is always set to [build], and files are expected to be located in temporary/[build]/buildname.
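A minimal sketch of preparing that layout before a run, assuming a build named 1.0.0 and a log file as hypothetical examples (the only layout the docs specify is temporary/[build]/buildname for build files; per-test file paths live elsewhere under the same top-level directory):

```python
import os

# Top-level directory passed to the listener via files=...
root = "temporary"

# Build files must go under temporary/[build]/<build name>.
# "1.0.0" and "build.log" are hypothetical illustration values.
build_dir = os.path.join(root, "[build]", "1.0.0")
os.makedirs(build_dir, exist_ok=True)

with open(os.path.join(build_dir, "build.log"), "w") as f:
    f.write("build output")
```

You would then run robot with files= pointing at the absolute path of the temporary directory, e.g. files=/Users/admin/Desktop/temporary as in the example above.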
Caution: if uploading files, the time taken to upload depends entirely on your network speed. Typical office upload speeds of 100-1000 Mbps should allow even hundreds of files to upload in just a few seconds, but on slower connections it may take hours. We recommend uploading a reasonable number of files for each test case. The upload method blocks at the end of a test run while uploading test results and files. When starting out, test without files first to ensure everything is set up correctly.
build-nameOptional
Use this to report a build version or name.
robot --listener TesultsListener:build-name=1.0.0
build-resultOptional
Use this to report the build result; it must be one of [pass, fail, unknown].
robot --listener TesultsListener:build-result=pass
build-descOptional
Use this to report a build description.
robot --listener TesultsListener:build-desc='added new feature'
build-reasonOptional
Use this to report a build failure reason.
robot --listener TesultsListener:build-reason='build error line 201 somefile.py'
Result interpretation is not currently supported by this integration. If you are interested in support please contact help@tesults.com.
If you execute multiple test runs in parallel or serially for the same build or release, and results are submitted to Tesults separately from each run, you will find that multiple test runs are generated on Tesults. This is because the default behavior on Tesults is to treat each results submission as a separate test run. This behavior can be changed from the configuration menu. Click 'Results Consolidation By Build' from the Configure Project menu to enable or disable consolidation by target. Enabling consolidation means that multiple test runs submitted with the same build name will be consolidated into a single test run.
If you dynamically create test cases, such as test cases with variable values, we recommend that the test suite and test case names themselves be static. Provide the variable data in the test case description or other custom fields, but keep the test suite and test name static. If you change your test suite or test name on every test run, you will not benefit from a range of features Tesults has to offer, including test case failure assignment and historical results analysis. You need not make your tests any less dynamic; variable values can still be reported within test case details.
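As a sketch of this recommendation, here is a case built with the dictionary shape used by the lower-level tesults Python library (name, suite, result, desc). The helper name make_case and all field values are hypothetical; the upload call is shown commented out.

```python
# Keep the suite and test name static; put the variable run data
# in the description instead of the name.
def make_case(user_id, response_time_ms, passed):
    return {
        "suite": "Login",                       # static suite name
        "name": "Login responds within limit",  # static test name
        "result": "pass" if passed else "fail",
        # Variable values for this run go in the description:
        "desc": f"user_id={user_id}, response_time={response_time_ms}ms",
    }

case = make_case(user_id=42, response_time_ms=180, passed=True)
print(case["name"])  # Login responds within limit

# Upload would then look something like:
# import tesults
# tesults.results({'target': token, 'results': {'cases': [case]}})
```

Because the name is identical across runs, Tesults can match this case run-to-run for failure assignment and historical analysis, while the per-run values remain visible in the description.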
Does your corporate or office network run behind a proxy server? Contact us and we will supply you with a custom API library for this case. Without it, results will fail to upload to Tesults.