Using Postman and Newman to run your API tests? Follow the instructions below to report results.
Install newman-reporter-tesults
npm install newman-reporter-tesults --save
Usually you run Postman collections with Newman by specifying the collection to run:
newman run your_collection.json
All you need to do is add the -r option and specify tesults as the reporter to use:
newman run your_collection.json -r tesults --reporter-tesults-target <token>
Replace <token> above with your Tesults target token. You receive this token when creating a project or target, and you can regenerate it from the configuration menu.
When you run your Postman collection with Newman as shown above, results will be submitted to Tesults.
Results for different test jobs or collections should be reported to different Tesults targets in order to maintain results history and utilize smart analysis capabilities. To do this, generate multiple targets on Tesults, then have each Newman command use a different target token for the collection it runs:
newman run your_collection_1.json -r tesults --reporter-tesults-target <token_1>
newman run your_collection_2.json -r tesults --reporter-tesults-target <token_2>
newman run your_collection_3.json -r tesults --reporter-tesults-target <token_3>
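One way to organize per-collection runs is with npm scripts. The sketch below is a hypothetical example, not part of the reporter: the script names, collection file names, and the TESULTS_TOKEN_* environment variables are all placeholders you would substitute with your own.

```json
{
  "scripts": {
    "test:users": "newman run users_collection.json -r tesults --reporter-tesults-target $TESULTS_TOKEN_USERS",
    "test:orders": "newman run orders_collection.json -r tesults --reporter-tesults-target $TESULTS_TOKEN_ORDERS"
  }
}
```

Reading the tokens from environment variables keeps them out of version control; each script then reports its collection to its own Tesults target.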
Tesults uses the folder names (groupings) you create within your collection as test suite names, so it is recommended that you break related tests down into folders in your collection if you would like the report divided into test suite sections.
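As an illustration, a collection structured like the sketch below (folder, request, and URL names are hypothetical) would produce two test suites on Tesults, "Users" and "Orders". Folders correspond to the nested "item" arrays in the Postman Collection v2.1 format:

```json
{
  "info": {
    "name": "API Tests",
    "schema": "https://schema.getpostman.com/json/collection/v2.1.0/collection.json"
  },
  "item": [
    {
      "name": "Users",
      "item": [
        { "name": "Get user", "request": { "method": "GET", "url": "https://example.com/users/1" } }
      ]
    },
    {
      "name": "Orders",
      "item": [
        { "name": "List orders", "request": { "method": "GET", "url": "https://example.com/orders" } }
      ]
    }
  ]
}
```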
You can optionally report build related information.
--reporter-tesults-buildName (optional)
Use this to report a build version or name.
newman run your_collection.json -r tesults --reporter-tesults-target <token> --reporter-tesults-buildName 1.0.0
--reporter-tesults-buildResult (optional)
Use this to report the build result; the value must be one of pass, fail, or unknown.
newman run your_collection.json -r tesults --reporter-tesults-target <token> --reporter-tesults-buildName 1.0.0 --reporter-tesults-buildResult pass
--reporter-tesults-buildDescription (optional)
Use this to report a build description.
newman run your_collection.json -r tesults --reporter-tesults-target <token> --reporter-tesults-buildName 1.0.0 --reporter-tesults-buildResult pass --reporter-tesults-buildDescription 'A build description'
--reporter-tesults-buildReason (optional)
Use this to report a build failure reason.
newman run your_collection.json -r tesults --reporter-tesults-target <token> --reporter-tesults-buildName 1.0.0 --reporter-tesults-buildResult fail --reporter-tesults-buildDescription 'A build description' --reporter-tesults-buildReason 'Build failure due to...'
Result interpretation is not currently supported by this integration. If you are interested in support please contact help@tesults.com.
If you execute multiple test runs in parallel or serially for the same build or release and results are submitted to Tesults within each run, separately, you will find that multiple test runs are generated on Tesults. This is because the default behavior on Tesults is to treat each results submission as a separate test run. This behavior can be changed from the configuration menu. Click 'Results Consolidation By Build' from the Configure Project menu to enable and disable consolidation by target. Enabling consolidation will mean that multiple test runs submitted with the same build name will be consolidated into a single test run.
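For example, in a CI pipeline, parallel jobs can share a build name so their results consolidate into one test run once consolidation is enabled. The sketch below is a hypothetical GitHub Actions fragment: the job names, collection files, and secret names are placeholders, and the commit SHA is used as the shared build name.

```yaml
jobs:
  api-tests-1:
    runs-on: ubuntu-latest
    steps:
      - run: newman run collection_1.json -r tesults --reporter-tesults-target ${{ secrets.TESULTS_TOKEN_1 }} --reporter-tesults-buildName ${{ github.sha }}
  api-tests-2:
    runs-on: ubuntu-latest
    steps:
      - run: newman run collection_2.json -r tesults --reporter-tesults-target ${{ secrets.TESULTS_TOKEN_2 }} --reporter-tesults-buildName ${{ github.sha }}
```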
If you dynamically create test cases, such as test cases with variable values, we recommend keeping the test suite and test case names themselves static. Provide the variable data in the test case description or other custom fields instead. If your test suite or test case names change on every run, you will miss out on a range of Tesults features, including test case failure assignment and historical results analysis. Your tests need not become any less dynamic: variable values can still be reported within test case details.
Does your corporate or office network run behind a proxy server? Contact us and we will supply you with a custom API library for this case; without it, results will fail to upload to Tesults.