Reviewing Test Results
Learn how to analyze test results effectively and take action on failures.
Overview
Reviewing test results is a critical part of the testing process. Understanding what passed, what failed, and why helps you maintain application quality and prioritize fixes effectively.
Understanding Test Status
Every test in Synthesized QA has one of four statuses:
✅ Passing
- Meaning: Test completed successfully with all assertions passing
- Action: No action needed - functionality is working as expected
- Value: Confirms that this functionality is stable
❌ Failing
- Meaning: Test completed but one or more assertions failed
- Action: Investigate the failure - it could be an application bug or an outdated test
- Priority: Review promptly, especially if the test was previously passing
⏳ Pending
- Meaning: Test hasn't been executed yet
- Action: Run the test when ready to validate functionality
- Note: New tests start in this status
🔍 Review
- Meaning: Newly generated test awaiting acceptance
- Action: Review test description and accept or decline
- Note: Tests must be accepted before they can be run
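If you ever need to reason about these statuses in a script of your own (for example, when triaging an exported list of tests), they map cleanly onto a small type. The sketch below is a hypothetical TypeScript model - `TestStatus`, `TestSummary`, and `needsAttention` are illustrative names, not part of Synthesized QA:

```typescript
// Hypothetical model of the four test statuses described above.
type TestStatus = "passing" | "failing" | "pending" | "review";

interface TestSummary {
  name: string;
  status: TestStatus;
  lastRunAt?: string; // ISO timestamp of the last run, if the test has been executed
}

// Statuses that call for someone to act: failures need investigation,
// and tests in review must be accepted before they can run at all.
function needsAttention(test: TestSummary): boolean {
  return test.status === "failing" || test.status === "review";
}
```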
Accessing Test Results
You can view test results in several places:
Tests Page
The main Tests page shows current status for all tests:
- Status breakdown showing counts (e.g., "6 Passing, 4 Failing, 46 Pending")
- Individual test status in the test list
- Last run timestamp for each test
- Duration of last execution
Run History Page
Navigate to Run History to see all past test executions:
- Complete list of test runs with timestamps
- Pass/fail counts for each run
- Environment used for each run
- Total execution time
Individual Test Details
Click on any test to see detailed results:
- Step-by-step execution timeline
- Screenshots at key points (UI tests)
- Request/response data (API tests)
- Error messages and stack traces
- Execution logs
Analyzing Failed Tests
When a test fails, follow this systematic approach:
Step 1: Review the Failure Message
Start by reading the error message carefully:
- What assertion failed?
- What was expected vs. what was actually found?
- At which step did the failure occur?
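To make these questions concrete, here is a generic example of an assertion-failure message - the exact wording in Synthesized QA may differ, but these are the pieces of information to look for:

```
Step 3 of 7: Verify confirmation banner
Assertion failed: expected text "Order placed" but found "Payment declined"
```

The step number tells you where the run stopped, and the expected versus actual values point to what to investigate next - in this example, the payment flow rather than the banner itself.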
Step 2: Check Screenshots and Logs
For UI tests, review screenshots to see:
- The visual state when the test failed
- Whether the page loaded correctly
- If elements are present but in unexpected states
- Any error messages shown in the UI
For API tests, examine:
- Request payload sent to the API
- Response status code received
- Response body content
- Any API error messages
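When investigating, it can also help to replay the suspect request yourself and compare what comes back with what the test expected. The sketch below is a generic TypeScript/Node script - the URL, payload, and expected status code are placeholders, not values taken from your tests:

```typescript
// Replay a suspect API call and print the same details a failed API test reports:
// the request payload, the response status code, and the response body.
async function replayRequest(): Promise<void> {
  const payload = { sku: "ABC-123", quantity: 2 }; // placeholder request body

  const response = await fetch("https://staging.example.com/api/orders", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload),
  });

  console.log("Status code:", response.status);          // compare with the expected status
  console.log("Response body:", await response.text());  // look for API error messages

  if (response.status !== 201) {
    console.error("Unexpected status - likely the same failure the test reported");
  }
}

replayRequest().catch((err) => console.error("Request did not complete:", err));
```

Comparing this output against the request/response data recorded in the test details quickly shows whether the failure is reproducible or was a one-off.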
Step 3: Determine the Cause
Categorize the failure type:
- Application Bug: The application is not working as intended
- Outdated Test: The application changed but the test wasn't updated
- Environment Issue: The test environment has configuration problems
- Timing Issue: Race condition or loading delay not properly handled
- Test Data Issue: Required data is missing or in wrong state
Step 4: Take Appropriate Action
Based on the cause:
- Application Bug: File a bug report and prioritize the fix
- Outdated Test: Update the test to match current functionality
- Environment Issue: Fix environment configuration
- Timing Issue: Adjust test timeouts or waits (see the sketch after this list)
- Test Data Issue: Restore or create required test data
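For timing issues in particular, the most reliable fix is to wait for a condition rather than for a fixed delay. If you maintain any supporting scripts around your tests, a small polling helper like this hypothetical one illustrates the idea (most test frameworks provide an equivalent built-in wait):

```typescript
// Poll a condition until it becomes true or the timeout elapses.
// Waiting on a condition is more robust than a fixed sleep, which is either
// wasted time or too short when the application is slow.
async function waitFor(
  condition: () => Promise<boolean> | boolean,
  timeoutMs = 10_000,
  intervalMs = 250,
): Promise<void> {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    if (await condition()) return;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error(`Condition not met within ${timeoutMs} ms`);
}

// Hypothetical usage: wait for a background job to finish before asserting on its result.
// await waitFor(async () => (await fetch("https://staging.example.com/api/jobs/42")).ok);
```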
Comparing Test Runs
Look at test history to identify patterns:
- Consistent Failures: Tests that always fail likely indicate real issues
- New Failures: Tests that recently started failing - what changed since the last passing run?
- Intermittent Failures: Flaky tests that need stabilization
- Environment Differences: Passes in one environment but fails in another
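If you export run results (for example, as JSON or CSV records of your own), spotting new failures is a simple diff between two runs. The sketch below assumes a minimal record shape of your choosing; it is not a Synthesized QA API:

```typescript
// Minimal shape for one test's result in a run; adjust to match whatever you export.
interface TestResult {
  name: string;
  passed: boolean;
}

// Return tests that passed in the previous run but fail in the current one -
// the "new failures" that deserve immediate attention.
function newFailures(previous: TestResult[], current: TestResult[]): string[] {
  const previouslyPassing = new Set(
    previous.filter((r) => r.passed).map((r) => r.name),
  );
  return current
    .filter((r) => !r.passed && previouslyPassing.has(r.name))
    .map((r) => r.name);
}

// Hypothetical usage: newFailures(lastNightRun, todayRun) might return ["checkout: apply coupon"]
```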
Using the Reports Dashboard
Navigate to Reports to see high-level trends:
- Overall pass/fail rate over time
- Test execution trends
- Most frequently failing tests
- Quality metrics and indicators
Best Practices for Result Review
Review Regularly
- Check test results shortly after execution
- Don't let failed tests accumulate without investigation
- Set up a routine for reviewing test status
Prioritize Failures
- Critical Path Tests: Review failures in login, checkout, and other core features first
- New Failures: Tests that recently started failing deserve immediate attention
- Regression Tests: Failures in previously stable tests indicate new bugs
Document Findings
- Add notes to test results explaining failures
- Link test failures to bug reports
- Track patterns and recurring issues
Communicate Results
- Share test results with relevant team members
- Highlight critical failures immediately
- Provide context when reporting bugs found by tests
Handling Flaky Tests
Flaky tests that pass and fail intermittently are problematic:
- Identify them: Track which tests have inconsistent results (a rough sketch follows this list)
- Investigate root cause: Usually timing, race conditions, or test data issues
- Fix or disable: Either stabilize the test or disable it until fixed
- Don't ignore: Flaky tests erode confidence in your test suite
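One rough way to flag flaky tests automatically, assuming you keep a per-test history of recent pass/fail outcomes in your own records:

```typescript
// A test is treated as flaky here if its recent runs contain both passes and failures.
// `history` maps a test name to its most recent outcomes (true = pass), oldest first.
function flakyTests(history: Map<string, boolean[]>, minRuns = 5): string[] {
  const flaky: string[] = [];
  for (const [name, results] of history) {
    if (results.length < minRuns) continue; // not enough data to judge
    const passes = results.filter((passed) => passed).length;
    if (passes > 0 && passes < results.length) flaky.push(name);
  }
  return flaky;
}
```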
When Tests Pass
Passing tests are valuable too:
- They confirm functionality is working correctly
- They provide confidence for releases
- They document expected behavior
- They catch regressions when they fail
If all tests pass consistently, that's excellent - but still review periodically to ensure tests remain relevant and comprehensive.
Taking Action on Results
Test results should drive action:
- File Bugs: Create bug reports for application issues found
- Update Tests: Keep tests current with application changes
- Improve Coverage: Add tests for gaps discovered
- Refine Tests: Make tests more reliable and maintainable
- Share Insights: Communicate quality status to stakeholders
Common Review Mistakes to Avoid
- Ignoring failures: Every failure contains information
- Not investigating root causes: Understanding why tests fail is crucial
- Accepting flaky tests: Intermittent failures undermine test value
- Only looking at failures: Analyze patterns in passes too
- Not updating outdated tests: Keep tests aligned with current functionality
Next Steps
Effective result review helps maintain high quality and catch issues early. Use insights from test results to continuously improve both your application and test suite.