Defect Reporting Best Practices — Writing Bug Reports That Get Fixed
A practical guide to writing bug reports that developers can act on — covering the six elements of a useful defect report, severity vs. priority, evidence, and how to log defects during test execution in Evaficy Smart Test.
For QA Engineers logging defects during test execution and Tech Leads reviewing defect report quality.
Why Most Bug Reports Don't Get Fixed
A bug report that sits in the backlog for months and eventually gets closed as "cannot reproduce" is not always the developer's fault. It is usually a reporting problem. The defect was real, but the report did not give the developer what they needed to find it, understand it, and fix it with confidence.
The three most common reasons a bug report fails: the steps to reproduce are too vague to follow exactly, the expected and actual results are too imprecise to verify a fix against, or the environment and data specifics are missing so the developer tests in a different context and cannot reproduce the failure. A report that lacks any one of these is incomplete. A report that lacks all three is useless.
Every element described in this guide exists to solve one of those three problems. A well-written defect report reduces the time from "bug logged" to "fix shipped" — and reduces the rate of defects that are closed without being fixed.
The cost of a vague bug report
A developer who cannot reproduce a bug from the steps provided will close it as "cannot reproduce." A developer who cannot tell from the expected and actual results whether their fix is correct will mark it "fixed" prematurely. In both cases, the bug reaches production — and the next bug report for the same issue will be written by a user.
The Six Elements of a Useful Defect Report
Every effective bug report contains the same six elements. For each element, this section covers what it should include, what a weak version looks like, and what a complete version looks like.
Steps to reproduce are the instructions a developer follows to see the bug themselves. If they cannot reproduce the bug from your steps, the report will be closed as "cannot reproduce" — regardless of whether the bug is real and regardless of how confident you are in its existence.
Write steps as if the developer has never opened the application before. Every assumption about prior state, navigation path, and data configuration must be stated explicitly.
- Start from an explicit known state: browser type, session status, and environment URL
- One action per step — clicking a button and observing the result are two separate steps
- Use exact UI labels in quotes so there is no ambiguity about which button or field
The expected result tells the developer — and later the re-tester — what correct behaviour looks like. It is the baseline against which the actual result is compared, and the standard used to verify the fix.
Expected results should be grounded in acceptance criteria or product specification, not in assumption. Vague expectations ("form should not submit") lead to fixes that technically satisfy the report but miss the full requirement.
- Quote the exact text of messages or labels that should appear — if the wording changes post-fix, re-testing will catch it
- State what should NOT happen as explicitly as what should: "The form does not submit" is often the most important line
- Reference the acceptance criterion if available: "Per AC-12: all required fields must be validated before form submission"
The actual result is the most important field for the developer. It describes the specific, observable failure: exactly what happened after the last step was executed. This is what needs to be fixed.
Describe what you saw, not your interpretation of what it means. "The cart total is wrong" is an interpretation. "The cart displays £12.00 after adding a £10.00 product to a previously empty cart — the expected total is £10.00" is an observation a developer can act on.
- Include exact values that appeared — not "the wrong amount" but "£12.00 when £10.00 was expected"
- Note unexpected redirects, state changes, or system side effects created by the failure
- If an error code or console message appeared, include the exact text verbatim
Many bugs are environment-specific or data-specific. A defect that only reproduces in Safari on iOS, only with a particular type of account, or only with a specific product in the cart cannot be fixed by a developer who is testing in Chrome with different data.
Specify every variable that might affect whether the bug can be reproduced: browser version, operating system, environment URL, account credentials, and any relevant data configuration.
- Include browser name and exact version — behaviour differs between Chrome 119 and Chrome 121
- Specify the exact test account or session type — guest, verified user, admin, and trial plan may behave differently
- Note reproducibility: "reproducible 5/5 attempts" vs "occurred once, unable to reproduce consistently"
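For intermittent bugs, the reproducibility figure is worth measuring rather than guessing. As a minimal sketch, assuming the reproduction attempt can be scripted (the helper name and signature here are illustrative, not from any particular tool):

```javascript
// Hypothetical helper: runs a scripted reproduction attempt several times
// and produces the "reproducible X/N attempts" note recommended above.
// `attempt` is any function that returns true when the bug reproduced.
function reproducibility(attempt, attempts = 5) {
  let reproduced = 0;
  for (let i = 0; i < attempts; i++) {
    if (attempt()) reproduced++;
  }
  return `reproducible ${reproduced}/${attempts} attempts`;
}

// A bug that reproduces every time:
reproducibility(() => true, 5); // "reproducible 5/5 attempts"
```

Even when the attempt cannot be scripted, recording the count manually ("occurred 1/6 attempts") gives the developer the same signal.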
Evidence eliminates the gap between what the reporter describes and what the developer sees. A screenshot of the failure state, a screen recording of the reproduction steps, or a console log with the JavaScript error dramatically accelerates diagnosis and reduces back-and-forth communication.
Attach evidence even when the steps seem perfectly clear. A developer seeing what "wrong" looks like before reproducing it spends less time reproducing it — and is less likely to close the report as "cannot reproduce" when the failure is subtle.
- For UI failures, a screenshot of the final state is the minimum — a screen recording of the steps is significantly more useful
- For JavaScript errors, export the browser console output including the full stack trace and any network request failures
- Name files descriptively: "checkout-empty-email-bug.png" not "screenshot1.png" or "image.jpg"
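Some teams lint reports for completeness before they reach a developer. A minimal sketch of such a check follows; the field names are illustrative and not tied to any particular tracker's schema, and the title field is included as an assumed element alongside the ones detailed above:

```javascript
// Hypothetical completeness check for a defect report.
// Field names are illustrative; adapt them to your tracker's schema.
const REQUIRED_ELEMENTS = [
  "title",
  "stepsToReproduce",
  "expectedResult",
  "actualResult",
  "environment",
  "evidence",
];

// Returns the names of any required elements that are absent or empty.
function missingElements(report) {
  return REQUIRED_ELEMENTS.filter(
    (field) => !report[field] || report[field].length === 0
  );
}

const report = {
  title: "Cart total shows £12.00 after adding a single £10.00 product",
  stepsToReproduce: ["Open https://staging.yourapp.com in a private window", "..."],
  expectedResult: "Cart total displays £10.00; no other items are present",
  actualResult: "Cart displays £12.00 after adding a £10.00 product to an empty cart",
  environment: "Chrome 121, macOS 14.3, guest session",
  evidence: [], // no screenshot attached yet, so this report is flagged
};

// missingElements(report) -> ["evidence"]
```

A report that fails such a check is the "incomplete" case described at the start of this guide: real bug, unusable report.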
Severity vs. Priority — Understanding the Difference
Severity and priority are two distinct assessments that are routinely confused. The confusion leads to urgent bugs being deprioritised, and to visible but trivial bugs consuming sprint capacity at the wrong time.
Severity
Severity is a technical assessment. It describes the impact of the defect on the system — how badly it breaks things. Severity is set by the QA engineer or developer based on observable impact.
Priority
Priority is a business decision. It describes the urgency of fixing the defect — how soon it needs to be addressed relative to other work. Priority is set by the product owner or team lead based on business context. Priority typically rises when any of the following apply:
- The bug is visible to customers in production
- It blocks a key user journey in a live release
- It affects high-value or high-traffic parts of the product
- It is visible to stakeholders, executives, or during a demo
- There is a contractual or compliance deadline attached
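Because severity and priority are independent assessments, each defect effectively carries a (severity, priority) pair, and that pair maps to an action. A minimal sketch, assuming simplified high/low buckets (real trackers use finer scales, and the function name is illustrative):

```javascript
// Illustrative triage of the four severity/priority combinations.
// "high" / "low" are simplified buckets, not a real tracker's scale.
function triageAction(severity, priority) {
  if (severity === "high" && priority === "high") return "Fix immediately";
  if (severity === "high" && priority === "low") return "Schedule promptly";
  if (severity === "low" && priority === "high") return "Fix this sprint";
  return "Backlog";
}

// A technically critical bug in a rare, low-traffic edge case:
triageAction("high", "low"); // "Schedule promptly"
```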
When severity and priority diverge — the four combinations
Fix immediately (high severity, high priority)
Core feature broken and business-critical. Blocks release or production use. Stop other work.
Schedule promptly (high severity, low priority)
Technically critical but low business exposure — rare edge case, low-traffic feature. Fix in next sprint.
Fix this sprint (low severity, high priority)
Cosmetic or minor, but high visibility — executive review, marketing page, onboarding. Prioritise for optics.
Backlog (low severity, low priority)
Neither technically critical nor urgently needed. Log it, set a milestone, and address in a maintenance cycle.
Writing Steps to Reproduce That Anyone Can Follow
Steps to reproduce are the single biggest factor in whether a defect gets fixed promptly. A developer who can reproduce the bug in under two minutes will diagnose and fix it faster than one who spends forty minutes trying to understand what environment and state the bug requires.
1. Start from zero
Open a new incognito or private browsing window for every bug report. This eliminates cached state, stored sessions, and browser extensions as variables — and ensures your steps start from a clean, reproducible baseline.
2. State the environment explicitly
Include the full URL of the environment you tested in — not just "staging" but "https://staging.yourapp.com". Different staging instances may have different data, different feature flags, and different versions deployed.
3. Use exact UI text
Quote the exact label of every button, field, and link: "Click the 'Continue to Shipping' button" not "click submit." UI labels change during development — exact quotes make it immediately clear if steps have become out of date.
4. Specify account and data state
Name the exact test account used and describe the data configuration that was in place before step 1. A bug that only reproduces with a verified account on the Standard plan, with a cart that has a discount code applied, cannot be reproduced without those specifics.
5. Test your own steps
After writing the steps, close the browser, re-open from scratch, and follow your own instructions from step 1. If you cannot reproduce the bug from your steps alone, a developer won't be able to either.
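Applying the five practices to the cart-total example used earlier, a steps block might look like the following. Every URL, label, and account name here is illustrative; the steps are expressed as data so that each line is exactly one action:

```javascript
// Example steps-to-reproduce, one action per entry.
// Every URL, label, and account name below is illustrative.
const stepsToReproduce = [
  "Open a new incognito window in Chrome 121 on macOS 14.3",
  "Navigate to https://staging.yourapp.com",
  "Log in as qa-verified-user-04 (Standard plan)",
  'Confirm the cart is empty (cart icon shows "0 items")',
  'Open the product page for "Blue Widget" (priced £10.00)',
  'Click the "Add to Cart" button',
  'Click the cart icon to open the "Your Cart" page',
  "Observe the displayed cart total",
];
```

Note how the first four steps establish the known state (browser, environment, account, cart contents) before any action that could trigger the bug.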
Attaching Evidence — What to Capture and How
Evidence converts a defect report from a description into a demonstration. A developer reviewing a report with evidence sees exactly what the failure looks like before they attempt to reproduce it — which accelerates diagnosis and reduces the chance of the report being closed prematurely.
Screenshot
When: Always — minimum evidence for any UI defect
- The screen state at the point the bug is visible — not just any screen state
- If possible, capture the element in focus (field with incorrect validation, button in wrong state)
- Include the browser URL bar in the screenshot so the page context is clear
Screen Recording
When: For any bug that requires a sequence of steps to manifest
- Record from the start of your reproduction steps, not just the final failure
- Keep recordings concise — under 90 seconds, trimmed to the relevant steps
- Ensure the recording captures the full browser window including the URL bar
Console Log / Network Log
When: For JavaScript errors, API failures, and unexpected network responses
- Export the full console output from browser DevTools (F12 → Console → right-click → Save as)
- Include the Network tab filtered to failed requests (4xx, 5xx) if the bug involves API calls
- Note the exact error message and stack trace — copy it as text, not just as a screenshot
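The failed-request filter in the bullets above can also be captured programmatically while reproducing a bug. A minimal sketch of a wrapper that notes every 4xx/5xx response; the function names are illustrative, and `fetchFn` is injected so the sketch works with the browser's `fetch` or a test stub:

```javascript
// Hypothetical wrapper: records a one-line evidence note for every
// failed (4xx/5xx) HTTP exchange made during a reproduction.
// `fetchFn` is any fetch-compatible function; `log` collects the notes.
function makeEvidenceFetch(fetchFn, log) {
  return async (url, options = {}) => {
    const response = await fetchFn(url, options);
    if (response.status >= 400) {
      log.push(`${options.method ?? "GET"} ${url} -> ${response.status}`);
    }
    return response;
  };
}
```

The resulting log lines ("POST /api/cart -> 500") go straight into the defect report's evidence section alongside the exported console output.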
How to Log Defects During Test Execution in Evaficy Smart Test
In Evaficy Smart Test, defects are logged in context — attached to the specific test step where the failure occurred. This links the defect directly to the test case, the scenario, and the test run, giving developers and reviewers complete traceability from symptom to source.
Fail the step
Mark the test step as Failed in the test run. This records the failure against the specific step where the behaviour deviated, giving the defect precise location context within the test case.
Record the actual result
Enter exactly what the system did — not what you expected, not your interpretation, but the observable failure. This becomes the primary field developers use to understand what to fix.
Add defect description and severity
Use the defect description field to add any additional context not captured in the actual result: error codes, environment notes, reproducibility, or links to related defects. Set severity based on the impact on the system.
Attach evidence links
Add URLs or references to screenshots, screen recordings, browser console exports, and external bug tracker tickets (Jira, Linear, GitHub Issues). Evidence attached at the step level stays linked to the exact failure point.
Defects logged in execution stay linked to their test run
Every defect logged during a test run in Evaficy Smart Test is associated with the specific test case, step, and run where it was found. This makes it straightforward to identify which test cases have open defects, which scenarios have a history of failures, and which defects need re-testing when a new run is created.
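The traceability described here can be pictured as a small data model. This sketch is illustrative only, not Evaficy Smart Test's actual schema:

```javascript
// Illustrative traceability model: each defect stores the test case,
// step, and run where it was logged. Not Evaficy's actual schema.
const defects = [
  { id: "D-1", testCase: "TC-12", step: 4, run: "R-7", status: "open" },
  { id: "D-2", testCase: "TC-12", step: 6, run: "R-8", status: "closed" },
  { id: "D-3", testCase: "TC-30", step: 2, run: "R-8", status: "open" },
];

// With that link in place, "which test cases have open defects?"
// becomes a simple query rather than a manual cross-reference.
function testCasesWithOpenDefects(list) {
  return [
    ...new Set(list.filter((d) => d.status === "open").map((d) => d.testCase)),
  ];
}

// testCasesWithOpenDefects(defects) -> ["TC-12", "TC-30"]
```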
Re-testing Discipline — Don't Close Until You Verify
Re-testing is the gate between a developer's belief that a bug is fixed and confirmation that the fix works correctly. It is also one of the most frequently skipped steps in QA workflows under time pressure — and one of the most expensive when skipped, because defects that are not properly verified before closure often reappear in the next release.
Re-test the original steps exactly
Follow the exact steps from the original defect report — not a similar flow, not a simplified version, the exact steps. A fix that works for a slightly different path is not a verified fix.
Verify against the original expected result
Confirm that the actual result now matches the expected result stated in the defect report. If the expected result was imprecise, you may not be able to verify the fix — which is a reason to write precise expected results from the start.
Test the edges of the fix
A fix that resolves the reported case may introduce a regression in an adjacent case. After verifying the defect is fixed, run a targeted check of related functionality — particularly if the fix touched shared components or business logic.
Log a new defect if it is not fixed
If re-testing shows the defect persists or the fix is incomplete, reopen the original defect with a comment describing what was observed. Do not close and log a new one — the history in the original report is valuable context for the next fix attempt.
Related guides
How to Write Test Cases That Actually Catch Bugs
Anatomy of a good test case, common mistakes, and writing for execution vs. review.
QA Team Roles and Best Practices
Role-by-role responsibilities including severity assessment and defect management ownership.
How to Set Up a QA Project from Scratch
Step-by-step guide to structuring scenarios, running executions, and managing your QA workflow.