Evaficy Smart Test

How to Write Test Cases That Actually Catch Bugs

A practical guide to writing structured, executable, and reviewer-ready test cases — covering anatomy, common mistakes, and the difference between writing for execution and writing for review.

For QA Engineers writing cases manually, and teams reviewing AI-generated output before execution.

Tags: Test case design · QA fundamentals · Test structure · Common mistakes · Writing for review

What Makes a Test Case Useful vs. Just Present

Most QA teams have test cases. Fewer have test cases that reliably find bugs. The difference is not the number of cases — it is their quality. A test case that is too vague to reproduce consistently, too broad to produce a useful pass/fail result, or too generic to verify a specific requirement is not a quality gate: it is a checkbox.

A useful test case does three things. First, it can be reproduced by any tester from the same starting state and produce the same result. Second, it verifies exactly one testable behaviour — not a general feature, not a flow, one specific outcome. Third, its pass or fail result is unambiguous: the tester either sees exactly what the expected result says should happen, or they do not. No interpretation required.

Everything in this guide is aimed at building cases that meet those three criteria. The anatomy section shows what belongs in each element. The mistakes section shows what breaks them. The template at the end gives you a starting point you can use directly.

Quality over quantity

Twenty well-written test cases will find more bugs and produce more useful results than a hundred vague ones. A scenario with complete, specific, verifiable cases also passes expert validation faster — reviewers can approve it based on titles and expected results alone, without needing to interrogate every step.


Anatomy of a Good Test Case

A complete test case has five elements. Each one serves a specific purpose — and each one fails in a predictable way when written carelessly. For each element below, you will find what it should contain, what a weak version looks like, and what a strong version looks like.

Title

A good test case title tells you exactly what is being tested and what the expected outcome is — without opening the case. It is the first thing a reviewer reads and the only thing visible when scanning a long list of cases in a scenario.

Structure titles around the actor, the action, and the outcome: "Guest user cannot complete checkout without a valid email — validation error appears." A reviewer scanning fifty case titles should be able to assess coverage gaps without reading a single step.

Weak
Test checkout
Strong
Guest user cannot complete checkout without a valid email address — validation error appears below the field
Tips
  • State the expected outcome in the title, not just the action — "Login fails with incorrect password" beats "Test login"
  • Include the actor when role matters — "Guest user" vs "Logged-in user" vs "Admin" can be the entire point of the test
  • Avoid "verify that X works" — it says nothing about what "works" means without reading the steps

Preconditions

Preconditions define the starting state. A test that begins with "the user is logged in" will fail, produce unexpected results, or be impossible to reproduce if that state is ambiguous. Every dependency — account status, cart contents, permissions, existing data, feature flags — must be specified explicitly.

Think of preconditions as the setup the tester must complete before step 1. If the setup cannot be completed from the preconditions as written, the test cannot be run. Missing preconditions are the most common cause of "cannot reproduce" defect reports.

Weak
User is on the checkout page
Strong
A guest (unauthenticated) session is active. The cart contains one item (Product A, £24.99). The user has navigated to /checkout. No discount codes have been applied.
Tips
  • List every dependency: account state, data that must pre-exist, permissions, and environment settings
  • Include test data inline: "A verified account with email test@example.com and password Test1234! exists"
  • Specify environment or feature flags if the test is conditional on them
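Preconditions written this way translate directly into executable setup. The sketch below encodes the strong example above as a setup function a test could call before step 1; every name in it (CartItem, CheckoutState, checkout_preconditions) is illustrative, not part of any real framework:

```python
from dataclasses import dataclass, field

@dataclass
class CartItem:
    name: str
    sku: str
    price_gbp: float

@dataclass
class CheckoutState:
    authenticated: bool
    cart: list = field(default_factory=list)
    path: str = "/"
    discount_codes: list = field(default_factory=list)

def checkout_preconditions() -> CheckoutState:
    """Build the exact starting state the test requires:
    guest session, one known item in the cart, user on /checkout,
    no discount codes applied."""
    return CheckoutState(
        authenticated=False,
        cart=[CartItem("Product A", "PA-001", 24.99)],
        path="/checkout",
        discount_codes=[],
    )

state = checkout_preconditions()
assert not state.authenticated and state.path == "/checkout"
assert len(state.cart) == 1 and state.cart[0].price_gbp == 24.99
```

If a precondition cannot be built mechanically like this, it is probably underspecified for a human tester too.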

Steps

Steps are the instructions a tester follows to reproduce the scenario. Each step should describe a single, observable action — a click, a keystroke, a form submission. Steps that require interpretation ("navigate to settings", "complete the form") introduce ambiguity and make the test impossible to reproduce consistently.

Write steps as if the tester has never seen the application before. This ensures reproducibility across the team and prevents silent drift as the UI evolves over time.

Weak
Leave the email field empty and try to submit the form
Strong
1. Locate the "Email Address" field in the Contact Information section.
2. Leave the field empty (do not enter any text).
3. Click the "Continue to Shipping" button.
Tips
  • One action per step — "click the button and observe the error" is two steps
  • Use exact UI labels in quotes: "Click the 'Continue to Shipping' button" not "click submit"
  • When inaction matters, state it explicitly: "Leave the Email Address field empty"

Expected Result

The expected result is the single most important element of a test case, and the one most frequently written badly. It must be specific enough that a tester can look at the screen after the final step and determine pass or fail without making a judgment call.

"An error is shown" and "the system works correctly" are not expected results — they are invitations to guess. "A red validation message reading 'Email address is required' appears below the Email Address field, and the form does not submit" is an expected result. The tester either sees that exact text in that exact location, or they do not.

Weak
The checkout fails and shows an error
Strong
The form does not submit. A red validation message reading "Email address is required" appears directly below the Email Address field. The user remains on the Checkout page (/checkout).
Tips
  • Quote exact error message text — if the wording changes, the test fails and catches it
  • State what does NOT happen as well as what does: "The form does not submit"
  • Describe the final state of the page: where is the user, what is visible, what has changed?
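The same principle carries into automated checks: each sentence of a specific expected result becomes one assertion that needs no judgment call. In this sketch the page object is a stand-in dict, not a real UI driver:

```python
# Hypothetical post-submit page state, as a plain dict stand-in.
page_after_submit = {
    "path": "/checkout",
    "form_submitted": False,
    "validation_messages": {"email": "Email address is required"},
}

# Each assertion mirrors one sentence of the expected result.
assert page_after_submit["form_submitted"] is False        # the form does not submit
assert (page_after_submit["validation_messages"]["email"]
        == "Email address is required")                    # exact message text
assert page_after_submit["path"] == "/checkout"            # user stays on /checkout
```

A vague expected result ("an error is shown") cannot be decomposed this way, which is a quick test for whether it is specific enough.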

Test Data

Test data specifies the exact input values used in the test. Without it, two testers running the same case may use different inputs, produce different results, and reach different pass/fail conclusions — making the test unreliable and results incomparable across runs.

Test data is especially important for boundary cases, format validation, and tests involving account credentials or product data. If the test is sensitive to the specific values used, those values belong in the test data.

Weak
(none — tester chooses their own input values)
Strong
Email: (empty string, no whitespace) · Product: Product A, SKU: PA-001 · Price: £24.99 · Discount code: none
Tips
  • For negative tests, specify exactly which invalid value is being tested — not just "an invalid email"
  • For boundary tests, state the exact boundary value: "Username: exactly 50 characters"
  • Never use production account credentials in test cases — specify a dedicated test account
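Explicit test data makes boundary cases trivially precise in automated form as well. This sketch assumes a hypothetical 50-character username limit; validate_username is an illustrative stand-in, not a real API:

```python
MAX_USERNAME_LEN = 50  # assumed limit for this example

def validate_username(name: str) -> bool:
    """Accept usernames of 1 to MAX_USERNAME_LEN characters."""
    return 1 <= len(name) <= MAX_USERNAME_LEN

# Exact boundary values, spelled out rather than approximated:
boundary_cases = [
    ("a" * 49, True),   # one character under the limit
    ("a" * 50, True),   # exactly at the limit
    ("a" * 51, False),  # one character over the limit
]

for value, expected in boundary_cases:
    assert validate_username(value) is expected, f"len={len(value)}"
```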

Common Mistakes That Weaken Test Cases

The same four mistakes appear in almost every team's test case library. They are easy to write and hard to notice until a defect slips through or a test run produces results that no one trusts.

Vague steps

Steps that require interpretation are not reproducible. Different testers will follow different paths and get different results — making it impossible to compare outcomes across runs or team members.

Weak
Navigate to checkout
Strong
Click the "Proceed to Checkout" button on the Cart page (/cart)
Missing preconditions

A test that cannot be set up from its preconditions cannot be run reliably. Unspecified account states, missing data requirements, and assumed permissions are the leading cause of "cannot reproduce" defect reports.

Weak
User is logged in
Strong
A verified account with email test@example.com exists. The user is logged in with that account. The cart contains exactly one item (Product A, £24.99). No promotions are active.
Unverifiable expected results

An expected result that requires judgment to evaluate — "works correctly", "page loads", "system behaves as expected" — produces inconsistent pass/fail results across testers and makes historical trend data meaningless.

Weak
The system works correctly after login
Strong
The user is redirected to /dashboard. The header displays the user's first name. No error messages are visible on the page.
Testing more than one thing per case

When a single case tests multiple behaviours, a failure leaves you uncertain which part failed and why. Atomic cases — one testable behaviour per case — produce failures that are immediately actionable and pass rates that accurately reflect the state of each capability.

Weak
Verify that registration works, the welcome email is sent, and the user can log in afterwards
Strong
Three separate cases: (1) User registers with valid inputs. (2) Welcome email arrives within 2 minutes of registration. (3) Newly registered user can log in after email verification.
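In automated form, the same split produces three independent tests, each of which can fail on its own. The RegistrationService below is a toy stand-in so the example runs self-contained; all names are illustrative:

```python
class RegistrationService:
    """Toy service: stores users, records sent emails, gates login on verification."""
    def __init__(self):
        self.users = {}
        self.sent_emails = []

    def register(self, email, password):
        self.users[email] = {"password": password, "verified": False}
        self.sent_emails.append(("welcome", email))

    def verify(self, email):
        self.users[email]["verified"] = True

    def login(self, email, password):
        user = self.users.get(email)
        return bool(user and user["verified"] and user["password"] == password)

def test_registration_succeeds():
    svc = RegistrationService()
    svc.register("new@example.com", "Test1234!")
    assert "new@example.com" in svc.users

def test_welcome_email_sent():
    svc = RegistrationService()
    svc.register("new@example.com", "Test1234!")
    assert ("welcome", "new@example.com") in svc.sent_emails

def test_login_after_verification():
    svc = RegistrationService()
    svc.register("new@example.com", "Test1234!")
    svc.verify("new@example.com")
    assert svc.login("new@example.com", "Test1234!")

test_registration_succeeds()
test_welcome_email_sent()
test_login_after_verification()
```

If the welcome email breaks, only the second test fails — the failure points straight at the broken capability.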

The Test Case That Doesn't Get Written: Negative Paths

In every test suite, there is a systematic blind spot: the cases that test what happens when something goes wrong. Teams write happy path cases — the user registers, the payment completes, the item is added to the cart — and then move on. The negative paths are obvious in retrospect and invisible until production.

For every happy path test case, ask the corresponding negative question. If you have "User registers with a valid email and password," you also need "User cannot register with an email that is already in use," "User cannot register with a password below the minimum length," and "User cannot submit the registration form with the email field empty." Each of these is a separate case with a separate expected result.

Example: Login feature — happy paths and the negative cases that are usually missing
Typically written (happy path)
  • User logs in with a valid email and correct password
  • User is redirected to the dashboard after successful login
  • "Remember me" keeps the session active across browser restarts
Frequently missing (negative paths)
  • Login fails with correct email but incorrect password — error does not reveal if email exists
  • Login with an unverified email address shows a verification prompt, not a generic error
  • Account locked after 5 consecutive failed login attempts
  • Login form with empty email field shows inline validation before submission
The cases that slip through are almost always negative paths

The bugs that reach production are rarely happy path failures — those get caught quickly because they are tested repeatedly. The defects that survive are the negative path gaps: the unverified email that logs in anyway, the missing validation that allows an empty required field, the account lockout that never triggers. Write the negative cases deliberately, not as an afterthought.
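The missing negative paths above translate naturally into explicit checks. This sketch uses a toy LoginService (all names illustrative, not a real API); the lockout threshold of 5 matches the example list:

```python
MAX_FAILED_ATTEMPTS = 5

class LoginService:
    def __init__(self, users):
        self.users = users   # email -> {"password": ..., "verified": ...}
        self.failed = {}     # email -> consecutive failure count

    def authenticate(self, email, password):
        if self.failed.get(email, 0) >= MAX_FAILED_ATTEMPTS:
            return "account_locked"
        user = self.users.get(email)
        if not user or user["password"] != password:
            # Same generic error whether or not the email exists.
            self.failed[email] = self.failed.get(email, 0) + 1
            return "invalid_credentials"
        if not user["verified"]:
            return "verification_required"
        self.failed[email] = 0
        return "ok"

svc = LoginService({
    "known@example.com": {"password": "Right1!", "verified": True},
    "unverified@example.com": {"password": "Right1!", "verified": False},
})

# Wrong password does not reveal whether the email exists:
assert svc.authenticate("known@example.com", "wrong") == "invalid_credentials"
assert svc.authenticate("nobody@example.com", "wrong") == "invalid_credentials"

# Unverified account gets a verification prompt, not a generic error:
assert svc.authenticate("unverified@example.com", "Right1!") == "verification_required"

# Account locks after 5 consecutive failed attempts (one failure recorded above):
for _ in range(4):
    svc.authenticate("known@example.com", "wrong")
assert svc.authenticate("known@example.com", "Right1!") == "account_locked"
```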


Writing for Execution vs. Writing for Review

The same test case serves two different audiences at different stages of the QA cycle. Writing with both in mind from the start reduces revision cycles and speeds up expert validation.

What QA Engineers need to run the test reliably
1. Unambiguous steps

Every step must be executable without interpretation. A tester seeing the application for the first time should be able to follow the steps exactly and reproduce the same result as the author.

2. Explicit test data

Specify the exact values to use. If the tester selects their own inputs, results vary between testers and runs — making the test unreliable as a quality signal.

3. Pass/fail criteria that require no judgment

The tester should be able to look at the screen after the final step and immediately determine pass or fail. Any ambiguity in the expected result leads to inconsistent results.

4. Setup that can be completed from the preconditions

If the tester needs to ask questions to set up the test, the preconditions are incomplete. A fully specified starting state is the difference between a reproducible test and an unreproducible one.

Test cases written for execution should survive a tester who is new to the team. If they need to ask questions before running a case, the case needs more detail.


How AI Generation Covers the Cases You Would Have Missed

Manual test case writing has a systematic blind spot: people write the cases they think of, which skews toward the paths they have already considered. Negative paths, boundary conditions, and state-transition scenarios are underrepresented because they are less intuitive to generate without a structured process.

AI generation reverses this. Given precise inputs — test type, affected component, and acceptance criteria — the AI systematically produces cases for all four categories: happy paths, negative paths, edge cases, and state-dependent scenarios. The cases a manual writer would skip are often the first ones the AI generates for a well-specified negative or boundary scenario.

Systematic negative path coverage

For every "must" in your acceptance criteria, the AI generates a corresponding "must not" case. These are often the first casualties of time pressure in manual writing.

Boundary cases by default

Select "Boundary Edge Case" as the test type and the AI generates cases at the exact limits — one value under the boundary, one at it, and one over it — that manual writers frequently approximate or skip.

A complete first draft in seconds

Use AI generation to produce the initial case set, then review and extend with manually written cases that require domain knowledge the AI cannot access from your inputs alone.


Test Case Template

A complete test case written to the standard described in this guide. Use this as a reference when writing cases manually or reviewing AI-generated output.

Example Test Case (Functional)
Title

Guest user cannot complete checkout without a valid email address — validation error appears below the field

Preconditions

A guest (unauthenticated) session is active. The cart contains one item (Product A, £24.99). The user is on the Checkout page (/checkout). No discount codes have been applied.

Steps

1. Locate the "Email Address" field in the Contact Information section.
2. Leave the field empty (do not enter any text).
3. Click the "Continue to Shipping" button.

Expected Result

The form does not submit. A red validation message reading "Email address is required" appears directly below the Email Address field. The user remains on the Checkout page (/checkout).

Test Data

Email: (empty string, no whitespace) · Product: Product A, SKU: PA-001 · Price: £24.99 · Discount code: none
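For teams that store cases as structured data, the template maps onto a simple record with a mechanical completeness check a reviewer or tool might run. The field names and the is_complete rule here are an illustrative sketch, not a prescribed schema:

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    title: str
    preconditions: str
    steps: list
    expected_result: str
    test_data: str

    def is_complete(self) -> bool:
        """True only when every element of the template is filled in."""
        return all([
            self.title.strip(),
            self.preconditions.strip(),
            self.steps,
            self.expected_result.strip(),
            self.test_data.strip(),
        ])

case = TestCase(
    title=("Guest user cannot complete checkout without a valid email address "
           "- validation error appears below the field"),
    preconditions=("Guest session active; cart contains Product A (£24.99); "
                   "user on /checkout; no discount codes applied."),
    steps=[
        'Locate the "Email Address" field in the Contact Information section.',
        "Leave the field empty (do not enter any text).",
        'Click the "Continue to Shipping" button.',
    ],
    expected_result=('Form does not submit; red message "Email address is required" '
                     "appears below the field; user stays on /checkout."),
    test_data="Email: empty string; Product A, SKU PA-001, £24.99; no discount code.",
)
assert case.is_complete()
```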


Related guides
AI Test Case Generation — How It Works
What the AI analyzes, what case types it produces, and how to write inputs that get the best output.
Software Testing Types Explained
A complete guide to functional, regression, smoke, exploratory, and other testing types.
QA Team Roles and Best Practices
Role-by-role responsibilities and the full QA cycle including defect management and expert validation.