
QA Glossary — Essential Testing Terms Explained

Plain-English definitions for the terms you'll encounter across test planning, execution, defect management, and AI-assisted QA. Each entry links to the relevant deep-dive guide for the full context.

A

Acceptance Criteria

The conditions a feature must meet to be considered complete and accepted by the stakeholder. Written before development begins, acceptance criteria define the boundaries of correct behaviour and serve as the primary source for generating test cases. Well-written acceptance criteria use Given/When/Then or checklist formats and cover both the success path and key failure conditions.

Acceptance Criteria Guide
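For illustration, a Given/When/Then criterion maps directly onto an executable check. The `apply_discount` rule and its 10% threshold below are hypothetical, invented only for this sketch:

```python
# Hypothetical criterion: "Given a cart total over 100, when the user
# checks out, then a 10% discount is applied."

def apply_discount(total):
    # Hypothetical checkout rule used only for this example.
    return round(total * 0.9, 2) if total > 100 else total

def test_discount_applies_over_threshold():
    total = 150.00                  # Given: a cart total above the threshold
    result = apply_discount(total)  # When: the discount rule runs at checkout
    assert result == 135.00         # Then: 10% is deducted

def test_no_discount_at_threshold():
    # Key failure condition: exactly 100 earns no discount.
    assert apply_discount(100.00) == 100.00

test_discount_applies_over_threshold()
test_no_discount_at_threshold()
```

Each criterion typically yields at least one success-path and one failure-path test case, as here.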

Actual Result

What the application actually did when a test case was executed, as observed and recorded by the tester. The actual result is recorded alongside the expected result when a test case fails, and the gap between the two is the defect. A precise actual result is one of the most important elements of a useful bug report.

Defect Reporting Best Practices

Agile Testing

A QA approach built for iterative development where testing happens continuously throughout the sprint rather than at the end of a project phase. Agile testing emphasises shift-left practices, close collaboration between testers and developers, and fast feedback cycles. Test cases are written from acceptance criteria before or during development, not after.

Agile QA Strategy

B

Boundary Testing

Testing at the edges of valid input ranges — values just below, at, and just above a boundary. Boundary conditions are among the most common sources of bugs because developers often code for typical values and overlook the limits. For example, a field that accepts 1–100 characters should be tested at 0, 1, 100, and 101 characters.

Software Testing Types Explained
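The 1–100 character field mentioned above can be sketched as a boundary check; the validator itself is hypothetical:

```python
def is_valid_length(text):
    # Hypothetical validator for a field that accepts 1-100 characters.
    return 1 <= len(text) <= 100

# Boundary values: just below, at, and just above each limit.
for length, expected in [(0, False), (1, True), (100, True), (101, False)]:
    assert is_valid_length("x" * length) == expected
```

A common off-by-one bug (writing `<` instead of `<=`) passes every typical mid-range value and fails only at exactly 100; the boundary cases are what catch it.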

Bug Report

A structured document recording a defect found during testing. A complete bug report contains a specific title, steps to reproduce, expected result, actual result, environment details, severity, and any supporting evidence such as screenshots or logs. The quality of a bug report directly determines how quickly the defect gets fixed.

Defect Reporting Best Practices

D

Defect

A deviation between what the application does and what it should do according to its requirements or acceptance criteria. Also called a bug. A defect is logged when a test case fails: the tester records what happened (actual result) versus what was expected, along with steps to reproduce and supporting evidence.

Defect Reporting Best Practices

Defect Lifecycle

The states a defect moves through from discovery to resolution: New → Assigned → In Progress → Fixed → Ready for Retest → Verified → Closed. A defect can also be marked Rejected (if not reproducible or not a defect) or Deferred (if valid but not scheduled for this release). Tracking lifecycle state keeps the team aligned on what is being worked on.
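The lifecycle above can be modelled as a small state machine. This is a simplified sketch of the states listed in this entry; real defect trackers let teams customise states and transitions:

```python
# Allowed transitions for the lifecycle described above (simplified sketch).
TRANSITIONS = {
    "New": {"Assigned", "Rejected", "Deferred"},
    "Assigned": {"In Progress"},
    "In Progress": {"Fixed"},
    "Fixed": {"Ready for Retest"},
    "Ready for Retest": {"Verified", "Assigned"},  # back to Assigned if retest fails
    "Verified": {"Closed"},
    "Deferred": {"Assigned"},
    "Rejected": set(),
    "Closed": set(),
}

def move(state, new_state):
    # Reject any transition the lifecycle does not allow.
    if new_state not in TRANSITIONS[state]:
        raise ValueError(f"Cannot move defect from {state} to {new_state}")
    return new_state

# Walk the happy path from discovery to closure.
state = "New"
for step in ["Assigned", "In Progress", "Fixed", "Ready for Retest", "Verified", "Closed"]:
    state = move(state, step)
```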

E

Edge Case

An input, condition, or state at the extreme limits of what the system is designed to handle — not typical enough to appear in normal-path testing, but real enough to occur in production. Edge cases are frequently overlooked in manually written test suites but are systematically covered when acceptance criteria include boundary and constraint information.

Expected Result

The correct output or behaviour defined for a test case step. Recorded before execution so the tester has a clear benchmark to compare against. An expected result must be specific and observable — "the form submits successfully and the user sees a confirmation message" rather than "it works". Vague expected results make test execution unreliable.

How to Write Test Cases That Catch Bugs

Exploratory Testing

Unscripted, simultaneous learning and testing in which the tester uses experience and judgment to investigate the system without following predefined test cases. Exploratory testing is most valuable for finding issues outside the defined scope — usability problems, unexpected interactions, and areas not covered by scripted scenarios. It complements, rather than replaces, structured test case execution.

Software Testing Types Explained

F

Functional Testing

Testing that verifies a feature behaves according to its requirements, from the user's perspective. Functional testing validates inputs, outputs, and system responses against defined acceptance criteria. It is the foundation of any test suite — without it, there is no confirmation that the feature works at all. Functional tests cover happy paths, negative paths, and key error conditions.

Software Testing Types Explained

H

Happy Path

The primary success scenario for a feature — the flow where the user provides valid input and the system responds correctly. Happy path testing is the minimum required for any feature, but it is not sufficient on its own. A feature can pass all happy path tests and still fail for invalid inputs, concurrent users, or edge conditions not represented in the success flow.

AI Test Case Generation — How It Works

I

Integration Testing

Testing that verifies two or more components, services, or systems work correctly together when combined. Integration tests catch problems that unit tests cannot: broken data contracts between components, incorrect API responses, authentication failures, and state inconsistencies that only appear when multiple parts of the system interact.

Software Testing Types Explained

N

Negative Testing

Testing with invalid inputs, missing required data, or unexpected user behaviour to verify the system handles errors correctly. Negative testing answers the question: what happens when the user does something wrong? A system that fails silently, corrupts data, or returns a blank screen on invalid input has failed its negative test cases even if all positive tests pass.

How to Write Test Cases That Catch Bugs
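As a sketch of the idea, each negative case below feeds invalid input to a hypothetical `register_user` handler and requires a clear, specific error rather than silent acceptance:

```python
def register_user(email, password):
    # Hypothetical registration handler with explicit input validation.
    if not email or "@" not in email:
        raise ValueError("A valid email address is required")
    if len(password) < 8:
        raise ValueError("Password must be at least 8 characters")
    return {"email": email}

# Negative test cases: each invalid input must fail loudly, not silently.
invalid_inputs = [("", "secret123"), ("no-at-sign", "secret123"), ("a@b.com", "short")]
for email, password in invalid_inputs:
    try:
        register_user(email, password)
        raise AssertionError(f"Invalid input was accepted: {email!r}")
    except ValueError:
        pass  # expected: a clear, specific error
```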

P

Pass / Fail

The binary outcome assigned to each test case during execution. A test case passes when the actual result matches the expected result exactly. A test case fails when any deviation is observed — including partial matches, unexpected warnings, and performance issues not captured in the expected result. Every failure triggers a defect report.

Test Run Execution Guide

Priority

How urgently a defect needs to be fixed relative to the team's current business needs and release schedule. Priority is a business decision, separate from severity. A critical-severity defect affecting a feature not yet released might carry low priority. A low-severity cosmetic defect on the homepage might carry high priority before a marketing campaign. Priority is assigned by the product owner or tech lead.

Defect Reporting Best Practices

R

Regression Testing

Re-running a defined set of existing test cases after a code change to verify that previously working functionality has not been broken. Regression testing does not test new features — it protects what was already working. The scope of a regression run should match the risk of the change: a small bug fix warrants a targeted regression; a major release warrants a broad one.

Software Testing Types Explained

Reproducible Steps

A numbered sequence of exact actions that consistently triggers a defect, starting from a known system state. Reproducible steps are the most critical part of a useful bug report. If a developer cannot reproduce the bug from the steps provided, the report will be closed — regardless of whether the defect is real. Steps must include preconditions, exact navigation path, input data, and the action that triggers the failure.

Defect Reporting Best Practices

Risk-Based Testing

A coverage strategy that allocates testing effort in proportion to the business risk of each area. High-risk areas — those most likely to fail, most heavily used, or most damaging if they break — receive the most test cases and the most thorough execution. Lower-risk areas receive lighter coverage. Risk-based testing is how teams make informed decisions about where to stop, not about skipping testing.

Test Coverage — How Much Testing Is Enough?

S

Scenario

A saved collection of test cases for a specific feature or user flow — for example, "User Registration" or "Checkout Flow". In Evaficy Smart Test, scenarios are the core unit of organisation: they are created from acceptance criteria, submitted for expert validation, reused across test runs, and used to track coverage over time. A good scenario covers the happy path, negative paths, and key edge cases for the feature.

How to Set Up a QA Project from Scratch

Severity

The impact level of a defect on system functionality, independent of business priority. Common severity levels: Critical (system crash, data loss, security breach), High (major feature broken with no workaround), Medium (feature impaired but workaround exists), Low (cosmetic or minor inconvenience). Severity is assigned by the QA engineer based on observed impact, not by how urgent it feels.

Defect Reporting Best Practices

Shift-Left Testing

Moving QA activities earlier in the development lifecycle — to the left on the project timeline — rather than waiting until code is complete. In practice this means writing test cases from acceptance criteria before or during development, reviewing requirements for testability in sprint planning, and running smoke tests immediately after each deployment. Shift-left testing reduces the cost of defects by finding them sooner.

Agile QA Strategy

Smoke Testing

A short, focused check of the most critical functions of an application immediately after a new build is deployed. Its single purpose: is this build stable enough to invest further testing effort in? If a smoke test fails, the build is rejected immediately. A smoke test typically covers login, core navigation, and one or two critical business operations — nothing more.

Software Testing Types Explained

T

Test Case

A single documented check consisting of a title, preconditions, numbered steps, and an expected result. Each test case tests exactly one condition or behaviour. A good test case is specific enough that two different testers would execute it identically and reach the same pass/fail conclusion. Test cases are organised into scenarios and executed during test runs.

How to Write Test Cases That Catch Bugs
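The four parts of a test case can be represented as a simple record; the login example below is illustrative, not taken from any real suite:

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    # The four parts named above: title, preconditions, steps, expected result.
    title: str
    preconditions: list
    steps: list
    expected_result: str

tc = TestCase(
    title="Login fails with wrong password",
    preconditions=["User account exists", "User is logged out"],
    steps=[
        "1. Open the login page",
        "2. Enter the registered email",
        "3. Enter an incorrect password",
        "4. Click 'Log in'",
    ],
    expected_result="An 'Invalid credentials' error is shown and the user stays logged out",
)
```

Note that the expected result is specific and observable, so two different testers would reach the same pass/fail verdict.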

Test Coverage

The proportion of defined requirements, acceptance criteria, or risk areas that have been tested. Complete coverage does not mean testing everything — it means testing the right things at the right depth. Coverage is measured against a baseline: the features listed in scope, the acceptance criteria written, or the risk tiers identified for the release.

Test Coverage — How Much Testing Is Enough?

Test Execution

The process of running test cases step by step, recording pass/fail for each, documenting actual results, and logging defects for any failures. Test execution produces a test run record — a time-stamped report of what was tested, what passed, what failed, and what defects were found. Structured execution is what separates systematic QA from ad hoc checking.

Test Run Execution Guide

Test Plan

A document defining the scope, approach, resources, and schedule for a testing effort. A test plan answers: what will be tested, what will not be tested, who will test it, what environments are needed, what the entry and exit criteria are, and what the risk mitigation strategy is. For agile teams, a lightweight test plan per sprint is more practical than a single upfront document.

Test Run

A guided execution session created from a saved scenario. In Evaficy Smart Test, a test run takes a tester through each test case one by one — recording pass/fail, actual results, and defects with evidence at each step. Completed test runs are saved and searchable, building a historical quality record across releases.

Test Run Execution Guide

Test Scenario

A high-level description of what needs to be tested for a feature, expressed as a user goal or business condition rather than a sequence of steps. A single test scenario contains multiple test cases. For example, the scenario "User cannot log in with invalid credentials" contains separate test cases for wrong password, unregistered email, empty fields, and locked accounts.

How to Write Test Cases That Catch Bugs

Test Type

The category of testing being performed, chosen based on what the test is designed to verify. Common test types include functional, regression, smoke, exploratory, integration, boundary, UAT, and performance. Selecting the correct test type determines what the AI focuses on when generating test cases and how the test cases are structured.

Software Testing Types Explained

U

UAT (User Acceptance Testing)

Testing performed by end users or business stakeholders — not the QA team — to confirm that the system meets their real-world needs before it goes live. UAT validates that the right product was built, not just that the product was built correctly. It is the final gate before production release and is often where requirements misunderstandings surface for the first time.

Software Testing Types Explained

Unit Testing

Developer-written tests that verify individual functions or modules in isolation, without any external dependencies. Unit tests run fast, give precise failure information, and are the first line of defence against regressions. They do not replace integration or functional testing — a system where every unit test passes can still fail when those units are combined.
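A pure function like the hypothetical VAT calculation below is the ideal unit-test target: no database, network, or file system involved, so the tests run instantly and point at exactly one function when they fail:

```python
def calculate_vat(net_amount, rate=0.20):
    # Hypothetical pure function: no external dependencies to mock.
    if net_amount < 0:
        raise ValueError("net_amount must be non-negative")
    return round(net_amount * rate, 2)

# Unit tests: fast, isolated, and precise about which behaviour broke.
assert calculate_vat(100) == 20.0
assert calculate_vat(0) == 0.0
assert calculate_vat(19.99) == 4.0  # 3.998 rounds to 4.0
```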

V

Validation Workflow

A structured peer review process in which a completed test scenario is submitted to a designated reviewer — typically a Product Owner or Tech Lead — for approval before it is used in test execution. The reviewer can approve the scenario, request changes, or reject it. Validation ensures that test coverage has been reviewed by someone with domain authority before testing begins.

How to Use AI QA Testing

Verification vs Validation

Two distinct quality activities often confused. Verification asks: "Are we building the product correctly?" — checking that the implementation matches the specification. Validation asks: "Are we building the right product?" — checking that the product meets the user's actual needs. A product can pass verification (correctly implements the spec) and fail validation (the spec was wrong). Both are necessary.

Ready to put these terms into practice?

Evaficy Smart Test gives you structured test scenarios, AI-powered generation, expert validation, and step-by-step execution — all in one place.

Browse Learning Centre
How It Works