How to Set Up a QA Project from Scratch
A step-by-step walkthrough for creating a project, structuring your scenarios, generating test cases with AI, submitting for expert validation, and running your first structured test execution — all in one place.
For QA Engineers, Product Owners, and team leads starting a new project on Evaficy Smart Test.
Before You Begin — Three Decisions to Make First
A project in Evaficy Smart Test is an isolated workspace for a specific application, product, or module. Before you click "Create", take two minutes to answer these questions — they determine how you structure everything that follows.
What is the scope of this project?
One product, one module, or one release cycle? If you are testing a single product with many independent modules, consider one project per major module rather than one project for everything. Scenarios within a project share team access, so scope it to what a single team owns.
Who needs access and in what role?
Identify your QA Engineers (who will write scenarios and execute runs), your Tech Lead (who will review test cases technically), and your Product Owner (who will validate business alignment). You need at least one reviewer assigned before a validation request can be processed.
What testing types will this project use?
Will you be running functional tests, regression tests, smoke tests, or a combination? Knowing this upfront helps you write AI generation inputs that are consistent and appropriately scoped from the first scenario.
The Six Steps
Step 1: Create and Configure the Project
From the main screen, click Projects in the left sidebar, then click the create button. The creation form is intentionally minimal — Evaficy projects are workspaces, not configuration files.
The form has two fields:
- Project Name* (required)
- Description (optional)

Project limit
Advanced accounts can own up to 3 projects; Enterprise accounts up to 10. If you have reached the limit you have four options: upgrade to a higher plan, create a new account for a separate team or product area, delete an existing project, or transfer ownership of one to another team member so it no longer counts against your limit.
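As a mental model, the limit rule can be sketched in a few lines of Python (the plan names and counts come from this guide; the function itself is illustrative, not part of any Evaficy API):

```python
# Illustrative sketch only. Limits are those stated above:
# Advanced = 3 projects, Enterprise = 10.
PROJECT_LIMITS = {"advanced": 3, "enterprise": 10}

def can_create_project(plan: str, owned_projects: int) -> bool:
    """Return True if the account may own one more project."""
    return owned_projects < PROJECT_LIMITS[plan]
```

If this returns False, the four options above apply: upgrade, use a separate account, delete a project, or transfer ownership.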
Step 2: Invite Your Team and Assign Roles
Open the project settings by clicking the Manage Project icon next to the Projects heading in the left sidebar, then navigate to the Team Members tab. Add each team member by their email address and select their role. Invitations are sent by email and include a direct link to accept.
The four roles and what each can do:
Owner
- Create and delete the project
- Invite and remove members, change their roles
- Access all scenarios and test runs
- Transfer project ownership
Product Owner
- Manage team members and invitations
- Approve or reject validation requests (business review)
- Access all scenarios and test runs
Tech Lead
- Edit project details
- Approve or reject validation requests (technical review)
- Access all scenarios and test runs
QA Engineer
- Create and manage scenarios
- Generate test cases with AI
- Execute test runs
- Submit scenarios for validation
Assign roles before work starts
If you wait until a scenario is ready for validation to realize no one holds the PO or TL role, you have created a bottleneck. Assign roles immediately after creating the project — it takes 30 seconds and prevents that delay entirely.
Advanced projects support up to 3 team members; Enterprise projects up to 25. Pending invitations count toward this limit until they expire or are revoked.
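The four roles above can be summarised as a permission matrix. This sketch is purely illustrative (the action names are hypothetical labels, not platform identifiers):

```python
# Hypothetical permission matrix mirroring the role descriptions above.
PERMISSIONS = {
    "owner":         {"delete_project", "manage_members", "transfer_ownership", "view_all"},
    "product_owner": {"manage_members", "approve_validation", "view_all"},
    "tech_lead":     {"edit_project", "approve_validation", "view_all"},
    "qa_engineer":   {"create_scenarios", "generate_ai_cases", "execute_runs",
                      "submit_validation", "view_all"},
}

def can(role: str, action: str) -> bool:
    """Check whether a role may perform an action."""
    return action in PERMISSIONS.get(role, set())
```

Note that only the Product Owner and Tech Lead rows carry `approve_validation`, which is exactly why assigning those roles early matters.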
Step 3: Plan Your Scenario Structure
A scenario is a named collection of test cases for one specific feature or user flow. Before generating test cases, decide how to split your testing work across scenarios.
✓ Organise by feature
- "User Registration"
- "Login & Session Management"
- "Password Reset Flow"
- "Profile Settings"
✗ Organise by sprint
- "Sprint 14 Tests"
- "March Release"
- "Q1 2026"
- "Hotfix tests"
Feature-based scenarios survive sprints. They can be reused in any future test run, updated as the feature evolves, and searched by any team member. Sprint-based scenarios become orphaned and confusing within weeks — their names no longer reflect what they cover, and there is no natural owner.
One scenario = one user-facing capability
A good heuristic: if the scenario name would still make sense in six months, you have scoped it correctly. "Checkout with coupon codes" is clear six months from now. "Sprint 14 checkout stuff" is not.
Step 4: Generate Your First Test Cases
The AI test case generator is available directly on the main screen as soon as you log in — no extra navigation needed. Fill in your criteria; then generate test cases with AI, search the existing library, or create individual cases manually with the New button. Select the cases you want to keep and press Save. The Save Test Scenario dialog then appears, where you name the scenario and assign it to your project. AI generation takes structured input and produces a complete set of test cases — covering positive flows, negative paths, edge cases, and boundary conditions — in seconds.
Generator input fields:
- Test Type* (required)
- Affected Page
- Custom Fields
After filling in the inputs, you have three options:
- Generate: Let the AI produce test cases from your inputs. Available on Advanced (200/month) and Enterprise (500/month) plans. A warning appears when you approach 80% of your monthly limit, and the button is disabled once the limit is reached.
- Search: Find pre-existing validated test cases from the Evaficy library. Available on all plans. Useful for common patterns like login flows, form validation, and pagination.
- New: Add a blank test case to fill in manually. Use this for cases that require institutional knowledge the AI does not have — known bugs, integration-specific edge cases, or your product's unique business rules.
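The quota behaviour described for AI generation (a warning near 80%, the button disabled at the limit) can be sketched as follows; the limits are those stated in this guide, and the function is illustrative, not an Evaficy API:

```python
# Sketch of the monthly AI-generation quota: warn at 80% of the
# limit, disable at 100%. Limits per plan as stated in this guide.
GENERATION_LIMITS = {"advanced": 200, "enterprise": 500}

def quota_state(plan: str, used: int) -> str:
    """Return 'ok', 'warning', or 'disabled' for the Generate button."""
    limit = GENERATION_LIMITS[plan]
    if used >= limit:
        return "disabled"
    if used >= 0.8 * limit:
        return "warning"
    return "ok"
```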
Review before saving
AI-generated cases cover systematic coverage well — boundary values, invalid inputs, state transitions — but they cannot know your product's accepted workarounds, known limitations, or business exceptions. Review the generated list, deselect any cases that do not apply, and add manual cases for anything the AI missed.
Step 5: Save Your Scenario and Submit for Validation
Once you have selected the test cases you want to keep, click Save to open the scenario save dialog. This is where you name the scenario, associate it with a project, and optionally request expert validation before execution.
Save dialog fields:
- Project* (required)
- Scenario Name* (required)
- Description
- Request Validation
If you are updating an existing scenario, the save dialog shows a before/after comparison — each test case is marked as added (green), deleted (red), modified (yellow), or unchanged. This lets you confirm that the changes are exactly what you intended before overwriting the previous version.
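The added/deleted/modified/unchanged classification can be sketched as a simple dictionary diff. This is an illustration of the concept, not Evaficy's actual comparison logic:

```python
# Classify each test case as added, deleted, modified, or unchanged
# by comparing the saved version against the edited one.
def classify_changes(before: dict, after: dict) -> dict:
    """before/after map test-case id -> case text."""
    changes = {}
    for case_id in before.keys() | after.keys():
        if case_id not in before:
            changes[case_id] = "added"      # green in the dialog
        elif case_id not in after:
            changes[case_id] = "deleted"    # red
        elif before[case_id] != after[case_id]:
            changes[case_id] = "modified"   # yellow
        else:
            changes[case_id] = "unchanged"
    return changes
```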
Validation statuses:
- Not submitted: Scenario has not been submitted for review.
- Pending: Submitted and waiting for a reviewer to pick it up.
- Approved: Reviewed and cleared. Ready to be included in a test run.
- Changes requested: Reviewer has requested specific revisions. Check the validation notes.
Who reviews and what they look for
Tech Leads review for technical correctness — are the steps feasible, do they cover the right integration points, are the expected results technically accurate? Product Owners review for business alignment — do the test cases reflect what the feature is supposed to do from a user and business perspective? Both can approve or request changes with notes. Either approval alone is sufficient.
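Under the rule that either approval alone is sufficient, status resolution might look like this sketch. The assumption that a change request takes precedence over an approval is mine, not stated in the product docs:

```python
# Illustrative sketch of the review rule above. Assumption: any
# "request_changes" decision outranks an approval.
def validation_status(reviews: list) -> str:
    """reviews: (reviewer_role, decision) pairs, decision in
    {'approve', 'request_changes'}."""
    if any(d == "request_changes" for _, d in reviews):
        return "changes_requested"
    if any(d == "approve" for role, d in reviews
           if role in ("tech_lead", "product_owner")):
        return "approved"   # either reviewer alone is sufficient
    return "pending"
```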
Step 6: Create and Execute Your First Test Run
Test runs require the Enterprise plan.
Once your scenario is saved, click the account icon in the top-right of the header and select Test Runs from the menu, then click New Run. Select your project and scenario — the run name is pre-filled with the scenario name and current date, which you can edit. Then configure the execution environment.
Run configuration fields:
- Project* (required)
- Scenario* (required)
- Run Name* (required)
- Environment
- Browser / OS
- Build Version
During execution, the test case list is shown on the left. Select any test case to open the execution panel on the right. Work through each step, mark it as Pass or Fail, and record the actual result for each step. If a test case fails, add a defect description, set the severity and priority, and attach any evidence links — screenshots, video recordings, or bug tracker tickets — before moving on.
Save frequently
Click Save Result for each test case before navigating to the next one. If you navigate away with unsaved changes, you will be prompted to confirm — unsaved step results and defect details will be lost.
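The per-case record you fill in during execution (a status and actual result per step, plus defect details on failure) could be modelled like this; the field names are hypothetical, not a platform schema:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class StepResult:
    description: str
    status: str              # "pass" or "fail"
    actual_result: str = ""

@dataclass
class TestCaseResult:
    name: str
    steps: List[StepResult] = field(default_factory=list)
    defect: Optional[str] = None        # filled in when a step fails
    severity: Optional[str] = None
    evidence_links: List[str] = field(default_factory=list)

    @property
    def passed(self) -> bool:
        """A case passes only if every executed step passed."""
        return all(step.status == "pass" for step in self.steps)
```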
Maintaining the Project Over Time
Setting up a project well is not a one-time task. As your application evolves, your scenarios need to evolve with it. Evaficy Smart Test helps you stay current in two ways.
Updating scenarios as features change
When a feature changes, reload the scenario, edit or regenerate the affected test cases, and save. The before/after comparison in the save dialog makes it easy to review what changed. If the changes are significant, re-submit for validation — especially if the business logic or acceptance criteria have shifted.
Drift detection
When a test run is created, Evaficy takes a snapshot of the scenario at that point in time. If the underlying test case is later modified, the run will show a drift warning — a visual indicator that the scenario has changed since the snapshot was taken. This does not invalidate the run, but it signals that results should be interpreted with that context in mind.
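The snapshot idea behind drift detection can be sketched with a content hash. The hashing scheme here is illustrative, not Evaficy's implementation:

```python
import hashlib

# Hash the scenario's test cases at run creation; compare later to
# decide whether a drift warning should be shown.
def snapshot(test_cases: list) -> str:
    digest = hashlib.sha256()
    for case in test_cases:
        digest.update(case.encode("utf-8"))
        digest.update(b"\x00")  # separator so concatenation is unambiguous
    return digest.hexdigest()

def has_drifted(run_snapshot: str, current_cases: list) -> bool:
    """True if the scenario changed since the run was created."""
    return snapshot(current_cases) != run_snapshot
```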
Related guides
QA Team Roles & Best Practices
A deeper look at each role's responsibilities and the full QA cycle.
How to Use AI QA Testing
Platform overview covering generation, validation, and execution in detail.
Ready to create your first project?
Set up your project, invite your team, and generate your first AI test suite in minutes.