
# Test Estimation & Effort Planning

## Purpose

QA estimation is not guessing test case count. It is forecasting the effort needed to reduce release risk to an acceptable level.

Estimation should consider:

- scope size
- change complexity
- dependency risk
- environment readiness
- automation impact
- regression impact


## Estimation inputs

| Input | Example questions |
| --- | --- |
| Scope | How many stories, flows, APIs, or integrations changed? |
| Complexity | Are there many rules, state transitions, or data variants? |
| Risk | Is this payment, auth, or compliance-sensitive? |
| Dependencies | Does it rely on third-party or unstable environments? |
| Reuse | Do tests/data/helpers already exist? |
| Non-functional needs | Are performance, security, or accessibility required now? |

## Lightweight estimation model

Use story-level sizing for QA effort.

| Size | Typical QA effort | Characteristics |
| --- | --- | --- |
| S | <= 0.5 day | Low-risk change, minimal regression |
| M | 1 day | New rules or moderate regression impact |
| L | 2-3 days | Integration change, several paths, test data setup |
| XL | 4+ days | Multi-service change, high risk, performance/security scope |

### Example modifiers

| Modifier | Effect |
| --- | --- |
| New external integration | +1 size |
| Critical domain | +1 size |
| Missing environment/data | +1 size |
| Existing automated coverage | -1 size if strong |
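The sizing model and modifiers above can be sketched as a small function. This is a hypothetical illustration, not a prescribed tool: the size scale comes from the tables, but the function name, parameter names, and the choice to clamp results to the S..XL range are assumptions.

```python
# Illustrative sketch of the sizing model: a base story size shifted by the
# modifier table, clamped to the S..XL scale. Names are hypothetical.

SIZES = ["S", "M", "L", "XL"]

def qa_size(base: str,
            new_external_integration: bool = False,
            critical_domain: bool = False,
            missing_env_or_data: bool = False,
            strong_automated_coverage: bool = False) -> str:
    """Apply the example modifiers to a base QA size."""
    idx = SIZES.index(base)
    # Each risk modifier bumps the size up one step; strong existing
    # automated coverage pulls it down one step.
    idx += new_external_integration + critical_domain + missing_env_or_data
    idx -= strong_automated_coverage
    idx = max(0, min(idx, len(SIZES) - 1))  # clamp to S..XL
    return SIZES[idx]

print(qa_size("M", new_external_integration=True))   # L
print(qa_size("L", strong_automated_coverage=True))  # M
```

The clamp matters at the edges: an S story with strong coverage stays S, and an L story with several risk modifiers tops out at XL rather than growing unbounded.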

## Release-level planning

For release planning, estimate by work buckets instead of only by stories.

| Bucket | Questions |
| --- | --- |
| Feature validation | How many new or changed flows? |
| Regression | Which existing areas are impacted? |
| Test data | Do we need new fixtures/accounts? |
| Environment | Is staging stable and production-like enough? |
| Automation updates | Which tests must be added or fixed? |
| Reporting | Who prepares the go/no-go summary and risk register? |
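Bucket-level estimation reduces to summing per-bucket effort and adding a buffer for defect retest. A minimal sketch, assuming day figures per bucket are supplied by the team and a 20% default buffer (both illustrative, not mandated by this guide):

```python
# Hypothetical sketch: release QA effort summed by work bucket, with a
# retest buffer. Bucket names mirror the table above; days are example inputs.

def release_estimate(buckets: dict[str, float], buffer_pct: float = 0.2) -> float:
    """Total QA days across buckets, plus a defect-retest buffer."""
    subtotal = sum(buckets.values())
    return round(subtotal * (1 + buffer_pct), 1)

days = release_estimate({
    "feature_validation": 5,
    "regression": 3,
    "test_data": 1,
    "environment": 0.5,
    "automation_updates": 2,
    "reporting": 0.5,
})
print(days)  # 14.4
```

Keeping the buffer as an explicit parameter, rather than padding each bucket silently, makes the retest allowance visible in the plan.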

## Estimation anti-patterns

| Anti-pattern | Why it fails |
| --- | --- |
| Estimate by test case count | Ignores complexity and risk |
| Ignore setup effort | Environment and data often dominate time |
| Assume automation is free | Stable automation still has build and maintenance cost |
| Treat all stories equally | Risk and blast radius differ heavily |
| No buffer for defect retest | Late churn breaks the schedule |

## Good planning outputs

Every QA estimate should produce:

1. Effort size.
2. Main assumptions.
3. Major risks and blockers.
4. Required environments and data.
5. What must be automated now vs later.

### Example output

```text
Story: Bulk invite users
QA size: L
Assumptions:
- Email sandbox available
- CSV upload limit defined
Risks:
- Partial success behavior unclear
- Role assignment rules may conflict
Needed:
- 3 role-based accounts
- CSV fixtures
- API and UI regression on user management
```
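Teams that want estimates to be machine-readable (for dashboards or release reports) can capture the same fields as a structured record. A sketch of one possible shape, assuming the field names below; nothing in this guide requires this representation:

```python
# Hypothetical sketch: the "Bulk invite users" estimate as a structured
# record, so assumptions and risks are explicit rather than hidden.
from dataclasses import dataclass, field

@dataclass
class QaEstimate:
    story: str
    size: str                                       # S / M / L / XL
    assumptions: list[str] = field(default_factory=list)
    risks: list[str] = field(default_factory=list)
    needed: list[str] = field(default_factory=list)

est = QaEstimate(
    story="Bulk invite users",
    size="L",
    assumptions=["Email sandbox available", "CSV upload limit defined"],
    risks=["Partial success behavior unclear",
           "Role assignment rules may conflict"],
    needed=["3 role-based accounts", "CSV fixtures",
            "API and UI regression on user management"],
)
print(est.size, len(est.risks))  # L 2
```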

Good estimates are explicit about uncertainty. Hidden assumptions create schedule slips.