Performance Budgets for Mobile Apps (2026)


May 16, 2026 · 3 min read · Testing Guides

Performance is death by a thousand cuts. No single change crosses the line; cumulatively, the app is slow. Performance budgets are numerical targets that every change respects. Cross the budget → the change is rejected or the budget is revised with justification. This guide covers how to set and enforce them.

Why budgets

Without a budget, engineers cannot judge "is this new feature's 100ms acceptable?" With a budget (cold start < 800ms on mid-tier), the question becomes "does this change keep us under?"

Budgets turn perf from subjective into boolean: under or over.
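The under-or-over framing is literally a one-line check. A minimal sketch in Python; the metric name and the 800 ms value are illustrative, not prescriptions:

```python
# A budget is a hard numeric threshold, so checking it is boolean.
# Values are illustrative (cold start < 800 ms on a mid-tier device).
BUDGETS_MS = {"cold_start": 800}

def within_budget(metric: str, measured_ms: float) -> bool:
    """True if the measured value is at or under the budget."""
    return measured_ms <= BUDGETS_MS[metric]
```

A change that lands at 750 ms passes; one that lands at 900 ms does not. No debate, no adjectives.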

What to budget

Startup — cold and warm launch time to interactive

Runtime — frame time and jank on key screens

Network — request latency and payload size

Battery — drain per active session

Example budgets (mid-tier 2023 Android)

These are starting points. Adjust for your app's class (games differ, utilities differ).

Setting a budget

1. Measure current

Baseline across representative devices. Document p50, p95, p99.
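Deriving those summary statistics from raw samples is straightforward with the standard library; a sketch, assuming at least two samples per device:

```python
# Compute p50/p95/p99 from a list of raw samples (e.g. cold-start times in ms).
from statistics import quantiles

def summary(samples):
    # quantiles(n=100) returns the 99 cut points p1..p99 (exclusive method).
    qs = quantiles(samples, n=100)
    return {"p50": qs[49], "p95": qs[94], "p99": qs[98]}
```

Record the summary per device class; the p95 of today becomes the budget of tomorrow.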

2. Set target

For existing apps: current p95. For new: user-research-driven (e.g., Google's Web Vitals: LCP < 2.5s).

3. Commit to it

Budget is merge-blocking. Everyone on team aware. Documented in repo.
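One way to document the commitment in-repo is a small config file that CI reads. The file name and every value below are hypothetical placeholders, not recommendations:

```yaml
# perf-budgets.yml — hypothetical file name; values are illustrative
device_class: mid-tier-2023-android
budgets:
  cold_start_ms: 800
  p95_frame_time_ms: 16
  apk_download_mb: 40
```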

Enforcement

CI regression

Every PR runs perf benchmark. Compare to baseline. Regression > X% → red.
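A gate of that shape can be sketched in a few lines; the 5% default stands in for the unspecified X%, which each team picks for itself:

```python
# CI regression gate sketch: fail the check when a PR's benchmark result
# regresses more than a chosen percentage versus the recorded baseline.
# The 5% threshold is illustrative.
def passes_regression_gate(baseline_ms: float, pr_ms: float,
                           max_regression_pct: float = 5.0) -> bool:
    regression_pct = (pr_ms - baseline_ms) / baseline_ms * 100
    return regression_pct <= max_regression_pct
```

An 820 ms result against an 800 ms baseline (2.5% slower) passes; 900 ms (12.5% slower) goes red.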

Jetpack Macrobenchmark (Android):


@get:Rule
val rule = MacrobenchmarkRule()

@Test
fun startup() = rule.measureRepeated(
    packageName = "com.example",
    metrics = listOf(StartupTimingMetric()),
    iterations = 10,
    startupMode = StartupMode.COLD,
) {
    pressHome()
    startActivityAndWait()
}

CI fails if median exceeds budget.

Real-device pool

Run on 3-5 physical devices (flagship, mid, budget) per release. Numbers from one device are not enough.

Production monitoring

Firebase Performance Monitoring / Sentry Performance / New Relic Mobile. Real user data. Different from lab numbers — longer tail, older devices, degraded networks.

Release gate

Release-candidate builds must pass all budgets. If not, investigate or revise budget with data.

Revising budget

Budgets are not immutable. Revise them when the data justifies it — the device-class baseline shifts, user research sets a new target, or a deliberate trade-off is accepted with sign-off.

Do not revise because a PR missed the budget. That is not science; that is capitulation.

Prioritization

  1. Startup — first impression
  2. Jank on primary interaction screens — perceived quality
  3. Memory — stability on low-end
  4. Battery — retention driver
  5. Network — cellular users

Pick 2-3 top budgets. Enforce ruthlessly. Expand once stable.

Anti-patterns

Synthetic budgets only

Lab numbers satisfy budget; real users experience slower. Field data must validate.

Single-device budgets

"Fast on our Pixel 7" means nothing for Samsung A12 users.

No rollback on regression

Budget broken; ship anyway; compound debt.

Budget is a suggestion

No enforcement = no budget.

How SUSA measures

SUSA's performance monitor samples CPU, memory, FPS, and frame time during exploration. Per-screen aggregation flags screens that exceed their thresholds.

Reports include p50 / p95 / p99 per metric. Cross-session comparison shows regression.


susatest-agent test myapp.apk --persona impatient --steps 100
# results/perf.json has per-screen metrics

Continuous investment

Perf work is not one-time. Every quarter, dedicate time to profiling hot paths, paying down accumulated regressions, and re-baselining budgets on current devices.

Over a year of sustained effort, perf improves by 30-50%. Without effort, it regresses by 20% annually.

Performance is a team commitment. Budgets make it measurable. Enforce, revisit, improve.

Test Your App Autonomously

Upload your APK or URL. SUSA explores like 10 real users — finds bugs, accessibility violations, and security issues. No scripts.

Try SUSA Free