Performance Budgets for Mobile Apps (2026)
Performance is death by a thousand cuts. No single change crosses the line; cumulatively, the app is slow. Performance budgets are numerical targets that every change respects. Cross the budget → the change is rejected or the budget is revised with justification. This guide covers how to set and enforce them.
Why budgets
Without a budget, engineers cannot judge "is this new feature's 100ms acceptable?" With a budget (cold start < 800ms on mid-tier), the question becomes "does this change keep us under?"
Budgets turn perf from subjective into boolean: under or over.
What to budget
Startup
- Cold start: process launch to first frame rendered
- First interactive: when the UI accepts a tap
- Time to content: when primary content is visible
Runtime
- Frame time p50 / p95 / p99
- Jank percentage (share of frames over 16.67ms, the 60Hz frame budget)
- Memory peak
- Memory growth rate over session
Network
- Time to first byte (TTFB) on critical APIs
- Bytes downloaded on first session
- Request count to first interactive
Battery
- Drain per hour of foreground use
- Drain per hour of background (location, sync)
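The runtime metrics above fall out directly from raw per-frame durations. A minimal sketch of the jank-percentage calculation (the sample frame times are made up for illustration):

```python
def jank_percent(frame_times_ms, budget_ms=16.67):
    """Share of frames exceeding the per-frame budget (60 Hz -> 16.67 ms)."""
    over = sum(1 for t in frame_times_ms if t > budget_ms)
    return 100.0 * over / len(frame_times_ms)

# Hypothetical frame durations captured over one interaction.
frames = [8.3, 12.1, 15.0, 22.4, 9.7, 33.1, 14.2, 16.0, 18.9, 11.5]
print(round(jank_percent(frames), 1))  # 3 of 10 frames over budget -> 30.0
```

The same raw samples feed the frame-time percentiles; only the aggregation differs.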
Example budgets (mid-tier 2023 Android)
- Cold start: < 800ms
- First interactive: < 1200ms
- Frame time p95: < 20ms
- Jank %: < 5%
- Memory peak: < 250MB
- First-session download: < 5MB
- Battery drain: < 3% / hour foreground
These are starting points. Adjust for your app's class (games differ, utilities differ).
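One common way to make targets like these enforceable is a machine-readable budget file checked into the repo. A sketch, assuming a JSON layout of our own invention (file name and keys are illustrative, not a standard):

```python
import json

# Illustrative budget file; mirrors the example budgets above.
BUDGETS = {
    "cold_start_ms":         {"max": 800},
    "first_interactive_ms":  {"max": 1200},
    "frame_time_p95_ms":     {"max": 20},
    "jank_percent":          {"max": 5},
    "memory_peak_mb":        {"max": 250},
    "first_session_dl_mb":   {"max": 5},
    "battery_drain_pct_hr":  {"max": 3},
}

with open("perf-budgets.json", "w") as f:
    json.dump(BUDGETS, f, indent=2)
```

Keeping budgets in the repo means a budget change shows up in review like any other diff, with a justification in the commit message.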
Setting a budget
1. Measure current
Baseline across representative devices. Document p50, p95, p99.
2. Set target
For existing apps: start from the current p95. For new apps: derive from user research (e.g., Google's Web Vitals target LCP < 2.5s).
3. Commit to it
Budget is merge-blocking. Everyone on team aware. Documented in repo.
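The percentiles in step 1 can be computed straight from raw samples. A sketch using nearest-rank percentiles (the cold-start samples are made up):

```python
import math

def percentile(samples, pct):
    """Nearest-rank percentile: smallest value covering pct% of samples."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(pct / 100 * len(ordered)))
    return ordered[rank - 1]

# Hypothetical cold-start measurements (ms) across a device pool.
cold_starts_ms = [612, 640, 655, 701, 720, 744, 780, 812, 850, 1105]
for p in (50, 95, 99):
    print(f"p{p} = {percentile(cold_starts_ms, p)} ms")  # p50 = 720 ms
```

Note how one slow outlier dominates p95/p99 here; that is exactly why the baseline should record all three percentiles, not just the median.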
Enforcement
CI regression
Every PR runs perf benchmark. Compare to baseline. Regression > X% → red.
Jetpack Macrobenchmark (Android):
@get:Rule
val rule = MacrobenchmarkRule()

@Test
fun startup() = rule.measureRepeated(
    packageName = "com.example",
    metrics = listOf(StartupTimingMetric()),
    iterations = 10,
    startupMode = StartupMode.COLD,
) { pressHome(); startActivityAndWait() }
CI fails if median exceeds budget.
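The pass/fail comparison itself is simple. A sketch of the gating logic (the 5% threshold and metric names are assumptions, not prescriptions):

```python
def gate(current, baseline, max_regression_pct=5.0):
    """Return a list of metrics that regressed past the allowed percentage."""
    failures = []
    for metric, base in baseline.items():
        now = current[metric]
        regression = 100.0 * (now - base) / base
        if regression > max_regression_pct:
            failures.append(f"{metric}: {base} -> {now} (+{regression:.1f}%)")
    return failures

baseline = {"cold_start_ms": 760, "frame_time_p95_ms": 18.0}
current  = {"cold_start_ms": 840, "frame_time_p95_ms": 17.5}
for line in gate(current, baseline):
    print("REGRESSION:", line)
# A real CI script would exit non-zero when the list is non-empty.
```

Comparing against a committed baseline rather than the previous run prevents slow drift: ten 1% regressions in a row still trip the gate.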
Real-device pool
Run on 3-5 physical devices (flagship, mid, budget) per release. Numbers from one device are not enough.
Production monitoring
Firebase Performance Monitoring / Sentry Performance / New Relic Mobile. Real user data. Different from lab numbers — longer tail, older devices, degraded networks.
Release gate
Release-candidate builds must pass all budgets. If not, investigate or revise budget with data.
Revising budget
Budgets are not immutable. Revise with data:
- If the feature truly requires the cost, and user research supports it
- If hardware / OS changes shift the baseline
- If A/B tests show performance-for-feature is acceptable to users
Do not revise because a PR missed the budget. That is not science; that is capitulation.
Prioritization
- Startup — first impression
- Jank on primary interaction screens — perceived quality
- Memory — stability on low-end
- Battery — retention driver
- Network — cellular users
Pick 2-3 top budgets. Enforce ruthlessly. Expand once stable.
Anti-patterns
Synthetic budgets only
Lab numbers satisfy budget; real users experience slower. Field data must validate.
Single-device budgets
"Fast on our Pixel 7" means nothing for Samsung A12 users.
No rollback on regression
Budget broken; ship anyway; compound debt.
Budget is a suggestion
No enforcement = no budget.
How SUSA measures
SUSA's performance monitor samples CPU, memory, FPS, and frame time during exploration. Per-screen aggregation flags:
- Slow cold start
- Janky screens (< 90% frames within budget)
- Memory growth (leak indicator)
- Battery drain
Reports include p50 / p95 / p99 per metric. Cross-session comparison shows regression.
susatest-agent test myapp.apk --persona impatient --steps 100
# results/perf.json has per-screen metrics
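Cross-session comparison on the per-screen output can be scripted. A sketch, assuming a hypothetical {"screens": {name: {metric: value}}} layout for perf.json (the real schema may differ):

```python
import json

def worse_screens(prev_path, curr_path, metric="frame_time_p95_ms"):
    """Flag screens whose metric degraded between two sessions.
    The perf.json layout assumed here is illustrative only."""
    prev = json.load(open(prev_path))["screens"]
    curr = json.load(open(curr_path))["screens"]
    worse = {}
    for screen, metrics in curr.items():
        before = prev.get(screen, {}).get(metric)
        if before is not None and metrics[metric] > before:
            worse[screen] = (before, metrics[metric])
    return worse
```

Running this between releases turns the report into a regression signal: a screen appearing in the output is a candidate for profiling.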
Continuous investment
Perf work is not one-time. Every quarter, dedicate time to:
- Revisit budgets
- Review production RUM data
- Profile new features added
- Fix regressions
Over a year of sustained effort, perf can improve by 30-50%. Without effort, it tends to regress by roughly 20% annually.
Performance is a team commitment. Budgets make it measurable. Enforce, revisit, improve.
Test Your App Autonomously
Upload your APK or URL. SUSA explores like 10 real users — finds bugs, accessibility violations, and security issues. No scripts.
Try SUSA Free