Performance Testing for Mobile Apps (Practical 2026 Guide)

May 23, 2026 · 3 min read · Testing Guides

Performance is the difference between an app users keep and an app they uninstall. This guide covers the four dimensions that matter — startup, scrolling, memory, battery — plus the tools and methodology for measuring them reliably.

The four dimensions

1. Startup

Cold start from icon tap to first interactive frame. Target: under 1 second on a mid-tier 2023 device. Above 2 seconds, abandonment climbs sharply.

2. Frame rendering (jank)

60fps allows 16.67ms per frame. A screen counts as janky when more than 5% of frames exceed that budget. Jank shows up as stutters during scrolling, animations, and transitions.
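As a concrete check, the janky-frame share can be computed directly from a list of frame times; a minimal sketch (the function name and sample values are illustrative):

```python
# Classify frames against the 60fps budget and compute the janky-frame
# percentage described above. Frame times are in milliseconds.
FRAME_BUDGET_MS = 1000 / 60  # ~16.67ms per frame at 60fps

def jank_percentage(frame_times_ms):
    """Return the share of frames that exceeded the budget, in percent."""
    if not frame_times_ms:
        return 0.0
    janky = sum(1 for t in frame_times_ms if t > FRAME_BUDGET_MS)
    return 100.0 * janky / len(frame_times_ms)

frames = [12.1, 15.0, 33.4, 14.2, 16.0, 48.9, 13.3, 15.5, 14.8, 16.1]
print(f"{jank_percentage(frames):.1f}% janky")  # 2 of 10 over budget -> 20.0%
```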

3. Memory

Track working set, peak usage, and leak rate. Exceeding device thresholds triggers OOM kills: low-end phones kill apps around 400-600MB, flagships around 1-2GB.
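Leak rate can be estimated from periodic memory samples by fitting a least-squares slope; a minimal sketch (sample numbers are made up):

```python
# Estimate leak rate (MB per minute) from periodic working-set samples
# via a least-squares slope over the session.
def leak_rate_mb_per_min(samples):
    """samples: list of (minutes_elapsed, working_set_mb). Returns the slope."""
    n = len(samples)
    mean_t = sum(t for t, _ in samples) / n
    mean_m = sum(m for _, m in samples) / n
    num = sum((t - mean_t) * (m - mean_m) for t, m in samples)
    den = sum((t - mean_t) ** 2 for t, _ in samples)
    return num / den

session = [(0, 180), (5, 195), (10, 210), (15, 225), (20, 240)]
print(f"{leak_rate_mb_per_min(session):.1f} MB/min")  # steady climb -> 3.0
```

A flat slope near zero suggests no leak; a persistent positive slope over a long session is the signal worth chasing with LeakCanary or Instruments.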

4. Battery / power

Sustained drain per hour of typical foreground use. 3%/hour is the upper end of acceptable; 1-2% is good; below 1% is excellent.

Measurement

Startup


```shell
# Android
adb shell am start -W com.example/.MainActivity
# Look for TotalTime
```

Instrument with Jetpack Macrobenchmark for reliable p50/p95 across multiple runs.
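When Macrobenchmark is overkill, the `am start -W` output can be aggregated by hand; a minimal sketch (the nearest-rank percentile helper and sample timings are illustrative):

```python
# Parse TotalTime from `adb shell am start -W` output and summarize
# several cold-start runs as p50/p95.
import re

def parse_total_time(am_output: str) -> int:
    """Extract TotalTime (ms) from the `am start -W` output shown above."""
    match = re.search(r"TotalTime:\s*(\d+)", am_output)
    if not match:
        raise ValueError("no TotalTime in output")
    return int(match.group(1))

def percentile(values, pct):
    """Naive nearest-rank percentile over a small sample."""
    ordered = sorted(values)
    idx = min(len(ordered) - 1, round(pct / 100 * (len(ordered) - 1)))
    return ordered[idx]

sample = "Status: ok\nActivity: com.example/.MainActivity\nTotalTime: 812\n"
runs = [812, 790, 845, 1190, 801]  # ms, collected across 5 cold starts
print(parse_total_time(sample), percentile(runs, 50), percentile(runs, 95))
```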

Frame rendering


```shell
adb shell dumpsys gfxinfo com.example framestats
# Per-frame timing data: nanosecond timestamps for each pipeline stage,
# covering both CPU and GPU work
```
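The framestats CSV can be reduced to per-frame durations; a sketch assuming the documented column layout (INTENDED_VSYNC in column 1, FRAME_COMPLETED in column 13, both in nanoseconds; the sample row is fabricated):

```python
# Turn framestats CSV rows into per-frame durations in milliseconds.
def frame_durations_ms(csv_rows):
    """csv_rows: iterable of comma-separated framestats lines."""
    durations = []
    for row in csv_rows:
        cols = row.split(",")
        intended_vsync_ns = int(cols[1])    # INTENDED_VSYNC column
        frame_completed_ns = int(cols[13])  # FRAME_COMPLETED column
        durations.append((frame_completed_ns - intended_vsync_ns) / 1e6)
    return durations

row = "0,100000000,0,0,0,0,0,0,0,0,0,0,0,112000000"
print(frame_durations_ms([row]))  # one 12.0ms frame, under budget
```

Feed the result into a jank-percentage calculation to get the screen-level metric.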

Android Profiler → CPU tab → render profiling.

iOS: Instruments → Core Animation template.

Memory


```shell
adb shell dumpsys meminfo com.example
```

Android Profiler memory graph over a session.

LeakCanary for leak detection.

Instruments → Allocations / Leaks.

Battery

Android Battery Historian — capture a bugreport and process it through the Historian web tool.

Xcode Organizer → Energy.

For field data, telemetry logging battery level at session boundaries.
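The session-boundary telemetry reduces to a simple rate; a sketch (the 40-minute session below is illustrative):

```python
# Derive %/hour foreground drain from battery level logged at session
# start and end, per the telemetry approach above.
def drain_per_hour(start_pct, end_pct, duration_minutes):
    """Battery percent consumed per hour of foreground use."""
    if duration_minutes <= 0:
        raise ValueError("session too short to measure")
    return (start_pct - end_pct) * 60 / duration_minutes

# A 40-minute session dropping from 82% to 80% is 3%/hour: the upper
# end of acceptable per the thresholds above.
print(f"{drain_per_hour(82, 80, 40):.1f} %/hour")
```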

Common perf bugs

Startup

Synchronous I/O or network calls on the main thread; eager initialization of every SDK in Application.onCreate; oversized resources decoded before the first frame.

Jank

Bitmap decoding or layout work on the main thread; deep view hierarchies; unbatched list updates forcing full rebinds.

Memory

Activities or contexts leaked through static references and unremoved listeners; unbounded in-memory caches; bitmap churn.

Battery

Wake locks held too long; polling where push would do; unthrottled GPS and sensor use; chatty background sync.

Benchmarking

Jetpack Macrobenchmark (Android):


```kotlin
@get:Rule
val benchmarkRule = MacrobenchmarkRule()

@Test
fun scroll() = benchmarkRule.measureRepeated(
    packageName = "com.example",
    metrics = listOf(FrameTimingMetric()),
    iterations = 5,
    startupMode = StartupMode.WARM
) {
    startActivityAndWait()
    device.findObject(By.res("com.example", "main_list")).fling(Direction.DOWN)
}
```

Runs in CI, fails the build if p95 frame time regresses beyond threshold.

Performance budgets

Define per-screen budgets, drawn from the targets above:

  - Cold start: under 1 second on a mid-tier device
  - Janky frames: under 5%
  - Peak memory: under the low-end OOM threshold (~400MB)
  - Battery: under 3%/hour of foreground use

Budget violated → feature blocked until it fits.
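A budget gate in CI can be as simple as a dict of limits; a sketch reusing this guide's targets (metric names are an assumption; measured values would come from your benchmark run):

```python
# CI budget gate: compare measured numbers against per-screen limits and
# report violations. Limits restate this guide's targets.
BUDGETS = {
    "cold_start_ms": 1000,    # under 1s on a mid-tier device
    "janky_frames_pct": 5.0,  # under 5% frames over the 16.67ms budget
    "peak_memory_mb": 400,    # stay under low-end OOM thresholds
}

def check_budgets(measured):
    """Return the list of (metric, measured, budget) violations."""
    return [(name, measured[name], limit)
            for name, limit in BUDGETS.items()
            if measured.get(name, 0) > limit]

violations = check_budgets(
    {"cold_start_ms": 870, "janky_frames_pct": 7.2, "peak_memory_mb": 390})
print(violations)  # [('janky_frames_pct', 7.2, 5.0)]
```

A non-empty list fails the build, which is what "feature blocked" means in practice.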

How SUSA measures

The performance monitor samples every 5 seconds during exploration, aggregates the results per screen, and flags screens that exceed their budgets:


```shell
susatest-agent test myapp.apk --persona impatient --steps 100
```

The impatient persona abandons slow screens, directly revealing which ones feel slow to users.

Field vs lab

Lab numbers are directional; real users hit the long tail. Instrument production (Firebase Performance Monitoring, New Relic Mobile, Sentry Performance) and compare lab-reproduced numbers to field-observed ones. When the field looks worse than the lab, real users are running older devices on worse networks, or longer sessions are exposing leaks the lab never sees.
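The lab-vs-field gap can be quantified as a ratio of p95s; a sketch with made-up field samples:

```python
# Compare a lab p95 to the field distribution to quantify the gap:
# a ratio above 1 means real users see worse numbers than the lab.
def field_gap(lab_p95_ms, field_samples_ms):
    """Ratio of field p95 (nearest-rank) to lab p95."""
    ordered = sorted(field_samples_ms)
    idx = min(len(ordered) - 1, round(0.95 * (len(ordered) - 1)))
    return ordered[idx] / lab_p95_ms

field = [900, 950, 1010, 1100, 1250, 1400, 1800, 2400, 2600, 3100]
print(f"field p95 is {field_gap(1200, field):.1f}x lab p95")
```

A ratio that keeps growing over release cycles is a stronger signal than any single lab run.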

Order of investment

  1. Fix startup — highest user-visible impact
  2. Fix obvious leaks — catches OOM crashes
  3. Fix worst janky screens — biggest perceived quality win
  4. Battery optimization — long tail but high-impact for retention
  5. Network efficiency — users on metered data plans benefit disproportionately

Performance is tractable. Most apps can improve every dimension by 30-50% in a focused month. The ROI is retention and ratings, both of which compound.

Test Your App Autonomously

Upload your APK or URL. SUSA explores like 10 real users — finds bugs, accessibility violations, and security issues. No scripts.

Try SUSA Free