Why I'm rebuilding how I ship mobile — Brandon Miller
5 min read · ai · mobile · engineering

Why I'm rebuilding how I ship mobile

Mobile has been slower to absorb AI coding agents than web. The interesting engineering is in the scaffolding around the agent, not the models.

Mobile has a problem with AI coding agents. Not a fundamental one — the models are capable enough. The problem is structural: the feedback loops are slower, the platform idioms are denser, and the runtime state an agent needs to reason about is largely invisible to it.

Web got there first. A browser dev server reloads in a second. The DOM is inspectable. Errors surface cleanly in the console. An agent writing React code can get tight signal on whether something worked or didn't. The loop is fast enough that agents can be genuinely useful without much scaffolding.

Mobile isn't there yet. Gradle builds take real time. Emulator boot is a pause. Runtime state — what's in the backstack, what the view tree looks like, why that fragment isn't rendering — is trapped behind platform tooling the agent can't reach. The feedback the agent gets is coarser, slower, and harder to parse.
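To make that concrete: the backstack state described above does exist in `adb shell dumpsys activity activities` output, it's just not in a form an agent can use. A minimal sketch of surfacing it — note the dumpsys format is undocumented and shifts across Android versions, so the regex here is illustrative, not robust:

```python
import re

# Parse `adb shell dumpsys activity activities` output into a backstack
# list an agent can read. Matches lines like:
#   Hist #0: ActivityRecord{1a2b3c u0 com.example/.MainActivity t42}
# The exact line shape varies by Android version; treat this as a sketch.
HIST_RE = re.compile(r"Hist\s+#(\d+):\s+ActivityRecord\{\S+\s+\S+\s+(\S+)")

def parse_backstack(dumpsys_text: str) -> list[str]:
    """Return activity component names in the order dumpsys lists them."""
    entries = []
    for line in dumpsys_text.splitlines():
        m = HIST_RE.search(line)
        if m:
            entries.append(m.group(2))
    return entries

sample = """
    Hist #1: ActivityRecord{1a2b3c u0 com.example/.DetailActivity t42}
    Hist #0: ActivityRecord{4d5e6f u0 com.example/.MainActivity t42}
"""
print(parse_backstack(sample))
# → ['com.example/.DetailActivity', 'com.example/.MainActivity']
```

In a real loop you'd pipe the live dumpsys output through this and hand the result to the agent as plain text, instead of asking it to guess what the navigation state is.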

The scaffolding problem

This is why I've shifted my attention to scaffolding. The model quality is a given at this point. What isn't a given is whether the agent has what it needs to produce correct mobile code — and specifically, whether it can verify its own output.

The gap I keep running into: agents write confident Android code that compiles but fails at runtime in ways that require a running device to observe. Compose layout bugs. Navigation state corruption. ExoPlayer configuration drift. The agent produces plausible-looking code, the build succeeds, and then something breaks in a way that only shows up when you run it.

The fix isn't better prompting. It's making the runtime legible to the agent: structured build output it can actually parse, telemetry that surfaces device state back into the loop, test harnesses that produce signal the agent can act on.
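Here's a sketch of what "structured build output" can mean in practice: folding raw Kotlin compiler errors into records the agent can act on. The `e: file:line:col message` shape matches recent kotlinc output, but the format varies by toolchain version (older compilers emit `e: file: (line, col): message`), so the single regex is an assumption:

```python
import re
import json

# Convert raw kotlinc/Gradle stderr into structured error records.
# Matches the newer `e: path:line:col message` shape; a real pipeline
# would also need the older `e: path: (line, col): message` pattern.
ERROR_RE = re.compile(r"^e:\s+(?:file://)?(.+?):(\d+):(\d+)\s+(.+)$")

def parse_kotlin_errors(build_log: str) -> list[dict]:
    """Return one dict per compiler error, ready to feed back to an agent."""
    errors = []
    for line in build_log.splitlines():
        m = ERROR_RE.match(line.strip())
        if m:
            errors.append({
                "file": m.group(1),
                "line": int(m.group(2)),
                "col": int(m.group(3)),
                "message": m.group(4),
            })
    return errors

log = "e: /app/src/main/MainActivity.kt:42:13 Unresolved reference: binding"
print(json.dumps(parse_kotlin_errors(log), indent=2))
```

The point isn't the parser — it's that a JSON record with a file, a position, and a message is something an agent can route on, while a 4,000-line Gradle log is not.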

What the engineer does now

The role shift is real. I'm typing less code. I'm spending more time on pipeline design — what does a good build artifact look like for an agent, which failures should route back for another pass, how do I structure prompt libraries per platform so the agent isn't reinventing Compose idioms from scratch on every run.
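The "which failures should route back" question can be sketched as a routing table plus a retry budget. The categories and the return values here are hypothetical placeholders for whatever harness you actually wire up:

```python
# Hypothetical failure-routing sketch: decide whether a failure goes back
# to the agent for another pass or escalates to a human. The categories
# and routing choices are illustrative, not a prescription.
ROUTES = {
    "compile_error": "agent",   # structured, parseable: agent can retry
    "test_failure": "agent",    # failing assertion text is good signal
    "runtime_crash": "agent",   # device stack trace feeds back into the loop
    "flaky_infra": "human",     # emulator/CI problems waste agent passes
}

def route_failure(kind: str, max_passes: int, passes_used: int) -> str:
    """Return 'agent' or 'human' based on failure kind and retry budget."""
    if passes_used >= max_passes:
        return "human"  # out of budget: stop burning passes, escalate
    return ROUTES.get(kind, "human")  # unknown failure kinds escalate too

print(route_failure("compile_error", max_passes=3, passes_used=1))  # agent
print(route_failure("compile_error", max_passes=3, passes_used=3))  # human
```

The design choice worth noting: everything unknown routes to a human. An agent looping on a failure it can't observe is the expensive failure mode, so the default is to escalate.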

The first app I shipped through this pipeline went from idea to App Store review in a single day. That's not a one-off experiment I'm trying to repeat. It's a practice I'm building out systematically, with the assumption that this is how mobile apps get made going forward — at least for engineers who want to stay near the top of the productivity curve.

Why native

Cross-platform frameworks are a tempting shortcut when you're doing agent-assisted development. One codebase, less surface area, maybe simpler for an agent to reason about.

I keep building native anyway. The hard problems in mobile — media pipelines, real-time state, platform lifecycle, camera and sensor integration — don't collapse well into cross-platform abstractions. They show up as edge cases, performance ceilings, and debugging nightmares. Fifteen years of Android platform work means my instincts live at the native layer. That's where I can actually verify what the agent is doing.

The commitment

I'm shipping apps through this pipeline and writing up what breaks. Not theoretical posts about what AI coding agents could do — actual notes from running the thing, with specific problems and specific fixes.

The cadence will be steady. The posts will be technical. This site is the documentation for that work.

If you're building mobile coding agents, or working on the tooling layer around them, I'd like to talk. The gaps are specific, the problems are tractable, and there's real work to do.