Why We Built FoxFit
How we used AI-assisted development to build a commercial iOS app, and why we went Apple-only.
We wanted to build a fitness app that didn’t try to manipulate people. No guilt trips, no streaks designed to punish you for having a life, and no selling your data to advertisers. We just wanted a solid workout tracker that gets out of your way.
FoxFit is that app. It runs on iPhone, iPad, Apple Watch, and Mac. It syncs workouts across devices, integrates with HealthKit, and includes over 150 exercises with video guides and muscle diagrams. It’s on the App Store now!
This post explains why we built it the way we did.
iOS Only
FoxFit is Apple-only. Leaving Android out was deliberate.
iOS users consistently spend more on app subscriptions than Android users; most industry data puts the gap at around 2–3x per user. For subscription revenue specifically, Apple’s App Store accounts for a disproportionate share despite having fewer total users. For a small team building a subscription-based fitness app, iOS-only is the rational choice. We built FoxFit because we genuinely believe it has value, but we also need to address a market that can sustain it.
We’re trading volume for value. A smaller addressable market, but one that actually supports the ongoing development costs.
AI-Assisted Development
Claude Code wrote most of the code. We made the product decisions, defined the requirements, reviewed the output, and handled testing and deployment.
This might sound reckless to some people, but in our view it wasn’t. We built a process around it that makes the output reliable:

- Every feature starts with a Product Requirements Document before any code gets written.
- A persistent context file carries architecture decisions and coding patterns across sessions.
- Implementation prompts give the AI specific roles to adopt (swift-expert, code-simplifier, debugger, and so on) so it approaches problems from multiple angles.
- A 68-prompt review system catches issues across 12 phases.
- 356 tests verify that everything works.
- And we review the output ourselves to make sure we’re getting what we want.
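To make that testing layer a little more concrete, here’s a minimal sketch of the kind of unit test we mean. The `Workout` type and its properties are invented for illustration only; the real models and the full testing strategy are covered later in the series.

```swift
import Foundation
import XCTest

// Hypothetical model, invented for this example; FoxFit's real types differ.
struct Workout {
    let exercises: [String]
    let durations: [TimeInterval]   // seconds spent on each exercise

    /// Total time spent across every exercise in the workout.
    var totalDuration: TimeInterval {
        durations.reduce(0, +)
    }
}

final class WorkoutTests: XCTestCase {
    func testTotalDurationSumsAllExercises() {
        let workout = Workout(exercises: ["Squat", "Bench Press"],
                              durations: [600, 450])
        // AI-written code only ships after assertions like this pass.
        XCTAssertEqual(workout.totalDuration, 1050)
    }

    func testEmptyWorkoutHasZeroDuration() {
        let workout = Workout(exercises: [], durations: [])
        XCTAssertEqual(workout.totalDuration, 0)
    }
}
```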
The result: tasks that would have taken weeks of learning took hours, bugs got diagnosed in minutes, and patterns we didn’t know existed got applied correctly. The things the AI got wrong (because it gets things wrong ALL THE TIME) got caught by the review and testing layers we built around it.
The rest of this series will explain exactly how each part of that process works.
What’s Coming
Over the next few weeks, we’ll cover:
- How we give the AI persistent context (CLAUDE.md)
- Our PRD and implementation prompt workflow
- How we built a curated exercise database from scratch
- The code review system
- Testing strategy for AI-written code
- What went wrong and how we fixed it
None of this is AI hype, and we’re not bigging ourselves up. It’s just an explanation of what actually happened.