Key Takeaways
- Personal health metrics like VO2 max become truly actionable when you can ask questions about them and get AI-powered insights rather than just viewing raw numbers in an app.
- The hackathon solution eliminates two major friction points: complicated setup processes and the need for manual data exports, making continuous access to Apple Health data practical for everyday users.
- By connecting Apple Health data through an MCP server to an LLM client, users can have natural conversations with AI assistants about their workout performance, trends, and areas for improvement.
- Automation tools like n8n enable scheduled fitness summaries and coaching feedback delivered straight to your inbox, turning passive data collection into active motivation.
- The project is evolving toward a complete open-source ecosystem that will support multiple wearables, local LLM models for privacy, and custom reports for trainers, making personalized fitness insights accessible to millions of smartwatch users worldwide.
Apple Health is a great source of all kinds of metrics. But what do all of these metrics mean? With each new version of iOS and watchOS we get more and more insights, but the experience is still far from ideal. Let me give you an example from my personal life.
A while back, I was reading Outlive. One chapter made the biggest impression on me: it was about VO2 max, what this metric says about our health, and what its value means for our current and future capabilities. I really love hiking, so Peter Attia couldn't have reached me with a better message. He showed what different VO2 max values mean for fitness capabilities at different ages. The graph shows the decline of maximum oxygen uptake (VO2 max) with age for three fitness groups. You can clearly see that someone in high fitness condition (95th percentile) at age 62 has the same aerobic capacity as an average 25-year-old. Meanwhile, someone in poor condition (5th percentile) may struggle with simple activities like quickly climbing stairs as early as age 45.
*(Chart: decline of VO2 max with age for three fitness groups)*
So I was really curious: what's my current VO2 max? I was pretty confident about this value since I considered myself a pretty fit person. I ran quite a bit, played soccer, cycled, played squash, and walked a lot. So it couldn't be bad. But when I checked the values, it turned out my VO2 max didn't look that good at all.
I was below average. I looked at the table from the book again and realized I had to do something about it. I checked how I could increase this value. One way was interval running. I thought, why not? After all, I like running; I only needed to change the type of training.

So I added intervals to my running, and after a while I started seeing results. I was curious, though: were my running technique and interval training actually good? Besides progress in VO2 max, was I also making progress in my times? Could any of my habits (like getting relatively little sleep) be negatively affecting these values? I couldn't find answers to any of these questions directly in the app, so I started wondering: what if I used LLMs to analyze my data?
That's how the idea for the Apple Health MCP server was born; you can read more about it here.
After a while, it turned out this solution had two main problems:
- Difficult installation and setup. It wasn't a problem for me personally, since you only need to do it once, but many friends asked me for help, and every time it took forever.
- Data updates. The server relied on an export.xml file produced by manually exporting data from Apple Health. In the long run this was really annoying: to have current data, I had to spend about 10 minutes each time on the manual export and then transfer the file to my computer.
But finally, an opportunity came up to solve these problems. That opportunity was a hackathon.
What did we do during the hackathon?
The goal was simple: solve the two problems above. My personal ambition was to “free” my data from Apple Health, make it available through an API, and then expose it to LLMs (like Claude) via an MCP server. The stretch goal was to add automation that would, for example, send a weekly summary comparing the results to the previous week.
So our architecture looked roughly like this:
*(Diagram: the high-level architecture of the solution)*
Below you'll find a short description of each system component.
Mobile (export) app
The only way to get continuous access to Apple Health data is through integration with Apple HealthKit, so we needed an application. Our hackathon team didn't have a mobile developer, and there were only three of us, so we judged that building one might be too ambitious for a few hours. We therefore decided to use the Apple Health Export application, which can export data to external destinations, including an HTTP endpoint.

Backend
The first component we needed to create was the backend. From building the Apple Health MCP, we already had experience with what data is available in Apple Health and what data models we would need. However, we made the decision that at the hackathon we would only handle workout data, omitting health metrics. The demo was supposed to focus on summarizing workout data, providing interesting insights, and motivation to keep pushing.
We based the backend on FastAPI, using python-ai-kit, a boilerplate that significantly accelerates the initial steps of application development. Thanks to this, we could focus on the actual work:
- creating models
- exposing a webhook consuming data sent by the mobile application
- exposing an endpoint returning workout data
That was really quick. With the help of generative AI (Cursor), we built all of these parts pretty fast; however, it wouldn't have been possible without knowing how Apple Health data is structured, experience we had gained while building the Apple Health MCP Server.
The most important model was `Workout`:
```python
import uuid
from datetime import datetime
from decimal import Decimal

from sqlalchemy import DateTime, Numeric, String
from sqlalchemy.dialects.postgresql import UUID
from sqlalchemy.orm import Mapped, mapped_column, relationship

from app.db import Base  # project declarative base (path illustrative)

UUIDType = uuid.UUID  # alias used in the annotations below


class Workout(Base):
    __tablename__ = "workouts"

    id: Mapped[UUIDType] = mapped_column(UUID(as_uuid=True), primary_key=True)
    name: Mapped[str | None] = mapped_column(String(255))
    location: Mapped[str | None] = mapped_column(String(100))
    start: Mapped[datetime] = mapped_column(DateTime(timezone=True))
    end: Mapped[datetime] = mapped_column(DateTime(timezone=True))
    duration: Mapped[Decimal | None] = mapped_column(Numeric(15, 5))

    # aggregate metrics
    active_energy_burned_qty: Mapped[Decimal | None] = mapped_column(Numeric(15, 5))
    active_energy_burned_units: Mapped[str | None] = mapped_column(String(50))
    distance_qty: Mapped[Decimal | None] = mapped_column(Numeric(15, 5))
    distance_units: Mapped[str | None] = mapped_column(String(50))
    intensity_qty: Mapped[Decimal | None] = mapped_column(Numeric(15, 5))
    intensity_units: Mapped[str | None] = mapped_column(String(50))
    humidity_qty: Mapped[Decimal | None] = mapped_column(Numeric(10, 2))
    humidity_units: Mapped[str | None] = mapped_column(String(10))
    temperature_qty: Mapped[Decimal | None] = mapped_column(Numeric(10, 2))
    temperature_units: Mapped[str | None] = mapped_column(String(10))

    # relationships
    heart_rate_data: Mapped[list["HeartRateData"]] = relationship(
        back_populates="workout",
        cascade="all, delete-orphan",
        passive_deletes=True,
    )
    heart_rate_recovery: Mapped[list["HeartRateRecovery"]] = relationship(
        back_populates="workout",
        cascade="all, delete-orphan",
        passive_deletes=True,
    )
    active_energy: Mapped[list["ActiveEnergy"]] = relationship(
        back_populates="workout",
        cascade="all, delete-orphan",
        passive_deletes=True,
    )
```
This model captures comprehensive workout information, including:
- Temporal data: workout duration and start/end times for time-series analysis.
- Performance metrics: active energy burned, distance covered, workout intensity.
- Environmental conditions: temperature and humidity during exercise.
- Detailed heart rate insights through relationships (real-time HR data during workouts, post-workout recovery rates, and energy expenditure patterns).
- Location context: indoor vs. outdoor activities.
Looking at this carefully, you'll realize it needs improvements, for example:
- Lack of indexing on commonly queried fields (start/end dates, location, workout name)
- No validation for data ranges (e.g., temperature, humidity, duration must be positive)
- Workout type field is a string instead of an enum
- No source tracking to identify data origin (Apple Watch, iPhone, third-party apps)
However, due to hackathon time constraints, we prioritized getting a working prototype over data model polishing.
MCP Server
The next step was to create an MCP server. We again leveraged python-ai-kit, whose MCP scaffolding is built on top of the FastMCP framework and streamlines project creation.
Since FastAPI, which we were using in the backend, exposes an OpenAPI schema by default, we were able to ask Cursor to add two tools: `get_heart_rate` and `get_workouts`.
The logic was simple: the tools were essentially API wrappers, without much custom logic on top. However, it was a hackathon, and we simply needed a working version.
By basing the server on FastMCP, we could deploy via fastmcp.cloud. The only thing we had to do was connect the GitHub repository containing the MCP server, and that went very quickly.
fastmcp.cloud also provides tools for debugging MCP servers and the ability to view logs, which is very useful during development.
*(Screenshot: fastmcp.cloud)*
n8n automation
Since we still had some time left, we decided to add automation in n8n.
*(Diagram: the n8n workflow)*
It was pretty straightforward, which is why I love n8n. It's ideal for very fast prototyping thanks to its many ready-made integrations.
What you can see in the diagram (from the left):
- HTTP request node: calls our backend to fetch workout data. We used query params to filter the workouts to the latest month only.
- AI summary node: takes the data returned by the API and, using an OpenAI model, prepares a summary. We had quite a bit of fun crafting prompts and setting the tone of voice; some of the generated summaries weren't nice at all.
- E-mail node: sends the summary to your inbox, which is quite self-explanatory.
n8n lets you save the flow configuration to a .json file, so it's easy to experiment while keeping a stable version safe in the repository.
Claude
To test our MCP server, we used Claude. Since the server was available online, the configuration was very straightforward.
Claude → Settings → Connectors → Add custom connector → Set name and URL → Done.
Tip: If you're using an enterprise Claude subscription, the “Add custom connector” option may be unavailable to you. Custom extensions must be activated by the organization administrator. Then it will be possible to select it from the “Browse connectors” list.
Once added, we could check if Claude could see all the needed tools.

How did the solution work?
Demo time came faster than we expected. Like every team, we had three minutes to present our solution.
I started the presentation by asking Claude what it thought about my September activity. The summary was quite comprehensive and aligned with reality (I had to check in the Apple Health app because I didn't think I had worked out that many times in total).
*(Screenshot: Claude's summary of my September activity)*
I also asked it about the highlights.
*(Screenshot: Claude's answer about the highlights)*
Unfortunately, we didn't have time to construct questions that would sensibly use the data from the MCP tool returning heart rate data. If you look closely, not everything worked perfectly: the calorie count came back wrong for some reason. It's impossible that I burned 1,740 calories during a ~7 km run. But it was a demo, and nobody noticed.
If you read the introduction of this article, you may also notice that although I cared about analyzing my VO2 max, the summary doesn't mention it at all. That's right: as I mentioned when describing the backend, we decided to severely limit the scope of supported data due to limited time.
Next, we presented the result of our automation created in n8n. For the demo, we decided to choose a gentler version of the coach (though still a demanding one).
*(Screenshot: the coaching e-mail generated by the n8n workflow)*
As you can see, I have some things to improve.
Were we satisfied with our solution? Yes, we managed to ensure continuous data flow (which was previously a problem) and demonstrate that, thanks to the MCP server, our chosen LLM client has continuous access to the data. We were also very satisfied with how python-ai-kit, which we've been intensively developing recently, helped us set up both the backend and the MCP server.
The hackathon repository is available here: https://github.com/the-momentum/fit-happens-hackathon
You'll find all the above-described modules in it: backend, MCP server, and N8N workflow.
What's next?
The demo made quite a good impression (though we didn't win). However, for us, it was just the beginning. AI insights based on wearables data are something we've been thinking about for quite some time.
At Momentum, we have also collaborated with many companies that use wearables data in one way or another, which taught us a lot about it. We know that time-series data analysis isn't always simple and straightforward.
We plan to keep developing the solution we started at the hackathon. Our current roadmap includes:
- support for all data supported by Apple Health
- fixing all the known issues in the Apple Health MCP Server (issues)
- creating our own open-source application that will allow users to export data but also analyze it directly on their phone
- support for other wearables, such as Garmin, Fitbit, or Oura Ring
- a web application that will allow browsing data and conversing with it
- ability to share data with your trainer and generate custom reports for them
- support for local LLM models (we know how important privacy is in healthcare)
- model fine-tuning
- and many more
What’s more, we will build this ecosystem as an open-source solution, so we encourage you to follow our Momentum GitHub to stay informed.
The wearables market is huge. According to the latest data, there are over 450 million smartwatch users worldwide (as of the end of September 2025), and their number is growing rapidly. We believe that these users deserve the opportunity to leverage the latest technologies such as gen AI to receive personalized health and fitness insights.
We'd love for you to explore the Apple Health MCP repo, try it out with your own Apple Health exports, and share feedback or contributions. Open source works best when it's collaborative, and this is our invitation to build the future of healthtech with us: propose new features, contribute code, and help shape the direction of the project.