How I Automated Apple Search Ads Bid Optimization With Claude Code
The full walkthrough of my ASA automation setup: the keyword architecture underneath it, the bid rules, the Claude Code stack, and why most ASA 'automation tools' miss the point.
A few weeks ago I wrote a LinkedIn post about using Claude Code to run my Apple Search Ads operations. The response was bigger than I expected. A lot of people reached out asking how it actually works, what the setup looks like, and whether they can copy it.
So here’s the full walkthrough.
This is not a setup tutorial. Claude Code’s own docs cover the install, and the tools I connect (Apple Ads API, Adjust API, Astro MCP, Figma MCP) each have their own setup pages. What I want to show is what happens once the plumbing is in place, and the piece that actually makes this work. That piece is not the automation itself. It’s the keyword architecture underneath it.
The old growth workflow is broken
Here’s what a standard ASA optimization round used to look like for me.
Pull keyword performance from the Apple Ads dashboard. Export to a CSV. Pull trial events from the MMP dashboard (the mobile measurement partner, Adjust in my case). Export to another CSV. Join them manually in a spreadsheet, because the campaign IDs don’t line up. Pull search term reports from a third dashboard. Notice a keyword with a weird CPA. Open Astro in a fourth tab to see if rankings moved. Open Figma in a fifth tab to check which CPP (custom product page) that keyword is pointing to. Scroll through four weeks of Slack to find out when the last bid change was made and why.
An hour in, I haven’t made a single bid decision yet. I’ve just assembled the data.
This is the reality of growth work across every account I’ve run. The signal is there. The tooling is not. Every dashboard is a silo, every export format is different, and the context that ties them together lives in your head and a messy Notion page.
That’s the workflow I replaced.
The foundation: the 3D keyword framework
Before I touch bid automation, I need to talk about architecture. Because here’s the thing most people miss. You cannot automate bid optimization on a messy account. If your campaigns are a pile of keywords grouped by “Generic vs Brand vs Competitor,” you don’t have enough structure for rules to work cleanly.
I use a framework I call the 3D keyword framework. Every keyword gets three tags.
Type. Brand, Generic, or Competitor. Non-negotiable. Every keyword is exactly one of these.
Relevancy. NorthStar, Extremely Relevant, Relevant, Somewhat Relevant, or Low Relevance. This measures how closely the keyword matches what the app actually does.
Segment. An app-specific functional category. Not a generic intent label. The segment reflects how users think about the problem your app solves. A photo-location app might have Photo_Spots_Core, Scenic_Views, Hidden_Discovery as segments. A fitness app might have Workout_Routines, Nutrition_Tracking, Progress_Monitoring.
Those three dimensions map to a 5-Group matrix.
| Group | What’s In It | Treatment |
|---|---|---|
| Group 1 | Brand + NorthStar keywords | Dedicated exact campaigns, individual CPPs, defend visibility even at lower ROAS |
| Group 2 | Extremely Relevant + Competitor | Dedicated exact campaigns, performance bidding, shared CPPs within segments |
| Group 3 | Relevant | Segment-level campaigns, segment-level bid strategy, shared CPPs |
| Group 4 | Somewhat Relevant | ASO metadata only, no ASA |
| Group 5 | Low Relevance | Monitoring only, no metadata, no ASA |
The account I reference throughout this post is built on this framework: segment-based campaigns (not the lazy three-bucket split), individual exact campaigns for Groups 1 and 2, a Discovery campaign using broad match with exact negatives to catch variants the exact campaigns miss, and a Probation campaign for promoted search terms before they graduate to permanent exact placement.
This matters because every automation rule I’m about to describe depends on this structure being in place. Group 1 and 2 keywords each get their own exact-match campaigns, with dedicated CPPs on Group 1 and shared CPPs within segments on Group 2. Group 3 keywords live in segment-level campaigns with shared CPPs. Groups 4 and 5 are not in ASA at all; they are ASO-only or monitoring-only. That filtering upstream is what lets the bid rules stay simple downstream. If your account is flat, the rules have nothing to key off.
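If it helps to see the matrix as code, here’s a minimal sketch. This is one reading of the matrix, not my production schema; the class and field names are illustrative.

```python
from dataclasses import dataclass

@dataclass
class Keyword:
    text: str
    kw_type: str    # "Brand" | "Generic" | "Competitor"
    relevancy: str  # "NorthStar" | "Extremely Relevant" | "Relevant"
                    # | "Somewhat Relevant" | "Low Relevance"
    segment: str    # app-specific, e.g. "Photo_Spots_Core"

def group(kw: Keyword) -> int:
    """Map a tagged keyword to its 5-Group treatment.

    Precedence follows the matrix top-down: a keyword matching
    two rows takes the higher (more protected) group.
    """
    if kw.kw_type == "Brand" or kw.relevancy == "NorthStar":
        return 1  # dedicated exact campaign, individual CPP
    if kw.kw_type == "Competitor" or kw.relevancy == "Extremely Relevant":
        return 2  # dedicated exact campaign, shared CPP within segment
    if kw.relevancy == "Relevant":
        return 3  # segment-level campaign, shared CPP
    if kw.relevancy == "Somewhat Relevant":
        return 4  # ASO metadata only, no ASA
    return 5      # monitoring only, no metadata, no ASA
```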
Architecture first, automation second. Skip this step and you’re automating chaos.
The bid optimization logic
Now the actual automation.
North star metric: Cost Per Trial (CPTrial). Not CPI. Not cost per install. Calling installs a conversion in 2026 is a framing problem that hides most bad UA decisions. The app I’m referencing runs a free trial model, so CPTrial is the earliest real signal of purchase intent. For apps with a different monetization path, this would be CPSub, CPPurchase, or whatever paid event actually means something in your funnel. The point is it sits downstream of install.
Target CPTrial is set per app. For this account, it’s $40 on one app and $50 on another. Targets come from LTV modeling, not from thin air.
Attribution window: 7 days. I pull trial events from Adjust, filtered to installs attributed in the last 7 days. This matters because Adjust’s default attribution source is dynamic, which re-attributes later events and inflates trial counts by roughly 15% in my data. I use attribution_source=first instead, which locks attribution to the original install source. If your CPTrial looks suspiciously low, check this setting first.
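For reference, the trial pull looks roughly like this. A sketch, not production code: the Report Service endpoint and parameter names are from memory and should be checked against Adjust’s current docs. The setting that matters is attribution_source.

```python
import requests

def pull_trials(api_token: str, app_token: str, event_slug: str) -> dict:
    """Pull last-7-day trial events from Adjust's report service.

    Endpoint and parameter names are approximate -- verify against
    Adjust's docs. attribution_source=first is the non-negotiable bit:
    it locks events to the original install source instead of letting
    Adjust re-attribute them later.
    """
    resp = requests.get(
        "https://automate.adjust.com/reports-service/report",
        headers={"Authorization": f"Bearer {api_token}"},
        params={
            "app_token__in": app_token,
            "date_period": "-7d:-1d",           # last 7 full days
            "dimensions": "campaign,adgroup",
            "metrics": f"{event_slug}_events",  # your trial event slug
            "attribution_source": "first",      # lock to install source
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()
```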
Optimization window: 7 days minimum. I originally ran this every 3 days and had to change it. Shorter windows produce too few trials per keyword per window, which means the CPTrial calculation is noisy. A keyword with 2 trials over 3 days reads very differently than the same keyword with 5 trials over 7 days. I now enforce a 7-day minimum.
The rules themselves are simple. For every keyword with at least $20 of spend in the window:
| Condition | Action |
|---|---|
| CPTrial at or below target | Increase bid 20% |
| CPTrial above target | Decrease bid 20% |
| Spend at or above $100 with zero trials | Hard decrease bid 40% |
These rules apply the same way across Groups 1, 2, and 3. The Group does not change the bid logic. What the Group changes is what’s around the keyword: whether it gets a dedicated CPP or a shared one, whether it lives in its own exact campaign or a segment campaign, where it sits in the metadata. By the time a keyword reaches the bid optimizer, the architectural decisions have already been made. The rules only have to answer one question: is this keyword hitting its CPTrial target or not.
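As code, the whole bid engine reduces to one function. A minimal sketch; the only judgment call not in the table is precedence, where the zero-trial hard decrease fires before the regular decrease (otherwise zero trials would read as an infinite CPTrial and only trigger the -20%).

```python
def next_bid(current_bid: float, spend: float, trials: int,
             target_cptrial: float) -> float | None:
    """Apply the bid rules to one keyword over a 7-day window.

    Returns the new bid, or None if the keyword hasn't cleared
    the $20 spend floor and should be left alone this round.
    """
    if spend < 20:
        return None                            # not enough signal yet
    if spend >= 100 and trials == 0:
        return round(current_bid * 0.60, 2)    # hard decrease 40%
    cptrial = spend / trials if trials else float("inf")
    if cptrial <= target_cptrial:
        return round(current_bid * 1.20, 2)    # increase 20%
    return round(current_bid * 0.80, 2)        # decrease 20%
```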
Broad match keywords in the Discovery campaign get the same treatment on their keyword-level bids. Discovery also needs search term handling, which is where most accounts bleed money.
Search term mining for negatives. Discovery campaign search terms with TTR under 2% and either $10+ spend or 500+ impressions get added as exact negatives in Discovery. TTR is Taps / Impressions. A search term with lots of impressions and almost no taps is irrelevant to users searching that term. Apple is showing the ad, nobody is clicking, budget is being wasted on impressions that do not convert to anything.
Search term mining for promotion. Discovery search terms with TTR above 4% and $30+ spend are candidates for promotion to the Probation campaign as exact keywords. Above $50 spend, they are high-confidence candidates. These are search terms users are clicking on, that we do not have an exact keyword for yet. They graduate to Probation first, get observed for a window, and then move to permanent exact placement in the right segment campaign once the performance holds.
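Both mining rules fit in one classifier. A sketch with the thresholds from above; the None branch is the redaction case explained next.

```python
def classify_search_term(term: str | None, impressions: int,
                         taps: int, spend: float) -> str:
    """Sort one Discovery search term into negative / promote / hold."""
    if term is None:
        return "redacted"            # Apple privacy threshold, see below
    ttr = taps / impressions if impressions else 0.0
    if ttr < 0.02 and (spend >= 10 or impressions >= 500):
        return "add_exact_negative"  # paid impressions, no clicks
    if ttr > 0.04 and spend >= 50:
        return "promote_high_confidence"
    if ttr > 0.04 and spend >= 30:
        return "promote_candidate"   # send to the Probation campaign
    return "hold"
```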
Apple’s redaction caveat. Apple hides low-volume search terms: they come back as null in the API for privacy reasons, grouped under the broad keyword they matched. The script can only analyze unredacted terms, which in my experience is roughly 40% of total Discovery spend. The other 60% has to be optimized at the broad keyword level via the regular bid rules. This is a platform limitation, not a methodology gap. I mention it because every “ASA automation tool” I’ve tested fails to acknowledge it.
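It’s worth measuring that share on your own account before trusting any search term analysis. A minimal sketch, assuming report rows with a spend and a nullable search_term field:

```python
def redacted_spend_share(rows: list[dict]) -> float:
    """Fraction of Discovery spend hidden behind Apple's privacy
    threshold -- the spend you can only optimize at the broad
    keyword level."""
    total = sum(r["spend"] for r in rows)
    hidden = sum(r["spend"] for r in rows if r["search_term"] is None)
    return hidden / total if total else 0.0
```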
That’s the entire rule set. Seven bullet points you could write on a napkin.
What Claude Code actually changes
Here’s where it gets interesting.
I have a Python script that implements everything above. Around 1,000 lines. It runs weekly. It pulls the data, applies the rules, outputs a bid actions file, a negatives file, a promote file, a keyword gap report, and a human-readable summary. A standalone cron job could run this.
That’s not what I use Claude Code for.
Claude Code is the conversational layer on top of the script, and it’s what I use between runs. Here’s what a typical week looks like.
Monday morning, I run the bid optimization script. Ten minutes later I have seven output files across two apps. I open Claude Code and ask it to read the summary files and flag anything unusual. It pulls them, reads the deltas, and tells me what shifted week over week. Maybe one segment’s CPTrial jumped 30% and another’s stayed flat.
I ask why. Claude Code pulls the keyword-level data from the Apple Ads API directly (no MCP needed, just a Python call), checks if specific keywords drove the segment shift, pulls the search term report for that campaign, and flags which search terms carried the volume. If the issue is a specific keyword, it checks the ranking in Astro via its MCP to see if organic position moved in a way that might be affecting paid performance. If the issue is a specific CPP, it pulls the Figma frame via its MCP and I can see whether a recent design change correlated with the shift.
All of that happens in one conversation. No tab-switching, no CSV exports, no joining data in spreadsheets, no “let me open another window.”
The script does the mechanical work. Claude Code does the investigation.
That’s the split that matters. Scripts are for deterministic tasks with clear inputs and outputs. Bid rules meet that bar. Conversational agents are for everything non-deterministic, which is roughly 70% of growth work. Why did this move. What’s correlated with it. Should I change the strategy. Is the target still right.
Running both together is the unlock.
What a round actually looks like
Here’s the output from a recent run, anonymized. Two apps, both sharing the same Apple Search Ads org, different event schemas, different CPTrial targets.
Both apps came in under their CPTrial targets for the week. The script generated dozens of keyword bid moves across them (a mix of +20% increases, -20% decreases, and a couple of -40% hard decreases on high-spend zero-trial keywords) and surfaced a handful of search term actions in Discovery: new exact negatives on wasted-impression terms, promote-to-exact candidates on high-TTR terms with enough spend.
Easy week on the bid logic.
Flagged for investigation: App B’s Install-to-Trial rate had held at 10-13% for four straight weeks, against a documented 22-28% target. That’s the kind of finding the script surfaces, and Claude Code investigates. In this case, it turned into a deeper conversation about whether the target itself needed recalibration, or whether trial attribution was broken upstream in the MMP setup.
This is what I mean by “bid automation is not the story.” The script told me CPTrial was under target on both apps. It would have been very easy to stop there. What actually mattered was the sustained I-to-T anomaly on App B, which the script flagged but could not explain. Claude Code is the layer that turns flags into answers.
What this unlocks
A few things change when your workflow looks like this.
Time. The weekly round used to take me 2 to 3 hours. It now takes 20 minutes if nothing’s weird, 60 minutes if something is.
Signal quality. Because I run on 7-day windows against a real trial metric rather than CPI, the bid moves are based on data that actually correlates with revenue. Noisy metrics produce noisy rules, which produce mediocre accounts.
Investigation on demand. When something shifts mid-week, I do not need to wait for the next full round. I pull the specific data I need through conversation, make a call, move on.
Portability. The framework scales. The same architecture works for any ASA account. The same rules work for any trial-based app. The same Claude Code stack works once the API keys are in your keychain and the MCPs are connected (for the tools that have them). Adding a new account is mostly a configuration problem, not a rebuild.
Honest positioning. This is the piece I care about for client work. When I tell someone “I’ve automated most of my ASA ops,” I can show them the methodology, the rules, the outputs, and the tooling. It’s not a black box. It’s a system they could operate themselves if they wanted to learn it.
What I’d do if I were starting from scratch
If you’re running ASA at meaningful scale and want to build something like this, the order matters.
- Fix the architecture first. Tag every keyword with Type, Relevancy, and Segment. Map them to the 5-Group matrix. Rebuild the campaigns so Group 1 and 2 have dedicated exact campaigns, Group 3 runs at segment level, and Discovery uses broad match with exact negatives covering everything already served. This is the part that takes the longest and returns the most.
- Pick a real north star metric. CPTrial, CPSub, CPPurchase. Whatever is closest to revenue while still having enough volume per keyword per window to be stable. Not CPI.
- Build the bid rules as code. Python, Node, whatever you use. Seven rules. The value is not in the cleverness of the script. It’s in removing the human tendency to anchor on last week’s bids.
- Wire Claude Code last. Once the script is running, layer Claude Code on top. Save your API keys (Apple Ads, MMP) in your keychain; a sketch of the credential loading follows this list. Connect the MCPs that exist (keyword tracker, design tool). Claude Code calls the APIs directly for anything without an MCP. The orchestration is where the time actually comes back.
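For the keychain piece, here’s what the credential loading can look like. The keyring library is one option (it backs onto the macOS Keychain), and the service/account names here are illustrative.

```python
import keyring  # pip install keyring; uses macOS Keychain as its backend

# Read credentials at runtime so nothing sensitive lives in the repo.
# Service and account names are illustrative -- use your own.
APPLE_ADS_SECRET = keyring.get_password("apple-ads-api", "client_secret")
ADJUST_TOKEN = keyring.get_password("adjust-api", "api_token")

if not APPLE_ADS_SECRET or not ADJUST_TOKEN:
    raise SystemExit(
        "Missing credentials -- store them once with keyring.set_password()"
    )
```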
Skip step 1 and the rest is polish on a broken foundation.
What’s coming next
One thing worth flagging, because it’s going to reshape this system within the year.
The Apple Ads Platform API is coming, and it’s a bigger shift than most people realize. Apple is sunsetting the current Campaign Management API v5 on January 26, 2027, and replacing it with the Apple Ads Platform API. The rebrand matters, because it’s not just a versioned upgrade. The new API unifies advertising across Apple’s surfaces, including Apple Maps ads, which are launching in the US and Canada in summer 2026. Apple Search Ads as a standalone product is quietly becoming Apple Ads.
Two changes in there are immediately relevant to a bid optimization system like this one.
The first is slot-level reporting. Apple expanded to multiple ad slots per search result starting March 3, 2026, rolling out globally by the end of that month. Before this, an ASA ad was either in position 1 or it wasn’t. Now there are multiple positions, Apple charges different rates for each, and the existing API didn’t expose which slot an impression or tap came from. The new API will (per Apple’s preview docs and the integration guidance already in the wild). That means I can bid differently for position 1 vs. position 3, which today is a blind spot in every ASA account I’ve seen.
The second is the suggested_bid_amount field, which Apple quietly added to the Keyword Report object in a recent v5 release. It’s a bid recommendation calculated by Apple per keyword. Whether it’s useful or gameable is an open question. Either way, it’s a new input the rules can consume.
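I haven’t tested it yet, but one way the rules could consume it is as a sanity band around the rule-generated bid rather than a replacement for the rules. A hypothetical sketch:

```python
def clamp_to_suggestion(rule_bid: float, suggested: float | None,
                        band: float = 0.5) -> float:
    """Hypothetical use of Apple's suggested_bid_amount: keep the
    rule-generated bid, but cap moves that land far outside Apple's
    suggestion. Untested -- treat the field as one more input, not
    ground truth, until it proves out."""
    if suggested is None:
        return rule_bid
    lo, hi = suggested * (1 - band), suggested * (1 + band)
    return min(max(rule_bid, lo), hi)
```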
I have not worked through the full Platform API documentation yet. When I do, the two things I’ll evaluate first are whether slot-level data actually comes through cleanly, and whether the redacted low-volume search terms finally become accessible in any form. If the migration unlocks something material, I’ll write a follow-up walking through what actually changes in the bid logic.
Both of these changes fit cleanly into the framework that’s already there. That’s the point of building on a real architecture. You don’t rewrite the system when you get new data, you just tune it.
If you want help setting this up
I write a newsletter called Field Notes on App Growth. It covers this kind of workflow, platform API quirks, ASO experiments, and the reasoning behind the calls I make in my day-to-day growth work. You can sign up below.
If you’re running ASA at meaningful scale and want help setting something like this up for your team, reach out. I’m heads-down in a full-time Head of Growth role right now, so I’m only taking on small, one-off projects for the rest of the year. Architecture audits, bid logic setup, Claude Code wiring. Things with a defined start and end. Not ongoing retainers.
Either way, the framework is in this post. Copy it.
Written by Kevser Imirogullari
Independent mobile marketing consultant helping apps connect the acquisition, store, and monetization insights they’ve been missing.