How I Automated 12 Hours of Weekly Paid Media Reporting with Claude and Two APIs

Matt Danese

Demand gen leader. AI builder. 8+ years at Meta, Webflow, Medely, Octus, and Regal.ai.

Every Monday morning, for about three years, I did the same thing. I opened LinkedIn Campaign Manager, Google Ads, and a blank spreadsheet. I pulled last week's data. I copy-pasted numbers across tabs, built pivot tables I'd rebuilt a hundred times before, and wrote a summary email that said roughly the same things every single week. Then I sent it to my manager and we talked about what to do next.

I estimated once that this ritual consumed 3–4 hours of my time per week. For a demand gen manager running multiple channels with a reporting analyst, it can easily hit 12–18 hours. That's one person's job, every week, for the rest of your career — just to answer the question "what happened last week?"

I stopped doing it manually in Q4 of last year. Here's exactly how.

The architecture (it's simpler than you think)

The system has three components: a data layer, an analysis layer, and an output layer. That's it. I use Node.js to hit the LinkedIn Ads API and Google Ads API on a cron schedule every Monday at 8AM. The data pulls are scoped to the prior calendar week — impressions, clicks, spend, conversions, CPL by campaign and ad set. Nothing fancy. The same numbers I used to pull by hand.
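The only slightly fiddly part of that scoping is computing "the prior calendar week" reliably. Here's a minimal sketch of that date logic (the function name and return shape are my own; the actual API calls are omitted):

```javascript
// Compute the prior calendar week (Monday through Sunday) relative to a run date.
// The cron fires Monday at 8AM, so "prior week" is the 7 days ending yesterday.
function priorCalendarWeek(runDate) {
  const d = new Date(runDate);
  // getUTCDay(): 0 = Sunday ... 6 = Saturday. Normalize so Monday = 0.
  const dow = (d.getUTCDay() + 6) % 7;
  // Monday of the current week, then step back 7 days for last week's Monday.
  const thisMonday = new Date(Date.UTC(d.getUTCFullYear(), d.getUTCMonth(), d.getUTCDate() - dow));
  const start = new Date(thisMonday.getTime() - 7 * 86400000);
  const end = new Date(thisMonday.getTime() - 86400000); // last Sunday
  const iso = (x) => x.toISOString().slice(0, 10);
  return { start: iso(start), end: iso(end) };
}
```

Both the LinkedIn and Google Ads reporting endpoints take a date range, so this one window parameterizes every pull in the pipeline.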

That data gets passed to the Anthropic Claude API with a structured prompt that gives Claude a specific analyst persona: senior demand gen lead, 8+ years B2B experience, direct and prioritized in their output, no hedging. The persona matters more than most people expect. A Claude prompt without a persona gives you balanced, hedged, corporate-sounding analysis. Give it a point of view and it writes like a person.
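The prompt assembly is just string composition: persona first, then the required output structure, then the data. The exact wording below is an approximation of mine, not the production prompt, but the three-part shape is the technique:

```javascript
// Build the analyst-persona prompt: persona, required format, then the data.
// Wording here is illustrative; the structure is what matters.
function buildBriefingPrompt(weeklyData) {
  const persona = [
    "You are a senior demand gen lead with 8+ years of B2B experience.",
    "Be direct and prioritized. No hedging. Lead with the single most important finding.",
  ].join(" ");
  const format = [
    "Structure your answer as:",
    "1. Headline story",
    "2. What worked",
    "3. What needs attention",
    "4. Three action items, ranked by expected impact.",
  ].join("\n");
  return `${persona}\n\n${format}\n\nLast week's data (JSON):\n${JSON.stringify(weeklyData, null, 2)}`;
}
```

This string goes in as the user message on the Anthropic Messages API call; the persona could equally live in the `system` parameter.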

The output goes to Slack. Not an email. Not a dashboard. A Slack message with a structured format: one headline story (what was the single most important thing that happened this week), a "what worked" section, a "what needs attention" section, and three prioritized action items — ranked by expected impact, not urgency.
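That four-section format maps cleanly onto Slack's Block Kit. A sketch of the payload builder (the section names mirror the briefing, not any official schema beyond Slack's `blocks` array):

```javascript
// Assemble the Monday briefing as a Slack Block Kit payload, ready to post
// via chat.postMessage or an incoming webhook.
function buildSlackBriefing({ headline, worked, needsAttention, actions }) {
  const section = (text) => ({ type: "section", text: { type: "mrkdwn", text } });
  return {
    blocks: [
      section(`*Headline:* ${headline}`),
      section(`*What worked:*\n${worked.map((w) => `• ${w}`).join("\n")}`),
      section(`*What needs attention:*\n${needsAttention.map((w) => `• ${w}`).join("\n")}`),
      section(`*Action items (by expected impact):*\n${actions.map((a, i) => `${i + 1}. ${a}`).join("\n")}`),
    ],
  };
}
```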

18 hrs: estimated weekly analyst time eliminated across the 12 discrete automations in the full system.

What broke the first time

The data pull worked fine. The Claude analysis was good. The Slack output was readable. What broke was the prompt context window. I was passing raw JSON API responses directly into the prompt — campaign objects with every field the API returns, including a lot of fields I didn't care about. I hit token limits and the analysis started truncating.

The fix was pre-processing. Before passing data to Claude, I wrote a simple transformation step that strips everything except the fields that matter for analysis: campaign name, spend, impressions, clicks, conversions, CPL. I also aggregate to campaign level rather than ad-set level by default — Claude can ask for drill-down on specific campaigns if the top-line numbers look anomalous, which is a separate automation I added later.
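The transform itself is a few lines. This sketch assumes an illustrative input shape, not the exact LinkedIn or Google Ads response schema:

```javascript
// Strip raw API rows down to the analysis fields and roll ad sets up to
// campaign level. Input shape is illustrative.
function toCampaignSummary(adSetRows) {
  const byCampaign = new Map();
  for (const row of adSetRows) {
    const c = byCampaign.get(row.campaignName) ?? {
      campaignName: row.campaignName, spend: 0, impressions: 0, clicks: 0, conversions: 0,
    };
    c.spend += row.spend;
    c.impressions += row.impressions;
    c.clicks += row.clicks;
    c.conversions += row.conversions;
    byCampaign.set(row.campaignName, c);
  }
  // Derive CPL after aggregation so it's never averaged across ad sets.
  return [...byCampaign.values()].map((c) => ({
    ...c,
    cpl: c.conversions > 0 ? +(c.spend / c.conversions).toFixed(2) : null,
  }));
}
```

Dropping the unused fields and the ad-set granularity cut the prompt payload by an order of magnitude, which is what made the token limits a non-issue.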

The second thing that broke was the action items. Early versions of the prompt produced action items that were too generic: "Consider reallocating budget to higher-performing campaigns." That's useless. I rewrote the prompt to require that each action item include the specific campaign name, the specific metric that triggered it, and the specific dollar amount or percentage change recommended. Generic analysis is entertainment. Specific analysis is leverage.
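You can also enforce that rule mechanically instead of trusting the prompt alone. A rough heuristic I'd sketch like this (the regex and function are illustrative, not what runs in production):

```javascript
// Reject any model-generated action item that doesn't name a known campaign
// AND cite a concrete dollar amount or percentage. Rough heuristic check.
function isSpecificActionItem(item, campaignNames) {
  const namesCampaign = campaignNames.some((n) => item.includes(n));
  const hasFigure = /(\$\d|\d+(\.\d+)?%)/.test(item);
  return namesCampaign && hasFigure;
}
```

Failed items can be sent back to the model with a "be specific" retry instruction rather than surfaced in the briefing.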

The 12 automations I ended up with

The weekly briefing was just the start. Once I had the data pipeline and analysis layer working, I kept adding modules. Anomaly detection that fires mid-week if CPL spikes more than 30% vs. the prior 3-week average. Budget pacing alerts when a campaign is on track to over- or under-deliver by more than 15% by end of month. A creative fatigue detector that flags ad sets where CTR has declined week-over-week for three consecutive weeks.
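Those three checks are just pure functions over weekly metric series (oldest to newest). A sketch with the thresholds from above; names and signatures are my own:

```javascript
const avg = (xs) => xs.reduce((a, b) => a + b, 0) / xs.length;

// CPL spike: current week more than 30% above the prior 3-week average.
function cplSpike(cplSeries) {
  const prior = cplSeries.slice(-4, -1);
  const current = cplSeries[cplSeries.length - 1];
  return prior.length === 3 && current > avg(prior) * 1.3;
}

// Budget pacing: projected month-end spend off target by more than 15%.
function pacingAlert(spendToDate, dayOfMonth, daysInMonth, monthlyBudget) {
  const projected = (spendToDate / dayOfMonth) * daysInMonth;
  return Math.abs(projected - monthlyBudget) / monthlyBudget > 0.15;
}

// Creative fatigue: CTR down week-over-week for three consecutive weeks.
function creativeFatigue(ctrSeries) {
  const last4 = ctrSeries.slice(-4);
  return last4.length === 4 && last4.every((v, i) => i === 0 || v < last4[i - 1]);
}
```

Each fires a Slack alert on its own schedule, independent of the Monday briefing.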

The most useful addition was a real-time Slack Q&A bot. Using the same data pipeline with a persistent context window, I can ask questions like "which LinkedIn campaigns had the best ICP lead ratio last month?" or "show me the CPL trend for the [campaign name] campaign over the last 8 weeks" and get a real answer in under 10 seconds. This replaced about 60% of the ad-hoc data pulls I used to do manually.

The thing I didn't expect

The best outcome wasn't the time saved. It was the consistency. When I wrote the Monday briefing manually, my analysis quality varied. If I was busy, I'd write a thinner email. If I was already convinced I knew the answer, I'd look for data that confirmed it. The automated briefing has no off days. It applies the same analytical framework every single week, flags the same types of anomalies, asks the same clarifying questions. It's more consistent than I was.

If you want the full system design — prompts, data schema, Slack integration pattern, and the 12 automation specs — it's all in the PRD. Subscribe to the newsletter and you'll get access.

Want the full system PRD?

Subscribe to The Demand Engine(er) — free — and get instant access to all 5 system PRDs.

Get the PRDs →