Campaign Optimization: Turn Losing Campaigns Profitable

By Brent Dunn · Jul 5, 2017 · 16 min read

You’ve spent $500 on a campaign. The clicks are coming in. People are hitting your landing page. Conversions? Barely a trickle. You’re at -40% ROI.

Kill it or keep going?

This decision separates marketers who burn through budgets from those who build profitable campaigns. Most campaigns fail not because the offer was bad or the traffic was wrong. They fail because the marketer either gave up too early or held on too long.

This is the framework I use to make that call. I’ve run hundreds of paid campaigns across most major traffic sources, and this process has saved me from cutting winners too early and bleeding money on losers.

AI speeds up the analysis part. What used to take hours of spreadsheet work now takes minutes. But AI doesn’t fix bad offers or wrong audiences. The market decides what converts. AI just helps you see the data faster.


What’s in This Guide

Section | What You’ll Get
Two Optimizer Types | Which camp you’re in and why it matters
AI Analysis Workflow | Copy-paste prompts for campaign analysis
Testing Framework | When to use A/B vs multi-variant
What to Test | Offers, angles, creatives, pages, placements
Cut or Continue | The exact decision framework I use
Hidden Wins | Speed, filtering, payout bumps
Campaign Types | How to handle each scenario
Data System | Turn past campaigns into future wins

The Two Types of Optimizers

Two groups of marketers, both losing money for opposite reasons.

The “launch and pray” crew sets up multiple campaigns hoping something sticks. If it doesn’t work immediately, they write it off and move on.

The snipers launch a campaign and let it run even when it’s bleeding money. They cut segments one by one, losing a fortune while trying to optimize their way out of a hole.

Both approaches fail. The answer is somewhere in between.

Why “Launch and Pray” Fails

You’re gambling. Throwing things at the wall with no strategy.

This group usually runs RON (run of network) from day one. No targeting. No hypothesis. Just hope.

You might stumble onto a winning campaign this way. But you won’t know why it worked. You won’t know how to scale it. And when it dies (they all do), you’re back to zero.

A campaign that isn’t immediately profitable doesn’t mean it’s dead. It means you have data to work with.

Why Pure Sniping Fails Too

Most of my successful campaigns came from the sniper approach. But there’s a catch: you’re going to lose money upfront while you optimize.

This wears on you. Watching red numbers day after day tests your patience.

The sniper does research first. They understand the offer and the traffic source. They build a marketing persona of their target audience.

Instead of RON, they map out tests:

  • The hook that captures attention
  • Landing page style
  • Target market segments
  • Specific placements

AI-Accelerated Optimization

Here’s what AI actually changes:

Task | Before | With AI
Data Analysis | Export, pivot tables, hours of manual review | Feed CSV to Claude, get insights in minutes
Pattern Recognition | Spot trends yourself over weeks | Finds patterns across thousands of rows instantly
Creative Testing | 3-5 variants per week | 20+ variants per week
Hypothesis Generation | Experience-based guesses | Data-driven suggestions

My AI Optimization Workflow

Step 1: Export Your Data

Pull campaign data as CSV from your traffic source or tracker. Include everything:

  • Date/time
  • Placement/site
  • Device, OS, browser
  • Geo (country, region, city)
  • Creative ID
  • Landing page
  • Conversions, revenue, cost
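Before handing the file to AI, it’s worth a quick local sanity check that the export actually contains what you need. Here’s a minimal pandas sketch; the column names (placement, cost, revenue) are assumptions, so rename them to match your tracker’s export:

import pandas as pd

# Load the tracker export. Column names (placement, cost, revenue) are
# assumptions - rename them to match whatever your tracker exports.
df = pd.read_csv("campaign_export.csv")

# Aggregate spend and revenue per placement, then compute ROI.
by_placement = df.groupby("placement")[["cost", "revenue"]].sum()
by_placement["roi_pct"] = (by_placement["revenue"] - by_placement["cost"]) / by_placement["cost"] * 100

# Rank placements with meaningful spend (mirroring the $50 minimum in the prompt below).
qualified = by_placement[by_placement["cost"] >= 50]
print(qualified.sort_values("roi_pct", ascending=False).head(10))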

Step 2: Run This Analysis Prompt

Copy this into Claude with your CSV:

Analyze this campaign data and provide:

1. TOP PERFORMERS: Top 10 segments by ROI (minimum $50 spend to qualify)
2. WORST PERFORMERS: Bottom 10 segments losing the most money
3. PATTERNS: What characteristics do converting traffic share?
4. ANOMALIES: Any unusual patterns or data quality issues?
5. QUICK WINS: 3 optimizations I could make today

Data context:
- Traffic source: [NAME]
- Offer type: [DESCRIBE]
- Current overall ROI: [X%]
- Total spend in this data: [$X]

[PASTE CSV DATA]

Step 3: Dig Deeper

Once you have initial insights, run these follow-up prompts:

Placement tiers:

Group these placements into tiers:
- Tier 1: Profitable (keep and scale)
- Tier 2: Breakeven (test with new creative)
- Tier 3: Losing but salvageable (needs specific fix)
- Tier 4: Cut immediately

For Tier 3, suggest what specific change might improve each.

Creative performance:

Analyze creative performance patterns:
- Which creative IDs perform best on which placements?
- Any creative + placement combinations that outperform?
- Which creatives should I kill?
- What themes do winning creatives share?

Time patterns:

Analyze performance by:
- Hour of day
- Day of week
- Any time-based trends?

Should I implement dayparting? If so, what schedule?
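If you want to eyeball the dayparting question yourself, the same export answers it in a few lines of pandas. A minimal sketch, assuming a timestamp column named date_time plus cost and revenue; adjust the names to your tracker:

import pandas as pd

df = pd.read_csv("campaign_export.csv")

# Parse the timestamp column - the name "date_time" is an assumption.
df["date_time"] = pd.to_datetime(df["date_time"])
df["hour"] = df["date_time"].dt.hour
df["weekday"] = df["date_time"].dt.day_name()

# ROI by hour of day; swap in "weekday" to check day-of-week patterns.
hourly = df.groupby("hour")[["cost", "revenue"]].sum()
hourly["roi_pct"] = (hourly["revenue"] - hourly["cost"]) / hourly["cost"] * 100
print(hourly.sort_values("roi_pct", ascending=False))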

AI Creative Generation

Instead of manually creating 3-5 ad variants, generate 20+:

I'm running [OFFER TYPE] on [TRAFFIC SOURCE].

My current best-performing angle is: [DESCRIBE]

Generate 15 ad copy variations that:
1. Test different emotional triggers (fear, curiosity, greed, urgency, social proof)
2. Test different hooks (question, statistic, bold claim, story)
3. Keep the same core benefit but frame it differently

Format: Headline (max 40 chars) | Description (max 90 chars)

For banner concepts:

My winning banner uses: [DESCRIBE VISUAL/COPY]

Generate 10 banner concept variations:
- 3 that test different visual styles
- 3 that test different headlines
- 2 that test different CTAs
- 2 completely different angles

For each, describe: Visual concept | Headline | CTA

Full AI workflow guide


The Power of Testing

Testing is how you turn losers into winners. Even profitable campaigns need constant testing. Your competitor is testing. If you stop, they’ll eventually take your placements.

Make sure you’re tracking everything first.

A/B Testing

Test one variant against another. 50% of traffic to A, 50% to B.

When to use it: Once you have a profitable campaign and want to improve it. You’re trying to beat your current winner.

Why it works: You reach statistical significance fast with only two variants.

The downside: Less chance of finding a dramatic outperformer.

Multi-Variant Testing

Test multiple variants at once. Four variants? Each gets 25% of traffic.

When to use it: Early in a campaign when you’re still figuring out what works. You’re testing different angles, styles, and approaches all at once.

Why it works: You can compare multiple approaches simultaneously and find the best direction faster.

The downside: Takes longer to reach statistical significance with traffic split so many ways.

My approach: Multi-variant testing to find the winning direction, then A/B testing to optimize it.
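You can also check significance yourself before asking AI. Here’s a minimal two-proportion z-test on conversion rates, standard library only; the click and conversion counts at the bottom are placeholders:

from math import sqrt, erfc

def ab_significance(clicks_a, conv_a, clicks_b, conv_b):
    # Two-proportion z-test on conversion rate. Returns (z, two-sided p-value).
    rate_a, rate_b = conv_a / clicks_a, conv_b / clicks_b
    pooled = (conv_a + conv_b) / (clicks_a + clicks_b)
    se = sqrt(pooled * (1 - pooled) * (1 / clicks_a + 1 / clicks_b))
    z = (rate_a - rate_b) / se
    p_value = erfc(abs(z) / sqrt(2))
    return z, p_value

# Placeholder numbers: A converts 40 of 2,000 clicks, B converts 62 of 2,000.
z, p = ab_significance(2000, 40, 2000, 62)
print(f"z = {z:.2f}, p = {p:.4f}, significant at 95%: {p < 0.05}")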

AI Test Analysis

Use this prompt to analyze your tests:

Here's my A/B test data:

Variant A: [CLICKS] clicks, [CONVERSIONS] conversions, [COST] spend
Variant B: [CLICKS] clicks, [CONVERSIONS] conversions, [COST] spend

Questions:
1. Is this statistically significant? (95% confidence)
2. If not, how much more data do I need?
3. What's the projected ROI difference if I scale the winner?
4. Any concerns about the data quality?

For multi-variant:

Here's my multi-variant test results:

[PASTE DATA FOR ALL VARIANTS]

1. Rank variants by statistical confidence
2. Which variants can I cut now?
3. Which should continue testing?
4. Recommended next test based on what's working?

The One Rule That Matters

Test one variable at a time.

If you change multiple things, you won’t know what worked. Your data becomes useless.

Testing a new landing page? Keep everything else the same:

  • Same placements
  • Same ad creatives
  • Same bid
  • Same targeting

Change one thing. Measure. Then change the next thing.

Stop Refreshing Your Stats

Set up the test and walk away. Work on your next campaign. Check results the next day.

Constantly refreshing stats doesn’t make conversions appear faster. It just makes you anxious and tempts you to make premature changes.

Time Matters Too

If you launch test A on Monday at 5pm and test B on Tuesday at 11am, your data is skewed.

Maybe your campaign performs better at night. Now your morning data is misleading.

Try to launch tests at the same time of day, same day of week when possible.


What To Test

“I’ve tested everything and still can’t get it profitable.”

No, you haven’t.

There’s always something to test. The power of performance marketing is that you control almost every variable in the funnel. Channel not working? Switch it. Offer not converting? Test another. EPC too low? Try a different backend offer.

Here’s what to test, in order of impact:

Offers

Test offers first. If your offer doesn’t convert, nothing else matters.

Ask your affiliate manager what’s working. Look at what your competition runs. Test multiple offers against each other.

If an offer has different landing pages, test those too. Each converts differently.

Full offer selection guide

AI prompt for offer research:

I'm running [TRAFFIC TYPE] to [VERTICAL] offers.

Based on current market conditions, what offer characteristics typically perform best for this traffic type?

Consider: Payout structure, flow type, conversion requirements, landing page style.

Angle

Most overlooked optimization. Your angle creates the desire to act.

“Download an app” is too general. No one cares because the message isn’t aimed at anyone specific.

Your angle should target a specific group with a specific pain point. Three questions:

  • What does my audience fear?
  • What do they need most?
  • What pain are they trying to escape?

Create at least three angles and test them against each other.

Full guide to creative angles

AI prompt for angle generation:

My offer is: [DESCRIBE OFFER]
Target audience: [DESCRIBE]
Current angle: [YOUR CURRENT HOOK]

Generate 10 alternative angles that:
- Hit different emotional triggers
- Address different pain points
- Use different frameworks (fear, curiosity, greed, belonging)

For each angle, give me: Hook + Why it might work

Ad Creatives

Your ad is the first thing prospects see. Banner, text, or video - it determines your CTR and click costs.

High CTR keeps costs low whether you’re paying CPM or CPC. But don’t sacrifice quality for quantity. Clickbait that doesn’t match your landing page wastes money.

Think about context:

  • What is the visitor doing before they see your ad?
  • What mindset are they in?
  • What would make them stop and pay attention?

Your creative should prep visitors for your hook, not just grab attention.

AI ad copywriting guide

Landing Page Style

Four styles that work:

Click Through Page - Just enough detail to get the click. Quick benefits, strong CTA, then off to the offer.

Lead Capture Page - Most overlooked style. Collect email or phone before sending to the offer. You build an asset (your list) while still making money on the offer. Two choices for the visitor: give info or leave.

Long Form Sales Letter - The pages that take 20 minutes to scroll. Story-driven, pain point after pain point, no CTA until you’re fully sold. They work because by the time you see the buy button, you’re ready.

Sideways Sales Letter - Jeff Walker’s approach. Spread the pitch across multiple touchpoints. Better for high-ticket offers where you need to build trust first. Higher LTV, but longer time to revenue.

Full landing page guide

AI prompt for landing page variations:

My current landing page style: [DESCRIBE]
Offer: [DESCRIBE]
Current conversion rate: [X%]

Suggest 3 alternative page structures I should test:
1. A variation of my current style
2. A completely different style
3. A hybrid approach

For each, explain: Structure, key elements, why it might improve conversion.

Placements

Where your ad shows matters. If your ad appears on a site where no one cares about your offer, you’re wasting money.

Research where your audience spends time. Cut placements that don’t match your demographic. And watch for winning combinations: sometimes a specific creative + landing page + placement outperforms everything else.

Targeting

Narrow your targeting based on what converts:

  • Country
  • Device type
  • Operating system
  • Browser
  • ISP/Carrier

If your offer only works in certain countries on certain devices, don’t pay for traffic that can’t convert. Use your data to target only the highest ROI segments.

IP targeting guide

Bids

Bid adjustments can turn a loser profitable.

But test them. A lower bid reduces cost but might push you to position five instead of one. Or worse, you only show up on low-quality placements.

Test bid changes like you test everything else.


The Cut or Continue Framework

This is where most marketers screw up. Here’s the exact framework I use:

The 3x Rule

Don’t make any decisions until you’ve spent 3x your target CPA.

Target CPA is $20? Spend at least $60 before cutting anything.

Why 3x?

  • 1x CPA: Not enough data. Could just be bad luck.
  • 2x CPA: Better, but variance is still high.
  • 3x CPA: Enough data to see patterns.

The Segment Analysis Prompt

After hitting 3x spend, run this:

Campaign spent: $[X] (3x my $[Y] CPA target)
Overall ROI: [Z%]

Break down performance by:
1. Top 20% of segments by spend - what's their ROI?
2. If I cut the bottom 50% of segments, what's projected ROI?
3. Is there ANY segment performing profitably?

Decision framework:
- If cutting bad segments gets me to breakeven: CONTINUE
- If best segments are still deeply negative: CUT
- If insufficient data in any segment: CONTINUE at lower budget
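The projection in step 2 is easy to double-check locally. A minimal pandas sketch, again assuming placement, cost, and revenue columns in your export:

import pandas as pd

df = pd.read_csv("campaign_export.csv")
seg = df.groupby("placement")[["cost", "revenue"]].sum()
seg["roi_pct"] = (seg["revenue"] - seg["cost"]) / seg["cost"] * 100

def roi(frame):
    # Overall ROI for whatever subset of segments you pass in.
    return (frame["revenue"].sum() - frame["cost"].sum()) / frame["cost"].sum() * 100

keepers = seg[seg["roi_pct"] >= 0]  # segments at breakeven or better
print(f"Current ROI: {roi(seg):.1f}%")
print(f"Projected ROI after cutting negative segments: {roi(keepers):.1f}%")
print(f"Remaining spend: ${keepers['cost'].sum():.2f} of ${seg['cost'].sum():.2f}")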

The Decision Tree

At -60% or worse after 3x CPA spend:

  1. Any segments profitable? Clone and target only those.
  2. No profitable segments, but best segment at -30% or better? Test new creative/angle on that segment only.
  3. Best segment still deeply negative? Kill it.

At -30% to -60% after 3x CPA spend:

  1. Find your top 3 segments by ROI
  2. Clone campaign targeting only those
  3. Test 3 new creatives or angles
  4. Give it another 2x CPA spend
  5. Still not improving? Kill it.

At -5% to -30% after 3x CPA spend:

This is the sweet spot. Small optimizations can turn these campaigns profitable:

  1. Cut money-wasting segments
  2. A/B test your best creative against 2-3 new variants
  3. Ask your affiliate manager for a payout bump
  4. Optimize landing page load speed
  5. Test dayparting if data shows time patterns

At breakeven to +30%:

Don’t mess with it. Instead:

  1. Clone and expand to new placements
  2. Scale slowly
  3. Keep testing new creatives to prevent fatigue

Full scaling guide


Hidden Optimizations

Still breaking even? Here are the non-obvious wins:

Loading Speed

People bounce after 4 seconds. Target sub-1-second load times.

If your server is in the US and your traffic is European, you’re fighting latency. Put your server close to your audience.
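A rough way to spot-check this is with the Python requests library. It measures from wherever you run it, so it won’t show what a visitor in another region sees, but it catches the obvious problems like a slow server or missing compression; the URL is a placeholder:

import requests

url = "https://example.com/landing-page"  # placeholder - use your own page

resp = requests.get(url, headers={"Accept-Encoding": "gzip, br"})

# .elapsed measures time from sending the request until the response headers
# are parsed - a reasonable proxy for time to first byte.
print(f"Response time: {resp.elapsed.total_seconds() * 1000:.0f} ms")
print(f"Status: {resp.status_code}")
print(f"Compression: {resp.headers.get('Content-Encoding', 'none')}")
print(f"Page size: {len(resp.content) / 1024:.0f} KB")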

Landing Page Code Guide

Filter Traffic

Getting traffic that won’t convert? Don’t waste it.

Running a Netherlands campaign but getting Belgium traffic? Redirect that traffic to an offer that accepts Belgian traffic.
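Most trackers can do this split for you, but if you handle it at the landing page level, the routing logic is short. A minimal Flask sketch, assuming the country code arrives as a query parameter from your tracker (a geo-IP lookup works too); the offer URLs are placeholders:

from flask import Flask, request, redirect

app = Flask(__name__)

# Placeholder offer URLs - swap in your real tracking links.
OFFERS = {
    "NL": "https://tracker.example.com/offer-netherlands",
    "BE": "https://tracker.example.com/offer-belgium",  # fallback for Belgian traffic
}
DEFAULT_OFFER = "https://tracker.example.com/offer-global"

@app.route("/go")
def route_by_geo():
    # Country code passed by the tracker, e.g. /go?country=BE
    country = request.args.get("country", "").upper()
    return redirect(OFFERS.get(country, DEFAULT_OFFER))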

AI prompt for filtering:

Here's my traffic breakdown by [GEO/DEVICE/OTHER]:

[PASTE DATA]

Which segments should I:
1. Filter to a different offer?
2. Block entirely?
3. Test with different creative?

Payout Bump

Running volume? Ask for a payout bump.

Affiliate managers don’t want to lose your volume. 10-20% bumps are common if you’re doing real numbers.

They won’t give you one? Shop the offer to other networks. High volume? Go direct to the advertiser.

Don’t ask for a bump if you’re doing five leads a day.

Code Optimizations

Small code changes can boost conversions. Lazy loading, compression, CDN, redirect optimization.

Full code optimization guide


Clone Before You Cut

Found a winning segment? Don’t cut everything else from the original campaign.

Here’s what happens when you cut: Your eCPM changes. Your position on the ad server changes. You start showing up on different placements. The “optimization” breaks something else.

Do this instead:

  1. Clone the campaign
  2. Apply your targeting changes to the clone
  3. Run both simultaneously
  4. If the clone wins, pause the original
  5. If the clone fails, you still have the original collecting data

Never optimize a running campaign in place. Always clone first.


Steal From the Competition

Out of ideas? Go look at what’s working for others.

Visit your placements. Find competitors running similar offers. Note:

  • Their angle/hook
  • Landing page style
  • Ad creative approach
  • Position on the ad server (first impression or buried?)

If someone’s been running the same page for weeks, it’s probably converting. Build something similar to establish a baseline, then beat it.

Competitor analysis guide

AI prompt for competitive analysis:

Here's competitor landing page copy:

[PASTE COPY]

Analyze:
1. What's their angle/hook?
2. What emotional triggers are they hitting?
3. What objections do they address?
4. What's missing that I could add?
5. How could I differentiate while keeping what works?

Three Campaign Types (And What To Do With Each)

Zero Conversions

A week in, zero conversions. Hard to analyze because you have no feedback.

Run through this checklist:

  • Does the offer convert on other channels? (Ask your affiliate manager)
  • Are visitors clicking through to the offer or bouncing immediately?
  • Does your ad creative match your landing page? Mismatched messaging kills conversions.
  • Any technical issues? Offer down? Tracking broken? Slow load times?

If nothing’s obviously wrong, test different offers. Test the same offer on different networks. You need a baseline that works before you can optimize.

Potential Winner (-60% to -5%)

Most campaigns land here. Most marketers kill them too early.

You’re not losing money. You’re buying data.

This data tells you which segments work, which angles resonate, which placements convert. Use it.

Usually all it takes is shifting traffic to your best-performing segments.

AI prompt for diagnosis:

Campaign is at -30% ROI after [X] spend.

Here's my segment breakdown:

[PASTE DATA]

1. If I cut all negative segments, what's projected ROI?
2. Is there enough volume remaining to be worthwhile?
3. What 3 changes would have the biggest impact?
4. Should I continue or cut losses?

Jackpot (+50% ROI from Day One)

Rare. I’ve had a handful.

When it happens, verify with your affiliate manager before scaling. Sometimes offers fire conversions they shouldn’t. Sometimes advertisers are bleeding money and don’t know it yet.

Scale fast, but expect it to die. These campaigns almost always end suddenly:

  • Advertiser changes their landing page terms
  • New advertiser realizes they’re losing money and kills the offer
  • Technical glitch gets fixed

Enjoy it while it lasts. Don’t assume you can retire.

How to scale without blowing up


Build Your Campaign Memory

Most marketers waste the data they’re paying for. Win or lose, every campaign teaches you something. Capture it.

What to Track

After every campaign (weekly or at conclusion):

  • Best-performing angle
  • Average CTR
  • EPC by variant
  • Top landing page style
  • Total spend and revenue
  • Best and worst creative CTRs
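A low-effort way to keep these in a form you can paste back into AI later is a local JSON log. A minimal sketch; the field names and values are only examples:

import json
from pathlib import Path

LOG_FILE = Path("campaign_log.json")

def log_campaign(record):
    # Append one campaign record to a local JSON log.
    history = json.loads(LOG_FILE.read_text()) if LOG_FILE.exists() else []
    history.append(record)
    LOG_FILE.write_text(json.dumps(history, indent=2))

log_campaign({
    "offer": "Example sweepstakes offer",       # placeholder values only
    "traffic_source": "Example push network",
    "spend": 850.00,
    "revenue": 1020.00,
    "best_angle": "Fear-of-missing-out hook",
    "best_ctr_pct": 0.9,
    "notes": "Tier 1 placements scaled; mobile Chrome converted best.",
})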

AI Documentation System

Run this prompt after every campaign:

I just finished running a campaign. Help me document it.

Campaign details:
- Offer: [DESCRIBE]
- Traffic source: [NAME]
- Dates: [START] to [END]
- Total spend: [$X]
- Total revenue: [$X]
- Final ROI: [X%]

What worked:
[LIST]

What didn't work:
[LIST]

Create a structured campaign retrospective that includes:
1. Executive summary (2-3 sentences)
2. Key learnings
3. What to test next time
4. Reusable assets (angles, copy, targeting)
5. Things to avoid

Save these. When you launch similar campaigns, feed your history back to AI:

I'm launching a new [VERTICAL] campaign on [TRAFFIC SOURCE].

Here are my past campaign retrospectives for similar campaigns:

[PASTE RETROSPECTIVES]

Based on my historical learnings:
1. What should I definitely do?
2. What should I avoid?
3. What's my recommended launch strategy?

Your past failures become your competitive advantage. Every campaign compounds your knowledge.


What You Should Do Now

The fundamentals haven’t changed:

  • Test one variable at a time
  • Wait for statistical significance
  • Clone before you cut
  • Document everything

AI speeds up the analysis. It doesn’t replace judgment. The market still decides what converts.

Start here:

  1. Export your current campaign data as CSV
  2. Run the initial analysis prompt from this guide
  3. Use the 3x rule and decision tree to decide: cut or continue
  4. Document what you learned

If you’re building your first paid campaign, start with the full AI workflow guide to set up your process.

Questions? Contact me.

