When 97 Percent of Tests Fail on Purpose
Most brands treat creative failure as an embarrassment. Something that happened because the brief was off, or the creative wasn't good enough, or the audience targeting was slightly wrong. Fix the failure, move on. Sassy Saints took the opposite position: failure is the plan.
At one point they had roughly 1,000 campaigns running simultaneously, with the explicit expectation that around 3% of them would beat expectations and become actual winners. Not hoping for better odds - accepting those odds and building a machine calibrated to them. That single belief restructures everything downstream. You stop protecting ideas and start building throughput. More angles, more concepts, more hooks, more landing page variants, faster feedback cycles. Volume stops being a nice-to-have and becomes the only thing that actually surfaces winners.
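The arithmetic behind that calibration is worth making explicit. As a back-of-envelope sketch - assuming each test is an independent 3% shot, which real campaigns aren't quite - the chance of surfacing at least one winner grows quickly with volume:

```python
# Back-of-envelope: if each campaign independently has a 3% chance of
# becoming a winner, how many tests until you're likely to find one?
# (Independence is a simplifying assumption, not a claim about real ads.)
p_win = 0.03

def chance_of_at_least_one_winner(n_tests, p=p_win):
    return 1 - (1 - p) ** n_tests

for n in (10, 50, 100, 1000):
    print(n, round(chance_of_at_least_one_winner(n), 3))
```

At 10 tests you're at roughly a one-in-four chance of finding a winner; by 100 you're past 95%. At those odds, volume isn't a preference - it's the mechanism.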
I've run paid acquisition for enough brands to know how rare this mindset is. Most teams - even good ones - are still quietly emotionally attached to the work. Someone wrote that script. Someone spent two hours on the thumbnail. Pulling it after three days of weak ROAS feels like rejection. But if you go in expecting 97 failures per 100 tests, pulling fast stops feeling personal and starts feeling like the job.
What Sassy Saints built is less a media buying operation and more a production line. Tight inputs, ruthless iteration, and a system designed around the assumption that most things won't work so the things that do can surface quickly. Paid social stops being a creative exercise and becomes an operations problem. And once you see it that way, the decisions you need to make become very different ones.
The Framework That Assumes Most Ideas Are Wrong
Zoom into one decision Sassy Saints made early and it explains a lot about why their volume is sustainable rather than chaotic. They formalised their testing structure as Persona, then Sub-persona, then Angle, then Concept. Every piece of creative starts from that chain. And the reason it works is that each step has a specific, non-negotiable job.
Persona, in their model, has nothing to do with age brackets or income bands. A persona is a shared problem in a shared situation. The woman who wants salon-quality nails in ten minutes because she has a toddler climbing her leg is genuinely different from the woman who wants the same result because she has an event every night of her holiday next week. Same product. Same category. Completely different internal movies, different triggers, different objections. Run the same ad at both of them and you'll get mediocre results from each.
Sub-persona is where scale comes from without brand dilution. Once you have a hero persona that resonates hard, you branch into adjacent sub-personas - the same product truth reframed for different real-life contexts. You're not rebuilding the brand for each one. You're finding the version of the story that fits each situation.
Angle sits above the creative execution. It's the specific why-now hook you lead with - speed, cost savings, salon avoidance, UV damage concerns, the shame of chipped nails at work, the frustration of spending an hour on a manicure and destroying it within 24 hours. Each angle is testable independently. When an ad fails, you can trace it back to a specific level of the chain rather than shrugging and saying "the creative didn't land."
At 2 personas with 2 sub-personas each and 3 angles per sub-persona, you have 12 testable concepts before you've varied a single format or hook. That's the point. The framework generates volume with intention rather than volume through randomness.
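To see why the chain generates volume with intention, it helps to sketch it as a test matrix. The persona and angle names below are invented for illustration - not Sassy Saints' actual taxonomy:

```python
# Hypothetical taxonomy - illustrative names, not the brand's real personas.
personas = {
    "busy-mum": ["toddler-at-home", "school-run"],
    "event-goer": ["holiday-prep", "weekly-nights-out"],
}
angles = ["speed", "cost-vs-salon", "durability"]

# Enumerate every persona -> sub-persona -> angle combination.
concepts = [
    (persona, sub, angle)
    for persona, subs in personas.items()
    for sub in subs
    for angle in angles
]

print(len(concepts))  # 2 personas x 2 sub-personas x 3 angles = 12
```

Every tuple in that list is a distinct, traceable test - and when one fails, you know exactly which level of the chain it failed at.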
The Customer Already Wrote Your Best Ad
What do most creative teams do when they need new ad ideas? They book a brainstorm. They get smart people in a room or a Notion doc and they ideate. And those ideas are usually fine - coherent, on-brand, inoffensive, and almost entirely imagined by people who are not the customer.
Sassy Saints don't brainstorm angles. They go mining.
The sources are the places customers are already being brutally honest - surveys run with a small financial incentive (they've used a $5 gift card to lift response rates), Amazon reviews from their own category and competitors, Reddit threads where people argue and confess and overshare about problems with their nails, Trustpilot reviews that get painfully specific, and their own comment sections on YouTube and TikTok. The point isn't just to collect feedback. It's to capture verbatim phrasing - the exact words a customer uses to describe the problem.
Because the customer's language is the hook. When someone writes "I'm sick of doing my nails and then ruining them opening a can of Diet Coke," you don't clean that up into polished marketing copy. You build an ad around that exact sentence, in that exact register, because the specificity is what creates recognition. Other women who have had that exact moment will feel seen in a way no brainstormed hook will ever produce. A recent survey push hit a 20% response rate - evidence that a modest incentive and a direct ask can generate real signal at scale.
They've also used tools like ChatGPT and Claude to help process large volumes of raw customer language, identifying psychographic patterns across reviews and threads. That's the right use of AI in creative - not generating ideas, but accelerating pattern recognition so you know which customer truths to test first. The research becomes the roadmap.
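The processing step doesn't need to be sophisticated to be useful. Here's a minimal sketch of the pattern-recognition idea - counting recurring phrases across collected reviews. The review snippets are invented, and a real pipeline would lean on an LLM or embeddings rather than raw bigram counts:

```python
from collections import Counter
import re

# Invented snippets standing in for scraped Amazon/Reddit/Trustpilot text.
reviews = [
    "I'm sick of doing my nails and then ruining them opening a can",
    "ruined my nails opening a parcel the same day",
    "took an hour at the salon and chipped within 24 hours",
    "chipped within a day, total waste of an hour",
]

def top_bigrams(texts, n=3):
    """Count recurring two-word phrases as a crude proxy for shared pain points."""
    counts = Counter()
    for text in texts:
        words = re.findall(r"[a-z']+", text.lower())
        counts.update(zip(words, words[1:]))
    return counts.most_common(n)

print(top_bigrams(reviews))
```

Even this crude version surfaces "chipped within" and "my nails" as repeated language - the kind of verbatim phrasing that becomes a hook, not a cleaned-up paraphrase of one.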
Clicking Through Then Bouncing Is a Funnel Problem
Picture two versions of the same campaign. In both, the ad speaks to a time-pressed mother - quick nails, no salon visit, lasts through chaos. In version one, the landing page continues that conversation, expanding on the proof, addressing the obvious objections (do they look fake? will they survive bathtime?), and the product page closes the deal with the same promise above the fold. In version two, the ad does its job, the customer clicks through, and the landing page leads with fashion-forward editorial imagery aimed at a very different kind of buyer. Conversion rates collapse in version two - not because the ad failed, but because the funnel broke the story.
This is where a lot of paid social teams quietly burn money without realising it. They optimise the ad, then hand off to whoever manages the website, and the narrative seam between click and purchase becomes the silent killer in their data.
Sassy Saints are strict about congruence across all three surfaces - the ad wins the click using a specific persona and angle, the landing page expands that argument with more proof and more objection handling aimed at the same persona, and the PDP closes by reducing friction while keeping the same core promise visible. They ran sub-persona split tests specifically to validate this - holiday event prep versus daily wear variants, for instance, to see whether the angle that converted at the top of the funnel still held through to purchase when the funnel reinforced it.
The practical implication is blunt. If your paid team and your web team sit in separate rooms with separate briefs, you have a structural problem. Congruence across ad, landing page, and PDP has to be a coordinated decision, not a coincidence.
A Creator Programme Built Around a Tuesday Call
The creator engine at Sassy Saints didn't arrive fully formed. It evolved from a need to feed a testing machine that would otherwise starve - when you're running hundreds of ad variations simultaneously, you need content throughput that a two-person in-house team simply cannot sustain.
They built an internal group of roughly a dozen creators. And the first thing to understand is that the compensation model shapes everything. Creators are paid a percentage of the ad spend their content attracts. That changes the relationship immediately. It stops being "post a couple of videos and we'll send you product" and starts being an actual performance partnership where a creator has a financial reason to care whether their content is working.
The weekly rhythm is a Tuesday call. Results from the previous week get reviewed, the best performing angles and hooks get shared openly with the whole group, and feedback gets delivered directly. The marketing lead shares angle roadmaps - literally what to test next, which hooks are outperforming, which concepts are getting cut. That kind of transparency is unusual in creator programmes and it's exactly what raises the quality floor across the board.
For fast iteration between calls, they've used WhatsApp. If a hook needs fixing, if an opening line isn't landing, if a demo sequence needs reordering - a creator doesn't wait until next Tuesday. They get the note, they reshoot, and the revised version goes back into the testing pipeline within days.
The output from that system has been substantial. A TikTok campaign built around the busy-mum persona reached 2.5 million views. A weekly creator drop across Instagram Reels accumulated 500,000 plays. Those numbers don't come from having great talent - they come from running a feedback loop tightly enough that the creative keeps improving on a weekly cycle.
Stop Copying Competitors and Start Copying Supplements
Most brands spend serious time in their competitors' ad libraries. I understand the instinct - it feels like competitive intelligence. You're learning what the market is running, what's getting repeated (which usually signals it's working), what formats are popular. And then you borrow from it, and you wonder why your ads feel like everybody else's.
Sassy Saints have been explicit that they don't look at direct competitors for creative ideas. They've seen competitors lift their own ads and landing page structures wholesale, and they don't consider it a threat worth losing sleep over. Copying competitors means you're always one step behind the thing you're copying, and you're contributing to a category-wide creative homogenisation that benefits nobody.
Instead, they go cross-category. Two places they've pointed to specifically - Comfort on the apparel side for strong content inspiration, and direct-response supplement brands for structural approaches, including long-form video formats running to around 20 minutes. A 20-minute VSL for press-on nails sounds audacious. But the underlying mechanics - the extended proof stack, the deep objection handling, the "here's why everything else you've tried hasn't worked" frame - translate across categories when the problem being solved is compelling enough.
Beauty brands copying beauty brands creates a race to the same middle. Beauty brands borrowing the best behavioural mechanics from apparel and supplements - where direct response has been tested and optimised with enormous rigour - creates something the competitor can't just lift from your ad library. They'd have to understand why it works, and that understanding takes time they're probably not investing.
Fresh formats are almost never found inside your own category. That's not a coincidence.
Start With the PDP and Earn Complexity Later
There's a temptation in D2C to build the full funnel architecture before you've proven anything. Twelve landing page variants, a quiz, an advertorial, a VSL, a pop-up flow calibrated to exit intent. I've seen early-stage brands spend three months building funnel infrastructure when they had no signal yet about which persona or angle was actually worth sending traffic to.
Sassy Saints started simpler. In the early stages, ads went straight to the product page. Clean, fast, and genuinely useful for getting signal without building a publishing operation first. The PDP becomes your baseline - you're learning what happens when someone lands directly on the product with minimal narrative scaffolding. That data shapes what you build next.
As the testing machine matured, they layered in advertorial-style landing pages - the listicle format specifically. An example is their "10 Reasons Press-Ons Beat Gel" page, which is modular by design. Headline, problem framing, editorial credibility, product introduction, proof stack, objection handling, offer, call to action. Keep the skeleton, rotate the narrative. Swap the angle and the specific customer story, retest. That's how you get iteration speed without rebuilding the UX every time.
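One way to picture the modular design is as a fixed skeleton of slots with rotating copy. The field names and copy below are illustrative, not their actual page schema:

```python
from dataclasses import dataclass

@dataclass
class Section:
    slot: str      # structural role - never changes between variants
    content: str   # narrative copy - rotated per angle

# The fixed skeleton: same slots, same order, for every variant.
SKELETON = [
    "headline", "problem", "credibility", "product_intro",
    "proof_stack", "objections", "offer", "cta",
]

def build_variant(copy_by_slot):
    """Assemble a landing page variant by filling the fixed skeleton."""
    return [Section(slot, copy_by_slot[slot]) for slot in SKELETON]

# Rotating the angle means swapping copy, not rebuilding the page.
# All copy below is invented for illustration.
busy_mum = build_variant({
    "headline": "10 Reasons Press-Ons Beat Gel",
    "problem": "An hour at the salon you don't have",
    "credibility": "Editorial-style intro and byline",
    "product_intro": "Meet the ten-minute manicure",
    "proof_stack": "Before-and-after photos, review quotes",
    "objections": "Do they look fake? Will they survive bathtime?",
    "offer": "Bundle discount for first orders",
    "cta": "Shop the set",
})
```

Testing a new angle means writing eight blocks of copy, not briefing a new page build - which is where the iteration speed comes from.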
More recently, a quiz funnel - "Find Your Nail Persona" - layered on top of the existing architecture, routing customers to landing pages matched to their declared situation. By that point, they knew which personas were worth routing and what those landing pages needed to say. The complexity was earned, not assumed.
The sequencing itself is a form of creative discipline. Starting with the PDP isn't a concession to limited resources. Choosing to hold off on complexity until you've earned it is how you avoid building funnels for personas nobody actually converts from.
You Can Build This In-House. You Probably Shouldn't.
Sassy Saints proved you can run a high-velocity creator programme internally. The Tuesday call, the WhatsApp loop, the performance-based pay, the weekly angle roadmaps - it works. But it works because someone at the centre of it has made it their primary job.
Here's what running that programme actually requires week to week. Recruiting and onboarding creators consistently as your needs shift. Writing briefs that are specific enough to produce usable content without stifling performance. Managing usage rights, whitelisting, deliverable tracking, and performance-based remuneration cleanly - the kind of clean that doesn't generate disputes or require a spreadsheet archaeologist to reconstruct three months later. Running the creative QA loop. Holding the quality bar while keeping creators motivated. Delivering the weekly performance review and then translating that into next week's testing roadmap before Tuesday comes around again.
Every hour an internal marketing team spends on creator operations - chasing late submissions, reconciling payments, rewriting vague briefs - is an hour not spent on the things a competitor genuinely cannot replicate. Improving the product. Reducing returns. Tightening the customer experience. Building the kind of operational moat that doesn't show up in an ad library.
This is exactly where Growthcurve fits. We run creator programmes the way high-performing brands run affiliates - structured tracking, clear incentives, consistent briefing, rapid iteration, and genuine performance accountability. Your team stays focused on product and customer experience while we keep the creative pipeline full and the testing machine fed. When you're playing the 3% winners game, your limiting factor is almost never ideas. It's throughput and governance and the ability to run the loop every week without burning out the people who should be focused elsewhere.
I think the brands that win over the next few years will be the ones who figured out that creative testing at scale is an operations discipline, not a marketing one - and who built or found the operational infrastructure to treat it that way before their competitors worked it out.