There's a pattern showing up in RevOps conversations right now that's worth naming.
A team invests in Clay. They build enrichment waterfalls, wire in intent signals, set up AI-personalized sequences. Activity goes up. The automation is genuinely impressive: multi-step, signal-triggered, personalized at scale. And then they look at pipeline and ask, "Why isn't this working?"
The honest answer is usually uncomfortable: the problem isn't the automation. It's what the automation is aimed at.
The outbound stack got sophisticated but the results didn't follow
GTM teams have never had more powerful tools for outbound. Intent data, AI personalization, enrichment waterfalls, automated sequencing — the infrastructure to run a sophisticated outbound motion is now accessible to companies of almost any size.
And yet reply rates across the industry have been declining for two years. Pipeline generation isn't keeping pace with the investment in outbound tooling. Reps are logging activity, sequences are running, and the numbers still aren't moving.
This isn't just anecdotal. Scale Venture Partners surveyed 300 GTM leaders on AI adoption and found that the first wave of AI in GTM drove real gains in quantity metrics — activity levels, time savings, emails sent. Quality metrics like pipeline creation and rep attainment haven't moved in proportion. We're at the end of Phase 1. The teams winning in Phase 2 will be the ones who figured out what to aim the automation at.
Most teams diagnose this problem at the wrong layer. They reach for better messaging. They clean up contact data. They add another personalization variable. They increase send volume. These are all layer 3 and layer 4 interventions.
The root cause is usually layer 1: reps are targeting the wrong accounts.
Messaging doesn't land at companies that were never going to buy. AI personalization that correctly identifies a prospect's recent funding round but says nothing meaningful about why that should matter to them isn't personalization — it's noise with extra steps. Signal-based triggers fire on signals that don't actually predict buying intent.
As one GTM strategist recently put it: "A lot of GTM engineering output over the last two years has been technically impressive and commercially mediocre."
We've already seen this play out in one high-profile way: the early AI SDR craze. Some teams eliminated their entire SDR function, only to discover the technology wasn't ready — and ended up scrambling to rebuild pipeline mid-year. The problem wasn't the automation. It was skipping the foundation.
That's the pattern. And it starts before the first email is written.
The four layers of outbound and where most teams focus
When outbound underperforms, there's a predictable sequence of interventions most teams reach for:
- Layer 4: Message quality. Is the copy compelling? Is the subject line working? Is the call to action clear?
- Layer 3: Contact data. Do we have the right person? Is the email valid? Are we reaching the decision-maker?
- Layer 2: Sequencing and timing. Are we following up enough? Are we hitting the right channels? Is the cadence right?
- Layer 1: Account selection. Are we targeting accounts that actually fit? Are these companies that could buy from us, that are showing real buying signals, that look like our closed-won customers?
Most outbound optimization happens at layers 2, 3, and 4. But layer 1 is where most outbound problems actually live.

When account selection is right, everything else converts at higher rates without any other changes. When it's wrong, no amount of optimization at the other layers closes the gap.
What "wrong accounts" actually looks like in practice
It's easy to assume your team is targeting the right accounts. You have a territory plan. You have firmographic filters. You probably have some version of an ICP defined.
But there are a few common ways account selection breaks down quietly:
Territories are stale. Most territories get carved once a year, sometimes less. Between carves, headcount changes, market conditions shift, and the accounts sitting in rep books drift further from the current ideal. By the time you notice, reps have been working accounts that should have been replaced months ago.
Books are too big. A rep with 800 accounts in their territory isn't working their best accounts — they're working whatever surfaces to the top of the list on any given day, which is usually recent activity or whatever they already know. High-fit accounts go untouched not because reps are lazy but because they're overwhelmed.
Scoring doesn't reflect reality. Rule-based scoring models built on firmographic data decay fast. If your scoring model was built 18 months ago and hasn't been updated, it's probably rewarding activity at accounts that don't actually look like your best customers.
No one knows what's going untouched. Without coverage visibility, you find out a great-fit account sat unworked for two quarters after the quarter is already over.
None of these are messaging problems. None of them get fixed by better copywriting or cleaner contact data.
What solving account selection actually looks like
The teams seeing the best outbound results right now aren't the ones with the most sophisticated automation. They're the ones who solved account selection first — and then automated on top of a strong foundation.
In practice, that means a few things:
Scoring based on similarity, not rules. Instead of building a scoring model from firmographic filters ("company size between 200 and 500, in these industries, using these tools"), the highest-performing teams score accounts by similarity to their actual closed-won customers. The model learns from deals you've already won, not from assumptions about what a good fit looks like. When reps see accounts that scored highly, they can understand why — because the account looks like a customer who already bought.
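As a rough illustration of the difference, here's a minimal sketch of similarity scoring. It assumes each account is already reduced to a numeric feature vector and compares prospects to the centroid of closed-won customers; the feature names and data are hypothetical, and a production model would be far richer than a single cosine comparison.

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def score_accounts(prospects, closed_won):
    """Rank prospects by similarity to the centroid of closed-won customers."""
    n = len(closed_won)
    centroid = [sum(v[i] for v in closed_won) / n for i in range(len(closed_won[0]))]
    return sorted(
        ((name, cosine(vec, centroid)) for name, vec in prospects.items()),
        key=lambda t: t[1],
        reverse=True,
    )

# Hypothetical features, e.g. [normalized headcount, tool overlap, growth rate]
won = [[0.9, 0.8, 0.7], [0.8, 0.9, 0.6]]
prospects = {"acme": [0.85, 0.8, 0.65], "globex": [0.1, 0.2, 0.9]}
ranked = score_accounts(prospects, won)
```

The point of the structure, not the arithmetic: the model's inputs are deals you actually won, so "acme" ranks highly because it resembles an existing customer, not because it passed a hand-written firmographic filter.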
Focused books, not massive territories. The math on rep focus is straightforward: a rep working 100 high-fit accounts with real attention will outperform a rep nominally owning 1,000 accounts and working whatever floats to the top. Teams that have moved to dynamic books — smaller, automatically refreshed sets of high-fit accounts per rep — consistently see more opportunities created per rep than those running static large territories.
Automatic retrieval when accounts go unworked. When a high-fit account sits untouched for too long, it should come back out of a rep's book and get replaced with another high-fit account. Not because the rep is penalized, but because unworked capacity is waste. The best account doesn't stay in a book indefinitely if no one is working it.
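The rotation rule described above can be sketched in a few lines. This is an illustrative simplification, not any vendor's actual logic: the 30-day staleness window and all names here are hypothetical, and real systems would layer in exclusions like open opportunities.

```python
from datetime import date, timedelta

STALE_AFTER = timedelta(days=30)  # assumption: 30 days with no touch counts as unworked

def refresh_book(book, last_touched, backlog, today):
    """Swap untouched accounts out of a rep's book for the next high-fit accounts.

    book: account ids currently assigned to the rep
    last_touched: account id -> date of last activity (missing means never touched)
    backlog: high-fit accounts not yet assigned, best first
    """
    kept, released = [], []
    for acct in book:
        touched = last_touched.get(acct)
        if touched is None or today - touched > STALE_AFTER:
            released.append(acct)  # unworked capacity goes back to the pool
        else:
            kept.append(acct)
    # Refill to the original book size from the top of the backlog
    refills = backlog[: len(released)]
    return kept + refills, released

today = date(2024, 6, 1)
book = ["a1", "a2", "a3"]
touches = {"a1": date(2024, 5, 25), "a2": date(2024, 3, 1)}  # a3 never touched
new_book, released = refresh_book(book, touches, ["b1", "b2"], today)
```

The design choice worth noting: released accounts aren't discarded, they return to the pool where another rep, or a future refill, can pick them up.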
Coverage visibility before the quarter goes wrong. Instead of finding out at quarter-end which high-fit accounts went untouched, RevOps needs real-time visibility into which reps are covering their books and where pipeline is being created. That data is what drives coaching conversations that actually change outcomes.
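A coverage check like this can be computed continuously rather than at quarter-end. A minimal sketch, with a hypothetical 60% threshold and made-up rep names, assuming activity data is already reduced to a set of touched account ids:

```python
def coverage_report(books, touched, threshold=0.6):
    """Flag reps whose share of worked accounts falls below a threshold.

    books: rep name -> list of assigned account ids
    touched: set of account ids with any activity this period
    threshold: minimum acceptable coverage ratio (hypothetical default)
    """
    report = {}
    for rep, accounts in books.items():
        worked = sum(1 for a in accounts if a in touched)
        ratio = worked / len(accounts) if accounts else 0.0
        report[rep] = {"coverage": round(ratio, 2), "flag": ratio < threshold}
    return report

books = {"dana": ["a1", "a2", "a3", "a4"], "lee": ["b1", "b2"]}
touched = {"a1", "b1", "b2"}
report = coverage_report(books, touched)
```

A report like this mid-quarter turns "which high-fit accounts went untouched" from a post-mortem into a coaching conversation while there's still time to act.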
The right order of operations
If your outbound isn't converting the way it should, the diagnosis matters more than the fix.
Before you rewrite the sequences, before you buy more contact data, before you add another personalization variable, ask whether the accounts underneath all of that are actually worth contacting.
The order of operations for outbound that works:
- Score your account universe against your closed-won customers
- Build rep books from the top of that list — focused, not overwhelming
- Keep those books current as reps work and headcount changes
- Track coverage in real time so you know which high-fit accounts are getting worked and which aren't
- Then optimize messaging, personalization, and sequencing on top of that foundation
Automation on a strong account foundation generates pipeline. Automation on a weak account foundation generates activity. They look similar in the dashboard until you check pipeline.
What this means for RevOps
The conversation around GTM engineering and outbound automation often focuses on the technical layer: which tools to use, how to wire them together, how to build the most sophisticated enrichment workflows.
The harder question, and the more important one, is whether all of that infrastructure is pointed at accounts worth targeting.
That's a RevOps question. It's about account scoring methodology, territory design, book management rules, and coverage visibility. It's about building the system that ensures your automation, however sophisticated it may be, starts with the right foundation.
When that foundation is right, the results follow. When it isn't, you get technically impressive outbound that doesn't generate pipeline.
Fix layer 1 first. Everything else gets easier from there.
Gradient Works is the pipeline platform for commercial sales teams. We connect account scoring, territory planning, dynamic book management, and coverage analytics so RevOps teams can build the foundation that makes outbound work, before you automate anything on top of it. See how it works →


