
Lead scoring is cosplay: What actually predicts revenue now
Lead scoring used to feel like grown-up marketing.
A neat little system that turned chaos into order. A tidy number that told sales who to call first. A dashboard that made everyone feel like the funnel was being managed by competent adults.
And then real life happened.
Buying committees got bigger. Intent got noisier. Forms got optional. Cookies got nerfed. Inboxes got hostile. Sales cycles became less linear and more like a drunken treasure hunt.
Yet somehow, a lot of teams are still proudly running the same scoring model they built when people downloaded whitepapers for fun and marketing could pretend it “handed leads to sales” like a factory line.
That’s why lead scoring today is often cosplay.
Not because scoring is inherently bad, but because most scoring models are pretending the world works the way it did when the model was invented.
Why your lead score is confidently wrong
Most lead scoring systems break for three reasons.
First, they’re built on activities that are easy to track, not activities that predict revenue.
Email opens, page views, webinar attendance, “visited pricing page”, “downloaded asset”, “clicked CTA”. All observable. All measurable. Many only weakly tied to a buying decision.
Second, they assume the buyer is a single person moving through a funnel.
In reality, the person filling out the form is often not the person with budget. Sometimes they are not even the person with a problem. They might be a researcher, an intern, a manager asked to “look into it”, or someone collecting screenshots for an internal deck. Your model gives them 82 points and everyone panics, while the actual decision maker never touches your website.
Third, they confuse engagement with intent.
Engagement can be curiosity, education, boredom, or comparison shopping. Intent is “we have a problem, we are prioritising it, and we are moving towards a decision”.
Most scoring models treat the first as a proxy for the second. That’s the fundamental lie. If you’ve ever watched an account rack up score like a slot machine and then ghost you completely, you’ve seen this lie in the wild.
The hidden cost of lead scoring theatre
Bad scoring isn’t neutral. It doesn’t just fail quietly. It actively wastes time and damages trust.
Sales loses faith and starts ignoring anything marketing sends. Marketing then tries to “fix adoption” with enablement sessions, new dashboards, or another scoring tweak. That makes it worse, because the problem is not communication. The problem is the signal.
Meanwhile, truly winnable opportunities sit in the shadows because they don’t behave like your model expects. They don’t click the right emails. They don’t fill the right forms. They might come in through a partner. They might show up in pipeline because a rep already has a relationship. Your model shrugs and calls them “low score”.
And when leadership asks, “Why are we not converting more MQLs?”, the answer becomes a shrug wrapped in charts.
The goal isn’t a better score. The goal is better prioritisation. So let’s talk about what actually predicts revenue now.
What predicts revenue now: Fewer signals, better signals
Revenue prediction in B2B isn’t about counting more clicks. It’s about identifying the conditions that exist when a deal is genuinely likely to happen.
Those conditions are usually not individual behaviours. They’re patterns. And they’re often account-level, not lead-level.
Think in terms of three layers:
- Fit: Should this account buy from you, in a realistic universe?
- Readiness: Are they in a buying window, or just browsing?
- Momentum: Are they moving forward in a way that resembles real deals you’ve won?
Lead scoring usually over-indexes on the second layer, readiness, and even there mostly measures the wrong thing. The best predictors combine all three.
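Here’s the frame as a minimal Python sketch, just to make the gating explicit. Every field name and rule in it is an illustrative assumption, not a spec.

```python
# A minimal sketch of the three-layer frame. Field names and rules are
# illustrative assumptions, not a reference implementation.
from dataclasses import dataclass

@dataclass
class AccountSignal:
    fit: str                 # "strong" | "medium" | "weak": should they buy at all?
    in_buying_window: bool   # readiness: evidence of an active evaluation
    momentum: bool           # activity resembling sequences seen in won deals

def worth_prioritising(signal: AccountSignal) -> bool:
    # Fit acts as a gate: without it, readiness and momentum are noise.
    if signal.fit == "weak":
        return False
    return signal.in_buying_window and signal.momentum

# Example: a strong-fit account showing both readiness and momentum.
print(worth_prioritising(AccountSignal("strong", in_buying_window=True, momentum=True)))  # True
```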
Predictor 1: Verified ICP fit that sales actually agrees with
This sounds obvious. It’s not. Most teams have a “target customer” slide and a CRM full of everyone anyway.
Fit is still the strongest baseline predictor of revenue, but only if you define it like you mean it.
Fit is not “company size and industry”. That’s demographic cosplay, too.
Fit is: Do they have the problem you solve, at the scale you solve it, with the constraints you can handle?
If your scoring model can’t clearly separate “perfect fit but quiet” from “loud but wrong fit”, you’re going to keep feeding sales junk. Fit should be a gate. If fit is poor, you don’t “nurture harder”. You deprioritise and stop wasting time.
Predictor 2: Buying group emergence, not individual activity
Revenue happens when a group forms around a decision.
So the question is not “Did Jamie click the pricing page?” The question is “Is a buying group forming inside this account?”
Buying group emergence looks like:
- Multiple people engaging from the same domain within a short window.
- Engagement coming from different functions (for example, marketing plus ops plus leadership).
- One person’s activity causing another person to appear (forwarding, internal sharing, follow-on visits).
- Conversations that shift from “what is this?” to “how would this work for us?”
A single person binge-reading your blog can be a fan. Or a competitor. Or someone building a business case they will never get approved.
Three to six relevant people showing up within a month is the kind of pattern that starts to smell like revenue.
And no, this doesn’t require creepy tracking. Even with imperfect tracking, you can observe account-level patterns: Domains, meeting attendees, inbound sources, and the pace of interactions across contacts.
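The detection logic is simple enough to sketch. Here’s a rough Python version over a toy event log; the thresholds (three people, two functions, thirty days) are illustrative assumptions, not benchmarks.

```python
# A sketch of buying-group emergence detection from a simple event log.
# The event shape (domain, contact, function, timestamp) and all thresholds
# are assumptions to adapt to whatever your stack actually captures.
from datetime import datetime, timedelta

events = [
    {"domain": "acme.com", "contact": "jamie", "function": "marketing",  "ts": datetime(2024, 5, 1)},
    {"domain": "acme.com", "contact": "priya", "function": "ops",        "ts": datetime(2024, 5, 9)},
    {"domain": "acme.com", "contact": "lena",  "function": "leadership", "ts": datetime(2024, 5, 20)},
]

def buying_group_emerging(events, domain, window_days=30, min_people=3, min_functions=2):
    # Look back from the most recent interaction for this domain.
    latest = max(e["ts"] for e in events if e["domain"] == domain)
    cutoff = latest - timedelta(days=window_days)
    recent = [e for e in events if e["domain"] == domain and e["ts"] >= cutoff]
    people = {e["contact"] for e in recent}
    functions = {e["function"] for e in recent}
    return len(people) >= min_people and len(functions) >= min_functions

print(buying_group_emerging(events, "acme.com"))  # True: three people, three functions, one month
```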
Predictor 3: Problem intensity signals, not content consumption
Content consumption is often a lagging indicator of curiosity.
Problem intensity is closer to a leading indicator of action.
Problem intensity looks like:
- Operational disruption: Migration, re-org, new leadership, tool consolidation, compliance deadlines.
- Performance pressure: Pipeline targets missed, CAC creeping up, SDR efficiency dropping, conversion rates flat.
- Technical pressure: Systems breaking, data quality issues, workflow debt, integration failures.
- Internal urgency: Hiring for ops roles, firing agencies, changing tools, leadership mandates.
These signals rarely show up as “clicked email #3”.
They show up in conversations, in CRM notes, in support tickets, in inbound form fields, in job descriptions, and in the way prospects describe their situation.
If your model can’t ingest these, at least design your process to capture them when they appear. A simple “why now?” field that sales actually fills, plus a few required dropdowns about current state, can outperform 50 points of email clicks.
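And if you want to mine those free-text answers at all, even a crude keyword pass beats nothing. A rough sketch, with placeholder keyword lists you’d swap for the language your prospects actually use:

```python
# A sketch of tagging problem-intensity from free text (CRM notes, "why now?"
# answers). The keyword lists are illustrative assumptions; tune them to the
# phrases that actually show up in your conversations.
INTENSITY_SIGNALS = {
    "operational_disruption": ["migration", "re-org", "new leadership", "compliance deadline"],
    "performance_pressure":   ["missed target", "cac", "conversion flat", "pipeline gap"],
    "technical_pressure":     ["integration failing", "data quality", "workflow debt"],
    "internal_urgency":       ["hiring", "firing agency", "mandate", "changing tools"],
}

def tag_problem_intensity(note: str) -> list[str]:
    text = note.lower()
    return [category for category, phrases in INTENSITY_SIGNALS.items()
            if any(phrase in text for phrase in phrases)]

# Example: a "why now?" answer carrying two intensity signals.
print(tag_problem_intensity("Mid-migration, and a leadership mandate to consolidate tools by Q3"))
# ['operational_disruption', 'internal_urgency']
```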
Predictor 4: High-intent actions that cost the buyer something
A strong signal often has a cost. Not a monetary cost, but a time cost, a political cost, or a commitment cost.
High-intent actions include:
- Requesting a tailored demo (not a generic “learn more”).
- Bringing colleagues to a call.
- Asking about implementation, security, procurement, or contract terms.
- Sharing internal constraints and timelines.
- Asking for a proposal, SOW, or business case help.
- Engaging in mutual planning: Next steps with dates, not vibes.
These are harder to fake. They’re harder to do casually.
If your scoring model treats “webinar attended” as equal to “introduced their IT lead”, you’ve built a points costume, not a revenue predictor.
Predictor 5: Momentum patterns that match your won deals
Most teams score leads as if every deal moves the same way. But you already have the answer to “what predicts revenue”: it’s in your closed-won history.
Not as a generic attribution report. As a behavioural pattern. Take your last 30 closed-won deals and ask: What happened in the 30 to 90 days before the opportunity was created?
Look for common sequences like:
- Multi-contact engagement followed by a consult request.
- A spike in product-related page views followed by a stakeholder call.
- Partner referral plus leadership attendee on call one.
- Pricing conversation within two meetings of first contact.
- Security review triggered early, not late.
Then look at your last 30 closed-lost deals and ask: What did they do that looked promising but went nowhere?
You will often find patterns that your score currently rewards, even though they correlate with failure. That’s a fun day.
Momentum is not “more activity”. Momentum is “the right activity in the right order”.
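You can start finding those sequences with embarrassingly little code. Here’s a sketch that counts which ordered pairs of events recur before wins versus losses; the event names and toy data are assumptions that your CRM export replaces.

```python
# A sketch of mining pre-opportunity sequences from closed-won vs closed-lost
# deals. Event names, toy data, and the pairwise approach are assumptions;
# the point is to surface patterns your current score may be mis-rewarding.
from collections import Counter
from itertools import combinations

# Each deal: ordered events in the 30-90 days before opportunity creation.
won = [
    ["multi_contact", "pricing_view", "consult_request"],
    ["partner_referral", "multi_contact", "consult_request"],
]
lost = [
    ["webinar", "webinar", "pricing_view"],
]

def ordered_pairs(deals):
    counts = Counter()
    for events in deals:
        counts.update(set(combinations(events, 2)))  # ordered pairs, deduped per deal
    return counts

won_pairs, lost_pairs = ordered_pairs(won), ordered_pairs(lost)
# Pairs that recur before wins but rarely before losses are momentum candidates.
for pair, n in won_pairs.most_common(3):
    print(pair, f"won={n}", f"lost={lost_pairs[pair]}")
```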
Replace “lead scoring” with “pipeline readiness”
If you want a disruptive idea that actually works, stop calling it lead scoring. Call it pipeline readiness. This simple naming shift forces the right questions.
Pipeline readiness asks: Is this person or account likely to enter pipeline soon, and if they do, is it likely to progress? That pushes you away from vanity engagement and towards decision conditions.
Pipeline readiness is built from a small set of signals that you can defend in a room with sales leadership. And crucially, it’s not one number. It’s a simple classification that drives action.
For example:
- Not ready: Wrong fit or no buying window.
- Warming: Fit is strong, early buying group signals.
- Active: Clear buying window, high-intent actions present.
- Sales engaged: Meetings happening, mutual plan forming.
Give sales something they can understand without a training session.
Give marketing something they can improve without inventing new points.
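The whole classifier fits on a napkin, which is the point. A minimal sketch, with input fields standing in for whatever your CRM actually stores:

```python
# A minimal readiness classifier matching the four labels above. The input
# fields are assumptions standing in for your real CRM data.
def pipeline_readiness(fit: str, buying_window: bool, buying_group: bool,
                       high_intent: bool, meetings_happening: bool) -> str:
    if fit == "weak" or (not buying_window and not buying_group):
        return "not_ready"
    if meetings_happening:
        return "sales_engaged"
    if buying_window and high_intent:
        return "active"
    return "warming"

# Example: strong fit, early buying-group signals, no buying window yet.
print(pipeline_readiness("strong", buying_window=False, buying_group=True,
                         high_intent=False, meetings_happening=False))  # warming
```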
The scoring model you can actually run without hating your life
Here’s a practical approach that doesn’t require perfection.
Step 1: Set a “fit gate” that blocks nonsense
Create a fit classification based on a handful of fields that are stable:
- Segment (size band that matches your pricing and delivery).
- Use case match (the problem you actually solve).
- Environment match (tech, complexity, constraints).
- Exclusions (industries you don’t serve, geographies you can’t support, unrealistic budgets).
Fit should be a simple label: strong, medium, weak. If you can’t confidently label fit, default to medium, not strong. Strong should be earned.
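As logic, the gate is almost boring. Here’s a sketch with made-up rules; yours will differ, but keep the two behaviours that matter: exclusions are hard stops, and strong is earned, never defaulted.

```python
# A sketch of a fit gate. All rules and values are illustrative assumptions.
def fit_label(account: dict) -> str:
    excluded_industries = {"gambling", "crypto"}       # industries you don't serve (example)
    served_segments = {"mid_market", "enterprise"}     # size bands matching pricing and delivery
    core_use_cases = {"pipeline_ops", "lead_routing"}  # problems you actually solve

    # Exclusions are hard stops, not score deductions.
    if account.get("industry") in excluded_industries:
        return "weak"
    strong = (account.get("segment") in served_segments
              and account.get("use_case") in core_use_cases
              and account.get("environment_match", False))
    if strong:
        return "strong"
    # Strong is earned; anything ambiguous defaults to medium, not strong.
    return "medium" if account.get("segment") in served_segments else "weak"

print(fit_label({"industry": "saas", "segment": "mid_market",
                 "use_case": "pipeline_ops", "environment_match": True}))  # strong
```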
Step 2: Track buying group emergence at the account level
Stop pretending lead-level data alone can guide prioritisation.
Set up a rolling 14 to 30 day view of account engagement across contacts:
- Number of engaged contacts from the domain.
- Variety of roles engaged.
- Recency and frequency of meaningful interactions.
Meaningful interactions are not all clicks. Weight things that indicate effort: Form submissions, meeting requests, product documentation, implementation content, pricing, comparison pages, and replies.
If your tracking is imperfect, still do it. Imperfect account-level signals can outperform perfect lead-level vanity metrics.
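Once the events exist, the rolling view is a few lines of Python. A sketch, where the interaction weights and the 30-day window are assumptions to tune:

```python
# A sketch of a rolling account-engagement view. Weights and window are
# illustrative assumptions: effortful actions count, raw opens do not.
from datetime import datetime, timedelta

WEIGHTS = {"form_submission": 5, "meeting_request": 8, "pricing_view": 4,
           "docs_view": 3, "reply": 4, "email_open": 0}  # opens count for nothing

def account_view(interactions, domain, now, window_days=30):
    cutoff = now - timedelta(days=window_days)
    recent = [i for i in interactions if i["domain"] == domain and i["ts"] >= cutoff]
    return {
        "engaged_contacts": len({i["contact"] for i in recent}),
        "roles": sorted({i["role"] for i in recent}),
        "effort_score": sum(WEIGHTS.get(i["type"], 1) for i in recent),
    }

interactions = [
    {"domain": "acme.com", "contact": "jamie", "role": "marketing", "type": "pricing_view",    "ts": datetime(2024, 5, 28)},
    {"domain": "acme.com", "contact": "priya", "role": "ops",       "type": "meeting_request", "ts": datetime(2024, 6, 2)},
]
print(account_view(interactions, "acme.com", now=datetime(2024, 6, 5)))
```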
Step 3: Define 5 to 7 “high-intent” events and treat them as sacred
Pick a short list. No more than seven.
These should be actions that are clearly tied to revenue outcomes in your world.
Examples:
- Demo request with a real company email.
- Meeting booked that includes more than one attendee.
- Request for pricing, proposal, or security information.
- Reply that answers “why now?”
- Product trial activation plus meaningful usage milestone (if relevant).
Then design your process so these events trigger immediate, human follow-up. Not a nurture email. Not a “wait until they hit 100 points”.
If you can’t act on the event within a day, don’t pretend the score matters.
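In code, “sacred” just means: match the event, alert a human, skip the points. A minimal sketch; the event names and routing outcomes are assumptions:

```python
# A sketch of treating a short high-intent list as sacred: match the event,
# route to a human immediately, no points involved. Event names and routing
# outcomes are illustrative assumptions.
HIGH_INTENT_EVENTS = {
    "demo_request", "multi_attendee_meeting", "pricing_request",
    "security_request", "why_now_reply",
}
FREE_EMAIL_DOMAINS = {"gmail.com", "yahoo.com", "outlook.com"}

def handle_event(event_type: str, email: str) -> str:
    domain = email.split("@")[-1].lower()
    if event_type == "demo_request" and domain in FREE_EMAIL_DOMAINS:
        return "queue_for_review"   # demo requests need a real company email
    if event_type in HIGH_INTENT_EVENTS:
        return "alert_owner_now"    # human follow-up within a day, not a nurture email
    return "normal_processing"

print(handle_event("pricing_request", "priya@acme.com"))  # alert_owner_now
```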
Step 4: Bake “momentum” into your sales process, not just your dashboards
Momentum is often captured in conversation, not clicks.
So build lightweight capture into the workflow:
- A required field for timeline (even if it’s “unknown”).
- A dropdown for current solution or status quo.
- A simple “primary pain” field.
- A checkbox for “buying group identified” with a minimum of two named stakeholders.
This is not admin theatre. It’s the information you need to predict revenue.
If reps won’t fill it, that’s feedback: Either the fields are junk, or the process has no consequence. Fix that before you blame the CRM.
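One way to make the capture non-negotiable is to treat it as a typed record where “unknown” is a valid answer but blank is not. A sketch, with assumed field names and options:

```python
# A sketch of the lightweight capture fields as a validated record. Field
# names and dropdown options are illustrative assumptions.
from dataclasses import dataclass, field

TIMELINE_OPTIONS = {"this_quarter", "next_quarter", "this_year", "unknown"}

@dataclass
class OpportunityCapture:
    timeline: str           # required; "unknown" is a valid answer, blank is not
    current_solution: str   # dropdown: status quo, competitor, or nothing
    primary_pain: str       # short free text
    buying_group: list[str] = field(default_factory=list)  # named stakeholders

    def is_complete(self) -> bool:
        return (self.timeline in TIMELINE_OPTIONS
                and bool(self.current_solution)
                and bool(self.primary_pain)
                and len(self.buying_group) >= 2)  # minimum two named stakeholders

rec = OpportunityCapture("unknown", "spreadsheets", "routing chaos",
                         ["Priya (Ops)", "Lena (VP)"])
print(rec.is_complete())  # True
```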
The uncomfortable truth: The best predictor is still a good salesperson
Marketing Ops can build cleaner signals, better routing, and smarter prioritisation. But you cannot automate your way out of fundamental sales quality. If sales follow-up is slow, inconsistent, or purely transactional, no scoring model will save you.
If reps can’t diagnose pain, map stakeholders, and create urgency ethically, then the problem isn’t your score. It’s execution.
The goal of pipeline readiness is to make good sales teams faster and more consistent, not to create “hot leads” that close themselves.
So what should you do this week, not this quarter?
Kill anything that feels like scoring for scoring’s sake. Then do three practical moves.
First, audit your last 20 opportunities that became real pipeline and identify what happened immediately before they did. Not what your dashboards say. What actually happened.
Second, reduce your scoring inputs. If your model uses 40 signals, you are not sophisticated. You are overwhelmed.
Third, move from lead-level obsession to account-level readiness. If your business sells to buying committees and you are still scoring individuals like it’s 2014, you’re choosing to be wrong.
You don’t need a perfect model. You need a model you can defend, a process you can run, and signals that match how revenue actually happens now.
Because the job isn’t to create high-scoring leads.
It’s to create deals.