
How to Build an ICP That Actually Predicts Revenue

Most ICPs are assumptions. A modeled ICP is built on outcomes.

Most ideal customer profiles are built on assumptions: the team picks a few firmographic filters, lists their best-known customers, and calls it an ICP. That's a targeting filter, not a predictive model. A real ICP is built on outcome data, modeled against which customers actually expanded, retained, and drove the most lifetime value. GoodWork builds modeled ICPs for growth-stage B2B companies using machine learning trained on actual revenue outcomes, not conference room assumptions.

What Is an Ideal Customer Profile?

An ideal customer profile (ICP) is a data-driven model that defines the characteristics most likely to predict customer success, measured in revenue outcomes like expansion rate, retention rate, and lifetime value. It goes beyond firmographic targeting (industry, company size, title) to incorporate behavioral signals, technology composition, growth trajectory, organizational structure, and dozens of other attributes that only surface when you model the data against actual results.

The key word is "predicts." Most ICPs describe the customers the team already knows. A modeled ICP identifies the characteristics that predict future value based on patterns in the data, including patterns the team never would have spotted intuitively.

Only 42% of B2B companies have a formally documented ICP, and far fewer have validated theirs against actual revenue outcomes. The gap between "we have an ICP" and "we have a validated, predictive ICP" is where most of the value sits.

Why Do Most ICPs Fail to Predict Anything?

The standard ICP exercise goes like this. The team sits in a room. Someone names the ten best customers off the top of their head. The group identifies commonalities: "they're all mid-market SaaS companies," "they all have 200-500 employees," "the buyer was always a VP of Sales." Those commonalities become the ICP. Marketing uses it for targeting. Sales uses it for qualification. Done.

The problem is threefold.

First, survivorship bias. The customers the team remembers are the ones that worked out. Nobody's modeling against the customers that churned, contracted, or never expanded. A real ICP needs to understand what predicts bad outcomes as much as what predicts good ones.

Second, small sample sizes. Ten customers in a room is not a model. It's anecdote. The characteristics that appear to matter across ten accounts often disappear when you analyze hundreds. And the characteristics that actually predict value across the full customer base are often ones nobody in the room would have guessed.

Third, static assumptions. The ICP document goes stale the moment the business changes. New products launch. The customer mix shifts. The market evolves. A document written six months ago reflects a reality that no longer exists.

What Does a Modeled ICP Look Like?

A modeled ICP is built by analyzing your entire customer base against actual outcomes. Not "who do we think our best customers are" but "which characteristics, across every data point we have, correlate with expansion, retention, and lifetime value."

The inputs go far beyond what's in the CRM. A modeled ICP incorporates firmographic data (industry, company size, revenue, growth rate), technographic data (what tools they use, what stack they run), behavioral signals (product usage patterns, engagement velocity, support interactions), enrichment data (hiring patterns, funding events, technology changes, digital maturity indicators), and outcome data (which customers expanded, which churned, which drove the highest LTV, and when).

The machine learning does something humans can't: it finds non-obvious combinations that predict outcomes. A specific technology stack plus a specific growth trajectory plus a specific organizational structure might predict expansion with 80% confidence, but no human would have identified that combination by looking at ten accounts in a conference room.

The output isn't a slide with four bullet points. It's a scoring model that rates every contact in your CRM against the patterns that actually predict revenue. And it updates continuously as new outcomes flow in.
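To make the idea of a scoring model concrete, here is a toy sketch in Python. The feature names and weights are entirely hypothetical, standing in for coefficients a trained model would learn from historical expansion and retention outcomes; a production model would learn these from the data rather than hard-code them.

```python
# Toy ICP fit score. The weights stand in for coefficients a trained
# model would learn from outcome data (expansion, retention, LTV).
# All feature names and weights here are hypothetical.

WEIGHTS = {
    "uses_target_stack": 0.35,    # technographic signal
    "headcount_growth": 0.25,     # growth trajectory
    "has_revops_team": 0.20,      # organizational structure
    "engagement_velocity": 0.20,  # behavioral signal
}

def icp_fit_score(contact: dict) -> int:
    """Return a 0-100 fit score from normalized feature values (0.0-1.0)."""
    raw = sum(WEIGHTS[f] * contact.get(f, 0.0) for f in WEIGHTS)
    return round(raw * 100)

# A contact whose company matches the modeled profile scores high even
# with modest engagement; a poor-fit contact scores low despite activity.
high_fit = {"uses_target_stack": 1.0, "headcount_growth": 0.8,
            "has_revops_team": 1.0, "engagement_velocity": 0.5}
low_fit = {"uses_target_stack": 0.0, "headcount_growth": 0.2,
           "has_revops_team": 0.0, "engagement_velocity": 0.9}

print(icp_fit_score(high_fit))  # -> 85
print(icp_fit_score(low_fit))   # -> 23
```

The point of the sketch is the shape of the output: a number on every contact record, recomputed as the underlying coefficients are retrained against new outcomes.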

How Does ICP Scoring Change Daily Decisions?

An ICP document sitting in a shared drive changes nothing. A score on every contact record in Salesforce changes everything.

Organizations with a strong ICP achieve 68% higher account win rates and 25-35% shorter sales cycles compared to those relying on basic targeting filters. The mechanism is straightforward: when every contact carries a fit score, the team stops wasting time on accounts that will never convert and starts investing in the accounts with the highest probability of success.

Here's what it looks like operationally:

  • Lead routing uses ICP scores to send the highest-fit inbound leads to the best reps, immediately. No more round-robin routing that treats a perfect-fit lead the same as a marginal one
  • Outbound targeting uses the model to identify which prospects in the market match the ICP, before they ever raise their hand. Instead of filtering by industry and size, the team prospects against the full model
  • Expansion prioritization uses ICP scores at the contact level to identify which existing customers match the profile of past expanders. The expansion motion targets specific accounts, not the full customer base
  • Campaign segmentation uses modeled segments to send the right message to the right audience. Marketing isn't segmenting by industry anymore. They're segmenting by predicted behavior
  • Resource allocation uses scores to determine where CS invests. High-ICP-fit accounts with expansion signals get proactive investment. Low-fit accounts get efficient self-serve treatment

The ICP only matters if it changes behavior. A score on every record is how it changes behavior.
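The lead-routing bullet above can be sketched in a few lines of Python. The threshold, rep names, and round-robin policy are illustrative assumptions, not anything the article prescribes; the point is that a fit score turns routing from a blind rotation into a prioritized one.

```python
# Toy score-based lead routing: high-fit leads skip the general
# round-robin and go to senior reps. Threshold and rep names are
# hypothetical; a real system would read these from CRM config.

SENIOR_REPS = ["ava", "ben"]
ALL_REPS = ["ava", "ben", "cam", "dee"]
HIGH_FIT_THRESHOLD = 80

def route(lead: dict, rr_state: dict) -> str:
    """Assign a rep by ICP score; each tier keeps its own round-robin."""
    if lead["icp_score"] >= HIGH_FIT_THRESHOLD:
        rep = SENIOR_REPS[rr_state["senior"] % len(SENIOR_REPS)]
        rr_state["senior"] += 1
    else:
        rep = ALL_REPS[rr_state["all"] % len(ALL_REPS)]
        rr_state["all"] += 1
    return rep

state = {"senior": 0, "all": 0}
print(route({"icp_score": 92}, state))  # -> "ava" (senior tier)
print(route({"icp_score": 41}, state))  # -> "ava" (general rotation)
print(route({"icp_score": 88}, state))  # -> "ben" (senior tier)
```

A perfect-fit lead and a marginal one no longer land in the same queue, which is exactly the behavior change the score exists to create.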

Why Does a Static ICP Create False Confidence?

A static ICP is arguably worse than no ICP at all, because it creates confidence in targeting that may no longer be accurate.

Consider a company that built its ICP two years ago around mid-market financial services firms with 200-500 employees. Since then, it has launched a new product that appeals to a different buyer. Its best expansion customers are now healthcare companies with 50-200 employees. Its highest-LTV segment shifted from financial services to professional services. But the ICP document still says "mid-market financial services," so marketing keeps targeting them, sales keeps qualifying against them, and the team wonders why conversion rates are declining.

A modeled ICP updates as the customer base evolves. Every new customer who converts, expands, or churns refines the model. The ICP in Q1 is slightly different from the ICP in Q3 because the data has evolved. That's not instability. That's accuracy. The business is changing, and the ICP should change with it.

What's the Difference Between ICP Scoring and Lead Scoring?

This distinction matters because most companies already have a lead score and assume it's doing the same job.

Lead scoring is typically based on engagement signals: email opens, website visits, content downloads, webinar attendance. It measures interest. A lead who downloaded three whitepapers and attended a demo gets a high lead score.

ICP scoring is based on fit signals: does this contact match the characteristics of customers who actually drove revenue? A lead who downloaded nothing but works at a company that perfectly matches your expansion profile might have a low lead score and a very high ICP score. That's the lead your team should be calling.

The best systems use both, but they're measuring fundamentally different things. Lead scoring answers "is this person interested?" ICP scoring answers "if this person converts, will they be a good customer?" Interest without fit produces pipeline that closes but churns. Fit without interest requires a different activation strategy. Understanding the difference changes how you allocate resources across the funnel.
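Crossing the two scores yields a simple two-by-two that makes the distinction operational. The cutoffs and action labels below are illustrative assumptions, not GoodWork's actual playbook:

```python
# Fit (ICP score) x interest (lead score) -> four different plays.
# Thresholds and action labels are hypothetical examples.

FIT_CUT, INTEREST_CUT = 70, 50

def next_action(icp_score: int, lead_score: int) -> str:
    high_fit = icp_score >= FIT_CUT
    high_interest = lead_score >= INTEREST_CUT
    if high_fit and high_interest:
        return "route to sales now"
    if high_fit:
        return "proactive outbound"  # fit without interest: activate it
    if high_interest:
        return "qualify carefully"   # interest without fit: churn risk
    return "nurture / self-serve"

print(next_action(90, 20))  # downloaded nothing, perfect fit -> "proactive outbound"
print(next_action(30, 95))  # three whitepapers, poor fit -> "qualify carefully"
```

The high-fit, low-interest cell is the one most lead-scoring systems bury: that's the lead the article says your team should be calling.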

How Do You Validate Whether Your ICP Is Working?

A modeled ICP should be measurable. Here are the signals that tell you whether it's predictive:

Do high-ICP-score leads convert at a meaningfully higher rate than low-ICP-score leads? If the score isn't separating outcomes, the model needs refinement.

Do high-ICP-score customers retain and expand at higher rates? If ICP fit predicts acquisition but not retention, the model is missing post-sale signals.

Does the team actually use the scores? If reps aren't referencing ICP scores in their daily prioritization, the model might be accurate but not operationalized. Intelligence that doesn't change behavior doesn't create value.

Are the model's predictions surprising? If the ICP just confirms what the team already believed, it's probably built on the team's assumptions rather than the data. A good model should reveal segments and characteristics that nobody would have guessed, because that's where the value is.
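The first validation check above, whether high-score leads convert at a meaningfully higher rate, reduces to a lift calculation. A minimal sketch, with invented data and an assumed score cutoff:

```python
# Toy validation: do high-ICP-score leads convert at a higher rate
# than low-score leads? The leads and the cutoff of 70 are invented
# purely for illustration.

def conversion_rate(leads: list) -> float:
    return sum(l["converted"] for l in leads) / len(leads)

def score_lift(leads: list, cut: int = 70) -> float:
    """Conversion-rate ratio of high-score vs low-score cohorts."""
    high = [l for l in leads if l["icp_score"] >= cut]
    low = [l for l in leads if l["icp_score"] < cut]
    return conversion_rate(high) / conversion_rate(low)

leads = (
    [{"icp_score": 85, "converted": True}] * 3   # high cohort: 3/5 convert
    + [{"icp_score": 85, "converted": False}] * 2
    + [{"icp_score": 40, "converted": True}] * 1  # low cohort: 1/5 convert
    + [{"icp_score": 40, "converted": False}] * 4
)

print(round(score_lift(leads), 1))  # -> 3.0
```

A lift near 1.0 means the score isn't separating outcomes and the model needs refinement; the same ratio can be computed for retention and expansion rates to run the second check.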

Key Takeaways

  • Most ICPs are targeting filters built on assumptions and survivorship bias. A modeled ICP is built on machine learning trained against actual revenue outcomes across the full customer base
  • Only 42% of B2B companies have a formally documented ICP, and far fewer have validated theirs against outcome data. The gap is where most of the targeting value sits
  • Organizations with a strong ICP achieve 68% higher win rates and 25-35% shorter sales cycles. The mechanism is a fit score on every contact record that changes daily prioritization
  • GoodWork builds modeled ICPs using real data science, incorporating firmographic, technographic, behavioral, and enrichment signals that go far beyond conference room assumptions
  • A static ICP document creates false confidence. A modeled ICP updates continuously as the customer base evolves, ensuring the targeting reflects current reality
  • The difference between ICP scoring and lead scoring is the difference between "is this person interested?" and "if this person converts, will they be a good customer?" Both matter. They measure different things
"GoodWork has changed how we identify and prioritize growth at PatientNow. We now have a clear, signal-driven view of which segments create the most value, what indicates real buyer expansion opportunities, and where we should focus our growth strategy and product roadmap. Instead of relying on assumptions, our teams can execute with precision and align around a shared understanding of our customer. GoodWork has become central to how we allocate resources, focus our strategy, and drive growth."
Bridget Winston
Chief Revenue Officer, PatientNow
"GoodWork has transformed how we understand our member ecosystem. We now have clarity on exactly where to focus our efforts and can identify underserved member segments that represent real growth opportunities. This insight helps us provide the best possible experience—not just for our members, but for our internal teams who now have the data they need to make confident decisions. The visibility into member patterns has been game-changing for strategic prioritization."
Sabrina Caluori
Chief Marketing Officer, Chief
"GoodWork gave our team a clearer, faster way to activate demand. Marketing and sales now share one view of which accounts matter most — and the context behind every lead. We can see when former buyers show up at new companies, enrich inbound and event lists automatically, and tailor outreach with precision. It's improved our focus, our handoffs, and the overall speed of how we grow."
Larisa Summers
SVP Marketing, Documo