Most RevOps dashboards are crowded with metrics nobody acts on. We watch ops teams pick apart MQL-to-SQL conversion week after week while CAC payback quietly extends from 18 months to 28. The metrics that drive revenue are usually the boring ones: forecast accuracy, NRR, pipeline coverage. The flashy ones get watched instead, and the two lists rarely overlap.
This article sorts RevOps metrics into five working categories, with benchmarks pulled from Forrester (formerly SiriusDecisions), Gartner, the Salesforce State of Sales report, ChartMogul, Bessemer Venture Partners, OpenView, KeyBanc Capital Markets, and Pavilion. Each section opens with a definition, names the formula, gives a benchmark range, and flags when the metric turns into theater. If you only have time for one section, skip to the vanity-versus-revenue filter near the end. That's the part that changes how you spend Monday mornings.
For the broader picture of how these numbers fit into a working RevOps function, the complete guide to revenue operations is the hub piece. For team design that supports these metrics, see how to structure a revenue operations team.
What RevOps metrics actually predict revenue?
RevOps metrics that predict revenue cluster around four signals: pipeline coverage against quota, conversion rates by segment, net revenue retention, and forecast accuracy. Everything else either feeds these four or tracks the data behind them. The 2024 Salesforce State of Sales report found that 81% of sales teams describe their forecasts as "somewhat or very inaccurate," which is the cleanest evidence that most RevOps stacks are measuring the wrong things.
The table below groups the working metrics by category, lists the formula, and gives a mid-market benchmark range. Treat it as a menu. A growth-stage RevOps function doesn't need every row. It needs the right twelve.
| Category | Metric | Formula | Benchmark (mid-market) | Source |
|---|---|---|---|---|
| Pipeline | Pipeline coverage | Open pipeline / quarter quota | 3x-4x for mid-market B2B | Forrester (SiriusDecisions) |
| Pipeline | Pipeline velocity | (# opps x avg deal size x win rate) / avg sales cycle | Trend over absolute number | Salesforce |
| Pipeline | Pipeline-to-quota ratio | Created pipeline / period quota | 4x-5x for steady growth | Pavilion |
| Pipeline | Time in stage | Days an opp sits in a stage | Within 1.5x stage median | Gartner |
| Conversion | Lead-to-MQL | Qualified leads / total leads | 25%-35% for B2B SaaS | Forrester |
| Conversion | MQL-to-SQL | Sales-accepted / marketing-qualified | 35%-50% | Forrester |
| Conversion | SQL-to-opportunity | Opps created / SQLs accepted | 50%-70% | Salesforce |
| Conversion | Opp-to-won | Closed-won / total closed opps | 20%-30% blended | Salesforce |
| Conversion | Win rate by segment | Won / closed in segment | Varies; SMB 25%, ENT 15% | KeyBanc |
| Revenue | Bookings | Signed contract value in period | Trend metric | ChartMogul |
| Revenue | ARR | Annualized recurring revenue | Trend metric | Bessemer |
| Revenue | Net revenue retention | (Starting ARR + expansion - churn - contraction) / starting ARR | 110%+ best-in-class SaaS | Bessemer |
| Revenue | Gross revenue retention | (Starting ARR - churn - contraction) / starting ARR | 90%+ healthy SaaS | KeyBanc |
| Revenue | LTV | Avg ACV x gross margin / churn rate | LTV/CAC of 3x+ | OpenView |
| Revenue | CAC | S&M spend / new customers acquired | Industry-specific | Bessemer |
| Revenue | CAC payback | CAC / (avg ARR per customer x gross margin) | Under 18 months for SaaS | Bessemer |
| Revenue | Magic number | (Net new ARR x 4) / S&M spend prior quarter | Above 0.75 efficient, 1.0+ excellent | KeyBanc |
| Forecast | Forecast accuracy | 1 - (|actual - forecast| / actual) | 85%+ for healthy ops | Gartner |
| Forecast | Slip rate | Opps slipped to next period / committed opps | Under 15% | Salesforce |
| Forecast | Commit vs best-case ratio | Commit forecast / best-case forecast | 60%-75% typical band | Pavilion |
| Hygiene | Stale opportunities | Opps with no activity in 30 days | Under 10% of pipeline | Salesforce |
| Hygiene | Missing-field rate | Records with required fields blank | Under 5% | Gartner |
| Hygiene | Duplicate-account rate | Duplicate records / total accounts | Under 2% | Forrester |
What are the most important pipeline metrics?
Pipeline metrics measure whether the top of the funnel is producing enough qualified opportunity volume to hit quota, and whether the deals already inside are moving at a healthy rate. Four metrics carry most of the diagnostic load: pipeline coverage, pipeline velocity, pipeline-to-quota, and time in stage.
Pipeline coverage
Pipeline coverage is the ratio of open pipeline value to the quota for the period that pipeline is meant to close. Formula: total open opp value in the period / period quota. Forrester (formerly SiriusDecisions) has long pegged the working benchmark at 3x for B2B mid-market, with 4x preferred for sales teams running below 30% win rates. The reason coverage works is arithmetic. If you close 25% of pipeline and you need $10M in revenue, you need $40M in open pipeline. There's no way around the math. The trap is generating pipeline that includes bad-fit accounts to pad the ratio. A coverage number that has never been audited against win rate by source is fiction.
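The coverage arithmetic can be sketched in a few lines. This is a minimal illustration; the function names are my own, not from any RevOps tool:

```python
def required_pipeline(revenue_target: float, win_rate: float) -> float:
    """Open pipeline needed to hit a revenue target at a given pipeline win rate."""
    return revenue_target / win_rate

def coverage_ratio(open_pipeline: float, quota: float) -> float:
    """Pipeline coverage: open pipeline value over period quota."""
    return open_pipeline / quota

# The example from the text: a 25% win rate against a $10M target
needed = required_pipeline(10_000_000, 0.25)  # $40M of open pipeline required
cover = coverage_ratio(needed, 10_000_000)    # 4.0x coverage
```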
Pipeline velocity
Pipeline velocity is how fast revenue is moving through the funnel, expressed as one number. Formula: (number of opportunities x average deal size x win rate) / average sales cycle length, in days. Salesforce promotes it as a unifying KPI because every input is independently fixable. If velocity drops, you can isolate which lever changed. The number itself is less useful than the trend. A team running $42,000 per day this quarter versus $51,000 last quarter has a problem to investigate, not a problem to compare against an industry average.
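A sketch of the velocity formula, with hypothetical inputs chosen to land on the $42,000-per-day figure above:

```python
def pipeline_velocity(num_opps: int, avg_deal_size: float,
                      win_rate: float, avg_cycle_days: float) -> float:
    """Revenue moving through the funnel per day."""
    return (num_opps * avg_deal_size * win_rate) / avg_cycle_days

# Hypothetical quarter: 120 opps, $35k average deal, 25% win rate, 25-day cycle
this_q = pipeline_velocity(120, 35_000, 0.25, 25)  # 42,000 per day
```

Because every input is a separate lever, a drop in the output can be traced to whichever argument moved.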
Pipeline-to-quota ratio
Pipeline-to-quota tracks how fast new pipeline is being created relative to the next quota cycle. Formula: pipeline created in a period / quota for the next period. Pavilion's RevOps community guidance places healthy pipeline-to-quota at 4x to 5x for SaaS companies running steady growth, and 6x or higher for companies in capacity-add mode. Coverage describes inventory; pipeline-to-quota describes how fast new inventory is showing up. If coverage is healthy but pipeline-to-quota is below 3x, the funnel is coasting on what's already there and will run dry inside two quarters.
Time in stage
Time in stage is how long an opportunity sits at each stage of the pipeline. Formula: days between stage entry and stage exit. Gartner's 2024 sales analytics guidance recommends flagging any opportunity sitting at 1.5x the stage median as a slip risk. The metric matters because slips have a strong predictive signal. An opportunity in stage four for 80 days when the median is 30 has a much lower close probability than the CRM probability field shows. Most CRMs don't surface this without configuration. The work is worth it.
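Here is one way the 1.5x-median flag could be computed from a CRM export, assuming a simple list-of-dicts shape for opportunities (hypothetical data and field names):

```python
from statistics import median

def slip_risks(opps: list, threshold: float = 1.5) -> list:
    """IDs of opps sitting longer than threshold x their stage's median."""
    by_stage: dict = {}
    for o in opps:
        by_stage.setdefault(o["stage"], []).append(o["days_in_stage"])
    medians = {s: median(d) for s, d in by_stage.items()}
    return [o["id"] for o in opps
            if o["days_in_stage"] > threshold * medians[o["stage"]]]

opps = [
    {"id": "A", "stage": 4, "days_in_stage": 80},  # 80 > 1.5 x 30 -> flagged
    {"id": "B", "stage": 4, "days_in_stage": 30},
    {"id": "C", "stage": 4, "days_in_stage": 25},
]
flagged = slip_risks(opps)  # only opp A breaches the threshold
```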
Which conversion metrics matter most?
Conversion metrics measure how much volume from one stage of the funnel makes it to the next, segment by segment. The five that drive decisions are lead-to-MQL, MQL-to-SQL, SQL-to-opportunity, opportunity-to-won, and win rate by segment. The aggregate funnel rate is almost always misleading; the segmented version is where the real picture lives.
Lead-to-MQL conversion
Lead-to-MQL is the rate at which raw inbound leads become marketing-qualified leads. Formula: MQLs / total leads in the same cohort. Forrester benchmarks B2B SaaS lead-to-MQL at 25% to 35%. The lower end is common in companies running broad demand-gen, the upper end in tighter ICP-matched programs. Below 20% usually points to misaligned targeting at the top of the funnel. Above 40% means the MQL criteria are too lenient.
MQL-to-SQL conversion
MQL-to-SQL measures how many marketing-qualified leads are accepted as sales-qualified by the SDR or AE team. Formula: SQLs / MQLs. Forrester data places the typical band at 35% to 50%. This is the metric most often watched without context. A 60% MQL-to-SQL with a 5% close rate is much worse than a 30% MQL-to-SQL with a 25% close rate. The point of the metric is to expose handoff friction between marketing and sales, not to drive a contest.
SQL-to-opportunity conversion
SQL-to-opportunity tracks how many sales-qualified leads turn into formal pipeline. Formula: opportunities created / SQLs in the same cohort. Salesforce State of Sales benchmarks place this at 50% to 70% for healthy B2B funnels. A drop here usually means SQLs are being declared too early. The fix is rarely "train SDRs harder." It's usually a tighter SQL definition.
Opportunity-to-won and win rate by segment
Opportunity-to-won is the share of opportunities that close as won. Formula: closed-won / total closed. Salesforce reports B2B blended win rates of 20% to 30% for mid-market and 15% to 22% for enterprise. The blended number hides what's underneath. KeyBanc Capital Markets' 2024 SaaS Survey separates win rates by segment and finds SMB win rates often 10 to 15 points higher than enterprise. If your single dashboard win rate is 22%, you don't actually know what your win rate is. Cut by segment, lead source, and competitive presence to find the real number.
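A minimal sketch of the segment cut, with made-up deals that show how a blended rate hides the SMB/enterprise gap:

```python
def win_rate_by_segment(deals: list) -> dict:
    """Closed-won over total closed, cut by segment.

    deals: (segment, outcome) pairs where outcome is "won" or "lost".
    """
    totals: dict = {}
    wins: dict = {}
    for segment, outcome in deals:
        totals[segment] = totals.get(segment, 0) + 1
        if outcome == "won":
            wins[segment] = wins.get(segment, 0) + 1
    return {s: wins.get(s, 0) / n for s, n in totals.items()}

deals = [("SMB", "won"), ("SMB", "won"), ("SMB", "lost"), ("SMB", "lost"),
         ("ENT", "won"), ("ENT", "lost"), ("ENT", "lost"), ("ENT", "lost")]
rates = win_rate_by_segment(deals)  # SMB 0.50, ENT 0.25; blended would read 0.375
```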
What revenue metrics matter for growth-stage companies?
Revenue metrics measure the dollar output of the system. The eight worth tracking for growth-stage companies are bookings, ARR, net revenue retention, gross revenue retention, customer lifetime value, customer acquisition cost, CAC payback period, and the magic number. NRR and CAC payback are the two that most accurately predict valuation. Boards know that. Operators don't always behave like they do.
Bookings and ARR
Bookings is the total contract value signed in a period. Formula: sum of new contract value in period. ARR is the annualized recurring revenue at a point in time, the contract value extrapolated to a year. ChartMogul's 2024 SaaS metrics framework treats bookings as the leading indicator and ARR as the anchor for valuation modeling. The common error is conflating them. A $1.2M annual contract is $1.2M in bookings the day it signs. The ARR contribution is also $1.2M, but realized revenue follows over 12 months. Reporting both on the same line is how RevOps teams confuse boards.
Net revenue retention and gross revenue retention
Net revenue retention is the percentage of recurring revenue retained from existing customers, including expansion, after churn and contraction. Formula: (starting ARR + expansion - churn - contraction) / starting ARR. Bessemer Venture Partners benchmarks place best-in-class SaaS NRR at 110% or higher, with public SaaS leaders such as Snowflake and Datadog historically reporting NRR above 130%. Gross revenue retention is the same calculation with expansion stripped out. Formula: (starting ARR - churn - contraction) / starting ARR. KeyBanc's 2024 SaaS Survey places healthy GRR at 90% or higher for SMB-focused SaaS and 95% or higher for enterprise.
NRR is the single metric that most cleanly separates good SaaS businesses from average ones. A company with 100% NRR has to acquire a new customer for every dollar of growth. A company with 120% NRR grows 20% before any new logos sign. Boards model from this number, and RevOps teams should build their dashboards around it.
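The two retention formulas side by side, applied to a hypothetical $10M book of business:

```python
def nrr(starting_arr: float, expansion: float,
        churn: float, contraction: float) -> float:
    """Net revenue retention: expansion included."""
    return (starting_arr + expansion - churn - contraction) / starting_arr

def grr(starting_arr: float, churn: float, contraction: float) -> float:
    """Gross revenue retention: the same calculation with expansion stripped out."""
    return (starting_arr - churn - contraction) / starting_arr

# Hypothetical book: $10M starting ARR, $1.5M expansion,
# $300k churned, $200k contracted
book_nrr = nrr(10_000_000, 1_500_000, 300_000, 200_000)  # 1.10 -> 110% NRR
book_grr = grr(10_000_000, 300_000, 200_000)             # 0.95 -> 95% GRR
```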
LTV and CAC
Lifetime value is the projected revenue a customer generates across the relationship. Formula: average ACV x gross margin / annual churn rate. Customer acquisition cost is the fully loaded sales and marketing spend to acquire one new customer. Formula: (sales spend + marketing spend) / new customers acquired in the same period. OpenView's 2024 SaaS benchmark places healthy LTV/CAC at 3x or higher for SaaS. Under 1x means the business is destroying value with every new customer. Both metrics are sensitive to how you allocate costs. Including or excluding customer success in the CAC denominator changes the number meaningfully. Pick a method and hold it constant across periods.
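Both formulas in code, on hypothetical inputs; the cost-allocation caveat above applies to whatever goes into the CAC numerator:

```python
def ltv(avg_acv: float, gross_margin: float, annual_churn_rate: float) -> float:
    """Lifetime value: margin-adjusted ACV over annual churn."""
    return avg_acv * gross_margin / annual_churn_rate

def cac(sales_spend: float, marketing_spend: float, new_customers: int) -> float:
    """Fully loaded S&M spend per new customer."""
    return (sales_spend + marketing_spend) / new_customers

# Hypothetical: $30k ACV, 80% gross margin, 10% annual churn,
# $6M total S&M spend for 100 new logos
life_value = ltv(30_000, 0.80, 0.10)        # ~240,000 per customer
acq_cost = cac(4_000_000, 2_000_000, 100)   # 60,000 per customer
ltv_to_cac = life_value / acq_cost          # ~4.0, above the 3x bar
```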
CAC payback and the magic number
CAC payback is the months required to earn back the cost of acquiring a customer. Formula: CAC / (average ARR per customer x gross margin), multiplied by 12 to express the result in months. Bessemer's State of the Cloud reports place healthy SaaS CAC payback under 18 months, with best-in-class under 12 and anything above 24 worth a second look. The magic number is the per-quarter measure of sales efficiency. Formula: (net new ARR in quarter x 4) / sales and marketing spend in the prior quarter. KeyBanc's 2024 SaaS Survey treats above 0.75 as efficient growth and above 1.0 as excellent. Below 0.5 means the company is spending more than two dollars to generate a dollar of recurring revenue.
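A sketch of both calculations on hypothetical numbers; the payback helper converts the annual ratio to months:

```python
def cac_payback_months(cac: float, avg_arr_per_customer: float,
                       gross_margin: float) -> float:
    """CAC / (avg ARR x gross margin) gives years; x12 converts to months."""
    return cac / (avg_arr_per_customer * gross_margin) * 12

def magic_number(net_new_arr_quarter: float,
                 prior_quarter_sm_spend: float) -> float:
    """Annualized net new ARR per dollar of prior-quarter S&M spend."""
    return (net_new_arr_quarter * 4) / prior_quarter_sm_spend

# Hypothetical: $60k CAC, $48k ARR per customer at 80% margin;
# $1M net new ARR on $5M of prior-quarter S&M spend
payback = cac_payback_months(60_000, 48_000, 0.80)  # ~18.75 months: borderline
magic = magic_number(1_000_000, 5_000_000)          # 0.8: efficient, not excellent
```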
Together, these two metrics explain why a healthy-looking funnel can be unhealthy underneath. We've watched companies celebrate growing pipeline and bookings while CAC payback drifted from 18 to 28 months. Volume looked fine. Unit economics were quietly falling apart.
How accurate should sales forecasts be?
Forecast accuracy is how close forecasted revenue lands to actual revenue closed in a period. Formula: 1 minus the absolute value of (actual revenue minus forecasted revenue) divided by actual revenue. Gartner's 2024 sales operations research places healthy forecast accuracy at 85% or higher, with 90% reserved for highly disciplined sales orgs. The Salesforce State of Sales report finds 81% of sales teams describing forecasts as "somewhat or very inaccurate," which means most teams sit well below the 85% line.
Three forecast metrics matter beyond top-line accuracy. Slip rate is the percentage of committed opportunities that move out of the period. Formula: slipped opps / committed opps. Healthy slip rate is under 15%. Above 25% means commits are being placed under sales pressure rather than buyer signal. Commit-versus-best-case-versus-pipeline ratios describe the shape of the forecast. Pavilion guidance places typical commit-to-best-case at 60% to 75% and best-case-to-pipeline at 30% to 50%. If commit equals best case equals pipeline, the categories aren't being used. If commit is 90% of best case and the team consistently misses, commits are inflated.
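The accuracy and slip formulas as code, on hypothetical quarter numbers:

```python
def forecast_accuracy(actual: float, forecast: float) -> float:
    """1 - |actual - forecast| / actual."""
    return 1 - abs(actual - forecast) / actual

def slip_rate(slipped_opps: int, committed_opps: int) -> float:
    """Share of committed opps that moved out of the period."""
    return slipped_opps / committed_opps

# Hypothetical quarter: $10M closed against a $9.2M forecast,
# 6 of 50 committed deals slipped
acc = forecast_accuracy(actual=10_000_000, forecast=9_200_000)  # 0.92: healthy
slips = slip_rate(slipped_opps=6, committed_opps=50)            # 0.12: under 15%
```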
The forecast metric that gets ignored most is the variance pattern. A team that misses by exactly 5% every quarter is far healthier than one that hits one quarter and misses the next by 20%. Consistency is a stronger signal than accuracy on any single period.
Why does CRM hygiene matter for RevOps?
CRM hygiene metrics measure the quality of the data feeding every other RevOps metric. The three that matter are stale opportunities, missing-field rate, and duplicate-account rate. Hygiene stays invisible until a forecast misses by 30%, or a campaign blasts the same prospect twice, or a renewal slips because the account record listed the wrong owner.
Stale opportunities
A stale opportunity is one with no activity, stage change, or note in 30 days. Formula: opps with no activity in 30+ days / total open opps. Salesforce dashboards typically flag pipelines with more than 10% stale opps as forecast risk. The reason is simple. An opp with no activity in a month isn't really pipeline. It's a wish.
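A minimal stale-opp check, assuming each opportunity export carries a last-activity date (hypothetical data and field names):

```python
from datetime import date, timedelta

def stale_opp_rate(opps: list, as_of: date, days: int = 30) -> float:
    """Share of open opps with no activity in the last `days` days."""
    cutoff = as_of - timedelta(days=days)
    stale = sum(1 for o in opps if o["last_activity"] < cutoff)
    return stale / len(opps)

opps = [
    {"id": 1, "last_activity": date(2024, 5, 1)},   # stale: before the cutoff
    {"id": 2, "last_activity": date(2024, 6, 20)},
    {"id": 3, "last_activity": date(2024, 6, 28)},
    {"id": 4, "last_activity": date(2024, 6, 10)},
]
rate = stale_opp_rate(opps, as_of=date(2024, 6, 30))  # 0.25, above the 10% line
```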
Missing-field rate
Missing-field rate is the share of records with required fields blank. Formula: records with at least one required field blank / total records. Gartner's 2024 data quality benchmarks place healthy missing-field rates under 5%, with under 2% in mature RevOps operations. Above 10% means the CRM is producing reports nobody can trust.
Duplicate-account rate
Duplicate-account rate is the percentage of account records that match another record on a strong identifier. Formula: duplicate accounts / total accounts. Forrester guidance places healthy duplicate rates under 2%. Above 5% means territory assignments and ABM lists are unreliable, which means commission disputes and double-booked outreach. Cleanup looks expensive on paper, but leaving it alone costs more over a year than the project does.
For a deeper read on how the tooling supports hygiene, see the RevOps tech stack: what you actually need.
Goodhart's law and the vanity-metrics trap
When a measure becomes a target, it stops being a good measure. Goodhart's law is why RevOps dashboards quietly fail over a few quarters. We've watched it happen: SDRs hit MQL counts by lowering the qualification bar, AEs hit pipeline-creation targets by re-stating dead deals, CSMs hit retention numbers by keeping a churning account on the books an extra month. The defense is paired metrics. Pipeline coverage with win rate by source. Bookings with CAC payback. Forecast accuracy with slip rate. NRR with logo retention. A single metric is gameable; two metrics pointed at the same outcome are much harder to fake.
What's the difference between a vanity metric and a revenue metric?
A revenue metric changes a decision. A vanity metric describes activity. The test: if the number moves up or down by 20%, does someone change what they do tomorrow? If the honest answer is "look at it in next week's meeting," it's vanity.
The vanity-versus-revenue filter for RevOps separates four flashy metrics from four boring ones. Activity counts (calls made, emails sent, demos scheduled) describe motion, not outcomes. MQL volume in isolation is the easiest to game and the easiest to celebrate. Total pipeline value without coverage context is decorative. Average deal size without segment cuts hides the deals that matter. The revenue versions look different in practice. Coverage by segment changes hiring decisions. Win rate by source changes spend allocation. NRR changes the entire growth model. Forecast accuracy changes how the CFO writes the cash plan.
Pavilion's 2024 State of RevOps survey found that the top-performing RevOps teams report on roughly 12 metrics in their executive review. Bottom-quartile teams report on 30 or more. The top performers spend the difference on cleanup, segmentation, and audit, which is what makes those 12 trustworthy in the first place.
For the broader pattern of which metrics decorate slides versus drive decisions across an operations function, the KPIs that operations leaders actually track extends this filter to non-revenue functions.
How often should RevOps metrics be reviewed?
RevOps metrics need three review cadences, backed by real-time alerting that lives in the system, not in a meeting: SLA breaches on lead routing, opp inactivity flags, and hygiene rule violations fire as they happen. Weekly reviews focus on pipeline coverage, pipeline velocity, time-in-stage outliers, and stale-opp counts. These move fast enough that a monthly cadence loses the signal. Monthly reviews cover conversion rates by segment, win rate, forecast accuracy versus actual, and CAC payback. These move slowly enough that weekly review is noise. Quarterly reviews are reserved for NRR, GRR, the magic number, and the metric inventory itself. Quarterly is also when RevOps should ask which metrics nobody looked at and cut them.
A common mistake is reviewing every metric at every cadence. The result is a dashboard nobody trusts, because the slow-moving metrics drag the conversation away from the fast-moving ones. Splitting review by metric speed fixes that.
Key takeaways
RevOps metrics fall into five categories: pipeline, conversion, revenue, forecast, and hygiene. A working dashboard picks roughly two to three per category and ignores the rest. Pipeline coverage at 3x to 4x is the working benchmark for B2B mid-market. NRR above 110% is the cleanest marker of best-in-class SaaS. CAC payback under 18 months is the line between healthy unit economics and capital destruction. Forecast accuracy at 85% or above is what separates a sales operation from sales theater.
Pair every offense metric with a defense metric. Bookings with CAC payback, pipeline coverage with win rate by source, NRR with GRR, MQL volume with MQL-to-SQL conversion. The pairing is what defeats Goodhart's law.
If you're building a RevOps measurement function from scratch, start with five: pipeline coverage, win rate by segment, NRR, CAC payback, and forecast accuracy. Get those right before adding a sixth. Once the foundation holds, the RevOps operating model for mid-market companies describes how to wire these metrics into the cadence of the business, and common RevOps mistakes and how to avoid them covers the failure patterns that show up first. For the broader case of why this measurement discipline pays back, the business case for revenue operations and measuring the ROI of process changes connect the metrics to the dollars they protect. For sizing what's normal at your scale, operations benchmarks for $30M to $500M companies gives the wider operating context.
Next step
Ready to go AI-native?
Schedule 30 minutes with our team. We’ll explore where AI can drive the most value in your business.
Get in Touch

Free Playbook
Know which AI plays to make.
The framework for finding the 2-3 AI workflows in your business that return 5-20x — and the ones that look obvious but quietly destroy margin.