Every benchmark conversation we walk into starts the same way: a leader holds up a single number that feels off. It usually is off. It is also meaningless without context.
A CFO points at a G&A ratio. A VP of Ops frowns at forecast accuracy. A founder raises an eyebrow at tool spend per employee. The number is almost always accurate. What is missing is the revenue tier, the business model, and the operating philosophy of whoever published it. This article maps that missing context for mid-market companies in the $30M to $500M range.
What's a normal ops-spend ratio at $50M?
A normal G&A-as-percent-of-revenue ratio at $50M is 12 to 18 percent, and it trends down each year as the business scales. SaaS Capital's 2024 B2B SaaS Benchmarking Report places median G&A at 16 percent for companies in the $25M to $75M ARR band, with top-quartile performers at 11 percent. OpenView BenchmarkIt 2024 data shows a similar pattern: G&A falls from roughly 20 percent at $10M ARR to around 10 percent at $100M ARR.
The ratio is a proxy, not a target. A company running 18 percent G&A at $50M with 30 percent net new ARR growth is healthier than one at 12 percent with 8 percent growth. Bessemer Venture Partners' State of the Cloud 2024 report reinforces this: the best-scaling companies invest in operations ahead of revenue, then let the ratio compress as ARR catches up. Cutting G&A to hit a benchmark number usually starves the operations build-out that the next revenue tier will require.
There is a mix question inside G&A too. Finance, HR, IT, legal, and facilities roll up differently at different companies. APQC's Open Standards Benchmarking dataset, drawing on over 9,000 organizations, shows finance-function cost alone ranging from 0.6 percent of revenue at top performers to 2.0 percent at bottom quartile. Comparing a whole-G&A number to a peer that reports only finance is the most common benchmarking error we see.
How many ops people should a $100M company employ?
A $100M company typically runs a combined operations function (RevOps, BizOps, IT, finance ops, people ops) at 3.5 to 6 percent of total headcount. That lands at roughly 15 to 30 operations FTEs against 400 to 500 total employees. SaaS Capital's 2024 data puts operations headcount at a median of 4.8 percent for companies in the $75M to $150M ARR band.
The ratio shifts with business model. Services-heavy firms, tracked in SPI Research's 2024 Professional Services Maturity Benchmark, run lower ops ratios because billable delivery dominates headcount. Product-led SaaS companies, per OpenView BenchmarkIt 2024, invest earlier and run 5 to 7 percent of headcount in ops by the $50M revenue mark. Manufacturing and distribution look different again, with plant-operations teams that shouldn't be compared to knowledge-work ratios at all.
EBITDA per operations FTE is the counterweight. Scale Venture Partners' 2023 Scaling to $100M study found top-quartile B2B software companies generate $15M to $25M of revenue per operations FTE at the $100M mark. A team at 5 percent of headcount delivering $25M per ops FTE is doing more with the same ratio than a team at 4 percent doing $12M. For a deeper breakdown of right-sizing the function, see how many operations staff do you actually need.
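The counterweight math can be sketched in a few lines. The figures below are hypothetical and only illustrate why a headcount ratio alone says little about output per operations FTE:

```python
def revenue_per_ops_fte(revenue_m: float, total_headcount: int, ops_ratio: float) -> float:
    """Revenue ($M) per operations FTE, given the ops share of total headcount."""
    ops_ftes = total_headcount * ops_ratio
    return revenue_m / ops_ftes

# Two hypothetical teams at the same revenue: the one with the HIGHER ops ratio
# still out-produces per FTE, because its total headcount base is smaller.
lean = revenue_per_ops_fte(revenue_m=100, total_headcount=400, ops_ratio=0.05)    # ~$5.0M per ops FTE
heavy = revenue_per_ops_fte(revenue_m=100, total_headcount=1000, ops_ratio=0.04)  # ~$2.5M per ops FTE
```

Holding ratio and productivity side by side is what keeps the benchmark from rewarding a small team that is simply attached to a small company.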
Where do mid-market benchmarks come from?
Mid-market operations benchmarks come from a small set of recurring publishers, each with its own method and bias. Knowing the source is as important as the number itself.
- SaaS Capital publishes an annual B2B SaaS Benchmarking Report drawn from 1,500-plus private SaaS companies. Best for growth, retention, and expense-ratio data in the $5M to $200M ARR range. Free.
- OpenView BenchmarkIt is a self-serve platform with data from more than 2,000 SaaS companies. Best for unit economics (CAC payback, burn multiple, NRR) and go-to-market efficiency.
- APQC (American Productivity and Quality Center) runs Open Standards Benchmarking across roughly 9,000 organizations and 1,500 process measures. Best for process-level data across industries. Membership-gated.
- Gartner publishes annual IT Key Metrics Data and Finance Key Metrics studies. Best for IT spend per employee, software spend, and finance-function cost. Paywalled.
- Bessemer Venture Partners publishes State of the Cloud, focused on public and late-stage private SaaS. Useful for directional trends at the top of the mid-market funnel.
- CFO.com and Ventana Research cover finance-function benchmarks (close cycle, forecast accuracy, reporting lag), drawing on surveys of 500 to 1,500 finance leaders.
- McKinsey Global Institute and KeyBanc Capital Markets publish cross-industry productivity and SaaS operating benchmarks; KeyBanc's annual Private SaaS Company Survey is a useful cross-check against SaaS Capital and OpenView.
When sources disagree, they usually disagree about the sample, not the underlying measurement. SaaS Capital skews toward bootstrapped and growth-equity-backed companies. OpenView skews toward venture-backed PLG companies. APQC spans all industries and skews larger. The median moves with the sample.
The benchmark table: revenue tier by metric
This table collects the medians we reference most often in operations work, pulled from the publishers above. Read it as a direction finder, not a scorecard.
| Metric | $30M-$50M | $50M-$100M | $100M-$250M | $250M-$500M | Source |
|---|---|---|---|---|---|
| G&A as % of revenue | 16-20% | 13-17% | 10-14% | 8-12% | SaaS Capital 2024 |
| Operations headcount % | 4.0-6.5% | 4.5-6.0% | 4.0-5.5% | 3.5-5.0% | SaaS Capital 2024 |
| Software spend per FTE | $9,500-$12,000 | $11,000-$14,000 | $12,500-$16,000 | $14,000-$18,500 | Gartner 2024 IT Key Metrics |
| Quote-to-cash cycle (days) | 32-45 | 24-36 | 18-28 | 14-22 | APQC 2024 |
| Lead-to-MQL cycle (days) | 3-7 | 2-5 | 2-4 | 1-3 | Scale Venture 2024 |
| Issue-to-resolution (hours, Tier 1) | 18-36 | 12-24 | 8-18 | 6-14 | Gartner 2024 |
| First-pass yield (knowledge work) | 82-90% | 86-92% | 90-94% | 92-96% | APQC 2024 |
| Error rate (core transactions) | 2.5-4.0% | 1.8-3.0% | 1.2-2.2% | 0.8-1.6% | APQC 2024 |
| EBITDA per ops FTE ($M) | 0.8-1.4 | 1.2-2.0 | 1.8-2.8 | 2.4-3.6 | Scale Venture 2023 |
| Close cycle (business days) | 8-12 | 6-10 | 5-8 | 4-6 | APQC 2024 / CFO.com 2024 |
| Forecast accuracy (quarter, % variance) | 12-20% | 8-14% | 5-10% | 3-7% | Ventana Research 2024 |
| CAC payback (months, SaaS) | 18-30 | 15-24 | 12-20 | 10-16 | OpenView BenchmarkIt 2024 |
Ranges reflect interquartile data where available and published medians elsewhere. None of these numbers are targets. They describe the band of normal for companies in that revenue tier at the publication dates cited.
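One way to use the table as a direction finder rather than a scorecard is to ask only where a metric sits relative to its tier's band. A minimal sketch using the G&A row above; the tier labels and helper function are illustrative, not from any publisher:

```python
# G&A as % of revenue, bands taken from the table above (SaaS Capital 2024).
GA_BANDS = {
    "30-50M": (16, 20),
    "50-100M": (13, 17),
    "100-250M": (10, 14),
    "250-500M": (8, 12),
}

def position(tier: str, value: float) -> str:
    """Report where a company's metric sits relative to its revenue tier's band."""
    lo, hi = GA_BANDS[tier]
    if value < lo:
        return "below band"
    if value > hi:
        return "above band"
    return "within band"

position("50-100M", 18.0)  # 'above band' -> a question to diagnose, not a cut to make
```

The output is deliberately coarse: "above band" is a prompt for a diagnostic, not a variance to close.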
How do you benchmark without losing context?
You benchmark without losing context by running a five-step process that matches the comparison set to your business before you draw any conclusions. Picking a number out of a report and holding it up to your own is the fastest way to make a bad decision.
A five-step benchmarking process
We run this process during operations audits because it short-circuits fixation on a single ratio:

1. Define the metric precisely, including what rolls up into the numerator and denominator.
2. Match the comparison set to your business model, revenue tier, and funding profile.
3. Locate the median and the quartile band, not a single published point.
4. Compare the gap against your own trend over the past several quarters.
5. Tie the gap to a specific decision; if no decision would change, stop.

The filter almost always reveals the benchmark is misdefined, compared against the wrong peer set, or disconnected from any real choice.
When is a benchmark worth using and when is it misleading?
A benchmark is worth using when the metric is well-defined, the comparison set matches your business, and the gap implies a specific decision. It is misleading when any of those three conditions fails. The failure modes are predictable.
The four ways benchmarks get misused
First, cross-model comparison. Holding a services firm's operations ratio up to a SaaS benchmark drives the wrong staffing call. Second, cross-stage comparison. A $40M company compared to a $300M benchmark looks inefficient by design, not by failure. Third, single-number fixation. A 15 percent G&A ratio reads completely differently at 40 percent growth versus 8 percent growth. Fourth, self-benchmarking in isolation. Tracking your own metric over time (an N of 1) is useful for trend, useless for calibration. Peer comparison is useless for trend, useful for calibration. You need both.
Another common misuse is the benchmark-driven cut. A CFO sees G&A at 18 percent against a 12 percent benchmark, cuts 6 points, and watches operations capability crater two quarters later. SaaS Capital's longitudinal data shows G&A compresses naturally with scale; forcing it down at the wrong stage creates the efficiency gap the next stage will have to fix.
Our rule: never treat a single benchmark gap as justification for a staffing or budget cut. Treat it as justification for a diagnostic. For more on the KPIs that populate these benchmarks, see KPIs that operations leaders actually track.
What cycle-time benchmarks matter most?
The cycle-time benchmarks that matter most for mid-market operations are quote-to-cash, lead-to-MQL, and issue-to-resolution. Each one directly affects cash, pipeline, or customer retention. Cycle times published by APQC, Scale Venture Partners, and Gartner line up as follows at the $50M to $100M band: quote-to-cash at 24 to 36 days, lead-to-MQL at 2 to 5 days, and issue-to-resolution (Tier 1) at 12 to 24 hours.
Quote-to-cash is the highest-impact cycle time in the mid-market. APQC's 2024 data shows top-quartile performers closing the full cycle in under 14 days at $100M, driven by CPQ integration and automated invoice handoff from CRM to billing. Bottom-quartile companies run 45-plus days. A company cutting the cycle from 35 days to 20 days at $100M ARR with quarterly billing frees roughly $4M in working capital, per CFO.com's 2023 working-capital study.
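The working-capital arithmetic is straightforward: each day shaved off quote-to-cash frees roughly one day of revenue that was previously sitting in receivables. A sketch using the figures above:

```python
arr = 100_000_000            # $100M ARR
days_saved = 35 - 20         # quote-to-cash cut from 35 days to 20
daily_revenue = arr / 365
working_capital_freed = daily_revenue * days_saved  # ~$4.1M, matching the ~$4M cited
```

The same one-liner works for any cycle-time improvement that delays cash: swap in the revenue base and the days recovered.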
Lead-to-MQL measures speed from inbound lead capture to marketing-qualified lead handoff. Scale Venture Partners' 2024 benchmarks place the median at 2 to 5 days at $50M to $100M. Companies exceeding 7 days lose roughly 30 percent of convertible leads to decay, per a 2023 Harvard Business Review study on lead response timing. The fix is rarely BDR headcount. It is the enrichment, routing, and scoring logic that stalls leads in queues.
Issue-to-resolution is the customer-retention cycle. Gartner's 2024 customer-service benchmarks show Tier 1 median resolution at 12 to 24 hours for mid-market B2B, with top quartile under 6 hours. Resolution time correlates with churn more tightly than first-response time, which is what many teams track instead.
How should you benchmark tool spend per employee?
Tool spend per employee should be benchmarked against Gartner's IT Key Metrics Data and against your own trend, with the business model factored in. Gartner's 2024 data puts mid-market software spend per FTE at a median of $12,350, with an interquartile range from roughly $9,500 to $16,000. SaaS-heavy companies run higher (toward $16,000 to $18,500 at the top of the mid-market); manufacturing and distribution run lower.
The more useful cut is tool spend segmented by function. Go-to-market tooling typically runs $3,500 to $6,000 per go-to-market FTE. Engineering tooling runs $8,000 to $15,000 per engineer. G&A tooling runs $1,500 to $3,500 per G&A FTE. A single aggregate number doesn't tell you where the spend actually sits.
Tool spend without utilization data is half a picture. Gartner's 2024 SaaS Management research finds mid-market companies using only 55 to 65 percent of contracted software seats, with shelfware concentrated in sales enablement, analytics, and HR tech. A below-benchmark ratio achieved through shelfware hides capability you already paid for. For how to audit the stack without falling into the seat-count trap, see benchmarking your tech stack against peers.
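The shelfware effect is easiest to see as cost per utilized seat rather than per contracted seat. A sketch with hypothetical spend and seat counts; the 60 percent utilization sits inside Gartner's 55 to 65 percent range cited above:

```python
def effective_cost_per_seat(annual_spend: float, seats: int, utilization: float) -> float:
    """Cost per seat actually in use; shelfware inflates this number."""
    return annual_spend / (seats * utilization)

nominal = 500_000 / 400                                  # $1,250 per contracted seat
effective = effective_cost_per_seat(500_000, 400, 0.60)  # ~$2,083 per utilized seat
```

A stack that looks below benchmark on contracted seats can be well above it on utilized seats, which is the number that actually describes what the company gets for the spend.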
What finance-ops benchmarks should you track?
The finance-ops benchmarks worth tracking are close cycle (business days to close the books), forecast accuracy (quarterly variance), and reporting lag (time from period-end to distributed report). Together, they describe how fast the finance function can turn operational reality into decision-ready data.
APQC's 2024 benchmarks place median close cycle at 6 to 10 business days in the $50M to $100M tier and 5 to 8 days in the $100M to $250M tier, with top-quartile teams closing in 5.5 days or less. The move from 10 days to 5 days changes when decisions can be made on actuals versus forecasts.
Forecast accuracy, per Ventana Research's 2024 Office of Finance benchmarks, runs at 12 to 20 percent quarterly variance at the $30M to $50M tier and tightens to 5 to 10 percent at $100M to $250M. Below 5 percent is world-class. Above 20 percent, the forecast is noise. The drivers are rarely the forecasting model itself. They are data quality inputs, deal-stage discipline in the CRM, and consistency in revenue recognition definitions between sales and finance.
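Quarterly variance as used here is typically computed as absolute error against actuals. A minimal sketch, with hypothetical dollar figures:

```python
def forecast_variance_pct(forecast: float, actual: float) -> float:
    """Absolute quarterly forecast variance as a percent of actuals."""
    return abs(actual - forecast) / actual * 100

variance = forecast_variance_pct(forecast=23_000_000, actual=25_000_000)  # ~8 percent
```

At the $50M to $100M tier, that result would land inside the 8 to 14 percent band; at $100M to $250M, it would sit just outside the 5 to 10 percent target.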
Reporting lag measures time from period-close to the point operators can act on the data. CFO.com's 2024 survey of 680 mid-market finance leaders found median reporting lag of 7 business days, with top performers under 3. Long reporting lag makes a close-cycle improvement pointless: closing on day 5 and distributing reports on day 12 still gives operators 12-day-old data. For how these finance benchmarks sit inside a broader ops budget, see operations budget planning for growth companies.
How do best-in-class operations teams benchmark differently?
Best-in-class operations teams benchmark against themselves over time and against carefully matched peers, weighting the internal trend more heavily. Scale Venture Partners' 2024 research on high-performing B2B operations functions identifies three patterns. They track a small set of metrics (6 to 10) with strong definitions. They benchmark quarterly against rolling peer data rather than annual point-in-time reports. And they keep the benchmark conversation separate from the target-setting conversation.
That last point is the one most teams miss. The benchmark tells you where peers sit. The target answers where you should be six months from now given your stage, model, and strategy. Collapsing them into one number (setting the target equal to the median) flattens best-in-class teams into average ones. For a fuller description of what the best teams look like, see what best-in-class operations teams look like.
Key takeaways
Mid-market operations benchmarks are useful only when you match the comparison set, define the metric, and tie the benchmark to a decision. G&A at 12 to 18 percent is normal at $50M. Operations headcount at 4 to 6 percent is normal across the mid-market. Tool spend per employee sits between $9,500 and $18,500 depending on model and stage. Close cycle tightens from roughly 10 days at $40M to 5 days at $250M. Forecast accuracy moves from 15-plus percent variance to single digits over the same range.
Publishers to know: SaaS Capital for expense ratios, OpenView BenchmarkIt for unit economics, APQC for process data, Gartner for IT and finance cost, Bessemer for directional SaaS trends, CFO.com and Ventana for finance benchmarks, Scale Venture and KeyBanc for productivity metrics, McKinsey for cross-industry views.
Never read a benchmark as a target. Read it as a question. Without the median, the trend, and the decision tied to it, a benchmark is a number on a slide. With all three, it points you somewhere you can actually go.
Next step
Ready to go AI-native?
Schedule 30 minutes with our team. We’ll explore where AI can drive the most value in your business.
Get in Touch