
Benchmarking Your Tech Stack Against Peers

10 min · APFX Team

Every ops team we talk to thinks they have too many tools. The benchmarks back them up. Productiv's 2024 State of SaaS found the median enterprise runs 275 SaaS apps and uses under half of them. More governance rarely fixes this. What fixes it is a rationalization pass that surfaces the 40 percent nobody logged into last quarter.

Tech stack benchmarking measures your SaaS and operational tooling against peer companies using four numbers: tool count per employee, SaaS spend per FTE, tool adoption ratio, and tool redundancy rate. Done well, it tells you whether your stack is starving the team or drowning it. Done badly, it turns into a procurement witch hunt that kills the one obscure tool your best analyst quietly runs the business from.

What is tech stack benchmarking?

Tech stack benchmarking is the comparison of your company's software portfolio against peer companies using standardized ratios: apps per employee, spend per employee, adoption rate per license, and redundancy by category. The goal is to answer two questions. Are we paying for more than peers? Are we getting less out of what we pay for?

Four metrics do most of the work. Tool count is the raw number of SaaS apps in use, including shadow IT paid on corporate cards. SaaS spend per full-time employee divides annual SaaS spend by headcount. Tool adoption ratio is monthly active users divided by licenses paid for, usually measured per app. Tool redundancy rate is the percentage of functional categories where two or more tools do the same job (two CRMs, three project trackers, four chat clients).
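All four ratios are simple arithmetic over an app inventory. A minimal sketch in Python; the inventory field names (`annual_cost`, `licenses`, `monthly_active_users`, `category`) and the sample numbers are illustrative, not drawn from any benchmark dataset:

```python
from collections import Counter

def stack_metrics(apps, headcount):
    """Compute the four benchmark ratios from a simple app inventory."""
    tool_count = len(apps)
    annual_spend = sum(a["annual_cost"] for a in apps)
    licenses = sum(a["licenses"] for a in apps)
    active = sum(a["monthly_active_users"] for a in apps)

    # Redundancy rate: share of functional categories covered by 2+ tools
    per_category = Counter(a["category"] for a in apps)
    redundant = sum(1 for n in per_category.values() if n >= 2)

    return {
        "apps_per_employee": tool_count / headcount,
        "saas_spend_per_fte": annual_spend / headcount,
        "adoption_ratio": active / licenses,
        "redundancy_rate": redundant / len(per_category),
    }

# Illustrative three-app inventory for a 150-person company
apps = [
    {"annual_cost": 120_000, "licenses": 200, "monthly_active_users": 170, "category": "crm"},
    {"annual_cost": 40_000,  "licenses": 150, "monthly_active_users": 45,  "category": "crm"},
    {"annual_cost": 30_000,  "licenses": 180, "monthly_active_users": 160, "category": "messaging"},
]
m = stack_metrics(apps, headcount=150)
print(round(m["adoption_ratio"], 2))   # overall seats actually used
print(m["redundancy_rate"])            # two CRMs -> 1 of 2 categories redundant
```

In practice the inventory comes out of your SSO provider or expense data; the point is that once it exists, the four ratios fall out in a few lines.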

Five data sources matter: Productiv's State of SaaS, Zylo's SaaS Management Index, Vendr's SaaS Trends, BetterCloud's State of SaaSOps, and Gartner's IT Key Metrics. Flexera's State of the Cloud adds infrastructure context. OpenView and Bessemer publish revenue-tier benchmarks where the cleanest comparables live.

What counts as a "normal" tool count at $50M in revenue?

At $50M in revenue, a typical company runs between 90 and 150 SaaS applications and spends $6,000 to $12,000 per employee per year on SaaS. That is the Zylo 2024 SaaS Management Index range for companies in the $25M to $100M revenue tier. The spread is wide because industry and org structure swing the number more than revenue does.

Product-led SaaS companies sit at the high end. Their GTM stack alone (CRM, marketing automation, enrichment, sequencing, intent data, attribution, CS platform) often runs 15 to 25 tools before engineering or finance touch anything. Services businesses at the same revenue usually run 60 to 100 apps because the internal GTM footprint is smaller. Fintech and regulated companies carry more compliance and audit tooling, adding 10 to 20 apps that never appear on a SaaS company's stack.

Apps per employee is the ratio that matters more than absolute count. Productiv's 2024 data shows the healthy band for knowledge-work companies sits between 0.6 and 1.2. Above 2 apps per employee at mid-market scale, you almost certainly have duplicate coverage and shadow IT that never went through procurement.

What do the peer benchmarks actually tell you?

Peer benchmarks tell you where you sit on four axes relative to companies your size. They do not tell you what to do about it. Every SaaS management platform sells benchmarks as an answer. They are a diagnostic.

A company at the 75th percentile for SaaS spend per employee is not automatically wasting money. It might be a Series C fintech whose compliance tooling, per Gartner's IT Key Metrics, runs 40 to 60 percent higher on SaaS per FTE than a comparable non-regulated peer. Adoption rate is a better waste signal than spend. A company paying 40 percent more than peers but using 85 percent of seats is getting value. A company at peer-median spend and 38 percent utilization is burning cash.

The benchmarks we trust at mid-market scale: Zylo for apps-per-employee and spend-per-FTE, BetterCloud for shadow IT and provisioning, Productiv for license utilization, Vendr for transaction pricing, and Gartner IT Key Metrics for total IT spend as a percentage of revenue. Zylo's data skews toward companies that already bought a SaaS management platform. Vendr's pricing skews toward buyers using a procurement partner.

Tech stack benchmarks by revenue tier and industry

The table below shows ranges from the 2024 editions of Zylo SaaS Management Index, Vendr SaaS Trends, and Gartner IT Key Metrics. Numbers are median to 75th percentile for healthy operators in each segment.

| Revenue tier | Industry | Apps per employee | SaaS spend per FTE | Total IT as % revenue |
| --- | --- | --- | --- | --- |
| $10M to $50M | SaaS / tech | 0.8 to 1.4 | $7,500 to $14,000 | 8% to 14% |
| $10M to $50M | Services | 0.5 to 0.9 | $4,200 to $8,500 | 3% to 6% |
| $50M to $250M | SaaS / tech | 0.9 to 1.5 | $9,200 to $16,500 | 9% to 15% |
| $50M to $250M | Fintech / regulated | 1.0 to 1.6 | $11,800 to $19,400 | 10% to 17% |
| $50M to $250M | Services | 0.5 to 0.9 | $4,800 to $9,400 | 3% to 6% |
| $250M to $500M | SaaS / tech | 1.0 to 1.7 | $10,500 to $18,000 | 9% to 14% |
| $250M to $500M | Fintech / regulated | 1.1 to 1.8 | $13,000 to $21,000 | 11% to 17% |
| $250M to $500M | Ecommerce / DTC | 0.8 to 1.3 | $6,800 to $12,500 | 4% to 8% |

Three patterns repeat across the Zylo and Vendr datasets. SaaS companies run hotter than any other sector at the same revenue. Fintech and regulated companies carry a 20 to 40 percent premium from compliance tooling (SOC 2 evidence, GRC platforms, privileged access management, vendor risk). Services firms consistently sit 30 to 50 percent below the SaaS median. If your services firm looks like a SaaS company on these numbers, something is off.

How do you spot tool sprawl?

Tool sprawl shows up when tool count grows faster than headcount, when multiple apps cover the same functional category, and when license utilization drops below 60 percent across the portfolio. BetterCloud's 2024 State of SaaSOps found that enterprises add 18 apps per year on average and sunset only 4. The net accumulation is the problem, not any single purchase.

We look for four signals in client audits. First is functional duplication: three project management tools (Asana, Monday, ClickUp), two CRMs (HubSpot and Salesforce), four places employees message each other. Second is shadow IT paid on individual corporate cards that never entered the procurement queue. BetterCloud found 65 percent of SaaS apps at mid-market companies are procured outside IT. Third is orphan ownership, where no one can name the business owner and the original champion left the company. Fourth is the utilization cliff: apps with 90 percent seats assigned and 20 percent monthly active users, which Productiv found accounts for roughly 30 percent of SaaS spend at the median enterprise.
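The utilization-cliff signal is the easiest of the four to flag mechanically once you have per-app seat and usage counts. A sketch under assumed field names (`seats_paid`, `seats_assigned`, `monthly_active_users`); the thresholds mirror the 90 percent assigned / low-active pattern described above and can be tuned:

```python
def utilization_cliff(apps, assigned_min=0.9, active_max=0.3):
    """Flag apps where nearly all seats are assigned but few are used monthly."""
    flagged = []
    for a in apps:
        assigned_rate = a["seats_assigned"] / a["seats_paid"]
        active_rate = a["monthly_active_users"] / a["seats_paid"]
        if assigned_rate >= assigned_min and active_rate <= active_max:
            flagged.append(a["name"])
    return flagged

# Illustrative inventory: one cliff candidate, one healthy app
inventory = [
    {"name": "legacy-bi", "seats_paid": 100, "seats_assigned": 95,  "monthly_active_users": 18},
    {"name": "chat",      "seats_paid": 200, "seats_assigned": 190, "monthly_active_users": 175},
]
print(utilization_cliff(inventory))  # ['legacy-bi']
```

Anything this flags is a candidate for investigation, not automatic cancellation; the specialty-tool caveat later in this piece applies.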

The sprawl pattern that hides best is compliance-adjacent tools. Somebody bought one to pass an audit and the invoice has renewed ever since because nobody wants to be the person who canceled a security tool. A $50M fintech we audited had five overlapping vendor risk tools, each bought during a different audit cycle by a different owner. Canceling four of them saved $94,000 annually. No audit finding changed.

Benchmarks used badly do more damage than no benchmarks

The most common misuse of stack benchmarking is the blunt-instrument cancellation. A CFO sees the company at the 75th percentile for SaaS spend per employee, demands a 20 percent cut, and operations spends a quarter canceling contracts without understanding adoption. The result is predictable. The tools that got canceled were the ones with small license counts and easy renewal terms. The genuinely wasteful tools, usually expensive annual enterprise contracts, stayed because they were harder to exit. Two quarters later the total spend is higher, not lower. Benchmarks are a starting point for investigation, not a budget target.

What is the tool redundancy rate and why does it matter?

Tool redundancy rate is the percentage of functional categories (CRM, project management, messaging, document storage, HR, finance, analytics) where two or more tools are in active use. At healthy mid-market companies, tool redundancy sits under 15 percent. Above 30 percent, the stack absorbs a real productivity tax from context switching, data reconciliation, and duplicate licensing.

If sales uses Salesforce and marketing uses HubSpot as a second CRM, every lead handoff requires a manual sync or a paid integration layer. Productiv's 2024 research found companies with tool redundancy above 25 percent reported 2.3 times more data inconsistency issues than companies below 15 percent. Gartner's 2023 IT Key Metrics estimated application duplication consumes 10 to 25 percent of total SaaS spend across mid-market portfolios.

The categories that stack redundantly most often in 2024 data are project management (Asana plus Monday plus ClickUp), messaging (Slack plus Teams plus Discord), document collaboration (Google Workspace plus Microsoft 365 plus Notion plus Confluence), and AI assistants. AI is the fastest-growing redundancy category per Zylo. The median mid-market company now runs 4.2 AI tools, up from 1.3 at the start of 2023.

Tool sprawl audit methodology

A tech stack benchmark audit follows four steps from discovery to rationalization decision. This is the framework we run with operations teams at $30M to $500M in revenue. It takes two to four weeks for a company with 100 to 300 employees.

[Figure: Four-step tech stack benchmark audit]

Rationalization playbook

Rationalization is the process of reducing SaaS spend through license right-sizing, consolidation of duplicates, and cancellation of unused tools without breaking team workflows. The order matters. Right-size licenses first. Consolidate duplicates second. Cancel orphans last.

Right-sizing is the highest-return, lowest-risk move. Productiv's 2024 data shows that 53 percent of assigned licenses go unused at the median enterprise in any given month. Cutting unused seats on existing contracts requires no replacement workflow and no change management. A $50M company we audited cut 340 unused licenses across 22 apps and saved $218,000 annually. No tool got removed.
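The right-sizing math is per-seat arithmetic: current seats minus active users plus a hiring buffer. A sketch that assumes flat per-seat pricing (real contracts often have tier breakpoints and minimums, so treat the output as an estimate); field names are illustrative:

```python
def rightsizing_savings(apps, buffer=0.10):
    """Estimate annual savings from cutting unused seats.

    Keeps a slack buffer above monthly active users for new hires.
    Assumes flat per-seat pricing, which real contracts may not have.
    """
    total = 0.0
    for a in apps:
        per_seat = a["annual_cost"] / a["licenses"]
        # Seats to keep: active users plus buffer, rounded up
        needed = int(a["monthly_active_users"] * (1 + buffer)) + 1
        cuttable = max(a["licenses"] - needed, 0)
        total += cuttable * per_seat
    return total

# Illustrative: 200 seats at $600/seat/year, only 120 monthly actives
apps = [{"annual_cost": 120_000, "licenses": 200, "monthly_active_users": 120}]
print(rightsizing_savings(apps))  # keeps 133 seats, cuts 67 -> 40200.0
```

Running this across a real portfolio is how the 340-license, $218,000 number in the example above gets found, though the actual contracts dictate what is cuttable at renewal versus mid-term.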

Consolidation of duplicates is higher return but requires actual work. Picking which project management tool to standardize on means someone loses their preferred workflow. Flexera's 2024 State of the Cloud survey found consolidation projects take 2.5 times longer than expected, mostly because the political cost gets underestimated. What tends to work: let the team with highest adoption keep their tool, migrate the lower-adoption teams, and give the migration a named owner with executive air cover.

Cancellation of orphan tools is last because it carries the most risk of silently breaking something. Before cutting, check SSO logs, webhooks, and scheduled jobs for the previous 90 days. Bessemer's 2024 State of the Cloud report found 18 percent of attempted orphan cancellations at growth-stage companies caused a downstream incident within 30 days. The fix is better discovery before the decision.
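The 90-day activity check can be automated against whatever event export your SSO or SaaS management platform provides. A minimal sketch; the event-record shape (`app`, `timestamp`) is an assumption and would need mapping to your actual log format:

```python
from datetime import datetime, timedelta

def safe_to_cancel(app_name, activity_log, window_days=90):
    """Return True only if the app shows zero logins, webhook deliveries,
    or scheduled-job runs within the lookback window."""
    cutoff = datetime.now() - timedelta(days=window_days)
    recent = [e for e in activity_log
              if e["app"] == app_name and e["timestamp"] >= cutoff]
    return len(recent) == 0

# Illustrative log: one SSO login 10 days ago for 'old-grc'
log = [{"app": "old-grc", "timestamp": datetime.now() - timedelta(days=10)}]
print(safe_to_cancel("old-grc", log))     # False: still in use, do not cancel
print(safe_to_cancel("other-tool", log))  # True: no activity in the window
```

The check is only as good as the log coverage: an app whose webhooks or cron jobs never pass through SSO needs those sources merged into the same log before the answer means anything.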

What benchmarks do not tell you

Tech stack benchmarks measure cost, count, and adoption. They do not measure strategic fit, team leverage, or the value a tool creates when it is used. A tool can be expensive, used by few people, and the single highest-leverage investment in your stack. Benchmarks will still flag it as waste.

Two patterns trip up benchmark-driven decisions. The first is innovation theater: companies add tools to look modern (AI assistants, next-gen BI, whatever the board asked about last quarter) without a business case. Benchmarks catch these because adoption is low and spend is high. Fair enough. The second pattern is compliance-driven additions that look like sprawl but are non-negotiable. GRC platforms, vendor risk tools, and audit-evidence collectors often show low adoption and high spend. Canceling them fails the next audit.

Specialty tools for high-leverage roles create a similar false positive. A $50,000-per-year forecasting tool used by two finance people looks wasteful on any adoption ratio. If those two people are running a $200M forecast, it is the best money the company spends.

Key takeaways

Tech stack benchmarking uses four numbers (tool count per employee, SaaS spend per FTE, adoption ratio, redundancy rate) to show where your stack sits relative to peers. The data sources that matter are Productiv, Zylo, Vendr, BetterCloud, and Gartner IT Key Metrics, with revenue-tier comparables from OpenView and Bessemer.

At $50M revenue, a typical company runs 90 to 150 apps and spends $6,000 to $12,000 per employee on SaaS. Industry swings the range more than revenue does. Redundancy above 30 percent creates a real productivity tax. Utilization under 60 percent means half your spend is not working.

Benchmarks are diagnostic, not prescriptive. Used as a CFO-imposed budget target, they damage the wrong tools. Used as an audit starting point, they surface the 40 percent of the stack nobody logs into. Read operations benchmarks for $30M to $500M companies for broader ratios, operational intelligence tools for mid-market companies for how to pick what you actually keep, and how to evaluate operations software without getting burned before the next renewal. For the hub view, start with what operations intelligence actually is.

If your stack feels bloated but you cannot tell which tools are carrying weight, that is the problem we solve. Let's find the friction.

Next step

Ready to go AI-native?

Schedule 30 minutes with our team. We’ll explore where AI can drive the most value in your business.
