The order fulfillment process has been running yellow for six weeks. Nobody escalated it because nobody owned the yellow signal. Cycle time crept up 18% quarter over quarter. Exception rate doubled. A CSM noticed, mentioned it once in a standup, moved on. Six weeks later the VP of Operations finds out when a customer threatens to cancel.
We see this in most growth companies we work with. The metrics are already captured. The dashboards are already built. The missing piece is a tracking system that routes a yellow signal to someone whose job depends on turning it green.
Tracking process health is a specific operational discipline with five metric categories, four cadences, one scorecard per process, named owners, and automated alerts. Here is how to build it.
What does process health actually mean?
Process health is a composite measure of whether a business process is delivering its intended outcome at the intended speed and quality, at a sustainable cost, without overloading the team that runs it. A healthy process produces the right output inside the right window, doesn't burn out the people operating it, and doesn't quietly pile up a backlog of exceptions.
Health is not the same as performance. Performance asks, "Did we hit the number this month?" Health asks, "Will we still hit the number six months from now without heroic effort?" A process can perform well this quarter and still be unhealthy, carried by a senior engineer who is privately patching broken tooling. That debt surfaces later, usually at the worst possible time.
The discipline goes back to W. Edwards Deming's work on statistical process control in the 1950s, but most growth-stage companies skip it until something breaks. Gartner's 2024 CIO survey found only 27% of organizations track process-level health beyond basic throughput, and fewer than half of those share the data with the people running the process.
What are the five indicators of process health?
The five indicators of process health are throughput, quality, cycle time, exception rate, and team sentiment. Each one captures a different failure mode, and a process has to sit in the green on all five to be called healthy. If one slips, you have a specific diagnostic.
Throughput is the count of completed work units per time window. Orders fulfilled per day, tickets resolved per week, invoices processed per month. If throughput drops without a matching drop in input volume, work is getting stuck somewhere in the pipeline.
Quality is the percentage of completed work that meets acceptance criteria on the first pass. APQC's 2023 Open Standards Benchmarking data shows top-quartile companies hit 95%+ first-pass yield on core processes while bottom-quartile companies sit around 72%. Most of that 23-point gap is rework that gets logged as ordinary work.
Cycle time is the clock time from when a unit of work starts to when it finishes. Rising cycle time tends to be the first visible signal that a process is in trouble. It usually shows up weeks before throughput drops or quality collapses, because work queues absorb the slack first.
Exception rate is the percentage of cases that can't go through the standard path and require human intervention, escalation, or a workaround. A healthy process keeps it stable. An unhealthy one lets it creep upward until exceptions are the norm.
Team sentiment is the subjective experience of the people running the process. A process can look fine on the other four metrics while its team quietly burns out on manual workarounds. Culture Amp's 2024 workplace research found that teams flagging specific process friction in pulse surveys predicted voluntary turnover 4-6 months in advance with 70%+ accuracy.
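The "green on all five at once" rule can be sketched as a simple record with a single health check. The field names, units, and example values below are illustrative, not a standard; the key point is that a miss on any one indicator fails the whole check.

```python
from dataclasses import dataclass

@dataclass
class ProcessHealth:
    """One reading of the five health indicators for a single process."""
    throughput: float        # completed units per window, e.g. orders per week
    first_pass_yield: float  # fraction of work accepted on the first pass (0-1)
    cycle_time_p50: float    # median hours from start to finish
    exception_rate: float    # fraction of cases off the standard path (0-1)
    sentiment: float         # team pulse score on a 1-5 scale

    def is_healthy(self, targets: "ProcessHealth") -> bool:
        """Healthy means green on all five indicators simultaneously."""
        return (
            self.throughput >= targets.throughput
            and self.first_pass_yield >= targets.first_pass_yield
            and self.cycle_time_p50 <= targets.cycle_time_p50
            and self.exception_rate <= targets.exception_rate
            and self.sentiment >= targets.sentiment
        )

targets = ProcessHealth(15, 0.95, 7 * 24, 0.10, 4.0)
today = ProcessHealth(12, 0.88, 9 * 24, 0.18, 2.8)
print(today.is_healthy(targets))  # False: any single miss fails the check
```

Note that throughput, yield, and sentiment are "higher is better" while cycle time and exception rate are "lower is better," so the comparison direction differs per field.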
How often should you check process health?
Check process health on four cadences matched to the signal's volatility: real-time alerts for hard failures, daily review for core operational processes, weekly review for cross-functional processes, and quarterly review for strategic health. Mismatching cadence to signal is the single most common reason health tracking falls apart.
Real-time monitoring is for metrics where a delay of hours is itself a failure. Payment processing errors, production outages, customer-facing SLA breaches. These trigger automated alerts to the process owner the moment they cross a threshold.
Daily review is for operational metrics that matter at shift granularity. Support ticket backlog, order fulfillment throughput, production defect rate. A 15-minute standup covers the day's numbers, yesterday's exceptions, and any yellow signals that need attention.
Weekly review is for health metrics where daily noise would drown out the signal. Cycle time trend, rework rate, cross-departmental handoff performance. The process owner walks through the week's scorecard, the team discusses what moved from green to yellow, and actions get assigned.
Quarterly review is for the strategic picture. Are our processes still fit for the business we run today, or have we outgrown them? Sentiment data, exception patterns, and twelve weeks of trend data feed into decisions about what to rebuild, retire, or leave alone.
Atlassian's 2024 State of Teams report, analyzing data from 1 million platform users, found that teams running a weekly health review for their core processes reported 38% less time spent in reactive firefighting. Weekly reviews put the work on the calendar, which is usually what keeps it from sliding.
What is the green yellow red framework?
The green yellow red framework assigns each process health metric a threshold-based status, where green means "performing within target," yellow means "warning, investigate," and red means "action required." The framework turns continuous metrics into discrete decisions, which is what teams actually act on.
Green thresholds should match realistic steady state, not an aspirational goal. If your cycle time target is 4 hours and the current actual runs 6 hours 90% of the time, setting green at 4 hours means the scorecard is permanently yellow and the signal gets tuned out. Set green at 5 hours, yellow at 5-7, red above 7, and tighten the thresholds each quarter as the process improves.
Yellow is the color that matters most. Most organizations focus on red, which is the point where something has already broken. Yellow is the signal that lets you act before the break. McKinsey's 2023 operations research found companies with defined yellow-state protocols resolved emerging issues 3.2x faster than companies that only tracked red-state failures.
Red needs a documented response, not just a notification. For each process, define in advance what happens when a metric goes red: who gets paged, what the first action is, how long the owner has to respond, and the escalation path. If a red signal has no protocol behind it, all you have is a louder complaint.
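The threshold mapping itself is mechanical. Here is a minimal sketch for a "lower is better" metric, using the cycle-time thresholds from the example above (green at or under 5 hours, yellow up to 7, red above 7); for a "higher is better" metric like throughput the comparisons flip.

```python
def status(value: float, green_max: float, yellow_max: float) -> str:
    """Map a 'lower is better' metric value onto green/yellow/red."""
    if value <= green_max:
        return "green"   # performing within target
    if value <= yellow_max:
        return "yellow"  # warning, investigate
    return "red"         # action required, trigger the documented response

print(status(4.5, green_max=5, yellow_max=7))  # green
print(status(6.2, green_max=5, yellow_max=7))  # yellow
print(status(7.8, green_max=5, yellow_max=7))  # red
```

Keeping the thresholds as explicit parameters makes the quarterly tightening described above a one-line change per metric.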
The yellow gap is where processes go to die
Every unhealthy process we find has one thing in common: nobody owned the yellow signal. Someone saw it go yellow, someone mentioned it in a meeting, and no one's performance review was tied to resolving it, so it sat. Give yellow an owner and a response-time SLA, and processes tend to self-correct before they become incidents. Leave yellow as informational, and you get six-week slides from green to red with no intervention in between.
What makes a process unhealthy?
A process is unhealthy when one or more of its five indicators have been trending in the wrong direction for at least three consecutive review cycles, or when a single indicator crosses into red without a documented correction. Health is about trajectory, not just the current reading.
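The trajectory rule lends itself to a simple check over recent readings. This sketch encodes the three-consecutive-cycles test named above; the direction of "worse" depends on the metric, so that is a parameter.

```python
def trending_unhealthy(readings, lower_is_better=True, cycles=3):
    """True if the last `cycles` reviews each moved in the wrong direction.

    `readings` is ordered oldest to newest; observing `cycles` consecutive
    moves requires cycles + 1 data points.
    """
    if len(readings) < cycles + 1:
        return False
    recent = readings[-(cycles + 1):]
    if lower_is_better:
        # worse means rising, e.g. cycle time or exception rate
        return all(b > a for a, b in zip(recent, recent[1:]))
    # worse means falling, e.g. throughput, quality, sentiment
    return all(b < a for a, b in zip(recent, recent[1:]))

cycle_times = [5.1, 5.0, 5.4, 5.9, 6.6]  # hours, oldest review first
print(trending_unhealthy(cycle_times))  # True: three consecutive increases
```

Requiring consecutive moves rather than a net change is deliberate: it catches silent drift while ignoring ordinary week-to-week noise that reverses itself.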
A few patterns account for most unhealthy processes in growth companies. The first is silent drift, where cycle time and quality degrade in small increments that fall under the radar of any single review. The second is workaround accumulation, where exceptions quietly become the standard path and the documented process no longer matches what people actually do. The third is sentiment collapse, where the team stops flagging issues because previous flags went nowhere, which causes the operational data to lag the morale data by weeks.
Lean Six Sigma has tracked these patterns for decades. What has changed is the instrumentation. Modern workflow systems expose timestamps, queue transitions, and exception counts that used to require dedicated measurement studies. The data is there. The problem is finding people who will actually read it.
How do you build a process health scorecard?
A process health scorecard is a single-page view per process that shows all five indicators with their current value, target, status, and trend, signed by the process owner and reviewed on a fixed cadence. Without one, tracking turns into a dashboard nobody opens.
Every scorecard has the same structure: a one-line process definition, a named owner, the five metrics with green/yellow/red thresholds, a 90-day trend per metric, last and next review dates, and a notes field for context the numbers miss. Here is the template we use with clients:
| Element | Content | Example |
|---|---|---|
| Process name | Single line, plain English | "Customer onboarding: contract to kickoff" |
| Owner | Named individual, not a team | "Jane Chen, VP Customer Success" |
| Throughput | Unit per period, target, status | "12 onboardings/week, target 15, yellow" |
| Quality | First-pass yield, target, status | "88%, target 95%, yellow" |
| Cycle time | P50 and P90, target, status | "P50 9 days, P90 21 days, target 7/14, red" |
| Exception rate | % of cases off standard path, target | "18%, target under 10%, red" |
| Team sentiment | Pulse score 1-5, target | "2.8 of 5, target 4.0, yellow" |
| Trend | 90-day direction per metric | Sparklines, arrows, or plain "improving/flat/worsening" |
| Last review | Date and reviewer | "2026-04-10, reviewed by Jane Chen" |
| Next review | Scheduled date | "2026-04-17 weekly standup" |
| Active actions | Open items with owners and due dates | "Reduce handoff wait, owner: Amir, due: 2026-04-24" |
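If the scorecard lives in a workflow tool rather than a document, the same template maps naturally onto a small data structure. This is a sketch, not a schema; the values mirror the example column of the table above, and the one piece of logic worth encoding is that the overall status is the worst of the five.

```python
scorecard = {
    "process": "Customer onboarding: contract to kickoff",
    "owner": "Jane Chen, VP Customer Success",
    "metrics": {
        "throughput":     {"value": "12/week", "target": "15/week", "status": "yellow"},
        "quality":        {"value": "88%", "target": "95%", "status": "yellow"},
        "cycle_time":     {"value": "P50 9d / P90 21d", "target": "7d/14d", "status": "red"},
        "exception_rate": {"value": "18%", "target": "<10%", "status": "red"},
        "sentiment":      {"value": "2.8/5", "target": "4.0/5", "status": "yellow"},
    },
    "last_review": "2026-04-10",
    "next_review": "2026-04-17",
}

# Worst status wins: the process is only as healthy as its weakest metric.
severity = {"green": 0, "yellow": 1, "red": 2}
overall = max((m["status"] for m in scorecard["metrics"].values()),
              key=severity.get)
print(overall)  # red
```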
The scorecard lives in one place, is updated once per review cycle, and is visible to both the team and leadership. If it is buried three folders deep on a shared drive, nobody will see it move.
How to set up process health tracking
How do you automate alerting for process health?
Automate alerting by setting programmatic thresholds in the source system, routing notifications to the named process owner, and requiring acknowledgment within a defined response window. If nothing forces the owner to acknowledge an alert, you will end up with an inbox full of ignored alerts.
A few configuration choices separate useful alerting from alert fatigue. Alert on trend direction, not just absolute values, because a metric that drops 20% week-over-week deserves attention whether or not it crossed a threshold. Route alerts to a single person rather than a channel, because a Slack channel with 40 people quickly becomes a place where alerts go to be ignored. And pair every alert with a mandatory response status so the queue does not turn into an archive of unresolved problems.
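The trend-plus-threshold rule looks like this in miniature. The owner name, 20% week-over-week trigger, and four-hour acknowledgment window are illustrative assumptions; in practice the routing would go through whatever pager or workflow tool the owner actually watches.

```python
def check_throughput(this_week: float, last_week: float,
                     floor: float, wow_drop: float = 0.20):
    """Return an alert reason if throughput breaches an absolute floor
    or falls sharply week over week; return None if all is well."""
    if this_week < floor:
        return f"throughput {this_week} below floor {floor}"
    if last_week > 0 and (last_week - this_week) / last_week >= wow_drop:
        pct = 100 * (last_week - this_week) / last_week
        return f"throughput fell {pct:.0f}% week over week"
    return None

# Route to one named owner, not a channel, and demand acknowledgment.
alert = check_throughput(this_week=11, last_week=15, floor=10)
if alert:
    print(f"page jane.chen: {alert} (acknowledge within 4h)")
```

The point of the second condition is exactly the one above: a 27% week-over-week drop deserves attention even though 11 is still above the absolute floor of 10.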
Workflow systems like Asana, Monday, Linear, and Jira handle threshold alerts through native automation rules. ERP and CRM data usually routes through middleware (n8n, Zapier, or a custom pipeline). Pulse survey platforms like Culture Amp expose webhooks that trigger on score drops. The hard part is matching the alert routing to the accountability model.
How do you measure team sentiment in process health?
Measure team sentiment with short, frequent pulse surveys targeted at the specific process, scored on a consistent scale, and reviewed alongside the operational metrics. Annual engagement surveys will not substitute. They measure the workplace rather than the process, and by the time they surface a problem, the process has been unhealthy for months.
A useful process pulse is three to five questions, takes under 90 seconds, runs every two to four weeks, and asks specifically about the process. The questions that work: How clear is ownership at each step? How often do you use a workaround to complete your work? How confident are you that tools and handoffs support you? How much rework does this process create in a typical week?
Aggregate scores into a 1-5 index per process. A drop of 0.5 or more between cycles is a yellow signal. A score below 3.0 that doesn't recover within two cycles is red. Culture Amp's 2023 research found process-specific pulse scores preceded NPS drops by an average of 11 weeks in subscription businesses. Gallup's 2024 State of the Global Workplace put engagement at 21% globally and estimated $438 billion in annual productivity loss from disengagement. A process pulse is not an engagement survey, but the two correlate tightly.
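Those two rules translate directly into a status function over the pulse history. This is a sketch of the thresholds stated above, nothing more:

```python
def pulse_status(scores):
    """Turn a history of 1-5 pulse scores (oldest first) into a status.

    Rules from the text: a score below 3.0 that persists for two cycles
    is red; a drop of 0.5 or more between cycles is yellow.
    """
    if len(scores) >= 2 and scores[-1] < 3.0 and scores[-2] < 3.0:
        return "red"
    if len(scores) >= 2 and scores[-2] - scores[-1] >= 0.5:
        return "yellow"
    return "green"

print(pulse_status([3.9, 4.0, 3.4]))  # yellow: dropped 0.6 in one cycle
print(pulse_status([3.1, 2.9, 2.7]))  # red: below 3.0 for two cycles
```

Checking the red condition first matters: a team that is both low and still falling should page as red, not as a routine yellow dip.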
What happens in a quarterly operations review?
A quarterly operations review is a 90-to-120-minute session where process owners present 12 weeks of scorecard data, leadership reviews trends across processes, and the team makes investment decisions for the next quarter. It is the meeting that turns tracking into action.
Each process owner walks through their 90-day scorecard trend and names any color changes. Leadership looks across scorecards for patterns: which department carries the most yellow, which process types are degrading, whether sentiment is starting to lead the operational data. The group picks a small number of investment priorities for the quarter and assigns owners, budgets, and milestones. Processes that have been green for four consecutive quarters get moved to a reduced review cadence so leadership time goes to the ones still in motion.
The quarterly review is where the yellow-signal problem finally gets caught. A process running yellow for six weeks looks different on a 90-day trend than on any single weekly scorecard. Patterns become visible, and investment dollars follow them.
Who should own each process health metric?
Each process health metric should be owned by a single named individual who is accountable for the status, has authority to allocate team resources, and has the metric in their performance review. Shared committee ownership tends to produce no real ownership at all. The metric moves when one person is measured on whether it moved.
Typical ownership for a growth company between $30M and $500M in revenue works like this. The VP or director of a function owns the overall scorecard. A senior manager inside that function owns operational metrics like throughput and quality. The process owner, often a senior individual contributor, owns cycle time and exception rate. Sentiment is co-owned by the process owner and the HR business partner, because process issues often surface as people issues first.
| Metric | Primary owner | Secondary owner | Review cadence |
|---|---|---|---|
| Throughput | Function manager | VP of function | Daily |
| Quality | Function manager | QA lead | Daily or weekly |
| Cycle time | Process owner | Function manager | Weekly |
| Exception rate | Process owner | Function manager | Weekly |
| Team sentiment | Process owner | HR business partner | Monthly |
| Overall scorecard | VP of function | Chief Operating Officer | Quarterly |
The ownership model has to be documented and refreshed when people change roles. About half the broken scorecards we audit have an owner who left the company three quarters ago.
Key takeaways
Process health is throughput, quality, cycle time, exception rate, and team sentiment, tracked on a cadence that matches the signal's volatility. Real-time alerts are for hard failures. Daily reviews cover operational noise. Weekly reviews are where you catch trend changes before they become incidents, and quarterly reviews translate 90 days of data into investment decisions.
The green/yellow/red framework turns continuous metrics into actual decisions. Green is steady state, red is a documented response protocol, and yellow is the signal that separates companies that fix processes from companies that keep firefighting them. Yellow has to have its own named owner.
Every scorecard needs a single accountable owner, published thresholds, automated alerting routed to that owner, and sentiment data sitting next to the operational numbers. If any of those are missing, the scorecard becomes a dashboard nobody opens.
For broader context, see what operations intelligence is. For the metrics layer, see KPIs that operations leaders actually track and building an operations dashboard that actually gets used. For the friction measurement work that feeds these scorecards, see how to measure operational friction across departments. For the question of how fast you should be watching, see real-time vs retrospective operations monitoring.
Next step
Ready to go AI-native?
Schedule 30 minutes with our team. We’ll explore where AI can drive the most value in your business.
Get in Touch