A new process lands on Monday morning. Kickoff deck, Slack announcement, recorded Loom, training on the calendar. Three weeks later the ops lead pulls the report. 40% of the team has drifted back to the old way. Another 20% are running a hybrid that follows neither process cleanly. About 40% are doing it as intended.
That is the median result of a process change that shipped without a behavioral design. Teams do not resist new processes because they hate change. They resist because the new process is slower, more awkward, or less legible than the old one, and no amount of executive sponsorship fixes a design problem.
Rollouts we've watched succeed share one trait: the new process is faster than the old one from day one. Rollouts that fail share the opposite. Nobody adopts a process that makes their job harder, no matter how good the email or how loud the executive sponsor.
Why don't teams follow new processes?
Teams don't follow new processes when the new way is harder than the old, when the benefit is invisible to the user, when enforcement is absent, when competing priorities outrank it, or when nobody collects feedback. Those five failure modes cover almost every stalled rollout we see.
Start with friction. Behavior change researcher BJ Fogg, director of the Stanford Behavior Design Lab and author of Tiny Habits (2019), argues that behavior happens when motivation, ability, and a prompt converge. Most rollouts pour energy into motivation through all-hands talks and memos, then ignore ability. A process that requires a new login, two extra fields, and a tool switch has a higher ability cost than the old one, and Fogg's research says the motivation to overcome that cost is rarely sustained past week three.
The second failure is invisible benefit. If the new process benefits the company but costs the user time, the user will quietly revert. That is rational self-interest, not laziness. The Prosci ADKAR model, developed by Jeff Hiatt and used in over 25 years of change research, places Desire as the second of its five stages for a reason. People need a personal answer to "why should I change," and company-level benefits do not count.
The third is unenforced expectation. When leadership announces a new process and tolerates the old one, the team learns that the announcement was theater. John Kotter's 1995 Harvard Business Review article "Leading Change: Why Transformation Efforts Fail" (later expanded into his 1996 book Leading Change and its Eight-Step Process) argued that roughly 70% of organizational change efforts fall short, and one of his most cited failure modes is declaring victory too early. McKinsey's change management research found that sustained CEO and manager modeling of new behaviors is the single largest predictor of adoption success.
The fourth is priority conflict. A team member asked to follow the new invoicing process is also asked to close the quarter, onboard a new hire, and fix a production bug. The new process loses every tie-break, because it is the only item where the reward for compliance is abstract.
The fifth is the missing feedback loop. If nobody measures whether the process is followed or working, the process drifts. Gallup's 2024 State of the Global Workplace research found only 23% of employees globally report being engaged at work, and one of the strongest predictors of disengagement is the perception that leadership changes things without listening to how the changes land.
The forced adoption trap
Mandating a new process without fixing the underlying friction usually produces worse outcomes than doing nothing. People comply for two weeks under observation, revert quietly once attention shifts, and learn that future announcements can be safely ignored. You do not just lose this rollout. You burn trust for the next five.
What is compliance by default?
Compliance by default is a design principle in which the new behavior is the path of least resistance, so users follow it without conscious effort. The phrase borrows from Richard Thaler and Cass Sunstein's 2008 book Nudge, which showed that default choices dominate human decisions far more than explicit persuasion does.
Thaler's research on choice architecture, part of the work that earned him the 2017 Nobel in economics, found that when researchers changed the default enrollment for 401(k) plans from opt-in to opt-out, participation jumped from under 40% to nearly 90%. The underlying choice did not change. The default did.
The same principle rewrites process rollouts. Do not ask the team to remember the new way. Build the new way into the tool, form, or system so the old way is no longer available, or so the new way takes fewer clicks. If your new expense process requires a Google Form when the old process was an email, the old process wins. If the new process is a one-click button inside the tool the team already uses, the new one wins without anyone having to think about it.
How do you make the new process easier than the old one?
You make the new process easier by removing steps, by adding friction to the old path so the new one becomes the obvious choice, or both. Usually both. The goal is to flip the ability cost so the new way takes less effort than the old way.
Removing friction from the new path looks like pre-filling fields with known data, cutting a form from 11 questions to 4, replacing a multi-tool handoff with a single workflow, or automating the part the user hated. A good test: if you watch a team member try the new process and they ask a question, you have not removed enough friction yet. The process should be self-explanatory to someone who skipped training.
Adding friction to the old path is the other half. If the old process still works, some fraction of the team will keep using it out of habit. Close that door. Turn off the legacy form. Archive the old Slack channel. Rename the spreadsheet "DEPRECATED, do not use" and lock it to read-only. This is not punishment. It is removing the ambiguity that lets people drift.
Stuck rollout: the old path still works, the benefit is invisible to the user, leaders announce the change but do not model it, and adoption is self-reported.
Adopted rollout: the old path is closed, the new way is faster from day one, leaders use it visibly, and adoption shows up in system data.
Why does visible leadership modeling matter?
Leadership modeling matters because teams look at what leaders do, not what leaders say, when deciding whether a new process is actually expected. If a VP announces a new forecasting cadence and then submits her own forecast the old way, the team reads the situation correctly. The new cadence is optional.
Kotter's Eight-Step Process for Leading Change puts "form a guiding coalition" as step two and "anchor new approaches in the culture" as step eight, because both depend on leaders doing the new thing in front of their teams. McKinsey's change practice reports that transformations where senior leaders visibly modeled the new behaviors were roughly 5.3 times more likely to succeed than transformations where leaders delegated modeling to middle management.
Social proof compounds this. Everett Rogers's Diffusion of Innovations (1962) found that adoption spreads through networks, with early adopters acting as proof points for the cautious majority. Your early champions are the evidence the rest of the team uses to decide whether the new process is normal, risky, or optional. If you can name five respected people already using it well, the team converges faster. If you cannot, you are not ready to roll out.
When do you mandate versus encourage?
You mandate when the old process creates risk that cannot be tolerated (compliance, safety, legal, data integrity). You encourage when the old process is merely inefficient. Mandating inefficiency creates resentment. Encouraging genuine risk creates exposure. Knowing which category your process falls into is half the battle.
Mandates work when the rule is clear, enforcement is real, and the alternative is actually closed. A new vendor approval process that routes through a locked form is a mandate. A new Slack channel "where we're all going to post updates now" is not, even if the email says it is. If you cannot enforce a mandate with a system control or a management consequence, do not mandate. Encourage, measure, and iterate instead.
Encouragement works when you can make the new way obviously better and let peer pressure and visible wins do the rest. Prosci's guidance recommends starting with volunteer cohorts for non-critical process changes, letting those cohorts refine the process, and only broadening the rollout once the new version is demonstrably better than the old. That takes longer than a mandate but produces stickier adoption, because the people who shape the process defend it.
A practical rule: if the cost of non-compliance is a bad spreadsheet, encourage. If the cost is a regulatory fine or a corrupted dataset, mandate and enforce. The hybrid zone (where you "strongly encourage" something important) is where adoption goes to die.
The five-step adoption playbook
How to actually make a new process stick:
1. Redesign for lower friction, so the new way takes less effort than the old one from day one.
2. Close the old path: turn off legacy forms, archive old channels, lock deprecated spreadsheets to read-only.
3. Model the new behavior visibly at the leadership level, starting with the most senior sponsor.
4. Measure real adoption from system data, not self-reports.
5. Run a weekly feedback loop for the first 30 days, fixing the top reported friction each week.
How do you measure real adoption versus reported adoption?
You measure real adoption by looking at system data showing who did the new behavior and how often, not by asking people whether they followed the process. Self-reported adoption is almost always higher than actual adoption, because people round up, forget edge cases, and want to give the answer the asker wants.
Usable system data is specific. For a new sales update cadence, pull Salesforce record modification timestamps and count how many reps updated on the expected rhythm. For a new expense process, pull submission logs and see what fraction went through the new path. For a new code review standard, pull GitHub PR data and check how many PRs include the required elements.
Prosci's 2023 Best Practices in Change Management study, drawing on more than 2,000 respondents across industries, found that projects with defined adoption metrics were six times more likely to meet objectives than projects relying on qualitative reports. The metrics do not have to be exotic. They have to be real, system-derived, and reviewed on a cadence, not at a 90-day post-mortem when it is too late to course-correct.
A useful secondary metric is time-to-complete. If the new process takes 40% less time than the old one, adoption will be self-sustaining. If it takes more, you have a design problem, regardless of what the adoption rate looks like.
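The two measurements above, real adoption rate and time-to-complete, can be sketched in a few lines. The records below are a hypothetical stand-in for whatever your actual tool exports (submission logs, CRM timestamps, PR metadata); the `path` and `minutes` fields are assumptions, not a real schema:

```python
from statistics import median

# Hypothetical submission log: which path each submission used
# ("new" or "old") and how many minutes it took to complete.
log = [
    {"user": "ana",  "path": "new", "minutes": 6},
    {"user": "ben",  "path": "new", "minutes": 7},
    {"user": "cruz", "path": "old", "minutes": 12},
    {"user": "dee",  "path": "new", "minutes": 5},
    {"user": "eli",  "path": "old", "minutes": 11},
]

def adoption_rate(records):
    """Fraction of submissions that went through the new path."""
    new = sum(1 for r in records if r["path"] == "new")
    return new / len(records)

def median_minutes(records, path):
    """Median time-to-complete for one path (the secondary metric)."""
    return median(r["minutes"] for r in records if r["path"] == path)

print(f"real adoption: {adoption_rate(log):.0%}")            # 3 of 5 -> 60%
print(f"new path median: {median_minutes(log, 'new')} min")  # 6 min
print(f"old path median: {median_minutes(log, 'old')} min")  # 11.5 min
```

The point is the data source, not the code: both numbers come from system records, so nobody has to remember or self-report anything, and the same script can run on a weekly cadence.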
What do you do when adoption stalls?
You run a recovery plan: diagnose why adoption stalled, fix the top friction cause, relaunch with explicit acknowledgment that the first version had problems, and reset the measurement cadence. Pretending the stall is not happening, or doubling down on enforcement without addressing the design, usually makes the second rollout harder than the first.
Diagnosis starts with the people who adopted and then stopped. They are more informative than the people who never adopted, because they tried the new process and can tell you exactly where it broke. Ten 20-minute interviews with early-quitters will surface real friction faster than any survey. Ask what they tried, what got in the way, and what they are doing instead.
The second move is to identify the segment that will never adopt. There is always a group, typically 10 to 20% of any team, who will resist a process change regardless of quality, because of tenure, workflow preferences, or entrenched identity around the old way. Rogers's diffusion research calls this group "laggards." Do not waste the recovery plan on them. Accept that they will adopt last or not at all, and focus recovery energy on the middle 60% who are capable of adopting but have not yet done so.
The relaunch itself should be honest about the first version. Teams respect "we got this wrong, here's what we changed, here's what we're asking now" far more than a second rollout that pretends the first one went fine. Pair the relaunch with visible leadership use and a weekly check-in on metrics for the first month.
Key takeaways
Adoption is a design problem before it is a communication problem. The process that wins is the one with the lowest friction, not the one with the best launch video. If the new process is harder than the old one for the person doing the work, no amount of training or enforcement fixes that.
The research converges. BJ Fogg's Tiny Habits work shows that behavior change requires lowering ability cost, not just raising motivation. Thaler and Sunstein's Nudge research shows defaults beat exhortations. Prosci's ADKAR model and Kotter's Eight Steps both treat visible leadership modeling as non-negotiable. Gallup's engagement data and McKinsey's transformation analyses both point to feedback loops as the difference between rollouts that stick and rollouts that fade.
The playbook: redesign for lower friction, close the old path, model visibly, measure real behavior from system data, and run a weekly feedback loop for 30 days. For a view on picking which processes to change first, see how to prioritize process improvements with limited resources. For keeping improvements alive without exhausting the team, see continuous improvement without burning out your team. For applying these ideas at company scale inside a tech company, see lean operations for technology companies.
The underlying truth is boring and reliable. People follow the process that is easier. Make yours easier.