The Enterprise AI Brief
Curated by Terry Rice — AI Performance Systems Architect, General Assembly Faculty  ·  ISSUE 001 — APR 8, 2026
Trusted by teams at
Google · EY · Berkshire Hathaway · Amazon
// This Week
Three Signals That Matter
Every edition is built around one theme. This week: the gap between AI activity and AI fluency, and why most L&D programs are measuring the wrong thing.
The theme: Adoption vs. Theater. Some organizations are genuinely changing how work gets done; others are performing AI adoption for leadership.
01
WORKFORCE
91% of employees completed AI training in 2025. Only 11% changed how they actually work.
Gartner, Q1 2026
This is Kirkpatrick Level 3 failure at industrial scale. Organizations are measuring training completion and calling it adoption. The 80-point gap between completion and behavior change is not a training problem. It is a measurement problem dressed up as a training problem.
3 actions to take this week
01
Audit your most recent AI training program against Kirkpatrick Level 3. If you cannot name three workflows that changed because of it, the program failed, regardless of what the completion report says.
02
Replace completion-rate reporting with a 30-day behavior change survey. Ask employees to name one specific thing they do differently because of the training. One real example is worth 100 completion certificates. A minimal tabulation sketch follows this list.
03
Run this week's Team Challenge: the AI Adoption Self-Assessment. Four questions, answered privately, then discussed as a group. The gap it reveals is the data you actually need for your next budget conversation.
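// OPTIONAL: TURN THE SURVEY INTO A NUMBER
If you want a figure to put next to the completion rate, the survey from action 02 tabulates in a few lines. A minimal sketch in Python; the field names, sample responses, and completer count are illustrative assumptions, not a standard instrument.

# Tabulate a 30-day behavior change survey (illustrative data throughout).
# Each response records whether the employee named one specific change.
responses = [
    {"employee": "emp_001", "change_named": "I draft meeting agendas with Claude first"},
    {"employee": "emp_002", "change_named": ""},   # completed training, named nothing
    {"employee": "emp_003", "change_named": "AI summarizes intake emails before triage"},
    {"employee": "emp_004", "change_named": ""},
]

completers = 40  # employees who completed the training, per the LMS report
changed = sum(1 for r in responses if r["change_named"].strip())

print(f"Completion report says: {completers} trained")
print(f"Survey says:            {changed} changed how they work ({changed / completers:.0%})")

Report both numbers side by side. The distance between them is your organization's version of the 80-point gap.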
02
TECHNOLOGY
7 in 10 employees who became proficient AI users learned by doing, not by watching. Only 3 in 10 cited formal training as their primary path.
LinkedIn Learning, 2026
The research keeps saying the same thing and L&D keeps ignoring it: adults learn when the content is immediately applicable to a real problem. Passive video training is the wrong format for the wrong moment. The organizations closing the adoption gap are building, not watching.
3 actions to take this week
01
Identify the two or three job functions that touch the most repetitive, high-volume tasks. Those are your highest-ROI AI adoption targets. Start there, not with a company-wide rollout.
02
Build one 30-minute learn-by-doing session for each target function. Participants build something real during the session. The output is the learning artifact, not a quiz score.
03
Add a 30-day implementation commitment at the end of every training session. Participants commit to one specific workflow change. Manager follows up at day 14 and day 30. No follow-up, no change. A minimal scheduling sketch follows this list.
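// OPTIONAL: SCHEDULE THE FOLLOW-UPS
The day-14 and day-30 check-ins only happen if they land on someone's calendar the day the commitment is made. A minimal sketch in Python; the commitment records and dates are illustrative placeholders.

from datetime import date, timedelta

# One row per participant: who, the workflow change they committed to,
# and the session date. Illustrative data only.
commitments = [
    ("emp_001", "Draft client briefs with Claude before writing", date(2026, 4, 8)),
    ("emp_002", "AI first-pass on the weekly inventory report",   date(2026, 4, 8)),
]

CHECKPOINTS = (14, 30)  # days after the session when the manager follows up

for who, change, session_day in commitments:
    for offset in CHECKPOINTS:
        print(f"{session_day + timedelta(days=offset)}  {who}: ask about '{change}'")

Export the printed rows to the manager's calendar. The point is that the follow-up is scheduled at commitment time, not remembered later.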
03
LEADERSHIP
Only 23% of CHROs report that senior leadership regularly uses AI tools in their own work, compared to 61% of frontline employees.
Josh Bersin Co., Mar 2026
Leadership is mandating adoption it has not achieved itself. Employees watch what executives do, not what they say. If the CHRO is not using AI, the message sent to the entire organization is that AI is for other people, regardless of what the memo says.
3 actions to take this week
01
Before your next executive briefing on AI adoption, ask every senior leader to demonstrate one AI tool they used this week. If they cannot, that is your priority training investment, not another employee program.
02
Build one executive-specific session: 60 minutes, one real problem, one AI-built solution. Lead by example before mandating by memo. That sequence is the difference between a culture that adopts and one that performs adoption.
03
Add AI fluency to leadership performance reviews. Not AI completion rates, but specific behaviors: workflows changed, tools adopted, team members coached. Measure what matters and you change what gets done.
// Benchmarks
Verified Data
Sourced, dated, and delta-tracked. Every number includes where it came from and how it changed from the prior period.
// FREE DOWNLOAD
Corporate AI Training Benchmark Report — Q1/Q2 2026
The full report goes deeper on every section: spending benchmarks, adoption rates by industry, training formats, the self-assessment framework, what leaders are getting wrong, and what you should actually be paying. Free to download and share.
Get the Full Report →
// WHAT THIS DATA ACTUALLY MEANS
The gap is the story.
91% of employees completed AI training last year. 11% changed how they work because of it. That 80-point gap is not a technology problem or a budget problem. It is a strategy problem. Organizations are optimizing for the metric that is easy to report — completions — and ignoring the one that actually matters: behavior change.
WHAT TO SHARE TO LOOK SMART
"91% of employees completed AI training. 11% changed how they work. That's the whole problem in two numbers."
"Only 23% of CHROs say their senior leaders regularly use AI. You can't mandate what you're not modeling."
"When someone says they have 75% AI adoption, ask which definition. McKinsey puts it at 34% using a definition that actually means something."
"Spending on AI training is up 38% year over year. The failure rate is still 70%. More money is not the answer."
The root cause hiding in the data: Only 23% of CHROs say their senior leaders regularly use AI tools in their own work. The people mandating AI adoption are not doing it themselves. That is not a cultural detail. It is the reason most AI training investments fail to produce behavior change: employees will always watch what leadership does, not what leadership says. The solution is not another training program. It is leaders using AI publicly, visibly, and imperfectly in front of their teams.
91%
Employees who completed AI training in 2025
+14pp YoY
Gartner — Jan 2026
11%
Employees who changed how they work because of AI training
+2pp YoY
Gartner — Jan 2026
27%
Organizations that feel effective at building employee AI skills
+5pp YoY
Josh Bersin — Feb 2026
23%
CHROs who say senior leadership regularly uses AI tools in their own work
+8pp YoY
Josh Bersin — Mar 2026
$4,100
Average annual investment per employee in AI training and tools
+38% YoY
ATD — Q1 2026
70%
AI change initiatives that fail to meet their stated objectives
-3pp YoY
McKinsey — Jan 2026
// Conflicting Signals
When two credible sources disagree on the same metric, both are shown. You decide which number applies to your organization.
CONFLICTING SIGNAL — AI ADOPTION RATE IN THE WORKFORCE
McKinsey — March 2026
34%
of workers regularly use AI for substantive work tasks
Microsoft / LinkedIn — February 2026
75%
of knowledge workers have used AI at work in the past six months
Why they differ: McKinsey defines "regular use" as at least weekly for tasks that meaningfully affect output. Microsoft and LinkedIn count any use within a six-month window. Both numbers are accurate. They are measuring completely different behaviors. McKinsey is measuring what L&D should be targeting. Microsoft is measuring what most executives are reporting to their boards. The 41-point gap between those two numbers is where most AI adoption strategies are currently lost.
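// OPTIONAL: SEE THE DEFINITION GAP IN CODE
The definitional gap is easy to reproduce. A minimal sketch in Python that scores one hypothetical usage log under both definitions; the log, window lengths, and thresholds are illustrative approximations of the two definitions described above, not either firm's methodology.

from datetime import date, timedelta

TODAY = date(2026, 3, 30)

# Hypothetical log: employee -> dates they used an AI tool for a work task.
usage_log = {
    "emp_001": [date(2026, 1, 5) + timedelta(weeks=w) for w in range(12)],  # weekly user
    "emp_002": [date(2025, 11, 3)],                                          # tried it once
    "emp_003": [],                                                           # never used
    "emp_004": [date(2026, 2, 2), date(2026, 2, 16)],                        # occasional
}

def regular_use(dates, window_weeks=8):
    # McKinsey-style (approximated): active in every week of the recent window.
    start = TODAY - timedelta(weeks=window_weeks)
    active_weeks = {(d.isocalendar()[0], d.isocalendar()[1]) for d in dates if d >= start}
    return len(active_weeks) >= window_weeks

def any_use_six_months(dates):
    # Microsoft/LinkedIn-style: any use in roughly the past 26 weeks.
    return any(d >= TODAY - timedelta(weeks=26) for d in dates)

n = len(usage_log)
print(f"Regular use: {sum(regular_use(d) for d in usage_log.values()) / n:.0%}")
print(f"Any use:     {sum(any_use_six_months(d) for d in usage_log.values()) / n:.0%}")

Same four employees, same log: the strict definition reports 25%, the loose one 75%. Before quoting an adoption number, ask which of these two functions produced it.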
// Case Studies
Enterprise Implementations
Real organizations, documented approaches. Each case study draws from publicly available information on how these companies approached AI adoption.
CASE 01 — FINANCIAL SERVICES
JPMorgan Chase
BEHAVIOR CHANGE
18 MONTHS
The Situation
JPMorgan deployed AI tools to 60,000+ employees in 2024 with high completion rates and near-zero behavior change. Leadership realized they were tracking the wrong metric. Employees completed training and returned to their existing workflows within days.
What They Did
Shifted from training programs to workflow integration sprints. Each team identified one high-volume repetitive task, built an AI workflow for it, measured time savings over 30 days, and shared results across teams. Peer-to-peer proof replaced top-down mandates.
The Result
Document review got faster, and sprint participants showed significantly higher ongoing AI usage at 90 days than employees from prior training-only cohorts. The difference was making rather than watching.
What You Can Do
Run a 30-minute workflow audit with your team this week. Pick one task everyone does repeatedly. Build an AI workflow for it together. Measure the time saved over 30 days. One concrete result beats a hundred completion certificates.
The Wider Lens
This mirrors the shift from classroom driver's ed to actual driving practice. Decades of research showed that classroom hours had almost no correlation with driving safety — logged hours behind the wheel did. JPMorgan's sprint model is the corporate equivalent of putting people behind the wheel on day one rather than showing them slides about steering.
CASE 02 — RETAIL
Walmart
SCALE
12 MONTHS
The Situation
Walmart needed to upskill 1.6 million associates across 4,600 stores simultaneously. Traditional LMS-based training was showing 73% completion and near-zero behavior change. The scale of the problem made individual coaching impossible.
What They Did
Built an AI-powered simulation platform where associates practiced real customer scenarios with AI role-playing as the customer. Practice replaced presentation. Associates completed 8-minute sessions on their phones between shifts.
The Result
Associates who completed multiple simulation sessions showed significantly higher on-the-job skill application in the weeks following, verified via manager observation. Customer satisfaction scores improved in pilot stores over 90 days. The pattern held: practice produces behavior change, completion rates do not.
What You Can Do
Build a 10-minute practice scenario for your team's most common high-stakes conversation — a difficult stakeholder, a skeptical budget meeting, a performance discussion. Use Claude to role-play the scenario before the real thing. Practice is not rehearsal. It is the work.
The Wider Lens
The Walmart model mirrors the shift from surgery residency lectures to surgical simulators. Research consistently shows that simulation-based training produces measurably better outcomes than observation-based training — in medicine, in aviation, in retail. The pattern is not industry-specific: deliberate practice beats passive exposure every time, regardless of the skill being built.
// Human Side of Change
The Psychology Behind the Gap
One behavioral science concept per edition. Always under 200 words. Always tied to a decision you can make this week.
38%
THE FINDING
of employees who resist AI tools cite fear of becoming irrelevant as their primary concern — not difficulty using the tool.
Prosci ADKAR Research — Q4 2025
// The Research
Social identity theory tells us that when employees have built their professional identity around a specific skill — writing, analysis, design, legal research — AI tools that automate that skill feel like a threat to who they are, not just how they work. This is not irrationality. It is a predictable response to identity threat that no amount of change management messaging will override.
// Why It Matters for Adoption
The way AI tools are positioned determines whether people adopt them or resist them. When AI is positioned as something that does your job, it triggers identity defense. When it is positioned as something that makes your expertise more valuable, people move toward it. Same tool. Same people. Completely different result, because the adoption barrier was never capability. It was identity.
The gap between knowing what AI can do and being willing to use it is not about tools or skills. It is about whether using the tool feels like it threatens who you are. That gap does not close with training programs. It closes with time, safety, and evidence that the new behavior is an addition, not a replacement.
— Enterprise AI Brief editorial, drawing on Prosci, Kotter, and Social Identity Theory
"
// BRING THIS TO YOUR NEXT LEADERSHIP MEETING
Are we positioning AI as something that replaces what our people do, or something that makes what they are already good at more valuable? That question is not cosmetic. It determines whether adoption actually happens, or whether you spend another quarter measuring completion rates that don't translate into changed behavior.
// How People Actually Learn
Why One Format Fails Most of Your Room
David Kolb's Experiential Learning Cycle, the foundational model in corporate L&D, identifies four stages people need to move through for learning to produce real behavior change. Most AI training programs hit one or two. The result is the 80-point gap between completion and behavior change documented in the benchmarks section.
STAGE 1 — HEAR IT
Concrete Experience
A live demo or explanation that creates the "I didn't know this was possible" moment. This is where most training starts and stops. Without the stages that follow, nothing sticks.
COVERED BY: Lectures, demos, videos
STAGE 2 — WATCH IT
Reflective Observation
Seeing others do it — peers, not presenters — creates social proof that changes what feels possible. This stage is where group showcases and team sharing earn their ROI.
COVERED BY: Team showcases, peer sharing
STAGE 3 — BUILD IT
Active Experimentation
Building something with their own hands is the anchor of the learning cycle. Gartner's data shows up to 70% of skills are never applied without this stage. No output, no change.
COVERED BY: Hands-on challenges, sprints
⚠️
// THE MISSING STAGE — WHY MOST AI TRAINING FAILS
Most corporate AI training delivers Stage 1 (demo/lecture) to a room of people who cannot retain learning without Stage 3 (building). Kinesthetic learners — people who only internalize something by doing it with their hands — make up a significant portion of most corporate rooms. Standard training formats have no mechanism for them at all, because real hands-on learning requires a deliverable. You cannot fake it with a quiz. The Team Challenges in this tool are designed around the full four-stage cycle. Each challenge is labeled with which stage it primarily addresses.
// SEE IT IN ACTION
The Maverick Lab is designed around all four stages in sequence.
Live demo. Team build. Group showcase. Leadership builds alongside. Every stage covered in one session.
See The Maverick Lab →
// Try It Now
Run the Learning Gap Audit with your team
Most L&D teams have never mapped their training programs against how people actually learn. This 30-minute exercise does it. Bring your last AI training program. Map every element against the four stages. See exactly where your adoption gap lives.
// LEARNING GAP AUDIT — 30 MIN — L&D TEAM
How to run it
APPLY IT — FULL CYCLE
1
Pull up the last AI training program your organization ran. Have the agenda, format, and completion data ready.
2
Map each element of the program to one of four stages: Hear It (demo/lecture), Watch It (peer observation), Build It (hands-on deliverable), Apply It (real work integration with follow-up). Be honest — most programs sit almost entirely in stage one.
3
Paste the prompt below into Claude with your audit results. Review the output together and identify one change to make before your next program launches.
// COPY THIS PROMPT INTO CLAUDE
Here is the structure of our most recent AI training program: [describe the format, length, activities, and any follow-up]. Using Kolb's Experiential Learning Cycle, audit this program against four stages: Concrete Experience (hands-on doing), Reflective Observation (watching peers, sharing results), Abstract Conceptualization (understanding the why), and Active Experimentation (applying to real work with follow-up). For each stage, tell me whether our program addresses it, partially addresses it, or skips it entirely. Then tell me the single highest-leverage change we could make to the next iteration that would most improve actual behavior change — not completion rates.
What changes: L&D teams see for the first time exactly which learning stages their programs skip — and why their completion numbers and behavior change numbers tell completely different stories. The audit takes 30 minutes. The results change how they design every program after it.
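// OPTIONAL: KEEP THE AUDIT AS A SCRIPT
Teams that rerun the audit every program cycle can keep the step-2 mapping in a short script instead of a slide. A minimal sketch in Python; the stage names come from step 2 above, and the tagged program elements are illustrative placeholders for your own agenda.

from collections import defaultdict

STAGES = ["Hear It", "Watch It", "Build It", "Apply It"]

# Tag each element of your last program with the stage it actually serves.
program = [
    ("60-minute recorded demo",             "Hear It"),
    ("Live Q&A with the presenter",         "Hear It"),
    ("End-of-module quiz",                  "Hear It"),   # a quiz is not a deliverable
    ("Peer showcase of one built workflow", "Watch It"),
]

coverage = defaultdict(list)
for element, stage in program:
    coverage[stage].append(element)

for stage in STAGES:
    elements = coverage[stage]
    status = f"{len(elements)} element(s)" if elements else "MISSING"
    print(f"{stage:<9} {status}")

The stages that print MISSING are where your completion numbers and your behavior change numbers part ways. Paste the same element list into the Claude prompt above for the qualitative read.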
// ABOUT THE CURATOR
Terry Rice
AI Performance Systems Architect, General Assembly Faculty
Terry has designed and delivered corporate AI training for Google, EY, Berkshire Hathaway, Walmart, and Estée Lauder. As a longtime General Assembly faculty member, he has trained thousands of professionals on applying emerging technology to real business problems. Outside of work he coaches youth football and is a father of five. The Enterprise AI Brief applies that same approach in written form: every edition is built around what actually changes behavior, not what sounds good in a deck. He works 20 hours a week by design, from Brooklyn.
Book Terry to Speak → · LinkedIn · terryrice.co
// Team Challenges
Build With Your Team Today
Every challenge uses Claude. Every prompt is copy-paste ready. Some people learn by watching, some by doing — these are designed so there's something for both. Run them in order or start wherever makes sense for your team.
01
The meeting prep assistant
STARTER
15 MIN
BETTER MEETINGS
BUILD IT — Active Experimentation
02
The email audit
STARTER
15 MIN
COMMUNICATION
WATCH IT — Reflective Observation
03
The process audit
BUILDER
30 MIN
REDUCE COSTS
BUILD IT — Active Experimentation
04
The objection handler
BUILDER
30 MIN
INCREASE REVENUE
BUILD IT — Active Experimentation
05
The AI adoption self-assessment
ADVANCED
60 MIN
CULTURE SHIFT
WATCH IT — Reflective Observation
06
The workflow redesign sprint
ADVANCED
60 MIN
REDUCE COSTS
APPLY IT — Full Cycle
07
The content repurposing machine
BUILDER
30 MIN
INCREASE REVENUE
BUILD IT — Active Experimentation
08
The Maverick Matrix team map
CULTURE
20 MIN
TEAM DIAGNOSIS
WATCH IT — Reflective Observation
09
The learning gap audit
CULTURE
30 MIN
L&D STRATEGY
APPLY IT — Full Cycle
Your learning path — from first challenge to full cycle
4 LEVELS
1
Solo Explorer
Try one starter challenge alone. Build something real. See what's possible without waiting for a program to tell you to start. Stage covered: Build It.
15 MIN
2
Team Spark
Bring a challenge to your team. Run the Maverick Matrix map or the adoption self-assessment. Start a conversation that surfaces what you've been walking past. Stage covered: Watch It.
30 MIN
3
Team Builder
Run an advanced session. Redesign a real workflow. Someone's job changes. You ask what else is possible. Stage covered: Apply It.
60 MIN
4
Maverick Lab
Full-day facilitated workshop built around all four stages of Kolb's cycle. Real AI tools built for real business outcomes. Every person in the room leaves with something they built and something they believe about themselves that they didn't believe when they walked in. All four stages. One session.
FULL DAY — $15K-$25K
// Contribute
We're Looking for Expert Voices
The Enterprise AI Brief features practitioners with specific points of view, not general takes. If you are doing something real with AI in an organization and have something specific to say about it, we want to hear from you.
// WHAT WE'RE LOOKING FOR
Has your company done something worth sharing?
A real AI implementation that changed how your team works. Not a pilot. Not a plan. Something that shipped and produced a result — good or bad. Both are useful.
What are you seeing in the field right now?
Patterns across organizations. What's working that nobody talks about. What's failing that everyone pretends is working. What the data says versus what leaders actually do.
Where do you disagree with the consensus?
The research says one thing. Your experience says another. We're interested in that tension, especially when you can point to something specific that explains the gap.
What does AI adoption actually look like from the inside?
The view from inside an organization is different from the view in the research. If you are an L&D director, CHRO, or VP of People navigating this in real time, your perspective is the one our readers want.
// WHO WE FEATURE
L&D Leaders
Directors of Learning, CLOs, and VP L&D who are building AI training programs and measuring what actually changes.
HR and People Leaders
CHROs, VPs of People, and HR directors navigating AI adoption policy, workforce readiness, and the gap between mandate and behavior.
Practitioners with Results
Anyone inside an organization who shipped something real with AI and can speak to what worked, what didn't, and why.
// WHAT TO EXPECT
We review every submission. If your perspective is a fit for an upcoming edition, we'll reach out directly to develop it. We don't publish general takes. We're looking for something specific: a real situation, a real decision, a real result. We'll help you shape it. You'll get full attribution and a link to your LinkedIn and company. We don't pay contributors, but the audience is exactly who you want to be in front of.
Ready to contribute?
Takes about five minutes. Tell us who you are, what you're seeing, and what you'd want to say. We'll take it from there.
Submit Your Perspective →