Some people use AI constantly. Most use it occasionally. A few have stopped entirely, or have started using another tool without their company's approval. The dashboard shows active users, but nobody is quite sure what those users are actually doing, or whether it matters.
This is where the majority of corporate AI programs sit right now.
What we have learned, working across many different organizations on this, is that the path forward tends to follow a fairly consistent shape. Not because there is one right answer, but because the same things tend to unblock adoption in roughly the same order. In other words, great companies think alike.
The most useful thing an organization can do before designing any training is run a short survey across all employees.
This sounds obvious. Most companies skip it anyway, usually because they think they already know the answers. They have a sense of which teams are ready, which tasks are good candidates, where the hesitation will come from.
That picture is almost always incomplete. The teams that seem enthusiastic often have a narrower range of actual use cases than expected. The teams that seem resistant often have specific, solvable concerns that nobody has addressed directly. The tasks people most want help with are frequently not the ones that show up in leadership conversations about AI strategy.
A baseline survey does two practical things. It tells you where to focus the program so it connects to real needs rather than assumed ones. And it gives you something to measure against later, which turns out to matter a lot when you need to show whether the investment is working.
There is also a data ethics dimension worth being deliberate about. If AI is involved in processing survey responses, it should be tested for bias and handled in a way employees can trust. (At least, that is how we do it.) The early signals a program sends about how it treats people shape whether those people engage honestly with it.
The most common pattern we see is this: a company invests in good training, employees attend, and not much changes afterwards.
Usually the training itself was fine. What was missing was leadership having done their own preparation first.
When executives are genuinely uncertain about what AI can do, they tend to communicate about it in ways that are inspirational but not useful. Employees hear that AI is a priority. They do not hear what that means for their specific role, their specific team, or the decisions they make day to day. The people who were already curious adopt it regardless. Everyone else waits for a clearer signal that never quite arrives.
This is not a criticism of leadership. AI is genuinely confusing right now. There is an enormous amount of noise, the tools change constantly, and the gap between what AI can do in a demo and what it can do reliably in real workflows is still significant. It takes real work to develop a grounded view.
What helps is a dedicated session where the leadership team builds that grounded view together. Not a briefing where someone presents to them, but a working session where they use the tools themselves, test the edges of what works, and work through what it means for their organization specifically. Which use cases are worth prioritizing. What the risks actually are versus the ones that get attention in the press. How to talk about AI with their teams in a way that gives people real direction.
The output is shared clarity: everyone leaves with the same understanding of where you are going and enough specificity to translate that into direction for their teams.
When people who are not yet convinced try AI and it does not immediately work well, they usually conclude that AI is not for them. What they actually discovered is that AI does not work well for that particular task in that particular way. Fixing that misperception is much easier with clear guidance from leadership than without it.
With leadership aligned, the company-wide program can actually land.
The design question that matters most here is how specific the training gets. Generic training teaches people that AI can help with writing, research, summarization, analysis. That is true but not actionable. People leave knowing AI is useful in the abstract and unsure how to make it useful for their actual work.
What changes behavior is connecting the training to specific tasks in specific roles. A prompt that a customer service rep can use tomorrow morning to draft a response. A technique that a finance analyst can apply to extracting data from a document. The distance between "AI can help with writing" and something concrete enough to try immediately is where most programs lose people.
Learning science supports a combination of formats here. Short videos people can watch at their own pace work well for foundational concepts. Live sessions with a real instructor work better for actually developing skill, because they create the space to try things, get stuck, ask questions, and see how someone more experienced handles the same problems. Both together work better than either alone.
The goal at the end of a company-wide program is modest but specific. Every employee has used AI for at least one real task in their own work. They know what to do when it does not work the first time, which is more important than it sounds. And they know where to go when they have questions later. This lowers barriers and helps new habits take hold.
In every organization, some people emerge from the initial program with noticeably more curiosity and capability than others. They are already experimenting beyond what was covered in training. Colleagues are asking them questions. They have opinions about which tools work better for which tasks.
These people are worth investing in deliberately.
Because with deeper training and a clearer mandate, they become genuinely useful to the teams around them. They can build working automations for real workflows. They can help colleagues get unstuck. They can identify new use cases that the formal program did not cover and bring those back into the organization.
The way to develop them is project-based. Not more theory, but a structured program where each person builds something real for their specific business context. A working AI workflow or automation that solves an actual problem in their team. Something that gets used after the program ends, not just demonstrated once.
What this produces, over time, is a distributed capability inside the organization. Each business unit has someone who can do things with AI that most people cannot, and who is connected to the broader effort. That turns out to be more durable than any centralized AI team.
Formal programs have end dates. Adoption does not.
AI tools change quickly. The capabilities available today are different from six months ago, and they will look different again six months from now. New employees join with no context. Teams that were skeptical early on become curious once they see colleagues doing things they thought were not possible. Use cases that nobody had tried turn out to be surprisingly useful.
Staying current with all of this does not require launching a new formal program every few months. What it requires is a lighter ongoing structure. Weekly office hours where people can ask questions about what they are working on. An async channel where questions get answered without requiring anyone to wait for the next scheduled session. Regular sharing of what Champions are building so approaches that work spread across teams rather than staying siloed.
The measurement layer matters here too. What are people actually using AI for this month compared to three months ago. Which teams have diversified their use cases and which are still doing the same two things. Where are people getting stuck. That picture shapes where the next round of support goes.
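For organizations that track usage in a structured way, the diversity signal is straightforward to compute. Below is a minimal sketch, assuming usage or survey data has already been collected as team, month, and use-case records; the field names and sample values are hypothetical, not drawn from any particular tool.

```python
# Illustrative sketch: count distinct use cases per team per month,
# rather than raw usage volume. Data shape and values are assumed.
from collections import defaultdict

usage_log = [
    {"team": "Finance", "month": "2024-01", "use_case": "summarization"},
    {"team": "Finance", "month": "2024-04", "use_case": "summarization"},
    {"team": "Support", "month": "2024-01", "use_case": "drafting replies"},
    {"team": "Support", "month": "2024-04", "use_case": "drafting replies"},
    {"team": "Support", "month": "2024-04", "use_case": "triage notes"},
    {"team": "Support", "month": "2024-04", "use_case": "knowledge search"},
]

# Group distinct use cases by (team, month); a set ignores repeat usage.
diversity = defaultdict(set)
for record in usage_log:
    diversity[(record["team"], record["month"])].add(record["use_case"])

for (team, month), use_cases in sorted(diversity.items()):
    print(f"{team} {month}: {len(use_cases)} distinct use cases")
```

Compared across two snapshots, the same counts show which teams are diversifying and which are the right place to send the next round of coaching.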
Assessment, then leadership alignment, then company-wide training, then Champions development, then ongoing support. The phases build on each other in ways that are easy to underestimate.
Assessment makes training specific. Leadership alignment gives training somewhere to land culturally. Company-wide training creates the shared foundation that Champions build on. Champions create the distributed capability that sustains adoption when the formal program ends. Ongoing support is what turns a program into a capability.
The organizations that look genuinely different after 18 months are not necessarily the ones that launched biggest. They are the ones that kept the structure running after the launch energy faded.

If you are thinking about what this could look like for your organization, we would be glad to start with a conversation.
Book a discovery call with the AI Academy team.
Why run an assessment before any training?
Without a baseline, you are designing training based on assumptions about what people need. Those assumptions are usually partially wrong in ways that matter. Two weeks of assessment changes the quality and specificity of everything that follows, and gives you something to measure progress against later.
What should a leadership session actually produce?
Shared clarity more than a strategy document. Everyone leaving with the same grounded understanding of what AI can and cannot do reliably right now, which use cases are worth prioritizing for your specific organization, and enough specificity to give their teams real direction rather than inspirational but vague encouragement.
How is role-specific training different from generic training?
Generic training teaches AI in the abstract. Role-specific training connects to the actual tasks someone does on a Tuesday afternoon. The difference shows up in whether people are still using AI three months after training ends. Generic training produces a temporary spike. Role-specific training produces behavior change.
Who tends to become a Champion?
Usually not the most senior person in a team. The most curious one. Often the person colleagues already ask when something is confusing. Credibility with peers and genuine curiosity matter more than existing AI knowledge, because the knowledge can be developed.
What does the Bootcamp actually produce?
Each participant builds a working AI workflow or automation for a real problem in their business unit. Not something to demo and shelve, but something that gets used. They also document it simply enough that colleagues can replicate or adapt it. The point is demonstrated capability, not just increased confidence.
Why does ongoing support matter after a formal program?
Because adoption does not have a fixed end date. Tools change, people join, teams that were hesitant become curious, new use cases emerge. A program with a fixed end date captures value once. Ongoing support lets that value compound over time. The difference in outcomes between organizations that maintain the structure and those that do not tends to become visible around the 12-month mark.
What is worth measuring month to month?
Less whether people are logging in and more what they are using AI for. Use case diversity within roles is a better signal than usage volume. A team where people are applying AI across five different types of work is further along than a team with high login rates but a narrow range of applications. That distinction shapes where coaching and support efforts go next.