

How Leaders Move From Pilots to Strategic Scale
AI leadership is no longer about experimenting with tools in pockets of the organization. In a workplace where a large share of knowledge workers already use generative AI—often without oversight—executives must decide how to guide adoption, reduce risk, and turn hype into measurable outcomes.
In a recent episode of AI Unscripted, enterprise strategy expert Andreas Welsch discussed what leaders are getting wrong about AI adoption, why so many organizations remain stuck in pilots, and how leaders can assess where to place technology bets across GenAI, agentic AI, and automation.
The conversation also surfaced a practical leadership theme: strategy and governance must lead the technology, not the other way around. That shift has direct implications for team structure, decision rights, and workforce transformation.
Executive Summary
- Many CEOs underestimate AI relevance or overestimate current strategic progress.
- Early productivity gains often start as “same work, faster,” then evolve into new workflows.
- Scaling requires alignment to business strategy and measurable KPIs/PPIs.
- Leaders should audit existing software to activate built-in AI features before building from scratch.
- The biggest risk may be choosing not to act while competitors invest and learn.
Key Takeaways
- Andreas Welsch argues AI applies to “any business” and “any role,” with value depending on context.
- Welsch points to executive overconfidence: many leaders believe adoption is strategic while employees disagree.
- Turning on Copilot or having isolated ChatGPT usage does not automatically equal an AI strategy.
- Productivity and quality improvements are real, but leaders should expect a learning curve before workflow redesign.
- AI can boost novice performance, yet expertise still matters—especially when work hits a “roadblock.”
- To move beyond pilots, Welsch emphasizes measurable outcomes tied to business goals over “tech-first” projects.
- For the next 12–36 months, Welsch recommends starting with an inventory of the existing tech stack and vendor AI features.
What Is AI Leadership?
AI leadership is the executive capability to guide AI adoption in ways that advance business strategy, enable the workforce, and set clear guardrails for responsible use. In the conversation, Andreas Welsch repeatedly ties AI success to strategic alignment and measurability—using KPIs and process performance indicators (PPIs) to connect initiatives to revenue growth, cost goals, and operational outcomes. AI leadership also includes deciding whether AI is treated as a tool or as a “cybernetic teammate,” particularly as agentic AI becomes more prevalent and requires clearer decision rights and governance.
AI leadership starts by correcting two CEO misconceptions
According to Andreas Welsch, one persistent misconception is that “AI doesn’t apply to our business.” He argues the opposite: AI applies to any business and any role, and leadership’s job is to identify where it adds value in specific business areas.
A second misconception is executive overconfidence. Welsch references research from Writer (conducted late last year and published in March) indicating many C-level leaders believe their organizations are approaching AI strategically, while employees rate the organization much lower on adoption and literacy.
Key Insight: Andreas Welsch highlights a perception gap: executives often believe AI adoption is strategic, while employees report weak AI literacy and fragmented use. Treating scattered ChatGPT usage—or a blanket Copilot rollout—as “strategy” creates a false sense of progress and delays the governance work required to scale.
Are GenAI productivity gains real—or just “same work, faster”?
The discussion cites widely shared statistics: GenAI reportedly driving productivity gains of up to 43% and improving work quality for 68% of users. Welsch’s view is that these improvements are credible because workers have always sought ways to do work faster and better.
However, Welsch frames the value curve in phases. Early adoption frequently delivers acceleration: completing familiar tasks more quickly while people develop proficiency with new tools. Over time, that proficiency can open the door to changing workflows—using AI not only to speed execution, but to reshape how work is done.
Key Insight: Welsch distinguishes between “warming up” to GenAI and redesigning work around it. Initial gains often come from doing existing tasks faster, but longer-term value comes when teams intentionally change workflows and approaches—once they have enough comfort and proficiency to use the tools reliably.
AI upskilling and the new meaning of expertise
Welsch addresses research suggesting AI can help people reach high performance levels—even on tasks they have not done before. He has explored this theme in his newsletter, The AI Memo, describing how AI can help individuals become proficient more quickly.
He also cautions that expertise still matters. Welsch describes experimenting with AI-generated code to build an app as part of the “vibe coding” trend—prompting in natural language to generate software. AI accelerated the build, but when the work hit a wall, deeper expertise was still necessary to resolve roadblocks and improve the output.
In Welsch’s framing, AI acts as a “booster” across the skill spectrum. Newcomers can become capable faster, while experts can accelerate productivity even further. Yet sustained mastery still requires practice and learning, echoing the “10,000-hour rule” popularized by Malcolm Gladwell.
Key Insight: Welsch positions AI as a proficiency accelerator, not an instant replacement for expertise. AI can elevate early performance quickly—such as generating working code in minutes—but organizations still need experienced talent for troubleshooting, judgment, and improving work beyond the model’s first pass.
Moving from pilot to scale: the AI adoption strategy gap
The conversation notes that fewer than 10% of companies have deployed GenAI across five or more functions. Welsch connects this “pilot trap” to a familiar pattern seen in previous waves such as machine learning and robotic process automation (RPA): initiatives stall when they are not tied tightly to business strategy.
His recommendation is direct: begin with the organization’s strategy for the next 12–36 months. Is the business aiming to grow revenue, cut costs, or improve execution? Then connect AI initiatives to measurable outcomes using KPIs and PPIs, enabling stakeholder conversations “on eye level” (as peers) about goals leaders already own.
Welsch also warns against starting with the technology itself. A “hammer looking for nails” approach can generate activity without impact, while measurable, strategy-led initiatives create proof points that build momentum.
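To make the strategy-first sequence concrete, the sketch below shows one way a team could record that linkage in Python. The structure and field names are illustrative assumptions, not anything Welsch prescribes; the point is simply that an initiative without a named strategic goal and at least one KPI or PPI should not pass review.
```python
from dataclasses import dataclass, field

@dataclass
class AIInitiative:
    """Illustrative record tying an AI initiative to strategy and metrics.
    All field names here are hypothetical, not from the conversation."""
    name: str
    strategic_goal: str                        # e.g. "grow revenue", "cut costs", "improve execution"
    kpis: list = field(default_factory=list)   # outcome metrics leaders already own
    ppis: list = field(default_factory=list)   # process performance indicators
    baseline: dict = field(default_factory=dict)
    target: dict = field(default_factory=dict)

    def is_strategy_led(self) -> bool:
        # Qualifies only with a named goal and at least one measurable outcome.
        return bool(self.strategic_goal) and bool(self.kpis or self.ppis)

# A "hammer looking for nails" pilot vs. a strategy-led initiative
pilot = AIInitiative(name="Try the new model", strategic_goal="")
scaled = AIInitiative(
    name="GenAI-assisted quote generation",
    strategic_goal="cut costs",
    kpis=["cost per quote"],
    ppis=["quote cycle time"],
    baseline={"cost per quote": 42.0},
    target={"cost per quote": 30.0},
)
print(pilot.is_strategy_led())   # False: activity without impact
print(scaled.is_strategy_led())  # True: measurable and tied to strategy
```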
Where to place bets in the next 12–36 months: start with the existing stack
With weekly releases and constant announcements, Welsch acknowledges that leaders can feel overwhelmed. His guidance remains consistent: follow business strategy first, then work backward into technology choices.
Welsch suggests avoiding a “build everything from scratch” mindset—an approach many organizations tried during earlier machine learning hype cycles, often running into data-quality gaps, scarce talent, and shifting priorities. In contrast, GenAI models have become more “plug-and-play,” making it easier to capture value faster.
Practically, he recommends an inventory of the applications the company already uses. Many vendors now offer AI features embedded in their products. Leaders can evaluate whether these capabilities are activated, what prevents adoption, whether value is measurable, and whether add-on costs are justified by KPI improvements.
Key Insight: Welsch advises executives to reduce time-to-value by auditing the current tech stack and enabling built-in AI features before investing in foundational rebuilds. This approach bypasses common blockers—data readiness, scarce specialists, and long platform timelines—while still anchoring success to measurable outcomes.
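As a concrete illustration of that audit, a lightweight inventory might record, per application, whether the vendor’s AI feature is activated, what blocks adoption, and which KPI would prove value. The Python sketch below is hypothetical; the application names, blockers, and costs are invented for illustration:
```python
# Hypothetical tech-stack audit: which existing apps have AI features,
# whether they are switched on, and whether their value is measurable.
inventory = [
    {"app": "CRM",      "ai_feature": "lead scoring",     "activated": True,
     "blocker": None,                 "kpi": "conversion rate", "addon_cost_usd": 12000},
    {"app": "Helpdesk", "ai_feature": "reply drafting",   "activated": False,
     "blocker": "no rollout owner",   "kpi": "time to resolve", "addon_cost_usd": 8000},
    {"app": "ERP",      "ai_feature": "invoice matching", "activated": False,
     "blocker": "data access review", "kpi": None,              "addon_cost_usd": 20000},
]

def audit(rows):
    """Surface the questions Welsch raises: activation, blockers, measurability."""
    for row in rows:
        if not row["activated"]:
            print(f'{row["app"]}: feature inactive, blocker = {row["blocker"]}')
        elif row["kpi"] is None:
            print(f'{row["app"]}: active, but value is not yet measurable')
        else:
            print(f'{row["app"]}: measure {row["kpi"]} against ${row["addon_cost_usd"]} add-on cost')

audit(inventory)
```
Even a table this simple forces the conversation onto outcomes: what is switched off, why, and whether each add-on cost can be weighed against a KPI improvement.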
Agentic AI as a “cybernetic teammate”: governance and decision rights
Welsch describes a shift in mindset: AI can be treated as more than software—especially with AI agents and “agentic AI.” When AI systems operate based on goals (for example, researching market trends for a specific segment), leaders must think differently about guardrails and interaction patterns.
He draws a parallel to how businesses already manage humans: codes of conduct, ethical expectations, and standards (including examples like IFRS in finance). In his view, organizations should not reinvent these concepts solely within technology teams.
Welsch also argues that functions with deep experience in roles, standards, and performance management—such as HR—should have a seat at the table. The future of work should not be defined only by technologists when digital “employees” begin to resemble teammates in how they are directed and evaluated.
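To show what borrowing from human governance could look like in practice, here is a hedged sketch of a “charter” for a goal-driven agent, loosely mirroring the role descriptions and codes of conduct HR already maintains. Every name, action, and rule below is a hypothetical assumption, not a description of any real system:
```python
from dataclasses import dataclass, field

@dataclass
class AgentCharter:
    """Hypothetical 'role description' for a goal-driven agent,
    mirroring how HR defines expectations for a human role."""
    goal: str
    allowed_actions: set = field(default_factory=set)
    requires_human_approval: set = field(default_factory=set)
    escalation_contact: str = "line manager"

    def decide(self, action: str) -> str:
        # Decision rights: some actions proceed, some escalate, the rest are blocked.
        if action in self.requires_human_approval:
            return f"escalate to {self.escalation_contact}"
        if action in self.allowed_actions:
            return "proceed"
        return "blocked: outside charter"

charter = AgentCharter(
    goal="research market trends for the mid-market segment",
    allowed_actions={"search public sources", "summarize findings"},
    requires_human_approval={"contact external analysts", "purchase reports"},
)
print(charter.decide("summarize findings"))  # proceed
print(charter.decide("purchase reports"))    # escalate to line manager
print(charter.decide("email customers"))     # blocked: outside charter
```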
The AI risk executives under-discuss: choosing not to act
While many leadership conversations focus on risk, Welsch flags a different one: the risk of not doing AI at all. He challenges the idea that leaders can “put their head in the sand” and wait for AI to pass as a fad.
Welsch argues momentum is growing, with more organizations crossing into broader adoption as they see others achieving results. Meanwhile, competitors are investing, learning, and building proof points.
His framing is not “spend big immediately” but rather to start with small, meaningful steps that create measurable differences without “breaking the bank.” Those proof points can then scale across the business.
Leadership Implications
- Anchor AI initiatives to business strategy: Start from 12–36 month goals (revenue, cost, execution) and work backward into AI use cases.
- Make AI measurable: Define KPIs and PPIs early so stakeholders can evaluate outcomes, not activity.
- Inventory the tech stack before building: Identify existing applications with embedded AI features and test for measurable impact.
- Design guardrails for agentic AI: Apply standards similar to human governance—codes of conduct, ethical boundaries, and role expectations.
- Bring workforce leaders into AI governance: Include HR and business leaders in defining how digital “employees” will be guided and assessed.
Why this conversation matters
This AI Unscripted discussion speaks directly to executive audiences facing rapid GenAI adoption, uneven governance, and persistent “pilot mode” challenges. Andreas Welsch’s commentary keeps returning to practical AI leadership: align initiatives to business strategy, measure outcomes, and treat workforce transformation as a cross-functional responsibility.
The conversation also surfaces the next wave of organizational complexity: agentic AI. As AI begins to act on goals rather than prompts alone, leadership must clarify guardrails and decision rights—and bring functions like HR into the governance discussion so digital labor is managed with standards the enterprise already understands.
Conclusion
The throughline in Andreas Welsch’s guidance is that AI leadership requires disciplined choices: align AI adoption to business strategy, define measurable outcomes, and resist confusing tool rollout with strategic transformation. As agentic AI evolves toward digital “teammates,” leaders must also establish guardrails and bring workforce stakeholders into governance.
Organizations that treat AI as both a performance booster and a workforce design challenge will be better positioned to scale beyond pilots—while those that delay action risk falling behind competitors that are already building momentum.
FAQ
What is the biggest misconception CEOs have about AI adoption?
Andreas Welsch says the biggest misconception is believing AI does not apply to the business, or assuming the company is already approaching AI strategically. Both views ignore where AI can add role-by-role value and hide gaps in literacy and execution.
Welsch points to a perception gap between executives and employees, where leadership confidence can exceed frontline reality.
Do productivity gains from generative AI actually hold up?
Welsch considers reported GenAI productivity and quality gains credible because knowledge workers have always sought ways to improve speed and outcomes. He adds that early value often comes from doing existing tasks faster before teams evolve workflows and approaches.
This framing helps leaders set expectations for an adoption curve rather than a single “before/after” moment.
Is rolling out Microsoft Copilot the same as having an AI strategy?
No. Welsch states that distributing tools broadly—such as rolling out Copilot—does not automatically make AI adoption strategic. Strategy requires measurable alignment to business outcomes and governance, not just access to software features or isolated experimentation.
Tool enablement can be useful, but it is not a substitute for an AI leadership plan tied to KPIs.
How should executives move from AI pilots to enterprise scale?
Welsch recommends aligning AI initiatives directly to the business strategy for the next 12–36 months, then making projects measurable with KPIs and PPIs. That approach enables stakeholder conversations on outcomes and reduces “hammer looking for nails” experimentation.
This is the same adoption challenge seen in machine learning and RPA: value scales when it is measurable and strategic.
Where should a CEO place bets across GenAI, agentic AI, and automation?
Welsch advises leaders to start from business goals and work backward, then audit the existing tech stack to find vendor applications with built-in AI features. Activating proven capabilities can deliver faster value than building platforms and models from scratch.
He also encourages evaluating whether add-on costs are justified by measurable KPI improvements.
What is agentic AI in practical business terms?
In Welsch’s description, agentic AI refers to AI agents that can act based on goals, such as researching market trends for a specific segment. This shifts AI from being only a tool to functioning more like a cybernetic teammate that requires guardrails and oversight.
This mindset changes how leaders define standards, decision rights, and safe operating boundaries.
Does AI eliminate the need for expertise and experience?
No. Welsch says AI can boost proficiency quickly, but expertise remains essential when work hits roadblocks or needs improvement beyond a first draft. His “vibe coding” example shows AI can generate code fast, while deeper skill is needed to troubleshoot.
AI acts as a productivity booster at every level, but it does not replace the practice required for mastery.
Which internal functions should be involved in AI governance for agents?
Welsch argues governance should not be defined only by technologists, especially as AI agents resemble digital employees. He suggests applying enterprise standards similar to human governance and notes that HR has relevant experience in codes of conduct and role expectations.
Involving workforce leaders supports consistent standards as agentic AI becomes more capable.
What is an under-discussed AI risk for CEOs right now?
Welsch highlights the risk of doing nothing—assuming AI is a fad that will pass. He believes AI momentum is growing and competitors are investing, which means delaying action can create a long-term disadvantage even if the organization tries to catch up later.
His recommendation is to start with small, meaningful, measurable steps that build internal proof points.

