AI leadership is increasingly defined by a single reality: technology is often the easy part, but people and process change are where deployments succeed or fail. In a wide-ranging conversation on a business podcast, AI leadership expert Andreas Welsch explains why most AI initiatives stall—and what executives can do differently as agentic AI enters real workflows.
Welsch draws on more than 25 years in technology and a decade advising large enterprises on AI and machine learning, including his time at SAP. His central message is consistent: AI adoption must start with business strategy, measurable KPIs, and workforce enablement—not with “shiny objects.”
The conversation is especially relevant to CIOs, CTOs, and CHROs navigating governance, operating model decisions, and the human impacts of automation. It also reflects a broader shift: leaders are increasingly asked to show outcomes rather than headlines.
Executive Summary
- AI projects fail when they start with tools instead of business strategy and KPIs.
- Success is measured by business impact, not the number of bots or agents deployed.
- Adoption requires top-down sponsorship paired with bottom-up empowerment and training.
- Agentic AI raises new workforce questions, including talent pipelines and role redesign.
- Hands-on experimentation beats hype; leaders should focus on the next two quarters.
Key Takeaways
- Welsch argues most AI proof-of-concept failures are a leadership and alignment problem, not a technology problem.
- He cautions against “innovation theater” and chasing every new AI trend without business relevance.
- AI initiatives should begin with the existing strategy: what the business is trying to achieve faster, cheaper, or better.
- Executives should demand measurable KPIs expressed in language business stakeholders care about.
- Workforce enablement must pair leadership air cover with employee-level skills and support.
- Subject matter expertise remains essential for prompting, reviewing, and judging AI outputs.
- Practical automation (e.g., inbox agents, content pipelines) can deliver immediate productivity gains—when governed and reviewed.
What is AI leadership?
AI leadership is the executive capability to turn AI potential into measurable business outcomes while managing the human, process, and governance challenges that come with change. In Welsch’s view, it includes aligning AI work to strategy and KPIs, enabling people with tools and training, and making deliberate decisions about which tasks can be delegated to automation (including agentic AI) and which require human judgment. It also requires resisting hype, prioritizing use cases, and designing workflows that produce reliable results at scale.
Why this conversation matters
This podcast-style conversation is aimed at business and technology leaders trying to operationalize AI. It matters because it focuses less on abstract possibility and more on execution realities: why deployments fail, how to define success, how to bring people along, and what changes as AI agents enter day-to-day workflows.
For AI leadership and workforce transformation, Welsch’s perspective connects board-level expectations (“do AI”) to practical governance choices: prioritization, measurement, training, and role design. It also reflects a growing need for cross-functional alignment between technology leadership and HR leadership as work changes.
1) Technology is the easy part; humans make it messy
Welsch’s career began in IT rollouts and operating system deployments, where success depended on testing and readiness. Those lessons, he argues, apply directly to AI and agentic AI: even when the underlying technology works, adoption breaks when users are not prepared for the new way of working.
Change disrupts routines and comfort. That is why AI leadership must include workforce considerations from day one—especially when the goal is to embed AI into core processes rather than running isolated experiments.
Key Insight: Welsch frames AI transformation as primarily a human change challenge. Tools can be deployed, but adoption depends on user readiness, comfort with new workflows, and leadership attention to enablement—not just engineering execution.
2) Why AI projects fail: innovation theater and upside-down starting points
Welsch points to widely cited failure rates for AI initiatives and emphasizes why they persist: leaders feel pressured to “show innovation,” which can drive innovation theater—deploying flashy prototypes that are not linked to strategic outcomes.
In his assessment, one of the most common failure patterns is starting with a technology and then searching for a problem. That is “a sure way to failure,” he says, because it ignores the fact that the business already has a strategy and priorities.
Instead, Welsch advises beginning with where the business wants to go and asking how AI can help get there faster, more cost-effectively, or through new business models built on data the organization already has.
Key Insight: Welsch argues AI adoption should start with business strategy and measurable outcomes. When leaders chase “shiny objects” (machine learning, generative AI, agentic AI) and only later search for use cases, misalignment and failure are far more likely.
3) Defining success: business impact over bot counting
In many organizations, leaders are asked to report how many agents, bots, or AI features have been rolled out. Welsch calls this meaningless unless it is connected to measurable business impact.
Quantity can be actively harmful if it creates more work or introduces friction. A handful of well-governed AI capabilities that improve cycle time, quality, or revenue impact can be more valuable than a large number of low-impact deployments.
Welsch repeatedly returns to the language business stakeholders care about: how does this materially improve a process? How does it make money, reduce cost, increase speed, or reduce operational risk?
Key Insight: Success metrics should be tied to business KPIs, not “AI activity.” Welsch emphasizes outcomes such as faster resolution, meaningful productivity gains, and material process improvement—because stakeholders ultimately ask why the change matters.
4) How leaders should identify AI value in the business
Welsch describes a two-sided approach. First, AI adoption needs sponsorship from the top: executive buy-in, follow-through, access to tools, and training—beyond slogans like “AI-first.” Without this air cover, grassroots efforts tend to hit limits.
Second, adoption must be grounded at the bottom of the organization. Frontline employees know where processes break, what is manual, and where rework occurs. When empowered with skills and encouraged to experiment, they can identify practical opportunities (for example, document comparison, redlining, or other repetitive knowledge work).
Welsch also notes a pragmatic operating model: people should know who to reach out to when ideas become complex, such as a center of excellence or an AI-savvy specialist inside the organization.
5) Agentic AI and organization structure: more discussion than execution
On how AI will reshape organizational structure, Welsch observes more discussion than real execution. He attributes part of this to leadership hesitancy: public announcements about agents can trigger employee anxiety or external backlash about job replacement.
He highlights one visible example: Moderna combined HR and IT functions under HR leadership to accelerate AI rollout, framing it as a “future of work” topic. In that logic, HR is central because it manages job descriptions, training, codes of conduct, and role design as work changes.
Welsch also points to a central tension: roles are more than a “bag of tasks” that can be easily handed to agents. Leaders must decide which tasks can be delegated confidently—where quality is reliable—and where human work remains essential.
6) The talent pipeline risk: eliminating entry-level work can backfire
Welsch flags a growing assumption: organizations can stop hiring entry-level talent and hand “the bottom” of work to AI. He argues this creates a future capability gap because companies still need a talent bench—people who understand the software, industry, customer context, and internal operations as they progress into senior roles.
He cites IBM as an example of this dynamic: public comments about reducing back-office staff by delegating work to AI were later followed by the CHRO’s statement about tripling entry-level hiring in the U.S. to build that bench.
In Welsch’s broader work, he describes how organizational shapes may change over time—from traditional pyramids toward other structures (including diamond-, kite-, or spear-like designs) as AI absorbs certain categories of work. The leadership challenge is to redesign for capability and resilience, not just cost reduction.
7) What skills matter most in AI-ready teams
Welsch is explicit on one non-negotiable: subject matter expertise. Teams that understand their domain can write better instructions, set better goals for agents, and—crucially—review outputs with an informed quality bar.
He also emphasizes critical thinking and judgment. As AI reduces research and “leg work,” humans still must determine whether what is presented is accurate, actionable, and appropriate for the audience. Without that ability, organizations risk scaling errors and misinformation.
The conversation also touches on how expertise is developed over time. Whether the benchmark is 10,000 hours or another estimate, Welsch's point is that proficiency takes sustained practice. The risk is that AI shortcuts those early tasks; leaders must therefore intentionally design pathways for skill development and evaluation.
Key Insight: Welsch argues AI-ready performance depends on deep domain knowledge plus human judgment. As automation handles more routine work, the ability to evaluate output quality, context, and risk becomes the differentiator—especially for agentic AI that can execute tasks end-to-end.
8) Practical workforce development: fellowships, shadowing, and rotation
To address capability development—especially for the “middle” of the workforce—Welsch points to structured cross-functional learning. He describes fellowship programs where employees join another team for six to nine months, contributing as a full member while bringing an outside perspective and returning with new skills.
He also describes job shadowing as a way to expand leadership understanding. Early exposure to executive decision-making can help developing talent understand what leadership work entails in practice.
Finally, Welsch highlights job rotation and apprenticeship-style models (common in Europe, including for white-collar roles) that move people through multiple functions. The value is a broader understanding of how a company works and what different departments prioritize—capabilities that become more important as AI forces cross-functional redesign.
9) Hands-on agentic AI: examples that executives can recognize
Welsch recommends ignoring hype and “touching the keyboard.” In his view, leaders and teams should pick deliberate tasks to automate, especially repetitive work that adds limited value.
He offers a simple agent example from his own work: an inbox-monitoring agent that drafts responses when clients request a headshot and bio. When he moves an email to a specific folder, the agent retrieves the media kit materials and drafts a reply for review. The benefit is modest per request, but tangible, repeatable, and easy to demonstrate.
He also describes a larger content workflow automation: from cleaned transcript to newsletter summary (clearly labeled as AI-generated), image creation, video snippet creation, and social promo copy. What previously took six to seven hours can drop to minutes plus review time.
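A pipeline like that can be thought of as a chain of steps with one human review gate at the end. The skeleton below is a sketch under assumptions: each step function is a stub standing in for an LLM or media-generation call, and all names and labels are illustrative, not taken from Welsch's workflow.

```python
# Illustrative skeleton of a transcript-to-promo content pipeline.
# Each step is a stub; in practice these would call an LLM or media API.
# All function and field names are assumptions for illustration.

def summarize(transcript: str) -> str:
    # Stub: a real step would call a summarization model.
    return f"[AI-generated summary] {transcript[:60]}..."

def promo_copy(summary: str) -> str:
    # Stub: a real step would call a copywriting prompt.
    return f"[AI-generated promo] {summary[:40]}..."

def run_pipeline(transcript: str) -> dict[str, str]:
    """Chain the steps and collect artifacts for a single review pass."""
    summary = summarize(transcript)
    return {
        "newsletter_summary": summary,        # labeled as AI-generated
        "social_promo": promo_copy(summary),
        "status": "pending_review",           # human review before publishing
    }
```

Labeling the output as AI-generated and holding everything at "pending_review" reflects the two practices Welsch calls out: transparency about provenance and review time built into the minutes-not-hours workflow.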
Welsch adds a practical selection criterion for tools: whether they expose an API so workflows can be automated and integrated into an existing stack.
Leadership Implications
- Lead with strategy and KPIs: require every AI initiative to map to measurable business outcomes and stakeholder language.
- Build governance into adoption: define review points (human-in-the-loop where needed) and quality bars before scaling agents.
- Enable the workforce deliberately: pair executive sponsorship with tools, training, and clear escalation paths (e.g., a center of excellence).
- Protect the talent pipeline: avoid eliminating entry-level development pathways; build a bench for future leadership roles.
- Prioritize hands-on learning: focus teams on near-term, high-frequency tasks to automate and iterate over the next two quarters.
Why this matters now
Welsch’s outlook reflects a key inflection point: AI has moved from experimentation to workflow integration, and agentic AI raises the stakes by increasing autonomy. That shift elevates leadership responsibilities in governance, measurement, workforce enablement, and organizational design. The repeated thread is execution discipline: prioritize, test, measure, train, and scale responsibly.
Conclusion
AI leadership, as Welsch describes it, is not about deploying the most agents or chasing the newest model. It is about aligning AI adoption to business strategy, proving impact through KPIs, and preparing people and workflows for change—especially as agentic AI becomes more capable.
Executives who focus on measurable outcomes, workforce enablement, and near-term execution discipline will be better positioned to scale responsibly. The future may be hard to predict, but, in Welsch’s words, focusing on the next two quarters will keep leaders productively busy.
FAQ
1) What causes most AI adoption failures in enterprises?
Most failures come from misalignment: teams start with AI tools and later search for problems, instead of beginning with business strategy and KPIs. Andreas Welsch also cites "innovation theater," where leaders chase shiny objects without measurable outcomes; in his view, this is a leadership failure, not a technology one.
Welsch emphasizes that technology is often the easy part; people, process change, and measurement discipline determine success.
2) How should executives measure AI success beyond “number of agents”?
Executives should measure AI success by business impact, not deployment counts. Andreas Welsch recommends tying AI adoption to KPIs that show material process improvement—faster cycle times, better quality, lower cost, or increased revenue—using stakeholder language business leaders value.
Counting bots can hide the reality that some AI features create more work and risk.
3) What is the best starting point for an AI strategy roadmap?
The best starting point is the existing business strategy. Welsch advises leaders to ask how AI can help achieve current goals faster or more cost-effectively, rather than adopting a tool first. AI leadership means prioritizing use cases with measurable KPIs from day one.
This approach reduces “shiny object” chasing and increases executive confidence in outcomes.
4) What does “top-down and bottom-up” AI adoption look like in practice?
Top-down means sponsorship, funding, tool access, and training—not just slogans. Bottom-up means empowering frontline employees who know where work breaks to identify automation opportunities. Welsch adds that teams should know where to escalate complex ideas, such as a center of excellence.
Combining both reduces resistance and surfaces practical workflows for agentic AI adoption.
5) How is agentic AI changing organizational structure?
Welsch sees more discussion than execution so far, partly due to fear of backlash when companies announce AI-driven changes. He notes examples like Moderna aligning HR and IT to frame AI as a “future of work” issue. The core challenge is deciding which tasks agents can reliably do.
Leaders must also accept that roles are more than a simple list of automatable tasks.
6) Why does subject matter expertise matter even with powerful AI?
Subject matter expertise improves instructions, goal-setting, and evaluation. Welsch argues that knowledgeable employees can write better prompts and more importantly judge whether outputs meet the organization’s quality bar. This is critical for AI governance, especially when agentic AI can execute tasks end-to-end.
Without expertise, teams may scale “meh” outputs or miss errors that create operational risk.
7) What new human skills become critical as AI handles more tasks?
Welsch emphasizes critical thinking and judgment. As AI reduces manual research and routine work, humans must assess whether outputs are accurate, actionable, and appropriate for the audience. AI leadership must intentionally develop these skills, because automation can otherwise shortcut learning pathways.
This is also why review processes remain important during responsible AI adoption.
8) What is a low-risk, high-value first agent use case?
A low-risk first use case is automating repetitive communications with clear inputs and review steps. Welsch describes an inbox agent that drafts replies for headshot and bio requests by pulling a media kit, then notifying him for review. This saves minutes per request without high operational risk.
These “small wins” build adoption confidence while keeping governance manageable.
9) How should leaders think about entry-level hiring in an AI-driven organization?
Leaders should avoid assuming AI can replace entry-level development entirely. Welsch warns this can eliminate the talent bench needed for future senior roles. He references IBM’s shift toward tripling entry-level hiring in the U.S. to build internal capability and ensure long-term workforce transformation resilience.
AI adoption still requires people who understand customers, systems, and operational context deeply.
10) What time horizon should executives focus on for AI planning?
Welsch advises focusing on the next two quarters. Given the pace of change in AI and agentic AI, five- and ten-year predictions are unreliable. AI leadership benefits from staying present: understanding the broad direction while executing on what is directly in front of the organization.
This keeps adoption grounded in real workflows, measurable outcomes, and iterative learning.

