

AI adoption has entered a “prove it” moment: leaders have funded tools, employees have access, and now boards are asking what actually changed on Monday morning.
In this environment, execution matters more than aspiration—especially around AI governance, workforce enablement, and measurable outcomes.
These insights are drawn from a UC Today video conversation on AI and productivity (Episode 1), featuring Andreas Welsch alongside experts from Microsoft and its partner ecosystem.
Andreas Welsch, an AI leadership expert, emphasizes that the biggest constraint is rarely tool availability; it is the organizational capability to help people use AI well, stay accountable, and deliver business impact beyond basic drafting and summarization.
Executive Summary
- AI adoption is shifting from pilots to demands for measurable ROI.
- Employees often lack clarity on what AI can do and what is allowed.
- Value scales from productivity to operational excellence to strategic differentiation.
- Agentic AI is real, but autonomy must match workflow risk.
- Usage metrics are insufficient without downstream business impact measures.
Key Takeaways
- Welsch describes a “prove it” phase where leaders are challenged to demonstrate outcomes from AI investments.
- Many employees still do not know what AI tools exist internally—or what they are permitted to do with them.
- Welsch observes many organizations staying at “draft an email / summarize a meeting” rather than redesigning team workflows.
- He advocates progressing from personal productivity to operational excellence and then to strategic differentiation.
- He warns that open-source “DIY copilots” can shift integration labor onto employees and reduce real productivity gains.
- He highlights measurement gaps: prompts and logins do not equal business value.
- He stresses leadership communication and accountability to avoid fear, confusion, and “workslop” proliferation.
What is AI Adoption?
AI adoption is the organizational process of making AI tools useful in real work—at scale, safely, and with measurable outcomes.
In the UC Today conversation, Andreas Welsch frames adoption as more than deploying licenses. It includes clarity on permitted use, hands-on upskilling, workflow redesign, measurement beyond usage, and governance that keeps humans accountable for outputs—especially as agentic capabilities expand.
AI Adoption Is Now a “Prove It” Leadership Problem
Welsch argues that many organizations have moved beyond curiosity and into scrutiny.
Senior leaders who sponsored AI rollouts are increasingly asked to justify investment with concrete business results, while employees face a different challenge: figuring out what tools exist, how to use them well, and what the rules are.
Key Insight: Welsch identifies a disconnect between leaders, who believe the organization "has AI," and employees, who still ask what is available, what is allowed, and how to use it well. Bridging this gap requires enablement, clarity, and management attention, not additional hype.
From Productivity to Differentiation: The Three Circles of Value
Welsch outlines a progression that leaders can use to steer expectations and prioritize effort.
The inner circle is productivity: drafting emails and summarizing meetings. The next circle is operational excellence: automating repetitive team tasks that happen weekly, monthly, or quarterly. The outer circle is strategic differentiation: using unique data and insights to create new services, offerings, or products.
His caution is that many organizations remain stuck in the productivity circle because of limited hands-on experience and cultural patterns that push AI to employees without direction on where it should be applied.
Key Insight: Welsch’s value progression helps leaders avoid a common trap: equating AI adoption with isolated personal productivity tips. Real advantage comes when teams automate recurring work and eventually use organizational data to differentiate—while still maintaining accountability and quality.
Why AI Tools “Didn’t Stick”: The Hidden Adoption Failure Modes
Welsch observes three patterns across organizations.
First are “wait and see” organizations that delay hands-on learning even as change accelerates. Second are organizations trying to avoid licensing costs by standing up an open-source alternative that resembles a general chatbot. Welsch argues this can backfire by transferring integration labor to employees—copying between tools and losing time.
Third is the “sharpening the saw” problem: employees recognize the tools could help, but calendars are full and experimentation time feels risky when early attempts do not immediately succeed.
Integration matters because work happens in the workflow
Welsch points to the importance of deep integration into Microsoft 365—where email, meetings, files, and collaboration already happen—rather than forcing employees into disconnected tools.
AI Upskilling: The Real Bottleneck in AI Adoption
Welsch emphasizes that technology innovation alone does not unlock value.
He compares the current moment to early office productivity eras: capabilities exist, but organizations must help people learn what is possible and apply it to real work. He recommends revisiting AI capabilities over time, because features improve quickly and a failed attempt today may work in six weeks or three months.
He also highlights a leadership challenge: ensuring AI use does not degrade quality. Leaders should encourage AI use while making it explicit that employees remain accountable for outputs.
Key Insight: Welsch highlights a practical adoption blocker: people do not have time to experiment, and early failures discourage further attempts. Sustainable AI upskilling requires protected time, repeated practice, and clear expectations that AI assistance does not replace accountability for quality.
Agentic AI in the Enterprise: Real, Uneven, and Often Quiet
Welsch’s view is that agentic AI is already real in larger organizations that have built capabilities in machine learning and generative AI.
However, he observes that fewer organizations speak publicly about agentic deployments. He links this to workforce concerns and headlines that amplify fear of replacement. Welsch argues leadership communication is essential: addressing fears directly is better than leaving uncertainty to spread.
He also warns against treating agents as “digital employees” without recognizing the systemic effects of automation across teams and stakeholders. Automation in one department can create bottlenecks if downstream partners still operate manually.
Example: manufacturing logistics optimization using AI analysis
In a training scenario Welsch describes, a plant manager used AI to analyze data exported from manufacturing execution, order, finance, and logistics systems. The AI identified customers placing repeated small orders. The organization used that insight to consolidate orders, reduce transport costs, and support better production planning.
Measurement: Why “Usage” Is Not a Value Story
Welsch recommends starting with availability and allocation metrics—how many AI licenses exist and whether they are being used.
But he cautions that counting prompts or logins does not reveal outcomes. To demonstrate value, leaders must connect adoption to business metrics and validate gains through conversations with teams about how AI is actually being applied.
This is especially important when AI generates more “outputs” (summaries, action items, emails) that may not be necessary. Several leaders in Welsch’s sessions reported receiving excessive AI-generated updates—creating new noise rather than reducing work.
AI Governance and Accountability: The Human Must Stay on the Hook
Welsch’s central governance point is simple: agents may behave like digital employees in practice, but they are still software.
That means organizations remain responsible for agent actions and outputs, just as with any other system. Governance must cover guardrails, standardized guidelines, and grounding agents in the organization’s policies, service levels, and values.
In parallel, he stresses that high-risk outputs (such as legally binding RFPs) still require human review, ideally by the person signing and owning the risk.
Leadership Implications
- Close the clarity gap: ensure employees know what AI tools exist and what is permitted.
- Protect experimentation time: create capacity for practice, retries, and workflow redesign.
- Move beyond individual tips: prioritize operational excellence use cases that eliminate recurring team tasks.
- Measure outcomes, not prompts: connect AI use to cost, time, quality, and throughput metrics.
- Set accountability norms: encourage AI use while keeping humans responsible for final outputs.
Why This Conversation Matters
The UC Today discussion reflects a broader shift in executive expectations: AI is no longer evaluated as a novelty, but as an operational capability that must perform.
For CIOs, CTOs, CHROs, and business leaders, the most actionable message is that workforce transformation is now the center of gravity. Welsch frames the problem as leadership-driven: enable people to use AI well, communicate the “why,” and reinforce accountability so quality does not degrade into AI workslop.
This also connects to Welsch’s broader focus: building AI leadership capability through strategy, roadmaps, and upskilling so organizations can translate fast-moving tools into sustainable ways of working.
Conclusion
AI adoption is no longer a technology selection exercise; it is a leadership and workforce transformation challenge.
Andreas Welsch’s message is that leaders should focus on practical enablement: move teams beyond basic productivity tasks, design operational workflows, communicate clearly, measure outcomes, and ensure governance keeps humans accountable—especially as agentic AI becomes more capable.
FAQ
Is AI adoption already in the “prove it” phase?
Yes—AI adoption is increasingly in a prove-it phase where leaders must demonstrate outcomes, not just experimentation. Andreas Welsch notes that sponsors face “show me the money” pressure while employees still need clarity, training, and permission to use tools effectively.
What are the most common AI adoption mistakes?
The most common AI adoption mistakes are staying at basic summarization, failing to redesign workflows, and expecting employees to self-serve without enablement. Welsch also warns that DIY open-source copilots can shift integration work to employees and reduce productivity.
How should leaders measure AI adoption success?
Leaders should measure AI adoption beyond usage metrics like prompts and logins by linking AI to downstream business outcomes. Welsch recommends starting with license availability and usage, then validating impact through team conversations and business KPIs.
What does Andreas Welsch mean by moving from productivity to operational excellence?
Welsch describes a progression where productivity is personal assistance (drafts and summaries), operational excellence is automating repetitive team tasks, and strategic differentiation is using unique data for new offerings. This frames AI strategy as a staged maturity path for leaders.
Are AI agents real in the enterprise in 2026?
AI agents are real in large enterprises, especially those with established AI capabilities, according to Welsch. However, he observes uneven maturity and limited public disclosure, often due to workforce concerns and the need for leadership communication about intent and safeguards.
How much autonomy are organizations comfortable giving AI agents?
Organizations are typically more comfortable granting autonomy for low-risk tasks and less comfortable for high-risk workflows. In the conversation, examples include keeping humans in the loop for RFPs because they are legally binding, while using AI for analysis and drafting.
Why do employees struggle to adopt AI tools even when licenses are available?
Employees struggle with AI adoption because they lack time to experiment, do not know what tools exist, and fear wasted effort when early attempts fail. Welsch likens this to the “sharpening the saw” problem: capacity for learning is constrained by packed calendars.
What is the leadership role in preventing AI “workslop”?
Leaders prevent AI workslop by setting expectations that AI use is encouraged but quality and accountability remain mandatory. Welsch cites leaders receiving excessive AI-generated updates; the remedy is clear team norms, better targeting of outputs, and, in some cases, summarizing updates rather than forwarding them in full.
What is the biggest governance principle for AI agents?
The biggest governance principle is that agents may act like digital employees, but they remain software, so the organization is responsible for outcomes. Welsch stresses guardrails, grounding in policies and values, and human review for high-risk decisions and documents.

