

How Leaders Turn Hype Into Measurable Outcomes
Generative AI has moved from “interesting demo” to board-level agenda item, forcing leaders to revisit AI strategy amid intense economic pressure and a stream of rapid product releases from Google and Microsoft.
In an interview with LinkedIn’s Ray Villalobos, Andreas Welsch, an AI leadership expert and Chief AI Strategist for Intelligence Briefing, described how to align AI strategy to business outcomes, avoid common failure modes, and operationalize AI adoption through workforce enablement.
The conversation also addressed reliability gaps in large language models (LLMs), emerging questions around intellectual property and privacy, and the role of regulation—topics that increasingly define AI governance and workforce transformation.
Executive Summary
- AI strategy should start with business strategy, not tools.
- Generative AI value concentrates where language moves information.
- LLMs are not designed for factual accuracy; human review remains essential.
- IP, privacy, and bias concerns are now central to governance discussions.
- Adoption accelerates with training, “multipliers,” and cross-functional communities.
Key Takeaways
- Andreas Welsch argues the best time to define an AI strategy was years ago; the next best time is now.
- AI strategy fails when treated as a technology exercise instead of a business-aligned initiative with measurable outcomes.
- Generative AI’s near-term impact is strongest in functions centered on language: sales, marketing, HR, and support.
- Enterprises should anticipate questions on cost, savings, and measurable impact—not novelty.
- Welsch highlights IP and copyright disputes (e.g., Getty vs. Stability AI) as signals that governance will tighten.
- Regulatory approaches differ: Europe’s rules-first posture contrasts with voluntary risk frameworks in the U.S. (e.g., NIST).
- Workforce transformation depends on transparency and bringing business stakeholders along throughout delivery—not only at kickoff.
What is AI strategy?
AI strategy is an organization’s plan for using artificial intelligence to advance business objectives with measurable outcomes. In Andreas Welsch’s framing, it cannot stand alone as a technology roadmap. It must align to business strategy first—defining what the business intends to achieve, how success is measured through KPIs, and where AI can influence those KPIs through automation, insights, personalization, or decision support. Effective AI strategy also accounts for adoption: enabling stakeholders, managing risk, and setting expectations about what AI can and cannot reliably do.
Why this conversation matters
The interview took place in a moment of sharp contrast: widespread layoffs and “efficiency” mandates alongside explosive demand for generative AI tools such as ChatGPT and image generation systems. That tension defines today’s leadership challenge: building AI strategy that is both responsible and ROI-driven.
The audience for the conversation—technical professionals and business leaders—mirrors the cross-functional reality of AI adoption. Welsch’s emphasis on stakeholder enablement, governance considerations, and measurable value directly supports executives navigating workforce transformation, operational change, and AI risk.
AI strategy under “the year of efficiency”: why timing still matters
Mark Zuckerberg’s “year of efficiency” messaging set the tone for many organizations: lean teams, heightened scrutiny, and fewer tolerated experiments. Welsch acknowledges the human cost of layoffs, calling the phrasing harsh because it affects real colleagues and friends.
Even so, Welsch maintains that pausing AI strategy is a mistake. In his view, AI can help organizations become more efficient and effective—through faster analysis, automated decisions, improved personalization, and better access to information. The prerequisite is alignment and measurement.
Key Insight: Andreas Welsch stresses that AI strategy must not be developed in isolation. When leaders treat AI as a technology initiative rather than a business-aligned investment, projects struggle to secure buy-in and fail to demonstrate impact. Alignment to business strategy and KPIs makes outcomes measurable and defensible—even in efficiency-driven environments.
Align AI strategy with business strategy (and make it measurable)
Welsch’s recurring theme is straightforward: business strategy comes first. Leaders should define objectives, identify the KPIs that track progress, and only then ask how AI can influence those KPIs.
In the interview, Welsch describes the core business lens as reducing cost and risk or increasing revenue. Whether the scenario is predictive maintenance for heavy machinery or optimizing product bundles to improve margin, the question from stakeholders remains consistent: what does it cost, and what does it return?
He also emphasizes that AI initiatives are not sprints. They require long-term investment, iteration, and consistent measurement to validate value and justify scaling.
Key Insight: Welsch highlights a universal ROI question across contexts—from a Fortune 500 CFO to an individual evaluating an AI-enabled editing tool: “How much will it cost, and what will it return?” This shared decision logic gives leaders a common language for prioritizing AI use cases and funding responsible adoption.
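The cost-versus-return question can be made concrete with back-of-the-envelope arithmetic. The sketch below is illustrative only; the dollar figures are invented and the formula is a deliberately simple net-gain-over-cost ratio, not a prescribed financial model.

```python
def simple_roi(annual_benefit: float, annual_cost: float,
               upfront_cost: float, years: int) -> float:
    """Net gain over the horizon divided by total cost (a rough ROI ratio)."""
    total_cost = upfront_cost + annual_cost * years
    total_benefit = annual_benefit * years
    return (total_benefit - total_cost) / total_cost

# Hypothetical use case: $300k/yr saved, $100k/yr run cost,
# $250k to build, evaluated over a 3-year horizon.
print(round(simple_roi(300_000, 100_000, 250_000, 3), 2))  # 0.64
```

Framing every candidate use case this way gives stakeholders the common cost-and-return language Welsch describes.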
Where generative AI creates value across departments
Audience polling during the show placed IT/engineering at the top for expected generative AI benefits, with sales and marketing close behind. Welsch sees logic in both results.
On the IT side, chatbot help desks and documentation scenarios are immediate opportunities. On the commercial side, Welsch points to the “generative AI stack” diagrams circulating in the ecosystem (including those attributed to Madrona) where many application-layer products cluster around sales and marketing.
He cites common examples: generating or expanding copy, summarizing meeting notes and calls, producing follow-up materials, and distilling information. Because business runs on information transfer—and information transfer runs on language—LLMs become particularly relevant.
Key Insight: Welsch argues that the most durable early wins for generative AI come from language-heavy workflows: creating, summarizing, translating, and conveying information. This makes functions like sales, marketing, HR, and support attractive starting points—provided leaders define measurable outcomes beyond “cool demos.”
Use case inspiration: McKinsey’s functional map
Welsch references a McKinsey chart that maps potential generative AI applications across functions. Examples discussed include HR using generative AI to draft interview questions (with careful prompt design to avoid generic “fluff”).
He also highlights a less obvious opportunity: using generative AI to create large volumes of training data variations from text prompts, which can accelerate model development in scenarios such as defect recognition or other detection tasks.
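The fan-out pattern Welsch describes, turning one seed description into many labeled training-data variations, can be sketched as follows. The prompt slots and the defect-recognition scenario are hypothetical; a real pipeline would pass each generated prompt to an actual text-to-image or LLM API.

```python
import itertools

def expand_prompt(base: str, attributes: dict[str, list[str]]) -> list[str]:
    """Fan one seed description out into many prompt variations.

    `attributes` maps a slot name (e.g. "lighting") to candidate values;
    the cross product yields one prompt per combination.
    """
    keys = list(attributes)
    prompts = []
    for combo in itertools.product(*(attributes[k] for k in keys)):
        details = ", ".join(f"{k}: {v}" for k, v in zip(keys, combo))
        prompts.append(f"{base} ({details})")
    return prompts

# Hypothetical defect-recognition scenario: vary conditions to broaden coverage.
variants = expand_prompt(
    "photo of a scratched metal housing",
    {
        "lighting": ["bright", "dim"],
        "angle": ["top-down", "45 degrees"],
        "background": ["conveyor belt", "workbench"],
    },
)
print(len(variants))  # 2 * 2 * 2 = 8 prompts to feed an image generator
```

Each of the eight prompts would then produce a distinct synthetic training image, multiplying a small set of seed descriptions into a broader dataset.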
Reliability and “hallucinations”: setting expectations for LLMs
As adoption grows, organizations confront a recurring issue: LLM outputs can be plausible and wrong. Welsch explains that this behavior is not accidental; large language models are designed to predict the next word that best fits the sequence—not to guarantee factual accuracy.
He advises leaders to be explicit with stakeholders about this limitation. For near-term enterprise use, safety often comes from selecting workflows where factual precision is less critical (e.g., rewriting or drafting variations) and keeping a human in the loop to verify outputs before publication or downstream use.
He also suggests combining LLMs with other techniques, such as retrieving search results first and using generative AI to present the information more clearly—a practical pattern while reliability improves.
Key Insight: Welsch recommends treating today’s generative AI as early in maturity for many high-stakes uses. Leaders can still create value by pairing LLMs with retrieval and by designing workflows that require verification. The governance move is not to ban the tools, but to control where and how outputs are trusted.
Ethics, privacy, and intellectual property: governance is no longer optional
The interview highlights how quickly governance questions are moving into the mainstream. Welsch points to ethics, intellectual property, and privacy concerns becoming more visible as generative tools reach the general public.
He calls out public debates around bias in datasets (including issues such as skin tones and over-sexualization risks in image generation), and he references the lawsuit filed by Getty against Stability AI. In that case, Getty alleges Stability AI used millions of copyrighted images and metadata to train its image generation model.
For leaders, these signals matter because they shape enterprise risk: what training data is permissible, who owns outputs, and how organizations protect personal data in AI-enabled workflows.
Regulation: balancing innovation and protection
Welsch brings a transatlantic perspective shaped by growing up in Germany and living in the United States for more than a decade. He associates the European approach with strong privacy norms and regulation, and he views the U.S. environment as enabling fast iteration and innovation.
He notes the EU AI Act as well-intended toward protecting individuals, particularly in decisions that materially affect lives. He contrasts that approach with the U.S. NIST AI Risk Management Framework, which outlines important considerations but is voluntary in adoption.
Welsch argues that leaders should expect more iteration in the regulatory balance. Because AI can automate impactful decisions at speed, governance needs to evolve beyond either strict enforcement challenges or purely voluntary guidelines.
AI adoption doesn’t happen by memo: enabling stakeholders end-to-end
Welsch emphasizes that resistance is often natural, especially among leaders who have not grown up in technology. His advice: involve stakeholders throughout the project, remain transparent about goals, and ask for input and feedback continuously—not only at kickoff.
When AI efforts fail, Welsch cautions against blaming business partners as “not ready.” Instead, leaders should build adoption intentionally through three practices:
- Mandatory training for selected colleagues with interest in technology to create scalable baseline understanding.
- Hands-on greenhouse projects with domain experts to co-define the problem, measures, and prototype outcomes.
- A multiplier community that shares lessons, spreads AI literacy, and generates new use cases across functions.
He also notes a limitation of training alone: without an immediate use case, knowledge retention is low. Projects and communities turn learning into operational capability.
Key Insight: Welsch frames successful AI adoption as organizational enablement. Training creates baseline literacy, domain-expert collaboration creates deep buy-in, and communities create scale through “multipliers.” This is workforce transformation in practical terms: enabling people to identify AI-suitable problems and partner with technical teams to solve them.
Startups vs. incumbents: different advantages, same AI strategy logic
In the interview, Welsch argues that the AI strategy logic is universal: define business strategy, establish objectives and KPIs, then map AI to measurable outcomes.
Where startups differ, he says, is agility and architecture. Startups can build cloud-native, data-centric, and AI-centric systems from the beginning, designing business models around AI capabilities. Incumbents may have more data at scale, but they often face legacy constraints and slower change management.
He cites examples of AI-driven personalization and recommendations in streaming and commerce contexts (e.g., recommendation-driven experiences and end-of-year “summary” experiences) as illustrations of how AI can become foundational to a product’s value.
Workforce transformation: what happens to jobs and skills?
The conversation returns to a central leadership question: impact on the job market. Welsch’s perspective is that individuals become more resilient by learning how to use AI to become more effective—planning meetings, organizing information, drafting communications, and iterating on ideas faster.
He suggests that productivity expectations will shift as AI capabilities become normal. As more work becomes easier to draft, summarize, or repurpose, the differentiator may increasingly be judgment, domain expertise, and oversight.
Welsch also raises a cultural question: if AI commoditizes outputs, will organizations and markets assign higher value to work explicitly created by humans? This signals a broader workforce transformation issue: leaders must redefine quality, authorship, and accountability in AI-augmented work.
Leadership Implications
- Anchor AI governance in business outcomes: require clear objectives, KPIs, and expected ROI before scaling use cases.
- Design workflows for verification: position LLMs as drafting and summarization tools with human review for accuracy.
- Operationalize stakeholder enablement: combine training with real projects and build multiplier communities across departments.
- Prepare for IP and privacy scrutiny: treat data sourcing, ownership, and output rights as governance requirements, not legal footnotes.
- Monitor regulatory divergence: track EU AI Act developments and voluntary frameworks such as NIST to anticipate compliance impacts.
Why this matters for AI leadership and adoption
This interview illustrates why AI leadership has shifted from experimentation to operating model design. Public releases from Google (Bard) and Microsoft (AI-powered Bing and Edge positioned as “copilots”) amplify executive expectations, while reliability and IP questions amplify risk.
Welsch’s broader work—daily guidance on AI leadership and a bi-weekly livestream (“What’s the Buzz”) featuring practitioners and academics—aligns with the core need expressed in the conversation: making AI tangible for business stakeholders and turning hype into outcomes.
For leaders responsible for workforce transformation, the most actionable message is that AI adoption is an enablement program. Tools matter, but governance, workflow design, and stakeholder buy-in determine whether the AI strategy becomes a durable capability.
FAQ: AI strategy, governance, and workforce transformation
1) How should executives start an AI strategy in 2026 if the organization is behind?
Start by aligning AI strategy to business strategy, then define objectives and KPIs before selecting tools. Andreas Welsch emphasizes that the “next best time is now,” but success depends on measurable outcomes and stakeholder enablement, not experimentation alone.
Leaders should prioritize use cases where AI can reduce cost and risk or increase revenue, then build adoption through training and cross-functional participation.
2) Which departments benefit most from generative AI in the near term?
Departments that move and transform information through language see the fastest gains: sales, marketing, HR, support, and IT documentation. Welsch links value to workflows like summarizing calls, drafting copy, and improving knowledge access—areas where LLMs fit naturally.
IT and engineering can also benefit through help desk chatbots and documentation assistance.
3) Are large language models reliable enough for enterprise use?
LLMs are not designed to guarantee factual accuracy; they predict likely next words, which can produce confident errors. Welsch recommends using them where drafting and summarization are helpful, pairing them with retrieval methods, and keeping humans in the loop to verify outputs.
This is a workflow and governance design issue as much as a model issue.
4) What does “human in the loop” mean for generative AI governance?
Human-in-the-loop means requiring human review before AI outputs are published or used in downstream decisions. Welsch suggests this especially for early-stage generative AI, where hallucinations and IP risks exist. Review processes become part of AI governance and quality control.
Leaders should define where verification is mandatory based on risk and impact.
5) How can leaders overcome business resistance to AI adoption?
Resistance declines when leaders involve stakeholders throughout delivery, communicate transparently, and demonstrate personal relevance. Welsch advises building AI adoption through mandatory training, hands-on collaboration with domain experts, and multiplier communities that spread practical understanding across the business.
This turns stakeholders into supporters rather than passive recipients.
6) How should an enterprise balance innovation with AI regulation?
Balance requires a risk-based approach that protects individuals while enabling iteration. Welsch contrasts the EU AI Act’s stronger regulatory posture with the voluntary NIST AI Risk Management Framework. Leaders should track both, anticipate compliance needs, and design governance that scales.
Regulation will likely continue evolving as AI automates more consequential decisions.
7) What are the biggest governance issues leaders should watch in generative AI?
Intellectual property, privacy, and bias are central governance issues as generative AI scales. Welsch references debates about dataset bias and privacy, plus the Getty vs. Stability AI lawsuit, as evidence that ownership and permissible training data will face increasing scrutiny.
Organizations should expect more legal and policy clarity—but not immediately.
8) How should startups think about AI strategy differently from incumbents?
Startups can build cloud-native and AI-centric from a blank sheet, while incumbents often retrofit AI into legacy stacks. Welsch notes startups’ agility as a major advantage, but he maintains the core AI strategy logic remains the same: align to business objectives and measure outcomes.
Incumbents may counterbalance with larger-scale data access.
9) Will generative AI replace jobs or change how work is done?
Generative AI is likely to change work by increasing productivity expectations and shifting tasks toward oversight, iteration, and judgment. Welsch encourages individuals to learn how to use AI for planning, communication, and ideation. Workforce transformation depends on adoption and skills development.
Leaders should plan for new workflow norms rather than simplistic job-replacement narratives.
10) What does success look like for an AI Center of Excellence (CoE)?
A successful AI Center of Excellence helps teams identify valuable use cases, prototype solutions, measure impact, and earn buy-in to scale. Welsch describes CoE work as supporting engineers and customers by clarifying where AI fits, what makes use cases valuable, and how to assess outcomes.
CoEs also accelerate enablement by producing multipliers across the organization.
Conclusion
The interview reinforces a core message for executives: AI strategy is not a tool decision, but a business-aligned operating decision. Generative AI can produce fast wins in language-centric workflows, but reliability, IP, privacy, and regulation require deliberate governance and workflow design.
Andreas Welsch’s guidance emphasizes measurable outcomes, stakeholder enablement, and organizational multipliers—practical ingredients for sustainable AI adoption and workforce transformation.

