AI Leadership: Why Enterprises Struggle to Drive Value with AI

AI leadership is being tested as enterprises face increasing scrutiny over AI ROI, stalled pilots, and rising operational complexity.

In an InformationWeek article on why enterprises struggle to drive value with AI, multiple experts describe a consistent pattern: organizations invest heavily, but foundational strategy, measurement, governance, and adoption often lag behind.

Andreas Welsch, founder and chief AI strategist at Intelligence Briefing and an expert on AI leadership, explains why many teams are now caught off guard: early experimentation was funded without any need to prove return, but expectations have changed.

Original source: Why Enterprises Struggle to Drive Value with AI

Executive Summary

  • AI ROI is hard to measure without clear baselines and defined outcomes.
  • Many GenAI initiatives plateau because use cases are not transformative or production-ready.
  • Maintenance, monitoring, and human-AI learning curves change ROI over time.
  • Welsch urges governance that assesses business value before building and revisits progress regularly.
  • Leaders can unlock value by using existing AI features before building from scratch.

Key Takeaways

  • Welsch says organizations that stayed in “exploration” are now exposed when returns cannot be measured.
  • Welsch recommends a formal process and governance to assess business value and measurable return before starting.
  • Welsch emphasizes securing stakeholder buy-in and setting a regular cadence to measure progress.
  • Welsch advises leaders to ensure continued support—or stop the project—based on measurable progress.
  • Welsch urges assessment of existing applications to identify unused AI capabilities.
  • Welsch cautions that organizations do not need to build every AI-enabled application from scratch.

What is AI leadership?

AI leadership is the executive capability to translate AI investment into measurable business outcomes through clear use cases, governance, and adoption planning. In the InformationWeek coverage, this includes establishing metrics up front, aligning business and technical teams on what “success” means, and maintaining accountability after pilots move toward production. It also includes making deliberate decisions to continue, adjust, or stop initiatives based on evidence, not hype.

AI leadership and the ROI reset: from experimentation to accountability

Andreas Welsch explains that early in the GenAI hype cycle, organizations moved quickly to experiment. Budgets were consolidated to explore possibilities, and initiatives often did not need to deliver ROI immediately.

Welsch says the environment has changed. Organizations that remained stuck in exploration—without assessing business value first—are now “caught off guard” when a use case fails to deliver measurable return. His guidance: establish a formal process and governance that assesses value and measurable return before starting, secure stakeholder buy-in, and measure progress on a regular cadence to decide whether to continue or stop.

Key Insight: Welsch’s core AI leadership message is that governance must start before the build. Teams that treat experimentation as strategy risk getting trapped in an exploration loop, then losing executive support when returns cannot be demonstrated. Formal value assessment and ongoing checkpoints reduce wasted effort.

A practical shortcut Welsch calls out: use what is already available

Welsch also advises leaders to assess existing applications. Many organizations already have AI capabilities embedded in tools they own but are not yet using.

His implication for executive decision-makers is straightforward: not every capability needs to be built from scratch if value can be unlocked by enabling existing features with clear measures of return.

Why this media coverage matters

This InformationWeek coverage is aimed at technology and business leaders navigating AI investment decisions, including CIOs, CTOs, and other executives accountable for results. It surfaces a recurring enterprise reality: AI is “virtually everywhere,” yet organizations often have not completed the foundational work required to convert capability into durable outcomes.

For AI leadership and workforce transformation, the relevance is direct. The article highlights measurement gaps, governance needs, engineering readiness, and adoption barriers, while Welsch’s perspective addresses the executive inflection point: shifting from open-ended exploration to disciplined, measurable value creation with formal governance and stop/go decisions.

Leadership Implications

  • Define measurable outcomes before build: Adopt Welsch’s recommendation to assess business value and measurable return prior to starting.
  • Create a governance cadence: Set regular checkpoints to measure progress, renew sponsorship, or stop initiatives that cannot demonstrate return.
  • Prioritize adoption and training: Incorporate training and incentives so teams can interpret outputs and change workflows, not just access tools.
  • Audit current application portfolios: Follow Welsch’s guidance to identify unused AI capabilities already available in existing tools.
  • Align product, engineering, and functions: Use cross-functional teams to connect use cases to operational systems, data readiness, and business value.

Conclusion

Enterprises are not struggling with AI for lack of interest or funding; they are struggling because ROI requires discipline: baselines, metrics, governance, production-grade delivery, and adoption planning. The InformationWeek reporting reinforces that value creation is not automatic, especially for GenAI.

Welsch’s AI leadership guidance points to a pragmatic path forward: assess business value before starting, secure stakeholder buy-in, measure progress on a regular cadence, stop what does not deliver, and enable AI capabilities already present in existing applications. That shift—from experimentation to accountable execution—supports stronger workforce transformation outcomes and more credible AI ROI.

FAQ

What does Andreas Welsch recommend for improving AI ROI?

Andreas Welsch recommends setting up a formal process and governance to assess business value and measurable return before starting an AI initiative. He also advises securing stakeholder buy-in and creating a regular cadence to measure progress and stop projects that underperform.

Welsch also suggests auditing existing applications to find AI capabilities already available but unused.

What governance actions help prevent “stuck in experimentation” AI programs?

Governance actions that prevent “stuck in experimentation” programs include value assessment before build, stakeholder buy-in, and regular progress reviews with explicit continue/stop decisions. These AI governance steps help leaders avoid funding indefinite exploration without measurable returns.

Welsch specifically calls for a formal process and a cadence to measure progress and stop projects when needed.

How can leaders avoid building every AI app from scratch?

Leaders can avoid building every AI app from scratch by assessing existing applications to identify AI capabilities that are already included but not yet activated or adopted. This approach can shorten time to value and simplify governance by leveraging known platforms.

Welsch explicitly advises organizations to assess existing applications and use available AI capabilities where possible.

About the Author