How Executives Win AI Trust Amid Workforce Change

AI adoption has re-entered the enterprise spotlight. Not long ago, analysts were discussing a potential “AI winter”; then generative AI tools like ChatGPT reset expectations almost overnight.

In a conversation on The Digital Leader Show, Andreas Welsch discussed what leaders often miss: adoption is primarily a people topic, not a technology rollout.

The discussion also connected AI adoption to workforce transformation, trust and explainability, and the governance boundaries created by regulation, privacy, and ethics.

Executive Summary

  • Generative AI revived momentum after “AI winter” concerns.
  • AI adoption succeeds when tied to measurable business KPIs.
  • Trust depends on explainability, controls, and stakeholder involvement.
  • Finance and HR show repeatable enterprise use cases.
  • Privacy, ethics, and regulation define non-negotiable boundaries.

Key Takeaways

  • Welsch noted that media and analysts were discussing a new AI winter—until ChatGPT reignited attention.
  • Generative AI excels at text-based tasks (summarizing, translating, drafting) and can boost creativity.
  • Welsch emphasized that many organizations still chase “shiny objects” instead of starting from business problems.
  • Measurability matters: executives need KPIs and clear returns to secure buy-in.
  • In finance, AI can help speed close and reconciliation through recommendations and matching.
  • In HR, AI can sort and prioritize resumes, helping teams manage hiring surges.
  • Welsch stated AI is a people topic first, requiring change management and transparent stakeholder engagement.

What is AI adoption?

AI adoption is the organizational process of selecting, implementing, and scaling AI capabilities to improve outcomes such as productivity, speed, quality, and business performance. In the enterprise context discussed by Andreas Welsch, AI adoption includes aligning use cases to measurable KPIs, ensuring stakeholder trust through transparency and explainability, and designing controls that meet regulatory and risk requirements. It also requires change management—bringing subject matter experts and business leaders into the process from the beginning rather than treating AI as a technology project delivered “over the fence.”

AI Adoption in Enterprises: From “AI Winter” to Generative AI Momentum

Welsch observed that only months earlier, analysts were discussing the possibility of an “AI winter,” driven by disappointment that AI had not delivered on the promises organizations had made for it over the previous several years.

Then generative AI—especially ChatGPT—created a dramatic shift. The result was renewed executive attention and accelerated experimentation, particularly around text generation, summarization, translation, and coding assistance.

Key Insight: Andreas Welsch highlighted a familiar enterprise pattern: expectations swing quickly. Organizations moved from “AI is underdelivering” to “AI is everywhere” within months, largely because generative AI made capabilities visible and accessible to non-specialists.

Welsch also underscored a fundamental limitation leaders must internalize: tools like ChatGPT are strong at predicting the next word in a sentence. That creates powerful outputs—but it is not the same as human understanding, judgment, or accountability.
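The bare-bones version of that idea can be sketched in a few lines. This toy bigram counter is not how large language models actually work (they use neural networks over vast corpora), but it illustrates the underlying framing Welsch points to: the system picks a statistically likely next word, with no understanding behind it. The corpus and function names here are purely illustrative.

```python
from collections import Counter, defaultdict

# Toy illustration: count which word follows which in a tiny corpus,
# then "predict" by picking the most frequent follower.
corpus = "the close was fast the close was clean the audit was fast".split()

following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the most frequent follower of `word`, or None if unseen."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None
```

Even in this trivial form, the model produces plausible-looking continuations without any notion of correctness, which is precisely why human judgment and accountability remain separate concerns.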

Start With the Business Problem, Not the “Shiny Object”

Welsch advised leaders to reverse a common adoption mistake: selecting a popular AI capability and searching for a problem it might solve. Instead, organizations should start with measurable business goals.

He recommended identifying the business problem first, then defining how success will be measured—ideally through a KPI that can be influenced directly. Clear measurability makes the investment tangible and improves stakeholder buy-in.

Key Insight: Welsch’s guidance focused on executive discipline: define the outcome, define the KPI, and only then decide whether AI is the right tool. Sometimes the best solution may be rules-based automation, RPA, workflow management, or another approach.

This framing also supports governance. When the goal and KPI are explicit, it becomes easier to define controls, monitor performance, and manage risk when models produce false positives or unexpected outputs.
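Once the KPI is explicit, the monitoring side of governance can become very concrete. The sketch below is an assumption-laden illustration (the episode does not prescribe a metric or threshold): it treats match precision as the agreed KPI and flags when observed performance drops below it.

```python
# Hypothetical control check: compare observed precision against an
# agreed KPI threshold. The 95% threshold is illustrative, not sourced.
def precision(true_positives, false_positives):
    """Share of flagged items that were actually correct."""
    total = true_positives + false_positives
    return true_positives / total if total else 0.0

def within_control(true_positives, false_positives, threshold=0.95):
    """True while the model's precision still meets the agreed KPI."""
    return precision(true_positives, false_positives) >= threshold
```

A check like this is only possible because the outcome and KPI were defined up front; without them, "manage risk" has nothing measurable to manage against.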

Repeatable Enterprise Use Cases: Finance Close and HR Hiring

While use cases vary by industry, Welsch pointed to common opportunities across functions where AI can improve speed and reduce manual effort.

Finance: Faster close and smarter reconciliation

In finance, Welsch described AI helping close the books more quickly. Examples included matching incoming payments to open invoices and recommending reconciliations at period end—allowing teams to reduce manual work and focus on cross-checks.
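The episode names the capability, not the algorithm, so the following is a minimal rule-based sketch of what payment-to-invoice matching can look like at its simplest: pair by exact amount, and fall back to a reference-number hint when several invoices share an amount. Field names (`id`, `amount`, `ref`) are assumptions for illustration.

```python
# Illustrative matcher: exact-amount pairing with a reference fallback.
# Real reconciliation systems use richer signals (dates, partial payments,
# learned models); this shows only the basic matching idea.
def match_payments(payments, invoices):
    """Return {payment_id: invoice_id} for confident one-to-one matches."""
    open_invoices = list(invoices)
    matches = {}
    for pay in payments:
        candidates = [inv for inv in open_invoices if inv["amount"] == pay["amount"]]
        if len(candidates) > 1:
            # Disambiguate: keep invoices whose id appears in the payment reference.
            candidates = [inv for inv in candidates if inv["id"] in pay.get("ref", "")]
        if len(candidates) == 1:
            matches[pay["id"]] = candidates[0]["id"]
            open_invoices.remove(candidates[0])
    return matches
```

Anything the matcher cannot resolve confidently stays open for a human, which mirrors the division of labor Welsch describes: automation handles the routine pairs, people handle the cross-checks.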

HR: Resume triage at scale

In HR, Welsch highlighted the hiring surge problem: large employers can receive huge volumes of resumes. AI embedded into HR systems can sort, rank, and prioritize candidates, helping recruiters focus on better matches between resumes and job descriptions.
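As a hedged sketch of the ranking step, one simple baseline (not what any specific HR system uses) is cosine similarity between word-count vectors of the job description and each resume. Production systems use far richer representations; this only illustrates the triage mechanic.

```python
import math
from collections import Counter

def tokens(text):
    """Lowercased word counts for a piece of text."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def rank_resumes(job_description, resumes):
    """Return resumes sorted by similarity to the job description, best first."""
    jd = tokens(job_description)
    return sorted(resumes, key=lambda r: cosine(jd, tokens(r)), reverse=True)
```

Note the governance implication: even this trivial ranker encodes choices (which words matter, how ties break) that recruiters would need visibility into before trusting its ordering.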

Key Insight: Welsch’s examples show where enterprise AI adoption often starts successfully: high-volume, repeatable work inside core functions (finance and HR) where leaders can define measurable throughput, accuracy, and cycle-time improvements.

Trust, Explainability, and the “Black Box” Adoption Barrier

Welsch addressed a recurring enterprise reality: business users may resist AI outputs—especially when AI systems are perceived as “black boxes.” Resistance is often rational, not emotional.

Many employees have performed these tasks for years, know the processes end-to-end, and understand the escalation paths when something breaks. When an AI system is said to make recommendations better than an experienced professional can, users reasonably ask how they will troubleshoot, validate, or remain accountable for its outputs, especially in regulated environments.

Welsch argued trust is essential and is strengthened when systems are explainable. Context on why a prediction was made helps users validate outputs, investigate false positives, and manage controls.

Key Insight: Explainability is not a “nice-to-have” for AI adoption in regulated or high-risk functions. Welsch framed it as a practical requirement: users need to understand key influencing factors so they can troubleshoot, audit, and maintain responsibility when outcomes matter.
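For a concrete sense of what "key influencing factors" can mean in practice, consider the simplest explainable case: a linear scoring model, where each feature's contribution is just its weight times its value. This is an illustrative sketch under that assumption; the episode does not specify a model class, and real systems often need more sophisticated attribution methods.

```python
# Illustrative explainability for a linear score: contribution = weight * value.
def explain_score(weights, features):
    """Return per-feature contributions and the total score."""
    contributions = {name: weights[name] * value for name, value in features.items()}
    return contributions, sum(contributions.values())

def top_factors(contributions, n=2):
    """The n features that moved the score most, by absolute contribution."""
    return sorted(contributions, key=lambda k: abs(contributions[k]), reverse=True)[:n]
```

Surfacing contributions this way gives users exactly what Welsch calls for: a basis for validating a prediction, challenging it, or documenting it for an audit.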

AI Adoption Is a People Topic First

Welsch’s central point was direct: AI is primarily a people topic first and a technology topic second. Successful AI adoption depends on change management more than model selection.

He recommended involving business stakeholders and subject matter experts from the beginning—sharing goals transparently, capturing the challenges users face today, translating those into outcomes and KPIs, and aligning data to the questions the business is trying to answer.

Critically, Welsch cautioned against “throwing solutions over the fence.” Continuous involvement through deployment and refinement increases trust in both the AI system and the leaders deploying it.

Privacy, Regulation, and Ethics: The Non-Negotiable Boundaries

Welsch emphasized that rules and regulations exist for a reason: to protect consumers and align technology usage with societal values. He also referenced his experience growing up in Germany, where data privacy and protection are viewed as deeply important.

In his view, organizations must strike a balance: continuing to innovate and create economic impact while respecting boundary conditions around privacy, regulation, and ethics.

Key Insight: Welsch positioned privacy and ethics as guiding principles, not constraints to bypass. For AI adoption to scale in enterprises, leaders must define what aligns with their values, meet regulatory requirements, and still pursue innovation responsibly.

The Next Wave: Hyper-Personalized Experiences and Immersive Worlds

Looking ahead, Welsch suggested that the next shift may come from the convergence of technologies. While the metaverse was a major trend before generative AI, he described future opportunities when immersive experiences combine with generative AI for dialogue, imagery, video, and voice.

He pointed to the possibility of hyper-personalized experiences that adapt in real time based on what an individual tends to enjoy—creating tailored interactions that feel fundamentally different from today’s digital experiences.

This possibility reinforces the importance of governance: personalization relies on data, which increases the stakes for privacy, consent, and responsible use.

Leadership Implications

  • Anchor AI adoption to KPIs: define measurable outcomes before selecting AI methods or vendors.
  • Design for explainability: ensure users receive contextual reasons for predictions to support controls and audits.
  • Operationalize stakeholder involvement: keep business SMEs engaged from problem definition through refinement.
  • Separate “core differentiation” from “common capability”: build bespoke models where competitive advantage requires it; consume embedded AI elsewhere.
  • Set privacy and ethics as boundary conditions: align innovation efforts with regulation and organizational values.

Why This Conversation Matters

This discussion took place on The Digital Leader Show, a program focused on business, technology, and the humanities for executive audiences navigating digital transformation.

The timing is important for AI leadership: generative AI has increased executive curiosity and accelerated experimentation, even as enterprises face workforce disruption, shifting work models, and heightened scrutiny on privacy and risk.

Welsch’s perspective connects strategy and execution. His emphasis on change management, explainability, and measurable outcomes reflects the realities leaders face when scaling AI beyond pilots and into enterprise operations.

Conclusion

AI adoption is accelerating again, but the enterprise success factors remain consistent: start with business problems, measure outcomes, involve stakeholders, and build trust through transparency and explainability.

As Andreas Welsch emphasized, responsible adoption requires governance boundaries—privacy, ethics, and regulation—while still enabling innovation and workforce transformation at scale.

FAQ

What is the biggest barrier to AI adoption in enterprises?

The biggest barrier is often trust: business users need transparency, controls, and explainability to rely on AI outputs. Enterprises also face regulatory and accountability constraints, making “black box” models harder to deploy in high-risk workflows.

How should leaders prioritize AI adoption use cases?

Leaders should prioritize AI adoption by starting with a measurable business problem and a KPI that can be influenced. Andreas Welsch advised avoiding “shiny object” selection and instead confirming whether AI, automation rules, or workflow tools best fit.

Which enterprise functions see early AI adoption success?

Finance and HR frequently see early AI adoption success because they contain high-volume, repeatable work. Welsch cited finance close and reconciliation recommendations, and HR resume sorting and ranking, as practical starting points for measurable gains.

Why does explainability matter for AI governance?

Explainability matters because enterprises must troubleshoot decisions, manage false positives, and maintain accountability under controls and audits. Welsch explained that showing influencing factors behind predictions builds trust and supports risk management, especially in regulated industries.

Is AI adoption mainly a technology program or a change program?

AI adoption is mainly a change program. Welsch stated it is a people topic first and a technology topic second, requiring stakeholder involvement, transparent communication, and continuous refinement so business users do not receive AI solutions “thrown over the fence.”

How should executives think about privacy in AI adoption?

Executives should treat privacy as a boundary condition for responsible AI adoption, not a hurdle to bypass. Welsch emphasized that rules and regulations exist to protect consumers and align technology use with societal values, alongside ethical decision-making.

What role did generative AI play in changing executive interest?

Generative AI reignited executive interest by making AI capabilities visible and usable for everyday tasks. Welsch noted that analysts were discussing an AI winter until ChatGPT created renewed momentum, particularly around text generation, summarization, and translation.

Should organizations build AI models or consume embedded AI features?

Organizations may do both, depending on competitive needs. Welsch suggested building custom models where AI is truly differentiating, while consuming embedded AI features for common enterprise tasks to avoid over-investing scarce data science resources unnecessarily.

What is the connection between AI adoption and workforce transformation?

AI adoption changes how work is performed by reducing manual tasks, reshaping roles, and requiring new oversight and skills. Welsch’s focus on change management and trust reflects workforce transformation realities, where adoption succeeds only when people are brought along.