Practical Lessons On Data, Mindset, And Automation For AI Adoption

Enterprise leaders are under pressure to turn AI from a headline into operational value. In a live-stream conversation, Andreas Welsch explained what consistently helps organizations move from early experimentation to sustainable AI adoption—and what commonly blocks progress.

Welsch’s perspective is grounded in years of working with customers on the business realities of AI: defining metrics, aligning stakeholders, and choosing the right tool for the problem (including when AI is not required). The conversation also covered workforce transformation topics such as AI literacy, internal communities, and upskilling business professionals.

Executive Summary

  • Start with business metrics, baselines, and stakeholder alignment before selecting AI technologies.
  • ERP data is often a strong starting point: structured, transactional, and easier to operationalize.
  • Build AI literacy through small experiments, communities of practice, and real workflow exposure.
  • Use automation and RPA to remove repetitive steps; use AI to interpret data and recommend actions.
  • Govern privacy, ethics, and responsible use from the beginning and continuously thereafter.

Key Takeaways

  • AI work starts with a business question. Welsch emphasized defining the status quo and the metrics that matter.
  • AI mindset includes knowing when not to use AI. Many problems can be solved faster with rules, digitization, or basic analytics.
  • Structured ERP data is a practical launchpad. Transactional data is typically clearer and easier to work with than unstructured sources.
  • Cross-functional stakeholders improve solution quality. Different roles see the same problem differently, improving outcomes and buy-in.
  • Communities accelerate AI literacy. Sharing successes and failures helps organizations avoid repeating mistakes.
  • Reuse what already exists. Pre-built services (e.g., entity extraction, ticket categorization) reduce reinvention and speed time-to-value.
  • Privacy and responsible AI cannot be bolted on later. Welsch advised introducing policies at the start and reviewing continuously.

What is AI adoption?

AI adoption is the practical integration of AI capabilities into business workflows to improve measurable outcomes—such as efficiency, accuracy, speed, or service quality—without treating AI as the goal. In this conversation, Andreas Welsch framed adoption as a combination of business clarity (metrics and baselines), organizational readiness (AI literacy and stakeholder alignment), and operational execution (data availability, governance, and change management). Successful adoption also includes selecting the simplest effective approach, which may be rules, automation, or digitization rather than AI.

Why ERP data often jump-starts AI adoption

Welsch agreed with the observation that ERP and asset management systems can be a “rich pool” for early AI work. Much of this data is transactional and structured, making it easier to understand and prepare than semi-structured or unstructured sources such as spreadsheets, PDFs, or images.

He noted that “structured” does not automatically mean “ready.” Organizations may still need preparation and cleansing, and completeness depends on the use case. Still, ERP data typically provides a clearer starting point for teams that need early wins.

Welsch also described how IoT data can become valuable when connected back into core systems. He used the cold-chain example: sensors monitoring temperature during transit can trigger alerts while goods are in motion, not after products arrive degraded.

Key Insight: ERP data reduces early friction because it is structured and transactional, enabling faster prototyping. Welsch highlighted the next step: bridging physical signals (e.g., sensors) with digital workflows so decisions happen during operations, not after the fact.
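The cold-chain idea above can be sketched in a few lines. This is a minimal illustration, not an implementation from the conversation: the temperature bounds, field names, and `Reading` record are all hypothetical placeholders for what a real sensor feed and core system would supply.

```python
from dataclasses import dataclass

@dataclass
class Reading:
    shipment_id: str
    temp_c: float     # sensor temperature in Celsius
    in_transit: bool  # shipment status pulled from the core system

def cold_chain_alerts(readings, low=2.0, high=8.0):
    """Flag readings that breach the allowed range while goods are moving.

    `low`/`high` are illustrative cold-chain bounds, not values from the talk.
    """
    return [
        r for r in readings
        if r.in_transit and not (low <= r.temp_c <= high)
    ]

readings = [
    Reading("S-001", 5.1, True),
    Reading("S-001", 9.4, True),   # breach while in transit -> alert
    Reading("S-002", 12.0, False), # already delivered -> no alert
]

alerts = cold_chain_alerts(readings)
```

The point of the sketch is the join: the alert fires only when the physical signal (temperature) is combined with workflow state (`in_transit`) from the digital system, so action can happen while goods are still moving.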

AI adoption starts with metrics and a shared baseline

Welsch’s first move in an engagement is not model selection. It is defining what “better” means. He described the need to identify the metrics that matter and establish a status quo baseline—because even the baseline can be unclear.

He also recommended involving stakeholders with different perspectives on the same process. This reduces the risk of treating AI as only an IT project or only a manufacturing project. It strengthens problem definition and builds organizational understanding of how AI projects succeed at scale.

A consistent theme was realism: the organization must know the trade-offs, what can be improved, and how performance will be evaluated before any AI “hype” enters the discussion.

Key Insight: Welsch emphasized that the hardest part can be agreeing on what the problem is and how success is measured. Clear metrics plus cross-functional stakeholders create the foundation for governance, prioritization, and executive sponsorship.

AI mindset: literacy, data discipline, and knowing when AI is unnecessary

Welsch described an “AI mindset” as general awareness of what AI can do in business terms: recommendations, classification, categorization, and proposals that make roles more efficient. For technical teams, the mindset also includes understanding concrete technologies and prerequisites.

A non-negotiable element is data discipline. Welsch stated that AI work “starts with data,” and that a basic understanding of this dependency is essential regardless of role.

He also cautioned that “not every business problem is an AI problem.” Organizations can waste time and resources chasing AI when rules, analytics, or digitization solve the issue faster and more robustly.

Examples from the conversation included paper-based workflows and finance processes where customers send payment advice that is still not fully digitized. In many cases, basic digitization reduces friction before AI is even necessary.

Key Insight: AI literacy is not only about capabilities; it is also about restraint. Welsch argued that leaders should normalize a toolkit approach—choose rules, digitization, analytics, automation, or AI based on speed, robustness, and available resources.

How to gain executive sponsorship: hard facts, process discovery, and wasted effort

In response to a question about sponsorship, Welsch pointed to “hard facts” as the strongest driver. Leaders need evidence of a problem and its measurable impact—time, money, and resources spent on activities that could be automated.

He recommended process intelligence, process discovery, and analysis as precursor steps. These approaches reveal what is actually happening inside workflows, turning gut feel into quantified opportunity.

This also supports prioritization: if the organization can show the cost of rework and manual handling, the business case for automation or AI becomes easier to communicate to executive decision-makers.

AI upskilling and workforce transformation: enable the people closest to the process

Welsch argued that AI mindset and literacy should extend beyond a single C-level role. People closest to day-to-day processes know where workflows break, where tasks are tedious, and where work is duplicated across systems.

He encouraged empowering business analysts and business users to identify opportunities and surface them. Governance and vendor selection may be IT-led, but opportunity discovery often comes from the frontline.

The conversation reinforced that many AI efforts succeed when domain experts and technical experts learn together. Welsch described this as a “greenhouse” dynamic: business users develop better data questions while data scientists learn operational context and see how production deployment changes outcomes.

Welsch also noted that not everyone needs to become a data scientist to use AI. Leaders should check the feature lists of existing products, because AI capabilities may already be available without launching a large new program.

Building an internal AI community: newsletters, webinars, and ambassador networks

Welsch recommended creating community mechanisms to accelerate learning. In his experience, community exchange helps teams share what worked, what did not, and how obstacles were resolved.

In one example from a large engineering unit, community members included product managers, product owners, data scientists, and engineers across domains such as asset management, supply chain, and finance. The aim was to spread implementation learning across teams rather than isolating knowledge in one project.

On mechanics, Welsch described a simple structure: a monthly newsletter to highlight what “AI ambassadors” should know, plus webinars or meetings where technology experts present capabilities and examples aligned to business workflows.

He also observed that some sessions can be agenda-light, enabling peer sharing and real problem-solving—especially around common constraints like data access and cleansing.

Where enterprise AI projects get stuck: data gaps, unstructured ground truth, and operationalization

Welsch identified data as a recurring bottleneck. Even when processes are “standard,” real-world business nuance can reduce data completeness or consistency, making it hard to build a reliable feature set.

He highlighted additional complexity with unstructured data: access, labeling, and “ground truth” required to evaluate model performance. Leaders must also consider how models are updated and operated in production, including feedback loops and retraining.

The conversation also surfaced a common sequencing challenge: organizations want AI outcomes before investing in data capture, but data capture is required to produce AI outcomes. Welsch’s guidance was pragmatic—avoid pushing the work out indefinitely, and plan for accumulation periods when data is insufficient.

Key Insight: Data gaps and unstructured labeling challenges often stop AI progress after initial enthusiasm. Welsch advised realism: accumulate data when needed, use what is already available, and build operational feedback loops so early deployments improve rather than stall.
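The feedback-loop idea can be made concrete with a small sketch. This is an illustrative pattern, not Welsch’s design: the `retrain_threshold` and the retrain trigger are arbitrary assumptions standing in for whatever accumulation policy a team chooses.

```python
class FeedbackLoop:
    """Accumulate user corrections until retraining is worthwhile.

    A correction pairs the model's output with what the user said it
    should have been, yielding labeled training data over time.
    """
    def __init__(self, retrain_threshold=100):
        self.corrections = []
        self.retrain_threshold = retrain_threshold
        self.retrain_requested = False

    def record(self, model_output, user_correction):
        if model_output != user_correction:
            self.corrections.append((model_output, user_correction))
        if len(self.corrections) >= self.retrain_threshold:
            self.retrain_requested = True  # hand off to the training pipeline

loop = FeedbackLoop(retrain_threshold=2)
loop.record("invoice", "invoice")      # correct prediction, nothing stored
loop.record("invoice", "credit_note")  # stored as labeled training data
loop.record("receipt", "invoice")      # second correction requests retraining
```

The design choice worth noting is that the deployment itself becomes the labeling mechanism: early models do not need to be perfect, they need a channel for corrections to flow back.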

RPA vs. AI: when to automate clicks and when to interpret information

Welsch distinguished AI from robotic process automation (RPA) in practical workflow terms. AI is strong at identifying patterns and generating predictions or outputs—especially when extracting information from unstructured data such as PDFs.

RPA, in contrast, is suited to repeatable, mundane steps: logging into portals, downloading files, saving documents, copying values between systems, and submitting forms. These steps may be “non-value-adding” but still necessary.

The enterprise value comes from combination. RPA can move documents through the workflow, while AI extracts and interprets information and returns structured outputs to the process. Welsch also noted that these capabilities have become easier to use: developers and bot builders can call AI services via APIs without being data scientists.
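The division of labor above can be sketched as a tiny pipeline. Everything here is hypothetical: the RPA steps are stubbed out, and `extract_invoice_fields` stands in for an AI extraction service that a real bot would call over an API; a regex keeps the sketch self-contained and runnable.

```python
import re

def extract_invoice_fields(text: str) -> dict:
    """Stand-in for an AI extraction service called via an API.

    A real deployment would send the document to a hosted model; the
    regexes below only mimic the structured output such a service returns.
    """
    vendor = re.search(r"Vendor:\s*(\w+)", text)
    total = re.search(r"Total:\s*([\d.]+)", text)
    return {
        "vendor": vendor.group(1) if vendor else None,
        "total": float(total.group(1)) if total else None,
    }

def process_document(raw_text: str) -> dict:
    # RPA-style steps (log into portal, download file) are omitted; the
    # bot's job is to move the document in and write structured fields
    # back into the target system.
    fields = extract_invoice_fields(raw_text)
    fields["status"] = "ready_to_post" if fields["total"] is not None else "needs_review"
    return fields

result = process_document("Vendor: Acme\nTotal: 1250.00")
```

The shape matches the conversation’s point: automation handles the mechanical movement of documents, the AI call turns unstructured text into structured fields, and neither step requires the bot builder to be a data scientist.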

Responsible AI and privacy: introduce policies early and review continuously

On corporate privacy policies, Welsch’s guidance was direct: introduce them at the start of a project, before baseline and metric definition, and then review continuously. Privacy, ethics, and responsible AI should not be treated as an “afterthought” once a model is in production.

He referenced the importance of data protection requirements such as those associated with the European Union’s General Data Protection Regulation (GDPR) and the societal benefit of strong protections.

The operational implication is that governance is not a one-time checklist. Monitoring and review should be part of how AI systems are evaluated and managed over time.

Leadership Implications

  • Anchor AI adoption in measurable outcomes. Require baselines and metrics before approving build work.
  • Design cross-functional ownership. Blend business process expertise with IT governance and standardization.
  • Invest in communities of practice. Use newsletters and webinars to share lessons and reduce repeated failures.
  • Prioritize reuse. Encourage teams to evaluate pre-built services and existing product capabilities before custom builds.
  • Operationalize responsibility. Introduce privacy and ethics policies at project start; monitor continuously after deployment.

Why this conversation matters

This discussion took place in a live-stream format with audience Q&A, reflecting the real questions enterprise leaders and practitioners ask when AI moves from experimentation to operational rollout. The topics were consistently leadership-relevant: defining value, building internal capability, selecting the right tools, and avoiding common pitfalls.

Andreas Welsch, an AI leadership expert in enterprise solutions, connected technology decisions to workforce transformation. His emphasis on AI literacy, internal communities, reusable capabilities, and early governance aligns with the reality that sustainable AI adoption depends as much on people and process as on models.

Conclusion

Enterprise AI succeeds when leaders treat it as a business transformation discipline, not a technology contest. Across metrics, stakeholder alignment, data realities, community learning, and responsible governance, Andreas Welsch’s guidance points to a repeatable pattern: build clarity first, then apply the simplest effective tools.

Done well, AI adoption becomes a practical capability—supported by AI literacy, reusable components, and workflow integration across automation and AI. Done poorly, it becomes a cycle of pilots that stall on data gaps, unclear ownership, and avoidable reinvention.

FAQ

1) What is the first step in enterprise AI adoption?

The first step is defining the business problem and the metrics that should improve, including a baseline for today’s performance. Welsch emphasized that agreeing on “status quo” and “better” enables proper prioritization and avoids building AI without measurable value.

2) Why do ERP systems often provide the best starting data for AI?

ERP data is often transactional and structured, making it clearer to understand and easier to prepare than unstructured sources like PDFs or images. Welsch noted it can still require cleansing, but it typically reduces early friction in AI adoption efforts.

3) What does an “AI mindset” mean for business leaders?

An AI mindset means understanding what AI can do in business terms—recommendations, classification, categorization, and proposals—while also knowing AI starts with data. Welsch also stressed recognizing that not every business problem is an AI problem.

4) How can leaders secure executive sponsorship for AI projects?

Executive sponsorship comes from hard facts: quantified process waste, cost, and resource impact that automation or AI can improve. Welsch recommended process discovery and analysis to reveal what is actually happening, because quantified opportunity is easier to fund than intuition.

5) How should organizations start AI projects when data is limited?

When data is limited, expectations should be managed and teams should consider pre-trained models or commercially available services where possible. Welsch also advised accumulating data over time and building feedback loops so early deployments can improve through user input.

6) When should corporate privacy policies be introduced in AI work?

Privacy and responsible AI policies should be introduced at the very beginning of a project and then reviewed continuously. Welsch cautioned against treating privacy, ethics, and responsibility as an afterthought once models are built and deployed in production.

7) What is the difference between RPA and AI in business processes?

RPA automates repeatable, click-based tasks such as logging into portals, downloading files, and copying values between systems. AI interprets information—such as extracting data from PDFs or generating recommendations. Welsch emphasized the combined value of RPA plus AI services via APIs.

8) Who should lead AI literacy and workforce transformation?

AI literacy should not be limited to one executive role; it applies across functions, from finance to HR to operations. Welsch argued that people closest to the process are best positioned to identify automation opportunities, while IT supports governance and standardization.

9) How can organizations prevent AI projects from becoming one-off custom builds?

Preventing one-off builds requires evaluating reusable components and existing services before building from scratch. Welsch cited examples like named entity recognition and ticket categorization, which are often available as pre-built capabilities, enabling faster delivery and better scalability.

10) What is a practical way to build an internal AI community?

A practical approach is forming a cross-domain community and supporting it with a monthly newsletter, webinars, and peer sharing sessions. Welsch described “AI ambassadors” and technology-led sessions that show concrete examples, helping teams share lessons and accelerate AI adoption.

About the Author