

How Leaders Can Train the Workforce Without Increasing Risk
AI upskilling has become a frontline leadership issue as generative AI spreads across business functions faster than policies, training, and safe workflows can keep up.
While prompting can look deceptively simple—“type text into a box”—effective, responsible use requires practice, iteration, and organizational guardrails that reduce the chance of sensitive data exposure.
This article is adapted from InformationWeek coverage of how organizations are training employees to become better prompt engineers, featuring guidance from Andreas Welsch along with examples from multiple companies and experts.
Original source: AI Upskilling: How to Train Your Employees to Be Better Prompt Engineers
Executive Summary
- Scale AI upskilling with a “Community of Multipliers,” then broaden training iteratively.
- Match learning formats to user maturity: cohorts for basics, workshops for advanced topics.
- Function-specific training improves relevance and adoption in departments like marketing.
- Advanced training should cover LLM fundamentals, RAG, vector databases, and security risks.
- Prompting proficiency depends on role: developers optimize for token efficiency; business users master prompting frameworks and techniques.
Key Takeaways
- Andreas Welsch, an AI leadership expert, advocates starting with a “Community of Multipliers” to accelerate organizational learning.
- Welsch recommends piloting training in one business area, collecting feedback, iterating, then scaling enterprise-wide.
- Welsch notes generative AI tools remain “a new type of application” for most business users, even after widespread availability.
- Welsch believes prompt engineering training should “inspire learners to think and dream big,” not just memorize templates.
- Welsch differentiates formats: cohort-based online sessions for introductory literacy; executive training broadens toward GenAI products.
- Welsch argues advanced training benefits from interactive workshops because context, networking, and expert access matter.
- Welsch emphasizes advanced topics should include LLMs, retrieval-augmented generation, vector databases, and security risks.
What is AI upskilling?
AI upskilling is the structured effort to help employees use AI tools effectively, safely, and repeatedly in real work. In the context of generative AI, it includes improving AI literacy, teaching practical prompting and response evaluation, and clarifying what data can (and cannot) be shared with AI systems. Because organizations want to scale GenAI without compromising sensitive data, AI upskilling often combines training formats, role-based instruction, and ongoing practice through assessments and peer learning.
Why this media coverage matters
InformationWeek’s coverage targets CIOs, IT leaders, and executives who are navigating generative AI adoption under real-world constraints: uneven employee skill levels, expanding tool choices (including proprietary LLMs and embedded AI), and heightened concern over confidentiality.
For AI leadership and workforce transformation, this context matters because training is becoming a primary scaling mechanism. The operational question is no longer whether employees will use generative AI, but whether leaders will equip them to use it well—and within policy.
AI upskilling starts with a “Community of Multipliers”
Andreas Welsch, founder and chief AI strategist at boutique AI strategy consultancy Intelligence Briefing, recommends beginning AI upskilling with a “Community of Multipliers.” These are early tech adopters eager to explore new tools and make them useful in day-to-day work.
The goal is practical scaling: multipliers learn first and then teach others inside their departments, creating leadership leverage without requiring a centralized training team to do everything.
Key Insight: Andreas Welsch explains that a “Community of Multipliers” can accelerate AI upskilling because early adopters translate new capabilities into department-specific practices. This enables faster diffusion of safe, effective prompting behaviors than relying solely on self-service courses or one-time organization-wide sessions.
Pilot, iterate, then scale enterprise-wide
Welsch also advises piloting training formats in one business area, gathering feedback, and iterating on both concept and delivery. After refinement, leaders can roll out training to the broader organization to maximize utility and impact.
This approach supports executive governance goals: it surfaces risk, adoption barriers, and workflow realities before the program becomes enterprise policy by default.
Match training formats to user maturity
Welsch emphasizes that different learning environments benefit different user groups. Cohort-based online sessions have proven successful for introductory levels of AI literacy, while executive training expands the scope from basic prompting to GenAI products.
Advanced training, in Welsch’s view, is best delivered as a workshop because deeper content requires more context and interaction—and the value also comes from networking and access to an expert trainer.
Key Insight: Welsch draws a practical line between “introductory literacy” and advanced capability building. Online cohorts can scale fundamentals across large populations, but advanced training benefits from workshops where participants can pressure-test prompts, compare results, and explore concepts like RAG and security risks with guided support.
What advanced training should include
Welsch says advanced training should go deeper into fundamentals including LLMs, retrieval-augmented generation, vector databases, and security risks. For leaders, this is less about turning everyone into engineers and more about ensuring teams understand what the tools are doing—and where they can fail.
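The article names retrieval-augmented generation and vector databases without elaborating. As a rough, self-contained illustration of what those terms mean in practice, the sketch below uses a toy bag-of-words "embedding" and cosine similarity in place of a real embedding model and vector database; all names and documents are hypothetical.

```python
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    # Toy stand-in for a real embedding model: bag-of-words counts.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# "Vector database" stand-in: documents stored alongside their embeddings.
docs = [
    "Expense reports must be filed within 30 days of travel.",
    "Customer data may not be pasted into external AI tools.",
]
index = [(d, embed(d)) for d in docs]

def retrieve(query: str, k: int = 1) -> list[str]:
    # Rank stored documents by similarity to the query, return the top k.
    q = embed(query)
    ranked = sorted(index, key=lambda pair: cosine(q, pair[1]), reverse=True)
    return [d for d, _ in ranked[:k]]

def build_prompt(question: str) -> str:
    # RAG in one line: retrieved passages ground the model's answer
    # in approved sources instead of the model's training data alone.
    context = "\n".join(retrieve(question))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

print(build_prompt("Can I share customer data with a chatbot?"))
```

The point for training audiences is the shape of the pipeline (embed, retrieve, assemble prompt), not the toy similarity measure, which a production system would replace with a real embedding model and vector store.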
Make training function-specific to drive adoption
Welsch also highlights the importance of tailoring workshops and training to a function’s context—for example, using GenAI in marketing. When examples reflect real workflows, training becomes easier to operationalize and easier to govern.
This matters for workforce transformation because prompting is not a single universal skill. It changes with role, risk profile, and desired outputs.
Key Insight: Function-specific training increases relevance by grounding prompting techniques in the audience’s actual work. Welsch notes that tailored workshops can help learners connect prompt design to their responsibilities, which makes AI upskilling more actionable—and reduces the chance employees improvise unsafe practices when under pressure.
Examples of how organizations are training prompt engineering
Organizations in the InformationWeek article used different approaches, often combining internal champions, assessments, and practical projects to reinforce skills over time.
Create & Grow: stratified learning and assessments
Digital agency Create & Grow began with the basics of generative AI and its applications, then implemented stratified sessions: foundational concepts for novices and more complex techniques for experienced team members.
Its training covers AI and language model basics, prompt design and response analysis, industry- and client-specific use cases, ethical considerations and best practices, and a mix of online courses, workshops, and peer-led sessions. The company also uses regular assessments and practical projects to gauge mastery.
Why developers may have an edge—and what executives should do about it
Welsch adds that for software developers, mastery can be framed as a cost function—getting optimal output with the shortest prompt to consume fewer tokens. For business users, proficiency can be measured by awareness of common prompting techniques and frameworks.
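Welsch's cost-function framing can be made concrete. The sketch below is a hypothetical illustration, not his method: among prompt variants that a human reviewer has judged acceptable, prefer the one that consumes the fewest tokens. A crude word-count heuristic stands in for a real tokenizer, which counts subword units and will differ.

```python
def estimate_tokens(prompt: str) -> int:
    # Crude proxy: ~1 token per whitespace-separated chunk.
    # Real BPE tokenizers produce different (usually higher) counts.
    return len(prompt.split())

variants = {
    "verbose": (
        "I would like you to please carefully summarize the following "
        "quarterly report text into exactly three short bullet points."
    ),
    "concise": "Summarize this quarterly report in 3 bullets.",
}

# Assume both variants were judged acceptable by a reviewer;
# the cost function then picks the cheapest acceptable one.
best = min(variants, key=lambda name: estimate_tokens(variants[name]))
for name, prompt in variants.items():
    print(name, estimate_tokens(prompt))
print("cheapest acceptable:", best)
```

At API scale, shaving a dozen tokens per call compounds across millions of requests, which is why developers can treat prompt length as a measurable optimization target in a way most business users never need to.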
Key Insight: Developers can operationalize prompting through APIs, datasets, and repeatable workflows, while business users often need frameworks and practice to reach consistency. Welsch’s distinction helps leaders define what “good” looks like for different populations—without forcing one standard across all roles.
Leadership Implications
- Build governance into AI upskilling: define confidentiality expectations and reinforce that outputs must be verified.
- Scale through internal champions: start with a “Community of Multipliers,” then formalize peer teaching inside departments.
- Design learning as an operating system: pilot in one area, collect feedback, iterate, then scale enterprise-wide.
- Segment by maturity and role: use cohorts for literacy, executive programs for product strategy, workshops for advanced concepts.
- Train to workflows, not just prompts: pair prompting skills with repeatable processes, context, and risk-aware review practices.
Why this matters for AI leadership and workforce transformation
Generative AI adoption has expanded rapidly, with employees using tools to write, code, brainstorm, summarize, and more. The InformationWeek article underscores a leadership reality: organizations want to scale GenAI while ensuring employees are not compromising sensitive data.
Welsch’s guidance connects AI leadership to workforce transformation by treating training as a scaling mechanism. The emphasis is not only on technique, but also on helping employees “think differently and use software differently” through experimentation and iteration in an open-ended conversation.
As tools proliferate (including OpenAI, Gemini, proprietary LLMs, and embedded GenAI), leadership advantage increasingly comes from who can convert experimentation into safe, repeatable, governed workflows.
Conclusion
AI upskilling is becoming essential infrastructure for scaling prompt engineering across the enterprise. The article’s examples show there is no single training model, but successful approaches share common traits: role-appropriate learning formats, function-relevant examples, continuous practice, and clear boundaries around confidentiality and verification.
Welsch’s approach—multipliers, pilots, tailored workshops, and advanced fundamentals—offers leaders a practical path to accelerate adoption without turning generative AI into unmanaged risk.
FAQ
1) What is AI upskilling in the context of prompt engineering?
AI upskilling for prompt engineering is structured training that helps employees write better prompts, evaluate outputs, and apply GenAI safely at work. It combines practice, role-based guidance, and clear rules for sensitive data and verification to support responsible adoption.
2) Why should enterprises invest in prompt engineering training instead of relying on self-learning?
Enterprises invest in prompt engineering training because GenAI is strategic and employees may otherwise use tools in inconsistent or risky ways. The article notes workers often seek online courses independently, but leaders still need aligned, secure, organization-specific AI upskilling programs.
3) How can leaders scale AI upskilling quickly across departments?
Leaders can scale AI upskilling by starting with early adopters who teach peers inside their departments. Andreas Welsch calls this a “Community of Multipliers,” then recommends piloting training in one business area, iterating with feedback, and rolling it out broadly.
4) What training formats work best for different GenAI skill levels?
Different formats fit different skill levels: cohort-based online sessions work for introductory AI literacy, while executive training expands from prompting to GenAI products. Andreas Welsch says advanced training is best in workshops because it needs context, interaction, and expert access.
5) What should advanced prompt engineering training include for enterprise teams?
Advanced prompt engineering training should cover deeper fundamentals and risk areas. Andreas Welsch specifically cites LLMs, retrieval-augmented generation (RAG), vector databases, and security risks. These topics help leaders and teams understand both capability limits and governance exposure points.
6) How can organizations reduce ambiguity and improve prompt quality?
Prompt quality improves when ambiguity is removed and instructions are specific. The article describes a tactic of asking the AI model whether a prompt is ambiguous, then revising it. It also describes structured templates that specify role, task, constraints, context, and desired output.
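The structured template described above can be sketched as a small helper. The field names follow the article's list (role, task, constraints, context, desired output); the function itself and its example values are hypothetical.

```python
def build_structured_prompt(role: str, task: str, constraints: str,
                            context: str, output: str) -> str:
    # Assemble the five template fields into one labeled prompt.
    return (
        f"Role: {role}\n"
        f"Task: {task}\n"
        f"Constraints: {constraints}\n"
        f"Context: {context}\n"
        f"Desired output: {output}"
    )

prompt = build_structured_prompt(
    role="You are a marketing copy editor.",
    task="Rewrite the product blurb below for a LinkedIn audience.",
    constraints="Under 80 words; no confidential pricing details.",
    context="Blurb: <paste approved, non-sensitive text here>",
    output="One short paragraph followed by three hashtags.",
)
print(prompt)
```

The article's companion tactic pairs naturally with this: before submitting, ask the model whether the assembled prompt is ambiguous, then revise the weak field rather than the whole prompt.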
7) Do developers have an advantage in prompt engineering and AI adoption?
Developers can have an edge because they may connect prompts to datasets and APIs and build repeatable workflows. The article also notes understanding different language models can help. Andreas Welsch adds developers may optimize prompts as a cost function to reduce token usage.
8) What are baseline safety expectations for employees using generative AI at work?
Baseline safety expectations include understanding whether inputs remain confidential, verifying outputs because models can make mistakes, and knowing how to vet results. The article emphasizes that organizations want to scale GenAI but must also prevent sensitive data exposure through training and guardrails.
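Confidentiality expectations like these are often reinforced in code as well as policy. As a hypothetical sketch of the idea, a pre-submission check might flag patterns that commonly signal sensitive data before a prompt leaves the organization; real deployments rely on dedicated data-loss-prevention tooling, and the patterns below are illustrative only.

```python
import re

# Hypothetical guardrail: flag likely-sensitive patterns before a prompt
# is sent to an external AI tool. Real DLP systems are far more robust.
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "credit card-like number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "internal marker": re.compile(r"\bCONFIDENTIAL\b", re.IGNORECASE),
}

def flag_sensitive(prompt: str) -> list[str]:
    # Return labels for every pattern that matches the draft prompt.
    return [label for label, pat in SENSITIVE_PATTERNS.items()
            if pat.search(prompt)]

issues = flag_sensitive("Summarize this CONFIDENTIAL memo for jane.doe@corp.example")
print(issues)
```

A check like this does not replace training; it backstops it, catching the unsafe improvisation under pressure that the article warns about.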
9) How should executives measure prompt engineering proficiency for business users?
Executives can measure business-user proficiency by awareness of common prompting techniques and frameworks, not by coding ability. Andreas Welsch distinguishes this from developer proficiency, which may be measured by achieving optimal outputs with shorter prompts to minimize tokens and improve efficiency.

