Accelerating Generative AI Adoption in Project Management

What Leaders Must Do Now as Generative and Agentic AI Accelerate

AI adoption is no longer a future-state initiative for project organizations—it is already changing how project work gets planned, communicated, and executed.

In a PMI session held during AI Community Day and AI Month, Andreas Welsch (AI leadership expert and founder/chief AI strategist) described how AI is evolving from basic automation to generative AI and now agentic AI, and why many organizations are still not adequately prepared.

The conversation was aimed at project professionals but carries direct implications for CIOs, CTOs, and CHROs who are accountable for AI governance, workforce transformation, and operating model change.

Welsch’s central message: leaders should encourage hands-on experimentation while raising standards for quality and accountability, because AI output still requires human oversight.

Executive Summary

  • AI use at work is widespread, but many employees hide it from managers.
  • AI is shifting work from information gathering to review and decision proposals.
  • Agentic AI increases autonomy, raising governance and workflow-design questions.
  • Quality, values, and accountability must remain non-negotiable.
  • Project leaders should enable experimentation within clear organizational guardrails.

Key Takeaways

  • Welsch outlined four phases: programmed software, machine learning, generative AI, and agentic AI.
  • Generative AI is probabilistic; it can be “correct most of the time,” not always.
  • Agentic AI introduces goal-based tasking (e.g., draft, review, and send a report).
  • AI does not remove the project “iron triangle”—human review still consumes time and cost.
  • Employees may avoid disclosure because of fear of being seen as lazy or incompetent, or being given more work.
  • Project leaders should set expectations: empowerment to use AI, quality standards, and human accountability.
  • Delegation discipline matters more with AI: objective, context, data, resources, and outcome criteria.

What is AI adoption?

AI adoption is the deliberate integration of AI tools and capabilities into how work gets done—paired with the leadership behaviors, cultural norms, and guardrails required to use AI responsibly. In Welsch’s framing, adoption is not only about access to tools like ChatGPT or Copilot, but also about enabling people to use AI openly, maintaining quality expectations, and preserving accountability for outputs. As AI evolves toward agentic systems that can take actions toward a goal, AI adoption increasingly becomes a governance and workflow design discipline, not a one-time technology rollout.

Why this conversation matters

This PMI AI Community Day session reflected a reality many executives now face: AI has moved from experimentation to daily use across teams.

Welsch noted that headlines can be polarizing—often emphasizing fear and job loss—while missing the more urgent operational need: organizations and leaders must prepare.

Because project managers operate at the intersection of deadlines, budgets, resources, and stakeholder expectations, their day-to-day practices offer a practical lens for enterprise AI leadership, governance, and workforce transformation.

Key Insight: Welsch emphasized that the most important message is preparation: AI is permeating industries, but many companies are not adequately ready to adopt it. For leaders, that shifts the priority from hype management to building the operating conditions—skills, guardrails, and accountability—needed for scalable AI adoption.

AI adoption is happening—often without managers knowing

Welsch referenced survey findings showing broad AI usage at work and a parallel trend: many employees do not disclose that usage to their managers.

The reasons are cultural and incentive-driven: employees fear being perceived as lazy or incompetent, or being assigned additional work because they appear more productive.

For executive leaders, this is a governance and risk issue as much as a productivity issue, because hidden usage increases the chance of inconsistent quality, policy violations, and unmanaged workflow changes.

Key Insight: Welsch highlighted a workplace paradox: people are getting real value from AI, yet many keep it private. This indicates that AI adoption is as much about culture and leadership signals as it is about tool access—employees need explicit permission, guidance, and expectations to use AI responsibly.

From programmed software to agentic AI: the four phases leaders should understand

Welsch described AI’s evolution in four phases that matter for strategy and operating model design.

First was linear, programmed software (“if this happens, then do that”), which is explainable but rigid.

Second came machine learning, which detects patterns in data for recommendations and classification (e.g., e-commerce suggestions or document classification).

Third is generative AI, enabled by large language models, which can draft and summarize content but is probabilistic.

Fourth is agentic AI, where systems can be given goals and take more autonomous action—such as compiling inputs, producing a report for review, and sending it.
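
To make the agentic pattern concrete, consider the minimal sketch below. Every function is a hypothetical stand-in rather than tooling from the session; the point is that the system pursues a goal across several steps while a person still approves the result before anything is sent.

```python
# Minimal sketch of a goal-based, agentic workflow with a human approval gate.
# All functions are hypothetical stand-ins, not a real agent framework.

def gather_inputs() -> list[str]:
    # In practice: pull status updates from project tools, chat, and email.
    return ["Workstream A on track", "Workstream B delayed by two days"]

def draft_report(updates: list[str]) -> str:
    # In practice: a generative model turns raw updates into a narrative draft.
    return "Weekly status:\n" + "\n".join(f"- {u}" for u in updates)

def human_approves(draft: str) -> bool:
    # Accountability stays with a person: nothing is sent without review.
    print(draft)
    return input("Send this report? [y/N] ").strip().lower() == "y"

def send_report(draft: str) -> None:
    # In practice: email the report or post it to a project channel.
    print("Report sent.")

def run_agent() -> None:
    draft = draft_report(gather_inputs())
    if human_approves(draft):
        send_report(draft)

if __name__ == "__main__":
    run_agent()
```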

Key Insight: Welsch’s shift from generative to agentic AI reframes what “automation” means. Instead of asking AI only to draft text, teams can increasingly ask it to execute multi-step work toward a goal. That raises new questions about oversight, verification, and where approvals must remain human-owned.

AI changes the work distribution—not accountability

Welsch challenged simplistic narratives that “entry-level jobs will vanish.”

Instead, he described a shift in where effort is spent: AI can help acquire relevant information across systems and present options, while humans review, refine, and develop decision proposals.

The critical principle remains unchanged: accountability sits with people, not the tool.

He also noted a practical constraint: even when AI speeds drafting or analysis, humans still must verify outputs—especially because generative AI can hallucinate.

Executive lens: the “sliding window” of work

In project contexts, AI can reduce time spent collecting status, searching documents, or assembling updates.

That time often gets reallocated to review, stakeholder decision-making, and ensuring outputs match organizational standards.

The project “iron triangle” still applies in an AI era

Welsch addressed whether AI “solves” the traditional scope–time–cost constraints.

His conclusion: the iron triangle remains relevant because AI introduces new tradeoffs.

Generative AI can be fast, but it may be wrong; review and correction consume time. Agentic AI can automate workflows, but approvals and business accountability still demand human involvement.

Cost may decline as models become cheaper, but lowered cost often triggers higher demand—more use cases, deeper analyses, and expanded scope.

Key Insight: Welsch argued that AI does not remove constraints; it changes where constraints show up. Teams may draft faster, but must invest in verification and governance. Technology may become cheaper, yet expanded use can increase overall demand. Leaders should plan for these second-order effects.

Stop using AI like a search engine: three higher-value roles for AI

Welsch observed that many professionals use AI like Google: a single question and answer.

That limits value and increases risk, especially when users accept outputs without a reference frame or verification.

He offered three practical ways project leaders can elevate usage beyond basic drafting.

1) AI as a thought partner

Example from the session: ask AI to review a high-level timeline and identify potential risks or gaps.

2) AI as a sparring partner for stakeholder preparation

Example: role-play a meeting with executives (e.g., the CFO, or revenue and operations leaders), anticipate the questions they are likely to ask, then request feedback on your responses.

3) AI as a personal coach for difficult conversations

Example: practice approaching a disengaged or unproductive team member, then ask for critique and improvement ideas.

Welsch also reminded participants to follow organizational AI policies and avoid entering confidential data or personally identifiable information.
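
For readers who want concrete starting points, the sketch below expresses the three roles as reusable prompt patterns. The wording is illustrative rather than quoted from the session, and anything substituted into the placeholders should respect the policy caveat above.

```python
# Illustrative prompt patterns for the three roles described above.
# The wording is hypothetical; adapt it to your organization's AI policy
# and never include confidential data or personally identifiable information.

THOUGHT_PARTNER = (
    "Here is our high-level project timeline: {timeline}. "
    "Review it and identify potential risks, gaps, or unrealistic dependencies."
)

SPARRING_PARTNER = (
    "Act as a skeptical CFO in a steering meeting about {project}. "
    "Ask me the five hardest questions you would raise, one at a time, "
    "then critique my answers."
)

PERSONAL_COACH = (
    "I need to talk with a team member who seems disengaged. "
    "Let me practice my opening lines, then critique my tone and "
    "suggest improvements."
)

print(THOUGHT_PARTNER.format(timeline="Q1 design, Q2 build, Q3 rollout"))
```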

Empowerment, quality, accountability: the leadership guardrails that scale AI adoption

Welsch proposed three leadership levers for integrating AI into teams.

First, empowerment: leaders should explicitly encourage AI use and model it.

Second, quality and values: faster drafts do not excuse generic or “bland” outputs, and the quality bar must remain high.

Third, accountability: humans remain responsible for what gets sent, submitted, or acted upon.

This triad directly addresses the hidden-usage problem by replacing ambiguity with expectations and support.

Key Insight: Welsch’s guardrails are leadership behaviors, not technology settings. Empowerment reduces shadow AI usage. Quality standards prevent “AI workslop” from becoming normal. Accountability ensures governance stays intact when outputs move faster than traditional review cycles.

Delegation discipline becomes an AI skill: objective, context, data, resources, outcome

Welsch connected effective AI use to a familiar management skill: delegation.

When delegating to a person—or to an AI system—the request should specify objective, context, available data, resources, and what “good” looks like.

This becomes more important with agentic AI, where the tool may take multi-step actions toward a goal.

In executive terms, this is prompt quality as operating discipline: clearer inputs reduce variance, rework, and downstream risk.
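
One way to operationalize this is to treat the five elements as required fields in every AI-bound request. A minimal Python sketch follows, assuming a hypothetical DelegationBrief structure; the field names mirror Welsch's checklist, but the class itself is illustrative rather than something presented in the session.

```python
from dataclasses import dataclass

@dataclass
class DelegationBrief:
    """The five delegation elements as a structured request template."""
    objective: str  # what the task should achieve
    context: str    # background the assistant (or person) needs
    data: str       # which inputs exist and where to find them
    resources: str  # tools, time, and constraints that apply
    outcome: str    # what "good" looks like, so the result can be verified

    def to_prompt(self) -> str:
        # Render the brief as a prompt; clearer inputs reduce output variance.
        return (
            f"Objective: {self.objective}\n"
            f"Context: {self.context}\n"
            f"Available data: {self.data}\n"
            f"Resources and constraints: {self.resources}\n"
            f"Definition of done: {self.outcome}"
        )

brief = DelegationBrief(
    objective="Draft the weekly status report",
    context="Cross-functional project with an executive audience",
    data="Status updates pasted below; no confidential data included",
    resources="One page maximum; due by Friday noon",
    outcome="Accurate, concise, and ready for human review before sending",
)
print(brief.to_prompt())
```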

Human-in-the-loop vs. human-on-the-loop: choosing the right oversight model

Welsch distinguished between two oversight approaches for AI-enabled workflows.

In a human-in-the-loop model, a person stays actively involved throughout the process, which fits higher-risk decisions.

In a human-on-the-loop model, AI does the work and a person reviews the output, similar to delegating to a junior team member.

Choosing between the two becomes a governance decision tied to risk, impact, and accountability.
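
A minimal sketch of how that governance decision might be encoded appears below. The risk tiers and the selection rule are illustrative assumptions, not a framework presented in the session.

```python
from enum import Enum

class Oversight(Enum):
    HUMAN_IN_THE_LOOP = "a person approves each step before it executes"
    HUMAN_ON_THE_LOOP = "AI completes the work; a person reviews the output"

def select_oversight(risk_tier: str) -> Oversight:
    # Rule of thumb: higher-risk, higher-impact work keeps a person actively
    # in the process; routine, low-stakes work can be reviewed after the fact.
    if risk_tier in {"high", "regulated", "external-facing"}:
        return Oversight.HUMAN_IN_THE_LOOP
    return Oversight.HUMAN_ON_THE_LOOP

print(select_oversight("external-facing").value)
```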

Automation can shift burdens: apply system thinking before “optimizing” reports

Project managers in the session described draining tasks such as creating reports, handling shifting priorities, and managing interpersonal dynamics.

Welsch acknowledged AI can automate pieces of reporting (e.g., collecting weekly status updates and assembling a report), but warned against optimizing one step while increasing work elsewhere.

He encouraged leaders to ask whether the cadence and purpose of reporting are still appropriate, not only whether AI can automate it.

This is a practical governance lesson: AI adoption is also workflow redesign.

Leadership Implications

  • Make AI use explicit: Encourage open usage to reduce shadow AI and inconsistent practices.
  • Set quality standards: Treat AI output as a draft; require verification before distribution or decisions.
  • Define oversight models: Choose human-in-the-loop vs. human-on-the-loop based on risk and impact.
  • Institutionalize delegation discipline: Require objective, context, data, resources, and outcome criteria in AI-enabled tasks.
  • Redesign workflows, not just tasks: Apply system thinking so automation does not shift burdens or create new bottlenecks.

Why this media coverage matters

The PMI AI Community Day and AI Month programming brought AI adoption into a practitioner setting where constraints are real: deadlines, budgets, governance, and stakeholder expectations.

Welsch’s perspective matters for AI leadership because it moves the discussion from abstract claims about job loss to operational readiness: culture, standards, delegation, and oversight.

The session also aligns with his broader work helping leaders prioritize AI use cases and bring people on board, which is the core challenge in workforce transformation.

Conclusion

AI adoption in project management is already underway, but scaling it responsibly requires leadership—especially as generative AI gives way to agentic AI.

Welsch’s message is practical: empower teams to use AI, protect quality and values, and keep accountability human-owned.

For executives, the next step is to translate these principles into governance, workflow design, and workforce enablement so AI improves outcomes without introducing unmanaged risk.

FAQ

1) What is the fastest way to start AI adoption in project management?

The fastest way is to start with hands-on experimentation in low-risk tasks, then set clear expectations for quality and accountability. In Welsch’s view, leaders should explicitly encourage AI use and require human review before outputs are shared.

2) Why do employees hide AI use from their managers?

Employees often hide AI use because they fear being seen as lazy or incompetent, or because they expect more work if productivity rises. Welsch referenced surveys where about half of users reported not telling managers about workplace AI usage.

3) How does agentic AI change governance requirements?

Agentic AI changes governance by enabling goal-based execution across multiple steps, not just drafting content. Welsch described scenarios where AI gathers inputs, produces a report for review, and then sends it, increasing the need for oversight and approval design.

4) Is generative AI reliable enough for executive reporting?

Generative AI can accelerate drafting and summarization, but it is not reliable enough to skip verification. Welsch stressed that these systems are probabilistic and can hallucinate, so leaders must maintain review processes because accountability remains human-owned.

5) Does AI eliminate the project iron triangle of scope, time, and cost?

AI does not eliminate the iron triangle; it shifts tradeoffs. Welsch argued that human review still takes time, AI usage can still create cost, and lower technology costs often increase demand, expanding scope rather than removing constraints.

6) What leadership behaviors prevent “AI workslop” in teams?

Preventing “AI workslop” requires leaders to set and enforce quality standards, not just promote tool usage. Welsch emphasized that faster drafting does not excuse bland or generic outputs, and that teams remain responsible for professional-grade deliverables.

7) How should leaders delegate tasks to AI tools effectively?

Leaders should delegate to AI with the same discipline used for people: define the objective, provide context, specify available data, clarify resources, and define the outcome criteria. Welsch presented this as a practical framework for better AI-assisted execution.

8) What is the difference between human-in-the-loop and human-on-the-loop?

Human-in-the-loop means a person stays actively involved throughout the AI-enabled process, while human-on-the-loop means AI does the work and a person reviews the output. Welsch positioned this as a useful way to choose appropriate oversight levels.

9) Where can project teams use AI beyond drafting status reports?

Project teams can use AI as a thought partner to spot timeline risks, as a sparring partner to prepare for stakeholder meetings, and as a personal coach for difficult conversations. Welsch gave role-play and feedback examples to illustrate these higher-value uses.

10) What should executives watch for when automating project reporting?

Executives should watch for burden shifting and unintended workflow consequences. Welsch cautioned that automating status collection may simply move effort onto others, and recommended system thinking—reconsider cadence and purpose—rather than automating by default.

About the Author