Agentic AI Governance: How Leaders Can Prevent “Agent Slop” From Becoming a Productivity Crisis

Preventing “Agent Slop” with Agentic AI Governance

Agentic AI is moving from experimentation to scale, but many organizations are underestimating a fast-growing risk: “agent slop,” the low-quality work produced by poorly designed AI agents.

As enterprises push teams to deploy agents quickly, governance, training and coordination often lag behind. Andreas Welsch, an AI leadership expert, warns that rushed adoption can produce compounding errors, internal distrust and external brand damage.

This article is based on media coverage of Andreas Welsch published by Built In and focuses on what executive leaders (CIOs, CTOs, CHROs, and business unit heads) can do now to avoid an “always-on” slop problem as AI agents proliferate.

Original source: As Companies Embrace Agentic AI, A New Kind of ‘Slop’ Is Emerging

Executive Summary

  • Agent slop is low-quality work produced by poorly designed AI agents without guardrails.
  • Rushed mandates can leave employees improvising, cutting corners and deploying fragile agents.
  • Unchecked slop can spread 24/7, multiplying errors and eroding trust inside the enterprise.
  • External slop risks refunds, reputational damage and potential legal exposure.
  • Training, strategy, culture, and orchestration reduce slop and accelerate responsible scale.

Key Takeaways

  • Welsch links “use more AI” mandates to unmanaged deployment, where employees “cut corners” and “miss things” experts would catch.
  • Welsch cautions that low-quality or incorrect outputs can trigger refunds, tarnish brand image and even legal action.
  • Welsch highlights that poor agent outputs becoming public can damage standing with customers, partners and the general public.
  • Welsch points to Deloitte’s report for Australia’s Department of Employment and Workplace Relations as an example of hallucination-driven errors (including fake citations) that were not corrected.
  • Welsch notes agent slop worsens when employees do not feel comfortable being transparent about how agents are used.
  • Welsch recommends normalizing AI discussions in regular meetings to build comfort, peer learning and transparency.

What Is Agentic AI?

Agentic AI refers to AI systems—often called AI agents—that can execute complicated, multi-step tasks without direct human intervention or predefined rules to guide them. These agents may be software-based inside enterprise systems or embodied in physical forms such as robots, drones and autonomous vehicles.

In workplace settings, agentic AI raises both opportunity and risk. When agents are poorly designed or deployed without guardrails and guidelines, they can produce “agent slop”: low-quality outputs that accumulate into operational confusion and measurable productivity loss.

AI Governance for Agentic AI: Why “Agent Slop” Is Different From Ordinary Workslop

AI “slop” first gained attention as low-value AI-generated content across social platforms. “Workslop” later described professional-looking but shallow AI-generated work artifacts.

Agent slop is the next step: the same quality problem, produced specifically by AI agents—and potentially at much larger scale. Built In cites Tonkean’s Matt Aaronson, who warns that agents can produce slop 24/7 and that widespread employee access to agent-building can “create a massive mess.”

Key Insight: Agent slop is not only a quality issue; it is a scaling issue. Unlike individual AI usage, agents can run continuously and propagate errors across workflows, systems and teams—turning small mistakes into persistent operational drag if governance and coordination are absent.

Why Agent Slop Happens: Model Limits, Data Quality and Human Deployment Behavior

Welsch attributes agent slop to both technical and organizational drivers. On the technical side, the models powering agents remain limited, can hallucinate and often require large volumes of high-quality, real-time data.

Even relatively common data issues—errors, missing values, or values spanning too wide a range—can prevent an AI agent from completing a workflow correctly. When those weaknesses meet high automation ambition, low-quality outputs become predictable.
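
To make this concrete, here is a minimal sketch of a pre-flight data check that blocks an agent run on malformed input. The field names and value bounds are illustrative assumptions, not from the article; the point is that missing values and out-of-range values are cheap to catch before an agent acts on them.

```python
def validate_record(record: dict) -> list[str]:
    """Return data-quality problems; an empty list means the agent may run."""
    problems = []
    # Missing values: every required field must be present and non-empty.
    # (These field names are hypothetical, for illustration only.)
    for field in ("customer_id", "order_total", "currency"):
        if record.get(field) in (None, ""):
            problems.append(f"missing value: {field}")
    # Out-of-range values: flag totals outside a plausible band instead of
    # letting the agent act on them. The band itself is an assumption.
    total = record.get("order_total")
    if isinstance(total, (int, float)) and not 0 <= total <= 100_000:
        problems.append(f"order_total out of expected range: {total}")
    return problems

# Example: an implausible total is routed to a human instead of an agent.
issues = validate_record(
    {"customer_id": "C-1042", "order_total": 1_250_000, "currency": "USD"}
)
if issues:
    print("Agent run blocked:", issues)
```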

On the organizational side, Welsch emphasizes that leaders may push AI adoption without sufficient guidance and support. In that environment, employees try to “figure this out on their own,” which Welsch says can cause corner-cutting and missed risks that professional developers and IT experts would identify.

Key Insight: Welsch connects agent slop to the combination of immature capabilities and unmanaged rollout. When employees are told to “use more AI” without clear best practices, they may deploy agents that look productive on paper but create downstream rework and firefighting.

Why Businesses Should Care: Trust Erosion, Operational Confusion and External Fallout

Uncoordinated agent deployment can cause agents to get in each other’s way, automate tasks better left untouched and create confusion among employees. These outcomes can compound skepticism toward AI.

Built In cites Pegasystems research indicating that 33 percent of workers doubt that agents can deliver high-quality work and 30 percent do not trust the accuracy of agent-generated responses. Increased skepticism can lead to resistance, which undermines innovation efforts.

Welsch also highlights the external risk. If customers and partners see low-quality or incorrect results, Welsch warns they may demand refunds, and brand image can be “tarnished,” especially if issues reach the news. Welsch adds there could be legal action if incorrect advice influences decisions.

Welsch points to Deloitte’s report for Australia’s Department of Employment and Workplace Relations as a cautionary example: the report contained fake citations and other errors driven by hallucinations that were not corrected, resulting in a partial refund and reputational harm.

Key Insight: Agent slop is an executive risk, not a tooling nuisance. Welsch frames the consequences in enterprise terms—customer remediation, brand damage and potential legal exposure—making quality controls and accountability mechanisms foundational to responsible agentic AI adoption.

Set Realistic Expectations: Treat Human Review as a Design Requirement

Built In stresses that leaders should emphasize AI’s shortcomings. Even sophisticated systems can produce errors and hallucinations, so employees must review AI-performed work and fact-check AI-generated content.

Clear expectations also help teams decide which tasks should be assigned to AI agents and which require a human touch. Without this clarity, organizations risk automating work that should not be automated—or over-trusting outputs that require verification.
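
One way to treat human review as a design requirement rather than optional cleanup is to encode it in the dispatch path itself. The Python sketch below assumes a hypothetical high_stakes flag and a placeholder review step; it illustrates the pattern, not any specific product’s API.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    high_stakes: bool  # e.g., customer-facing or regulatory deliverables

def request_human_review(task: Task, draft: str) -> str:
    # Placeholder: in practice this would open a review-queue item or ticket
    # and release the draft only after a human signs off.
    print(f"Review required before release: {task.name}")
    return draft

def dispatch(task: Task, agent_output: str) -> str:
    """Route agent output: high-stakes work always passes through a reviewer."""
    if task.high_stakes:
        return request_human_review(task, agent_output)
    return agent_output  # low-stakes output flows through, with spot checks

dispatch(Task("quarterly customer report", high_stakes=True), "draft text...")
```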

Example: Quality failures rarely stay hidden

Built In’s Deloitte example illustrates that hallucination-driven mistakes (such as fake citations) can surface in high-stakes deliverables. When oversight is weak, errors can persist through publication and trigger reputational consequences.

Offer Proper Skills Training: Reduce “Constant Monitoring” and Mis-delegation

Encouraging adoption without training can slow progress and increase slop. Built In cites an Asana report finding that about one-third of workers are unsure which tasks to delegate to agents, leading to constant monitoring.

That indecision creates a paradox: agents may generate more work for employees and reduce productivity. Built In recommends mentorships, group trainings, access to online courses and other resources so teams can learn to improve workflows rather than rely on trial and error.

Upskilling as workforce transformation

Training is not simply technical enablement; it is change enablement. When employees understand limitations, delegation patterns and review responsibilities, agentic AI becomes a predictable workflow component rather than a source of repeated rework.

Define a Clear AI Agent Strategy: Prevent Shadow Agents and Fear-driven Narratives

Andreas Welsch warns that unclear plans can amplify employee fears. Without clarity, doubts can turn into resistance. Built In recommends a long-term roadmap describing how the organization will adopt, deploy and scale agentic AI, then sharing it across departments so employees understand how roles may evolve.

Welsch also recommends establishing policies for when agents should and shouldn’t be used, reducing the risk that employees “randomly deploy their own agents” that produce slop over time.
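
Such policies are easier to enforce when expressed as data that can be checked before deployment, rather than as a document nobody reads. The following is a minimal sketch under assumed, illustrative use-case categories; an organization’s actual taxonomy would come from its own governance work.

```python
# Hypothetical agent usage policy, expressed as checkable data.
# The categories and use-case names are illustrative assumptions.
AGENT_POLICY = {
    "allowed": {"meeting_summaries", "internal_draft_emails", "data_entry"},
    "requires_approval": {"customer_responses", "financial_reports"},
    "prohibited": {"legal_advice", "hr_decisions"},
}

def deployment_status(use_case: str) -> str:
    """Tell an employee what the policy says before they build an agent."""
    if use_case in AGENT_POLICY["prohibited"]:
        return "blocked: agents must not be used here"
    if use_case in AGENT_POLICY["requires_approval"]:
        return "needs sign-off from the governance owner"
    if use_case in AGENT_POLICY["allowed"]:
        return "approved for agent use"
    return "unlisted: escalate instead of deploying a shadow agent"

print(deployment_status("customer_responses"))
```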

Key Insight: Governance starts with communication. Built In’s guidance aligns with Welsch’s emphasis on avoiding unmanaged rollout: when strategy, policies and role impacts are explicit, agentic AI adoption becomes coordinated—reducing the probability that fragmented, overlapping agents generate persistent low-quality outputs.

Cultivate an AI-Friendly Culture: Make Agent Use Discussable and Visible

Agent slop can grow when employees do not feel comfortable sharing how they use AI agents. Welsch notes that leaders can reduce hesitation by adding AI to the agenda in regular team meetings.

Making space for discussion can encourage ideas, increase buy-in and support transparency about agent usage. Welsch also points to the value of peer learning—if employees are uncertain, they may talk to peers “who might know a trick or two.”

Are AI Agents Still the Future of Work? Benefits, Caution and a Long Timeline

Despite slop risk, Welsch frames agentic AI as a major future-of-work trend. The article cites a PwC survey of organizations adopting agents: 66 percent saw increased productivity, 57 percent saw cost savings and 55 percent saw faster decision-making.

At the same time, Andrej Karpathy has pushed back on claims that 2025 is the “year of agents,” suggesting companies may need at least a decade to perfect these tools before unlocking full potential. Built In positions slop and hallucinations as growing pains that increase the need for structure, governance and orchestration.

Leadership Implications

  • Establish governance and usage policies: Define where agents are appropriate and require review for high-stakes outputs.
  • Operationalize human verification: Make fact-checking and oversight explicit responsibilities, not optional cleanup.
  • Invest in workforce enablement: Provide training, mentorship and resources so delegation decisions are consistent.
  • Communicate a roadmap: Share adoption and scaling plans across departments to reduce fear and resistance.
  • Coordinate agents through orchestration: Reduce agent-to-agent interference and fragmented “shadow agent” deployments (a minimal registry sketch follows this list).
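
On the orchestration point, a minimal agent registry illustrates the idea: before a new agent goes live, its scope is compared against existing agents so owners can coordinate instead of colliding. All names and scope tags here are hypothetical.

```python
# Hypothetical agent registry for basic orchestration hygiene.
# Agent names, owners and scope tags are illustrative assumptions.
REGISTRY: list[dict] = []

def register_agent(name: str, owner: str, scope: set[str]) -> None:
    """Record a new agent; warn when its scope overlaps an existing one."""
    overlaps = [a["name"] for a in REGISTRY if a["scope"] & scope]
    if overlaps:
        # Surface the conflict for coordination rather than deploying silently.
        print(f"{name} overlaps with {overlaps}; align with those owners first.")
    REGISTRY.append({"name": name, "owner": owner, "scope": scope})

register_agent("invoice-triage", "finance-ops", {"invoices", "vendor_email"})
register_agent("vendor-mail-bot", "procurement", {"vendor_email"})
```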

Why this media coverage matters

Built In’s coverage of Welsch’s perspective targets technology and business audiences navigating fast-changing AI capabilities. Its framing of “agent slop” translates a technical quality issue into an executive risk: productivity drag, internal trust erosion and external brand damage.

For AI leadership and workforce transformation, the article is relevant because it ties outcomes to decisions leaders control—training, strategy, communication, culture and tooling—while underscoring Welsch’s point that unmanaged mandates can trigger corner-cutting and fragile deployments.

Conclusion

Agentic AI can deliver productivity and decision-speed benefits, but it also introduces a new failure mode: always-on, compounding low-quality work. Preventing agent slop requires more than better prompts or bigger models.

Executive leaders can reduce risk by treating agentic AI governance as a core operating capability—setting realistic expectations, funding upskilling, clarifying strategy, normalizing transparency and coordinating agents through orchestration.

FAQ

1) What is agent slop in an enterprise setting?

Agent slop is low-quality work produced by AI agents that were poorly designed and lack proper guardrails and guidelines. In enterprises, this matters because agents can run continuously, creating compounding mistakes that can reduce productivity and trust.

Built In distinguishes agent slop from generic AI slop and workslop by emphasizing that agents can operate at scale and without ongoing human direction.

2) How is agent slop different from workslop?

Agent slop is the same basic quality problem as workslop—professional-looking but low-substance output—except it is produced by AI agents specifically. Because agents can execute multi-step workflows autonomously, the volume and persistence of low-quality work can be greater.

Built In describes AI slop as social content, workslop as workplace artifacts, and agent slop as agent-generated low-quality work.

3) Why does agent slop happen even with advanced AI models?

Agent slop happens because models still have limitations, including hallucinations, and agents often require high-quality, real-time data to function correctly. Errors, missing values or overly broad data ranges can derail workflows and produce low-quality outcomes.

Built In also highlights the human factor: rushed adoption without guidance can lead to corner-cutting and weak implementations.

4) What leadership behaviors increase the risk of agent slop?

Andreas Welsch warns that mandates like “use more AI” can leave employees to figure out agents on their own, which encourages cutting corners and missing risks that professional developers or IT experts would catch. This unmanaged rollout increases the likelihood of agent slop.

The coverage frames this as a governance and support gap, not simply a tooling issue.

5) What are the business risks if agent slop becomes customer-visible?

Welsch cautions that if a business becomes known for low-quality or incorrect results, customers may demand refunds and brand image can be tarnished, especially if issues reach the news. He also notes potential legal action if incorrect advice drives decisions.

Andreas Welsch points to the errors in Deloitte’s report and the resulting partial refund as an example of how public failures can damage reputation.

6) How can executives reduce employee distrust in AI agents?

Executives can reduce distrust by setting realistic expectations about AI limitations, requiring review and fact-checking, and providing training on what to delegate to agents. Built In cites research showing skepticism about quality and accuracy, which can fuel resistance without transparency.

This is an AI adoption management challenge as much as a technical one.

7) What training prevents agent slop during AI upskilling programs?

Training that clarifies which tasks to delegate to agents and how to monitor outputs can reduce constant oversight and mis-delegation. Built In cites an Asana report that about one-third of workers are unsure what to delegate, which can create more work and lower productivity.

Built In recommends mentorships, group trainings and access to courses to avoid trial-and-error deployments.

8) What role does AI strategy play in preventing shadow agents?

A clear strategy and roadmap help employees understand how agents will be adopted, deployed and scaled, reducing fear and ad hoc usage. Built In recommends policies defining when agents should and shouldn’t be used, so employees don’t randomly deploy agents that can generate slop.

9) How does culture influence agent slop and AI governance outcomes?

Culture matters because agent slop can occur when employees don’t feel comfortable sharing how they use AI agents. Welsch notes leaders can address hesitation by adding AI to regular meeting agendas, making usage discussable and enabling peer learning to improve transparency.

Andreas Welsch frames this as a practical step to normalize AI as “the new way of working.”

10) Are AI agents still worth adopting despite agent slop concerns?

Welsch argues AI agents remain a future-of-work trend because many organizations report productivity gains, cost savings and faster decision-making from adoption. At the same time, the article notes caution from Andrej Karpathy that perfection may take a decade, making governance essential.

The practical implication is to scale responsibly: combine training, strategy and orchestration to capture value while limiting slop.

About the Author