

Why IT Can’t Be the “HR Department” for AI Agents and Governance
Agentic AI is quickly moving from experimentation to day-to-day work execution, forcing leaders to answer an uncomfortable question: who “manages” AI agents once they operate inside real workflows?
The debate accelerated after NVIDIA CEO Jensen Huang said at CES 2025 that “the IT department of every company is going to be the HR department of AI agents in the future.” The phrase is memorable—yet it can also mislead executive decision-making about agentic AI governance.
Andreas Welsch, an AI leadership expert, argues that treating IT as HR for AI agents risks repeating old organizational mistakes. Managing AI agents demands both software governance and workforce-grade operating disciplines—without pretending the agents are human employees.
Original source: IT the HR of Agentic AI? Not So Fast
Executive Summary
- “IT as HR for agents” is an oversimplification that can distort governance decisions.
- AI agents are software, requiring auditing, compliance and controlled access—not “people management.”
- Welsch recommends applying HR lessons (roles, onboarding, evaluation) to agentic AI operations.
- Effective models depend on an IT–HR partnership, not a departmental handoff.
- Readiness gaps include knowledge quality, documentation and prompt engineering discipline.
Key Takeaways
- Andreas Welsch says HR leaders and subject matter experts should be included in agentic AI conversations early.
- Welsch frames agentic AI as an opportunity for HR to extend competence and relevance as rules for humans also shape agents.
- Welsch warns that positioning IT as “HR for AI agents” can create unforeseen workplace consequences.
- Welsch notes many IT departments have limited AI deployment beyond tools like Copilot, complicating expectations.
- Welsch states agent management should borrow HR’s decades of practices: role definition, onboarding, evaluation and compliance.
- Welsch outlines practical “agent management” questions: persona, scope, responsibility, policy adherence, rewards, planning and collaboration.
What is Agentic AI?
Agentic AI refers to AI agents that can act within workflows—performing tasks, making recommendations and supporting self-service across the enterprise. In the Reworked discussion, these agents are treated as software-driven entities embedded into business processes rather than autonomous “employees.”
Because AI agents can touch policies, data access and customer or employee interactions, agentic AI governance becomes essential. Oversight typically includes clear scope, accountability, auditing, compliance measures and the operational discipline to keep outputs aligned with verified information and organizational rules.
Agentic AI Governance Starts by Rejecting a Catchy—but Risky—Metaphor
Huang’s “IT becomes HR for AI agents” prediction landed because it compresses a complex operating problem into a simple org chart move. The Reworked coverage challenges that simplicity.
Andreas Welsch cautions that swapping responsibilities between departments can backfire, comparing it to asking HR to integrate and operate a new communication platform. Separate competencies exist for a reason, and agentic AI raises both technical and organizational demands.
Key Insight: Agentic AI governance is not solved by renaming ownership. Welsch’s position suggests leaders should separate the metaphor from the operating reality: AI agents require structured management practices, but those practices blend IT execution with HR-rooted disciplines like role clarity, evaluation and compliance behavior.
AI Agents Are Software, Not Humans—So Governance Must Look Different
In the Reworked article, Pegasystems’ David Vidoni emphasizes a foundational point: AI agents are not human employees. They are software-driven entities that need workflows, governance and compliance.
That distinction matters because it changes what “management” means. The priority becomes accountability, auditing and process adherence—plus transparent recommendations based on verified data. In this framing, IT is well-positioned for technical oversight, while HR’s role is critical for policy documentation across regions and for the human side of adoption.
Key Insight: Treating agents like employees can create governance theater. Vidoni’s point reinforces a more executable model: manage AI agents as software in workflows, and ensure policies, access scope and auditing are explicit. This shifts the conversation from metaphor to controls leaders can validate and regulators can inspect.
Applying HR Lessons to Agentic AI: Welsch’s Operating Checklist
Welsch argues that HR leaders and subject matter experts should be brought into agentic AI planning. The rationale is pragmatic: many rules and processes that apply to human team members will also apply to AI agents—especially when companies need a consistent image and service to customers and stakeholders.
Welsch also notes a readiness constraint: many IT departments “barely have traditional AI or even generative AI use cases deployed, aside from Copilot.” That gap makes it risky to assume IT can single-handedly take on a broadened “HR for agents” mandate.
Key Insight: Welsch’s core contribution is not a slogan—it is an operating translation. Agent management should replicate HR-domain lessons built over decades: role definition, proficiency expectations, onboarding, evaluation, learning, reward and compliance processes. This turns agentic AI from experiments into governable workforce infrastructure.
The seven questions Welsch says leaders should answer
- Persona (cultural fit): How should the agent behave and communicate?
- Scope of work (job description): What is the agent expected to work on?
- Responsibility (seniority): What is the agent allowed to do?
- Policy adherence (code of conduct): How is the agent expected to behave?
- Rewards (compensation): What incentives motivate goal achievement?
- Planning (workforce analysis): How is the need for more agents determined?
- Collaboration (organizational structure): What teams and roles must coordinate?
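To make these questions actionable, they can be captured as an explicit, reviewable "role definition" for each agent. The sketch below is illustrative only: the data model, field names and example agent are assumptions for this article, not part of Welsch's framework or the Reworked coverage.

```python
from dataclasses import dataclass, field

@dataclass
class AgentRoleDefinition:
    """Captures the seven questions as an explicit, reviewable record."""
    name: str
    persona: str                       # cultural fit: tone and communication style
    scope_of_work: list[str]           # job description: tasks the agent may work on
    allowed_actions: list[str]         # seniority: actions it may take, not just suggest
    policies: list[str]                # code of conduct: policies it must follow
    success_metrics: dict[str, float]  # "rewards": measurable goals it is held to
    capacity_signal: str               # workforce planning: trigger for adding agents
    collaborators: list[str] = field(default_factory=list)  # org structure

# Hypothetical example: an HR policy Q&A agent
hr_policy_agent = AgentRoleDefinition(
    name="hr-policy-qa",
    persona="Concise, neutral, cites the policy source for every answer",
    scope_of_work=["answer employee questions about published HR policies"],
    allowed_actions=["retrieve policy documents", "draft an answer for review"],
    policies=["data-privacy-policy-v3", "regional-leave-rules-eu"],
    success_metrics={"answer_accuracy": 0.98, "escalation_rate": 0.05},
    capacity_signal="median response backlog above 50 open questions",
    collaborators=["HR operations", "IT platform team"],
)
```

Writing the answers down in this form forces the ambiguities to surface before deployment, which is exactly where the next section picks up.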
Designing Agent Roles: Persona, Scope and “Seniority” Reduce Risk
Welsch’s first three questions—persona, scope and responsibility—map to the highest-impact governance failures leaders see in practice: inconsistent tone, unclear boundaries and over-privileged actions.
Persona defines how an agent communicates with employees, customers or stakeholders. Scope defines the tasks it can touch. Responsibility—framed as “seniority”—sets what it is allowed to do, not just suggest.
These elements are operational guardrails. They determine what data an agent may access, how it behaves in sensitive scenarios and whether it can initiate actions that carry compliance or reputational consequences.
Key Insight: Role design is governance in disguise. Welsch’s “persona, scope, responsibility” triad can be used as a leadership test: if an AI agent’s job description cannot be written clearly, it should not be granted broad workflow permissions. Clarity precedes automation—especially when stakes are high.
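One way to operationalize "clarity precedes automation" is a deny-by-default permission gate: any action not explicitly listed in the agent's role definition is refused. A minimal sketch, building on the hypothetical AgentRoleDefinition above:

```python
class ActionNotPermitted(Exception):
    """Raised when an agent requests an action outside its defined responsibility."""

def authorize(agent: AgentRoleDefinition, requested_action: str) -> None:
    # Deny by default: anything missing from the role definition is refused,
    # which forces the "job description" to be written before permissions widen.
    if requested_action not in agent.allowed_actions:
        raise ActionNotPermitted(
            f"{agent.name} is not authorized for '{requested_action}'; "
            f"allowed: {agent.allowed_actions}"
        )

authorize(hr_policy_agent, "retrieve policy documents")   # passes
# authorize(hr_policy_agent, "update payroll records")    # raises ActionNotPermitted
```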
Policy Adherence and Rewards: The Hard Part Isn’t Code, It’s Behavior
Welsch explicitly includes “policy adherence (code of conduct)” and “rewards (compensation)” in agent management. That framing matters because it pushes leaders beyond technical deployment into behavioral alignment.
In a workplace context, “policy adherence” becomes a design and monitoring problem: how the agent should behave, what it must not do, and how exceptions are handled. “Rewards” becomes an incentive design question—what the organization optimizes, and how success is measured for goal achievement.
The Reworked coverage also flags the importance of correcting mistakes—such as incorrect or offensive responses or mishandling confidential information—reinforcing why policies must be explicit and enforceable.
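In code terms, policy adherence reduces to checks run on every output plus an explicit exception path. The sketch below is a simplified stand-in, assuming keyword checks in place of whatever real moderation and data-loss-prevention tooling an organization actually uses:

```python
import logging

logger = logging.getLogger("agent.compliance")

# Illustrative markers only; real systems would use proper DLP classification
CONFIDENTIAL_MARKERS = ["salary band", "ssn", "medical record"]

def review_output(agent_name: str, text: str) -> str:
    """Return the text if it passes policy checks; otherwise log and escalate."""
    violations = [m for m in CONFIDENTIAL_MARKERS if m in text.lower()]
    if violations:
        # Exceptions are handled, not hidden: log for audit, route to a human.
        logger.warning("Policy violation by %s: %s", agent_name, violations)
        return "This request needs human review and has been escalated."
    return text
```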
IT–HR Must Evolve Together—But Not by Pretending Agents Are Employees
The article includes strong skepticism from Deborah Perry Piscione, who argues that companies often underestimate human factors and organizational complexity. She notes that technical implementation can be a small share of the challenge, while culture adaptation, skills transformation and organizational resistance dominate outcomes.
Workato CIO Carter Busse adds that IT professionals may need stronger emotional intelligence, communication and change management to handle employee resistance. Busse also highlights practical constraints: the need for clean, organized knowledge, and the reality that employees often avoid writing things down.
Welsch’s model fits this reality by proposing shared operating disciplines without collapsing roles. HR can help ensure the “rules of the workplace” translate into agent expectations, while IT ensures secure implementation, maintenance and controlled access in workflow systems.
Operational Readiness: Knowledge Quality and Prompt Discipline Shape Agent Performance
The Reworked coverage underscores that agentic AI performance depends on the quality of underlying knowledge. Busse notes organizations will need to curate clean, well-organized knowledge to improve agent performance—challenging in environments where documentation is avoided.
Busse also points to prompt engineering as increasingly vital to ensure AI agents return the right answers and perform effectively. In an executive context, this is less about novelty and more about operational control: consistent prompts, testable behaviors and predictable outputs.
Vidoni’s emphasis on transparency and verified data connects directly: governance is undermined when agents are fed inconsistent knowledge or lack clear rules on what information they can access.
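One way to read "prompt discipline" in code terms is a versioned, centralized prompt template paired with regression-style tests, so behavior changes are deliberate and reviewable rather than ad hoc. The template text and test below are illustrative assumptions, not details from the article:

```python
PROMPT_TEMPLATE_V2 = (
    "You are an HR policy assistant. Answer only from the documents provided. "
    "If the documents do not contain the answer, say so and suggest contacting HR.\n\n"
    "Documents:\n{documents}\n\nQuestion: {question}"
)

def build_prompt(documents: str, question: str) -> str:
    # Centralizing the template keeps prompts consistent and reviewable,
    # instead of being rewritten differently in each integration.
    return PROMPT_TEMPLATE_V2.format(documents=documents, question=question)

def test_prompt_constrains_to_sources():
    prompt = build_prompt("Leave policy: 25 days.", "How many leave days?")
    assert "Answer only from the documents provided" in prompt
    assert "Leave policy: 25 days." in prompt
```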
Leadership Implications
- Define agent roles like jobs: use persona, scope and responsibility before granting workflow permissions.
- Build joint governance: align IT controls with HR-rooted policy, conduct and regional documentation needs.
- Operationalize auditing: require transparent recommendations grounded in verified information and accountable logs (a minimal sketch follows this list).
- Invest in knowledge readiness: curate clean knowledge sources; treat documentation as a performance dependency.
- Plan workforce integration: set collaboration models for humans and agents, and define escalation for agent mistakes.
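As a concrete illustration of operationalized auditing, each recommendation can be written to an append-only log that ties the output to its sources and an accountable owner. The record format below is an assumption for illustration, not a prescribed standard:

```python
import json, time, uuid

def log_recommendation(agent_name: str, recommendation: str,
                       sources: list[str], owner: str,
                       path: str = "agent_audit.jsonl") -> str:
    """Append an auditable record linking output, evidence and accountable owner."""
    record = {
        "id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "agent": agent_name,
        "recommendation": recommendation,
        "sources": sources,          # verified documents the answer was grounded in
        "accountable_owner": owner,  # who owns corrections if the agent is wrong
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record["id"]
```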
Why this media coverage matters
This Reworked feature targets digital workplace, talent and employee experience leaders navigating the shift from AI tools to agentic AI. The coverage matters because it challenges a high-profile narrative (IT as HR for agents) with operational realities: governance, change management and cross-functional accountability.
For AI leadership and workforce transformation, the article frames what executives must do next: stop relying on metaphors, define roles and guardrails, and build a shared operating model that keeps AI agents accountable inside business processes.
Welsch’s contribution is particularly relevant to executive audiences because it converts the debate into a set of implementable questions—making agentic AI governance measurable, discussable and easier to operationalize.
Conclusion
Agentic AI governance cannot be delegated to a single function based on a catchy prediction. The Reworked discussion shows why: AI agents are software in workflows, yet they also shape human work, policy adherence and organizational trust.
Andreas Welsch’s guidance centers on applying HR-grade operating lessons to agent management—persona, scope, responsibility, conduct, incentives, planning and collaboration—while preserving the distinct competencies of IT and HR. For executives, that combination is the path from hype to durable workforce transformation.
FAQ
Should IT be the “HR department” for AI agents?
No—agentic AI governance should not be reduced to IT acting as HR for AI agents. AI agents are software in workflows, while HR-grade disciplines still matter. The practical solution is an IT–HR partnership with clear accountability, policies and controls.
Are AI agents considered employees in the workplace?
No—AI agents are not employees; they are software-driven entities embedded into business processes. That means agentic AI governance focuses on workflows, compliance, auditing and access controls. HR involvement remains important for policy translation and workforce change management.
What does Andreas Welsch recommend for managing AI agents?
Andreas Welsch recommends applying HR lessons to agentic AI management, including role definition, onboarding, evaluation, learning, rewards and compliance. He also proposes seven practical questions leaders should answer—covering persona, scope, responsibility, conduct, incentives, planning and collaboration.
What is the most important starting point for agentic AI governance?
The best starting point is defining the agent’s role: persona, scope of work and responsibility boundaries. This makes governance concrete and prevents overreach. If an AI agent’s “job description” cannot be written clearly, it should not receive broad workflow permissions.
Why do HR leaders need to be involved in agentic AI?
HR leaders need to be involved because many workplace rules and processes that apply to humans also shape AI agent behavior. Welsch frames this as an opportunity for HR relevance. HR helps translate policies, conduct expectations and regional requirements into operational guidance for agents.
What are common operational risks when deploying AI agents?
Common risks include unclear scope, inconsistent behavior, mishandling confidential information and untraceable recommendations. The Reworked coverage highlights the need for governance, auditing and verified data. Agentic AI governance should specify allowed actions, escalation paths and monitoring for mistakes.
How does knowledge quality impact AI agent performance?
Knowledge quality directly affects whether AI agents deliver accurate, consistent outcomes. The coverage notes organizations must curate clean, organized knowledge and that employees often avoid writing things down. Without disciplined documentation, agentic AI adoption becomes unreliable and harder to govern at scale.
Is prompt engineering a leadership concern or just a technical detail?
Prompt engineering becomes a leadership concern when it affects reliability, compliance and employee trust. The article notes prompt engineering is increasingly vital to get correct answers and effective performance. In agentic AI governance, prompt discipline supports predictable behavior and easier auditing of outputs.
How should executives measure accountability for AI agents?
Executives should measure accountability through transparent recommendations, verified data sources and auditable logs aligned to policies and workflows. This reflects the article’s emphasis on auditing and compliance for AI agents. Agentic AI governance should also define who owns corrections when agents make mistakes.

