Why Agentic AI Won’t Bring The “SaaSpocalypse” Overnight


What Enterprise Software Leaders Should Do Next

Agentic AI is driving a fresh wave of disruption narratives in enterprise software—most notably the claim that “SaaS is dead.” Andreas Welsch, an AI leadership expert and former VP and Head of AI Marketing at SAP America, argues the reality is more nuanced: it is an evolution, not an overnight collapse.

This article distills Welsch’s perspectives from a GLG Roundtable conversation on AI disruption in enterprise SaaS. The discussion focused on what changes first (and what does not), how pricing and cost models may shift, and why governance, traceability, and workforce transformation become decisive leadership issues.

For CIOs, CTOs, CHROs, and software executives, the central leadership challenge is balancing speed with risk: capturing the benefits of agents while maintaining deterministic, auditable business outcomes for critical systems.

Executive Summary

  • Welsch frames the “SaaS apocalypse” as a multi-year evolution, not a big-bang event.
  • Core systems (finance, HR, supply chain) are harder to replace than “edge” tools (project management).
  • Pricing is likely to move beyond seats toward outcomes, business objects, tokens, or credits.
  • Outcome-based models raise governance needs: clear definitions, dispute avoidance, and traceability.
  • Workforce impact centers on review, QA, and judgment—not just faster content or code generation.

Key Takeaways

  • Not all SaaS is equally exposed: Ancillary apps face faster disruption than ERP-grade core processes.
  • Seat-based pricing pressure is real: Fewer humans at screens implies new monetization units.
  • Outcome pricing works best near business metrics: Tickets resolved, documents processed, objects created.
  • Token/credit models can confuse buyers: Transparency tools (estimators, reporting) become necessary.
  • Margins can fluctuate under fixed outcome prices: Vendors may add caps (characters/words/pages) to manage risk.
  • Vendor value is integration and context: The differentiator is workflow + semantics + governance, not the base model.
  • Trust, logging, and auditability are non-negotiable: Especially for regulated industries and mission-critical workflows.

What is Agentic AI?

Agentic AI refers to AI systems that can work toward a goal by taking actions, delegating tasks, and completing multi-step workflows with increasing autonomy. In the conversation, Welsch described agents as a shift beyond simple “assistants,” potentially operating faster than humans and reducing the number of people needed to complete a task. However, he emphasized today’s limitations: reliability, coordination challenges in multi-agent setups, ambiguity in language-based instructions, and the need for strong traceability and human oversight for critical business processes.

Why this conversation matters

This GLG Roundtable discussion surfaced what AI leadership teams need most right now: a pragmatic view of enterprise risk, a realistic timeline for transformation, and clear implications for pricing, governance, and workforce redesign. Welsch repeatedly grounded the debate in enterprise realities—system downtime risk, regulatory requirements, and the difference between “core” and “nice-to-have” applications.

In short, the conversation is relevant because agentic AI is not just a product feature decision. It is an operating model decision that affects revenue architecture, accountability boundaries, and how teams define “great work” in an AI-accelerated environment.

1) The “SaaSpocalypse” is an evolution—core vs. edge matters

Welsch cautioned against treating “SaaS is dead” as a universal claim. In his view, software that runs core processes—finance, HR, supply chain—will take much longer to replace than tools that sit at the edge of operations.

He contrasted core systems with applications like project management, where substitution is already common (spreadsheets, Notion, or quickly built internal tools). If an edge tool fails, the business impact is limited. If a finance system fails—or books money incorrectly—the consequences are far more severe.

Key Insight: Welsch’s core/edge distinction reframes disruption risk. The nearer software is to mission-critical, regulated, and repeatable business processes, the more enterprises prioritize reliability, auditability, and vendor accountability over novelty.

2) Pricing after seats: outcomes, agents, business objects, tokens

Welsch expects enterprise SaaS monetization to evolve as agentic AI reduces the number of humans “in the loop.” He described several directions already visible in the market:

  • Outcome-based pricing (e.g., $2 per successfully resolved service ticket, an example drawn from customer service software).
  • Consumption models (tokens, credits, minutes/pages processed) closer to platform economics.
  • Business-object pricing (documents, pages, reports, job descriptions, bills of lading, etc.).
  • Agent-based pricing (charging for an AI agent that takes over a human task, with high usage allowances).

He also noted the market has not yet converged on a single “successor metric” to seats. Until it does, customers will keep demanding clearer estimators and reporting to understand spend.
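The spend-transparency demand above can be made concrete with a small sketch. This is an illustrative estimator, not a real vendor tool; the function name, rates, allowances, and unit counts are all hypothetical, chosen only to compare the pricing directions Welsch describes on the same workload.

```python
# Illustrative spend estimator for the pricing models discussed above.
# All rates, allowances, and workload figures are hypothetical.

def estimate_monthly_spend(units: int, rate_per_unit: float,
                           included_units: int = 0,
                           platform_fee: float = 0.0) -> float:
    """Estimate monthly spend: an optional flat platform fee plus
    metered usage beyond any included allowance."""
    billable = max(0, units - included_units)
    return platform_fee + billable * rate_per_unit

# The same hypothetical workload expressed in three different units:
outcome_model = estimate_monthly_spend(40_000, rate_per_unit=2.00)       # $2 per resolved ticket
token_model = estimate_monthly_spend(90_000_000, rate_per_unit=0.000002,
                                     platform_fee=500.0)                 # per-token consumption
object_model = estimate_monthly_spend(250_000, rate_per_unit=0.40,
                                      included_units=50_000)             # per processed document
```

The point of such an estimator is exactly the transparency gap Welsch flags: until a successor metric to seats emerges, buyers need a way to translate abstract units (tokens, credits) back into numbers they already budget against.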

Key Insight: Welsch highlighted that pricing works best when it maps to metrics business leaders already monitor. Outcomes and business objects are easier to defend commercially than abstract credits—if “success” is defined upfront with precision.

3) The margin problem: fixed prices, variable AI costs

Outcome-based pricing can create a margin management challenge: vendors may set a fixed unit price (e.g., per resolved ticket), while their underlying costs fluctuate due to token usage, ticket length, or multi-step agent workflows.

Welsch described practical ways vendors may mitigate this risk:

  • Averaging cost across volume (pricing based on typical usage patterns).
  • Caps and thresholds (e.g., a ticket up to X characters; beyond that, it counts as a second ticket).
  • Standard page definitions (e.g., page thresholds by word count, charging again beyond the limit).
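The cap-and-threshold mechanic above reduces to simple arithmetic: a unit of work beyond the defined limit bills as additional units. The sketch below illustrates this for the ticket example; the cap value and price are hypothetical, not figures from the discussion.

```python
import math

# Illustrative cap/threshold billing: a resolved ticket whose transcript
# exceeds the character cap counts as additional billable tickets.
# Both constants are hypothetical placeholders.
TICKET_CHAR_CAP = 4_000   # one billable ticket covers up to this many characters
PRICE_PER_TICKET = 2.00   # fixed outcome price per resolved ticket

def billable_tickets(transcript_chars: int) -> int:
    """Number of billing units for one resolved ticket of a given length."""
    return max(1, math.ceil(transcript_chars / TICKET_CHAR_CAP))

def charge(transcript_chars: int) -> float:
    """Total charge for one resolved ticket under the cap rule."""
    return billable_tickets(transcript_chars) * PRICE_PER_TICKET
```

The design choice is the one Welsch describes: the customer still sees an outcome unit they recognize (a resolved ticket), while the vendor bounds its exposure to unusually long, token-hungry interactions.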

He also pointed out that platform-level competition (selling tokens) tends to be margin-thin, while application-level value (closer to business logic) supports higher pricing power.

Key Insight: Welsch’s point is structural: as vendors move toward outcome pricing, they inherit cost volatility from model usage. Leadership teams should expect contractual “guardrails” and defined limits to become standard deal terms.

4) Why SaaS still matters: integration, semantics, and enterprise-grade delivery

Welsch compared LLMs to “basic technology” like electricity: models improve over time, but the model alone is not the differentiator. In his view, SaaS vendors create value through:

  • Workflow integration into the systems where work happens.
  • Context enrichment using the data inside enterprise applications (e.g., HR, finance, CRM).
  • Governance and guardrails to meet enterprise requirements (safety, reliability expectations).
  • Repeatability versus ad-hoc prompting that varies by user.

He also discussed “human integration” costs: cheaper standalone tools can shift time burdens onto employees who must copy/paste between systems, eroding productivity gains.

Key Insight: Welsch’s argument is that enterprise SaaS differentiation is not “having an LLM.” It is owning the integrated workflow, the semantics (meta-model/knowledge relationships), and the operational accountability that enterprises buy to reduce risk.

5) Data, portability, and the real moat: relationships between systems

When asked about data being a competitive moat—and what happens if customers want to leave—Welsch emphasized that data alone is not valuable. The value lies in semantics and relationships: connecting sales to production, or correlating operational signals across systems to produce decisions faster.

He also returned to a recurring enterprise reality: replacing a core system is not only a technical decision. It is a risk and accountability decision. Leaders must consider who is “on the hook” when something breaks at 2:00 AM in production.

Welsch acknowledged that moving data out is easier for less critical tools than for core systems like manufacturing execution, CRM, or finance.

6) Adoption patterns: what CIOs do vs. what vendors do

Welsch described enterprises as actively exploring agents—often quietly due to internal change-management concerns and fear of backlash. He cited examples discussed at industry conferences, including financial services exploration of agent support for investor portfolio management, and communications firms structuring reusable agent components (document extraction, summarization) to avoid reinventing the wheel.

On the vendor side, he expects established software companies to protect revenue by monetizing access to systems, objects, and outcomes—potentially resulting in higher total cost for customers even if seat counts decline.

He also noted contract realities: enterprise renegotiation typically occurs at renewal cycles (often three to five years), with pricing evolution introduced during renewal or product transitions.

7) Workforce transformation: from “easy button” to review and accountability

Welsch described a near-term shift in knowledge work: AI makes generation easy (emails, reports, code), while the burden shifts to review, filtering, prioritization, and quality assurance.

He cited a developer dynamic: broad adoption of coding assistants, but productivity gains flatten due to review needs, security concerns, and code quality limitations. He also emphasized an operational risk: if critical systems fail, someone must still troubleshoot and restore service.

In leadership training work, he observed that some teams treat AI as an “easy button,” passing drafts to others for review. Welsch’s position: AI does not remove accountability for excellent work; leadership and culture must reinforce that standard.

8) Regulated industries: why traceability and logging are mandatory

For regulated environments, Welsch argued for a minimum requirement: traceability. Leaders need logs showing what an agent did, what data it accessed, and why it made decisions—both for compliance and for root-cause analysis when errors occur.

He highlighted practical troubleshooting questions leadership must be able to answer: Did the agent misunderstand the goal? Did it misinterpret another agent’s instruction? Did context limitations contribute? Did hallucination occur?

This traceability requirement becomes central as enterprises weigh probabilistic AI behavior against deterministic expectations in core processes.
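A minimal sketch of what such traceability could look like in practice: one structured record per agent step, capturing the action, the data touched, and the stated rationale, serialized for an append-only audit log. The field names and identifiers here are hypothetical illustrations, not a standard or a vendor schema.

```python
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

# Illustrative audit record for agent traceability. Field names are
# hypothetical; the point is that each agent step records what was done,
# which data was accessed, and why the agent says it acted.
@dataclass
class AgentTraceRecord:
    agent_id: str
    action: str                                              # what the agent did
    data_accessed: list[str] = field(default_factory=list)   # sources it touched
    rationale: str = ""                                      # stated reason for the action
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

    def to_log_line(self) -> str:
        """Serialize as one JSON line for an append-only audit log."""
        return json.dumps(asdict(self), sort_keys=True)

# Hypothetical example entry:
record = AgentTraceRecord(
    agent_id="service-agent-01",
    action="drafted_refund_response",
    data_accessed=["crm:case/18842", "policy:refunds/v3"],
    rationale="Case matched refund policy section 2.1",
)
log_line = record.to_log_line()
```

Records of this shape are what make Welsch’s troubleshooting questions answerable after the fact: which goal the agent was pursuing, which inputs it consumed, and the rationale it recorded at the time of the decision.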

Leadership Implications

  • Define what “success” means before pricing shifts: Outcome-based contracts require shared, auditable definitions.
  • Build governance for traceability: Logging, access tracking, and RCA readiness should be designed in, not bolted on.
  • Design workflows to reduce “human integration” drag: Favor solutions embedded where work happens to limit copy/paste overhead.
  • Prepare the workforce for review-centric work: Upskill teams on evaluation, security, and judgment, not only generation.
  • Segment AI adoption by risk: Start around the edges; treat core finance/HR/supply chain with higher assurance thresholds.

Why this media coverage matters

This roundtable-style conversation reflects what many enterprise leaders discuss privately: agentic AI is moving quickly, but public statements lag because workforce impact can trigger backlash. Welsch’s contribution is valuable because it focuses on executive operating realities—contract cycles, uptime accountability, pricing units, and the cultural requirement to maintain quality while accelerating output.

It also connects directly to AI leadership priorities: governance, adoption strategy, and workforce transformation. The key message is that disruption is real, but it will be mediated by enterprise risk tolerance, deterministic process requirements, and the need for auditable outcomes.

Conclusion

Welsch’s view of agentic AI and the “SaaSpocalypse” is pragmatic: enterprise SaaS is unlikely to disappear overnight, but monetization models and delivery expectations will evolve. The nearer the software is to core operations, the more reliability, governance, and accountability dominate buying decisions.

For AI leadership teams, the path forward is clear: align agentic AI to business metrics, engineer traceability into workflows, and build workforce capability in review and governance. In this transition, agentic AI becomes not only a technology decision—but a leadership and operating model decision.

FAQ

1) Is the “SaaSpocalypse” real in enterprise software?

SaaS disruption is real, but Andreas Welsch characterizes it as an evolution rather than a sudden collapse. Core enterprise systems are harder to replace due to risk, reliability requirements, and accountability when failures occur. Edge tools face faster substitution.

Welsch differentiates between mission-critical systems (finance, HR, supply chain) and less critical applications (e.g., project management).

2) What enterprise software categories are most exposed to agentic AI disruption?

Software that is not core to operations is most exposed, according to Welsch. Tools that can be replaced with spreadsheets, lightweight apps, or quickly built workflows face higher risk than ERP-grade systems where errors or downtime create major consequences.

He highlights that “rip and replace” is far less likely for core systems than for ancillary applications.

3) How does agentic AI change SaaS pricing models?

Agentic AI pressures seat-based pricing because fewer humans may sit in front of screens, Welsch explains. He expects shifts toward outcome-based pricing, business-object monetization, or token/credit models—depending on where the vendor sits in the stack.

Examples discussed include charging per successfully resolved service ticket or per processed document/page.

4) Why is outcome-based pricing hard to implement?

Outcome-based pricing can work, but it requires precise definitions of success, Welsch notes. If “successful resolution” is ambiguous, customers may dispute charges and vendors may face costly back-and-forth. Governance and upfront alignment become essential.

It also introduces vendor exposure to fluctuating underlying AI costs.

5) Will AI token costs and usage variability compress SaaS margins?

Margins can fluctuate when vendors charge fixed outcome prices but incur variable token costs, Welsch explains. Vendors may average costs, introduce caps, or define thresholds (e.g., by characters, words, pages) to control cost overruns and protect margins.

Application-level value closer to business logic typically supports stronger pricing power than platform-level token resale.

6) What value do SaaS vendors add if models become commoditized?

Welsch argues the model is not the main differentiator; integration and context are. SaaS vendors add value by embedding AI into workflows, enriching prompts with enterprise data, applying guardrails, and delivering enterprise-grade reliability, governance, and repeatability.

This reduces “human integration” work like copy/paste between disconnected tools.

7) What do regulated industries require from agentic AI systems?

Regulated industries need traceability at minimum, Welsch says: logs showing what an agent did, what data it accessed, and why it acted. This supports compliance, audits, and root-cause analysis when something goes wrong in a critical workflow.

Without traceability, leaders cannot reliably troubleshoot or defend decisions.

8) How will agentic AI change software engineering roles?

Welsch expects software engineering to shift toward review and quality assurance as code generation becomes easier. Developers still must validate security, efficiency, and correctness, and maintain enough knowledge to troubleshoot failures in production-critical systems.

The central risk is over-delegation: losing the capability to evaluate and correct AI-generated work.

9) Can LLM providers disintermediate SaaS vendors by selling agents directly?

Disintermediation is a risk, but Welsch emphasizes enterprise buying behavior: organizations often prefer proven vendors for core processes because they can transfer risk and demand uptime support. Sandbox experimentation differs from global rollout for mission-critical operations.

In practice, adoption depends on trust, accountability, and operational reliability at scale.

About the Author