4 Ways the AI Supercycle Is Changing How Companies Operate

Mar 23, 2026

As companies deploy AI systems to act with greater autonomy, they are rethinking software, data, governance and operating models to compete in AI’s next phase.

Key Takeaways

  • Leaders and investors are grappling with the next phase of the AI supercycle, determining where durable value will accrue as capital intensity, constraints and governance come into focus.
  • As models advance rapidly, competitive advantage in enterprise software is shifting toward platforms that make AI reliable, governed and accountable.
  • Data access, structure and governance are increasingly determining both AI performance and enterprise risk.
  • As AI agents execute work directly, companies are rethinking how tasks are orchestrated, controlled and valued.
  • Physical-world requirements around safety, reliability and real-time execution are becoming decisive factors in how edge AI scales.

Artificial intelligence has reached a new inflection point: Companies are deploying AI inference at scale, and agentic AI systems are now capable of planning and executing complex tasks. This is driving a sharp increase in the consumption of compute and tokens, while placing new demands on the software, data and infrastructure required to support AI’s massive expansion.

 

C-suite executives from the world’s largest technology companies gathered at the recent Morgan Stanley Technology, Media & Telecom Conference in San Francisco to discuss how this transition is reshaping investments and operating models. Across conversations, a common theme emerged: the industry is deploying AI into core workflows, systems and physical environments where it can reason, act and operate in ways that must be reliable, governed and economically durable.

 

A key question for leaders and investors is how long this cycle can run and where sustainable value will accrue, noted Mark Edelstone, Chairman of Global Semiconductor Investment Banking at Morgan Stanley. The industry has already seen unprecedented investment in AI infrastructure, with executives describing a step‑change in capital intensity as inference scales. At the same time, constraints around power, memory and compute availability are becoming more visible, alongside increased focus on governance, security and control as AI systems act with greater autonomy. 

Investment Banking

Edelstone on Durability of the AI Wave

We've all seen disruption in the past, where you’ve transitioned from mainframes to PCs to the cloud, to mobile, and in a lot of instances, we've ended up with just overbuilding of infrastructure and capacity, and typically, in most cycles, it's ended poorly.

 

It's really early, in my opinion, here in this AI wave today. A tremendous amount of CapEx has gone into infrastructure. We've built a lot of these large language models, and we're now in the early stages of inference. We've seen just an enormous amount of CapEx from the hyperscalers and neoclouds. We've been growing that at about 60 to 70% a year for the past three years. But investors rightly want to understand how this cycle will ultimately end.

 

We've seen prior cycles that have been challenging. I think this one just has a lot of room to run. My name is Mark Edelstone. I'm the Chairman of our Global Semiconductor Investment Banking practice, and I've been here at the firm since 1997.


Against this backdrop, four themes stood out for corporate leaders and investors at the conference:

 

  1. Software’s advantage is increasingly tied to making AI reliable and accountable at scale.
  2. Data access, structure and governance are becoming central to both performance and risk management.
  3. Companies are rethinking how agentic AI tasks are controlled, coordinated and evaluated for cost.
  4. Physical-world complexities around safety and execution are shaping how edge AI ultimately scales.


1. Enterprise Software May Still Win as Models Advance

The rapid improvement of large language models has raised a central question for executives: Will models themselves displace traditional enterprise software? Discussions at the conference pointed to a more nuanced outcome. While models are dramatically improving the speed and quality of tasks, deploying AI in production environments requires reliability, governance and accountability. “The enterprise buyer isn’t necessarily buying a tool or capability or a software solution—they're buying a service-level agreement (SLA). They're buying the governance, security, control and all those things that wrap around the offering that is embedded in software,” said Rohan Mehra, Global Co-Head of AI Investment Banking.

Investment Banking

Mehra on Software Amid AI Disruption

I think there are two things we all need to remember about these software companies, despite the market turmoil right now that's the result of pressure from different AI technologies. The first is that these companies have deep relationships with their customers. There are a lot of moats around these businesses that have enabled them to grow at scale and deliver performance for a long time.

 

Things like being a system of record, being ingrained into the workflows within an organization, and domain expertise. Those moats, those advantages, those differentiators don't evaporate overnight just because of AI.

 

I think part two that we need to remember is that the enterprise buyer, the customer, isn't necessarily buying a tool or capability or a software solution. They're buying an SLA, right? They're buying the governance, the security, the control, and all those things that wrap around the offering that is embedded in software. And if we think about it through that lens, while the ability to generate code, the ability to innovate with AI, is going to improve the capabilities that we have in the world, it doesn't immediately, overnight, translate into an SLA that's adoptable by the enterprise.

 

And so I think a little bit of patience, a little bit of ability to see through some of this volatility is important. And yes, there will be winners and there will be losers, but over the long arc, I think that a lot of these companies that are under pressure today from a stock price perspective will actually be quite successful.

 

I’m Rohan Mehra, and I’m the Global Head of AI Banking here at Morgan Stanley.


At the same time, enterprises are adapting to the pace of change by preserving optionality. Rather than committing to a single model or provider, many organizations now run frequent “evals” to determine which models perform best for specific tasks. Open and proprietary models increasingly coexist, with teams balancing performance, cost and reliability as model capabilities, pricing and infrastructure economics shift frequently. Long‑term lock‑in has become harder to justify in a market evolving this quickly. As competition intensifies, software platforms that integrate models into end‑to‑end systems of workflows, permissions and controls may gain durable advantage.
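The eval-driven approach described above can be sketched as a small harness that scores candidate models on a shared task set and picks a winner per task. This is a minimal illustration only; the model callables, task list and exact-match scorer are invented stand-ins, not any vendor's API.

```python
# Minimal model-eval harness: score each candidate on the same tasks and
# select the best performer per task. Each "model" here is a stand-in
# callable; in practice it would wrap a provider's client.

def run_evals(models, tasks, score):
    """Return {task_name: best_model_name}, given score(output, expected) -> float."""
    winners = {}
    for name, prompt, expected in tasks:
        results = {m: score(fn(prompt), expected) for m, fn in models.items()}
        winners[name] = max(results, key=results.get)
    return winners

# Hypothetical deterministic "models" for illustration
models = {
    "model_a": lambda p: p.upper(),   # stands in for a model strong at one task
    "model_b": lambda p: p[::-1],     # stands in for a model strong at another
}
tasks = [
    ("shout", "hello", "HELLO"),
    ("mirror", "abc", "cba"),
]
exact_match = lambda out, exp: 1.0 if out == exp else 0.0

print(run_evals(models, tasks, exact_match))
# {'shout': 'model_a', 'mirror': 'model_b'}
```

Because the winners can differ by task, a harness like this naturally supports mixing open and proprietary models rather than committing to one provider.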

 

2. Data Is Determining Performance and Safety in Agentic AI

As AI becomes embedded into day‑to‑day workflows, the structure and governance of data are increasingly shaping how well systems perform and how safely they operate. Interfaces are already shifting away from static dashboards toward more direct and conversational access to information. At the same time, software platforms are being asked to serve two very different users: humans and AI agents. The emergence of agents as a new “customer” of data platforms places fresh demands on how information is structured, accessed and maintained.

 

As AI agents gain autonomy, enterprise data has become the critical layer that determines what these systems can safely and reliably do. Agentic AI has elevated the importance of what many executives described as context engineering: the systems and processes that ensure models have the right information, permissions and constraints at the moment they are asked to act. High‑performing AI systems require memory of prior interactions, retrieval from internal documents and deterministic rules that define what systems and data an agent is allowed to access. Not all relationships can or should be inferred; many must be explicitly defined. As a result, data platform providers are designing AI architectures so that data context—what the model can see and use—is managed outside the model, and information is observable, interoperable and continuously updated.
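The idea that data context is managed outside the model, with explicitly defined rather than inferred permissions, can be illustrated with a simple sketch. The roles, data sources and policy table below are hypothetical examples, not a description of any platform discussed at the conference.

```python
# Context assembly outside the model: a deterministic policy table decides
# which data sources an agent role may read, so the model only ever receives
# context it is explicitly permitted to use.

ACCESS_POLICY = {
    # agent role -> data sources it may read (explicit, never inferred)
    "support_agent": {"faq", "ticket_history"},
    "finance_agent": {"faq", "ledger"},
}

DATA_SOURCES = {
    "faq": "Q: How do I reset my password? A: Use the account page.",
    "ticket_history": "Ticket 101: printer issue, resolved.",
    "ledger": "Q3 revenue: 1.2M",
}

def build_context(role, requested_sources):
    """Return only permitted snippets; refuse the whole request otherwise."""
    allowed = ACCESS_POLICY.get(role, set())
    denied = [s for s in requested_sources if s not in allowed]
    if denied:
        raise PermissionError(f"{role} may not read: {denied}")
    return [DATA_SOURCES[s] for s in requested_sources]

print(build_context("support_agent", ["faq"]))
```

Note that the check happens before any data reaches the model, which is what makes the rule deterministic and auditable rather than dependent on model behavior.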

 

For corporate leaders, this has direct implications for both risk management and competitive advantage. As agents gain access to sensitive information—such as financial, operational or human‑resources data—auditability, lineage tracking and permissioning become essential. At the same time, AI systems that stitch together multiple tools and data sources introduce new security risks, particularly at the seams between systems. Fragmented architectures increase vulnerability, while more consolidated platforms reduce attack surfaces and improve control. Over the long term, competitive advantage may accrue to companies that combine deep, domain‑specific data with strong governance frameworks.

 

3. Agentic AI Is Turning Software into a Workforce

The use of agentic AI represents a shift from software that supports employees to systems that increasingly perform work on their behalf. As active participants in business processes, AI agents are influencing how productivity, workforce design and work itself are organized.

 

Long‑standing assumptions about control and governance are being tested. Conference participants noted that many organizations are experimenting with multiple agentic tools simultaneously, in a market evolving too quickly for standardization. At the same time, agents are increasingly able to access unstructured information such as documents and PDFs, call other agents and operate across systems. This has elevated the importance of secure orchestration—ensuring agents can act, but only within clearly defined boundaries. Fragmented approaches introduce risk; platforms that can coordinate agent behavior centrally are becoming more critical as autonomy increases.
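The notion of secure orchestration, where agents can act only within clearly defined boundaries and behavior is coordinated centrally, might look like the following sketch. The agent and tool names are invented for illustration.

```python
# Central orchestrator sketch: every proposed agent action passes through a
# single checkpoint, so boundaries and audit trails live in one place rather
# than being reimplemented per agent.

class Orchestrator:
    def __init__(self, allowlist):
        self.allowlist = allowlist   # {agent_name: {permitted tool names}}
        self.audit_log = []          # record every decision for auditability

    def dispatch(self, agent, tool, payload):
        permitted = tool in self.allowlist.get(agent, set())
        self.audit_log.append((agent, tool, permitted))
        if not permitted:
            return {"status": "blocked", "tool": tool}
        return {"status": "executed", "tool": tool, "payload": payload}

orch = Orchestrator({"invoice_agent": {"read_invoice", "draft_email"}})
print(orch.dispatch("invoice_agent", "read_invoice", {"id": 42}))
print(orch.dispatch("invoice_agent", "wire_transfer", {"amount": 1_000_000}))
```

Here a blocked call still leaves an audit entry, reflecting the point that fragmented, per-agent controls lose exactly this kind of central visibility.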

 

These changes are also beginning to reshape how software is priced and valued. As agents take on work once performed by humans, seat counts may decline even as activity levels rise. Usage becomes a function of tasks completed rather than people logged in, leading to uneven consumption patterns. Subscription pricing remains attractive for its predictability, but it is increasingly complemented by usage‑based elements, with licensing models expanding to cover both humans and digital workers.
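In its simplest form, the blended pricing described above is a fixed per-seat subscription plus a metered charge per agent-completed task. The rates below are illustrative placeholders, not actual pricing from any company.

```python
# Toy blended-pricing model: per-seat subscription plus metered charges for
# tasks completed by agents. All rates are invented for illustration.

def monthly_bill(human_seats, agent_tasks, seat_price=50.0, task_price=0.02):
    subscription = human_seats * seat_price
    usage = agent_tasks * task_price
    return {"subscription": subscription, "usage": usage,
            "total": subscription + usage}

# As seats shrink and task volume grows, revenue mix shifts toward usage.
print(monthly_bill(human_seats=100, agent_tasks=0))        # pure seat revenue
print(monthly_bill(human_seats=60, agent_tasks=150_000))   # blended revenue
```

The second call shows how total revenue can hold up, or grow, even as seat counts fall, which is the uneven consumption pattern the pricing shift is meant to capture.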

 

4. Edge AI Is Scaling Under Physical‑World Constraints

A notable shift discussed at the conference was the movement of AI from digital workflows into physical environments—vehicles, factories, ports, mines and infrastructure. Intelligence is increasingly being deployed at the edge, embedded directly into machines that can perceive their environment, make decisions locally and act in real time.


 

Many physical environments demand low latency, resilience and control that centralized processing cannot reliably provide. As a result, hybrid architectures are emerging in which models are trained and refined in the cloud but executed locally, close to where decisions are made. This pattern is especially pronounced in industrial and infrastructure settings, where systems must continue operating even when connectivity is limited, and where physical access creates additional security considerations. The edge is becoming the place where intelligence meets accountability. 

 

Physical AI also introduces a different set of design and risk considerations. In safety‑critical applications—such as defense, heavy machinery or autonomous vehicles—autonomous actions must be independently verified, with clear boundaries between what systems can suggest and what they are allowed to execute. While the end applications differ by industry, the underlying challenge is consistent—building systems that can reliably perceive, reason and act according to the laws of the physical world. Over time, the companies best positioned to capture value from physical AI are likely to be those that productize this complexity into scalable platforms.
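The boundary between what a safety-critical system can suggest and what it is allowed to execute can be sketched as an action gate that holds high-risk actions for independent verification. The action names and risk set here are hypothetical.

```python
# Suggest/execute boundary for safety-critical autonomy: low-risk actions run
# directly, while high-risk actions are downgraded to suggestions until an
# independent verifier approves them.

HIGH_RISK = {"apply_brakes_hard", "shutdown_machine"}   # illustrative set

def gate(action, verifier_approved=False):
    """Return ('executed', action) or ('suggested', action) for a proposed action."""
    if action in HIGH_RISK and not verifier_approved:
        return ("suggested", action)    # held for independent verification
    return ("executed", action)

print(gate("adjust_speed"))                              # low risk: executes
print(gate("shutdown_machine"))                          # held for review
print(gate("shutdown_machine", verifier_approved=True))  # verified: executes
```

The key design choice is that verification is a separate input to the gate rather than something the autonomous system can assert about itself.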

Explore More TMT Insights

Learn more takeaways from the Morgan Stanley Technology, Media & Telecom Conference.