Thoughts on the Market

AI’s Tangible Wins and Disruption

March 5, 2026

Live from Morgan Stanley’s TMT conference, our panel breaks down where AI is already delivering real returns—and where rapid advances are raising new risks.

Transcript

Michelle Weaver: Welcome to Thoughts on the Market. I'm Michelle Weaver, U.S. Thematic and Equity Strategist here at Morgan Stanley.

 

Today we've got a special episode on AI adoption. And this is the first in a two-part conversation, live from our Technology, Media and Telecom conference.

It's Thursday, March 5th at 11am in San Francisco.

 

We're really excited to be here with all of you, taping live. And on stage with me we've got Stephen Byrd, our Global Head of Thematic and Sustainability Research; Josh Baer, Software Analyst; and Lindsay Tyler, TMT Credit Research Analyst.

 

So, Stephen, I want to start with you, pretty broad, pretty high level. We recently published our fifth AI Mapping Survey that identifies how different companies are exposed to the broad AI theme. Can you just share with us some insights from that piece and how stocks are performing with this AI exposure?

 

Stephen Byrd: Yeah, it's interesting. I mean, we've been doing this survey now, thanks to you, Michelle, and your excellent work, for quite a while. And every six months it is pretty telling to see the progression.

 

A few things got my attention from our most recent mapping. The number of companies that are quantifying the adoption benefits continues to go up quite a bit. And to me it feels like that's going to be table stakes very soon, as in every industry you see two or three companies that are really laying out quite specifically what they expect to be able to do with AI, and laying out the math. I think that really is going to pull all the other companies to follow suit. So, we're seeing that in a big way.

 

We do see adopters with real, tangible benefits performing well. But a new thing that we're seeing now in the market, of course, is concern that in some cases adoption can lead to dramatic deflation, disruption, et cetera. That's coming up as well. So, we're seeing greater concerns around disruption, too.

 

But broadly, I'd say a proliferation of adoption, that universe of companies continues to grow, along with increases in quantification of the benefits. So, that is good. What's really surprised me, though, is how quickly the narrative among investors has moved from those benefits, which we've talked about, to toggling all negative, which I know some of our analysts have to deal with every day. The mapping work suggests significant benefits. But the market is fast-forwarding to very powerful AI that is very disruptive and deflationary. And that's been a surprise to me.

 

 Michelle Weaver: Mm-hmm. Josh, I want to bring software into this. Your team has been arguing that AI is actually good for software. And it's really something that you need that application layer to then enable other companies to adopt AI. Can you tell us a little bit about how much GenAI could add to the broader enterprise software market? And how are you thinking about monetization these days?

 

Josh Baer: Of course. I think the best starting place is a reminder that AI is software, and so we see software as a TAM expander. And in many ways, even though this is extremely exciting innovation, it's following past innovation trends, where first you see value and market cap accrue to semiconductors, then hardware and devices, and then eventually software and services. And we do think that absolutely will occur, just given the $3 trillion in infrastructure investment into data centers and GPUs.

 

There's got to be an application layer that brings all of these productivity and efficiency gains to enterprises and advanced capabilities to consumers as well. And so we see AI more as an evolution for software than a revolution. An evolution of capabilities and expansion of capabilities. LLMs and diffusion engines absolutely unlocked all of these new features of what software can do. But incumbents will play a key role in this unlock.

 

And our CIO surveys really support that. Quarterly we ask chief information officers about their spending intentions, and these application vendors who we cover in the public markets are increasingly selected as vendors that companies will go to, to help deploy and apply AI and LLM technologies.

 

So, to answer your question, we estimate GenAI could unlock $400 billion in incremental TAM for enterprise software by 2028. And this is based on looking at the type of work able to be automated, the labor costs associated with that work, the scope of automation, and then thinking about how much of that value is typically captured by software vendors.

 

Michelle Weaver: And you have a bit of a different lens on AI adoption. So, what are some of the ways you're hearing software customers using these AI tools and anything interesting that popped up at the conference?

 

Josh Baer: To echo what Stephen laid out, I mean, all of our software companies are using AI internally, both to drive efficiencies but also to move faster. So, thinking about product innovation, you know, the incumbents are able to use all of the same coding tools and, you know, …

 

Michelle Weaver: Mm-hmm.

 

Josh Baer: … products geared to developers to move faster and more efficiently on R&D. So, they're doing more. From a sales and marketing perspective, a G&A perspective, every area of OpEx, our software companies are in a great position to deploy AI tools internally.

 

I think more importantly, speaking to this TAM and expanded opportunity, our companies have SKUs that they're monetizing. It might be a separate suite that incorporates advanced AI functionality. It might be a standalone offering. Or it might be embedded into the core platform, because the essence of software is AI, and it's, you know, leading to better retention rates and acceleration from here.

 

Michelle Weaver: Mm-hmm. And Stephen, going back to you on the state of play for AI, we had the AI labs here and we heard a lot about the developments and what's to come. So, what's your view on the trajectory for LLM advancements and what are some of the key signposts or catalysts you're watching here?

 

Stephen Byrd: Yeah, this is for me maybe the most important takeaway of the conference – this continued non-linear improvement of LLMs, which we've been writing about for quite some time. And just to give you an example, we think many of the labs have achieved a step change up in terms of the compute that they have, in some cases 10x the amount of compute to train their LLMs. And if the scaling laws hold – and we see every sign that they will – a 10x increase in compute used to train the models results in about a doubling of the model capabilities.

 

Now just let that sink in for a moment. Let's just think about that. A doubling from here in a relatively short period of time is difficult to predict. It's obviously very significant, and I think several of the LLM execs at our event sounded extremely bullish on what that will be. A lot of that, I think, will be evident in greater agentic capabilities.

 

But also, I'd say greater creativity. About three weeks ago, three of the best physics minds in the world worked with an LLM to achieve a true breakthrough in physics – solving a problem that had never been solved before. A couple of days ago, a math team did the same thing. And so, what we're seeing is sort of these breakthrough capabilities in creativity. This morning, I thought Sam spoke to, you know, incredible increases in what these models can do – which also brings risk. You know, I think it was interesting he spoke to the risk of misalignment, the risk of what these models are doing.

 

But for me, that's the single biggest thing that I'm thinking about, and that's going to be evident in the next several months.

 

Michelle Weaver: Mm-hmm.

 

Stephen Byrd: So, you know, on the positive side, it leads to greater benefits from AI adoption. And to Josh's point that, you know – more and more the economy can be addressed by AI, I do get concerned about the risk that that kind of step change will create greater concerns about disruption and deflation.

 

That causes me to think a lot about that dynamic. Interestingly, we think the Chinese labs will not be able to keep pace for just one reason, which is compute. We think the Chinese labs have everything else they need. They have the talent, the infrastructure. They certainly have the energy, the power. But they don't have the chips.

 

If what we laid out with the American models turns out to be true, I could see a chain reaction where the Chinese government pushes the Trump administration for full transfer of the best technology to China. And China could use their rare earth trade position to ensure that. 

 

Michelle Weaver: Mm-hmm. So, let's think about then bottlenecks in the U.S. Power is still one of the main bottlenecks. We had several of the solutions providers here at the conference. So, what are you thinking in terms of the size of the power bottleneck in the U.S. and how are we going to fix that?

 

Stephen Byrd: Yeah, absolutely. I am bullish on the companies that can de-bottleneck power, not just in the U.S. but in a few other places. Let's go through the math in terms of the problem we face and then the solution.

 

So, we have this very cool – it is cool if you're a nerd – power model from our semiconductor teams that starts at the chip level and builds up. And from that, we build a global power demand model for data centers. We then apply that to the U.S.

 

Through 2028 we need about 74 gigawatts of data centers, both AI and non-AI, to be built in the United States. I don't think we'll be able to achieve that, for lots of reasons. But starting from that 74, we have sort of 10 gigs that have been recently built or are under construction. We have 15 gigs of incremental grid access. But after those two, we have to go to unconventional solutions, meaning typically off-grid solutions – over 40 gigawatts of them.

 

So that will be repurposing Bitcoin sites, which could be sort of 10 to 15 gigawatts. That'll be big. Renewable energy and fuel cells will be part of the solution. Gas turbines will be a big part of the solution. Co-locating at a few nuclear plants – I'm less bullish than I used to be on that. But when we net all that out, we think the U.S. is likely to be 10 to 20 percent short of the data center capacity that will be needed.

 

It's not just a power grid access issue, though, that's a big one. Labor is now showing up as a huge issue. Many of the companies I speak to trying to develop data centers struggle with availability of labor. Electricians being one very tangible example. In the U.S. we need hundreds of thousands of additional electricians.

 

So, for any of your children, like mine, thinking about careers, you know, you'd be surprised at the amount of money that people are making in the infrastructure business. That does feel like a labor shift that's going to have to happen, but it's going to take years. So, in that context, we had a number of the Bitcoin companies at our event here. And the economics of turning a Bitcoin site into hosting a data center are extremely attractive. I mean, extremely attractive.

 

To give you a sense of that: before this opportunity presented itself to these Bitcoin players, those stocks tended to trade at an enterprise value per watt of about $1 to $2. Then we started to see these deals in which the Bitcoin players build a data center and lease it to hyperscalers. Those deals – it depends a lot on the deal, but – have created between $10 and $18 a watt of value. Let me repeat that. 10 to 18 – relative to where these stocks were at 1 to 2.

 

Now many of these stocks have rerated, but not all of them. And there's still quite a bit of upside. And what we've noticed is the economics that the hyperscalers are paying are trending up and up and up. Because of this power shortage that we're dealing with. So, a lot of exciting opportunities are still in the power space.

 

Michelle Weaver: Great. Well, I think that's a good place to wrap this first part of our conversation around AI adoption and the state of play. We'll be back again tomorrow with Part Two, looking at financing and risks.

 

To our panelists, thank you for talking with me. And to our audience, thanks for listening. If you enjoy Thoughts on the Market, please leave us a review wherever you listen and share the podcast with a friend or colleague today.

Hosted By
  • Michelle Weaver

Thoughts on the Market

Listen to our financial podcast, featuring perspectives from leaders within Morgan Stanley on the forces shaping markets today.

Up Next

February 27, 2026

AI as New Global Power?

Our Deputy Head of Global Research Michael Zezas and Stephen Byrd, Global Head of Thematic and Sustainability Research, ask whether AI is becoming the new anchor of geopolitical power.

Transcript

Michael Zezas: Welcome to Thoughts on the Market. I'm Michael Zezas, Morgan Stanley's Deputy Head of Global Research.

 

Stephen Byrd: And I'm Stephen Byrd, Global Head of Thematic and Sustainability Research.

 

Michael Zezas: Today – is AI becoming the new anchor of geopolitical power?

 

It's Friday, February 27th at noon in New York.

 

So, Stephen, at the recent India AI Impact Summit, the U.S. laid out a vision to promote global AI adoption built around what it calls “real AI sovereignty” – or strategic autonomy through integration with the American AI stack. But several nations from the global south, and possibly parts of Europe, appear skeptical of dependence on proprietary systems, citing concerns about control, explainability, and data ownership. And what's at stake isn't just technology policy. It's the future structure of global power, economic stratification, and whether sovereign nations can realistically build competitive alternatives outside the U.S. and China.

 

So, Stephen, you were there and you've been describing a growing chasm in the AI world in terms of access to strategies between the U.S. and much of the global south, and possibly Europe. So, from what you heard at the summit, what are the core points of disagreement driving that divide?

 

Stephen Byrd: There definitely are areas of agreement; and we've seen a couple of high-profile agreements reached between the U.S. government and the Indian government just in the last several days. So there certainly is a lot of overlap. I point to the Pax Silica agreement that's so important to secure supply chains, to secure access to AI technology. I think the focus, for example, for India is, as you said; it is, you know, explainability, open access. I was really struck by Prime Minister Modi's focus on ensuring that all Indians have access to AI tools that can help them in their everyday life.

 

You know, a really tangible example that stuck with me is someone in a remote village in India who has a medical condition, and there's no doctor or nurse nearby, using AI to, you know, take a photo of the condition, receive a diagnosis, receive support, figure out what the next steps should be. That's very powerful. So, I'd say open access and explainability are very important.

 

Now, the American hyperscalers are very much trying to serve the Indian market and serve the objectives of the Indian government. And so, there are versions of their models that are open weights, that are being made freely available to health agencies in India and to the Indian government, as an example.

 

So, there is an attempt to really serve a number of objectives. But I think the key is around open access and explainability, where I do see that there's a tension.

 

Michael Zezas: So, let's talk about that a little bit more. Because it seems one of the concerns raised is this idea of being captive within proprietary Large Language Models. And maybe that includes the risk of having to pay more over time or losing control of citizen data. But, at the same time, you've described that there are some real benefits to AI that these countries want to adopt.

 

So, what is effectively the tension between being captive to a model or the trade off instead for pursuing open and free models? Is it that there's a major quality difference? And is that trade off acceptable?

 

Stephen Byrd: See, that's what's so fascinating, Mike. You know, what we need to be thinking about is not just where the technology is today, but where it is in six months, 12 months, 24 months. And from my perspective, it's very clear that the proprietary American models are going to be much, much more capable.

 

So, let's put some numbers around that. The big five American firms have assembled about 10 times the compute to train their current LLMs compared to their prior LLMs, and that's a big deal. If the scaling laws hold, then a 10x increase in training compute should result in models that are about twice as capable.

 

Now just let that sink in for a minute, twice as capable from here. That's a big deal. And so, when we think about the benefit of deploying these models, whether it's in the life sciences or any number of other disciplines, those benefits could start to get very large. And the challenge for the open models will be – will they be able to keep up in terms of access to compute, to training, access to data to train those models? That's a big question.

 

Now, again, there's room for both approaches and it's very possible for the Indian government to continue to experiment and really see which approach is going to serve their citizens the best. And I was really struck by just how focused the Indian government is on serving all of their citizens. Most notably, you know, the poorest of the poor in their nation. So, we'll just have to see.

 

But the pure technologist would say that these proprietary models are going to be increasing in capability much faster than the open-source models.

 

So, Mike, let's pivot from the technology layer to the geopolitical layer because the U.S. strategy unveiled at the summit goes way beyond innovation.

 

Michael Zezas: Yeah, it's a good point. And within this discussion of whether or not other countries will choose to pursue open models or more closely adhere to U.S. based models is really a question about how the United States exercises power globally and how it creates alliances going forward.

 

Clearly some part of the strategy is that the U.S. assumes that if it has technology that's alluring to its partners, that they'll want to align with the U.S.’ broad goals globally. And that they'll want to be partners in supporting those goals, which of course are tied to AI development.

 

So, the Pax Silica [agreement], which you mentioned earlier, is an interesting point here because this is clearly part of the U.S. strategy to develop relationships with other countries – such that the other countries get access to U.S. models and access to U.S. AI in general. And what the U.S. gets in return is access to supply chain, critical resources, labor, all the things that you need to further the AI build out. Particularly as the U.S. is trying to disassociate more and more from China, and the resources that China might have been able to bring to bear in an AI build out.

 

Stephen Byrd: So, Mike, the U.S. framed “real AI sovereignty” as strategic autonomy rather than full self-sufficiency. So, essentially the U.S. is encouraging nations to integrate components of the American AI stack. Now, from your perspective, Mike, from a macro and policy standpoint, how significant is that distinction?

 

Michael Zezas: Well, I think it's extremely important. And clearly the U.S. views its AI strategy as not just economic strategy, but national security strategy.

 

There are maybe some analogs to how the U.S. has been able to, over the past 80 years or so, use its dominance in military and military equipment to create a security umbrella that other countries want to be under. And do something similar with AI, which is if there is dominant technology and others want access to it for the societal or economic benefits, then that is going to help when you're negotiating with those countries on other things that you value – whether it be trade policy, foreign policy, sanctions versus another country. That type of thing.

 

So, in a lot of ways, it seems like the U.S. is talking about AI and developing AI as an anchor asset to its power, in a way that military power has been that anchor asset for much of the post World War II period.

 

Stephen Byrd: See, that's what's so interesting, Mike, [be]cause you've highlighted before to me that you believe AI could replace weaponry as really the anchor asset for U.S. global power. Almost a tech equivalent of a defense umbrella.

So how durable is that strategy, especially given that some countries are expressing unease about dependency?

 

Michael Zezas: Yeah, it's really hard to know, and I think the tension you and I talked about earlier, Stephen, about whether countries will be willing to make the trade off for access to superior AI models versus open and free models that might be inferior, that'll tell us if this is a viable strategy or not. And it appears like this is still playing out because, correct me if I'm wrong, it's not like we've received some very clear signals from India or other countries about their willingness to make that trade off.

 

Stephen Byrd: No, I think that's right. And just building on the concept of the trade-offs and, sort of, the standard for AI deployment, you know, the U.S. has explicitly rejected centralized global AI governance in favor of national control aligned with domestic values.

 

So, what does that signal about how global technology standards may evolve, particularly as, in the U.S., the National Institute of Standards and Technology, or NIST, works to develop interoperable standards for agentic AI systems?

 

Michael Zezas: Yeah, Stephen, I think it's hard to know. It might be that the U.S. is okay with other countries having substantial degrees of freedom with how they use U.S.-based AI models because they could use U.S. law to, at a later date, change how those models are being used – if there's a use case that comes out of it that they find is against U.S. values. Similar in some way to how the U.S. dollar being the predominant currency and, therefore, being the predominant payment system globally, gives the U.S. degrees of freedom to impose sanctions and limit other types of economic transactions when it's in the U.S. interest.

 

So, I don't know that to be specifically true, but it's an interesting question to consider and a potential motivation behind why a laissez-faire approach might be, ultimately, still aligned with U.S. interests.

 

Stephen Byrd: So, Michael, it sounds like really AI is becoming the new strategic infrastructure globally.

 

Michael Zezas: Yeah, I think that's actually a great way to think about it. And so, Stephen, if that were the case, and we're talking about the potential for this to shape geopolitical competition, potentially economic differentials across the globe. And if that is correlated, at least, to some degree with the further development and computing power of these models, what do you think investors should be looking at for signals from here?

 

Stephen Byrd: Number one, by a mile for me, is really the pace of model progress. Not just American models, but Chinese models, open-source models. And there the big reveal for the United States should be somewhere between April and June – for the big five LLM players. That's a bit of speculation based on tracking their chip purchases, their power access, et cetera. But that appears to be the timeframe and a couple of execs have spoken to that approximate timeframe.

 

I would caution investors that I think we're going to be surprised in terms of just how powerful those models are. And we're already seeing in early 2026, these models that were not trained on that kind of volume of compute have really exceeded expectations, you know, quite dramatically in some cases. And I'll give you one example.

 

METR is a third party that tracks the complexity of what these models can do. And METR has been highlighting that every seven months, the complexity of what these models are able to do approximately doubles. That's very fast. But what really got my attention was, about a week ago, one of the LLMs broke that trend in a big way to the upside.

 

So, if that trend held, METR would have expected a model to be able to act independently for about eight hours – a little over eight hours. And what we saw was that the best American model that was recently introduced was more like 15. That's a big deal. And so, I think we're seeing signs of non-linear improvement.

 

We're also going to see additional statements from these AI execs around recursive self-improvement of the models. One ex-AI executive spoke to that, and another LLM exec did recently as well. So, we're starting to see an acceleration. That means we then need to really consider the trade-offs between the open models and the proprietary ones. That's going to become really critical, and that should happen through the spring and summer.

Michael Zezas: Got it. Well, Stephen, thanks for taking the time to talk.

 

Stephen Byrd: Great speaking with you, Mike.

 

Michael Zezas: And thanks for listening. If you enjoy Thoughts on the Market, please leave us a review wherever you listen. And share the podcast with a friend or colleague today.

 

Our Global Commodities Strategist Martijn Rats discusses the geopolitical drivers behind the recent oil market rally.

Transcript

Martijn Rats: Welcome to Thoughts on the Market. I’m Martijn Rats, Morgan Stanley’s Global Commodities Strategist.

Today – what’s fueling the latest oil market rally.

It’s Thursday, February 26th, at 3pm in London.

What happens when oil prices jump, even though there’s no actual shortage of oil? That’s the situation we’re in right now. Tensions between the U.S. and Iran have escalated again. Naturally, markets are paying attention.

Over the past week, Brent crude rose about $3 to around $72 per barrel. WTI climbed into the mid-$60s. Shipping costs surged. And traders have started paying a premium for protection against a sudden oil spike – levels we haven’t seen since the early days of the invasion of Ukraine.

But here’s the key point: there’s no clear evidence that global oil supply has tightened. Exports are still flowing. Tankers are still moving. And some near-term indicators of physical tightness have actually softened. When oil is truly scarce, buyers scramble for immediate barrels and short-term prices spike relative to future delivery. Instead, those spreads have narrowed, and physical premiums have eased.

This isn’t a supply shock. It’s a risk premium. In simple terms, investors are buying insurance. So what could happen next? We see four broad scenarios.

Before I outline them though, here’s something we do not see as a core case: a prolonged closure of the Strait of Hormuz. Roughly 15 million barrels per day of crude and another 5 million of refined product move through that corridor. A sustained shutdown would be enormously disruptive. But we think the probability is very low.

Now coming back to our four scenarios. The first is straightforward: a negotiated settlement; conflict is avoided. Iranian exports continue and shipping lanes remain open. In that scenario, what unwinds is the geopolitical risk premium – which we estimate at roughly $7 to $9 per barrel. If that fades, Brent could drift back to the low-to-mid $60s, similar to past episodes where prices spiked on fear and then retraced once supply proved unaffected.

Second, we could see short-lived frictions – shipping delays, higher insurance costs, temporary logistical issues. That might remove a few hundred thousand barrels per day for, say, a few weeks. Prices could briefly spike into the $75–80 range. But balancing forces would kick in relatively quickly. For example, China has been building inventories at a steady pace. At higher prices, that stockbuilding would likely slow, helping offset temporary disruptions. That points to some further upside in prices – but then normalization.

The third scenario is more serious, but still contained: localized export losses of perhaps 1 to 1.5 million barrels per day for a month or two. Prices would stay elevated longer, but spare capacity and demand adjustments could eventually stabilize the market.

Now our last scenario is the most serious, and considers a potential shipping shock. The real risk here isn’t wells shutting down – it’s shipping disruption. Global trade of crude oil depends on efficient tanker movement. If transit times were extended even modestly, effective shipping capacity could fall sharply, creating what amounts to a temporary tightening of about 2 to 3 million barrels per day – or about 6 percent of global seaborne supply. That is a logistics shock, not a production outage – but it would push prices toward early-2022-type levels, at least briefly.

Now let’s zoom out. Beyond geopolitics, the fundamentals look weak. OPEC+ supply is rising, and our forecasts show a sizable surplus building in 2026. Even if some of that oil ends up in China’s stockpiles, a lot would still likely flow into core OECD inventories. Historically, when the market looks like this, prices tend to fall, not rise.

Which brings us back to the central point. Oil isn’t rallying because the world has run out of barrels. It’s rallying because markets are pricing geopolitical risk. And unless that risk turns into actual, sustained disruption, insurance premiums tend to expire.

Thank you for listening. If you enjoy the show, please leave us a review wherever you listen and share Thoughts on the Market with a friend or colleague today.

 

Disclaimer:

This podcast references jurisdiction(s) or person(s) which may be the subject of economic sanctions. Readers are solely responsible for ensuring that their investment activities are carried out in compliance with applicable laws.
