March 4, 2026
Our Deputy Global Head of Research Michael Zezas and Head of Public Policy Research Ariana Salvatore assess the potential market outcomes of the Middle East conflict, weighing its possible duration and economic impact.
Important note regarding economic sanctions. This report references jurisdictions which may be the subject of economic sanctions. Readers are solely responsible for ensuring that their investment activities are carried out in compliance with applicable laws.
Listen to our financial podcast, featuring perspectives from leaders within Morgan Stanley on the forces shaping markets today.
Michael Zezas: Welcome to Thoughts on the Market. I'm Michael Zezas, Morgan Stanley's Deputy Head of Global Research.
Stephen Byrd: And I'm Stephen Byrd, Global Head of Thematic and Sustainability Research.
Michael Zezas: Today – is AI becoming the new anchor of geopolitical power?
It's Friday, February 27th at noon in New York.
So, Stephen, at the recent India AI Impact Summit, the U.S. laid out a vision to promote global AI adoption built around what it calls “real AI sovereignty,” or strategic autonomy through integration with the American AI stack. But several nations from the global south, and possibly parts of Europe, appear skeptical of dependence on proprietary systems, citing concerns about control, explainability, and data ownership. And it appears the stakes aren't just technology policy. It's the future structure of global power, economic stratification, and whether sovereign nations can realistically build competitive alternatives outside the U.S. and China.
So, Stephen, you were there and you've been describing a growing chasm in the AI world, in terms of access and strategy, between the U.S. and much of the global south, and possibly Europe. So, from what you heard at the summit, what are the core points of disagreement driving that divide?
Stephen Byrd: There definitely are areas of agreement; and we've seen a couple of high-profile agreements reached between the U.S. government and the Indian government just in the last several days. So there certainly is a lot of overlap. I'd point to the Pax Silica agreement that's so important to secure supply chains and secure access to AI technology. I think the focus for India, as you said, is explainability and open access. I was really struck by Prime Minister Modi's focus on ensuring that all Indians have access to AI tools that can help them in their everyday life.
You know, a really tangible example that really stuck with me is – someone in a remote village in India who has a medical condition and no doctor or nurse nearby using AI to, you know, take a photo of the condition, receive a diagnosis, receive support, figure out what the next steps should be. That's very powerful. So, I'd say open access and explainability are very important.
Now, the American hyperscalers are very much trying to serve the Indian market and serve the objectives really of the Indian government. And so, there are versions of their models that are open weights, that are being made freely available for health agencies in India, as an example; to the Indian government, as an example.
So, there is an attempt to really serve a number of objectives, but the key issues are open access and explainability; that's where I do see a tension.
Michael Zezas: So, let's talk about that a little bit more. Because it seems one of the concerns raised is this idea of being captive within proprietary Large Language Models. And maybe that includes the risk of having to pay more over time or losing control of citizen data. But, at the same time, you've described that there are some real benefits to AI that these countries want to adopt.
So, what is effectively the tension here: being captive to a proprietary model versus pursuing open and free models instead? Is it that there's a major quality difference? And is that trade-off acceptable?
Stephen Byrd: See, that's what's so fascinating, Mike, is, you know, what we need to be thinking about is not just where the technology is today, but where is it in six months, 12 months, 24 months? And from my perspective, it's very clear that the proprietary American models are going to be much, much more capable.
So, let's put some numbers around that. The big five American firms have assembled about 10 times the compute to train their current LLMs compared to their prior LLMs, and that's a big deal. If the scaling laws hold, then a 10x increase in training compute should result in models that are about twice as capable.
Now just let that sink in for a minute, twice as capable from here. That's a big deal. And so, when we think about the benefit of deploying these models, whether it's in the life sciences or any number of other disciplines, those benefits could start to get very large. And the challenge for the open models will be – will they be able to keep up in terms of access to compute, to training, access to data to train those models? That's a big question.
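The arithmetic behind “10x compute, roughly 2x capability” can be sketched in a couple of lines. This is a toy illustration, not an actual scaling-law fit: the exponent is simply back-solved from the figures quoted above.

```python
import math

def relative_capability(compute_multiple: float) -> float:
    """Toy power-law scaling: if 10x training compute yields ~2x capability,
    the implied exponent is log10(2) ~= 0.301 (an assumption for illustration)."""
    alpha = math.log10(2)  # back-solved so that 10 ** alpha == 2
    return compute_multiple ** alpha

print(relative_capability(10))   # ~2.0: one 10x compute step, twice as capable
print(relative_capability(100))  # ~4.0: two 10x steps compound to 4x
```

Under that toy curve, each further 10x of training compute compounds another doubling, which is why a gap with compute-constrained open models could widen rather than close.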
Now, again, there's room for both approaches and it's very possible for the Indian government to continue to experiment and really see which approach is going to serve their citizens the best. And I was really struck by just how focused the Indian government is on serving all of their citizens. Most notably, you know, the poorest of the poor in their nation. So, we'll just have to see.
But the pure technologist would say that these proprietary models are going to be increasing in capability much faster than the open-source models.
So, Mike, let's pivot from the technology layer to the geopolitical layer because the U.S. strategy unveiled at the summit goes way beyond innovation.
Michael Zezas: Yeah, it's a good point. And within this discussion of whether or not other countries will choose to pursue open models or more closely adhere to U.S. based models is really a question about how the United States exercises power globally and how it creates alliances going forward.
Clearly some part of the strategy is that the U.S. assumes that if it has technology that's alluring to its partners, that they'll want to align with the U.S.’ broad goals globally. And that they'll want to be partners in supporting those goals, which of course are tied to AI development.
So, the Pax Silica [agreement], which you mentioned earlier, is an interesting point here because this is clearly part of the U.S. strategy to develop relationships with other countries – such that the other countries get access to U.S. models and access to U.S. AI in general. And what the U.S. gets in return is access to supply chain, critical resources, labor, all the things that you need to further the AI build out. Particularly as the U.S. is trying to disassociate more and more from China, and the resources that China might have been able to bring to bear in an AI build out.
Stephen Byrd: So, Mike, the U.S. framed “real AI sovereignty” as strategic autonomy rather than full self-sufficiency. So, essentially the U.S. is encouraging nations to integrate components of the American AI stack. Now, from your perspective, Mike, from a macro and policy standpoint, how significant is that distinction?
Michael Zezas: Well, I think it's extremely important. And clearly the U.S. views its AI strategy as not just economic strategy, but national security strategy.
There are maybe some analogs to how the U.S. has been able, over the past 80 years or so, to use its dominance in military equipment to create a security umbrella that other countries want to be under. The idea is to do something similar with AI: if there is dominant technology and others want access to it for the societal or economic benefits, then that is going to help when you're negotiating with those countries on other things that you value – whether it be trade policy, foreign policy, sanctions against another country. That type of thing.
So, in a lot of ways, it seems like the U.S. is talking about AI and developing AI as an anchor asset to its power, in a way that military power has been that anchor asset for much of the post World War II period.
Stephen Byrd: See, that's what's so interesting, Mike, [be]cause you've highlighted before to me that you believe AI could replace weaponry as really the anchor asset for U.S. global power. Almost a tech equivalent of a defense umbrella.
So how durable is that strategy, especially given that some countries are expressing unease about dependency?
Michael Zezas: Yeah, it's really hard to know, and I think the tension you and I talked about earlier, Stephen, about whether countries will be willing to make the trade off for access to superior AI models versus open and free models that might be inferior, that'll tell us if this is a viable strategy or not. And it appears like this is still playing out because, correct me if I'm wrong, it's not like we've received some very clear signals from India or other countries about their willingness to make that trade off.
Stephen Byrd: No, I think that's right. And just building on the concept of the trade-offs and, sort of, the standard for AI deployment, you know, the U.S. has explicitly rejected centralized global AI governance in favor of national control aligned with domestic values.
So, what does that signal about how global technology standards may evolve, particularly as in the U.S., the National Institute of Standards and Technology, or NIST, works to develop interoperable standards for agentic AI systems?
Michael Zezas: Yeah, Stephen, I think it's hard to know. It might be that the U.S. is okay with other countries having substantial degrees of freedom with how they use U.S.-based AI models because they could use U.S. law to, at a later date, change how those models are being used – if there's a use case that comes out of it that they find is against U.S. values. Similar in some way to how the U.S. dollar being the predominant currency and, therefore, being the predominant payment system globally, gives the U.S. degrees of freedom to impose sanctions and limit other types of economic transactions when it's in the U.S. interest.
So, I don't know that to be specifically true, but it's an interesting question to consider and a potential motivation behind why a laissez-faire approach might be, ultimately, still aligned with U.S. interests.
Stephen Byrd: So, Michael, it sounds like really AI is becoming the new strategic infrastructure globally.
Michael Zezas: Yeah, I think that's actually a great way to think about it. And so, Stephen, if that's the case – if AI has the potential to shape geopolitical competition and economic differentials across the globe, and if that's correlated, at least to some degree, with the further development and computing power of these models – what do you think investors should be looking at for signals from here?
Stephen Byrd: Number one, by a mile for me, is really the pace of model progress. Not just American models, but Chinese models, open-source models. And there the big reveal for the United States should be somewhere between April and June – for the big five LLM players. That's a bit of speculation based on tracking their chip purchases, their power access, et cetera. But that appears to be the timeframe and a couple of execs have spoken to that approximate timeframe.
I would caution investors that I think we're going to be surprised in terms of just how powerful those models are. And we're already seeing in early 2026, these models that were not trained on that kind of volume of compute have really exceeded expectations, you know, quite dramatically in some cases. And I'll give you one example.
METR is a third party that tracks the complexity of what these models can do. And METR has been highlighting that every seven months, the complexity of what these models are able to do approximately doubles. It’s very fast. But what really got my attention was, about a week ago, one of the LLMs broke that trend in a big way to the upside.
So, if that trend had simply held, METR would've expected a model to be able to act independently for a little over eight hours. And what we saw was that the best American model recently introduced was more like 15. That's a big deal. And so, I think we're seeing signs of non-linear improvement.
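The back-of-the-envelope behind that comparison is simple exponential extrapolation: a quantity that doubles every seven months grows as 2^(t/7). In the sketch below, the four-hour starting horizon is a made-up illustrative number; only the seven-month doubling rate and the eight-versus-fifteen-hour comparison come from the discussion above.

```python
def trend_horizon(start_hours: float, months_elapsed: float,
                  doubling_months: float = 7.0) -> float:
    """Extrapolate a METR-style trend: task horizon doubles every ~7 months.
    The starting value is an illustrative assumption, not METR's data."""
    return start_hours * 2 ** (months_elapsed / doubling_months)

expected = trend_horizon(4.0, 7.0)  # one doubling period: 8.0 hours on trend
observed = 15.0                     # the "more like 15 hours" cited above
print(observed / expected)          # ~1.9x above trend, nearly a full extra doubling
```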
We're also going to see additional statements from these AI execs around recursive self-improvement of the models. One AI executive spoke to that. Another LLM exec spoke to that recently as well. So, we're starting to see an acceleration. That means we then need to really consider the trade-offs between the open models and the proprietary ones. That's going to become really critical, and that should happen really through the spring and summer.
Michael Zezas: Got it. Well, Stephen, thanks for taking the time to talk.
Stephen Byrd: Great speaking with you, Mike.
Michael Zezas: And thanks for listening. If you enjoy Thoughts on the Market, please leave us a review wherever you listen. And share the podcast with a friend or colleague today.
Martijn Rats: Welcome to Thoughts on the Market. I’m Martijn Rats, Morgan Stanley’s Global Commodities Strategist.
Today – what’s fueling the latest oil market rally.
It’s Thursday, February 26th, at 3pm in London.
What happens when oil prices jump, even though there’s no actual shortage of oil? That’s the situation we’re in right now. Tensions between the U.S. and Iran have escalated again. Naturally, markets are paying attention.
Over the past week, Brent crude rose about $3 to around $72 per barrel. WTI climbed into the mid-$60s. Shipping costs surged. And traders have started paying a premium for protection against a sudden oil spike – to levels we haven’t seen since the early days of the invasion of Ukraine.
But here’s the key point: there’s no clear evidence that global oil supply has tightened. Exports are still flowing. Tankers are still moving. And some near-term indicators of physical tightness have actually softened. When oil is truly scarce, buyers scramble for immediate barrels and short-term prices spike relative to future delivery. Instead, those spreads have narrowed, and physical premiums have eased.
This isn’t a supply shock. It’s a risk premium. In simple terms, investors are buying insurance. So what could happen next? We see four broad scenarios.
Before I outline them though, here’s something we do not see as a core case: a prolonged closure of the Strait of Hormuz. Roughly 15 million barrels per day of crude and another 5 million of refined products move through that corridor. A sustained shutdown would be enormously disruptive. But we think the probability is very low.
Now coming back to our four scenarios. The first is straightforward: a negotiated settlement; conflict is avoided. Iranian exports continue and shipping lanes remain open. In that scenario, what unwinds is the geopolitical risk premium – which we estimate at roughly $7 to $9 per barrel. If that fades, Brent could drift back to the low-to-mid $60s, similar to past episodes where prices spiked on fear and then retraced once supply proved unaffected.
Second, we could see short-lived frictions – shipping delays, higher insurance costs, temporary logistical issues. That might remove a few hundred thousand barrels per day for, say, a few weeks. Prices could briefly spike into the $75–80 range. But balancing forces would kick in relatively quickly. For example, China has been building inventories at a steady pace. At higher prices, that stockbuilding would likely slow, helping offset temporary disruptions. That points to some further upside in prices – but then normalization.
The third scenario is more serious, but still contained: localized export losses of perhaps 1 to 1.5 million barrels per day for a month or two. Prices would stay elevated longer, but spare capacity and demand adjustments could eventually stabilize the market.
Now our last scenario is the most serious and considers a potential shipping shock. The real risk here isn’t wells shutting down – it’s shipping disruption. Global trade of crude oil depends on efficient tanker movement. If transit times were extended even modestly, effective shipping capacity could fall sharply, creating what amounts to a temporary tightening of about 2 to 3 million barrels per day – or about 6 percent of global seaborne supply. That is a logistics shock, not a production outage – but it would push prices toward early-2022-type levels, at least briefly.
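The logistics-shock arithmetic works like this: if voyages take x percent longer, the same tanker fleet delivers only 1/(1+x) of its baseline throughput, and the shortfall behaves like a supply cut. A rough sketch; the seaborne-supply figure below is an assumed round number chosen to be consistent with the roughly-6-percent claim above, not a published estimate.

```python
SEABORNE_SUPPLY_MBD = 42.0  # assumed global seaborne oil supply, million b/d

def effective_tightening(transit_increase_pct: float) -> float:
    """Longer voyages cut fleet throughput to 1/(1+x) of baseline;
    the lost throughput acts like a temporary supply reduction."""
    x = transit_increase_pct / 100.0
    return SEABORNE_SUPPLY_MBD * (1.0 - 1.0 / (1.0 + x))

print(effective_tightening(6.0))  # ~2.4 million b/d, inside the 2-3 mb/d range
```

The point of the sketch is that no barrels have to stop being produced: a modest percentage change in transit times alone can mimic a multi-million-barrel-per-day supply cut while it lasts.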
Now let’s zoom out. Beyond geopolitics, the fundamentals look weak. OPEC+ supply is rising, and our forecasts show a sizable surplus building in 2026. Even if some of that oil ends up in China’s stockpiles, a lot would still likely flow into core OECD inventories. Historically, when the market looks like this, prices tend to fall, not rise.
Which brings us back to the central point. Oil isn’t rallying because the world has run out of barrels. It’s rallying because markets are pricing geopolitical risk. And unless that risk turns into actual, sustained disruption, insurance premiums tend to expire.
Thank you for listening. If you enjoy the show, please leave us a review wherever you listen and share Thoughts on the Market with a friend or colleague today.