The Next Frontier in AI: Computers That Think Like Humans

Apr 25, 2023

Cognitive computing creates potential investment opportunities, as companies develop the technology and use it to transform their business.

Key Takeaways

  • Cognitive computing harnesses reams of data to simulate the human thought process and address complex problems.
  • Neuromorphic computing could power robots, self-driving cars and smart devices while using significantly less processing power than existing technology.
  • Companies that create the technology could outperform, but there may be more upside in companies that use it to fuel efficiency and growth.

Computers that can “think” like humans and solve complex problems may be the next frontier in a new era of artificial intelligence (AI). They may also represent a significant investment opportunity—both in the companies developing these solutions and the companies using them to transform their industries.


Cognitive computing builds on the existing ability of generative AI to answer questions and create text, audio and images based on user prompts. “While cognitive computing is still primitive, its potential to develop real-time insights from torrents of data that are currently meaningless to humans and machines alike will drive competition and economic value,” says Shawn Kim, head of Morgan Stanley Research’s Asia technology team.


Understanding the Innovation

Two major classes of cognitive computing systems are expected to emerge. The first can perform tasks independently without human intervention, such as the operation of autonomous vehicles, personal assistants and drones. The second augments human capabilities—for example, collaborating with a physician to diagnose diseases or even perform surgery.


This new evolutionary AI hinges largely on the emergence of “neuromorphic” computing, a chip-based technology that uses artificial neurons to mimic the functions and characteristics of the human brain and drive improvements in costs, efficiency and processing.


"It’s critically important that investors and companies begin to understand these developments, because they will shape business models for decades to come," says Kim. "Investing in such technology will be crucial to the long-term prospects of many firms."


Speaking Human

The world’s largest tech companies have already made artificial intelligence central to their applications through deep learning methods that use neural networks to artificially replicate the structure and functionality of the brain. The systems are adept at pattern recognition, natural language processing, complex communication, learning and other—once exclusively human—activities. They can cover a broad range of applications, from cognitive consumer devices, such as smartphones, to robotics and infrastructure or utilities.


Cognitive computing would bring with it fundamental differences in how systems are built and how they interact with humans. Cognitive systems go beyond tabulating and calculating based on pre-configured rules and programs. Instead, they build knowledge and learn, understand natural language, and reason and interact more naturally with humans, potentially even "speaking" like humans.


"Humans no longer do the directing. The more data a cognitive system can access, the more accurate it becomes—just like a growing child," says Kim.


Chips That Think…and Feel?

Neuromorphic computing is leading the advances in cognitive systems, promising to bring exponential improvements in computing performance.


In a major departure from existing technology, which requires significant processing power, neuromorphic chips can perform learning directly on the chip itself. This could reduce operating costs, deliver faster computing speeds and response times, and enable multiple tasks to run simultaneously.


Researchers have already made significant progress creating neuromorphic chips that store and retrieve large amounts of information simultaneously.


“Ultimately, neuromorphic chips could power a range of artificial intelligence applications, because of their capacity to sense, learn, infer and make real-time decisions, without explicit instructions in code or millions of prior examples to learn from,” says Kim.


Because they are small and power-efficient, neuromorphic chips could eventually be used in robots, self-driving vehicles and smart devices, acting as the eyes, ears and noses of conventional and cognitive computers. They could interact with the environment in real time and spot patterns that are out of the ordinary, such as reacting to a reckless driver.


The chips may also find their way into smartphones, wearables and other handheld devices. "These innovations may go unnoticed today before becoming mandatory and ultimately dominant tomorrow," says Kim.


Opportunities and Caution Ahead

Current tech leaders with mature platforms are best positioned to benefit in the era of cognitive computing, given the large capital requirements and computing intensity needed to build and maintain the complex models, as well as the importance of scaled, unique data sets. Key data-center providers could also see upside as demand for computing power rises.


However, says Kim, “We see more compelling opportunities among adopters of cognitive computing, especially to increase productivity and fuel growth.” Data scientists and machine learning experts, for example, would be freed from training AI systems at every step and could instead redirect their time.


As the technology continues to develop, investors and adopters will need to be alert to the ethical implications of its use. In a recent open letter organized by the Future of Life Institute, a group of leaders from AI companies, the broader tech industry and academic institutions, among others, called for a pause of at least six months in the training of advanced AI systems to allow a better understanding of the risks these systems may pose to society and humanity. Such risks include data security and privacy, impact on employment, ingrained bias and safety concerns. It will be important to weigh the potential benefits of this technology against such risks.
