Goldman’s Head Of Research Crucifies The “AI Bubble”: Not One Transformative Application Has Been Found


Rallying markets like to shoot first (or in this case, buy first) and ask questions much later, or never - certainly not until it is far too late, everything has crashed, and the finger-pointing and crying are all that's left, at which point the only question on the market's mind is why nobody asked any questions earlier.

The latest example of this is, of course, the 3D TV, fake meat, virtual reality, metaverse, blockchain... er, chatbot (aka AI) bubble, where a handful of supergiant firms are adding hundreds of billions in market cap every single day because of some algo glitch whereby the market believes their revenue growth is virtually unlimited, because somehow other companies have trillions and trillions in capital spending power which they will - in a zero-sum circle jerk - hand over to the five biggest companies in the world, making them even bigger in the process.

Luckily, almost two years after ChatGPT 3.5 was first released, and nearly seven decades after modern AI first emerged, some are finally starting to ask questions. And in the latest Top of Mind note from Goldman (available to pro subscribers), the bank actually asks the question whose negative answer would trigger an immediate market crash: is there too much spending on AI, and too little benefit?

While the Goldman note (if not its head of equity research, who comes out about as pessimistic on the topic as is possible) does not provide a definitive answer - and why would it seek to burst a bubble that will generate billions in M&A, IPO, follow-on and debt issuance fees - it does share a handful of interviews with pundits on both sides of the aisle. More importantly, it asserts that there is virtually "nothing to show" in terms of actual, tangible results for the roughly $1 trillion that tech giants are set to spend on AI capex in the coming years. Worse, Goldman's head of global equity research, Jim Covello, is downright apocalyptic about what the current AI bubble will lead to; think dot com bubble on steroids.

Here is how Goldman's Allison Nathan, author of the biweekly Top of Mind note, frames the prevailing AI dynamic:

The promise of generative AI technology to transform companies, industries, and societies continues to be touted, leading tech giants, other companies, and utilities to spend an estimated ~$1tn on capex in coming years, including significant investments in data centers, chips, other AI infrastructure, and the power grid. But this spending has little to show for it so far beyond reports of efficiency gains among developers. And even the stock of the company reaping the most benefits to date — Nvidia — has sharply corrected. We ask industry and economy specialists whether this large spend will ever pay off in terms of AI benefits and returns, and explore the implications for economies, companies, and markets if it does, or if it doesn’t.

Among the many pundits Nathan spoke to, the most notable was Daron Acemoglu, Institute Professor at MIT, who's very skeptical. He estimates that "only a quarter of AI-exposed tasks will be cost-effective to automate within the next 10 years, implying that AI will impact less than 5% of all tasks." And he doesn't take much comfort from history showing technologies improving and becoming less costly over time, arguing that AI model advances likely won't occur nearly as quickly, or be nearly as impressive, as many believe. He also questions whether AI adoption will create new tasks and products, saying these impacts are "not a law of nature." So, he forecasts AI will increase US productivity by only 0.5% and GDP growth by only 0.9% cumulatively over the next decade. In short, not only will the hundreds of billions spent on capex end up being a huge waste of capital, but the trillions in market cap gained by the Big 7 will prove the biggest bubble in history.

It gets worse: according to Goldman's Head of Global Equity Research Jim Covello, to earn an adequate return on the ~$1tn estimated cost of developing and running AI technology, it must be able to solve complex problems, which, he correctly says, it isn't built to do. He also points out that truly life-changing inventions like the internet enabled low-cost solutions to disrupt high-cost solutions even in their infancy, unlike costly AI tech today. And he's skeptical that AI's costs will ever decline enough to make automating a large share of tasks affordable, given the high starting point as well as the complexity of building critical inputs, like GPU chips, which may prevent competition. He's also doubtful that AI will boost the valuation of companies that use the tech, as any efficiency gains would likely be competed away, and the path to actually boosting revenues is unclear. Finally, he questions whether models trained on historical data will ever be able to replicate humans' most valuable capabilities (spoiler alert: they won't).

Below we excerpt from the Acemoglu and Covello interviews, and urge all professional subs to read the full note as it may help you avoid huge investment losses in the future, by waking up to the full extent of the AI bubble before the rest of the herd.

* * *

Interview with Daron Acemoglu, Institute Professor at MIT and author of several books, including Why Nations Fail: The Origins of Power, Prosperity, and Poverty and, most recently, Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity. Below, he argues that the upside to US productivity and growth from generative AI technology over the next decade, and perhaps beyond, will likely be more limited than many expect.

Allison Nathan: In a recent paper, you argued that the upside to US productivity and, consequently, GDP growth from generative AI will likely prove much more limited than many forecasters—including Goldman Sachs—expect. Specifically, you forecast a ~0.5% increase in productivity and ~1% increase in GDP in the next 10 years vs. GS economists’ estimates of a ~9% increase in productivity and 6.1% increase in GDP. Why are you less optimistic on AI’s potential economic impacts?

Daron Acemoglu: The forecast differences seem to revolve more around the timing of AI’s economic impacts than the ultimate promise of the technology. Generative AI has the potential to fundamentally change the process of scientific discovery, research and development, innovation, new product and material testing, etc. as well as create new products and platforms. But given the focus and architecture of generative AI technology today, these truly transformative changes won’t happen quickly and few—if any—will likely occur within the next 10 years. Over this horizon, AI technology will instead primarily increase the efficiency of existing production processes by automating certain tasks or by making workers who perform these tasks more productive. So, estimating the gains in productivity and growth from AI technology on a shorter horizon depends wholly on the number of production processes that the technology will impact and the degree to which this technology increases productivity or reduces costs over this timeframe.

My prior guess, even before looking at the data, was that the number of tasks that AI will impact in the short run would not be massive. Many tasks that humans currently perform, for example in the areas of transportation, manufacturing, mining, etc., are multifaceted and require real-world interaction, which AI won’t be able to materially improve anytime soon. So, the largest impacts of the technology in the coming years will most likely revolve around pure mental tasks, which are non-trivial in number and size but not huge, either.

To quantify this, I began with Eloundou et al.'s comprehensive study, which found that the combination of generative AI, other AI technology, and computer vision could transform slightly over 20% of value-added tasks in the production process. But that's a timeless prediction. So, I then looked at another study by Thompson et al. on a subset of these technologies, computer vision, which estimates that around a quarter of the tasks this technology can perform could be cost-effectively automated within 10 years. If only 23% of exposed tasks are cost-effective to automate within the next ten years, this suggests that only 4.6% of all tasks will be impacted by AI. Combining this figure with the 27% average labor cost savings estimated by Noy and Zhang's and Brynjolfsson et al.'s studies implies that total factor productivity effects within the next decade should be no more than 0.66%, and an even lower 0.53% when adjusting for the complexity of hard-to-learn tasks. And that figure roughly translates into a 0.9% GDP impact over the decade.
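For readers who want to trace the chain of numbers, here is a minimal sketch in Python that reproduces Acemoglu's back-of-the-envelope arithmetic as he describes it above. One caveat: the interview never states how the 27% labor cost savings are converted into total cost savings, so the labor-share factor below is an assumption (consistent with the framework of his paper) chosen so that the intermediate figures match the ones he cites.

```python
# Rough reconstruction of Acemoglu's arithmetic as described in the interview.
# The labor_share conversion factor is an assumption (not stated in the
# interview); it is set so the intermediate results match his cited figures.

exposed_share  = 0.20   # Eloundou et al.: share of value-added tasks exposed to AI
cost_effective = 0.23   # Thompson et al.: fraction of exposed tasks worth automating in 10 yrs
labor_savings  = 0.27   # average of Noy & Zhang and Brynjolfsson et al. savings estimates
labor_share    = 0.53   # assumed share of task costs that are labor costs

impacted_tasks = exposed_share * cost_effective            # ~4.6% of all tasks
tfp_gain = impacted_tasks * labor_savings * labor_share    # ~0.66% TFP over a decade

# Haircut implied by his 0.66% -> 0.53% adjustment for hard-to-learn tasks
tfp_adjusted = tfp_gain * (0.53 / 0.66)                    # ~0.53%

print(f"Tasks impacted:           {impacted_tasks:.1%}")   # 4.6%
print(f"TFP effect (upper bound): {tfp_gain:.2%}")         # 0.66%
print(f"TFP effect (adjusted):    {tfp_adjusted:.2%}")     # 0.53%
```

The ~0.9% cumulative GDP impact then sits above the productivity figure because, as Acemoglu explains later in the interview, automation triggers an investment boom that makes his GDP estimates nearly twice as large as his productivity estimates.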

Allison Nathan: Recent studies estimate cost savings from the use of AI ranging from 10% to 60%, yet you assume only around 30% cost savings. Why is that?

Daron Acemoglu: Of the three detailed studies published on AI-related cost savings, I chose to exclude the one with the highest savings (Peng et al.'s estimate of 56%) because the task in that study which AI technology so markedly improved was notably simple. It seems unlikely that other, more complex tasks will be affected as much. Specifically, the study focuses on the time savings realized by using AI technology (in this case, GitHub Copilot) for programmers to write simple subroutines in HTML, a task for which GitHub Copilot had been extensively trained. My sense is that such cost savings won't translate to more complex, open-ended tasks like summarizing texts, where more than one right answer exists. So, I excluded this study from my cost-savings estimate and instead averaged the savings from the other two studies.

Allison Nathan: While AI technology cannot perform many complex tasks well today—let alone in a cost-effective manner—the historical record suggests that as technologies evolve, they both improve and become less costly. Won’t AI technology follow a similar pattern?

Daron Acemoglu: Absolutely. But I am less convinced that throwing more data and GPU capacity at AI models will achieve these improvements more quickly. Many people in the industry seem to believe in some sort of scaling law, i.e. that doubling the amount of data and compute capacity will double the capability of AI models. But I would challenge this view in several ways. What does it mean to double AI’s capabilities? For open-ended tasks like customer service or understanding and summarizing text, no clear metric exists to demonstrate that the output is twice as good. Similarly, what does a doubling of data really mean, and what can it achieve? Including twice as much data from Reddit into the next version of GPT may improve its ability to predict the next word when engaging in an informal conversation, but it won't necessarily improve a customer service representative’s ability to help a customer troubleshoot problems with their video service. The quality of the data also matters, and it’s not clear where more high-quality data will come from and whether it will be easily and cheaply available to AI models. Lastly, the current architecture of AI technology itself may have limitations. Human cognition involves many types of cognitive processes, sensory inputs, and reasoning capabilities. Large language models (LLMs) today have proven more impressive than many people would have predicted, but a big leap of faith is still required to believe that the architecture of predicting the next word in a sentence will achieve capabilities as smart as HAL 9000 in 2001: A Space Odyssey. It’s all but certain that current AI models won’t achieve anything close to such a feat within the next ten years.

Allison Nathan: So, are the risks to even your relatively conservative estimates of AI’s economic impacts over the next 5-10 years skewed to the downside?

Daron Acemoglu: Both downside and upside risks exist. Technological breakthroughs are always possible, although even such breakthroughs take time to have real impact. But even my more conservative estimates of productivity gains may turn out to be too large if AI models prove less successful in improving upon more complex tasks. And while large organizations such as the tech companies leading the development of AI technology may introduce AI-driven tools quickly, smaller organizations may be slower to adopt them.

Allison Nathan: Over the longer term, what odds do you place on AI technology achieving superintelligence?

Daron Acemoglu: I question whether AI technology can achieve superintelligence over even longer horizons because, as I said, it is very difficult to imagine that an LLM will have the same cognitive capabilities as humans to pose questions, develop solutions, then test those solutions and adapt them to new circumstances. I am entirely open to the possibility that AI tools could revolutionize scientific processes on, say, a 20-30-year horizon, but with humans still in the driver's seat. So, for example, humans may be able to identify a problem that AI could help solve, then humans could test the solutions the AI models provide and make iterative changes as circumstances shift. A truly superintelligent AI model would be able to achieve all of that without human involvement, and I don't find that likely on even a thirty-year horizon, and probably beyond.

Allison Nathan: Your colleague David Autor and coauthors have shown that technological innovations tend to drive the creation of new occupations, with 60% of workers today employed in occupations that didn’t exist 80 years ago. So, could the impact of AI technology over the longer term prove more significant than you expect?

Daron Acemoglu: Technological innovation has undoubtedly meaningfully impacted nearly every facet of our lives. But that impact is not a law of nature. It depends on the types of technologies that we invent and how we use them. So, again, my hope is that we use AI technology to create new tasks, products, business occupations, and competencies. In my example about how AI tools may revolutionize scientific discovery, AI models would be trained to help scientists conceive of and test new materials so that humans can then be trained to become more specialized and provide better inputs into the AI models. Such an evolution would ultimately lead to much better possibilities for human discovery. But it is by no means guaranteed.

Allison Nathan: Will some—or maybe even most—of the substantial spending on AI technology today ultimately go to waste?

Daron Acemoglu: That is an interesting question. Basic economic analysis suggests that an investment boom should occur because AI technology today is primarily used for automation, which means that algorithms and capital are substituting for human labor, which should lead to investment. This explains why my estimates for GDP increases are nearly twice as large as my estimates for productivity increases. But then reality supervenes and says that some of the spending will end up wasted because some projects will fail, and some firms will be too optimistic about the extent of the efficiency gains and cost savings they can achieve or their ability to integrate AI into their organizations. On the other hand, some of the spending will plant the seeds for the next, and more promising, phase of the technology. The devil is ultimately in the details. So, I don't have a strong prior as to how much of the current investment boom will be wasted vs. productive. But I expect both will happen.

Allison Nathan: Are other costs of AI technology not receiving enough attention?

Daron Acemoglu: Yes. GDP is not everything. Technology that has the potential to provide good information can also provide bad information and be misused for nefarious purposes. I am not overly concerned about deepfakes at this point, but they are the tip of the iceberg in terms of how bad actors could misuse generative AI. And a trillion dollars of investment in deepfakes would add a trillion dollars to GDP, but I don't think most people would be happy about that or benefit from it.

Allison Nathan: Given everything we’ve discussed, is the current enthusiasm around AI technology overdone?

Daron Acemoglu: Every human invention should be celebrated, and generative AI is a true human invention. But too much optimism and hype may lead to the premature use of technologies that are not yet ready for prime time. This risk seems particularly high today for using AI to advance automation. Too much automation too soon could create bottlenecks and other problems for firms that no longer have the flexibility and trouble-shooting capabilities that human capital provides.

And, as I mentioned, using technology that is so pervasive and powerful (providing information and visual or written feedback to humans in ways that we don't yet fully understand and don't at all regulate) could prove dangerous. Although I don't believe superintelligence and evil AI pose major threats, I often think about how the current risks might be perceived looking back 50 years from now. The risk that our children or grandchildren in 2074 accuse us of moving too slowly in 2024 at the expense of growth seems far lower than the risk that we end up moving too quickly and destroying institutions, democracy, and beyond in the process. So, the costs of the mistakes we risk making are heavily asymmetric to the downside. That's why it's important to resist the hype and take a somewhat cautious approach, which may include better regulatory tools, as AI technologies continue to evolve.

* * *

Interview with Jim Covello, Head of Global Equity Research at Goldman Sachs. He argues that to earn an adequate return on costly AI technology, AI must solve very complex problems, which it currently isn’t capable of doing, and may never be.

Allison Nathan: You haven’t bought into the current generative AI enthusiasm nearly as much as many others. Why is that?

Jim Covello: My main concern is that the substantial cost to develop and run AI technology means that AI applications must solve extremely complex and important problems for enterprises to earn an appropriate return on investment (ROI). We estimate that the AI infrastructure buildout will cost over $1tn in the next several years alone, which includes spending on data centers, utilities, and applications. So, the crucial question is: What $1tn problem will AI solve? Replacing low-wage jobs with tremendously costly technology is basically the polar opposite of the prior technology transitions I’ve witnessed in my thirty years of closely following the tech industry.

Many people attempt to compare AI today to the early days of the internet. But even in its infancy, the internet was a low-cost technology solution that enabled e-commerce to replace costly incumbent solutions. Amazon could sell books at a lower cost than Barnes & Noble because it didn’t have to maintain costly brick-and-mortar locations. Fast forward three decades, and Web 2.0 is still providing cheaper solutions that are disrupting more expensive solutions, such as Uber displacing limousine services. While the question of whether AI technology will ever deliver on the promise many people are excited about today is certainly debatable, the less debatable point is that AI technology is exceptionally expensive, and to justify those costs, the technology must be able to solve complex problems, which it isn’t designed to do.

Allison Nathan: Even if AI technology is expensive today, isn’t it often the case that technology costs decline dramatically as the technology evolves?

Jim Covello: The idea that technology typically starts out expensive before becoming cheaper is revisionist history. E-commerce, as we just discussed, was cheaper from day one, not ten years down the road. But even beyond that misconception, the tech world is too complacent in its assumption that AI costs will decline substantially over time. Moore's Law in chips, which enabled the smaller-faster-cheaper paradigm driving the history of technological innovation, only proved true because competitors to Intel, like Advanced Micro Devices, forced Intel and others to reduce costs and innovate over time to remain competitive.

Today, Nvidia is the only company currently capable of producing the GPUs that power AI. Some people believe that competitors to Nvidia from within the semiconductor industry or from the hyperscalers (Google, Amazon, and Microsoft) themselves will emerge, which is possible. But that's a big leap from where we are today given that chip companies have tried and failed to dethrone Nvidia from its dominant GPU position for the last 10 years. Technology can be so difficult to replicate that no competitors are able to do so, allowing companies to maintain their monopoly and pricing power. For example, Advanced Semiconductor Materials Lithography (ASML) remains the only company in the world able to produce leading-edge lithography tools and, as a result, the cost of its machines has increased from tens of millions of dollars twenty years ago to, in some cases, hundreds of millions of dollars today. Nvidia may not follow that pattern, and the scale in dollars is different, but the market is too complacent about the certainty of cost declines.

The starting point for costs is also so high that even if costs decline, they would have to do so dramatically to make automating tasks with AI affordable. People point to the enormous decline in server costs within a few years of their inception in the late 1990s, but the number of $64,000 Sun Microsystems servers required to power the internet transition then pales in comparison to the number of expensive chips required to power the AI transition today, and that is before even counting the replacement of the power grid and the other supporting costs, which are enormously expensive on their own.

Allison Nathan: Are you just concerned about the cost of AI technology, or are you also skeptical about its ultimate transformative potential?

Jim Covello: I’m skeptical about both. Many people seem to believe that AI will be the most important technological invention of their lifetime, but I don’t agree given the extent to which the internet, cell phones, and laptops have fundamentally transformed our daily lives, enabling us to do things never before possible, like make calls, compute and shop from anywhere. Currently, AI has shown the most promise in making existing processes—like coding—more efficient, although estimates of even these efficiency improvements have declined, and the cost of utilizing the technology to solve tasks is much higher than existing methods. For example, we’ve found that AI can update historical data in our company models more quickly than doing so manually, but at six times the cost.

More broadly, people generally substantially overestimate what the technology is capable of today. In our experience, even basic summarization tasks often yield illegible and nonsensical results. This is not a matter of just some tweaks being required here and there; despite its expensive price tag, the technology is nowhere near where it needs to be in order to be useful for even such basic tasks. And I struggle to believe that the technology will ever achieve the cognitive reasoning required to substantially augment or replace human interactions. Humans add the most value to complex tasks by identifying and understanding outliers and nuance in a way that is difficult to imagine a model trained on historical data could ever replicate.

Allison Nathan: But wasn’t the transformative potential of the technologies you mentioned difficult to predict early on? So, why are you confident that AI won't eventually prove to be just as—or even more—transformative?

Jim Covello: The idea that the transformative potential of the internet and smartphones wasn’t understood early on is false. I was a semiconductor analyst when smartphones were first introduced and sat through literally hundreds of presentations in the early 2000s about the future of the smartphone and its functionality, with much of it playing out just as the industry had expected. One example was the integration of GPS into smartphones, which wasn’t yet ready for prime time but was predicted to replace the clunky GPS systems commonly found in rental cars at the time. The roadmap on what other technologies would eventually be able to do also existed at their inception. No comparable roadmap exists today. AI bulls seem to just trust that use cases will proliferate as the technology evolves. But eighteen months after the introduction of generative AI to the world, not one truly transformative—let alone cost-effective—application has been found.

Allison Nathan: Even if the benefits and the returns never justify the costs, do companies have any other choice but to pursue AI strategies given the competitive pressures?

Jim Covello: The big tech companies have no choice but to engage in the AI arms race right now given the hype around the space and FOMO, so the massive spend on the AI buildout will continue. This is not the first time a tech hype cycle has resulted in spending on technologies that don't pan out in the end; virtual reality, the metaverse, and blockchain are prime examples of technologies that saw substantial spend but have few, if any, real-world applications today. And companies outside of the tech sector also face intense investor pressure to pursue AI strategies even though these strategies have yet to yield results. Some investors have accepted that it may take time for these strategies to pay off, but others aren't buying that argument. Case in point: Salesforce, where AI spend is substantial, recently suffered the biggest daily decline in its stock price since the mid-2000s after its Q2 results showed little revenue boost despite this spend.

Allison Nathan: What odds do you place on AI technology ultimately enhancing the revenues of non-tech companies? And even without revenue expansion, could cost savings still pave a path toward multiple expansion?

Jim Covello: I place low odds on AI-related revenue expansion because I don't think the technology is, or will likely be, smart enough to make employees smarter. Even one of the most plausible use cases of AI, improving search functionality, is much more likely to enable employees to find information faster than to enable them to find better information. And if AI's benefits remain largely limited to efficiency improvements, that probably won't lead to multiple expansion because cost savings just get arbitraged away. If a company can use a robot to improve efficiency, so can the company's competitors. So, a company won't be able to charge more or increase margins.

Allison Nathan: What does all of this mean for AI investors over the near term, especially since the “picks and shovels” companies most exposed to the AI infrastructure buildout have already run up so far?

Jim Covello: Since the substantial spend on AI infrastructure will continue despite my skepticism, investors should remain invested in the beneficiaries of this spend, in rank order: Nvidia, utilities and other companies exposed to the coming buildout of the power grid to support AI technology, and the hyperscalers, which are spending substantial money themselves but will also garner incremental revenue from the AI buildout. These companies have indeed already run up substantially, but history suggests that an expensive valuation alone won’t stop a company’s stock price from rising further if the fundamentals that made the company expensive in the first place remain intact. I’ve never seen a stock decline only because it’s expensive—a deterioration in fundamentals is almost always the culprit, and only then does valuation come into play.

Allison Nathan: If your skepticism ultimately proves correct, AI’s fundamental story would fall apart. What would that look like?

Jim Covello: Over-building things the world doesn’t have use for, or is not ready for, typically ends badly. The NASDAQ declined around 70% between the highs of the dot-com boom and the founding of Uber. The bursting of today’s AI bubble may not prove as problematic as the bursting of the dot-com bubble simply because many companies spending money today are better capitalized than the companies spending money back then. But if AI technology ends up having fewer use cases and lower adoption than consensus currently expects, it’s hard to imagine that won’t be problematic for many companies spending on the technology today.

That said, one of the most important lessons I've learned over the past three decades is that bubbles can take a long time to burst. That's why I recommend remaining invested in AI infrastructure providers. If my skeptical view proves incorrect, these companies will continue to benefit. But even if I'm right, at least they will have generated substantial revenue from the theme that may better position them to adapt and evolve.

Allison Nathan: So, what should investors watch for signs that a burst may be approaching?

Jim Covello: How long investors will remain satisfied with the mantra that "if you build it, they will come" remains an open question. The more time that passes without significant AI applications, the more challenging the AI story will become. And my guess is that if important use cases don't start to become more apparent in the next 12-18 months, investor enthusiasm may begin to fade. But the more important area to watch is corporate profitability. Sustained corporate profitability will allow sustained experimentation with negative ROI projects. As long as corporate profits remain robust, these experiments will keep running. So, I don't expect companies to scale back spending on AI infrastructure and strategies until we enter a tougher part of the economic cycle, which we don't expect anytime soon. That said, spending on these experiments will likely be one of the first things to go if and when corporate profitability starts to decline.

Much more in the full note available to pro subscribers.
