Jim Covello, Goldman Sachs’ head of global equity research, meanwhile said that he is skeptical about both the cost of generative AI and its ultimate transformative potential.
AI technology is exceptionally expensive, and to justify those costs, the technology must be able to solve complex problems, which it isn’t designed to do, he said. “People generally substantially overestimate what the technology is capable of today. In our experience, even basic summarization tasks often yield illegible and nonsensical results. This is not a matter of just some tweaks being required here and there; despite its expensive price tag, the technology is nowhere near where it needs to be in order to be useful for even such basic tasks.” He added that Goldman Sachs has tested AI to “update historical data in our company models more quickly than doing so manually, but at six times the cost.”
Covello then likens the AI arms race to virtual reality, the metaverse, and blockchain, which are “examples of technologies that saw substantial spend but have few—if any—real world applications today.”
With few exceptions, I haven’t kept up with the gen AI phenomenon on this blog over the past two years. These recent reports from Sequoia Capital and Goldman Sachs reflect my own views quite well, and raise additional questions about the ability of the companies investing in chips and large language models to ever recoup their massive capital expenditures. The Sequoia piece in particular makes an interesting point I hadn’t thought of before: because of rapid technological advancements, GPU infrastructure is depreciating much faster than physical infrastructure, so companies will need to constantly update their chips to keep pace – which in turn reduces the revenues an individual unit of GPU CapEx can generate.
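To make the depreciation squeeze concrete, here is a rough back-of-the-envelope sketch in Python. All figures in it (the CapEx amount, the target margin, the useful-life scenarios) are hypothetical illustrations, not numbers from the Sequoia or Goldman Sachs reports.

```python
# Rough sketch: how faster GPU obsolescence raises the revenue bar per unit
# of CapEx. All figures below are hypothetical, for illustration only.

def revenue_needed_per_year(capex: float, useful_life_years: float,
                            target_margin: float = 0.2) -> float:
    """Annual revenue required to cover straight-line depreciation of the
    hardware while still leaving the target operating margin (other costs
    such as power and staff are ignored for simplicity)."""
    annual_depreciation = capex / useful_life_years
    return annual_depreciation / (1.0 - target_margin)

capex = 1_000_000  # hypothetical spend on a GPU cluster, in dollars

# Physical infrastructure might depreciate over ~6 years; rapidly obsoleting
# GPUs over far less.
for life in (6.0, 3.0, 1.5):
    needed = revenue_needed_per_year(capex, life)
    print(f"useful life {life:>4.1f} years -> ${needed:>10,.0f} revenue per year")
```

Halving the useful life doubles the annual revenue each dollar of GPU spend has to earn just to cover its own depreciation – which is exactly the squeeze described above.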
I’ve been keeping track of other relevant points that mostly speak against the hype and inflated expectations:
- on the business side, The Information estimated that OpenAI could lose as much as $5 billion this year and may run out of cash within 12 months. Needless to say, its income is nowhere near sufficient to sustain this, and the company will likely need fresh investment to keep running. Will Microsoft be willing to keep funding OpenAI indefinitely, having already absorbed one startup and taken stakes in several others?
- a survey from Upwork reveals that employees are far from convinced that this technology could improve their productivity. Quite the contrary: “77% say these tools have actually decreased their productivity and added to their workload”.
- finally, a study in Nature finds that AI models collapse when trained on recursively generated data – an expected outcome given the data processing inequality in information theory, as the toy simulation after this list illustrates. That means that any plans the companies might have to use so-called ‘synthetic data’ to refine models are doomed to failure.
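To see why recursive training leads to collapse, here is a toy simulation – my own sketch, not the code from the Nature study. Each ‘model’ is simply a Gaussian fitted to a small sample drawn from the previous generation’s model; since every generation only ever sees the previous generation’s output, information about the original distribution can only be lost along the chain.

```python
# Toy illustration of model collapse: each "model" is a Gaussian fitted only
# to samples generated by the previous generation's model. Information about
# the original distribution can only be lost, never recovered, so the fitted
# variance tends to drift toward zero and the tails disappear.
# (My own sketch; not the code from the Nature study.)
import numpy as np

rng = np.random.default_rng(42)

mu, sigma = 0.0, 1.0   # generation 0: the real data distribution
n = 20                 # deliberately small sample per generation

for generation in range(1, 201):
    synthetic = rng.normal(mu, sigma, n)           # data produced by the previous model
    mu, sigma = synthetic.mean(), synthetic.std()  # next model fitted only to that data
    if generation % 25 == 0:
        print(f"generation {generation:3d}: sigma = {sigma:.4f}")
```

In runs like this the fitted standard deviation tends to shrink steadily, i.e. the distribution’s tails vanish – a miniature version of the collapse the Nature authors describe, and the reason why feeding models their own synthetic output cannot replace fresh human-generated data.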
I expect that, besides questionable profitability, the issues around access to training data will deal a decisive blow to LLMs. Internal corporate data is often disparate and messy, not the clean structured data that would be ideal for AI training. Public data is inherently limited, and becoming increasingly polluted by AI outputs since the launch of ChatGPT and the like. Copyrighted data is progressively harder to access, as some newspapers and artists are suing AI companies to protect their work. While other news companies have opted to license their articles to LLM providers, this only adds to the latter’s costs without providing significant advantages.
These assessments seem to have reached investors as well: tech stocks have declined sharply in recent weeks, from Nvidia to Intel – although Intel has its own specific issues. Like other bubbles before it, the generative AI bubble looks primed to burst.