① Ilya Sutskever, co-founder of OpenAI, believes the AI industry should move away from simple "scaling" and return to a "research era" focused on underlying innovation; ② Sutskever points out that data is finite and organizations already have abundant computing resources, so simple quantitative increases may no longer bring qualitative changes.
As tech giants continue to invest heavily in ever-larger AI buildouts, warnings of an AI bubble are growing louder. Ilya Sutskever, co-founder and former chief scientist of OpenAI and a key creator of ChatGPT, now believes it is time to press the "pause button."
In an interview aired on Tuesday, Sutskever, widely regarded as a pioneer of modern AI, challenged the conventional wisdom that scaling up models is the key roadmap to further advances. In his view, the AI industry will have to return to a research phase.
Recently, led by the tech giants, investment in the AI industry has become almost obsessive: frantically stacking GPUs and building new data centers. They are attempting to reach the holy grail of AGI (Artificial General Intelligence) by scaling up their models and thereby fundamentally improving their AI tools, whether LLMs (large language models) or image-generation models.
The common belief seems to be that the more computing power or training data you have, the smarter your AI tools will be. Sutskever, however, poured cold water on this idea, saying the AI industry should move away from simple "scaling" and return to a "research era" that focuses on underlying innovation.
In May of last year, Sutskever left OpenAI. The following month, he joined forces with Daniel Gross, Apple's former AI lead, and Daniel Levy, a former OpenAI researcher, to found Safe Superintelligence Inc. (SSI), with the aim of developing artificial intelligence that is smarter than humans but not dangerous. Thanks to the "halo effect" of its three founders, SSI has attracted industry attention since its inception.
In the aforementioned interview, Sutskever said that this scaling "recipe" has produced impactful results over the past five years. The approach also appeals to companies because it offers a simple and "very low-risk" way to invest resources, rather than pouring money into research that may yield no results.
However, Sutskever now believes this approach is outdated. He explains that data is finite and that organizations already possess substantial computing resources.
He emphasized that this path is becoming crowded and inefficient. "The benefits of pre-training data will eventually run out; data is finite. When you scale up 100 times, simple quantitative changes may no longer bring about qualitative changes."
"So we've gone back to the research era, only now we have mainframe computers," he added.
That said, Sutskever did not deny the need for compute. He pointed out that computing power remains essential for research and that, in an industry where major organizations adopt the same paradigm, compute can be one of the "important differentiating factors."
Still, he emphasized that research is crucial for finding effective and efficient ways to put all that available computing power to use.
According to Sutskever, one area that needs more research is enabling models to generalize like humans, essentially learning from limited information or examples.
“I think the most fundamental problem is that these models generalize much worse than humans in some ways. That’s very obvious. It seems like a very basic thing,” he added.

(Article source: CLS)