The much-discussed "AI bubble theory" has reignited debate over Nvidia (NVDA.US) and what underpins its market capitalization.
Although Nvidia recently delivered a strong report for the third quarter of fiscal year 2026 (ended October 26, 2025), it has been accused of accounting fraud, circular financing, and fueling an AI bubble by well-known investor and "Big Short" figure Michael Burry.
A reporter from China Business Journal noted that with the release of Google's latest high-performance model, Gemini 3, the market sentiment has shifted again. This is because the model is trained using Google's self-developed TPU, and coupled with the news that Meta plans to use TPU chips, the stock price of Google's parent company, Alphabet (GOOGL.US), rose 1.6% against the market trend on November 25th (Eastern Time), marking its third consecutive trading day of record highs, with its market capitalization approaching the $4 trillion mark. Meanwhile, Nvidia fell by more than 6%, hitting a new low in more than two months.
"Gemini 3's performance this time is remarkable, and it has indeed changed the market's perception significantly." Xu Tao (pseudonym), a senior R&D executive at a large domestic model company, told reporters that the combination of Gemini 3 and TPU has had some impact on Nvidia's competitive advantage.
Following a sharp drop in its stock price, Nvidia made a rare public statement on social media, saying, "We are delighted with Google's success—they have made tremendous progress in AI, and we will continue to supply products to Google. Nvidia is currently a generation ahead of the industry and is the only platform that can run all AI models and is universal across a wide range of computing scenarios." Furthermore, according to foreign media reports, Nvidia had previously secretly distributed a seven-page memo to Wall Street analysts refuting accusations from the "big short sellers."
Amidst the escalating AI bubble theory, what underpins Nvidia's high market capitalization? What are its competitive advantages? Zhang Fa'en, a senior engineer at the Beijing Municipal Commission of Science, Technology and Industry for National Defence, told reporters: "On the surface, what supports Nvidia's market value is the supply shortage of GPU hardware, but at its core it is the software-and-hardware ecosystem barrier built by CUDA. In practice, the biggest pain point for engineers is often not slightly inferior hardware performance, but the high cost of migrating the software stack."
The computing-storage-networking loop is still missing one link.
Looking at the key financial data, Nvidia's performance in the third quarter of fiscal year 2026 was impressive. The financial report shows that Nvidia's revenue for the quarter was $57.006 billion, a year-over-year increase of 62% and a quarter-over-quarter increase of 22%; calculated according to US Generally Accepted Accounting Principles (GAAP), net profit was $31.91 billion, a year-over-year increase of 65% and a quarter-over-quarter increase of 21%.
As an "AI shovel seller," Nvidia's booming growth mainly relies on data centers. The business segment reported a record quarterly revenue of $51.2 billion, representing a 25% increase quarter-over-quarter and a 66% increase year-over-year, according to the financial report.
In the earnings announcement, Nvidia CEO Jensen Huang stated that the company's latest generation of Blackwell architecture chips "sold far beyond expectations, and cloud GPUs are sold out." He added, "The computing demands for training and inference continue to accelerate, both growing exponentially. We have entered a virtuous cycle in AI."
Notably, the data center networking business performed exceptionally well, growing 162% year-over-year to $8.2 billion, driven primarily by strong demand for NVLink, Spectrum-X, and InfiniBand solutions. Major clients such as Meta, Microsoft, and Oracle contributed significant incremental growth in this area.
“More and more data centers are using NVIDIA’s converged computing and networking solutions,” said Su Lianjie, chief AI analyst at Omdia, an industry research firm. “Although NVIDIA’s network solutions are relatively expensive, their performance is at the forefront of the industry.”
It is understood that NVIDIA's networking business encompasses three main technologies: NVLink, InfiniBand, and Ethernet, each with different technical characteristics, application scenarios, and advantages. NVLink is NVIDIA's proprietary interconnect technology designed to enable high-speed direct connections between GPUs, primarily used in large-scale GPU clusters, HPC, artificial intelligence, and other fields.
InfiniBand is geared towards AI factories and is widely used in HPC clusters and large-scale data centers. Ethernet, on the other hand, is geared towards AI Cloud applications. It is widely used in enterprise networks and general data centers, providing broad compatibility and low-cost network connectivity.
According to Su Lianjie, Nvidia's data center business is still missing the storage component: "Once computing, storage, and networking are all in place, the loop will be complete." Explaining the importance of storage to the current AI industry, he said that given the explosive growth of data, AI manufacturers want to bring computation as close to the data as possible to reduce transfer costs, so storage must be designed in concert with the computing and networking modules to achieve a high degree of synergy and optimization.
"Nvidia currently doesn't have a storage solution, but this is a general industry characteristic; companies that focus on computing typically find it difficult to succeed in the storage business," Su Lianjie said. He added that Intel previously tried to build a storage business but failed: storage vendors iterate their technology at a different pace than computing vendors, and their corporate cultures also differ.
Su Lianjie believes that the question of whether Nvidia will invest in storage technology stems primarily from AI's demand for full-stack optimization across computing, storage, and networking. "To do AI well, you need high-performance storage that provides massive data throughput, high bandwidth, highly concurrent reads and writes, high IOPS, scalability and elasticity, data consistency, and so on."
He speculated that Nvidia would not manufacture storage itself in the future and might acquire related companies. "Nvidia is currently quite close to several major storage manufacturers, such as VAST Data and DDN. Whether this relationship will lead to Nvidia's acquisition in the future remains to be seen."
Top-tier models don't need Nvidia.
In response to recent online discussions about an AI bubble, and to well-known investors reducing their holdings or even exiting the market, Jensen Huang insisted on the earnings call that he did not see an AI bubble, and acknowledged that the company is in a no-win situation: good performance is accused of fueling the AI bubble, while poor performance is taken as evidence of the bubble bursting.
Nvidia CFO Colette Kress also refuted the claim that Nvidia chips have a short lifespan, stating that chips from six years ago are still working at full capacity.
In response to Burry's series of accusations, according to the aforementioned memo, Nvidia stated that the company has absolutely no connection with the scandal of manipulating financial data, does not rely on supplier financing, and has no special purpose entities. The so-called "$610 billion revolving financing" accusation is baseless, and the company's total strategic investments this year amounted to only $4.7 billion, which is only a small part of its revenue of hundreds of billions of dollars.
Nvidia also responded to Burry's previous targeted accusations regarding stock buybacks and insider trading, stating that Burry had cited incorrect data and information.
On the evening of November 25th, during the conference call following Alibaba's earnings release, CEO Wu Yongming stated that an AI bubble is unlikely for at least three years, and that the pace at which Alibaba Cloud deploys AI servers and other products is seriously lagging behind customer demand.
"Compared with ASIC (application-specific integrated circuit) chips designed for a particular AI framework or function, NVIDIA offers higher performance, greater versatility, and greater fungibility," NVIDIA stated in a social media post.
Zhang Fa'en further pointed out that Nvidia's strength lies in having the world's best algorithm engineers "build" on its foundation, turning computing power into a plug-and-play infrastructure like water and electricity. "Its moat is that it forces all competitors to run that 'last mile' of software adaptation, which is time-consuming and fatal for model manufacturers who are racing against time," he said.
Reporters noted that when this wave of AI first broke out, the tech industry was scrambling for Nvidia chips. Although leading model players were talking about finding secondary suppliers and "de-Nvidia-izing," the results showed that it was mostly just talk, because Nvidia's previously record-breaking market value was more convincing.
However, this time it's a little different. On November 18th, Google dropped a bombshell—Gemini 3, which, in evaluations, comprehensively surpassed OpenAI's GPT-5.1. On the 21st, Google also launched the image generation tool Nano Banana Pro.
With the help of Gemini 3, Google's "comeback" narrative intensified, and the company's market value approached the $4 trillion mark. Subsequently, an internal memo from OpenAI CEO Sam Altman further attested, from another perspective, to Google's powerful AI capabilities.
It is understood that Gemini 3 was trained entirely on Google's self-developed TPUs, at a training cost possibly 30% or more below that of using Nvidia GPUs. The Information reports that Meta is in talks with Google to use Google's self-developed AI chips in its data centers in 2027, a TPU deal that could be worth billions of dollars.
Some argue that the high performance of Google's Gemini 3 breaks the convention that "top-tier models must use Nvidia," which is encouraging for domestic computing-power manufacturers. The question, then, is under what circumstances domestic models can use domestic compute cards to replace or even surpass Nvidia's.
In response, Zhang Fa'en said: "I agree that Google has brought top-notch models. In fact, the successful training of Gemini 2.5 Pro based on TPU has already broken the mold of 'top-notch models must use Nvidia.' The current Gemini 3 Pro and Nano Banana Pro have created a breakthrough effect, further strengthening the ability of self-developed dedicated architectures (such as TPU) to compete and reach the top."
"As for domestically produced cards, there is still a long way to go before single-card performance catches up with Nvidia's. However, we have many opportunities." He said that, for example, "system-level replacement" can be achieved in model training for specific vertical fields or in large-scale inference scenarios: "By optimizing the software stack and operators to the extreme for a specific business, and through cluster interconnect efficiency and extreme cost-effectiveness, we can completely replace or even surpass them."
Xu Tao stated that his company has already migrated some, and even most, of its computing power to domestically produced computing resources. "Recently, we adapted some domestically produced GPUs, and they performed better than expected. Although we encountered some problems during use, the response speed from all parties was quite fast, so we are still very confident," he said.
(Article source: China Business Network)