
Nvidia Q2 2023 Preview

Nvidia Q2 23 preview 001.jpeg

22-Aug-2023. Nvidia reports results tomorrow after the market close. Since the October 2022 low, Nvidia's price has already increased by over 320%. So, isn't the bar set too high for another positive surprise? Previously, after the publication of the Q1 2023 results, Nvidia's share price jumped 24% the next day (see chart); relative to the S&P500, the performance after 17 trading days was +37% (second chart). After the publication of the Q4 2022 results, the share price rose 14% the next day. And how did the other 6 largest technology companies fare after publishing their Q2 2023 results? The best was Alphabet, +9.6% against the S&P500; the worst Tesla, -16% vs the S&P500 (see next chart).

Nvidia Q2 23 preview 002.jpeg
Nvidia Q2 23 preview 003.jpeg
Nvidia Q2 23 preview 004.jpeg

Nvidia Q2 2023 Review

Nvidia Q2 2023 1.jpeg

24-Aug-2023. Nvidia - Q2 results. Revenue skyrocketed, growing +101% YoY and +88% sequentially QoQ. Nevertheless, 96% of the increase in revenues (both YoY and QoQ) is accounted for by one segment... namely Data Center. No wonder: it was in this segment that demand for AI materialized. CFO commentary: "Data Center revenue was a record, up 171% from a year ago and up 141% sequentially, led by cloud service providers and large consumer internet companies. Strong demand (…) was primarily driven by the development of large language models and generative AI."

Nvidia Q2 2023 2.jpeg

24-Aug-2023. Nvidia - part 2. The increase in revenues per segment (and each segment's share of total revenues) is clearly visible in the charts below, where the Y-axis scale is the same for all charts.

Nvidia Q2 2023 3.jpeg
Nvidia Q2 2023 4.jpeg
Nvidia Q2 2023 5.jpeg

Is Nvidia Expensive?

nd2.jpg

4-Sep-2023. Is Nvidia expensive? Certainly, with MarketCap/Revenue at almost 19x. However, based on its own valuation history and future revenues, the valuation has not changed much even as market cap grew from USD 360 billion at the end of 2022 (MCap/Rev at 15x) to USD 1.2 trillion now (MCap/Rev at 18.7x) - against an average 2020-2022 valuation of 17.4x.

Revenue for the last twelve months (trailing twelve months, TTM) is USD 32.861 billion. This gives a MarketCap/Revenue ratio of as much as 36.7x. But if, in the TTM calculation, we factor in Q3 2024 revenue (based on the company's guidance of $16.0 billion), the ratio drops to 28.0 - which is expensive even compared to Nvidia's own valuation history (the 2020-2022 average MarketCap/Revenue is "only" 17.4x). And when we annualize the company's guidance for the coming quarter, annualized revenue hits $64.0 billion (based on the Q3 2024 guidance) and the MarketCap/Revenue ratio drops to 18.7 - around the 2020-2022 average of 17.4x.

Practically the only explanation for such an increase in revenues is the Data Center business segment - in other words, AI. Revenues in this segment were:
Q3 2023 (Aug 2022 - Oct 2022): USD 3.833 billion
Q4 2023 (Nov 2022 - Jan 2023): USD 3.616 billion
Q1 2024 (Feb 2023 - Apr 2023): USD 4.284 billion
Q2 2024 (May 2023 - Jul 2023): USD 10.323 billion
Q3 2024 (Aug 2023 - Oct 2023, my estimate): USD 12.723 billion
From 3.6 to 12.7 in three quarters... amazing! Things like that don't happen... too often! So the main question is obvious: is this revenue growth sustainable? Or when will Nvidia hit the next air pocket - when will customers realize they ordered too many GPUs? Partial answers to these questions were given during the Q2 conference call, but more on that in the next post...

nd1.jpg
nd3.jpg
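The MarketCap/Revenue arithmetic above can be sketched in a few lines. This is a minimal illustration using the post's rounded figures; small differences versus the quoted 36.7x come from rounding of the inputs.

```python
# Sketch of the post's MarketCap/Revenue arithmetic.
# All figures in USD billions, taken from the post (rounded).
market_cap = 1200.0  # ~USD 1.2 trillion

# Trailing twelve months of reported revenue, per the post.
ttm_revenue = 32.861
ps_trailing = market_cap / ttm_revenue  # ~36.5x (post quotes 36.7x)

# Annualize the company's Q3 FY2024 revenue guidance of USD 16.0 bn.
annualized_guidance = 16.0 * 4  # 64.0
ps_forward = market_cap / annualized_guidance  # ~18.75x, near the 2020-22 average

print(round(ps_trailing, 1), round(ps_forward, 1))
```

The gap between the trailing and forward ratios is the whole story: when revenue doubles within a year, a trailing multiple overstates the valuation by roughly that factor.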

Nvidia AI & acc. computing

nvidia 1.PNG

4-Sep-2023. Nvidia - part 2. AI has an impact on the entire stock market and may extend the stock market cycle and the further advance of equity indices. Based on Nvidia's Q2 conference call, two things can be stated:
- this is not a one-time leap related to AI, but "a new computing era has begun" and the industry is undergoing "a platform shift",
- and it all boils down to two things: accelerated computing and generative AI.

Vivek Arya, Bank of America Merrill Lynch – Analyst: "(…) Just give us your sense of how sustainable is this demand as we look over the next one to two years (…) how many servers are already AI accelerated? Where is that going? So, just give us some confidence that the growth that you are seeing is sustainable into the next one to two years."

Jensen Huang, CEO: "There's about $1 trillion worth of data centers, call it, a quarter of a trillion dollars of capital spend each year. You're seeing the data centers around the world are taking that capital spend and focusing it on the two most important trends of computing today, accelerated computing and generative AI. And so, I think this is not a near-term thing. This is a long-term industry transition, and we're seeing these two platform shifts happening at the same time."

Joe Moore, Morgan Stanley – Analyst: "(…) how much unfulfilled demand do you think there is?"

CEO: "(…) we have excellent visibility through the year and into next year. The demand -- the easiest way to think about the demand is the world is transitioning from general-purpose computing to accelerated computing. (…) The best way for companies to increase their throughput, improve their energy efficiency, improve their cost efficiency, is to divert their capital budget to accelerated computing and generative AI."

Toshiya Hari, Goldman Sachs – Analyst: "(…) given your position as the key enabler of AI (…), I'm curious how confident you are that there will be enough applications or use cases for your customers to generate a reasonable return on their investments. I guess I ask the question because there is a concern out there that, you know, there could be a bit of a pause in your demand profile in the outyears."

CEO: "Toshi, I'm reluctant to guess about the future (…) Using general-purpose computing at scale is no longer the best way to go forward. It's too costly, it's too expensive, and the performance of the applications is too slow, right? And finally, the world has a new way of doing it. It's called accelerated computing. And what kicked it into turbocharge is generative AI."

Nvidia +21.8% to ATH

Nvidia 21% 001.jpeg

16-Nov-2023. Nvidia +21.8% in 10 days to an ATH. Nvidia stock rose for the tenth consecutive session to a new all-time high on 14-Nov - an increase of 21.8% (closing prices, 1-Nov to 14-Nov). Yesterday Nvidia had its first negative session (-1.55%). Nvidia is the top performer in the entire Mag7 club, counting since the publication of Q1 2023 earnings: nominally +60.1% since May 24, 2023, and +50.7% relative to the S&P500 (see Table 1, bottom panel). In second place is Meta, +58.9% nominally and +47.9% relative to the S&P500 since the publication of Q1 2023 results (see Table 1 and Chart 1). Chart 2 shows Nvidia's stock price since 2021. Recently, other companies have also been on a good streak:
Microsoft: 9 positive days in a row (27-Oct to 8-Nov, +10.8%),
Meta: 8 positive days in a row (3-Nov to 14-Nov, +8.2%),
Amazon: 8 positive days in a row (27-Oct to 7-Nov, +19.4%),
S&P500: 8 positive days in a row (30-Oct to 8-Nov, +6.5%).

Nvidia 21% 002.jpeg
Nvidia 21% 003.jpeg
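Streaks like those listed above can be found mechanically from a series of closing prices. A minimal sketch - the prices below are made up for illustration, not market data:

```python
# Find the longest run of consecutive up-days in a closing-price series.
def longest_up_streak(closes):
    best = cur = 0
    for prev, nxt in zip(closes, closes[1:]):
        cur = cur + 1 if nxt > prev else 0  # extend the run, or reset on a down/flat day
        best = max(best, cur)
    return best

closes = [100, 101, 103, 102, 104, 105, 106, 107]  # illustrative prices
print(longest_up_streak(closes))  # 4 (the run 102 -> 104 -> 105 -> 106 -> 107)
```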

Nvidia Q3 2023 Preview 

Nvidia 001.PNG

21-Nov-2023. Nvidia Q3 2023 earnings preview. The Nvidia share price reached its all-time high yesterday - exactly one day before the publication of Q3 2023 results (earnings are due today after the market close). The most important things to watch:
1) Beat/miss on sales. Nvidia's Q2 revenues grew 101.5% YoY (beating Wall Street expectations by 22.3%). The market expects Q3 revenues of $16.2 billion (a YoY increase of 173%).
2) Beat/miss on adjusted EPS. In Q2, adjusted EPS was $2.70 (+430% YoY, beating Wall Street expectations by 30.4%). The market expects $3.37 in Q3 (a year ago it was $0.58).
3) Revenue growth in the Data Center segment. This is key, as this segment houses the company's AI chips division.
4) China. On October 17, Nvidia's stock price fell 7.5% on news that President Biden intends to restrict the sale of AI chips to China. Nvidia has asserted that the near-term impact may not be significant; the long-term effects, however, could be more pronounced.
5) Competition from other tech giants, e.g. production of their own AI chips.
6) Potential disruptions in the supply chain.
From the publication of the Q1 results (24-May-2023) to 20-Nov-2023, the Nvidia share price increased nominally by 61.5%, and relative to the S&P500 by 54.6% (see the chart).

Nvidia Q3 2023 Earnings

Nvidia Q3 1.jpeg
Nvidia Q3 2.jpeg

23-Nov-2023. Nvidia Q3 earnings. Nvidia reported blowout Q3 2023 results, yet the stock is down in after-hours trading (-1.15% at the moment of writing). First take:
1) Revenue $18.12 billion (expected $16.2 billion, an +11.8% beat), +205.5% YoY (expected +173%). Figure 1.
2) Adjusted EPS $4.02 (expected $3.37, a +19.3% beat).
3) Data Center segment revenue $14.51 billion (expected $12.82 billion, a +13.2% beat), +279% YoY (expected +234%).
4) Q4 revenue guidance of $20.0 billion (plus/minus 2%). Figure 2 shows revenue by segment and the Q4 guidance. This guidance implies 230% YoY revenue growth in Q4.
5) On exports to China... unfortunately there is no good news here. Export restrictions to China have already come into force and Nvidia expects a significant decline in sales in Q4. The CFO's detailed commentary in this regard is presented in Figure 3.

Nvidia Q3 3.jpeg
Nvidia Q3 4.jpeg
Nvidia Q3 5.jpeg
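The beat percentages quoted in these posts all come from one formula: (actual - consensus) / consensus. A quick sketch with the reported Q3 FY2024 figures (revenue as reported, $18.12 billion, which is often rounded to $18.2 billion; small differences versus the quoted percentages come from rounding of the consensus inputs):

```python
# Beat/miss versus consensus, expressed in percent.
def beat_pct(actual, expected):
    return (actual / expected - 1) * 100

# Q3 FY2024 figures from the post (USD billions / USD per share).
print(round(beat_pct(18.12, 16.2), 1))  # revenue beat, ~+11.9% (post quotes +11.8%)
print(round(beat_pct(4.02, 3.37), 1))   # adjusted EPS beat, ~+19.3%
```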

23-Nov-2023. Nvidia Q3 Earnings, part 2. Growth engines in full throttle. Jen-Hsun Huang, Nvidia's CEO, during the earnings conference: "NVIDIA GPUs, CPUs, networking, AI foundry services, and NVIDIA AI enterprise software are all growth engines in full throttle." And indeed, the growth of Nvidia's business is impressive, and the same goes for the further growth and prospects of AI. In fact, the entire results conference only confirmed this... except for one aspect: the company admitted that the revenue forecast for Q4 could have been higher if not for China.

Colette Kress, CFO: "But with the absence of China, for our outlook for Q4, sure, there could have been some things that we are not supply constrained that we could have sold to China, but we no longer can. So, could our guidance have been a little higher in our Q4? Yes."

And some more color from the CFO on the export ban: "U.S. government announced a new set of export control regulations for China and other markets, including Vietnam and certain countries in the Middle East. These regulations require licenses for the export of a number of our products, including our Hopper and Ampere 100 and 800 series and several others (…). We expect that our sales to these destinations will decline significantly in the fourth quarter (…) For the highest performance levels, the government requires licenses. For lower performance levels, the government requires a streamlined prior notification process. And for products with even lower performance levels, the government does not require any notice at all. (…) We are working with some customers in China and the Middle East to pursue licenses from the U.S. government. It is too early to know whether these will be granted for any significant amount of revenue."

"The export controls will have a negative effect on our China business, and we do not have good visibility into the magnitude of that impact even over the long term."

"(…) regarding potentially new products that we could provide to our China customers. It's a significant process to both design and develop these new products. As we discussed, we're going to make sure that we are in full discussions with the U.S. government of our intent in these products as well (…). And going forward, whether that's medium term or long term, it's just hard to say both the ideas of what we can produce with the U.S. government and with the interest of our China customers. So, we stay still focused on finding that right balance for our China customers, but it's hard to say at this time."

Attached are charts showing quarter-on-quarter and year-on-year revenue growth, divided into segments by market platform. Apart from the obvious growth of Data Center, Gaming and Professional Visualization are also performing well, both sequentially and year on year.

Nvidia post Q3 2023 

wy1.PNG

23-Nov-2023. Nvidia underperforms the S&P500. Nvidia closed the first day of trading after the publication of Q3 2023 results with a slight loss: -2.46% nominally and -2.86% relative to the S&P500. While the share price reacted spectacularly after the Q4 2022 and Q1 2023 results, the last two quarters were not as good (the price has been oscillating around the S&P500 in relative terms, Chart 1). It seems that the great financial results were already reflected in the share price. Still, since the publication of the Q4 2022 results, the rate of return remains impressive: +134.7% nominally and +120.6% relative to the S&P500 - see Chart 2. Table 1 summarizes the results of the Mag7 and Novo Nordisk.

wy2.PNG
Tabela results kopia.jpg

Nvidia Q4 2023 preview

Nvidia PNG.png

20-Feb-2024. The entire investment world now depends on Nvidia's earnings tomorrow... and it's not even the largest company in the world (yet) 😊 Let's check the data. Wall Street expects revenues of $20.41 billion, which is +12.7% sequentially versus the previous quarter (see Figure 1) - but year on year it would be +237.3%! The company itself provided guidance of $20.0 billion, +/- 2% (Figure 2). Wall Street thus expects revenues exactly at the top of the company's band. Not very ambitious... In the previous three quarters, the company beat the consensus by 11.9%, 22.4% and 10.3% respectively - see Figure 1. But the most important thing will, of course, be the increase in Data Center revenues (AI sits mainly there). Wall Street expects $17.2 billion, or +18.6% sequentially and +375.9% YoY. See Figure 3. According to the consensus, Data Center revenues will increase by another 40% to $24.1 billion over the next 4 quarters.

FEB 20 Nvidia1.PNG
FEB 20 Nvidia2.PNG
FEB 20 Nvidia3.PNG
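The growth rates implied by the consensus above can be reproduced from the reported quarters. A sketch using figures from these posts (USD billions); the small differences versus the quoted percentages come from rounding of the inputs:

```python
# Sequential and YoY growth implied by the Q4 FY2024 revenue consensus.
consensus_q4 = 20.41   # Wall Street consensus, per the post
prev_quarter = 18.12   # Q3 FY2024 reported revenue
year_ago     = 6.05    # Q4 FY2023 reported revenue

seq_growth = (consensus_q4 / prev_quarter - 1) * 100  # ~+12.6% (post: +12.7%)
yoy_growth = (consensus_q4 / year_ago - 1) * 100      # ~+237.4% (post: +237.3%)
print(round(seq_growth, 1), round(yoy_growth, 1))
```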

Nvidia Q4 2023 review

Nvidia PNG 2.png

22-Feb-2024. Nvidia Q4 2024 Earnings Review. When Nvidia published its results for the quarter ending January 31, 2023, it beat Wall Street's revenue expectations by only 0.5%. Back then, revenues amounted to just $6.05 billion and dropped year on year by 20.8% (yes, that's the correct number: they dropped by over 20%). Fast forward a year to the current quarter ending January 31, 2024, and Nvidia beat market expectations by 8.3% (the smallest beat in 3 quarters - see Figure 1), delivering revenues of $22.10 billion - an increase of 22.0% sequentially and 265% year on year (and yes, this is also the correct number - see Figure 2). What a difference a year makes in the AI industry... The last year in the AI market was best summed up by Jensen Huang, founder and CEO of NVIDIA: "Accelerated computing and generative AI have hit the tipping point. Demand is surging worldwide across companies, industries and nations." Exactly across: (1) companies, (2) industries, and (3) nations (so-called sovereign AI). We can safely say that everyone is already investing in AI. Colette Kress, CFO, during the earnings call: "Countries around the world are investing in AI infrastructure to support the building of large language models in their own language, on domestic data, and in support of their local research and enterprise ecosystems." However, the Data Center business segment is responsible for as much as 92% of the year-on-year revenue growth - see Figure 3. The NVIDIA Data Center platform is focused on accelerating the most compute-intensive workloads, such as AI, data analytics, graphics and scientific computing. Jensen Huang, CEO: "Our Data Center platform is powered by increasingly diverse drivers - demand for data processing, training and inference from large cloud-service providers and GPU-specialized ones, as well as from enterprise software and consumer internet companies. Vertical industries - led by auto, financial services and healthcare - are now at a multibillion-dollar level." Figure 4 shows the growth in revenues in this segment and the consensus of analysts' estimates for 2024-2025 (both before the publication of the results and after the upward revision analysts made following the publication).

Nvidia 1.PNG
Nvidia 2.PNG
Nvidia 3.png
Nvidia 4.PNG
Nvidia PNG 3.png

22-Feb-2024. Nvidia Q4 2024 Earnings Call Key Takeaways.

CFO on demand for next-generation products: "We expect our next generation products to be supply constrained as demand far exceeds supply."

Some color on AI revenue: "We estimate in the past year approximately 40% of data center revenue was for AI inference. (…) in the fourth quarter, large cloud providers represented more than half of our data center revenue".

On China: "Although we have not received licenses from the US government to ship restricted products to China, we have started shipping alternatives that don't require a license for the China market. China represented a mid single digit percentage of our data center revenue in Q4, and we expect it to stay in a similar range in the first quarter".

On a new stream of revenues: "We also made great progress with our software and services offerings, which reached an annualized revenue run rate of $1 billion."

CEO on revenue growth in 2024, 2025 and beyond: "Yeah, well, we guide one quarter at a time, but fundamentally, the conditions are excellent for continued growth. Calendar '24 to calendar '25 and beyond. And let me tell you why. We're at the beginning of two industry-wide transitions, and both of them are industry-wide. The first one is a transition from general to accelerated computing. General-purpose computing, as you know, is starting to run out of steam. (…) There's just no reason to update with more CPUs when you can't fundamentally and dramatically enhance its throughput like you used to. And so you have to accelerate everything. This is what Nvidia has been pioneering for some time. And with accelerated computing, you can dramatically improve your energy efficiency, you can dramatically improve your cost in data processing by 20 to one, huge numbers. And of course, the speed, that speed is so incredible that we enabled a second industry-wide transition called generative AI. (…) We believe these two trends will drive a doubling of the world's data center infrastructure installed base in the next five years and will represent an annual market opportunity in the hundreds of billions".


Nvidia vs Cisco Systems

Cisco bubble.png

25-Feb-2024. What does a real bubble look like? Try Cisco Systems. Figure 1 shows the price-to-sales ratio for Cisco Systems. In the 1990s, the company provided Internet infrastructure, just as Nvidia today provides computing infrastructure for artificial intelligence models. In the 1990s, Cisco was hailed as the King of the Internet. On March 27, 2000, at the peak of its stock price, Cisco was valued at 63 times sales and was then the largest company in the world by market cap, overtaking Microsoft. Figure 2 shows the price-to-sales ratio for both Cisco and Nvidia. As of February 23, 2024, price to sales for Nvidia is 31.9 (based, however, on trailing sales). Because the company is rapidly increasing revenues, the effective price to sales is only 20 (see Figure 3, blue line). Nvidia has had 3 big business and valuation expansions in recent years (Figure 3):
1) From 2016, due to the Bitcoin mining craze (Nvidia's GPUs were genuinely popular for mining Bitcoin and Ethereum),
2) From 2020, due to the pandemic (Nvidia's GPUs supported remote work, gaming and COVID-19 research),
3) From November 2022, due to the artificial intelligence boom (Nvidia's GPUs are essential for AI model training).
Do we have a real bubble in Nvidia's valuation? Not necessarily; if it is one, it's more of a "baby bubble". Key takeaway: since 2020, the valuation has been stable at around 18 times sales - see Figure 4 (Figure 5 shows more details about the company's market capitalization and annualized sales).

Cisco1.PNG
Cisco2.PNG
Cisco3 ver2.PNG
Cisco4.PNG
Cisco5.PNG

Nvidia valuation check

Nvidia March19 3D.png

21-Apr-2024. Nvidia - valuation check. Nvidia fell by over 10% on Friday, and since the March 25 all-time high it is already down 19.8% (at closing prices). Based on intraday prices, Nvidia has already fallen 21.77% (from the intraday high of $974 on March 8, 2024). Let's check how the company looks from a fundamental perspective; to simplify, let's look at sales growth. Figure 1 shows Wall Street's expectations for revenue growth: 1) immediately before the earnings were published on February 22, 2024, 2) immediately after the publication of those earnings, and 3) as of April 19, 2024. Current expectations are even higher than those right after the publication of the results - analysts keep raising their revenue growth expectations. That's good! Figure 2 shows Nvidia's market cap against revenues. The company's rapidly growing market cap goes hand in hand with strong revenue growth. Figure 3 shows the same relationship as a ratio (price to sales). As of April 19, the ratio had dropped to 19.6 - and if we take the revenue Wall Street expects for the next quarter (the company will publish its guidance on May 22, 2024), the ratio drops to 17.8 - below the 2020-2023 average. Figure 4 shows the Nvidia stock price with the earnings release dates plotted. All in all, there is nothing to worry about regarding business growth and Nvidia's valuation according to the price-to-sales ratio. As a reminder, the price to sales at the peak of the Cisco Systems stock price on March 27, 2000 was 63 - see Figure 5.

Nvidia March19.PNG
Nvidia March19 2.PNG
Nvidia March19 3.PNG
Nvidia March19 4.PNG
Nvidia March19 5.PNG

Nvidia's 20% drawdown

Nvidia April22 3D.png
Nvidia April22 1.png
Nvidia April22 3.png

22-Apr-2024. Is Nvidia's recent 20% drawdown a big deal? If we look at history, a 20% drawdown is not a big deal... unfortunately. Figure 1 shows drawdowns on Nvidia stock since 1999. Another example would be Cisco Systems in the 1990s - see Figure 2. On the way to the top in 2000, there were several drawdowns much larger than 20%. There is also an interesting aspect in Nvidia's case when looking at the price since 2016 - see Figure 3. Since 2016, we have experienced 3 strong growth waves and two 50%-plus drawdowns, related to: (1) the Bitcoin mining craze of 2016-2018, (2) the pandemic and gaming frenzy of 2020-2021, and (3) AI in 2022-2024. During the transitions from (1) to (2) and from (2) to (3), Nvidia experienced revenue shortfalls ("air pockets") related to prior market saturation, which correlate with the large drawdowns - see Figure 4. It can be assumed that, to a large extent, the size of the current drawdown may depend on whether we are going to see another "air pocket" in the company's business... Nvidia reports its earnings on May 22 this year.

Nvidia April22 2.png
Nvidia April22 4.png
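A drawdown chart like Figure 1 is just the percentage distance of each close from the running peak. A minimal sketch - the prices below are made up for illustration:

```python
# Drawdown series: percentage decline from the running peak of closing prices.
def drawdowns(closes):
    peak = float("-inf")
    out = []
    for c in closes:
        peak = max(peak, c)               # running all-time high so far
        out.append((c / peak - 1) * 100)  # 0 at a new high, negative below it
    return out

closes = [100, 120, 90, 110, 130, 100]  # illustrative prices
print([round(d, 1) for d in drawdowns(closes)])
# [0.0, 0.0, -25.0, -8.3, 0.0, -23.1]
```

The maximum drawdown over a period is simply the minimum of this series.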

Nvidia's consensus change

Nvidia 11May 3D.png

11-May-2024. Nvidia - Wall Street revenue estimate change. Nvidia will release earnings on May 22 after the close of trading. This may be the second most important event in May (after the publication of the US CPI for April on May 15). Wall Street expects revenues to increase in Q1 2025 (quarter ending April 30, 2024) to $24.07 billion (which means +235% YoY, or +$16.88 billion). Let's check how the revenue consensus has changed recently - see Figure 1. While analysts significantly increased their estimates immediately after the publication of results for Q4 2024 (quarter ending January 31, 2024), over the next two months (from February 24 to April 19) the consensus increased by only +0.4% to +4.8% - see the lower table in Figure 1. The latest change, between April 19 and May 8, is cosmetic. As a reminder, NVIDIA's outlook for the first quarter of fiscal 2025 with regard to revenue is $24.0 billion, plus or minus 2%. In previous quarters, the company easily beat its own guidance. So by how much does the company have to beat its own outlook for the stock price to react positively this time?

Nvidia 11May.PNG

Nvidia Q1 2024 preview

Nvidia 11May 3D.png

22-May-2024. Will Nvidia beat all expectations once again? In each of its last five consecutive quarters, Nvidia has beaten EPS and sales estimates and also raised guidance for the next quarter. Will it be the same this time? Wall Street expects revenues of USD 24.69 billion, a forecast that has increased by 2.6% in the last two weeks - see Figure 1. On May 8 this year, the consensus revenue amounted to USD 24.07 billion (see the table at the bottom of the chart). The forecast for the subsequent quarter (for which Nvidia will provide guidance) is USD 26.82 billion (an increase of only 0.8% since May 8). Wall Street expects earnings of $5.65 per share (adjusted, non-GAAP, diluted EPS). More of my analyses of Nvidia at this link: https://jamkaglobal.com/mag8#mag8

NVDA 22May.PNG

Nvidia Q1 2024 review

Nvidia 3D 1.png
NVDA 1.PNG
NVDA 3.png
NVDA 5.PNG

24-May-2024. Nvidia Q1 Earnings Review. Nvidia once again beat earnings expectations in fine style (see Figure 1). Two days after the earnings release, the current Wall Street consensus for future revenue is up approximately 5-10% compared to the consensus immediately before the May 22 release - see Figure 2. But the most important issue for investors remains further revenue growth. By market platform, the company's sales split into 5 segments, but as much as 87% of sales come from the Data Center segment. Interestingly, the Data Center segment is responsible for 97% of YoY sales growth and for 106% of quarter-on-quarter sales growth. See Figures 3 and 4. Approximately 45% of Data Center segment sales, or approximately $10.2 billion per quarter, are sales to 4 companies: Amazon, Alphabet, Meta and Microsoft (the so-called hyperscalers). CFO commentary: "Data Center revenue of $22.6 billion was a record, up 23% sequentially and up 427% year-on-year, driven by continued strong demand for the NVIDIA Hopper GPU computing platform. (…) Strong sequential data center growth was driven by all customer types, led by enterprise and consumer internet companies. Large cloud providers continue to drive strong growth as they deploy and ramp NVIDIA AI infrastructure at scale and represented the mid-40s as a percentage of our Data Center revenue." In a sense, we are dealing with a virtuous cycle: more advanced AI models need more computing power, and only better models can win this race. In this respect, the product cycle keeps driving Nvidia's sales as subsequent chip versions get even faster. The second trend is expanding the market beyond the hyperscalers, including Sovereign AI, but also other industries such as carmakers, biotechnology and health-care companies.
Colette Kress on more demand for AI compute: "As generative AI makes its way into more consumer Internet applications, we expect to see continued growth opportunities as inference scales both with model complexity as well as with the number of users and number of queries per user, driving much more demand for AI compute. In our trailing four quarters, we estimate that inference drove about 40% of our Data Center revenue. Both training and inference are growing significantly."

On Sovereign AI demand: "Data Center revenue continues to diversify as countries around the world invest in Sovereign AI. Sovereign AI refers to a nation's capabilities to produce artificial intelligence using its own infrastructure, data, workforce and business networks. Nations are building up domestic computing capacity through various models. Some are procuring and operating Sovereign AI clouds in collaboration with state-owned telecommunication providers or utilities. Others are sponsoring local cloud partners to provide a shared AI computing platform for public and private sector use." "We believe Sovereign AI revenue can approach the high single-digit billions this year. The importance of AI has caught the attention of every nation."

Jensen Huang on demand for AI training and inference (training is the process of teaching an AI model how to perform a given task; inference is the AI model in action, producing predictions or conclusions without human intervention): "Strong and accelerated demand -- accelerating demand for generative AI training and inference on Hopper platform propels our Data Center growth. Training continues to scale as models learn to be multimodal, understanding text, speech, images, video and 3D and learn to reason and plan." "The demand for GPUs in all the data centers is incredible. We're racing every single day. And the reason for that is because applications like ChatGPT and GPT-4o, and now it's going to be multi-modality and Gemini and its ramp and Anthropic and all of the work that's being done at all the CSPs are consuming every GPU that's out there."

On demand from startups: "There's also a long line of generative AI startups, some 15,000, 20,000 startups that in all different fields from multimedia to digital characters". "The demand, I think, is really, really high and it outstrips our supply. (…) Longer term, we're completely redesigning how computers work. And this is a platform shift. Of course, it's been compared to other platform shifts in the past. But time will clearly tell that this is much, much more profound than previous platform shifts. And the reason for that is because the computer is no longer an instruction-driven only computer. It's an intention-understanding computer."

Nvidia is quickly introducing new, faster products, for example in the field of Data Center GPUs: the H100 Tensor Core GPU, then the H200 Tensor Core GPU, and next the GB200 NVL72. Tesla, for example, has already purchased 35,000 H100s. Jensen Huang: "Enterprises drove strong sequential growth in Data Center this quarter. We supported Tesla's expansion of their training AI cluster to 35,000 H100 GPUs. Their use of NVIDIA AI infrastructure paved the way for the breakthrough performance of FSD Version 12, their latest autonomous driving software based on Vision." The H200 boosts inference speed by up to 2X compared to H100 GPUs when handling LLMs like Llama 2, and also reduces TCO (total cost of ownership) by 50% and energy use by 50% - see Figure 5. GB200 NVL72 delivers 30X faster real-time trillion-parameter LLM inference and 4X faster LLM training - see Figure 6.

NVDA 2.PNG
NVDA 4.png
NVDA 6.PNG

Nvidia valuation check

Nvidia 3D 3.png

26-May-2024. Is Nvidia stock already expensive? The bad news is that, unfortunately, yes, at least relative to its own valuation history over the last 4+ years. At a price of $1,064.69, the stock sits at the 88th percentile of its 2020-2024 valuation range on the price-to-sales ratio. In other words, Nvidia was more expensive only 12% of the time during this period. Below are some valuation reference points: - at the 25th percentile the price would be $719 (-32.5% from today's price) - at the median $815 (-23.5%) - at the 75th percentile $952 (-10.9%) - at the 90th percentile $1,095 (+2.9%) - at the 95th percentile $1,157 (+8.7%) - at the 100th percentile $1,348 (+26.6%). To calculate the price-to-sales ratio, I used annualized revenue from the next quarter (the company's guidance); this approach better reflects rapidly growing revenues. See Figure 1, where each quarter is shown separately. Figure 2 shows Nvidia's stock price on a logarithmic scale. A few weeks ago investors had a moment of doubt: in the session on April 19 they sold Nvidia down as much as -11% intraday, to a price of $756.1. Since then the price has risen by over 40%. Figure 3 shows how Nvidia's stock price reacted relative to the S&P500 after each earnings release. Each of the last 6 quarters has delivered a higher return than the S&P500; since the publication of the latest results on May 22, we are now 12.2% ahead of the S&P500. Finally, there is also some good "news": first, in the short term valuations do not have a major impact on the company's share price; and second, it can be expected with a high degree of probability that the bull market in Nvidia shares will last until the end of the bull market in the broader market (as was the case, for example, with Cisco Systems in the 1990s). This does not mean that significant corrections are not possible (e.g. in April this year, when the drawdown in Nvidia's shares was 22.4% intraday and 19.8% at closing prices).
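The percentile exercise above is easy to reproduce. Here is a minimal Python sketch of the method; note that `ps_history` below is randomly generated filler, not Nvidia's actual price-to-sales history, and `ps_today` is an assumed multiple for illustration only:

```python
import random

# Placeholder daily price-to-sales readings for 2020-2024 (illustrative only;
# the post uses Nvidia's actual history with next-quarter-guidance revenue).
random.seed(0)
ps_history = sorted(random.uniform(10, 40) for _ in range(1000))

def percentile(sorted_vals, pct):
    """Nearest-rank percentile of a pre-sorted list."""
    idx = min(len(sorted_vals) - 1, int(round(pct / 100 * (len(sorted_vals) - 1))))
    return sorted_vals[idx]

price_today, ps_today = 1064.69, 30.0   # assumed current P/S for illustration

# Price implied at each historical P/S percentile: scale today's price by
# the ratio of the percentile multiple to today's multiple.
for pct in (25, 50, 75, 90, 95, 100):
    implied = price_today * percentile(ps_history, pct) / ps_today
    print(f"{pct}th percentile: ${implied:,.0f} ({implied / price_today - 1:+.1%})")

# Percentile rank of today's multiple (the post reports ~88% for Nvidia's real series):
rank = sum(x <= ps_today for x in ps_history) / len(ps_history)
print(f"today's P/S sits at the {rank:.0%} percentile")
```

With the real 2020-2024 series in place of the filler data, this reproduces the dollar levels listed above.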

NVDA valuation 1.PNG
NVDA valuation 2.PNG
NVDA valuation 3.PNG

Nvidia inflation is good!

Nvid Taiwan 3D.png
Nvid 01.png

3-Jun-2024. Inflation is good. While economies, central banks and markets today struggle with inflation... Nvidia makes money from inflation. This is, of course, all about "computation inflation" – see Figure 1. NVIDIA CEO Jensen Huang at COMPUTEX 2024, Taiwan, June 2, 2024: “If the data that we need to process continues to scale exponentially but performance does not, we will experience computation inflation, and in fact we're seeing that right now as we speak. The amount of data center power that's used all over the world is growing quite substantially, the cost of computing is growing; we are seeing computation inflation. This of course cannot continue. The data is going to continue to increase exponentially and CPU performance scaling will never return. There is a better way. For almost two decades now we've been working on accelerated computing. CUDA augments a CPU, offloads and accelerates the work that a specialized processor can do much better. In fact, the performance is so extraordinary that it is very clear now, as CPU scaling has slowed and even substantially stopped, that we should accelerate everything. I predict that every application that is processing intensive will be accelerated, and surely every data center will be accelerated in the near future.” But how to fight that inflation? It's simple: the more you buy, the more you save! – see Figure 2. “We could accelerate what used to take 100 units of time down to one unit of time.” “We add a $500 GPU to a $1,000 PC and the performance increases tremendously. We do this in a data center: a billion-dollar data center, we add $500 million worth of GPUs, and all of a sudden it becomes an AI factory. This is happening all over the world today. Well, the savings are quite extraordinary: (1) you're getting 60 times performance per dollar; (2) with a 100-times speed-up you only increase your power by 3x; (3) with a 100-times speed-up you only increase your cost by 1.5x. The savings are incredible. The savings are measured in dollars. It is very clear that many companies spend hundreds of millions of dollars processing data in the cloud. That's the reason why you've heard me say the more you buy, the more you save, and now I've shown you the mathematics. It is not accurate but it is correct, okay, that's called CEO math. CEO math is not accurate but it is correct: the more you buy, the more you save.”
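As a quick sanity check of the "CEO math" above, here are the quoted ratios unpacked; as Huang himself says, the math is "not accurate but it is correct":

```python
# The quoted ratios: 100x speed-up, at only 3x the power and 1.5x the cost.
speedup = 100
power_factor = 3
cost_factor = 1.5

perf_per_watt = speedup / power_factor      # ~33x more performance per watt
perf_per_dollar = speedup / cost_factor     # ~67x, in the ballpark of the quoted "60 times"
print(f"performance per watt:   {perf_per_watt:.0f}x")
print(f"performance per dollar: {perf_per_dollar:.0f}x")
```

So the quoted "60 times performance per dollar" is roughly, though not exactly, the 100x speed-up divided by the 1.5x cost increase.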

Nvid 02.png

Nvidia - this is Blackwell

Blackwell 3d.png
Blackwell 01.png

16-Jun-2024. Ladies and gentlemen, this is Blackwell. Nvidia CEO Jensen Huang’s keynote at COMPUTEX 2024 was fascinating. Below are selected excerpts about the world's most powerful AI chip: Nvidia's Blackwell. Jensen Huang (Figure 1): “Ladies and gentlemen, this is Blackwell. Blackwell is in production. Incredible amounts of technology.” Blackwell-architecture GPUs pack 208 billion transistors and are manufactured using a custom-built TSMC 4NP process. All Blackwell products feature two reticle-limited dies connected by a 10 terabytes per second (TB/s) chip-to-chip interconnect into a unified single GPU. Jensen Huang (Figure 2): “This is our production board. This is the most complex, highest performance computer the world's ever made. This is the Grace CPU. And these are, you can see, each one of these Blackwell dies, two of them connected together. You see that it is the largest die, the largest chip the world makes. And then we connect two of them together with a ten terabyte per second link.” Figure 3: “And the performance is incredible. Take a look at this. So, you see, the computational, the FLOPs, the AI FLOPs, for each generation has increased by a thousand times in eight years. Moore's law in eight years is something along the lines of, oh, I don't know, maybe 40, 60. And in the last eight years, Moore's law has gone a lot, lot less. And so just to compare, even Moore's law at its best of times compared to what Blackwell could do.” Figure 4: “And whenever we bring the computation high, the thing that happens is the cost goes down. (…) the energy used to train a GPT-4, 2 trillion parameter, 8 trillion tokens (…) has gone down by 350 times. Well, Pascal would have taken 1,000 gigawatt hours. (…) we've now taken with Blackwell what used to be 1,000 gigawatt hours to three, an incredible advance, three gigawatt hours.” From 17,000 joules (of energy) per token to just 0.4 joules per token! Chat GPT-4 uses about 3 tokens to generate one word.
Jensen Huang: “Our token generation performance has made it possible for us to drive the energy down by 45,000 times, 17,000 joules per token. That was Pascal 17,000 joules. It's kind of like two light bulbs running for two days. It would take two light bulbs running for two days. Amounts of energy, 200W running for two days to generate one token of GPT-4. It takes about three tokens to generate one word. And so the amount of energy used necessary for Pascal to generate GPT-4 and have a ChatGPT experience with you was practically impossible. But now we only use 0.4 joules per token, and we can generate tokens at incredible rates and very little energy.” However, the Blackwell is not big enough for AI compute! Jensen Huang (Figure 5): “Okay, so Blackwell is just an enormous leap. Well, even so, it's not big enough. And so we have to build even larger machines. And so the way that we build it is called DGX.” DGX Blackwell (GB200 NVL72) connects 36 Grace CPUs and 72 Blackwell GPUs in a rack-scale design. The GB200 NVL72 is a liquid-cooled, rack-scale solution that boasts a 72-GPU NVLink domain that acts as a single massive GPU. This “massive single GPU” is quite big – see Figure 6. Yet, what’s amazing.. it’s still not enough for AI compute… but this is a story for another post… Jensen Huang: “And even this is not big enough, even this is not big enough for an AI factory. So we have to connect it all together with very high speed networking.”
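The energy figures quoted above are roughly self-consistent, which is easy to verify:

```python
# Cross-checking the energy-per-token figures quoted above.
pascal_j_per_token = 17_000     # Pascal, joules per token (quoted)
blackwell_j_per_token = 0.4     # Blackwell, joules per token (quoted)

token_reduction = pascal_j_per_token / blackwell_j_per_token
print(f"energy per token down ~{token_reduction:,.0f}x")    # ~42,500x, close to the quoted 45,000x

# Training energy for a GPT-4-scale model: 1,000 GWh on Pascal vs 3 GWh on Blackwell.
train_reduction = 1000 / 3
print(f"training energy down ~{train_reduction:.0f}x")      # ~333x vs the quoted "350 times"
```

The small gaps (42,500 vs 45,000, 333 vs 350) are just the rounding in the keynote figures.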

Blackwell 02.png
Blackwell 03.png
Blackwell 04.png
Blackwell 05.png
Blackwell 06.png

Nvidia valuation check

NVDA 3DD.png

19-Jun-2024. Nvidia – already expensive? The wealth of Nvidia's CEO increased by $3.9 billion during yesterday's session alone (this is the increase in the value of the Nvidia shares held by Jensen Huang), and by as much as $74.9 billion since the beginning of the year. Investors have nothing to complain about either... during yesterday's session alone the value of their Nvidia shares increased by $113.3 billion, and since the beginning of the year... drum roll please... by $2.11 trillion. So is Nvidia already expensive? Price to Sales, based on the sum of revenues from the preceding 12 months (TTM), is 41.8x. At the peak of its market valuation in March 2000, Cisco Systems reached a Price to Sales of 63.2x - see Figure 1. But Nvidia's revenues are growing rapidly. TTM revenue currently amounts to $79.77 billion, but based on the company's guidance (next quarter's guidance, annualized) it already amounts to $112.00 billion - see Figure 2 - and Price to Sales drops to 29.8x. Wall Street continues to raise its own estimates of Nvidia's future revenues (see Figure 3), and the current forecast for the next 4 quarters is $130.67 billion - thus Price to Sales drops to 25.5x. Wall Street forecasts revenues in the following 4 quarters (the 5th to the 8th quarter, i.e. May 2025 to April 2026) at $161.34 billion - which reduces the Price to Sales ratio to (only) 20.7x. Summary in Figure 4.
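The multiples above all follow from one market cap divided by different revenue bases. A quick Python sketch, with the market cap backed out from the quoted 41.8x TTM multiple:

```python
# Reproduce the price-to-sales figures quoted above (revenues in $bn).
market_cap = 41.8 * 79.77   # implied market cap from the quoted TTM multiple, ~$3.33tn

revenue_bases = {
    "TTM (trailing 12 months)":          79.77,
    "guidance, next quarter annualized": 112.00,
    "Wall St, next 4 quarters":          130.67,
    "Wall St, quarters 5-8":             161.34,
}
for label, revenue in revenue_bases.items():
    print(f"{label}: P/S = {market_cap / revenue:.1f}x")
```

The same market cap gives 41.8x, 29.8x, 25.5x and 20.7x depending only on which revenue number you believe.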

NVDA 01.PNG
NVDA 02.PNG
NVDA 03.PNG

Nvidia's drawdowns

NVDA drawdowns 3DD.png

22-Jun-2024. How big a drawdown in Nvidia is attractive? After two down sessions, Nvidia is 6.65% below its last ATH (based on daily closing prices). This is the biggest drop since April, when we had a drawdown of 19.79% that ended with a sharp final move: a 10% price drop in a single day, on April 19. Since the bottom in October 2022, we have so far had three large drawdowns of around 20%, and in each of them the entire market (S&P500) joined the declines at the same time - see Figure 1. Generally, it can be said that during this period Nvidia did not have any major declines above 10% unless the entire market fell at the same time. The current Nvidia drawdown of 6.65% has not been accompanied by declines on the S&P500, where the drawdown is only 0.41%. Let's check what drawdowns looked like in the case of Cisco Systems in the 1990s - see Figure 2. From the bottom of 1994 ($0.77), the price of Cisco rose 73-fold to the 2000 top ($56.85)! Over the same period, the S&P500 rose only 3.4-fold. On the way to the top, Cisco had 5 drawdowns above 20% (the largest were 51%, 38%, 37%, 29% and 26%), yet the size of these drawdowns was less related to declines in the broader market (S&P500). Only the last large drawdown, in 1998, can be clearly linked to a decline of the broad market (Cisco -37%, S&P500 -19%). Interestingly, those declines also ended with a sharp one-day drop: on August 31, 1998, the S&P500 fell 6.8%, marking the bottom of its correction. Cisco fell 13.5% on that day (although it only marked the bottom of its own correction on October 7, 1998). Here too the end of Cisco's correction came with a sharp accent: 3 consecutive down days of -13%, -4% and -5%, about -21% in total. From then until the 2000 top, Cisco's share price increased by 630%! Figure 3 compares Nvidia and S&P500 drawdowns, and Figure 4 does the same for Cisco Systems. In the case of Nvidia, historical declines above 50% are nothing out of the ordinary. However, after its 89% decline into 2002, Cisco needed as many as 19 years to set another ATH. In Nvidia's case, the size of future drawdowns will likely be driven by the scale of any slowdown in revenue growth, and by how such a slowdown, or even a possible decline in revenues, is received by the market.
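For reference, a drawdown series like the ones in the figures is just the percentage decline from the running all-time high of daily closes. A minimal Python sketch, with made-up prices for illustration:

```python
# Minimal drawdown calculation from a series of daily closing prices.
def drawdowns(closes):
    """Percent decline from the running all-time high at each close."""
    peak = float("-inf")
    out = []
    for c in closes:
        peak = max(peak, c)          # running all-time high so far
        out.append(c / peak - 1.0)   # 0 at a new high, negative below it
    return out

prices = [100, 110, 104, 120, 96, 102, 130]   # illustrative closes
dd = drawdowns(prices)
print(f"deepest drawdown: {min(dd):.1%}")     # 96 vs the 120 peak, i.e. -20%
```

Run on Nvidia's or Cisco's actual closing prices, `min(dd)` over a window gives the drawdown figures quoted in this post.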

NVDA drawdowns 001.png
NVDA drawdowns 002.png
NVDA drawdowns 003a.png
NVDA drawdowns 004.png

Nvidia's super chips

Nvda Rubin 3D.png

24-Jun-2024. Nvidia’s super chips: Blackwell, Blackwell Ultra, Rubin & Rubin Ultra. The newest and fastest Blackwell super chip has not even properly gone on sale yet, and Nvidia is already working on its successor, called Rubin. Jensen Huang: “We have code names in our company and we try to keep them very secret. Oftentimes, most of the employees don't even know, but our next generation platform is called Rubin, the Rubin platform.” Nvidia has shortened the product cycle to one year: Blackwell in 2024, Blackwell Ultra in 2025, Rubin in 2026 and Rubin Ultra in 2027. This will enable even faster development of AI and the creation of even larger data centers. Jensen Huang: “Our company has a one-year rhythm. Our basic philosophy is very simple: build the entire data center scale, disaggregate and sell to you parts on a one-year rhythm, and push everything to technology limits.” Jensen Huang: “The days of millions of GPU data centers are coming, and the reason for that is very simple. Of course, we want to train much larger models, but very importantly, in the future, almost every interaction you have with the internet or with a computer will likely have a generative AI running in the cloud somewhere, and that generative AI is working with you, interacting with you, generating videos or images or text, or maybe a digital human. And so you're interacting with your computer almost all the time. And there's always a generative AI connected to that. Some of it is on prem, some of it is on your device, and a lot of it could be in the cloud. These generative AIs will also do a lot of reasoning capability instead of just one-shot answers. They might iterate on answers, so that it'd improve the quality of the answer before they give it to you. And so the amount of generation we're going to do in the future is going to be extraordinary.” Figure 1 shows a comparison of the Hopper, Blackwell and Rubin platforms. Jensen Huang: “So we have the Rubin platform, and one year later we have the Rubin Ultra platform. All of these chips that I'm showing you here are all in full development, 100% of them. And the rhythm is one year at the limits of technology. All are architecturally compatible. So this is basically what NVIDIA is building.” Figure 2 shows the details of the Blackwell platform. Meanwhile, Nvidia is currently down 11.70% from the June 18 all-time high - see Figure 3.

Nvda Rubin 01.png
Nvda Rubin 02.png
Nvda Rubin 03.png

The Past and Future of AI

26-Jun-2024. Past and Future of AI. This is a short history of AI (see the graphic by Goldman). H/T to James Wong, who posted it on LinkedIn. The biggest leap was the public debut of ChatGPT (GPT-3.5) in November 2022, followed by the quick (after only 4 months) debut of the much better GPT-4.0. An even better version, GPT-4o, debuted in May 2024. But what's next? On the hardware side (computing infrastructure in general), Nvidia will not disappoint... the newest and fastest Blackwell chip is already in production, and the company is already working on even faster ones: Blackwell Ultra (available in 2025), Rubin (2026) and Rubin Ultra (2027). On the "software" side... we have more and better versions of LLM models ahead of us. Leopold Aschenbrenner, ex-member of OpenAI's Superalignment team and now founder of an investment firm focused on artificial general intelligence (AGI), has just posted a massive 165-page essay on AI. He argues that simply extrapolating the current trend of AI improvements leads to AGI relatively soon. If true... the current boom in AI shares is not even a "baby bubble" - we are at the very beginning... Take a look at Leopold's chart (attached below) showing where we may be in 2028 ("Base Scaleup of Effective Compute"). Leopold Aschenbrenner: "I make the following claim: it is strikingly plausible that by 2027, models will be able to do the work of an AI researcher/engineer. That doesn’t require believing in sci-fi; it just requires believing in straight lines on a graph". Leopold Aschenbrenner: "GPT-4’s capabilities came as a shock to many: an AI system that could write code and essays, could reason through difficult math problems, and ace college exams. A few years ago, most thought these were impenetrable walls. But GPT-4 was merely the continuation of a decade of breakneck progress in deep learning. A decade earlier, models could barely identify simple images of cats and dogs; four years earlier, GPT-2 could barely string together semi-plausible sentences. Now we are rapidly saturating all the benchmarks we can come up with. And yet this dramatic progress has merely been the result of consistent trends in scaling up deep learning". And what is the biggest obstacle? The lack of (further) data to train LLMs on; even all the data available on the Internet is not enough.

AI histoy by Goldman.jpeg
Base Scaleup of Effective Compute.png

Nvidia Q2 2024 preview

Nvda Rubin 3D.png

28-Aug-2024. Nvidia Earnings Preview. Overall, two things will matter most for Nvidia’s results: (1) Q2 2024 revenue and EPS. Wall Street expects Q2 revenue of $28.35 billion and EPS of $0.63 per share. (2) The company’s guidance for Q3 2024. Wall Street expects Q3 EPS of $0.69 and Q3 revenue of $31.18 billion. For the full year, Nvidia is expected to guide EPS of around $2.70 and revenue of $120.14 billion. One way to assess potential GPU demand is to look at what the other Mag7 companies have said on the subject during their recent earnings calls. Below are selected comments (Meta, Microsoft, Alphabet, Tesla, Amazon) indicating potential demand for AI computing infrastructure; in the case of Apple, there was no direct reference to it. Figure 1 shows how often AI was mentioned during the earnings calls. Figure 2 shows Nvidia’s revenue and Wall Street expectations. Meta: Mark Zuckerberg: “The amount of compute needed to train Llama 4 will likely be almost 10x more than what we used to train Llama 3 -- and future models will continue to grow beyond that. It's hard to predict how this will trend multiple generations out into the future, but at this point I'd rather risk building capacity before it is needed, rather than too late, given the long lead times for spinning up new infra projects”. Llama 3.1 was trained on 16,000 H100s, and Llama 4 is going to have 10x more, i.e. 160,000 GPUs. Chat GPT-4 used 25,000 GPUs. Grok 2 … 20,000 GPUs (and Grok 3 is going to use 100,000). Susan Li, CFO: “We anticipate our full-year 2024 capital expenditures will be in the range of $37-40 billion, updated from our prior range of $35-40 billion. While we continue to refine our plans for next year, we currently expect significant capex growth in 2025 as we invest to support our AI research and our product development efforts”.
Microsoft: AMY HOOD on CAPEX outlook: “To meet the growing demand signal for our AI and cloud products, we will scale our infrastructure investments with FY25 capital expenditures expected to be higher than FY24. As a reminder, these expenditures are dependent on demand signals and adoption of our services that will be managed thru the year”. Over the last 4 quarters Capex at Microsoft has been growing by 70-80% YoY (was $19bln last quarter). Alphabet: Sundar Pichai, CEO, on AI capex: “the risk of under-investing is dramatically greater than the risk of over-investing for us here, even in scenarios where if it turns out that we are over-investing, we clearly -- these are infrastructure which are widely useful for us. (…) But I think not investing to be at the front here, I think, definitely has much more significant downside”. Tesla: Elon Musk about Nvidia chips: “ (…) what we are seeing is that the demand for NVIDIA hardware is so high that it's often difficult to get the GPUs. And there just seems this -- I guess I'm quite concerned about actually being able to get steady out NVIDIA GPUs and when we want them”. Amazon: Andrew R. Jassy, CEO: “We remain very bullish on the medium to long-term impact of AI in every business we know and can imagine. The progress may not be one straight line for companies. Generative AI, especially, is quite iterative and companies have to build muscle around the best way to solve actual customer problems. But we see so much potential to change customer experiences. (…) We are investing a lot across the board in AI, and we'll keep doing so as we like what we're seeing and what we see ahead of us”. Brian T. Olsavsky, CFO: “For the first half of the year capex was $30.5 billion. Looking ahead to the rest of 2024, we expect capital investments to be higher in the second half of the year. 
The majority of the spend will be to support the growing need for AWS infrastructure as we continue to see strong demand in both generative AI and our non-generative AI workloads”.

NVDA Prev Q2 001.PNG
NVDA Prev Q2 002.PNG

Nvidia Q2 2024 review

NVDA Review Q2 3DDD.png

31-Aug-2024. Beating as usual! Nvidia earnings review. As usual, Nvidia beat Wall Street expectations, both on Q2 2024 results (the quarter ending July 31, 2024) and on guidance for the next quarter, which came in above market expectations... as it has for many quarters now - see Figure 1. Yet, given the main market narratives and investor sentiment, Nvidia’s stock price would probably have fallen regardless of the earnings beat. Two days after the results were released, Nvidia is down 4.97% (while the S&P500 is up 1.01%); in relative terms, Nvidia is losing 5.97% to the S&P500 - see Table 1. Historically (after the last 6 quarterly earnings releases), Nvidia has always beaten the S&P500, counting to the end of a given quarter, i.e. until the publication of the next earnings - see Figure 2. Table 1 (last column) shows excess returns over the S&P500 of 4% to 47% (on a relative basis). Key takeaways from Nvidia’s earnings call: 1) It’s all about growth, growth, growth! Colette M. Kress: “Hopper demand is strong, and Blackwell is widely sampling. (…) Blackwell production ramp is scheduled to begin in the fourth quarter and continue into fiscal year '26. In Q4, we expect to get several billion dollars in Blackwell revenue. Hopper shipments are expected to increase in the second half of fiscal 2025. Hopper supply and availability have improved. Demand for Blackwell platforms is well above supply, and we expect this to continue into next year”. “NVIDIA H200 platform began ramping in Q2, shipping to large CSPs, consumer Internet, and enterprise companies. The NVIDIA H200 builds upon the strength of our Hopper architecture and offering over 40% more memory bandwidth compared to the H100”. Currently, Nvidia sells chips on the Hopper platform (H100 and H200); the next platform will be Blackwell, and the one after that Rubin.
2) The next source of AI demand is "Sovereign AI" - nothing new, but Nvidia has put numbers on it for the first time: “Our sovereign AI opportunities continue to expand as countries recognize AI expertise and infrastructure as national imperatives for their society and industries. Japan's National Institute of Advanced Industrial Science and Technology is building its AI Bridging Cloud Infrastructure 3.0 supercomputer with NVIDIA. We believe sovereign AI revenue will reach low double-digit billions this year.” 3) According to Nvidia, generative AI is accelerating: more scale and compute; image generation; coding; general robotics; physical AI; synthetic data generation. “And there are several things that are happening in generative AI. So, the first thing that's happening is the frontier models are growing in quite substantial scale. (…) And so, it's not unexpected to see that the next-generation models could take 10, 20, 40 times more compute than last generation. (…) The second is although it's below the tip of the iceberg, what we see are ChatGPT image generators. We see coding. We use generative AI for coding quite extensively here at NVIDIA now. And then general robotics. The big transformation last year as we are able to now learn physical AI from watching video and human demonstration and synthetic data generation from reinforcement learning from systems like Omniverse, we are now able to work with just about every robotics companies now to start thinking about, start building general robotics. And so, you can see that there are just so many different directions that generative AI is going. And so, we're actually seeing the momentum of generative AI accelerating”. 4) The world is building about $1 trillion worth of data centers NOW! Jensen Huang: “(…) remember, the world is moving from general-purpose computing to accelerated computing. And the world builds about $1 trillion worth of data centers. $1 trillion worth of data centers in a few years will be all accelerated computing. In the past, no GPUs are in data centers, just CPUs. In the future, every single data center will have GPUs”. 5) Blackwell is a game changer. “Blackwell is going to be a complete game changer for the industry. And Blackwell is going to carry into the following year. (…) remember that computing is going through two platform transitions at the same time. (…) which is general-purpose computing is shifting to accelerated computing, and human-engineered software is going to transition to generative AI or artificial intelligence-learned software”. “The Blackwell vision took nearly five years and seven one-of-a-kind chips to realize: the Grace CPU, the Blackwell dual GPU and a colos package, ConnectX DPU for East-West traffic, BlueField DPU for North-South and storage traffic, NVLink switch for all-to-all GPU communications, and Quantum and Spectrum-X for both InfiniBand and Ethernet can support the massive traffic of AI”. 6) NVLink (5th generation) is a game changer. Nvidia’s webpage: “Unlocking the full potential of exascale computing and trillion-parameter AI models hinges on swift, seamless communication between every GPU within a server cluster. The fifth generation of NVIDIA® NVLink® is a scale-up interconnect that unleashes accelerated performance for trillion- and multi-trillion parameter AI models”. Jensen Huang: “This is a very big deal with its all-to-all GPU switch is game-changing. The Blackwell system lets us connect 144 GPUs in 72 GB200 packages into one NVLink domain, with an aggregate NVLink bandwidth of 259 terabytes per second in one rack. Just to put that in perspective, that's about 10x higher than Hopper. 259 terabytes per second kind of makes sense because you need to boost the training of multitrillion-parameter models on trillions of tokens”.

NVDA Review Q2 001.PNG
NVDA Review Q2 002.PNG
NVDA Review Q2 003.PNG

Nvidia Q3 2024 review

Nvidia Q3.png

22-Nov-2024. Nvidia Q3 2025 Earnings Review. Two days after the earnings release, Nvidia is down 2.7% (and -3.6% relative to S&P500). 
It seems that investors are skeptical about Nvidia’s continued rapid growth, just like they were a quarter earlier. Nevertheless, after the last 7 earnings releases, Nvidia has consistently outperformed the S&P500 through the quarterly earnings release date (outperforming the S&P500 by 3.7% to 44%). See Figure 1. From the bottom in October 2022 to today, Nvidia’s market cap has increased by 1152% (from $280 billion to $3.5 trillion). During the same period, Nvidia’s annualized quarterly revenue has increased by 536% (from $23.6 billion in Q3 2022 to $150.0 billion in Q4 2024). See Figure 2. For investors, the most important thing is the company's continued growth - in other words, how much longer can Nvidia grow at this pace... and indeed, quarterly (sequential) revenue growth has already slowed to about 7% in Q4 2024, but 7% quarterly is about 31% annually! For comparison, annual revenue growth for S&P500 companies was about 7% last quarter. Figure 3 shows Nvidia's revenue growth. How best to summarize the situation after the publication of Nvidia's next results? This is a further continuation of the tug-of-war between skeptical investors and optimistic Jensen Huang, Nvidia's CEO. The main arguments on both sides in the attached table.. enjoy!
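The point that 7% sequential growth is still about 31% annualized is simple compounding:

```python
# Compounding the ~7% sequential quarterly growth cited above into an annual rate.
q_growth = 0.07
annual = (1 + q_growth) ** 4 - 1   # four quarters of compounding
print(f"annualized: {annual:.0%}")  # about 31%
```

Even a "slowed" 7% per quarter is still more than four times the ~7% annual revenue growth of the average S&P500 company mentioned above.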

Nvidia Q3 001.png
Nvidia Q3 002.PNG
Nvidia Q3 003.png
Nvidia Q3 004.png

AI CAPEX UP!

NVDA 11-Feb 3DDD.png

12-Feb-2025. AI capex up - Nvidia down? How much 2 weeks can change… while releasing Q4 2024 earnings, the big techs surprised the market with increased AI capex spending plans for 2025. Bank of America: “In their first earnings since the introduction of DeepSeek R1 all major cloud hyperscalers pointed to a much stronger capex outlook as we had previewed. Our aggregate capex tracker now points to an increase of +32% YoY to $363bn in CY25, up from +22% YoY to $326bn just two weeks ago. Importantly, the mix of spend also continues to skew more towards servers (CPUs, GPUs, ASICs, etc.). (…) Particularly, we see the development of top-of-the-line frontier models (i.e. OpenAI, Meta models) to continue regardless of the derivative or “distilled” models from the likes of DeepSeek, and AI compute/networking remain important enablers of this AI golden age”. Capex from 2024 to 2025 (consensus projections, $bn): Alphabet from 52 to 73, Microsoft from 76 to 94, Amazon from 83 to 102, Meta from 39 to 62.5, Oracle from 11 to 16. And Nvidia? From the June 2024 peak of $135.6… down to $132.8 as of February 11, 2025. Figure 1 shows Nvidia's stock returns in the period between each quarterly earnings release… the current quarter would be the first negative one: counting from November 22, 2024 to yesterday, Nvidia's absolute and relative (to the S&P500) performance is negative. Table 1 shows the details. Nvidia releases results on February 26 this year after market close. BofA's price objective for Nvidia is $190, “based on 33x CY26E PE ex cash, within NVDA's historical 21x-67x forward year PE range”.
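The per-company YoY math behind those consensus figures can be sketched in a few lines. Note this covers only the five companies named above, so the subset total (~$348bn) sits below BofA's full $363bn tracker, which spans more firms:

```python
# Consensus capex projections quoted in the text, 2024 -> 2025, in $bn.
capex = {  # company: (2024, 2025E)
    "Alphabet":  (52.0,  73.0),
    "Microsoft": (76.0,  94.0),
    "Amazon":    (83.0, 102.0),
    "Meta":      (39.0,  62.5),
    "Oracle":    (11.0,  16.0),
}

for name, (y24, y25) in capex.items():
    print(f"{name:<9} {y24:>6.1f} -> {y25:>6.1f}  (+{y25 / y24 - 1:.0%})")

total24 = sum(v[0] for v in capex.values())
total25 = sum(v[1] for v in capex.values())
print(f"Subset total: {total24:.1f} -> {total25:.1f}  (+{total25 / total24 - 1:.0%})")
```

The subset grows about 33% YoY, consistent with BofA's +32% figure for the broader universe.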

NVDA 11-Feb 001.PNG
NVDA 11-Feb 002.PNG

Nvidia Q3 2026 Review

Nvidia Q3.png

28-Feb-2026. Nvidia's share price has effectively stopped rising since August 2025 (since 1 August 2025 it has been in a sideways trend around USD 180 per share), even though the company keeps reporting ever-higher earnings. The chart alongside captures this well. From 1 August 2025 to 27 February 2026 the share price rose just 2%, while forward EPS rose 35.3% and trailing EPS 35.2% over the same period! In other words, if the share price had kept pace with the growing earnings, we should be around USD 230-240 per share today… not below USD 180… When Nvidia published its Q3 2026 results (19 November 2025), it beat expectations… earnings (forward EPS and trailing EPS) rose strongly (see chart), yet the stock was in a downtrend (from USD 207 to USD 171, a cumulative decline of 17%). It is much the same today, after the results published on 25 February… the share price started falling sharply, and after two days of cash-market trading we are down 9.4% (sic!)… but this time the scale of the beat over market expectations was remarkable! See the text data below the chart. Adding the revenue BEAT on reported results to the BEAT on the company's guidance for the next quarter gives a 12% BEAT (25-Feb-2026), an 8.8% BEAT (19-Nov-2025) and a 3.6% BEAT (27-Aug-2025) - which is super interesting: the bigger the BEAT, the bigger the post-earnings price decline! By that logic Nvidia should consider a MISS… to secure a post-earnings rally. After 30 years of professional asset management, I would venture that people do not aggressively sell the shares of such a company after such results… after a third consecutive quarter of strong BEATs, and with the stock flat for almost 7 months… Of course, searching ex post for reasons why Nvidia's price is falling so fast has little value.
Within a few days we will be handed the best-fitting explanation for the post-earnings price action… but I doubt we will find even one comment on what will happen to the price after the next results are announced on 20 May 2026! For example, if there is another >10% BEAT… will the price fall / stay flat / rise? That would be added value… The next slide shows the same chart, but starting from 1 January 2024. What next? Three things: (1) the minimum one can expect from the share price is that it follows the growing EPS… the optimists and the patient have the best chance of winning; (2) on 16 March this year Nvidia's CEO delivers the keynote presentation at Nvidia GTC 2026 - as usual, several significant announcements can be expected on the latest technologies and the company's continued rapid business growth; (3) Nvidia's next quarterly results come on 20 May this year. During the earnings call, Nvidia's CEO particularly emphasized that "compute = revenue" and that we have just experienced "the agentic AI inflection moment". Analyst question: when you look at your top cloud customers, cloud CapEx close to $700 billion this year, many investors are concerned that it would be harder for this level to grow into next year (…) but how confident are you about your customers' ability to continue to grow their CapEx? And if their CapEx doesn't grow, can NVIDIA still find a way to grow in that envelope? Jen-Hsun Huang, CEO: I am confident in their cash flow growing. And the reason for that is very simple. We have now seen the inflection of agentic AI and the usefulness of agents across the world and enterprises everywhere. You're seeing incredible compute demand because of it. In this new world of AI, compute is revenues. Without compute, there's no way to generate tokens. Without tokens, there's no way to grow revenues. So in this new world of AI, compute equals revenues.
And I am certain that at this point with the productive use of Codex and Claude Code and the excitement around Claude Cowork and just the incredible enthusiasm about OpenClaw and the enterprise versions of them. All of the enterprise ISVs who are now working on agentic systems on top of their tools platforms. I am certain at this point that we are at the inflection point, we've reached the inflection point and we're generating profitable tokens that are productive for customers and profitable for the cloud service providers. In Nvidia's ecosystem, ISVs (Independent Software Vendors) have thus become a key channel for distributing and monetizing AI. Jen-Hsun Huang, CEO: "it's really important to realize that inference equals revenues now for our customers because agents are generating so many tokens, and the results are so effective. When the agents are coding, it's off generating thousands, tens of thousands, hundreds of thousands because they're running for minutes to hours. And so these systems, these agentic systems are spawning-off different agents, working as a team. The number of tokens that are being generated has really, really gone exponential. And so we need to inference at a much higher speed. And when you're inferencing at a much higher speed and each one of those tokens are dollarized, it directly translates into revenues. And so inference equals - inference performance equals revenues for our customers (…) the benefits that AI produces for the world ultimately has to generate revenues. And we're seeing right in front - right being developed as we see, as we stand here, agentic AI has turned an inflection point, and it literally happened in the last couple of 2, 3 months. Of course, inside the industry, we've been seeing it for a while, probably 6 months or so. But the world is now awakened to the agentic AI inflection. The agents are super smart. They're solving real problems. 
Coding is obviously supported by agentic systems now and all of our coders here at NVIDIA are using agentic systems, either Claude Code or OpenAI Codex, enormously to -- and oftentimes both and Cursor, oftentimes all 3, depends on the use case. But they have agents and co-design partners, engineering partners to help them solve problems. And you could see the revenue skyrocketing." Colette Kress, CFO: "Our demand profile is broad, diverse and expanding beyond just chatbots. First, there is a fundamental platform shift from classical machine learning to generative AI. Strong evidence of ROI as hyperscalers upgrade massive traditional workloads to generative AI, including search, ad generation and content recommender systems is encouraging our largest customers to accelerate their capital spending. For example, at Meta, advancements in their GEM model drove a 3.5% increase in ad clicks on Facebook and more than 1% gain in conversations on Instagram, translating into meaningful revenue growth. With the same NVIDIA infrastructure, Meta Superintelligence Labs can train and deploy their frontier agentic AI systems. Frontier agentic systems have reached an inflection point. Claude Code, Claude Cowork and OpenAI Codex have achieved useful intelligence. Adoption is skyrocketing and tokens are profitable, driving extreme urgency to scale up compute. Compute directly translate to intelligence and revenue growth." Colette Kress, CFO: "Every data center is power-constrained. Customers make critical architectural decisions based on performance per watt given these constraints and the need to maximize AI factory revenue. SemiAnalysis declared NVIDIA, Inference King, as recent results from InferenceX reinforced our inference leadership with GB300 NVL72, achieving up to 50x performance per watt and 35x lower cost per token compared with Hopper, and continuous optimization of CUDA software helped deliver up to 5x better performance on GB200 NVL72 just within 4 months. 
NVIDIA produces the lowest cost per token and data centers running on NVIDIA generate the highest revenues." Jen-Hsun Huang, CEO: "Now the thing that is -- the wave that we're seeing now is the agentic AI inflection, and the next inflection beyond that is physical AI, where we take AI and these agentic systems into the physical applications, such as manufacturing, such as robotics. And so that's a giant opportunity ahead." The chart alongside shows Nvidia's price reaction after each earnings release. The current second-day decline of 9.4% is the most negative reaction in the last 3 years! In general, there has been no significant positive market reaction over the last 7 quarters… The last strongly positive reactions came after Q1 2025 (published May 2025) and Q4 2024 (published February 2025). The table below the chart shows the cumulative return from the day of the earnings release (days counted as trading days).
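The "price should have tracked EPS" claim is easy to verify with the figures quoted in this entry. A minimal sketch, assuming the ~USD 180 price and the +2% / +35.3% moves given above (the 1-Aug-2025 starting price is backed out from the +2% move, so treat it as an approximation):

```python
# Figures quoted in the text for 1-Aug-2025 .. 27-Feb-2026.
price_now = 180.0      # ~USD per share, late Feb 2026
price_change = 0.02    # share price up only 2% over the period
eps_change = 0.353     # forward EPS up 35.3% over the same period

price_then = price_now / (1 + price_change)   # implied 1-Aug-2025 price
implied = price_then * (1 + eps_change)       # price had it tracked EPS growth

print(f"Implied price: ~{implied:.0f} USD")   # lands inside the 230-240 range
```

The result (~239 USD) matches the 230-240 range cited in the text.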

NVDA 28Feb.png
image.png
image.png
image.png

Nvidia and the IT Sector Are Cheap!

NVDA 11-Feb 3DDD.png

28-Mar-2026. It is not only the absolute valuations of the largest technology companies that are falling… their relative valuations are falling too, i.e. compared with other sectors or the market as a whole. In other words, the valuations of non-technology companies are not falling (or are falling more slowly). The left-hand chart alongside compares Nvidia's relative valuation (on forward PE) against the whole market, represented by the S&P500. Right now Nvidia is valued only 6% more expensively than the entire S&P500! Yet Nvidia's EPS will still grow 3-4 times faster than S&P500 EPS… in other words, Nvidia's price should rise roughly 3-4 times more than the S&P500 just to keep its relative valuation at today's level! The valuation decline is not limited to Nvidia. A similar, or even larger, decline applies to the whole technology sector (right-hand chart). The S&P500 Information Technology Sector Index consists of the 73 technology companies within the S&P500. The index's current valuation premium over the whole S&P500 is only 4%! It was not this low even during the March 2020 (pandemic) sell-off. Today's valuation is the lowest of the current decade. The current forward PE for the S&P500 is 19.05x, versus only 19.80x for the S&P500 Information Technology Sector Index. In other words, since 29-Oct-2025 we have had a "relative sell-off" of technology companies. Over that period the S&P500 Information Technology index fell 17.2% while its earnings (forward EPS) rose 32.4%. The S&P500, meanwhile, fell 7.5% while its earnings (forward EPS) rose 12.4%. The highest forward PE belongs to Tesla at 189x: Wall Street expects Tesla to deliver $5.77bn of operating profit over the next 4 quarters, and the company is valued at $1.37trn. The lowest forward PE belongs to Micron Technology at 4.1x: Wall Street expects Micron to deliver $115bn of operating profit over the next 4 quarters, and the company is valued at $0.41trn. Go figure!
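The "only 4% premium" figure follows directly from the two forward PE values quoted above; a minimal sketch of that arithmetic:

```python
def premium(pe_asset: float, pe_benchmark: float) -> float:
    """Relative valuation premium of an asset's PE over a benchmark's PE."""
    return pe_asset / pe_benchmark - 1.0

# Forward PEs quoted in the text: IT sector index 19.80x vs S&P500 19.05x
it_premium = premium(19.80, 19.05)
print(f"IT sector premium over S&P500: {it_premium:.1%}")  # -> 3.9%, i.e. "only 4%"
```

The same one-liner gives Nvidia's quoted ~6% premium for any pair of forward PE readings, which is why the premium chart can be tracked daily.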

image.png
image.png
bottom of page