Nvidia surpasses $3 trillion in market capitalization to become more valuable than Apple



WWW.CNBC.COM

Investors are becoming more comfortable that Nvidia's huge growth in sales to a handful of cloud companies can persist.

 

Quote

 

Nvidia passed Apple in market cap on Wednesday as investors continue betting on the chipmaker behind the artificial intelligence boom. It is now the second-most valuable public company, behind Microsoft.

 

Nvidia also hit a $3 trillion market cap milestone on Wednesday after shares rose over 5%. At market close, Nvidia had a market value of $3.019 trillion, versus Apple’s, which stood at $2.99 trillion. Microsoft is the most valuable publicly traded company, with a market cap of $3.15 trillion, as of Wednesday.

 

Nvidia shares have risen more than 24% since the company reported first-quarter earnings in May and have been on a tear since last year. The company has an estimated 80% market share in AI chips for data centers, which are attracting billions of dollars in spending from big cloud vendors.

Investors are also becoming more comfortable that Nvidia’s huge growth in sales to a handful of cloud companies can persist. For the most recent quarter, revenue in its data center business, which includes its GPU sales, rose 427% from a year earlier to $22.6 billion, about 86% of the company’s overall sales.

 

 

Link to comment
Share on other sites

Nearly bought Nvidia several times, but never did. Truly remarkable, and congratulations to those who did. 
 

This momentum will not continue at this pace for much longer. I can’t see them topping $7 trillion in the next 2 years. My guess is the increases will slow down. 


9 hours ago, chakoo said:

I wonder if there is a point where Nvidia makes so much from AI that they give up on gaming because it would make them more money by not bottlenecking production facilities.

 

This is a legitimate concern if you are a PC gamer. Their gaming GPUs are now a very small part of their revenue and an even smaller portion of their earnings, and the ratio will likely only continue in one direction. At some point they might just wash their hands of it. The only saving grace I can see would be Jensen carrying on just because he genuinely likes them, and at this point no one is going to question him.


11 hours ago, legend said:

I am begging AMD to compete on AI.

You have to wonder how much Sony is kicking themselves for not keeping up with their crazy architecture on the side. 


48 minutes ago, chakoo said:

You have to wonder how much Sony is kicking themselves for not keeping up with their crazy architecture on the side. 


Not all that much. Crazy architectures mean more work because there are fewer standards and libraries that can be applied, and the cost outweighs the benefits. For example, back in the day I took a class on programming for the Cell processor and did a simple ML application. I got the Cell version to scale really well with SPUs, and it blew standard CPUs out of the water. I even posted about it on the IGN boards! But the work was enormous in comparison, and game dev would only be harder given time constraints and human bandwidth. So the exotic architectures aren't really worth it in that regard. 
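To give a flavor of what that enormous work looked like: SPU code made you manually carve your data into chunks, ship each chunk to a worker, and recombine partial results yourself. This is a minimal sketch of that decomposition pattern in plain Python (not Cell SDK code; `parallel_dot` and the chunking scheme are just illustrative):

```python
# A minimal sketch (plain Python, not Cell SDK code) of the manual
# decomposition pattern SPU programming forced on you: split the data,
# hand each worker its own chunk, then combine the partial results.
from concurrent.futures import ThreadPoolExecutor

def partial_dot(chunk):
    # Each "worker" computes a dot product over its own slice only.
    a, b = chunk
    return sum(x * y for x, y in zip(a, b))

def parallel_dot(a, b, workers=4):
    # Carve the vectors into one contiguous chunk per worker.
    step = (len(a) + workers - 1) // workers
    chunks = [(a[i:i + step], b[i:i + step]) for i in range(0, len(a), step)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_dot, chunks))

print(parallel_dot(list(range(8)), list(range(8))))  # prints 140
```

On the real hardware you'd also be writing the DMA transfers and fitting each chunk into an SPU's 256 KB local store by hand, which is where most of the pain came from.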
 

For the PlayStations, the fact that AMD isn't competing with Nvidia on AI isn't as bad either. GPUs are already pretty good out of the box for model inference. Yes, tensor cores are nice and give a bump, but the disparity for inference in terms of raw compute isn't that great in the grand scheme of things, especially since on a fixed platform you can hyper-optimize if you need to. 
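The reason inference travels well across vendors is that a forward pass is mostly plain matrix multiplies plus a cheap nonlinearity: tensor cores accelerate the same math, but any part with decent raw FLOPS can run it. A toy pure-Python stand-in (the weights below are made up for illustration):

```python
# Sketch: inference = alternating matrix multiplies and activations,
# nothing exotic. Any GPU with decent raw FLOPS can serve this; tensor
# cores speed up the same math without changing its shape.
# Toy weights below are invented for illustration only.
def matvec(W, x):
    # One dense layer: each row of W dotted with the input vector.
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def relu(v):
    return [max(0.0, a) for a in v]

def forward(layers, x):
    # Run the input through each layer's matmul + ReLU in sequence.
    for W in layers:
        x = relu(matvec(W, x))
    return x

layers = [[[1.0, -1.0], [0.5, 0.5]],  # 2x2 hidden layer
          [[2.0, 0.0]]]               # 1x2 output layer
print(forward(layers, [3.0, 1.0]))  # prints [4.0]
```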
 

The real disparity is in the training hardware: data center GPUs for AI and the accelerated libraries for training. This doesn't matter much for the PlayStation itself, since you're not going to use a PS for training. 


3 hours ago, legend said:

Not all that much. Crazy architectures mean more work because there are fewer standards and libraries that can be applied, and the cost outweighs the benefits. For example, back in the day I took a class on programming for the Cell processor and did a simple ML application. I got the Cell version to scale really well with SPUs, and it blew standard CPUs out of the water. I even posted about it on the IGN boards! But the work was enormous in comparison, and game dev would only be harder given time constraints and human bandwidth. So the exotic architectures aren't really worth it in that regard. 
 

For the PlayStations, the fact that AMD isn't competing with Nvidia on AI isn't as bad either. GPUs are already pretty good out of the box for model inference. Yes, tensor cores are nice and give a bump, but the disparity for inference in terms of raw compute isn't that great in the grand scheme of things, especially since on a fixed platform you can hyper-optimize if you need to. 
 

The real disparity is in the training hardware: data center GPUs for AI and the accelerated libraries for training. This doesn't matter much for the PlayStation itself, since you're not going to use a PS for training. 

 

From a gaming perspective, I agree, but from a HW-pushing perspective, I disagree. I think if Sony had kept pushing R&D on their crazy HW architecture they would be a big player in AI right now. Sony was pushing teams to learn how to work in parallel processing to get the most out of the hardware slightly before CUDA came around as an alternative. I also know the joy/pain of working with the Cell processor. I built some frameworks at work for us to use the SPUs, but we ended up not doing anything with them since the game was primarily a Vita game. I actually enjoyed it more than I hated it, but I also spent most of my time working on pipelines and rarely on gameplay, so it wasn't hard for me to wrap my head around it.

 

 


21 minutes ago, chakoo said:

 

From a gaming perspective, I agree, but from a HW-pushing perspective, I disagree. I think if Sony had kept pushing R&D on their crazy HW architecture they would be a big player in AI right now. Sony was pushing teams to learn how to work in parallel processing to get the most out of the hardware slightly before CUDA came around as an alternative. I also know the joy/pain of working with the Cell processor. I built some frameworks at work for us to use the SPUs, but we ended up not doing anything with them since the game was primarily a Vita game. I actually enjoyed it more than I hated it, but I also spent most of my time working on pipelines and rarely on gameplay, so it wasn't hard for me to wrap my head around it.

 

 

 

I'm not sure Sony could have justifiably funded it at the time. It's a huge financial effort to push novel parallel architectures, and it's complicated when you're also forced to work with various other companies (e.g., IBM for Cell). Nvidia had an existing business model for chips that were already usable in standard PCs (which was critical to capturing the AI market), and it was fortuitous that that existing business model worked well for accelerating AI. That allowed them to grow into it organically and sustainably. Sony's main business model that would have supported it was the PS, but as we both agree, you couldn't really have justified it there, making it nothing more than a cost they'd have to bear. Even in retrospect, knowing that AI blew up, it's unclear that there would have been a viable business model for Sony to go from where things were to being a big player in the AI hardware space.

 

The only good alternative candidates where, in retrospect, you could easily justify it are AMD and perhaps Intel. Both had clearer organic paths to it, in the same way Nvidia did.


4 hours ago, legend said:

The only good alternative candidates where, in retrospect, you could easily justify it are AMD and perhaps Intel. Both had clearer organic paths to it, in the same way Nvidia did.

 

This reminds me of how there was a time not that long ago when AMD easily had the fastest ARM CPUs on the market, but gave up on ARM development because I guess nobody wants ARM in the datacenter? I still have an AMD ARM pizza box sitting in my server room, being babied, because there is no serious not-Ampere alternative on the market. I also have a ThunderX2 server because I thought they'd be a major player, but nope, they disappeared too.


2 hours ago, Reputator said:

Did Sony actually design it? I thought it was primarily IBM's.

 

It was certainly primarily IBM. I'm pretty sure Sony was involved in the design, but I honestly don't remember the details as well as I used to. Either way, that's exactly what I mean by it being more complicated with other companies involved.

 

EDIT:

Wikipedia at least indicates that it was a joint design effort, also with Toshiba.

EN.WIKIPEDIA.ORG
Quote

In mid-2000, Sony Computer Entertainment, Toshiba Corporation, and IBM formed an alliance known as "STI" to design and manufacture the processor.[8]

 


25 minutes ago, legend said:

 

It was certainly primarily IBM. I'm pretty sure Sony was involved in the design, but I honestly don't remember the details as well as I used to. Either way, that's exactly what I mean by it being more complicated with other companies involved.

 

EDIT:

Wikipedia at least indicates that it was a joint design effort, also with Toshiba.

EN.WIKIPEDIA.ORG

 

What an unfortunate acronym 


As mentioned, it was a joint effort, and Sony did contribute a lot to the design. There were initially plans to have dual Cell chips in the system in place of a CPU/GPU combo.

 

7 hours ago, legend said:

 

I'm not sure Sony could have justifiably funded it at the time. It's a huge financial effort to push novel parallel architectures, and it's complicated when you're also forced to work with various other companies (e.g., IBM for Cell). Nvidia had an existing business model for chips that were already usable in standard PCs (which was critical to capturing the AI market), and it was fortuitous that that existing business model worked well for accelerating AI. That allowed them to grow into it organically and sustainably. Sony's main business model that would have supported it was the PS, but as we both agree, you couldn't really have justified it there, making it nothing more than a cost they'd have to bear. Even in retrospect, knowing that AI blew up, it's unclear that there would have been a viable business model for Sony to go from where things were to being a big player in the AI hardware space.

 

The only good alternative candidates where, in retrospect, you could easily justify it are AMD and perhaps Intel. Both had clearer organic paths to it, in the same way Nvidia did.

 

I don't fully agree. Sony is a large company and has funded a lot of R&D that wasn't always practical initially but did eventually make it into products. But I feel we'll go around in circles on this. :)

 

I also agree that AMD is probably the best other alternative, but I would put the likes of Qualcomm and others over Intel, and to an extent IBM if they were still in chips.


15 hours ago, chakoo said:

You have to wonder how much Sony is kicking themselves for not keeping up with their crazy architecture on the side. 

Vydea owns CUDA. I've seen this shit coming since the uni labs in like 2001.


2 hours ago, unogueen said:

GeForce 2 GTS on ALL lab PCs??? Someone made it their home.

 

You foresaw the dominance of NVIDIA in HPC world in 2001 because some system builder put the same GPU in every lab PC?

 

Time to put down the bong.


19 hours ago, chakoo said:

 

From a gaming perspective, I agree, but from a HW-pushing perspective, I disagree. I think if Sony had kept pushing R&D on their crazy HW architecture they would be a big player in AI right now. Sony was pushing teams to learn how to work in parallel processing to get the most out of the hardware slightly before CUDA came around as an alternative. I also know the joy/pain of working with the Cell processor. I built some frameworks at work for us to use the SPUs, but we ended up not doing anything with them since the game was primarily a Vita game. I actually enjoyed it more than I hated it, but I also spent most of my time working on pipelines and rarely on gameplay, so it wasn't hard for me to wrap my head around it.

 

 

“I think” is doing a lot of heavy lifting in that paragraph. 


2 hours ago, Reputator said:

 

You foresaw the dominance of NVIDIA in HPC world in 2001 because some system builder put the same GPU in every lab PC?

 

Time to put down the bong.

 

Also, this was before CUDA existed.

 

Yes, yes, I too saw the inevitable current state of AI back when I was making GBA games. 

:duckhuntdog:


1 minute ago, BloodyHell said:

“I think” is doing a lot of heavy lifting in that paragraph. 

 

The PS3 isn't some fluke in design. You can see where Sony was focusing on parallelized operations, as it's a core part of the VU design on the PS2.

