News Earth Simulator, NIMs, RTX AI, Rubin - NVIDIA Computex June 2024

  • Thread starter Deleted member 2206
Deleted member 2206

NEET
Apr 20, 2024
1,141


NVIDIA is one of the biggest players in the AI revolution. The world was SHOCKED and STUNNED, as it always is with these announcements, during their March keynote earlier this year, in which they announced...
  • A new, incredibly powerful 'Blackwell' chip: roughly 2x to 30x the performance of the previous generation depending on the workload, at up to 25x lower energy use and cost. These can be used to run trillion-parameter models, and the corporations developing AI are already adopting them.
  • Project GR00T: a foundation model for humanoid robots that allows them to mimic human movement by literally just watching people.
  • Development of a new AI-powered 6G research cloud platform.
  • Among other things...
Three months later, in the heat of June, we get another slew of very interesting announcements, such as...
  • Earth-2 - an Earth simulator that uses AI to predict climate patterns at very high resolution.
  • NVIDIA Inference Microservices (NIMs) - pretrained, specialized AI models packaged behind standard software interfaces, able to run anywhere CUDA runs (see the sketch after this list).
  • ACE Digital Humans - digital humans running on the NIM framework that integrate several AI systems, similar to GPT-4o: they can take in sight and sound and respond with AI-generated replies and speech. These digital humans will have AI avatars and agentic capabilities, potentially replacing human workers.
  • Further chip developments - alongside more information about the new 'Blackwell' chips releasing this year, they promise 'Blackwell Ultra' chips for 2025 and 'Rubin' chips for 2026, with ambitious computational capabilities.
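Quick sketch of why the NIM thing matters to devs: as I understand it, each NIM container exposes a standard OpenAI-compatible endpoint, so calling a locally deployed one looks roughly like this. The port and model name below are just examples from what I've read, not gospel; check whatever container you actually pull.

```python
# Rough sketch of calling a locally deployed NIM, assuming the
# OpenAI-compatible endpoint NVIDIA describes. Port and model identifier
# are examples only; they depend on which NIM container you run.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",   # example: NIM served on localhost
    api_key="not-needed-for-local",        # local containers don't check this
)

resp = client.chat.completions.create(
    model="meta/llama3-8b-instruct",       # example NIM model name
    messages=[{"role": "user", "content": "Summarize NVIDIA's Computex 2024 announcements."}],
    max_tokens=256,
)
print(resp.choices[0].message.content)
```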
All in all, even if we don't get AGI by 2030, I still believe there will be cataclysmic effects felt by everyone by then, enough to require UBI. We are not even halfway through the decade and we already have AI with agentic capabilities, super-duper serious automation opportunities for business owners to salivate over, the emergence of autonomous robots that can learn by observing humans, rapid chip development, and Earth simulators coming into play.
 
クーロ

Uthman dan Fodio II
Jan 23, 2024
5,216
Project GR00T: a foundation model for humanoid robots that allows them to mimic human movement
They are being very vague on it
But I guess it uses video data for imitation learning and then builds on that with reinforcement learning on NVIDIA's GPUs? They're using a lot of acronyms I don't get, lmao. Looks interesting though, considering these models usually take a ton of power to train.
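To make the guess concrete, something like this toy two-stage loop is what I picture: behaviour cloning on demo data first, then a crude reward-weighted pass. Completely made up by me, not anything NVIDIA has shown; all the dimensions and the reward are fake.

```python
# Toy sketch of "imitation then RL" (my guess, NOT NVIDIA's actual pipeline).
# Stage 1: behaviour cloning on human demonstration data.
# Stage 2: nudge the same policy with a simple reward-weighted update.
import torch
import torch.nn as nn

# Hypothetical shapes: an "observation" is a flattened video feature,
# an "action" is a vector of joint targets for the humanoid.
OBS_DIM, ACT_DIM = 512, 32

policy = nn.Sequential(
    nn.Linear(OBS_DIM, 256), nn.ReLU(),
    nn.Linear(256, ACT_DIM),
)
opt = torch.optim.Adam(policy.parameters(), lr=1e-4)

# --- Stage 1: imitation learning (behaviour cloning) on demo data ---
demo_obs = torch.randn(1024, OBS_DIM)   # stand-in for video-derived features
demo_act = torch.randn(1024, ACT_DIM)   # stand-in for human motion targets
for _ in range(100):
    loss = nn.functional.mse_loss(policy(demo_obs), demo_act)
    opt.zero_grad(); loss.backward(); opt.step()

# --- Stage 2: crude RL-style refinement (reward-weighted regression) ---
# Roll the policy in a (fake) simulator, weight actions by how well they
# scored, and regress toward the good ones a little harder.
sim_obs = torch.randn(256, OBS_DIM)
with torch.no_grad():
    sim_act = policy(sim_obs) + 0.1 * torch.randn(256, ACT_DIM)  # explore
    reward = -sim_act.pow(2).mean(dim=1)                         # toy reward
    weight = torch.softmax(reward, dim=0).unsqueeze(1)
for _ in range(50):
    loss = (weight * (policy(sim_obs) - sim_act).pow(2)).sum()
    opt.zero_grad(); loss.backward(); opt.step()
```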
 
MelaninQueen

🫘 I don't want to discuss violence anymore.
Feb 19, 2024
16,635
Nvidia engineers have big brains, but... only 28GB of VRAM on the 5090!? Dude, I want to run a finetuned Llama 3 70B model without having to spend two months of a Western salary on an SLI setup.
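For context, this is roughly the 4-bit route (transformers + bitsandbytes) people use to squeeze a 70B onto consumer cards, and even then the weights alone are ~35GB, so a 24-28GB card still has to spill layers to CPU RAM. The model ID is just an example.

```python
# Sketch of loading a 70B model in 4-bit with transformers + bitsandbytes.
# The model ID is an example (and gated); any finetune loads the same way.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "meta-llama/Meta-Llama-3-70B-Instruct"

bnb = BitsAndBytesConfig(
    load_in_4bit=True,                      # ~0.5 byte per weight instead of 2
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb,
    device_map="auto",      # spills layers to CPU RAM if the GPU is too small
)

prompt = "Explain why a 70B model struggles to fit in 24GB of VRAM."
inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(tok.decode(out[0], skip_special_tokens=True))
```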
 

Deleted member 2206

NEET
Apr 20, 2024
1,141
Updates:

Yeah, we keep getting new updates every two weeks or so. Two weeks ago it was Google and their big announcement, and around that time GPT-4o as well. Now NVIDIA. I don't see how anyone can claim we are stagnating or in an AI winter. Feels like we're heading toward a Cyberpunk 2030s.
 
Last edited:
Reactions: RNT

Deleted member 2206

NEET
Apr 20, 2024
1,141
Nvidia engineers have big brains, but... only 28GB of VRAM on the 5090!? Dude, I want to run a finetuned Llama 3 70B model without having to spend two months of a Western salary on an SLI setup.
I am too poor :eek:
The best I can run locally right now is 10.7B models; if I want something with actual intelligence I have to hook up an Anthropic/OpenAI API. Although 10.7B models are good enough for roleplay, at least.
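What I mean by hooking up an API is basically this kind of fallback (sketch only; the Claude model name and my local wrapper are placeholders, not a recommendation):

```python
# Sketch of a "small local model, big hosted model" fallback.
# The local wrapper and the Claude model name are placeholders.
import os
from anthropic import Anthropic

client = Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])

def run_local_model(prompt: str) -> str:
    # stand-in for whatever serves the 10.7B model locally (e.g. a llama.cpp server)
    return "(local 10.7B reply to: " + prompt + ")"

def ask(prompt: str, need_actual_intelligence: bool = False) -> str:
    if not need_actual_intelligence:
        return run_local_model(prompt)
    msg = client.messages.create(
        model="claude-3-opus-20240229",     # example model name
        max_tokens=512,
        messages=[{"role": "user", "content": prompt}],
    )
    return msg.content[0].text

print(ask("Write a short scene for my roleplay."))
print(ask("Explain mixture-of-experts routing.", need_actual_intelligence=True))
```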
 
MelaninQueen

🫘 I don't want to discuss violence anymore.
Feb 19, 2024
16,635
I am too poor :eek:
The best I can run locally right now is 10.7B models; if I want something with actual intelligence I have to hook up an Anthropic/OpenAI API. Although 10.7B models are good enough for roleplay, at least.
I wish I could run WizardLM 70B with large context on my GPU... but I only have 24GB.
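Napkin math on why, ignoring the KV cache and runtime overhead (rules of thumb, not exact numbers):

```python
# Back-of-the-envelope VRAM needed just for the weights of a 70B model.
params = 70e9
bytes_per_weight = {"fp16": 2.0, "8-bit": 1.0, "4-bit": 0.5}

for name, b in bytes_per_weight.items():
    gb = params * b / 1024**3
    print(f"{name:>5}: ~{gb:.0f} GB for weights alone")

# fp16 : ~130 GB
# 8-bit: ~65 GB
# 4-bit: ~33 GB  -> still more than a 24 GB (or 28 GB) card, before any context
```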
 