News Situational Awareness - AGI by 2027

Amadeus

Majestic
Apr 20, 2024
1,042

I think it can't be overstated how important AI is and how relevant it is to NEETdom; by 2030 it's entirely possible we will all (as a society) be NEETs due to AI. There are a lot of leaks and drama around 'AI safety' at OpenAI (I don't believe in this tbh, it will only mean lobotomized models for poor consumers); hot off the press are some 'leaks' from ex-OpenAI superalignment researcher Leopold Aschenbrenner that lay out many insights from what he's observed in the AI space - the main conclusion being that AGI could be achieved by 2027.

The document is linked above, but to give you a brief summary (it's 165 pages long, so this is brief :feelsLUL:):
  • GPT-2 was released in 2019 and couldn't reliably count to 5. GPT-4, released in 2023, is smarter than most university undergrads. That jump represents a roughly 100,000x increase in effective compute, and he claims the same scale-up will happen again from 2023 to 2027 - another 100,000x or more (see the back-of-the-envelope sketch after this list).
  • Progress in deep learning has been so explosive that benchmarks keep getting saturated and have to be replaced with harder ones.
  • Improvements in algorithmic efficiency make the price of older models drop dramatically, so GPT-4o will cost peanuts by 2027 (perfect for use in video games 0o0).
  • Newer AI models are running out of training data because they are already trained on almost the entire internet - and most of the internet is junk text anyway. He claims improved sample efficiency will let models learn more from less data, and that these are the areas that will improve over the next four years:
    • Long-term memory.
    • Much better quality data-sets.
    • Ability to manipulate computer programs.
    • Critical thinking, scaffolding, and chain-of-thought reasoning (a minimal scaffold sketch follows this list).
    • The ability to spend weeks researching, thinking, and honing an answer rather than having to respond instantly.
    • Higher personalization; right now models are only trained on general datasets.
  • All of this results in a PhD+ level worker by 2027 that can introspect, plan, correct itself, and know everything about a corporation's history, workers, and files, responding in the best possible way. That could automate AI research itself, creating a positive feedback loop of self-improvement that drives vertical scientific progress and superintelligence. AI has already displayed emergent properties and creativity surpassing humans - see the AlphaGo model. This would lead to scientific breakthroughs no human could fathom, at an exponential pace.
  • Trillions of dollars are being poured into AI training clusters in the run-up to 2030 - see the rumored 'Stargate' project. That level of investment is the hard ceiling. Specialized AI chip development is VASTLY outperforming Moore's law, but he expects it to settle back to Moore's-law pace by 2030. If we don't reach AGI by 2030, we likely won't reach it for a long time after. Basically, 2024-2030 is the AI endgame.
  • GPT-4's training cluster drew as much electricity as about 10,000 homes. By 2030, training frontier AI models will require 20% of US electricity production. That demand can't easily be met under current US climate regulations, so there is pressure to source power from the Middle East instead; either way, he argues power won't be the bottleneck people anticipate (rough arithmetic on the scale follows this list).
  • There is a huge incentive for governments to use AI for war, and these datacenters currently have almost no security. He calls for military-grade super-security.
  • Current AI models are aligned via reinforcement learning from human feedback on what is good and bad (a toy sketch of that preference step follows this list). A superintelligence cannot be properly aligned by humans this way. The main competitors right now are the USA and China, with major national security risks and potential for conflict.
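
To sanity-check the scale-up claim in the first bullet, here is the implied growth rate as a quick back-of-the-envelope calculation. This is my own arithmetic on the post's numbers, not a figure from the document:

```python
# Back-of-the-envelope: what annual growth does a 100,000x scale-up
# over four years (GPT-2 in 2019 -> GPT-4 in 2023) imply, and where
# does repeating that rate land by 2027?

total_scaleup = 1e5                      # claimed GPT-2 -> GPT-4 increase
years = 2023 - 2019                      # four years

annual = total_scaleup ** (1 / years)    # ~17.8x per year (~1.25 OOMs/year)
projected = annual ** (2027 - 2023)      # same rate for four more years

print(f"implied growth: {annual:.1f}x per year")
print(f"2023 -> 2027 at the same rate: {projected:,.0f}x")   # ~100,000x again
```

So "another 100,000x" is just the same ~1.25 orders of magnitude per year continued for four more years; the whole argument rests on that trend holding.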

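On the scaffolding and chain-of-thought point: in practice this just means wrapping the model in a loop that tells it to reason step by step, lets it request tool calls, and feeds the results back in before it answers. A minimal sketch of such a scaffold, assuming a generic LLM backend (`query_model`, the tool set, and the reply format here are hypothetical placeholders, not any real API):

```python
# Minimal agent-scaffold sketch: the model reasons step by step, may
# request a tool call, and sees the tool's output before answering.

def query_model(prompt: str) -> str:
    # Hypothetical stand-in: replace with a real LLM API call.
    # Canned replies so the sketch runs end-to-end without any API.
    if "RESULT:" in prompt:
        return "ANSWER: it's 42"
    return "TOOL:calculator:6*7"

def calculator(expression: str) -> str:
    # Toy tool: evaluate an arithmetic expression.
    # (eval is unsafe in real code; fine for a sketch.)
    return str(eval(expression, {"__builtins__": {}}, {}))

TOOLS = {"calculator": calculator}

def scaffolded_answer(question: str, max_steps: int = 5) -> str:
    transcript = (
        "Think step by step. To use a tool, reply exactly with\n"
        "TOOL:<name>:<input>. When finished, reply with ANSWER:<text>.\n\n"
        f"Question: {question}\n"
    )
    for _ in range(max_steps):
        reply = query_model(transcript)
        transcript += reply + "\n"
        if reply.startswith("ANSWER:"):
            return reply[len("ANSWER:"):].strip()
        if reply.startswith("TOOL:"):
            _, name, arg = reply.split(":", 2)
            result = TOOLS[name](arg)             # run the requested tool
            transcript += f"RESULT: {result}\n"   # feed the result back in
    return "no answer within step budget"

print(scaffolded_answer("What is 6*7?"))   # -> it's 42
```

The "weeks of thinking" bullet is essentially the same loop with a much bigger step budget and persistent memory between sessions.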
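
And some rough arithmetic on the electricity bullet, using ballpark figures I'm assuming (an average US home draws ~1.2 kW; the US grid generates ~4,200 TWh/year), not numbers from the document:

```python
# Rough scale check on the electricity claims.

home_avg_kw = 1.2                       # assumed average US household draw
us_generation_twh = 4200                # assumed US annual generation

gpt4_cluster_mw = 10_000 * home_avg_kw / 1000      # ~12 MW
us_average_gw = us_generation_twh * 1000 / 8760    # TWh/year -> GW, ~480 GW
target_gw = 0.20 * us_average_gw                   # ~96 GW for AI training

print(f"GPT-4-scale cluster: ~{gpt4_cluster_mw:.0f} MW")
print(f"20% of US electricity: ~{target_gw:.0f} GW average power")
print(f"that's ~{target_gw * 1000 / gpt4_cluster_mw:,.0f} GPT-4 clusters")
```

In other words the claim amounts to roughly an 8,000x jump over a GPT-4-scale cluster by 2030, which is why he spends so much of the document on power buildout.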

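On the last bullet: "reinforcement learning from human feedback" concretely means training a reward model on human preference comparisons and then optimizing the AI against that reward. A toy sketch of the preference-learning step, assuming a bare-bones linear reward model (all names here are illustrative, not from any real library):

```python
import math

# Toy RLHF ingredient: a reward model learns from a human preference.
# Humans preferred answer A over answer B, so we nudge a linear reward
# model to score A higher (Bradley-Terry / logistic preference loss).

def score(w, features):
    # Linear reward model: reward = w . features
    return sum(wi * xi for wi, xi in zip(w, features))

def preference_step(w, feats_win, feats_lose, lr=0.1):
    margin = score(w, feats_win) - score(w, feats_lose)
    p_win = 1 / (1 + math.exp(-margin))      # P(model agrees with humans)
    step = lr * (1 - p_win)                  # gradient of -log p_win
    return [wi + step * (a - b)
            for wi, a, b in zip(w, feats_win, feats_lose)]

w = [0.0, 0.0]
for _ in range(50):                          # repeated human comparisons
    w = preference_step(w, feats_win=[1.0, 0.2], feats_lose=[0.1, 0.9])
print(w)   # weights drift toward the features humans rewarded
```

His point is that this whole setup assumes humans can tell good answers from bad ones, which stops working once the model is smarter than its raters.
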
The timeline he 'leaks' is:
  • 2025/2026 - AI better than most university students, able to replace administrative and clerical workers.
  • 2027 - PhD-level AI, able to replace researchers and engineers. AGI achieved.
  • 2028/2029 - Intelligence explosion: AGI rapidly improves itself, leading to superintelligence.
  • 2030 - ASI achieved.
  • Beyond 2030 - Exponential technological progress driven by AI that humans cannot comprehend.

In short, the article extrapolates from current trends and makes ambitious predictions about AGI based on the huge amount of investment being pumped into the field. It also argues that compute and power bottlenecks are unlikely to be the issue; the bigger risks are a potential war over AGI and the alignment problem. To be honest it reads like a giddy sci-fi nerd wrote it, and I noticed some unprofessional Gen-Z slang :mmm:, so I would take it with a grain of salt. Still, I broadly agree with a lot of his reasoning and it's a good read; society will be unrecognizable by 2030 even if AI only improves moderately, just from software integration and specialized use-cases.
 
WestEuropoor

Yes sir, i can boogie!
Oct 7, 2022
5,960

Giga based. I want hyper intelligent AI AND I WANT IT NOW
 
Levi

coalposter
Sep 25, 2022
936
agi would be terrible for the super rich, no need for workers, no slaves, no power, communism
 