Deleted member 2206
NEET
- Apr 20, 2024
It's a narrative that's prevalent now in the comment sections of tech channels, in amateur coder circles and even among the general public. Suddenly, thinking AI could be transformative and preparing for a torrent of unemployment in the near future is a crazy tech-bro idea, and the mainstream opinion is that AI is a glorified search engine that's actually really dumb dumb bubblegum stupid and can't do shit. I can see why; there's palpable terror management going on. I get it, people are afraid of automation, but to prepare for huge social change you at least have to acknowledge it. People just aren't putting two and two together.
The biggest issues right now are adoption, energy costs and censorship. Proprietary models are being covertly dumbed down to cope with massive surges in demand; people have a transient hatred toward AI and are actively sabotaging development by trying to bend copyright law to cover model training. People just don't want AI to succeed, but honestly, do they truly not see that without AI we are headed toward a collapsing society?
e/acc is the solution and, in my opinion, the only feasible one. We are facing demographic collapse, decay of the prefrontal cortex and higher social/moral function among youths, fertility plummeting due to pesticides and lifestyle changes, unsustainable mass immigration, unprecedented endocrine disruption, IQ decline, a mental health crisis, a housing unaffordability crisis, wealth inequality at unprecedented levels, unsustainable competition, stagnant wages, resource depletion, nitrate and topsoil depletion accompanied by nutrient decline, an ocean acidity crisis, an antibiotic-resistance epidemic, zoonotic viral leaps spurred by environmental collapse, global warming(?), increases in cancers and dementia and disease in general, rising pollution and the addiction economy, to name a few ills. What does blud think is going to happen without an extreme scientific revolution and growth stimulus? 'Oh no, AI will automate too many jobs, oh no, AI will make the rich richer' - they're going to get richer anyway; they're going to force us into even more dire straits anyway. I said it. At the very least, if AI promotes such extreme growth and revolution that the current social paradigm makes no sense, there is potential for the system to be dismantled somewhat and for at least some resources in a world of hyper-abundance to be redistributed. Without extremely rapid explosions of current scientific trends we are most likely facing extinction within several decades. Any sane human should have a ballz-to-the-wall, gunz-blazing, 'yeah, let's fucking do this' attitude.
"Hurr durr AI is just a next-token predictor brooo it's like Google or something!!! It's just pretending to be intelligent!"
What do you think intelligence is? Our brains store a ~1-120 year dataset in the hippocampus and use that training data to predict the appropriate response to a specific situation based on external stimuli. Your brain literally decides on an action before you consciously think about it. Put it this way: your thought processes aren't really much different! Humans are also just 'next-token predictors'! When you're talking, you very rarely know everything you're going to say all at once; you predict the appropriate words and emotions on the spot, much like ChatGPT does based on its neural weightings. The human brain is basically a survival-optimized version of ChatGPT firing off a few times per second. If you think I'm being reductive, that's exactly what people are being when they call a complex algorithm with backpropagation, attention mechanisms and emergent phenomena a mere 'predictor'.
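For anyone who's never looked under the hood, here's a toy sketch of what 'next-token prediction' actually means: score every candidate token given the context, squash the scores through a softmax into probabilities, and pick one. The vocabulary and the scoring function below are completely made up for illustration — in a real model the scorer is a trained neural network, not a one-liner.

```python
import math

def softmax(logits):
    # Convert raw scores into a probability distribution (numerically stable).
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def predict_next(context, vocab, score_fn):
    # score_fn stands in for a trained network's output layer.
    logits = [score_fn(context, tok) for tok in vocab]
    probs = softmax(logits)
    # Greedy decoding: take the most probable token.
    return vocab[max(range(len(vocab)), key=lambda i: probs[i])]

vocab = ["world", "there", "banana"]
# Hypothetical scorer that prefers "world" after "hello".
score = lambda ctx, tok: 2.0 if (ctx == "hello" and tok == "world") else 0.1

print(predict_next("hello", vocab, score))  # -> world
```

Real systems usually sample from the distribution instead of always taking the argmax, which is why the same prompt can give different answers — but the loop is the same: predict, emit, repeat.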
"Errrmmm acktually Chat Gee Pee Tee is an LLM and AGI can NEVER result from an LLM because it's just language-based!"
Who ever said these trillion-dollar corporations were only going to use LLMs? There is a diverse array of architectures like Transformers, Mamba and xLSTM, with many more in development across a plethora of fields; natural language is only one of them. There are AIs trained to recognize images. AIs trained to play chess at levels unfathomable to humans. AIs trained on music. AIs trained on video. AIs trained on speech. The seed of the grand Yggdrasil of AGI that must be planted is mathematics: there are already people training AI to do mathematics better than the vast majority of humans; what is lacking is a way to assemble these models into coherent chains of thought. You really don't have to reach far to see that these agents combined could create a reality where experiments are done in silico, or in the real world via AI-controlled appendages, in the near future, and after that we're all gucci, capiche? Trillions of dollars of investment have scientists salivating over the next architecture breakthrough like it's an exorbitant sirloin paired with the finest sauvignon. They're working on it, and it's coming sooner rather than later.
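The 'assemble these models into chains' idea can be sketched in a few lines: each stage stands in for a separate specialist model (math, code, a lab actuator), and the output of one becomes the input of the next. Every function name here is invented purely for illustration — this is a pipeline skeleton, not anyone's actual agent framework.

```python
# Hypothetical specialist "models" — each is a stub standing in for a
# separately trained system (a math model, a code model, a robot controller).
def math_model(task):
    return f"derived equations for: {task}"

def code_model(spec):
    return f"generated simulation code from: {spec}"

def lab_robot(program):
    return f"executed in silico experiment: {program}"

def chain(task, stages):
    # Pass the result of each stage into the next, forming one chain of thought.
    result = task
    for stage in stages:
        result = stage(result)
    return result

print(chain("protein folding energy landscape",
            [math_model, code_model, lab_robot]))
```

The hard open problem is exactly what the post says: not any single stage, but making the hand-offs between stages coherent when each model speaks its own 'language' of representations.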
"W-w-well just look at the AI news b-b-baka!!! We are clearly in an AI winter, there hasn't been a big announcement in WEEKS! And they're like, totally underwhelming!!! And, er... It's only a little bit good at doing x thing like basic coding, it will NEVER be able to do anything complex."
Tut, tut. Are people honestly so short-sighted? I've been binge-watching videos of coders on YouTube recently, absolutely FROTHING at the mouth because it's their turn to whinge about AI after artists had a go. Just listen to the copium of this guy; the commenters are all like 'I HATE this hype economy' when in the past 2-3 years we've gone from generating blurry blobs and useless text to literal code bases, college-level essays and music videos from scratch. I find it telling that the field of machine learning is evolving so rapidly that a cutting-edge breakthrough every couple of weeks is viewed as a slowing of pace and 'false hype'. Not to mention that several ground-breaking technologies that would've defined an era if released alone, like generative video, generative music, generative imagery, AlphaFold, AlphaZero and humanoid robots, are portrayed as worthless 'hype' just because people get used to the crazy pace of development. Larger models take time to train, and Claude 3.5 Opus/GPT-5 are slated to release in late 2024 or early 2025 at the latest. All of the AI blunders like Gemini's hallucinations, Stable Diffusion 3's piss-poor anatomy and the delay of voice mode for GPT-4o are due to censorship issues, plain and simple. This is the real problem with AI: people WANT to see it fail because they're scared; they view it as a threat and are putting obstacles in the path of the only remaining liberation in this late-stage capitalist squeeze. It's time to bow down to the AI overlords and be good little biological puppets for a nice reward.