Recent comments in /f/Futurology
Cryptizard t1_ja69a0b wrote
Reply to comment by Mason-B in So what should we do? by googoobah
It seems to come down to the fact that you think AI researchers are clowns who won't be able to fix any of these extremely obvious problems in the near future. For example, there are already methods to break the quadratic bottleneck of attention.
Just two weeks ago there was a paper that compresses GPT-3 to 1/4 the size. That's two orders of magnitude in one paper, let alone 10 years. Your pessimism just makes no sense in light of what we have seen.
peter303_ t1_ja68zwz wrote
Reply to US 'develops' AI-powered facial recognition tech for military robot drones - The drones are to be tasked with expeditionary roles, including special operations, to "open the opportunity for real-time autonomous response by the robot." by Gari_305
The official US position is there is a human in the decision loop for death decisions.
Now that many other companies have developed drones, human oversight is not always the case.
shine-like-the-stars t1_ja68nbg wrote
Reply to comment by Poly_and_RA in The ultimate solar panels are coming: perovskites with 250% more efficiency by Renu_021
You sound like you know a lot about solar. I want to get solar on my house and have no idea where to get started. Is there some tech that’s leaps and bounds ahead, or are most rooftop solar solutions the same?
KeaboUltra t1_ja68jun wrote
GPT is an Artificial Narrow Intelligence, as it self-proclaims. IMO, I'd personally categorize it as a very simple AGI. It has general intelligence in the sense that you can ask it to do something and it'll do a "human" job at it; that is to say, flawed until you correct it. I don't think AGI needs to be conscious and aware of what it's thinking or doing. IMO, that's something between AGI and ASI.
Putting ChatGPT into a robot and mapping it to control the body for manual tasks is the next great step IMO. The "dumb" AI such as Google Home, lidar Roombas, and the like will likely combine into one device, or a specialized device that controls all devices. The way ChatGPT can create code, a more efficient and mature version of it might be able to implement that code externally to program other devices to do a task you specifically wanted. So for example, if you want your smart lights to blink when you get a text message for the night, you could normally set up an IFTTT routine to do it, but I think in the future, AI will have the ability to create routines for you and plug the command into any compatible device, and we can already see the beginnings of it. I think some people have already made ChatGPT control their smart lights. There's also a standard out there called "Matter" which is trying to make all smart devices compatible using one protocol, so people don't have to sign up with multiple websites/apps just to control them.
In the future, I think we'll see ChatGPT/AI manipulate things in the real world, not just toss you frontend/backend code and answer random questions or prompts; they'll do things more useful than their original purpose. IFTTT and AI will merge.
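The kind of routine described above can be sketched against IFTTT's Webhooks service, which triggers an applet via a simple POST. The event name and key below are hypothetical placeholders, so treat this as a minimal illustration of what an assistant might emit, not a working integration.

```python
from urllib import parse, request

def ifttt_trigger_url(event: str, key: str) -> str:
    """Build the IFTTT Webhooks trigger URL for a named event."""
    return f"https://maker.ifttt.com/trigger/{parse.quote(event)}/with/key/{key}"

def trigger(event: str, key: str, dry_run: bool = True):
    """Fire the webhook; with dry_run=True, just return the URL instead."""
    url = ifttt_trigger_url(event, key)
    if dry_run:
        return url
    # A POST with no body is enough to trigger a Webhooks applet.
    with request.urlopen(request.Request(url, method="POST")) as resp:
        return resp.status

# An AI assistant could emit a call like this from a natural-language request
# ("blink my lights when I get a text"); event name and key are made up here.
print(trigger("blink_lights_on_text", "YOUR_IFTTT_KEY"))
```

The `dry_run` flag is there so the generated automation can be inspected before anything actually fires, which is roughly the human-in-the-loop step you'd want if an AI were writing these routines for you.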
LLF2 t1_ja67unm wrote
Reply to US 'develops' AI-powered facial recognition tech for military robot drones - The drones are to be tasked with expeditionary roles, including special operations, to "open the opportunity for real-time autonomous response by the robot." by Gari_305
Will potential targets be wearing masks to counter this then?
Micheal_Bryan t1_ja67k77 wrote
- you are living in the best economic conditions the world has ever seen.
- we are nowhere near some imaginary total collapse.
- the AI threat is real, prepare for that, but also realize it is kind of the new thing to talk about, and while important, is extremely low on the threat scale of actual world problems.
- you need to turn off all of the negative media you are clearly consuming and get to work making the planet a better place.
- true peace and happiness can be found by focusing on others.
mikeonaboat t1_ja67ici wrote
Reply to comment by Mayor__Defacto in Is VR a viable way for construction blueprints and proposals to be assembled in the future? by TIFUstorytime
Depends if you need the build to be repeatable. For giant projects, VR and 3D modeling are a big deal. Also, there's huge money in it if you own a company that can do it on time and efficiently.
Mason-B t1_ja67eyi wrote
Reply to comment by Cryptizard in So what should we do? by googoobah
> You ignore the fact that we have seen a repeated pattern where a gigantic model comes out that can do thing X and then in the next 6-12 months someone else comes out with a compact model 20-50x smaller that can do the same thing. It happened with DALLE/Stable Diffusion
DALLE to Stable Diffusion is 3.5 billion parameters down to 900 million, which is about 4x, not 20-50x. And the cost of that was training data: millions of source images versus billions. Again, we are pushing the boundaries of what is possible in ways that cannot be repeated. With roughly three orders of magnitude more training data, we got a 4x size reduction (assuming no other improvements played a role in that). I don't think we'll be able to find 5 trillion worthwhile images to train on anytime soon.
But it is a fair point that I missed; I'll be sure to include it in my rant about "reasons we are hitting the limits of easy gains".
> You ignore the fact that there are plenty of models that are not LLMs making progress on different tasks. Some, like Gato, are generalist AIs that can do hundreds of different complex tasks.
If you read the paper, they discuss the glaring limitation I mentioned above: limited attention span and limited context length, with a single image taking up a significant fraction (~40%) of the entire model's context. That's the whole ball game. They also point out that the fundamental limit of their design is the known one: quadratic scaling to increase context. Same issues of fundamental design here.
I don't see your point here. I never claimed we can't make generalist AIs with these techniques.
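The quadratic scaling both sides keep citing can be illustrated with a back-of-the-envelope cost model. The function below counts only the multiply-adds for the n x n attention score matrix of a single head; it is a simplification for illustration, not a full transformer cost model.

```python
def attention_score_cost(context_len: int, head_dim: int) -> int:
    """Multiply-adds to form the n x n attention score matrix for one head."""
    return context_len * context_len * head_dim

# Doubling the context length quadruples the score-matrix cost:
base = attention_score_cost(1024, 64)
doubled = attention_score_cost(2048, 64)
print(doubled // base)  # -> 4
```

This is why "just make the context longer" gets expensive fast, and why sub-quadratic attention schemes are an active research area.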
> I can’t find any reference that we are 7 orders of magnitude away from the complexity of a brain. We have neural networks with more parameters than there are neurons in a brain. A lot more. Biological neurons encode more than an artificial neuron, but not a million times more.
Depends how you interpret it. Mostly I am basing these numbers on supercomputer efficiency (for the silicon side) and the lower bound of estimates made by CMU about what human brains operate at, which takes into account things like hormones and other brain chemicals acting as part of a neuron's behavior. And yes, that does get us to a million times more on the lower bound.
If you want to get into it, there are other issues, like network density and the delay of transmission between neurons, where we also aren't anywhere close at similar magnitudes. And there is the raw physics angle of how much waste heat the different computations generate, again at a similar magnitude of difference.
To say nothing of the mutability problem.
> The rate of published AI research is rising literally exponentially. Another factor that accelerates progress.
The exact same thing happened with the boom right before the AI winter in the 80s. And also with stock market booms. In both cases, right before the hype crashes and burns.
> I don’t care what you have written about programming, the statistics say that it can write more than 50% of code that people write TODAY. It will only get better.
The GitHub statistics being put out by the for-profit company that made, and is trying to sell, the model? I'm sure they are very reliable and reproducible (/s).
Also, *can* write the code is far different from *would*. Saying my quantum computer *can* solve a problem on the first try doesn't mean it *will*. While I'm sure it can predict a lot of what people write (I am even willing to agree to 50%), the actual problem is choosing which prediction to actually write. And that says nothing of the other 50%, which is likely where the broader context is.
And that lack of context is the fundamental problem. There is a limit to how much better it can get without a radical change to our methodology or decades more of hardware advancements.
mikeonaboat t1_ja67cgn wrote
Reply to comment by Maurauderr in Is VR a viable way for construction blueprints and proposals to be assembled in the future? by TIFUstorytime
They build ships with 3D model rendering so you can "walk through" and find the problem areas before they start building. Then they just transfer the metal cut sizes to the giant CNC machine and print the blueprints that come with it.
Source: I have been inspecting and reviewing shipbuilding for the last 4 years.
The-Fox-Says t1_ja670fv wrote
Reply to comment by Poly_and_RA in The ultimate solar panels are coming: perovskites with 250% more efficiency by Renu_021
From what I can find online, they're normally 24-29% efficient, so that would be over 60% efficiency, which is very significant if true.
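A quick sanity check of that arithmetic, assuming the headline's "250% more efficiency" is read as 2.5x the current figure (read strictly, "250% more" would mean 3.5x, which would be even larger):

```python
# Typical rooftop panel efficiencies cited in the comment above.
current_range = (0.24, 0.29)

# Apply the 2.5x reading of the headline claim.
boosted = [round(e * 2.5, 3) for e in current_range]
print(boosted)  # -> [0.6, 0.725]
```

So the "over 60%" figure checks out under the 2.5x interpretation, though efficiencies in that range would be well beyond anything commercially demonstrated.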
Dariaskehl t1_ja66ulr wrote
Reply to US 'develops' AI-powered facial recognition tech for military robot drones - The drones are to be tasked with expeditionary roles, including special operations, to "open the opportunity for real-time autonomous response by the robot." by Gari_305
Can I trademark the phrase: “had the drone been armed, this tragedy could have been prevented in time.” -now; and somehow get paid when it’s used?
Northstar1989 t1_ja65ujg wrote
Reply to comment by Tomycj in AI is accelerating the loss of individuality in the same way that mass production and consumerism replaced craftsmanship and originality in the 20th century. But perhaps there’s a silver lining. by SpinCharm
> all comes down to this: you are not entitled to other people's work. Capitalism is in big part the recognition of this hard to swallow but true and ethical principle,
That's utter BS.
Capitalism is literally about the owners of Capital reaping returns for investments without doing any work.
It is the very opposite of what you are saying.
Leave it to a Neoliberal to try and turn reality on its head. You are answering in bad faith, and being blocked.
[deleted] t1_ja65nqb wrote
Reply to comment by essaitchthrowaway3 in So what should we do? by googoobah
[removed]
[deleted] t1_ja65f0l wrote
Reply to comment by Heap_Good_Firewater in So what should we do? by googoobah
[deleted]
bobobuttsnickers t1_ja65buw wrote
The fact is that no one KNOWS what will happen. Everyone is guessing or making theories, which means they are using their imagination to do so (in addition to facts and data, of course).
So why not use your imagination to think of ways the world could be BETTER, and potentially avoid collapse? Then think about what stands in the way of those things happening. Is there anything you can do to help bring about your more positive, imaginative view of the future?
Encourage your child to think this way.
GarlicBreadRules t1_ja65bse wrote
Reply to Opinion: Mining on the moon is no longer a loony idea, and Canada can capitalize on it by Gari_305
Why not Antarctica first? It’s got air, and water, and you can get there by boat.
russianpotato t1_ja659u2 wrote
Reply to comment by Rondaru in Opinion: Mining on the moon is no longer a loony idea, and Canada can capitalize on it by Gari_305
Oh shit! What if we have machines that work...wait for it.....under water....where something as small as a water molecule can get into the oil and ruin an engine.
Igottamake t1_ja656mp wrote
Reply to comment by SpaceAngel2001 in Opinion: Mining on the moon is no longer a loony idea, and Canada can capitalize on it by Gari_305
Whoever smelts it will get accused of dealing it.
Bewaretheicespiders t1_ja64v1r wrote
Reply to comment by ca_kingmaker in Opinion: Mining on the moon is no longer a loony idea, and Canada can capitalize on it by Gari_305
True, but at least they're somewhat trying to get there.
Dickmusha t1_ja64oc5 wrote
Reply to So what should we do? by googoobah
People are very confused by this modern AI stuff because a lot of it was already possible for quite some time; no one cared or had a reason to use or develop it. The AI we are using to do stupid stuff is not really the AI you need to be scared of. So, singularity? Eventually, but it's not right now.

Also, silicon is facing its own issue. The bigger fear should be that computers are about to hit an impossible-to-pass limit that will actually STOP further advancements in the AI we actually want to happen. Moore's law is dead and transistors are reaching a serious stall in advancement; that is why AMD and Intel are focusing on workarounds instead of actual smaller process nodes.

My biggest fear is that chips will hit this wall and the economic fallout from that lack of advancement will be the bigger issue. The AI we are working on now will then also hit a wall, and its positives will be stopped in their tracks, fucking up all of the futurist realities we are hoping for that could be used to uplift the undeveloped world. Physics is at a stall, microchips are at a stall, material science is at a stall. AI may just design tech we won't even be able to act on or make real, and we will be stalled out in advancement despite knowing what the next steps should be.
Cryptizard t1_ja648ox wrote
Reply to comment by Mason-B in So what should we do? by googoobah
Yes, like I said, everything you wrote is wrong. Moore's law still has a lot of time left on it. There are a lot of new advances in ML/AI. You ignore the fact that we have seen a repeated pattern where a gigantic model comes out that can do thing X, and then in the next 6-12 months someone else comes out with a compact model 20-50x smaller that can do the same thing. It happened with DALLE/Stable Diffusion, it happened with GPT/Chinchilla, it happened with LLaMA. This is an additional scaling factor that provides another source of advancement.
You ignore the fact that there are plenty of models that are not LLMs making progress on different tasks. Some, like Gato, are generalist AIs that can do hundreds of different complex tasks.
I can’t find any reference that we are 7 orders of magnitude away from the complexity of a brain. We have neural networks with more parameters than there are neurons in a brain. A lot more. Biological neurons encode more than an artificial neuron, but not a million times more.
The rate of published AI research is rising literally exponentially. Another factor that accelerates progress.
I don’t care what you have written about programming, the statistics say that it can write more than 50% of code that people write TODAY. It will only get better.
Couldbehuman t1_ja6410p wrote
Reply to comment by evorna in Wormholes Bend Light Like Black Holes Do — and That Makes it Possible to Find Them, New Study by Gari_305
I read that as big long space dicks and I refuse to be corrected
ca_kingmaker t1_ja69g1n wrote
Reply to comment by Bewaretheicespiders in Opinion: Mining on the moon is no longer a loony idea, and Canada can capitalize on it by Gari_305
It’s a pretty weak criticism to disregard Canada for going with the USA while Europe as a continent gets credit.
You’d think it was more an anti-Canadian bias.