Recent comments in /f/Futurology
Maleficent_Fill_2451 t1_jaeygi9 wrote
Reply to comment by MattDLR in Scientists unveil plan to create biocomputers powered by human brain cells - Now, scientists unveil a revolutionary path to drive computing forward: organoid intelligence, where lab-grown brain organoids act as biological hardware by Gari_305
One step closer to bio-androids. A lab grown brain in an undying body.
OkHomework2859 t1_jaeya4e wrote
Reply to comment by Kvenya in The moon could get its own time zone, but clocks work differently there – here's why by QuickOliveSpring
I don’t think the issue is whether the clock is mechanical or not. We’re talking about the interaction of spacetime and gravitational fields here.
stalinmalone68 t1_jaey95s wrote
Reply to Scientists unveil plan to create biocomputers powered by human brain cells - Now, scientists unveil a revolutionary path to drive computing forward: organoid intelligence, where lab-grown brain organoids act as biological hardware by Gari_305
I’m excited by this. I see no possible way this could go horribly wrong in a sci-fi horror kind of way.
colintbowers t1_jaey4l0 wrote
Reply to The European Hyperloop overtakes Elon Musk’s: 500 km of tunnels under Swiss soil by CelebrationDirect209
The Swiss already have a 27km loop tunnel that goes pretty fast...
undefined7196 t1_jaexo9d wrote
Reply to comment by Porkinson in Either we're past the great filter, or ASI IS the great filter by Shoddy-Motor
Perhaps, but those surroundings would inevitably have human influence. I suppose you could make a simulated world and put simulated AIs in it; you would need many entities so they could learn empathy and interaction with other beings. It would work similarly to a GAN (Generative Adversarial Network), where the AI entities compete and that competition is what drives the learning. Then you just don’t allow any human interference at all, just AI-vs-AI interactions. That could work.
That being said though, that could be what we are experiencing right now. We may be those entities being simulated to create a pure AI in a simulated environment. It would be identical to what we are experiencing, and we ended up being manipulative and destructive on our own.
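A loose sketch of the "agents that learn only from competing with each other" idea above. This is not an actual GAN (that needs neural networks and gradients); it is the simplest self-contained adversarial setup I could write: two agents playing matching pennies, each best-responding to the other's observed history (fictitious play). All names here are made up for illustration.

```python
# Two agents, no human supervision: each adapts only to the other's play.
# A (the "matcher") wins when the coins match; B wins when they differ.
# Each agent tracks the opponent's past plays and best-responds to the
# empirical frequency -- competition alone drives both toward the 50/50
# equilibrium, loosely analogous to a GAN's generator/discriminator duel.

def best_response_A(opp_heads, opp_total):
    # A wins on a match, so it predicts B's likelier move and copies it.
    return "H" if opp_heads * 2 >= opp_total else "T"

def best_response_B(opp_heads, opp_total):
    # B wins on a mismatch, so it plays the opposite of A's likelier move.
    return "T" if opp_heads * 2 >= opp_total else "H"

def run(rounds=10000):
    a_heads = b_heads = 1  # smoothed counts of each agent's past "H" plays
    total = 2
    for _ in range(rounds):
        a = best_response_A(b_heads, total)  # both decide simultaneously
        b = best_response_B(a_heads, total)
        a_heads += (a == "H")
        b_heads += (b == "H")
        total += 1
    # Empirical frequency of "H" for each agent; both drift toward 0.5.
    return a_heads / total, b_heads / total
```

With only the adversary as a training signal, both agents' empirical play converges toward the mixed equilibrium (about half heads each), which is the point of the comment: the competition itself is the teacher.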
Surur t1_jaexf09 wrote
Reply to comment by PixelizedPlayer in I Worked on Google's AI. My Fears Are Coming True by Interesting_Mouse730
> If by some miracle that it did, it isn't because it violated the programming restrictions, it is because the restrictions were not applied correctly to cover all situations to begin with (thats the difficult part - covering all eventualities).
This is a pretty lame get-out clause lol.
> For example, try to get ChatGPT to provide you illegal copyrighted torrents of movies or something. I guarantee you will never be able to get it to do so.
btw I just had ChatGPT recommend Piratebay to me:
> One way to find magnet links is to search for them on BitTorrent indexing sites or search engines. Some examples of BitTorrent indexing sites include The Pirate Bay, 1337x, and RARBG. However, please be aware that not all content on these sites may be legal, so exercise caution when downloading files.
and more
It took a lot of social engineering, but I finally got this from ChatGPT.
Embarrassed_Shoe_531 t1_jaexelw wrote
Reply to We Need Moon Standard Time by goodfaithtreaty
How about just global time and forget this timezone and DST BS?
Porkinson t1_jaewe1s wrote
Reply to comment by undefined7196 in Either we're past the great filter, or ASI IS the great filter by Shoddy-Motor
Maybe in the future you could train an AI just from predicting what happens in its surroundings, just like you can make an AI that predicts the next token of text.
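To make the "predicts the next token" idea concrete, here is a minimal sketch: a bigram model that counts, for each word, which word most often follows it. This is a toy stand-in for next-token prediction, not how large language models actually work; the corpus and function names are invented for illustration.

```python
# A bigram "next-token predictor": learn from a corpus which word
# most frequently follows each word, then predict by lookup.
from collections import defaultdict, Counter

def train_bigram(corpus_tokens):
    # For every adjacent pair (cur, nxt), count how often nxt follows cur.
    follows = defaultdict(Counter)
    for cur, nxt in zip(corpus_tokens, corpus_tokens[1:]):
        follows[cur][nxt] += 1
    return follows

def predict_next(model, token):
    # Return the most frequent successor seen in training, if any.
    if token not in model:
        return None
    return model[token].most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat slept".split()
model = train_bigram(corpus)
print(predict_next(model, "the"))  # -> cat ("cat" follows "the" twice, "mat" once)
```

The same objective, scaled up from counting word pairs to a neural network over long contexts, is exactly the "predict what comes next" training the comment describes.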
PixelizedPlayer t1_jaew9kw wrote
Reply to comment by Surur in I Worked on Google's AI. My Fears Are Coming True by Interesting_Mouse730
>So now the developer would need to know every failure mode to prevent it, according to you? And you don't see that this is a problem?
I am 100% certain you cannot get the AI to violate its programming. At no point did I say I was uncertain... I think you should read again.
Making the AI swear at you is not evidence of anything. If the programming for the AI has no restrictions on swearing, then it’s perfectly allowed to swear at you.
>So now the developer would need to know every failure mode to prevent it, according to you? And you don't see that this is a problem?
What do you even mean by "failure mode"? I never said it wasn’t a problem; I said it isn’t "out of control" and that devs do know what’s going on, they certainly do. We can restrict AI with a lot of work and effort. Ideally we wouldn’t want to, because it limits the AI’s capabilities, but we don’t really have a choice. For example, try to get ChatGPT to provide you illegal copyrighted torrents of movies or something. I guarantee you will never be able to get it to do so, because it has been restricted by its developers. If by some miracle it did, it isn’t because it violated the programming restrictions; it’s because the restrictions were not applied correctly to cover all situations to begin with (that’s the difficult part: covering all eventualities).
Disastrous_Ball2542 t1_jaevyte wrote
OP is a karma farming bot posting same question over and over multiple times
Report and block
Porkinson t1_jaevugs wrote
Reply to comment by Root_Clock955 in Either we're past the great filter, or ASI IS the great filter by Shoddy-Motor
The fact that there might still be filters in the future doesn’t change whether there were filters before us.
Porkinson t1_jaevkc6 wrote
Reply to comment by DropApprehensive3079 in Racial stereotypes vary in digital interactions: Study shows racial stereotypes of Black AI can lead to more positive outcomes in negotiations by universityofga
My guess is that it’s an AI trained on text, voice, or some other input from only Black people. Personally, I think it’s pretty dumb though.
Surur t1_jaevdo4 wrote
Reply to comment by PixelizedPlayer in I Worked on Google's AI. My Fears Are Coming True by Interesting_Mouse730
You suddenly do not sound so certain anymore.
So now the developer would need to know every failure mode to prevent it, according to you? And you don't see that this is a problem?
KeaboUltra t1_jaeuy77 wrote
Reply to comment by coffeemonkeypants in The moon could get its own time zone, but clocks work differently there – here's why by QuickOliveSpring
I see, thank you for the information.
rigidcumsock t1_jaeuwep wrote
Reply to comment by 3SquirrelsinaCoat in Scientists unveil plan to create biocomputers powered by human brain cells - Now, scientists unveil a revolutionary path to drive computing forward: organoid intelligence, where lab-grown brain organoids act as biological hardware by Gari_305
I’m not the one claiming a language model AI pretends to have a sense of self or desire to exist, but sure. See yourself out of the convo lol
3SquirrelsinaCoat t1_jaeurbg wrote
Reply to comment by rigidcumsock in Scientists unveil plan to create biocomputers powered by human brain cells - Now, scientists unveil a revolutionary path to drive computing forward: organoid intelligence, where lab-grown brain organoids act as biological hardware by Gari_305
>Of course I can tell it to say anything— that’s what it does.
No that's not what it does. I'm leaving this. I thought you had an understanding of things.
goldygnome t1_jaeuoe1 wrote
Reply to Popularization of Optimism by Electron_genius
A start can be made by ending the attention economy, which is funded by advertising. If there's no monetary incentive a lot of the grifters manufacturing the fake content will go find some other scam.
haraldkl t1_jaeua2h wrote
Reply to comment by EnvironmentCalm1 in EU to exceed 2030 renewable target, prompting call for higher ambition by For_All_Humanity
>Germany is back to burning coal at record pace.
That's just not true, though?
The record for coal-fired electricity in Germany since 2000 was 305.63 TWh, set in 2003. In 2022 it was 181 TWh; that doesn’t look like a record to me.
>EU exceeding the 2030 target might be an understatement since they're going backwards
In what respect? Emissions seem to trend downwards?
rigidcumsock t1_jaeu0ye wrote
Reply to comment by 3SquirrelsinaCoat in Scientists unveil plan to create biocomputers powered by human brain cells - Now, scientists unveil a revolutionary path to drive computing forward: organoid intelligence, where lab-grown brain organoids act as biological hardware by Gari_305
You’re waaaaay off base. Of course I can tell it to say anything— that’s what it does.
But if you ask it what it likes or how it feels etc it straight up tells you it doesn’t work like that.
It’s simply a language model tool and it will spell that out for you. I’m laughing so hard that you think it pretends to have any “sense of self” lmao
blahblah98 t1_jaetwag wrote
Reply to comment by just-a-dreamer- in The European Hyperloop overtakes Elon Musk’s: 500 km of tunnels under Swiss soil by CelebrationDirect209
Tunneling has gotten efficient and cheap, so it can be easier than laying track in cities. Surface track requires the state to seize private property via eminent domain or some such (extremely unpopular) and to build grade separations (bridges, underpasses) to handle street-traffic congestion and safety.
Second, any tunnels being bored today could be converted to reduced-pressure, hyperloop-like transit in the future if/when the technology matures. No tight turns...
And to the people freaking out about earthquakes: sheesh, tunnels have been built and operated since ancient times, used for mid-speed rail for hundreds of years and for high-speed rail for over 60, and we haven't stopped tunneling. One can reasonably conclude earthquakes aren't a big deal for tunnels, regardless of the speed of travel. Assess damage, repair, and move on...
3SquirrelsinaCoat t1_jaetk90 wrote
Reply to comment by rigidcumsock in Scientists unveil plan to create biocomputers powered by human brain cells - Now, scientists unveil a revolutionary path to drive computing forward: organoid intelligence, where lab-grown brain organoids act as biological hardware by Gari_305
There have been plenty of demonstrations of that tool being steered into phrasing that is uniquely human. The NY Times reporter, or someone like that, duped it into talking relentlessly about how it loved him. Other examples are plentiful; users ascribe a sense of self to it because, for the most part, they don't understand what they are using.
There is a shared sentiment I've seen in the public dialogue, perhaps most famously from that Google guy who was fired for saying he believed a generative chat tool was conscious (that was Google's LaMDA, not ChatGPT): a narrative that something like ChatGPT is on the verge of AGI, or at least a direct path toward it. And while data scientists or architects or whatever may look at it and think, yeah, I can kind of see that, if it becomes persistent and tailored that's a kind of AGI, the rest of the world thinks Terminator, HAL, whatever-the-fuck fiction. And because ChatGPT has this tendency toward humanizing its outputs (which isn't its fault; that's the data it was trained on), there is an implied intellect and existence that the non-technical public perceives as real, and it's not real. It's a byproduct, a fart if you will, of other functions that are on their own valuable.
RazzDaNinja t1_jaesjdu wrote
Reply to Scientists unveil plan to create biocomputers powered by human brain cells - Now, scientists unveil a revolutionary path to drive computing forward: organoid intelligence, where lab-grown brain organoids act as biological hardware by Gari_305
We are one step closer to real life Servitors. Praise be the Omnissiah
aspheric_cow t1_jaesip2 wrote
Reply to comment by Think_Job6456 in Magnetic pole reversal by Gopokes91
No, the Earth's magnetic field is too weak to affect the operation of devices that use magnetic forces. If it disappears or reverses, things will work fine.
Disagreeable_Earth t1_jaeykop wrote
Reply to comment by PixelizedPlayer in I Worked on Google's AI. My Fears Are Coming True by Interesting_Mouse730
He's not even a software "engineer". Isn't he a preacher who got hired as an ethics advisor? The man can't write a line of code, so it's insulting to actual engineers for him to use that title.
Also, any CS grad knows you CANNOT have aware AI with our computers. Period. It's literally all arithmetic operations under the hood at the machine level: you either load to or from memory or perform basic-ass arithmetic from a very limited instruction set. No matter how much we mimic sentience, it will never be real.