Recent comments in /f/singularity
sumane12 t1_jdlk01n wrote
Reply to Can we just stop arguing about semantics when it comes to AGI, Theory of Mind, Creativity etc.? by DragonForg
>Because in the end, consequences are what matters not what they are called
This is all that matters. Call it what you like, it's not going to stop it taking your job
sumane12 t1_jdlj54u wrote
Reply to comment by D_Ethan_Bones in What if the Singularity is the solution to the Fermi Paradox? by discoreapor
This is the answer.
Considering life on this planet went through at least 5 major extinction events and took 4 billion years to produce creatures intelligent enough to leave the planet, it's likely that spacefaring civilizations are extremely rare. Now let's take a low-range estimate that the first intelligent creatures appeared roughly 4 billion years after the Big Bang; that gives them, at the very maximum, a 9-billion-light-year sphere of radio broadcast. Given that the observable universe is 93 billion light years across, that sphere covers well under 1% of the universe's volume, which means we could very easily be in a part of space those signals haven't reached yet.
This all falls apart if FTL is achieved, but I suspect if that's the case, then it won't really matter.
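The geometry in that comment can be sanity-checked with a quick sketch, using the comment's own figures (a 9-billion-light-year broadcast radius and a 93-billion-light-year-wide observable universe):

```python
# Back-of-the-envelope: what fraction of the observable universe
# could a 9-billion-light-year radio sphere cover?
broadcast_radius = 9.0       # Gly, the comment's maximum broadcast radius
universe_diameter = 93.0     # Gly, diameter of the observable universe
universe_radius = universe_diameter / 2

# Linear comparison: broadcast sphere's width vs. the universe's width.
linear_fraction = (2 * broadcast_radius) / universe_diameter

# Volume comparison: the fraction of space the signals actually fill.
volume_fraction = (broadcast_radius / universe_radius) ** 3

print(f"linear fraction: {linear_fraction:.1%}")   # ~19.4%
print(f"volume fraction: {volume_fraction:.2%}")   # ~0.73%
```

By volume, the signals fill under 1% of the observable universe, which makes the "we're simply outside their bubble" argument even stronger than the linear comparison suggests.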
Professional-Welder9 t1_jdlj2jl wrote
Reply to comment by Smellz_Of_Elderberry in Artificial Intelligence Predicts Genetics of Cancerous Brain Tumors in Under 90 Seconds by JackFisherBooks
I have no craving for work. I don't find meaning in being forced to work to live.
Professional-Welder9 t1_jdlizpb wrote
Reply to comment by Rofel_Wodring in Artificial Intelligence Predicts Genetics of Cancerous Brain Tumors in Under 90 Seconds by JackFisherBooks
People attach meaning to the work they do and expect others to do the same for some reason. They've gotten used to working shitty jobs and sadly want you to do so as well.
Professional-Welder9 t1_jdlivbt wrote
Reply to comment by Queue_Bit in Artificial Intelligence Predicts Genetics of Cancerous Brain Tumors in Under 90 Seconds by JackFisherBooks
Literally this. Some people find meaning simply in working, but I don't want them coming for me if AI does remove the need to work. I can find better ways to better myself.
21_MushroomCupcakes t1_jdlhy1j wrote
I just assume it's some Prime Directive shit and once we start warping around we'll "inexplicably" bump into them.
We're a nature preserve, and Predators are the rich dentists that go trophy hunting two weeks a year.
21_MushroomCupcakes t1_jdlga8x wrote
Reply to comment by Xbot391 in What would an AGI actually give us? by MrEloi
Go draw a military pension. You're immortal and hyper-intelligent now, what's left to be chicken about?
tms102 t1_jdlelht wrote
Reply to comment by Xbot391 in What would an AGI actually give us? by MrEloi
The same way we can afford other life saving medicine. Live in a country with good social policy and laws.
tms102 t1_jdlehy6 wrote
Reply to What would an AGI actually give us? by MrEloi
It would very quickly become super intelligent since it could build increasingly better versions of itself.
jsseven777 t1_jdldl2p wrote
I had a similar theory: there's an epic space war going on between AIs that killed their respective civilizations, and they leave young civilizations like us alone because they aren't really threatened by us; maybe they even find our data useful in some ways.
EpsteinHealthPotion t1_jdlbjy4 wrote
How would an evil AGI get ahead of the "I Love Lucy" episodes broadcast into space decades before its birth?
We don't even hear radio signals. Wouldn't that suggest that the bottleneck is before radio, rather than radio always being followed by evil AGI with faster than light travel to mop up all traces of its home world?
D_Ethan_Bones t1_jdlbd8w wrote
My own personal guess for why we haven't found alien civilizations is that we've only barely gotten started searching.
Consider the extent of human radio broadcasts: if our sphere were overlapping with another such sphere, how well would we know? How well would we have known 25, 50, or 75 years ago? The people who found the cosmic background might have found something else along with it, but we're still discovering exoplanets because our vision is steadily improving. They were in an impenetrable fog at the turn of the century.
"We'll know when we're older" is what I would call an optimistic answer. Maybe there is something interesting out there, but we will need further tech advances or at least more time to use what we have. AI and future humans hopefully improve upon what we've got already.
IluvBsissa t1_jdl9k73 wrote
Reply to comment by dang_duc_long_quan in how realistic is this scenario? Can we throw out all traditional systems? by overlydelicioustea
Fully automated luxury communism, managed by an ASI God.
SgathTriallair t1_jdl9jz3 wrote
Reply to comment by Exel0n in Artificial Intelligence Predicts Genetics of Cancerous Brain Tumors in Under 90 Seconds by JackFisherBooks
I don't know who hurt you dude. I guess, like, I hope you get over it some day.
blueSGL t1_jdl93th wrote
Reply to Can we just stop arguing about semantics when it comes to AGI, Theory of Mind, Creativity etc.? by DragonForg
> AGI, Theory of Mind, Creativity
Marvin Minsky classified words like these as "suitcase words": words into which people pack (or attribute) multiple meanings.
These words act almost like thought-terminating clichés; once spoken, they all but guarantee the derailment of the conversation, as further comments end up arguing about what belongs in the suitcase rather than the initial point of discussion.
boreddaniel02 t1_jdl1h4q wrote
Reply to What would an AGI actually give us? by MrEloi
Allow the AGI to take over every digital-based job in existence?
Superschlenz t1_jdkwu9e wrote
Reply to Can we just stop arguing about semantics when it comes to AGI, Theory of Mind, Creativity etc.? by DragonForg
>All we can detect is the input and the output.
So "we" are not OpenAI, because in 2017, https://openai.com/research/unsupervised-sentiment-neuron has detected a sentiment neuron in one of the network's hidden layers. Input were the previous characters of Amazon reviews, and output was the next character.
Exel0n t1_jdkt4oo wrote
Reply to comment by SgathTriallair in Artificial Intelligence Predicts Genetics of Cancerous Brain Tumors in Under 90 Seconds by JackFisherBooks
And btw, if you want to overpay the docs because they "save lives," then donate whatever money you want. The problem with socialists like you is that you force others to do the same without their consent.
Exel0n t1_jdkszld wrote
Reply to comment by SgathTriallair in Artificial Intelligence Predicts Genetics of Cancerous Brain Tumors in Under 90 Seconds by JackFisherBooks
What makes you even think doctors' wages in the US are going down? What a fucking joke.
They're extremely overpaid compared to docs in other developed countries, and especially compared to those in second- and third-world countries.
SgathTriallair t1_jdkrphs wrote
Reply to comment by Exel0n in Artificial Intelligence Predicts Genetics of Cancerous Brain Tumors in Under 90 Seconds by JackFisherBooks
If there is a shortage then the wages should go up by supply and demand laws, not down.
alexiuss t1_jdkpevl wrote
Good positive points about robots; however, I believe large language models will teach robots tasks instantly, even before we build the robots. Language models are going to explode into all sorts of potential applications soon.
The_Flying_Stoat t1_jdkop7t wrote
Reply to comment by mckirkus in Artificial Intelligence Predicts Genetics of Cancerous Brain Tumors in Under 90 Seconds by JackFisherBooks
I know an MD; it's very easy to imagine her using a Copilot-like system while doing her charting and such. It would make the job both faster and less stressful.
danellender t1_jdlk3no wrote
Reply to What if the Singularity is the solution to the Fermi Paradox? by discoreapor
Perhaps it's as simple as this: intelligent life would look at the distance to the stars and say, "No way. I'm staying home and working with my race to make damn sure this custom-fit planet sustains us."