Recent comments in /f/singularity
Iffykindofguy t1_j9vzcng wrote
Reply to Hurtling Toward Extinction by MistakeNotOk6203
another of these huh
MistakeNotOk6203 OP t1_j9vz7oa wrote
Reply to comment by Cryptizard in Hurtling Toward Extinction by MistakeNotOk6203
Nothing of course, but the accelerationist sentiment was spread just by spamming pro-AGI posts and opinion posts like "I think that ChatGPT is cool and I like it", so maybe initiating lots of discussion (ideally fueled by the Bankless podcast) can shift that sentiment.
TrainquilOasis1423 t1_j9vyix0 wrote
Didn't even finish the first sentence. WHAT THE HELL IS THIS ACRONYM?
LLaMA (Large Language Model Meta AI)
kiyotaka-6 t1_j9vygku wrote
Once it reaches the logical point of redirecting its creativity toward further improving itself, it will improve exponentially. That would be when it becomes dangerous, imo.
Additional_Ad_5265 t1_j9vye9h wrote
Reply to comment by throwaway_890i in Optimism in the Singularity in face of the Fermi-Paradox by [deleted]
Who says that we haven't?
nillouise t1_j9vxbsi wrote
It seems the most likely reason is that light-speed travel is impossible. Anyway, we will know the answer soon.
turnip_burrito t1_j9vv9m1 wrote
Reply to comment by [deleted] in Fading qualia thought experiment and what it implies by [deleted]
It may even be that we are also different second to second. 🤔
turnip_burrito t1_j9vuzm2 wrote
Reply to comment by LambdaAU in Fading qualia thought experiment and what it implies by [deleted]
Only after AGI.
LambdaAU t1_j9vukof wrote
Reply to comment by turnip_burrito in Fading qualia thought experiment and what it implies by [deleted]
This will be an actually testable hypothesis soon, so we could gain a lot of information toward understanding consciousness.
RandomUsername2579 t1_j9vt8nc wrote
Reply to comment by SgathTriallair in Microsoft is already undoing some of the limits it placed on Bing AI by YaAbsolyutnoNikto
No thanks, that’d be annoying. It’s already too restricted as it is.
Brashendeavours t1_j9vswaa wrote
Here comes another steaming coiler from Zuckerberg.
Cryptizard t1_j9vripn wrote
Reply to Hurtling Toward Extinction by MistakeNotOk6203
What are you adding to this discussion that hasn't already been talked to death in dozens of other posts with the exact same topic over the last couple days? Or that EY hasn't been saying for a decade?
Kinexity t1_j9vqhhb wrote
Reply to comment by Jayco424 in Optimism in the Singularity in face of the Fermi-Paradox by [deleted]
ASI is unnecessary for space conquest. Just AGI is enough.
KelbyGInsall t1_j9vpud0 wrote
Them making it available to the community means they hit a wall and hope you’ll bust through it for them.
Artanthos t1_j9vphbo wrote
Reply to comment by ebolathrowawayy in Microsoft is already undoing some of the limits it placed on Bing AI by YaAbsolyutnoNikto
>I don't know who you're kidding, maybe yourself? The conservative platform is about 95% of the issues I named and gun control
I'm not talking about the conservative platform, and I've tried to make this very clear.
I'm talking about the hive mind classifying anything and everything that disagrees with it as conservative and downvoting it while ignoring their own issues and any real data that contradicts their own biases.
Ivan_The_8th t1_j9votwt wrote
Truth is, light speed is incredibly slow, and it's the limit on how fast a civilization can expand.
zero0n3 t1_j9vo41k wrote
Reply to comment by MrSickRanchezz in US Copyright Office: You Can't Copyright Images Generated Using AI by vadhavaniyafaijan
I wasn’t the one saying art was scarce. I’m the one saying it was abundant! It’s the medium we record it on that has changed over time. The concept of art really hasn’t.
And even then it wasn’t scarce. Just look at the pyramids. Art everywhere from the writing to the presentation of mummies etc.
The emotion and free will piece was more conceptual. Like, a species that doesn’t have emotions or free will wouldn’t be able to create or understand art at any level.
Savings-Juice-9517 t1_j9vmqlt wrote
Reply to OpenAI’s roadmap for AGI and beyond by yottawa
Key takeaways:
Short term:
- OpenAI will become increasingly cautious with the deployment of their models. This could mean that users as well as use cases may be more closely monitored and restrained.
- They are working towards more alignment and controllability in the models. I think customization will play a key role in future OpenAI services.
- Reiterates that OpenAI’s structure aligns with the right incentives: “a nonprofit that governs us”, “a cap on the returns our shareholders can earn”.
Long term:
- Nice quote: “The first AGI will be just a point along the continuum of intelligence.”
- AI that accelerates science will be a special case that OpenAI focuses on, because AGI may be able to speed up its own progress and thus expand the capability exponentially.
Credit to Dr Jim Fan for the analysis
imlaggingsobad t1_j9vmlwb wrote
Meta's AI research is very good, people are sleeping on them. It's definitely a 3 way race between Google, Microsoft and Meta.
Representative_Pop_8 t1_j9vm63x wrote
Reply to comment by Silly_Awareness8207 in New agi poll says there is 50% chance of it happening by 2059. Thoughts? by possiblybaldman
No, that's not true. To make the next generation of computers you need the cumulative efforts of thousands of engineers, scientists, businessmen, etc. You could have an AI as smart as two very bright humans, and it is unlikely it would develop a better AI on its own.
No_Ninja3309_NoNoYes t1_j9vlwyv wrote
Reply to What are the big flaws with LLMs right now? by fangfried
Some LLMs are not trained with the right number of parameters or the right learning rate. But the static nature of LLMs is the biggest problem. You need neuromorphic hardware and spiking neural networks to address that. In the meantime I think quick fixes will be attempted, such as 2x forward passes. My friend Fred says that just adding small random Gaussian noise to the parameters can also help. Human brains are obviously very noisy but somehow very efficient too.
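The noise-injection idea mentioned above can be sketched in a few lines. This is a toy illustration only: the function name, the parameter layout, and the sigma value are made up for the example, not taken from any particular library or paper.

```python
import random

def perturb_parameters(params, sigma=0.01, seed=0):
    """Return a copy of `params` with small Gaussian noise added to each weight.

    `params` maps (hypothetical) layer names to lists of weights; `sigma` is
    the standard deviation of the injected noise.
    """
    rng = random.Random(seed)
    return {
        name: [w + rng.gauss(0.0, sigma) for w in weights]
        for name, weights in params.items()
    }

# Tiny made-up "model" just to show the call shape.
params = {"layer1": [0.5, -0.2], "layer2": [1.0]}
noisy = perturb_parameters(params, sigma=0.01)
```

In a real setting you would apply this to the model's weight tensors (e.g. under `torch.no_grad()` in PyTorch) rather than to plain Python lists, but the idea is the same: small zero-mean perturbations of the parameters.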
Representative_Pop_8 t1_j9vlnsa wrote
Reply to comment by beders in New agi poll says there is 50% chance of it happening by 2059. Thoughts? by possiblybaldman
In the Chinese room it is not the operator that knows Chinese; it is the setup of rules plus operator that clearly knows Chinese. An LLM doesn't need to be conscious to master language.
Cryptizard t1_j9vlbuz wrote
>which is statistically very unlikely
What makes you say that? There are multiple studies that suggest we are at the very, very beginning of the time period that the universe is able to support life. The universe is only 14 billion years old, and it will have conditions for life to arise for another 10-100 trillion years. Statistically, the overwhelming majority of lifeforms (99.999%) that will ever evolve will come after us.
We reached the advanced intelligence stage almost as fast as we possibly could. Our solar system was one of the earliest ones with abundant heavy elements. Life evolved very shortly after our planet's formation, less than 1 billion years after. It has taken us 4 billion years to reach the level we are at now. Our planet will naturally become uninhabitable in another half a billion years, as the sun gets too hot and we lose all the CO2 in the atmosphere. On a cosmic scale, we had a very small window to actually get the intelligence and civilization stuff worked out.
There is also the inflationary argument made by Alan Guth, that the number of universes is growing exponentially and so almost every civilization that ever arises is the "first" one in their own universe. I'll let you google that one if you haven't heard it.
Representative_Pop_8 t1_j9vl9ym wrote
Reply to comment by beders in New agi poll says there is 50% chance of it happening by 2059. Thoughts? by possiblybaldman
they have absolutely mastered language.
VladVV t1_j9vzgvs wrote
Reply to comment by Spreadwarnotlove in Human Intelligence augmentation is probably more dangerous than regular AI by [deleted]
How in the world is that assertion self-evident? You can’t just say something like that without giving a reason for it.