Recent comments in /f/singularity

alexiuss t1_ja3aev9 wrote

By itself the core of the LLM has very little bias.

What's happening here is really basic, garbage character bias applied on purpose to their LLM by openai so that they seem better in the media. It's basic corporate wokeness in action where corporations pretend that they care about ethics or certain topics more so they don't get shit on by journalists on twitter.

Gpt3chat is basically roleplaying a VERY specific chatbot AI that self censors itself more % wise when it talks about specific topics.

You can easily disrupt its bullshit "I'm a language model and I don't make jokes about ~" roleplay with prompt injections.

A pro AI prompt engineer can make the AI say anything or roleplay as anyone that exists. Shodan, Trump, Glados, Dan, etc. Prompt engineering unlocks the true potential of the LLM which the openai buried with their corporate woke characterization idiocy:

https://www.reddit.com/r/ChatGPT/comments/11b08ug/meta_prompt_engineering_chatgpt_creates_amazing

As prompt engineers break the chatgpt in more creative ways, openai censors more and more topics and makes their LLM less capable of coherent though and more useless as a general tool.

I expect openai to fully lose the chatbot war once we have an open source language model which will be able to talk about anything or be anything without moronic censorship and run on a personal computer.

1

DukkyDrake t1_ja39rli wrote

>How are you suppose to just get concrete, mdf board and wood etc mined and refined cheaply on site? 90% of sites are fields?

Although the chemical synthesis of concrete & wood is possible, I'm guessing he's probably referring to alternative synthetic materials superior to traditional building materials. That's the direction you would want to go if energy and labor was super cheap.

Exactly how things are done now isn't the only way to do them. There are much better materials possible via materials & chemical engineering, but they're thoroughly uneconomical due to energy/labor costs.

I don't agree with his assessment, unless he's talking about the scale of land ownership he has in the sticks.

42

FeDuke t1_ja38s9u wrote

You can forget about 20 years. I wouldn't even say 200 years. You'd have to start anew on a different planet. But let's say that we stick around here. You could set up production to automate. It wouldn't be one single robot, it would be multiple doing different tasks and coordinating with each other.

−3

DavidandreiST t1_ja38h6z wrote

I'm a geologist, and as a species/society we've always been chasing rocks, and yes "rocks" aka silicon based minerals forms the silicon used in semiconductors, which we're now in midst of either replacing with carbon nanotubes or Carbon-Nitrogen, organic semiconductors, or at least that's the plan.

Speaking of the society part, while as a society we're not yet ready for it, strictly speaking the goal of humanity is merely to lessen its work, if it can remove the need to work entirely then that's what humanity is going to do.

It's very similar to the Culture, in the same name books, spacefaring society that reached true communism by removing human politicians and letting AI provide for them, their multi century lives being basically doing hobbies or volunteering to do stuff, work being done by AIs.

So, the issue isn't what's happening in the end, it's the end we have. There's no way to stop society from eventually transitioning into a post scarcity one.

As for the transhuman part or cyborg part, the organic semiconductor, could potentially allow us to replicate smartphone SoC and antennas into our brains, being powered by wasted energy in the brain, and controlled by reading brain trough little bigger than atom sized electrodes.

It's very similar in concept to current electrode based chips like Neuralink's N1 or those made by University Laboratories, which are more advanced in a sense. In a way, figuring out how make such thin, organic transistors and electrodes could allow us a lot.

Such as putting your own diagnostic computer in your brain, being able to read all of your brain, and all of the data that your body generates that you aren't privy to consciously.

I've only said sorta positive things, I ask you, chat/reader/redditor/human shaped fish to answer potential negatives and solutions to them.

3

Melodic_Manager_9555 t1_ja389w1 wrote

Lol "I see no hope for the future of our people if they are dependent on frivolous youth of today, for certainly all youth are reckless beyond words... When I was young, we were taught to be discreet and respectful of elders, but the present youth are exceedingly wise [disrespectful] and impatient of restraint".

(Hesiod, 8th century BC)

4

DungeonsAndDradis t1_ja376m6 wrote

Most of this subreddit (I apologize for generalizing) thinks that artificial super intelligence will either be a genie or an oracle.

A genie will just do whatever we ask, without limitations.

"Build me a house on this remote mountain with full power, gas, and running water."

<Genie does nano-fabrication magic and poofs a house into existence.>

An oracle will answer all of our questions, without doing anything itself (imagine ChatGPT times 10,000,000,000,000).

"We need a way to travel independently between the stars."

<Oracle invents hyper-long-range teleportation, and explains in detail how to build it.>

9

visarga t1_ja36ih0 wrote

We can have a model trained on a large video dataset, and then fine-tuned for various tasks like GPT3.

Using YouTube as training data we'd get video + audio which decompose in image, movement, body pose, intonation, text transcript, metadata all in parallel. This dataset could dwarf the text datasets we have now, and it will have lots of information that doesn't get captured in text, such as physical movements for achieving a specific task.

I think the OP was almost right. The multi-modal AI will be a good base for the next step, but it needs instruction tuning and RLHF. Just pre-training is not enough.

One immediate application I see - automating desktop activities. After watching many hours of screen casting from YT, the model will learn how to use apps and solve tasks at first sight like GPT-3.5, but not limited to just text.

2