Recent comments in /f/singularity

strongaifuturist OP t1_j9v08bt wrote

You can’t even be sure I’m having subjective experiences, and I’m a carbon-based life form! It’s unlikely we’ll make much progress answering the question for LLMs; it quickly becomes philosophical. Anyway, even if it were conscious, it’s not clear what you would do with that. I’m conscious most of the time, but I don’t mind going to sleep or being put under anesthesia. So who knows what a conscious chatbot would want (if anything).

1

knarfomenigo t1_j9uznmh wrote

There is no way to know. Really, any person who gives you a year for the singularity is full of shit.

On one hand, exponential improvement could make it happen sooner than we can even imagine, and many companies and countries don't share their R&D info, so we wouldn't even notice.

On the other hand, many companies like Google who are developing big AIs might face financial problems because of them if they are not very profitable, such as a decrease in the money paid by advertisers or for cloud storage. They are risking their current working model for one in which they are not the monopolistic leaders and which might not be as lucrative as the leading position they have held until now. This is a big danger for AI, because if it turns unprofitable, big tech companies will reduce their investments, thus slowing its development. HOWEVER, Satya Nadella said at the Microsoft presentation for the GPT-Bing integration that "the AI wars have started," so it looks unlikely that investment will go down in the following years.

Here's my "bananas" prediction (PURE FICTION).

In 2023 we will see ChatGPT become just a great user interface, especially when combined with powerful APIs serving updated content. We will be able to choose whether to use it in browser search, but other products will appear that let us interact with video, voice, images... all at once. Most people will be unable to use it, though.

By 2025 Bing will be as widely used as Google, thanks to the functionality of the integration between OpenAI and Microsoft's software. Experts in this software will make big money offering solutions to medium-sized companies to implement it. Many companies will fail to adapt and face big decreases in revenue. Movements of frightened people will create anti-AI trends on social media, and traditional media companies will amplify this fear through anti-AI news, creating social disagreement and political polarization.

By 2030, many European countries will be imposing HUGE bans on AI companies due to the fear and inconvenience felt by workers with graphic arts, design, music, and writing backgrounds. Countries like India, China, Turkey, and Russia will invest strongly in these technologies. Conservative Western parties and the far left will include opposition to big AI companies in their political programs too. Big changes will come to porn, music, and other entertainment industries, in which most content will be AI-produced or AI-enhanced.

By 2040 it will become obvious that countries without AI restrictions achieve a higher level of efficiency in certain industries, and much of the friction created by job destruction will have eased enough for political parties to forget about it. Then it will become a political strategy to bet on it for military reasons, and both left and right will invest heavily in it to manipulate public opinion.

From then on, it's just a matter of time for AI to develop exponentially, maybe 10 more years, maybe 30, for economic as well as military reasons. I don't believe it can be later than 2070 before China or the USA achieves the AI singularity.

1

Tiamatium t1_j9uzmge wrote

Weeks, maybe months.

The larger problem might be long-term memory, but once we figure that out... Actually, no, it is easy to figure out.

So weeks, maybe months, but you will need wifi. And it will be a bit laggy, as in there will be a noticeable delay before it responds. Not long, just noticeable, so that will take a lot of the emotion out of it.

Honestly, this depends on when OpenAI releases the ChatGPT API, because once that's out, it's out. It really is just a quick connection of a voice-to-text API, ChatGPT, and text-to-voice, and that's it.
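The loop being described is simple enough to sketch. Here's a minimal Python version, assuming the three stage functions are hypothetical stubs; in a real build each would call an actual speech-to-text, chat, and text-to-speech API, which is where the network lag mentioned above would come from:

```python
# Sketch of a voice chatbot round trip: audio in -> text -> chat reply -> audio out.
# All three stage functions below are stand-in stubs, not real API calls.

def speech_to_text(audio: bytes) -> str:
    # Stub: pretend we transcribed the audio.
    # A real version would send `audio` to a transcription API.
    return audio.decode("utf-8")

def chat(prompt: str, history: list) -> str:
    # Stub: a real version would send the prompt plus `history`
    # (the "long-term memory" problem mentioned above) to a chat API.
    history.append(prompt)
    return f"echo: {prompt}"

def text_to_speech(text: str) -> bytes:
    # Stub: a real version would synthesize spoken audio.
    return text.encode("utf-8")

def voice_turn(audio: bytes, history: list) -> bytes:
    """One round trip: voice in -> transcript -> chat reply -> voice out."""
    return text_to_speech(chat(speech_to_text(audio), history))
```

Each real network hop in that chain adds latency, which is why the response delay would be noticeable even though the wiring itself is trivial.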

4

MrSickRanchezz t1_j9uzcdc wrote

Wrong. Just because you don't have a whole bunch of clay pots doesn't mean art was scarce. Time exists, dude; it makes objects disappear and get buried if they're not maintained.

It appears you've confused the word 'scarce' with something else. Scarce, by definition, means there was not enough to meet the needs of the population, or that demand was higher than availability.

Art has never had that problem. Art has been suppressed at various points throughout history, but it was still there, and meeting the public demand for it.

Learn English better, or stop bickering with people when you can't even write coherently.

1

MrSickRanchezz t1_j9uz5sb wrote

At no point in history has art ever been scarce on a global scale. The only societies art has EVER even been suppressed in were dictatorships, where someone specifically had a problem with art and killed people for making it. But even then, there's plenty of art from those places.

Not sure who told you art was scarce at some point, but they're wrong, and you're wrong. Hell, we have found art from our non-human ancestors. You're clearly completely out of your element and talking out of your ass here. You'd be wise to quit digging.

1

MrSickRanchezz t1_j9uyzk1 wrote

Humans have always had, and will always have, both emotion and free will. At no point in history has art ever been scarce on a global scale. The only societies art has EVER even been suppressed in were dictatorships, where someone specifically had a problem with art and killed people for making it. But even then, there's plenty of art from those places.

Not sure who told you art was scarce at some point, but they're wrong, and you're wrong. Hell, we have found art from our non-human ancestors. You're clearly completely out of your element and talking out of your ass here. You'd be wise to quit digging.

1

maskedpaki t1_j9ut4v4 wrote

For those wondering about the performance

5-shot performance on MMLU:

Chinchilla: 67.5

This new model: 68.9

Human baseline: 89.8

So it seems a smidge better than Chinchilla on 5-shot MMLU, which many consider to be the important AGI benchmark (it's one of the AGI conditions on Metaculus).

Some nice work by Meta.

30

jdmcnair t1_j9usu83 wrote

True. The meaning of the word "sentience" is highly subjective, so it's not a very useful metric. I think it's more useful to consider whether or not LLMs (or other varieties of AI models) are having a subjective experience during the processing of responses, even if intermittently. They certainly are shaping up to model the appearance of subjective experience in a pretty convincing way. Whether that means they are actually having that subjective experience is unknown, but I think simply answering "no, they are not" would be premature judgment.

1