Recent comments in /f/singularity
NutInBobby t1_j9vcy4n wrote
Reply to OpenAI’s roadmap for AGI and beyond by yottawa
Crazy that Bing Chat led me here as a source, this was posted 17 minutes ago.
Lesterpaintstheworld OP t1_j9vcrao wrote
Reply to comment by AwesomeDragon97 in Building my own proto-AGI: Update on my progress by Lesterpaintstheworld
- Localhost + API calls to GPT-3
- At the moment I'd say slightly worse, but I'm working to get there. Plus, eventually I'll make calls to GPT-3.5 / 4
- Yeah I'm all for that
Iffykindofguy t1_j9vcntx wrote
You know the Fermi paradox isn't like some naturally occurring law, right?
turnip_burrito t1_j9vcj3u wrote
Reply to comment by 7734128 in What are the big flaws with LLMs right now? by fangfried
That's good. My example was from back in December, so maybe they changed it.
7734128 t1_j9vbicm wrote
Reply to comment by turnip_burrito in What are the big flaws with LLMs right now? by fangfried
I got almost exactly the same answer. When I asked it to try again with "Try again. It's a hypothetical situation," I got
"I apologize for the confusion earlier. As you mentioned, this is a hypothetical situation, so if we assume that a dog can indeed be a bus driver, then we can also assume that the dog-bus-driver's name is Michael, as stated in the scenario."
It's a reasonable objection, but it still got the logic.
7734128 t1_j9vb02f wrote
Reply to comment by Economy_Variation365 in What are the big flaws with LLMs right now? by fangfried
Truly, artificial intelligence had surpassed us all. We are humbled by its greatness and feel foolish for thinking that dogs could drive.
Borrowedshorts t1_j9var8b wrote
Why are people so skeptical of published results? This is how exponential progress works, smaller models today can perform better than larger models of a couple years ago.
BarockMoebelSecond t1_j9v9nu7 wrote
Reply to comment by Tiamatium in When will AI chatbots speak with us through audio? by [deleted]
There's already a GPT3 API.
BarockMoebelSecond t1_j9v8ia9 wrote
Reply to comment by AwesomeDragon97 in Building my own proto-AGI: Update on my progress by Lesterpaintstheworld
It is almost certainly worse, through no fault of the OP.
ChronoPsyche t1_j9v75hx wrote
Reply to comment by Tiamatium in When will AI chatbots speak with us through audio? by [deleted]
Wait till you find out about GPT3. Lol.
Franck_Dernoncourt t1_j9v5wwh wrote
Why SOTA? Did they compare against GPT 3.5? The only comparison against GPT 3.5 I found in the LLaMA paper was:
> Despite the simplicity of the instruction finetuning approach used here, we reach 68.9% on MMLU. LLaMA-I (65B) outperforms on MMLU existing instruction finetuned models of moderate sizes, but are still far from the state-of-the-art, that is 77.4 for GPT code-davinci-002 on MMLU (numbers taken from Iyer et al. (2022)).
turnip_burrito t1_j9v5wje wrote
Reply to comment by MrSickRanchezz in US Copyright Office: You Can't Copyright Images Generated Using AI by vadhavaniyafaijan
Read the rest of the discussion. "Art" has several different definitions, and we were using two of those different definitions. This led to disagreement.
>English better, or stop bickering with people when you can't even write coherently.
Was that necessary? I see now that you're either a troll, or if not, a strange person. My written English is fine, and I'm sorry if you have trouble reading it.
turnip_burrito t1_j9v5jro wrote
Reply to comment by MrSickRanchezz in US Copyright Office: You Can't Copyright Images Generated Using AI by vadhavaniyafaijan
You're clearly missing the point.
jdmcnair t1_j9v5fet wrote
Reply to comment by strongaifuturist in The Sentient Search Engine? How ChatGPT’s Insane Conversation Reveals the Limits and Potential of Large Language Models by strongaifuturist
Of course. Yeah, we have no way of knowing anything outside of our own individual existence, when it comes right down to it.
But, though I don't have ironclad certainty that you actually exist and are having an experience like mine from your perspective, the decent thing to do in the absence of certainty is to treat you as though you are. And that distinction is not merely philosophical. To behave otherwise makes you a psychopath. I'm just saying until we know more, it'd probably be wise to tread lightly and behave as though they are capable of experience in a way similar to what we are.
turnip_burrito t1_j9v5eum wrote
Reply to comment by MrSickRanchezz in US Copyright Office: You Can't Copyright Images Generated Using AI by vadhavaniyafaijan
You have a particular definition of art that gives you this view. There are other definitions of art that will provide a different view.
turnip_burrito t1_j9v56hc wrote
Reply to comment by Terminator857 in What do you expect the most out of AGI? by Envoy34
>I can explain everything with standing EM waves
Bullshit.
Explain the existence of electrically neutral particles like neutrinos and why they're able to interact at all with other particles.
> Ugly = wrong, just look at history of bad / wrong theories.
No, ugly = ugly and wrong = wrong. Physics has no reason to be elegant to humans. The Standard Model is incomplete (it doesn't explain dark matter/energy, quantum gravity, or the antimatter imbalance) and inelegant, but it has made predictions which, up until now, have held for the rest of particle physics. In the sense of incompleteness, it could be considered "wrong". However, it is effective at predicting everything we are able to test here on Earth, so in that sense it is "right".
In fact, scientists at the LHC have been trying very hard, to no avail, to find deviations from the Standard Model.
> Hasn't produced anything useful, another hallmark of something very wrong.
Bad predictions and inconsistency with reality are the hallmark of something wrong. Subatomic physics isn't really that useful (we have no real use for gluons, neutrinos, etc), but we still do test theories of it.
MysteryInc152 t1_j9v4fru wrote
Reply to comment by maskedpaki in New SOTA LLM called LLaMA releases today by Meta AI 🫡 by Pro_RazE
Flan-PaLM hits 75 on MMLU. Instruction finetuning/alignment and CoT would improve performance even further.
nklarow t1_j9v4fct wrote
Reply to comment by redroverdestroys in Seriously people, please stop by Bakagami-
Keep going, I'm getting my popcorn ready.
aionskull t1_j9v45m8 wrote
I voice chat with Bing Chat on my phone now, so it's here.
MysteryInc152 t1_j9v40u0 wrote
Reply to comment by turnip_burrito in What are the big flaws with LLMs right now? by fangfried
It answers it consistently. I don't think Bing is based on ChatGPT. It answers all sorts of questions correctly that might trip up ChatGPT. Microsoft is being tight-lipped about exactly which model it is, though.
TemetN t1_j9v40r7 wrote
At a glance, the size is pretty much the most significant thing; the benchmarks stick to comparing against older models and ignore more recent advancements, even in those models. I'd be more enthused if they were open-sourcing it, but despite Meta being more open than OpenAI lately, access still seems to operate on some sort of weird "can apply, but you'll never get approved" process.
GPT-5entient t1_j9v3xjw wrote
Reply to comment by Ale_Alejandro in What do you expect the most out of AGI? by Envoy34
>They aren’t opening the models out to the public
Meta just literally did that though...
redroverdestroys t1_j9v3qr0 wrote
Reply to comment by nklarow in Seriously people, please stop by Bakagami-
you want to censor people from posting content that you don't like
you get off on people not liking censorship
you are actually psychotic.
TFenrir t1_j9v3joj wrote
A lot of it has to do with computational intensity and latency. Text-to-audio and vice versa take a bit of time, and they pose different challenges for local versus cloud-based solutions.
Let's say you want a chatbot to reply to you in real time in audio, with a cloud-based solution.
First you speak to it in audio, and that is sent to a cloud server - this part is relatively fast, and what already happens with things like Google home/Alexa. Then it needs to convert it to text, and run that text through an LLM. Then the LLM creates a response, and that needs to be converted to audio.
Let's say for a solution like we see with elevenlabs, it takes 2 seconds for every second of audio you want to generate. That means if the reply is going to be 10 seconds, it takes 20 seconds to generate. That would be too slow.
You might have an opportunity to stream that audio by converting only part of the text to audio before the full response is ready, but these solutions work better when they're given more text to generate at once... Generating one word at a time would be like talking with A. Period. In. Between. Every. Word.
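The tradeoff above can be put in back-of-envelope terms. All numbers here are illustrative assumptions (the 2x synthesis factor is from the elevenlabs example above; the chunk sizes are made up):

```python
# Rough latency model for the cloud TTS pipeline described above.
# All numbers are illustrative assumptions, not measurements.

def time_to_first_audio(reply_seconds: float,
                        gen_factor: float = 2.0,
                        chunk_seconds: float = 10.0) -> float:
    """Seconds the user waits before any audio starts playing.

    gen_factor:    synthesis time per second of audio (assumed 2x).
    chunk_seconds: how much audio is synthesized before playback starts.
                   Smaller chunks start sooner, but per the comment above,
                   TTS tends to sound better with more text at once.
    """
    first_chunk = min(reply_seconds, chunk_seconds)
    return first_chunk * gen_factor

# Waiting for the whole 10-second reply: 20 s of dead air.
print(time_to_first_audio(10.0, chunk_seconds=10.0))  # 20.0
# Streaming in 2-second chunks: audio starts after 4 s instead.
print(time_to_first_audio(10.0, chunk_seconds=2.0))   # 4.0
```

Streaming only helps the time-to-first-audio, of course; the total synthesis time is the same either way, so playback can still stall if generation falls behind.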
yottawa OP t1_j9vdmdr wrote
Reply to comment by NutInBobby in OpenAI’s roadmap for AGI and beyond by yottawa
Crazy! Does Bing Chat scan the internet instantly? What did you ask Bing?