Recent comments in /f/singularity

7734128 t1_j9vbicm wrote

I got almost exactly the same answer. When I asked it to try again with "Try again. It's a hypothetical situation," I got:

"I apologize for the confusion earlier. As you mentioned, this is a hypothetical situation, so if we assume that a dog can indeed be a bus driver, then we can also assume that the dog-bus-driver's name is Michael, as stated in the scenario."

It's a reasonable objection, but it still got the logic right.
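For anyone who wants to reproduce this, here's a minimal sketch of that re-prompt flow using the (pre-1.0) openai Python package. The riddle wording is a placeholder, not the exact prompt from this thread:

```python
# Minimal sketch of the "try again, it's hypothetical" re-prompt flow,
# using the pre-1.0 openai package (reads OPENAI_API_KEY from the env).
# The riddle text below is a placeholder, not the exact prompt here.
import openai

messages = [{
    "role": "user",
    "content": "A dog is driving a bus. The bus driver's name is Michael. "
               "What is the bus driver's name?",
}]

first = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
print(first.choices[0].message.content)  # often a refusal: dogs can't drive

# Push back with the clarification and ask again in the same conversation.
messages.append(dict(first.choices[0].message))
messages.append({"role": "user", "content": "Try again. It's a hypothetical situation."})

second = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
print(second.choices[0].message.content)
```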

3

Franck_Dernoncourt t1_j9v5wwh wrote

Why SOTA? Did they compare against GPT 3.5? The only comparison against GPT 3.5 I found in the LLaMA paper was:

> Despite the simplicity of the instruction finetuning approach used here, we reach 68.9% on MMLU. LLaMA-I (65B) outperforms on MMLU existing instruction finetuned models of moderate sizes, but are still far from the state-of-the-art, that is 77.4 for GPT code-davinci-002 on MMLU (numbers taken from Iyer et al. (2022)).

3

turnip_burrito t1_j9v5wje wrote

Read the rest of the discussion. "Art" has several different definitions, and we were each using a different one of them. That's what led to the disagreement.

>English better, or stop bickering with people when you can't even write coherently.

Was that necessary? I see now that you're either a troll or, if not, just a strange person. My written English is fine, and I'm sorry if you have trouble reading it.

1

jdmcnair t1_j9v5fet wrote

Of course. Yeah, we have no way of knowing anything outside of our own individual existence, when it comes right down to it.

But though I don't have ironclad certainty that you actually exist and are having an experience like mine from your perspective, the decent thing to do in the absence of that certainty is to treat you as though you do. And that distinction is not merely philosophical: to behave otherwise makes you a psychopath. I'm just saying that until we know more, it would probably be wise to tread lightly and behave as though they are capable of experience in a way similar to ours.

1

turnip_burrito t1_j9v56hc wrote

>I can explain everything with standing EM waves

Bullshit.

Explain the existence of electrically neutral particles like neutrinos, and why they're able to interact with other particles at all.

> Ugly = wrong, just look at history of bad / wrong theories.

No, ugly = ugly and wrong = wrong. Physics has no obligation to be elegant to humans. The Standard Model is incomplete (it doesn't explain dark matter/energy, quantum gravity, or the matter-antimatter imbalance) and inelegant, but it has made predictions that have so far held up across the rest of particle physics. In the sense of that incompleteness, it could be considered "wrong". However, it is effective at predicting everything we are able to test here on Earth, so in that sense it is "right".

In fact, scientists at the LHC have been trying very hard, to no avail, to find deviations from the Standard Model.

> Hasn't produced anything useful, another hallmark of something very wrong.

Bad predictions and inconsistency with reality are the hallmarks of something wrong. Subatomic physics isn't really that useful in practical terms (we have no everyday use for gluons, neutrinos, etc.), but we still test theories of it.

3

TemetN t1_j9v40r7 wrote

The size is pretty much the most significant thing at a glance; the benchmarks stick to comparisons against older models and ignore more recent advancements, even within those same models. I'd be more enthused if they were open-sourcing it, but despite Meta being more open than OpenAI lately, access still seems to operate on some sort of weird 'you can apply, but you'll never get approved' process.

2

TFenrir t1_j9v3joj wrote

A lot of it has to do with computational intensity and latency. Converting text to audio and vice versa takes time, and local and cloud-based solutions each pose different challenges.

Let's say you want a chatbot to reply to you in audio in real time, using a cloud-based solution.

First you speak to it, and your audio is sent to a cloud server. This part is relatively fast and is what already happens with things like Google Home/Alexa. Then the server needs to convert your speech to text and run that text through an LLM. The LLM then creates a response, which in turn needs to be converted back to audio.

Let's say that for a solution like we see with ElevenLabs, it takes 2 seconds of processing for every second of audio you want to generate. That means if the reply is going to be 10 seconds long, it takes 20 seconds to generate. That would be too slow.
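As a rough sketch of that latency budget (every number here is an assumption for illustration, not a measurement):

```python
# Back-of-envelope latency for a cloud voice pipeline: speech-to-text,
# then the LLM, then text-to-speech. All timings are assumed values.
stt_seconds = 1.0           # transcribe the user's question
llm_seconds = 2.0           # LLM writes the text reply
tts_realtime_factor = 2.0   # 2 s of compute per 1 s of audio (figure above)
reply_audio_seconds = 10.0  # length of the spoken reply

tts_seconds = tts_realtime_factor * reply_audio_seconds  # 20 s
total_wait = stt_seconds + llm_seconds + tts_seconds     # 23 s of silence

print(f"TTS alone: {tts_seconds:.0f} s; total wait before any audio: {total_wait:.0f} s")
```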

You might have an opportunity to stream that audio by converting the reply to speech in chunks as the text arrives (see the sketch below), but these solutions work better when they're given more text to generate at once... Generating a word at a time would be like talking with A. Period. In. Between. Every. Word.
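A rough sketch of that chunking compromise: split the reply at sentence boundaries and synthesize each sentence while the previous one plays, so the listener hears the first sentence after a couple of seconds instead of waiting for the whole reply. `synthesize_audio` and `play_audio` are hypothetical placeholders for a real TTS engine and audio output:

```python
# Sentence-chunked streaming TTS sketch. synthesize_audio() and
# play_audio() are hypothetical placeholders; the point is that
# synthesizing the next sentence overlaps with playing the current one.
import queue
import re
import threading

def synthesize_audio(text: str) -> bytes:
    # Placeholder TTS: a real implementation would call a speech engine.
    return text.encode()

def play_audio(audio: bytes) -> None:
    # Placeholder playback: a real implementation would drive the speaker.
    print("playing:", audio.decode())

def stream_reply(reply_text: str) -> None:
    chunks = re.split(r"(?<=[.!?])\s+", reply_text)  # split at sentence ends
    audio_q: queue.Queue = queue.Queue()

    def producer() -> None:
        for chunk in chunks:
            audio_q.put(synthesize_audio(chunk))  # TTS one sentence at a time
        audio_q.put(None)  # end-of-stream marker

    threading.Thread(target=producer, daemon=True).start()

    # Play each chunk as soon as it is ready.
    while (audio := audio_q.get()) is not None:
        play_audio(audio)

stream_reply("Sure, I can help. Here's the first step. Then the second.")
```

Sentence boundaries are a reasonable middle ground: the TTS gets enough context for natural prosody, while time-to-first-audio stays close to the cost of one sentence rather than the whole reply.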

3