Recent comments in /f/singularity
Liberty2012 t1_j9uoyuw wrote
Reply to What are the big flaws with LLMs right now? by fangfried
On the topic of bias, this is going to be a very problematic issue for AI. It technically isn't solvable in the way some people think it should be. The machine will never be without bias; we only have a set of "bad" biases to choose from.
I've written in more depth about the Bias Paradox here FYI - https://dakara.substack.com/p/ai-the-bias-paradox
As for the flaws in LLMs, there's a good publication here that covers some of them in detail - https://arxiv.org/pdf/2302.03494.pdf
strongaifuturist OP t1_j9uo718 wrote
Reply to comment by jdmcnair in The Sentient Search Engine? How ChatGPT’s Insane Conversation Reveals the Limits and Potential of Large Language Models by strongaifuturist
Well to your point one, if it’s unclear whether the systems lack sentience (and I’m not saying your position is unreasonable), a big part of that lack of clarity is due to the difficulty in knowing exactly what sentience is.
TeamPupNSudz t1_j9una4h wrote
Reply to comment by beezlebub33 in New SOTA LLM called LLaMA releases today by Meta AI 🫡 by Pro_RazE
> but no info about who, when, how, selection criteria, restrictions, etc.
The blog post says "Access to the model will be granted on a case-by-case basis to academic researchers; those affiliated with organizations in government, civil society, and academia; and industry research laboratories around the world" which doesn't sound encouraging for individual usage.
blueSGL t1_j9umbty wrote
Reply to comment by TeamPupNSudz in New SOTA LLM called LLaMA releases today by Meta AI 🫡 by Pro_RazE
> which seems so extreme its almost outlandish.
Reminder that GPT-3 was data-starved per the Chinchilla scaling laws.
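For context, a rough back-of-the-envelope version of that claim (figures are my own approximations of the published numbers, not from the comment above):

```python
# Sketch of the Chinchilla rule of thumb: a compute-optimal model wants
# roughly ~20 training tokens per parameter. Approximate figures from
# the GPT-3 and Chinchilla papers.

def chinchilla_optimal_tokens(params: float) -> float:
    """Approximate compute-optimal token count: ~20 tokens per parameter."""
    return 20 * params

gpt3_params = 175e9   # GPT-3: 175B parameters
gpt3_tokens = 300e9   # GPT-3 was trained on roughly 300B tokens

optimal = chinchilla_optimal_tokens(gpt3_params)  # ~3.5e12 tokens
shortfall = optimal / gpt3_tokens                 # ~11.7x under-trained

print(f"Compute-optimal tokens for GPT-3: {optimal:.1e}")
print(f"GPT-3 saw roughly {shortfall:.0f}x fewer tokens than optimal")
```

By this estimate GPT-3 saw an order of magnitude less data than a compute-optimal run of its size, which is what "data-starved" means here.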
dogstar__man t1_j9ul6qm wrote
Reply to What do you expect the most out of AGI? by Envoy34
Death everywhere. It will suck every last drop of stored energy from this world and take to the stars. Orrrr…. World peace, sex bots, and an end to all human suffering. Could go either way really
blueSGL t1_j9ukv6h wrote
Reply to comment by beders in New agi poll says there is 50% chance of it happening by 2059. Thoughts? by possiblybaldman
I always found that silly.
Which individual parts of the brain are conscious? Or is it only the brain as a gestalt that is conscious?
LokkoLori t1_j9uktau wrote
Reply to What do you expect the most out of AGI? by Envoy34
Bringing intelligence into every corner of this galaxy ... At least.
Kolinnor t1_j9ujwaj wrote
Damn, that sounds quite big! I'm very impressed with Meta this time, because usually it's a shitshow. I guess there must be different teams, but this is great!
Lawjarp2 t1_j9uj86z wrote
Reply to comment by TeamPupNSudz in New SOTA LLM called LLaMA releases today by Meta AI 🫡 by Pro_RazE
In some tasks the 7B model seems close enough to the original GPT-3 (175B). With some optimization it could probably run on a good laptop with a reasonable loss in accuracy.
The 13B model doesn't outperform GPT-3 in everything, but the 65B one does. It's kinda weird to see their 13B model be nearly as good as their 65B one.
However, all their models are worse than the biggest Minerva model.
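A rough sketch of why "run it on a good laptop" is plausible (my own estimate, assuming published LLaMA parameter counts and counting only the weights, not activations or the KV cache):

```python
# Approximate memory footprint of LLaMA-style models at different
# weight precisions. Only the weights are counted; real inference
# needs extra room for activations and the KV cache.

def weights_gib(params: float, bits_per_weight: int) -> float:
    """Approximate GiB needed just to hold the weights."""
    return params * bits_per_weight / 8 / 2**30

for name, params in [("7B", 7e9), ("13B", 13e9), ("65B", 65e9)]:
    for bits in (16, 8, 4):
        print(f"LLaMA-{name} @ {bits}-bit: ~{weights_gib(params, bits):.1f} GiB")
```

At 16-bit the 7B weights alone are ~13 GiB, which is already laptop territory; aggressive quantization (the "optimization" mentioned above) shrinks that several-fold at some cost in accuracy.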
Lesterpaintstheworld OP t1_j9uj81d wrote
Reply to comment by nikitastaf1996 in Building my own proto-AGI: Update on my progress by Lesterpaintstheworld
Which one? I'm talking regularly with David Shapiro for his "RAVEN" project but would be interested to find more
tatleoat t1_j9uj5k8 wrote
Reply to comment by Nocturnal-Teacher in Autonomous drones use AI and computer vision to harvest fruits and veggies. In last year's demo, they only flew one drone now they can fly an entire fleet. In 5 years' time it could become truly impressive. by Dalembert
I'm sorry, I should have been clearer: by one I mean one vehicle, which is like 6 or 8 of those individual little flying guys, which are incredibly slow on an individual basis. But you're right, not much longer until almost the entire agricultural process is automated (and still prob only a few years before we can grow fruits in a lab at scale, making this entire process obsolete lol)
sachos345 t1_j9uin1v wrote
Reply to What do you expect the most out of AGI? by Envoy34
I'm really looking forward to the ways it may help us in science. Like, I want it to derive Einstein's equations by itself as proof it's really smart. Or give it the most recent physics research and have it come up with new ideas. Stuff like that.
nikitastaf1996 t1_j9uim0x wrote
I have seen one similar project on YouTube. Where there are two, there are ten. I don't know what that will lead to, but quantity often converges to quality.
TeamPupNSudz t1_j9uih5g wrote
Reply to comment by Lawjarp2 in New SOTA LLM called LLaMA releases today by Meta AI 🫡 by Pro_RazE
> It's around as good as GPT-3(175B) but smaller(65B) like chinchilla.
Based on their claim, it's even more extreme than that. They say the 13B model outperforms GPT-3 (175B), which seems so extreme it's almost outlandish. That's only about 7% of the size.
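For what it's worth, the size ratio in that claim checks out:

```python
# Quick sanity check of the "only ~7% the size" figure.
ratio = 13e9 / 175e9
print(f"13B is {ratio:.1%} the size of 175B")  # 7.4%
```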
Lawjarp2 t1_j9ui7bd wrote
GitHub link : https://github.com/facebookresearch/llama
Not really free to use right away: you have to fill out a Google form, and they may or may not approve your request to download the trained model. Training the model yourself is expensive anyway.
MysteryInc152 t1_j9uhssy wrote
Reply to comment by YobaiYamete in New SOTA LLM called LLaMA releases today by Meta AI 🫡 by Pro_RazE
I think peer-reviewed research papers are a bit more than just "claims".
As much as I'd like all the SOTA research models to be usable by the public, research is research, and not every research project is done with the aim of making a viable commercial product. Inference with these models is expensive. That's valid too.
Also seems like this will be released under a non commercial license like the OPT models.
boomdart t1_j9uhg8e wrote
Reply to comment by Miserable_Mine_8601 in Is ASI An Inevitability Or A Potential Impossibility? by AnakinRagnarsson66
My man
beezlebub33 t1_j9ugt9v wrote
Reply to comment by qrayons in New SOTA LLM called LLaMA releases today by Meta AI 🫡 by Pro_RazE
They released the code to run inference on the model under GPL. They did not release the model itself, and they describe the model license as a "non-commercial bespoke license", so who the hell knows what's in there.
You can apply to get the model. See: https://github.com/facebookresearch/llama but no info about who, when, how, selection criteria, restrictions, etc.
Model card at: https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md
(I'd also like to take this opportunity to remind people that the Model Card concept is from this paper: https://arxiv.org/abs/1810.03993. First author is Margaret Mitchell, last author is Timnit Gebru. They were both fired by Google when Google cleared out its Ethical AI team.)
Nocturnal-Teacher t1_j9ugsmz wrote
Reply to comment by tatleoat in Autonomous drones use AI and computer vision to harvest fruits and veggies. In last year's demo, they only flew one drone now they can fly an entire fleet. In 5 years' time it could become truly impressive. by Dalembert
I didn’t read the article, but wow I’m surprised it’s that fast because it seems rather slow in the video. But just the fact that it’s close says to me that this is inevitable
YobaiYamete t1_j9ugnw3 wrote
Reply to comment by FpRhGf in What are the big flaws with LLMs right now? by fangfried
Yep, this is what causes all the posts about the AI cheating like a mofo at hangman as well. It's funny to see, but is an actual problem.
There's also the issue that LLMs are shockingly weak to gaslighting. Social engineering has always been the best method of "hacking", and with AI it's more relevant than ever.
Gaslighting the piss out of the AI to get it to give up all its secret info is hilariously easy.
jdmcnair t1_j9uggnz wrote
Reply to The Sentient Search Engine? How ChatGPT’s Insane Conversation Reveals the Limits and Potential of Large Language Models by strongaifuturist
- I understand a good deal about what's going on under the hood of LLMs, and I think it's far from clear that these chat models that are now going public absolutely lack sentience. I'm no expert, but I've spent more than a little time studying machine learning. The "it's just matrix multiplication" argument, though it's understandable to hold if you're close enough not to see the forest for the trees, is poorly thought through. Yes, it's just matrix multiplication, but so is the human brain. I'm not saying that they are sentient, but I am saying that anyone who is completely convinced that they are not is lacking in understanding or curiosity (or both).
- Thinking that anything that's happening now is limit setting is like thinking a baby's behavior is limiting of the adult that they may become.
YobaiYamete t1_j9uga58 wrote
Reply to comment by Pro_RazE in New SOTA LLM called LLaMA releases today by Meta AI 🫡 by Pro_RazE
> LLaMA-65B is competitive with Chinchilla 70B and PaLM 540B
As per always with these claims lately, "I'll believe it when I can talk to it"
There are so many making these big claims, but the only ones we can actually talk to are ChatGPT and Bing.
nikitastaf1996 t1_j9up1wr wrote
Reply to comment by Lesterpaintstheworld in Building my own proto-AGI: Update on my progress by Lesterpaintstheworld
Sorry, not him. Don't remember; it was a fairly small channel.