Recent comments in /f/singularity

Liberty2012 t1_j9uoyuw wrote

On the topic of bias, this is going to be a very problematic issue for AI. It technically is not solvable in the way that some people think it should be. The machine will never be without bias; we only have a set of "bad" biases to choose from.

I've written in more depth about the Bias Paradox here FYI - https://dakara.substack.com/p/ai-the-bias-paradox

As for the flaws in LLMs, there was a good publication here that covers some of that in detail - https://arxiv.org/pdf/2302.03494.pdf

2

TeamPupNSudz t1_j9una4h wrote

> but no info about who, when, how, selection criteria, restrictions, etc.

The blog post says "Access to the model will be granted on a case-by-case basis to academic researchers; those affiliated with organizations in government, civil society, and academia; and industry research laboratories around the world" which doesn't sound encouraging for individual usage.

15

Lawjarp2 t1_j9uj86z wrote

In some tasks the 7B model seems close enough to the original gpt-3 175B. With some optimization it probably can be run on a good laptop with a reasonable loss in accuracy.

13B doesn't outperform it in everything, but the 65B one does. It's kinda weird to see their 13B model be nearly as good as their 65B one.

However, all their models are worse than the biggest Minerva model.
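For reference, a minimal sketch of the kind of optimization meant here, assuming plain post-training int8 weight quantization (real schemes like LLM.int8() or GPTQ quantize per-channel and handle outliers more carefully):

```python
# Symmetric per-tensor int8 quantization: weights shrink ~4x vs fp32,
# which is what makes a 7B model plausible on a good laptop.
import numpy as np

def quantize_int8(w):
    """Map fp32 weights onto the int8 range [-127, 127]."""
    scale = np.abs(w).max() / 127.0
    q = np.round(w / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate fp32 weights at inference time."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(256, 256)).astype(np.float32)
q, scale = quantize_int8(w)

# 4x smaller, and the reconstruction error stays small.
rel_err = np.abs(w - dequantize(q, scale)).max() / np.abs(w).max()
print(f"{q.nbytes} bytes vs {w.nbytes}; max relative error {rel_err:.4f}")
```

The "reasonable loss in accuracy" is exactly this rounding error compounding through the layers.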

4

tatleoat t1_j9uj5k8 wrote

I'm sorry, I should have been clearer: by one I mean one vehicle, which is like 6 or 8 of those individual little flying guys, which are incredibly slow on an individual basis. But you're right, not much longer until almost the entire agricultural process is automated (and still prob only a few years before we can grow fruits in a lab at scale, making this entire process obsolete lol)

5

sachos345 t1_j9uin1v wrote

I'm really looking forward to the ways it may help us in science. Like, I want it to derive Einstein's equations by itself as proof it's really smart. Or give it the most recent physics research and have it come up with new ideas. Stuff like that.

1

MysteryInc152 t1_j9uhssy wrote

I think peer-reviewed research papers are a bit more than just "claims".

As much as I'd like all the SOTA research models to be usable by the public, research is research, and not every research project is done with the interest of making a viable commercial product. Inference with these models is expensive. That's valid too.

Also, it seems like this will be released under a non-commercial license like the OPT models.

37

beezlebub33 t1_j9ugt9v wrote

They released code to run inference on the model under GPL. They did not release the model, and they describe the model license as a "Non-commercial bespoke license", so who the hell knows what's in there.

You can apply to get the model. See: https://github.com/facebookresearch/llama but no info about who, when, how, selection criteria, restrictions, etc.

Model card at: https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md

(I'd also like to take this opportunity to remind people that the Model Card concept is from this paper: https://arxiv.org/abs/1810.03993. First author is Margaret Mitchell, last author is Timnit Gebru. They were both fired by Google when Google cleared out its Ethical AI Team.)

5

Nocturnal-Teacher t1_j9ugsmz wrote

I didn't read the article, but wow, I'm surprised it's that fast, because it seems rather slow in the video. But just the fact that it's close says to me that this is inevitable.

2

YobaiYamete t1_j9ugnw3 wrote

Yep, this is what causes all the posts about the AI cheating like a mofo at hangman as well. It's funny to see, but it is an actual problem.

There's also the issue that LLMs are shockingly weak to gaslighting. Social engineering has always been the best method of "hacking", and with AI it's more relevant than ever.

Gaslighting the piss out of the AI to get all of its secret info is hilariously easy.
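The hangman failure comes down to tokenization: the model sees token IDs, not letters. A toy illustration (vocabulary and greedy longest-match rule made up for the sketch; real models use BPE vocabs of tens of thousands of tokens):

```python
# Toy tokenizer showing why letter-level games trip up LLMs: a whole
# word can collapse into a single opaque ID, so per-letter information
# is never directly visible to the model.
toy_vocab = {"hangman": 103, "hang": 101, "man": 102,
             "h": 1, "a": 2, "n": 3, "g": 4, "m": 5}

def tokenize(text, vocab):
    """Greedy longest-match tokenizer (a crude stand-in for BPE)."""
    ids, i = [], 0
    while i < len(text):
        for j in range(len(text), i, -1):
            if text[i:j] in vocab:
                ids.append(vocab[text[i:j]])
                i = j
                break
        else:
            i += 1  # skip characters not in the vocab
    return ids

print(tokenize("hangman", toy_vocab))  # → [103]: one ID, no letters
```

A model playing hangman over IDs like `[103]` has to *infer* the word's spelling rather than read it, which is why it so often "cheats" by changing the word mid-game.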

8

jdmcnair t1_j9uggnz wrote

  1. I understand a good deal about what's going on under the hood of LLMs, and I think it's far from clear that these chat models that are now going public absolutely lack sentience. I'm no expert, but I've spent more than a little time studying machine learning. The "it's just matrix multiplication" argument, though it's understandable to hold if you're close enough not to see the forest for the trees, is poorly thought through. Yes, it's just matrix multiplication, but so is the human brain. I'm not saying that they are sentient, but I am saying that anyone who is completely convinced that they are not is lacking in understanding or curiosity (or both).
  2. Thinking that anything happening now sets the limits is like thinking a baby's behavior limits the adult they may become.
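To make the "it's just matrix multiplication" point concrete, here's a minimal sketch of a network forward pass (sizes and weights arbitrary, for illustration only). The core of every transformer layer really is this operation, repeated:

```python
# A two-layer forward pass: each layer is one matrix multiply plus a
# simple elementwise nonlinearity. Nothing else is hiding in there.
import numpy as np

def dense(x, w, b):
    return x @ w + b  # the matrix multiplication in question

def relu(x):
    return np.maximum(x, 0.0)

rng = np.random.default_rng(42)
x = rng.normal(size=(1, 16))                      # an input activation
w1, b1 = rng.normal(size=(16, 32)), np.zeros(32)  # layer 1 weights
w2, b2 = rng.normal(size=(32, 4)), np.zeros(4)    # layer 2 weights

y = dense(relu(dense(x, w1, b1)), w2, b2)
print(y.shape)  # (1, 4)
```

The argument above is that this description is true but uninformative: it describes the substrate, not what billions of such operations collectively compute.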
1