Recent comments in /f/technology

Strazdas1 t1_jcedkvb wrote

Of course they are going to rebrand it. Eugenics as a word has so much negative baggage that this may be the first time I didn't get downvoted to hell for saying it's not a bad thing.

And that's why we have to push for this technology to be available to everyone, so the class divide doesn't get so wide. The rich are going to use it whether we like it or not; the only way we can level the field is to use it ourselves.

1

dwarfarchist9001 t1_jce8cs6 wrote

Because Google keeps canceling projects and refusing to release products. Google invented the transformer architecture, which is what the T in GPT stands for, and then did nothing with it for years. Just last week Google published its PaLM-E paper, in which it re-trained its PaLM LLM to be multimodal, including the ability to control robots. Before the paper was even published, Google did what it usually does with successful projects and shut down the Everyday Robots team that developed it.

1

sosomething t1_jcdyqms wrote

Well, here's the awesome thing about geography: there's lots of it.

You don't have to live in a place with a high cost of living.

In most of those places (coastal cities, Chicago, NY), the increase in your wages isn't commensurate with the increased cost of living compared to an emerging city or town. Someone making $80k in Cincinnati lives a lot better than the same person making $120k in Chicago. And that's only an issue if your skills are the type that aren't marketable in towns without a tech sector. Although with emergent decentralized workforces and remote work, even that is becoming less of a factor.

I could do my job from anywhere with a halfway decent ISP, which means I could do it for any company that wanted to hire me and then live more or less wherever I wanted. Why, then, would I choose to pay $2000/mo for a shitty apartment in a major city when I could live like a king somewhere else?

0

CactusSmackedus t1_jcdmxe8 wrote

Bing kicks ass lol. I can get quick and easy citations of the USC if I know vaguely what I'm looking for, find papers way more easily than with Scholar search if, again, I vaguely know what I want, and get pointed to the correct philosophical concepts (and even have a philosophical discussion with it). Really, it's the best way to find stuff on the web right now.

Just tell it what you're looking for and voila

Also a great quick reference for games, although as you'd expect, it's better at knowing things about newer, popular games.

1

AaruIsBoss t1_jcdmktm wrote

“Blood alone moves the wheels of history! Have you ever asked yourselves in an hour of meditation, which everyone finds during the day, how long we have been striving for greatness? Not only the years we've been at war, the war of work, but from the moment as a child when we realized that the world could be conquered. It has been a lifetime struggle. A never-ending fight. I say to you, and you will understand that it is a privilege to fight! We are warriors! Salesmen of north-eastern Pennsylvania, I ask you once more: Rise and be worthy of this historical hour! No revolution is worth anything unless it can defend itself! Some people will tell you salesman is a bad word. They'll conjure up images of used car dealers and door-to-door charlatans. This is our duty: to change their perception. I say salesmen... and women of the world unite! We must never acquiesce, for it is together, TOGETHER, THAT WE PREVAIL! We must never cede control of the motherland! For it is together that we prevail!”

13

CactusSmackedus t1_jcdmi15 wrote

It's not; the commenter doesn't know what they're talking about. There's a paper out in the last few days (I think) showing that weaker models can be fine-tuned on input/output from a stronger model and approximate the better model's results. This implies any model with paid or unpaid API access could be subject to a sort of cloning. It suggests that competitive moats will not hold.
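
The cloning recipe is simple enough to sketch. A minimal, hypothetical version (`query_strong_model` is a stand-in for whatever API you're mimicking, and the prompt/completion JSONL layout is just one common fine-tuning data format, not the specific paper's pipeline):

```python
import json

def query_strong_model(prompt: str) -> str:
    # Placeholder: in practice this would be an API call to the stronger model.
    return f"(strong model's answer to: {prompt})"

def build_distillation_set(prompts, path="distill.jsonl"):
    """Collect prompt/completion pairs from the strong model and write
    them as JSONL, ready to use as supervised fine-tuning data for a
    weaker model."""
    records = [{"prompt": p, "completion": query_strong_model(p)}
               for p in prompts]
    with open(path, "w") as f:
        for r in records:
            f.write(json.dumps(r) + "\n")
    return records

pairs = build_distillation_set(["Explain transformers in one sentence."])
print(len(pairs), sorted(pairs[0]))
```

The point is that nothing here requires access to the strong model's weights, only its outputs, which is why API access alone is enough for this kind of cloning.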

Plus (I have yet to reproduce this, since I've been away from my machine), APPARENTLY Facebook's model weights got leaked in the last week, and apparently someone managed to run the full 60B-weight model on a Raspberry Pi (very, very slowly). Two implications:

  1. "Stealing" weights continues to be a problem, this isn't the first set of model weights to get leaked iirc, and once you have a solid set of model weights out, experience with stable diffusion suggests there might could be an explosion of use and fine tuning.

  2. Very, very surprisingly (I am going to reproduce it if I can, because if true this is amazingly cool), consumer-grade GPUs can run these LLMs in some fashion. Previous open-sourced LLMs that fit in under 16GB of VRAM are super disappointing, because to get the model small enough to fit on the card you have to limit the number of input tokens, which means the model "sees" very few words of input with which to produce output. Pretty useless.
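
The VRAM math behind point 2 is roughly parameter count times bytes per parameter. A back-of-the-envelope sketch (7B parameters is an illustrative size, not the leaked model's; this ignores activations and the KV cache, which is what the input-token limit trades against):

```python
def weight_memory_gb(n_params: float, bits_per_param: int) -> float:
    """Approximate memory needed just to hold the model weights."""
    return n_params * bits_per_param / 8 / 1e9

# A 7B-parameter model at different precisions:
for bits in (32, 16, 8, 4):
    print(f"{bits:2d}-bit: {weight_memory_gb(7e9, bits):.1f} GB")
# 32-bit needs 28 GB, but 4-bit quantization needs only 3.5 GB,
# which is why aggressive quantization suddenly makes these models
# plausible on consumer cards (and even a Raspberry Pi, swapping slowly).
```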

Now, I don't think we'll have competitive LLMs running on GPUs at home this year. But even if OpenAI continues to be super lame and political about its progress, eventually the moat will fall.

Also, all the money to be made (aside from Bing eating Google), or maybe I should say most of the value, is going to be captured by skilled consumers/users of LLMs, not by glorified compute providers.

2