Recent comments in /f/MachineLearning

gatorling t1_j7c1b0y wrote

I think the motivations for the two companies differ. What would Google gain from releasing a chatbot? Instead, Google likely aims to introduce LLM capabilities into their search engine in (most likely) subtle, measured, and careful ways, opting for incremental improvements in search backed by rigorous A/B experiments.

OpenAI, on the other hand, gains a lot by releasing an awesome chatbot. They get to generate buzz and secure their next rounds of funding.

20

Eggy-Toast t1_j7bvpp1 wrote

I’ve thought about it. I do not believe AI is going anywhere or will stop taking jobs. We could slow it down, but I don’t see it stopping without running the risk of falling behind as a technological country. There are a lot of dying industries, and we need ways to keep food on those tables regardless of whether the jobs were lost to AI or not. Protections for the worker, not sanctions on AI.

1

Myxomatosiss t1_j7budz6 wrote

If you truly believe that, you haven't studied the human brain. Or any brain, for that matter. There is a massive divide.

Ask it for a joke.

But more importantly, it has no idea what a chair is. It has mapped the associations of the word "chair" to other words, and it can connect them in a convincingly meaningful way, but that's only a simple replication of associative memory. It's lacking so many other functions of a brain.

1

Freed4ever t1_j7brdep wrote

And Kodak invented the digital camera. Just because Google invented it first doesn't necessarily mean anything commercially. Contrary to your statement about it being "not a threat to Google", the fact that they invented it but didn't release it means they thought the technology would be a threat to them, just like Kodak. Now, with the cat out of the bag, Google for sure won't repeat Kodak's mistakes, but it remains to be seen how this will affect them in the long term. It takes six months to form a habit, right? Bing will go live in a few weeks; how long will it take for Google to go live?

15

e-rexter t1_j7bn2tw wrote

The danger, as is often the case, is humans' lack of understanding of the technology, leading to misuse, not the technology itself. Where is the intention of the AI? It is just doing word (or partial-word) completion, feeding on lots of human dystopian content and playing it back to you. You are anthropomorphizing the AI.

1

blablanonymous t1_j7bjjgw wrote

Lol, are you joking? No one is talking about being able to buy a home. I’m talking about being able to afford a one-bedroom. Look up the median rent in SF since 2010. It almost doubled until it recently started decreasing in certain areas. You don’t think rent that doubles is going to push some people onto the street? Do you live in SF? If so, ask someone who has been there for 20 years how the situation has changed over that period.

0

Emotional_Section_59 t1_j7bfaex wrote

>Imagine if AI does destroy millions of jobs and these workers cannot adapt instantly. What do you think will happen?

Those who lost their jobs could be provided with a Universal Basic Income funded by the businesses that made them redundant. That way, businesses save on costs while people don't lose a cent. I concede it's very idealistic, but it's definitely possible, dare I say even likely, should democracy not collapse.

I think it would be more productive to plan ahead in a similar vein to the paragraph above instead of attempting to barricade the march of progress.

3

7366241494 t1_j7bdshf wrote

I talked with someone inside Google who saw the unnerfed version. He said, “I have a CS degree and am pretty clever about asking the right questions to break the Turing Test… and I was very impressed.”

Google invented Transformers, and it’s naïve for people to think ChatGPT is so special that it’s a threat to Google.

58

blablanonymous t1_j7b9vpd wrote

Well, exactly. The question is: can we have progress AND some level of stability for society? Imagine if AI does destroy millions of jobs and these workers cannot adapt instantly. What do you think will happen? Poverty and homelessness. Do you think people will just accept their fate for the greater Progress? No; if it reaches a certain critical point, it will create a lot of instability. How do you think these people will vote? Who do you think politicians will pick as scapegoats to capitalize on that anger? I work in AI. There is a lot of good that can be done with it, but thinking about the impact on society is necessary.

0