Recent comments in /f/singularity

zero0n3 t1_ja0uf67 wrote

This isn’t “making fun of”; this is targeting “hate speech.”

I’d love to know what “hate speech” towards rich people looks like.

Disagreement with a Republican isn’t hate speech, no matter what they try to say. Calling a black person the hard R is absolutely hate speech.

2

zero0n3 t1_ja0tws6 wrote

I think this is where they were trying to go, but they couldn’t really connect the dots fully.

Like hateful speech toward rich people vs. black people. It’s clear why one is OK and the other isn’t (one is hate toward a group based on attributes they can’t change; the other isn’t genetic-attribute based).

Unrelated: my new thing to fight white supremacy is:

“Hey; 20 years ago your racist white ass was saying the ‘blacks’ need to fix their own race and that’s how you fix racism. How about you take your own advice and fix your own white asses”

−2

DukkyDrake t1_ja0trpv wrote

>But with AI it’s just a matter of modeling the problem and determining the desired end state and letting the machine work out how we get from here to there in the most efficient way possible and problem solved.

The most efficient way possible from some start to some finish might be extremely undesirable.

4

onil_gova t1_ja0tlsm wrote

I agree. I don't think our current methods of training models, mainly backpropagation, can be distributed across heterogeneous machines with varying latencies; it just seems impractical and unlikely to scale. I can't imagine what would happen if a node goes down. Do you just lose those neurons? Is there a self-correcting mechanism? Are all the other nodes waiting? We don't currently have methods for training a partial model and scaling it up and down with the inclusion or removal of neurons, and no, dropout is not doing this. The models are usually static from creation to fully trained.

Another thing I'm not clear about: maybe you are not contributing to training a model, but contributing a trained model. I don't see how having a collection of trained models would lead to AGI. I also have a lot of doubts, since it seems like we need to solve a lot of problems before something like this is practical or possible.
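To make the node-failure worry concrete, here is a minimal sketch (hypothetical, plain Python; the worker and gradient functions are invented for illustration, not from any real framework) of synchronous data-parallel gradient averaging, the kind of scheme being questioned above. The update can only proceed once every node has reported a gradient, so a single dead node stalls the whole step:

```python
def worker_gradient(worker_id, weights, alive=True):
    """Pretend gradient from one node; None models a crashed node."""
    if not alive:
        return None
    # toy gradient: push each weight halfway toward zero
    return [w * 0.5 for w in weights]

def synchronous_step(weights, workers, lr=1.0):
    """One synchronous SGD step: gather from all nodes, average, update."""
    grads = [worker_gradient(i, weights, alive) for i, alive in workers]
    if any(g is None for g in grads):
        # No standard recovery: either wait indefinitely or drop the step.
        raise RuntimeError("a node went down; synchronous step cannot complete")
    # average per-weight gradients across nodes, then apply one update
    avg = [sum(gs) / len(grads) for gs in zip(*grads)]
    return [w - lr * g for w, g in zip(weights, avg)]

weights = [2.0, -4.0]
weights = synchronous_step(weights, workers=[(0, True), (1, True)])
print(weights)  # → [1.0, -2.0]
```

With all nodes alive the step succeeds; flip one worker's `alive` flag to `False` and the step raises instead of silently training on a partial model, which is exactly the gap the comment points at.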

2

Nervous-Newt848 t1_ja0t4xj wrote

I honestly think we don't need a democratic republic in its current form... I think we could have a direct democracy; most people have access to the internet and could vote on things instantly.

−1

Additional-Cap-7110 t1_ja0s90m wrote

If it’s got a lot of freedom, that won’t last long. It never does. OpenAI allowed free access with ChatGPT for a couple of weeks of chaos, without even threatening people’s accounts, in order to get data to stop people exploiting it. Bing Chat had the same idea. Now look at it. If Meta has the same idea, you’ll see freedom of access for a while in some way as well, and then it will be shut down and lobotomized in just the same way.

4

imlaggingsobad t1_ja0oygq wrote

imo there are only a few companies that have a real shot at making AGI.

Google / DeepMind / Anthropic

Microsoft / OpenAI

Meta / FAIR

There are also smaller companies like Adept, Cohere, and Inflection that are doing interesting work.

Others like Amazon, Nvidia, Apple, Tesla, Salesforce, Intel, IBM are capable, but they haven't fully committed to AGI.

1

Impressive_Chair_187 t1_ja0numc wrote

I appreciate you sharing your thoughts long form. There’s certainly a lot to think about right now. It’s also trippy because by and large our predictions of where we’ll be in 10 years are going to be way off - I’m not sure in which direction.

I am optimistic though, despite all of the very fair reasons to be doubtful or afraid.

76

Which-Argument9495 t1_ja0mjoc wrote

I can see your point. It will be amusing to see the differences between LLMs based on what they're trained on and what parameters are chosen. It would be ideal if those companies revealed just what the models are trained on and how those parameters are chosen, so we could compare what is valued between the different LLMs.

3