Recent comments in /f/technology

Shavethatmonkey t1_jdcrcrx wrote

What did he mean by that?

He meant that the Republican party is so openly racist that black people voting for them was hurting their own civil rights.

It's incredible that a warning about open Republican racism is what Trumplings try to twist into racism. Republicans who hate BLM for protesting their racism continually bring up Joe's comment as though it excuses the Republican racism that came after it.

Since Joe said that, have Republicans said anything you consider racist?

1

GTthrowaway27 t1_jdcquu3 wrote

Of course. Mean and median won't be exactly equal unless the distribution is symmetric (normal, symmetric bimodal, etc.)

But why act as though it’s going to be a meaningful difference?

The CEO is saying the average user is older than you would assume (teenagers/college). That's it. Even if the median and mean differ by several years, that still generally makes the same point.

It just seems like the easy Reddit contrarian point of "um aktually median is better" (hence multiple comments saying the same thing), when the two are probably not that different to begin with
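To make the point concrete, here's a quick sketch with entirely made-up ages (using Python's `statistics` module): even a right-skewed user-age distribution can have its mean and median land within a year of each other.

```python
import statistics

# Hypothetical, right-skewed user ages: mostly 20s, with a long tail of older users
ages = [19] * 5 + [24] * 10 + [29] * 8 + [35] * 5 + [45] * 2 + [60]

print(statistics.mean(ages))    # ~28.8
print(statistics.median(ages))  # 29
```

Whether you report ~28.8 or 29, the takeaway ("the average user is older than you'd guess") is the same.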

8

thats-fucked_up t1_jdcpknd wrote

They're chasing a moving target. Let's see if they can catch it. The big difference is that Ford is distracted by its money makers and Tesla is not. (See Saturn.) Tesla is the only one making a profit, and by the time Ford is building 600,000 units, Tesla will be building two to four million, plus directly threatening every American car maker's cash cow with the Cybertruck.

−18

agm1984 t1_jdcoyl5 wrote

I agree with you, but it also represents the interface between human and machine, so it must be accurate.

The issue I am highlighting is minor but might be revealing some unfortunate aspects. For example, if you can adopt a mathematical approach to deciding what words to use, there is a kind of latent space in which the answer to your question draws an octopus tentacle of sorts. The shape of the tentacle is analogous to the chosen words.

My argument is that the tentacle can be deformed at parts of a sentence related to the word 'is' (which is comically an equals sign) because it misrepresents the level of precision it is aware of. For me this is a huge problem because it means either (or both) the "AI" is not extrapolating the correct meaning from the lowest common denominator of cumulative source materials, or the source materials themselves are causing the "AI" to derive a bum value in the specific context with the problem.

My example of gravity 'at all scales' is interesting because there is no world where a scientist can assert such a concrete clause. In actual English terms, it's more like a restrictive clause because the statement hinges on the context around it. Maybe there is a sentence that says "currently" or "to the best of our knowledge", or maybe there is an advanced word vector such as "has been" that helps indicate that gravity is solved here at the moment but might not be in the future.

It's very minor, but my warning extends to a time when a human is reading that and taking fragments at face value because they feel like the "AI" is compiling the real derived truth from the knowledge base of humankind. My warning also extends to a time when a different "AI" is receiving a paragraph from ChatGPT and, for the exact same reasons, misinterprets it due to these subtle errors of confidence. There's something uncanny about it, and this is where I see an issue currently if you want to use it as an interface. Maybe my side point is that it doesn't make sense to use it as an AI-to-AI interface, because you lose so much mathematical accuracy and precision when you render the final output into fuzzy English. Other AIs need to know the exact angle and rotation between words and paragraphs.
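The "exact angle between words" idea can be made concrete with cosine similarity, the standard angle-based closeness measure used on embedding vectors. The 3-d vectors below are toy values made up purely for illustration (real embeddings have hundreds of dimensions):

```python
import math

def cosine_similarity(u, v):
    """Angle-based closeness: 1.0 = same direction, 0.0 = orthogonal."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy "word vectors", invented for this example
is_vec      = [0.9, 0.1, 0.0]
equals_vec  = [0.8, 0.2, 0.1]   # close in direction to is_vec
octopus_vec = [0.0, 0.3, 0.95]  # nearly orthogonal to is_vec

print(cosine_similarity(is_vec, equals_vec))   # close to 1
print(cosine_similarity(is_vec, octopus_vec))  # close to 0
```

That precise geometric relationship is exactly what gets flattened away when the model's internal state is rendered into an English sentence.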

1

Gabelschlecker t1_jdcmlkf wrote

Yes, because they were never developed to give factual information. Just a glance at how these models actually work reveals very obviously that they do not have an internal knowledge base. They have no clue whatsoever what is factually correct and what is not.

Their job is producing realistic language. That's what their architecture is supposed to achieve, and they do it quite well when trained on large datasets. That they, at times, produce real facts is a mere side effect.

The problem is that people ignore this, because they project human-like intelligence onto anything that can produce human-like language.

ChatGPT is a great tool, because it can be used to help you produce new texts (e.g., editing your own text) or can give you ideas or suggestions. It cannot replace a search engine and it can't cite you any sources.
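A toy sketch of the "fluent but not factual" point (the "model" here is entirely made up, just a word-follows-word table): the generator only knows which word plausibly comes next, and nothing in it encodes whether the result is true.

```python
import random

# Toy "language model": knows only which word tends to follow which.
# Nothing here represents whether a continuation is TRUE, only that it is fluent.
bigrams = {
    "the": ["capital"],
    "capital": ["of"],
    "of": ["france"],
    "france": ["is"],
    "is": ["paris", "lyon"],  # both read fine; only one is a fact
}

random.seed(42)
word, sentence = "the", ["the"]
while word in bigrams:
    word = random.choice(bigrams[word])
    sentence.append(word)
print(" ".join(sentence))  # ends in either "paris" or "lyon"
```

Scaled up by many orders of magnitude, that is why a fluent, confident answer is no evidence of a correct one.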

2

GTthrowaway27 t1_jdcmi3k wrote

But if there aren't any 2-year-old users, then they're excluded from the distribution anyway, and neither the mean nor the median is going to be impacted

And if there are, it's not going to meaningfully affect the mean or median. Unless there are more 2-year-olds online than I expected…
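A minimal sketch with a hypothetical sample (Python's `statistics` module) of why a handful of 2-year-old accounts barely moves either statistic:

```python
import statistics

# Hypothetical sample: 1,000 users with ages spread evenly over 18-57
ages = list(range(18, 58)) * 25  # 40 distinct ages x 25 users each
# ...plus five implausible 2-year-old accounts
ages_with_toddlers = ages + [2] * 5

print(statistics.mean(ages), statistics.median(ages))  # 37.5 37.5
print(statistics.mean(ages_with_toddlers))    # ~37.3
print(statistics.median(ages_with_toddlers))  # 37
```

Five extreme outliers in a thousand shift the mean by under two-tenths of a year and the median by half a year.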

14