Recent comments in /f/technology

amibeingadick420 t1_j9topf7 wrote

Heien v. North Carolina established that police don’t have to know the laws they enforce; if they pull you over for something that isn’t actually illegal because of their own ignorance, it’s still a valid stop.

But, just to be clear, for the rest of us citizens, ignorance of the law is not an excuse.

If government enforcers can make up laws, it is clear that laws, including the Bill of Rights which supposedly protects our rights, mean nothing. America is fucked. Likewise, fuck America.

https://en.wikipedia.org/wiki/Heien_v._North_Carolina

16

PacmanIncarnate t1_j9tnhx9 wrote

When you start asking an AI about feelings, it falls back on the training data that talked about feelings; probably a lot of writing about AI and feelings, which is almost entirely negative “AI will destroy the world” stuff, so that’s what you get.

It would be cool if the media could just try to use the technology for what it is instead of hunting for gotcha questions. I didn’t see anyone trying to use the original iPhone as a Star Trek-style tricorder and then complaining that it couldn’t diagnose cancer.

3

drawkbox t1_j9tmxme wrote

Yeah, devs aren't really in control once they feed in the datasets. Over time there will be manipulation/pollution of datasets, whether deliberate or unwitting, and it can have unexpected results. Any system that really needs to be logical should think hard about whether it wants that attack vector. For things like idea generation this may be fine; for standard datasets or decision trees that carry liability, probably not.

The Unity game engine has an ad network this happened to: one quarter their ads were way out of whack, and it was traced back to bad datasets. The ML models caused real revenue issues, so AI can be a genuine business risk. We are going to be hearing more and more of these stories.

The Curious Case of Unity: Where ML & Wall Street Meet

> One of the biggest game developers in the world sees close to $5 billion in market cap wiped out due to a fault in their ML models

2

amibeingadick420 t1_j9tmqcj wrote

It was Philip Brailsford, who murdered Daniel Shaver with an M4 that had “You’re Fucked” engraved on the dust cover.

Not only did he face no legal consequences, but the department rehired him for about one month, just long enough for him to claim PTSD from the act of murdering an unarmed man and collect a lifetime pension as a reward.

Fuck all cops.

26

drawkbox t1_j9tm7pq wrote

Yeah, humans really aren't ready for the manipulation aspect. It won't actually be conscious, but it will have so many responses/manipulation points that it will feel conscious and magic, like it's reading minds.

Our evolutionary responses and reactions are being played already.

It was "if it bleeds it leads" but now is "enragement is engagement". The enragement engagement algorithms are already being tuned from supposed neutral algorithms but they already have bias and pump different content to achieve engagement.

With social media being real time and somewhat of a tabloid, the games with fakes and misinformation will be immense.

We might already be seeing videos of events, protests, or wars that are completely fake and slipping past the Turing test. That's the scary part: we won't really know when it has crossed that line. Humans will use this for everything, even just for pranks. You almost can't trust video already.

6

tan5taafl t1_j9tm549 wrote

Considering nearly every other indicator of teen life is better than it was 10-20 years ago, I’m leaning in this direction.

The constant barrage of doomsday articles and outrage triggers in the media paints a picture that isn’t real and leverages existential crisis for profit. Even my sons, who are heavy social media users, have parroted clickbait headlines.

2

k0nstantine t1_j9tloy7 wrote

...you quoted the part saying "its nature" and argued against this being a natural part of the human psyche, then tried to invent a new argument about "blaming the gender", when *no one* in this article or the comments said this was only one gender, or the fault of that gender. What are you smokin? I want some.

1

Effective-Avocado470 t1_j9tlca0 wrote

Don't get me wrong, I believe AI may well eventually become conscious, much like Data in Star Trek, but we are still a long way from that.

The scary thing is that these current AIs will synthesize the worst of us into a powerful weapon of ideas and messaging. Combine that with deepfakes and no one will know what the truth is anymore.

7

drawkbox t1_j9tl1kv wrote

Yeah, it isn't being human. There really is no such thing if you aren't human. We assign human-like qualities to things, and when there are enough of them, it seems alive. Basically we are Calvin and AI is Hobbes; there's a lot of imagination involved... even in how we just assigned life to Calvin and Hobbes.

Being human is sort of an irrationality or uniqueness that AI probably doesn't want; it would be too biased. So assigning human qualities to AI is really people seeing what they wanna see. You can already see people finding bias in it, usually tuned to their own bias.

Though in the end we will have search engines that query many AI datasets, each of which could be seen as an "individual". These "individual" AIs could also interact with one another like a GAN. There will probably be some interesting things happening with one dataset "individual" polluting or manipulating another. Almost like a real person meeting another person and having their thinking or life changed forever. Some things are immutable: one-way, read-only after the write.

7