Recent comments in /f/technology

HanaBothWays t1_jbjbc44 wrote

No, I mean if you were a financial company you would not even want to let it inside your internal network at all, no matter what you did or didn't use it for, unless it was a version made to keep your confidential/regulated data safe.

Right now ChatGPT is not allowed on government agency networks, for example, for any reason because it might pick up on sensitive but unclassified (SBU) data in those network environments.

5

HanaBothWays t1_jbj732b wrote

Honestly if I were in the financial sector I would not do a thing like this until OpenAI comes out with versions of the product that are certified for use with regulated data, the way there are cloud computing products that are certified for use in the financial sector, healthcare sector, etc.

“Certified” is not exactly the right word, but basically they meet certain baseline requirements so they are safe to use with particular kinds of sensitive information/in secure environments with that kind of information.

35

BlackandBlue14 t1_jbhsom9 wrote

It's insane that a company would allow people to profit from deepfakes. That said, this technology will become so pervasive that stopping it is akin to stopping the tide with a broom. We have to enforce the law, but if you're not used to the idea of fake videos existing of every famous person ever, you'd better get used to it.

5

SwagginsYolo420 t1_jbhlo7b wrote

I agree with that generally. Though a big part of the reason, I think, is that most mainstream special-interest communities established their main forum sites/communities long before Reddit and "web 2.0" social media existed.

I would say that newer communities tend to coalesce around reddit / twitter / discord first now, because it's the path of least resistance. Communities focused on newer technologies / fandoms / arts from only the last decade or so seem much less likely to have dedicated high-traffic old-school forums now.

1

DevAnalyzeOperate t1_jbgt236 wrote

You are literally describing how data scientists can systematically modify behaviour while denying that they can lol. Whatever helps you sleep at night.

Nobody would pay data scientists a cent if they couldn't get people to click the advertising links they wanted them to click, at the end of the day. Their purpose is manipulation at the behest of moneyed interests (and potentially national interests). That's the game - behaviour modification - but I guess it's easier to retain staff when they believe a lie.

2

newsandseriousstuff t1_jbgkj8x wrote

Jeff: Why are you two dressed like chefs?

Abed: We're cooking up new features for Reddit, Jeff.

Jeff: You idiots! If you keep adding and taking stuff away, the whole cafeteria is going to think Reddit is more unstable than they already do.

*Troy's fake mustache falls into the pot

Jeff: Reddit is fine. Just keep serving it and stop messing with something that works.

*Jeff leaves

Troy: I do think we should get the mustache out first.

Abed: No. I think it adds panache. The carrots are what's really throwing off the flavor. Start scooping.

Both: TROY AND ABED IN THE MO-O-RNING

1

zUdio t1_jbgjpro wrote

> their data scientists brainwash people into clicking on ads for products and services

you have no clue how any of this works. I work as a data scientist (with experience in social) and algorithms work by auto-selecting, out of the millions of items you serve them, whatever makes you click or stay on screen longer. When the algorithm notices you spend more time looking at cats, it serves you more things related to cats. There is no data scientist "brainwashing" people.
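The mechanism described above - serve more of whatever earns clicks or watch time - can be sketched as a simple epsilon-greedy bandit over topics. This is a toy illustration of the general technique, not any real platform's code; the class, topic names, and engagement numbers are all made up for the example.

```python
import random
from collections import defaultdict

class EngagementRecommender:
    """Toy epsilon-greedy recommender: topics that earn more watch time
    get served more often. Purely illustrative; not any platform's code."""

    def __init__(self, topics, epsilon=0.1):
        self.topics = list(topics)
        self.epsilon = epsilon                 # chance to explore a random topic
        self.total_time = defaultdict(float)   # observed watch time per topic
        self.serves = defaultdict(int)         # how often each topic was shown

    def pick_topic(self):
        # Explore occasionally; otherwise exploit the best-performing topic
        # by average watch time per serve.
        if random.random() < self.epsilon or not any(self.serves.values()):
            return random.choice(self.topics)
        return max(self.topics,
                   key=lambda t: self.total_time[t] / max(self.serves[t], 1))

    def record(self, topic, seconds_watched):
        # The only "signal" is behaviour: time spent on screen.
        self.serves[topic] += 1
        self.total_time[topic] += seconds_watched

# Simulated user who lingers on cat content; cat topics typically end up
# dominating the feed with no human curating anything.
rec = EngagementRecommender(["cats", "cars", "news"])
for _ in range(200):
    topic = rec.pick_topic()
    rec.record(topic, 30.0 if topic == "cats" else 5.0)
```

The point of the sketch: the feedback loop is automatic. The system converges on whatever maximizes engagement without anyone hand-picking content for a given user.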

1

flaagan t1_jbgcaea wrote

>Nowadays it has the near-monopoly on web forum content to retain users
>
>despite newer horrendous layout design.

The funny thing about that is that I would likely never use Reddit for something I would typically go to an existing forum for. You're not going to find the same type of community and interactions here that you would find on a topic-dedicated forum, much less the granular level of discussion and information you'd likely be looking for.

1

DevAnalyzeOperate t1_jbgb3y3 wrote

What are you even talking about? Did you even read the linked story? Do you know ANYTHING about why TikTok is in hot water and the problems people have with it?

My problem in this case is obviously national security in the context of spying, otherwise I wouldn't single out TikTok. I suppose being able to use TikTok as a propaganda machine to brainwash the public is possible too; their data scientists brainwash people into clicking on ads for products and services, so it shouldn't be conceptually impossible to brainwash people into, for instance, clicking on links to Chinese propaganda. Those guys are the real brainwashers of society, not pathetic has-been religions.

1

zardvark t1_jbg52dc wrote

I'm sorry, but Wikipedia is not an unbiased, authoritative source. Sure, on technical subjects you can frequently discover a lot of useful, factual information. But many of their non-technical articles are drenched in woke, Leftist ideology. Even a co-founder of Wikipedia has criticized them for their propaganda:

https://www.dailymail.co.uk/news/article-9283061/Wikipedia-founder-Larry-Sanger-slams-sites-leftist-bias-claims-neutrality-gone.html

It's time to find a new alternative to ddg.

−8