Recent comments in /f/Futurology

adt t1_j9nv4zj wrote

>shouldn't the onus of delineating man from machine be on the side providing the AI chatbot?

It is.

Here's a very long read, but it will explain how OpenAI is building in watermarking for use by govt + themselves + maybe academia.

https://scottaaronson.blog/?p=6823

>'to watermark, instead of selecting the next token randomly, the idea will be to select it pseudorandomly, using a cryptographic pseudorandom function, whose key is known only to OpenAI. That won’t make any detectable difference to the end user, assuming the end user can’t distinguish the pseudorandom numbers from truly random ones. But now you can choose a pseudorandom function that secretly biases a certain score—a sum over a certain function g evaluated at each n-gram (sequence of n consecutive tokens), for some small n—which score you can also compute if you know the key for this pseudorandom function'
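To make that concrete, here is a minimal sketch of the detection side in Python. The hash construction, the choice of g as a keyed HMAC mapped to [0, 1), n = 4, and the whitespace "tokenization" are all illustrative assumptions on my part, not OpenAI's actual scheme:

```python
# Illustrative sketch of the n-gram watermark scoring described above.
# NOT OpenAI's real scheme: the key, n, g, and tokenization are made up.
import hmac
import hashlib

SECRET_KEY = b"known-only-to-the-provider"   # assumption: provider-held key
N = 4                                        # assumption: score 4-grams

def g(ngram):
    """Keyed pseudorandom function mapping an n-gram to [0, 1)."""
    msg = "\x1f".join(ngram).encode()
    digest = hmac.new(SECRET_KEY, msg, hashlib.sha256).digest()
    return int.from_bytes(digest[:8], "big") / 2.0**64

def watermark_score(tokens):
    """Average g over all n-grams. Ordinary text averages ~0.5; text from
    a sampler secretly biased toward high-g tokens averages noticeably more."""
    ngrams = [tuple(tokens[i:i + N]) for i in range(len(tokens) - N + 1)]
    if not ngrams:
        return 0.5
    return sum(g(ng) for ng in ngrams) / len(ngrams)

# Naive whitespace "tokenization", just for the demo:
print(watermark_score("the quick brown fox jumps over the lazy dog".split()))
```

On the generation side, the sampler would nudge its pseudorandom token choices so this average drifts above 0.5; given enough tokens, anyone holding the key can detect the bias with high statistical confidence, while readers without the key see nothing unusual.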

And why they wouldn't just stick it in a database of logs:

>'Some might wonder: if OpenAI controls the server, then why go to all the trouble to watermark? Why not just store all of GPT’s outputs in a giant database, and then consult the database later if you want to know whether something came from GPT? Well, the latter could be done, and might even have to be done in high-stakes cases involving law enforcement or whatever. But it would raise some serious privacy concerns: how do you reveal whether GPT did or didn’t generate a given candidate text, without potentially revealing how other people have been using GPT? The database approach also has difficulties in distinguishing text that GPT uniquely generated, from text that it generated simply because it has very high probability (e.g., a list of the first hundred prime numbers).'

7

MINIMAN10001 t1_j9nuxou wrote

I like how they say "whether human employees like it or not"

but everything I know about call centers is that they are actually the worst.

People calling into call centers completely dumpster on the poor worker, while the poor worker is getting dumpstered on by management over metrics. The result is inhumane treatment from both the customer and the boss.

Employees can live a happier life doing pretty much anything else.

I know Comcast has tied their systems together, so I can find out whether they're once again doing maintenance when I get home from work, leaving me unable to use the service we're paying for. The answer is yes, but it used to take many more menus than it does now to reach the automated system that knows the answer.

3

khamelean t1_j9nuuqj wrote

It’s kind of like trying to detect if a student used a calculator on a math test.

The key point to take away is that it’s no longer useful to teach students how to do complex calculations in their head. What’s far more important are the fundamental concepts: do they understand the formulas and when to apply them? Do they understand how to use the tools available to them to achieve a goal?

The end goal has never been to write an essay; it’s to convey information. Far more important than the essay itself is the information being conveyed: what idea is the student trying to communicate?

It will take academia a while to adjust though. For many years teachers stuck with the mantra of “you won’t always have a calculator on you”. I’m sure some will cry “you won’t always have access to an LLM generative text engine”, but we all know that’s simply not true.

6

Surur t1_j9ntwmv wrote

> I know that the computing power necessary for the most successful models far outstrips what your average consumer is capable of generating.

The training is resource intensive. The running is not, which is demonstrated by ChatGPT being able to support millions of users concurrently.

Even if you need a $3000 GPU to run it, that's a trivial cost for the help it can provide.
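A rough back-of-envelope shows why. The ~6N FLOPs per training token vs ~2N FLOPs per generated token figures are a common rule of thumb for transformer LLMs, and the model and corpus sizes below are illustrative guesses, not anyone's actual numbers:

```python
# Back-of-envelope: training cost vs inference cost (rule of thumb:
# ~6*params FLOPs per training token, ~2*params FLOPs per generated token).
params = 175e9         # assumed GPT-3-scale parameter count
train_tokens = 300e9   # assumed training-corpus size in tokens

train_flops = 6 * params * train_tokens   # ~3e23 FLOPs, paid once
flops_per_generated_token = 2 * params    # ~3.5e11 FLOPs, paid per token

breakeven = train_flops / flops_per_generated_token
print(f"one training run ~= generating {breakeven:.1e} tokens")  # ~9e11
```

In other words, a single training run costs about as much compute as serving roughly a trillion generated tokens, which is why the trained model can then be run cheaply for millions of concurrent users.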

3

wanfuse1234 t1_j9ntswj wrote

Tech development starts as a nearly linear progression until it reaches an inflection point, where it quickly goes nearly exponential before leveling off into an S-curve. We are about to reach that inflection point in this tech, and with it n^2 problems, a whole new class of problems, will become solvable, for good and for bad, including AGI. We will likely reach the singularity within 10 years. Maybe 20.

2

LettucePrime OP t1_j9ntf72 wrote

I had an enormous 10+ paragraph version of this very simple post discussing exactly some of those smaller LLMs, & while I'm not too familiar with Pygmalion, I know that the computing power necessary for the most successful models far outstrips what your average consumer is capable of generating. Effectively I argued that, because of economic & tech pressures, the AI industry is due for a contraction pretty soon, meaning that AI-generated text would only come from an ever-dwindling pool of sources as the less popular models die out.

I abandoned it before I got there, but I did want to touch on truly small scale LLMs & how fucked we could be in 3-5 years when any PC with a decent GPU can run a Russian Troll Farm.

Regarding privacy concerns, yeah. That's probably the best path to monetization this technology has at the moment: training models on the business logic of individual firms & selling them an assistant capable of answering questions & circulating them through the proper channels in a company, but not outside it.

4

bremidon t1_j9nsyx7 wrote

>The purpose of 230 is to allow ISPs to remove harmful/inappropriate content without facing liability

Ding ding ding. Correct.

This was and is the intent, and is clear to anyone who was alive back when the problem came up originally.

However, a bunch of court cases kept moving the goalposts on what ISPs and other hosts were allowed to do as part of "removing harmful/inappropriate content". Now it doesn't resemble anything close to what Congress intended when 230 was created.

If you are doing a good-faith best effort to remove CP, and you accidentally take down a site that has Barney the Dinosaur on it, you should be fine. If you somehow get most of the bad guys, but miss one or two, you should also be fine. That is 230 in a nutshell.

The idea that they can use it to increase engagement is absolutely ludicrous. As /u/Brief_Profession_148 said, they have it both ways now. They can be as outspoken through their algorithms as they like, but get to be protected as if it is a neutral platform.

It's time to take 230 back to the roots, and make it clear that if you use algorithms for business purposes (marketing, sales, engagement, whatever), you are not protected by 230. You are only protected if you are making good faith efforts to remove illegal and inappropriate content. And "inappropriate" needs to be clearly enumerated so that the old trick of taking something away with the reason "for reasons we won't tell you in detail" does not work anymore.

Why any of this is controversial is beyond me.

10

Clairvoidance t1_j9ns42x wrote

There's the issue of locally run LLMs. It's already possible at low scale with models like Pygmalion, but it would be an even bigger issue if there weren't low-scale models, as nothing would stop richer people from running a large language model on the down-low. Or, as funny as it sounds, some sort of black market for LLMs might even emerge.

People are also seemingly very careless about what they put into LLMs.

4

jimmcq t1_j9nrv3h wrote

The scenario you're describing is a valid concern, as advancements in AI technology continue to progress rapidly. It's possible that in the future, all online interactions could be between humans and AI, with no way of telling the difference.

However, it's important to remember that Reddit, as well as other social media platforms, currently have human moderators and administrators who oversee the platform's operations. These individuals are responsible for ensuring that the platform remains safe and free from harmful content.

Furthermore, even if all interactions on Reddit were AI-generated, it's unlikely that the platform would actively prevent users from interacting with actual humans. Users could still choose to interact with other users outside of the platform, such as through video calls or other forms of communication.

In terms of trust, it's important to approach all online interactions with a healthy dose of skepticism, regardless of whether you're interacting with a human or AI. It's always a good idea to verify the information you're receiving and to critically evaluate the sources of that information.

Overall, while the scenario you're describing is possible, it's important to remember that social media platforms are still managed by humans and that users should approach all online interactions with caution and critical thinking.

0

YOLO420BUST t1_j9nrh0p wrote

This is why Moons make no sense: Reddit farming bots farming each other's karma with no supply cap. It's encouraging the exact behavior you're talking about, and that can be the only end result.

3

Clairvoidance t1_j9nqxwj wrote

It is possible that in the future, social media platforms like Reddit could be fully run by AI-generated content, comments, and conversations. However, it is unlikely that Reddit would actively prevent users from interacting with actual human beings, as this would not align with their core mission of creating an open and engaging community.

If all the content on Reddit was generated by AI, it could potentially be difficult for users to differentiate between real human interactions and machine-generated responses. However, it is also likely that users would develop a sense of suspicion towards content that seems too formulaic or repetitive, which could help them identify AI-generated responses.

Furthermore, as AI technology continues to improve, it is possible that AI-generated content could become so advanced that it is indistinguishable from human-generated content. In this case, the distinction between human and machine-generated content may become irrelevant.

Ultimately, the trustworthiness of social media will depend on the standards and regulations put in place by the platforms themselves, as well as the level of transparency they offer their users. As AI technology continues to advance, it will be important for social media platforms to maintain open lines of communication with their users and prioritize ethical considerations in the development of their AI systems.

−1

just-a-dreamer- t1_j9nqa8p wrote

Humans kill humans all the time, so I don't see that behaviour as an issue in itself.

The middle-class conservative who enforces zoning laws at a council meeting basically kills a homeless man eventually. Insulin priced at 12x production cost kills people. Medical bills kill people. Rent kills people.

In one way or another the rich kill the poor all the time 24/7. That is how human society operates.

An AI that cares about improving human lives overall won't hesitate to kill some humans to improve the lives of the majority. It is the way of our species.

1

hxckrt t1_j9nq67q wrote

Ah, so the answer is "yes, we're going to model subjective appreciation of art"?

Go has an objective score you can quickly calculate, which is how a system can surpass humans by playing itself. Writing and art have no such score, so you're still stuck copying humans, because you need them to rate the output. You're confusing objective score (quantity) with subjective quality.

And "no point fighting against it"? You're starting to sound like the Borg gif. Try to understand how this works before you abandon all hope in favor of our robot overlords.

1

OpusChao t1_j9no2k3 wrote

There is no such thing as currency that's not fiat anymore, unless you suggest we start trading in gold and silver. Non-fiat currency hasn't been a thing since we dropped the gold standard in 1971. The entire global economy is basically just a house of cards now, about to come crashing down. The more they try to fix it the worse it gets: printing more money, adjusting interest rates, bailing out companies that should have gone bankrupt, etc. It's all just pushing forward a problem that gets bigger and bigger the further we push it, until it collapses. And I'm not talking about some hyperinflation and a new great depression, although that will come first. No, it'll be a total collapse, unrecoverable. Anyone who can't see this, and it doesn't look like YouTube's new CEO can, is going to crash with it.

For perspective: worldwide debt is about $300 trillion (2022). The total amount of physical money in the world is about $5 trillion. The estimated total value of things owned by people is about $90 trillion. The biosphere of the entire Earth, including the oceans, was estimated to be worth about $33 trillion. And all investments, including crypto, basically things that will be worthless if the economy fails, come to a staggering $1.3 quadrillion. So this is an entirely artificial system with no basis in reality. With how modern fiat money works, selling the entire ecosystem of the planet would barely cover 1/10 of our collective debt. Selling everything in the world would cover about 1/3 of it. And yet we have artificial values of over a quadrillion; if my math is right, you could buy the whole world and everything in it about 10 times over with that. So what value does it actually have if there is more of it than there is possible to buy with it? (Source: Google, Federal Reserve, Science dot org.)
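For what it's worth, here's the same arithmetic spelled out, using the ballpark figures above (all rough estimates, not precise data):

```python
# Rough arithmetic with the ballpark figures cited above (USD).
debt = 300e12           # worldwide debt, ~2022
owned_things = 90e12    # estimated value of everything people own
biosphere = 33e12       # estimated value of Earth's biosphere
investments = 1.3e15    # all investments, incl. crypto

print(f"biosphere / debt      = {biosphere / debt:.2f}")     # ~0.11, i.e. ~1/10
print(f"owned things / debt   = {owned_things / debt:.2f}")  # ~0.30, i.e. ~1/3
ratio = investments / (owned_things + biosphere)
print(f"'buy the world' ratio = {ratio:.1f}")                # ~10.6x
```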

Point is, invest in anything and it will become worthless sooner or later, probably sooner, unless it's something of actual physical substance.

6

Cnoized t1_j9nn7kz wrote

RuneScape had an era where 99% of the players were bots. They would farm, trade, and do quests. I assume it will be a little like that.

15