Recent comments in /f/Futurology

Thufir_My_Hawat t1_ja1m0sq wrote

I think the only thing that hasn't been solved to, at minimum, workable specifications is localization -- as in, taking media in one language and converting it into another. Too much creative work goes into it -- rewriting jokes that rely on cultural references or puns, recontextualizing norms that are unspoken but not universal, even rewriting characters who fit a trope that exists only in a single culture.

Only a general AI would be capable of that, and we may still be decades away from one, since most funding has been diverted away from it towards ML. But for communication? We're already there for the most part. We might not be able to get good machine translation for... idk, Georgian, but any major language is already good enough to write a business email, so long as the other party knows a machine did it.

0

vercertorix t1_ja1l02x wrote

If people went for it, I'd learn it as a universal language. We could all learn it from childhood alongside our native languages and all be on even ground professionally, though it seems inevitable that we'd start asking ourselves why we're bothering with our native languages -- maybe not all of us, but enough to start sliding into a single-language planet with a few holdouts.

4

imakesawdust t1_ja1kk84 wrote

2

vercertorix t1_ja1kd1j wrote

From what I’ve seen, machine translators aren't currently always clear on the jargon of each subject matter. As in an example I heard from a movie: to some people a "floater" is a dead body taken out of the water, while to others it's someone who works as a freelancer. Different professions use the same words differently, and machine translation can miss the context. It could potentially be trained to recognize it, but that might take a lot of effort, and language keeps evolving. Beyond that, it sometimes needs to know not to translate something: names of towns, acronyms, etc.

Not saying it can't be done, but I think that especially in jargon-heavy professional documents and speech, there might be a higher concentration of word-choice mistakes. That could potentially be helped with a simple "profession" selector, but you'd have to build lexicons for each profession and keep them up to date, for every language, and if you change topics you might have to switch manually. I wouldn't make that automatic, or I'm betting that if the speakers happen to use a couple of words fitting another topic it might accidentally trigger a change, and suddenly you're using nautical navigation terms while they're talking about cooking seafood.
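The "profession selector" idea could be sketched as nothing more elaborate than a per-domain override table consulted before falling back to a general translation. A minimal sketch in Python; the lexicon entries and function names are all invented for illustration:

```python
# Hypothetical per-profession lexicons: the same source term maps to
# different renderings depending on which domain is selected.
DOMAIN_LEXICONS = {
    "law_enforcement": {"floater": "body recovered from the water"},
    "staffing":        {"floater": "freelance/temporary worker"},
}

def render_term(term, domain):
    """Return the domain-specific rendering of a term, falling back to
    the term itself when the domain (or term) has no override."""
    return DOMAIN_LEXICONS.get(domain, {}).get(term.lower(), term)
```

So `render_term("floater", "staffing")` and `render_term("floater", "law_enforcement")` give different results, while an unselected domain just passes the word through unchanged -- which is also roughly how the "don't translate town names and acronyms" case could be handled.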

6

alittlebitaspie t1_ja1jpcy wrote

The things learning A language (as in one) does to the brain are important, and since concepts aren't fully expressed across all languages, you could end up with a sum total that would be lossy at best. Also, the language you speak shapes your mind and how you're able to look at the world. There are things that the meat puppets served by those AIs would irretrievably lose in the conversion, and I'm not sure we even fully understand what all of those would be until well after.

Right now AI development is like the old 'solutions' to conservation problems of dropping in species X or Y. We will find many disasters, some that can't be undone, for each mild success. The rate of wild successes will be even worse. Humans are a natural system, as is their society, and our brains develop, and have evolved to develop, within that system. We have to make sure we don't leave behind important information and development while chasing ease of development.

9

HS_HowCan_That_BeQM t1_ja1i7ws wrote

I thought idioms would be the hardest. Then I asked ChatGPT the following:

'What would be the German equivalent of "down the drain" as in "All that work was for nothing, it's down the drain"?'

And damned if it didn't answer:

'The German equivalent of "down the drain" in the context you provided would be "umsonst" or "vergeblich". So, you could say: "Die ganze Arbeit war umsonst/vergeblich, es ist alles für die Katz."'

"Für die Katz" was my understanding of the translation of the English idiom. And an AI "knew" that, although it threw it in as an afterthought, deciding to concentrate on the "...for nothing" instead of "down the drain".

3

ToothlessGrandma t1_ja1i64e wrote

I don't think you understand what I'm saying. If nobody has a job, or everyone's wages are reduced by a significant amount, it doesn't matter what any employer or shareholder wants. The population won't have the income anymore to buy whatever you're selling. Do you think that if the minimum wage were reduced to $5 an hour everywhere in the U.S., anyone would have any money to buy anything?

3

TheSensibleTurk t1_ja1hmvl wrote

Without going into specifics due to NDAs and such, as a contracted linguist I can attest that there already are third-party technologies that allow for instant translation and transliteration with a minimal, acceptable amount of loss vs. a human. But the government still wants humans to do it in matters pertaining to public safety, national security, or the military, because human linguists and translators may be required to give depositions in court cases. When you take into account things like FISA warrants, where the judges are especially stringent and the government has to clear a high bar, the government absolutely prefers human agency lest the courts or other observers accuse it of rigging the AI/machine. So I don't think we'll see it in the government sector, due to those accountability and judicial concerns.

47

No-Owl9201 t1_ja1gttk wrote

Batteries really are at an early stage of development, and often the biggest problems are the cost of production, the effective life of the battery, and the energy density it can hold. So it's probably too early to assess a lignin/polymer battery, but it would be a pretty green product, and since lignin is a papermaking byproduct there'd be no scarcity or mining issues.

1

guyonahorse t1_ja1ftau wrote

Well, ChatGPT's training is pretty simple. It's trained on how accurately it can predict the next words in a training document -- trained to imitate the text it was trained on. The data is all treated as "correct", which amusingly leads to bad traits, since it imitates bad things too. Also amusing is the qualia question of the AI seemingly being able to have emotions: is it saying the text because it's angry, or because it's just trained to imitate angry text in a similar context?
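The "predict the next words" objective can be illustrated with a toy bigram model -- nothing like GPT's actual neural architecture over subword tokens, just the same shape of objective: count what followed each word in the training text, then predict the most frequent follower. Everything below is an invented illustration:

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count, for every word, which words followed it in the corpus."""
    counts = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """Predict the most frequently observed next word (None if unseen)."""
    followers = counts.get(word)
    return followers.most_common(1)[0][0] if followers else None

model = train_bigrams("the cat sat on the mat and the cat slept")
```

Here `predict_next(model, "the")` returns "cat", because "cat" followed "the" twice in the training text and "mat" only once. The model imitates whatever it saw, "correct" or not -- which is the point the comment above is making.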

But yeah, general intelligence is super vague. I don't think we want an AI that would have the capability to get angry or depressed, but these are things that evolved naturally in animals as they benefit survival. Pretty much all dystopian AI movies are based on the AI thinking that to survive it has to kill all humans...

3

nbgrout t1_ja1fqrn wrote

And knowing more words/languages expands your capability for thought.

Language is more than just some sounds and scribbles that directly translate to persons/places/things. It is very often impossible to express exactly the same thought in a different language, because the idea itself has cultural context and meaning imbued by the language.

For example, in English we would say "I like bananas". In Spanish the closest translation is "me gustan las bananas," but those are fundamentally two different statements. In English, you are the subject taking affirmative action on the object (bananas) by "liking" them. In Spanish, you are instead the passive object being acted upon by the subject (the bananas), which are "pleasing" you (gustar ~ to please). Think about that: it seems subtle, but consider the implications of being passive, acted upon by the world, vs. being active, acting upon the world.

15