Recent comments in /f/Futurology

321gogo t1_ja2dgqq wrote

I don't think people want this generally, though. A huge part of media is being able to connect with others over it. On top of that, most people are attached to the message the creators are trying to convey through their art.

7

Rocket_3ngine t1_ja2de1r wrote

Your writing style is fantastic!

The repetitive content on social platforms and media outlets creates a bubble of similar ideas. The advent of the Internet laid the foundation for the erosion of intercultural differences.

A 2014 book, “The Culture Map” by Erin Meyer, discusses societal and cultural differences.

Even though Asian and Western cultures are different, I can no longer say that in 2023 the differences remain as pronounced as they were.

Today's interconnectedness narrows the cultural gap because international companies routinely do business with other nations. Employees' international experience is transmitted locally through daily communication with colleagues, friends, and family, blurring the boundaries of cultural difference.

Therefore, intensifying international communication between nations generates similar ideas, and AI will only accelerate the process.

1

FuturologyBot t1_ja2dcyd wrote

The following submission statement was provided by /u/NadiyaJeba:


To recreate the ancient marine environment, researchers examined fossils from South China, which was a shallow sea during the Permian-Triassic transition. The team was able to analyze predator-prey relationships and determine the functions ancient species performed by categorizing species into guilds, or groups of species that exploit resources in similar ways. These simulated food webs plausibly represented the ecosystem before, during, and after the extinction event.


Please reply to OP's comment here: https://old.reddit.com/r/Futurology/comments/11c9ctr/new_study_reveals_biodiversity_loss_drove/ja2axw2/

1

bogglingsnog t1_ja2cgc2 wrote

Designing for the lowest common denominator is a terrible constraint to start with in software. There's no reason they couldn't have offered a simplified view and a complete view, or a gradient of adjustable graphics settings; the constraint just doesn't justify the sacrifices made.

It wasn't creative, and the lack of visuals made that all the more obvious. If it were designed really well, looking like a Wii game wouldn't have been an issue - obviously, Wii games have been quite successful.

I know I'd have a hard time recommending business software that looks like Wii Sports, so they did themselves an injustice in that arena too. They should have gone with something more minimalist and professional.

To reiterate, my primary issue and concern is the lack of vision and creativity.

2

Jonsj t1_ja2ca3v wrote

Why? Language barriers are just friction that stops us from communicating. Why did we stop washing clothes by hand? Why do we fly instead of run?

A language barrier is just a block, something that makes life harder, not easier. Language is a tool; you would learn far more if you could talk to everyone: far more perspectives and ways of looking at things.

People would understand each other better, with fewer misunderstandings. You don't sound like a Luddite; you sound like someone who thinks the status quo has a benefit just because it has been this way, not because it has an actual benefit.

−10

94746382926 t1_ja2bune wrote

I think he's 5-10 years too early, although I'm personally happy that all this money is being used to improve the tech more quickly.

At this point it's just a matter of how much money they're willing to burn until they hit the gold mine of unobtrusive, high-fidelity consumer-level devices. Even if they could sustain this level of spending for over a decade, there's a good chance they run out of steam before then. It's hard to stay motivated if adoption stalls.

1

94746382926 t1_ja2bktk wrote

The biggest problem with the metaverse is that it has to work on standalone headsets which have the processing power of a smartphone. This restricts it to shitty flat cartoon environments instead of something with more life and realism that a powerful PC could push.

Meta does have some amazingly photorealistic avatar-generation software and facial-tracking features in the pipeline. It's just that their best-selling products (Oculus Quest 1 and 2) can't handle it. I think the introduction of the Quest Pro is the first taste of what's to come hardware-wise. If they start marketing it toward businesses and build out those use cases, we could see a drastically improved iteration of the metaverse compared to what we have now.

1

egocantin t1_ja2bfw7 wrote

It's exciting to see how rapidly technology is advancing in the field of language translation. With the advent of lingolink.ai, I believe we're closer than ever to a future where we won't need to learn languages. I'm optimistic that one of the big three (Google, DeepL, Lingolink) will soon release a live translation app that makes learning languages obsolete, maybe within the next 1-2 years.

1

just-a-dreamer- t1_ja2bbg5 wrote

I would check out into a digital world anyway.

I'd make arrangements for my body to be kept alive and create my dream world in the digital realm. I'd only contact other humans if I felt like it; I wouldn't have to live up to any standard that way.

How much would it cost to keep a human body functioning? 10k a year? Food, water, meds, waste disposal, heat, shelter - it can't be that hard.

1

the1j t1_ja2b0s6 wrote

You know what: thinking about the big picture, I don't care that AI might do art or automate jobs.

I'm scared for my own job and future. I work in customer support, something that would be pretty easy to replace with AI, and I'm completing an engineering degree in a field that may be about to be radically transformed by AI in ways I haven't learned, since I'm in my final few years.

Maybe I'll look back in 30 years and my worries will have been for nothing and I'll have adapted to the change, or maybe it'll all end badly; but I just have no way of knowing, and that scares me.

2

NadiyaJeba OP t1_ja2axw2 wrote

To recreate the ancient marine environment, researchers examined fossils from South China, which was a shallow sea during the Permian-Triassic transition. The team was able to analyze predator-prey relationships and determine the functions ancient species performed by categorizing species into guilds, or groups of species that exploit resources in similar ways. These simulated food webs plausibly represented the ecosystem before, during, and after the extinction event.
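To illustrate the guild idea in code (an invented toy example; the species, traits, and links below are made up for illustration, not the study's actual data):

```python
# Guilds: group species by how they exploit resources, then wire up
# a guild-level food web instead of a species-level one.
from collections import defaultdict

species = {
    "bivalve_A":  {"habitat": "seafloor",   "feeding": "filter"},
    "bivalve_B":  {"habitat": "seafloor",   "feeding": "filter"},
    "ammonoid_A": {"habitat": "open_water", "feeding": "predator"},
    "fish_A":     {"habitat": "open_water", "feeding": "predator"},
}

# A guild = all species sharing a (habitat, feeding) strategy.
guilds = defaultdict(list)
for name, traits in species.items():
    guilds[(traits["habitat"], traits["feeding"])].append(name)

# Predator-prey links are drawn between guilds, not single species,
# which keeps the reconstructed web robust to gaps in the fossil record.
guild_links = {("open_water", "predator"): [("seafloor", "filter")]}

for predator, prey_list in guild_links.items():
    for prey in prey_list:
        print(f"{guilds[predator]} eat {guilds[prey]}")
```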

30

94746382926 t1_ja2aq9m wrote

Personally, I don't think he's wrong. I just think he's early. Whether or not he's too early depends on how much money he's willing to burn and how long it takes for the tech to get good enough that the average consumer sees it as a must-have. I'm referring more to the hardware than to the current shitty incarnation of "VR worlds".

1

jamesj OP t1_ja2acnm wrote

Hey, I appreciate you taking the time to engage with the article and provide your thoughts. I'll respond to a few things.

>The first two elements of that is the definition for any model, which is exactly what both AI and deterministic regression algorithms all do.

Yes, under the framework used in the article, an agent using linear regression might be a little intelligent: it can take past state data, use it to make predictions about the future state, and use those predictions to act. That would be more intelligent than an agent that takes random actions.
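As a toy illustration (my own sketch, not anything from the article, with made-up numbers), here's the minimal predict-then-act step that even linear regression supports:

```python
# Toy agent: fit a linear model on past states, predict the next state,
# and act on the prediction. The states and the action rule are invented.
import numpy as np

past_states = np.array([1.0, 2.1, 2.9, 4.2, 5.0])  # observations so far

# Regress state on time step: state ≈ a * t + b
t = np.arange(len(past_states))
a, b = np.polyfit(t, past_states, deg=1)

predicted_next = a * len(past_states) + b
action = "brake" if predicted_next > 5.5 else "coast"
print(f"predicted next state: {predicted_next:.2f} -> action: {action}")
```

Crude as it is, that already beats random action selection whenever the state actually follows a trend.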

>I'm not saying it's a bad paper or theory, but that this essay doesn't really justify why it brings it up so much

Yes, that's a fair point. I was worried that spending more time on it would have made the essay even longer than it already was. But one justification is that it's a good, practical definition of intelligence: it demystifies intelligence by specifying what kind of information processing must be taking place. It builds on information-theoretic work on information bottlenecks and is directly related to the motivation for autoencoders.
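For anyone unfamiliar with that connection, here's a minimal autoencoder sketch (my own toy example, not code from the paper); the narrow hidden layer is the bottleneck that forces the network to learn a compressed representation:

```python
# Minimal autoencoder: the 32-unit hidden layer is an information
# bottleneck, so reconstructing the input forces compression.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(784, 32),   # encoder: squeeze 784 dims down to 32
    nn.ReLU(),
    nn.Linear(32, 784),   # decoder: reconstruct the input from 32 dims
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

x = torch.rand(64, 784)          # stand-in batch (e.g. flattened images)
for _ in range(100):
    opt.zero_grad()
    loss = loss_fn(model(x), x)  # reconstruction error drives learning
    loss.backward()
    opt.step()
```

Whatever survives the 32-dimensional squeeze is, by construction, a compressed summary of the input.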

>The problem is that Schmidhuber 2008 only exists as a preprint and later as a conference paper -- it was never peer-reviewed.

The paper isn't an experiment with data; it was first presented at a conference to put forward an interpretation. It's been cited 189 times, and I think it's worth reading: the ideas can be understood pretty easily. But it isn't the only paper that discusses the connection between compression, prediction, and intelligence. Not everyone talks in the language of compression; they may use words like elegance, parameter efficiency, or information bottlenecks, but we are talking about the same ideas. This paper has some good references; it states, "Several authors [1,5,6,11,7,9] have suggested the relevance of compression to intelligence, especially the inductive inferential (or inductive learning) part of intelligence. M. Hutter even proposed a compression contest (the Hutter prize) which was “motivated by the fact that being able to compress well is closely related to acting intelligently”."

>The equation E = mc2 For the newbies out there, this is what's called a red flag.

I was trying to use an example people would be familiar with. All the example points out is that the equations of physics are highly compressed representations of past physical measurements that allow us to predict many future physical measurements. The same could be said of Maxwell's equations, the Standard Model, or any successful physical theory. Most physicists prefer more compressed mathematical descriptions, though they would usually call them more elegant rather than use the language of compression.

>This is completely the wrong way to think about it if you're trying to understand these things, so I hope he actually knows this.

I don't think it's wrong to say that what the transformer "knows" about the images in its dataset has been compressed into its weights. In a very real sense, a transformer is a very lossy compression algorithm: it takes in a huge dataset and learns weights that represent patterns in that dataset. So no, I'm not saying that literally every image in the dataset was compressed down to 1.2 bytes each. I'm saying that whatever SD learned about the relationships between the pixels in an image and their text labels is stored in its weights at about 1.2 bytes per dataset image. And you can actually use those weights as a good image compression codec. The fact that it has to do this in a limited number of parameters is one of the things that forces it to learn higher-level patterns rather than rely on memorization or other simpler strategies. Ilya Sutskever talks about this, and was part of a team that published on it, basically showing that there is a sweet spot in the data-to-parameter ratio: adding parameters improves performance up to a point, beyond which adding even more decreases it. His explanation is that limiting the number of parameters forces the model to generalize. So in Schmidhuber's language, the network is forced to form more compressed representations, so it overfits less and generalizes better.
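To show where a figure like that comes from, here's a rough back-of-envelope (both inputs are my approximations, not the exact checkpoint or dataset sizes):

```python
# Back-of-envelope: bytes of model weights per training image.
# Both figures below are rough assumptions, not exact SD numbers.
params = 1.0e9          # on the order of 1B parameters
bytes_per_param = 2     # fp16 storage
dataset_images = 2.0e9  # training set on the order of 2B images

weights_bytes = params * bytes_per_param
print(weights_bytes / dataset_images, "bytes per image")  # ~1.0
```

That lands in the same ballpark as the 1.2 bytes per image mentioned above; the exact number depends on which checkpoint and dataset count you plug in.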

>First, this is the connectivist problem/fallacy in early AI and cog sci -- the notion that because small neuronal systems could be emulated somewhat with neural nets, and because neural nets could do useful biological-looking things, that then the limiting factor to intelligence/ability is simple scale

My argument about this doesn't come from ML systems mimicking biology. It comes from looking at exponential graphs of cost, performance, model parameters, and so on, and projecting that the exponential growth will likely continue for a while. The first airplane didn't fly like a bird; it did something a lot simpler than that. In the same way, I'd bet the first AGI will be a lot simpler than a brain. I could be wrong about that.

But I'm not even claiming that scaling transformers will lead to AGI, or that AGI will definitely be developed soon. All I'm saying is that there is significant expert uncertainty about when AGI will be developed, and that it could possibly be developed soon. If it were, it would probably be the most difficult type of AGI to align, which is a concern.

2

Worth_Procedure_9023 t1_ja29qa8 wrote

Craftsmanship in a lot of ways is just ADHD/OCD levels of attention paid to technically irrelevant details.

I built a table that weighs over 400 lbs and is sturdier than the wall it's bolted to. Why? Because I had a fear of my kids tipping it over.

I've got no doubt I could park my little car on it, and it would creak but hold. Why? Because I don't build things to be "just enough".

But the "just enough" mentality is what often enables the have-nots to actually have some nice shit.

Don't let the boomers convince you that craftsmanship is gone. It's just much harder to notice, because craftsmanship has exceeded the layperson's ability to perceive it.

1

Monnok t1_ja28d43 wrote

There is a pretty widely accepted and specific definition for general AI... but I don't like it. It's basically a list of simple things the human brain can do that computers didn't happen to be able to do yet in like 1987. I think it's a mostly unhelpful definition.

I think "General Artificial Intelligence" really does conjure some vaguely shared cultural understanding laced with a tinge of fear for most people... but that the official definition misses the heart of the matter.

Instead, I always used to want to define General AI as a program that:

  1. Exists primarily to author other programs, and

  2. Actively alters its own programming to become better at authoring other programs

I always thought this captured the heart of the runaway-train fear that we all sorta share... without a program having to necessarily already be a runaway-train to qualify.
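A toy sketch of what I mean (purely illustrative; the "self-alteration" here is a contrived stand-in for the real, hard problem):

```python
# Toy generator that (1) authors other programs from source text and
# (2) adjusts its own authoring strategy based on how they perform.
import random

template = "def f(x): return x {op} {k}"

def author_program(op, k):
    src = template.format(op=op, k=k)
    scope = {}
    exec(src, scope)              # "author" a new program from source
    return scope["f"], src

ops, best = ["+", "*"], None
for _ in range(20):
    op, k = random.choice(ops), random.randint(1, 5)
    f, src = author_program(op, k)
    score = f(10)                 # crude fitness: bigger output wins
    if best is None or score > best[0]:
        best = (score, src)
        ops = [op] * 3 + ops      # bias its own authoring toward winners

print("best program authored:", best[1])
```

A real candidate would rewrite the generator itself, not just reweight a list, but even this tiny loop has the two ingredients: authoring programs, and changing how it authors them.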

2