Recent comments in /f/Futurology

PublicFurryAccount t1_ja01fb7 wrote

Pretty much.

The market for upgrades in every product area is limited to enthusiasts and businesses. Honestly, it's just not worth doing unless the device is hideously expensive and the market is fast-moving but inconsistent, like PCs in the 1980s and 1990s. Otherwise, either an upgrade doesn't add enough value to be worth buying, or you basically need to upgrade everything anyway.

Battery life is a solved issue for a lot of devices now, though. Because phones and laptops aren't increasing in capability as fast, manufacturers have started to offer free or nominal-cost replacement services for those components.

1

PublicFurryAccount t1_ja00hgn wrote

There wasn't any such thing.

The issue was that, a decade ago, companies were adding smart features without really grokking the implications of a sensor which can halt operation. This led to lots of products becoming useless because the sensor had failed.

This can be counteracted in some systems with a hard reset. The machine will sometimes have code that marks a sensor as bad during the first-run diagnostic and ignores that sensor thereafter. Other times the issue was just a routine that wanted the user to perform some maintenance task years later, long after they'd lost the manual, so they wouldn't know how to reset the flag (e.g., by powering on the coffee maker while holding the brew button or whatever).
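The pattern described above can be sketched roughly like this. All of the names and the reset gesture are purely illustrative; real firmware varies by vendor and is usually written in C:

```python
class Appliance:
    """Hypothetical appliance firmware sketch, not any real product's code."""

    def __init__(self):
        self.ignored_sensors = set()  # sensors flagged bad at first run
        self.maintenance_due = False  # the flag that bricks operation later

    def first_run_diagnostic(self, sensor_readings):
        """Mark any sensor that fails self-test as bad; ignore it thereafter."""
        for name, reading in sensor_readings.items():
            if reading is None:  # sensor failed to respond
                self.ignored_sensors.add(name)

    def sensor_ok(self, name, reading):
        # A sensor flagged bad at first run can no longer halt operation.
        if name in self.ignored_sensors:
            return True
        return reading is not None

    def reset_maintenance_flag(self, power_on, brew_held):
        # The obscure reset gesture: power on while holding the brew button.
        if power_on and brew_held:
            self.maintenance_due = False
```

The key design point is that the bad-sensor list is only populated once, at first run: a sensor that dies in year three still halts the machine, which is exactly the failure mode described above.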

Unfortunately, I'm going to have to be your source for the cause. I work in IoT, and this sort of thing was among the war stories told by coworkers from the early days of the market.

2

FuturologyBot t1_j9zzyde wrote

The following submission statement was provided by /u/jamesj:


I'd like to share some of my thoughts and have a discussion regarding the timeline for AGI and the risks inherent in building it. My argument boils down to:

  1. AGI is possible to build
  2. It is possible the first AGI will be built soon
  3. AGI which is possible to build soon is inherently existentially dangerous

So we need more people working on the problems of alignment and of deciding what goals increasingly intelligent AI systems should pursue.


Please reply to OP's comment here: https://old.reddit.com/r/Futurology/comments/11bu6ev/why_the_development_of_artificial_general/j9zv8zv/

1

PublicFurryAccount t1_j9zy4io wrote

Televisions were way more expensive back then, though, and advances in CRTs were really slow. So you needed a set to last a long time to justify the purchase, even for a middle-class family, and you could expect it wouldn't really fall behind newer televisions for many years, because significant changes took a long time to arrive.

1

jamesj OP t1_j9zv8zv wrote

I'd like to share some of my thoughts and have a discussion regarding the timeline for AGI and the risks inherent in building it. My argument boils down to:

  1. AGI is possible to build
  2. It is possible the first AGI will be built soon
  3. AGI which is possible to build soon is inherently existentially dangerous

So we need more people working on the problems of alignment and of deciding what goals increasingly intelligent AI systems should pursue.

23

vuxanov t1_j9zrwwe wrote

Writing prompts is only necessary because Midjourney currently takes the form of a Discord server. As soon as they, or someone else, build a proper user interface, there will be little or no need to write anything.

3