Recent comments in /f/Futurology

mascachopo t1_jc31uup wrote

No. Programming is not just about sitting in front of a computer and writing code according to a specification. Most of the time goes into figuring out the right technology or library for a given task, modifying existing code to add a new feature without breaking old behavior, performance work, sorting out security issues, fixing bugs, etc. All of these are tasks without a clear specification you can just throw into a prompt. Anyone who gives you a straight yes is simply not familiar with the job: developer work involves far more than applying general programming knowledge to well-defined tasks. That last part is what AI is very good at, and it will be (and already is) a great tool for the simpler tasks we do, but you still need an experienced developer to evaluate and test the code these tools produce, since they are quite prone to errors that an inexperienced one won't catch.

2

strabosassistant t1_jc2zlq5 wrote

For 95% of coders and programmers: yes, in 5-10 years there will be no need for workhorse members on a team. Only truly innovative, pioneering technologists will have a reason to exist, expanding the template of capabilities the AI can learn and apply. Volume coders, clock-punchers, and 'went in because parents said it was a good field' people will need to find other work.

2

Strict_Jacket3648 t1_jc2ywec wrote

I hope true A.I. will take over the governments and go full-on socialism like in utopian sci-fi books, where being rich means being a millionaire, not a billionaire taking advantage of workers... The only thing wrong with socialism is the human factor. True A.I. is on its way; we can't stop it.

1

TheBookOfSmells t1_jc2p94a wrote

I've tried to approach this question practically myself, by seeing how much I could actually get GitHub Copilot and ChatGPT to do for me. The problem I had is that I still needed some sort of specification of what I wanted. In some cases this could potentially be replaced by an image, but often it seems to require precise language detailing exactly what should happen. Programming languages can be seen as just a type of specification, after all, one that allows a compiler to generate machine code. So maybe programming languages will evolve to better meet the needs of the AI programmer and the human programmer. Maybe that will look a lot like natural language - think prompt engineering. Maybe like current high-level languages. Maybe more of a question-and-answer exchange between human and AI.
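To sketch the "precise specification" point: a docstring detailed enough for a tool like Copilot to complete plausibly is already most of the design work. The function and spec below are a hypothetical example, not something from the thread; the docstring plays the role of the prompt, and the body is what an assistant would be asked to fill in.

```python
def merge_intervals(intervals):
    """Merge overlapping closed intervals.

    Input: a list of (start, end) tuples with start <= end.
    Output: a new list, sorted by start, in which intervals that
    overlap or touch have been merged into one.
    """
    merged = []
    for start, end in sorted(intervals):
        if merged and start <= merged[-1][1]:
            # Overlaps (or touches) the previous interval: extend it.
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged

print(merge_intervals([(1, 3), (2, 6), (8, 10)]))  # [(1, 6), (8, 10)]
```

Notice that the docstring already pins down the data shape, the ordering, and the merge rule; writing it required the same thinking as writing the code.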

2

LoveFearLearn t1_jc2ms6l wrote

It can be difficult to predict with certainty what the future will hold, but there are certainly a variety of potential outcomes.

On the one hand, digital and information technology have the potential to enable significant advances in areas such as healthcare, education, and environmental sustainability. For example, artificial intelligence and big data could lead to more personalized healthcare and more efficient use of resources. Similarly, online education and remote work could improve access to education and employment opportunities, especially for those in remote or underserved areas.

On the other hand, there are also concerns about the potential negative consequences of digital and information technology. These include issues such as data privacy, cyber-attacks, and the potential for automation to displace workers and exacerbate economic inequality. In addition, the increasing prevalence of social media and online communication has raised concerns about issues such as online harassment and the spread of misinformation.

Overall, it’s difficult to say whether we are heading towards a dystopia, utopia, or something in between when it comes to the future of digital and information technology. The outcome will depend on a variety of factors, including the decisions made by policymakers, the actions taken by individuals and organizations, and the development of new technologies and their impact on society.

1

lughnasadh OP t1_jc2m7am wrote

>>In past industrial revolutions, machinery has also replaced human labor but productivity gains did not all accrue to owners of capital—those gains were shared with labor through better jobs and wages. Today, for every job that is automated all productivity gains go to the owners of capital. In other words, as AI systems narrow the range of work that only humans can do, the productivity gains are accruing only to the owners of the systems, those of us with stocks and other financial instruments. And as we all know well, the development of AI is largely controlled by an oligopoly of tech leaders with inordinate power in dictating its societal impact and our collective future.

What is interesting about this article is how blunt it is in stating that current AI use is unethical, especially considering the source, the Carnegie Council for Ethics in International Affairs. I am especially impressed that the authors do not automatically accept the premise that AI will generate more jobs than it replaces. That question is more often swept under the carpet and ignored by academic think tanks.

I've asked the authors of this article to do an AMA with r/futurology. If anyone reading this could facilitate that, I'd be grateful if they could DM me here, or message the Mods.

2

alex20_202020 OP t1_jc2lz6i wrote

Disclaimer: below is a hypothesis. Please argue against it if you know what to say; I like to argue.

Needing less sleep means one is more likely to be able to raise a child without sleep deprivation. So people with the trait might have more kids on average. Hence back to my initial statement.

1

alex20_202020 OP t1_jc2lvci wrote

Disclaimer: below is a hypothesis. Please argue against it if you know what to say; I like to argue.

Needing less sleep means one is more likely to be able to raise a child without sleep deprivation. So people with the trait might have more kids on average. Hence back to my initial statement.

−4

Corsair4 t1_jc2fmtc wrote

>know some studies show educated westerners tend to have less kids

This isn't a westerner thing, this is a "literally every economically developed country, and most developing ones" thing.

Every economically developed country is under replacement rate. A lot of developing countries are dropping dramatically. India went from a rate of 6-something to replacement-adjacent over the course of 50 years.

It has absolutely nothing to do with sleep schedules. Birth rates drop as a society becomes more economically developed and, crucially, as women place greater emphasis on their own education and careers.

4