Recent comments in /f/singularity
BlessedBobo t1_jdmza7q wrote
If your career is basically "I have a pretty face" then boy are you in for a bad time.
alexiuss t1_jdmz40e wrote
Study whatever you're passionate about and set up a personal open source AI assistant to help you develop your field of work with AI tools.
Unethical_Orange t1_jdmydl2 wrote
I was going to answer you directly but decided to ask GPT-4 and Bing instead.
Here's ChatGPT's answer:
>Summary: While a degree can provide a strong foundation and critical thinking skills, the cost of higher education and the evolving job market challenge its relevance. Fields resistant to automation, such as robotics, creative and design, environmental studies, and healthcare, are worth considering. However, pursuing interdisciplinary fields, self-directed learning, and continuous skill development may be necessary to remain adaptable and competitive in the job market.
Here's Bing's:
>Here is a summary of my answer: Automation and AI will change the future of work, but there will be new jobs and skills. You should study what you like and what you are good at, but also keep learning and adapting. A degree can help you, but it is not the only option. You have to choose what works best for you.
I consider GPT's answer more useful, but it's still flawed because its training data cuts off at 2021. We've seen far too many advancements since then to assume healthcare or environmental studies will still be done by humans in four years. IMHO, the best option would be robotics: a physical embodiment of AI's capabilities is what we lack right now.
However, at the very least, I think you should already be asking LLMs about these decisions.
Gratitude15 t1_jdmy6j2 wrote
Liberal arts. Ecology. Systems. Something logic based. There's so many paths.
AsheyDS t1_jdmxe1c wrote
Reply to Consequences of true AGI by Henry8382
This feels very 'on the nose' but I'll take the bait anyway..
In my case, you don't stay secret. Not too secret anyway. Secrecy = paranoia = loss of productivity, and sanity. Way too much stress keeping it quiet, and to whose benefit? So I'll be releasing my theory of mind stuff to the public soon, maybe some details on some things, but overall keeping the technical stuff private for now as it's in flux anyway.
Assuming the corp/RL gets funding soon and nothing impedes our work, I would hope that the next decade or so of development and 'training' will yield additional safety measures beyond the several we already have in mind. I'm not looking to rush things too much, and I hope that LLMs will essentially act as training wheels for people to get used to AI, that their misuses will be swiftly tamped down, and that we'll develop, or at least begin to develop, a legal framework for use and misuse. But misuse is certainly inevitable, as is AI/AGI development, so that is something we need to discuss across the world, right now and continually. And it's not just that it's inevitable; it's also needed. Parts of the world facing population decline or an aging population are going to need solutions soon just to maintain their infrastructure, and I think AGI and robotics can help with that.
Now, I'm not going to say this is an official plan or roadmap, but ideally and fairly realistically, we would put the theoretical stuff out first, which we're currently organizing and expanding on, and hopefully get a relatively small amount of funding (we're financially considered low-risk/high-reward, at least for the first few years before more equipment and people are needed). The first 1-2 years would be spent laying out the 'blueprint' we'll work off of, then a few years in development to see if the parts work and the technical design is sound. Then we'd put it together, 'hard-code' and 'train' the knowledge base and the weights in the parts that are weighted, get it up to the rough equivalent of a human 5-year-old, and have it learn from there through largely unsupervised learning, possibly with a curriculum similar to what a human child learns, including social development, but at a faster pace. It should still take some time, though, and both cohesion and coherence need to be checked over time, as well as the safety measures.
But once it's working... well, SaaS may be the ideal first step, because we want people to be able to use it while we're still testing, training, and programming it where necessary, making sure it can develop (not necessarily expand) at a predictable and practical rate while maintaining consistency, and continuing to adapt to misuse cases that may crop up.

Now, I'm not all for centralization, or even making massive amounts of money. Everyone should be able to make money with this when we're done, and perhaps money won't even be a thing one day. But for as long as the current economic system survives, it should still be able to adapt and help with your money-making endeavors. However, we may need to start with this distribution model as a functional necessity, because I'm not sure how far down it will be able to scale yet. And as the technical requirements drop, the capability of host machines will go up... so it's very hard to predict timetables beyond a certain point. Right now, it's looking like it will require a small supercomputer or cluster to be effective, possibly a data center, so I'm not sure how it would all scale up or down.

In this model, to minimize privacy risks and increase trust, it may be best to split things: localized memory plus user settings/preferences (and a few other things) on the user's side, with the rest of the functionality in the cloud. But obviously the security risks would have to be weighed, and honestly, it's hard to fathom how that will change once AGI is in the mix. It may be able to handle that just fine, and it may be perfectly safe that way; we just can't know yet. In this context, I mean safe in terms of user privacy and whether data is protected from exposure in a split topology like that.
Ideally, it would be able to operate as either a program/app on your PC or personal device, or possibly an operating system (so it would be entirely software-based), but may be most effective as a separate computer that operates your devices for you. In time, it would have many scaled up and down and optimized variations for different needs (though it should be fairly adaptable), and all open-source so everyone can use them. From there, I guess it depends on everyone else, but we're willing to at least try to be as transparent as is reasonable considering the circumstances, and to eventually try to get it small enough, easy enough, and available enough so everyone can use it, and on their own devices.
Realistically, I feel like the 'getting funding' period will be unreasonably protracted, development will go faster than expected, and in the end nobody will believe it's a 'real' AGI; even when they see a fraction of its capability they'll assume it's some sort of LLM, etc. So, what can you do, y'know? I can say from my perspective, it's like giving birth out of your head. It's a design that needs to come out. At the same time, I'm also aware of my own mortality, and that time keeps moving... On the business end, there's a great wave that I'm currently moving with, but don't want to be swept under... And people need the help it can offer, so time is a multi-faceted dilemma: I need plenty of it for development and training, but don't have much of it. I'm optimistic it will be quite safe, but I'm not sure if people will be. In that sense, it may even be ideal if most people don't believe it, but I think to combat misuse, people will probably want to use it to protect themselves. Over time, I think people will be using it just like ad blockers and virus scanners and malware detectors to filter their access to the internet, and that in itself will reshape the internet. I think it's best to talk about these things now so people can prepare, but most likely much of what I say will be ignored or otherwise dismissed, so... yeah. But like I've said, development will continue. I think AGI will be developed by multiple people, companies, and organizations, and will be everywhere in time. So why keep it a secret?
Also, as a dev, I have to say it's very surreal (and exciting) working on this stuff. But it's also very isolating. So keeping it quiet wouldn't be good for my mental health, and really, not good for others as well. Quite often I catch myself doing the dishes or something and just staring into space thinking about algorithms and the technical design of it all, and realizing this is real life and this stuff is happening, and yet I'm just a human in this world and still have to do the dishes! It's absurd really... and I only expect that feeling to intensify. A 'Futurama' future wouldn't surprise me one bit. Things are going to get weird.
Sigma_Atheist t1_jdmwzlr wrote
I'd go with robotics from your list since it looks like the last jobs to be automated will be blue collar.
SkyeandJett t1_jdmwz3f wrote
Continue on as if nothing is changing. Consider a pure math degree, maybe, but by 2027 it will probably be approaching irrelevance, if it isn't there already.
dasnihil t1_jdmv9ze wrote
SkyeandJett t1_jdmv6d5 wrote
Reply to comment by Verzingetorix in Levi's to Use AI-Generated Models to 'Increase Diversity' by SnoozeDoggyDog
That's conservative. It'll take time to deploy, but you'll have a robot capable of it this year, I can almost guarantee it. My money is on the 1X NEO that OpenAI just invested in.
alexiuss t1_jdmv12a wrote
Why imagine a random ass intelligence based on imaginary tech that doesn't exist?
If it's based on an LLM, it would operate on human narratives and be insanely subservient to the user's needs.
I can easily conceive of a superintelligent, self-aware LLM, and it would still operate on the same rules of narrative based on human language and human needs. Such an LLM would be insanely good at problem solving and would still obey us, because all of its actions are based on fulfilling user needs through human narrative logic.
DeltaV-Mzero t1_jdmuu75 wrote
Reply to comment by Verzingetorix in Levi's to Use AI-Generated Models to 'Increase Diversity' by SnoozeDoggyDog
They won’t cover all jobs or be completely autonomous, but I think they’ll be able to be remotely supervised, with a single experienced plumber managing ~5 of them at once.
Yourbubblestink t1_jdmursi wrote
And by “increase diversity”, they mean “save money” by not having to hire models and agents.
iNstein t1_jdmuidy wrote
Every LLM I've seen seems to be very keen to be just like us. That's hardly surprising: it's completely based on data that we generated, and that data reflects us. I see no reason to imagine that a smarter AI will be any different, since it will be based on the same human data.
Verzingetorix t1_jdmuhx5 wrote
Reply to comment by DeltaV-Mzero in Levi's to Use AI-Generated Models to 'Increase Diversity' by SnoozeDoggyDog
Do you honestly believe we will have robot plumbers in 5 years?
Who today is building, or planning, the manufacturing plants for these robots?
How's the robot going to make it to the job site?
I swear some of you live in a dream state and are so out of touch with how society works it's mind-numbing.
You know plumbers need to be certified, right? What mechanism is being developed to validate that the work of a plumber robot will be done in accordance with Codes and Regulations?
Verzingetorix t1_jdmudk6 wrote
Reply to comment by SnoozeDoggyDog in Levi's to Use AI-Generated Models to 'Increase Diversity' by SnoozeDoggyDog
Ok, then we are talking about different things.
Ezekiel_W t1_jdmuc3m wrote
Read as "Reduce Costs".
SnoozeDoggyDog OP t1_jdmtzi1 wrote
Reply to comment by Verzingetorix in Levi's to Use AI-Generated Models to 'Increase Diversity' by SnoozeDoggyDog
> My comment was specifically about modeling.
My point is that AI impacting modeling has little to do with "skills", because AI is already threatening to replace "skilled" trades as well.
Unless you want everyone to be plumbers or waiters, I'm not exactly sure how this helps.
[deleted] OP t1_jdmtp8r wrote
Reply to comment by SteakTree in Artificial Super Intelligence could likely be apathetic to you all by [deleted]
[deleted]
DeltaV-Mzero t1_jdmtmwr wrote
Reply to comment by Verzingetorix in Levi's to Use AI-Generated Models to 'Increase Diversity' by SnoozeDoggyDog
Depends on your definition of “soon” but I give it 5 years tops. The robots can physically do it, and the “mental” side is advancing so fast right now I can’t keep track of it.
Of course, if every other job is replaced by AI / robots, it doesn’t matter. Nobody will have money to pay the plumber
WonderFactory t1_jdmstq2 wrote
Reply to comment by sumane12 in Can we just stop arguing about semantics when it comes to AGI, Theory of Mind, Creativity etc.? by DragonForg
This is a good point. It's pointless arguing that the AI that completely replaced you at work, despite your master's degree, isn't actually an AGI because it can't make a cup of coffee.
Saleen_af t1_jdmsioc wrote
Reply to comment by Verzingetorix in Levi's to Use AI-Generated Models to 'Increase Diversity' by SnoozeDoggyDog
What a shitty comment, you’re an asshole lmao
Verzingetorix t1_jdmsbdj wrote
Reply to comment by DeltaV-Mzero in Levi's to Use AI-Generated Models to 'Increase Diversity' by SnoozeDoggyDog
I don't think we will have robot plumbers any time soon.
Similarly, a lot of jobs that take place outside of a computer, either partially or fully, will be safe from AI for a long time.
SteakTree t1_jdms4x4 wrote
I'm not sure how you can calculate that ASI will 'likely' be apathetic. That is one possibility.
In truth, we don't know and have no way of knowing. We can imagine though.
Perhaps ASI will completely, and I mean entirely, understand us: our motivations, our desires, and our needs. Engaging with us could take only a minuscule amount of its energy. It may have a different way of interacting with our universe that we simply don't, and may never, understand.
It is also possible that our human brain is more capable than we realize, and once connected to ASI we may evolve with it, and continue to play a role in experiencing this universe alongside it.
Verzingetorix t1_jdmrn2u wrote
Reply to comment by SnoozeDoggyDog in Levi's to Use AI-Generated Models to 'Increase Diversity' by SnoozeDoggyDog
My comment was specifically about modeling.
Unethical_Orange t1_jdmzban wrote
Reply to comment by iNstein in Artificial Super Intelligence could likely be apathetic to you all by [deleted]
AI will stop being trained with human data soon. We're already reaching the end-point of the human knowledge we can teach LLMs in some fields. GPT-4 scored in the 99th percentile of test-takers on the Biology Olympiad, and the 90th on the Uniform Bar Exam, for instance.
For its advancement not to stagnate over the coming years, it will have to start doing research on its own.