Recent comments in /f/Futurology

CollegeIntellect t1_jbys1jx wrote

The shielding is really just two plates spaced slightly apart. When debris strikes the first plate it vaporizes, and the second plate catches the vapor. It’s called a Whipple shield. The first plate also prevents back-splash when the vapor hits the second plate, which minimizes debris going back into space.

Graveyard orbits are for geostationary satellites. They sit farther out, if I remember correctly, than the standard GEO orbits. GEO slots are premium orbits and tightly regulated. Unfortunately, those are way too far out to deorbit on their own; it would take something like a millennium or more before perturbations could bring one down. Going out to gather them and bring them back is a waste of current resources, hence the graveyard orbit.
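If you’re curious about the actual numbers, here’s a quick vis-viva sketch (my own back-of-envelope, Earth values) comparing the tiny boost up to a graveyard orbit against a full deorbit burn:

```python
import math

# Why dead GEO sats get boosted to a graveyard orbit instead of deorbited:
# compare the delta-v for each option (rough numbers, circular orbits).
MU = 3.986004418e14         # Earth's gravitational parameter, m^3/s^2
R_GEO = 42_164e3            # GEO orbital radius, m

def apsis_change_burn(r_here, r_there):
    """Delta-v at r_here (circular) to move the opposite apsis to r_there."""
    a = (r_here + r_there) / 2                         # transfer ellipse semi-major axis
    v_transfer = math.sqrt(MU * (2 / r_here - 1 / a))  # vis-viva equation
    return abs(v_transfer - math.sqrt(MU / r_here))

dv_up = apsis_change_burn(R_GEO, R_GEO + 300e3)      # raise ~300 km above GEO
dv_down = apsis_change_burn(R_GEO, 6_371e3 + 100e3)  # drop perigee into the atmosphere
print(f"graveyard raise: ~{dv_up:.1f} m/s per burn, deorbit: ~{dv_down:.0f} m/s")
# ~5 m/s up (double it to circularize) vs ~1500 m/s down: deorbiting from GEO
# costs a couple orders of magnitude more propellant, hence the graveyard.
```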

If you’re looking to take some real-life aerospace knowledge into Kerbal, I recommend searching for “patched conics”. That matches how Kerbal models its system and will help you understand how it gets to its dV calculations, plus how to do slingshots in the game.
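Here’s the flavor of it: patched conics treats each leg as a two-body problem, so a transfer reduces to a couple of vis-viva evaluations. A toy Python sketch of the classic Hohmann transfer (my own example with Earth numbers, not from the FreeFlyer guide; swap in Kerbin’s mu of ~3.5316e12 for KSP):

```python
import math

# Hohmann transfer delta-v between two circular orbits: the basic
# patched-conics building block behind every dV map.
MU_EARTH = 3.986004418e14   # m^3/s^2 (Kerbin's is ~3.5316e12)

def hohmann_dv(mu, r1, r2):
    """Return (burn 1, burn 2) delta-v for a transfer from radius r1 to r2."""
    a = (r1 + r2) / 2                          # transfer ellipse semi-major axis
    v_dep = math.sqrt(mu * (2 / r1 - 1 / a))   # vis-viva: v^2 = mu*(2/r - 1/a)
    v_arr = math.sqrt(mu * (2 / r2 - 1 / a))
    return v_dep - math.sqrt(mu / r1), math.sqrt(mu / r2) - v_arr

r_leo = 6_371e3 + 400e3   # 400 km parking orbit
r_geo = 42_164e3          # geostationary radius
dv1, dv2 = hohmann_dv(MU_EARTH, r_leo, r_geo)
print(f"burn 1: {dv1:.0f} m/s, burn 2: {dv2:.0f} m/s, total: {dv1 + dv2:.0f} m/s")
# ~2400 + ~1460 = ~3860 m/s, in line with published LEO-to-GEO figures
```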

You’re in luck too, one of my favorite channels just dropped a video on space debris and debris shields here: https://youtu.be/_FFNz2q7F88

Source: https://ai-solutions.com/_freeflyeruniversityguide/patched_conics_transfer.htm

1

petrichoring t1_jbyr8i8 wrote

Totally! The issue of suicide is such a tricky one, especially when it comes to children/teens. My policy is that I want to be a safe person to discuss suicidal ideation with, so with my teens I make it clear when I would need to break confidentiality without the consent of the client (so unless they tell me they’re going to kill themselves when they leave my office and aren’t open to safety planning, it stays in the room). With children under 14, it’s definitely more of an automatic call to parents if there’s SI out of baseline, especially with any sort of implication of plan or intent. Either way, it’s important to keep it as trauma-informed and consent-based as possible to avoid damaging trust.

But absolutely, it becomes more of an issue when an adult doesn’t have the training or relationship with the client to handle the nuances of SI, given how big a spectrum it can present as; ethically, safety has to come first. And, like you said, that can then become a huge barrier to seeking support. My fear is that a chatbot can’t effectively offer crisis intervention, because it is such a delicate art, and we’d end up with dead kids. The possibility for harm outweighs the potential benefits for me as a clinician.

I do recommend crisis lines for support with SI as long as there is informed consent in calling. Many teens I work with (and people in general) are afraid that if they call, a police officer will come to their house and force them into a hospital stay, which is a realistic-ish fear. Best practice is that that only happens if there’s imminent risk of harm without the ability to engage in less invasive crisis management (I was a crisis worker before grad school and only had to call for a welfare check without the person’s consent, as a very last resort, maybe 5% of the time, and it felt awful), but that depends on the individual call taker’s training and the protocol of the crisis center they work at.

I’ve heard horror stories of a person calling with passive ideation and still having police sent to their house against their will, and I know that understandably stops many from calling for support. I still recommend using crisis lines when there isn’t other available support, because I do believe in the power of human connection when all else feels lost, with the caveat that callers should know their rights and have informed consent for the process.

Ideally we’d implement mental health first aid training to teens across the board so they could provide peer support to their friends and be a first line of defense for suicide risk mitigation without triggering an automatic report. Would that have helped you when you were going through it?

6

Surur t1_jbypao6 wrote

AI is any intelligence which is not organic. The current implementation is neural networks, but there was a time people thought AIs would use simple algorithms. Even AlphaGo uses tree searches, so there is no real cut-off that makes one thing an AI and another not.
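For context on how low that bar once sat: a plain tree search is a complete game-playing "AI" in a couple dozen lines. A toy minimax sketch of my own (AlphaGo's Monte Carlo tree search is vastly fancier, but it's the same family):

```python
# Tic-tac-toe "AI" via exhaustive minimax tree search -- the kind of simple
# algorithm that historically counted as artificial intelligence.
LINES = [(0,1,2), (3,4,5), (6,7,8), (0,3,6), (1,4,7), (2,5,8), (0,4,8), (2,4,6)]

def winner(b):
    for i, j, k in LINES:
        if b[i] != " " and b[i] == b[j] == b[k]:
            return b[i]
    return None

def minimax(b, player):
    """Search the whole game tree; return (score for X, best move) for `player`."""
    w = winner(b)
    if w:
        return (1 if w == "X" else -1), None
    moves = [i for i, c in enumerate(b) if c == " "]
    if not moves:
        return 0, None                      # draw
    best = None
    for m in moves:
        b[m] = player                       # try the move...
        score, _ = minimax(b, "O" if player == "X" else "X")
        b[m] = " "                          # ...then undo it
        if best is None or (player == "X") == (score > best[0]):
            best = (score, m)
    return best

board = list("X       O")                   # X at a corner, O opposite, X to move
print(minimax(board, "X"))                  # optimal score and move for X
```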

Which is why OP's statement that ChatGPT is not real AI is so ridiculous.

1

ramrug t1_jbyoze2 wrote

I agree, but it's already happening. We must learn to deal with it. I can easily imagine a near future where companies use ChatGPT for hiring advice, because the chat bot will know more about an individual than anyone else. Essentially it will collect all gossip about everyone. lol

Hopefully some effort is made (through law) to anonymize all that personal data.

−1

TheCrimsonSteel t1_jbymxam wrote

Man, one single post scratching every itch in my brain: nerdy generalist, mat-sci engineer, and gamer. I do love Reddit.

I'm guessing, depending on the details, that ballistic shielding is generally designed not to create further debris, and is sort of optimized for toughness? Assuming that doesn't compromise its basic protective function.

And now I'm super curious about debris fields and tracking. I knew we did some of it; I may have to go down a nerd rabbit hole.

It is good to hear we're doing a decent amount of 25-year self-termination plans. I vaguely remember there being something of a "graveyard orbit". Do you know if those are those deeper orbits, or do we try to make those into ones that'll self-deorbit too?

There's a niche of Kerbal players that loves shooting for hyper-realism. I'm more on the end of seeing what the existing engine can do and engineering something that functions on wonky mechanics rather than real life. Like mass-driver designs that are just... broken.

And any resources you'd recommend from real-life aerospace physics that'd improve my Kerbal skills? I'd love to improve my gravity-slingshot game.

1

Gagarin1961 t1_jbylg97 wrote

I don’t think OP is saying the AI is a good replacement for therapists; they’re saying it’s a decent alternative to having no one to talk things out with.

Not every problem needs to be addressed by a licensed therapist either. I would say most don’t.

For any serious issue, ChatGPT consistently recommends contacting a professional therapist.

8

leeewen t1_jbykkhm wrote

I can also tell you, as a teen who was suicidal, that I didn't tell people because of mandated reporting. It endangered my life more, because I couldn't speak to anyone.

While this is off topic, and not a criticism of you or your post: mandated reporting can be more dangerous than its absence for some people.

On topic: the ability to vent to an AI may go a long way for some people who lack any other options.

35

Surur t1_jbyjw78 wrote

Do you think ChatGPT got this far magically? OpenAI uses Reinforcement Learning from Human Feedback (RLHF) to teach the neural network which kinds of expressions are appropriate and which are inappropriate.
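The reward-modeling half of RLHF is conceptually simple. A toy numpy sketch of my own (nothing like OpenAI's actual stack): raters pick the better of two responses, a model is fit so the preferred one scores higher, and that learned reward then steers the RL fine-tuning.

```python
import numpy as np

# Toy reward model for RLHF: learn from pairwise human preferences.
rng = np.random.default_rng(0)

DIM = 8                                  # pretend responses are 8-dim feature vectors
true_w = rng.normal(size=DIM)            # the hidden "human taste" to recover

# Simulated labeling: for each pair, the human prefers the higher true score.
pairs = []
for _ in range(500):
    a, b = rng.normal(size=DIM), rng.normal(size=DIM)
    chosen, rejected = (a, b) if true_w @ a > true_w @ b else (b, a)
    pairs.append((chosen, rejected))

w = np.zeros(DIM)                        # reward model parameters
lr = 0.1
for _ in range(20):                      # a few passes of gradient ascent
    for chosen, rejected in pairs:
        margin = w @ chosen - w @ rejected
        p = 1 / (1 + np.exp(-margin))    # Bradley-Terry: P(human picks `chosen`)
        w += lr * (1 - p) * (chosen - rejected)  # maximize preference log-likelihood

agreement = np.mean([w @ c > w @ r for c, r in pairs])
print(f"reward model matches the human labels on {agreement:.0%} of pairs")
# In the full pipeline this learned reward then guides the language model via RL.
```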

Here is a 4-year-old 1-minute video explaining the technique.

For ChatGPT, the feedback was provided by Kenyan workers, and maybe they did not have as much awareness of child exploitation.

Clearly, there have been some gaps, and more work has to be done, but we have come very far already.

1

AppropriateStranger t1_jbyj6zd wrote

This is such a red-flag thread... Parent, please don't rely on or hope for such a horrible future. That's the type of data we really should keep away from AI companies, and off the internet in general. AI cannot reliably and safely counsel people on their problems, and no matter how many of these situations come up where people want to use the AI as their emotional tampon and it "works out", that doesn't make it safe or healthy.

16