Recent comments in /f/Futurology

czl t1_jb9kzow wrote

You have images or video that you suspect may contain a message, but no access to the originals, and you want a way to judge whether a message is present and in which files.

It is foolish to leave unaltered originals available if you are using steganography, so the comparison test you refer to cannot be done in practice.

If you compress your message well, the result is near noise, and it is that noise that you then mix into the “natural noise” your media contains. Done right, this is hard to decode or even detect unless you know the algorithm.
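A toy sketch of the idea in Python (a classic LSB scheme, not the algorithm from the article): compress the message so it looks like noise, then overwrite the least significant bit of each cover byte with one bit of the payload.

```python
import zlib

def embed(cover: bytes, message: bytes) -> bytes:
    """Hide a compressed message in the least significant bits of cover bytes.
    Illustrative only; real schemes also randomize bit positions with a key."""
    payload = zlib.compress(message)  # well-compressed data is close to noise
    bits = [(byte >> i) & 1 for byte in payload for i in range(8)]
    if len(bits) > len(cover):
        raise ValueError("cover too small for payload")
    stego = bytearray(cover)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & 0xFE) | bit  # overwrite only the lowest bit
    return bytes(stego)

def extract(stego: bytes, payload_len: int) -> bytes:
    """Recover the message; the receiver must know the scheme and payload length."""
    bits = [stego[i] & 1 for i in range(payload_len * 8)]
    payload = bytes(
        sum(bits[i * 8 + j] << j for j in range(8)) for i in range(payload_len)
    )
    return zlib.decompress(payload)
```

In practice the payload length would be carried in a small embedded header; here the receiver is simply assumed to know it. Each cover byte changes by at most 1, which is why the result is hard to spot without the original.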

Claims about “encoding efficiency” depend on (1) what you are hiding, (2) inside what, and (3) with what chance of detection.

32

rherbom2k OP t1_jb9kta5 wrote

The government wants to use deepfakes offensively despite claiming to develop tools to counter them. This can undermine trust in all content and erode democracy. As technology advances, people will continue to use it maliciously. The impact of deepfakes can be disastrous, causing society to lose trust in institutions and government. The future looks bleak unless we create ethical guidelines and educate the public to counter disinformation and promote transparency.

7

Schrecht t1_jb9jssl wrote

>If you're altering a source file (by adding information, as in this example), it's detectable

Technically true, but that kind of detection requires a copy of the original. If you create your own content and keep no copy after inserting the message, the bad guys don't have the original to compare against.

13

volci t1_jb9il5y wrote

>Besides being perfectly secure, the new algorithm showed up to 40 per cent higher encoding efficiency than previous steganography methods, they said.

Sorry, but extraordinary claims require extraordinary evidence

If you're altering a source file (by adding information, as in this example), it's detectable

Cryptographic hashes are a perfect test for this type of communication - the hash of the original will not match that of the altered copy (barring an astronomically unlikely collision)
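The check is trivial with the standard library (this assumes you actually have the original to compare against):

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Hex digest of SHA-256; any single-bit change produces a different hash."""
    return hashlib.sha256(data).hexdigest()

original = b"innocent-looking image bytes"
altered = b"innocent-looking image byteX"  # one byte changed by embedding

# Even a one-byte steganographic change is detected by the digest comparison.
assert sha256_hex(original) != sha256_hex(altered)
```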

The only "perfectly secure" communication is a true one-time pad ...though, of course, the individuals using that system are subject to data extraction through less 'technical' means
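For reference, the one-time pad is just XOR with a key that is truly random, as long as the message, and never reused - a minimal sketch:

```python
import secrets

def otp_encrypt(plaintext: bytes, key: bytes) -> bytes:
    """One-time pad: XOR each byte with a key byte.
    Perfect secrecy holds only if the key is truly random, message-length,
    and used exactly once."""
    if len(key) != len(plaintext):
        raise ValueError("key must be exactly as long as the message")
    return bytes(p ^ k for p, k in zip(plaintext, key))

msg = b"attack at dawn"
key = secrets.token_bytes(len(msg))
ct = otp_encrypt(msg, key)
assert otp_encrypt(ct, key) == msg  # XOR is its own inverse
```

The hard part, as noted above, is not the math but key distribution and the humans holding the pads.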

394

FuturologyBot t1_jb9d32g wrote

The following submission statement was provided by /u/sgfgross:


“Ultimately, we might be able to inject RNA into patients and transform enough cells to activate the immune system against cancer without having to take cells out first,” Ravi Majetim, the lead researcher, said. “That’s science fiction at this point, but that’s the direction we are interested in going.”

This approach has the potential to open up an entirely new way of treating cancer and may provide a path toward a cancer vaccine.


Please reply to OP's comment here: https://old.reddit.com/r/Futurology/comments/11kx63b/stanford_medicine_scientists_have_found_a_way_to/jb9a2me/

1

sgfgross OP t1_jb9a2me wrote

“Ultimately, we might be able to inject RNA into patients and transform enough cells to activate the immune system against cancer without having to take cells out first,” Ravi Majetim, the lead researcher, said. “That’s science fiction at this point, but that’s the direction we are interested in going.”

This approach has the potential to open up an entirely new way of treating cancer and may provide a path toward a cancer vaccine.

49

ch4m3le0n t1_jb92ihp wrote

That's fine, and it happens today - in fact, second opinions are a really important part of the process (both for you and the radiologists) - but the AI is going to be of low value to you there, since its accuracy is no better.

There is some value in having the AI sanity check the radiologist, but if they differ you are going to need two radiologists anyway.

I'm sorry to hear about your diagnosis, however, and I wish you a healthy future.

1

ch4m3le0n t1_jb9262z wrote

This is purely anecdotal on the doctor's part. There are companies like Annalise.ai using hundreds of radiologists to train models for just one type of cancer, and the best they can do so far is sanity-check. This is likely one of those cases. There is a role for AI in this process, but it still requires a radiologist to interpret.

The bigger issue than finding cancers you missed is actually false positives: flagging cancers that aren't really there. That's a much harder problem. Imagine having to get a breast biopsy for a cancer that doesn't exist. That's the state of the art today.

1

green_meklar t1_jb8robf wrote

>Generate actual consciousness or an illusion of consciousness?

The real thing, of course. Fakes only take you so far.

>We can perform experiments/tests to see if the machine is representing consciousness in the same way we do but that doesn't mean the machine is conscious.

It can strongly suggest so, especially if we combine it with a robust algorithmic theory of consciousness.

Presumably none of us will ever be 100% certain that we're not the only thinking being in existence, but that's fine, we get plenty of other things done with less than 100% certainty.

1

rogert2 t1_jb7pd81 wrote

I don't think that "doctors dismissing patients' concerns" is a source of failure to detect breast cancer via mammograms.

I assume women generally get mammograms because health experts recommend regular checks for all women. The reason radiologists fail to detect breast cancer in some x-rays is not that they aren't taking women seriously, because the women weren't coming in with symptoms or complaints -- they came in for a preventative screening. Radiologists sometimes fail to detect breast cancer because each radiologist looks at thousands of essentially identical x-rays over their career, breast cancer is uncommon, and cancer that does exist is hard to visually recognize in its early stages.

I'm not saying that people don't dismiss the complaints of women, whether in a healthcare or other context. But that's not what's going on here, because breast cancer checks are generally driven by prevention rather than symptoms.

1

Norseviking4 t1_jb7jq98 wrote

I look forward to AI and machines taking over as much of healthcare as possible. An AI that is never tired, never has a bad day, and is never distracted would be a huge comfort to me.

I really don't trust people (I have had several bad experiences with bad doctors)

1

AviationAdam t1_jb7ektk wrote

It’s very easy to say things with no basis. I looked up your claim and could find no academic studies, or even news articles, showing this is happening right now. There are about a dozen talking about how it might happen, but unsurprisingly not one showing that it’s currently happening.

In fact, most articles described a huge accountant shortage and firms that can’t hire enough… funny, I thought your AI was replacing them.

1