Recent comments in /f/Futurology

DoctorGrumble t1_j97hccv wrote

As a radiologist, I can assure you our specialty is not obsolete as many people have been suggesting since Vinod Khosla's hot take 10+ years ago.

If you don't understand medical imaging, when it's used, and its strengths/weaknesses/limitations, I can certainly understand why you would think it's easy for an AI to analyze the data/image and make the appropriate diagnosis. Unfortunately, though, most imaging does not have an algorithmic yes/no answer and can have multiple not-untrue interpretations given intrinsic limitations in imaging technology. This is where the "art" of radiology comes in: communicating what you can say, what you can't say, and what the next best steps are to answer the question in a way that a provider can understand. Yes, AI at this point can identify things like brain bleeds, pneumothorax/pneumoperitoneum, lung nodules, and many other "findings". But it is nowhere near actually catching all the findings, interpreting them in the clinical context, and making appropriate recommendations (and, for the record, a lot of radiologists aren't good at this either).

As AI gets better, I'll bet that its role in radiology will continue along the same path it's currently on - as a tool to assist us. In order to handle the continually increasing volume of scans that are ordered, we will need tools to help increase our efficiency while maintaining accuracy, and that's where I believe AI will come in.

This doesn't even take into account the procedural work I do as a rad - US/CT/fluoro-guided biopsies, aspirations, drain placements, ablations, etc. When AI gets to a point where it can do these for us... I don't think there are many medical subspecialties that will be safe. And whether we allow AI to get to that point is probably one of the biggest ethical questions we as a society will have to answer in the future.

Anyways, long rant over. Just tired of people who have no idea what an actual radiologist does saying confidently, and without real information, that our specialty is obsolete and won't exist in 10 years (it's been 11 years since Khosla first said that, and we currently have a pretty significant radiologist shortage in this country).

But hey, if people want to believe it I won't complain - gives us great job security.

1

Particle_Partner t1_j9771aq wrote

After 10 years, most surgeons (and other highly trained professionals) are just hitting their stride - enough experience to be really proficient and know one's limitations, but not so old as to be set in only one way of doing things. Some would even say 10 years is too inexperienced, and many doctors are still paying off their student loans at 10 years out, not financially ready to retire.

Besides, after having invested 20 years into professional training and becoming a surgeon or other kind of doctor, most are having too much fun to retire - retirement would be a letdown for the next 40 years.

1

Particle_Partner t1_j975h74 wrote

I totally agree. Every field has its pros and cons, but fortunately, in medical training, you get to try lots of different things before graduating as a doctor. Even after that, there are different types of residencies and fellowships.

It's really a matter of finding the right personal fit, at something you're able to do physically and mentally for the next 45 years.

If you really want lifelong job security... sorry, it doesn't exist for doctors or anyone else. Plan on doing a few different things over the course of the next 45 years as you grow professionally and your interests and priorities change over time. Who knows, you might end up doing something that doesn't currently exist. Radiology and radiation oncology didn't exist in 1894 - the X-ray hadn't been discovered yet!

1

pshawSounds t1_j974uwr wrote

Full article:

While the algorithms at the heart of traditional networks are set during training, when these systems are fed reams of data to calibrate the best values for their weights, liquid neural nets are more adaptable. “They’re able to change their underlying equations based on the input they observe,” specifically changing how quickly neurons respond, said Daniela Rus, the director of MIT’s Computer Science and Artificial Intelligence Laboratory.
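The "changing their underlying equations" idea can be sketched as a neuron whose effective time constant depends on its input. This is a toy illustration only, not the authors' actual model; the nonlinearity, parameter names, and constants here are arbitrary:

```python
import math

def ltc_neuron_step(x, I, dt, tau=1.0, w=1.0, A=1.0):
    """One Euler step of a liquid-time-constant-style neuron.

    The effective decay rate (1/tau + f) grows with the input, so the
    neuron responds faster when the input is strong -- the
    input-dependent dynamics described above.
    """
    f = w * math.tanh(abs(I))          # input-dependent conductance
    dx = -(1.0 / tau + f) * x + f * A  # dx/dt at this instant
    return x + dt * dx
```

With zero input the state simply decays; a strong input both speeds the dynamics up and pulls the state toward `A`.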

One early test to showcase this ability involved attempting to steer an autonomous car. A conventional neural network could only analyze visual data from the car’s camera at fixed intervals. The liquid network — consisting of 19 neurons and 253 synapses (making it minuscule by the standards of machine learning) — could be much more responsive. “Our model can sample more frequently, for instance when the road is twisty,” said Rus, a co-author of this and several other papers on liquid networks.

The model successfully kept the car on track, but it had one flaw, Lechner said: “It was really slow.” The problem stemmed from the nonlinear equations representing the synapses and neurons — equations that usually cannot be solved without repeated calculations on a computer, which goes through multiple iterations before eventually converging on a solution. This job is typically delegated to dedicated software packages called solvers, which would need to be applied separately to every synapse and neuron.
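The solver loop described here looks roughly like the following - a generic fixed-point iteration for one implicit time step, standing in for whatever solver the team actually used. Repeating this for every neuron and synapse at every time step is what makes the approach slow:

```python
def implicit_euler_step(x, dt, f, iters=50):
    """Solve x_new = x + dt * f(x_new) by fixed-point iteration.

    The equation cannot be rearranged directly, so the computer
    grinds through repeated guesses until the value stops changing.
    """
    x_new = x
    for _ in range(iters):
        x_new = x + dt * f(x_new)
    return x_new
```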

In a paper last year, the team revealed a new liquid neural network that got around that bottleneck. This network relied on the same type of equations, but the key advance was a discovery by Hasani that these equations didn’t need to be solved through arduous computer calculations. Instead, the network could function using an almost exact, or “closed-form,” solution that could, in principle, be worked out with pencil and paper. Typically, these nonlinear equations do not have closed-form solutions, but Hasani hit upon an approximate solution that was good enough to use.

“Having a closed-form solution means you have an equation for which you can plug in the values for its parameters and do the basic math, and you get an answer,” Rus said. “You get an answer in a single shot,” rather than letting a computer grind away until deciding it’s close enough. That cuts computational time and energy, speeding up the process considerably.
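A "single shot" closed-form step for a simple input-driven decay equation looks like this. It is a simplified stand-in: the published closed-form networks use an approximate solution with gated nonlinearities, not this exact formula:

```python
import math

def closed_form_step(x0, I, t, tau=1.0, w=1.0, A=1.0):
    """State after time t, computed in one shot.

    Assuming the input I is held constant over the step, the linear
    ODE dx/dt = -(1/tau + f)*x + f*A can be solved exactly -- no
    iterative solver required.
    """
    f = w * math.tanh(abs(I))    # input-dependent conductance
    lam = 1.0 / tau + f          # total decay rate
    x_inf = f * A / lam          # steady state the neuron relaxes toward
    return x_inf + (x0 - x_inf) * math.exp(-lam * t)
```

Plug in the parameters, do the basic math, get the answer - no loop, which is where the orders-of-magnitude speedup comes from.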

“Their method is beating the competition by several orders of magnitude without sacrificing accuracy,” said Sayan Mitra, a computer scientist at the University of Illinois, Urbana-Champaign.

As well as being speedier, Hasani said, their newest networks are also unusually stable, meaning the system can handle enormous inputs without going haywire. “The main contribution here is that stability and other nice properties are baked into these systems by their sheer structure,” said Sriram Sankaranarayanan, a computer scientist at the University of Colorado, Boulder. Liquid networks seem to operate in what he called “the sweet spot: They are complex enough to allow interesting things to happen, but not so complex as to lead to chaotic behavior.”

At the moment, the MIT group is testing their latest network on an autonomous aerial drone. Though the drone was trained to navigate in a forest, they’ve moved it to the urban environment of Cambridge to see how it handles novel conditions. Lechner called the preliminary results encouraging.

Beyond refining the current model, the team is also working to improve their network's architecture. The next step, Lechner said, "is to figure out how many, or how few, neurons we actually need to perform a given task." The group also wants to devise an optimal way of connecting neurons. Currently, every neuron links to every other neuron, but that's not how it works in C. elegans, where synaptic connections are more selective. Through further studies of the roundworm's wiring system, they hope to determine which neurons in their system should be coupled together.
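The dense-versus-selective wiring contrast can be sketched with connectivity masks. This is a toy illustration; the 30% density and random seed are arbitrary choices, not anything from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 19  # neuron count from the driving experiment above

# Current architecture: every neuron links to every other neuron.
dense_mask = np.ones((n, n), dtype=bool)

# C. elegans-style selective wiring: keep only a fraction of synapses.
sparse_mask = rng.random((n, n)) < 0.3

# The learned dynamics would then use only the surviving connections.
weights = rng.standard_normal((n, n)) * sparse_mask
```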

Apart from applications like autonomous driving and flight, liquid networks seem well suited to the analysis of electric power grids, financial transactions, weather and other phenomena that fluctuate over time. In addition, Hasani said, the latest version of liquid networks can be used “to perform brain activity simulations at a scale not realizable before.”

Mitra is particularly intrigued by this possibility. “In a way, it’s kind of poetic, showing that this research may be coming full circle,” he said. “Neural networks are developing to the point that the very ideas we’ve drawn from nature may soon help us understand nature better.”

66

FuturologyBot t1_j973vaf wrote

The following submission statement was provided by /u/lughnasadh:


Submission Statement

The AI behind self-driving cars could do with a boost. Although some developers are touting Level 5 autonomy "soon", it seems to have been that way for a while. In reality, Level 4 is about the most anyone has advanced to with a commercial product. That's good for set predetermined routes, but the promise of Level 5 is "door-to-door" autonomy.

This seems like quite a fundamental breakthrough. It's interesting to wonder when it will be first commercialized.


Please reply to OP's comment here: https://old.reddit.com/r/Futurology/comments/116kw0h/mit_researchers_makes_selfdrive_car_ai/j96z2qt/

1

meshtron t1_j973ru7 wrote

Fair enough I guess. But the numbers were far from the only problem with your argument(s). Trying to roll back bits and pieces of your statement after the fact to "shift" it closer to being true is not a very efficient or credible way to communicate, AND it prevents you from actually learning the material. You should separate "I know" and "I think" from "I heard" and, in your case, "I think I heard."

1

lughnasadh OP t1_j96z2qt wrote

Submission Statement

The AI behind self-driving cars could do with a boost. Although some developers are touting Level 5 autonomy "soon", it seems to have been that way for a while. In reality, Level 4 is about the most anyone has advanced to with a commercial product. That's good for set predetermined routes, but the promise of Level 5 is "door-to-door" autonomy.

This seems like quite a fundamental breakthrough. It's interesting to wonder when it will be first commercialized.

21

silomshady t1_j96ypvh wrote

If you think your medical training ends with school - you are in the wrong business. Also, every branch of medicine will still be around - the only thing that will change is the equipment, technology, and techniques. But as long as you stay up to date and stay educated you are good.

If you wanna be irreplaceable- master the EMR/EHR system your hospital or practice uses, this will save you HOURS a day in filling out patient info - thus allowing you to handle more patients. Which hospital admins always like :)

1

ledow t1_j96rp2g wrote

Sure you can.

You run it as a service, not as a for-profit industry.

Like the majority of the developed world.

You should not be PROFITING from sickness. Break-even at best - and even that's far too fine a balancing act. You SPEND MONEY on healthcare to get more productivity out of your populace... it's literally a loss-leader. Like education, the other example.

Education is a 100% loss industry. You shouldn't be charging kids to go to school, and you spend all the money you do have on their education, and combat wastefulness.

Welcome to "What life is like outside of shitty 'everything's about money' America".

1

just-a-dreamer- t1_j96r8nw wrote

I know that I know little. Therefore, when I see that I stated a wrong number, I look it up on Wikipedia and stand corrected.

My doubts about alien life also come from recollections of podcasts I enjoyed from Ray Kurzweil and Ben Goertzel. Their arguments concerning alien life make sense to me.

1

meshtron t1_j96q8lm wrote

I've been happy the whole time - but you started off wrong, doubled-down on your wrongness, and are continuing to frolic in the pool of your wrongness. No skin offa my nose - just funny to read.

I would offer this small suggestion (that you will completely ignore): learn to shape your language appropriately to your level of knowledge of any given topic. You've made it readily apparent here you have absolutely no idea about any of the things you're discussing or making assertions about here. That is completely fine; that is how we learn and explore. But doing so using your false assertions as a foundation for equally false arguments is stopping you from actually learning anything. It falls well into the old idiom "sometimes it's better to not speak and appear a fool than to speak and remove all doubt." Learn and explore, but (even on the internet) don't expect to meaningfully steer discussion without understanding at least SOME of the context and subject matter. Learn to ask good questions, learn to receive and internalize good answers, learn to research things that you're curious about, and learn not to just perpetually double-down on your own flawed arguments - your life, should you succeed at this - will be better for it.

1

davidswelt t1_j96p81d wrote

AI guy here.

While it's always important to create a human connection, current technology is getting good at imitating exactly that through conversational systems.

My bet would be on surgery, emergency medicine, and any field that does not readily spew out data for models to be trained and validated. ML is particularly good at unbiased, big-picture assessments when the relevant data is available for training and then for diagnosis (and currently, for those fields, it is not).

I don't think this should be your only concern -- specialities, I hear, differ in lifestyle, and ultimately I think you've got to be interested in your patients and the subject matter.

1