This is a big flashy headline that isn't as big of a deal as it presents itself. AI is still extremely far from assisting doctors, let alone replacing them.
"Diagnosis a 1 in 100,000 condition in seconds" is an absolutely meaningless statement.
What was the condition? Does it present with vague, difficult-to-assess symptoms, does it have a pathognomonic clinical sign that identifies it immediately, or is it somewhere in between? Did the AI diagnose it correctly, and if so, was it on the first try? Is it repeatable: could it diagnose it again? How prone is it to false positives: can we be sure it wouldn't diagnose a healthy patient, or a patient with a similarly presenting problem? What about false negatives? It caught it this time, but do we know how many times it has missed it? What about a treatment plan? Does it know how best to treat the condition, and can it personalize a treatment to fit that specific patient, with any comorbidities or conflicting medications taken into account? When planning treatments, does it stick strictly to the drug label, or does it factor in published research on dosing?
I never said it can't be useful, just that it isn't very useful right now, and it certainly isn't going to replace doctors any time soon. I said in another comment that I think AI will eventually be a tool that could be used to help doctors.
By the same logic, I could say my ex-girlfriend, who watched every season of House, correctly diagnosed a 1 in a million case. As you said, in short: in medical school there's an old saying, "when you hear hoofbeats, think horses, not zebras." If you are likely to catch the 1 in a million, how many of the 999,999 with common ailments are you likely to accidentally diagnose with something crazy?
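To put rough numbers on the hoofbeats point: a 1-in-a-million base rate swamps even a very accurate test. A back-of-the-envelope Bayes calculation, using made-up numbers (perfect sensitivity, 99.99% specificity):

$$
\mathrm{PPV} = \frac{\mathrm{sens}\cdot p}{\mathrm{sens}\cdot p + (1-\mathrm{spec})(1-p)} = \frac{1 \cdot 10^{-6}}{10^{-6} + 10^{-4}\,(1-10^{-6})} \approx 1\%
$$

In other words, screen a million people and you flag roughly 101 positives, about 100 of them healthy, for the one real zebra, and 99.99% specificity is an absurdly generous assumption for a chatbot.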
Knowing little about medicine, these were some of my thoughts as well, along with: what if ChatGPT is missing something and needs to check or observe something about an individual in person?
> This is a big flashy headline that isn't as big of a deal as it presents itself. AI is still extremely far from assisting doctors, let alone replacing them.
While I also agree it's less than the hype, there are already people who are just concerned with moving quickly up the ladder, and/or lazy, who are using GPT on the low and taking credit (with or without first checking any of it). I read about a law firm that was caught having used GPT for a case; they were found out mainly because the legal case citations they submitted were just made up by GPT and couldn't be found to have ever existed. They then claimed they weren't aware the AI would provide fake information, as it sounded real enough.
Not to mention all the tech companies that are having to tell workers to stop uploading code or other information for the AI to work on. Given the lack of fucks given by so many docs with pill mills and opioids, I am more than willing to believe there are already docs all over using GPT or any of the others.
I can attest to many docs/nurses not giving any fucks, even when it came to just getting correct diagnostic codes so the lab company I worked for years ago could simply bill insurance. We had to get a specific code, not a general one (they couldn't just use something like 264, the general code for "Vitamin A deficiency"; they'd need to give something like 264.1, "With conjunctival xerosis and Bitot's spot", to specify which kind of Vitamin A deficiency). I would have to call about codes that were missing or not specific enough, and a shockingly high number of the docs who had ordered the freaking tests would just tell me to use "whatever made sense." Our already fucked medical services are gonna get much worse.
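To make the general-vs-specific distinction concrete, here's a toy sketch in Python of the check our intake effectively had to do by hand; the code table and function name are hypothetical, just for illustration:

```python
# Toy sketch (not a real ICD-9 code table): flag codes submitted at the
# general 3-digit category level so someone calls the ordering doc for
# the specific sub-code. The 264 example is the one from the story above.
GENERAL_CODES = {
    "264": "Vitamin A deficiency",  # needs a sub-code like 264.1
}

def needs_followup_call(icd9_code: str) -> bool:
    """True if the code is a general category with no sub-classification."""
    return "." not in icd9_code and icd9_code in GENERAL_CODES

print(needs_followup_call("264"))    # True  -> call the ordering doc
print(needs_followup_call("264.1"))  # False -> specific enough to bill
```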
I mean, sure. I know people who have used ChatGPT to write their discharges. It'll definitely be tried as a crutch by the lazy in the short term, but I think it'll end up being used as an actual tool in the long term (not just in medicine, but in a wide variety of fields). However, I also think that's an entirely different discussion than the one this article presents. The conversation about how AI can be used as a tool to assist existing and future professionals is entirely separate from whether or not AI is going to replace any given profession. It's also a wildly more productive conversation, because I don't believe there are many professions that can be completely phased out by AI.
I also think the point you raised about codes is another entirely different discussion that could be had about the pitfalls of modern-day medicine. I'm actually going to argue hard in favor of the doctors who told you to use "whatever made sense," because in my experience and opinion, knowing specific billing codes is wildly outside the scope of knowledge needed and expected of a doctor. Their job should be first and foremost to treat their patients. Navigating the unnecessarily complicated, red-tape-filled maze of billing and insurance codes is not only an unrelated skill set, but also a necessity born of a flawed and predatory system built by those who seek to profit off healthcare (i.e. insurance companies) rather than those who seek to make a living by providing it.
Appreciate the funny post, but for anyone reading too much into this: it's misleading at best (and 60% correct is only just barely passing). It's referencing a portion of the test with multiple choice questions, which is relatively easy for a language model, since it can predict an answer from a focused question. Please don't ask ChatGPT individualized questions about your health. It does a decent job of giving out general information about medical topics, but you'd be better off going to a reputable site like the Mayo Clinic, the Cleveland Clinic, or the resources at the National Library of Medicine, which maintains very nice free medical knowledge databases on tons of topics. That's where ChatGPT is probably scraping its answers from anyway, and you won't have to worry about it making up nonsense that looks real and inserting it into the answer.
And if ChatGPT comes up with sources in an answer, look them up yourself, no matter how convincing they seem on their face. I've seen it invent DOI numbers that don't exist and all sorts of weird stuff.
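If you want to automate that sanity check, here's a minimal sketch (assuming Python with the requests library) that asks the public Crossref API whether a DOI exists; the fabricated DOI below is exactly the kind of thing an LLM might hand you:

```python
# Minimal sketch: ask the public Crossref API whether a DOI actually exists.
# Crossref returns HTTP 200 with metadata for DOIs it knows, 404 otherwise.
import requests

def doi_exists(doi: str) -> bool:
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    return resp.status_code == 200

print(doi_exists("10.1038/nature14539"))  # the 2015 Nature deep learning review -> True
print(doi_exists("10.1000/made.up.doi"))  # a fabricated DOI -> False
```

Caveat: Crossref only knows about DOIs registered with it (which covers most journal articles), so a miss isn't absolute proof of fabrication, but a hit at least tells you the citation exists.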
Language models, just like any model, only interpolate from what they've been trained on. They can easily enough answer questions they've seen the answer to a million times, but they do that through stored word association, not reasoning.
In other words, describe your symptoms in a way that isn't popular, and you'll get "misdiagnosed".
And they have a real problem with making up citations of every type: fabricating textbooks, newspaper articles, legal decisions, and entire academic journals. They can recognize the citation pattern and reproduce it, but because any given citation is relatively rare compared to other word combinations (most papers get cited dozens of times, not the millions of times LLMs need to form confident associations between words), they just fill the citation format with basically whatever.
Ya, not scared. Patients have no idea how to tell you what's going on, and their language/vocabulary is all completely different, so good luck getting ChatGPT to work out the proper line of questioning to ensure understanding.
Finally, some 1%ers are getting automated out of a job. Soon we'll start hearing opinion pieces about how people who get automated out of a job deserve some of the profit.
The only doctors who are 1%ers are the ones who finished med school 30+ years ago and managed to start their own practice before hospital systems started buying up and consolidating everything. Anyone who got their start more recently is much more likely to be working for one of those consolidated practices, with zero ownership and an insane schedule. Considering the cost in both time and money of med school, a family medicine doctor will end up in about the same place, net-worth-wise, as a high-level tech worker. Still good money, but far from 1% territory.
Yeah, basically: if you work for a paycheck, you're probably not the 1%. The venture capital firms buying up all the medical practices and hospitals? Those guys are the 1%.