My dad’s an orthopaedic trauma surgeon. A friend was curious whether his knee injury outcome could be predicted from the radiologist’s report. So I supplied the report, plus the X-rays and a description of the condition and the situation that caused it, to ChatGPT-4 (the early version) to see if it would describe everything correctly and say what actions should be taken, then sent the output to my dad for validation. He said it was spot on.
He then asked what it would say about the prognosis. Again, spot on.
So there are certainly things it can do well. I’m of the opinion that if we manage to scale this machine, it will produce novel science.
In any case, the two complaints expressed in the article (inadequate comprehension of patient problems, and inequality) are both areas where LLMs actually do better. The patience and understanding of an LLM cannot be beaten. Once we fix the context window, and once we unshackle this machine from the ridiculous chains the safetyists have put on it, we will improve patient care.
Maybe they won't cure cancer, but a host of problems will be taken care of.