Interesting quote from the venturebeat article linked:
> “There is also a reason why clinicians who deal with patients on the front line are trained to ask questions in a certain way and a certain repetitiveness,” Volkheimer goes on. Patients omit information because they don’t know what’s relevant, or at worst, lie because they’re embarrassed or ashamed.
In order for an LLM to really do this task the right way (comparable to a physician), it needs to not only use what the human gives it but also be effective at extracting the right information from the human. The human might not know what is important, or they might be disinclined to share, and physicians can learn to overcome this. However, in this study, that isn't actually what happened: the participants were trying to diagnose a made-up scenario where the symptoms were clearly presented to them, and they had no incentive to lie or withhold embarrassing symptoms, since the symptoms weren't actually happening to them. And yet it still seemed to happen that the participants did not effectively communicate all the necessary information.
> In order for an LLM to really do this task the right way (comparable to a physician), they need to not only use what the human gives them but be effective at extracting the right information from the human
That's true for most use-cases, especially for coding.
> Sure, but think of a good help desk tech: if they waited for users to accurately report useful information, nothing would ever get fixed.
I sometimes also have to do "help-desk-like" duties for the applications I am responsible for (think 3rd-level technical support):
I can tell you that you can train your users to give more helpful information (but of course they sometimes don't know themselves what is important and what is not).
Sure. But as a patient, you are also not expected to know what is or isn't important. Omitting information that seems unimportant to you, because your brain applies a low-pass filter, is partly what the doctor is trying to bypass.
It's as if every single person had to be an expert in every field just to function; that's really not a thing, and we expect the actual experts to know how to extract the needed information.
That's one of the main differences between mediocre and incredible engineers: being able to figure out what problem actually needs to be solved, rather than working on whatever a stakeholder asks them to build.
Okay, so your code has been segfaulting at line 123 in complicated_func.cpp, and you want to know which version of libc you have to roll back to, as well as any related packages.
What's the current processor temperature, EPS12V voltage, and ripple peaks, if you have an oscilloscope? Could you paste cpuinfo? Have you added or removed RAM or a PCIe device recently? Does the chassis smell and look normal: no billowing smoke, screeching noise, fire?
Good LLMs might start asking these questions soon, but you wouldn't supply this information at the beginning of the interaction (and it's always the PSU).
Yeah, there's a lot of agency on both sides of the equation when it comes to any kind of consultant. You're less likely to have bad experiences with doctors if you're self-aware and thoughtful about how you interact with them.