I'm not a radiologist so I could well be overinterpreting. However, if so, I am not sure that I am alone. This study published in Nature Medicine was hailed by radiologists as one of the "notable successes in using explainability methods to aid in the discovery of knowledge" [1].

Your sober assessment seems valuable, and would make for an interesting letter to the editor.

1 = https://www.thelancet.com/journals/landig/article/PIIS2589-7...



The explainability methodology is what is being lauded there.

Not sure what a letter to the editor would accomplish. The Nature paper only interpreted radiographs, and the authors' only claim was basically that the model is a better predictor of pain than the KL grade (KLG).

Your comment misinterpreted this as “using the patients' symptoms and objective data” (when they only used objective data) and added “may actually outperform current medical standards”, which was not the claim: current medical standards already consider patient symptoms in addition to objective data, as stated in the article's reference to the TKA guideline.

When I report a joint X-ray I’m not assessing the patient’s pain level; they can be asked that.


You said:

> Your comment misinterpreted this as “using the patients' symptoms and objective data” (when they only used objective data)

This represents an important misunderstanding of the methods of the paper. The model was trained using images (objective data) and the pain score (patients' symptoms). From the methods: "A convolutional neural network was trained to predict KOOS pain score for each knee using each X-ray image."
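For concreteness, a minimal training-loop sketch of that setup (my own illustration, not the authors' code; the ResNet backbone, MSE loss, and the stand-in data here are all assumptions) would look roughly like:

    import torch
    import torch.nn as nn
    from torchvision import models

    # Hypothetical setup: regress a KOOS pain score from a single-channel knee X-ray.
    model = models.resnet18(weights=None)
    model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
    model.fc = nn.Linear(model.fc.in_features, 1)

    loss_fn = nn.MSELoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

    # Dummy stand-in batch; a real pipeline would load X-ray tensors and KOOS scores.
    train_loader = [(torch.randn(4, 1, 224, 224), torch.rand(4) * 100)]

    for xray, koos_pain in train_loader:       # koos_pain = patient-reported score
        pred = model(xray).squeeze(1)
        loss = loss_fn(pred, koos_pain)        # symptoms enter only as training targets
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()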

Also, with respect to the authors' claims, from the paper's abstract:

> Because algorithmic severity measures better capture underserved patients’ pain, and severity measures influence treatment decisions, algorithmic predictions could potentially redress disparities in access to treatments like arthroplasty.

You think I'm misinterpreting, but I still think the paper is more important than you're giving it credit for.


Inference on the validation set is X-ray -> pain score. It does not incorporate patient symptoms to make the prediction. In real life a surgeon incorporates the X-ray plus the patient's symptoms/pain score.
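Roughly, all that runs for a new patient is something like this (continuing the kind of hypothetical sketch above; `new_xray` is just an illustrative tensor, not anything from the paper):

    # Hypothetical inference step: only the radiograph goes in.
    new_xray = torch.randn(1, 1, 224, 224)    # stand-in for a preprocessed X-ray
    model.eval()
    with torch.no_grad():
        predicted_pain = model(new_xray)       # no symptom or pain data in the forward pass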


You've skipped a step: the model needs to be trained, and that requires the patients' symptoms as the targets for the weight updates. I think you simply misread my original comment.


Perhaps I got lost, but I am discussing your original statement of “using the patients' symptoms and objective data may actually outperform current medical standards”, which relates to the model's predictions/inference, not training.

In this context we are talking about a pain predictor from an X-ray, which is neat but not the point of KL grading.

KL is a system to grade severity of osteoarthritis on radiographs (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4925407/) and not a threshold for surgery or predictor of symptoms.

The comparator, the “current medical standards” you reference, would be a model outperforming surgeon assessment in conjunction with radiographic findings, not the predictive value of the KL grade.



