I actually think we know fairly well how deep learning methods work (and what their shortcomings are); we just have no way to interpret the models they produce. Wouldn't ML techniques to reduce scan times fail at the most critical moments, i.e. when patients have unusual or unexpected ailments? Using ML on downsampled MRI images feels akin to having an artist with a lot of familiarity with human anatomy touch up a scan.
That implies that doing something is always better than doing nothing. Unless we're talking about things like antibiotics, I'm not sure I'd agree. Medical error is a nontrivial cause of death; increasing it significantly could well be worse than the condition you're trying to treat.
As an example, you can run a million simulations on a satellite with different initial conditions to test your new control algorithm. However, there are infinitely many possible initial conditions, and you can't simulate all of them. If, however, you show that the closed-loop system is stable in some formal sense, that's a more rigorous guarantee.
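The contrast above can be sketched in a few lines. This is a toy example with a made-up closed-loop matrix `A`, not any real satellite dynamics: Monte Carlo only checks the initial conditions you happen to sample, while an analytic condition (here, spectral radius below one for a discrete linear system) certifies stability for every initial condition at once.

```python
import numpy as np

# Hypothetical closed-loop dynamics x[k+1] = A x[k], chosen for illustration.
A = np.array([[0.5, 0.1],
              [-0.2, 0.8]])

# Monte Carlo: test a finite sample of initial conditions.
rng = np.random.default_rng(0)

def settles(x0, steps=200, tol=1e-6):
    x = x0
    for _ in range(steps):
        x = A @ x
    return np.linalg.norm(x) < tol

samples_ok = all(settles(rng.standard_normal(2)) for _ in range(1000))

# Analytic guarantee: spectral radius < 1 implies stability for EVERY x0,
# not just the thousand we sampled above.
provably_stable = max(abs(np.linalg.eigvals(A))) < 1
print(samples_ok, provably_stable)
```

For this particular `A` both checks agree, but only the eigenvalue test rules out a bad initial condition hiding between the samples.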
This proposal basically says that, using ML, we can quarter the number of frequencies we sample and still get good-looking scans. But the full resolution is achieved by inventing details based on statistics from a biased input (most MRIs are taken because something is suspected to be wrong).
Again, as with super-resolution, ML cannot add detail that isn't there; anything it creates is simply based on the statistical model it formed from the training set.
[*] For example, by measuring the quality of reconstruction of a known image (e.g. a real or digital phantom) or, in an ideal world, by evaluating clinical outcomes.
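The phantom idea can be sketched concretely. Everything here is invented for illustration: a crude digital phantom built from two circles, a low-pass filter standing in for an actual reconstruction method, and PSNR as one common quality score.

```python
import numpy as np

def phantom(n=128):
    """Crude digital phantom: a dim 'head' disc with a bright 'lesion'."""
    y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
    img = np.zeros((n, n))
    img[x**2 + y**2 < 0.8**2] = 0.4              # head
    img[(x - 0.3)**2 + y**2 < 0.15**2] = 1.0     # lesion
    return img

def psnr(ref, test):
    """Peak signal-to-noise ratio in dB (higher = closer to the reference)."""
    mse = np.mean((ref - test) ** 2)
    return 10 * np.log10(ref.max() ** 2 / mse)

truth = phantom()

# Stand-in "reconstruction": keep only a low-frequency block of k-space.
k = np.fft.fftshift(np.fft.fft2(truth))
mask = np.zeros_like(k)
c = truth.shape[0] // 2
mask[c - 16:c + 16, c - 16:c + 16] = 1
recon = np.abs(np.fft.ifft2(np.fft.ifftshift(k * mask)))

print(f"PSNR: {psnr(truth, recon):.1f} dB")
```

Because the ground truth is known exactly, any reconstruction method can be scored the same way; clinical outcomes are the harder, better endpoint the footnote alludes to.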
Odd remark. FDA approves compressed sensing products (e.g., [1], [2], [3], [4]) precisely because it is possible (and provably so) to quantify and/or characterize such “artifacts” up to substantial equivalence.
[1] https://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfpmn/pmn...
[2] https://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfpmn/pmn...
[3] https://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfpmn/pmn...
[4] https://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfpmn/pmn...
"510(k)" "deep learning" site:accessdata.fda.gov/cdrh_docs
Or, alternatively, replace "deep learning" with another distinctive theory/methodology employed, or a trade/device name. I do remind you to cross-check what material product is actually being reviewed.
I don’t want to be part of it, thanks
Note, I am biased because I research MRI acquisition and reconstruction methods and I am rolling out trials of fast MRI methods (that use some ML in the reconstruction) to find out how robust the methods actually are in practice.
It looks like you are proposing some kind of mixed approach, not a simple "less data, faster scan, ML to the rescue". I understand how MRI works, but you are clearly much more knowledgeable than I am, so I simply wish you luck!
My problem with the article is that it describes someone using generative models on health data; I don't think it's time for that yet.
In an ideal world, a deep learning algorithm would provide an independent report of potential features of interest, flagging anything the doctor might have missed. However, I hope it stays in that assistive role and doesn't make the doctor less careful.
I actually find what you are describing to be a much better use of ML in this sector.
Healthcare is an area where we need data that is as good and clean as possible; let's use ML reconstruction somewhere else.
This can all be simulated offline, without an MRI machine to test on, given access to just a few full scans... So it could be a good weekend project for someone here on HN, and your technique might even be in use by the time you need an MRI scan. That would mean your doctor gets results slightly quicker and you get better healthcare, together with hundreds of millions of other people!