If they reproduce their results in other clinical settings, the immediate impact on patient care includes: 1) accelerating diagnosis (and treatment) for patients with traumatic brain injuries by effectively up-scaling lower-resolution scans, 2) giving healthcare providers in developing countries a low-cost "upgrade" to their existing equipment, and 3) letting cancer patients in rural America be monitored for treatment response closer to home (rural communities tend to be resource-poor in terms of medical technology).
If we consider that a logical extension of their work could be to develop a compression algorithm for MRI data, then it's easy to see an even broader impact that includes: 1) connecting rural patients with high-quality radiologist services (i.e. remote MRI interpretations), and 2) decreasing the cost of long-term storage, access, and retrieval for MRI data.
On the topic of FB's issues with privacy: I agree that FB has a long way to go to earn my trust as a doctor and as a patient. That being said, it's important to give credit where credit is due. It seems that FB gained access to the imaging data by working collaboratively with NYU on this specific project. By comparison, it's an open secret among those of us in the biomedical informatics community that over the course of many years Google Cloud has quietly gained access to the personal health information of millions of Americans. So, when it comes to privacy concerns, it's important to avoid being myopic - the concern is valid, but the primary threat may not be as obvious as it first seems.
I am VERY pessimistic about this. I don't know how well you know medical equipment providers but this will never be sold as a low-cost "upgrade" to existing machines. It will be sold with new equipment only and with a hefty surcharge as an option enabling higher patient throughput.
There is no real money in upgrades. Most equipment lasts only 8-10 years anyway.
To understand how this would work, we need to 1) understand the lifecycle of big-ticket medical equipment (ME) and 2) recognize that ME products are at the core of multiple revenue streams. The first point has to do with the renewed/refurbished market for used/last-generation ME. The second point has to do with the service agreements/warranties/support contracts that are needed in order to keep the ME operational. These factors combine to yield a sales process with multiple negotiating dimensions.
How these negotiations actually play out depends on whether you're a deep-pocketed healthcare system or not (it sucks, but it's true). If you can afford it, you'll have lots of ways to sport the latest and greatest ME without breaking the bank on any single purchase. Some of your old stuff will end up in the renewed/refurbished ME market, thereby offsetting your total cost of ownership (either directly or indirectly). Once used ME hits secondary markets, the customer profile changes: these customers are not looking to keep up with the Cleveland Clinics and Stanfords of the world. They're looking for long-term value, so reliability and longevity are the top priorities - and this is where I see software "upgrades" coming into play. Some of these customers may already have one or two MRIs, while others may not. In either case, the software "upgrade" becomes a differentiator that speaks directly to the priorities of these customers.
TL;DR - Today, healthcare providers with limited financial resources (e.g. those in developing countries, rural areas) are incentivized to purchase capital equipment through "discounts" on service/support. In the future, we're likely to see software "upgrades" (such as those made possible by FB's work) bundled/leveraged as an incentive. The net effect is the same: extend the clinically useful lifespan of medical equipment (MRIs in this case) and greater access to medical technology around the world.
If you need an MRI or a CT of an area adjacent to orthopedic implants, you are currently 100% SOL because distortion or reflection artifacts from the metal completely destroy the imagery across a medically significant distance. There are computational filtering techniques for reducing these artifacts, but, respectfully, they are still really terrible, and close to the implants you can't see shit. All advancements in this area short of inventing new imaging physics will most likely be purely computational corrections. Consider that.
There were similar discussions a few years ago, when deep learning was not yet in common use and compressed sensing was the hot topic of the moment. It can reconstruct MRI or CT images from limited data (and thus allows for quick MR scans or low-dose CT), but you have to satisfy a sparsity condition that is seldom granted. There are a few use cases (like MR angiography) where the data is sparse enough and compressed sensing works great.
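To make the sparsity condition concrete, here's a minimal compressed-sensing sketch with NumPy. The sizes and the ISTA solver are illustrative choices on my part, not anything from a real MR protocol: a signal that really is sparse can be recovered from far fewer random Fourier samples than its length, while a naive zero-filled reconstruction cannot.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (hypothetical sizes): a k-sparse signal of length n,
# observed through only m << n randomly chosen Fourier measurements.
n, k, m = 256, 8, 64
x_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_true[support] = rng.normal(size=k)

rows = rng.choice(n, size=m, replace=False)
F = np.fft.fft(np.eye(n), axis=0) / np.sqrt(n)  # unitary DFT matrix
A = F[rows, :]                                   # undersampled "k-space" rows
y = A @ x_true

# ISTA: iterative soft-thresholding for min 0.5*||Ax - y||^2 + lam*||x||_1.
lam, step = 0.01, 1.0
x = np.zeros(n, dtype=complex)
for _ in range(500):
    z = x - step * (A.conj().T @ (A @ x - y))    # gradient step on the data fit
    mag = np.abs(z)
    x = z * np.maximum(mag - step * lam, 0) / np.maximum(mag, 1e-12)  # shrink

# Compare against the naive zero-filled reconstruction A^H y.
zero_fill = A.conj().T @ y
err_ista = np.linalg.norm(x.real - x_true) / np.linalg.norm(x_true)
err_zf = np.linalg.norm(zero_fill.real - x_true) / np.linalg.norm(x_true)
print(err_ista < err_zf)  # sparsity prior beats zero-filling here
```

If you make `x_true` dense instead of sparse, the recovery falls apart - which is exactly the "sparsity condition that is seldom granted" in routine clinical imaging.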
For deep learning techniques, you need to be very cautious about which structures your network may remove or introduce.
This module might work well, but modules from cheap competitors might have such behaviour, and it's extremely hard to test that an implementation is bug-free.
The question is rather: does this feature improve diagnoses? Sure, the images look nicer now. But that's not why they are being created. MRI images are made for inspection by trained radiologists, who are already filtering out artifacts. So is this tool better at that job, or does it actually worsen radiologists' ability to read the images, like those Xerox scanners that silently altered digits in scanned documents?
Maybe I'm a bit paranoid, idk. After all, diffusion MRI is already being used for surgical planning even though it has several shortcomings. But in that instance there are probably no good alternatives, while here the alternative is the trained eye of a radiologist.
Here are the results from the paper:
The radiologists ranked our adversarial approach as better than the standard and dithering approaches with an average rank of 2.83 out of a possible 3. This result is statistically significantly better than either alternative with p-values 1.09 × 10^-11 and 2.18 × 10^-11 respectively, and the adversarial approach was ranked as the best or tied for best in 85.8% of 120 total evaluations (95% CI: 0.78-0.91). The dithering approach is also statistically significantly better than the standard approach.
We also asked radiologists if banding was present (in any form) in the reconstructions in each case. This evaluation is highly subjective, as "banding" is hard to define in a precise enough way to ensure consistency between evaluators. Considering each radiologist's evaluation independently, on average banding is still reported to be present in 72.5% (95% CI: 0.62-0.82) of cases even with the adversarial learning penalty. The radiologists were not consistent in their rankings; the overall percentages reported by the six radiologists were 20%, 75%, 75%, 80%, 85%, and 100% for the adversarial reconstructions. In contrast, for the baseline and dithered reconstructions, only one radiologist reported less than 100% presence of banding for each method (80% and 85% presence respectively, from different radiologists).
We believe these numbers could be improved if more tuning went into the model; however, it's also possible that features of the sub-sampled reconstructions generally may be confused with banding, and so any method using sub-sampling might be considered by radiologists as having banding. Sub-sampled reconstructions generally have cleaner regional boundaries and lower noise levels than the corresponding ground-truth.
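For readers wondering where an interval like 0.78-0.91 comes from: here's a short sketch that reproduces it under the assumption that it's a Wilson score interval on the binomial proportion (103 of 120 ≈ 85.8%; both the count and the interval type are my inferences, not stated in the quote).

```python
import math

def wilson_ci(k, n, z=1.96):
    """Approximate 95% Wilson score interval for a binomial proportion."""
    p = k / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    margin = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - margin, center + margin

# Assumed: 103 of 120 evaluations ranked the adversarial approach
# best or tied (103/120 ≈ 85.8%).
lo, hi = wilson_ci(103, 120)
print(round(lo, 2), round(hi, 2))  # → 0.78 0.91
```

The match with the paper's reported 0.78-0.91 is consistent with, but doesn't prove, this being the method they used.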
It's worth thinking of an MRI as a programmable machine for doing certain types of physics experiments.
Sometimes you have an area of interest, sometimes you don't. A lot of the practical work (i.e. clinical-level, not research) on specific areas of interest is still in coil design, since body coils often don't do well.
There are all sorts of things that make it difficult (e.g. imaging is in frequency domain, localizing things with gradients can be time consuming in ways not entirely directly related to clarity, etc.)
This sort of thing is addressing issues that come up with acceleration techniques that rely on redundancy in the sampled space to "cheat" and not capture everything. The obvious concern with a ML approach here is that it may replace something interesting with something more normal.
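The "cheat" is easy to see in a minimal 1-D NumPy sketch (toy signal, not real MR data): skipping every other k-space line halves the acquisition, but folds the object onto itself. The artifact is structured aliasing rather than random noise, which is exactly the redundancy these reconstruction methods exploit and must undo.

```python
import numpy as np

# Toy 1-D "scan": a broad smooth profile plus one small localized feature
# (the "interesting" detail an ML reconstruction might normalize away).
n = 256
t = np.linspace(-1, 1, n)
signal = np.exp(-t**2 / 0.2)
signal[140:144] += 0.5

kspace = np.fft.fft(signal)

# 2x uniform undersampling: keep every other k-space line.
undersampled = np.zeros_like(kspace)
undersampled[::2] = kspace[::2]
recon = np.fft.ifft(undersampled).real

# Zeroing the odd k-space lines superimposes a copy of the object
# shifted by n/2 samples, at half amplitude - classic fold-over aliasing.
expected = 0.5 * (signal + np.roll(signal, n // 2))
print(np.max(np.abs(recon - expected)))  # ≈ 0: pure structured aliasing
```

Parallel-imaging and ML methods resolve this fold-over using extra information (coil sensitivities, or learned priors); the worry in the comment above is precisely what the learned prior does to the small bump when it lands under the fold.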
I'd hate to be the one tasked with V&V for this, honestly.
What bothers me here is when the artifacts hide underlying pathology and these algorithms "learn" what a normal knee MRI looks like and just show you that. IMO it is a medical liability that must be addressed.
This is exactly what is done already.
Every method one can name for reducing scan times is used, and some we can't name are used too. Speed nearly always comes at the expense of quality, although some acceleration techniques and tech developments have led to improvements that are pretty much without time penalty. These include signal digitisation at the coil and other methods of getting more for less (note that this equation doesn't include money!).
Edit: sorry I guess since there's an explicit rotation module it's closer to SRGAN+deformable convolutions.
I trust seasoned talent being paid hundreds of thousands of dollars a year in partnership with equally well paid healthcare professionals over PhD students scraping by on grant dollars, keeping their code and datasets in a private GitHub repo that will never see the light of day except for a citation in research papers of other scholars.
Not trying to be mean, but if Facebook is trying to fix their moral compass with dollars, go for it.
The tl;dr (in microscopy but apparently also in MRI) is that AI imaging can evidently enable new concrete solutions to intractable imaging problems, but the failure modes are really treacherous. The example on slide 39, taken from another excellent review paper, does a great job illustrating the problem. I think these methods will get more trustworthy, but I wouldn't stake my life (or my paper's prestigious research results) on them at the moment.
And this is distinctly different from compressed sensing, which uses a fixed, high-frequency mathematical basis.
See the second part of my comment. This is only in principle. In practice, compressed sensing uses a higher-frequency basis and, more importantly, this basis is generally not learned, preventing common-case bias. I.e., a rare condition won't be ignored just because it isn't statistically common enough for the NN model to have learned it.