Needless to say, AI upscaling as described in this article would be a nightmare for radiologists. 90% of radiology is confirming the absence of disease when image quality is high, and asking for complementary studies when image quality is low. With AI enhanced images that look "normal", how can the radiologist ever say "I can confirm there is no brain bleed" when the computer might be incorrectly adding "normal" details when compensating for poor image quality?
A camera is supposed to take pictures of what it sees.
Imagine going to a restaurant, ordering French onion soup, and getting a bowl of brown food coloring in water.
Feels like that’s just a matter of expectations.
A phone used to be a device for voice communications. It’s right there in the Greek etymology, “phonē” for sound. But 95% of what people do today on devices called phones is something other than voice.
Similarly, if people start using cameras more to produce images of things they want rather than what exists in front of the lens, then that’s eventually what a camera will mean. Snapchat thinks of themselves as a camera company, but the images captured within their apps are increasingly synthesized.
(The etymology of “camera” already points to a journey of transformation. A photographic camera isn’t a literal room, as the camera obscura once was.)
Welcome to England!
People in that thread were commenting on how the Apple phone mirrored only the bunny within the bigger picture of a bunny in the grass (a rather hilarious 'bug'), and we all know how Apple consistently removes moles and wrinkles and completely changes skin tone and overall tonality, so every single picture looks like it was taken in the golden hour. That supposedly nasty Samsung is actually much more truthful in this regard, including on its latest flagships.
That's outright lying too, and IMHO much worse - but the moon is tidally locked, showing exactly the same side with the same features for millions of years, so they were adding details that really are there, just impossible to see with a non-stabilized tiny plastic lens-and-sensor combo at night.
Making somebody look 20 years younger and much prettier, completely changing the look of the most important feature we humans have, and doing it by default without any real option to turn it off, does a lot of long-term body-image damage to young people.
Isn't that like 80% of the mass food industry and 99% of the fast food industry?
You wouldn't like a picture of what it actually sees. The lens is just not big enough. Even ProRAW and the other features that phone introduced apply processing.
If people wanted cameras to capture exactly what they see, then we wouldn't have autofocus, Photoshop, or Instagram filters.
The goal of a cell phone camera is to capture what you are experiencing, not to literally record what light strikes the cmos chip.
There isn't necessarily a particularly neutral choice here: the MRI scan isn't acquired in the pixel domain, so artifacts are going to look 'weird' -- e.g. an edge that moves during the scan ringing across the whole image.
heads should roll etc
I'm not saying that we humans are always better, but we do tend to believe the numbers and conclusions from apps as-is.
This is that, but again, with AI.
A) Many MR reconstructions work by having a "physics model", typically in the form of a linear operator, acting upon the acquired data. The "OG" recon, an FT, is literally just a Fourier matrix acting on the data. Then people realised that it's possible to (i) encode lots of artefacts, and (ii) undersample k-space while exploiting the spatial information from different physical RF coils, and shunt both these things into the framework of linear operators. This makes reconstruction possible -- and Tikhonov regularisation became popular -- so you have an equation like

argmin_y ||yhat - X_1 X_2 ... X_n y||^2 + lambda ||Laplace(y)||^2

to minimise, which genuinely does a fantastic job, usually at the expense of non-normal noise in the image. "AI" can outperform these algorithms a little, usually by having a strong prior on what the image is. I think it's helpful to consider this as some sort of upper bound on what there is to find.

But as a warning, I've seen images of sneezes turned into knees with torn anterior cruciate ligaments, a matrix of zeros turned into basically the mean heart of a dataset, and a fuck ton of people talking bollocks empowered by AI. And this isn't even starting on diagnosis -- just image recon.

The major driver is reducing scan time (= cost), required SNR (= sqrt(scan time)), or, rarely, measuring new things that take too long. This work almost falls into the second category.
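To make the regularised recon concrete, here's a toy 1D sketch (mine, not from any paper): a unitary DFT stands in for the operator chain X_1 X_2 ... X_n, a Laplacian penalty plays the Tikhonov role, and the minimiser comes straight from the normal equations.

```python
import numpy as np

# Toy "anatomy" and a Fourier physics model (stand-in for X_1 ... X_n).
n = 64
rng = np.random.default_rng(0)
y_true = np.zeros(n)
y_true[20:40] = 1.0
X = np.fft.fft(np.eye(n)) / np.sqrt(n)        # unitary DFT matrix
yhat = X @ y_true + 0.05 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

# Laplacian regulariser: penalises rough solutions.
L = -2 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
lam = 0.1

# argmin_y ||yhat - X y||^2 + lam ||L y||^2, solved via the normal equations.
A = X.conj().T @ X + lam * L.T @ L
y_rec = np.linalg.solve(A, X.conj().T @ yhat).real
```

As the comment says, the price is coloured noise and smoothed edges: the penalty trades sharpness for noise suppression via lam.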
The main conference in the field just happened, and ironically the closing plenary was about the risks of AI.
B) Low field itself has a few genuinely good advantages. The T2 is longer, the risks to patients with implants are lower, and the machines may be cheaper to make. I'm not sold on that last one at all. I personally think the bloody cost of the scanner isn't the few km of superconducting wire in it -- it's the tens of thousands of PhD-educated hours of labour that went into making the thing, plus its large infrastructure requirements, to say nothing of the requirements of the people who look at the pictures. There are about 100-250k scanners in the world and they mostly last about a decade in an institution before being recycled -- either as niobium-titanium or (typically) as a scanner on a different continent. Low field may help with siting and electricity, but comes at the cost of concomitant field gradients, reduced chemical shift dispersion, a whole set of different (complicated) artefacts, and the same load of companies profiteering from them.
The AI used here, as I read it, is a generative approach trying specifically to compensate for EMI artifacts rather than a physics model, and it likely wouldn't be making macro changes like sneezes-to-knees, no?
Lots of AI nonsense permeating radiology right now, which seems to be fairly effective click bait and an easy way to generate hype and headlines.
So essentially, the neural net was trained on what a healthy MRI looks like and would, when exposed to abnormal structures, correct them away as EMI noise, leading to wrong diagnoses?
I won't be too dismissive of this approach, and deep learning probably has a strong role to play in improving medical imaging. But this paper is far, far from sufficient to prove it. At a minimum, it would require a mix of healthy and abnormal patients with particularities that don't exist in the training set, and each diagnosis reconfirmed later on a high-resolution machine. You need to actually prove the algorithm does not distort the data, because an MRI that hallucinates a healthy patient is much more dangerous than no MRI at all.
You can read that as saying the DL eliminated background noise, rather than that the system was conditioned on images of healthy people. In that case it may well have been conditioned on just an empty machine or neutral test samples.
If so, there may be a good reason to suspect that it isn't likely to create artifacts that look like or mask anatomical structures.
Realistically, the training set most likely consists of MRIs of similar tissues and would be naturally biased towards healthy structures. Even the remotest possibility of a hallucination should be addressed and disproved for such an application, but they make no mention of it, just "OMG magic ENHANCE button!".
Do clinicians really prefer that the computer make normative guesses to “clean up” the scan, versus working with the imagery reflecting the actual measurements and applying their own clinical judgment?
Let's say there's a chest CTA that is limited because the patient breathed while the scan was being acquired. I need to let the ordering clinician know that the study is not diagnostic and recommend an alternative.
If AI eliminates the artifact by filling in expected but not actually acquired data, I am screwed and the patient is screwed.
A core part of MRI processing is the compressed sensing algorithm.
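A minimal sketch of the idea (my toy example, nothing like a clinical pipeline): recover a sparse signal from undersampled Fourier measurements using ISTA, i.e. a gradient step on the data term followed by soft-thresholding for the l1 penalty.

```python
import numpy as np

# Sparse "image" and randomly undersampled k-space measurements.
n, m = 128, 48
rng = np.random.default_rng(1)
x_true = np.zeros(n)
x_true[rng.choice(n, 5, replace=False)] = 1.0
F = np.fft.fft(np.eye(n)) / np.sqrt(n)
A = F[rng.choice(n, m, replace=False)]        # undersampled Fourier operator
yhat = A @ x_true

# ISTA: gradient step on ||yhat - A x||^2, then complex soft-thresholding.
x = np.zeros(n, dtype=complex)
lam = 0.01
for _ in range(300):
    x = x + A.conj().T @ (yhat - A @ x)       # step size 1 is safe: ||A|| <= 1
    x = x * np.maximum(1 - lam / np.maximum(np.abs(x), 1e-12), 0)
x = x.real
```

Despite having only 48 of 128 k-space samples, the sparsity prior pins down the signal, which is the whole trick behind accelerated MRI acquisition.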
But in an emergency setting, or especially for MRI-guided interventions, these low-field MRIs can really play a significant role. Combining these low-field MRIs with rapid imaging techniques makes me really excited about what interventional techniques become possible.
https://www.science.org/doi/10.1126/science.adp0670
> This machine costs a fraction of current clinical scanners, is safer, and needs no costly infrastructure to run (2). Although low-field machines are not capable of yielding images that are as detailed as those from high-field clinical machines, the relatively low manufacturing and operational costs offer a potential revolution in MRI technology as a point-of-care screening tool.
I don't think this machine is being billed as replacement to high-field machines.
Countries where health regulation is less developed are likely to see misrepresentation where this form of MRI will be equated to full-field MRI by snake oil salesmen.
This means going from 0.05T to 1.5T boosts your sensitivity ~150x. Measurement time scales with sensitivity^2, so you'd have to measure 20k x longer.
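A back-of-envelope check of those numbers, assuming (my assumption) per-measurement sensitivity scales roughly as B^(3/2); the true exponent sits somewhere between 1 and 7/4 depending on the noise regime:

```python
# Sensitivity gain from field strength, then the time cost of closing the
# gap by averaging (SNR grows as sqrt(time), so a gap g costs g^2 in time).
B_low, B_high = 0.05, 1.5
gain = (B_high / B_low) ** 1.5        # ~164, i.e. the quoted "~150x"
time_factor = gain ** 2               # 30^3 = 27000, roughly the quoted "20k x"
print(f"{gain:.0f}x sensitivity, {time_factor:.0f}x time")
```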
Regardless, this is super neat.
> We developed a highly simplified whole-body ultra-low-field (ULF) MRI scanner that operates on a standard wall power outlet without RF or magnetic shielding cages. This scanner uses a compact 0.05 Tesla permanent magnet and incorporates active sensing and deep learning to address electromagnetic interference (EMI) signals. We deployed EMI sensing coils positioned around the scanner and implemented a deep learning method to directly predict EMI-free nuclear magnetic resonance signals from acquired data. To enhance image quality and reduce scan time, we also developed a data-driven deep learning image formation method, which integrates image reconstruction and three-dimensional (3D) multiscale super-resolution and leverages the homogeneous human anatomy and image contrasts available in large-scale, high-field, high-resolution MRI data.
> The brain images showed various brain tissues whereas the spine images revealed intervertebral disks, spinal cord, and cerebrospinal fluid. Abdominal images displayed major structures like the liver, kidneys, and spleen. Lung images showed pulmonary vessels and parenchyma. Knee images identified knee structures such as cartilage and meniscus. Cardiac cine images depicted the left ventricle contraction and neck angiography revealed carotid arteries.
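The EMI-sensing idea in the quoted passage is easy to sketch. The paper predicts EMI-free signals with deep learning; below is a hedged linear-regression stand-in for the same concept, on made-up toy signals: calibrate a map from the sensing coils to the imaging coil's EMI during a window with no MR excitation, then subtract the predicted EMI everywhere.

```python
import numpy as np

# Toy signals: a few EMI sources couple into both the imaging coil and the
# external sensing coils; the MR signal only appears after sample 500.
rng = np.random.default_rng(2)
T, n_src = 2000, 3
emi_src = rng.standard_normal((T, n_src))
mr = np.zeros(T)
mr[500:] = np.sin(np.linspace(0, 40, T - 500))
measured = mr + emi_src @ rng.standard_normal(n_src)    # imaging coil output
ref = emi_src @ rng.standard_normal((n_src, n_src))     # sensing coils (EMI only)

# Calibrate ref -> imaging-coil EMI on the excitation-free window.
W, *_ = np.linalg.lstsq(ref[:500], measured[:500], rcond=None)
cleaned = measured - ref @ W
```

A deep network replaces the linear map when the coupling is nonlinear or time-varying, but the subtract-predicted-EMI structure is the same.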
Maybe there’s more to it that I’m missing, but this sounds like the main accomplishment is being able to identify that different tissues are present. Actually getting diagnostic information out of imaging requires more detail, and I’m not sure how much this could provide.
For anyone who is unaware, a standard MRI machine is about 1.5T (so 30x the magnetic strength) and uses 25kW+. For special purposes you may see machines up to 7T; you can imagine how much power they need and how sensitive the equipment is.
Lowering the barriers to access to MRIs would have a massive impact on effective diagnosis for many conditions.
Something like this low power MRI could be a key part of enabling a transformation of cancer treatment.
In the acute setting, faster and more ergonomic imaging could be big. E.g., with a purpose-built brain device, if first responders had a machine that distinguishes hemorrhagic from ischemic stroke, it would be easier to get within the tPA time window. If it included the neck, you could assess brain and spine trauma before transport (and plan immobilization accordingly).
Neither did Dropbox.
A helium refrigeration cycle means:
- elaborate and expensive cryogenic engineering in the overall MRI design.
- lots of power for the helium refrigeration cycle.
- a pure-helium supply chain, which isn't available in many parts of the world, including areas of Europe, North America, etc.
Even better, they just use a big permanent magnet.
But that's not the objective, and so your research is doomed.
The medical equipment industry will not suffer fools who don't understand 'regulatory capture' and 'rent seeking.'
Those hospital machines are expensive and rare for reasons that have very little to do with cost or performance
Very cool, but is it clinically useful if one edge of your voxel is 8 mm?
But even if it were, plenty of interesting structures are many centimeters in size, a thousand fold decrease in costs from eliminating cryogenic / high power magnets could be very useful.
If you had a fracture/tumor/damage of some type that's small enough to fit between those slices, and you didn't get the slices lined up just right, the scan would miss it, no?
I read a paper a few years ago about utilization rates, machine/service costs, how many machines per citizen/hospital... They were running day and night. A cursory glance at other countries also reveals sensible prices.
Unless it gets to the point of an ultrasound machine (i.e. a machine in the consulting room that a doctor can use in 10 minutes), I don't think it will decrease prices much.
So, we get worse-SNR data from the device and then enhance it with compressed knowledge from millions of past MRI images? Isn't that like shooting a movie with Grandpa's 8mm camera and then enhancing and upscaling it like those folks on YouTube do with historical footage?
But some people actually like to have something that works.
Difficult stuff to store. I knew they needed cold gas, but liquid helium is crazy.
This is what this paper is basically doing it seems. "Look how clear the image is!" yea, because it's not real, it's AI generated garbage.
A colleague had a device, and a veteran advised him to 10x the price.
Surely a lot of small hospitals would jump at the chance at a small cheap MRI? I don't understand how the incumbents have much legal leverage here...
Everybody loves the idea of cheaper stuff, but nobody is going to take a chance. Medicine is extremely conservative. Overly so, in my opinion.
Particularly useful in poorer countries.