The Mandalorian production comes to mind; they've been pushing this kind of direction hard.
Very cool stuff, thanks for sharing!
Well that clears it right up, thanks! Lol.
I'm pleased that they think any member of the general public, or even a journalist, is capable of seeing the difference between the left and right figures in the article.
Since, according to Einstein, you should "make it as simple as possible, but no simpler", methinks he and his colleague did pretty well there, if we refer to the reduced-complexity equation provided in [2]. To be honest, this simplified coherence estimation equation in words literally looks like the most complicated equation-in-words that I have ever seen (page 34 in [2]) - imagine the elaboration of details for the mathematical equations, and bear in mind that this is for the more sophisticated multi-static radar (a tiny numeric sketch follows the references below):
Coherence Estimation = [Radar equation system noise (gammaSNR)] x [Quantization noise (gammaQuant)] x [Ambiguities (gammaAmb)] x [Baseline decorrelation (gammaGeo)] x [Doppler decorrelation (gammaAz)] x [Volume decorrelation (gammaVol)] x [Temporal decorrelation (gammaTemp)] x [Processing & coregistration errors (gammaProc)]
[1]https://www.capellaspace.com/sar-101-an-introduction-to-synt...
[2]https://elib.dlr.de/43805/1/eusar06_tutorial_advanced_bistat...
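For anyone who wants a feel for how benign that scary-looking equation actually is, here is a minimal Python sketch of the multiplicative coherence model from [2]. All the factor values are made up for illustration; each gamma is a decorrelation term in [0, 1], and the total interferometric coherence is simply their product.

    # Toy sketch of the multiplicative coherence model from [2].
    # Every value below is invented for illustration, not from a real system.

    def total_coherence(gamma_snr, gamma_quant, gamma_amb, gamma_geo,
                        gamma_az, gamma_vol, gamma_temp, gamma_proc):
        """Total coherence: 1.0 = perfectly coherent, 0.0 = pure noise."""
        result = 1.0
        for g in (gamma_snr, gamma_quant, gamma_amb, gamma_geo,
                  gamma_az, gamma_vol, gamma_temp, gamma_proc):
            result *= g
        return result

    # Even with every term at 0.8 or above, the product decays fast,
    # which is why each error source gets its own gamma in the tutorial.
    print(total_coherence(0.95, 0.99, 0.98, 0.90, 0.97, 0.85, 0.80, 0.98))
    # -> ~0.54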
and then Richard going into the focus group trying to explain it in science-speak rather than ELI5.
It seemed like this article's intended audience is regular people and it missed the mark. First question I had was... what is SAR?
What's impressive is that they're getting 50cm resolution from orbit. Doing this from aircraft is nothing new; that's been going on for decades. But 50cm from a satellite? That's an achievement.
The SAR technique creates a virtual or synthetic antenna by imaging coherently over a path that traces out the synthetic aperture.
Of course, time continues to pass as the platform (a satellite in this case) moves. SAR therefore requires precise spatial/temporal awareness to combine the returns from the ground into a coherent image.
It is similar to a long exposure in photography, which lets you see dim objects far away.
In both cases, if objects in the imaged scene are moving during the scanning period, they become blurred in the final result.
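To make the coherent-summation idea concrete, here is a toy backprojection sketch in Python. The geometry, carrier frequency, and track are all invented, and a real system uses wideband chirps rather than a single tone; the point is only to show that the pixel whose hypothesised phase history matches the recorded one sums in phase, while every other pixel sums incoherently.

    import numpy as np

    # Toy SAR coherent summation over an invented straight 400 m track.
    c = 3e8                    # speed of light, m/s
    fc = 9.6e9                 # assumed X-band carrier frequency, Hz
    lam = c / fc               # wavelength, ~3.1 cm

    track_x = np.linspace(-200.0, 200.0, 401)  # platform positions, m
    altitude = 500.0                            # platform height, m
    target = (0.0, 300.0)                       # true target (x, ground range), m

    def echo(px, tx, ty):
        """Unit-amplitude echo phase from point (tx, ty) at platform x = px."""
        r = np.sqrt((px - tx) ** 2 + ty ** 2 + altitude ** 2)
        return np.exp(-1j * 4 * np.pi * r / lam)

    recorded = echo(track_x, *target)  # phase history gathered along the track

    def pixel_magnitude(tx, ty):
        """Coherent sum after undoing the phase hypothesised for (tx, ty)."""
        return abs(np.sum(recorded * np.conj(echo(track_x, tx, ty))))

    print(pixel_magnitude(0.0, 300.0))  # at the target: 401.0, all in phase
    print(pixel_magnitude(5.0, 310.0))  # elsewhere: much smaller

If the target moves during the collection, its phase history stops matching any fixed pixel's hypothesis, which is exactly the blurring described above.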
If you took a SAR image of my home, you would absolutely see how many vehicles are in my garage.
Some examples from Sandia labs SAR image gallery - https://www.sandia.gov/radar/imagery/index.html
(I believe these are aerial vs. orbital but the technology is the same)
Ku band SAR
Airport Historical Site - Can see parked vehicles right through the roof of the building on the left - https://www.sandia.gov/radar/_assets/images/gallery/ku-band-...
Ka band SAR
Golf Course - Can see right through the roof of the clubhouse, and golfers (e.g., human bodies) on the greens - https://www.sandia.gov/radar/_assets/images/gallery/ka-band-...
EDIT: I have some professional experience with the interpretation of satellite SAR imagery.
So when the data is visualised, the perspective in the SAR image is actually 90 degrees offset from the imaging direction. This is done because the collected data only contains distance, rather than distance and angle (which you would get with LiDAR).
So places where buildings look transparent are caused by the fact that the visualisation perspective is different from the imaging perspective. In an image that appears to have been taken with a camera south of a target, it was actually imaged from the north (this is a simplification). So you will always get overlapping data, because the imaging and apparent visualisation perspectives are different. And the reason for not aligning the perspectives is that there isn’t actually enough data to do that (no return or transmission angle is collected alongside the distance data).
It’s important to note that I say “apparent” perspective, because the imaging perspective doesn’t actually change. I think the perspective change is caused by our brains recognising shapes and computing a perspective angle. But this angle is actually incorrect, because the RADAR imaging process is completely different to how our eyes work.
For those still confused about this: try thinking about how the shadows are being cast. They’re not from the sun, and likely the only thing emitting radar signals to “illuminate” objects is the satellite sending out the RADAR chirps, which is also doing the imaging.
Could someone tell me if I’ve got this right?
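As a sanity check on the description above, the range-only ambiguity takes just a few lines of arithmetic to demonstrate (Python, with invented geometry): a rooftop scatterer and a bare-ground point a few hundred meters nearer the radar can sit at exactly the same slant range, so they fall in the same range bin and blend in the image.

    import math

    # Invented geometry: satellite at 500 km altitude, building at 300 km
    # ground range. The radar measures only slant range, so the 200 m rooftop
    # is indistinguishable from a ground point at the same distance.
    sat_alt = 500_000.0   # m, assumed
    y_roof = 300_000.0    # m, assumed
    h_roof = 200.0        # m, assumed

    r = math.hypot(y_roof, sat_alt - h_roof)  # slant range to the rooftop
    y_same = math.sqrt(r**2 - sat_alt**2)     # ground point with equal range

    print(y_roof - y_same)  # ~333 m: the rooftop lands that much nearer the radar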
BUT! What do you think about this one? - https://news.ycombinator.com/item?id=25482504
Why am I worried about SAR when Google is ten times creepier, lol.
Could definitely be some kind of artifact
https://i.imgur.com/cDF8Fsr.jpg
Now take a look at this map image, and go back and forth to see if you feel that the X I marked is a reasonable approximation of the photographer's location
https://i.imgur.com/fvj0ZPj.jpg
Now take a look again at the SAR image and tell me if you can see any artifacts from the front of the building through the roof (hint: zoom in and look for the curved line of bright dots :) )
https://www.sandia.gov/radar/_assets/images/gallery/ka-band-...
There appear to be a couple of tables that are hidden from Google's satellite view but clearly visible to the bottom right of the building in the Ka-band SAR image.
While in my mind things were better under the Fairness Doctrine and equal-time rule restrictions, I think we could solve this in the US with greater tax funding of federal, state, and local government-funded media, where the government has no control over that media aside from regulation to promote fair and equal representation within it.
Basically, put PBS, state, and local programming into a streamed source similar to Netflix/Hulu/Prime; produce content with quality surpassing the other streaming services; and give the public air time both via meritocratic decision by a randomly selected electorate and via lottery and availability. The BBC might also be a good model to emulate.
At that point, there would at least be an alternative to B.S. journalism and bad media behavior reinforced by viewership and advertising dollars.
Unless your garage is made of materials that are transparent to the radar's frequency band, or the collection geometry enabled the radar to image through a door or large windows, this is not true.
One example (of many): https://www.kurzweilai.net/seeing-through-walls-in-real-time
Impulse radar with range gating substitutes time resolution for spatial resolution, and has been used to 'look' through walls with varying degrees of success. I think you can even buy a studfinder that uses similar principles.
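For the curious, the range-gating arithmetic is a one-liner (Python; the gate times are just examples): an echo arriving t seconds after the pulse corresponds to a one-way range of c*t/2, so gating on a narrow receive window selects a thin shell of ranges.

    C = 299_792_458.0  # speed of light, m/s

    def gate_to_range(t_open_ns: float, t_close_ns: float) -> tuple:
        """(min, max) one-way range in meters selected by a receive-time gate."""
        return (C * t_open_ns * 1e-9 / 2, C * t_close_ns * 1e-9 / 2)

    # Echoes arriving 20-22 ns after the pulse come from roughly 3.0-3.3 m,
    # e.g. just behind a wall a couple of meters away.
    print(gate_to_range(20.0, 22.0))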
Also, when you catch yourself writing things like this:
  One of our radar scientists accurately described the phenomenon (to reporters, presumably): It helps to think of it just as your brain's interpretation of a two dimensional representation of the coherent sum of backscatter responses from electromagnetic waves.
... it's probably time to hire some marketing folks. Engineering PhD students coached on the outreach presentation of their thesis sometimes improve dramatically in just a few hours. Some, not so much.
EDIT: OK, after Googling this a little, I think I get it. The skyscrapers are upside-down (I think?). The radar is measuring slant-range distance, and due to the viewing angle, the tops of the skyscrapers are closer to the radar than the bases. So the tops of the buildings are closer to the bottom of the image.
Since the skyscrapers are in the “wrong” place in the image, they get blended with ground features that are in the “right” place.
Is that right?
The image is displayed as if it were taken from a camera directly above the ground, but the radar is actually located somewhere else. To create the image, an assumption is made that everything sits at the same height. So when something like a skyscraper actually extends hundreds of meters above the ground, the radar detections from that object are projected to the wrong location in the image. The terms to look up are "foreshortening" and, for extreme cases like these skyscrapers, "layover".
This is similar to (but not exactly) what happens when aerial photography is used to make maps. Buildings in Google Maps get flattened, and their roofs are translated to someplace other than where they should be.
What also helps is to look for "shadows" in the SAR imagery. The shadow tells you where the radar is located. From that you can intuit what side of objects the radar is actually seeing.
A great example used in textbooks is this image of the Washington Monument. You can tell from the shadow that the radar is located to the north^. So even though it looks like you're seeing the south side of the monument, the image is actually showing the north side. The north face of the monument has been projected onto the ground in the direction of the radar. http://image.slidesharecdn.com/radar-2009-a18-synthetic-aper...
So looking at the Tokyo image again. The tree shadows are south of the trees, so the radar is located to the north. The tops of the skyscrapers are closer to the radar than the bottoms, so the tops of the skyscrapers will be shifted to the north towards the radar. However, because the radar is located to the north, what we see is actually the north side of the skyscrapers. The false perspective makes it seem like we should be seeing the south side of the skyscrapers, but that's an illusion.
^ I'm just assuming that "up" is north.
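The size of that shift is easy to estimate (a back-of-the-envelope Python sketch; the building height and incidence angle are invented, not measured from the Tokyo image): a scatterer at height h is displaced toward the radar by roughly h / tan(theta), where theta is the incidence angle from vertical.

    import math

    # Layover displacement: a scatterer at height h has a shorter slant range
    # than the building's base, so flat-ground projection places it about
    # h / tan(theta) closer to the radar. Illustrative numbers only.

    def layover_shift(height_m: float, incidence_deg: float) -> float:
        """Approximate ground-range shift toward the radar, in meters."""
        return height_m / math.tan(math.radians(incidence_deg))

    print(layover_shift(300.0, 35.0))  # a 300 m rooftop shifts ~428 m toward the radar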
See how only the two near sides of the buildings got imaged. The skyscrapers in that Tokyo image only have the two far sides imaged.
So I guess the image was acquired upside-down on the satellite. The team flipped the 2D image and called it a day.
Now if Capella can release raw range information of their scans and let us play with it...
It's like an isometric view of a video game. (But not exactly, since the angles are different)
A tall thing and a lower thing nearby are projected onto the same spot. (Usually in a video game the tall thing blocks the thing behind it, instead of blending with it.) Blending the two makes the front thing look transparent, when in reality you are seeing around one object (since the observation angle is not the same as the imaginary viewing angle of the image) and then drawing both objects in the same pixel.
Synthetic-aperture radar (SAR)?
For what it's worth, the state very likely needs a warrant to use technologies capable of looking inside structures [1]. Not that the law applies to them, but hypothetically.
https://www.sandia.gov/radar/imagery/index.html
Example: https://www.sandia.gov/radar/_assets/images/gallery/ka-band....
For example, the Singapore image [0] of buildings and other SAR detections is clearly overlaid on an optical image with trees and bushes and their respective shadows. Trees and bushes are invisible to X-band (10 GHz) SAR. It has to be an optical image underlying the radar data.
Look at it. The detections by SAR are bright white. Most of the image is grey-scale showing background items.
It is not advertised as such. As an interpretable image, perhaps this is an improvement over notoriously hard-to-understand monochrome SAR imagery. BUT! It is not described as such. Hence my subjective evaluation: "bullshit".
[0] https://ichef.bbci.co.uk/news/976/cpsprodpb/C172/production/...
Trees and bushes are very much visible to X-band radar systems. See e.g. http://web.eecs.umich.edu/~saraband/KSIEEE/J54IEEETGRSJan00S...
Foliage Penetration (FOPEN) radars, in my experience, do not use X-band.
All of the ones I have seen so far appear to be resized for use in their blog/press releases ... or are those images as high-res and as sharp as they get?
Again, really terrible explanation. It feels like they really didn't want to explain it at all.
The layover effect at the center of this article is due to the image being of skyscrapers.
As an indication of how sensitive it is, you can use SAR interferometry to see where Crossrail tunnels have been bored under London, even though the elevation change of the buildings and roads on top is too small to have had any impact on their structural integrity. The caveat is that the level of sensitivity you get when looking at smooth, undisturbed surfaces like the roofs of buildings isn't matched when you're looking at fields with growth and soil movement.
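That sensitivity comes from working in units of wavelength. A minimal sketch (Python; the ~3.1 cm X-band wavelength is an assumed value, and real processing also has to handle phase unwrapping and atmospheric effects): interferometric phase maps to line-of-sight displacement as d = lambda * delta_phi / (4 * pi), so millimeter-scale motion is visible at centimeter wavelengths.

    import math

    # InSAR phase-to-displacement: d = lambda * delta_phi / (4 * pi).
    wavelength_m = 0.031  # assumed X-band wavelength, ~3.1 cm

    def los_displacement_mm(delta_phase_rad: float) -> float:
        """Line-of-sight displacement in millimeters for a given phase change."""
        return wavelength_m * delta_phase_rad / (4 * math.pi) * 1000

    print(los_displacement_mm(0.1))  # ~0.25 mm for a tenth of a radian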
It’s not a replacement for visual range (or infrared for that matter) but an additional sensor. It’s usually used to create things like elevation maps. So all those altitude measurements that you see on Google Earth might be at least partially SAR.
Perhaps you missed the point of the article?
While it is true that some implementations of SAR can image through walls at short distances (tens of meters), SAR is a very broad term and those systems are very different from the imaging systems in the article.
Do they understand how these diagrams are supposed to work?
Thanks
https://maps.disasters.nasa.gov/arcgis/apps/MapSeries/index....
So it can track your mum
"No, SAR Can't See Through Buildings" is the real title, which is completely different than the current (awkward) "No SAR Can't See Through Buildings"