24 hours is the time it takes the Sun to return to the same spot in the sky: Earth has to rotate for an extra 3m 56s to make up for the angle gained by revolving around the Sun in the same direction as its rotation. This applies to every planet that rotates and revolves in the same direction: Mercury, Earth, Mars, Jupiter, Saturn, and Neptune. A sidereal day, the time it takes distant stars to return to the same spot in the sky, is 23h 56m 4.091s.
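A quick sanity check of those numbers (a sketch; 365.2422 is the tropical year in solar days):

```python
# Earth completes one extra rotation per year relative to the Sun,
# so there are 366.2422 sidereal days per 365.2422 solar days.
SOLAR_DAY_S = 86400
sidereal_day_s = SOLAR_DAY_S * 365.2422 / 366.2422

h, rem = divmod(sidereal_day_s, 3600)
m, s = divmod(rem, 60)
print(f"{int(h)}h {int(m)}m {s:.3f}s")  # about 23h 56m 4.1s
```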
Damn, I knew it, that's why I botched my 6-stop exposure at my daughter's graduation! She can't blame me now! Thank you HN!
How about driving a tank with a stabilized gun trained on the target during a 6-stop exposure? Now the tank gunner has the excuse too.
It helps to have a rough indication of the current latitude on startup, but you can also figure it out from the gyro outputs. Just takes longer.
With modern sensors (solid state laser gyroscopes) it has all become a lot smaller so if you really want to you can do this in a camera. It's just probably going to be too expensive for what it brings, because 6+ stops of stabilisation is a lot already.
Perhaps not, but a lot of cameras already have it for geotagging purposes (EXIF), so why not use it:
* https://en.wikipedia.org/wiki/List_of_cameras_which_provide_...
* https://www.digitalcameraworld.com/buying-guides/best-camera...
Simply using the accelerometers and gyro directly does give the info you want. GPS is useless.
I think you may be confusing two concepts: Measurement of true north and latitude via gyro (what the GP is talking about) and inertial navigation systems (which, yes, do drift).
You can measure those two things with just a single-axis gyro and no external references using a technique called "gyro-compassing". In fact, most inertial navigation systems use gyro-compassing to directly measure true north and latitude to align the system on initial startup.
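A sketch of the idea (a toy example of mine, not how a real INS does it): a stationary, levelled gyro sees Earth's rotation split into a horizontal component pointing true north and a vertical component, and the ratio of the two gives latitude:

```python
import math

OMEGA_EARTH = 7.292115e-5  # Earth's sidereal rotation rate, rad/s

def latitude_from_gyro(omega_north, omega_up):
    """Latitude from the horizontal (north) and vertical components of
    Earth's rotation as measured by a stationary, levelled gyro."""
    return math.degrees(math.atan2(omega_up, omega_north))

# Simulated readings for an instrument sitting at 45° N:
lat = latitude_from_gyro(OMEGA_EARTH * math.cos(math.radians(45)),
                         OMEGA_EARTH * math.sin(math.radians(45)))
```

The same horizontal components also give the heading of true north, which is the "compassing" part.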
"Worked" makes it seem like you throw a switch and it just gives you position data. Those units take anywhere from 6 to 10 minutes to align, and if you move the platform during alignment, it errors out and you must restart. Current systems take their initial fix from GPS, but with the early systems the operator had to know the position and key it into the unit manually.
"Worked" with extreme care operated by a qualified professional.
Inertial measurement units for aircraft and submarines cost as much as a house in California. Good luck putting those in a phone.
We have, naturally, also made better IMUs for places where it matters, ones which won't fit in your phone.
The question is therefore not suited to "aircraft grade, yes or no?", it's "how expensive is the cheapest IMU that's good enough for the specific need?" which in this case itself depends on how many stops is desired.
MEMS gyros have too much bias drift (both on a unit basis due to fab processes and on a temperature basis) to be practically useful here. You can measure the earth's rotation with a MEMS gyro, but you're really at the limit.
So, basically dieselgate but for image stabilization
Or maybe that is the method they assume for the second solution and they calculated that it's infeasible.
The image with the two Earths... that only works if the camera is not also on the ground, but it is? How is the rotation of the object and the camera not identical? Why would it rotate ‘upwards’?
Also, if the issue is relative motion or rotation between camera and object, wouldn’t two sensors, one on the camera and one on the subject be able to solve this, since we can see if their rotations/movement match up or not?
Unfortunately, this would be pretty bad for taking a picture of something that was right in front of the camera (relative to the surface of the Earth). You'd be in front of the camera, ready for your picture, and the camera would appear to start rotating as it kept that distant star in view.
So with a perfect image stabilizer, this is what the camera is actually trying to do, even when standing on the Earth with a tripod. It actually senses the rotation of the Earth, and tries to cancel it out, just like it would cancel out your hands shaking. But while it's good to cancel out your hands shaking (because that's a motion that's independent of the subject of the photo), it's not good to cancel out the rotation of the Earth (because the subject of the photo is actually moving with you).
By this logic, the Earth's revolution would also cause a similar issue, and an even worse one. But in reality only the rotation does.
I think at least some part of your explanation does not calculate.
Parts to make really good cameras could be taken out and used in missiles, to tell them where to go.
So we now have laws to keep those really good parts out of cameras, for safety. Cameras still work fine, but you need a tripod to get good pictures when it's dark out.
You can back-calculate orientations with high-pass-filtered gyro data, rotate the unfiltered gyro data into the current reference frame, then low-pass the rotation-corrected data to get the Earth's rotation axis in the current reference frame. From that you can estimate the expected rotation that should be ignored.
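A toy version of the low-pass step, skipping the rotation correction by assuming the camera is near-stationary (names and noise levels are invented, and the noise here is far gentler than real hand shake):

```python
import numpy as np

OMEGA_EARTH = 7.292115e-5  # rad/s

rng = np.random.default_rng(0)
n, dt = 60_000, 0.01  # 10 minutes of 100 Hz gyro samples
lat = np.radians(45)
true_axis = np.array([0.0, np.cos(lat), np.sin(lat)])  # Earth axis in body frame

# Measured rates: constant Earth rotation plus zero-mean jitter
gyro = OMEGA_EARTH * true_axis + 1e-4 * rng.standard_normal((n, 3))

# Low-pass (here simply a long mean) pulls the constant Earth rate out of the noise
est = gyro.mean(axis=0)
est_axis = est / np.linalg.norm(est)
angle_err = np.degrees(np.arccos(np.clip(est_axis @ true_axis, -1.0, 1.0)))
```

With jitter several orders of magnitude above the Earth rate, as in real handheld use, the averaging window needed grows accordingly.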
In theory, you can take the last N seconds of data from the gyroscope (I assume it is running while the camera is active) to get the overall drift, even if it is tumbling around for a while before being pointed at the subject... assuming the tumbling has enough periods of time that are correlated with the Earth's rotation (e.g. someone carrying it, not pointing it at an aircraft or something moving east-west for the window duration that is anticorrelated with the rotation).
By far the more common case for image stabilization is one in which the photographer is hand-holding the camera and may not frame the subject until the moment before the exposure begins. The camera movement will likely be several orders of magnitude (~4 to 7) larger than the drift that you want to measure. A low pass filter will tell you nothing at all.
At a certain point we can just start using guide stars [0].
These seem trivial to work around. Just store the last known position and use that. It's rare that you'll be without a GPS signal or beside a magnet, and you certainly won't be traveling long distances in those conditions. And since when do magnets block GPS signals?
https://www.nikonusa.com/p/z-f/1761/overview
("Based on CIPA standards; when using the telephoto end of the NIKKOR Z 24-120mm f/4 S" - for clarity, that lens does not have optical VR in the lens itself, so this is all based on in-body stabilization.)
I think that for astrophotography, the shutter times are so long that you have to build it into the tripod, instead of relying on the tiny amount of stabilization that can be done in-camera.
Although maybe it would be helpful to cancel out some motor noise or vibrations from the tripod. But probably the existing image stabilization already does this.
For the very long exposure times, you can also hook a second camera up and run closed loop control on a specific star to keep your primary image sensor trained on the correct target to even tighter tolerances. There's companies making cameras that combine both the primary and secondary camera into a single housing so you don't need to fit a second camera + lens to your setup, or insert a prism to pick off part of the image to go to a second camera.
Amateur astrophotography today does tricks you needed access to a dedicated lab to do in previous decades. It's amazing!
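A minimal sketch of such a closed loop (a toy proportional controller; all numbers are invented): each cycle, measure the guide star's offset from its reference pixel and command a correction proportional to it.

```python
def track(drift_rate_px_s, gain, steps, dt):
    """Toy guide loop: the sky drifts the star each cycle; a proportional
    correction pulls it back toward the reference position."""
    offset = 0.0  # guide-star offset from reference, in pixels
    for _ in range(steps):
        offset += drift_rate_px_s * dt  # sidereal drift between frames
        offset -= gain * offset         # commanded mount/sensor correction
    return offset

residual = track(drift_rate_px_s=0.5, gain=0.5, steps=100, dt=0.1)
# settles at drift_per_frame * (1 - gain) / gain = 0.05 px
```

Real autoguiders add integral terms, backlash compensation, and sub-pixel centroiding, but the structure is the same.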
It limits stabilization to two axes, but now any lens is essentially stabilized. And it also lets them do some tricks, since it's so integrated. One is to do sub-pixel sensor shifts for higher res photos, and another is to do astrophotography tracking when GPS data is available.
Much more limited in scope than a full tracking gimbal, but not bad considering it's built into the camera (earlier bodies had a GPS attachment that slotted into the hot shoe connector): https://www.lonelyspeck.com/pentax-k-1-mark-ii-astrophotogra...
There are various types of mounts, and each type can be either basic or fancy. The specific type of mount that deals with rotation of the sky (around the north/south stars):
* https://en.wikipedia.org/wiki/Equatorial_mount
You can get non-fancy ones (US$240):
* https://optcorp.com/collections/equatorial-mounts/products/o...
Or fancy ones ($20K):
* https://optcorp.com/collections/equatorial-mounts/products/a...
https://youtu.be/DmwaUBY53YQ
https://youtu.be/zRTJ5ISmVXE
Right?
Might just not be practical at all.
On the other hand, shouldn't the earth rotate fast enough to figure this out in a short timeframe while the photographer starts looking through the finder?
I am probably missing something huge, but if the goal is a stable image, why use gyros? Use the image itself to derive the correction factor for the final integration, sort of the same way videos are stabilized.
The second way is undesirable because it's really hard. There is a lot of research into this and some of the results are good but some are not.
Photo hobbyists are snobs :)
Ignore the camera. Instead you have a planet (a circle in flatland), a gyroscope (an arrow that always points in the same direction on the page in flatland), and Mr Square.
--> [.]
|
/----\
| |
\----/
Start off at noon, with Mr Square and the arrow at the top of the planet, the gyroscope to the left of Mr Square pointing at him. Now progress time by 6 hours, by rotating the planet clockwise by 90 degrees. Mr Square and the gyroscope will move with the surface of the planet, resulting in them being on the right side of the circle on the page (the gyroscope above Mr Square on the page). Mr Square's feet will be on the surface of the planet, meaning his rotation matched the planet's. However, the gyroscope always points in the same direction on the page. It's now pointing at the sky.
/----\
| | -->
\----/-[.]
In conclusion: both Mr Square and the gyroscope move with the surface of the planet - in exactly the same way. However, Mr Square will always be standing upright (along with everything else on the planet), while the gyroscope always points in the same direction on the page (irrespective of the time of day). A camera using the gyroscope would have to account for that. We wouldn't have the same issue on a (non-rotating) space station. That's why planetary rotation is blamed.
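The same flatland picture in a few lines (a sketch, 2-D rotation only): the gyro direction is constant in the page frame, while Mr Square's "up" rotates with the planet, so relative to him the gyro swings from horizontal to skyward.

```python
import numpy as np

def rot(theta):
    """2-D rotation matrix (positive theta = counterclockwise)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

gyro = np.array([1.0, 0.0])     # fixed direction on the page (inertial frame)
up_noon = np.array([0.0, 1.0])  # Mr Square's local "up" at noon

up_6h = rot(-np.pi / 2) @ up_noon  # planet turned 90° clockwise

angle_noon = np.degrees(np.arccos(gyro @ up_noon))  # 90°: gyro is horizontal
angle_6h = np.degrees(np.arccos(gyro @ up_6h))      # 0°: gyro points at the sky
```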
> Due to Earth's rotation, its surface is not an inertial frame of reference. The Coriolis effect can deflect certain forms of motion as seen from Earth, and the centrifugal force will reduce the effective gravity at the equator. Nevertheless, it is a good approximation of an inertial reference frame in many low precision applications.
Once, a launch was aborted just before liftoff. The rocket stayed on the pad and the cosmonauts were sitting in the spacecraft for some time. Suddenly the abort system fired and pulled the capsule from the rocket. They landed safely on parachutes.
It was discovered that the Earth had rotated and the gyroscope had detected the tilt of the rocket, so it fired the escape system.
Soyuz 7K-OK No.1 was uncrewed and likely had the quirk with the gyros. One person on the ground near the launch pad was killed.
https://en.wikipedia.org/wiki/Soyuz_7K-OK_No.1
> Initially, it was suspected that the booster had been bumped when the gantry tower was put back in place following the abort and that this somehow managed to trigger the LES, but a more thorough investigation found a different cause. During the attempted launch, the booster switched from external to internal power as it normally would do, which then activated the abort sensing system. The Earth's rotation caused the rate gyros to register an approximately 8° tilt 27 minutes after the aborted liftoff, which the abort sensing system then interpreted as meaning that the booster had deviated from its flight path, and thus it activated the LES. The abort sensing system in the Soyuz was thus redesigned to prevent a recurrence of this unanticipated design flaw. On the other hand, the LES had also worked flawlessly and demonstrated its ability to safely pull cosmonauts from the booster should an emergency arise as it did years later in the Soyuz 7K-ST No.16L abort (26 September 1983).
The emergency condition of the Soyuz 7K-ST No.16L abort was not caused by rotation of the Earth, but by multiple failures that caused damage to the launch vehicle:
https://en.wikipedia.org/wiki/Soyuz_7K-ST_No.16L
> The crew was sitting on the pad awaiting fueling of the Soyuz-U booster to complete prior to liftoff. Approximately 90 seconds before the intended launch, a bad valve caused nitrogen pressurisation gas to enter the RP-1 turbopump of the Blok B strap-on. The pump began spinning up, but with no propellant in it, the speed of rotation quickly exceeded its design limits which caused it to rupture and allow RP-1 to leak out and start a fire which quickly engulfed the base of the launch vehicle. Titov and Strekalov could not see what was happening outside, but they felt unusual vibrations and realized that something was amiss. The launch control team activated the escape system but the control cables had already burned through, and the Soyuz crew could not activate or control the escape system themselves. The backup radio command to fire the LES required 2 independent operators to receive separate commands to do so and each act within 5 seconds, which took several seconds to occur. Then explosive bolts fired to separate the descent module from the service module and the upper launch payload shroud from the lower, the escape system motor fired, dragging the orbital module and descent module, encased within the upper shroud, free of the booster with an acceleration of 14 to 17g (137 to 167 m/s²) for five seconds. According to Titov, "We could feel the booster swaying from side to side. Then there was a sudden vibration and a jerking sensation as the LES activated".
> Your camera, which is using its IBIS system to attempt to keep everything as still as possible, may not realize that you are rotating with your subject and will instead try to zero out any rotation of the camera, including that of the Earth
The problem is that the stabilization system tries to compensate for the rotation of the Earth, because it can't tell the difference between the rotation of the Earth (which shouldn't be compensated for) and the movement of the person holding the camera (which should be).
So it would work if you were taking a photo of a subject not rotating together with the Earth. Like the stars.
Which is normally not a problem, but relative to something on the surface of the Earth the stars do move.
So I guess you should ask people to stand directly in front of Polaris if at all possible.
You can stabilize out everything and account for the rotation by simply watching the vector of gravity over time.
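One way to sketch that (a toy construction of mine): if the sensor holds an inertial reference via its gyros, the measured gravity direction traces a circle around Earth's axis over the day, and three samples of it pin down the axis.

```python
import numpy as np

OMEGA_EARTH = 7.292115e-5  # rad/s

def rotate(v, axis, theta):
    """Rodrigues rotation of v about a unit axis by angle theta."""
    return (v * np.cos(theta) + np.cross(axis, v) * np.sin(theta)
            + axis * (axis @ v) * (1 - np.cos(theta)))

true_axis = np.array([0.0, 0.0, 1.0])  # Earth's axis (unknown to the estimator)
g0 = np.array([np.cos(np.radians(45)), 0.0, np.sin(np.radians(45))])  # "up" at 45° N

# Gravity direction sampled at t = 0, 1 h, 2 h in the gyro-held inertial frame
g1 = rotate(g0, true_axis, OMEGA_EARTH * 3600)
g2 = rotate(g0, true_axis, OMEGA_EARTH * 7200)

# Two chords of the traced circle; their cross product is normal to its
# plane, i.e. parallel to the rotation axis
est = np.cross(g1 - g0, g2 - g1)
est /= np.linalg.norm(est)
```

In practice accelerometer noise and any linear acceleration would swamp this over short windows; it needs long, quiet observation.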
"Stops of stabilization" in this specific context refers to a standardized CIPA test which determines a shutter speed where the image remains acceptably sharp. They then calculate the number of stops to 1/focal-length, which is a rule of thumb for getting sharp images from the 1950s. So if a 200mm lens produced a sharp image at 1/10s in the CIPA test, then that would be 1/10 -> 1/20 -> 1/40 -> 1/80 -> 1/160 -> 1/200 about 4.3 "stops of stabilization".
The results from the CIPA test don't really hold up to the real world though once you move beyond ~4 stops.
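That arithmetic as a one-liner (a sketch of the stops calculation described above):

```python
import math

def stops_of_stabilization(sharp_shutter_s, focal_length_mm):
    """Stops gained relative to the 1/focal-length rule of thumb:
    log2 of the ratio between the sharp shutter time and 1/focal-length."""
    return math.log2(sharp_shutter_s * focal_length_mm)

stops = stops_of_stabilization(1 / 10, 200)  # log2(20) ≈ 4.32
```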