Comment on the video "Removing Heavy Wind Shake from FPV Footage"
Removing heavy wind shake from FPV footage at https://www.youtube.com/watch?v=iDRJCyCKWfQ
I have been collecting live video streams on the Internet for some time. I call the collection "shake wind": mainly beaches, bridges, mountains, and other places where the wind is fierce and the camera often shakes, sometimes violently. But the equation of motion of the camera is traceable, if not predictable. Registering pixels between frames determines the displacements and rotations needed to fit the frames together "as best as possible".
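In Python with OpenCV, a minimal sketch of that registration step might look like the following; the file name is a hypothetical placeholder, and phase correlation as used here recovers translation only (rotation would need feature matching, e.g. cv2.estimateAffinePartial2D on tracked points):

```python
# A minimal sketch: register each frame to a reference by phase
# correlation and shift it back. Assumes OpenCV; "shake_wind.mp4"
# is a hypothetical file name.
import cv2
import numpy as np

cap = cv2.VideoCapture("shake_wind.mp4")
ok, first = cap.read()
ref = cv2.cvtColor(first, cv2.COLOR_BGR2GRAY).astype(np.float32)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
    # (dx, dy) is this frame's translation relative to the reference.
    (dx, dy), response = cv2.phaseCorrelate(ref, gray)
    # Translate the frame back by (dx, dy) to cancel the displacement.
    M = np.float32([[1, 0, -dx], [0, 1, -dy]])
    h, w = frame.shape[:2]
    stabilized = cv2.warpAffine(frame, M, (w, h))
cap.release()
```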
The real kicker is that not only is the motion of the camera predictable, it is a pure equation of motion with infinitely divisible intermediate positions. So a variation on "super-resolution" and "synthetic aperture" methods can recover not only the pixels and the motion, but detail at subdivisions of the pixels. The reason I wanted to apply it to the live video streams is that some of these videos stare at the same things for years, in all kinds of lighting. Since the time is fairly well known, the position of the sun (moon, planets, stars, some satellites) is known precisely. So those things can help recover not only the precise time, but the orientation and properties of the camera and lenses used.
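For the celestial part, a minimal sketch with astropy shows the idea; the site coordinates and timestamp below are hypothetical placeholders for a real webcam:

```python
# A minimal sketch: compute the Sun's apparent altitude/azimuth for a
# known time and camera location using astropy. Matching this against
# the Sun's pixel position in a frame constrains the camera's
# orientation and, over many frames, the lens model.
from astropy.coordinates import AltAz, EarthLocation, get_sun
from astropy.time import Time
import astropy.units as u

site = EarthLocation(lat=36.0 * u.deg, lon=-122.0 * u.deg, height=10 * u.m)
t = Time("2024-06-21 18:00:00")  # UTC timestamp of a video frame
sun = get_sun(t).transform_to(AltAz(obstime=t, location=site))
print(f"Sun altitude {sun.alt:.3f}, azimuth {sun.az:.3f}")
```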
It is a fun problem. Many of these "shake wind" streams are also "half sky". That is, day and night, half of the frame is sky. So tracking clouds and all those things in the sky is possible. Now I have been trying to convince everyone on the Internet to use lossless formats for images and for sequences of images. Satellites looking down, satellites looking out, cameras with lenses looking out, and all image and sensor data streams. It might take another decade, but I have been at it 24 years already; what is another decade or two?
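The lossless part is easy to demonstrate; a minimal sketch, again assuming OpenCV and a hypothetical input file, just writes each frame out as PNG. (Frames decoded from a lossy stream already carry its artifacts, of course; the point is to capture and store losslessly from the sensor onward.)

```python
# A minimal sketch: archive video frames as PNG, a lossless format,
# so later registration and super-resolution work is not fighting
# compression artifacts. "shake_wind.mp4" is a hypothetical file name.
import cv2

cap = cv2.VideoCapture("shake_wind.mp4")
i = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imwrite(f"frame_{i:06d}.png", frame)  # PNG is lossless
    i += 1
cap.release()
```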
I really, really appreciate finding your video. With the lossy formats, and many obligations, I could only trust a lifetime of using computer models, mathematics, and data to "know" that these shake wind problems were almost exactly solvable. You show here that routine corrections are possible and practical.
Now, the frame-to-frame methods work well. I was recently reading deep into how "optical flow" is used by the tiny dedicated cameras in optical mice. But for image stabilization there are also smaller, faster, better vibration sensors, accelerometers, and gyroscopes (angular rate sensors) to measure the camera's acceleration on three axes, integrate for velocity, position, and direction, and get a pretty good guess as to how the frames should be registered. AND those measurements, applied with better quality sensors, mean that routine super-resolution and synthetic aperture (SAR) methods can be applied to get more out of the pixels.
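A minimal sketch of that integration step, with synthetic numbers standing in for real 1 kHz IMU readings (a real system would also remove gravity, calibrate sensor bias, and fuse the sensors, e.g. with a Kalman filter):

```python
# A minimal sketch: dead-reckon camera motion from IMU samples by
# simple Euler integration. The random arrays below are placeholders
# for real accelerometer and gyroscope streams.
import numpy as np

dt = 1.0 / 1000.0                          # assumed 1 kHz sample rate
n = 1000
gyro = np.random.normal(0, 0.01, (n, 3))   # angular rate, rad/s
accel = np.random.normal(0, 0.05, (n, 3))  # linear acceleration, m/s^2

angle = np.zeros(3)     # small-angle orientation estimate, rad
velocity = np.zeros(3)  # m/s
position = np.zeros(3)  # m

for w, a in zip(gyro, accel):
    angle += w * dt            # integrate rate -> orientation
    velocity += a * dt         # integrate acceleration -> velocity
    position += velocity * dt  # integrate velocity -> position

# (angle, position) at each frame time seeds the pixel registration.
```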
Some things are not happening fast enough to suit me: higher bits per color, low-cost hyperspectral methods, and professional-quality all-sky webcams for ALL observatories, regardless of the frequency they observe. (All the radio astronomy sites should also operate all-sky optical sensors, AND low-frequency electromagnetic monitoring from nanoHertz to GigaHertz, both for electromagnetic interference monitoring and for all-sky electromagnetic surveys.)
Sorry to write so much here, but these things are on my mind constantly. I am very happy to have found your video. Thank you.
Richard Collins, The Internet Foundation