Rolling Shutter Cameras encode time in images, namespaces, languages
I was reading “Rolling Shutter Imaging on the Electric Grid” by Mark Sheinin, Yoav Y. Schechner, and Kiriakos N. Kutulakos. They used an IDS UI-348xLE (monochrome) camera, so I was wondering how much it costs and what image sensor it uses. But that was some time ago, and I expect things have changed. They also used a cell phone camera, which I thought was innovative. Any camera interface has to allow complete control and lossless data handling.
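As a minimal sketch of what the title means by encoding time in images: in a rolling shutter sensor, each row is exposed at a slightly different time, so a lamp flickering with the grid (at twice the mains frequency) appears as horizontal bands, and the per-row timestamps turn one frame into a short time series. The row count, row readout time, and mains frequency below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Illustrative assumptions only, not numbers from Sheinin et al.
ROWS = 1080          # rows in one frame
ROW_TIME = 10e-6     # seconds between successive row readouts (assumed)
GRID_HZ = 60.0       # mains frequency; lamp intensity flickers at 2x this

# Each row has its own timestamp, so one frame samples a 1-D time series.
t = np.arange(ROWS) * ROW_TIME                               # per-row times
intensity = 1.0 + 0.2 * np.sin(2 * np.pi * 2 * GRID_HZ * t)  # 120 Hz flicker

# A uniform scene lit by this lamp shows horizontal bands; their spatial
# period in rows encodes the flicker frequency.
rows_per_cycle = 1.0 / (2 * GRID_HZ * ROW_TIME)
print(f"one flicker cycle spans about {rows_per_cycle:.0f} rows")
```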
A more recent paper is “URS-NeRF: Unordered Rolling Shutter Bundle Adjustment for Neural Radiance Fields” by Bo Xu, Ziao Liu, Mengqi Guo, Jiancheng Li, and Gim Hee Lee, at https://arxiv.org/abs/2403.10119
There are many such groups, but the cameras they use seem a bit random in their ability to capture and store data where the time of each pixel is precisely monitored. There are countless cameras on the Internet, and many throw away useful data in the camera, in the communication to storage, and in the software. Getting truly lossless data is rather difficult. It could be me that is being stupid, but there seems to be a need for global open specifications and interfaces for all cameras. These papers are interesting, but I am interested in dark noise, so I try to find OEM and board cameras and sensor chips (no lens or case). MIPI would seem to be a wise choice, but every group makes up its own way of doing things. For the Internet as a whole, for most of the five billion people using it, that means lots of wasted time, much duplication, and long delays for many new industries and applications.
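For the dark-noise interest specifically, the standard measurement is just per-pixel statistics over a stack of raw dark frames, which is exactly where any hidden lossy step in the camera, the link, or the software ruins the data. A minimal sketch, assuming a hypothetical file dark_frames.npy holding lens-capped raw frames at fixed exposure and temperature:

```python
import numpy as np

# Hypothetical input: stack of raw dark frames, shape (N, H, W),
# captured lens-capped with no in-camera processing or compression.
dark = np.load("dark_frames.npy").astype(np.float64)

per_pixel_mean = dark.mean(axis=0)  # fixed-pattern dark level per pixel
per_pixel_std = dark.std(axis=0)    # temporal dark noise per pixel

print("mean dark level (DN):", per_pixel_mean.mean())
print("median temporal dark noise (DN):", np.median(per_pixel_std))
# Any denoising or lossy compression anywhere in the pipeline silently
# alters exactly these statistics, which is why lossless capture matters.
```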
Perhaps someone can read frames pixel by pixel, not frame by frame and row by row: not one photodetector, but many, read sequentially. There are good statistical reasons for not using only one detector.
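One of those statistical reasons fits in a few lines: averaging N independent detectors shrinks the noise of the estimate by roughly the square root of N. A toy simulation with made-up numbers:

```python
import numpy as np

rng = np.random.default_rng(0)
signal, noise_sigma = 100.0, 10.0  # arbitrary units, hypothetical noise

for n_detectors in (1, 16, 256):
    # n independent detectors measure the same signal; average their readings
    samples = signal + noise_sigma * rng.normal(size=(n_detectors, 10000))
    estimates = samples.mean(axis=0)
    print(n_detectors, round(estimates.std(), 3))  # falls as ~1/sqrt(n)
```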
Richard Collins, The Internet Foundation
Louisa, Torsten,
Thanks for the link to that camera. I have noted it. But I am reviewing the source code of the Chromium web browser now, for the Internet Foundation, and it has used up my work hours, my private research time, and most of my energy.
Cameras like yours are not even supported by browsers now, and yet five billion humans depend on browsers for much of what they do. Installing tools on every computer is nearly impossible to support, so most of science, technology, engineering, mathematics, computing, finance, government, and other subjects and professions are ignored.
I have tried to change it from the outside, but the only answer seems to be to make an “Internet tool kit” that handles all things on the Internet, and all things on computers as well, particularly the devices and computers themselves.
Richard Collins, The Internet Foundation
Clif,
I am usually willing to find and fill in the missing information for a language. If it is a living language and intended to be used, there is usually enough for people to use it, if not to write a compiler for it. There are tens of millions of apps, programs, and websites now that require humans to learn new “languages”: new sets of commands, computer behaviors, and capabilities.
If you could help people outline and understand programs written in various languages, that might lead toward making the programs usable. At least an intelligent compiler could show them the files that are part of what they need, offer suggestions for learning the language, app, API, or program, and help find missing pieces.
Your compiler should know the namespace, but for most new apps and things out there, there is no clear map and no tools to build maps for new languages. Humans and human groups would help, if they were told what to do. Humans are programmable too. They mostly understand human languages, not computer languages, Internet jargon, buttons, and things.