Prasad Raju – Geophysical time series

Prasad Raju,
I suggest you use online resources to look at the data. There is a lot of seismometer and magnetometer data. There is meteorological data. If you just want sources of interesting signals, there is some good solar data (the Solar Dynamics Observatory) and data from lots of astronomical and astrophysical surveys. Radio telescope data might seem like it would be useful, but those groups tend to process it, hide the raw data, and only publish final images of the things they are interested in.
I have found that a simple USB audio adapter (48,000 or 96,000 sps, with microphone and headphone jacks), a piezo pickup (a guitar contact microphone), and the Audacity software are a powerful way to get started. If you plug it in, Audacity will let you use the microphone input with a cheap microphone, and you can take the output from the headphone jack (nothing attached) to record any sound source from the Internet. There are live video streams on YouTube, like beaches, where the sound of surf and wind is quite complex.
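If you want to script the storage side, here is a minimal Python sketch. The synthetic 440 Hz tone is a stand-in for real microphone samples; the resulting lossless WAV opens directly in Audacity:

```python
import math
import struct
import wave

SAMPLE_RATE = 48000   # typical USB audio adapter rate
DURATION_S = 1.0
FREQ_HZ = 440.0       # synthetic test tone (stand-in for a real microphone)

samples = [
    int(32767 * 0.5 * math.sin(2 * math.pi * FREQ_HZ * n / SAMPLE_RATE))
    for n in range(int(SAMPLE_RATE * DURATION_S))
]

# 16-bit mono PCM is lossless, and Audacity opens it directly.
with wave.open("tone.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(SAMPLE_RATE)
    w.writeframes(struct.pack(f"<{len(samples)}h", *samples))
```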
I suggest you separate “learn how to gather and store time series data from sensors,” “learn how to display and share time series data,” and “learn how to write programs in JavaScript or Python to read microphones, or to read data from files and process it,” from the countless statistical and visualization tools for trying to understand what is going on.
In the seismometer data streams, if you patiently look (or let the computer look), there is three-axis, 100 sps broadband seismometer data at IRIS. The online time series data can be exported in text format, or directly into audio files that can be displayed in Audacity. Audacity can display bands many hours long. You save the individual channels and load them one by one as new tracks. Viewing, zooming, and playing (through speakers) are all fast. You can select up to a million samples and export them to a text file. I use that a lot, then use Excel and Google Sheets for basic statistics, graphing, and correlations.
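Once you have a text export, the parsing and basic statistics take only a few lines. A sketch, assuming a simple one-value-per-line export with a header line; the inline string here is made-up stand-in data, not a real IRIS record:

```python
import statistics

# Stand-in for an exported file: a header line followed by one
# sample value per line.
raw = """\
TIMESERIES header line
12.0
15.0
9.0
14.0
10.0
"""

# Keep numeric lines, skip headers (lines starting with a letter).
values = [
    float(line) for line in raw.splitlines()
    if line and not line[0].isalpha()
]

print(len(values), statistics.mean(values), statistics.pstdev(values))
```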
In the seismometer time series there are about 20 different kinds of events I have identified. There are quiet times where you can take the first derivative of the velocity data (seismometers mostly report velocity) to get acceleration, and then correlate that with the vector tidal gravity of the sun and moon. That is a great way to calibrate many instruments to work as one array.
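A sketch of that velocity-to-acceleration-to-correlation chain in Python, with a synthetic short-period “tidal” signal standing in for the real sun-moon vector tide (real tidal periods are hours, not minutes):

```python
import numpy as np

SPS = 100  # broadband channel sample rate
t = np.arange(0, 600, 1.0 / SPS)  # ten quiet minutes

# Synthetic stand-ins: a slow "tidal" acceleration, integrated into a
# velocity record plus a little instrument noise.
tidal_accel = 1e-6 * np.sin(2 * np.pi * t / 300.0)
rng = np.random.default_rng(0)
velocity = np.cumsum(tidal_accel) / SPS + 1e-9 * rng.standard_normal(t.size)

# First derivative of the velocity record gives acceleration back.
accel = np.gradient(velocity, 1.0 / SPS)

# Correlate the recovered acceleration against the tidal model.
r = np.corrcoef(accel, tidal_accel)[0, 1]
print(round(r, 3))
```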
There are storms, wind, temperature variations, and pressure variations. Many of the global stations have environmental sensors, and many stations have multiple sensors.
I have been tracking all the global sensor networks for about 20 years, and I look for new detectors every day. I am going through many “quantum” devices now, because they are getting sensitive enough to pick up magnetic and gravitational noise that can be verified by correlation. Some of the “quantum entanglement” experiments are actually picking up gravitational fluctuations and magnetic fluctuations. The only way to sort those out is to use time-of-flight correlation (the speed of light and the speed of gravity are the same) to identify the source. More and more active sources are available now, because there are so many new sensors. In principle you can use acoustic arrays and simultaneous drone-based 3D imaging to compare 3D acoustic, 3D electromagnetic, and 3D gravitational models of things like waves at the seashore. Almost every seismometer will show 5-10 second signals that come from ocean waves. Those can be detected well inland.
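The time-of-flight idea in miniature: cross-correlate two records of the same transient and read the delay off the correlation peak. Synthetic data here, with a known 25 ms delay at 1000 sps:

```python
import numpy as np

SPS = 1000
rng = np.random.default_rng(1)

# A shared transient "source" seen by two sensors, the second record
# delayed by 25 samples (25 ms at 1000 sps).
source = rng.standard_normal(2000)
delay = 25
a = source
b = np.concatenate([np.zeros(delay), source])[: source.size]

# Full cross-correlation; the peak offset is the time-of-flight difference.
corr = np.correlate(b, a, mode="full")
lag = int(np.argmax(corr)) - (a.size - 1)
print(lag / SPS)  # seconds of delay between the two records
```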
There are lots of infrasound detectors, ocean bottom hydrophones, and more and more “large N” array experiments where they put hundreds or thousands of sensors at one location. Each of those sensors is three-axis, at 500 to 5000 samples per second (sps).
There is a LOT of data and very interesting time series from Software Defined Radio (SDR). I have tried pretty much all of those and recommend going with RTL-SDR and its command line tools to collect and store data. There are Python tools as well. Those dongles are about $35. They run at about 2.4 Msps with 8-bit I/Q samples, and there are more expensive receivers at 12 to 16 bits and 10 Msps (MegaSps) or higher.
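The rtl_sdr capture tool writes interleaved unsigned 8-bit I/Q bytes. A sketch of turning those into complex samples; the inline bytes here stand in for a real capture file:

```python
import numpy as np

# Stand-in for a few bytes of an rtl_sdr capture file:
# interleaved unsigned 8-bit I and Q, three sample pairs.
raw = np.array([128, 128, 255, 0, 0, 255], dtype=np.uint8)

# Center on zero and scale to roughly [-1, 1].
iq = (raw.astype(np.float32) - 127.5) / 127.5
samples = iq[0::2] + 1j * iq[1::2]

print(samples)          # complex baseband samples
print(np.abs(samples))  # instantaneous magnitude
```

For a real file, replace the inline array with `np.fromfile("capture.bin", dtype=np.uint8)`.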
You can use Raspberry Pi cameras at 1000 frames per second (fps) for 16×640 regions. Those rows can be summed. The image sensors are good because so many groups and methods use them. Even the optical mouse is a camera, at higher and higher frame rates. I think I went through about 40 cameras and talked to most of the camera manufacturers. Rolling-shutter cameras read row by row, so they can be used for time series: at 60 fps and 640×480, that is 60 × 480 = 28,800 rows per second. Use monochrome cameras, avoid Bayer filters and lossy formats, and you get nice time series. A single-channel photodetector can be monitored up to gigasamples per second (Gsps); there are lots of Gsps ADCs now for communications, radar, ultrasound, and vibration analysis.
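The row-rate arithmetic, for the two modes mentioned above:

```python
# Rolling-shutter cameras read out row by row, so each row is a time
# sample. Effective row rate = frames per second * rows per frame.
for fps, (width, height) in [(60, (640, 480)), (1000, (640, 16))]:
    rows_per_second = fps * height
    print(f"{width}x{height} at {fps} fps -> {rows_per_second} rows/s")
```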
There are radar, lidar, GPS electron density, laser communication channel noise, microwave communication channel noise, and noise in resistors (which is a mix of local electronic noise, power system noise, electromagnetic noise from lightning, and many human sources, plus a tiny, tiny bit of gravitational noise).
I would recommend against the LIGO detectors. They are slowly sharing their data, but it is not in a form that is easy to use for earth and solar system geophysics. But there are lots of MEMS gravimeter groups, many gravitational sensor efforts, and the GRACE Follow-On global models (and their errors), which you can use.
Lots of interesting signals. You could work 120 hours a week and not exhaust all that is going on. I bet you can find groups in your area, your country, and your interest areas who are doing things. Ask them to archive their raw data (in a lossless format) and share it for collaborative algorithm development.
I found about 1000 YouTube “LIVE” channels and many traffic cameras and security cameras. Those might be staring at fairly static targets 24/7, and you can use them as targets and watch the fluctuations. I have a group I label “shake wind,” where the camera is continuously shaking from the wind. You can register each frame, solve for subpixel center and rotation, then use that to try for super-resolution. Most online video is LOSSY, so that is not usually possible, but you can shake your own cameras and try things like that. Having sensors of your own that you can control, monitor, and model has many advantages.
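A sketch of the frame-registration step using phase correlation, with a random synthetic “scene” shifted by a known amount (real lossy video will not behave this cleanly):

```python
import numpy as np

rng = np.random.default_rng(2)
frame = rng.standard_normal((64, 64))

# The "shaking camera": the same scene shifted by 3 rows, 5 columns.
shifted = np.roll(frame, shift=(3, 5), axis=(0, 1))

# Phase correlation: the peak of the inverse normalized cross-power
# spectrum sits at the integer shift between the two frames.
F1 = np.fft.fft2(frame)
F2 = np.fft.fft2(shifted)
cp = F2 * np.conj(F1)
peak = np.abs(np.fft.ifft2(cp / (np.abs(cp) + 1e-12)))
dy, dx = np.unravel_index(np.argmax(peak), peak.shape)
print(dy, dx)
```

Subpixel centers come from interpolating around that peak, which is where the lossless source data matters.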
I do not know what you want to do. They ought to have a form for people who ask questions that says, “What are you trying to do? What have you already found and tried?”
I am probably missing a few hundred things. There is a lot of raw (lossless) time series data on the Internet, and much more that you can get if you ask or gather it yourself.
Richard Collins, The Internet Foundation