Tape-Based Quality Control Practices at the
Albuquerque Seismological Laboratory
Data Collection Center

The purpose of this brief document is to describe the quality control tests and procedures that the Albuquerque Seismological Laboratory (ASL) Data Collection Center (DCC) applies to tape-based data from the IRIS/USGS Global Seismographic Network.

The primary goal of the ASL DCC quality control (QC) effort is to ensure that ASL-operated stations produce the maximum possible amount of high-quality, problem-free data. Thus, our highest priority is given to eliminating problems at the source: catching problems as quickly as possible, working with the field engineers to get those problems solved, and then verifying that the solutions are successful. The secondary goal of the QC effort is to repair data and/or document problems such that problem data become usable data. The third goal of our QC work is to document historical data problems; such work typically results in flagging questionable or defective data.

Our QC practices can be roughly categorized as examining the data for problems in the following areas:

  • sensitivity
  • timing
  • polarity
  • orientation
  • general instrument behavior and data quality

In addition, we put extensive QC time and effort into:

  • review and maintenance of the station information database
  • data analysis for troubleshooting special problems
  • data analysis to resolve historical data problems

This document is organized from the perspective of what QC procedures are performed, what accuracies are obtainable, and what percentage of the total data flow is subject to these QC reviews.

Routine Data Quality Control Tests

sensitivity and orientation test:  comparison between co-located sensors

interval: each station is tested once per ten days of data, on average

channels:  BH (from broadband sensors) and SH (from short-period sensors)

description:  If a single station has instruments with overlapping pass bands (e.g. Geotech KS54000 and Streckeisen STS-2), then comparisons are made between the appropriate channels of the different instruments.  Sensitivity comparisons are done in the frequency domain (using power spectral density) with corrections for the different instrument responses (which has the added benefit of testing the specification of the instrument response in our database).  Orientation comparisons are usually performed in the time domain, by bandpass filtering and overplotting the corresponding channels from different sensors.
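
As an illustration, the following is a minimal sketch of the frequency-domain sensitivity comparison. The trace arrays, sampling rate, and the resp_a/resp_b response functions are placeholders (in practice the responses come from the RESP files described below); this is not the production code.

    import numpy as np
    from scipy.signal import welch

    def sensitivity_ratio(trace_a, trace_b, resp_a, resp_b, fs, band=(0.02, 0.1)):
        # Welch PSD estimates of both traces over the same time window.
        f, psd_a = welch(trace_a, fs=fs, nperseg=4096)
        _, psd_b = welch(trace_b, fs=fs, nperseg=4096)
        # Restrict to a band where both sensors record ground motion well
        # above their self-noise, then remove each instrument response so
        # both PSDs are in ground-motion units.
        sel = (f >= band[0]) & (f <= band[1])
        pa = psd_a[sel] / np.abs(resp_a(f[sel])) ** 2
        pb = psd_b[sel] / np.abs(resp_b(f[sel])) ** 2
        # The square root of the mean power ratio approximates the relative
        # sensitivity; values far from 1.0 indicate a sensitivity (or
        # response metadata) problem.
        return np.sqrt(pa.mean() / pb.mean())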

sensitivity and polarity test:  comparison to earth tide synthetics

interval:  every station is reviewed after installation and after service visits

channels:  VH

description:  All of the very long period (0.1 Hz sampling) data are reviewed for consistency with the amplitude and polarity of synthetic earth tides.   This can be a very accurate test of polarity.  However, as the earth tide synthetics are not highly precise, this is just a rough check on sensitivity.  In addition, at many stations the horizontal recordings at tidal frequencies are significantly contaminated by tilt and the tide recordings are nearly useless.  At many stations it is necessary to low-pass filter the data (e.g. corner at 0.001 Hz) before making these time domain comparisons.
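
A minimal sketch of the low-pass step, assuming the VH samples are held in a numpy array (the array name and the tide synthetic are placeholders):

    import numpy as np
    from scipy.signal import butter, filtfilt

    def lowpass_for_tides(vh, fs=0.1, corner=0.001):
        # Second-order Butterworth low-pass; the corner frequency is
        # normalized by the Nyquist frequency (fs / 2).  filtfilt applies
        # the filter forward and backward, so the output is zero-phase and
        # the timing of the tidal signal is preserved.
        b, a = butter(2, corner / (fs / 2.0))
        return filtfilt(b, a, np.asarray(vh, dtype=float))

    # Overplotting the filtered data against the tide synthetic checks the
    # amplitude (a rough sensitivity test) and the sign (a polarity test).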

orientation test:  earthquake analysis using particle motion

interval:  each station is tested on one to three earthquakes per ten days of data

channels:  BH and SH

description:  We attempt to extract three earthquake windows (using earthquakes appearing in the Harvard rapid CMT bulletin) out of every ten days of data.  A simple three-component particle motion analysis is performed to verify that the observed particle motion points toward the earthquake location (i.e., that the back azimuth falls in the proper quadrant).  In most cases, one clear earthquake with usable P waves yields a satisfactory result.  More earthquakes are used if necessary or if problems are suspected (in some cases, the P wave is plotted on the focal sphere if a recording is suspected to be near-nodal).  Mis-orientations as small as 5° to 10° have been detected through routine particle motion analysis.  Smaller mis-orientations can be verified, but they probably would not be caught by the routine checks.
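
A minimal sketch of the particle-motion estimate, assuming z, n, and e are numpy arrays holding a short window around the P arrival on the vertical, north, and east channels (all placeholder names):

    import numpy as np

    def p_wave_back_azimuth(z, n, e):
        # For a compressional P arrival the ground moves up and away from
        # the source, so the horizontal motion that correlates with +Z
        # points along the propagation direction; adding 180 degrees gives
        # the back azimuth from the station toward the event.
        az = np.degrees(np.arctan2(np.sum(e * z), np.sum(n * z)))
        return (az + 180.0) % 360.0

    # Comparing this estimate with the back azimuth computed from the
    # catalog location reveals mis-orientation of the horizontal sensors.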

timing test: earthquake analysis

interval: each station is tested on one to three earthquakes per ten days of data

channels: BH and SH

description: We attempt to extract three earthquake windows (using earthquakes appearing in the Harvard rapid CMT bulletin) out of every ten days of data. Predicted travel times are plotted for each
earthquake using the IASPEI travel time tables. We estimate that all timing errors greater than 10 s will be caught with these routine checks. Smaller errors can be verified.
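
The original comparison used the published IASPEI tables directly; as a modern illustration, the same prediction can be sketched with ObsPy's iasp91 model (the distance, depth, and picked arrival time are placeholders):

    from obspy.taup import TauPyModel

    def clock_offset_estimate(observed_p, depth_km, dist_deg):
        # observed_p is the picked P arrival time in seconds after the
        # event origin time; iasp91 is the IASPEI 1991 travel-time model.
        model = TauPyModel(model="iasp91")
        arrivals = model.get_travel_times(source_depth_in_km=depth_km,
                                          distance_in_degree=dist_deg,
                                          phase_list=["P"])
        return observed_p - arrivals[0].time

    # Offsets larger than about 10 s stand out clearly in the overplots.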


general check: visual review of triggered channels for earthquakes

interval: each station is tested on one to three earthquakes per ten days of data

channels: various channels from short-period and/or low-gain sensors

description: We verify that the triggered channels are working by plotting them together with data from the continuously recorded channels. For some low-gain instruments there can be substantial periods of time (e.g., weeks) between well-recorded events (although triggers occur more frequently).


general check: review of background noise

interval: continuous

channels: VH data from broadband sensors, triggered channels from other sensors

description: We visually review background noise levels on all VH data from the broadband sensors, and we review small segments of noise from instruments that record only in triggered mode. On the broadband sensors we typically look for spikes, steps, noise increases (e.g., indicating vacuum loss), or the disappearance of noise (e.g., a seismometer mass stuck against its stops). This review catches numerous problems.
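
The review itself is visual, but the artifacts being sought are easy to describe algorithmically. A minimal sketch of an automated screen for spikes and steps (the threshold and the input array are placeholders):

    import numpy as np

    def flag_spikes_and_steps(x, thresh=10.0):
        # In the first differences of the data, a spike appears as two
        # large, opposite-signed values back to back, while a step (DC
        # offset) appears as a single isolated large value.
        d = np.diff(np.asarray(x, dtype=float))
        dev = np.abs(d - np.median(d))
        mad = np.median(dev) + 1e-20          # robust scale estimate
        big = set(np.flatnonzero(dev > thresh * mad).tolist())
        spikes = sorted(i for i in big if i + 1 in big and d[i] * d[i + 1] < 0)
        steps = sorted(i for i in big if i - 1 not in big and i + 1 not in big)
        return spikes, steps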


general check: review of mass position channels

interval: continuous

channels: VM or UM from Streckeisen STS-1 broadband sensors

description: We visually review mass position channels to see if masses are properly centered.
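
A minimal sketch of this check in automated form; the full-scale value is a placeholder, since the usable range depends on the instrument:

    import numpy as np

    def mass_needs_recentering(vm, full_scale=4.0, warn_fraction=0.5):
        # vm holds one day of mass-position samples (e.g. in volts).  Flag
        # the channel when the mean position sits beyond half the usable
        # range, i.e. the mass is drifting toward its stops.
        return abs(float(np.mean(vm))) > warn_fraction * full_scale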



general check: review of logs from data acquisition system

interval: every magnetic tape received from stations running Ultrashear software (stations fill tapes about every ten days, on average)

description: The data acquisition computer writes a console log to the data tape. This log is extracted and parsed for various classes of errors and/or warnings, and plots are made of some variables. The information that results from processing the logs is posted on our WWW site for future reference.
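
A minimal sketch of such a log scan. The message classes and patterns shown are hypothetical; the real parser matches the specific messages written by the acquisition software:

    import re
    from collections import Counter

    PATTERNS = {                                  # hypothetical classes
        "clock": re.compile(r"clock|gps", re.IGNORECASE),
        "error": re.compile(r"error|fail", re.IGNORECASE),
        "reboot": re.compile(r"boot|restart", re.IGNORECASE),
    }

    def scan_log(path):
        counts, flagged = Counter(), []
        with open(path, errors="replace") as f:
            for num, line in enumerate(f, 1):
                for name, pattern in PATTERNS.items():
                    if pattern.search(line):
                        counts[name] += 1
                        flagged.append((num, name, line.rstrip()))
        return counts, flagged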



general check: check of SEED record duplication and consistency

interval: continuous

channels: all SEED records from all channels and all sensors

description: This is a new series of tests, installed in August and September 1997. These tests check all SEED records in our system: minor problems in SEED data-record headers are repaired, and unexpected discrepancies are flagged. Unusual data-record duplication problems caused by the Quanterras are repaired.
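
A minimal sketch of one such check, duplicate-record detection, assuming fixed-length 4096-byte records (real record lengths vary and must be determined first):

    def find_duplicate_records(path, reclen=4096):
        # Bytes 8-29 of the fixed data header hold the station, location,
        # channel, and network codes plus the record start time; together
        # they identify a record, so a repeated key marks a duplicate.
        seen, dups = set(), []
        with open(path, "rb") as f:
            recno = 0
            while True:
                rec = f.read(reclen)
                if len(rec) < reclen:
                    break
                key = rec[8:30]
                if key in seen:
                    dups.append(recno)
                seen.add(key)
                recno += 1
        return dups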

Database Quality Control

Seismic data are useless unless one knows where the data were recorded and by what instrument. Thus, we maintain a very large database of coordinates and complete instrument-response information for every channel of every sensor and data acquisition system ever operated at any of our stations since their initial installation. The database information is entered and updated by our database specialist, who reviews all information for a station after every modification to the database. All routine interactions with the database are performed via a graphical user interface (GUI) to reduce the possibility of data entry errors. A new dataless SEED volume is made whenever the database is modified. These dataless SEED volumes are run through VERSEED to look for possible errors or inconsistencies. In addition, the dataless SEED volumes are parsed into RESP files using the RDSEED software, and these RESP files are used for all QC analysis where an instrument response is required. Thus, many of our other QC tests utilize database information and thereby provide a test of database consistency and accuracy.

Data Repair

If necessary, we "repair" data which have problems. Typical repairs involve altering the time stamps on the data, usually in situations where a station clock failed to detect a leap second properly or locked on to the wrong minute. In addition, we sometimes alter the station, channel, or network codes stamped on the data. Timing repairs are easily performed with a standard program; however, the much harder problem of determining what to repair, and by how much, often requires special tests, custom software, and a good deal of effort.
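
Conceptually, the repair itself is a simple shift applied to records inside an affected window, as in this sketch (the records are assumed to be decoded objects with a writable starttime attribute; real repairs edit the SEED header time fields directly):

    from datetime import timedelta

    def shift_record_times(records, window_start, window_end, offset_seconds):
        # Apply a constant clock correction to every record whose start
        # time falls inside the affected window.
        for rec in records:
            if window_start <= rec.starttime < window_end:
                rec.starttime += timedelta(seconds=offset_seconds)
        return records

    # e.g. a +1.0 s shift over the affected window repairs a clock that
    # failed to insert a leap second.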

Special Problems

When we detect or suspect unusual problems at a station, we perform whatever tests and/or checks we think may shed light on the problem. For example, we may select a number of earthquakes where we can compare recordings made at a suspect station with recordings from a nearby station. We may compare the seismic channels to temperature or wind channels. We might compare noise spectra to long term (or published) values for a station. We will perform a detailed examination of the data acquisition log. In some instances it is necessary to examine data records in the order they were written to tape, one record at a time, to verify data acquisition related problems.

Station Calibration Project

Calibrations are regularly performed on IRIS/USGS stations and the resulting signals are analyzed. These calibrations do not assess the absolute calibration of the sensors, but they can detect variations in the instrument response as a function of frequency and long-term drift in the response.
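
A minimal sketch of the relative comparison, assuming measured and reference are complex response estimates sampled at the same frequencies (both placeholders, e.g. from successive calibrations):

    import numpy as np

    def response_change_db(measured, reference):
        # Change in response magnitude, in decibels, at each frequency.
        # A frequency-dependent change indicates a response variation; a
        # change that grows steadily between calibrations indicates drift.
        return 20.0 * np.log10(np.abs(measured) / np.abs(reference))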

Data Sources

Quality control tests are performed on data arriving in a variety of ways. Perhaps the most important QC tests are those performed on data retrieved via dialup or via the LISS while field engineers are on site at a station; such data review has the greatest impact on preventing problems. Problems that are not detected during installation or on-site visits can result in months of problematic data.

We perform a daily review of waveforms arriving in near-real-time, and we review nearly all events retrieved by SPYDER. By using the near-real-time and SPYDER data we are able to catch problems very quickly. All other data QC is performed on data arriving via tape; QC tests are applied to these data at a number of different points during the production process.

It is important to note that, at this time, all data which the ASL DCC publishes arrive via tape, via CD, or in some cases via ftp. Data arriving by other mechanisms may be used for QC, but at present they are not normally used for publication.
