1. Reports Guide - Introduction

1.1. About This Document

This reports guide is intended to give a full explanation of the contents of the various AQS standard reports.

This document is focused solely on the content of the reports, not how to generate them. See the AQS User’s Guide for that.

This is the first release of a new format for the reports guide and it contains information only about the reports most related to certification. These are:

  • Quicklook Criteria

  • Data Completeness

  • DQI (Data Quality Indicators)

  • Certification

  • SO2 Design Value

For each report, this document contains the following sections:

  • What knowledge the report is intended to convey / why it exists.

  • A sample of the output

  • A description of each field included

    • In the header

    • In the body, which will usually be in columns.

If you have any feedback or suggested changes, please let Nick Mangus on the AQS team know (mangus.nick@epa.gov). We will continue to add to this document based on user reception and feedback.

1.2. Technical Details

You must have JavaScript enabled in your browser for the equations to properly display.

This guide is a single (static) HTML document and is fully transportable if you want to save it to local media.

1.3. Report Formats

Each report may come in one of two formats: a "Report" format or a "Workfile" format.

The Report format is a PDF document laid out to make the data easy for a human reader to digest. It is more suitable for printing, with any header information repeated on each page, pages numbered, etc. The first page of every PDF report is the "cover page", which is a summary of the selections that were entered into AQS when the report was created.

The Workfile format is a text file formatted to be read by a computer (spreadsheet or other program). It usually contains a header row with the column headings followed by rows of data. The AQS convention for commenting out data in text files is to begin the line with the pound symbol (#). (That is, if a line begins with a #, it is not a data line, but a line of instructions for the benefit of human readers; like a header, etc.)

This document will confine itself to the PDF-formatted versions of the reports for simplicity. Where Workfiles are available, their contents should closely match the PDF format, and the text files will have header information (lines beginning with the hash sign, #). When generating a Workfile from AQS, the results are returned in a zip archive that includes a PDF "cover page", which is a summary of the selections that were entered into AQS when the report was created.

1.4. Naming

Report naming in AQS has a long and obscure history. Every report has a name and a number. For example, the Data Certification Report and the AMP600 are the same thing. Some people prefer to use the name, others the number. This document will usually refer to the reports by name.

2. Quicklook Report (AMP450)

The AQS Quicklook report is also known as the AMP450.

This report is designed to give AQS users a "quick look" at the key annual statistics for criteria pollutants. Key statistics are the ones that may affect design value calculations (e.g., completeness, averaging time of the standard, form of the standard, etc.)

The Quick Look Report displays annual summary statistics for selected criteria parameters at air quality monitoring sites using the calculation rules for the pollutant standard selected at the time of report generation.

There is a unique format for each of the criteria pollutants. Each format is designed to highlight special calculations that are derived for the given pollutant in order to determine compliance with the National Ambient Air Quality Standards. In addition to these special formats, a listing of the reporting organization codes referenced in the report is provided, as well as a listing of referenced sampling methodology codes at the end of the report.

Important
The most important thing to remember about this report is that each parameter - duration combination will be displayed differently.

2.1. Sample output

Below is a sample of the PDF output.

Quicklook Sample Output

This report is sorted by parameter, then duration (with a page break between parameter - duration combinations), then by geography (site), then by year. The most important thing to remember about this report is that each page may include different columns (the columns displayed are specific to parameter - duration combination). This is done because the form of the standard is different for each pollutant.

Note that unless you specifically exclude these in the report options, there will be multiple entries for the same monitor for the same year for different durations (if there are different NAAQS durations) and exceptional data types (EDTs).

The separate durations will be on different pages, and the different EDTs will be on separate lines of the same page.

2.2. Header Fields

2.2.1. Parameter

The name or description assigned in AQS to the parameter measured by the monitor. Parameters may be pollutants or non-pollutants (e.g., wind speed).

The parameter code is included parenthetically.

2.2.2. Duration

The length of time that air passes through the monitoring device before it is analyzed (measured). So, it represents an averaging period in the atmosphere (for example, a 24-hour sample duration draws ambient air over a collection filter for 24 straight hours). For continuous monitors, it can represent an averaging time of many samples (for example, a 1-hour value may be the average of four one-minute samples collected during each quarter of the hour).

2.2.3. State

The name of the state where the monitoring site is located.

2.2.4. Units (Standard for the Parameter)

The standard unit of measure for the Parameter. AQS converts all incoming data to a parameter specific "standard" unit of measure. This is done so that aggregate values can be computed.

2.3. Columns

Since the format of this report is different for each parameter, the columns are listed by parameter. The "common" identification fields shared by all the pages are listed first and then the data columns are listed for each parameter (and duration, where there are multiple durations).

Each page of the report has some header information, divided into three parts. On the left is the parameter and duration. The center contains the state name. And the right shows the standard units for the parameter.

2.3.1. Fields Common to all Parameters

Site ID

The AQS Site ID in the format XX-YYY-ZZZZ where XX is the State FIPS code, YYY is the County FIPS code, and ZZZZ is the site number within the county.
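As an illustration, the Site ID can be assembled from its numeric FIPS components with zero padding (this helper is hypothetical, not an AQS function):

```python
def format_site_id(state_fips, county_fips, site_number):
    """Build a zero-padded XX-YYY-ZZZZ Site ID from numeric components."""
    return f"{state_fips:02d}-{county_fips:03d}-{site_number:04d}"

# e.g., state FIPS 6, county FIPS 37, site number 2
site_id = format_site_id(6, 37, 2)
```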

POC

This is the "Parameter Occurrence Code" used to distinguish different instruments that measure the same parameter at the same site. There is no meaning to the POC (e.g. POC 1 does not indicate the primary monitor). For example, the first monitor established to measure carbon monoxide (CO) at a site could have a POC of 1. If an additional monitor were established at the same site to measure CO, that monitor could have a POC of 2. However, if a new instrument were installed to replace the original instrument used as the first monitor, that would be the same monitor and it would still have a POC of 1.

PQAO

The name of the agency assigned as Primary Quality Assurance Organization for the monitor reporting data.

City

The name of the city where the monitoring site is located. This represents the legal incorporated boundaries of cities and not urban areas.

County

The name of the county where the monitoring site is located.

Address

The street address giving an approximate location of the site.

Year

The year the data represents.

Meth

A three-digit code representing the measurement method. A method code is only unique within a parameter (that is, method 123 for ozone is not the same as method 123 for benzene).

A description of the method codes displayed in the report is available on a page near the end of the report (usually the second to last page).

Cert

A Flag indicating the official certification status of the monitor-year, as assigned by the applicable regional office user.

See the AQS code tables for meanings of Certification Flags.

EDT

A designation indicating how a summary value is affected by exceptional events. It indicates whether exceptional data exists in the time period being summarized, and whether such exceptional data is included in the summary value.

For summaries of sample measurements, the following Exceptional Data Types are available:

0: No Events. None of the measurement data contributing to the summary has been flagged for exceptional events.

1: All Events Excluded. The summary excludes any measurements that have been flagged for exclusion because of exceptional events. (These measurements are excluded whether or not EPA has concurred with the flagging.)

2: All Events Included. Measurements included in the summary have been flagged for exceptional event exclusion but their data is included. (These measurements are included whether or not EPA has concurred with the flagging.)

5: Concurred Events Excluded. The summary excludes any measurements that have been flagged for exclusion for exceptional events AND the EPA Regional Office has concurred with the flagging.

For any site/monitor and summary time period, either a type 0 summary will exist (no data was flagged), or a type 1, type 2 and type 5 summary will all exist together.

For summaries created from lower-level summaries (e.g. daily summaries created from NAAQS_Average rows):

0: Created only when only lower-level summaries with EDT_ID = 0 exist

1: Created from lower-level summaries with EDT_ID 0 and 1

2: Created from lower-level summaries with EDT_ID 0 and 2

5: Created from lower-level summaries with EDT_ID 0 and 5

2.3.2. Fields Specific to Carbon Monoxide

# Obs

The number of observations (samples) taken during the averaging period.

1st-2nd Max

The highest value for the indicated year. This only includes data at the selected duration or standard (e.g., it may be the maximum daily value). For seasonal monitoring, it only includes data during the effective monitoring season.

The second highest value for the indicated year. This only includes data at the selected duration or standard (e.g., it may be the second maximum daily value). For seasonal monitoring, it only includes data during the effective monitoring season.

Maxes for both 1-hour and 8-hour durations are shown on the report.

Obs > 1Hr Std

The number of samples in the summarized sample set that exceed the level of the primary standard. (Only applicable for criteria pollutants.)

Exceedance counts for both 1-hour and 8-hour durations are shown on the report.

2.3.3. Fields Specific to Sulfur Dioxide

Obs

The number of observations (samples) taken during the averaging period.

Comp Qtrs (Complete Quarters)

The number of quarterly summaries, with corresponding pollutant standard and exceptional data type, where the summary criterion is met.

1st-2nd Max

The highest value for the indicated year. This only includes data at the selected duration or standard (e.g., it may be the maximum daily value). For seasonal monitoring, it only includes data during the effective monitoring season.

The second highest value for the indicated year. This only includes data at the selected duration or standard (e.g., it may be the second maximum daily value). For seasonal monitoring, it only includes data during the effective monitoring season.

Maxes for both 1-hour and 24-hour durations are shown on the report.

99th Pctl 1-Hr

The 99th percentile 1-hour value (the value that 99% of all values are equal to or below).

Days > 24-Hr Std

The number of days where the 24-hour average was above the 24-hour standard.

Arith Mean An - Std

The Arithmetic Mean calculated in the way that would be comparable to the annual standard.

Calculation method:

\[u = \frac {1}{v} {\sum_{i=1}^v d_i}\]

Where:

\( u = \) mean value
\( d_i = \) valid daily maximum occurring in the effective monitoring season
\( v = \) count of valid days

2.3.4. Fields Specific to Ozone / 1-hour

Valid Days Meas

The number of required monitoring days in the aggregation period (e.g., year) where the monitoring criteria were met. (Only applicable for criteria pollutants.)

Num Days Req

The number of days during the year on which the monitor was scheduled to take samples, if measurements are required.

1st-4th Max

The highest value for the indicated year. This only includes data at the selected duration or standard (e.g., it may be the maximum daily value). For seasonal monitoring, it only includes data during the effective monitoring season.

The second highest value for the indicated year. This only includes data at the selected duration or standard (e.g., it may be the second maximum daily value). For seasonal monitoring, it only includes data during the effective monitoring season.

The third highest value for the indicated year. This only includes data at the selected duration or standard (e.g., it may be the third maximum daily value). For seasonal monitoring, it only includes data during the effective monitoring season.

The fourth highest value for the indicated year. This only includes data at the selected duration or standard (e.g., it may be the fourth maximum daily value). For seasonal monitoring, it only includes data during the effective monitoring season.

Day Max > Std

The number of samples in the summarized sample set that exceed the level of the primary standard. (Only applicable for criteria pollutants.)

Est Days > Std

The estimated number of days greater than the standard for the year (or quarter if viewing quarterly data). It is computed for specific pollutants when an exceedance has occurred during the year. The underlying assumption is that missing data is just as likely to exceed the standard as reported data.

Calculation method:

\[e = d + \left(\left(\frac{d}{v}\right)\cdot(r-v-a)\right)\]

Where:

\( e = \) estimated days greater than standard
\( d = \) count of primary exceedances
\( v = \) count of valid days
\( r = \) count of required days
\( a = \) count of days assumed less than standard
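As a sketch, the Est Days > Std formula above can be computed directly from the four counts (the function and variable names here are illustrative):

```python
def estimated_days_above_standard(d, v, r, a):
    """e = d + (d/v) * (r - v - a): observed exceedances plus a pro-rated
    share for required days that are neither valid nor assumed below the
    standard."""
    if v == 0:
        raise ValueError("at least one valid day is required")
    return d + (d / v) * (r - v - a)

# e.g., 3 exceedances in 240 valid days, 365 required, 100 assumed below
estimate = estimated_days_above_standard(3, 240, 365, 100)
```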

Miss Days < Std

The number of invalid or missing days in the effective monitoring season whose daily maxima are assumed to be less than or equal to the standard. (Only applicable to 1-hour and 8-hour ozone.)

Calculation method:

A missing or invalid day is assumed to be less than the standard when either of the following conditions exists:

  • The daily maximums on the days immediately preceding, and immediately succeeding, the missing day were less than, or equal to, 75% of the standard.

  • The number of valid samples for the day was less than 18 and the sum of the following is greater than or equal to 18 (i.e., 75% of the possible values):

    • Number of valid samples;

    • Number of null samples that were flagged as not likely to exceed the standard and for which the Regional Office has indicated concurrence;

    • Number of omitted samples that were flagged with event qualifiers and for which the Regional Office has indicated concurrence.
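The two conditions above can be sketched as a boolean check (a simplified illustration; the parameter names are hypothetical, and 18 is 75% of the 24 possible hourly values):

```python
def assumed_below_standard(prev_day_max, next_day_max, standard,
                           n_valid, n_null_concurred, n_omitted_concurred):
    """Return True when a missing/invalid day may be assumed <= the standard."""
    # Condition 1: daily maxima on the neighboring days were <= 75% of
    # the standard.
    neighbors_low = (prev_day_max <= 0.75 * standard
                     and next_day_max <= 0.75 * standard)
    # Condition 2: fewer than 18 valid samples, but valid samples plus
    # concurred null/omitted samples reach 18 (75% of 24 possible values).
    enough_accounted = (n_valid < 18
                        and n_valid + n_null_concurred
                        + n_omitted_concurred >= 18)
    return neighbors_low or enough_accounted
```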

2.3.5. Fields Specific to Ozone / 8-hour

% Obs

The number of sample values that were reported, divided by the number of sample values that were scheduled to be reported for the year, expressed as a percentage.

Calculation method:

\[p = \frac {(v+a)} {n} \cdot 100\]

Where:

\( p = \) observation percent
\( v = \) number of valid days
\( a = \) number of missing days assumed less than standard
\( n = \) number of required days
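A minimal sketch of the % Obs formula above (variable names follow the definitions):

```python
def percent_obs(v, a, n):
    """p = (v + a) / n * 100: valid days plus missing days assumed below
    the standard, as a percentage of required days."""
    return (v + a) / n * 100
```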

Valid Days Meas

The number of required monitoring days in the aggregation period (e.g., year) where the monitoring criteria were met. (Only applicable for criteria pollutants.)

Calculation method:

The number of active days within the effective monitoring season when minimum daily criteria were met, i.e., the daily Summary Criteria Met value is "Y".

Num Days Req

The number of days during the year on which the monitor was scheduled to take samples, if measurements are required.

Calculation method:

The number of active days within the effective monitoring season.

1st-4th Max

The max values listed are the daily max values. That is, each day the maximum 8-hour average is determined. This is the daily max value. For the year, the four highest daily max values are listed.

The highest value for the indicated year. This only includes data at the selected duration or standard (e.g., it may be the maximum daily value). For seasonal monitoring, it only includes data during the effective monitoring season.

The second highest value for the indicated year. This only includes data at the selected duration or standard (e.g., it may be the second maximum daily value). For seasonal monitoring, it only includes data during the effective monitoring season.

The third highest value for the indicated year. This only includes data at the selected duration or standard (e.g., it may be the third maximum daily value). For seasonal monitoring, it only includes data during the effective monitoring season.

The fourth highest value for the indicated year. This only includes data at the selected duration or standard (e.g., it may be the fourth maximum daily value). For seasonal monitoring, it only includes data during the effective monitoring season.

Day Max > Std

The number of samples in the summarized sample set that exceed the level of the primary standard. (Only applicable for criteria pollutants.)

2.3.6. Fields Specific to PM10 / 24-Hour

Obs

The number of observations (samples) taken during the averaging period.

Num Req

The number of days during the year on which the monitor was scheduled to take samples, if measurements are required.

Calculation method:

The number of active days in the year.

Valid Days

The number of required monitoring days in the aggregation period (e.g., year) where the monitoring criteria were met. (Only applicable for criteria pollutants.)

Calculation method:

The number of valid 24-hour block Arithmetic Mean (NAAQS) values within the year.

% Obs

The number of sample values that were reported, divided by the number of sample values that were scheduled to be reported for the year, expressed as a percentage.

Calculation method:

\[p = \frac {v} {n} \cdot 100\]

Where:

\( p = \) observation percent
\( v = \) number of valid days
\( n = \) number of required (e.g., scheduled) days

1st-4th Max

The highest value for the indicated year. This only includes data at the selected duration or standard (e.g., it may be the maximum daily value). For seasonal monitoring, it only includes data during the effective monitoring season.

The second highest value for the indicated year. This only includes data at the selected duration or standard (e.g., it may be the second maximum daily value). For seasonal monitoring, it only includes data during the effective monitoring season.

The third highest value for the indicated year. This only includes data at the selected duration or standard (e.g., it may be the third maximum daily value). For seasonal monitoring, it only includes data during the effective monitoring season.

The fourth highest value for the indicated year. This only includes data at the selected duration or standard (e.g., it may be the fourth maximum daily value). For seasonal monitoring, it only includes data during the effective monitoring season.

Day Max > Std

The number of samples in the summarized sample set that exceed the level of the primary standard. (Only applicable for criteria pollutants.)

Est Days > Std

The estimated number of days greater than the standard for the year (or quarter if viewing quarterly data). It is computed for specific pollutants when an exceedance has occurred during the year. The underlying assumption is that missing data is just as likely to exceed the standard as reported data.

Calculation method:

\[E = {\sum_{q=1}^4 e_q} \quad \text{Where} \quad e_q = \left( \frac {N_q} {m_q} \right) \cdot \sum_{i=1}^{m_q} \left( \frac {v_i} {k_i} \right)\]

Where:

\( E = \) estimated number of exceedances for the year
\( e_q = \) estimated number of exceedances for calendar quarter q
\( q = \) calendar quarter
\( N_q = \) number of days in quarter q
\( m_q = \) number of strata with samples during quarter q
\( v_i = \) number of observed exceedances in stratum i
\( k_i = \) number of samples in stratum i
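The stratified estimate above can be sketched as follows; the input layout (a list of per-quarter pairs) is illustrative, not an AQS data structure:

```python
def estimated_exceedances(quarters):
    """E = sum_q (N_q / m_q) * sum_i (v_i / k_i).

    `quarters` is a list of (days_in_quarter, strata) pairs, where
    strata is a list of (observed_exceedances, sample_count) tuples,
    one per stratum with samples in that quarter."""
    total = 0.0
    for days_in_quarter, strata in quarters:
        m = len(strata)
        if m == 0:
            continue  # a quarter with no sampled strata contributes nothing
        total += (days_in_quarter / m) * sum(v / k for v, k in strata)
    return total
```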

2.3.7. Fields Specific to PM2.5 / 24-hour

Num Cred Days

Number of scheduled and make-up days that are given credit when determining data completeness for a site. (Note: This may not be the number of values averaged, since "extra samples" may be included in the mean.)

Calculation method:

The sum of valued, scheduled sampling days, plus make-ups for missing scheduled days.

Scheduled days are the number of days within the year that were scheduled for sampling, as determined by the EPA-defined calendar for the required collection frequency, and which also fall within the period of operation, as defined by sampling periods.

A make-up day is a sample recorded in the same stratum as, or exactly seven days after, a missing scheduled sample. In both cases, the make-up sample must occur within the same quarter as the missed sample. A maximum of five make-up samples are allowed per quarter. (References: EPA-454/R-99-008 Guideline on Data Handling Conventions for the PM NAAQS; Memorandum: February 3, 1999 Use of Make-Up Samples to Replace Scheduled PM Samples.)
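The seven-day make-up rule can be sketched as a validity check (a simplified illustration: the same-stratum alternative is omitted, and the function and parameter names are hypothetical):

```python
from datetime import date, timedelta

def quarter(d):
    """Calendar quarter (1-4) for a date."""
    return (d.month - 1) // 3 + 1

def is_valid_seven_day_makeup(missed, makeup, makeups_so_far_in_quarter):
    """A make-up is exactly 7 days after the missed scheduled day,
    within the same calendar quarter, with at most 5 make-ups allowed
    per quarter."""
    return (makeup == missed + timedelta(days=7)
            and quarter(makeup) == quarter(missed)
            and makeups_so_far_in_quarter < 5)
```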

1st-4th Max

The highest value for the indicated year. This only includes data at the selected duration or standard (e.g., it may be the maximum daily value). For seasonal monitoring, it only includes data during the effective monitoring season.

The second highest value for the indicated year. This only includes data at the selected duration or standard (e.g., it may be the second maximum daily value). For seasonal monitoring, it only includes data during the effective monitoring season.

The third highest value for the indicated year. This only includes data at the selected duration or standard (e.g., it may be the third maximum daily value). For seasonal monitoring, it only includes data during the effective monitoring season.

The fourth highest value for the indicated year. This only includes data at the selected duration or standard (e.g., it may be the fourth maximum daily value). For seasonal monitoring, it only includes data during the effective monitoring season.

98th Percentile Value

The sample value in the summarized sample set where 98% of the values in that set are less than or equal to it. (For ozone, based on valid daily maxima; for PM2.5, based on seasonal and non-seasonal algorithms.)

Note, for seasonal PM 2.5 sampling, there is an alternate method for computing the 98th percentile that weights the seasons based on the number of samples.

Wtd Arith Mean

The weighted arithmetic mean for the sample values in the summarized sample set. (Only applicable to PM10 and PM2.5)

Calculation method:

\[Mean_{wtd} = \frac{\sum_{i=1}^{q} u_i} {q}\]

Where:

\( Mean_{wtd} = \) weighted arithmetic mean
\( q = \) number of active quarters
\( i = \) quarter
\( u_i = \) arithmetic mean of samples in quarter i of appropriate exceptional data type
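The formula above reduces to a simple average of the active quarterly means, as in this sketch (inactive quarters are represented here as None, an illustrative convention):

```python
def weighted_annual_mean(quarterly_means):
    """Annual mean as the average of active quarterly means; each quarter
    is weighted equally regardless of its sample count."""
    active = [u for u in quarterly_means if u is not None]
    return sum(active) / len(active)
```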

Important
While all other PM2.5 data on the Quicklook report is calculated for the 24-hour PM2.5 standard, the weighted arithmetic mean is calculated for the annual standard. (Since there are two primary PM2.5 standards and there was room on the report, metrics reflecting each were included.) To put it another way, a PM2.5 row in the Quicklook report combines data from two different annual summary records: one for the 24-hour standard and one for the annual standard. They may have different samples excluded, since exclusion is specific to a standard.

3. Data Completeness Report (AMP430)

The AQS Data Completeness Report is also known as the AMP430.

This report is designed to give AQS users a summary of the status of their data with respect to the operating information (e.g., sampling schedules) also entered in AQS.

This report consists of several sections. If applicable, the first pages list monitors that are "active" but did not report sample data. AQS considers a monitor active if it has an open sample period that overlaps with the time selection window.

For all monitors that did report data, the information will be sorted by region, state, reporting organization, and monitor type, with a page break on each.

3.1. Sample output

Below is a sample of the PDF output for the "Monitors Reporting" section of the Data Completeness report.

Data Completeness Report

3.2. Header Fields

3.2.1. Date Range

The date range for which the completeness statistics are valid. This will reflect the date range selected when the report was created.

3.2.2. Region

The EPA Region Code and the name of the city where the Regional Office is located.

3.2.3. State

The name of the state where the monitoring site is located.

3.2.4. REP ORG (Reporting Organization)

The name of the agency assigned as the Reporting Organization for the monitor reporting data.

3.2.5. MONITOR TYPE

An administrative or regulatory classification for the monitor.

3.3. Columns

3.3.1. SITE ID

The AQS Site ID in the format XX-YYY-ZZZZ where XX is the State FIPS code, YYY is the County FIPS code, and ZZZZ is the site number within the county.

3.3.2. CITY

The name of the city where the monitoring site is located. This represents the legal incorporated boundaries of cities and not urban areas.

3.3.3. ADDRESS

The street address giving an approximate location of the site.

3.3.4. PARAMETER

The parameter code and name are listed.

3.3.5. POC

This is the "Parameter Occurrence Code" used to distinguish different instruments that measure the same parameter at the same site. There is no meaning to the POC (e.g. POC 1 does not indicate the primary monitor). For example, the first monitor established to measure carbon monoxide (CO) at a site could have a POC of 1. If an additional monitor were established at the same site to measure CO, that monitor could have a POC of 2. However, if a new instrument were installed to replace the original instrument used as the first monitor, that would be the same monitor and it would still have a POC of 1.

3.3.6. DURATION

The length of time that air passes through the monitoring device before it is analyzed (measured). So, it represents an averaging period in the atmosphere (for example, a 24-hour sample duration draws ambient air over a collection filter for 24 straight hours). For continuous monitors, it can represent an averaging time of many samples (for example, a 1-hour value may be the average of four one-minute samples collected during each quarter of the hour).

3.3.7. METHOD

A three-digit code representing the measurement method. A method code is only unique within a parameter (that is, method 123 for ozone is not the same as method 123 for benzene).

3.3.8. OBSERVATIONS

For each month the number of observations and the percent completeness for the month. Also listed for the year as a whole.

The following rules are applied to determining completeness:

  • Monitoring seasons are considered. If a monitor is seasonal, AQS only expects data to be reported during the season.

  • Sample schedules are used to determine if a monitor was active. AQS expects the monitor to have collected data during all sample periods (between all sample period begin dates and sample period end dates).

  • If the date range selected for the report is not an entire calendar year, then only the months selected are included in the annual total. That is, an excluded month does not count against the annual total.

4. QA Data Quality Indicator Report (AMP256)

The AQS QA Data Quality Indicator Report is also known as the DQI Report and the AMP256.

This report is designed to give AQS users a summary of the status of their quality assurance activities.

The DQI Report displays a summary of quality assurance activities as required by Appendix A of the monitoring rule. The report is sorted by assessment, PQAO, parameter, and year. There is a page break after a change in assessment, PQAO, or parameter. At the end of each assessment section is a summary row by year, then by PQAO.

This report contains seven sections related to the quality assurance assessment types:

  • One Point Quality Control

  • Flow Rate verifications (for particulate parameters)

  • Annual Performance Evaluation

  • Semi-Annual Flow Rate Audits

  • Collocation Summary

  • Performance Evaluation Program (PEP)

  • Lead Audit Strip Analysis

Each is described in its own section below.

The report begins with a "Notes" page with some explanatory information about the report. Primarily, it lists the Monitor Type (MT) codes that may appear elsewhere in the report.

4.1. One Point Quality Control

This section of the report summarizes the One Point QC checks performed on the monitors that meet the selection criteria entered when the report was created.

The report is sorted by PQAO and parameter with a page break on changing PQAO.

4.1.1. Sample output

Below is a sample of the PDF output for the One Point Quality Control layout.

DQI - One Point Quality Control

4.1.2. Header Fields

Pollutant

The name or description assigned in AQS to the parameter measured by the monitor. Parameters may be pollutants or non-pollutants (e.g., wind speed).

PQAO

The name of the agency assigned as Primary Quality Assurance Organization for the monitor reporting data.

APP A?

An indicator (Y or N) as to whether the listed pollutant is subject to the requirements of 40 CFR Part 58 Appendix A.

4.1.3. Columns

Year

The year the data represents.

Region

A numerical identifier (with leading zero) for the EPA Region where the monitoring site resides.

State

The two letter abbreviation for the state.

Site IDs

The AQS Site ID in the format XX-YYY-ZZZZ where XX is the State FIPS code, YYY is the County FIPS code, and ZZZZ is the site number within the county.

POC

This is the "Parameter Occurrence Code" used to distinguish different instruments that measure the same parameter at the same site. There is no meaning to the POC (e.g. POC 1 does not indicate the primary monitor). For example, the first monitor established to measure carbon monoxide (CO) at a site could have a POC of 1. If an additional monitor were established at the same site to measure CO, that monitor could have a POC of 2. However, if a new instrument were installed to replace the original instrument used as the first monitor, that would be the same monitor and it would still have a POC of 1.

MT (Monitor Type)

An administrative or regulatory classification for the monitor.

See the "Notes" page at the beginning of the report for the meanings of the abbreviations.

Begin Date

The greatest of: January 1 of the summary year; Earliest date falling within a sample period for the monitor and summary year; Earliest date falling within the PQAO assignment for the monitor and summary year.

End Date

The earliest of: December 31 of the summary year; Latest date falling within a sample period for the monitor and summary year; Latest date falling within the PQAO assignment for the monitor and summary year; Calendar date that the summary is retrieved.

Intervals Required

The number of intervals between the begin and end date that require a one point quality control check.

The number of two-week periods falling completely within a sample period and the PQAO assignment for the monitor for the summary year.

Valued Intervals

The number of intervals between the begin and end date in which a one point quality control check was reported.

% Complete

The 1-point QC completeness data is evaluated in the following manner:

  • Count the number of checks in each 14-day interval, starting with the Jan 1-14 interval. For each 14-day interval, multiple checks count only as one.

  • Divide the total number of checks from the first step by 26.

For certification, a green Y requires > 75% completeness. That means a monitoring organization could miss six 14-day intervals (that is, have a check fall outside the 14-day interval) and still get a green Y. Missing up to nine 14-day intervals elicits a yellow warning flag; missing ten or more elicits an N flag.
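The interval-counting arithmetic above can be sketched as follows (a minimal illustration, assuming checks are supplied as Python `datetime.date` values and that the 26 intervals are the fixed 14-day blocks counted from January 1; the function name is hypothetical):

```python
from datetime import date

def one_point_qc_completeness(check_dates, year):
    """Percent of the 26 fixed 14-day intervals (starting with Jan 1-14)
    that contain at least one 1-point QC check."""
    jan1 = date(year, 1, 1)
    covered = set()
    for d in check_dates:
        index = (d - jan1).days // 14   # 0-based interval number
        if 0 <= index < 26:             # multiple checks in one interval count once
            covered.add(index)
    return 100.0 * len(covered) / 26
```

For example, checks on January 1 and January 20 cover two distinct intervals, so 24 intervals would still be missing for that year.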

CV UB

The 90% upper bound on the coefficient of variation (CV). This is often used as the precision estimate for one-point quality control checks. Only valid observations are used in this calculation.

Calculation method:

\[CV = \sqrt{ \frac { n \cdot {\left( {\sum_{i=1}^n d_i^2 } \right)} - {\left( {\sum_{i=1}^n d_i } \right)^2} } {2n \cdot (n-1)} } \cdot \sqrt { \frac {n-1} { \chi_{0.1, n-1}^2} } \]

Where:

\(CV = \) the coefficient of variation
\(d = \) the percent difference for any given audit
\(n = \) the number of audits examined
\({ \chi_{0.1, n-1}^2} = \) the 10th percentile of a chi-squared distribution with n-1 degrees of freedom

Calculation method:

\[d = \frac { \left( Y - X \right) } {X} \cdot 100\]

Where:

\( d = \) percent difference
\( X = \) indicated (measured) concentration or flow rate
\( Y = \) known (true) concentration or flow rate
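The two formulas above can be sketched together in Python (an illustration, not the AQS implementation; the chi-squared percentile is passed in rather than computed, since it would typically come from a statistics library or a table, and the function names are hypothetical):

```python
import math

def percent_difference(indicated, known):
    # d = (Y - X) / X * 100, with X = indicated (measured) and Y = known (true)
    return (known - indicated) / indicated * 100.0

def cv_upper_bound(d, chi2_10th):
    # 90% upper bound on the CV of the percent differences d, per the
    # formula above; chi2_10th is the 10th percentile of a chi-squared
    # distribution with n-1 degrees of freedom
    n = len(d)
    sum_sq, total = sum(x * x for x in d), sum(d)
    cv = math.sqrt((n * sum_sq - total * total) / (2.0 * n * (n - 1)))
    return cv * math.sqrt((n - 1) / chi2_10th)
```

With d = [2, -2] and a deliberately artificial chi-squared percentile of 1.0, the upper bound works out to 2.0.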

Bias UB

The quality control bias estimator is an upper bound on the mean absolute value of the percent differences of regular and assessment samples.

Calculation method:

\[|bias| = AB + t_{0.95, n-1} \cdot \frac {AS} {\sqrt{n}} \]

Where:

\(\vert bias \vert = \) the bias estimate
\(AB = \) the mean of the absolute values of the percent differences
\(AS = \) the standard deviation of the absolute value of the percent differences
\(n = \) the number of audits examined
\(t_{0.95, n-1} = \) the 95th quantile of a t-distribution with n-1 degrees of freedom

Calculation method:

\[AB = \frac {1} {n} \cdot {\sum_{i=1}^n \vert d_i \vert } \]

Calculation method:

\[AS = \sqrt { \frac { ({n \cdot \sum_{i=1}^n {\vert d_i \vert}^2}) - ({\sum_{i=1}^n {\vert d_i \vert}})^2 } {n \cdot (n-1)} } \]
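The three bias formulas above can be combined into one short function (a sketch; the t-quantile is passed in, e.g. from a table or a statistics library, and the AS expression is algebraically identical to the sample standard deviation of the absolute differences):

```python
import math
from statistics import mean, stdev

def bias_upper_bound(d, t_95):
    # |bias| = AB + t * AS / sqrt(n), per the formulas above
    abs_d = [abs(x) for x in d]
    ab = mean(abs_d)     # AB: mean of the absolute percent differences
    as_ = stdev(abs_d)   # AS: equals the sample standard deviation of |d|
    return ab + t_95 * as_ / math.sqrt(len(d))
```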

Sign (+, -, or +/-) of the bias of the percent differences.

A sign will be determined by rank ordering the percent differences of the QC check samples from a given site for a particular assessment interval.

Calculate the 25th and 75th percentiles of the percent differences for each site. The absolute bias upper bound should be flagged as positive if both percentiles are positive and negative if both percentiles are negative. If one is positive and the other is negative, the upper bound would not be flagged and +/- will be displayed.
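The sign determination above can be sketched as follows (a minimal illustration; the report does not specify a percentile interpolation method, so the 'inclusive' convention used here is an assumption):

```python
from statistics import quantiles

def bias_sign(d):
    # Flag from the 25th and 75th percentiles of the percent differences
    q1, _, q3 = quantiles(d, n=4, method='inclusive')
    if q1 > 0 and q3 > 0:
        return '+'
    if q1 < 0 and q3 < 0:
        return '-'
    return '+/-'   # mixed signs: the upper bound is not flagged
```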

4.2. Annual Performance Evaluation

This section of the report summarizes the Annual Performance Evaluation checks performed on the monitors that meet the selection criteria entered when the report was created.

The report is sorted by PQAO and parameter with a page break on changing PQAO.

4.2.1. Sample output

Below is a sample of the PDF output for the Annual Performance Evaluation layout.

DQI - Annual Performance Evaluation

4.2.2. Header Fields

Pollutant

The name or description assigned in AQS to the parameter measured by the monitor. Parameters may be pollutants or non-pollutants (e.g., wind speed).

PQAO

The name of the agency assigned as Primary Quality Assurance Organization for the monitor reporting data.

APP A?

An indicator (Y or N) as to whether the listed pollutant is subject to the requirements of 40 CFR Part 58 Appendix A.

4.2.3. Columns

Year

The year the data represents.

Region

A numerical identifier (with leading zero) for the EPA Region where the monitoring site resides.

State

The two letter abbreviation for the state.

Site IDs

The AQS Site ID in the format XX-YYY-ZZZZ where XX is the State FIPS code, YYY is the County FIPS code, and ZZZZ is the site number within the county.

POC

This is the "Parameter Occurrence Code" used to distinguish different instruments that measure the same parameter at the same site. There is no meaning to the POC (e.g. POC 1 does not indicate the primary monitor). For example, the first monitor established to measure carbon monoxide (CO) at a site could have a POC of 1. If an additional monitor were established at the same site to measure CO, that monitor could have a POC of 2. However, if a new instrument were installed to replace the original instrument used as the first monitor, that would be the same monitor and it would still have a POC of 1.

MT

An administrative or regulatory classification for the monitor.

Begin Date

The greatest of: January 1 of the summary year; Earliest date falling within a sample period for the monitor and summary year; Earliest date falling within the PQAO assignment for the monitor and summary year.

End Date

The earliest of: December 31 of the summary year; Latest date falling within a sample period for the monitor and summary year; Latest date falling within the PQAO assignment for the monitor and summary year; Calendar date that the summary is retrieved.

Avg %D / Lvl

The average of the percent differences for all pairs taken at each of the audit levels. Levels 1-5 are displayed on the first row, and levels 6-10 are displayed on the second row (i.e., level 6 is immediately below level 1).

Calculation method:

\[d = \frac { \left( Y - X \right) } {X} \cdot 100\]

Where:

\( d = \) percent difference
\( X = \) indicated (measured) concentration or flow rate
\( Y = \) known (true) concentration or flow rate

Obs / Q

The number of audits performed in each calendar quarter.

Criteria Met?

An indicator of whether the Annual Performance Audit reporting requirement was met. A monitor has met the conditions if (a) an audit was performed at least once per year and (b) the audit covered, at a minimum, three consecutive levels (1,2,3 or 2,3,4 or 3,4,5) within each of those years.
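The consecutive-levels condition in (b) can be expressed as a simple set check (an illustrative sketch; the function name is hypothetical):

```python
def audit_levels_ok(levels):
    # True when the audited levels include one of the consecutive
    # triples (1,2,3), (2,3,4), or (3,4,5)
    used = set(levels)
    return any({k, k + 1, k + 2} <= used for k in (1, 2, 3))
```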

Conf. Limits; Lower

The lower probability limit (95% confidence) of the compared percent differences. This is calculated for quality assurance assessment results.

Calculation method:

\[l = D - \left( S \cdot 1.96 \right)\]

Where:

\( l = \) lower 95% confidence limit
\( D = \) mean value
\( S = \) standard deviation of the percent differences

Conf. Limits; Upper

The upper probability limit (95% confidence) of the compared percent differences.

Calculation method:

\[u = D + \left( S \cdot 1.96 \right)\]

Where:

\( u = \) upper 95% confidence limit
\( D = \) mean value
\( S = \) standard deviation of the percent differences

% Bet. Cf Lim

The percent of audits that are between the upper and lower confidence limits. This value is rounded to a whole number.
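The limits and the rounded percentage can be computed together (a sketch; the report does not state whether S is the sample or population standard deviation, so the sample standard deviation used here is an assumption):

```python
from statistics import mean, stdev

def confidence_limits(d):
    # l = D - 1.96*S and u = D + 1.96*S, per the formulas above
    D, S = mean(d), stdev(d)
    return D - 1.96 * S, D + 1.96 * S

def pct_between_limits(d):
    lo, hi = confidence_limits(d)
    inside = sum(1 for x in d if lo <= x <= hi)
    return round(100.0 * inside / len(d))   # rounded to a whole number
```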

4.3. Flow Rate Verification

This section of the report summarizes the Flow Rate Verifications performed on the monitors that meet the selection criteria entered when the report was created.

The report is sorted by PQAO and parameter with a page break on changing PQAO.

4.3.1. Sample output

Here is a sample of the PDF output for the Flow Rate Verification layout.

DQI - Flow Rate Verification

4.3.2. Header Fields

Pollutant

The name or description assigned in AQS to the parameter measured by the monitor. Parameters may be pollutants or non-pollutants (e.g., wind speed).

PQAO

The name of the agency assigned as Primary Quality Assurance Organization for the monitor reporting data.

APP A?

An indicator (Y or N) as to whether the listed pollutant is subject to the requirements of 40 CFR Part 58 Appendix A.

4.3.3. Columns

Year

The year the data represents.

Reg

A numerical identifier (with leading zero) for the EPA Region where the monitoring site resides.

St

The two letter abbreviation for the state.

Site IDs

The AQS Site ID in the format XX-YYY-ZZZZ where XX is the State FIPS code, YYY is the County FIPS code, and ZZZZ is the site number within the county.

POC

This is the "Parameter Occurrence Code" used to distinguish different instruments that measure the same parameter at the same site. There is no meaning to the POC (e.g. POC 1 does not indicate the primary monitor). For example, the first monitor established to measure carbon monoxide (CO) at a site could have a POC of 1. If an additional monitor were established at the same site to measure CO, that monitor could have a POC of 2. However, if a new instrument were installed to replace the original instrument used as the first monitor, that would be the same monitor and it would still have a POC of 1.

MT (Monitor Type)

An administrative or regulatory classification for the monitor.

See the "Notes" page at the beginning of the report for the meanings of the abbreviations.

Begin Date

The greatest of: January 1 of the summary year; Earliest date falling within a sample period for the monitor and summary year; Earliest date falling within the PQAO assignment for the monitor and summary year.

End Date

The earliest of: December 31 of the summary year; Latest date falling within a sample period for the monitor and summary year; Latest date falling within the PQAO assignment for the monitor and summary year; Calendar date that the summary is retrieved.

# Obs Required

The number of flow rate verification observations required to be performed during the averaging period.

# Obs

The number of observations (samples) taken during the averaging period.

% D

The relative difference between the known and measured concentrations, expressed as a percentage.

Calculation method:

\[d = \frac { \left( Y - X \right) } {X} \cdot 100\]

Where:

\( d = \) percent difference
\( X = \) indicated (measured) concentration or flow rate
\( Y = \) known (true) concentration or flow rate

% Complete

The number of sample values reported divided by the number of sample values scheduled to be reported for the year, expressed as a percentage.

Bias UB

The quality control bias estimator is an upper bound on the mean absolute value of the percent differences of regular and assessment samples.

Calculation method:

\[|bias| = AB + t_{0.95, n-1} \cdot \frac {AS} {\sqrt{n}} \]

Where:

\(\vert bias \vert = \) the bias estimate
\(AB = \) the mean of the absolute values of the percent differences
\(AS = \) the standard deviation of the absolute value of the percent differences
\(n = \) the number of audits examined
\(t_{0.95, n-1} = \) the 95th quantile of a t-distribution with n-1 degrees of freedom

Calculation method:

\[AB = \frac {1} {n} \cdot {\sum_{i=1}^n \vert d_i \vert } \]

Calculation method:

\[AS = \sqrt { \frac { ({n \cdot \sum_{i=1}^n {\vert d_i \vert}^2}) - ({\sum_{i=1}^n {\vert d_i \vert}})^2 } {n \cdot (n-1)} } \]

Sign (+, -, or +/-) of the bias of the percent differences.

A sign will be determined by rank ordering the percent differences of the QC check samples from a given site for a particular assessment interval.

Calculate the 25th and 75th percentiles of the percent differences for each site. The absolute bias upper bound should be flagged as positive if both percentiles are positive and negative if both percentiles are negative. If one is positive and the other is negative, the upper bound would not be flagged and +/- will be displayed.

4.4. Semi-Annual Flow Rate Audits

This section of the report summarizes the Semi-Annual Flow Rate Audits performed on the monitors that meet the selection criteria entered when the report was created.

The report is sorted by PQAO and parameter with a page break on PQAO changes.

4.4.1. Sample output

Below is a sample of the PDF output for the Semi-Annual Flow Rate Audits layout.

DQI - Semi-Annual Flow Rate Audits

4.4.2. Header Fields

Pollutant

The name or description assigned in AQS to the parameter measured by the monitor. Parameters may be pollutants or non-pollutants (e.g., wind speed).

PQAO

The name of the agency assigned as Primary Quality Assurance Organization for the monitor reporting data.

APP A?

An indicator (Y or N) as to whether the listed pollutant is subject to the requirements of 40 CFR Part 58 Appendix A.

4.4.3. Columns

Year

The year the data represents.

Reg

A numerical identifier (with leading zero) for the EPA Region where the monitoring site resides.

St

The two letter abbreviation for the state.

AQS Site IDs

The AQS Site ID in the format XX-YYY-ZZZZ where XX is the State FIPS code, YYY is the County FIPS code, and ZZZZ is the site number within the county.

POC

This is the "Parameter Occurrence Code" used to distinguish different instruments that measure the same parameter at the same site. There is no meaning to the POC (e.g. POC 1 does not indicate the primary monitor). For example, the first monitor established to measure carbon monoxide (CO) at a site could have a POC of 1. If an additional monitor were established at the same site to measure CO, that monitor could have a POC of 2. However, if a new instrument were installed to replace the original instrument used as the first monitor, that would be the same monitor and it would still have a POC of 1.

MT (Monitor Type)

An administrative or regulatory classification for the monitor.

See the "Notes" page at the beginning of the report for the meanings of the abbreviations.

Begin Date

The greatest of: January 1 of the summary year; Earliest date falling within a sample period for the monitor and summary year; Earliest date falling within the PQAO assignment for the monitor and summary year.

End Date

The earliest of: December 31 of the summary year; Latest date falling within a sample period for the monitor and summary year; Latest date falling within the PQAO assignment for the monitor and summary year; Calendar date that the summary is retrieved.

# Req (Number Required)

The number of semi-annual flow rate audits required for the monitor between the begin and end date.

#Q

The number of distinct quarters in which an evaluation was performed.

% Complete

The ratio of actual evaluations to required evaluations. If the sampler operated for less than 9 months, at least one audit is expected; if it operated for more than 9 months, two audits are expected.

This field is valued at 100% if there were 3 or 4 audits in 3 or 4 distinct quarters; or if there were 2 audits during the year separated by at least 5 months.

This field is valued at 50% if there were 2 audits during the year not separated by at least 5 months.
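The valuation rules above can be sketched as follows (only the cases spelled out in the report are handled; anything else returns None rather than guessing, and the five-month separation is approximated here by calendar-month arithmetic):

```python
from datetime import date

def flow_audit_pct_complete(audit_dates):
    quarters = {(d.year, (d.month - 1) // 3) for d in audit_dates}
    n = len(audit_dates)
    if n in (3, 4) and len(quarters) in (3, 4):
        return 100          # 3 or 4 audits in 3 or 4 distinct quarters
    if n == 2:
        d1, d2 = sorted(audit_dates)
        months_apart = (d2.year - d1.year) * 12 + (d2.month - d1.month)
        return 100 if months_apart >= 5 else 50
    return None             # cases not described in the report
```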

Obs / Q

The number of audits performed in each calendar quarter.

Avg % d (Average Percent Difference)

The average of the Percent Differences (d).

Calculate the average of the values determined using the equation below for each audit.

Calculation method:

\[d = \frac { \left( Y - X \right) } {X} \cdot 100\]

Where:

\( d = \) percent difference
\( X = \) indicated (measured) concentration or flow rate
\( Y = \) known (true) concentration or flow rate

Conf. Limits; Lower

The lower probability limit (95% confidence) of the compared percent differences. This is calculated for quality assurance assessment results.

Calculation method:

\[l = D - \left( S \cdot 1.96 \right)\]

Where:

\( l = \) lower 95% confidence limit
\( D = \) mean value
\( S = \) standard deviation of the percent differences

Conf. Limits; Upper

The upper probability limit (95% confidence) of the compared percent differences.

Calculation method:

\[u = D + \left( S \cdot 1.96 \right)\]

Where:

\( u = \) upper 95% confidence limit
\( D = \) mean value
\( S = \) standard deviation of the percent differences

% Between Conf Lmt

The percent of audits that are between the upper and lower confidence limits. This value is rounded to a whole number.

4.5. Collocation Summary

This section of the report summarizes the Collocated Samples collected at the monitors that meet the selection criteria entered when the report was created.

Note: this section of the report is preceded by the collocation details section, which contains mostly the same information by monitor, whereas the summary is by method.

The report is sorted by PQAO and method with a page break between PQAOs.

4.5.1. Sample output

Below is a sample of the PDF output for the Collocation Summary layout.

DQI - Collocation Summary

4.5.2. Header Fields

Pollutant

The name or description assigned in AQS to the parameter measured by the monitor. Parameters may be pollutants or non-pollutants (e.g., wind speed).

PQAO

The name of the agency assigned as Primary Quality Assurance Organization for the monitor reporting data.

APP A?

An indicator (Y or N) as to whether the listed pollutant is subject to the requirements of 40 CFR Part 58 Appendix A.

4.5.3. Columns

Year

The year the data represents.

Method

A three-digit code representing the measurement method. A method code is only unique within a parameter (that is, method 123 for ozone is not the same as method 123 for benzene).

Region

A numerical identifier (with leading zero) for the EPA Region where the monitoring site resides.

State

The two letter abbreviation for the state.

# Sites

The number of sites measuring the parameter. The collocation requirement is derived from this count: for PM, it is 15% of total sites; for lead, 4 collocated (PM10 or TSP) audits are required if there are 5 or fewer lead sites in the PQAO measuring the parameter in a given year, or 6 if there are more than 5.

This number only includes sites that meet the selection criteria entered when the report was created and that are within the sort group of: method and PQAO.

# Colloc Required

The number of sites (within the group) that require collocated monitors.

# Colloc Actual

The number of sites (within the group) that have collocated monitors.

% of Req Sites Colloc

The percentage of sites that meet their collocation requirements, based on the previous two fields.

# Obs Req

The number of collocated audits that are required to be performed, determined as CEIL(0.2 * Monitor_Count).
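The formula is small, but the rounding direction matters, so a one-line sketch:

```python
import math

def collocated_obs_required(monitor_count):
    # CEIL(0.2 * Monitor_Count): at least 20% of monitors, rounded up
    return math.ceil(0.2 * monitor_count)
```

For example, seven monitors require two collocated observations (0.2 × 7 = 1.4, rounded up).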

# Obs

The number of collocated data pairs reported to AQS.

# Valid Obs

The number of valid collocated value pairs recorded during the time period. For a single assessment, the value will be 1 if the pair is valid and 0 if it is not. The "valid" collocated observations are those where the primary and collocated measurement values are both equal to or above the limits identified in 40 CFR Part 58 Appendix A 4(c). These are the values that are used for precision and bias calculations (e.g. CV UB).

% Complete

The ratio of # Obs to # Obs Req.

CV UB

The 90% upper bound on the coefficient of variation (CV). This is often used as the precision estimate for one-point quality control checks. Only valid observations are used in this calculation.

Calculation method:

\[CV = \sqrt{ \frac { n \cdot {\left( {\sum_{i=1}^n d_i^2 } \right)} - {\left( {\sum_{i=1}^n d_i } \right)^2} } {2n \cdot (n-1)} } \cdot \sqrt { \frac {n-1} { \chi_{0.1, n-1}^2} } \]

Where:

\(CV = \) the coefficient of variation
\(d = \) the percent difference for any given audit
\(n = \) the number of audits examined
\({ \chi_{0.1, n-1}^2} = \) the 10th percentile of a chi-squared distribution with n-1 degrees of freedom

Calculation method:

\[d = \frac { \left( Y - X \right) } {X} \cdot 100\]

Where:

\( d = \) percent difference
\( X = \) indicated (measured) concentration or flow rate
\( Y = \) known (true) concentration or flow rate

4.6. Performance Evaluation Program (PEP)

This section of the report summarizes the Performance Evaluation Program (PEP) audits performed at the monitors that meet the selection criteria entered when the report was created.

The report is sorted on PQAO and method with a page break between PQAOs.

4.6.1. Sample output

Here is a sample of the PDF output for the Performance Evaluation Program (PEP) layout.

DQI - Performance Evaluation Program (PEP)

4.6.2. Header Fields

Pollutant

The name or description assigned in AQS to the parameter measured by the monitor. Parameters may be pollutants or non-pollutants (e.g., wind speed).

PQAO

The name of the agency assigned as Primary Quality Assurance Organization for the monitor reporting data.

APP A?

An indicator (Y or N) as to whether the listed pollutant is subject to the requirements of 40 CFR Part 58 Appendix A.

4.6.3. Columns

Year

The year the data represents.

Region

A numerical identifier (with leading zero) for the EPA Region where the monitoring site resides.

State

The two letter abbreviation for the state.

# Sites

The number of sites measuring the parameter.

This number only includes sites that meet the selection criteria entered when the report was created and that are within the sort group of: method and PQAO.

# PEP Required

The number of PEP (performance evaluation program) assessment observations that are required to meet regulatory obligations.

# PEP Collected

The number of PEP (performance evaluation program) assessment observations that were reported to AQS.

# Colloc PEP Required

The number of Collocated Performance Evaluation Program (PEP) audits required to be collected. This value is only computed for SUMMARY rows, where applicable.

# Colloc PEP Collected

Lead Only: Number of Collocated PEP Audits Collected

% Complete

The percentage of required observations (or scheduled days) made for the given assessment time period.

Bias

The quality control bias estimator is an upper bound on the mean absolute value of the percent differences of regular and assessment samples.

Calculation method:

\[|bias| = AB + t_{0.95, n-1} \cdot \frac {AS} {\sqrt{n}} \]

Where:

\(\vert bias \vert = \) the bias estimate
\(AB = \) the mean of the absolute values of the percent differences
\(AS = \) the standard deviation of the absolute value of the percent differences
\(n = \) the number of audits examined
\(t_{0.95, n-1} = \) the 95th quantile of a t-distribution with n-1 degrees of freedom

Calculation method:

\[AB = \frac {1} {n} \cdot {\sum_{i=1}^n \vert d_i \vert } \]

Calculation method:

\[AS = \sqrt { \frac { ({n \cdot \sum_{i=1}^n {\vert d_i \vert}^2}) - ({\sum_{i=1}^n {\vert d_i \vert}})^2 } {n \cdot (n-1)} } \]

Sign (+, -, or +/-) of the bias of the percent differences.

A sign will be determined by rank ordering the percent differences of the QC check samples from a given site for a particular assessment interval.

Calculate the 25th and 75th percentiles of the percent differences for each site. The absolute bias upper bound should be flagged as positive if both percentiles are positive and negative if both percentiles are negative. If one is positive and the other is negative, the upper bound would not be flagged and +/- will be displayed.

Conf. Limits; Lower

The lower probability limit (95% confidence) of the compared percent differences. This is calculated for quality assurance assessment results.

Calculation method:

\[l = D - \left( S \cdot 1.96 \right)\]

Where:

\( l = \) lower 95% confidence limit
\( D = \) mean value
\( S = \) standard deviation of the percent differences

Conf. Limits; Upper

The upper probability limit (95% confidence) of the compared percent differences.

Calculation method:

\[u = D + \left( S \cdot 1.96 \right)\]

Where:

\( u = \) upper 95% confidence limit
\( D = \) mean value
\( S = \) standard deviation of the percent differences

4.7. Lead Audit Strip Analysis

This section of the report summarizes the Lead Audit Strip Analysis audits performed by PQAO.

4.7.1. Sample output

Below is a sample of the PDF output for the Lead Audit Strip Analysis layout.

DQI - Lead Audit Strip Analysis

4.7.2. Header Fields

PQAO

The name of the agency assigned as Primary Quality Assurance Organization for the monitor reporting data.

4.7.3. Columns

Year

The year the data represents.

Region

A numerical identifier (with leading zero) for the EPA Region where the monitoring site resides.

St

The two letter abbreviation for the state.

Parameter Code

The AQS code corresponding to the parameter measured by the monitor.

Lab ID

The Agency Code of the laboratory performing the audits.

% Completeness

The breakdown by year and quarter of the ratio of audits reported to audits expected.

Bias UB

The quality control bias estimator is an upper bound on the mean absolute value of the percent differences of regular and assessment samples.

Calculation method:

\[|bias| = AB + t_{0.95, n-1} \cdot \frac {AS} {\sqrt{n}} \]

Where:

\(\vert bias \vert = \) the bias estimate
\(AB = \) the mean of the absolute values of the percent differences
\(AS = \) the standard deviation of the absolute value of the percent differences
\(n = \) the number of audits examined
\(t_{0.95, n-1} = \) the 95th quantile of a t-distribution with n-1 degrees of freedom

Calculation method:

\[AB = \frac {1} {n} \cdot {\sum_{i=1}^n \vert d_i \vert } \]

Calculation method:

\[AS = \sqrt { \frac { ({n \cdot \sum_{i=1}^n {\vert d_i \vert}^2}) - ({\sum_{i=1}^n {\vert d_i \vert}})^2 } {n \cdot (n-1)} } \]

Sign (+, -, or +/-) of the bias of the percent differences.

A sign will be determined by rank ordering the percent differences of the QC check samples from a given site for a particular assessment interval.

Calculate the 25th and 75th percentiles of the percent differences for each site. The absolute bias upper bound should be flagged as positive if both percentiles are positive and negative if both percentiles are negative. If one is positive and the other is negative, the upper bound would not be flagged and +/- will be displayed.

Conf. Limits; Lower

The lower probability limit (95% confidence) of the compared percent differences. This is calculated for quality assurance assessment results.

Calculation method:

\[l = D - \left( S \cdot 1.96 \right)\]

Where:

\( l = \) lower 95% confidence limit
\( D = \) mean value
\( S = \) standard deviation of the percent differences

Conf. Limits; Upper

The upper probability limit (95% confidence) of the compared percent differences.

Calculation method:

\[u = D + \left( S \cdot 1.96 \right)\]

Where:

\( u = \) upper 95% confidence limit
\( D = \) mean value
\( S = \) standard deviation of the percent differences

5. Certification Report (AMP600)

The AQS Certification Report is also known as the Certification Evaluation and Concurrence Report and the AMP600.

This report is designed to assist AQS users in certifying their data. Data from each calendar year for criteria pollutants measured by FRMs and FEMs must be certified by May 1 of the following year. The Certification Report is a key part of that process. Running the report initiates the process whereby AQS assigns initial (suggested) certification flags to each monitor.

The report contains summary statistics about quality assurance and completeness for each monitor. More detailed information about quality assurance data is available in the Data Quality Indicators Report and more about completeness is available in the Data Completeness Report. Where the Certification report includes the same summary metric as one of the other reports, the values should match.

This document attempts to completely describe the report; however, for more information about how flags are determined, how information is summarized, and how results displayed on this report may differ from other reports, consult Guidance on the Data Certification Process developed by our QA team.

The final section of this chapter contains a summary of the evaluation criteria used for flagging data as acceptable/green, warning/yellow or recommend N/red.

This report contains sections:

  • Summary

  • Gaseous Pollutants

  • Particulate Matter

  • Lead

Each section is described in detail below.

Certifying Agency vs. PQAO

The overall goal of the report is to aggregate and assess the following values at the PQAO level:

  • NPAP Data (valid audits and NPAP bias)

  • Collocation Data (PM10, Pb and PM2.5 completeness and CV)

  • PEP Data (PM2.5 and Pb completeness and bias)

  • Pb Analysis Audit Data (completeness, bias)

The report assigns "recommended flags" to a variety of metrics with respect to certification requirements. All of the flags are assigned at the PQAO level. Therefore, monitoring organizations that are part of a larger PQAO but decide to certify the sites/data within their "certifying agency" will see the PQAO-level results. (That is, the same results will be presented for each certifying agency or monitoring organization if they run separate reports.) For example, if there are three distinct monitoring organizations within a PQAO and organization #1 has 4 PM10 sites, organization #2 has 3 PM10 sites, and organization #3 has 7 PM10 sites, the collocation summary for each organization (if each organization decides to certify its own data) will identify a total of 14 sites requiring 2 collocated monitors for the PQAO (14 × 0.15 = 2.1). Like the QA Data Quality Indicator Report (AMP 256), this report will determine the percent complete and the precision estimate for the PQAO.
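The PQAO-level pooling in the example above can be sketched as follows (rounding 2.1 down to 2 collocated monitors is inferred from that example, i.e. rounding to the nearest whole number; the function name is hypothetical):

```python
def pm_collocation_required(sites_per_org):
    # Pool PM10 sites across all monitoring organizations in the PQAO,
    # then apply the 15% collocation requirement
    total = sum(sites_per_org)
    return total, round(total * 0.15)
```

For three organizations with 4, 3, and 7 PM10 sites, pm_collocation_required([4, 3, 7]) returns (14, 2), matching the example.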

Data Completeness

Data completeness is based on the sample period start date and end date of the monitor and is not based on a calendar year. For example, if a monitor started on July 1, 2016, reported its sample period start date as July 1, 2016, and monitored successfully at the required sampling frequency throughout the year (the sample period end date was after December 31, 2016), then completeness is calculated as 100%. From a NAAQS standpoint this monitor is incomplete, but this report will show the monitor as 100% complete from the sample period start date.

For completeness for ozone data, the ozone season is used. If the monitor reports data after the ozone season it will not be used in completeness calculations. NCore ozone monitors are required to operate all year but this report uses the ozone season defined for the area where the site is located.

For continuous PM monitors, the estimate of routine data completeness may differ between the Data Completeness Report (AMP 430) and this report. The Data Completeness Report evaluates completeness by hourly values, while this report evaluates completeness by comparing the number of valid days to the number of scheduled days for the monitor. For example, consider a day where a monitor collects 18 samples: the Data Completeness Report estimates completeness as 75% (18/24), while this report considers the day valid (at least 75% complete) and therefore reports completeness as 100%. Since the Data Completeness Report evaluates completeness over a complete year (factoring in sample begin and end dates), the discrepancy between the two reports should be small.
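The day-valid rule can be illustrated as follows (a sketch assuming 24 scheduled hours per day; the names are illustrative):

```python
def day_is_valid(hours_collected, scheduled_hours=24):
    # A day counts as valid when at least 75% of scheduled hours were collected
    return hours_collected / scheduled_hours >= 0.75

def daily_completeness(hours_per_day, scheduled_days):
    # This report's view: valid days / scheduled days, as a percentage
    valid = sum(1 for h in hours_per_day if day_is_valid(h))
    return 100.0 * valid / scheduled_days
```

A single day with 18 of 24 hours is 75% complete by hourly values (the AMP 430 view) but counts as one fully valid day here.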

5.1. Data Evaluation and Concurrence Report Summary

This section of the report summarizes the results for each monitor by year and certifying agency.

5.1.1. Sample output

Here is a sample of the PDF output for the summary page layout.

Certification - Summary

5.1.2. Header Fields

Certification Year

Year for which the data is being certified. If the form of the design value spans three years, data for the certification year and the two prior years will be considered.

Certifying Agency

The name of the agency assigned as "Certifying" for the monitor reporting data.

Pollutants in Report

This section contains one row summarizing the information for each pollutant for the Certifying Agency contained in the report.

PQAOs in Report

This section contains one row summarizing the information for each Primary Quality Assurance Organization affiliated with a monitor included in the report.

Summary of 'N' flags for all pollutants

This section contains one row for each monitor that AQS recommends against certification.

5.1.3. Columns

Parameter Name

The name or description assigned in AQS to the parameter measured by the monitor. Parameters may be pollutants or non-pollutants (e.g., wind speed).

Code

The AQS code corresponding to the parameter measured by the monitor.

Monitors Evaluated

The number of monitors for that pollutant evaluated by the report.

Out of the monitors evaluated for the pollutant, the number that pass all of the AQS automated checks and for which the system recommends that the certification request be concurred with by the regional office.

Out of the monitors evaluated for the pollutant, the number that do not pass all of the AQS automated checks and for which the system recommends that the certification request not be concurred with by the regional office.

PQAO Name

The name of the agency assigned as Primary Quality Assurance Organization for the monitor reporting data.

PQAO Code

The code representing the Primary Quality Assurance Organization for this monitor.

TSA Date

The date of the most recent Technical Systems Audit of the PQAO.

PQAO

The code representing the Primary Quality Assurance Organization for this monitor.

Parameter Code

The AQS code corresponding to the parameter measured by the monitor.

AQS Site-ID

The AQS Site ID in the format XX-YYY-ZZZZ where XX is the State FIPS code, YYY is the County FIPS code, and ZZZZ is the site number within the county.
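The Site ID components can be separated as in the following sketch. This is an illustrative example only (the function name and sample Site ID are not from AQS); it simply splits the XX-YYY-ZZZZ format described above.

```python
# Illustrative sketch (not AQS code): splitting an AQS Site ID of the
# form XX-YYY-ZZZZ into State FIPS code, County FIPS code, and site number.

def parse_site_id(site_id):
    """Split an AQS Site ID string into its three components."""
    state_fips, county_fips, site_number = site_id.split("-")
    return {
        "state_fips": state_fips,    # XX: 2-digit State FIPS code
        "county_fips": county_fips,  # YYY: 3-digit County FIPS code
        "site_number": site_number,  # ZZZZ: 4-digit site number in the county
    }

# Hypothetical Site ID using real FIPS codes (06 = California,
# 037 = Los Angeles County):
print(parse_site_id("06-037-0002"))
```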

POC

This is the "Parameter Occurrence Code" used to distinguish different instruments that measure the same parameter at the same site. There is no meaning to the POC (e.g. POC 1 does not indicate the primary monitor). For example, the first monitor established to measure carbon monoxide (CO) at a site could have a POC of 1. If an additional monitor were established at the same site to measure CO, that monitor could have a POC of 2. However, if a new instrument were installed to replace the original instrument used as the first monitor, that would be the same monitor and it would still have a POC of 1.

AQS Recommended Certification Flag

The certification flag recommended by AQS, based on the completeness of the data and the results of the QA assessments.

Requested Certification Flag

Certification flag value requested by the certifying agency to override the AQS recommended value (there should be an accompanying explanation in the State Comment field of AQS).

Reason for AQS Recommendation

The AQS automated check causing the system to recommend against certification for the monitor.

5.2. Data Evaluation and Concurrence Report for Gaseous Pollutants

This section of the report gives details on the status for gaseous criteria pollutants for each year, certifying agency, and pollutant.

5.2.1. Sample output

Here is a sample of the PDF output for the gaseous details page layout.