1. Reports Guide - Introduction

1.1. About This Document

This reports guide is intended to give a full explanation of the contents of the various AQS standard reports.

This document is focused solely on the content of the reports, not how to generate them. See the AQS User’s Guide for that.

This is the first release of a new format for the reports guide and it contains information only about the reports most related to certification. These are:

  • Quicklook Criteria

  • Data Completeness

  • DQI (Data Quality Indicators)

  • Certification

  • SO2 Design Value

For each report, this document contains the following sections:

  • What knowledge the report is intended to convey / why it exists.

  • A sample of the output

  • A description of each field included

    • In the header

    • In the body, which will usually be in columns.

If you have any feedback or suggested changes, please let Nick Mangus on the AQS team know (mangus.nick@epa.gov). We will continue to add to this document based on user reception and feedback.

1.2. Technical Details

You must have JavaScript enabled in your browser for the equations to properly display.

This guide is a single (static) HTML document and is fully transportable if you want to save it to local media.

1.3. Report Formats

Each report may come in one of two formats: a "Report" format or a "Workfile" format.

The Report format is a PDF document that is laid out to make the data easy to digest for a human reader. It is more suitable for printing, with any header information repeated on each page, pages numbered, etc. The first page of every PDF report is the "cover page", which is a summary of the selections that were entered into AQS when the report was created.

The Workfile format is a text file formatted to be read by a computer (spreadsheet or other program). It usually contains a header row with the column headings followed by rows of data. The AQS convention for commenting out data in text files is to begin the line with the pound symbol (#). (That is, if a line begins with a #, it is not a data line, but a line of instructions for the benefit of human readers; like a header, etc.)

This document will confine itself to the PDF-formatted versions of the reports for simplicity. Where Workfiles are available, their contents should closely match the PDF format, and the text files will have header information (lines beginning with the hash sign, #). When generating a Workfile from AQS, the results are returned in a zip archive that includes a PDF "cover page", which is a summary of the selections that were entered into AQS when the report was created.

1.4. Naming

Report naming in AQS has a long and obscure history. Every report has a name and a number; for example, the Data Certification Report and the AMP600 are the same thing. Some people prefer to use the name, others the number. This document will usually refer to the reports by name.

2. Quicklook Report (AMP450)

The AQS Quicklook report is also known as the AMP450.

This report is designed to give AQS users a "quick look" at the key annual statistics for criteria pollutants. Key statistics are the ones that may affect design value calculations (e.g., completeness, averaging time of the standard, form of the standard, etc.)

The Quick Look Report displays annual summary statistics for selected criteria parameters at air quality monitoring sites using the calculation rules for the pollutant standard selected at the time of report generation.

There is a unique format for each of the criteria pollutants. Each format is designed to highlight special calculations that are derived for the given pollutant in order to determine compliance with the National Ambient Air Quality Standards. In addition to these special formats, a listing of the reporting organization codes referenced in the report is provided, as well as a listing of referenced sampling methodology codes at the end of the report.

Important
The most important thing to remember about this report is that each parameter - duration combination will be displayed differently.

2.1. Sample output

Below is a sample of the PDF output.

Quicklook Sample Output

This report is sorted by parameter, then duration (with a page break between parameter - duration combinations), then by geography (site), then by year. The most important thing to remember about this report is that each page may include different columns (the columns displayed are specific to parameter - duration combination). This is done because the form of the standard is different for each pollutant.

Note that unless you specifically exclude these in the report options, there will be multiple entries for the same monitor for the same year for different durations (if there are different NAAQS durations) and exceptional data types (EDTs).

The separate durations will be on different pages, and the different EDTs will be on separate lines of the same page.

2.2. Header Fields

2.2.1. Parameter

The name or description assigned in AQS to the parameter measured by the monitor. Parameters may be pollutants or non-pollutants (e.g., wind speed).

The parameter code is included parenthetically.

2.2.2. Duration

The length of time that air passes through the monitoring device before it is analyzed (measured). So, it represents an averaging period in the atmosphere (for example, a 24-hour sample duration draws ambient air over a collection filter for 24 straight hours). For continuous monitors, it can represent an averaging time of many samples (for example, a 1-hour value may be the average of four one-minute samples collected during each quarter of the hour).

2.2.3. State

The name of the state where the monitoring site is located.

2.2.4. Units (Standard for the Parameter)

The standard unit of measure for the Parameter. AQS converts all incoming data to a parameter specific "standard" unit of measure. This is done so that aggregate values can be computed.

2.3. Columns

Since the format of this report is different for each parameter, the columns are listed by parameter. The "common" identification fields shared by all the pages are listed first and then the data columns are listed for each parameter (and duration, where there are multiple durations).

Each page of the report has some header information, divided into three parts. On the left is the parameter and duration. The center contains the state name. And the right shows the standard units for the parameter.

2.3.1. Fields Common to all Parameters

Site ID

The AQS Site ID in the format XX-YYY-ZZZZ where XX is the State FIPS code, YYY is the County FIPS code, and ZZZZ is the site number within the county.

POC

This is the "Parameter Occurrence Code" used to distinguish different instruments that measure the same parameter at the same site. There is no meaning to the POC (e.g. POC 1 does not indicate the primary monitor). For example, the first monitor established to measure carbon monoxide (CO) at a site could have a POC of 1. If an additional monitor were established at the same site to measure CO, that monitor could have a POC of 2. However, if a new instrument were installed to replace the original instrument used as the first monitor, that would be the same monitor and it would still have a POC of 1.

PQAO

The name of the agency assigned as Primary Quality Assurance Organization for the monitor reporting data.

City

The name of the city where the monitoring site is located. This represents the legal incorporated boundaries of cities and not urban areas.

County

The name of the county where the monitoring site is located.

Address

The street address giving an approximate location of the site.

Year

The year the data represents.

Meth

A three-digit code representing the measurement method. A method code is only unique within a parameter (that is, method 123 for ozone is not the same as method 123 for benzene).

A description of the method codes displayed in the report is available on a page near the end of the report (usually the second to last page).

Cert

A flag indicating the official certification status of the monitor-year, as assigned by the applicable regional office user.

See the AQS code tables for meanings of Certification Flags.

EDT

A designation indicating how a summary value is affected by exceptional events. It indicates whether exceptional data exists in the time period being summarized, and whether such exceptional data is included in the summary value.

For summaries of sample measurements, the following Exceptional Data Types are available:

0: No Events. None of the measurement data contributing to the summary has been flagged for exceptional events.

1: All Events Excluded. The summary excludes any measurements that have been flagged for exclusion because of exceptional events. (These measurements are excluded whether or not EPA has concurred with the flagging.)

2: All Events Included. Measurements included in the summary have been flagged for exceptional event exclusion but their data is included. (These measurements are included whether or not EPA has concurred with the flagging.)

5: Concurred Events Excluded. The summary excludes any measurements that have been flagged for exclusion for exceptional events AND the EPA Regional Office has concurred with the flagging.

For any site/monitor and summary time period, either a type 0 summary will exist (no data was flagged), or a type 1, type 2 and type 5 summary will all exist together.

For summaries created from lower-level summaries (e.g. daily summaries created from NAAQS_Average rows):

0: Created only when only lower-level summaries with EDT_ID = 0 exist

1: Created from lower-level summaries with EDT_ID 0 and 1

2: Created from lower-level summaries with EDT_ID 0 and 2

5: Created from lower-level summaries with EDT_ID 0 and 5

2.3.2. Fields Specific to Carbon Monoxide

# Obs

The number of observations (samples) taken during the averaging period.

1st-2nd Max

The highest value for the indicated year. This only includes data at the selected duration or standard (e.g., it may be the maximum daily value). For seasonal monitoring, it only includes data during the effective monitoring season.

The second highest value for the indicated year. This only includes data at the selected duration or standard (e.g., it may be the second maximum daily value). For seasonal monitoring, it only includes data during the effective monitoring season.

Maxes for both 1-hour and 8-hour durations are shown on the report.

Obs > 1Hr Std

The number of samples in the summarized sample set that exceed the level of the primary standard. (Only applicable for criteria pollutants.)

Exceedance counts for both 1-hour and 8-hour durations are shown on the report.

2.3.3. Fields Specific to Sulfur Dioxide

Obs

The number of observations (samples) taken during the averaging period.

Comp Qtrs (Complete Quarters)

The number of quarterly summaries, with corresponding pollutant standard and exceptional data type, where the summary criterion is met.

1st-2nd Max

The highest value for the indicated year. This only includes data at the selected duration or standard (e.g., it may be the maximum daily value). For seasonal monitoring, it only includes data during the effective monitoring season.

The second highest value for the indicated year. This only includes data at the selected duration or standard (e.g., it may be the second maximum daily value). For seasonal monitoring, it only includes data during the effective monitoring season.

Maxes for both 1-hour and 24-hour durations are shown on the report.

99th Pctl 1-Hr

The 99th percentile 1-hour value (the value that 99% of all values are equal to or below).

Days > 24-Hr Std

The number of days where the 24-hour average was above the 24-hour standard.

Arith Mean An - Std

The Arithmetic Mean calculated in the way that would be comparable to the annual standard.

Calculation method:

\[u = \frac {1}{v} {\sum_{i=1}^v d_i}\]

Where:

\( u = \) mean value
\( d_i = \) valid daily maximum occurring in the effective monitoring season
\( v = \) count of valid days
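
Below is a minimal Python sketch of this mean (illustrative only; the function and variable names are not AQS terms, and it assumes the daily values have already been restricted to the effective monitoring season):

def annual_standard_mean(daily_values):
    # u = (1/v) * sum(d_i): mean of the valid daily values
    valid = [d for d in daily_values if d is not None]
    return sum(valid) / len(valid) if valid else None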

2.3.4. Fields Specific to Ozone / 1-hour

Valid Days Meas

The number of required monitoring days in the aggregation period (e.g., year) where the monitoring criteria were met. (Only applicable for criteria pollutants.)

Num Days Req

The number of days during the year on which the monitor was scheduled to take samples, if measurements are required.

1st-4th Max

The highest value for the indicated year. This only includes data at the selected duration or standard (e.g., it may be the maximum daily value). For seasonal monitoring, it only includes data during the effective monitoring season.

The second highest value for the indicated year. This only includes data at the selected duration or standard (e.g., it may be the second maximum daily value). For seasonal monitoring, it only includes data during the effective monitoring season.

The third highest value for the indicated year. This only includes data at the selected duration or standard (e.g., it may be the third maximum daily value). For seasonal monitoring, it only includes data during the effective monitoring season.

The fourth highest value for the indicated year. This only includes data at the selected duration or standard (e.g., it may be the fourth maximum daily value). For seasonal monitoring, it only includes data during the effective monitoring season.

Day Max > Std

The number of samples in the summarized sample set that exceed the level of the primary standard. (Only applicable for criteria pollutants.)

Est Days > Std

The estimated number of days greater than the standard for the year (or quarter if viewing quarterly data). It is computed for specific pollutants when an exceedance has occurred during the year. The underlying assumption is that missing data is just as likely to exceed the standard as reported data.

Calculation method:

\[e = d + \left(\left(\frac{d}{v}\right)\cdot(r-v-a)\right)\]

Where:

\( e = \) estimated days greater than standard
\( d = \) count of primary exceedances
\( v = \) count of valid days
\( r = \) count of required days
\( a = \) count of days assumed less than standard
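
As a worked illustration of the formula above, here is a minimal Python sketch (names are illustrative, not AQS fields, and this is not the AQS implementation):

def estimated_days_above_standard(d, v, r, a):
    # e = d + (d / v) * (r - v - a)
    # d: primary exceedances, v: valid days, r: required days,
    # a: days assumed less than the standard
    if v == 0:
        return 0.0  # assumption: nothing to extrapolate from without valid days
    return d + (d / v) * (r - v - a)

# Example: 3 exceedances in 100 valid days, 122 required, 10 assumed below the standard
print(estimated_days_above_standard(d=3, v=100, r=122, a=10))  # -> 3.36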

Miss Days < Std

The number of invalid or missing days in the effective monitoring season whose daily maxima are assumed to be less than or equal to the standard. (Only applicable to 1-hour and 8-hour ozone.)

Calculation method:

A missing or invalid day is assumed to be less than the standard when either of the following conditions exists:

  • The daily maximums on the days immediately preceding, and immediately succeeding, the missing day were less than, or equal to, 75% of the standard.

  • The number of valid samples for the day was less than 18 and the sum of the following is greater than or equal to 18 (i.e., 75% of the possible values):

    • Number of valid samples;

    • Number of null samples that were flagged as not likely to exceed the standard and for which the Regional Office has indicated concurrence;

    • Number of omitted samples that were flagged with event qualifiers and for which the Regional Office has indicated concurrence.

2.3.5. Fields Specific to Ozone / 8-hour

% Obs

The number of sample values reported divided by the number of sample values scheduled to have been reported for the year, expressed as a percentage.

Calculation method:

\[p = \frac {(v+a)} {n} \cdot 100\]

Where:

\( p = \) observation percent
\( v = \) number of valid days
\( a = \) number of missing days assumed less than standard
\( n = \) number of required days

Valid Days Meas

The number of required monitoring days in the aggregation period (e.g., year) where the monitoring criteria were met. (Only applicable for criteria pollutants.)

Calculation method:

The number of active days within the effective monitoring season when minimum daily criteria were met, i.e., the daily Summary Criteria Met value is "Y".

Num Days Req

The number of days during the year on which the monitor was scheduled to take samples, if measurements are required.

Calculation method:

The number of active days within the effective monitoring season.

1st-4th Max

The max values listed are the daily max values. That is, each day the maximum 8-hour average is determined. This is the daily max value. For the year, the four highest daily max values are listed.

The highest value for the indicated year. This only includes data at the selected duration or standard (e.g., it may be the maximum daily value). For seasonal monitoring, it only includes data during the effective monitoring season.

The second highest value for the indicated year. This only includes data at the selected duration or standard (e.g., it may be the second maximum daily value). For seasonal monitoring, it only includes data during the effective monitoring season.

The third highest value for the indicated year. This only includes data at the selected duration or standard (e.g., it may be the third maximum daily value). For seasonal monitoring, it only includes data during the effective monitoring season.

The fourth highest value for the indicated year. This only includes data at the selected duration or standard (e.g., it may be the fourth maximum daily value). For seasonal monitoring, it only includes data during the effective monitoring season.

Day Max > Std

The number of samples in the summarized sample set that exceed the level of the primary standard. (Only applicable for criteria pollutants.)

2.3.6. Fields Specific to PM10 / 24-Hour

Obs

The number of observations (samples) taken during the averaging period.

Num Req

The number of days during the year on which the monitor was scheduled to take samples, if measurements are required.

Calculation method:

The number of active days in the year.

Valid Days

The number of required monitoring days in the aggregation period (e.g., year) where the monitoring criteria were met. (Only applicable for criteria pollutants.)

Calculation method:

The number of valid 24-hour block Arithmetic Mean (NAAQS) values within the year.

% Obs

The number of sample values reported divided by the number of sample values scheduled to have been reported for the year, expressed as a percentage.

Calculation method:

\[p = \frac {v} {n} \cdot 100\]

Where:

\( p = \) observation percent
\( v = \) number of valid days
\( n = \) number of required (e.g., scheduled) days

1st-4th Max

The highest value for the indicated year. This only includes data at the selected duration or standard (e.g., it may be the maximum daily value). For seasonal monitoring, it only includes data during the effective monitoring season.

The second highest value for the indicated year. This only includes data at the selected duration or standard (e.g., it may be the second maximum daily value). For seasonal monitoring, it only includes data during the effective monitoring season.

The third highest value for the indicated year. This only includes data at the selected duration or standard (e.g., it may be the third maximum daily value). For seasonal monitoring, it only includes data during the effective monitoring season.

The fourth highest value for the indicated year. This only includes data at the selected duration or standard (e.g., it may be the fourth maximum daily value). For seasonal monitoring, it only includes data during the effective monitoring season.

Day Max > Std

The number of samples in the summarized sample set that exceed the level of the primary standard. (Only applicable for criteria pollutants.)

Est Days > Std

The estimated number of days greater than the standard for the year (or quarter if viewing quarterly data). It is computed for specific pollutants when an exceedance has occurred during the year. The underlying assumption is that missing data is just as likely to exceed the standard as reported data.

Calculation method:

\[E = {\sum_{q=1}^4 e_q} \quad \text{Where} \quad e_q = \left( \frac {N_q} {m_q} \right) \cdot \sum_{i=1}^{m_q} \left( \frac {v_i} {k_i} \right)\]

Where:

\( E = \) estimated number of exceedances for the year
\( e_q = \) estimated number of exceedances for calendar quarter q
\( q = \) calendar quarter
\( N_q = \) number of days in quarter q
\( m_q = \) number of strata with samples during quarter q
\( v_i = \) number of observed exceedances in stratum i
\( k_i = \) number of samples in stratum i
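
A minimal Python sketch of the quarterly/stratified estimate above (illustrative only; the data structures are assumptions, not AQS structures):

def estimated_exceedances_quarter(days_in_quarter, strata):
    # e_q = (N_q / m_q) * sum(v_i / k_i); each stratum i contributes its observed
    # exceedance rate, and only strata with samples count toward m_q.
    sampled = [(v, k) for v, k in strata if k > 0]
    if not sampled:
        return 0.0
    m_q = len(sampled)
    return (days_in_quarter / m_q) * sum(v / k for v, k in sampled)

def estimated_exceedances_year(quarters):
    # E = sum of e_q; quarters is a list of (N_q, strata) tuples
    return sum(estimated_exceedances_quarter(n_q, strata) for n_q, strata in quarters)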

2.3.7. Fields Specific to PM2.5 / 24-hour

Num Cred Days

Number of scheduled and make-up days that are given credit when determining data completeness for a site. (Note: This may not be the number of values averaged, since "extra samples" may be included in the mean.)

Calculation method:

The sum of valued, scheduled sampling days, plus make-ups for missing scheduled days.

Scheduled days are the number of days within the year that were scheduled for sampling, as determined by the EPA-defined calendar for the required collection frequency, and which also fall within the period of operation, as defined by sampling periods.

A make-up day is a sample recorded in the same stratum as, or exactly seven days after, a missing scheduled sample. In both cases, the make-up sample must occur within the same quarter as the missed sample. A maximum of five make-up samples are allowed per quarter. (References: EPA-454/R-99-008 Guideline on Data Handling Conventions for the PM NAAQS; Memorandum: February 3, 1999, Use of Make-Up Samples to Replace Scheduled PM Samples.)
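
The counting rules above can be sketched roughly in Python. This is a simplified illustration under stated assumptions (only the "exactly seven days after" make-up rule is modeled; the same-stratum rule and other details are omitted, and the names are not AQS terms):

from datetime import timedelta

def credited_days(scheduled, sampled, max_makeups_per_quarter=5):
    # scheduled, sampled: sets of datetime.date objects
    credited = {day for day in scheduled if day in sampled}   # valued scheduled days
    makeups_used = {}                                         # (year, quarter) -> count
    for missed in sorted(scheduled - sampled):
        makeup = missed + timedelta(days=7)
        quarter = (missed.year, (missed.month - 1) // 3 + 1)
        same_quarter = (makeup.year, (makeup.month - 1) // 3 + 1) == quarter
        if (makeup in sampled and makeup not in scheduled and same_quarter
                and makeups_used.get(quarter, 0) < max_makeups_per_quarter):
            credited.add(makeup)
            makeups_used[quarter] = makeups_used.get(quarter, 0) + 1
    return len(credited)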

1st-4th Max

The highest value for the indicated year. This only includes data at the selected duration or standard (e.g., it may be the maximum daily value). For seasonal monitoring, it only includes data during the effective monitoring season.

The second highest value for the indicated year. This only includes data at the selected duration or standard (e.g., it may be the second maximum daily value). For seasonal monitoring, it only includes data during the effective monitoring season.

The third highest value for the indicated year. This only includes data at the selected duration or standard (e.g., it may be the third maximum daily value). For seasonal monitoring, it only includes data during the effective monitoring season.

The fourth highest value for the indicated year. This only includes data at the selected duration or standard (e.g., it may be the fourth maximum daily value). For seasonal monitoring, it only includes data during the effective monitoring season.

98th Percentile Value

The sample value in the summarized sample set where 98% of the values in that set are less than or equal to it. (For ozone, based on valid daily maxima; for PM2.5, based on seasonal and non-seasonal algorithms.)

Note, for seasonal PM2.5 sampling, there is an alternate method for computing the 98th percentile that weights the seasons based on the number of samples.

Wtd Arith Mean

The weighted arithmetic mean for the sample values in the summarized sample set. (Only applicable to PM10 and PM2.5)

Calculation method:

\[Mean_{wtd} = \frac{\sum_{i=1}^{q} u_i} {q}\]

Where:

\( Mean_{wtd} = \) weighted arithmetic mean
\( q = \) number of active quarters
\( i = \) quarter
\( u_i = \) arithmetic mean of samples in quarter i of appropriate exceptional data type
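
A minimal Python sketch of the calculation above (illustrative; it assumes the samples have already been grouped by quarter and filtered to the appropriate exceptional data type):

def weighted_arithmetic_mean(samples_by_quarter):
    # Mean_wtd = (sum of quarterly means) / (number of active quarters)
    quarterly_means = [sum(q) / len(q) for q in samples_by_quarter if q]
    if not quarterly_means:
        return None
    return sum(quarterly_means) / len(quarterly_means)

# Example: quarters with different sample counts are weighted equally
print(weighted_arithmetic_mean([[10, 12], [8, 9, 10, 11], [7], [9, 9]]))  # -> 9.125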

Important
While all other PM2.5 data on the Quicklook report is calculated for the 24-hour PM2.5 standard, the weighted arithmetic mean is calculated for the annual standard. (Since there are two primary PM2.5 standards and we had room on the report, we included metrics reflecting each.) To put it another way, a PM2.5 row in the Quicklook report combines data from two different annual summary records: one for the 24-hour standard and one for the annual standard. They may have different samples excluded, since exclusion is specific to a standard.

3. Data Completeness Report (AMP430)

The AQS Data Completeness Report is also known as the AMP430.

This report is designed to give AQS users a summary of the status of their data with respect to the operating information (e.g., sampling schedules) also entered in AQS.

This report consists of several sections. If applicable, the first pages list monitors that are "active" but did not report sample data. AQS considers a monitor active if it has an open sample period that overlaps with the time selection window.

For all monitors that did report data, the information will be sorted by region, state, reporting organization, and monitor type, with a page break on each.

3.1. Sample output

Below is a sample of the PDF output for the "Monitors Reporting" section of the Data Completeness report.

Data Completeness Report

3.2. Header Fields

3.2.1. Date Range

The date range for which the completeness statistics are valid. This will reflect the date range selected when the report was created.

3.2.2. Region

The EPA Region Code and the name of the city where the Regional Office is located.

3.2.3. State

The name of the state where the monitoring site is located.

3.2.4. REP ORG (Reporting Organization)

The name of the agency assigned as the Reporting Organization for the monitor reporting data.

3.2.5. MONITOR TYPE

An administrative or regulatory classification for the monitor.

3.3. Columns

3.3.1. SITE ID

The AQS Site ID in the format XX-YYY-ZZZZ where XX is the State FIPS code, YYY is the County FIPS code, and ZZZZ is the site number within the county.

3.3.2. CITY

The name of the city where the monitoring site is located. This represents the legal incorporated boundaries of cities and not urban areas.

3.3.3. ADDRESS

The street address giving an approximate location of the site.

3.3.4. PARAMETER

The parameter code and name are listed.

3.3.5. POC

This is the "Parameter Occurrence Code" used to distinguish different instruments that measure the same parameter at the same site. There is no meaning to the POC (e.g. POC 1 does not indicate the primary monitor). For example, the first monitor established to measure carbon monoxide (CO) at a site could have a POC of 1. If an additional monitor were established at the same site to measure CO, that monitor could have a POC of 2. However, if a new instrument were installed to replace the original instrument used as the first monitor, that would be the same monitor and it would still have a POC of 1.

3.3.6. DURATION

The length of time that air passes through the monitoring device before it is analyzed (measured). So, it represents an averaging period in the atmosphere (for example, a 24-hour sample duration draws ambient air over a collection filter for 24 straight hours). For continuous monitors, it can represent an averaging time of many samples (for example, a 1-hour value may be the average of four one-minute samples collected during each quarter of the hour).

3.3.7. METHOD

A three-digit code representing the measurement method. A method code is only unique within a parameter (that is, method 123 for ozone is not the same as method 123 for benzene).

3.3.8. OBSERVATIONS

For each month, the number of observations and the percent completeness for that month; these are also listed for the year as a whole. (A sketch of this calculation follows the rules below.)

The following rules are applied to determining completeness:

  • Monitoring seasons are considered. If a monitor is seasonal, AQS only expects data to be reported during the season.

  • Sample schedules are used to determine if a monitor was active. AQS expects the monitor to have collected data during all sample periods (between all sample period begin dates and sample period end dates).

  • If the date range selected for the report is not an entire calendar year, then only the months selected are included in the annual total. That is, an excluded month does not count against the annual total.
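
Here is a minimal Python sketch of the monthly and annual percent-completeness calculation, under the rules above (illustrative only; the inputs are assumed to already reflect monitoring seasons and sample schedules):

def percent_complete(observed_by_month, scheduled_by_month):
    # observed_by_month / scheduled_by_month: dicts keyed by month number (1-12)
    results = {}
    total_obs = total_sched = 0
    for month, scheduled in scheduled_by_month.items():
        if scheduled <= 0:
            continue  # out-of-season or excluded months do not count against the total
        obs = observed_by_month.get(month, 0)
        results[month] = 100.0 * obs / scheduled
        total_obs += obs
        total_sched += scheduled
    results["year"] = 100.0 * total_obs / total_sched if total_sched else None
    return results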

4. QA Data Quality Indicator Report (AMP256)

The AQS QA Data Quality Indicator Report is also known as the DQI Report and the AMP256.

This report is designed to give AQS users a summary of the status of their quality assurance activities.

The DQI Report displays a summary of quality assurance activities as required by Appendix A of the monitoring rule. The report is sorted by assessment, PQAO, parameter, and year. There is a page break after a change in assessment, PQAO, or parameter. At the end of each assessment section is a summary row by year, then by PQAO.

This report contains seven sections related to the quality assurance assessment types:

  • One Point Quality Control

  • Flow Rate Verifications (for particulate parameters)

  • Annual Performance Evaluation

  • Semi-Annual Flow Rate Audits

  • Collocation Summary

  • Performance Evaluation Program (PEP)

  • Lead Audit Strip Analysis

Each is described in its own section below.

The report begins with a "Notes" page with some explanatory information about the report. Primarily, it lists the Monitor Type (MT) codes that may appear elsewhere in the report.

4.1. One Point Quality Control

This section of the report summarizes the One Point QC checks performed on the monitors that meet the selection criteria entered when the report was created.

The report is sorted by PQAO and parameter with a page break on changing PQAO.

4.1.1. Sample output

Below is a sample of the PDF output for the One Point Quality Control layout.

DQI - One Point Quality Control

4.1.2. Header Fields

Pollutant

The name or description assigned in AQS to the parameter measured by the monitor. Parameters may be pollutants or non-pollutants (e.g., wind speed).

PQAO

The name of the agency assigned as Primary Quality Assurance Organization for the monitor reporting data.

APP A?

An indicator (Y or N) as to whether the listed pollutant is subject to the requirements of 40 CFR Part 50 Appendix A.

4.1.3. Columns

Year

The year the data represents.

Region

A numerical identifier (with leading zero) for the EPA Region where the monitoring site resides.

State

The two letter abbreviation for the state.

Site IDs

The AQS Site ID in the format XX-YYY-ZZZZ where XX is the State FIPS code, YYY is the County FIPS code, and ZZZZ is the site number within the county.

POC

This is the "Parameter Occurrence Code" used to distinguish different instruments that measure the same parameter at the same site. There is no meaning to the POC (e.g. POC 1 does not indicate the primary monitor). For example, the first monitor established to measure carbon monoxide (CO) at a site could have a POC of 1. If an additional monitor were established at the same site to measure CO, that monitor could have a POC of 2. However, if a new instrument were installed to replace the original instrument used as the first monitor, that would be the same monitor and it would still have a POC of 1.

MT (Monitor Type)

An administrative or regulatory classification for the monitor.

See the "Notes" page at the beginning of the report for the meanings of the abbreviations.

Begin Date

The greatest of: January 1 of the summary year; Earliest date falling within a sample period for the monitor and summary year; Earliest date falling within the PQAO assignment for the monitor and summary year.

End Date

The earliest of: December 31 of the summary year; Latest date falling within a sample period for the monitor and summary year; Latest date falling within the PQAO assignment for the monitor and summary year; Calendar date that the summary is retrieved.

Intervals Required

The number of intervals between the begin and end date that require a one point quality control check.

The number of two week periods falling completely within a sample period and the PQAO assignment for the monitor for the summary year.

Valued Intervals

The number of intervals between the begin and end date in which a one point quality control check was reported.

% Complete

The 1-point QC completeness data is evaluated in the following manner:

  • Count the number of checks in each 14-day interval, starting with the Jan 1-14 interval. For each 14-day interval, multiple checks count as only one.

  • Divide the total count of intervals with a check (from the first step) by 26.

For certification, a green Y requires > 75%. That means a monitoring organization could miss six 14-day intervals (meaning a check fell outside the 14-day interval) and still get a green Y. For a yellow flag, they could miss as many as nine 14-day intervals and get a warning. Missing ten 14-day intervals will elicit an N flag.
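
A minimal Python sketch of the interval counting described above (illustrative; how AQS handles the last day or two of the year, beyond the 26th interval, is an assumption here):

from datetime import date

def one_point_qc_percent_complete(check_dates, year):
    # Count the 14-day intervals (Jan 1-14 is the first) containing at least one
    # check; multiple checks in an interval count once. Divide by 26.
    intervals_with_check = set()
    for d in check_dates:
        if d.year != year:
            continue
        interval = min((d.timetuple().tm_yday - 1) // 14, 25)  # 0-25; clamp days 365/366
        intervals_with_check.add(interval)
    return 100.0 * len(intervals_with_check) / 26

print(one_point_qc_percent_complete([date(2023, 1, 3), date(2023, 1, 20)], 2023))  # ~7.7%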

CV UB

The 90% upper bound on the Coefficient of Variation. This is often used as the precision estimate for one point quality control checks. Only valid observations are used in this calculation.

Calculation method:

\[CV = \sqrt{ \frac { n \cdot {\left( {\sum_{i=1}^n d_i^2 } \right)} - {\left( {\sum_{i=1}^n d_i } \right)^2} } {2n \cdot (n-1)} } \cdot \sqrt { \frac {n-1} { \chi_{0.1, n-1}^2} } \]

Where:

\(CV = \) the coefficient of variation
\(d = \) the percent difference for any given audit
\(n = \) the number of audits examined
\({ \chi_{0.1, n-1}^2} = \) the 10th percentile of a chi-squared distribution with n-1 degrees of freedom

Calculation method:

\[d = \frac { \left( Y - X \right) } {X} \cdot 100\]

Where:

\( d = \) percent difference
\( X = \) indicated (measured) concentration or flow rate
\( Y = \) known (true) concentration or flow rate
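
Putting the two formulas above together, a minimal Python sketch (illustrative only; it assumes the percent differences have already been computed and uses SciPy for the chi-squared quantile):

import numpy as np
from scipy.stats import chi2

def cv_upper_bound(percent_diffs):
    # 90% upper bound on the coefficient of variation of the percent differences
    d = np.asarray(percent_diffs, dtype=float)
    n = d.size
    if n < 2:
        return None  # assumption: not computable with fewer than two checks
    cv = np.sqrt((n * np.sum(d**2) - np.sum(d)**2) / (2 * n * (n - 1)))
    return cv * np.sqrt((n - 1) / chi2.ppf(0.1, n - 1))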

Bias UB

The quality control bias estimator is an upper bound on the mean absolute value of the percent differences of regular and assessment samples.

Calculation method:

\[|bias| = AB + t_{0.95, n-1} \cdot \frac {AS} {\sqrt{n}} \]

Where:

\(\vert bias \vert = \) the bias estimate
\(AB = \) the mean of the absolute values of the percent differences
\(AS = \) the standard deviation of the absolute value of the percent differences
\(n = \) the number of audits examined
\(t_{0.95, n-1} = \) the 95th quantile of a t-distribution with n-1 degrees of freedom

Calculation method:

\[AB = \frac {1} {n} \cdot {\sum_{i=1}^n \vert d_i \vert } \]

Calculation method:

\[AS = \sqrt { \frac { ({n \cdot \sum_{i=1}^n {\vert d_i \vert}^2}) - ({\sum_{i=1}^n {\vert d_i \vert}})^2 } {n \cdot (n-1)} } \]

Sign (+, -, or +/-) of the bias of the percent differences.

A sign will be determined by rank ordering the percent differences of the QC check samples from a given site for a particular assessment interval.

Calculate the 25th and 75th percentiles of the percent differences for each site. The absolute bias upper bound should be flagged as positive if both percentiles are positive and negative if both percentiles are negative. If one is positive and the other is negative, the upper bound would not be flagged and +/- will be displayed.
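
A minimal Python sketch of the bias upper bound and its sign flag (illustrative only; it uses SciPy/NumPy, and the exact percentile interpolation AQS uses is an assumption):

import numpy as np
from scipy.stats import t

def bias_upper_bound(percent_diffs):
    d = np.asarray(percent_diffs, dtype=float)
    n = d.size
    if n < 2:
        return None, "+/-"  # assumption: not computable with fewer than two audits
    ab = np.mean(np.abs(d))           # AB: mean of |d|
    as_ = np.std(np.abs(d), ddof=1)   # AS: sample standard deviation of |d|
    bias = ab + t.ppf(0.95, n - 1) * as_ / np.sqrt(n)
    p25, p75 = np.percentile(d, [25, 75])
    sign = "+" if (p25 > 0 and p75 > 0) else "-" if (p25 < 0 and p75 < 0) else "+/-"
    return bias, sign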

4.2. Annual Performance Evaluation

This section of the report summarizes the Annual Performance Evaluation checks performed on the monitors that meet the selection criteria entered when the report was created.

The report is sorted by PQAO and parameter with a page break on changing PQAO.

4.2.1. Sample output

Below is a sample of the PDF output for the Annual Performance Evaluation layout.

DQI - Annual Performance Evaluation

4.2.2. Header Fields

Pollutant

The name or description assigned in AQS to the parameter measured by the monitor. Parameters may be pollutants or non-pollutants (e.g., wind speed).

PQAO

The name of the agency assigned as Primary Quality Assurance Organization for the monitor reporting data.

APP A?

An indicator (Y or N) as to whether the listed pollutant is subject to the requirements of 40 CFR Part 50 Appendix A.

4.2.3. Columns

Year

The year the data represents.

Region

A numerical identifier (with leading zero) for the EPA Region where the monitoring site resides.

State

The two letter abbreviation for the state.

Site IDs

The AQS Site ID in the format XX-YYY-ZZZZ where XX is the State FIPS code, YYY is the County FIPS code, and ZZZZ is the site number within the county.

POC

This is the "Parameter Occurrence Code" used to distinguish different instruments that measure the same parameter at the same site. There is no meaning to the POC (e.g. POC 1 does not indicate the primary monitor). For example, the first monitor established to measure carbon monoxide (CO) at a site could have a POC of 1. If an additional monitor were established at the same site to measure CO, that monitor could have a POC of 2. However, if a new instrument were installed to replace the original instrument used as the first monitor, that would be the same monitor and it would still have a POC of 1.

MT

An administrative or regulatory classification for the monitor.

Begin Date

The greatest of: January 1 of the summary year; Earliest date falling within a sample period for the monitor and summary year; Earliest date falling within the PQAO assignment for the monitor and summary year.

End Date

The earliest of: December 31 of the summary year; Latest date falling within a sample period for the monitor and summary year; Latest date falling within the PQAO assignment for the monitor and summary year; Calendar date that the summary is retrieved.

Avg %D / Lvl

The average of the percent differences for all pairs taken at each of the audit levels. Levels 1-5 are displayed on the first row, and levels 6-10 are displayed on the second row (i.e., level 6 is immediately below level 1).

Calculation method:

\[d = \frac { \left( Y - X \right) } {X} \cdot 100\]

Where:

\( d = \) percent difference
\( X = \) indicated (measured) concentration or flow rate
\( Y = \) known (true) concentration or flow rate

Obs / Q

The number of audits performed in each calendar quarter.

Criteria Met?

An indicator of whether the Annual Performance Audit reporting requirement was met. A monitor has met the conditions if (a) an audit was performed at least once per year and (b) the audit was performed, at a minimum, within three consecutive levels (1,2,3 or 2,3,4 or 3,4,5) within each of those years.

Conf. Limits; Lower

The lower probability limit (95% confidence) of the compared percent differences. This is calculated for quality assurance assessment results.

Calculation method:

\[l = D - \left( S \cdot 1.96 \right)\]

Where:

\( l = \) lower 95% confidence limit
\( D = \) mean value
\( S = \) standard deviation of the percent differences

Conf. Limits; Upper

The upper probability limit (95% confidence) of the compared percent differences.

Calculation method:

\[u = D + \left( S \cdot 1.96 \right)\]

Where:

\( u = \) upper 95% confidence limit
\( D = \) mean value
\( S = \) standard deviation of the percent differences

% Bet. Cf Lim

The percent of audits that are between the upper and lower confidence limits. This value is rounded to a whole number.
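
A minimal Python sketch covering the three fields above (illustrative only; the use of the sample standard deviation and inclusive endpoints are assumptions):

import numpy as np

def confidence_limits_and_coverage(percent_diffs):
    d = np.asarray(percent_diffs, dtype=float)
    mean = d.mean()
    s = d.std(ddof=1)            # standard deviation of the percent differences
    lower, upper = mean - 1.96 * s, mean + 1.96 * s
    pct_between = round(100.0 * np.mean((d >= lower) & (d <= upper)))
    return lower, upper, pct_between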

4.3. Flow Rate Verification

This section of the report summarizes the Flow Rate Verifications performed on the monitors that meet the selection criteria entered when the report was created.

The report is sorted by PQAO and parameter with a page break on changing PQAO.

4.3.1. Sample output

Here is a sample of the PDF output for the Flow Rate Verification layout.

DQI - Flow Rate Verification

4.3.2. Header Fields

Pollutant

The name or description assigned in AQS to the parameter measured by the monitor. Parameters may be pollutants or non-pollutants (e.g., wind speed).

PQAO

The name of the agency assigned as Primary Quality Assurance Organization for the monitor reporting data.

APP A?

An indicator (Y or N) as to whether the listed pollutant is subject to the requirements of 40 CFR Part 50 Appendix A.

4.3.3. Columns

Year

The year the data represents.

Reg

A numerical identifier (with leading zero) for the EPA Region where the monitoring site resides.

St

The two letter abbreviation for the state.

Site IDs

The AQS Site ID in the format XX-YYY-ZZZZ where XX is the State FIPS code, YYY is the County FIPS code, and ZZZZ is the site number within the county.

POC

This is the "Parameter Occurrence Code" used to distinguish different instruments that measure the same parameter at the same site. There is no meaning to the POC (e.g. POC 1 does not indicate the primary monitor). For example, the first monitor established to measure carbon monoxide (CO) at a site could have a POC of 1. If an additional monitor were established at the same site to measure CO, that monitor could have a POC of 2. However, if a new instrument were installed to replace the original instrument used as the first monitor, that would be the same monitor and it would still have a POC of 1.

MT (Monitor Type)

An administrative or regulatory classification for the monitor.

See the "Notes" page at the beginning of the report for the meanings of the abbreviations.

Begin Date

The greatest of: January 1 of the summary year; Earliest date falling within a sample period for the monitor and summary year; Earliest date falling within the PQAO assignment for the monitor and summary year.

End Date

The earliest of: December 31 of the summary year; Latest date falling within a sample period for the monitor and summary year; Latest date falling within the PQAO assignment for the monitor and summary year; Calendar date that the summary is retrieved.

# Obs Required

The number of intervals between the begin and end date that require a one point quality control check.

The number of observations (samples) taken during the averaging period.

# Obs

The number of observations (samples) taken during the averaging period.

% D

The relative difference between the known and measured concentrations, expressed as a percentage.

Calculation method:

\[d = \frac { \left( Y - X \right) } {X} \cdot 100\]

Where:

\( d = \) percent difference
\( X = \) indicated (measured) concentration or flow rate
\( Y = \) known (true) concentration or flow rate

% Complete

The number of sample values reported divided by the number of sample values scheduled to have been reported for the year, expressed as a percentage.

Bias UB

The quality control bias estimator is an upper bound on the mean absolute value of the percent differences of regular and assessment samples.

Calculation method:

\[|bias| = AB + t_{0.95, n-1} \cdot \frac {AS} {\sqrt{n}} \]

Where:

\(\vert bias \vert = \) the bias estimate
\(AB = \) the mean of the absolute values of the percent differences
\(AS = \) the standard deviation of the absolute value of the percent differences
\(n = \) the number of audits examined
\(t_{0.95, n-1} = \) the 95th quantile of a t-distribution with n-1 degrees of freedom

Calculation method:

\[AB = \frac {1} {n} \cdot {\sum_{i=1}^n \vert d_i \vert } \]

Calculation method:

\[AS = \sqrt { \frac { ({n \cdot \sum_{i=1}^n {\vert d_i \vert}^2}) - ({\sum_{i=1}^n {\vert d_i \vert}})^2 } {n \cdot (n-1)} } \]

Sign (+, -, or +/-) of the bias of the percent differences.

A sign will be determined by rank ordering the percent differences of the QC check samples from a given site for a particular assessment interval.

Calculate the 25th and 75th percentiles of the percent differences for each site. The absolute bias upper bound should be flagged as positive if both percentiles are positive and negative if both percentiles are negative. If one is positive and the other is negative, the upper bound would not be flagged and +/- will be displayed.

4.4. Semi-Annual Flow Rate Audits

This section of the report summarizes the Semi-Annual Flow Rate Audits performed on the monitors that meet the selection criteria entered when the report was created.

The report is sorted by PQAO and parameter with a page break on PQAO changes.

4.4.1. Sample output

Below is a sample of the PDF output for the Semi-Annual Flow Rate Audits layout.

DQI - Semi-Annual Flow Rate Audits

4.4.2. Header Fields

Pollutant

The name or description assigned in AQS to the parameter measured by the monitor. Parameters may be pollutants or non-pollutants (e.g., wind speed).

PQAO

The name of the agency assigned as Primary Quality Assurance Organization for the monitor reporting data.

APP A?

An indicator (Y or N) as to whether the listed pollutant is subject to the requirements of 40 CFR Part 50 Appendix A.

4.4.3. Columns

Year

The year the data represents.

Reg

A numerical identifier (with leading zero) for the EPA Region where the monitoring site resides.

St

The two letter abbreviation for the state.

AQS Site IDs

The AQS Site ID in the format XX-YYY-ZZZZ where XX is the State FIPS code, YYY is the County FIPS code, and ZZZZ is the site number within the county.

POC

This is the "Parameter Occurrence Code" used to distinguish different instruments that measure the same parameter at the same site. There is no meaning to the POC (e.g. POC 1 does not indicate the primary monitor). For example, the first monitor established to measure carbon monoxide (CO) at a site could have a POC of 1. If an additional monitor were established at the same site to measure CO, that monitor could have a POC of 2. However, if a new instrument were installed to replace the original instrument used as the first monitor, that would be the same monitor and it would still have a POC of 1.

MT (Monitor Type)

An administrative or regulatory classification for the monitor.

See the "Notes" page at the beginning of the report for the meanings of the abbreviations.

Begin Date

The greatest of: January 1 of the summary year; Earliest date falling within a sample period for the monitor and summary year; Earliest date falling within the PQAO assignment for the monitor and summary year.

End Date

The earliest of: December 31 of the summary year; Latest date falling within a sample period for the monitor and summary year; Latest date falling within the PQAO assignment for the monitor and summary year; Calendar date that the summary is retrieved.

# Req (Number Required)

The number of intervals between the begin and end date that require a one-point quality control check.

The number of two week periods falling completely within a sample period and the PQAO assignment for the monitor for the summary year.

#Q

The number of distinct quarters in which an evaluation was performed.

% Complete

The ratio of actual evaluations to required evaluations. If the sampler operates fewer than 9 months, at least one audit is expected; if it operates more than 9 months, two audits are expected.

This field is valued at 100% if there were 3 or 4 audits in 3 or 4 distinct quarters; or if there were 2 audits during the year separated by at least 5 months.

This field is valued at 50% if there were 2 audits during the year not separated by at least 5 months.
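
The rules above do not cover every case; the following Python sketch implements just the stated special cases and falls back to a simple actual/required ratio otherwise (illustrative only, with assumed inputs):

def semiannual_audit_percent_complete(audit_dates, months_operated):
    # audit_dates: list of datetime.date audit dates within the year
    required = 1 if months_operated < 9 else 2
    quarters = {(d.year, (d.month - 1) // 3 + 1) for d in audit_dates}
    n = len(audit_dates)
    if n >= 3 and len(quarters) >= 3:
        return 100.0                      # 3 or 4 audits in 3 or 4 distinct quarters
    if n == 2:
        # assumption: "separated by at least 5 months" compared by month number only
        months_apart = abs((audit_dates[1].year - audit_dates[0].year) * 12
                           + audit_dates[1].month - audit_dates[0].month)
        return 100.0 if months_apart >= 5 else 50.0
    return min(100.0, 100.0 * n / required)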

Obs / Q

The number of audits performed in each calendar quarter.

Avg % d (Average Percent Difference)

The average of the Percent Differences (d).

Calculate the average of the values determined using the equation below for each audit.

Calculation method:

\[d = \frac { \left( Y - X \right) } {X} \cdot 100\]

Where:

\( d = \) percent difference
\( X = \) indicated (measured) concentration or flow rate
\( Y = \) known (true) concentration or flow rate

Conf. Limits; Lower

The lower probability limit (95% confidence) of the compared percent differences. This is calculated for quality assurance assessment results.

Calculation method:

\[l = D - \left( S \cdot 1.96 \right)\]

Where:

\( l = \) lower 95% confidence limit
\( D = \) mean value
\( S = \) standard deviation of the percent differences

Conf. Limits; Upper

The upper probability limit (95% confidence) of the compared percent differences.

Calculation method:

\[u = D + \left( S \cdot 1.96 \right)\]

Where:

\( u = \) upper 95% confidence limit
\( D = \) mean value
\( S = \) standard deviation of the percent differences

% Between Conf Lmt

The percent of audits that are between the upper and lower confidence limits. This value is rounded to a whole number.

4.5. Collocation Summary

This section of the report summarizes the Collocated Samples collected at the monitors that meet the selection criteria entered when the report was created.

Note that this section of the report is preceded by the collocation details section, which contains mostly the same information by monitor, whereas the summary is by method.

The report is sorted by PQAO and method with a page break between PQAOs.

4.5.1. Sample output

Below is a sample of the PDF output for the Collocation Summary layout.

DQI - Collocation Summary

4.5.2. Header Fields

Pollutant

The name or description assigned in AQS to the parameter measured by the monitor. Parameters may be pollutants or non-pollutants (e.g., wind speed).

PQAO

The name of the agency assigned as Primary Quality Assurance Organization for the monitor reporting data.

APP A?

An indicator (Y or N) as to whether the listed pollutant is subject to the requirements of 40 CFR Part 50 Appendix A.

4.5.3. Columns

Year

The year the data represents.

Method

A three-digit code representing the measurement method. A method code is only unique within a parameter (that is, method 123 for ozone is not the same as method 123 for benzene).

Region

A numerical identifier (with leading zero) for the EPA Region where the monitoring site resides.

State

The two letter abbreviation for the state.

# Sites

The number of collocated audits required. For PM, this is calculated as 15% of total sites. For Lead, if there are 5 or fewer lead sites in the PQAO measuring the parameter in a given year, then 4 Lead (PM10 or TSP) collocated audits are required; if there are more than 5 lead sites in the PQAO within the year, the number required is 6.

This number only includes sites that meet the selection criteria entered when the report was created and that are within the sort group of: method and PQAO.

# Colloc Required

The number of sites (within the group) that require collocated monitors.

# Colloc Actual

The number of sites (within the group) that have collocated monitors.

% of Req Sites Colloc

The percentage of sites that meet their collocation requirements, based on the previous two fields.

# Obs Req

The number of collocated audits that are required to be performed. Determined as CEIL(0.2 * Monitor_Count)

# Obs

The number of collocated data pairs reported to AQS.

# Valid Obs

The number of valid collocated value pairs recorded during the time period. For a single assessment, the value will be 1 if the pair is valid and 0 if it is not. The "valid" collocated observations are those where the primary and collocated measurement values are both equal to or above the limits identified in 40 CFR Part 58 Appendix A 4(c). These are the values that are used for precision and bias calculations (e.g. CV UB).

% Complete

The ratio of # Obs to # Obs Req.

CV UB

The 90% upper bound on the Coefficient of Variation. This is often used as the precision estimate for one point quality control checks. Only valid observations are used in this calculation.

Calculation method:

\[CV = \sqrt{ \frac { n \cdot {\left( {\sum_{i=1}^n d_i^2 } \right)} - {\left( {\sum_{i=1}^n d_i } \right)^2} } {2n \cdot (n-1)} } \cdot \sqrt { \frac {n-1} { \chi_{0.1, n-1}^2} } \]

Where:

\(CV = \) the coefficient of variation
\(d = \) the percent difference for any given audit
\(n = \) the number of audits examined
\({ \chi_{0.1, n-1}^2} = \) the 10th percentile of a chi-squared distribution with n-1 degrees of freedom

Calculation method:

\[d = \frac { \left( Y - X \right) } {X} \cdot 100\]

Where:

\( d = \) percent difference
\( X = \) indicated (measured) concentration or flow rate
\( Y = \) known (true) concentration or flow rate

4.6. Performance Evaluation Program (PEP)

This section of the report summarizes the Performance Evaluation Program (PEP) audits performed at the monitors that meet the selection criteria entered when the report was created.

The report is sorted on PQAO and method with a page break between PQAOs.

4.6.1. Sample output

Here is a sample of the PDF output for the Performance Evaluation Program (PEP) layout.

DQI - Performance Evaluation Program (PEP)

4.6.2. Header Fields

Pollutant

The name or description assigned in AQS to the parameter measured by the monitor. Parameters may be pollutants or non-pollutants (e.g., wind speed).

PQAO

The name of the agency assigned as Primary Quality Assurance Organization for the monitor reporting data.

APP A?

An indicator (Y or N) as to whether the listed pollutant is subject to the requirements of 40 CFR Part 50 Appendix A.

4.6.3. Columns

Year

The year the data represents.

Region

A numerical identifier (with leading zero) for the EPA Region where the monitoring site resides.

State

The two letter abbreviation for the state.

# Sites

The number of collocated audits required. For PM, this is calculated as 15% of total sites. For Lead, if there are 5 or fewer lead sites in the PQAO measuring the parameter in a given year, then 4 Lead (PM10 or TSP) collocated audits are required; if there are more than 5 lead sites in the PQAO within the year, the number required is 6.

This number only includes sites that meet the selection criteria entered when the report was created and that are within the sort group of: method and PQAO.

# PEP Required

The number of PEP (performance evaluation program) assessment observations that are required to meet regulatory obligations.

# PEP Collected

The number of PEP (performance evaluation program) assessment observations that were reported to AQS.

# Colloc PEP Required

The number of Collocated Performance Evaluation Program (PEP) audits required to be collected. This value is only computed for SUMMARY rows, where applicable.

# Colloc PEP Collected

Lead Only: Number of Collocated PEP Audits Collected

% Complete

The percentage of required observations (or scheduled days) made for the given assessment time period.

Bias

The quality control bias estimator is an upper bound on the mean absolute value of the percent differences of regular and assessment samples.

Calculation method:

\[|bias| = AB + t_{0.95, n-1} \cdot \frac {AS} {\sqrt{n}} \]

Where:

\(\vert bias \vert = \) the bias estimate
\(AB = \) the mean of the absolute values of the percent differences
\(AS = \) the standard deviation of the absolute value of the percent differences
\(n = \) the number of audits examined
\(t_{0.95, n-1} = \) the 95th quantile of a t-distribution with n-1 degrees of freedom

Calculation method:

\[AB = \frac {1} {n} \cdot {\sum_{i=1}^n \vert d_i \vert } \]

Calculation method:

\[AS = \sqrt { \frac { ({n \cdot \sum_{i=1}^n {\vert d_i \vert}^2}) - ({\sum_{i=1}^n {\vert d_i \vert}})^2 } {n \cdot (n-1)} } \]

Sign (+, -, or +/-) of the bias of the percent differences.

A sign will be determined by rank ordering the percent differences of the QC check samples from a given site for a particular assessment interval.

Calculate the 25th and 75th percentiles of the percent differences for each site. The absolute bias upper bound should be flagged as positive if both percentiles are positive and negative if both percentiles are negative. If one is positive and the other is negative, the upper bound would not be flagged and +/- will be displayed.

Conf. Limits; Lower

The lower probability limit (95% confidence) of the compared percent differences. This is calculated for quality assurance assessment results.

Calculation method:

\[l = D - \left( S \cdot 1.96 \right)\]

Where:

\( l = \) lower 95% confidence limit
\( D = \) mean value
\( S = \) standard deviation of the percent differences

Conf. Limits; Upper

The upper probability limit (95% confidence) of the compared percent differences.

Calculation method:

\[u = D + \left( S \cdot 1.96 \right)\]

Where:

\( u = \) upper 95% confidence limit
\( D = \) mean value
\( S = \) standard deviation of the percent differences

4.7. Lead Audit Strip Analysis

This section of the report summarizes the Lead Audit Strip Analysis audits performed by PQAO.

4.7.1. Sample output

Below is a sample of the PDF output for the Lead Audit Strip Analysis layout.

DQI - Lead Audit Strip Analysis

4.7.2. Header Fields

PQAO

The name of the agency assigned as Primary Quality Assurance Organization for the monitor reporting data.

4.7.3. Columns

Year

The year the data represents.

Region

A numerical identifier (with leading zero) for the EPA Region where the monitoring site resides.

St

The two letter abbreviation for the state.

Parameter Code

The AQS code corresponding to the parameter measured by the monitor.

Lab ID

The Agency Code of the laboratory performing the audits.

% Completeness

The breakdown by year and quarter of the ratio of reported audits to expected audits.

Bias UB

The quality control bias estimator is an upper bound on the mean absolute value of the percent differences of regular and assessment samples.

Calculation method:

\[|bias| = AB + t_{0.95, n-1} \cdot \frac {AS} {\sqrt{n}} \]

Where:

\(\vert bias \vert = \) the bias estimate
\(AB = \) the mean of the absolute values of the percent differences
\(AS = \) the standard deviation of the absolute value of the percent differences
\(n = \) the number of audits examined
\(t_{0.95, n-1} = \) the 95th quantile of a t-distribution with n-1 degrees of freedom

Calculation method:

\[AB = \frac {1} {n} \cdot {\sum_{i=1}^n \vert d_i \vert } \]

Calculation method:

\[AS = \sqrt { \frac { ({n \cdot \sum_{i=1}^n {\vert d_i \vert}^2}) - ({\sum_{i=1}^n {\vert d_i \vert}})^2 } {n \cdot (n-1)} } \]

Sign (+, -, or +/-) of the bias of the percent differences.

A sign will be determined by rank ordering the percent differences of the QC check samples from a given site for a particular assessment interval.

Calculate the 25th and 75th percentiles of the percent differences for each site. The absolute bias upper bound should be flagged as positive if both percentiles are positive and negative if both percentiles are negative. If one is positive and the other is negative, the upper bound would not be flagged and +/- will be displayed.

Conf. Limits; Lower

The lower probability limit (95% confidence) of the compared percent differences. This is calculated for quality assurance assessment results.

Calculation method:

\[l = D - \left( S \cdot 1.96 \right)\]

Where:

\( l = \) lower 95% confidence limit
\( D = \) mean value
\( S = \) standard deviation of the percent differences

Conf. Limits; Upper

The upper probability limit (95% confidence) of the compared percent differences.

Calculation method:

\[u = D + \left( S \cdot 1.96 \right)\]

Where:

\( u = \) upper 95% confidence limit
\( D = \) mean value
\( S = \) standard deviation of the percent differences

5. Certification Report (AMP600)

The AQS Certification Report is also known as the Certification Evaluation and Concurrence Report and the AMP600.

This report is designed to assist AQS users in certifying their data. Data from each calendar year for criteria pollutants measured by FRMs and FEMs must be certified by May 1 of the following year. The Certification Report is a key part of that process. Running the report initiates the process whereby AQS assigns initial (suggested) certification flags to each monitor.

The report contains summary statistics about quality assurance and completeness for each monitor. More detailed information about quality assurance data is available in the Data Quality Indicators Report and more about completeness is available in the Data Completeness Report. Where the Certification report includes the same summary metric as one of the other reports, the values should match.

This document attempts to describe the report completely; however, for more information about how flags are determined, how information is summarized, and how results displayed on this report may differ from other reports, consult the Guidance on the Data Certification Process developed by our QA team.

The final section of this chapter contains a summary of the evaluation criteria used for flagging data as acceptable/green, warning/yellow or recommend N/red.

This report contains sections:

  • Summary

  • Gaseous Pollutants

  • Particulate Matter

  • Lead

Each section is described in detail below.

Certifying Agency vs. PQAO

The overall goal of the report is to aggregate and assess the following values at the PQAO level:

  • NPAP Data (valid audits and NPAP bias)

  • Collocation Data (PM10, Pb and PM2.5 completeness and CV)

  • PEP Data (PM2.5 and Pb completeness and bias)

  • Pb Analysis Audit Data (completeness, bias)

The report assigns "recommended flags" to a variety of metrics with respect to certification requirements. All of the flags are assigned at the PQAO level. Therefore, monitoring organizations that are part of a larger PQAO but decide to certify the sites/data within their "certifying agency" will see the PQAO-level results. (That is, the same results will be presented for each certifying agency or monitoring organization if they run separate reports.) For example, if there are three distinct monitoring organizations within a PQAO and organization #1 has 4 PM10 sites, organization #2 has 3 PM10 sites, and organization #3 has 7 PM10 sites, the collocation summary for each organization (if each organization decides to certify its own data) will identify a total of 14 sites requiring 2 collocated monitors for the PQAO (14 * 0.15 = 2.1). Like the QA Data Quality Indicator Report (AMP256), this report determines the percent complete and the precision estimate for the PQAO.
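The PQAO-level aggregation in the example above can be illustrated with a short sketch (hypothetical organization names and counts; not AQS code):

    # Three hypothetical monitoring organizations within one PQAO.
    sites_per_org = {"org_1": 4, "org_2": 3, "org_3": 7}

    total_pm10_sites = sum(sites_per_org.values())   # 14 sites for the PQAO
    required = total_pm10_sites * 0.15               # 14 * 0.15 = 2.1
    # Per the example above, this corresponds to 2 required collocated monitors,
    # and every certifying agency in the PQAO sees the same PQAO-level result.
    print(total_pm10_sites, required)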

Data Completeness

Data completeness is based on the sample period start date and end date of the monitor and is not based on a calendar year. For example, if a monitor started on July 1, 2016, reported its sample period start date as July 1, 2016, and monitored successfully at the required sampling frequency throughout the rest of the year (sample period end date after December 31, 2016), then the completeness is calculated as 100%. From a NAAQS standpoint this monitor is incomplete, but this report will show the monitor as 100% complete from the sample period start date.

For ozone data, completeness is evaluated over the ozone season. If the monitor reports data after the ozone season, it will not be used in completeness calculations. NCore ozone monitors are required to operate all year, but this report uses the ozone season defined for the area where the site is located.

For continuous PM monitor completeness, there may be a difference between the estimate of routine data completeness in the Data Completeness Report (AMP430) and this report. The Data Completeness Report evaluates completeness by hourly values, while this report evaluates completeness by comparing the number of valid days to the number of scheduled days for the monitor. For example, consider a day where a monitor collects 18 of 24 hourly samples: the Data Completeness Report estimates completeness as 75% (18/24), while this report considers the day valid (at least 75% of hours reported) and therefore counts completeness as 100%. Since the Data Completeness Report evaluates completeness over a complete year (factoring in sample begin and end date), the discrepancy between the two reports should be small.
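The difference between the two completeness views can be illustrated with a one-day sketch (assuming the 75%-of-hours day-validity criterion implied by the example above; not AQS code):

    # A single hypothetical day with 18 of 24 hourly samples reported.
    hours_reported, hours_scheduled = 18, 24

    # Data Completeness Report (AMP430) view: hourly completeness.
    hourly_completeness = 100 * hours_reported / hours_scheduled    # 75%

    # Certification Report view: the day counts as valid if >= 75% of hours
    # are reported, and completeness compares valid days to scheduled days.
    valid_days = 1 if hours_reported / hours_scheduled >= 0.75 else 0
    daily_completeness = 100 * valid_days / 1                       # 100%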

5.1. Data Evaluation and Concurrence Report Summary

This section of the report summarizes the results for each monitor by year and certifying agency.

5.1.1. Sample output

Here is a sample of the PDF output for the summary page layout.

Certification - Summary

5.1.2. Header Fields

Certification Year

Year for which the data is being certified. If the form of the design value is three years, data for the certification year and the two prior years will be considered.

Certifying Agency

The name of the agency assigned as "Certifying" for the monitor reporting data.

Pollutants in Report

This section contains one row summarizing the information for each pollutant for the Certifying Agency contained in the report.

PQAOs in Report

This section contains one row summarizing the information for each Primary Quality Assurance Organization affiliated with a monitor listed in the report.

Summary of 'N' flags for all pollutants

This section contains one row for each monitor that AQS recommends against certification.

5.1.3. Columns

Parameter Name

The name or description assigned in AQS to the parameter measured by the monitor. Parameters may be pollutants or non-pollutants (e.g., wind speed).

Code

The AQS code corresponding to the parameter measured by the monitor.

Monitors Evaluated

The number of monitors for that pollutant evaluated by the report.

Out of the monitors evaluated for the pollutant, the number that pass all of the AQS automated checks, for which the system recommends that the certification request be concurred by the regional office.

Out of the monitors evaluated for the pollutant, the number that do not pass all of the AQS automated checks, for which the system recommends that the certification request not be concurred by the regional office.

PQAO Name

The name of the agency assigned as Primary Quality Assurance Organization for the monitor reporting data.

PQAO Code

The code representing the Primary Quality Assurance Organization for this monitor.

TSA Date

The date of the most recent Technical Systems Audit of the PQAO.

PQAO

The code representing the Primary Quality Assurance Organization for this monitor.

Parameter Code

The AQS code corresponding to the parameter measured by the monitor.

AQS Site-ID

The AQS Site ID in the format XX-YYY-ZZZZ where XX is the State FIPS code, YYY is the County FIPS code, and ZZZZ is the site number within the county.

POC

This is the "Parameter Occurrence Code" used to distinguish different instruments that measure the same parameter at the same site. There is no meaning to the POC (e.g. POC 1 does not indicate the primary monitor). For example, the first monitor established to measure carbon monoxide (CO) at a site could have a POC of 1. If an additional monitor were established at the same site to measure CO, that monitor could have a POC of 2. However, if a new instrument were installed to replace the original instrument used as the first monitor, that would be the same monitor and it would still have a POC of 1.

AQS Recommended Certification Flag based on the completeness of the data and the results of the QA assessments.

Certification Flag value requested by the certifying agency to override the AQS recommended value (there should be an accompanying explanation in the State Comment field of AQS).

Reason for AQS Recommendation

The AQS automated check causing the system to recommend against certification for the monitor.

5.2. Data Evaluation and Concurrence Report for Gaseous Pollutants

This section of the report gives details on the status for gaseous criteria pollutants for each year, certifying agency, and pollutant.

5.2.1. Sample output

Here is a sample of the PDF output for the gaseous details page layout.

Certification - Details for Gaseous Pollutants

5.2.2. Header Fields

Certifying Year

Year for which the data is being certified. If the form of the design value is three years, data for the certification year and the two prior years will be considered.

Certifying Agency

The name of the agency assigned as "Certifying" for the monitor reporting data.

Parameter

The name or description assigned in AQS to the parameter measured by the monitor. Parameters may be pollutants or non-pollutants (e.g., wind speed).

The body of the report is also organized by PQAO, which is listed in a sub-header.

PQAO

The name of the agency assigned as Primary Quality Assurance Organization for the monitor reporting data.

QAPP Approval Date

Date the Quality Assurance Project Plan was approved.

Number of Passed Audits

Number of monitor-level NPAP assessments submitted where at least 3 levels meet the bias criteria.

NPAP Bias

National Performance Audit Program Bias Estimate.

Criteria Met

Status (Y/N) of NPAP PQAO-Level Summary

5.2.3. Columns

The main body of the report contains one row for each monitor.

AQS Site ID

The AQS Site ID in the format XX-YYY-ZZZZ where XX is the State FIPS code, YYY is the County FIPS code, and ZZZZ is the site number within the county.

POC

This is the "Parameter Occurrence Code" used to distinguish different instruments that measure the same parameter at the same site. There is no meaning to the POC (e.g. POC 1 does not indicate the primary monitor). For example, the first monitor established to measure carbon monoxide (CO) at a site could have a POC of 1. If an additional monitor were established at the same site to measure CO, that monitor could have a POC of 2. However, if a new instrument were installed to replace the original instrument used as the first monitor, that would be the same monitor and it would still have a POC of 1.

Monitor Type

An administrative or regulatory classification for the monitor.

Routine Data Mean

The measure of central tendency obtained from the sum of the observed pollutant data values or National Ambient Air Quality Standards (NAAQS) averages in the yearly data set divided by the number of values that comprise the sum for the yearly data set. For criteria pollutants, the sum of values only adds the values with the appropriate flagging and concurrence for the exceptional data type.

Routine Data Min

The lowest sample value recorded during the aggregation period.

Routine Data Max

The maximum value at the given duration for the aggregation period.

Routine Data Exceed. Count

The number of samples during the year that exceeded the primary air quality standard.

Routine Data Outlier Count

Number of sample measurements with Stat/CR finding but without "validated value" qualifier flag

Routine Data Perc. Comp.

Annual percent completeness. The ratio of the number of reported samples to the number of scheduled samples (or assessments).

One Point Quality Check Precision

The monitor precision estimate is made by evaluating the Coefficient of Variation (CV) of all samples taken at a monitor over a given time.

One Point Quality Check Bias

The bias estimate made by evaluating the mean of the absolute values of the percent differences. (40 CFR Part 58, Appendix A defines bias as the systematic or persistent distortion of a measurement process which causes errors in one direction.)

In this case, it is the combined bias estimate for the 1-Point Quality Assurance assessments conducted at the monitor during the year.

One Point Quality Check Complete

Annual percent completeness. The ratio of the number of reported samples to the number of scheduled samples (or assessments).

The 1-point QC completeness data is evaluated in the following manner:

  • Count the number of checks in each 14-day interval starting with the Jan 1-14 interval. For each 14-day interval, multiple checks will only count as one.

  • Divide the total number of checks from the first step by 26.

For certification, a green Y requires > 75%. That means a monitoring organization could miss six 14-day intervals (meaning a check falls past the 14-day interval) and still get a green Y. They could miss up to nine 14-day intervals and receive a yellow (warning) flag. Missing ten 14-day intervals will elicit an N flag.
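The interval-counting evaluation can be sketched as follows (assuming 26 consecutive 14-day intervals starting January 1; not the AQS implementation):

    # Sketch of the 1-point QC completeness evaluation described above.
    from datetime import date

    def one_point_qc_completeness(check_dates, year):
        # check_dates: dates of 1-point QC checks performed during the year.
        start = date(year, 1, 1)
        intervals_with_check = set()
        for d in check_dates:
            interval = (d - start).days // 14   # 0 = Jan 1-14, 1 = Jan 15-28, ...
            if 0 <= interval < 26:
                intervals_with_check.add(interval)   # multiple checks count once
        return 100 * len(intervals_with_check) / 26

    # Example: checks in 20 of the 26 intervals gives about 76.9%,
    # which is above the 75% threshold for a green Y.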

Annual PE Bias

The bias estimate made by evaluating the mean of the absolute values of the percent differences. (40 CFR Part 58, Appendix A defines bias as the systematic or persistent distortion of a measurement process which causes errors in one direction.)

In this case, it is the combined bias estimate for the Annual Performance Audit assessments conducted at the monitor during the year.

Annual PE Complete

Annual percent completeness. The ratio of the number of reported samples to the number of scheduled samples (or assessments).

NPAP Bias

For gaseous pollutants, this field will always be null

The bias estimate made by evaluating the mean of the absolute values of the percent differences. (40 CFR Part 58, Appendix A defines bias as the systematic or persistent distortion of a measurement process which causes errors in one direction.)

In this case, it is the combined bias estimate for the National Performance Audit Program assessments conducted at the monitor during the year.

NPAP PQAO Level Criteria

For gaseous pollutants, this field will always be Y

QAPP Appr.

Status (Y/N) of the QAPP meeting the certification criteria.

The QAPP Approval Field is based on QAPP approval dates supplied from the monitoring organizations to the EPA Regions. (Note, the QAPP approval date appears in the header of the Data Certification Report.)

If the QAPP approval date is less than 5 years from the date the report is run, the entry will be a green Y. If the approval date is more than 5 years in the past, the entry will be a red N.

This criterion was established in the 2017 policy memo "EPA Review of Monitoring Organizations QAPP’s for Critical Criteria Conformance" at https://www.epa.gov/sites/production/files/2017-10/documents/qappmemo.pdf.
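A minimal sketch of this date check (not AQS code; the exact comparison against the report run date is an assumption):

    # Green "Y" if the QAPP approval date is within 5 years of the report run
    # date, otherwise red "N".
    from datetime import date

    def qapp_approval_flag(approval_date, run_date=None):
        run_date = run_date or date.today()
        try:
            five_years_ago = run_date.replace(year=run_date.year - 5)
        except ValueError:                # handle a Feb 29 run date
            five_years_ago = run_date.replace(year=run_date.year - 5, day=28)
        return "Y" if approval_date >= five_years_ago else "N"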

Concur. Flag AQS Rec Flag

AQS Recommended Certification Flag based on the completeness of the data and the results of the QA assessments.

Concur. Flag CA Rec Flag

Certification Flag value requested by the certifying agency to override the AQS recommended value (there should be an accompanying explanation in the State Comment field of AQS).

Concur. Flag EPA Concur

Flag indicating whether the EPA Regional Office concurs with the certification of the data for the monitor.

Each row may be followed by comments relating to the certification of a particular monitor.

Submitter Comment

Comment provided by the certifying agency explaining why the requested flag is different from the AQS recommended value.

EPA Comment

Comment provided by the EPA Regional Office regarding the requested difference.

5.3. Data Evaluation and Concurrence Report for Particulate Matter

This section of the report gives details on the status for particulate matter for each year, certifying agency, and pollutant.

5.3.1. Sample output

Here is a sample of the PDF output for the particulate matter details page layout.

Certification - Details for Particulate Matter

5.3.2. Header Fields

Certifying Year

Year for which the data is being certified. If the form of the design value is three years, data for the certification year and the two prior years will be considered.

Certifying Agency

The name of the agency assigned as "Certifying" for the monitor reporting data.

Parameter

The name or description assigned in AQS to the parameter measured by the monitor. Parameters may be pollutants or non-pollutants (e.g., wind speed).

PQAO

The name of the agency assigned as Primary Quality Assurance Organization for the monitor reporting data.

Quality Assurance Project Plan Approval Date

Date the Quality Assurance Project Plan was approved.

5.3.3. Collocation Summary Columns

This section may appear in your report. Each method designation that was reported as a primary monitor for a site will be listed in the collocation summary.

PM10 Collocation. PM10 collocation is only required for manual (intermittent) samplers. In addition, CFR does not distinguish method designations for PM10 so all primary intermittent samplers are aggregated at the PQAO and 15% of the sites with intermittent monitors as primary are required to be collocated. Therefore, "Method" is not identified in the summary line for PM10.

Method

A three-digit code representing the measurement method. A method code is only unique within a parameter (that is, method 132 for ozone is not the same as method 123 for benzene).

# Sites

The number of sites within the network that were active in the year.

# Sites Req

The number of collocated audits required. For PM, this is calculated as 15% of total sites. For lead, if there are 5 or fewer lead sites in the PQAO measuring the parameter in a given year, then 4 lead (PM10 or TSP) collocated audits are required; if there are more than 5 lead sites in the PQAO within the year, the number required is 6.

# Sites Collocated

The number of sites that are collocated within the network. This value is only computed for SUMMARY rows.

% Collocated

The smaller of the following two quantities: 100 or 100 * Num_Actual_colloc / Num_Colloc_Req.
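These two collocation-summary quantities can be sketched as follows (hypothetical function names; not AQS code):

    def required_collocated(n_sites, parameter="PM"):
        # Lead: 4 collocated audits for 5 or fewer lead sites in the PQAO,
        # otherwise 6; PM: 15% of total sites.
        if parameter == "Lead":
            return 4 if n_sites <= 5 else 6
        return n_sites * 0.15

    def percent_collocated(num_actual_colloc, num_colloc_req):
        # The smaller of 100 or 100 * Num_Actual_colloc / Num_Colloc_Req.
        return min(100, 100 * num_actual_colloc / num_colloc_req)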

CV Est

Coefficient of Variation of the Percent Differences (d).

CV UB

The 90% upper bound on the Coefficient of Variation (CV). This is often used as the precision estimate for one point quality control checks. Only valid observations are used in this calculation.

Criteria Met?

AQS Recommended Certification Flag based on the completeness of the data and the results of the QA assessments.

This field indicates whether the PQAO-level evaluation criteria were met for collocation and is based on the completeness summary statistic and the precision estimate (CV-UB). See the table in the next section for the evaluation criteria.

5.3.4. PEP Summary Columns

This section may appear in your report.

# Methods

The number of methods used at the site during the year.

# Audited Methods

Number of distinct methods audited via PEP assessments.

# PEP Required

The number of Collocated Performance Evaluation Program (PEP) audits required to be collected. This value is only computed for SUMMARY rows, where applicable.

# PEP Submitted

Lead Only: Number of Collocated PEP Audits Collected

% Complete

The percentage of required Collocated Performance Evaluation Program (PEP) audits performed. This value is only computed for SUMMARY rows, where applicable.

Bias

The bias estimate made by evaluating the mean of the absolute values of the percent differences. (40 CFR Part 58, Appendix A defines bias as the systematic or persistent distortion of a measurement process which causes errors in one direction.)

Criteria Met?

AQS Recommended Certification Flag based on the completeness of the data and the results of the QA assessments.

5.3.5. Monitor Summary Columns

The main body of the report contains one row for each monitor.

AQS Site ID

The AQS Site ID in the format XX-YYY-ZZZZ where XX is the State FIPS code, YYY is the County FIPS code, and ZZZZ is the site number within the county.

POC

This is the "Parameter Occurrence Code" used to distinguish different instruments that measure the same parameter at the same site. There is no meaning to the POC (e.g. POC 1 does not indicate the primary monitor). For example, the first monitor established to measure carbon monoxide (CO) at a site could have a POC of 1. If an additional monitor were established at the same site to measure CO, that monitor could have a POC of 2. However, if a new instrument were installed to replace the original instrument used as the first monitor, that would be the same monitor and it would still have a POC of 1.

Method

A three-digit code representing the measurement method. A method code is only unique within a parameter (that is, method 132 for ozone is not the same as method 123 for benzene).

Monitor Type

An administrative or regulatory classification for the monitor.

Routine Data Mean

The measure of central tendency obtained from the sum of the observed pollutant data values or National Ambient Air Quality Standards (NAAQS) averages in the yearly data set divided by the number of values that comprise the sum for the yearly data set. For criteria pollutants, the sum of values only adds the values with the appropriate flagging and concurrence for the exceptional data type.

Routine Data Min

The lowest sample value recorded during the aggregation period.

Routine Data Max

The maximum value at the given duration for the aggregation period.

Routine Data Exceed. Count

The number of samples during the year that exceeded the primary air quality standard.

Routine Data Outlier Count

Number of sample measurements with Stat/CR finding but without "validated value" qualifier flag

Routine Data Perc. Comp.

Annual percent completeness. The ratio of the number of reported samples to the number of scheduled samples (or assessments).

Flow Rate Audit Bias

The bias estimate made by evaluating the mean of the absolute values of the percent differences. (40 CFR Part 58, Appendix A defines bias as the systematic or persistent distortion of a measurement process which causes errors in one direction.)

Flow Rate Audit % Complete

Annual percent completeness. The ratio of the number of reported samples to the number of scheduled samples (or assessments).

Collocation CV

Coefficient of Variation of the Percent Differences (d).

Collocation % Complete

Annual percent completeness. The ratio of the number of reported samples to the number of scheduled samples (or assessments).

Collocation PQAO Crit. Met

AQS Recommended Certification Flag based on the completeness of the data and the results of the QA assessments.

PEP PQAO Crit. Met

AQS Recommended Certification Flag based on the completeness of the data and the results of the QA assessments.

QAPP Appr.

Status (Y/N) of the QAPP meeting the certification criteria.

The QAPP Approval Field is based on QAPP approval dates supplied from the monitoring organizations to the EPA Regions. (Note, the QAPP approval date appears in the header of the Data Certification Report.)

If the QAPP approval date is less than 5 years from the date the report is run, the entry will be a green Y. If the approval date is more than 5 years in the past, the entry will be a red N.

This criterion was established in the 2017 policy memo "EPA Review of Monitoring Organizations QAPP’s for Critical Criteria Conformance" at https://www.epa.gov/sites/production/files/2017-10/documents/qappmemo.pdf.

Concur. Flag AQS Rec Flag

AQS Recommended Certification Flag based on the completeness of the data and the results of the QA assessments.

Concur. Flag CA Rec Flag

Certification Flag value requested by the certifying agency to override the AQS recommended value (there should be an accompanying explanation in the State Comment field of AQS).

Concur. Flag EPA Concur

Flag indicating whether the EPA Regional Office concurs with the certification of the data for the monitor.

Each row may be followed by comments relating to the certification of a particular monitor.

Submitter Comment

Comment provided by the certifying agency explaining why the requested flag is different from the AQS recommended value.

EPA Comment

Comment provided by the EPA Regional Office regarding the requested difference.

5.4. Data Evaluation and Concurrence Report for Lead

This section of the report gives details on the status for lead for each year, certifying agency, and pollutant.

5.4.1. Sample output

Here is a sample of the PDF output for the lead details page layout.

Certification - Details for Lead

5.4.2. Header Fields

Certifying Year

Year for which the data is being certified. If the form of the design value is three years, data for the certification year and the two prior years will be considered.

Certifying Agency

The name of the agency assigned as "Certifying" for the monitor reporting data.

Parameter

The name or description assigned in AQS to the parameter measured by the monitor. Parameters may be pollutants or non-pollutants (e.g., wind speed).

PQAO Name

The name of the agency assigned as Primary Quality Assurance Organization for the monitor reporting data.

Quality Assurance Project Plan Approval Date

Date the Quality Assurance Project Plan was approved.

5.4.3. Collocation Summary Columns

This section may appear in your report.

Number of Sites

The number of sites within the network that were active in the year.

Number of Colloc Sites Required

The number of collocated audits required. For PM, this is calculated as 15% of total sites. For lead, if there are 5 or fewer lead sites in the PQAO measuring the parameter in a given year, then 4 lead (PM10 or TSP) collocated audits are required; if there are more than 5 lead sites in the PQAO within the year, the number required is 6.

Number of Actual Colloc Sites

The number of sites that are collocated within the network. This value is only computed for SUMMARY rows.

Percent Collocated

The smaller of the following two quantities: 100 or 100 * Num_Actual_colloc / Num_Colloc_Req.

CV Est

Coefficient of Variation of the Percent Differences (d).

CV UB

The 90% upper bound on the Coefficient of Variation (CV). This is often used as the precision estimate for one point quality control checks. Only valid observations are used in this calculation.

Criteria Met

AQS Recommended Certification Flag based on the completeness of the data and the results of the QA assessments.

This field indicates whether the PQAO-level evaluation criteria were met for collocation and is based on the completeness summary statistic and the precision estimate (CV-UB). See the table in the next section for the evaluation criteria.

5.4.4. PEP Summary Columns

This section may appear in your report.

Number of Methods

The number of methods used at the site during the year.

Number of Methods Audited

Number of distinct methods audited via PEP assessments.

Number of PEP Audits Required

The number of Collocated Performance Evaluation Program (PEP) audits required to be collected. This value is only computed for SUMMARY rows, where applicable.

Number of Audits Submitted

Lead Only: Number of Collocated PEP Audits Collected

% Complete

The percentage of required Collocated Performance Evaluation Program (PEP) audits performed. This value is only computed for SUMMARY rows, where applicable.

Bias

The bias estimate made by evaluating the mean of the absolute values of the percent differences. (40 CFR Part 58, Appendix A defines bias as the systematic or persistent distortion of a measurement process which causes errors in one direction.)

Criteria Met

AQS Recommended Certification Flag based on the completeness of the data and the results of the QA assessments.

5.4.5. Analysis Audit Summary Columns

This section may appear in your report. The analysis audits are the audits described in 40 CFR Part 58 App. A section 3.3.4.2. Both the completeness and the bias estimate will be used in the "Lead Analysis Criteria Met" column at the monitor level.

Number Required

The number of lead analysis audits required.

Number Submitted

The number of lead analysis audits submitted.

% Complete

The percentage of required observations (or scheduled days) made for the given assessment time period.

Bias

The bias estimate made by evaluating the mean of the absolute values of the percent differences. (40 CFR Part 58, Appendix A defines bias as the systematic or persistent distortion of a measurement process which causes errors in one direction.)

Criteria Met

AQS Recommended Certification Flag based on the completeness of the data and the results of the QA assessments.

5.4.6. Monitor Summary Columns

The main body of the report contains one row for each monitor.

AQS Site ID

The AQS Site ID in the format XX-YYY-ZZZZ where XX is the State FIPS code, YYY is the County FIPS code, and ZZZZ is the site number within the county.

POC

This is the "Parameter Occurrence Code" used to distinguish different instruments that measure the same parameter at the same site. There is no meaning to the POC (e.g. POC 1 does not indicate the primary monitor). For example, the first monitor established to measure carbon monoxide (CO) at a site could have a POC of 1. If an additional monitor were established at the same site to measure CO, that monitor could have a POC of 2. However, if a new instrument were installed to replace the original instrument used as the first monitor, that would be the same monitor and it would still have a POC of 1.

Monitor Type

An administrative or regulatory classification for the monitor.

Routine Data Mean

The measure of central tendency obtained from the sum of the observed pollutant data values or National Ambient Air Quality Standards (NAAQS) averages in the yearly data set divided by the number of values that comprise the sum for the yearly data set. For criteria pollutants, the sum of values only adds the values with the appropriate flagging and concurrence for the exceptional data type.

Routine Data Min

The lowest sample value recorded during the aggregation period.

Routine Data Max

The maximum value at the given duration for the aggregation period.

Routine Data Exceed. Count

The number of samples during the year that exceeded the primary air quality standard.

Routine Data Outlier Count

Number of sample measurements with Stat/CR finding but without "validated value" qualifier flag

Routine Data Percent Comp.

Annual percent completeness. The ratio of the number of reported samples to the number of scheduled samples (or assessments).

Flow Rate Audit Bias

The bias estimate made by evaluating the mean of the absolute values of the percent differences. (40 CFR Part 58, Appendix A defines bias as the systematic or persistent distortion of a measurement process which causes errors in one direction.)

Flow Rate Audit % Complete

Annual percent completeness. The ratio of the number of reported samples to the number of scheduled samples (or assessments).

Collocation CV UB

The 90% upper bound on the Coefficient of Variation (CV). This is often used as the precision estimate for one point quality control checks. Only valid observations are used in this calculation.

Collocation Percent Comp

The percentage of required observations (or scheduled days) made for the given assessment time period.

Collocation PQAO Crit. Met

AQS Recommended Certification Flag based on the completeness of the data and the results of the QA assessments.

PEP PQAO Crit. Met

AQS Recommended Certification Flag based on the completeness of the data and the results of the QA assessments.

Lead Analysis Crit. Met

AQS Recommended Certification Flag based on the completeness of the data and the results of the QA assessments.

Certification QAPP Appr.

Status (Y/N) of the QAPP meeting the certification criteria.

The QAPP Approval Field is based on QAPP approval dates supplied from the monitoring organizations to the EPA Regions. (Note, the QAPP approval date appears in the header of the Data Certification Report.)

If the QAPP approval date is less than 5 years from the date the report is run, the entry will be a green Y. If the approval date is more than 5 years in the past, the entry will be a red N.

This criterion was established in the 2017 policy memo "EPA Review of Monitoring Organizations QAPP’s for Critical Criteria Conformance" at https://www.epa.gov/sites/production/files/2017-10/documents/qappmemo.pdf.

Certification AQS Rec Value

AQS Recommended Certification Flag based on the completeness of the data and the results of the QA assessments.

Certification CA Rec Value

Certification Flag value requested by the certifying agency to override the AQS recommended value (there should be an accompanying explanation in the State Comment field of AQS).

Certification EPA Concur

Flag indicating whether the EPA Regional Office concurs with the certification of the data for the monitor.

Each row may be followed by comments relating to the certification of a particular monitor.

Submitter Comment

Comment provided by the certifying agency explaining why the requested flag is different from the AQS recommended value.

EPA Comment

Comment provided by the EPA Regional Office regarding the requested difference.

5.5. Certification Recommendation Criteria

This section summarizes the criteria that AQS evaluates to generate acceptable (green or Y), warning (yellow), and recommend against (red or N) flags in the Certification Report.

The table below presents evaluation criteria related to specific metrics that can be associated with a monitor.

  • If any monitor has 1 recommend N (red) flag, AQS will recommend N (no) for monitor certification.

  • If any monitor has 3 warning (yellow) flags, AQS will recommend N (no) for monitor certification.

5.5.1. Technical Systems Audits

Assessment Current CFR Requirement or Guidance Green (Acceptable) Yellow (Warning) Red (Recommend N Flag) Comments

Technical Systems Audit

PQAO every 3 years

TSA within 3 years

TSA within 4 years

TSA > 5 years

Not a monitoring organization responsibility. Will be reported on summary page, not by pollutant.

5.5.2. Gaseous Criteria Pollutants

Assessment Current CFR Requirement or Guidance Green (Acceptable) Yellow (Warning) Red (Recommend N Flag) Comments

Routine Data Completeness

75%

> 80%

79 - 70%

< 70%

Based on CFR criteria for data use ( 100 * number of hourly obs / number of hours in monitor sample period ). Sample period is the time interval between the sample period start date and the sample period end date.

QAPP Approval

Approval date within 5 years of current date

Approval date within 5 years of current date

Not approved and/or approval date greater than 5 years from current date

Could be sole reason for “N” flag if QAPP not approved.

1-Point QC Completeness

75%

> 75%

65 - 75%

< 65%

Based on 26, 1-point QC for a year. Calculated based on the number of days the monitor operated.

1-Point QC Precision

< 7.1% O3,

< 10.1% CO, SO2

< 15.1% NO2

< 7.1% O3,

< 10.1% CO, SO2

< 15.1% NO2

8 - 20% O3

11 - 25% CO, SO2

16 - 25% NO2

> 20% O3

> 25% others

Based on all valid 1-point QC checks in AQS for the year. Value should reflect AMP256 value.

1-Point QC Bias

< ± 7% O3,

< ± 10.1% CO, SO2

< ± 15.1% NO2

< ± 7% O3,

< ± 10.1% CO, SO2

< ± 15.1% NO2

± 8 - 20% O3

±11 - 25% CO, SO2

± 16 - 25% NO2

> ± 20% O3

> ± 25% others

Based on all valid 1-point QC checks in AQS. Value should reflect AMP256 value.

Annual PE Completeness

1 PE / year

3 audit levels

1 PE / year

3 audit levels

1 PE / year

2 audit levels

No PE or

1 audit level

Will not count more than one actual value in an audit level. For example, two audits in one level count as 1 audit level.

Annual PE Bias

O3, SO2, NO2, CO

≤ ± 1.5 ppb / ≤ ± 15.1%

≤ ± 0.03 ppm / ± 15%

≤ ± 1.5 ppb / ± 15%

≤ ± 0.03 ppm / ± 15%

≤ ± 1.6-3.0 ppb / ± 16-25%

≤ ± 0.04-0.06 ppm / ± 16-25%

> ± 3.0 ppb / ± 25%

> ± 0.06 ppm / ± 25%

Average percent difference of all PE values for the monitor.

NPAP Audit Completeness - PQAO

20% of sites in PQAO

20% of sites in PQAO

10 - 19% of sites in PQAO

< 10% of sites in PQAO

Not a monitoring organization responsibility. Will always be marked Y.

NPAP Bias

< ± 10.1% O3

< ± 15.1% others

< ± 10.1% O3

< ± 15.1% others

± 10.1 - 20.0% O3

± 15.1 - 25.0% others

> ± 20.0% O3

> ± 25.0% others

Median percent difference for all values at a site and median PD for PQAO level estimate. Not used for certification recommendations.

NPAP Audit Completeness - Site

4 levels

4 levels

2 - 3 levels

≤ 1 level

Not a monitoring organization responsibility. Not used for certification recommendations.

Outliers

Not implemented / used for certification recommendations.

5.5.3. PM2.5

Assessment Current CFR Requirement or Guidance Green (Acceptable) Yellow (Warning) Red (Recommend N Flag) Comments

Routine Data Completeness

75%

≥ 80%

79 - 70%

< 70%

Based on CFR criteria for data use ( 100 * number of creditable samples / number of scheduled samples in monitor sample period ). Sample period is the time interval between the sample period start date and the sample period end date.

QAPP Approval

Approval date within 5 years of current date

Approval date within 5 years of current date

Not approved and/or approval date greater than 5 years from current date

Could be sole reason for “N” flag if QAPP not approved.

Flow Rate Verification Completeness

Every 30 days (12 / year)

Every 30 days (11-12 / year)

Every 45 Days (8-11 / year)

> 45 days (<8 / year)

Not implemented.

Flow Rate Verification Bias

< ± 4.1% of transfer standard

< ± 5.1% from design

< ± 4.1% of transfer standard

< ± 5.1% from design

± 4.1 - 6.0% of transfer standard

± 5.1 - 7.0% from design

> ± 6.0% of transfer standard

> ± 7.0% from design

design = design flow rate

Average percent difference for audits at monitor level.

Value should reflect AMP256 value.

Not implemented.

Flow Rate Audit Completeness

2 / year (every 6 months)

2 / year (every 5-7 months) or 3 or 4 with one audit in 3 or 4 quarters

2 across 2 quarters

1 audit

Semi-annual flow rate audits.

Based on how long sampler operated. If sampler operates <9 months then at least 1 is expected. If operated >9 months then two audits expected.

Flow Rate Audit Bias

< ± 4% of transfer standard

< ± 5% from design

< ± 4% of transfer standard

< ± 5% from design

± 5 - 6% of transfer standard

± 6 - 7% from design

> ± 6% of transfer standard

> ± 7% from design

design = design flow rate.

Average percent difference for audits at monitor level.

Value should reflect AMP256 value.

Collocation Completeness

75%

≥ 75%

65 - 74%

< 65%

By method designation.

Summary level = average of completeness of site level values.

Site level = number of reported observations /30.

Based on how long sampler operated.

Collocation Precision

< 10.1%

< 10.1%

10.1 - 25.0%

> 25.0%

By method designation.

Same statistics as AMP256 for summary level and site level.

PM2.5 PEP Completeness

5 or 8

5 or 8

3-4 or 6-7

< 3 or 6

Not a monitoring organization responsibility. Not implemented.

PEP Bias

< ± 10.1%

< ± 10.1%

± 10.1 - 30.0%

> ± 30.0%

Value should reflect AMP256 value. Not implemented.

Outliers

Not implemented / used for certification recommendations.

5.5.4. PM10 Continuous Methods

Assessment Current CFR Requirement or Guidance Green (Acceptable) Yellow (Warning) Red (Recommend N Flag) Comments

Routine Data Completeness

75%

≥ 80%

79 - 70%

< 70%

Based on CFR criteria for data use ( 100 * number of valued strata (days per collection frequency) / total number of strata.

QAPP Approval

Approval date within 5 years of current date

Approval date within 5 years of current date

Not approved and/or approval date greater than 5 years from current date

Could be sole reason for “N” flag if QAPP not approved.

Flow Rate Verification Completeness

75%

≥ 75%

65 - 74%

< 65%

12 per year, based on how long sampler operated. Not implemented.

Flow Rate Verification Bias

< ± 7.1% of transfer standard

< ± 7.1% of transfer standard

± 7.1 - 9.0% of transfer standard

> ± 9.0% of transfer standard

Average of percent differences.

Value should reflect AMP256 value.

Not implemented.

Flow Rate Audit Completeness

2 / year (every 6 months)

2 / year (every 5-7 months) or 3 or 4 with one audit in 3 or 4 quarters

2 across 2 quarters

1 audit

Semi-annual flow rate audits.

Based on how long sampler operated. If sampler operates <9 months then at least 1 is expected. If operated >9 months then two audits expected.

Flow Rate Audit Bias

< ± 7% of transfer standard

< ± 7% of transfer standard

± 8 - 9% of transfer standard

> ± 9% of transfer standard

Semi-annual flow rate audits.

Average of percent differences.

Value should reflect AMP256 value.

Outliers

Not implemented / used for certification recommendations.

5.5.5. PM10 Manual Methods

Assessment Current CFR Requirement or Guidance Green (Acceptable) Yellow (Warning) Red (Recommend N Flag) Comments

Routine Data Completeness

75%

≥ 80%

80 - 70%

< 70%

Based on CFR criteria for data use ( 100 * number of valued strata (days per collection frequency) / total number of strata.

QAPP Approval

Approval date within 5 years of current date

Approval date within 5 years of current date

Not approved and/or approval date greater than 5 years from current date

Could be sole reason for “N” flag if QAPP not approved.

Flow Rate Verification Completeness

Every 30 days (12 / year)

Every 30 days (11-12 / year)

Every 45 days (8-11 / year)

> 45 days (<8 / year)

Not implemented.

Flow Rate Verification Bias

< ± 7.1% of transfer standard

< ± 7.1% of transfer standard

± 7.1 - 9.0% of transfer standard

> ± 9.0% of transfer standard

Average of percent differences.

Value should reflect AMP256 value.

Not implemented.

Flow Rate Audit Completeness

2 / year (every 6 months)

2 / year (every 5-7 months) or 3 or 4 with one audit in 3 or 4 quarters

2 across 2 quarters

1 audit

Semi-annual flow rate audits.

Based on how long sampler operated. If sampler operates <9 months then at least 1 is expected. If operated >9 months then two audits expected.

Flow Rate Audit Bias

< ± 10.1% of transfer standard

< ± 10.1% of transfer standard

± 10.1 - 12.0% of transfer standard

> ± 12.0% of transfer standard

Semi-annual flow rate audits.

Value should reflect AMP256 value.

Collocation Completeness

75%

≥ 75%

65 - 74%

< 65%

Summary level = average of completeness of site level values.

Site level = number of reported observations /30.

Based on how long sampler operated.

Collocation Precision

10%

10%

11 - 20%

> 20%

By method designation.

Same statistics as AMP256 for summary level and site level.

Outliers

Not implemented / used for certification recommendations.

5.5.6. Pb TSP

Assessment Current CFR Requirement or Guidance Green (Acceptable) Yellow (Warning) Red (Recommend N Flag) Comments

Routine Data Completeness

75%

≥ 80%

80 - 70%

< 70%

Based on CFR criteria for data use ( 100 * number of creditable samples / number of scheduled samples in monitor sample period ). Sample period is the time interval between the sample period start date and the sample period end date.

QAPP Approval

Approval date within 5 years of current date

Approval date within 5 years of current date

Not approved and/or approval date greater than 5 years from current date

Could be sole reason for “N” flag if QAPP not approved.

Flow Rate Verification Completeness

Every 90 days and 4 times per calendar year

Every 90 days and 4 times per calendar year

Every 120 days and 3 times per calendar year

> Every 120 days and < 3 times per calendar year

Not implemented.

Flow Rate Verification Bias

< ± 7.1% of transfer standard

< ± 7.1% of transfer standard

± 7.1 - 9.0% of transfer standard

> ± 9.0% of transfer standard

Not implemented.

Flow Rate Audit Completeness

2 / year (every 6 months)

2 / year (every 5-7 months) or 3 or 4 with one audit in 3 or 4 quarters

2 across 2 quarters

1 audit

Semi-annual flow rate audits.

Based on how long sampler operated. If sampler operates <9 months then at least 1 is expected. If operated >9 months then two audits expected.

Flow Rate Audit Bias

< ± 7.1% of transfer standard

< ± 7.1% of transfer standard

± 7.1 - 9.0% of transfer standard

> ± 9.0% of transfer standard

Semi-annual flow rate audits.

Value should reflect AMP256 value.

Collocation Completeness

75%

≥ 75%

65 - 74%

< 65%

Summary level = average of completeness of site level values.

Site level = number of reported observations /30.

Based on how long sampler operated.

Collocation Precision

< 20.1%

< 20.1%

20.1 - 30.0%

> 30.0%

Same statistics as AMP256 for summary level and site level.

Pb PEP Completeness

5 or 8

4 or 7

3 or 6

< 3 or 6

Not a monitoring organization responsibility.

Not implemented.

Pb PEP Bias

< ± 15.1%

< ± 15.1%

± 15.1 - 25.0%

> ± 25.0%

Average percent difference. Value should reflect AMP256 value.

Not implemented.

Analysis Audit Completeness

75%

75%

65 - 74%

< 65%

Average completeness by quarter, then take the average of all 4 quarters.

Analysis Audit Bias

< ± 10.1%

< ± 10.1%

± 10.1 - 18.0%

> ± 18.0%

Average percent difference. Value should reflect AMP256 value.

Outliers

Not implemented / used for certification recommendations.

5.5.7. Pb PM10

Assessment Current CFR Requirement or Guidance Green (Acceptable) Yellow (Warning) Red (Recommend N Flag) Comments

Routine Data Completeness

75%

≥ 80%

80 - 70%

< 70%

Based on CFR criteria for data use ( 100 * number of creditable samples / number of scheduled samples in monitor sample period ). Sample period is the time interval between the sample period start date and the sample period end date.

QAPP Approval

Approval date within 5 years of current date

Approval date within 5 years of current date

Not approved and/or approval date greater than 5 years from current date

Could be sole reason for “N” flag if QAPP not approved.

Flow Rate Audit Completeness

2 / year (every 6 months)

2 / year (every 5-7 months) or 3 or 4 with one audit in 3 or 4 quarters

2 across 2 quarters

1 audit

Semi-annual flow rate audits.

Based on how long sampler operated. If sampler operates <9 months then at least 1 is expected. If operated >9 months then two audits expected.

Flow Rate Audit Bias

< ± 4% of transfer standard

< ± 4% of transfer standard

± 5 - 6% of transfer standard

> ± 6% of transfer standard

Semi-annual flow rate audits.

Value should reflect AMP256 value.

Collocation Completeness

75%

≥ 75%

65 - 74%

< 65%

Summary level = average of completeness of site level values.

Site level = number of reported observations /30.

Based on how long sampler operated.

Not implemented.

Collocation Precision

< 20%

< 20%

20 - 30%

> 30%

Same statistics as AMP256 for summary level and site level. Not implemented.

Pb PEP Completeness

5 or 8

5 or 8

3 or 6

< 3 or 6

Not a monitoring organization responsibility. Not implemented.

Pb PEP Bias

< ± 15%

< ± 15%

± 16 - 25%

> ± 25%

Not implemented.

Analysis Audit Completeness

75%

75%

65-74%

< 65%

Based on 24 audits per year.

Analysis Audit Bias

< ± 10%

< ± 10%

± 10 - 18%

> ± 18%

Average percent difference. Value should reflect AMP256 value.

Outliers

Not implemented / used for certification recommendations.

6. Design Value Report (AMP480)

The AQS Design Value Report is also known as the DV Report and the AMP480.

This report is intended to provide the user a preliminary indication of design values.

The year selected for this report is the "Design Value Year". That is, if you select 2017, data from 2015 - 2017 is included in the report. (This report only includes parameters that have a design value form of 3 years, thus the CO and SO2 secondary standards are not included. Use the Quicklook Report to get the annual metrics that are the preliminary design values for those standards.)
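The three-year window implied by the selected Design Value Year can be expressed with a trivial sketch (not AQS code):

    design_value_year = 2017
    included_years = list(range(design_value_year - 2, design_value_year + 1))
    # [2015, 2016, 2017]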

All data in this report is considered preliminary in that it reflects only data that is in AQS (sample values; possibly incomplete, concurred exclusions, etc.) and does not reflect any information from any final regulatory designations or other decisions made outside of AQS.

The report defaults to the most recent current primary standard for each pollutant. You can change the design value calculations by selecting different standards on the options tab when creating the report.

The Design Value Report shows each parameter on a separate page. This document currently includes only a description of the SO2 information (for the "abbreviated" release).

6.1. SO2

This section of the report displays the preliminary design values for the SO2 primary standard.

6.1.1. Sample output

Below is a sample of the PDF output for the SO2 design value layout.

Design Value - SO2 Primary Standard

6.1.2. Header Fields

Pollutant

The AQS parameter name (followed by the AQS parameter code in parentheses).

Standard Units

The Units of Measure in which data will be compared to the standard (followed by the AQS unit code in parentheses).

NAAQS Standard

A description of the ambient air quality standard rules used to aggregate statistics. A pollutant standard will include the year of promulgation and the form of the standard.

Statistic

The annual statistic from AQS that is used to calculate the 3-year form of the design value.

Design Value Year

The year to which the design value applies. This is the last year of a three-year period for design values with a three-year form.

Level

The level (concentration) of the listed standard in the standard units.

State Name

The name of the state where the monitoring site is located.

Exceptional Data Type

The header contains a reminder of the exceptional data type selected when creating the report: what kinds of qualified data are included or excluded from the calculations.

6.1.3. Columns

Site ID

The AQS Site ID in the format XX-YYY-ZZZZ where XX is the State FIPS code, YYY is the County FIPS code, and ZZZZ is the site number within the county.

Street Address

The street address giving an approximate location of the site.

Tip: The next three columns are repeated for each year in the 3-year design value period.

Comp. Qrtrs (Complete Quarters)

For the year listed, the number of complete quarters of SO2 data at the site.

99th Percentile

The sample value in the summarized sample set where 99% of the values in that set are less than or equal to it. (For ozone, based on valid daily maxima; for PM2.5, based on seasonal and non-seasonal algorithms.)
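One common reading of this definition is sketched below (the exact ranking convention used by AQS is defined in the regulatory appendices and may differ in detail; not AQS code):

    import math

    def percentile_99(values):
        # Smallest sample value such that at least 99% of the set is <= it.
        s = sorted(values)
        k = math.ceil(0.99 * len(s))
        return s[k - 1]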

Cert & Eval

A Flag indicating the official certification status of the monitor-year, as assigned by the applicable regional office user.

Meanings of the abbreviations can be found on the code tables page.

Design Value

The AQS calculated design value in the form of the indicated standard. It may be at the site level (for PM2.5 or Lead) or at the monitor level (all other parameters).

Valid Ind. (Validity Indicator)

Flag indicating whether the Design Value is valid (calculated using data that meets completeness criteria).

7. Change Log

Changes since last version
  • None (this is the initial version)