Data Analysis & Assessment Criteria

Chemical Pathology

Introduction

The Chemical Pathology discipline programs are designed to review the performance of analytical procedures and provide peer group comparison by directly comparing individual participant results with results from participating laboratories.  The reports display graphical representations showing results from the same method, instrument and reagent groups (if applicable). Participants’ results are compared to a consensus median (calculated median from all method classifications or a calculated median of method-specific results) or a specific target for a particular test.

All Chemical Pathology reports are structured to provide:

  • Performance Summary and overall performance
  • Result review
  • Result to date / cumulative summary of the year’s performance
  • Method Comparison (if applicable)
  • Commentary

Quantitative assessments

Survey Reports

Survey Reports summarise every pair of specimens for each measurand and provide summary data on your performance throughout the cycle. Reports provide a graphical comparison of individual results with all results received and with participants using the same method categories – examples are the analytical principle, measurement system and reagent source.

Quantitative results are usually compared with the specific target value (if known), the “overall median”, or the category median based on the main variable of the method. Analytical Performance Specifications (APS) are set either side of the expected value. Non-numerical results (descriptive results) are compared with a target value or overall method group consensus.

The Analytical Performance Specifications (APS) are unique for each measurand, and the acceptable range for each specimen is calculated from the central value (target, median or weighed-in value). These ranges are displayed in the report histograms and Youden Plots. The comment “Low” or “High” is added if the result is outside the APS and will be highlighted for review.

Note: The z-score is provided as an additional parameter to demonstrate the number of standard deviations a participant’s result is away from the mean value. It measures performance based on what is achievable from test methods. Participants are not assessed against the z-score.
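As a worked illustration, the z-score is simply the result’s distance from the peer mean in SD units. The following Python sketch uses made-up numbers, not figures from any survey:

```python
def z_score(result: float, mean: float, sd: float) -> float:
    """Number of standard deviations a result lies from the peer mean."""
    return (result - mean) / sd

# Illustrative values only: a result of 5.6 against a peer mean of 5.0
# with an SD of 0.4 gives a z-score of 1.5.
print(round(z_score(5.6, 5.0, 0.4), 2))
```

A result below the mean gives a negative z-score.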

Report Interpretation

The new report format follows a standard structure for all disciplines that survey quantitative measurands with two samples in each survey challenge.

The survey reports incorporate linear regression analysis to determine the precision and accuracy of the testing procedure.

Some programs allow participants to submit results in SI or mass units. The report will default to the units submitted by the laboratory.

If survey results are not received by the close date, an appropriate message is returned.

The structure of the report is as follows:

1. Summary of Performance

  • Measurand: All measurands that are performed by the participant are presented in the summary table.
  • Expected Result: The expected result lists the target result for the measurand. The target result is based on the type of target source used – these are listed below.
    1. Calculated ‘All method median’
    2. Calculated ‘Category median’, using the main variable of the test method. This could be the analytical principle, measurement system, reagent or calibrator. The target source is highlighted on the result page of the measurand.
    3. Specific target – A sample with a known quantity of the measurand. A specific target is usually assigned to samples that have been tested using a reference method or have been tested by a reference laboratory.
  • Your result: The result submitted by the participant.
  • Review: Results flagged for review are outside the Analytical Performance Specifications (APS) of the target source. Results highlighted in red are listed as ‘High’ or ‘Low’ when they are outside the APS of both the All Method Median and the peer group median. Results are highlighted in amber when they fall outside the APS range of the All Method Median but within the peer group APS range, or within the APS range of the All Method Median but outside the peer group APS range.
  • Z-score /APS score: The z-score and APS scores are two types of performance indicators and calculated against the target source of the measurand, whether it is the calculated median or mean of the method being used, the calculated median or mean of All results or a Specific target.
    The Z-score uses the ‘mean’ result and SD to indicate the number of standard deviations a participant is away from the average result. The APS score uses the ‘median’ and the assigned APS to calculate the APS score.
    Performance assessment is based on the APS score; if it is greater than 1.0, the measurand will be highlighted for review in the “Overall performance” section of the report. The z-score is provided as an additional parameter to demonstrate the number of standard deviations a participant’s result is away from the mean value. It measures performance based on what is achievable from test methods.
  • MPS (Measurand Performance Score): The Measurand Performance Score (MPS) is calculated by performing linear regression analysis on a set of samples (a minimum of 6 samples is required for linear regression) and uses the SD (precision) and Bias (accuracy) obtained to determine the total error. This is then compared to the sample range’s median APS (MPS = (2SD + Bias) / Allowable limit of Assessment). If the score is greater than 1.0, it is highlighted in bold for further review.
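The red/amber review logic described above can be sketched as a small decision function (a simplified illustration; the function name and labels are ours, not part of the report):

```python
def review_flag(within_all_method_aps: bool, within_peer_aps: bool) -> str:
    """Classify a result: red when outside both APS ranges, amber when
    outside exactly one of them, otherwise no flag."""
    if not within_all_method_aps and not within_peer_aps:
        return "red"    # reported as 'High' or 'Low'
    if within_all_method_aps != within_peer_aps:
        return "amber"  # outside one range but within the other
    return "none"
```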

2. Overall Performance

Please review results returned for Sample(s): CP-TM-20-17, CP-TM-20-18

  • Adrenocorticotropic hormone – High but within APS of category group (both samples)
  • Beta-2-microglobulin – Low but within APS of category group (both samples)
  • Thyroglobulin – High but within APS of category group (both samples)

The Summary of Performance provides participants with an assessment of their overall performance and indicates which measurands require further review.

APS score calculation

Measurand APS: ±0.5 up to 5.00; ±10.0% above 5.00 × 10⁹ cells/L (5.0 is the measurand decision point)

Lab result above decision point
Lab result = 19.30 (above decision point – 10% range applies)
Target result = 18.40
Measurand limit = 10% of 18.40 = 1.840
APS Score = (19.30 – 18.40) / 1.840 = 0.49

Lab result below decision point
Lab result = 2.90 (below decision point – ±0.5 range applies)
Target result = 2.60
Measurand limit = 0.5
APS Score = (2.90 – 2.60) / 0.5 = 0.60
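The two worked examples can be reproduced with a short sketch. The defaults mirror the ±0.5 / 10% rule quoted above; the function name and signature are illustrative:

```python
def aps_score(result: float, target: float,
              decision_point: float = 5.0,
              absolute: float = 0.5, relative: float = 0.10) -> float:
    """APS score = (result - target) / measurand limit, where the limit
    is 0.5 below the decision point and 10% of the target above it."""
    limit = target * relative if target > decision_point else absolute
    return (result - target) / limit

print(round(aps_score(19.30, 18.40), 2))  # above the decision point: 0.49
print(round(aps_score(2.90, 2.60), 2))    # below the decision point: 0.6
```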

 

3. Result Review

A
Histograms

Provide a frequency distribution of the results submitted by all participants. The blue bar represents the median or target result; the blue circle indicates the participant’s result.

B
Lab results

Lists the results and method categories returned by the participant.

C
Analytical Performance Specification (APS)

The APS that has been assigned to the measurand.

D
Youden charts

Represent a scatter of two sample results plotted against each other. The sample with the higher measurand level is on the y-axis, plotted against the lower level on the x-axis. Five Youden plots are presented, illustrating from left to right: all results, the participant’s method, analytical principle, measurement system and reagent. The participant’s result is highlighted by the blue dot.

Examples of Youden charts (panels: Biased results; Biased Laboratory)
E
Levey Jennings type plot (z-score)

Displays the participant’s z-score for up to 6 sets of returned results and provides an indication of the precision and accuracy of results within a survey year.

F
Levey Jennings type plot (APS score)

Displays the APS score for up to 6 sets of returned results and provides an indication of the precision and accuracy of results within a survey year.

Examples of Plots (panels: Imprecision; Bias)
G
Linearity plot

Displays the participant result against the expected result, and indicates linearity across different survey sample levels.

Examples of Plots (panels: Imprecision; Bias)

Measurand performance: The Measurand performance page provides participants with a breakdown of the results returned.


Precision and Accuracy

High / Low or Low / High = Imprecision

High / High or Low / Low = Bias

4. Linear Regression

A
Summary of results

Indicates what targets are used to compare results (the BNP example above shows results assessed against the instrument category). The table columns represent the Sample ID, Measurement System category assigned for result comparison, median result, participant’s result, H/L review flag (W= Within, no review required), Z-score and the APS score.

B
Your Linearity – compared to the target source

Linear regression analysis is based against the target source of each sample, which could be the “median of all results”, “median of the assessment category” (measurement system) or the median of a “specified target”.

When performing linear regression, the following rules of assessment are applied:

  • A minimum of 6 samples is required for linear regression analysis (applies to all target sources)
  • If the target source is “median of the assessment category”, there must be a minimum of 6 results in your peer group for all survey samples; otherwise, linear regression is based against the “median of all results”.
  • If there is a change of any method category, linear regression is based on the sample range from when the change occurred (applies to all target sources).
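The regression statistics used in the report can be sketched in plain Python: ordinary least squares of participant results against target values, with the standard error of the estimate (Sy.x) serving as the SD. This illustrates the general technique under the minimum-sample rule above, not the exact report implementation:

```python
def linear_regression(targets, results):
    """Return (slope, intercept, sy_x) for results regressed on targets."""
    n = len(targets)
    if n < 6:
        raise ValueError("a minimum of 6 samples is required")
    mx = sum(targets) / n
    my = sum(results) / n
    sxx = sum((x - mx) ** 2 for x in targets)
    sxy = sum((x - mx) * (y - my) for x, y in zip(targets, results))
    slope = sxy / sxx
    intercept = my - slope * mx
    # Sy.x: scatter of the results around the line of best fit
    ss_res = sum((y - (slope * x + intercept)) ** 2
                 for x, y in zip(targets, results))
    sy_x = (ss_res / (n - 2)) ** 0.5
    return slope, intercept, sy_x
```

A slope near 1 and an intercept near 0 indicate agreement with the target source; Sy.x reflects imprecision.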
C
Your slope and intercept

Provides the slope and intercept calculated from the linear regression analysis. Also provides the target low and high level, with your corresponding low and high levels determined from the “line of best fit”.

Using the slope and intercept the values of your line of best fit are determined compared to the lowest and highest target values for the cycle.

D
Your Linearity – compared to the All and subgroup

To provide a comparison of all methods, linear regression is performed against the same data source (all results or specific target). The slopes represent the bias obtained from all laboratories, illustrating the slopes from all labs in ‘grey’, the slope from participants using the same assessment category in ‘navy blue’ and highlighting your slope in ‘light blue’.

E
Your Imprecision compared to All & Subgroup

This histogram represents the imprecision (CV%) obtained from all laboratories. It is calculated from the scatter around the target source linear regression line of best fit. The histogram illustrates all labs in ‘grey’, participants using the same assessment category in ‘navy blue’ and highlighting your CV% as a ‘light blue’ dot.

F
Your Precision and Accuracy

Provides participants with the SD, CV and Bias calculated from linear regression and how your results ranked against all laboratories that participated. Ranking (from 0 = best to 100 = worst) is illustrated on the top row of the scale provided under the bar.

Standard Deviation: The SD is the standard error of the estimate (Sy.x) and can be regarded as the average SD across the range of concentrations analysed. SD provides a value in the units of the test. SD will tend to be high if you report high results and low if you report low results.

Coefficient of Variation: The SD divided by the mid-point of your laboratory’s range of concentrations, expressed as a percentage:

CV% = SD / ((low value + high value) / 2) × 100

 

Average Bias:

Your biases at the low value, high value and mid value are determined. These are the differences between the line of expectation (45° line) and your line of best fit.

The average bias is calculated as:

Bias = ([low bias] + [mid bias] + [high bias]) / 3
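The two formulas above translate directly into code (helper names are ours; the slope and intercept inputs for the bias calculation come from the linear regression described earlier):

```python
def cv_percent(sd: float, low: float, high: float) -> float:
    """CV% = SD divided by the mid-point of the concentration range."""
    return sd / ((low + high) / 2.0) * 100.0

def average_bias(slope: float, intercept: float,
                 low: float, high: float) -> float:
    """Mean difference between the line of best fit and the 45-degree
    line of expectation, at the low, mid and high target values."""
    mid = (low + high) / 2.0
    biases = [(slope * x + intercept) - x for x in (low, mid, high)]
    return sum(biases) / 3.0
```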
G
Your Stats and Ranking

Provides participants with a table summarising their SD, CV, Bias, MPS and performance ranking.

H
Additional notes

Provides participants with information, such as the target source used for linear regression and lists any method category changes made across the survey sample range being analysed.


Assessment Criteria

The assessment criteria are defined as measurand performance. As analytical error is due to both imprecision and bias, program organisers have defined Total Error as follows:

Total Error = 2SD + Bias

The quality of your laboratory’s performance is then determined by comparing the Total Error to the Analytical Performance Specification at the mid-point of the range of measurand concentrations for the cycle as follows:

Measurand Performance Score (MPS) = (2SD + Bias) / Analytical Performance Specification

These examples of bicarbonate analyses may assist in understanding this method of assessment.

  • QA specimens are as follows:
    Low Level 15.0 mmol/L
    High Level 35.0 mmol/L
    The mid-point concentration is therefore 25.0 mmol/L.
  • The Analytical Performance Specification for bicarbonate is:
    ± 2.0 mmol/L up to 20.0 mmol/L
    ± 10% when greater than 20.0 mmol/L
    The Analytical Performance Specification at the mid-point (25.0 mmol/L) is therefore 2.5 mmol/L.

Example – Laboratory 1

SD = 0.8 mmol/L Bias = 0.5 mmol/L
Total Error = (2 × 0.8) + 0.5 = 2.1 mmol/L
Measurand Performance = 2.1 / 2.5 = 0.84

Note: When the Total Error is less than the Analytical Performance Specification then the Measurand Performance will be less than 1.0. This is the desired level of performance.

Example – Laboratory 2

SD = 1.5 mmol/L Bias = 0.1 mmol/L
Total Error = (2 × 1.5) + 0.1 = 3.1 mmol/L
Measurand Performance = 3.1 / 2.5 = 1.24

An undesirable result – due predominantly to imprecision.

Example – Laboratory 3

SD = 0.5 mmol/L Bias = 2.0 mmol/L
Total Error = (2 × 0.5) + 2.0 = 3.0 mmol/L
Measurand Performance = 3.0 / 2.5 = 1.20

An undesirable result – due predominantly to bias.
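The three bicarbonate examples can be reproduced with a short sketch (function names are illustrative; the APS rule is the one quoted above):

```python
def bicarbonate_aps(concentration: float) -> float:
    """APS for bicarbonate: ±2.0 mmol/L up to 20.0 mmol/L, ±10% above."""
    return 2.0 if concentration <= 20.0 else concentration * 0.10

def mps(sd: float, bias: float, aps: float) -> float:
    """Measurand Performance Score = (2 x SD + Bias) / APS."""
    return (2 * sd + bias) / aps

# Mid-point of the 15.0 and 35.0 mmol/L specimens
aps_mid = bicarbonate_aps((15.0 + 35.0) / 2)  # 2.5 mmol/L

for sd, bias in [(0.8, 0.5), (1.5, 0.1), (0.5, 2.0)]:
    print(round(mps(sd, bias, aps_mid), 2))
```

This prints 0.84, 1.24 and 1.2 for Laboratories 1 to 3 respectively, matching the worked examples.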

5. Method Comparison

The method comparison provides a breakdown of all the methods (assessment categories) used by participants. It lists the statistics calculated for the latest survey results (left), as well as the Precision and Accuracy results calculated from the linear regression analysis on the sample range used, providing the median values for each method listed and facilitating peer group comparison. The full set of statistics is shown only when there are four or more values in the dataset; three values illustrate the median value only, and method categories with two or fewer users do not present any statistical data.

The “Survey Report Interpretation” flowchart can be found on the RCPAQAP website under “Resources”.

Qualitative reporting

Programs that survey qualitative measurands will also provide a simple, direct comparison of your qualitative results with all results received and with participants using the same method system.

A
Measurand

The full name of the measurand.

B
Method Classification

The method classification the laboratory has submitted. Stored with each pair of results.

Ensure that your method classification is correct. If the method classification information provided by us does not allow for adequate definition of your method then contact the RCPAQAP.

Note: Measurands with no method classification and no results will not be printed. Consequently, if you wish to receive a report for a measurand for which you do not submit results then provide a method classification.

C
All Results Histograms

Histograms showing the distribution of all results.

D
Result

Relative position of the result reported by your laboratory.

E
Current Data

A complete record of data held by program organisers for the cycle.

  • The expected value
  • Results returned by the participant
  • Assessment.
F
Histograms for Method Breakdowns – sample 1

These Histograms show the distribution of results for your method, your analytical principle and your reagent group across the assessment criteria groupings for the first sample analysed in this assessment.

G
Histograms for Method Breakdowns – sample 2

These Histograms show the distribution of results for your method, your analytical principle and your reagent group across the assessment criteria groupings for the second sample analysed in this assessment.


Patient Report Comments Program

The Patient Report Comments Program transitions to the eQuality platform in 2023. This program is an educational self-assessment tool for individuals who, in the course of their duties, would attach comments to results sent out from their laboratories or provide interpretative advice by telephone (e.g. Duty Biochemists, Chemical Pathologists and Scientists), as well as personnel who are training for such duties.

Cases

One case per month is offered over ten months in a year. Each case report has patient information (age, gender and location of the patient, as well as brief clinical notes), the set of biochemistry results requiring comments and additional relevant information or results available to the laboratory.

Comments

Participants comment on the results assuming that they have been asked to provide an interpretative comment by the requesting clinician. The comment should follow the pattern of a written report.

  1. When commenting, assume you have been approached by a clinician for an interpretative comment on the results.
  2. Comment on the “Results for commenting”, not on the “Additional lab results” which are given to aid the interpretation of the “Results for commenting”.
  3. Take into account the clinical details rather than listing all possible causes for an abnormal result.
  4. State the most likely diagnoses or causes for the set of results given the clinical situation. Do not suggest a list of follow-up tests.
  5. Do not tell the clinician how to proceed (e.g. “Suggest examine patient”), and be cautious in suggesting invasive investigations, e.g. liver biopsy.
  6. This is not a case study; the focus is on the ability to offer useful advice to clinicians in a succinct manner.
  7. Restating an obvious abnormality (e.g. “hyponatraemia”, “raised potassium”) is generally not a preferred comment; however, quantifying the degree of an abnormality (e.g. “severe hyponatraemia”, “mild increase in potassium”) may be considered to add value.
  8. For the comment to add value, it should not restate the clinical question; e.g. if the clinical notes state “? hypothyroid”, a comment such as “consider hypothyroidism” has not provided new information to the clinician. The slightly different comment “results consistent with hypothyroidism”, or the even stronger statement “results suggest hypothyroidism”, may be considered a more useful answer to the clinical question.

Note that submitted comments which are longer than the allowable space provided for commenting will be truncated at the space limit.

Review of Comments

The participants’ comments historically have been broken down into components by RCPAQAP staff, and the components summarised into common keywords or phrases. Frequently used keywords or phrases are now provided as an additional option to participants to select upon result entry. Participants should still enter their free text comment as previously and can also choose related keywords based on their comments (if available from the dropdown list). Please note: if keywords are selected by a participant, they must still relate to the free text comment. RCPAQAP staff, in consultation with the Patient Comments Advisory Committee, will either add new keywords or delete keywords that do not relate to the free text comments submitted.

Each key-phrase will be classified as ‘Preferred’, ‘Less Relevant’, ‘Not Supported’ or ‘Misleading’ by the Advisory Committee, which will generate a summary report with the classification of all key-phrases, a ‘suggested’ (ideal) comment and a case discussion rationale.

Classification of Keywords

Preferred

Keywords classified as ‘Preferred’ are those considered appropriate, correct and adding value to the results and therefore of utility to the clinician who would receive the report. They may relate to diagnosis or differential diagnosis, possible interferences or suggestions for further testing. They should assist with interpretation of the results for the measurement but may include reference to or input from additional information.

Less Relevant

Keywords classified as ‘Less Relevant’ are those considered not to add value to the current results; i.e. not useful to the requesting doctors, although not erroneous or misleading. Comments focusing on the additional information rather than the results for commenting are also likely to fall into this category.

Not Supported

Keywords classified as ‘Not Supported’ are diagnoses that are possibly correct but not supported by the supplied information, or tests not considered to be indicated given the available information.

Misleading

Keywords classified as ‘Misleading’ are those which may lead to a wrong interpretation, misdiagnosis or mismanagement of the patient.

Supervisor Reports

Supervisor Reports are designed for a nominated person (Coordinator) who has an interest in overseeing a group of participants and/or sites. This common interest may be regional, organisational, special interest, instrument or reagent groups.

A Supervisor Report may be set up by anyone wanting to set up a collaborative group of participants with a common interest. There must be sufficient participants sharing this common interest to make the generated statistics a true representation of the group. A minimum of five participants is generally suggested to make the Supervisor Report viable.

Each Supervisor Report has a designated Coordinator who is the recipient of the reports and has the responsibility to disseminate information to members of the group and to maintain confidentiality of all results.

Supervisor Reports are provided to the nominated Coordinator of the group after each Survey run and at the end of each survey. There is an annual enrolment fee for a Supervisor Report.

New Supervisor Reports

The proposer of a new Supervisor Report should write to the RCPAQAP by logging a request through the myQAP participant portal to request the set-up of the group and to nominate a Coordinator.

The RCPAQAP will liaise with the proposed Coordinator of the group who will be sent Supervisor Report Coordinator Agreement and Supervisor Report Participant Authorisation forms to complete and send back to the RCPAQAP.

Supervisor Reports can be ordered through the myQAP website when enrolling or by contacting the RCPAQAP Customer Service Team directly.

Confidentiality

Program organisers hold information on each participant obtained from the RCPAQAP in strict confidence. The Coordinator of each Supervisor Report undertakes to keep the name of the individual participants confidential and only to release summaries of the performance of methods, instruments and coded results.

Available Supervisor Reports

There are a number of Supervisor Reports that are open to all participants to join. Participants who wish to be part of an existing Supervisor Report can fill in the Supervisor Report Participation Form.

The Supervisor Report Participation Form gives the RCPAQAP permission to release your results to the nominated Co-ordinator of the chosen Supervisor Report for the program(s) you nominate.

Supervisor Report Participation Form

Please follow the link to view the list of available Supervisor Reports and to access the Supervisor Report Participation Form.

Supervisor Report Interpretation Notes

Supervisor Reports are provided to the Supervisor Report Co-ordinator following each survey run. Participant data are included only if they have provided written approval.

An example of the Supervisor report can be found on the myQAP help page.

Last updated on October 11, 2024