Data Analysis & Assessment Criteria

Transfusion

Introduction

The Transfusion discipline programs are structured to provide a peer group comparison of individual participant results with all results received. The survey reports provide graphical representations of participant results for all methods used for the required tests performed by participating laboratories. Participant results for quantitative analysis are compared to a calculated median or target from all method classifications for a particular test, whereas a scoring system is used to assess the performance of participants enrolled in programs that require interpretive responses. A guide to interpreting Blood Group and Antibody reports is available on the myQAP help page.

Survey Report – General Information

Participant results are only assessed if the results of stability testing performed on or after the close date are acceptable; if stability testing results are deemed unacceptable, the survey will not be assessed and participants will be notified in writing. Participant consensus of 80% (or the median response for quantitative results) is used to determine the expected survey target; a minimum of six (6) responses is required to set a consensus target.
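
As a rough illustration of the consensus rule (a minimal sketch; the function name, inputs and example data are hypothetical, not part of the RCPAQAP system):

```python
from collections import Counter

def consensus_target(responses, min_responses=6, consensus_fraction=0.80):
    """Return the consensus target for a qualitative measurand, or None.

    A target is only set when at least `min_responses` results have been
    received and the most common response reaches the consensus fraction.
    (Quantitative measurands use the median of submitted results instead.)
    """
    if len(responses) < min_responses:
        return None                      # too few results to set a target
    value, count = Counter(responses).most_common(1)[0]
    if count / len(responses) >= consensus_fraction:
        return value                     # consensus reached
    return None                          # no consensus


# Example: 7 of 8 laboratories report "O RhD Positive" (87.5% agreement)
print(consensus_target(["O RhD Positive"] * 7 + ["O RhD Negative"]))
```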

Quantitative assessments

Fetomaternal Haemorrhage Program

The survey report provides a simple, direct comparison of individual participant results with all results received, together with graphical representations showing results from the same method and reagent groups. Participants’ results are compared to a calculated median from all method classifications for a particular test, as well as to results within their method or reagent group. Survey reports are issued for each measurand/test after the scheduled closing date. Analytical Performance Specifications (APS) are unique for each test: the APS is calculated from the target overall median value and is used in the histograms and Youden plots. The APS is based on clinical needs and is set and reviewed by program organisers and expert committee members.

The FMH program provides assessment for quantitative and qualitative measurands.

Survey objective target values are determined by the RCPAQAP Transfusion Advisory Committee. Assessment of participant performance is based on the median for % foetal cells from submitted results. The selected values are used to calculate the target volume (mL) of cord blood and the target dose of anti-D immunoglobulin recommended.

Survey Reports

Survey Reports summarise every pair of specimens for each measurand and provide summary data on your performance throughout the cycle. Reports provide a graphical comparison of individual results with all results received and with participants using the same method categories – examples are the analytical principle, measurement system and reagent source.

Quantitative results are usually compared with a specific target value (if known), the “overall median”, or the median of the main method category. Analytical Performance Specifications (APS) are set either side of the expected value. Non-numerical (descriptive) results are compared with a target value or the overall method group consensus.

The Analytical Performance Specifications (APS) are unique for each measurand, and the acceptable range for each specimen is calculated from the central value (target, median or weighed-in value). These ranges are displayed in the report histograms and Youden Plots. The comment “Low” or “High” is added if the result is outside the APS and will be highlighted for review.

Qualitative measurands (RhD Ig Vials) display the responses received, comparing results to specific targets that have been set by the program organisers.

Note: The z-score is provided as an additional parameter to demonstrate the number of standard deviations a participant’s result is away from the mean value. It measures performance based on what is achievable from test methods. Participants are not assessed against the z-score.

Report Interpretation

The new report format follows a standard structure for all disciplines that survey quantitative measurands with two samples in each survey challenge.

The survey reports incorporate linear regression analysis to determine the precision and accuracy of the testing procedure.

Some programs allow participants to submit results in SI or mass units. The report will default to the units submitted by the laboratory.

If survey results are not received by the close date, an appropriate message is returned.

The structure of the report is as follows:

1. Summary of Performance

  • Measurand: All measurands that are performed by the participant are presented in the summary table.
  • Expected Result: The expected result lists the target result for the measurand. The target result is based on the type of target source used – these are listed below.
    1. Calculated ‘All method median’
    2. Calculated ‘Category median’, using the main variable of the test method. This could be the analytical principle, measurement system, reagent or calibrator. The target source is highlighted on the result page of the measurand.
    3. Specific target – A sample with a known quantity of the measurand. A specific target is usually assigned to samples that have been tested using a reference method or have been tested by a reference laboratory.
  • Your result: The result submitted by the participant.
  • Review: Results flagged for review are outside the Analytical Performance Specifications (APS). These are highlighted in red and are listed as ‘High’ or ‘Low’.
  • Z-score / APS score: The z-score and APS score are two types of performance indicators, calculated against the target source of the measurand – the calculated median or mean of the method being used, the calculated median or mean of all results, or a specific target.
    The z-score uses the ‘mean’ result and SD to indicate the number of standard deviations a participant is away from the average result. The APS score uses the ‘median’ and the assigned APS (see the sketch following this list).
    Performance assessment is based on the APS score: if it is greater than 1.0, the measurand is highlighted for review in the “Overall performance” section of the report. The z-score is provided as an additional parameter to demonstrate the number of standard deviations a participant’s result is away from the mean value; it measures performance based on what is achievable from the test methods.
  • MPS (Measurand Performance Score): The Measurand Performance Score (MPS) is calculated by performing linear regression analysis on a set of samples (a minimum of six samples is required) and uses the SD (precision) and bias (accuracy) obtained to determine the total error. This is then compared to the median APS of the sample range (MPS = (2SD + Bias) / Allowable Limit of Assessment). If the score is greater than 1.0, it is highlighted in bold for further review.
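
A minimal sketch of the two indicators described above, assuming the peer-group statistics (mean, SD, median) and the APS limit are already known; the function names are illustrative only:

```python
def z_score(result, mean, sd):
    """Number of standard deviations the result lies from the peer mean."""
    return (result - mean) / sd


def aps_score(result, median, aps_limit):
    """Distance of the result from the target median in units of the APS.

    Scores outside +/-1.0 fall outside the Analytical Performance
    Specification and are flagged for review as 'High' or 'Low'.
    """
    return (result - median) / aps_limit


# Example: result 19.30 against a median of 18.40 with an APS limit of 1.84
print(round(aps_score(19.30, 18.40, 1.84), 2))   # 0.49 -> within the APS
```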

2. Overall Performance

Please review results returned for Sample(s) TF-FM-23-07 and TF-FM-23-08:

Measurand | TF-FM-23-07 | TF-FM-23-08
Kleihauer percent | High | High
Kleihauer per mL | High | High
RhD Ig Vials (Kleihauer) | Discordant | Discordant

The Overall Performance summary provides participants with an assessment of their overall performance and indicates which measurands require further review.

APS score calculation

Measurand APS: ± 0.5 up to 5.00, ± 10.0% above 5.00 × 10⁹ cells/L (5.00 is the measurand decision point)

Lab result above the decision point (10% rule)
Lab result = 19.30
Target result = 18.40
Measurand limit = 10% of 18.40 = 1.840
APS Score = (19.30 – 18.40) / 1.840
APS Score = 0.49

Lab result below the decision point (± 0.5 rule)
Lab result = 2.90
Target result = 2.60
Measurand limit = 0.5
APS Score = (2.90 – 2.60) / 0.5
APS Score = 0.6
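
The same calculation expressed as a short sketch (the ±0.5 / 10% limits and the decision point are the example APS quoted above; the function names are illustrative):

```python
def aps_limit(target, decision_point=5.0, absolute_limit=0.5, relative_limit=0.10):
    """APS limit: +/-0.5 up to the decision point, +/-10% of the target above it."""
    return absolute_limit if target <= decision_point else relative_limit * target


def aps_score(result, target):
    """Signed APS score: (result - target) / APS limit for that target."""
    return (result - target) / aps_limit(target)


print(round(aps_score(19.30, 18.40), 2))   # 0.49  (10% rule, limit = 1.84)
print(round(aps_score(2.90, 2.60), 2))     # 0.6   (fixed rule, limit = 0.5)
```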

 

3. Result Review


Measurand performance: The Measurand performance page provides participants with a breakdown of the results returned.

A
Histograms

Provide a frequency distribution of the results submitted by all participants. The blue bar represents the median or target result, the blue circle indicates the participant’s result.

B
Lab results

Lists the results and method categories returned by the participant.

C
Analytical Performance Specification (APS)

The APS that has been assigned to the measurand.

D
Youden charts

Represent a scatter of two sample results plotted against each other. The sample with the higher measurand level is on the y-axis plotted against the lower level on the x-axis. Five Youden plots are presented and illustrate from left to right, results from all results, the participant’s method, analytical principle, measurement system and the reagent. The participant’s result is highlighted by the blue dot.

Examples of Youden charts (panels: “Biased results”, “Biased Laboratory”)
E
Levey Jennings type plot (z-score)

Displays the participant’s z-score for up to 6 sets of returned results and provides an indication of the precision and accuracy of results within a survey year.

F
Levey Jennings type plot (APS score)

Displays the APS score for up to 6 sets of returned results and provides an indication of the precision and accuracy of results within a survey year.

Examples of plots (panels: “Imprecision”, “Bias”)
G
Linearity plot

Displays the participant result against the expected result, and indicates linearity across different survey sample levels.

Examples of plots (panels: “Imprecision”, “Bias”)

Precision and Accuracy

High/Low or Low/High = Imprecision

High/High or Low/Low = Bias

4. Linear Regression

A
Summary of results

Displays the samples used for the linear regression analysis and the main method category value for this program (the Kleihauer percent example displays the “reagent method – Ferric Chloride”). The table columns represent the Sample ID, the Measurement System category assigned for result comparison, the median result for the method category, the median of ALL results, the participant’s result, the number of results from the method category, the number of results from all results submitted, the High/Low review flag (W = Within APS, no review required), the Z-score and the APS score.

B
Your Linearity – compared to the target source

Linear regression analysis is based on the target source of each sample, which could be the “median of all results”, the “median of the assessment category” (measurement system) or a “specified target”.

When performing linear regression, the following rules of assessment are applied:

  • A minimum of 6 samples is required for linear regression analysis (applies to all target sources).
  • If the target source is the “median of the assessment category”, there must be a minimum of 6 results in your peer group for all survey samples; otherwise, linear regression is based on the “median of all results”.
  • If there is a change of any method category, linear regression is based on the sample range from when the change occurred (applies to all target sources).
C
Your slope and intercept

Provides the slope and intercept calculated from the linear regression analysis. Also provides the target low and high level, with your corresponding low and high levels determined from the “line of best fit”.

Using the slope and intercept, your line-of-best-fit values are determined at the lowest and highest target values for the cycle.
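
For illustration, a minimal sketch of this step, assuming the per-sample target values and your results are available as arrays (the data below are hypothetical):

```python
import numpy as np

# Hypothetical cycle data: target value (x) vs participant result (y), 6 samples
target = np.array([1.2, 2.6, 5.4, 9.8, 14.1, 18.4])
result = np.array([1.3, 2.9, 5.2, 10.4, 14.9, 19.3])

slope, intercept = np.polyfit(target, result, 1)   # line of best fit

# Your corresponding levels at the lowest and highest targets of the cycle
low, high = target.min(), target.max()
print(f"slope = {slope:.3f}, intercept = {intercept:.3f}")
print(f"at target {low}: fitted result = {slope * low + intercept:.2f}")
print(f"at target {high}: fitted result = {slope * high + intercept:.2f}")
```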

D
Your Linearity – compared to the All and subgroup

To provide a comparison of all methods, linear regression is performed against the same data source (all results or the specific target). The slopes represent the bias obtained by each laboratory: the slopes from all laboratories are shown in ‘grey’, the slopes from participants using the same assessment category in ‘navy blue’, and your slope is highlighted in ‘light blue’.

E
Your Imprecision compared to All & Subgroup

This histogram represents the imprecision (CV%) obtained by all laboratories, calculated from the scatter around the target-source linear regression line of best fit. The histogram shows all laboratories in ‘grey’, participants using the same assessment category in ‘navy blue’, and your CV% as a ‘light blue’ dot.

F
Your Precision and Accuracy

Provides participants with the SD, CV and Bias calculated from linear regression and shows how your results rank against all laboratories that participated. Ranking (from 0, best, to 100, worst) is illustrated on the top row of the scale provided under the bar.

Standard Deviation: The SD is the standard error of the estimate (Sy.x) and can be regarded as the average SD across the range of concentrations analysed. SD provides a value in the units of the test. SD will tend to be high if you report high results and low if you report low results.

Coefficient of Variation: The SD divided by the mid-point of your laboratory’s range of concentrations, expressed as a percentage:

CV% = SD / [(low value + high value) / 2] × 100

Average Bias:

Your biases at the low value, high value and mid value are determined. These are the differences between the line of expectation (45° line) and your line of best fit.

The average bias is calculated as:

Bias = ([low bias] + [mid bias] + [high bias]) / 3
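
Continuing the sketch above, these statistics could be derived as follows (same hypothetical arrays; note the mid-point is taken from the target range here, and SD is computed as the standard error of the estimate, Sy.x, as described above):

```python
import numpy as np

target = np.array([1.2, 2.6, 5.4, 9.8, 14.1, 18.4])
result = np.array([1.3, 2.9, 5.2, 10.4, 14.9, 19.3])

slope, intercept = np.polyfit(target, result, 1)
fitted = slope * target + intercept

# SD: standard error of the estimate (Sy.x) - scatter about the line of best fit
n = len(target)
sy_x = np.sqrt(np.sum((result - fitted) ** 2) / (n - 2))

# CV%: SD divided by the mid-point of the concentration range
mid_point = (target.min() + target.max()) / 2
cv_pct = sy_x / mid_point * 100

# Average bias: difference between your line of best fit and the 45-degree
# line of expectation, averaged over the low, mid and high target values
levels = np.array([target.min(), mid_point, target.max()])
average_bias = np.mean((slope * levels + intercept) - levels)

print(f"SD (Sy.x) = {sy_x:.3f}, CV% = {cv_pct:.1f}, average bias = {average_bias:.3f}")
```
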
G
Your Stats and Ranking

Provides participants with a table summarising their SD, CV, Bias, MPS and performance ranking.

H
Additional notes

Provides participants with information, such as the target source used for linear regression and lists any method category changes made across the survey sample range being analysed.


Assessment Criteria

The assessment criteria are defined as measurand performance. As analytical error is due to both imprecision and bias, program organisers have defined Total Error as follows:

Total Error = 2SD + Bias

The quality of your laboratory’s performance is then determined by comparing the Total Error to the Analytical Performance Specification at the mid-point of the range of measurand concentrations for the cycle as follows:

Measurand Performance Score (MPS) = (2SD + Bias) / Analytical Performance Specification

These examples of bicarbonate analyses may assist in understanding this method of assessment.

  • QA specimens are as follows:
    Low Level 15.0 mmol/L
    High Level 35.0 mmol/L
    The mid-point concentration is therefore 25.0 mmol/L.
  • The Analytical Performance Specification for bicarbonate is:
    ± 2.0 mmol/L up to 20.0 mmol/L
    ± 10% when greater than 20.0 mmol/L
    The Analytical Performance Specification at the mid-point (25.0 mmol/L) is therefore 2.5 mmol/L.

Example – Laboratory 1

SD = 0.8 mmol/L; Bias = 0.5 mmol/L
Total Error = (2 × 0.8) + 0.5 = 2.1 mmol/L
Measurand Performance = 2.1 / 2.5 = 0.84

Note: When the Total Error is less than the Analytical Performance Specification then the Measurand Performance will be less than 1.0. This is the desired level of performance.

Example – Laboratory 2

SD = 1.5 mmol/L; Bias = 0.1 mmol/L
Total Error = (2 × 1.5) + 0.1 = 3.1 mmol/L
Measurand Performance = 3.1 / 2.5 = 1.24

An undesirable result – due predominantly to imprecision.

Example – Laboratory 3

SD = 0.5 mmol/L; Bias = 2.0 mmol/L
Total Error = (2 × 0.5) + 2.0 = 3.0 mmol/L
Measurand Performance = 3.0 / 2.5 = 1.20

An undesirable result – due predominantly to bias.
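
These worked examples can be reproduced with a short sketch (the bicarbonate APS rule and the three laboratories’ figures are taken from the text above; the function names are illustrative):

```python
def bicarbonate_aps(concentration):
    """Example APS for bicarbonate: +/-2.0 mmol/L up to 20.0 mmol/L, +/-10% above."""
    return 2.0 if concentration <= 20.0 else 0.10 * concentration


def measurand_performance_score(sd, bias, aps):
    """MPS = Total Error / APS, where Total Error = 2SD + Bias."""
    return (2 * sd + bias) / aps


mid_point_aps = bicarbonate_aps(25.0)   # 2.5 mmol/L at the mid-point
for lab, sd, bias in [("Laboratory 1", 0.8, 0.5),
                      ("Laboratory 2", 1.5, 0.1),
                      ("Laboratory 3", 0.5, 2.0)]:
    mps = measurand_performance_score(sd, bias, mid_point_aps)
    flag = "desired" if mps < 1.0 else "review"
    print(f"{lab}: MPS = {mps:.2f} ({flag})")
```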

5. Method Comparison

The method comparison provides a breakdown of all the methods (assessment categories) used by participants. It lists the statistics calculated for the latest survey results (left) as well as the precision and accuracy results calculated from the linear regression analysis on the sample range used, providing the median values for each method listed to facilitate peer group comparison. The full set of statistics is shown only when there are four or more values in the dataset; three values in the dataset display the median value only, and method categories with two or fewer users do not present any statistical data.
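
A rough sketch of this display rule (the value-count thresholds come from the paragraph above; which statistics make up the “full set” is an assumption):

```python
import statistics

def method_comparison_stats(results):
    """Decide which statistics to display for one method category."""
    n = len(results)
    if n <= 2:
        return {"n": n}                                        # no statistics shown
    if n == 3:
        return {"n": n, "median": statistics.median(results)}  # median only
    return {"n": n,                                            # full set (assumed)
            "median": statistics.median(results),
            "mean": statistics.mean(results),
            "sd": statistics.stdev(results)}


print(method_comparison_stats([4.1, 4.3, 4.0, 4.4, 4.2]))
```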

The “Survey Report Interpretation” flowchart can be found on the RCPAQAP website under “Resources”.

Qualitative reporting

Programs that survey qualitative measurands will also provide a simple, direct comparison of your qualitative results with all results received and with participants using the same method system. Results are compared to a known target and assessed using standardised terms of assessment, “Concordant” or “Discordant”.
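
A minimal sketch of this assessment (a hypothetical helper; the report applies it per sample and per measurand):

```python
def assess_qualitative(result, target):
    """Qualitative results are assessed against the known target."""
    return "Concordant" if result == target else "Discordant"


print(assess_qualitative("Positive", "Positive"))   # Concordant
print(assess_qualitative("Negative", "Positive"))   # Discordant
```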

A
Histograms

Provide a frequency distribution of the results submitted by all participants. The blue circle indicates the participant’s result.

B
Method Used

Provides a breakdown of the method category selection provided by the participant.

C
Summary of results

Provides a summary of performance of all survey samples to date, listing the target result, results from the participant and the assessment provided.

D
Sample 1 histograms

Provides participants with a breakdown of Sample one’s result frequency based on method category, displaying the first three categories.
The grey bars represent the result distribution from all participants, the dark blue bars represent the participant’s method based on the category, and the blue circle represents the participant’s result.

E
Sample 2 histograms

Provides participants with a breakdown of Sample two’s result frequency based on method category, displaying the first three categories.


General and Basic Compatibility Program

The General and Basic Compatibility program targets and acceptable responses are determined by consensus. The final acceptable responses are at the discretion of the RCPAQAP Transfusion Advisory Committee. Each response is allocated a score, which is shown on the report along with a pointer indicating your result.

Performance assessment is based on a numerical scoring system and the ‘performance summary’ of the report indicates the number of points accrued by the laboratory compared to the target score. The scoring system for the general and basic compatibility programs has a maximum number of possible points and participants will lose points for errors. The report follows the standardised reporting structure, which provides the Performance summary, result review, Method comparison (if applicable), Commentary and Cumulative summary of the year’s performance.

The General Compatibility and Basic Compatibility programs are designed for prompt identification of areas requiring corrective action. Due to the complexity of the assessment process, delays to the survey report release date may be experienced. Therefore, in 2023 we plan to distribute a preliminary report one week after the closing date of each survey. This report is intended as a guide for those laboratories who wish to troubleshoot their submitted results compared to the expected results while their remaining samples are still viable.

The tables below show the numeric scoring and performance assessment systems:

General & Basic Compatibility Program – Performance Assessment System

Survey Component | Points per Component (if applicable) | Total Points
Patient Identification | 2 points per assessed result | 8 points
Patient Blood Group* | 2 points per mandatory reaction; 10 points for blood group* | 22 points
Patient DAT | – | 10 points
Patient Antibody Screen* | – | 10 points
Patient Antibody Identification | – | 10 points
Phenotyping | 10 points per assessed result | Variable
Donor Grouping* | 2 points per mandatory reaction; 10 points for blood group* | 18 points per donor unit
Compatibility – Crossmatch Reaction* | 10 points per donor unit | 10 points per donor unit
Compatibility – Transfuse Unit? | 10 points per donor unit | 10 points per donor unit

*Indicates the critical areas of the Transfusion QAP surveys.
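
As an illustration only (a hypothetical tally, not the RCPAQAP scoring engine): the components and point values follow the table above, while how points are deducted for specific errors is determined by the program organisers.

```python
# Hypothetical points awarded in one General Compatibility survey, assuming
# two donor units; (awarded, maximum) per component from the table above.
components = {
    "Patient Identification":            (8, 8),
    "Patient Blood Group*":              (22, 22),
    "Patient DAT":                       (10, 10),
    "Patient Antibody Screen*":          (10, 10),
    "Patient Antibody Identification":   (0, 10),   # antibody misidentified
    "Donor Grouping* (2 units)":         (36, 36),
    "Crossmatch Reaction* (2 units)":    (20, 20),
    "Transfuse Unit? (2 units)":         (20, 20),
}

accrued = sum(awarded for awarded, _ in components.values())
maximum = sum(total for _, total in components.values())
print(f"Accrued {accrued} of {maximum} points ({accrued / maximum:.0%}).")
print("Review any component scoring below its maximum.")
```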

Phenotyping program

Participants provide results for antigens with the method system used. New participants and current participants introducing new antigen-testing profiles in their system must ensure the method classification is updated when entering survey results.

The Phenotyping module target and acceptable responses are determined by consensus (80% consensus must be reached to assess survey results). The final acceptable responses are at the discretion of the RCPAQAP Transfusion Advisory Committee.

The report provides participants with their performance assessment, a review of the performance for all antigens and the cumulative assessment. An example of the result review page is provided below.

Phenotyping program – Result review: Provides a breakdown of all participant responses returned, highlighting the participant’s response as a light blue dot and other laboratories using the participant’s method in dark blue, with the target response for each antigen listed below the respective histogram.

Antibody titre program

The Antibody Titre program target and acceptable range are determined by consensus, with titres above and below the target deemed acceptable. Where a clear consensus cannot be demonstrated, the target value and acceptable range are set by the Transfusion Advisory Committee. Any deviation of a survey target and acceptable range will be noted on the survey report.
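
A sketch of one possible reading of the acceptable range, assuming “above and below” means one doubling dilution either side of the consensus target (this interpretation is an assumption, not stated in the program documentation):

```python
def titre_acceptable(reported, target, steps=1):
    """Accept titres within `steps` doubling dilutions of the target.

    Titres are treated as reciprocal dilutions (e.g. 32 for 1:32), so one
    step either side of a target of 32 accepts 16, 32 or 64.
    """
    acceptable = {target * 2 ** k for k in range(-steps, steps + 1)}
    return reported in acceptable


print(titre_acceptable(64, 32))    # True  (one dilution above the target)
print(titre_acceptable(128, 32))   # False (two dilutions above the target)
```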

The report is structured to provide the performance assessment, a review of the results returned, method performance, commentary and the participant’s cumulative assessment. An example of the result review page is provided below.

Antibody Titre result review – illustrates the responses received from all participants as a bar chart, highlighting peer group responses in dark blue and your result as a light blue dot. The table below provides a summary of performance based on method.

Transfusion Competency Program

The Transfusion Competency Program target and acceptable responses are determined by consensus. The final acceptable responses are at the discretion of the RCPAQAP Transfusion Advisory Committee. Each response is allocated a score, which is shown on the report along with a light blue circle indicating your result.

Performance assessment is based on a numerical scoring system; each section of the report indicates the number of points accrued by the laboratory compared to the expected target score.

The report follows the standardised reporting structure, which provides the Performance summary, result review, Method comparison (if applicable), Commentary and Cumulative summary of the year’s performance.

Transfusion Competency Program – Performance Assessment System

Survey Component | Points per Component (if applicable) | Total Points
1. Patient Identification | 2 points | 8 points
2. Patient Blood Group Reactions | 2 points per mandatory reaction | 22 points
   Patient Blood Group Result | 10 points |
3. Donor Unit Blood Group Reactions | 2 points per mandatory reaction | 18 points per donor unit
   Donor Unit Blood Group Result | 10 points |
4. Patient DAT | – | 10 points
5. Patient Antibody Screen | – | 10 points
6. Patient Antibody Identification | – | 10 points
7. Patient Phenotyping | 10 points | 30 points
   Donor Units Phenotyping | 10 points per donor unit |
8. Compatibility – Crossmatch Reaction | 10 points per donor unit | 20 points

Blood Grouping and Antibody Screen/Identification Program

Acceptable responses for the Blood Group and Antibody programs are based on consensus. The final acceptable responses are at the discretion of the RCPAQAP Transfusion Advisory Committee. After the survey close date, the survey results are reviewed and assessed and participants are issued with a survey report that highlights the laboratory or individual performance. The report follows the standardised reporting structure, which provides the Performance summary, result review, Method comparison (if applicable), Commentary and Cumulative summary of the year’s performance. The guide to interpret the report can be found on the myQAP help page.

Performance is based on a numerical scoring system with points allocated for each correct answer (see table below). Survey reports provide Year to Date scores to allow participants to assess the ongoing performance of the laboratory or individual and to highlight areas requiring corrective action.

Blood Group & Antibody Program – Performance Assessment System

Survey Component | Points per Component (if applicable) | Total Points
1. Patient Identification | 2 points per correct identifier | 8 points
2. Patient Blood Group Reactions | 2 points per mandatory reaction; 10 points for blood group | 20 points
3. Patient Antibody Screen | – | 10 points
4. Patient Antibody Identification | – | 10 points

Snake Venom Program

The survey objectives and acceptable responses for the Snake Venom program are reviewed against RCPAQAP Transfusion survey objectives, in consultation with a specialist toxinologist, so that participant performance can be measured.

Reports follow the standardised format to illustrate survey results, including the Performance summary, Result review, Commentary and Cumulative summary that shows participant performance over the last three surveys.

The result review includes a distribution of laboratory results for the snake immunotype, clinical interpretation and reaction strength with your responses indicated by the light blue circle. An example of the result review page is provided below.

Snake Venom Detection Program Performance Assessment System

Test Component | Component Assessment
Snake Immunotype | Target Immunotype (Initial)
Clinical Interpretation | Most appropriate Clinical Response (Initial)

Participants in the Snake Venom program are also asked to provide a revised interpretation of results based on supplied clinical information. This is not an assessable element of the survey; however, it is a useful educational tool to aid laboratory staff in understanding the clinical implications of reported findings.

Snake Venom – Result review page, illustrating the target responses in the table at the top of the page and charts illustrating the responses received from all laboratories. Results from the participant are highlighted as a blue dot.

Last updated on August 28, 2024