Histograms
Provide a frequency distribution of the results submitted by all participants. The blue bar represents the median or target result; the blue circle indicates the participant’s result.
The Chemical Pathology discipline programs are designed to review the performance of analytical procedures and provide peer group comparison by directly comparing individual participant results with results from participating laboratories. The reports display graphical representations showing results from the same method, instrument and reagent groups (if applicable). Participants’ results are compared to a consensus median (calculated median from all method classifications or a calculated median of method-specific results) or a specific target for a particular test.
All Chemical Pathology reports are structured to provide:
Survey Reports summarise every pair of specimens for each measurand and provide summary data on your performance throughout the cycle. Reports provide a graphical comparison of individual results with all results received and with participants using the same method categories – examples are the analytical principle, measurement system and reagent source.
Quantitative results are usually compared with the specific target value (if known), the “overall median” or the relevant method category median. Analytical Performance Specifications (APS) are set either side of the expected value. Non-numerical results (descriptive results) are compared with a target value or the overall method group consensus.
The Analytical Performance Specifications (APS) are unique for each measurand, and the acceptable range for each specimen is calculated from the central value (target, median or weighed-in value). These ranges are displayed in the report histograms and Youden Plots. The comment “Low” or “High” is added if the result is outside the APS and will be highlighted for review.
Note: The z-score is provided as an additional parameter to demonstrate the number of standard deviations a participant’s result is away from the mean value. It measures performance based on what is achievable from test methods. Participants are not assessed against the z-score.
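As a minimal sketch of the z-score calculation (the group mean and SD here are hypothetical inputs; the report derives them from all submitted results):

```python
def z_score(result, mean, sd):
    """Number of standard deviations a participant's result lies from the group mean."""
    return (result - mean) / sd

# e.g. a result of 110 against a group mean of 100 with SD 5 gives z = 2.0
```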
The new report format follows a standard structure for all disciplines that survey quantitative measurands with two samples in each survey challenge.
The survey reports incorporate linear regression analysis to determine the precision and accuracy of the testing procedure.
Some programs allow participants to submit results in SI or mass units. The report will default to the units submitted by the laboratory.
If survey results are not received by the close date an appropriate message is returned.
The structure of the report is as follows:
Please review results returned for Sample(s): | CP-TM-20-17 | CP-TM-20-18 |
Adrenocorticotropic hormone | High but within APS of category group | High but within APS of category group |
Beta-2-microglobulin | Low but within APS of category group | Low but within APS of category group |
Thyroglobulin | High but within APS of category group | High but within APS of category group |
The Summary of Performance provides participants with an assessment of their overall performance and indicates what measurands require further review.
APS score calculation
Measurand APS: ±0.5 up to 5.00 ×10⁹ cells/L; ±10.0% above 5.00 ×10⁹ cells/L (5.00 ×10⁹ cells/L is the measurand decision point)
Lab result above decision point: Lab result = 19.30 (above decision point, 10% range applies). Target result: 18.40. Measurand limit: 10% of 18.40 = 1.840. APS Score = (19.30 – 18.40) / 1.840 = 0.49
Lab result below decision point: Lab result = 2.90 (below decision point, 0.5 range applies). Target result: 2.60. Measurand limit: 0.5. APS Score = (2.90 – 2.60) / 0.5 = 0.6
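The two worked examples follow a single rule, sketched below (the decision point and limits are taken from the example above and would differ for each measurand):

```python
def aps_score(lab_result, target, decision_point=5.00, abs_limit=0.5, rel_limit=0.10):
    """APS score: difference from target divided by the measurand limit.
    Below the decision point the limit is absolute; above it, a fraction of the target."""
    limit = rel_limit * target if lab_result > decision_point else abs_limit
    return (lab_result - target) / limit
```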
Provide a frequency distribution of the results submitted by all participants. The blue bar represents the median or target result; the blue circle indicates the participant’s result.
Lists the results and method categories returned by the participant.
The APS that has been assigned to the measurand.
Represent a scatter of two sample results plotted against each other. The sample with the higher measurand level is plotted on the y-axis against the lower level on the x-axis. Five Youden plots are presented, illustrating from left to right: all results, the participant’s method, analytical principle, measurement system and reagent. The participant’s result is highlighted by the blue dot.
Examples of Youden charts
Biased results | Biased Laboratory |
Displays the participant’s z-score for up to 6 sets of returned results and provides an indication of the precision and accuracy of results within a survey year.
Displays the APS score for up to 6 sets of returned results and provides an indication of the precision and accuracy of results within a survey year.
Examples of Plots
Imprecision | Bias |
Displays the participant result against the expected result, and indicates linearity across different survey sample levels.
Examples of Plots
Imprecision | Bias |
Measurand performance: The Measurand performance page provides participants with a breakdown of the results returned.
Precision and Accuracy
High / Low or Low / High = Imprecision
High / High or Low / Low = Bias
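This rule of thumb can be expressed directly (the flags are the H/L review flags for the two samples of a challenge):

```python
def error_type(flag_sample1, flag_sample2):
    """Opposite High/Low flags across the two samples suggest imprecision;
    matching flags suggest bias."""
    return "Imprecision" if flag_sample1 != flag_sample2 else "Bias"
```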
Indicates what targets are used to compare results (the BNP example above shows results assessed against the instrument category). The table columns represent the Sample ID, Measurement System category assigned for result comparison, median result, participant’s result, H/L review flag (W= Within, no review required), Z-score and the APS score.
Linear regression analysis is based against the target source of each sample, which could be the “median of all results”, “median of the assessment category” (measurement system) or the median of a “specified target”.
When performing linear regression, the following rules of assessment are applied:
Provides the slope and intercept calculated from the linear regression analysis. Also provides the target low and high level, with your corresponding low and high levels determined from the “line of best fit”.
Using the slope and intercept, the values of your line of best fit are calculated at the lowest and highest target values for the cycle.
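A minimal sketch of this step, assuming an ordinary least-squares fit of participant results against the per-sample targets (the report’s exact fitting procedure is not spelled out here):

```python
def fit_line(targets, results):
    """Ordinary least-squares slope and intercept of results vs. targets."""
    n = len(targets)
    mx, my = sum(targets) / n, sum(results) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(targets, results))
    sxx = sum((x - mx) ** 2 for x in targets)
    slope = sxy / sxx
    return slope, my - slope * mx

def best_fit_at(slope, intercept, target):
    """Evaluate the line of best fit at a target value (e.g. the cycle's
    lowest and highest targets)."""
    return slope * target + intercept
```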
To provide a comparison of all methods, linear regression is performed against the same data source (all results or specific target). The slopes represent the bias obtained by all laboratories: slopes from all labs are shown in ‘grey’, slopes from participants using the same assessment category in ‘navy blue’, and your slope is highlighted in ‘light blue’.
This histogram represents the imprecision (CV%) obtained by all laboratories, calculated from the scatter around the target-source linear regression line of best fit. The histogram shows all labs in ‘grey’, participants using the same assessment category in ‘navy blue’, and your CV% as a ‘light blue’ dot.
Provides participants with the SD, CV and Bias calculated from linear regression and how your results ranked against all laboratories that participated. Ranking (from 0, best, to 100, worst) is illustrated on the top row of the scale provided under the bar.
Standard Deviation: The SD is the standard error of the estimate (Sy.x) and can be regarded as the average SD across the range of concentrations analysed. SD provides a value in the units of the test. SD will tend to be high if you report high results and low if you report low results.
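A sketch of the Sy.x calculation, assuming the usual convention of n − 2 degrees of freedom for a fitted line:

```python
import math

def standard_error_of_estimate(targets, results, slope, intercept):
    """Sy.x: root-mean-square scatter of results around the fitted line,
    with n - 2 degrees of freedom."""
    residuals = [y - (slope * x + intercept) for x, y in zip(targets, results)]
    return math.sqrt(sum(r * r for r in residuals) / (len(residuals) - 2))
```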
Coefficient of Variation: The SD divided by the mid-point of your laboratory’s range of concentrations, expressed as a percentage:
CV% = SD / ((low value + high value) / 2) × 100
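In code, the CV% formula reads:

```python
def cv_percent(sd, low_value, high_value):
    """CV%: SD divided by the mid-point of the laboratory's concentration
    range, expressed as a percentage."""
    return sd / ((low_value + high_value) / 2) * 100
```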
Average Bias:
Your biases at the low value, mid value and high value are determined. These are the differences between the line of expectation (45° line) and your line of best fit.
The average bias is calculated as:
Bias = ([low bias] + [mid bias] + [high bias]) / 3
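The average-bias formula, as a one-line sketch (the three bias values below are hypothetical):

```python
def average_bias(low_bias, mid_bias, high_bias):
    """Mean of the biases at the low, mid and high values of the range."""
    return (low_bias + mid_bias + high_bias) / 3
```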
Provides participants with a table summarising their SD, CV, Bias, MPS and performance ranking.
Provides participants with information, such as the target source used for linear regression and lists any method category changes made across the survey sample range being analysed.
The assessment criteria are defined as measurand performance. As analytical error is due to both imprecision and bias, program organisers have defined Total Error as follows:
Total Error = 2SD + Bias
The quality of your laboratory’s performance is then determined by comparing the Total Error to the Analytical Performance Specification at the mid-point of the range of measurand concentrations for the cycle as follows:
Measurand Performance Score (MPS) = (2SD + Bias) / Analytical Performance Specification
These examples of bicarbonate analyses may assist in understanding this method of assessment.
Example – Laboratory 1
SD = 0.8 mmol/L; Bias = 0.5 mmol/L
Total Error = (2 × 0.8) + 0.5 = 2.1 mmol/L
Measurand Performance = 2.1 / 2.5 = 0.84
Note: When the Total Error is less than the Analytical Performance Specification then the Measurand Performance will be less than 1.0. This is the desired level of performance.
Example – Laboratory 2
SD = 1.5 mmol/L; Bias = 0.1 mmol/L
Total Error = (2 × 1.5) + 0.1 = 3.1 mmol/L
Measurand Performance = 3.1 / 2.5 = 1.24
An undesirable result – due predominantly to imprecision.
Example – Laboratory 3
SD = 0.5 mmol/L; Bias = 2.0 mmol/L
Total Error = (2 × 0.5) + 2.0 = 3.0 mmol/L
Measurand Performance = 3.0 / 2.5 = 1.20
An undesirable result – due predominantly to bias.
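The three laboratory examples can be checked with a short sketch of the MPS calculation (the 2.5 mmol/L denominator is the Analytical Performance Specification used in the worked fractions above):

```python
def measurand_performance_score(sd, bias, aps):
    """MPS = Total Error / APS, where Total Error = 2*SD + Bias."""
    return (2 * sd + bias) / aps

# Laboratory 1: (2*0.8 + 0.5) / 2.5 = 0.84 -> desirable (< 1.0)
# Laboratory 2: (2*1.5 + 0.1) / 2.5 = 1.24 -> undesirable, mainly imprecision
# Laboratory 3: (2*0.5 + 2.0) / 2.5 = 1.20 -> undesirable, mainly bias
```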
The method comparison provides a breakdown of all the methods (assessment categories) used by participants. It lists the statistics calculated for the latest survey results (left) as well as the Precision and Accuracy results calculated from the linear regression analysis over the sample range used, providing the median values for each method listed to facilitate peer group comparison. The full set of statistics is shown only when there are four or more values in the dataset; three values show the median value only, and method categories with two or fewer users present no statistical data.
The “Survey Report Interpretation” flowchart can be found on the RCPAQAP website under “Resources” or can be accessed by clicking this link.
Programs that survey qualitative measurands will also provide a simple, direct comparison of your qualitative results with all results received and with participants using the same method system.
The full name of the measurand.
The method classification the laboratory has submitted. Stored with each pair of results.
Ensure that your method classification is correct. If the method classification information provided by us does not allow for adequate definition of your method then contact the RCPAQAP.
Note: Measurands with no method classification and no results will not be printed. Consequently, if you wish to receive a report for a measurand for which you do not submit results then provide a method classification.
Histograms showing the distribution of all results.
Relative position ● of the result reported by your laboratory.
A complete record of data held by program organisers for the cycle.
These Histograms show the distribution of results for your method, your analytical principle and your reagent group across the assessment criteria groupings for the first sample analysed in this assessment.
These Histograms show the distribution of results for your method, your analytical principle and your reagent group across the assessment criteria groupings for the second sample analysed in this assessment.
The Patient Report Comments Program transitions to the eQuality platform in 2023. This program is an educational self-assessment tool for individuals who, in the course of their duties, would attach comments to results sent out from their laboratories or provide interpretative advice by telephone (e.g. Duty Biochemists, Chemical Pathologists and Scientists), as well as personnel who are training for such duties.
Cases
One case per month is offered over ten months in a year. Each case report has patient information (age, gender and location of the patient, as well as brief clinical notes), the set of biochemistry results requiring comments and additional relevant information or results available to the laboratory.
Comments
Participants comment on the results assuming that they have been asked to provide an interpretative comment by the requesting clinician. The comment should follow the pattern of a written report.
Note that submitted comments longer than the allowable commenting space will be truncated at the space limit.
Review of Comments
The participants’ comments historically have been broken down into components by RCPAQAP staff, and the components summarised into common keywords or phrases. Frequently used keywords or phrases are now provided as an additional option to participants to select upon result entry. Participants should still enter their free text comment as previously and can also choose related keywords based on their comments (if available from the dropdown list). Please note: if keywords are selected by a participant, they must still relate to the free text comment. RCPAQAP staff, in consultation with the Patient Comments Advisory Committee, will either add new keywords or delete keywords that do not relate to the free text comments submitted.
Each key-phrase will be classified as ‘Preferred’, ‘Less Relevant’, ‘Not Supported’ or ‘Misleading’ by the Advisory Committee, which will generate a summary report with the classification of all key-phrases, a ‘suggested’ (ideal) comment and a case discussion rationale.
Preferred
Keywords classified as ‘Preferred’ are those considered appropriate, correct and adding value to the results and therefore of utility to the clinician who would receive the report. They may relate to diagnosis or differential diagnosis, possible interferences or suggestions for further testing. They should assist with interpretation of the results for the measurement but may include reference to or input from additional information.
Less Relevant
Keywords classified as ‘Less Relevant’ are those considered not to add value to the current results, i.e. not useful to the requesting doctor, although not erroneous or misleading. Comments focusing on the additional information rather than the results requiring comment are also likely to fall into this category.
Not Supported
Keywords classified as ‘Not Supported’ are diagnoses that are possibly correct but not supported by the supplied information, or tests not considered to be indicated given the available information.
Misleading
Keywords classified as ‘Misleading’ are those which may lead to a wrong interpretation, misdiagnosis or mismanagement of the patient.
Supervisor Reports are designed for a nominated person (Coordinator) who has an interest in overseeing a group of participants and/or sites. This common interest may be regional, organisational, special interest, instrument or reagent groups.
A Supervisor Report may be set up by anyone wanting to set up a collaborative group of participants with a common interest. There must be sufficient participants sharing this common interest to make the generated statistics a true representation of the group. A minimum of five participants is generally suggested to make the Supervisor Report viable.
Each Supervisor Report has a designated Coordinator who is the recipient of the reports and has the responsibility to disseminate information to members of the group and to maintain confidentiality of all results.
Supervisor Reports are provided to the nominated Coordinator of the group after each Survey run and at the end of each survey. There is an annual enrolment fee for a Supervisor Report.
The proposer of a new Supervisor Report should write to the RCPAQAP by logging a request through the myQAP participant portal to request the set-up of the group and to nominate a Coordinator.
The RCPAQAP will liaise with the proposed Coordinator of the group who will be sent Supervisor Report Coordinator Agreement and Supervisor Report Participant Authorisation forms to complete and send back to the RCPAQAP.
Supervisor Reports can be ordered through the myQAP website when enrolling or by contacting the RCPAQAP Customer Service Team directly.
Confidentiality
Program organisers hold information on each participant obtained from the RCPAQAP in strict confidence. The Coordinator of each Supervisor Report undertakes to keep the name of the individual participants confidential and only to release summaries of the performance of methods, instruments and coded results.
Available Supervisor Reports
There are a number of Supervisor Reports that are open to all participants to join. Participants who wish to be part of an existing Supervisor Report can fill in the Supervisor Report Participation Form.
The Supervisor Report Participation Form gives the RCPAQAP permission to release your results to the nominated Co-ordinator of the chosen Supervisor Report for the program(s) you nominate.
Supervisor Report Participation Form
Please follow the link to view the list of available Supervisor Reports and to access the Supervisor Report Participation Form.
Supervisor Report Interpretation Notes
Supervisor Reports are provided to the Supervisor Report Co-ordinator following each survey run. Participant data are included only if they have provided written approval.
An example of the Supervisor report can be found on the myQAP help page.