Reporting

This is a summary of how we report on results submitted as part of our rounds. Please contact us with any queries regarding our reporting, or with suggestions for improving this guide.

Here at BIRA, we use statistics endorsed by ISO 13528, which paves the way for future accreditation.

The assigned values are taken from the results submitted – not from a value measured by a reference laboratory. To calculate these, we use robust statistics (the assigned value is the median) rather than the traditional approach (the assigned value is the mean). Robust statistics are less sensitive to outliers and better suited to smaller data sets. We do screen for gross outliers (typos, errors, etc.), but no further outlier removal is performed, as it isn't necessary with a robust approach.
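As a minimal sketch of this calculation (assuming Python with NumPy; the results and the screening cut-off below are made up for illustration, not taken from a real round):

```python
import numpy as np

# Hypothetical submitted results for one analyte (arbitrary units)
results = np.array([5.1, 5.3, 4.9, 5.2, 5.0, 5.4, 51.0])  # 51.0 is a gross outlier (e.g. a typo)

# Gross outliers are screened out first; here we simply drop anything implausibly large
screened = results[results < 10]

# Robust assigned value: the median of the screened results, not the mean
assigned_value = np.median(screened)
print(assigned_value)  # 5.15
```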

To gauge acceptable variability in the results, we use a pre-assigned deviation (SDPA). This value is based upon our understanding of what is acceptable for the given method(s). If we are not confident that we understand what an acceptable level of variation is for a particular method, we use the normalised median absolute deviation (MAD) instead. We also calculate this for each analyte regardless of whether we already have a pre-assigned deviation, as it allows us to gauge our performance as a cohort. If the normalised MAD is greater than the pre-assigned deviation, then we can recognise that there is room for improvement as a cohort.
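A sketch of the normalised MAD and the cohort check described above (again Python with NumPy; the 1.4826 scaling factor makes the MAD comparable to a standard deviation for normally distributed data, and the pre-assigned deviation shown is a made-up figure):

```python
import numpy as np

def normalised_mad(values):
    """Normalised median absolute deviation: 1.4826 * median(|x - median(x)|)."""
    values = np.asarray(values, dtype=float)
    return 1.4826 * np.median(np.abs(values - np.median(values)))

results = [5.1, 5.3, 4.9, 5.2, 5.0, 5.4]
sd_pa = 0.25                         # hypothetical pre-assigned deviation for this method
n_mad = normalised_mad(results)      # ~0.22 for these figures

# If the cohort's observed spread exceeds the pre-assigned deviation,
# there is room for improvement as a cohort
print("room for improvement as a cohort" if n_mad > sd_pa else "cohort within expectations")
```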

As in most interlaboratory schemes, we assess individual performance using z scores. Each result is given a z score by subtracting the assigned value from the result and dividing the difference by the assigned deviation.

  • A |z| between 0 and 2 is considered acceptable,
  • a |z| between 2 and 3 is questionable and should be monitored for subsequent performance,
  • and a |z| greater than 3 is unacceptable and should be investigated.
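A minimal sketch of the calculation and classification (Python; the result, assigned value, and deviation below are hypothetical):

```python
def z_score(result, assigned_value, assigned_deviation):
    """z = (result - assigned value) / assigned deviation."""
    return (result - assigned_value) / assigned_deviation

def performance(z):
    """Classify a z score against the thresholds listed above."""
    if abs(z) <= 2:
        return "acceptable"
    if abs(z) <= 3:
        return "questionable - monitor subsequent performance"
    return "unacceptable - investigate"

z = z_score(result=5.62, assigned_value=5.15, assigned_deviation=0.20)
print(round(z, 2), performance(z))   # 2.35 questionable - monitor subsequent performance
```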

If the group is performing as expected for the method (according to the assigned deviation), virtually all results will score between -3 and 3.

BIRA uses paired samples – we ask you to test each analyte twice, once in each of two distinct samples. This enables us to use Youden plots, a powerful graph format that simultaneously indicates intra-laboratory repeatability and inter-laboratory reproducibility. Sample A is plotted against Sample B. The central box encloses satisfactory results (|z| < 2), the larger box encloses questionable results (|z| between 2 and 3), and points outside both boxes indicate unsatisfactory results (|z| > 3).
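A simplified sketch of such a plot, drawn in z-score space rather than from raw results (assuming Python with NumPy and matplotlib; the paired data are simulated purely for illustration):

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical paired z scores: Sample A vs Sample B for each participant
rng = np.random.default_rng(1)
z_a = rng.normal(0, 1, 30)
z_b = z_a + rng.normal(0, 0.5, 30)   # paired results tend to fall along the diagonal

fig, ax = plt.subplots()
ax.scatter(z_a, z_b)

# Central box: |z| < 2 (satisfactory); outer box: |z| < 3 (questionable region)
for limit, style in [(2, "-"), (3, "--")]:
    ax.add_patch(plt.Rectangle((-limit, -limit), 2 * limit, 2 * limit,
                               fill=False, linestyle=style))

# Diagonal line: points close to it show good intra-laboratory repeatability
ax.plot([-4, 4], [-4, 4], linewidth=0.8)
ax.set_xlabel("Sample A (z score)")
ax.set_ylabel("Sample B (z score)")
ax.set_title("Youden plot (sketch)")
plt.show()
```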

The diagonal line can also be used to inform performance – the closer a point is to the diagonal line, the greater the intra-laboratory repeatability. Points outside the boxes but along the diagonal line indicate that both samples were askew, which is likely the result of systematic error (such as an incorrectly calibrated instrument). Points outside the boxes and away from the diagonal indicate that one sample was fine and the other askew, which is likely the result of random error (such as the filter paper for one sample leaking but not the other).
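A rough sketch of how a single out-of-box point might be read (Python; the distance-to-diagonal cut-off of 1 is an arbitrary illustrative threshold, not a BIRA rule):

```python
import math

def interpret_point(z_a, z_b):
    """Rough interpretation of one Youden-plot point, following the logic above."""
    if max(abs(z_a), abs(z_b)) <= 3:
        return "inside the boxes - no concern raised by this check"
    # Perpendicular distance from the point to the 1:1 diagonal
    distance_to_diagonal = abs(z_a - z_b) / math.sqrt(2)
    if distance_to_diagonal < 1:   # illustrative cut-off only
        return "both samples askew - suggests systematic error (e.g. calibration)"
    return "one sample askew - suggests random error (e.g. a one-off handling problem)"

print(interpret_point(3.6, 3.4))   # both samples askew - suggests systematic error
print(interpret_point(0.4, 3.8))   # one sample askew - suggests random error
```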