Have you ever wondered why testing the same material in different labs sometimes gives slightly different results? The answer usually comes down to standards. Choosing the right standards, and knowing when to make your own, determines whether your XRF results are reliable, comparable, and trustworthy.
Below, we’ll walk through the science behind XRF standards in simple terms, explain the “tricks” materials can play on measurements, and share practical tips for keeping your instrument honest.
XRF machines don’t measure how much of an element is present directly—they measure X-ray intensities. To turn those signals into accurate elemental concentrations, scientists use standards, which are materials with known compositions.
Standards act like a “ruler” for your XRF machine. Without them, the readings are just numbers, not meaningful composition. Using proper standards ensures results are traceable, comparable across labs, and defensible.
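To make the “ruler” idea concrete, here is a minimal sketch, in illustrative Python with made-up numbers, of how standards with known compositions turn raw intensities into a calibration line. This is not vendor software, just the underlying idea:

```python
import numpy as np

# Hypothetical standards: known Fe2O3 concentrations (wt%) and the net
# X-ray intensities (counts per second) measured for each one.
known_conc = np.array([1.0, 5.0, 10.0, 20.0, 40.0])
intensity = np.array([210.0, 1050.0, 2080.0, 4150.0, 8300.0])

# Fit a straight-line calibration: concentration = slope * intensity + offset.
slope, offset = np.polyfit(intensity, known_conc, deg=1)

# An unknown sample's intensity can now be converted to a concentration.
unknown_intensity = 3100.0
print(f"Estimated Fe2O3: {slope * unknown_intensity + offset:.2f} wt%")
```

In practice your instrument’s software does this fitting for you; the point is that without the known concentrations on one side, the counts on the other are just numbers.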
Even when an element is present at the same concentration, the XRF reading can vary depending on the matrix, meaning the rest of the material surrounding that element. Neighboring elements can absorb the analyte’s X-rays or enhance its signal.
This is why two identical-looking samples can give different numbers unless the standard matches the sample type (matrix).
XRF machines measure elements by detecting the X-rays they give off. But sometimes the surrounding material (the “matrix”) changes the fluorescence signal, making readings too high or too low. This happens in two main ways: absorption and enhancement.
Absorption example: When measuring S in coal, Fe can absorb some of sulfur’s X-rays, so S appears lower than it actually is.
Enhancement example: In steel, Fe can excite Cr, so Cr readings come out artificially high.
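One common way to compensate for these effects mathematically is an influence-coefficient model such as Lachance–Traill, in which each element’s concentration is corrected by coefficients describing how every other element distorts its signal. Here is a simplified sketch with hypothetical coefficients; a real calibration would derive them from standards or fundamental parameters:

```python
import numpy as np

# Lachance-Traill style correction (simplified and illustrative only):
#   C_i = R_i * (1 + sum_j alpha_ij * C_j)
# R_i is element i's measured intensity ratio; alpha_ij says how element j
# absorbs (positive) or enhances (negative) element i's signal.
elements = ["Fe", "Cr", "Ni"]
R = np.array([0.60, 0.15, 0.08])  # hypothetical intensity ratios

# Hypothetical influence coefficients (real ones come from calibration).
alpha = np.array([
    [0.00, 0.10, 0.05],    # how Cr and Ni affect Fe
    [-0.12, 0.00, 0.04],   # Fe enhances Cr, hence the negative coefficient
    [0.08, 0.03, 0.00],    # how Fe and Cr affect Ni
])

# Concentrations appear on both sides, so solve by fixed-point iteration.
C = R.copy()
for _ in range(50):
    C = R * (1.0 + alpha @ C)

for name, c in zip(elements, C):
    print(f"{name}: {c:.3f} mass fraction")
```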
Using matrix-matched standards means your standard is the same type of material as your sample and has a similar composition. This automatically compensates for absorption and enhancement effects, giving reproducible results across different labs.
A classic example is soil testing. Calibrating with pure oxides fails because real soils contain moisture and organics that affect X-ray absorption differently. Matrix-matched standards solve this mismatch.
Whether and how standards are used depends on the calibration method. Let’s think about how to choose the right one. There’s no single “best” calibration strategy because the best choice depends on your needs and what is available.
Yes! Sample preparation is critical for optimizing sensitivity and reducing sample-to-sample variation.
Even with perfect calibration, using the wrong sample form can introduce bias.
Even well-prepared pellets can differ in density or height. Compton normalization is especially useful for pressed powder pellets where preparation can vary. This correction method uses the Compton scatter peak—a signal created when tube X-rays scatter off the sample’s electrons. This peak is sensitive to the sample’s density and composition.
By comparing the analyte’s intensity to the Compton peak intensity, analysts can normalize measurements to account for differences in sample packing, particle size, or matrix variations.
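As a rough illustration, here is what that normalization looks like in a few lines of Python (all counts are hypothetical):

```python
import numpy as np

# Three pressed pellets of the same material, packed slightly differently.
analyte_counts = np.array([1500.0, 1480.0, 1620.0])     # analyte peak (cps)
compton_counts = np.array([52000.0, 50800.0, 56500.0])  # Compton scatter peak

# Packing differences move both peaks in the same direction, so the ratio
# is far more stable than either raw count.
normalized = analyte_counts / compton_counts

print("raw counts:", analyte_counts)
print("normalized:", np.round(normalized, 5))
```

In this made-up data the raw analyte counts spread by roughly 9%, while the Compton-normalized ratios agree to within about 2%, which is exactly why the technique helps with variable pellet preparation.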
Mining and catalyst labs often deal with uncommon or highly specialized sample matrices for which no commercial standards exist, making custom reference materials (RMs) necessary for accurate measurement and ongoing quality assurance.
Technically, the number of standards needed is the number of elements in the sample squared. Practically, a more useful guideline is the number of elements in the sample plus four. Basing the standard count on the element count ensures there are enough degrees of freedom to fit the correction factors that compensate for matrix absorption and enhancement effects, and to model both linear and non-linear responses accurately.
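To put numbers on those two rules, a quick sketch (element counts here are arbitrary examples):

```python
def standards_needed(n_elements: int) -> tuple[int, int]:
    """Return (theoretical, practical) standard counts per the rules above."""
    return n_elements ** 2, n_elements + 4

for n in (3, 8, 12):
    theoretical, practical = standards_needed(n)
    print(f"{n} elements: up to {theoretical} in theory, about {practical} in practice")
```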
As you can see, it can take a large number of standards to calibrate for a complex multi-element material like soil or fly ash, so the fundamental parameters (FP) approach in XRF is an excellent alternative that requires few standards.
Even the best calibration drifts over time, so regular QC checks are essential.
Small shifts over time are more dangerous than occasional outliers, so consistent monitoring is key.
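One simple way to put that into practice is to measure a check standard regularly and watch a rolling mean against a control limit, so gradual drift gets flagged before any single reading looks alarming. A minimal sketch with hypothetical readings and limits:

```python
import numpy as np

# Daily readings of a check standard certified at 10.00 wt% (hypothetical).
readings = np.array([10.01, 9.99, 10.02, 10.00, 10.04, 10.06, 10.07, 10.09])
certified = 10.00
control_limit = 0.05  # how far the rolling mean may wander, in wt%

window = 4
rolling_mean = np.convolve(readings, np.ones(window) / window, mode="valid")

for day, mean in enumerate(rolling_mean, start=window):
    drift = mean - certified
    flag = "  <-- drifting, recalibrate" if abs(drift) > control_limit else ""
    print(f"day {day}: rolling mean {mean:.3f} (drift {drift:+.3f}){flag}")
```

Notice that no single reading in this example is an obvious outlier, yet the rolling mean crosses the limit by the last day, which is precisely the slow shift that spot checks miss.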
ICP is better for ultratrace detection and can measure extremely low concentrations—down to sub-ppm or even ppb levels. ICP is preferred when maximum sensitivity is required.
XRF is fast, non-destructive, economic, and works especially well for solid, mineral-based, or metal samples such as ores, cements, alloys, and catalysts, where traces to major concentrations and bulk composition are the focus.
Why this matters: Understanding the strengths and limits of each technique helps labs choose the right tool. Reach for ICP when accuracy at very low levels is critical, and for XRF when speed, cost, and convenience matter more.
I hope this article helps you understand XRF standards better and gives you more confidence in how you approach your own XRF analysis. If you ever have questions about standards, calibration choices, or sample preparation, you can talk to one of our XRF experts anytime.
Use the “Talk to an expert” button at the top right of the page or contact us at info@rigaku.com.