Quantitative Analysis with XRF: Calibration Steps for Accurate Results

This is a written summary of a live webinar presented on February 27, 2026. The recording and resources are available on the recording page.

Presented by:

Carmen Kaiser-Brügmann

XRF Application Scientist

Rigaku

Webinar summary

The webinar is a practical walkthrough of how to build and maintain accurate quantitative calibrations for WDXRF, with the core message that XRF is a relative technique: accuracy depends on how well measured intensities are tied to appropriate standards, corrected for matrix effects, and kept stable over time.

Carmen opens by distinguishing WDXRF (higher resolution/sensitivity due to optics like collimators, analyzing crystals, detectors) from EDXRF (simpler, compact, commonly used for screening but generally less accurate for quant work). She notes that, depending on hardware configuration, XRF can cover elements from boron through very heavy elements, and that achievable concentration ranges are fundamentally limited by how the calibration is designed.

A large part of the talk focuses on why calibration matters: because measured intensity must be mapped to concentration using known standards; because instrument response drifts over time; because many industries require traceability and compliance; and because different sample matrices change absorption/background and therefore apparent intensity. This sets up the recurring theme: calibration accuracy is not only about curve fitting, but about controlling matrix effects, choosing correct standards, optimizing measurement conditions, and validating/monitoring performance.

She defines the basic calibration idea as correlating intensity vs concentration at a characteristic 2θ angle for each element, aiming for linear behavior where possible, but noting software can support non-linear fits when needed. A key pitfall is analyzing unknowns whose matrix or preparation type (pressed powder vs polymer vs fusion bead, etc.) does not match the calibration’s intended matrix and range—this is a common source of bad results. She also states a practical design rule: calibration curves should cover at least ~95% of the natural composition distribution of expected unknowns, otherwise accuracy will suffer at the edges or outside range.
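The intensity-to-concentration mapping she describes can be sketched as an ordinary least-squares line fitted to a handful of standards and then inverted for unknowns. This is a minimal illustration only; the concentrations, intensities, and element are made up, and real software supports weighted and non-linear fits as well.

```python
# Minimal sketch of an empirical XRF calibration: fit a straight line to
# measured intensity (kcps) vs certified concentration (wt%) for a set of
# standards, then invert it to quantify an unknown. Values are illustrative.
concentrations = [0.10, 0.50, 1.00, 2.00, 5.00]   # wt%, from certificates
intensities    = [0.42, 1.95, 3.88, 7.71, 19.30]  # kcps, measured

n = len(concentrations)
mean_c = sum(concentrations) / n
mean_i = sum(intensities) / n
slope = sum((c - mean_c) * (i - mean_i)
            for c, i in zip(concentrations, intensities)) \
        / sum((c - mean_c) ** 2 for c in concentrations)
intercept = mean_i - slope * mean_c

def quantify(intensity_kcps):
    """Map a measured intensity back to concentration via the fitted line."""
    return (intensity_kcps - intercept) / slope

print(round(quantify(9.65), 2))  # unknown well inside the calibrated range
```

Note that `quantify` is only trustworthy inside the concentration range spanned by the standards, which is exactly the design rule above.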

The webinar contrasts two main calibration strategies. In empirical calibration, matrix-matched standards are crucial, and the regression is improved with correction models such as theoretical alpha corrections or absorption/enhancement corrections. In fundamental parameters (FP) calibration, theoretical intensities are calculated and matrix corrections are inherently part of the model, often giving more linearity and allowing a wider practical range, but FP requires knowing (or estimating) the full sample composition and often relies on an “average” composition approach that can be a limitation. Carmen emphasizes that FP can be more flexible regarding exact standard matching than strictly empirical approaches, though standards are still required.
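The theoretical alpha idea can be illustrated with a Lachance–Traill-style correction, where each concentration is the measured intensity ratio scaled by a matrix term, solved iteratively. The alpha coefficients and intensity ratios below are invented for a binary Fe–Ni case and do not come from the webinar.

```python
# Sketch of a Lachance-Traill-style alpha correction for a binary Fe-Ni case.
# R holds intensity ratios relative to the pure element; the alpha
# coefficients are illustrative, not real instrument values.
def lachance_traill(R, alphas, iterations=20):
    """Solve C_i = R_i * (1 + sum_j alpha_ij * C_j) by fixed-point iteration.
    R: dict element -> intensity ratio; alphas: dict (i, j) -> alpha_ij."""
    C = dict(R)  # start from the uncorrected ratios
    for _ in range(iterations):
        C = {i: R[i] * (1 + sum(alphas.get((i, j), 0.0) * Cj
                                for j, Cj in C.items() if j != i))
             for i in R}
    return C

R = {"Ni": 0.080, "Fe": 0.900}    # illustrative measured intensity ratios
alphas = {("Ni", "Fe"): 0.20,     # Fe strongly absorbs Ni K-alpha
          ("Fe", "Ni"): -0.05}    # enhancement: apparent Fe reads high
C = lachance_traill(R, alphas)    # corrected concentration estimates
```

The fixed point converges in a few iterations here; real FP software solves a much richer physical model, but the "correct each element using the others" structure is the same.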

Before calibration curves are even built, Carmen stresses selecting measurement conditions that achieve the needed analytical precision. She walks through WDXRF hardware and how method setup choices affect sensitivity, resolution, and background: tube power settings (kV/mA) tailored to element groups; choosing excitation conditions to avoid unnecessary excitation of elements that can cause overlaps or raise background; and using spectral review (peak/background ratio and counting statistics) to predict precision. Several examples show how changing power settings can reduce background or improve peaks for heavy-element spectra and for light-element signals like Si.
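The counting-statistics side of this can be made concrete: X-ray counting is Poisson, so the standard deviation of N counts is √N, and the relative error of the net peak (peak minus background) predicts the precision a given set of conditions can deliver. The counts below are illustrative.

```python
import math

# Counting-statistics sketch: sigma = sqrt(N) for Poisson counting, and the
# peak and background errors add in quadrature, so the relative error of the
# net intensity predicts achievable precision. Counts are illustrative.
def net_counting_error(peak_counts, bg_counts):
    """Relative standard deviation (%) of the net (peak - background) signal."""
    net = peak_counts - bg_counts
    sigma_net = math.sqrt(peak_counts + bg_counts)
    return 100.0 * sigma_net / net

# Doubling the accumulated counts (longer time or higher power, where
# permissible) improves the relative error by roughly sqrt(2):
print(net_counting_error(100_000, 10_000))   # ~0.37 %
print(net_counting_error(200_000, 20_000))   # ~0.26 %
```

This is why improving the peak-to-background ratio (filters, crystals, PHA, excitation choice) pays off directly in precision, not just in cleaner-looking spectra.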

She then describes optimization tools to improve signal-to-noise and reduce spectral interferences. Primary beam filters can suppress tube characteristic lines and flatten background to improve peak-to-background ratios (with the tradeoff of potentially reduced sensitivity). Collimators improve spectral resolution by controlling beam divergence; tighter collimation improves separation but reduces intensity. Crystal selection is framed as another major lever, especially for light elements (she uses fluorine as an example of where an optimized crystal yields a sharper, higher-intensity peak). She also highlights pulse height analysis (PHA) as an effective technique for rejecting noise and higher-order lines; it can also help separate problematic overlaps (e.g., tin/lead cases and second-order line issues), and narrowing the PHA window can lower background and improve detection limits for light elements such as sodium.

For very light elements, she notes the importance of stable chamber conditions and cites software-controlled automatic pressure control to stabilize the vacuum level and improve repeatability, showing that without it, vacuum fluctuations cause measurable signal instability. She also gives a concrete overlap example (phosphorus in stainless steel complicated by molybdenum overlap) and shows how peak-angle setting and overlap handling feed into correct quantification.

A central technical section explains matrix effects and the correction approaches used in XRF. Carmen defines matrix effects as absorption and enhancement phenomena caused by coexisting elements and by sample type (metals vs plastics, etc.), illustrating that the same concentration (e.g., 100 ppm Cd) can produce different intensities and backgrounds in different matrices. She adds that analysis depth varies by matrix density/composition, which affects representativeness and calibration transferability. She then discusses how matrix effects distort calibration curves, showing conceptual cases like heavy element in light matrix and vice versa, and provides binary-system examples: nickel being strongly absorbed by iron (absorption effect) and iron/chromium showing enhancement behavior. Line overlap is treated as another “matrix-style” correction need; the Mo–P overlap example is used to show how calculating overlap coefficients in software improves regression and recalculates apparent vs true concentrations.
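The overlap-coefficient idea behind the Mo–P example can be sketched in one line: the apparent analyte intensity contains a contribution proportional to the interferent's intensity, and a coefficient determined from interferent-bearing, analyte-free standards removes it before the calibration curve is applied. All numbers below are invented for illustration.

```python
# Line-overlap correction sketch for the Mo-on-P case: subtract the Mo
# contribution from the apparent P intensity before quantification.
# The coefficient would be determined from Mo-bearing, P-free standards;
# all values here are illustrative.
def overlap_corrected(i_analyte, i_interferent, coeff):
    """Net analyte intensity after subtracting the overlapping contribution."""
    return i_analyte - coeff * i_interferent

i_p_apparent = 1.85   # kcps measured at the P line in a Mo-rich steel
i_mo = 12.0           # kcps measured on the interfering Mo line
k_mo_on_p = 0.031     # apparent P kcps per kcps of Mo (from standards)

i_p_net = overlap_corrected(i_p_apparent, i_mo, k_mo_on_p)  # net P signal
```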

She outlines correction “families” supported in software: mathematical/overlap corrections, theoretical alpha models (absorption/enhancement), ratio methods (internal standards), peak-to-background ratioing for varying matrices, and Compton scattering ratio corrections (especially useful for geological materials/trace elements). She emphasizes practical selection: internal standard ratioing (e.g., cobalt) can both improve calibration robustness and act as a control on sample preparation quality, while Compton ratio corrections can significantly improve regression for trace analysis across varying matrices.
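The Compton-ratio approach she mentions can be illustrated simply: because the Compton (incoherent) scatter peak of the tube line tracks the sample's mass absorption, regressing the ratio of analyte intensity to Compton intensity against concentration flattens out matrix differences for trace work. The data below are invented to show the mechanics only.

```python
# Compton-ratio sketch: normalize the trace-element intensity by the Compton
# scatter peak before regression, so varying matrix absorption cancels out.
# All intensities and concentrations are illustrative.
samples = [
    # (analyte kcps, Compton kcps, certified ppm)
    (0.50, 25.0, 10.0),
    (0.90, 22.0, 20.0),
    (2.10, 26.0, 40.0),
]

ratios = [analyte / compton for analyte, compton, _ in samples]
# The ratio, not the raw intensity, is what enters the calibration curve:
for (analyte, compton, ppm), r in zip(samples, ratios):
    print(f"{ppm:5.1f} ppm -> raw {analyte:.2f} kcps, ratio {r:.4f}")
```

In this toy data the raw intensities scatter relative to concentration (matrix differences), while the ratios track concentration much more linearly.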

The talk then moves from building calibrations to proving they work. Carmen distinguishes validation from verification: validation demonstrates the method is fit for purpose by measuring an independent certified reference material (CRM) not used in calibration and comparing results to the certificate (including uncertainty). She explains using uncertainty and a 95% confidence level framework to judge whether differences are acceptable. Verification, by contrast, is ensuring results meet an external standard or specification, such as slag compliance limits in an international standard, and can include ratio-based criteria used in quality decisions.
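The 95% acceptance test she describes amounts to checking whether the difference from the certificate is within the expanded combined uncertainty, with a coverage factor of about k = 2. The CRM values below are illustrative, not from the webinar.

```python
import math

# Validation-check sketch: a result agrees with a CRM certificate at roughly
# the 95 % confidence level if the difference is within the expanded combined
# uncertainty (coverage factor k ~ 2). Values are illustrative.
def validates(measured, u_measured, certified, u_certified, k=2.0):
    """True if |measured - certified| <= k * sqrt(u_m^2 + u_c^2)."""
    u_combined = math.sqrt(u_measured ** 2 + u_certified ** 2)
    return abs(measured - certified) <= k * u_combined

# Hypothetical MgO check: certificate 8.45 +/- 0.06 wt%, measured 8.52 +/- 0.05 wt%
print(validates(8.52, 0.05, 8.45, 0.06))  # True: diff 0.07 < 2 * 0.078
```

If the check fails, the recommendation in the talk is to go back and investigate the calibration and method choices rather than accept the result.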

For ongoing quality control, she recommends routine measurement of secondary reference materials prepared like daily unknowns, so you monitor both the instrument method and sample preparation together. She describes control charting and software features to monitor whether check results remain within expected confidence limits. She then clarifies the differences between CRM (expensive, internationally certified, used for accuracy assessment and often calibration building) and secondary reference materials (characterized via round-robin/proficiency testing and/or accredited labs; more cost-effective and available in larger quantities for routine checks). She also mentions synthetic/artificial standards as another route for calibration sets (e.g., fusion beads made from pure chemicals).
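The control-charting she describes can be sketched as flagging daily check-sample results against warning and action limits derived from the value established at validation time. The target, sigma, and 2σ/3σ limit choices below are illustrative conventions, not values from the webinar.

```python
# Control-chart sketch for a daily check sample: flag results outside the
# conventional 2-sigma warning and 3-sigma action limits around the value
# established when the calibration was validated. Numbers illustrative.
target, sigma = 1.250, 0.008   # wt%, from initial characterization

def chart_status(result):
    dev = abs(result - target)
    if dev > 3 * sigma:
        return "action"    # stop, investigate, likely recalibrate
    if dev > 2 * sigma:
        return "warning"   # watch for a trend, recheck preparation
    return "in control"

for value in (1.252, 1.268, 1.224):
    print(value, chart_status(value))
```

Because the check sample is prepared like a routine unknown, an out-of-control point implicates the whole chain, preparation included, not just the instrument.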

Maintenance is treated as non-optional. Carmen stresses repeatability, reproducibility, stability checks after instrument changes (like gas changes), and especially drift correction: choosing appropriate drift monitor samples, storing them properly, watching drift coefficients, and recognizing drift correction is not a permanent fix—if coefficients move out of range, recalibration and drift reset are required. She gives examples of repeatability testing by preparing the same pellet multiple times, and reproducibility evaluation via inter-lab round robin data where a lab outside limits must investigate causes (often starting with sample preparation issues like milling/particle size).
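The drift-correction mechanics can be sketched as a ratio: the monitor sample measured at calibration time and again today yields a factor that rescales routine intensities back onto the original calibration. The intensities and the tolerance threshold below are illustrative.

```python
# Drift-correction sketch: a stable monitor sample measured at calibration
# time and again today gives a factor that rescales routine intensities back
# onto the original calibration. If the factor wanders too far from 1, drift
# correction is no longer enough and recalibration is due. The numbers and
# the +/-10 % tolerance are illustrative.
I_monitor_at_calibration = 250.0   # kcps when the calibration was built
I_monitor_today = 242.5            # kcps measured now

drift_factor = I_monitor_at_calibration / I_monitor_today

def drift_corrected(intensity_kcps):
    """Rescale a routine measured intensity onto the original calibration."""
    return intensity_kcps * drift_factor

TOLERANCE = 0.10  # recalibrate once the factor leaves 1 +/- 10 %
needs_recalibration = abs(drift_factor - 1.0) > TOLERANCE
```

This mirrors her point that drift correction is a stopgap, not a permanent fix: the coefficient itself must be monitored, and an out-of-range value triggers recalibration and a drift reset.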

Her final wrap-up reiterates the “system” view: calibration accuracy depends on choosing the right calibration range, applying appropriate matrix corrections, selecting correct standards (pressed powder vs fusion), and optimizing the WDXRF method setup (excitation conditions, collimators, crystals, filters, PHA) to achieve the required counting statistics and precision. She emphasizes that calibration must be continuously monitored, extended as processes change, supported by check samples and drift correction, and paired with verification of sample preparation integrity.

Key questions answered in the webinar

What distinguishes WDXRF from EDXRF?

WDXRF (Wavelength Dispersive X-ray Fluorescence) is known for its high resolution and sensitivity due to its specific hardware setup, which includes X-ray tubes, collimators, analyzing crystals, and detectors. It provides a characteristic line for each particular element at a specific angle.

EDXRF (Energy Dispersive X-ray Fluorescence) is a more compact system with a simpler hardware design, typically using a semiconductor detector. While EDXRF is well-suited for screening analysis, it generally cannot achieve the same level of accuracy as WDXRF.

Why is calibration essential in XRF?

Because XRF is a relative technique: you measure intensities and then must align those intensities to known concentrations using appropriate standards. Without that link, intensity doesn’t automatically equal concentration. Calibration also matters because instrument response drifts over time (so you need a drift correction), and because many labs use XRF for process control where traceability and compliance to standards are required. Matrix effects are a major reason calibration is essential: two samples with the same analyte concentration can produce different intensities and backgrounds depending on the surrounding matrix, so calibration must anticipate and correct for that.

  • Accuracy of results: Calibration aligns the instrument's readings with known standards, ensuring that measured X-ray intensities correspond correctly to actual elemental concentrations.
  • Consistency over time: Instruments can drift due to wear, environmental changes, or detector aging. Regular calibration resets the baseline and prevents cumulative errors.
  • Traceability and compliance: Many industries (metals, mining, recycling, pharmaceuticals) require certified, traceable results. Calibration ensures compliance with quality standards and regulatory requirements.
  • Matrix effects correction: Different sample types (soil, alloys, glass, paint) can cause variations in X-ray absorption. Calibration using matrix-matched standards corrects these effects for more reliable results.

What concentration range should a calibration cover?

Calibration curves should cover at least about 95% of the natural composition distribution of your expected unknowns. If unknowns fall outside the calibration’s intended range or don’t match the calibration’s preparation type and matrix (pressed powder vs polymer vs fusion bead, etc.), accuracy is likely to collapse—a common pitfall. Standards must therefore match the concentration range you expect and be appropriate to the sample type. Sometimes a wide-range calibration may not deliver the accuracy needed at certain concentration levels, and in those cases you may need a narrower calibration range optimized for that region.

How do empirical and fundamental parameters (FP) calibrations differ?

Empirical calibration relies heavily on matrix-matched standards and regression, often improved using corrections such as theoretical alpha corrections or absorption/enhancement corrections. It can work very well, but it is sensitive to matrix mismatches (especially in pressed powders where particle size and mineralogy matter). FP calibration uses theoretical intensity calculations and therefore incorporates matrix correction inherently, often producing improved linearity and allowing a wider practical calibration range. However, FP requires knowing the full composition of the sample (or using an average composition approach); if the assumed composition is wrong, results can suffer. FP can be more flexible about exact standard matching than strict empirical approaches, though standards are still needed.

How do measurement conditions affect calibration accuracy?

Method setup is part of calibration accuracy. Measuring conditions like tube power (kV/mA) should be chosen to appropriately excite the element groups of interest and avoid unnecessarily exciting elements that create overlaps or raise background. The goal is a strong peak-to-background ratio and good counting statistics. Changing power settings can materially lower background and sharpen useful peaks, and higher current can boost sensitivity for certain light-element cases.

Beyond power, primary beam filters can suppress tube lines and flatten background, collimators can improve resolution by tightening beam divergence (at the expense of intensity), and crystal choice can significantly improve signal quality, especially for difficult light elements (fluorine example). Pulse height analysis (PHA) is highlighted as a practical tool to reject noise and higher-order line interference, and narrowing the PHA window can reduce background and improve detection limits (sodium example).

What are matrix effects, and how are they corrected?

Matrix effects are the absorption and enhancement phenomena caused by the other elements and the physical nature of the sample matrix (metals vs plastics vs other), which change how fluorescent X-rays are generated and absorbed. Also, analyzing depth varies by matrix, affecting representativeness and calibration transfer. These effects can bend or distort calibration curves away from linearity and are compounded by line overlaps (example discussed: phosphorus complicated by molybdenum overlap).

Correction options include overlap corrections (computing overlap coefficients in software), theoretical alpha/absorption-enhancement corrections, internal standard ratio methods (also useful to monitor preparation consistency), peak-to-background ratioing for variable matrices, and Compton scattering ratio corrections (especially for geological materials/trace elements). The point is not that one correction always wins, but that you choose a model that matches the matrix variability and interference risks of your application.

What is the difference between validation and verification?

Validation is about proving the method is fit for purpose by measuring an independent certified reference material (CRM) not used in the calibration and comparing your measured values to the certificate (including uncertainty). The comparison should respect uncertainty and a 95% confidence framework; if results don’t align within uncertainty, you go back and investigate calibration/method choices. Verification is different: it’s about demonstrating that results meet an external requirement, such as an international standard or specification for a material (a slag compliance example is discussed, with limits like magnesium needing to be below a specified percentage). In short: validation proves analytical accuracy against an independent truth source; verification proves you meet a standard that governs quality decisions.

How is a calibration maintained over time?

Calibration maintenance is ongoing, not “set and forget.” Core routines include repeatability checks (example: preparing the same pellet multiple times and verifying small variation), reproducibility assessment (including inter-lab round robin comparisons and investigating outliers, often starting with preparation factors like milling/particle size), stability checks after instrument changes (gas changes are mentioned), routine monitoring with secondary reference materials prepared like daily unknowns (so you check both the method and preparation), and drift correction with correctly selected drift samples stored properly. Drift correction is framed as necessary but not permanent: you must watch drift coefficients and, if they go out of range, recalibrate and reset drift rather than pretending drift correction can cover unlimited degradation.
