Near-infrared (NIR) spectroscopy is growing in popularity within the pharmaceutical industry, as more companies implement process analytical technologies (PAT) and apply various chemometric methods to analyze the resulting data. As a result, the need for calibration transfer is also growing: for example, when networking several NIR spectrometers at different sites; when integrating instruments from other vendors while maintaining existing methods for the original instrument base; or when adding new methods once they become commercially available.
The fundamental problem of standardization is ensuring that an instrument’s response conforms to a “standard” instrument response. Since building a calibration model from scratch requires considerable time and expense, this article discusses alternative methods for calibration transfer that offer substantial savings in both areas.
Multivariate calibration is often a challenge to transfer, as it can be very sensitive to small variations in the wavelengths or absorbance in the spectrum. For a successful calibration transfer, two analyzers must be as identical as possible in spectral range and must operate in the same mode, such as in transmittance.
Direct Standardization is one of the best multivariate methods for calibration transfer.

Examining the Theory

Calibration transfer tries to resolve two key differences between instruments: wavelength and absorbance scales. Typically, three approaches are used to achieve standardization:
1) making robust calibrations, 2) adjusting calibrations, and 3) adjusting spectra.
- Approaches to instrument standardization (calibration transfer). Making robust calibrations means pooling data from several instruments of the same type when calibrating, such that calibration transfer works without standardization. Here, spectral pretreatments are important, provided there are no wavelength differences among the instruments. A variant of this approach is to calculate difference spectra from repeat scans of selected samples and include these in the calibration set with a reference value of zero; this is called a repeatability file.

Adjusting calibrations refers to using bias/skew corrections estimated from a modest number of samples (12-15) with known reference values. The simplest method is a mean bias correction, which must be applied for each constituent of interest in a product, although the same transfer samples can be used. If a skew adjustment is involved, even more samples are required, with reference values spanning a good range for each constituent. This approach corrects the predicted values (i.e., in the Y-space), and outlier diagnostics become unavailable afterward.

To adjust spectra, one would use a Direct Standardization method. This method does not need or use reference values for the transfer samples, as it corrects for differences between the spectra themselves (in the X-space). Thus, independent realistic samples can be used, and selecting those samples is an important issue [4, 5]. The advantage of adjusting spectra is that all calibrations for several constituents of a product, including outlier checks, that were already developed on one instrument become available on a second instrument.
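As a rough illustration of the Y-space approach, a mean bias correction and a bias/skew correction might look like the following sketch. The numbers are synthetic, not measured data, and the variable names are illustrative only:

```python
import numpy as np

# Hypothetical reference values and second-instrument predictions for a
# small transfer set (illustrative numbers, not real data).
y_ref = np.array([8.0, 9.0, 10.0, 11.0, 12.0, 13.0])
y_pred = 1.05 * y_ref + 0.30          # simulated skew (1.05) and bias (0.30)

# Mean bias correction: subtract the average prediction error.
bias = np.mean(y_pred - y_ref)
y_bias_corrected = y_pred - bias

# Bias + skew correction: regress reference on predicted values, then
# apply the fitted line to future predictions on the second instrument.
slope, intercept = np.polyfit(y_pred, y_ref, 1)
y_skew_corrected = slope * y_pred + intercept

print(round(bias, 3))                            # 0.825 for these numbers
print(np.allclose(y_skew_corrected, y_ref))      # True: the simulated error is exactly linear
```

Note that this correction must be repeated for every constituent of interest, which is exactly the limitation described above.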
- Approaches to spectral adjustments. There are two directions for spectral adjustments: Adjusting backward makes the second instrument's spectra look like those of the first instrument, so the existing multivariate calibration can be applied, whereas adjusting forward makes the calibration spectra from the first instrument look like the spectra of the second one, rebuilding the multivariate calibration method with known settings. Adjusting forward is the preferred option, especially when the two instruments are of different types, or when real-time prediction is applied. For this purpose, a number of identical samples (without reference values for each constituent) must be scanned on both instruments side-by-side under exactly the same conditions.

Available approaches to adjusting spectra are Direct Standardization (DS), the patented Piecewise Direct Standardization (PDS) method [7, 8], and the patented Shenk-Westerhaus method [4, 9]. The Shenk-Westerhaus univariate spectral adjustment method separates absorbance correction from wavelength correction. After the pairs of transfer spectra acquired from measuring 30 sealed samples [4, 5] on two different instruments have been truncated, their wavelengths are matched by interpolation; the overall process is called trimming. Then a linear regression is used, one wavelength at a time (Fig. 1a), to estimate offset and slope, in order to convert spectra from one instrument to the other. This approach is called univariate full-spectrum correction.

DS and PDS are multivariate approaches based on a linear transformation of the spectra. The data from transfer samples, measured on two instruments, are used to estimate an offset vector b and a transformation matrix F [3, 10] that best match the pairs of transfer-sample spectra. Then, the spectra of one instrument are adjusted by multiplying by F and adding b.
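A minimal sketch of the univariate, wavelength-by-wavelength correction follows. This is an illustration of the general idea, not the patented Shenk-Westerhaus implementation, and the spectra are synthetic, with wavelength matching assumed to be already done:

```python
import numpy as np

# Synthetic paired transfer spectra: 30 samples, 100 matched wavelengths.
rng = np.random.default_rng(1)
n_samples, n_wavelengths = 30, 100

master = rng.random((n_samples, n_wavelengths))   # spectra on instrument 1
slave = 0.95 * master + 0.02                      # instrument 2: simulated linear distortion

# One linear regression per wavelength: estimate slope and offset that
# map instrument-2 absorbances onto instrument-1 absorbances.
slope = np.empty(n_wavelengths)
offset = np.empty(n_wavelengths)
for j in range(n_wavelengths):
    slope[j], offset[j] = np.polyfit(slave[:, j], master[:, j], 1)

# Apply the per-wavelength correction (broadcast over samples).
corrected = slope * slave + offset
print(np.allclose(corrected, master))   # True: the simulated distortion is exactly linear
```

With real spectra the distortion is not exactly linear, so the corrected spectra only approximate the master instrument's response.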
One key to understanding the relationship between these multivariate versions is a matrix formulation along the lines of

S1 = S2 F + 1 b^T,

where S1 (n x p) and S2 (n x q) contain the spectra of the n transfer samples measured on the two instruments, F is the q x p transformation matrix, b is the offset vector, and 1 is a column of ones.
When both instruments are of the same type, p = q and F becomes a square matrix (Fig. 1b). The F-matrix is given enough flexibility to cope with wavelength shifts and multiplicative absorbance shifts simultaneously, while the offset b copes with an additive absorbance shift.
The advantage of these multivariate approaches is that wavelength corrections are included. Direct Standardization (DS) uses a general F-matrix with all elements allowed to be non-zero (Fig. 1b). Estimating each column of F is analogous to a standard calibration problem, so the transfer samples that are scanned on both instruments can be considered the training set, and Least Squares (LS), Principal Component Regression (PCR), or Partial Least Squares (PLS) can be used to estimate the F-matrix.
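As a hedged sketch of the least-squares variant, the offset and F-matrix can be estimated jointly by appending a constant column to the transfer spectra of the second instrument. The data below are synthetic and constructed to satisfy the model exactly; with real, ill-conditioned transfer data, PCR or PLS estimation of the columns of F is usually preferred:

```python
import numpy as np

# Synthetic transfer set: 25 samples, 20 wavelengths on each instrument (p = q).
rng = np.random.default_rng(2)
n_transfer, p = 25, 20

S1 = rng.random((n_transfer, p))                 # transfer spectra, instrument 1
F_true = np.eye(p) + 0.01 * rng.random((p, p))   # near-identity transformation
S2 = (S1 - 0.05) @ np.linalg.inv(F_true)         # instrument 2, so that S1 = S2 F_true + 0.05

# Estimate b and F jointly: each column of S1 is regressed on S2 plus a
# constant column, i.e., one standard least-squares problem per wavelength.
X = np.hstack([S2, np.ones((n_transfer, 1))])
coef, *_ = np.linalg.lstsq(X, S1, rcond=None)
F_hat, b_hat = coef[:-1], coef[-1]

# Adjust the second instrument's spectra: multiply by F, add b.
S1_reconstructed = S2 @ F_hat + b_hat
print(np.allclose(S1_reconstructed, S1))   # True for this exactly linear synthetic case
```

In practice the number of transfer samples is far smaller than the number of wavelengths, which is why rank-reduced estimators such as PCR or PLS, or the piecewise (PDS) restriction of F to a banded matrix, are used instead of plain least squares.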