
Characterization of Nanomaterials

By: Abhishek Sharma


Unit 1
DIFFRACTION I: THE DIRECTIONS OF DIFFRACTED BEAMS
Introduction
After our preliminary survey of the physics of x-rays and the geometry of crystals, we can now proceed to fit the two together and discuss the phenomenon of x-ray diffraction, which is an interaction of the two. Historically, this is exactly the way this field of science developed. For many years, mineralogists and crystallographers had accumulated knowledge about crystals, chiefly by measurement of interfacial angles, chemical analysis, and determination of physical properties. There was little knowledge of interior structure, however, although some very shrewd guesses had been made, namely, that crystals were built up by periodic repetition of some unit, probably an atom or molecule, and that these units were situated some 1 or 2 Å apart. On the other hand, there were indications, but only indications, that x-rays might be electromagnetic waves about 1 or 2 Å in wavelength. In addition, the phenomenon of diffraction was well understood, and it was known that diffraction, as of visible light by a ruled grating, occurred whenever wave motion encountered a set of regularly spaced scattering objects, provided that the wavelength of the wave motion was of the same order of magnitude as the repeat distance between the scattering centers. Such was the state of knowledge in 1912 when the German physicist von Laue took up the problem. He reasoned that, if crystals were composed of regularly spaced atoms which might act as scattering centers for x-rays, and if x-rays were electromagnetic waves of wavelength about equal to the interatomic distance in crystals, then it should be possible to diffract x-rays by means of crystals. Under his direction, experiments to test this hypothesis were carried out: a crystal of copper sulfate was set up in the path of a narrow beam of x-rays and a photographic plate was arranged to record the presence of diffracted beams, if any.
The very first experiment was successful and showed without doubt that x-rays were diffracted by the crystal out of the primary beam to form a pattern of spots on the photographic plate. These experiments proved, at one and the same time, the wave nature of x-rays and the periodicity of the arrangement of atoms within a crystal. Hindsight is always easy and these ideas appear quite simple to us now, when viewed from the vantage point of more than forty years' development of the subject, but they were not at all obvious in 1912, and von Laue's hypothesis and its experimental verification must stand as a great intellectual achievement. The account of these experiments was read with great interest by two English physicists, W. H. Bragg and his son W. L. Bragg. The latter, although only a young student at the time (it was still the year 1912), successfully analyzed the Laue experiment and was able to express the necessary conditions for diffraction in a somewhat simpler mathematical form than that used by von Laue. He also attacked the problem of crystal structure with the new tool of x-ray diffraction and, in the following year, solved the structures of NaCl, KCl, KBr, and KI, all of which have the NaCl structure; these were the first complete crystal-structure determinations ever made.

Diffraction
Diffraction is due essentially to the existence of certain phase relations between two or more waves, and it is advisable, at the start, to get a clear notion of what is meant by phase relations. Consider a beam of x-rays, such as beam 1 in Fig. 3-1, proceeding from left to right. For convenience only, this beam is assumed to be plane-polarized in order that we may draw the electric field vector E always in one plane. We may imagine this beam to be composed of two equal parts, ray 2 and ray 3, each of half the amplitude of beam 1. These two rays, on the wave front AA', are said to be completely in phase or in step; i.e., their electric field vectors have the same magnitude and direction at the same instant at any point x measured along the direction of propagation of the wave. A wave front is a surface perpendicular to this direction of propagation.

Now consider an imaginary experiment, in which ray 3 is allowed to continue in a straight line but ray 2 is diverted by some means into a curved path before rejoining ray 3. What is the situation on the wave front BB', where both rays are proceeding in the original direction? On this front, the electric vector of ray 2 has its maximum value at the instant shown, but that of ray 3 is zero. The two rays are therefore out of phase. If we add these two imaginary components of the beam together, we find that beam 1 now has the form shown in the upper right of the drawing. If the amplitudes of rays 2 and 3 are each 1 unit, then the amplitude of beam 1 at the left is 2 units and that of beam 1 at the right is 1.4 units, if a sinusoidal variation of E with x is assumed. Two conclusions may be drawn from this illustration:


(1) Differences in the length of the path traveled lead to differences in phase.

(2) The introduction of phase differences produces a change in amplitude.

The greater the path difference, the greater the difference in phase, since the path difference, measured in wavelengths, exactly equals the phase difference, also measured in wavelengths. If the diverted path of ray 2 in Fig. 3-1 were a quarter wavelength longer than shown, the phase difference would be a half wavelength. The two rays would then be completely out of phase on the wave front BB' and beyond, and they would therefore annul each other, since at any point their electric vectors would be either both zero or of the same magnitude and opposite in direction. If the difference in path length were made three quarters of a wavelength greater than shown, the two rays would be one complete wavelength out of phase, a condition indistinguishable from being completely in phase, since in both cases the two waves would combine to form a beam of amplitude 2 units, just like the original beam. We may conclude that two rays are completely in phase whenever their path lengths differ either by zero or by a whole number of wavelengths.

Differences in the path length of various rays arise quite naturally when we consider how a crystal diffracts x-rays. Figure 3-2 shows a section of a crystal, its atoms arranged on a set of parallel planes A, B, C, D, normal to the plane of the drawing and spaced a distance d apart. Assume that a beam of perfectly parallel, perfectly monochromatic x-rays of wavelength λ is incident on this crystal at an angle θ, called the Bragg angle, where θ is measured between the incident beam and the particular crystal planes under consideration. We wish to know whether this incident beam of x-rays will be diffracted by the crystal and, if so, under what conditions. A diffracted beam may be defined as a beam composed of a large number of scattered rays mutually reinforcing one another. Diffraction is, therefore, essentially a scattering phenomenon.
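The two conclusions above can be checked numerically: the sum of two equal sine waves whose paths differ by a given fraction of a wavelength has resultant amplitude 2a·cos(δ/2), where δ is the phase difference. A minimal sketch (the function name is ours, for illustration):

```python
import math

def resultant_amplitude(a, path_diff_wavelengths):
    """Amplitude of the sum of two equal sinusoidal waves of amplitude a
    whose paths differ by the given number of wavelengths."""
    phase = 2 * math.pi * path_diff_wavelengths  # path difference -> phase difference
    # a*sin(x) + a*sin(x + phase) has amplitude 2a*cos(phase/2)
    return abs(2 * a * math.cos(phase / 2))

# In phase (zero or whole-wavelength path difference): amplitudes add to 2
print(resultant_amplitude(1, 0.0))   # 2.0
# Quarter-wavelength path difference: ~1.4 units, as in the text
print(resultant_amplitude(1, 0.25))  # ~1.414
# Half-wavelength path difference: complete cancellation
print(resultant_amplitude(1, 0.5))   # ~0.0
```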


We have here regarded a diffracted beam as being built up of rays scattered by successive planes of atoms within the crystal. It would be a mistake to assume, however, that a single plane of atoms A would diffract x-rays just as the complete crystal does but less strongly. Actually, the single plane of atoms would produce, not only the beam in the direction 1 as the complete crystal does, but also additional beams in other directions, some of them not confined to the plane of the drawing. These additional beams do not exist in the diffraction from the complete crystal precisely because the atoms in the other planes scatter beams which destructively interfere with those scattered by the atoms in plane A, except in the direction 1. At first glance, the diffraction of x-rays by crystals and the reflection of visible light by mirrors appear very similar, since in both phenomena the angle of incidence is equal to the angle of reflection. It seems that we might regard the planes of atoms as little mirrors which reflect the x-rays. Diffraction and reflection, however, differ fundamentally in at least three aspects: (1) The diffracted beam from a crystal is built up of rays scattered by all the atoms of the crystal which lie in the path of the incident beam. The reflection of visible light takes place in a thin surface layer only. (2) The diffraction of monochromatic x-rays takes place only at those particular angles of incidence which satisfy the Bragg law. The reflection of visible light takes place at any angle of incidence. (3) The reflection of visible light by a good mirror is almost 100 percent efficient. The intensity of a diffracted x-ray beam is extremely small compared to that of the incident beam. Despite these differences, we often speak of reflecting planes and reflected beams when we really mean diffracting planes and diffracted beams.
This is common usage and, from now on, we will frequently use these terms without quotation marks but with the tacit understanding that we really mean diffraction and not reflection.* To sum up, diffraction is essentially a scattering phenomenon in which a large number of atoms cooperate. Since the atoms are arranged periodically on a lattice, the rays scattered by them have definite phase relations between them; these phase relations are such that destructive interference occurs in most directions of scattering, but in a few directions constructive interference takes place and diffracted beams are formed. The two essentials are a wave motion capable of interference (x-rays) and a set of periodically arranged scattering centers (the atoms of a crystal).
* For the sake of completeness, it should be mentioned that x-rays can be totally reflected by a solid surface, just like visible light by a mirror, but only at very small angles of incidence (below about one degree). This phenomenon is of little practical importance in x-ray metallography and need not concern us further.

Bragg's Law:
In physics, Bragg's law states that when X-rays hit an atom, they make the electron cloud move, as does any electromagnetic wave. The movement of these charges re-radiates waves with the same frequency (blurred slightly due to a variety of effects); this phenomenon is known as Rayleigh scattering (or elastic scattering). The scattered waves can themselves be scattered, but this secondary scattering is assumed to be negligible. A similar process occurs upon scattering neutron waves from the nuclei or by a coherent spin interaction with an unpaired electron. These re-emitted wave fields interfere with each other either constructively or destructively (overlapping waves either add together to produce stronger peaks or subtract from each other to some degree), producing a diffraction pattern on a detector or film. The resulting wave interference pattern is the basis of diffraction analysis. Both neutron and X-ray wavelengths are comparable with interatomic distances (~150 pm) and thus are an excellent probe for this length scale. The interference is constructive when the phase shift is a multiple of 2π; this condition can be expressed by Bragg's law:

nλ = 2d sin θ

where n is an integer determined by the order of the diffraction, λ is the wavelength of the X-rays (and of moving electrons, protons and neutrons), d is the spacing between the planes in the atomic lattice, and θ is the angle between the incident ray and the scattering planes.
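As a quick worked example, Bragg's law can be solved for θ directly. The sketch below (our own helper, not part of the text) uses Cu K-alpha radiation and the NaCl (200) plane spacing of about 2.82 Å:

```python
import math

def bragg_angle_deg(wavelength, d, n=1):
    """Bragg angle theta (degrees) from n*lambda = 2*d*sin(theta).
    Returns None when no diffraction is possible (n*lambda > 2d)."""
    s = n * wavelength / (2 * d)
    if s > 1:
        return None
    return math.degrees(math.asin(s))

# Cu K-alpha radiation (1.5406 angstrom) on the NaCl (200) planes, d = 2.82 angstrom
theta = bragg_angle_deg(1.5406, 2.82)
print(round(theta, 1))  # ~15.9 degrees (first order)
```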


Bragg's Law is the result of experiments into the diffraction of X-rays or neutrons off crystal surfaces at certain angles, derived by the physicist Sir William Lawrence Bragg [2] in 1912 and first presented on 11 November 1912 to the Cambridge Philosophical Society. Although simple, Bragg's law confirmed the existence of real particles at the atomic scale, as well as providing a powerful new tool for studying crystals in the form of X-ray and neutron diffraction. William Lawrence Bragg and his father, Sir William Henry Bragg, were awarded the Nobel Prize in physics in 1915 for their work in determining crystal structures beginning with NaCl, ZnS, and diamond.

Reciprocal space

Although the misleading common opinion reigns that Bragg's law measures atomic distances in real space, it does not. Rather, the nλ term shows that it measures the number of wavelengths fitting between two rows of atoms, thus measuring reciprocal distances. Max von Laue had interpreted this correctly in a vector form, the Laue equation

G = kf - ki

where G is a reciprocal lattice vector and ki and kf are the wave vectors of the incident and the diffracted beams, respectively.

Together with the condition for elastic scattering |kf| = |ki| and the introduction of the scattering angle 2θ, this leads equivalently to Bragg's equation. The reciprocal lattice is the Fourier space of a crystal lattice and is necessary for a full mathematical description of wave mechanics.
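The equivalence can be checked numerically: for elastic scattering, |kf - ki| = 2|k| sin θ, which equals 2π/d exactly when Bragg's law holds. A short sketch, with the wavelength and spacing values assumed for illustration:

```python
import math

# Numeric check that the Laue condition reproduces Bragg's law.
lam, d = 1.54, 2.82                # assumed wavelength and plane spacing (angstroms)
theta = math.asin(lam / (2 * d))   # first-order Bragg angle

k = 2 * math.pi / lam              # |ki| = |kf| for elastic scattering
ki = (k, 0.0)                      # incident wave vector along x
kf = (k * math.cos(2 * theta), k * math.sin(2 * theta))  # rotated by scattering angle 2*theta
G = (kf[0] - ki[0], kf[1] - ki[1])                        # scattering vector kf - ki
G_mag = math.hypot(G[0], G[1])

# The two printed values agree: |G| = 2*pi/d, i.e. G is a reciprocal lattice vector
print(round(G_mag, 6), round(2 * math.pi / d, 6))
```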

Alternate derivation
Suppose that a single monochromatic wave (of any type) is incident on aligned planes of lattice points, with separation d, at angle θ, as shown below.


There will be a path difference between the ray that gets reflected along AC' and the ray that gets transmitted and then reflected, along AB and BC respectively. This path difference is

(AB + BC) - AC'

The two separate waves will arrive at a point with the same phase, and hence undergo constructive interference, if and only if this path difference is equal to an integer number of wavelengths, i.e.

nλ = (AB + BC) - AC'

where the same definitions of n and λ apply as above. Clearly,

AB = BC = d / sin θ   and   AC = 2d / tan θ,

from which it follows that

AC' = AC cos θ = (2d / tan θ) cos θ = (2d cos²θ) / sin θ.

Putting everything together,

nλ = 2d / sin θ - (2d cos²θ) / sin θ = (2d / sin θ)(1 - cos²θ),

which simplifies to

nλ = 2d sin θ,

which is Bragg's law.
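The algebra above can be verified numerically for arbitrary angles; this sketch simply recomputes each segment and checks that the path difference always reduces to 2d sin θ:

```python
import math

# Numeric check of the derivation: (AB + BC) - AC' reduces to 2*d*sin(theta).
d = 3.0  # arbitrary plane spacing
for theta in (0.2, 0.5, 1.0):          # angles in radians
    AB = BC = d / math.sin(theta)
    AC = 2 * d / math.tan(theta)
    AC_prime = AC * math.cos(theta)
    path_diff = AB + BC - AC_prime
    assert abs(path_diff - 2 * d * math.sin(theta)) < 1e-9
print("path difference equals 2*d*sin(theta) at every angle tested")
```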


Bragg's Law, in other words:
The layers of a crystal act like weak reflecting mirrors for the X-rays. Only if the path difference of the reflected X-rays is a whole number of wavelengths does constructive interference occur. This is described by Bragg's Law:

nλ = 2d sin θ

λ : wavelength of the X-rays
d : the spacing of the layers
θ : the incident angle of the photons


In wavelength dispersive XRF (WDXRF) spectroscopy, the X-ray energies are separated by means of a diffracting crystal and a detector that are placed in positions complying with Bragg's Law. The placement is either by turning a goniometer, measuring the energies one after the other (sequential), or by fixed positions, measuring the energies all at the same time (simultaneous).
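The goniometer setting follows directly from Bragg's law once the fluorescence line energy is converted to a wavelength. A sketch, assuming a LiF(200) analysing crystal with 2d ≈ 4.027 Å (a common WDXRF crystal, used here purely for illustration):

```python
import math

def goniometer_two_theta(energy_keV, two_d=4.027):
    """Goniometer 2-theta setting (degrees) satisfying Bragg's law (n = 1)
    for a fluorescence line of the given energy, using a crystal with
    spacing 2d in angstroms (default: LiF(200))."""
    lam = 12.398 / energy_keV   # E (keV) -> wavelength (angstrom)
    s = lam / two_d
    if s > 1:
        return None             # line not reachable with this crystal
    return 2 * math.degrees(math.asin(s))

# Fe K-alpha at ~6.40 keV
print(round(goniometer_two_theta(6.40), 1))  # ~57.5 degrees
```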


X-Ray Diffraction
Principle
X-ray diffraction (XRD) is a nondestructive technique that operates on the nanometre scale based on the elastic scattering of X-rays from structures that have long range order (i.e. an organised structure of some sort, e.g. periodicity, such as in a crystal or polymer). It can be used to identify and characterise a diverse range of materials, such as metals, minerals, polymers, catalysts, plastics, pharmaceuticals, proteins, thin-film coatings, ceramics and semiconductors. The two main types of XRD are X-ray crystallography and X-ray powder diffraction. X-ray crystallography, also known as single crystal diffraction, is a technique that is used to examine the whole structure of a crystal. The crystal is hit with X-rays and, in a typical experiment, the intensity of the X-rays diffracted from the sample is recorded as a function of angular movement of both the detector and the sample. The diffraction pattern of intensity versus angle can also be converted into its more useful form of probability distribution versus distance. The diffraction pattern produced can be analysed to reveal crystal details such as the spacing in the crystal lattice, bond lengths and angles. It can be difficult to obtain a pure crystal but, if achieved, the data obtained with this method can be very informative. Many compounds can be subjected to X-ray crystallography, such as macromolecules, small inorganic materials, biological compounds such as proteins and even small pharmaceuticals. When a single pure crystal cannot be obtained, X-ray powder diffraction can be used instead. It can still yield important information about the crystalline structure, such as crystal size, purity and texture, but the data set may not be as complete as X-ray crystallography. The sample under investigation is usually ground down to a fine microcrystalline powder first. Sometimes the sample must be rotated to obtain the optimal diffraction pattern.

Instrument
The instrument used is called a diffractometer. A schematic diagram is shown in Figure 1.

[Figure 1: Schematic diagram of an X-ray diffraction instrument, showing the source (X-ray tube), monochromator, Soller slits, sample, position-sensitive counter (detector), discriminator, ADC and PC output.]

Source: The source is the sealed X-ray tube or a synchrotron (with much higher photon flux).

Sample: A single crystal can be mounted in a thin glass tube or on a glass fibre using grease or glue to hold it in place. The crystals are often cooled to reduce radiation damage and thermal motion during the experiment. The solid sample can be rotated about an axis during exposure to the X-rays to increase the chances of all orientations of the crystals in a powder sample being detected. The crystals act as 3-D diffraction gratings. Sample stages on most modern instruments allow for many different sample environments, ranging from cryostats through room temperature to high-temperature and humidity conditions.

Discriminator: The discriminator is a crystal monochromator such as graphite. Soller slits after the monochromator keep divergence of the beam to a minimum.

Detector: The position-sensitive detector registers the diffraction pattern of the sample by moving around the sample. The detector is usually a scintillation counter or, more recently, an array of X-ray detectors (CCD), which allows more data to be collected simultaneously.

Output: The diffraction data is recorded, manipulated and can be plotted at the computer in the form required. The diffraction pattern can be compared to a library of patterns (International Centre for Diffraction Data) and, therefore, a positive identification made.
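The final comparison step can be sketched as a simple peak-matching score against a small reference library. The phase names and peak positions below are purely hypothetical stand-ins for real ICDD entries:

```python
def match_phase(measured_peaks, library, tol=0.2):
    """Score each library phase by the fraction of its reference peaks
    (2-theta, degrees) found in the measured pattern within a tolerance,
    and return the best-matching phase with all scores."""
    scores = {}
    for phase, ref_peaks in library.items():
        hits = sum(any(abs(m - r) <= tol for m in measured_peaks) for r in ref_peaks)
        scores[phase] = hits / len(ref_peaks)
    return max(scores, key=scores.get), scores

# Hypothetical reference entries, not real ICDD data
library = {
    "phase A": [31.8, 45.5, 56.5],
    "phase B": [26.6, 33.1, 54.9],
}
best, scores = match_phase([31.7, 45.6, 56.4], library)
print(best)  # phase A
```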


Information Obtained
Information such as spacing in the crystal lattice, bond lengths and angles, crystal size, purity and texture can all be obtained using XRD. Information about thermal motion can also be obtained. Overall, a picture of the molecules, unit cells and the crystal can be built up.

Developments/Specialist Techniques
There are now instruments that combine high-temperature XRD and DSC in one. In this method, any phase changes in the sample can be detected while the sample is subjected to a changing temperature program.

Applications
Areas of application include qualitative and quantitative phase analysis, chemical crystallography, texture and residual stress investigations, investigation of semiconductors and nanomaterials, and high-throughput polymorph screening. In medicine, XRD is the most important technique for determining the chemical composition and phases of urinary stones [8]. XRD is also very useful for identifying drugs, especially in forensic applications and in the case of illegal or confiscated drugs [9]. XRD is also widely employed in proteomics [10].


X-Ray Fluorescence
XRF is an elemental analysis technique with unique capabilities, including (1) highly accurate determinations for major elements and (2) a broad elemental survey of the sample composition without standards. For example, XRF is used in analysis of rocks and metals with an accuracy of ~0.1% for the major elements. A technique known as Fundamental Parameters can estimate the elemental composition of unknowns without standards. And to top it all off, sometimes the analysis requires minimal sample preparation. Therefore, WCAS now offers X-Ray Fluorescence. Detection limits for XRF are generally in the 10-100 ppm range for heavy elements, and elements lighter than Na are difficult or impossible to detect. Our Rigaku ZSX can analyze elements B-U, except N and O. When materials are exposed to short-wavelength x-rays or to gamma rays, ionisation of their component atoms may take place. Ionisation consists of the ejection of one or more electrons from the atom, and may take place if the atom is exposed to radiation with an energy greater than its ionisation potential. X-rays and gamma rays can be energetic enough to expel tightly held electrons from the inner orbitals of the atom. The removal of an electron in this way renders the electronic structure of the atom unstable, and electrons in higher orbitals "fall" into the lower orbital to fill the hole left behind. In falling, energy is released in the form of a photon, the energy of which is equal to the energy difference of the two orbitals involved. Thus, the material emits radiation which has energy characteristic of the atoms present. The term fluorescence is applied to phenomena in which the absorption of higher-energy radiation results in the re-emission of lower-energy radiation.

How does XRF work?


High energy photons (x-rays) displace inner shell electrons. Outer shell electrons then fall into the vacancy left by the displaced electron. In doing so, they normally emit light (fluoresce) equivalent to the energy difference between the two states. Since each element has electrons with more or less unique energy levels, the wavelength of light emitted is characteristic of the element, and the intensity of light emitted is proportional to the element's concentration. Note that this is a highly simplified explanation.
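The statement that each element emits characteristic energies can be illustrated with Moseley's law, which gives a rough K-alpha energy of 13.6 eV × (Z − 1)² × 3/4. A sketch (approximate values only, not instrument-grade):

```python
def k_alpha_energy_eV(Z):
    """Approximate K-alpha photon energy via Moseley's law:
    E ~ 13.6 eV * (Z - 1)^2 * (1/1^2 - 1/2^2). A rough estimate only."""
    return 13.6 * (Z - 1) ** 2 * (1 - 0.25)

print(round(k_alpha_energy_eV(26)))  # Fe: 6375 eV (measured K-alpha ~6404 eV)
print(round(k_alpha_energy_eV(29)))  # Cu: ~7997 eV (measured K-alpha ~8048 eV)
```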

XRF Process
There are generally two types of XRF spectrometers: wavelength dispersive and energy dispersive. A wavelength dispersive system uses a diffraction crystal to focus specific wavelengths onto a detector. A wavelength range is scanned by changing the angle at which the x-rays strike the crystal. An energy dispersive spectrometer focuses all the emitted x-rays onto an energy analyzing detector. While this is faster and less expensive, wavelength dispersive spectrometers are more sensitive and have higher resolution. For this reason we've chosen a wavelength dispersive system.


Detection
In energy dispersive analysis, dispersion and detection are a single operation, as already mentioned above. Proportional counters or various types of solid state detectors (PIN-diode, Si(Li), Ge(Li), Silicon Drift Detector SDD) are used. They all share the same detection principle: an incoming x-ray photon ionises a large number of detector atoms, with the amount of charge produced being proportional to the energy of the incoming photon. The charge is then collected and the process repeats itself for the next photon. Detector speed is obviously critical, as all charge carriers measured have to come from the same photon to measure the photon energy correctly (peak length discrimination is used to eliminate events that seem to have been produced by two x-ray photons arriving almost simultaneously). The spectrum is then built up by dividing the energy spectrum into discrete bins and counting the number of pulses registered within each energy bin. EDXRF detector types vary in resolution, speed and the means of cooling (a low number of free charge carriers is critical in the solid state detectors): proportional counters with resolutions of several hundred eV cover the low end of the performance spectrum, followed by PIN-diode detectors, while the Si(Li), Ge(Li) and Silicon Drift Detectors (SDD) occupy the high end of the performance scale. In wavelength dispersive analysis, the single-wavelength radiation produced by the monochromator is passed into a photomultiplier, a detector similar to a Geiger counter, which counts individual photons as they pass through. The counter is a chamber containing a gas that is ionised by x-ray photons. A central electrode is charged at (typically) +1700 V with respect to the conducting chamber walls, and each photon triggers a pulse-like cascade of current across this field. The signal is amplified and transformed into an accumulating digital count. These counts are then processed to obtain analytical data.
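The binning step described above can be sketched directly: sort each pulse's measured energy into a discrete bin and count pulses per bin. The bin width and pulse values below are illustrative:

```python
from collections import Counter

def build_spectrum(photon_energies_eV, bin_width_eV=10):
    """Build an energy-dispersive spectrum by dividing the energy axis into
    discrete bins and counting the pulses registered within each bin."""
    counts = Counter(int(e // bin_width_eV) for e in photon_energies_eV)
    return {b * bin_width_eV: n for b, n in sorted(counts.items())}

# Pulses from a hypothetical measurement, mostly near 6400 eV (an Fe K-alpha line)
pulses = [6398, 6401, 6405, 6404, 3690, 6399, 6402]
print(build_spectrum(pulses))  # {3690: 1, 6390: 2, 6400: 4}
```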

X-ray Photoelectron Spectroscopy (XPS)


XPS was developed in the mid-1960s by K. Siegbahn and his research group. K. Siegbahn was awarded the Nobel Prize for Physics in 1981 for his work in XPS. The phenomenon is based on the photoelectric effect outlined by Einstein in 1905, where the concept of the photon was used to describe the ejection of electrons from a surface when photons impinge upon it. For XPS, Al K-alpha (1486.6 eV) or Mg K-alpha (1253.6 eV) are often the photon energies of choice. Other X-ray lines can also be chosen, such as Ti K-alpha (2040 eV). The XPS technique is highly surface specific due to the short range of the photoelectrons that are excited from the solid. The energy of the photoelectrons leaving the sample is determined using a concentric hemispherical analyser (CHA), and this gives a spectrum with a series of photoelectron peaks. The binding energies of the peaks are characteristic of each element. The peak areas can be used (with appropriate sensitivity factors) to determine the composition of the material's surface. The shape of each peak and the binding energy can be slightly altered by the chemical state of the emitting atom. Hence XPS can provide chemical bonding information as well. XPS is not sensitive to hydrogen or helium, but can detect all other elements. XPS must be carried out in UHV conditions. See also UPS. X-ray photoelectron spectroscopy (XPS) is a quantitative spectroscopic technique that measures the elemental composition, empirical formula, chemical state and electronic state of the elements that exist within a material. XPS spectra are obtained by irradiating a material with a beam of X-rays while simultaneously measuring the kinetic energy (KE) and number of electrons that escape from the top 1 to 10 nm of the material being analyzed. XPS requires ultra-high vacuum (UHV) conditions.
XPS is a surface chemical analysis technique that can be used to analyze the surface chemistry of a material in its "as received" state, or after some treatment, for example: fracturing, cutting or scraping in air or UHV to expose the bulk chemistry, ion beam etching to clean off some of the surface contamination, exposure to heat to study the changes due to heating, exposure to reactive gases or solutions, exposure to ion beam implant, exposure to ultraviolet light.
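The kinetic-energy measurement described above yields binding energies through the photoelectric relation BE = hν − KE − φ, where φ is the spectrometer work function. A sketch (the work function value is an assumed constant, for illustration only):

```python
def binding_energy_eV(photon_energy, kinetic_energy, work_function=4.5):
    """Photoelectron binding energy from the photoelectric relation
    BE = h*nu - KE - phi. The work function is an assumed spectrometer
    constant, typically a few eV."""
    return photon_energy - kinetic_energy - work_function

# Al K-alpha excitation (1486.6 eV); an electron detected at 1197.1 eV kinetic energy
print(round(binding_energy_eV(1486.6, 1197.1), 1))  # 285.0 eV, near a typical C 1s peak
```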

XPS is also known as ESCA, an abbreviation for Electron Spectroscopy for Chemical Analysis. XPS detects all elements with an atomic number (Z) of 3 (lithium) and above. It cannot detect hydrogen (Z = 1) or helium (Z = 2). Detection limits for most of the elements are in the parts per thousand range. Detection limits of parts per million (ppm) are possible, but require special conditions: concentration at the top surface or very long collection time (overnight). XPS is routinely used to analyze inorganic compounds, metal alloys, semiconductors, polymers, elements, catalysts, glasses, ceramics, paints, papers, inks, woods, plant parts, make-up, teeth, bones, medical implants, bio-materials, viscous oils, glues, ion modified materials and many others.


A typical XPS spectrum is a plot of the number of electrons detected (sometimes per unit time) (Y-axis, ordinate) versus the binding energy of the electrons detected (X-axis, abscissa). Each element produces a characteristic set of XPS peaks at characteristic binding energy values that directly identify each element that exists in or on the surface of the material being analyzed. These characteristic peaks correspond to the electron configuration of the electrons within the atoms, e.g., 1s, 2s, 2p, 3s, etc. The number of detected electrons in each of the characteristic peaks is directly related to the amount of element within the area (volume) irradiated. To generate atomic percentage values, each raw XPS signal must be corrected by dividing its signal intensity (number of electrons detected) by a "relative sensitivity factor" (RSF) and normalized over all of the elements detected. To count the number of electrons at each KE value with the minimum of error, XPS must be performed under ultra-high vacuum (UHV) conditions because electron counting detectors in XPS instruments are typically one meter away from the material irradiated with X-rays. It is important to note that XPS detects only those electrons that have actually escaped into the vacuum of the instrument. The photoemitted electrons that have escaped into the vacuum of the instrument are those that originated from within the top 10 to 12 nm of the material. All of the deeper photo-emitted electrons, which were generated as the X-rays penetrated 1-5 micrometres of the material, are either recaptured or trapped in various excited states within the material. For most applications, it is, in effect, a non-destructive technique that measures the surface chemistry of any material.
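The RSF correction and normalization described above can be sketched in a few lines; the peak areas and sensitivity factors below are hypothetical:

```python
def atomic_percent(intensities, rsf):
    """Atomic % from raw XPS peak areas: divide each intensity by its
    relative sensitivity factor (RSF), then normalize over all elements."""
    corrected = {el: intensities[el] / rsf[el] for el in intensities}
    total = sum(corrected.values())
    return {el: 100 * v / total for el, v in corrected.items()}

peaks = {"C 1s": 12000, "O 1s": 20000}  # raw peak areas (counts), illustrative
rsf = {"C 1s": 1.0, "O 1s": 2.5}        # hypothetical sensitivity factors
print({el: round(p, 1) for el, p in atomic_percent(peaks, rsf).items()})
# {'C 1s': 60.0, 'O 1s': 40.0}
```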

Components of an XPS system


The main components of a commercially made XPS system include:

A source of X-rays
An ultra-high vacuum (UHV) stainless steel chamber with UHV pumps
An electron collection lens
An electron energy analyzer
Mu-metal magnetic field shielding
An electron detector system
A moderate vacuum sample introduction chamber
Sample mounts
A sample stage
A set of stage manipulators

Monochromatic aluminium K-alpha X-rays are normally produced by diffracting and focusing a beam of non-monochromatic X-rays off of a thin disc of natural, crystalline quartz with a <1010> lattice. The resulting wavelength is 8.3386 angstroms (0.83386 nm), which corresponds to a photon energy of 1486.7 eV. The energy width of the monochromated X-rays is 0.16 eV, but the common electron energy analyzer (spectrometer) produces an ultimate energy resolution on the order of 0.25 eV which, in effect, is the ultimate energy resolution of most commercial systems. When working under practical, everyday conditions, high energy resolution settings will produce peak widths (FWHM) between 0.4-0.6 eV for various pure elements and some compounds. Non-monochromatic magnesium X-rays have a wavelength of 9.89 angstroms (0.989 nm), which corresponds to a photon energy of 1253.6 eV. The energy width of the non-monochromated X-rays is roughly 0.70 eV, which, in effect, is the ultimate energy resolution of a system using non-monochromatic X-rays. Non-monochromatic X-ray sources do not use any crystals to diffract the X-rays, which allows all primary X-ray lines and the full range of high energy Bremsstrahlung X-rays (1-12 keV) to reach the surface. The typical ultimate high energy resolution (FWHM) when using this source is 0.9-1.0 eV, which includes the spectrometer-induced broadening, pass-energy settings and the peak width of the non-monochromatic magnesium X-ray source.
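The quoted wavelength/energy pairs can be checked with the conversion E(eV) ≈ 12398.4 / λ(Å); the results agree with the figures above to within rounding:

```python
# Check the wavelength/energy figures quoted above via E(eV) ~ 12398.4 / lambda(angstrom)
for lam in (8.3386, 9.89):  # Al K-alpha and Mg K-alpha wavelengths
    print(lam, "angstrom ->", round(12398.4 / lam, 1), "eV")
```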


Capabilities of advanced systems

Measure uniformity of elemental composition across the top of the surface (aka line profiling or mapping)
Measure uniformity of elemental composition as a function of depth by ion beam etching (aka depth profiling)
Measure uniformity of elemental composition as a function of depth by tilting the sample (aka angle-resolved XPS)

Routine limits of XPS
Quantitative accuracy

Quantitative accuracy depends on several parameters such as: S/N, peak intensity, accuracy of relative sensitivity factors, correction for electron transmission function, surface volume homogeneity, correction for energy dependency of electron mean free path, and degree of sample degradation due to analysis. Under optimum conditions, the quantitative accuracy of the atom % values calculated from the major XPS peaks is 90-95% of the atom % values of each major peak. If a high level quality control protocol is used, the accuracy can be further improved. Under routine work conditions, where the surface is a mixture of contamination and expected material, the accuracy ranges from 80-90% of the value reported in atom % values. The quantitative accuracy for the weaker XPS signals, which have peak intensities 10-20% of the strongest signal, is 60-80% of the true value.

Analysis times

1-10 minutes for a survey scan that measures the amount of all detectable elements; 1-10 minutes for high-energy-resolution scans that reveal chemical state differences; 1-4 hours for a depth profile that measures 4-5 elements as a function of etched depth (the usual final depth is 1,000 nm)

Detection limits

0.1-1.0 atom % (0.1 atom % = 1 part per thousand = 1000 ppm). (The ultimate detection limit for most elements is approximately 100 ppm, which requires 8-16 hours of acquisition.)

Analysis area limits

Analysis area depends on instrument design. The minimum analysis area ranges from 10 to 200 micrometres. The largest size for a monochromatic beam of X-rays is 1-5 mm; non-monochromatic beams are 10-50 mm in diameter. Spectroscopic image resolution of 200 nm or below has been achieved on the latest imaging XPS instruments using synchrotron radiation as the X-ray source.

Sample size limits

Older instruments accept samples from 1x1 cm up to 3x3 cm. Very recent systems can accept full 300 mm wafers and samples as large as 30x30 cm.

Degradation during analysis

Degradation depends on the sensitivity of the material to the wavelength of X-rays used, the total dose of the X-rays, the temperature of the surface and the level of the vacuum. Metals, alloys, ceramics and most glasses are not measurably degraded by either non-monochromatic or monochromatic X-rays. Some, but not all, polymers, catalysts, certain highly oxygenated compounds, various inorganic compounds and fine organics are degraded by either monochromatic or non-monochromatic X-ray sources. Non-monochromatic X-ray sources produce a significant amount of high-energy Bremsstrahlung X-rays (1-15 keV) which directly degrade the surface chemistry of various materials. Non-monochromatic X-ray sources also produce a significant amount of heat (100 to 200 °C) on the surface of the sample because the anode that produces the X-rays is typically only 1 to 5 cm away from the sample. This level of heat, combined with the Bremsstrahlung X-rays, acts synergistically to increase the amount and rate of degradation for certain materials. Monochromatic X-ray sources, because they are far away (50-100 cm) from the sample, do not produce noticeable heat effects. Monochromatic X-ray sources are monochromatic because the quartz monochromator system diffracts the Bremsstrahlung X-rays out of the X-ray beam, which means the sample sees only one X-ray energy, for example 1.486 keV if aluminium K-alpha X-rays are used. Because the vacuum removes various gases (e.g. O2, CO) and liquids (e.g. water, alcohols, solvents) that were initially trapped within or on the surface of the sample, the chemistry and morphology of the surface will continue to change until the surface achieves a steady state. This type of degradation is sometimes difficult to detect.


Energy Dispersive X-ray Analysis


Energy dispersive X-ray spectroscopy (EDS) is an analytical technique used for the elemental analysis or chemical characterization of a sample. It is one of the variants of XRF. As a type of spectroscopy, it relies on the investigation of a sample through interactions between electromagnetic radiation and matter, analyzing the X-rays emitted by the matter in response to being hit with charged particles. Its characterization capabilities are due in large part to the fundamental principle that each element has a unique atomic structure, allowing X-rays that are characteristic of an element's atomic structure to be identified uniquely.

To stimulate the emission of characteristic X-rays from a specimen, a high-energy beam of charged particles such as electrons or protons (see PIXE), or a beam of X-rays, is focused into the sample being studied. At rest, an atom within the sample contains ground state (unexcited) electrons in discrete energy levels or electron shells bound to the nucleus. The incident beam may excite an electron in an inner shell, ejecting it from the shell and creating an electron hole where the electron was. An electron from an outer, higher-energy shell then fills the hole, and the difference in energy between the higher-energy shell and the lower-energy shell may be released in the form of an X-ray. The number and energy of the X-rays emitted from a specimen can be measured by an energy dispersive spectrometer. As the energy of the X-rays is characteristic of the difference in energy between the two shells, and of the atomic structure of the element from which they were emitted, this allows the elemental composition of the specimen to be measured. This technique is used in conjunction with SEM. An electron beam strikes the surface of a conducting sample; the energy of the beam is typically in the range 10-20 keV. This causes X-rays to be emitted from that point of the material.
The energy of the X-rays emitted depends on the material under examination. The X-rays are generated in a region about 2 microns in depth, and thus EDX is not a surface science technique. By moving the electron beam across the material, an image of each element in the sample can be acquired in a manner similar to SAM. Due to the low X-ray intensity, images usually take a number of hours to acquire. Elements of low atomic number are difficult to detect by EDX. The Si(Li) detector (see below) is often protected by a beryllium window. The absorption of soft X-rays by the Be precludes the detection of elements below atomic number 11 (Na). In windowless systems, elements with atomic numbers as low as 4 (Be) have been detected, but the problems involved become progressively worse as the atomic number is reduced.
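The claim that each element's characteristic X-rays are unique can be illustrated with Moseley's law, E ≈ 10.2 eV × (Z − 1)² for the K-alpha line. Real EDX software uses tabulated line energies rather than this approximation, but the scaling shows why the lines separate cleanly by element:

```python
# Moseley's-law estimate of K-alpha X-ray energy: E ≈ 10.2 eV * (Z - 1)^2.
# A rough illustration only; tabulated values should be used in practice.

def k_alpha_energy_ev(z):
    """Approximate K-alpha photon energy (eV) for atomic number z."""
    return 10.2 * (z - 1) ** 2

for name, z in [("Na", 11), ("Si", 14), ("Fe", 26), ("Cu", 29)]:
    print(f"{name} (Z={z}): ~{k_alpha_energy_ev(z) / 1000:.2f} keV")
```

The estimate lands within a few percent of the tabulated Kα energies for mid-Z elements (e.g. roughly 8 keV for Cu), which is why energy-dispersed spectra map so directly onto elemental composition.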

The lithium-drifted silicon (Si(Li)) detector


The detector used in EDX is the lithium-drifted silicon (Si(Li)) detector. This detector must be operated at liquid nitrogen temperatures. When an X-ray strikes the detector, it generates a photoelectron within the body of the Si. As this photoelectron travels through the Si, it generates electron-hole pairs. The electrons and holes are attracted to opposite ends of the detector with the aid of a strong electric field. The size of the current pulse thus generated depends on the number of electron-hole pairs created, which in turn depends on the energy of the incoming X-ray. Thus, an X-ray spectrum can be acquired, giving information on the elemental composition of the material under examination.
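The proportionality between pulse height and X-ray energy follows from the number of electron-hole pairs created. A rough sketch, assuming ~3.8 eV deposited per pair in silicon (an approximate figure, not a detector specification):

```python
# Estimate the mean number of electron-hole pairs produced in a Si(Li)
# detector: roughly one pair per ~3.8 eV of deposited X-ray energy.

PAIR_CREATION_ENERGY_EV = 3.8  # approximate energy per e-h pair in Si

def electron_hole_pairs(xray_energy_ev):
    """Mean number of electron-hole pairs for one absorbed X-ray."""
    return xray_energy_ev / PAIR_CREATION_ENERGY_EV

# An ~8 keV photon creates on the order of two thousand pairs, so the
# collected charge (pulse height) scales linearly with photon energy.
print(int(electron_hole_pairs(8000)))
</gr_```

Because the pair count is large, its statistical fluctuation is a small fraction of the total, which sets the energy resolution of the detector.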

The X-ray microcalorimeter detector


A recent, exciting development in the field of EDX is the X-ray microcalorimeter. This device has a much higher energy resolution (~3 eV) than the traditional Si(Li) detector.

Principle
The excess energy of the electron that migrates to an inner shell to fill the newly-created hole can do more than emit an X-ray. Often, instead of X-ray emission, the excess energy is transferred to a third electron from a further outer shell, prompting its ejection. This ejected species is called an Auger electron, and the method for its analysis is known as Auger Electron Spectroscopy (AES).

X-ray Photoelectron Spectroscopy (XPS) is another close relative of EDS, utilizing ejected electrons in a manner similar to that of AES. Information on the quantity and kinetic energy of ejected electrons is used to determine the binding energy of these now-liberated electrons, which is element-specific and allows chemical characterization of a sample. EDS is often contrasted with its spectroscopic counterpart, WDS (Wavelength-Dispersive X-ray Spectroscopy). WDS differs from EDS in that it uses the diffraction patterns created by light-matter interaction as its raw data. WDS has a much finer spectral resolution than EDS. WDS also avoids the problems associated with artifacts in EDS (false peaks, noise from the amplifiers, and microphonics). In WDS only one element can be analyzed at a time, while EDS gathers a spectrum of all elements, within limits, of a sample.


X-ray absorption spectroscopy


X-ray absorption spectroscopy (XAS) is a widely used technique for determining the local geometric and/or electronic structure of matter. The experiment is usually performed at synchrotron radiation sources, which provide intense and tunable X-ray beams. Samples can be in the gas phase, in solution, or condensed matter (i.e. solids). XAS data are obtained by tuning the photon energy, using a crystalline monochromator, to a range where core electrons can be excited (0.1-100 keV photon energy). The "name" of the edge depends upon which core electron is excited: the principal quantum numbers n = 1, 2, and 3 correspond to the K-, L-, and M-edges, respectively. For instance, excitation of a 1s electron occurs at the K-edge, while excitation of a 2p electron occurs at an L-edge (Figure 1).

Figure 1

There are three main regions found in a spectrum generated by XAS data (Figure 2). The dominant feature is called the "rising edge", and is sometimes referred to as XANES (X-ray Absorption Near-Edge Structure) or NEXAFS (Near-Edge X-ray Absorption Fine Structure). The pre-edge region is at energies lower than the rising edge. The EXAFS (Extended X-ray Absorption Fine Structure) region is at energies above the rising edge, and corresponds to the scattering of the ejected photoelectron off neighboring atoms. The combination of XANES and EXAFS is referred to as XAFS. Since XAS is a type of absorption spectroscopy, it follows the same quantum mechanical selection rules. The most intense features are due to electric-dipole-allowed transitions (i.e. Δl = ±1) to unfilled orbitals. For example, the most intense features of a K-edge are due to 1s → np transitions, while the most intense features of the L3-edge are due to 2p → nd transitions. XAS methodology can be broadly divided into four experimental categories that give complementary results: metal K-edge, metal L-edge, ligand K-edge, and EXAFS.

Figure 2


Applications
XAS is an experimental technique used in different scientific fields including molecular and condensed matter physics, materials science and engineering, chemistry, earth science, and biology. In particular, its unique sensitivity to local structure, as compared to X-ray diffraction, has been exploited for studying:

Amorphous solids and liquid systems
Solid solutions
Doping and ion implantation materials for electronics
Local distortions of crystal lattices
Organometallic compounds
Metalloproteins
Metal clusters
Catalysis
Vibrational dynamics
Ions in solutions
Speciation of elements
Liquid water and aqueous solutions

Extended X-ray Absorption Fine Structure


A monochromatic X-ray beam is directed at the sample. The photon energy of the X-rays is gradually increased so that it traverses one of the absorption edges of the elements contained within the sample. Below the absorption edge, the photons cannot excite the electrons of the relevant atomic level and thus absorption is low. However, when the photon energy is just sufficient to excite the electrons, a large increase in absorption occurs, known as the absorption edge. The resulting photoelectrons have a low kinetic energy and can be backscattered by the atoms surrounding the emitting atom. The probability of backscattering is dependent on the energy of the photoelectrons. The backscattering of the photoelectron affects whether the X-ray photon is absorbed in the first place. Hence, the probability of X-ray absorption will depend on the photon energy (as the photoelectron energy depends on the photon energy). The net result is a series of oscillations on the high-photon-energy side of the absorption edge. These oscillations can be used to determine the atomic number, distance and coordination number of the atoms surrounding the element whose absorption edge is being examined. The necessity to sweep the photon energy implies the use of synchrotron radiation in EXAFS experiments. By reflecting the X-rays from a surface at grazing incidence and detecting the resultant X-ray fluorescence with a Si(Li) detector (see EDX), a more surface-sensitive signal can be obtained; this technique is known as REFLEXAFS. EXAFS spectra can be acquired in just a few seconds using the Quick EXAFS (QEXAFS) method. SEXAFS provides even greater surface sensitivity than REFLEXAFS. A related technique is NEXAFS. X-ray Absorption Spectroscopy (XAS) includes both Extended X-ray Absorption Fine Structure (EXAFS) and X-ray Absorption Near Edge Structure (XANES). XAS is the measurement of the X-ray absorption coefficient of a material (μ(E) in the equations below) as a function of energy.
X-rays of a narrow energy resolution are shone on the sample, and the incident and transmitted X-ray intensities are recorded as the incident X-ray energy is incremented. The number of X-rays transmitted through a sample (It) is equal to the number of X-rays shone on the sample (I0) multiplied by a decreasing exponential that depends on the type of atoms in the sample, the absorption coefficient μ, and the thickness of the sample x:

It = I0 e^(-μx)

The absorption coefficient is obtained by taking the log ratio of the incident X-ray intensity to the transmitted X-ray intensity: μx = ln(I0/It).

When the incident X-ray energy matches the binding energy of an electron of an atom within the sample, the number of X-rays absorbed by the sample increases dramatically, causing a drop in the transmitted X-ray intensity. This results in an absorption edge. Each element on the periodic table has a set of unique absorption edges corresponding to the different binding energies of its electrons. This gives XAS element selectivity. XAS spectra are most often collected at synchrotrons. Because X-rays are highly penetrating, XAS samples can be gases, solids or liquids, and because of the brilliance of synchrotron X-ray sources the concentration of the absorbing element can be as low as a few ppm. EXAFS spectra are displayed as graphs of the absorption coefficient of a given material versus energy, typically in a 500-1000 eV range beginning before an absorption edge of an element in the sample. The X-ray absorption coefficient is usually normalized to unit step height. This is done by regressing a line to the region before and after the absorption edge, subtracting the pre-edge line from the entire data set and dividing by the absorption step height, which is determined by the difference between the pre-edge and post-edge lines at the value of E0 (on the absorption edge).
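The absorption-coefficient extraction and edge-step normalization described above can be sketched as follows. Flat pre- and post-edge levels stand in for the regressed lines, and all numbers are illustrative:

```python
import math

# Beer-Lambert relation used in transmission XAS: It = I0 * exp(-mu * x),
# so mu*x is recovered as ln(I0 / It).

def mu_x(i0, it):
    """Absorption coefficient times thickness from incident/transmitted flux."""
    return math.log(i0 / it)

# Edge-step normalization sketch: subtract the pre-edge level, then divide
# by the step height (constant levels here, not full line regressions).
def normalize(spectrum, pre_edge_level, post_edge_level):
    step = post_edge_level - pre_edge_level
    return [(mu - pre_edge_level) / step for mu in spectrum]

raw = [0.20, 0.21, 1.10, 1.22, 1.18]  # hypothetical mu(E)*x values
print(normalize(raw, pre_edge_level=0.20, post_edge_level=1.20))
```

After normalization the pre-edge region sits near 0 and the post-edge region oscillates around 1, which is the form in which XANES/EXAFS spectra are usually compared.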


The normalized absorption spectra are often called XANES spectra. These spectra can be used to determine the average oxidation state of the element in the sample. The XANES spectra are also sensitive to the coordination environment of the absorbing atom in the sample. Fingerprinting methods have been used to match the XANES spectra of an unknown sample to those of known "standards". Linear combination fitting of several different standard spectra can give an estimate of the amount of each of the known standard spectra within an unknown sample. X-ray absorption spectra are produced over the range of 200-35,000 eV. The dominant physical process is one where the absorbed photon ejects a core photoelectron from the absorbing atom, leaving behind a core hole. The atom with the core hole is now excited. The ejected photoelectron's energy will be equal to that of the absorbed photon minus the binding energy of the initial core state. The ejected photoelectron interacts with electrons in the surrounding non-excited atoms. If the ejected photoelectron is taken to have a wave-like nature and the surrounding atoms are described as point scatterers, it is possible to imagine the backscattered electron waves interfering with the forward-propagating waves. The resulting interference pattern shows up as a modulation of the measured absorption coefficient, thereby causing the oscillations in the EXAFS spectra. A simplified plane-wave single-scattering theory was used for the interpretation of EXAFS spectra for many years, although modern methods (like FEFF, GNXAS) have shown that curved-wave corrections and multiple-scattering effects cannot be neglected. The photoelectron scattering amplitude in the low energy range (5-200 eV) of the photoelectron kinetic energy becomes much larger, so that multiple-scattering events become dominant in the NEXAFS (or XANES) spectra.
The wavelength of the photoelectron is dependent on the energy and phase of the backscattered wave which exists at the central atom. The wavelength changes as a function of the energy of the incoming photon. The phase and amplitude of the backscattered wave are dependent on the type of atom doing the backscattering and the distance of the backscattering atom from the central atom. The dependence of the scattering on atomic species makes it possible to obtain information pertaining to the chemical coordination environment of the original absorbing (centrally excited) atom by analyzing these EXAFS data.
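Because the photoelectron wavelength changes with photon energy, EXAFS oscillations are conventionally analyzed as a function of the photoelectron wavenumber k = √(2mₑ(E − E₀))/ħ rather than of photon energy. A sketch of that conversion (the E₀ value below is only a hypothetical Fe K-edge figure for illustration):

```python
import math

# Convert photon energy above the absorption edge to photoelectron
# wavenumber k = sqrt(2*m_e*(E - E0))/hbar. SI constants, output in Å^-1.

M_E = 9.109e-31      # electron mass, kg
HBAR = 1.055e-34     # reduced Planck constant, J·s
EV = 1.602e-19       # joules per eV

def wavenumber_inv_angstrom(e_ev, e0_ev):
    """Photoelectron wavenumber (Å^-1) at photon energy e_ev, edge e0_ev."""
    return math.sqrt(2 * M_E * (e_ev - e0_ev) * EV) / HBAR * 1e-10

# 100 eV above a hypothetical edge at 7112 eV gives roughly 5 Å^-1:
print(round(wavenumber_inv_angstrom(7212.0, 7112.0), 2))
```

Plotting the normalized oscillations against k (and Fourier transforming) is what turns the modulations into interatomic distances.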

Applications
XAS is an interdisciplinary technique and its unique properties, as compared to x-ray diffraction, have been exploited for understanding the details of local structure in:

glasses, amorphous and liquid systems
solid solutions
doping and ion implantation materials for electronics
local distortions of crystal lattices
organometallic compounds
metalloproteins
metal clusters
vibrational dynamics
ions in solutions
speciation of elements

Diamond anvil cell


The diamond anvil cell is a device used by physicists to put samples under extremely high pressures (up to ~360 gigapascals) for the purpose of researching their properties, including phase transitions, atomic bonding, viscosity, diffraction and crystallographic structure. Diamond anvil cells can simulate pressures of millions of atmospheres, recreating conditions similar to those at the center of the Earth or inside the gas giants. They are among the only laboratory apparatus capable of creating forms of degenerate matter like metallic hydrogen. Diamond anvil cells work on a simple principle: by exerting a large force on a small area, tremendous pressure may be obtained. The diamond anvil, successor to anvils made of tungsten carbide, was invented by the researchers Weir, Lippincott, Van Valkenburg, and Bunting in the late 1950s as part of their work at the National Bureau of Standards (NBS). In addition to being the hardest material available at the time and virtually incompressible, diamond is transparent, making it easy to view experimental samples as they are being compressed; it also helps in conducting spectroscopic experiments. Three main components make up the diamond anvil cell. First are two flawless diamonds, each weighing 1/8 to 1/3 carat, with parallel faces opposing each other. The culet, the place where the two diamonds make contact, usually has a diameter of about 0.6 mm. For experiments that require even higher pressures, the culet can be made even smaller. The second component of the diamond anvil cell is a force-exerting device, pressing the diamonds against each other from both sides. These can be screws that tighten, gas pressing against a membrane, or a simple lever arm. The third component of the diamond anvil is a metallic gasket that encircles the perimeter of the culet, containing the sample and providing resistance to compression at the edges, lessening the possibility of anvil failure.
The diamond anvil cell is an important piece of equipment that allows us to simulate pressures that we would otherwise never see, giving us access to a world of materials that would otherwise be unobservable. A diamond anvil cell (DAC) is a small, hand-held device used in scientific experiments. It allows compressing a small (sub-millimeter sized) piece of material to extreme pressures, which can exceed 3,000,000 atmospheres (300 gigapascals).[1]


The device has been used to recreate the pressure existing deep inside planets, creating materials and phases not observed under normal conditions. Notable examples include the non-molecular ice X[2], polymeric nitrogen[3] and MgSiO3 perovskite, thought to be the major component of the Earth's mantle. A DAC consists of two opposing diamonds with a sample compressed between the culets. Pressure may be monitored using a reference material whose behavior under pressure is known. Common pressure standards include ruby[4] fluorescence and various structurally simple metals, such as copper or platinum.[5] The uniaxial pressure supplied by the DAC may be transformed into uniform hydrostatic pressure using a pressure-transmitting medium, such as argon, xenon, hydrogen, helium, paraffin oil or a mixture of methanol and ethanol[6]. The pressure-transmitting medium is enclosed by a gasket and the two diamond anvils. The sample can be viewed through the diamonds and illuminated by X-rays and visible light. In this way, X-ray diffraction and fluorescence; optical absorption and photoluminescence; Mössbauer, Raman and Brillouin scattering; positron annihilation and other signals can be measured from materials under high pressure. Magnetic and microwave fields can be applied externally to the cell, allowing nuclear magnetic resonance, electron paramagnetic resonance and other magnetic measurements[7]. Attaching electrodes to the sample allows electrical and magnetoelectrical measurements, as well as heating the sample to a few thousand degrees. Much higher temperatures (up to 7000 K)[8] can be achieved with laser-induced heating, and cooling down to millikelvin temperatures has been demonstrated.

Principle
The operation of the diamond anvil cell relies on a simple principle: p = F/A, where p is the pressure, F the applied force, and A the area. Therefore, high pressure can be achieved by applying a moderate force on a sample with a small area, rather than applying a large force on a large area. In order to minimize deformation and failure of the anvils that apply the force, they must be made from a very hard and virtually incompressible material, such as diamond.
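A sketch of the p = F/A arithmetic, with illustrative numbers, shows how a modest force on a small culet produces pressures of hundreds of gigapascals:

```python
import math

# p = F / A: a moderate force over the tiny culet area of a diamond anvil
# yields an enormous pressure. Numbers below are illustrative only.

def pressure_gpa(force_newtons, culet_diameter_m):
    """Pressure (GPa) from a force applied over a circular culet."""
    area = math.pi * (culet_diameter_m / 2) ** 2
    return force_newtons / area / 1e9

# 1 kN (roughly the weight of 100 kg) on a 0.1 mm diameter culet:
print(round(pressure_gpa(1000.0, 0.1e-3), 1))
```

This is why the text notes that culets are made even smaller for experiments requiring higher pressures: halving the culet diameter quadruples the pressure for the same force.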

History


Percy Williams Bridgman, the great pioneer of high-pressure research during the first half of the 20th century, developed an opposed anvil device with small flat areas that were pressed one against the other with a lever arm. The anvils were made of tungsten carbide (WC). This device could achieve pressures of a few gigapascals, and was used in electrical resistance and compressibility measurements. The revolution in the field of high pressures came with the development of the diamond anvil cell in the late 1950s at the National Bureau of Standards (NBS) by Weir, Lippincott, Van Valkenburg, and Bunting [9]. The principles of the DAC are similar to the Bridgman anvils, but in order to achieve the highest possible pressures without breaking the anvils, they were made of the hardest known material: single-crystal diamond. The first prototypes were limited in their pressure range and there was no reliable way to calibrate the pressure. During the following decades DACs have been successively refined, the most important innovations being the use of gaskets and the ruby pressure calibration. The DAC evolved to be the most powerful lab device for generating static high pressure.[10] The range of static pressure attainable today extends to the estimated pressures at the Earth's center (~360 GPa).

Components
There are many different DAC designs, but all have four main components:

1. The force-generating device - relies on the operation of either a lever arm, tightening screws, or pneumatic or hydraulic pressure applied to a membrane. In all cases the force is uniaxial and is applied to the tables (bases) of the two anvils.

2. Two opposing diamond anvils - made of high-gem-quality, flawless diamonds, usually with 16 facets. They typically weigh 1/8 to 1/3 carat (25 to 70 mg). The culet (tip) is ground and polished to a hexadecagonal surface parallel to the table. The culets of the two diamonds face one another and must be perfectly parallel in order to produce uniform pressure and to prevent dangerous strains. Specially selected anvils are required for specific measurements - for example, low diamond absorption and luminescence are required in corresponding experiments.

3. Gasket - a foil of ~0.2 mm thickness (before compression) that separates the two culets. It has an important role: to contain the sample with a hydrostatic fluid in a cavity between the diamonds, and to prevent anvil failure by supporting the diamond tips, thus reducing stresses at the edges of the culet. Standard gasket materials are hard metals and their alloys, such as stainless steel, Inconel, rhenium, iridium or tungsten carbide. They are not transparent to X-rays, so if X-ray illumination through the gasket is required, lighter materials such as beryllium, boron nitride,[11] boron[12] or diamond[13] are used as the gasket.

4. Pressure-transmitting medium - homogenizes the pressure. A methanol:ethanol 4:1 mixture is rather popular because of its ease of handling. However, above ~20 GPa it turns into a glass and thus the pressure becomes non-hydrostatic.[6] Xenon, argon, hydrogen and helium are usable up to the highest pressures, and ingenious techniques have been developed to seal them in the cell.

Uses
Prior to the invention of the diamond anvil cell, static high-pressure apparatus required large hydraulic presses which weighed several metric tons and required large specialized laboratories. The simplicity and compactness of the DAC meant that it could be accommodated in a wide variety of experiments. Some contemporary DACs can easily fit into a cryostat for low-temperature measurements, and for use with a superconducting electromagnet. In addition to being hard, diamonds have the advantage of being transparent to a wide range of the electromagnetic spectrum from infrared to gamma rays, with the exception of the far ultraviolet and soft X-rays. This makes the DAC a perfect device for spectroscopic experiments and for crystallographic studies using hard X-rays. A variant of the diamond anvil cell, the hydrothermal diamond anvil cell (HDAC), is used in experimental petrology/geochemistry for the study of aqueous fluids, silicate melts, immiscible liquids, mineral solubility and aqueous fluid speciation at geologic pressures and temperatures. The HDAC is sometimes used to examine aqueous complexes in solution using the synchrotron light source techniques XANES and EXAFS. The design of the HDAC is very similar to that of the DAC, but it is optimized for studying liquids.


U N I T 2

Microscopy uses radiation and optics to obtain a magnified image of an object. The resolution of the imaging is limited by the minimum focus of the radiation due to diffraction. For light microscopy the diffraction limit is approximately 1 μm. In the 1930s, electron microscopes were developed; these used an electron beam, rather than light rays, to give a theoretical resolution limit of about 4 nm. There are two common types: the scanning electron microscope (SEM) and the transmission electron microscope (TEM). Then, in the 1980s, scanning probe microscopes (SPMs) were developed, which finally allowed atomic resolution. In 1982, the scanning tunnelling microscope (STM) was invented and, in 1986, the atomic force microscope (AFM).

Scanning Probe Microscopy


Both the scanning tunneling microscope (STM) and the atomic force microscope (AFM) fall under the umbrella of scanning probe microscopy instruments. The heart of any scanning probe microscope is the ultra-sharp probe tip itself, which is scanned over the sample surface to build up some form of image. The probe tip traces a map as it moves across the sample's surface. The sample height information (topography) usually forms one aspect of the image, but the images can yield other important information about the sample, such as crystal structure and hardness.

Scanning Tunneling Microscopy


Principle
The scanning tunneling microscope (STM) is used to obtain images of conductive surfaces on an atomic scale. It is a non-optical microscope that exploits the quantum mechanical tunneling effect to determine the distance between the probe and a surface; it requires the sample to be conductive. It is an electroanalytical instrument because it moves a tiny electrode over the surface and uses electric current measurements to discern the atomic-scale features of the surface. High-quality STMs can reach sufficient resolution to show single atoms. The STM is able to get within a few nanometers of what it is observing. It can also be used to alter the observed material by manipulating individual atoms, triggering chemical reactions, and creating ions by removing individual electrons from atoms and then reverting them to atoms by replacing the electrons. The STM produces 3-D images. The STM has higher resolution (0.2 nm) than the atomic force microscope (AFM), but requires the sample to be conductive, whereas AFM does not and so has wider applicability.

Instrument
An STM instrument is shown in Figure.

Sample
The sample is prepared appropriately for STM and mounted on the sample stage. Obviously, for a current to occur the substrate being scanned must be conductive; insulators cannot be scanned with the STM.

Source
A voltage is supplied to the piezoelectric tube and tip. This voltage is applied to keep the preset tunneling current constant.

Discriminator
A sharp tip is mounted on a piezoelectric tube, which allows tiny movements by applying a voltage at its electrodes. This is the discriminator. This fine, sharpened tip, which is presumed to have a single atom at the apex, is scanned at approximately 1 nm above the surface of the sample. When the tip and sample are connected with a voltage source, electrons will tunnel or jump from the tip to the surface (or vice versa, depending on the polarity), resulting in a weak electric current: the tunneling current. This current can be measured, its size being exponentially dependent on the distance between the probe and the surface. When the tip reaches a step in the sample, the tunneling current increases and the feedback loop, which keeps the tunneling current constant, retracts the piezoelectric tube (constant current mode) to allow it to climb the step. By scanning the tip over the surface and measuring the height, which is directly related to the voltage applied to the piezo element, the surface structure of the material under study can be reconstructed.


Detector
The variation in tunneling current as the tip passes over the specimen is recorded, and a three-dimensional map of the surface can be obtained.

Output
The computer manipulates the data received and generates the images. Another means of representing the STM is shown in Figure.
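The exponential dependence of the tunneling current on tip-sample distance described above can be sketched numerically. The decay constant κ is computed here from an assumed 4.5 eV work function (an illustrative value, not a property of any particular tip or sample):

```python
import math

# STM tunneling current falls off exponentially with the gap d:
# I ∝ exp(-2*kappa*d), where kappa = sqrt(2*m_e*phi)/hbar for a barrier
# (work function) phi. This shows why sub-angstrom height changes are
# easily detected.

M_E = 9.109e-31   # electron mass, kg
HBAR = 1.055e-34  # reduced Planck constant, J·s
EV = 1.602e-19    # joules per eV

def relative_current(d_angstrom, phi_ev=4.5):
    """Tunneling current relative to d = 0, for barrier height phi (eV)."""
    kappa = math.sqrt(2 * M_E * phi_ev * EV) / HBAR  # in 1/m
    return math.exp(-2 * kappa * d_angstrom * 1e-10)

# Widening the gap by just 1 Å cuts the current by roughly an order of
# magnitude, which is what gives the feedback loop its sensitivity.
ratio = relative_current(5.0) / relative_current(6.0)
print(round(ratio, 1))
```

This steep dependence is what the constant-current feedback loop exploits: tiny height changes produce large, easily measured current changes.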

Information Obtained
STM gives a 3-D profile of the surface, which is very useful for characterizing surface roughness, observing surface defects and determining the size and conformation of molecules and aggregates on the surface. Other possible measurements based on electron density include topography, which shows the total density of electrons on the surface as a function of position, and mapping, which shows the density of electrons at a particular energy as a function of position on the surface. STM can also perform a line cut, which is a series of spectra taken at equally spaced spatial points along a line drawn across the surface, and, finally, a spectrum, which shows the density of states as a function of energy at a particular spatial position. With STM, atoms can also be manipulated and moved around using the tip.

Developments/Specialist Techniques
Electrochemical STM is available; this enables the analyst to study electrochemical reactions as they occur. Potential profiling and mapping, as well as topographic imaging of the electrode itself, are all possible with this range of instruments.

Applications
STM is capable of carrying out structural studies of biomolecules, e.g. DNA, and can be used to obtain images in aqueous solutions, allowing, in principle, the investigation of biological systems under near-physiological conditions. STM also allows investigation of protein folding/unfolding and is useful for studying protein-ligand interactions. STM is an important technique in the study of new and modified surfaces [11]. The contrast of STM images can be improved by using chemically modified gold tips.

Atomic Force Microscopy


Principle
The atomic force microscope (AFM) uses the various forces that occur when two objects are brought within nanometres of each other and does not require the sample to be conducting. The principle of AFM relies on the use of a sharp, pyramidal, ultra-fine probe mounted on a cantilever being brought into close proximity with the surface, where it feels a chemical attraction or repulsion and moves up or down on its supporting cantilever. The probe movement is monitored using a laser beam that is reflected or diffracted from the back of the cantilever. AFM topographic images of the surface are obtained by recording the cantilever deflections as the sample is scanned. Like STM, AFM produces 3-D images. Unlike STM, AFM does not require the sample to be conducting and so has wider applicability. Unlike optical microscopes, AFM does not use a lens, so the resolution of the technique is limited by probe size (and strength of interaction between surface and probe tip) rather than by diffraction effects. The key to the sensitivity of AFM is in carefully monitoring the movement of the probe tip.

Instrument
An AFM instrument consists of the modules shown in Figure.


Sample
As in all SPM techniques, sample preparation is key. An AFM can work either when the probe is in contact with a surface, or when it is a few nanometers away. AFM can also be carried out in liquids.

Source
A laser is shone onto the probe tip at the end of the cantilever.

Discriminator
The discriminating element in the AFM is the cantilever and tip. When the tip moves over the surface, the cantilever moves and so too does the laser. The negative feedback loop moves the sample up and down via a piezoelectric scanner so as to maintain the interactive force. Typical AFM probe tips are made of silicon or silicon nitride, which provide lateral resolutions of 5-10 nm and 10-40 nm, respectively. There are three main modes of AFM, and all depend on this interaction between the tip and the sample:

Contact mode: in this mode, ionic repulsion forces are responsible for creating the image.

Non-contact mode: with distances greater than 10 Å between the tip and the sample surface, Van der Waals, electrostatic, magnetic or capillary forces produce the surface images. This generally provides lower resolution than contact mode.

Tapping mode: this was developed as a method of achieving high resolution on soft, fragile samples without inducing destructive frictional forces.
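The three modes can be pictured with a simple tip-sample force model. The sketch below uses a Lennard-Jones force law purely for illustration — the parameter values are assumed, not measured AFM data — to show that the force is repulsive at contact-mode distances and attractive farther out, where non-contact mode operates:

```python
def lj_force(r_nm, sigma_nm=0.34, epsilon=0.01):
    """Lennard-Jones tip-sample force (illustrative parameters):
    F(r) = 24*eps*(2*(sigma/r)**13 - (sigma/r)**7) / sigma.
    Positive values are repulsive, negative values attractive."""
    s = sigma_nm / r_nm
    return 24 * epsilon * (2 * s**13 - s**7) / sigma_nm

# Contact mode senses the steep short-range repulsion;
# non-contact mode senses the weaker long-range attraction.
close_force = lj_force(0.30)  # inside the repulsive wall
far_force = lj_force(0.60)    # attractive regime
```

The crossover between the two regimes is what tapping mode oscillates through on every cycle.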

Detector
Any deflection in the laser is picked up by the photodiode detector and is converted into topographical information about the sample.

Output
The detector sends a signal to the computer for analysis and image generation.

Information Obtained
Atomic force microscopy gives topographical information which is generally displayed by using a colour map for height. AFMs can measure force and sample elasticity by pressing the tip into the sample and measuring the resulting cantilever deflection. AFM can also measure the hardness or softness of a specimen or part of that specimen; information on crystallinity can also be obtained. AFM has recently been used to study dynamic interactions between emulsion droplets [13]. An example of an AFM is shown in Figure 4.10.

Developments/Specialist Techniques
A version of AFM is chemical force microscopy (CFM), where an AFM tip is coated with a thin chemical layer that interacts selectively with the different functional groups exposed on the sample surface. For example, gold-coated tips functionalized with a thiol monolayer terminated with carboxyl (COOH) groups interact strongly with a substrate also bearing protruding carboxyl groups, due to hydrogen bonding, but interact weakly with a substrate containing pendant methyl (CH3) groups. Pharmaceutical companies use CFM to measure the cohesive and adhesive forces between particles to predict how drug molecules interact with each other and to investigate how well a drug can be processed. Chemically modifying tips in this way can provide detailed local information on the chemical nature of the sample surface. Some researchers are adding tiny electrodes into the probe tips to enable high-resolution electrochemical imaging. Much work has been invested in AFM tips in order to improve resolution further. Single-wall carbon nanotubes offer higher resolution than conventional tips as they combine extremely small size with low adhesion to the sample surface. They also buckle rather than break when put under stress and are very durable. However, one hurdle to their use is the expense and difficulty involved in fabricating them, which has made their manufacture a new area of research [14]. High-speed AFM is another area of development for the technique. Hansma et al. have achieved 30 frames per second, which is 500 times faster than conventional AFM [15]. Applications of this work include the ability to measure real-time biological interactions.

Applications
AFM is particularly suited to more fragile samples that could be damaged by the tunneling current in STM. These include organic materials, biological macromolecules, polymers, ceramics and glasses, and they can be imaged in different environments, such as in liquid solution, under vacuum and at low temperatures. AFM is very useful in biology and is capable of carrying out structural studies of biomolecules such as DNA [16, 17]. Because it is amenable to aqueous solutions, the investigation of biological systems under near-physiological conditions is possible. AFM also allows investigation of protein folding and unfolding and is useful for studying protein-ligand interactions. An interesting article by Anderson has described the potential of AFM in chromatography and microfluidics, as it could provide shear-driven pumping of fluid, a mechanism for injecting samples, imaging of the liquid surfaces in the microchannels and, finally, removal of samples for further spectral analysis.

Magnetic Force Microscopy


A magnetic force microscope derives from the atomic force microscope (AFM). Unlike typical AFM, a magnetized tip is used to study magnetic materials, and thus the tip-sample magnetic interactions are detected. Many kinds of magnetic interactions are measured by MFM, including the magnetic dipolar interaction. MFM scanning often uses non-contact AFM (NC-AFM). In MFM measurements, the magnetic force between the sample and tip can be described by

F = μ0 (m · ∇) H

where m is the magnetic moment of the tip (approximated as a point dipole), H is the magnetic stray field from the sample surface, and μ0 is the magnetic permeability of free space.
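For a tip magnetized along z, the point-dipole expression reduces to Fz = μ0 mz (∂Hz/∂z). A hedged numerical sketch, in which both the tip moment and the sample stray-field model are invented for illustration only:

```python
import math

MU0 = 4 * math.pi * 1e-7  # permeability of free space, T*m/A

def mfm_force_z(m_tip, h_field, z, dz=1e-10):
    """Point-dipole force F_z = mu0 * m_z * dH_z/dz,
    with the stray-field gradient taken numerically."""
    grad = (h_field(z + dz) - h_field(z - dz)) / (2 * dz)
    return MU0 * m_tip * grad

# Hypothetical stray field above a bit pattern of period lam:
# H_z decaying as exp(-2*pi*z/lam) is a common thin-film approximation.
lam = 1e-6   # 1 um bit period (assumed)
h0 = 1e4     # surface stray field in A/m (assumed)
field = lambda z: h0 * math.exp(-2 * math.pi * z / lam)

fz = mfm_force_z(m_tip=1e-13, h_field=field, z=50e-9)  # 50 nm lift height
# fz < 0: the field decays with height, so the z-magnetized tip is pulled down.
```

The same routine works with any other stray-field model supplied as `h_field`.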


In the image above of magnetic force microscopy (MFM), a ferromagnetic tip is scanned at a certain constant height of several tens of nanometers above the surface. Thereby, the magnetostatic interaction between the evanescent stray field produced by the domain structure of the sample and the ferromagnetic tip is detected. MFM can be operated in static and dynamic modes. Typically, lateral resolution is around 50 nm and the force (force-gradient) sensitivity is better than 10 pN (10 N/m). Since the magnetic stray field from the sample can affect the magnetized state of the tip and vice versa, one must be careful in interpreting the magnetic information from the MFM measurement. For instance, to interpret the information quantitatively, the configuration of the tip magnetization must be known. Typical resolution of 30 nm can be achieved, although resolutions as low as 10 to 20 nm are attainable. A potential method of increasing the resolution would involve using an electromagnet on the tip instead of a permanent magnet. Enabling the magnetic tip only when placed over the pixel being sampled could increase the resolution.

A huge increase in the interest and work done in surface science resulted from the following inventions:

1982 - Scanning Tunneling Microscopy (STM)

- Tunneling current between tip and sample is used as the signal
- Both the tip and sample must be electrically conductive

1986 - Atomic Force Microscopy (AFM)

- Forces (atomic/electrostatic) between the tip and sample are sensed from the deflections of a flexible lever (cantilever)
- The cantilever tip flies above the sample with a typical distance on the order of tens of nanometers

1987 - Magnetic Force Microscopy (MFM)

- Derives from AFM, in which the magnetic forces between the tip and sample are sensed [7] [8]
- An image of the magnetic stray field is obtained by scanning the magnetized tip over the sample surface in a raster-like fashion

MFM Components
These are some of the main components of an MFM system:

1. Single Piezo tube

- Moves the sample in the x, y, z directions
- Voltage is applied to separate electrodes for different directions; typically, a 1 volt potential results in 1 to 10 nm displacement
- The image is put together by slowly scanning the sample surface in a raster fashion
- Scan areas range around 200 micrometers


- Imaging times range from about a few minutes to 30 minutes
- Restoring force constants (k) on the cantilever range from 0.01 to 100 N/m, depending on the material used to make the cantilever

2. Magnetized tip at one end of a flexible lever (cantilever); generally an AFM probe with a magnetic coating.

- In the past, tips were made of etched magnetic wires, such as nickel
- Now, tip-cantilever assemblies are batch fabricated using a combination of micromachining and photolithography [9] [10] [11]; as a result, smaller tips are possible, and better mechanical control of the tip-cantilever is obtained
- The cantilever can be made of single-crystalline silicon, silicon dioxide (SiO2) or silicon nitride (Si3N4); the Si3N4 cantilever-tip assemblies are usually more durable and have smaller restoring force constants (k)
- Tips are coated with a thin (< 50 nm) magnetic film (such as Ni or Co), usually of high coercivity, so that the tip magnetic state (or magnetization M) does not change during imaging
- The tip-cantilever is driven close to its resonance frequency by a piezo bimorph, with typical frequencies ranging from 10 kHz to 1 MHz

Scanning Procedure
The scanning method when using an MFM is called the "lift height" method (developed by Digital Instruments). When the tip scans the surface of a sample at close distances (< 100 nm), not only magnetic forces are sensed, but also atomic and electrostatic forces. The lift height method helps to enhance the magnetic contrast by doing the following:

1. First, the topographic profile of each scan line is measured. That is, the tip is brought into close proximity of the sample to take AFM measurements.
2. The magnetized tip is then lifted further away from the sample.
3. On the second pass, the magnetic signal is extracted.

Mode of Operation
1. Static Mode: The first mode of operating an MFM is called "Static or DC" mode.

- The stray field from the sample exerts a force on the magnetic tip.
- The force is detected by measuring the displacement of the cantilever end by optical means.
- The cantilever end is deflected either away from or towards the sample surface by a distance Δz (perpendicular to the surface).
- We refer to static mode when the deflection of the cantilever is measured.
- Forces in the range of tens of piconewtons are normally measured.

2. Dynamic Mode: The second mode of operation is called "Dynamic or AC" mode.

For small deflections, the tip-cantilever can be modeled as a damped harmonic oscillator with a proof mass (m) in [kg], an ideal spring constant (k) in [N/m], and a damper (D) in [N*s/m]. A sample analysis and behavior of such a system is found in Cantilever Analysis. If an external oscillating force Fz = F0 cos(ωt) is applied to the cantilever, then the tip will be displaced by an amount z. Moreover, the displacement will also oscillate (be harmonic), but with a phase shift φ between applied force and displacement given by:

z(t) = A cos(ωt - φ)

where the amplitude and phase shift are given by:

A = (F0/m) / sqrt[(ω0² - ω²)² + 4δ²ω²]
tan φ = 2δω / (ω0² - ω²)

where the quality factor of resonance, resonance angular frequency, and damping factor are given by:

Q = ω0 / (2δ),  ω0 = sqrt(k/m),  δ = D / (2m)
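The amplitude and phase expressions above can be evaluated numerically. The cantilever parameters below (k, m, D, drive force) are illustrative placeholders, not values from the text:

```python
import math

def cantilever_response(f_drive_hz, k=3.0, m=1e-11, damper=2e-9, f0_force=1e-9):
    """Steady-state amplitude [m] and phase lag [rad] of a damped,
    driven harmonic oscillator; all parameter values are illustrative."""
    w = 2 * math.pi * f_drive_hz
    w0 = math.sqrt(k / m)        # resonance angular frequency
    delta = damper / (2 * m)     # damping factor
    amp = (f0_force / m) / math.sqrt((w0**2 - w**2)**2 + (2 * delta * w)**2)
    phase = math.atan2(2 * delta * w, w0**2 - w**2)
    return amp, phase

w0 = math.sqrt(3.0 / 1e-11)                    # rad/s for k = 3 N/m, m = 1e-11 kg
amp, phase = cantilever_response(w0 / (2 * math.pi))
# At resonance the displacement lags the drive by exactly pi/2.
```

Driving at resonance maximizes amplitude while the phase sits at 90°, which is why phase detection is so sensitive to small frequency shifts.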


The dynamic mode of operation refers to measurements of the resonance frequency, or specifically, measurements of the shifts in the resonance frequency. The cantilever is driven to its resonance frequency and frequency shifts are detected. Assuming small vibration amplitudes (which is generally true in MFM measurements), to a first-order approximation the resonance frequency can be related to the natural frequency and the force gradient. That is, shifts in the resonance frequency are a result of changes in the effective spring constant due to the (repelling and attracting) forces on the tip.

The change in the natural resonance frequency is given by

Δf = -(f0 / 2k) (∂Fz/∂z)

where f0 = (1/2π) sqrt(k/m) is the natural resonance frequency.

For instance, the coordinate system is such that positive z is away from or perpendicular to the sample surface, so that an attractive force would be in the negative direction (Fz < 0), and thus its gradient ∂Fz/∂z is positive. Consequently, for attractive forces, the resonance frequency of the cantilever decreases (as described by the equation). The image is encoded in such a way that attractive forces are generally depicted in black, while repelling forces are coded white.
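The frequency-shift relation is a one-liner to evaluate; the cantilever numbers below are assumed for illustration:

```python
def mfm_frequency_shift(f0_hz, k, force_gradient):
    """First-order resonance shift: delta_f = -(f0 / (2*k)) * dF/dz.
    With the sign convention above, attractive interactions
    (positive gradient) lower the resonance frequency."""
    return -(f0_hz / (2 * k)) * force_gradient

# Illustrative numbers: a 75 kHz cantilever with k = 3 N/m over a
# magnetic domain producing a 1e-4 N/m force gradient.
shift_hz = mfm_frequency_shift(75e3, 3.0, 1e-4)  # negative -> dark pixel
```

A shift of only about a hertz is easily resolved by the phase-locked electronics, which is why dynamic mode dominates MFM practice.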

Image Formation
Calculating Forces on Magnetic Tips: Analytically, the magnetostatic energy (U) of the tip-sample system can be calculated in one of two ways [1] [5] [6] [13]:

- One can either compute the magnetization (M) of the tip in the presence of the sample magnetic stray field (H), or compute the magnetization of the sample in the presence of the tip magnetic stray field (whichever is easier).
- Then, integrate the (dot) product of the magnetization and stray field over the interaction volume to obtain the magnetostatic energy U.
- Compute the gradient of the energy over distance to obtain the force, F.

Assuming that the cantilever deflects along the z-axis, and the tip is magnetized along a certain direction (like the z-axis), the equations can be simplified.

Since the tip is magnetized along a specific direction, it will be sensitive to the component of the sample magnetic stray field along the same direction.

Imaging Sample
The MFM can be used to image many types of magnetic structures including domain walls (Bloch and Neel), closure domains, recorded magnetic bits, etc. Furthermore, domain wall motion can also be studied as an external magnetic field is applied. MFM images of various materials can be seen in the following books and journal publications:

- Thin films
- Nanoparticles
- Nanowires
- Permalloy disks
- Recording media


Below is a sample MFM image taken on a floppy disk at various lift heights.

Advantages
The popularity of MFM is due to many reasons including [2]:

- The sample does not need to be an electrical conductor
- Measurement can be performed at ambient temperature, in ultra-high vacuum (UHV), in liquid environments, and at different temperatures
- Measurement is nondestructive to the crystal lattice or structure
- Long-range magnetic interactions are not sensitive to surface contamination
- Tests require no special surface preparation or coating
- Deposition of thin non-magnetic layers on the sample does not alter the results
- Detectable magnetic field intensity sensitivity, H, is in the range of 10 A/m
- Detectable magnetic induction, B, is in the range of ~10^-5 T
- Typical measured forces range as low as 10^-14 N, with spatial resolutions as low as 20 nm
- MFM can be combined with other scanning methods like STM

Limitations
There are some shortcomings or difficulties when working with an MFM, and some of these are as follows:

- The image obtained depends on the type of tip (and magnetic coating) due to tip-sample interactions
- The magnetic fields of the tip and sample can change each other's magnetization, M, which can result in nonlinear interactions and thus difficult interpretation of the obtained image
- Relatively short scanning range (on the order of hundreds of micrometers) along any direction
- Scanning (lift) height affects the obtained image
- Housing of the MFM system is important, to shield electromagnetic noise (Faraday cage), acoustic noise (anti-vibration tables), air flow (air isolation cupboard), and static charge on the sample (ionizing units)

Nano Indentation
Nanoindentation refers to a variety of indentation hardness tests applied to small volumes. Indentation is perhaps the most commonly applied means of testing the mechanical properties of materials. The technique has its origins in the Mohs scale of mineral hardness, in which materials are ranked according to what they can scratch and are, in turn, scratched by. The characterization of solids in this way takes place on an essentially discrete scale, so much effort has been expended in order to develop techniques for evaluating material hardness over a continuous range. Hence, the adoption of the Meyer, Knoop, Brinell, Rockwell, and Vickers hardness tests. More recently (ca. 1975), the nanoindentation technique has been established as the primary tool for investigating the hardness of small volumes of material.

Background
In a traditional indentation test (macro or micro indentation), a hard tip whose mechanical properties are known (frequently made of a very hard material like diamond) is pressed into a sample whose properties are unknown. The load placed on the indenter tip is increased as the tip penetrates further into the specimen until it reaches a user-defined value. At this point, the load may be held constant for a period or removed. The area of the residual indentation in the sample is measured and the hardness, H, is defined as the maximum load, Pmax, divided by the residual indentation area, Ar:

H = Pmax / Ar

For most techniques, the projected area may be measured directly using light microscopy. As can be seen from this equation, a given load will make a smaller indent in a "hard" material than in a "soft" one. This technique is limited by large and varied tip shapes and by indenter rigs which do not have very good spatial resolution (the location of the area to be indented is very hard to specify accurately). Comparison across experiments, typically done in different laboratories, is difficult and often meaningless. Nanoindentation improves on these macro- and micro-indentation tests by indenting on the nanoscale with a very precise tip shape, high spatial resolution to place the indents, and by providing real-time load-displacement (into the surface) data while the indentation is in progress.
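The hardness definition is just a ratio, which is convenient to check numerically. With load in mN and residual area in μm², the result comes out directly in GPa (1 mN/μm² = 1 GPa):

```python
def hardness_gpa(p_max_mn, residual_area_um2):
    """H = P_max / A_r; with mN and um^2, the ratio is already in GPa."""
    return p_max_mn / residual_area_um2

# The same 10 mN load leaves a smaller indent in the harder material:
soft = hardness_gpa(10.0, 2.0)   # 5 GPa
hard = hardness_gpa(10.0, 0.5)   # 20 GPa
```

The example values are made up, but they illustrate the point in the text: for a fixed load, hardness and indent area are inversely related.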

Nanoindentation
In nanoindentation, small loads and tip sizes are used, so the indentation area may only be a few square micrometres or even nanometres. This presents problems in determining the hardness, as the contact area is not easily found. Atomic force microscopy or scanning electron microscopy techniques may be utilized to image the indentation, but can be quite cumbersome. Instead, an indenter with a geometry known to high precision (usually a Berkovich tip, which has a three-sided pyramid geometry) is employed. During the course of the instrumented indentation process, a record of the depth of penetration is made, and then the area of the indent is determined using the known geometry of the indentation tip. While indenting, various parameters such as load and depth of penetration can be measured. A record of these values can be plotted on a graph to create a load-displacement curve (such as the one shown in Figure 1). These curves can be used to extract mechanical properties of the material.

Modulus of elasticity: The slope of the curve, dP/dh, upon unloading is indicative of the stiffness S of the contact. This value generally includes a contribution from both the material being tested and the response of the test device itself. The stiffness of the contact can be used to calculate the reduced modulus of elasticity Er as

Er = (sqrt(π) / (2β)) · S / sqrt(A(hc))

where A(hc) is the area of the indentation at the contact depth hc (the depth of the residual indentation), and β is a geometrical constant on the order of unity. A(hc) is often approximated by a fitting polynomial, as shown below for a Berkovich tip:

A(hc) = 24.5 hc² + C1 hc + C2 hc^(1/2) + C3 hc^(1/4) + ... + C8 hc^(1/128)

The reduced modulus Er is related to the modulus of elasticity Es of the test specimen through the following relationship from contact mechanics:

1/Er = (1 - νi²)/Ei + (1 - νs²)/Es

Here, the subscript i indicates a property of the indenter material and ν is Poisson's ratio. For a diamond indenter tip, Ei is 1140 GPa and νi is 0.07. Poisson's ratio varies between 0 and 0.5 for most materials (though it can be negative) and is typically around 0.3.
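Under stated assumptions (a Berkovich geometry constant β ≈ 1.034 and an assumed specimen Poisson's ratio of 0.3), the two relations can be chained into a small calculation. The stiffness and contact area below are made-up example values:

```python
import math

def reduced_modulus(stiffness, contact_area, beta=1.034):
    """E_r = (sqrt(pi) / (2*beta)) * S / sqrt(A(h_c));
    beta ~ 1.034 is a commonly assumed Berkovich geometry constant."""
    return (math.sqrt(math.pi) / (2 * beta)) * stiffness / math.sqrt(contact_area)

def specimen_modulus(e_r, nu_s=0.3, e_i=1140e9, nu_i=0.07):
    """Solve 1/E_r = (1-nu_i^2)/E_i + (1-nu_s^2)/E_s for E_s.
    Diamond indenter values (E_i, nu_i) from the text; nu_s is a typical guess."""
    return (1 - nu_s**2) / (1 / e_r - (1 - nu_i**2) / e_i)

# Made-up unloading data: S = 2e5 N/m, A(h_c) = 1 um^2 = 1e-12 m^2.
e_r = reduced_modulus(2e5, 1e-12)
e_s = specimen_modulus(e_r)   # comes out on the order of 2e11 Pa here
```

Note the indenter-compliance correction matters: Es is noticeably larger than Er because diamond is not infinitely stiff.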

Hardness: There are two different types of hardness that can be obtained from a nanoindenter: one is as in traditional macroindentation tests, where one attains a single hardness value per experiment; the other is based on the hardness as the material is being indented, resulting in hardness as a function of depth. (This is only available in MTS indenters, the so-called "continuous stiffness" option on older models [such as the Nano Indenter II].)

The hardness is given by the equation above, relating the maximum load to the indentation area. The area can be measured after the indentation by in-situ atomic force microscopy, or by 'after-the-event' optical (or electron) microscopy. An example indentation image, from which the area may be determined, is shown at right. Some nanoindenters use an area function based on the geometry of the tip, compensating for elastic load during the test. Use of this area function provides a method of gaining real-time nanohardness values from a load-displacement graph. However, there is some controversy over the use of area functions to estimate the residual areas versus direct measurement. An area function A(h) typically describes the projected area of an indent as a 2nd-order polynomial function of the indenter depth h. Exclusive application of an area function in the absence of adequate knowledge of material response can lead to misinterpretation of the resulting data. Cross-checking of areas microscopically is to be encouraged.

Strain-rate sensitivity: The strain-rate sensitivity of the flow stress, m, is defined as

m = ∂(ln σ) / ∂(ln ε̇)

where σ is the flow stress and ε̇ is the strain rate produced under the indenter. For nanoindentation experiments which include a holding period at constant load (i.e. the flat, top area of the load-displacement curve), m can be determined from

m = ∂(ln H) / ∂(ln ε̇p),  with ε̇p = ḣp / hp


The subscripts p indicate these values are to be determined from the plastic components only.
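One way to estimate m from a constant-load hold is a log-log slope, as sketched below on synthetic data constructed to obey H ∝ ε̇^0.05 (so the recovered m should be 0.05):

```python
import math

def strain_rate_sensitivity(hardness, strain_rate):
    """Average log-log slope m = d(ln H) / d(ln rate) between
    consecutive points of a constant-load hold."""
    slopes = []
    for i in range(1, len(hardness)):
        d_ln_h = math.log(hardness[i] / hardness[i - 1])
        d_ln_r = math.log(strain_rate[i] / strain_rate[i - 1])
        slopes.append(d_ln_h / d_ln_r)
    return sum(slopes) / len(slopes)

# Synthetic hold data (illustrative, not measured): H ~ rate**0.05
rates = [1e-3, 1e-2, 1e-1]        # plastic strain rates, 1/s
hardness = [2.0e9 * r**0.05 for r in rates]
m = strain_rate_sensitivity(hardness, rates)  # recovers ~0.05
```

In practice one would fit ln H against ln ε̇p over the whole hold segment rather than averaging pairwise slopes, but the principle is the same.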

Activation volume: Interpreted loosely as the volume swept out by dislocations during thermal activation, the activation volume V* is

V* = kB T · (∂ ln ε̇ / ∂σ)

where T is the temperature and kB is Boltzmann's constant. From the definition of m, it is easy to see that

V* = kB T / (m σ)
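Combining the two expressions, V* = kB·T/(m·σ) is a one-line calculation; the m and flow-stress values below are illustrative, and the conversion to cubic Burgers vectors assumes b ≈ 0.25 nm:

```python
KB = 1.380649e-23  # Boltzmann constant, J/K

def activation_volume(temp_k, m, flow_stress_pa):
    """V* = kB * T / (m * sigma), from the definition of m."""
    return KB * temp_k / (m * flow_stress_pa)

# Illustrative values: room temperature, m = 0.05, sigma = 1 GPa.
v_star = activation_volume(300.0, 0.05, 1e9)       # m^3
n_burgers = v_star / (0.25e-9)**3                  # in units of b^3 (b assumed)
```

Expressing V* in units of b³ (here a few b³) is the conventional way of comparing activation volumes across deformation mechanisms.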

Limitations:
Conventional nanoindentation methods for calculating the modulus of elasticity (based on the unloading curve) are limited to linear, isotropic materials. Problems associated with the "pile-up" or "sink-in" of the material on the edges of the indent during the indentation process remain under investigation. It is possible to measure the pile-up contact area using computerized image analysis of atomic force microscope (AFM) images of the indentations [3]. This process also depends on linear isotropic elastic recovery for the indent reconstruction.


U N I T 3


Electron Microscopy
Electron microscopy is an imaging technique that uses an electron beam to probe a material. Since the wavelength of electrons is much smaller than the wavelength of visible light, diffraction effects occur at much smaller physical dimensions. The imaging resolution in electron microscopy is therefore much better than in light microscopy. In Table 4.1 the light and the electron microscope are compared. The electron microscope can reproduce very tiny components in an image called an electron micrograph. The main disadvantage of the technique is that samples have to be viewed under vacuum, entering the instrument through an air lock to maintain internal vacuum conditions. This means that no living material can be studied. Also, samples have to be specially prepared to give proper detail, e.g. by freezing, fixation or dehydration, which may result in artefacts being produced. This can give rise to the problem of distinguishing artefacts from the material of interest, particularly in biological samples. There are two main types of electron microscopy: scanning electron microscopy (SEM) and transmission electron microscopy (TEM).

Scanning Electron Microscopy


The scanning electron microscope (SEM) is a type of electron microscope that images the sample surface by scanning it with a high-energy beam of electrons in a raster scan pattern. The electrons interact with the atoms that make up the sample, producing signals that contain information about the sample's surface topography, composition and other properties such as electrical conductivity. The types of signals produced by an SEM include secondary electrons, back-scattered electrons (BSE), characteristic X-rays, light (cathodoluminescence), specimen current and transmitted electrons. Secondary electron detectors are common in all SEMs, but it is rare that a single machine would have detectors for all possible signals. The signals result from interactions of the electron beam with atoms at or near the surface of the sample. In the most common or standard detection mode, secondary electron imaging or SEI, the SEM can produce very high-resolution images of a sample surface, revealing details smaller than 1 to 5 nm in size. Due to the very narrow electron beam, SEM micrographs have a large depth of field, yielding a characteristic three-dimensional appearance useful for understanding the surface structure of a sample. This is exemplified by the micrograph of pollen shown to the right. A wide range of magnifications is possible, from about 10 times (about equivalent to that of a powerful hand-lens) to more than 500,000 times, about 250 times the magnification limit of the best light microscopes. Back-scattered electrons (BSE) are beam electrons that are reflected from the sample by elastic scattering. BSE are often used in analytical SEM along with the spectra made from the characteristic X-rays. Because the intensity of the BSE signal is strongly related to the atomic number (Z) of the specimen, BSE images can provide information about the distribution of different elements in the sample.
For the same reason, BSE imaging can image colloidal gold immuno-labels of 5 or 10 nm diameter which would otherwise be difficult or impossible to detect in secondary electron images in biological specimens. Characteristic X-rays are emitted when the electron beam removes an inner shell electron from the sample, causing a higher energy electron to fill the shell and release energy. These characteristic X-rays are used to identify the composition and measure the abundance of elements in the sample.

Principle:
In brief, the scanning electron microscope (SEM) images the electrons that are reflected from a sample. These images are useful for studying surface morphology or measuring particle sizes. The SEM produces the images by detecting secondary electrons that are emitted from the surface due to excitation by the primary electron beam. Generally, SEM resolution is about an order of magnitude less than TEM resolution, but because the SEM image relies on surface processes rather than transmission it is able to image bulk samples and has a much greater depth of view, and so can produce images that are a good representation of the 3-D structure of the sample. The SEM has two major advantages over a conventional microscope: higher magnification and greater depth of field. At higher magnification more details may be evident, and the great depth of field makes the examination of specimens easier.

Instrument:
An SEM instrument can be depicted as illustrated in Figure.

Source: An electron gun delivers the electron beam, which is produced by applying a high voltage to a hot tungsten filament, and accelerates the emitted electrons through a high electric field, 10-50 kV. The electron beam is then focused with magnetic field lenses to a spot of 100 nm or less on the sample. The microscope column is under high vacuum.

Sample: When using an SEM, biological samples are normally first coated with a metal that readily reflects electrons, e.g. gold. This coating also provides a conducting surface for electrons, to avoid charging of the sample. The SEM has a high-voltage electron emitter that sends a beam of electrons down the column, which is at very high vacuum, onto the sample, where they bounce off and are used to form the image. The observer therefore sees a picture of the surface of the sample, without any internal information. The sample, after being prepared, is mounted on the sample stage/sample grid.

Discriminator: The electron beam is rastered across the sample via the scanning coils by ramping voltages on the x- and y-deflection plates through which the electron beam passes (the z-axis is the electron-beam direction).

Detectors: The sample can be viewed by detecting back-scattered electrons, secondary electrons or even X-rays emitted by the sample. Each type of detection can give different information about the sample.

Output: The computer controls the scanning coils and also manipulates and displays the data received by the detectors.

Information Obtained
Scanning electron microscopy gives the following qualitative information: topography (the surface features of an object and their texture), morphology (the shape, size and arrangement of the particles making up the object that are lying on the surface of the sample) and composition (the elements and compounds the sample is composed of and their relative ratios), if so equipped. All of these features are in the nanometre region of size. Crystallographic information is also possible in SEM, i.e. the arrangement of atoms in the specimen and their degree of order (only useful on single-crystal particles >20 μm).

These pollen grains taken on an SEM show the characteristic depth of field of SEM micrographs.

Scanning process and image formation


In a typical SEM, an electron beam is thermionically emitted from an electron gun fitted with a tungsten filament cathode. Tungsten is normally used in thermionic electron guns because it has the highest melting point and lowest vapour pressure of all metals, thereby allowing it to be heated for electron emission, and because of its low cost. Other types of electron emitters include lanthanum hexaboride (LaB6) cathodes, which can be used in a standard tungsten filament SEM if the vacuum system is upgraded, and field emission guns (FEG), which may be of the cold-cathode type using tungsten single-crystal emitters or the thermally-assisted Schottky type, using emitters of zirconium oxide. The electron beam, which typically has an energy ranging from a few hundred eV to 40 keV, is focused by one or two condenser lenses to a spot about 0.4 nm to 5 nm in diameter. The beam passes through pairs of scanning coils or pairs of deflector plates in the electron column, typically in the final lens, which deflect the beam in the x and y axes so that it scans in a raster fashion over a rectangular area of the sample surface. When the primary electron beam interacts with the sample, the electrons lose energy by repeated random scattering and absorption within a teardrop-shaped volume of the specimen known as the interaction volume, which extends from less than 100 nm to around 5 μm into the surface. The size of the interaction volume depends on the electron's landing energy, the atomic number of the specimen and the specimen's density. The energy exchange between the electron beam and the sample results in the reflection of high-energy electrons by elastic scattering, the emission of secondary electrons by inelastic scattering and the emission of electromagnetic radiation, each of which can be detected by specialized detectors. The beam current absorbed by the specimen can also be detected and used to create images of the distribution of specimen current.
Electronic amplifiers of various types are used to amplify the signals which are displayed as variations in brightness on a cathode ray tube. The raster scanning of the CRT display is synchronised with that of the beam on the specimen in the microscope, and the resulting image is therefore a distribution map of the intensity of the signal being emitted from the scanned area of the specimen. The image may be captured by photography from a high resolution cathode ray tube, but in modern machines is digitally captured and displayed on a computer monitor and saved to a computer's hard disc.
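The scan-and-map process described above can be sketched in miniature: the detector signal sampled at each beam position becomes the brightness of one pixel. This is an illustrative sketch only; `detector_signal` is a made-up placeholder for the real amplified detector output.

```python
# Minimal sketch of SEM raster image formation: the detector signal sampled
# at each (x, y) beam position becomes the brightness of one pixel.

def detector_signal(x, y):
    # Placeholder signal: brighter toward the centre of the scanned area.
    return 255 - (abs(x - 2) + abs(y - 2)) * 40

def raster_scan(width, height):
    image = []
    for y in range(height):              # slow scan axis
        row = []
        for x in range(width):           # fast scan axis
            row.append(detector_signal(x, y))  # one pixel per beam position
        image.append(row)
    return image

image = raster_scan(5, 5)
```

Because display raster and specimen raster are scanned in lockstep, the image is exactly the intensity map described in the text, regardless of what physical signal (secondary electrons, backscattered electrons, specimen current) the detector provides.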

Magnification
Magnification in a SEM can be controlled over a range of about five orders of magnitude, from roughly 10 to 500,000 times. Unlike optical and transmission electron microscopes, image magnification in the SEM is not a function of the power of the objective lens. SEMs may have condenser and objective lenses, but their function is to focus the beam to a spot, not to image the specimen. Provided the electron gun can generate a beam with sufficiently small diameter, a SEM could in principle work entirely without condenser or objective lenses,


although it might not be very versatile or achieve very high resolution. In a SEM, as in scanning probe microscopy, magnification results from the ratio of the dimensions of the raster on the specimen and the raster on the display device. Assuming that the display screen has a fixed size, higher magnification results from reducing the size of the raster on the specimen, and vice versa. Magnification is therefore controlled by the current supplied to the x, y scanning coils, or the voltage supplied to the x, y deflector plates, and not by objective lens power.
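The raster-ratio definition of magnification above can be written down in a few lines. The numbers and the function name below are illustrative, not taken from any real instrument:

```python
# Magnification in the SEM is the ratio of the display raster size to the
# specimen raster size -- no lens power is involved.

def sem_magnification(display_width_mm, scan_width_um):
    """Magnification = display raster width / specimen raster width."""
    return (display_width_mm * 1000.0) / scan_width_um

# A 100 mm wide display and a 10 um wide scan on the specimen:
assert sem_magnification(100, 10) == 10000.0
# Halving the scanned area doubles the magnification, with no lens change:
assert sem_magnification(100, 5) == 20000.0
```

This is why reducing the scan-coil current (a smaller raster on the specimen) raises magnification while the optics stay untouched.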

Sample preparation
All samples must also be of an appropriate size to fit in the specimen chamber and are generally mounted rigidly on a specimen holder called a specimen stub. Several models of SEM can examine any part of a 6-inch (15 cm) semiconductor wafer, and some can tilt an object of that size to 45°. For conventional imaging in the SEM, specimens must be electrically conductive, at least at the surface, and electrically grounded to prevent the accumulation of electrostatic charge at the surface. Metal objects require little special preparation for SEM except for cleaning and mounting on a specimen stub. Nonconductive specimens tend to charge when scanned by the electron beam, and especially in secondary electron imaging mode, this causes scanning faults and other image artifacts. They are therefore usually coated with an ultrathin coating of electrically conducting material, commonly gold, deposited on the sample either by low-vacuum sputter coating or by high-vacuum evaporation. Conductive materials in current use for specimen coating include gold, gold/palladium alloy, platinum, osmium,[5] iridium, tungsten, chromium and graphite. Coating prevents the accumulation of static electric charge on the specimen during electron irradiation.

An insect coated in gold, having been prepared for viewing with a scanning electron microscope.

Two important reasons for coating, even when there is more than enough specimen conductivity to prevent charging, are to maximise signal and improve spatial resolution, especially with samples of low atomic number (Z). Broadly, signal increases with atomic number, especially for backscattered electron imaging. The improvement in resolution arises because in low-Z materials, that is, materials of low atomic number such as carbon, the electron beam can penetrate several micrometres below the surface, generating signals from an interaction volume much larger than the beam diameter and reducing spatial resolution. Coating with a high-Z material such as gold maximises secondary electron yield from within a surface layer a few nm thick, and suppresses secondary electrons generated at greater depths, so that the signal is predominantly derived from locations closer to the beam and closer to the specimen surface than would be the case in an uncoated, low-Z material. These effects are particularly, but not exclusively, relevant to biological samples. An alternative to coating for some biological samples is to increase the bulk conductivity of the material by impregnation with osmium using variants of the OTO staining method (O-osmium, T-thiocarbohydrazide, O-osmium).[6][7] Nonconducting specimens may be imaged uncoated using specialized SEM instrumentation such as the "Environmental SEM" (ESEM) or field emission gun (FEG) SEMs operated at low voltage. Environmental SEM instruments place the specimen in a relatively high-pressure chamber where the working distance is short and the electron optical column is differentially pumped to keep the pressure adequately low at the electron gun. The high-pressure region around the sample in the ESEM neutralizes charge and provides an amplification of the secondary electron signal.
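The several-micrometre penetration quoted above for low-Z materials can be illustrated with the Kanaya-Okayama range, a commonly used empirical estimate of electron penetration depth (the formula is not from this text; material parameters are standard handbook values):

```python
def kanaya_okayama_range_um(E_keV, A, Z, rho):
    """Empirical Kanaya-Okayama electron range in micrometres.
    E_keV: beam energy (keV), A: atomic weight (g/mol),
    Z: atomic number, rho: density (g/cm^3)."""
    return 0.0276 * A * E_keV**1.67 / (Z**0.889 * rho)

# At 20 keV the beam penetrates far deeper into carbon than into gold,
# which is why uncoated low-Z samples give a large interaction volume
# and poor spatial resolution:
r_carbon = kanaya_okayama_range_um(20, 12.01, 6, 2.26)    # a few micrometres
r_gold   = kanaya_okayama_range_um(20, 196.97, 79, 19.3)  # under a micrometre
```

The roughly five-fold difference between carbon and gold at the same beam energy is the quantitative version of the coating argument made above.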
Low-voltage SEM of non-conducting specimens can be operationally difficult to accomplish in a conventional SEM and is typically a research application for specimens that are sensitive to the process of applying conductive coatings. Low-voltage SEM is typically conducted in an FEG-SEM because the FEG is capable of producing high primary electron brightness even at low accelerating potentials. Operating conditions must be adjusted such that the local space charge is at or near neutral, with adequate low-voltage secondary electrons being available to neutralize any positively charged surface sites. This requires that the primary electron beam's potential and current be tuned to the characteristics of the specimen. Embedding in a resin with further polishing to a mirror-like finish can be used for both biological and materials specimens when imaging in backscattered electrons or when doing quantitative X-ray microanalysis.


An image of a house fly's compound eye surface, taken with a scanning electron microscope at ×450 magnification.

Detection of secondary electrons


The most common imaging mode collects low-energy (<50 eV) secondary electrons that are ejected from the K-shells of the specimen atoms by inelastic scattering interactions with beam electrons. Due to their low energy, these electrons originate within a few nanometers of the sample surface.[16] The electrons are detected by an Everhart-Thornley detector,[17] which is a type of scintillator-photomultiplier system. The secondary electrons are first collected by attracting them towards an electrically biased grid at about +400 V, and then further accelerated towards a phosphor or scintillator positively biased to about +2,000 V. The accelerated secondary electrons are now sufficiently energetic to cause the scintillator to emit flashes of light (cathodoluminescence), which are conducted to a photomultiplier outside the SEM column via a light pipe and a window in the wall of the specimen chamber. The amplified electrical signal output by the photomultiplier is displayed as a two-dimensional intensity distribution that can be viewed and photographed on an analogue video display, or subjected to analog-to-digital conversion and displayed and saved as a digital image. This process relies on a raster-scanned primary beam. The brightness of the signal depends on the number of secondary electrons reaching the detector. If the beam enters the sample perpendicular to the surface, then the activated region is uniform about the axis of the beam and a certain number of electrons "escape" from within the sample. As the angle of incidence increases, the "escape" distance on one side of the beam will decrease, and more secondary electrons will be emitted. Thus steep surfaces and edges tend to be brighter than flat surfaces, which results in images with a well-defined, three-dimensional appearance. Using this technique, image resolution less than 0.5 nm is possible.
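The edge-brightening effect described above is often approximated by the secant law, in which the secondary electron yield grows as 1/cos(θ) with surface tilt θ. This is a textbook approximation, not something stated in this text:

```python
import math

def secondary_yield(delta0, theta_deg):
    """Secant-law model: SE yield grows as 1/cos(theta) with surface tilt.
    delta0 is the yield at normal incidence (theta = 0)."""
    return delta0 / math.cos(math.radians(theta_deg))

flat = secondary_yield(1.0, 0)    # beam normal to a flat surface
tilt = secondary_yield(1.0, 60)   # a 60-degree facet emits twice as much
```

The divergence of 1/cos(θ) as θ approaches 90° is the model's way of expressing why edges and steep surfaces appear bright in secondary electron images.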

Detection of backscattered electrons


Backscattered electrons (BSE) consist of high-energy electrons originating in the electron beam, that are reflected or back-scattered out of the specimen interaction volume by elastic scattering interactions with specimen atoms. Since heavy elements (high atomic number) backscatter electrons more strongly than light elements (low atomic number), and thus appear brighter in the image, BSE are used to detect contrast between areas with different chemical compositions.[16] The Everhart-Thornley detector, which is normally positioned to one side of the specimen, is inefficient for the detection of backscattered electrons because few such electrons are emitted in the solid angle subtended by the detector, and because the positively biased detection grid has little ability to attract the higher energy BSE electrons. Dedicated backscattered electron detectors are positioned above the sample in a "doughnut" type arrangement, concentric with the electron beam, maximising the solid angle of collection. BSE detectors are usually either of scintillator or semiconductor types. When all parts of the detector are used to collect electrons symmetrically about the beam, atomic number contrast is produced. However, strong topographic contrast is produced by collecting back-scattered electrons from one side above the specimen using an asymmetrical, directional BSE detector; the resulting contrast appears as illumination of the topography from that side. Semiconductor detectors can be made in radial segments that can be switched in or out to control the type of contrast produced and its directionality. Backscattered electrons can also be used to form an electron backscatter diffraction (EBSD) image that can be used to determine the crystallographic structure of the specimen.
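The Z-dependence of backscattering can be made concrete with an empirical polynomial fit for the backscatter coefficient η (often attributed to Reuter; an approximation for normal beam incidence, not taken from this text):

```python
def backscatter_coefficient(Z):
    """Empirical cubic fit of the backscatter coefficient eta to atomic
    number Z (Reuter's polynomial, normal incidence)."""
    return -0.0254 + 0.016*Z - 1.86e-4*Z**2 + 8.3e-7*Z**3

eta_c  = backscatter_coefficient(6)    # carbon: few electrons backscattered
eta_au = backscatter_coefficient(79)   # gold: roughly half are backscattered
```

The nearly order-of-magnitude difference between carbon and gold is the atomic-number contrast that BSE imaging exploits.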

Beam-injection analysis of semiconductors


The nature of the SEM's probe, energetic electrons, makes it uniquely suited to examining the optical and electronic properties of semiconductor materials. The high-energy electrons from the SEM beam will inject charge carriers into the semiconductor. Thus, beam electrons lose energy by promoting electrons from the valence band into the conduction band, leaving behind holes. In a direct bandgap material, recombination of these electron-hole pairs will result in cathodoluminescence; if the sample contains an internal electric field, such as is present at a p-n junction, the SEM beam injection of carriers will cause electron beam induced current (EBIC) to flow. Cathodoluminescence and EBIC are referred to as "beam-injection" techniques, and are very powerful probes of the optoelectronic behavior of semiconductors, particularly for studying nanoscale features and defects.

Cathodoluminescence
Cathodoluminescence, the emission of light when atoms excited by high-energy electrons return to their ground state, is analogous to UV-induced fluorescence, and some materials such as zinc sulfide and some fluorescent dyes, exhibit both phenomena.


Cathodoluminescence is most commonly experienced in everyday life as the light emission from the inner surface of the cathode ray tube in television sets and computer CRT monitors. In the SEM, CL detectors either collect all light emitted by the specimen, or can analyse the wavelengths emitted by the specimen and display an emission spectrum or an image of the distribution of cathodoluminescence emitted by the specimen in real colour.

X-ray microanalysis
X-rays, which are also produced by the interaction of electrons with the sample, may also be detected in an SEM equipped for energy-dispersive X-ray spectroscopy or wavelength-dispersive X-ray spectroscopy.

Resolution of the SEM


The spatial resolution of the SEM depends on the size of the electron spot, which in turn depends on both the wavelength of the electrons and the electron-optical system which produces the scanning beam. The resolution is also limited by the size of the interaction volume, or the extent to which the material interacts with the electron beam. The spot size and the interaction volume are both large compared to the distances between atoms, so the resolution of the SEM is not high enough to image individual atoms, as is possible in the shorter-wavelength (i.e. higher-energy) transmission electron microscope (TEM). The SEM has compensating advantages, though, including the ability to image a comparatively large area of the specimen; the ability to image bulk materials (not just thin films or foils); and the variety of analytical modes available for measuring the composition and properties of the specimen. Depending on the instrument, the resolution can fall somewhere between less than 1 nm and 20 nm. As of 2009, the world's highest SEM resolution at high beam energies (0.4 nm at 30 kV) was obtained with the Hitachi S-5500. At low beam energies, the best resolution (as of 2009) was achieved by the Magellan system from FEI Company (0.9 nm at 1 kV).

Environmental SEM
Conventional SEM requires samples to be imaged under vacuum, because a gas atmosphere rapidly spreads and attenuates electron beams. Consequently, samples that produce a significant amount of vapour, e.g. wet biological samples or oil-bearing rock, need to be either dried or cryogenically frozen. Processes involving phase transitions, such as the drying of adhesives or melting of alloys, liquid transport, chemical reactions, solid-air-gas systems and living organisms in general cannot be observed. The first commercial development of the Environmental SEM (ESEM) in the late 1980s [18][19] allowed samples to be observed in low-pressure gaseous environments (e.g. 1-50 Torr) and high relative humidity (up to 100%). This was made possible by the development of a secondary-electron detector [20][21] capable of operating in the presence of water vapour, and by the use of pressure-limiting apertures with differential pumping in the path of the electron beam to separate the vacuum region (around the gun and lenses) from the sample chamber. The first commercial ESEMs were produced by the ElectroScan Corporation in the USA in 1988. ElectroScan was later taken over by Philips (now FEI Company) in 1996. ESEM is especially useful for non-metallic and biological materials because coating with carbon or gold is unnecessary. Uncoated plastics and elastomers can be routinely examined, as can uncoated biological samples. Coating can be difficult to reverse, may conceal small features on the surface of the sample and may reduce the value of the results obtained. X-ray analysis is difficult with a coating of a heavy metal, so carbon coatings are routinely used in conventional SEMs, but ESEM makes it possible to perform X-ray microanalysis on uncoated non-conductive specimens. ESEM may be preferred for electron microscopy of unique samples from criminal or civil actions, where forensic analysis may need to be repeated by several different experts.

Applications:
SEM has been used to image DNA molecules tagged with gold nanoparticles [3]. This technique allowed counting and hence quantitative measurement of the molecules of interest. SEM with EDX has also been employed to measure aerosol particles in workroom air [4]. The size, morphology and chemical composition of over 2000 particles were determined. SEM is also useful for the study of geological materials, grains and crystal structure.

Transmission Electron Microscopy


Principles
The transmission electron microscope (TEM) images the electrons that pass through a sample. Since electrons interact strongly with matter, they are attenuated as they pass through a solid; this requires that the samples be prepared in very thin sections. The image of the sample is observed on a phosphor screen below the sample and can be recorded with film. Generally, the TEM resolution (0.5 nm) is about an order of magnitude better than the SEM resolution (10 nm). Further information on TEM is available elsewhere [5].

Instrument
A TEM instrument can be depicted as shown in Figure


Source
An electron gun delivers the electron beam, as in SEM, and the microscope column is likewise under very high vacuum.

Sample
A TEM produces an image that is a projection of the entire object, including the surface and the internal structures. The incoming electron beam interacts with the sample as it passes through the entire thickness of the sample (like a slide projector). Objects with different internal structures can be differentiated because they give different projections. However, the image is two-dimensional, and depth information in structures is lost. Furthermore, the samples need to be thin (~0.1 µm), or they will absorb too much of the electron beam. The prepared sample is mounted on the compustage/sample grid.

Discriminator
The controller is used to scan through the sample.

Detectors
The image strikes the phosphor image screen and light is generated, allowing the user to see the image. Diffraction information is also imaged on a separate platform. TEM microscopes can image individual atoms and their relative placement, and can give compositional information over an area of interest.

Output
The computer controls the scanning coils and also manipulates the data received by the detectors.

Information Obtained
TEM gives the following qualitative information: morphological or structural information (the size, shape and arrangement of the particles or phases which make up the specimen/sample), crystallographic information (the arrangement of atoms in the specimen, their degree of order, and any defects) and, if so equipped, compositional information (the elements and compounds the sample is composed of and their relative ratios). All of these features are in the low-nanometer size range.

Applications
Because the electron beam goes through the sample, TEM reveals the interior of a specimen/sample. It is often used for the study of physical samples, such as minerals and rocks in geology, and new materials. It is also suitable for many types of biological, chemical and environmental samples, the main limitation being the ability to prepare sample sections thin enough to allow the electron beam to pass through. One environmental study used TEM to identify and quantify inorganic particles in a colloidal size range by applying a straightforward method of sample preparation (direct centrifugation of the samples on transmission electron microscopy grids) in conjunction with particle analysis using TEM and EDX [9]. Another environmental study examined ultrafine (~100 nm) ash particles in three coal fly ashes [10]. Crystalline phases down to 10 nm in size were identified.

High Resolution transmission Electron Microscopy


High-resolution transmission electron microscopy (HRTEM) is an imaging mode of the transmission electron microscope (TEM) that allows the imaging of the crystallographic structure of a sample at an atomic scale.[1] Because of its high resolution, it is an invaluable tool for studying nanoscale properties of crystalline materials such as semiconductors and metals. At present, the highest resolution realised is 0.8 angstroms (0.08 nm) with microscopes such as the OAM at NCEM. Ongoing research and development, such as efforts in the framework of TEAM, will soon push the resolution of HRTEM to 0.5 Å. At these small scales, individual atoms and crystalline defects can be imaged. Since all crystal structures are three-dimensional, it may be necessary to combine several views of the crystal, taken from different angles, into a 3D map. This technique is called electron crystallography. One of the difficulties with HRTEM is that image formation relies on phase contrast. In phase-contrast imaging, contrast is not necessarily intuitively interpretable, as the image is influenced by strong aberrations of the imaging lenses in the microscope. One major aberration is caused by focus and astigmatism, which often can be estimated from the Fourier transform of the HRTEM image.

Image contrast and interpretation


As opposed to conventional microscopy, HRTEM does not use amplitudes, i.e. absorption by the sample, for image formation. Instead, contrast arises from the interference in the image plane of the electron wave with itself. Due to our inability to record the phase of these waves, we generally measure the amplitude resulting from this interference; however, the phase of the electron wave still carries the information about the sample and generates contrast in the image, hence the name phase-contrast imaging. This, however, is true only if the sample is thin enough that amplitude variations only slightly affect the image (the so-called weak phase object approximation, WPOA).


The interaction of the electron wave with the crystallographic structure of the sample is not entirely understood yet, but a qualitative idea of the interaction can readily be obtained. Each imaging electron interacts independently with the sample. Above the sample, the wave of an electron can be approximated as a plane wave incident on the sample surface. As it penetrates the sample, it is attracted by the positive atomic potentials of the atom cores, and channels along the atom columns of the crystallographic lattice (s-state model). At the same time, the interaction between the electron wave in different atom columns leads to Bragg diffraction. The exact description of dynamical scattering of electrons in a sample not satisfying the WPOA (almost all real samples) still remains the holy grail of electron microscopy. However, the physics of electron scattering and electron microscope image formation are sufficiently well known to allow accurate simulation of electron microscope images.[2] As a result of the interaction with the sample, the electron exit wave right below the sample, Ψe(x,u), as a function of the spatial coordinate x, is a superposition of a plane wave and a multitude of diffracted beams with different in-plane spatial frequencies u (high spatial frequencies correspond to large distances from the optical axis). The phase change of Ψe(x,u) compared to the incident wave peaks at the location of the atom columns. The exit wave now passes through the imaging system of the microscope, where it undergoes further phase change and interferes as the image wave in the imaging plane (photo plate or CCD). It is important to realize that the recorded image is NOT a direct representation of the sample's crystallographic structure. For instance, high intensity might or might not indicate the presence of an atom column at that precise location (see simulation).
The relationship between the exit wave and the image wave is a highly nonlinear one and is a function of the aberrations of the microscope. It is described by the contrast transfer function.

The phase contrast transfer function


The phase contrast transfer function (CTF) is a function of limiting apertures and aberrations in the imaging lenses of a microscope. It describes their effect on the phase of the exit wave Ψe(x,u) and propagates it to the image wave. Following Williams and Carter,[3] if we assume the WPOA holds (thin sample), the CTF becomes

CTF(u) = A(u) E(u) 2 sin(χ(u))
where A(u) is the aperture function and E(u) describes the attenuation of the wave for higher spatial frequency u, also called the envelope function. χ(u) is a function of the aberrations of the electron optical system. The last, sinusoidal term of the CTF determines the sign with which components of frequency u will enter contrast in the final image. If one takes into account only spherical aberration to third order and defocus, χ is rotationally symmetric about the optical axis of the microscope and thus depends only on the modulus u = |u|, given by

χ(u) = π Δf λ u^2 + (π/2) Cs λ^3 u^4

where Cs is the spherical aberration coefficient, λ is the electron wavelength, and Δf is the defocus. In TEM, defocus can easily be controlled and measured to high precision. Thus one can easily alter the shape of the CTF by defocusing the sample. Contrary to optical applications, defocusing can actually increase the precision and interpretability of the micrographs. The aperture function cuts off beams scattered above a certain critical angle (given by the objective pole piece, for example), thus effectively limiting the attainable resolution. However, it is the envelope function E(u) which usually dampens the signal of beams scattered at high angles, and imposes a maximum on the transmitted spatial frequency. This maximum determines the highest resolution attainable with a microscope and is known as the information limit. E(u) can be described as a product of single envelopes:

E(u) = Es(u) Ec(u) Ed(u) Ev(u) ED(u)

due to:
Es(u): angular spread of the source
Ec(u): chromatic aberration
Ed(u): specimen drift
Ev(u): specimen vibration
ED(u): the detector
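As a quick numerical illustration of the transfer function (a sketch, not from the text: the function names and the simplification A(u) = E(u) = 1 are mine), the sign oscillations of the CTF and its first zero crossing can be computed from the third-order aberration phase, using the CM300 values quoted later in this section:

```python
import math

# Phase contrast transfer function sin(chi(u)), assuming the standard
# third-order aberration phase and ignoring apertures and envelopes
# (A(u) = E(u) = 1).

def chi(u, df, cs, lam):
    """Aberration phase: chi(u) = pi*df*lam*u^2 + (pi/2)*cs*lam^3*u^4."""
    return math.pi * df * lam * u**2 + 0.5 * math.pi * cs * lam**3 * u**4

def ctf(u, df, cs, lam):
    """Phase contrast transfer function sin(chi(u))."""
    return math.sin(chi(u, df, cs, lam))

cs, lam = 0.6e-3, 1.97e-12        # m (CM300 at 300 kV)
df = -1.2 * math.sqrt(cs * lam)   # extended Scherzer defocus, ~ -41 nm

# The point resolution is the first zero crossing of the CTF (~6 nm^-1 here):
u = 1e8
while ctf(u + 1e7, df, cs, lam) * ctf(u, df, cs, lam) > 0:
    u += 1e7
```

Beyond the zero crossing the sign of the transferred contrast reverses, which is exactly why interpretation becomes difficult away from the optimum defocus.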


Specimen drift and vibration can be minimized relatively easily by a suitable working environment. It is usually the spherical aberration Cs that limits spatial coherency and defines Es(u), and the chromatic aberration Cc, together with current and voltage instabilities, that defines the temporal coherency in Ec(u). These two envelopes determine the information limit. If we consider the probe to have a Gaussian distribution of electron intensity, the spatial envelope function is given by

Es(u) = exp( −(π α / λ)^2 (Cs λ^3 u^3 + Δf λ u)^2 )

where α is the semiangle describing the Gaussian distribution. Clearly, if the spherical aberration Cs were zero, this envelope function would be constant for Gaussian focus. Otherwise, the damping due to this envelope function can be minimized by optimizing the defocus at which the image is recorded (Lichte defocus). The temporal envelope function can be expressed as

Ec(u) = exp( −(1/2) (π λ δ)^2 u^4 )

where δ is the focus spread due to the chromatic aberration Cc:

δ = Cc sqrt( 4 (ΔIobj/Iobj)^2 + (ΔE/Vacc)^2 + (ΔVacc/Vacc)^2 )

The terms ΔIobj/Iobj and ΔVacc/Vacc represent instabilities of the current in the objective lens and of the high-voltage supply of the electron gun. ΔE/Vacc is the energy spread of electrons leaving the gun. The information limit of current (2006) microscopes lies a little below 1 Å. The TEAM project aims at pushing the information limit to 0.5 Å. To do so, the microscope will be fully corrected for third- and fifth-order spherical aberration as well as chromatic aberration. Further, the electron beam will be highly monochromatised, and current and voltage will have to be stabilized.

Optimum defocus in HRTEM


Choosing the optimum defocus is crucial to fully exploit the capabilities of an electron microscope in HRTEM mode. However, there is no simple answer as to which one is best. In Gaussian focus one sets the defocus to zero; the sample is in focus. As a consequence, contrast in the image plane gets its image components from the minimal area of the sample; the contrast is localized (no blurring and information overlap from other parts of the sample). The CTF now becomes a function that oscillates quickly with Cs λ^3 u^4. What this means is that for certain diffracted beams with a given spatial frequency u, the contribution to contrast in the recorded image will be reversed, thus making interpretation of the image difficult.


Scherzer defocus
In Scherzer defocus, one aims to counter the u^4 term with the parabolic term π Δf λ u^2 of χ(u). Thus by choosing the right defocus value Δf one flattens χ(u) and creates a wide band where low spatial frequencies u are transferred into image intensity with a similar phase. In 1949, Scherzer found that the optimum defocus depends on microscope properties like the spherical aberration Cs and the accelerating voltage (through λ) in the following way:

Δf_Scherzer = −1.2 (Cs λ)^(1/2)

where the factor 1.2 defines the extended Scherzer defocus. For the CM300 at NCEM, Cs = 0.6 mm and an accelerating voltage of 300 keV (λ = 1.97 pm) result in Δf_Scherzer = −41.25 nm. The point resolution of a microscope is defined as the spatial frequency u_res where the CTF crosses the abscissa for the first time. At Scherzer defocus this value is maximized:

u_res = (6 / (Cs λ^3))^(1/4)

which corresponds to 6.1 nm^-1 on the CM300. Contributions with a spatial frequency higher than the point resolution can be filtered out with an appropriate aperture, leading to easily interpretable images at the cost of much information being lost.
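The CM300 numbers quoted above can be reproduced from first principles with the relativistically corrected de Broglie wavelength. This is a numerical check, not part of the original text; the constants are CODATA values:

```python
import math

# Reproduce the CM300 figures: electron wavelength at 300 kV and the
# extended Scherzer defocus Delta_f = -1.2*sqrt(Cs*lambda).

h  = 6.62607015e-34    # Planck constant, J*s
m0 = 9.1093837015e-31  # electron rest mass, kg
e  = 1.602176634e-19   # elementary charge, C
c  = 2.99792458e8      # speed of light, m/s

def electron_wavelength(V):
    """de Broglie wavelength with relativistic correction, V in volts."""
    return h / math.sqrt(2 * m0 * e * V * (1 + e * V / (2 * m0 * c**2)))

lam = electron_wavelength(300e3)           # ~1.97 pm at 300 kV
cs = 0.6e-3                                # spherical aberration, m
df_scherzer = -1.2 * math.sqrt(cs * lam)   # ~ -41.25 nm
```

Note that the non-relativistic formula would be noticeably wrong at 300 kV; the correction factor there is already about 1.29.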

Gabor defocus
Gabor defocus is used in electron holography, where both amplitude and phase of the image wave are recorded. One thus wants to minimize crosstalk between the two. The Gabor defocus can be expressed as a function of the Scherzer defocus as Δf_Gabor = 0.56 Δf_Scherzer.

Lichte defocus
To exploit all beams transmitted through the microscope up to the information limit, one relies on a complex method called exit wave reconstruction, which consists in mathematically reversing the effect of the CTF to recover the original exit wave Ψe(x,u). To maximize the information throughput, Hannes Lichte proposed in 1991 a defocus of a fundamentally different nature than the Scherzer defocus: because the dampening of the envelope function scales with the first derivative of χ(u), Lichte proposed a focus minimizing the modulus of dχ(u)/du:[4] Δf_Lichte = −(3/4) Cs λ^2 (u_max)^2, where u_max is the maximum transmitted spatial frequency. For the CM300 with an information limit of 0.8 Å, the Lichte defocus lies at −272 nm.
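The 272 nm magnitude quoted for the CM300 can be checked numerically. This sketch assumes Δf_Lichte = −(3/4) Cs λ² u_max² with the sign convention that underfocus is negative:

```python
# Check the Lichte defocus for the CM300: with an information limit of
# 0.8 Angstrom, u_max = 1/(0.8 A), and
# Delta_f_Lichte = -(3/4) * Cs * lambda^2 * u_max^2.

cs = 0.6e-3        # spherical aberration, m
lam = 1.97e-12     # electron wavelength at 300 kV, m
u_max = 1 / 0.8e-10                          # maximum spatial frequency, 1/m
df_lichte = -0.75 * cs * lam**2 * u_max**2   # ~ -272 nm
```

The result is several hundred nanometers of underfocus, far larger than the Scherzer value, which reflects that Lichte defocus optimizes envelope damping rather than passband flatness.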

Exit wave reconstruction


To calculate back to Ψe(x,u), the wave in the image plane is back-propagated numerically to the sample. If all properties of the microscope are well known, it is possible to recover the real exit wave with very high accuracy. First, however, both phase and amplitude of the electron wave in the image plane must be measured. As our instruments only record amplitudes, an alternative method to recover the phase has to be used. There are two methods in use today:


Holography, which was developed by Gabor expressly for TEM applications, uses a prism to split the beam into a reference beam and a second one passing through the sample. Phase changes between the two are then translated into small shifts of the interference pattern, which allows recovering both phase and amplitude of the interfering wave.

The through-focal series method takes advantage of the fact that the CTF is focus dependent. A series of about 20 pictures is shot under the same imaging conditions, with the exception of the focus, which is incremented between each take. Together with exact knowledge of the CTF, the series allows for computation of Ψe(x,u) (see figure).

Both methods extend the point resolution of the microscope to the information limit, which is the highest possible resolution achievable on a given machine. The ideal defocus value for this type of imaging is known as Lichte defocus and is usually several hundred nanometers negative.

Field Emission Scanning Electron Microscopy (FESEM)


Principle of Operation
A field-emission cathode in the electron gun of a scanning electron microscope provides narrower probing beams at low as well as high electron energy, resulting in both improved spatial resolution and minimized sample charging and damage. For applications which demand the highest possible magnification, in-lens FESEM is also available.

Applications include
Semiconductor device cross-section analyses for gate widths, gate oxides, film thicknesses, and construction details
Advanced coating thickness and structure uniformity determination
Small contamination feature geometry and elemental composition measurement

Why Field Emission SEM?


FESEM produces clearer, less electrostatically distorted images with spatial resolution down to 1.5 nm, three to six times better than conventional SEM. Smaller-area contamination spots can be examined at electron accelerating voltages compatible with energy-dispersive X-ray spectroscopy. Reduced penetration of low-kinetic-energy electrons probes closer to the immediate material surface. High-quality, low-voltage images are obtained with negligible electrical charging of samples (accelerating voltages range from 0.5 to 30 kV). The need for placing conducting coatings on insulating materials is virtually eliminated. For ultra-high-magnification imaging, in-lens FESEM can be used.

Electron energy loss spectroscopy


In electron energy loss spectroscopy (EELS) a material is exposed to a beam of electrons with a known, narrow range of kinetic energies. Some of the electrons will undergo inelastic scattering, which means that they lose energy and have their paths slightly and randomly deflected. The amount of energy loss can be measured via an electron spectrometer and interpreted in terms of what caused the energy loss. Inelastic interactions include phonon excitations, inter- and intraband transitions, plasmon excitations, inner-shell ionizations, and Cherenkov radiation. The inner-shell ionizations are particularly useful for detecting the elemental components of a material. For example, one might find that a larger-than-expected number of electrons comes through the material with 285 eV (electron volts, a unit of energy) less energy than they had when they entered the material. This is about the amount of energy needed to remove an inner-shell electron from a carbon atom, and can be taken as evidence that there is a significant amount of carbon in the part of the material being hit by the electron beam. With some care, and looking at a wide range of energy losses, one can determine the types of atoms, and the numbers of atoms of each type, being struck by the beam. The scattering angle (that is, the amount that the electron's path is deflected) can also be measured, giving information about the dispersion relation of whatever material excitation caused the inelastic scattering.
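The elemental-identification idea above amounts to matching observed loss energies against tabulated ionization-edge energies. A minimal Python sketch (the `identify_edges` helper and the rounded edge energies are illustrative; real work would use a full tabulated EELS edge database):

```python
# Sketch: match EELS core-loss peaks to known ionization edges.
# Edge energies (eV) below are rounded illustrative values; a real
# analysis would draw on a tabulated EELS database.
EDGE_TABLE = {
    "C K": 284.0,
    "N K": 401.0,
    "O K": 532.0,
    "Fe L3": 708.0,
}

def identify_edges(observed_losses, tolerance=5.0):
    """Return (loss, edge) pairs for peaks within tolerance of a known edge."""
    matches = []
    for loss in observed_losses:
        for edge, energy in EDGE_TABLE.items():
            if abs(loss - energy) <= tolerance:
                matches.append((loss, edge))
    return matches

# A peak near 285 eV points at the carbon K edge:
print(identify_edges([285.0, 530.0]))
```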


History
The technique was developed by James Hillier and R. F. Baker in the mid-1940s[2] but was not widely used over the next 50 years, only becoming more widespread in research in the 1990s due to advances in microscope instrumentation and vacuum technology. With modern instrumentation becoming widely available in laboratories worldwide, the technical and scientific developments from the mid-1990s have been rapid. The technique is able to take advantage of modern aberration-corrected probe-forming systems to attain spatial resolutions down to ~0.1 nm, while with a monochromated electron source and/or careful deconvolution the energy resolution can be 100 meV or better.[3] This has enabled detailed measurements of the atomic and electronic properties of single columns of atoms, and in a few cases, of single atoms.

EELS and EDX

EELS is often spoken of as being complementary to energy-dispersive x-ray spectroscopy (variously called EDX, EDS, XEDS, etc.), which is another common spectroscopy technique available on many electron microscopes. EDX excels at identifying the atomic composition of a material, is quite easy to use, and is particularly sensitive to heavier elements. EELS has historically been a more difficult technique but is in principle capable of measuring atomic composition, chemical bonding, valence- and conduction-band electronic properties, surface properties, and element-specific pair distance distribution functions.[4] EELS tends to work best at relatively low atomic numbers, where the excitation edges tend to be sharp, well-defined, and at experimentally accessible energy losses (the signal being very weak beyond about 3 keV energy loss). EELS is perhaps best developed for the elements ranging from carbon through the 3d transition metals (from scandium to zinc).[5] For carbon, an experienced spectroscopist can tell at a glance the differences among diamond, graphite, amorphous carbon, and "mineral" carbon (such as the carbon appearing in carbonates).
The spectra of 3d transition metals can be analyzed to identify the oxidation states of the atoms. Cu(I), for instance, has a different so-called "white-line" intensity ratio than does Cu(II). This ability to "fingerprint" different forms of the same element is a strong advantage of EELS over EDX. The difference is mainly due to the difference in energy resolution between the two techniques (~1 eV or better for EELS, perhaps a few times ten eV for EDX).

Variants
There are several basic flavors of EELS, primarily classified by the geometry and by the kinetic energy of the incident electrons (typically measured in kiloelectron-volts, or keV). Probably the most common today is transmission EELS, in which the kinetic energies are typically 100 to 300 keV and the incident electrons pass entirely through the material sample. Usually this occurs in a transmission electron microscope (TEM), although some dedicated systems exist which enable extreme resolution in terms of energy and momentum transfer at the expense of spatial resolution. Other flavors include reflection EELS (including reflection high-energy electron energy-loss spectroscopy (RHEELS), typically at 10 to 30 keV) and aloof EELS (sometimes called near-field EELS), in which the electron beam does not in fact strike the sample but instead interacts with it via the long-ranged Coulomb interaction; aloof EELS is particularly sensitive to surface properties but is limited to very small energy losses such as those associated with surface plasmons or direct interband transitions. Within transmission EELS, the technique is further subdivided into valence EELS (which measures plasmons and interband transitions) and inner-shell ionization EELS (which provides much the same information as x-ray absorption spectroscopy, but from much smaller volumes of material). The dividing line between the two, while somewhat ill-defined, is in the vicinity of 50 eV energy loss. A further variant is high-resolution electron energy loss spectroscopy (HREELS), in which the electron beam energy is only 1 eV to 10 eV and highly monochromatic.

Thickness measurements
EELS allows quick and reliable measurement of local thickness in a transmission electron microscope.[4] The most efficient procedure is the following:

1. Measure the energy-loss spectrum in the energy range of about -5 to 200 eV (wider is better). Such a measurement is quick (milliseconds) and can therefore be applied to materials that are normally unstable under the electron beam.
2. Analyse the spectrum: (i) extract the zero-loss peak (ZLP) using standard routines; (ii) calculate the integrals under the ZLP (I0) and under the whole spectrum (I).
3. Calculate the thickness as t = mfp * ln(I/I0), where mfp is the mean free path for inelastic electron scattering, which has recently been tabulated for most elemental solids and oxides.[7]
The spatial resolution of this procedure is limited by the plasmon localization and is about 1 nm,[4] meaning that spatial thickness maps can be measured in a scanning transmission electron microscope with ~1 nm resolution.
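The log-ratio recipe above is simple enough to sketch directly. A hedged Python sketch, assuming a spectrum already binned in energy, with the ZLP crudely isolated by a ±2 eV cutoff standing in for the standard fitting routines:

```python
import numpy as np

# Sketch of the log-ratio EELS thickness measurement: t = mfp * ln(I/I0).
# The ZLP "extraction" here is a simple energy cutoff, a stand-in for the
# standard fitting routines mentioned in the text.
def thickness_from_eels(energy_ev, counts, mfp_nm, zlp_cut_ev=2.0):
    counts = np.asarray(counts, dtype=float)
    zlp_mask = np.abs(np.asarray(energy_ev, dtype=float)) <= zlp_cut_ev
    i0 = counts[zlp_mask].sum()   # integral under the zero-loss peak
    i_total = counts.sum()        # integral under the whole spectrum
    return mfp_nm * np.log(i_total / i0)

# Toy spectrum: 80% of the counts sit in the ZLP; assumed mfp = 100 nm
energy = [-1, 0, 1, 20, 40]
counts = [10, 60, 10, 10, 10]
t = thickness_from_eels(energy, counts, mfp_nm=100.0)
print(round(t, 1))  # 100 * ln(100/80) -> 22.3 nm
```

The same function applied pixel-by-pixel to a STEM spectrum image would yield the ~1 nm resolution thickness maps described above.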


Pressure measurements
The intensity and position of low-energy EELS peaks are affected by pressure. This fact allows mapping local pressure with ~1 nm spatial resolution.

- The peak shift method is reliable and straightforward. The peak position is calibrated by independent (usually optical) measurement using a diamond anvil cell. However, the spectral resolution of most EEL spectrometers (0.3-2 eV, typically 1 eV) is often too crude for the small pressure-induced shifts, so the sensitivity and accuracy of this method are relatively poor. Nevertheless, pressures as small as 0.2 GPa inside helium bubbles in aluminum have been measured.[8]
- The peak intensity method relies on the pressure-induced change in the intensity of dipole-forbidden transitions. Because this intensity is zero at zero pressure, the method is relatively sensitive and accurate. However, it requires the existence of allowed and forbidden transitions of similar energies and is thus only applicable to specific systems, e.g., Xe bubbles in aluminum.

Electron probe micro-analyzer (EPMA)


An electron probe micro-analyzer is a microbeam instrument used primarily for the in situ non-destructive chemical analysis of minute solid samples. EPMA is also informally called an electron microprobe, or just probe. It is fundamentally the same as an SEM, with the added capability of chemical analysis. The primary importance of an EPMA is the ability to acquire precise, quantitative elemental analyses at very small "spot" sizes (as little as 1-2 microns), primarily by wavelength-dispersive spectroscopy (WDS). The spatial scale of analysis, combined with the ability to create detailed images of the sample, makes it possible to analyze geological materials in situ and to resolve complex chemical variation within single phases (in geology, mostly glasses and minerals). The electron optics of an SEM or EPMA allow much higher resolution images to be obtained than can be seen using visible-light optics, so features that are irresolvable under a light microscope can be readily imaged to study detailed microtextures or provide the fine-scale context of an individual spot analysis. A variety of detectors can be used for:

- imaging modes such as secondary-electron imaging (SEI), back-scattered electron imaging (BSE), and cathodoluminescence imaging (CL),
- acquiring 2D element maps,
- acquiring compositional information by energy-dispersive spectroscopy (EDS) and wavelength-dispersive spectroscopy (WDS),
- analyzing crystal-lattice preferred orientations (EBSD).

Principle
An electron microprobe operates on the principle that if a solid material is bombarded by an accelerated and focused electron beam, the incident electron beam has sufficient energy to liberate both matter and energy from the sample. These electron-sample interactions mainly liberate heat, but they also yield both derivative electrons and x-rays. Of most common interest in the analysis of geological materials are secondary and back-scattered electrons, which are useful for imaging a surface or obtaining an average composition of the material. X-rays are generated by inelastic collisions of the incident electrons with electrons in the inner shells of atoms in the sample; when an inner-shell electron is ejected from its orbit, leaving a vacancy, a higher-shell electron falls into this vacancy and must shed some energy (as an x-ray) to do so. These quantized x-rays are characteristic of the element. EPMA analysis is considered "non-destructive"; that is, x-rays generated by electron interactions do not lead to volume loss of the sample, so it is possible to re-analyze the same materials more than once.
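The characteristic x-ray energies that make this element identification possible follow approximately Moseley's law, which for the Kα line gives an emission frequency proportional to (Z-1)². A rough numeric sketch (the Moseley coefficient 2.48x10^15 Hz is the standard empirical Kα value; this is an approximation, not the instrument's actual calibration):

```python
# Moseley's law (Ka line): nu = 2.48e15 * (Z - 1)**2 Hz, so the photon
# energy is E = h * nu. The coefficient is the empirical Moseley constant.
H_EV_S = 4.1357e-15  # Planck constant in eV*s

def k_alpha_energy_kev(z):
    """Approximate Ka x-ray energy (keV) for atomic number z."""
    nu = 2.48e15 * (z - 1) ** 2  # Hz
    return H_EV_S * nu / 1000.0  # keV

# Copper (Z=29): predicted ~8.04 keV, close to the measured Cu Ka of 8.05 keV
print(round(k_alpha_energy_kev(29), 2))
```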

Instrumentation - How Does It Work?


A beam of electrons is fired at a sample. The beam causes each element in the sample to emit x-rays at a characteristic frequency; the x-rays can then be detected by the electron microprobe.[2] The size of the electron beam determines the trade-off between resolution and scan time. An EPMA consists of four major components, from top to bottom:

1. An electron source, commonly a W-filament cathode referred to as a "gun."
2. A series of electromagnetic lenses located in the column of the instrument, used to condense and focus the electron beam emanating from the source; this comprises the electron optics and operates in an analogous way to light optics.
3. A sample chamber, with a movable sample stage (X-Y-Z), that is under vacuum to prevent gas and vapor molecules from interfering with the electron beam on its way to the sample; a light microscope allows for direct optical observation of the sample.
4. A variety of detectors arranged around the sample chamber that are used to collect x-rays and electrons emitted from the sample.


Applications
Quantitative EPMA analysis is the most commonly used method for chemical analysis of geological materials at small scales. EPMA is typically chosen where individual phases need to be analyzed (e.g., igneous and metamorphic minerals), or where the material is of small size or valuable for other reasons (e.g., experimental run products, sedimentary cements, volcanic glasses, the matrix of a meteorite, archeological artifacts such as ceramic glazes and tools). In some cases, it is possible to determine a U-Th age of a mineral such as monazite without measuring isotopic ratios. EPMA is also widely used for analysis of synthetic materials such as optical wafers, thin films, microcircuits, semiconductors, and superconducting ceramics.

Strengths

An electron probe is essentially the same instrument as an SEM, but differs in that it is equipped with a range of crystal spectrometers that enable quantitative chemical analysis (WDS) at high sensitivity. An electron probe is the primary tool for chemical analysis of solid materials at small spatial scales (as small as 1-2 microns in diameter); hence, the user can analyze even minute single phases (e.g., minerals) in a material (e.g., a rock) with "spot" analyses. Spot chemical analyses can be obtained in situ, which allows the user to detect even small compositional variations within textural context or within chemically zoned materials. Electron probes commonly have an array of imaging detectors (SEI, BSE, and CL) that allow the investigator to generate images of the surface and internal compositional structures that help with analyses.

Limitations

Although electron probes have the ability to analyze for almost all elements, they are unable to detect the lightest elements (H, He and Li); as a result, for example, the "water" in hydrous minerals cannot be analyzed. Some elements generate x-rays with overlapping peak positions (by both energy and wavelength) that must be separated. Microprobe analyses are reported as oxides of elements, not as cations; therefore, cation proportions and mineral formulae must be recalculated following stoichiometric rules. Probe analysis also cannot distinguish between the different valence states of Fe, so the ferric/ferrous ratio cannot be determined and must be evaluated by other techniques.


U N I T 4


Spectrophotometry
Spectrophotometry is one of the most important practices carried out in physics laboratories. It is the study of the reflection or transmission properties of a substance as a function of wavelength, i.e. the quantitative study of the electromagnetic spectrum of a material. During the process, the transmittance or reflectance of the substance is measured with careful geometric and spectral consideration. Spectrophotometry is more specific than the general term electromagnetic spectroscopy in that it deals with visible light, near-ultraviolet, and near-infrared; the term also does not cover time-resolved spectroscopic techniques. Spectrophotometry involves the use of a spectrophotometer, a photometer (a device for measuring light intensity) that can measure intensity as a function of the color, or more specifically, the wavelength of light. Important features of spectrophotometers are the spectral bandwidth and the linear range of absorbance measurement. Perhaps the most common application of spectrophotometers is the measurement of light absorption, but they can also be designed to measure diffuse or specular reflectance. Strictly, even the emission half of a luminescence instrument is a kind of spectrophotometer. The use of spectrophotometers is not limited to studies in physics. They are also commonly used in other scientific fields such as chemistry, biochemistry, and molecular biology, and are widely used in many industries including printing and forensic examination.

Design
There are two major classes of spectrophotometers: single beam and double beam. A double-beam spectrophotometer compares the light intensity between two light paths, one path containing a reference sample and the other the test sample. A single-beam spectrophotometer measures the relative light intensity of the beam before and after a test sample is inserted. Although comparison measurements from double-beam instruments are easier and more stable, single-beam instruments can have a larger dynamic range and are optically simpler and more compact. Historically, spectrophotometers used a monochromator containing a diffraction grating to produce the analytical spectrum. There are also spectrophotometers that use arrays of photosensors. Especially in the infrared, some spectrophotometers use a Fourier transform technique to acquire the spectral information more quickly, a method called Fourier transform infrared (FTIR) spectroscopy.

The spectrophotometer quantitatively compares the fraction of light that passes through a reference solution and a test solution. Light from the source lamp is passed through a monochromator, which diffracts the light into a "rainbow" of wavelengths and outputs narrow bandwidths of this diffracted spectrum. Discrete frequencies are transmitted through the test sample. The intensity of the transmitted light is then measured with a photodiode or other light sensor, and the transmittance value for this wavelength is compared with the transmission through a reference sample. In short, the sequence of events in a spectrophotometer is as follows:

1. The light source shines into a monochromator.
2. A particular output wavelength is selected and beamed at the sample.
3. The sample absorbs light.
4. The photodetector behind the sample responds to the light stimulus and outputs an analog electronic current, which is converted to a usable format.
5. The numbers are either plotted straight away or fed to a computer to be manipulated (e.g. curve smoothing, baseline correction and conversion to absorbance, a log function of light transmittance through the sample).

Many spectrophotometers must be calibrated by a procedure known as "zeroing." The absorbance of a reference substance is set as a baseline value, so the absorbances of all other substances are recorded relative to the initial "zeroed" substance. The spectrophotometer then displays absorbance (the amount of light absorbed relative to the initial substance).
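The zeroing step above boils down to taking the ratio of sample to reference intensity and converting it to absorbance via a base-10 logarithm. A minimal sketch:

```python
import math

# Transmittance is the sample intensity relative to the zeroed reference
# (blank); absorbance is the negative log10 of that ratio.
def absorbance(i_sample, i_reference):
    transmittance = i_sample / i_reference
    return -math.log10(transmittance)

# If the sample passes 10% of the light the blank passes, A = 1.0
print(absorbance(10.0, 100.0))
```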

Principle
A spectrophotometer is employed to measure the amount of light that a sample absorbs. The instrument operates by passing a beam of light through a sample and measuring the intensity of light reaching a detector. The beam of light consists of a stream of photons, represented by the purple balls in the simulation shown below. When a photon encounters an analyte molecule (the analyte is the molecule being studied), there is a chance the analyte will absorb the photon. This absorption reduces the number of photons in the beam of light, thereby reducing the intensity of the light beam.

UV and IR spectrophotometers
The most common spectrophotometers are used in the UV and visible regions of the spectrum, and some of these instruments also operate into the near-infrared region. Visible-region (400-700 nm) spectrophotometry is used extensively in colorimetry. Ink manufacturers, printing companies, textile vendors, and many others need the data provided through colorimetry. They take readings roughly every 10-20 nm along the visible region and produce a spectral reflectance curve or a data stream for alternative presentations. These curves can be used to test a new batch of colorant to check whether it matches specifications, e.g., ISO printing standards.


Traditional visual-region spectrophotometers cannot detect whether a colorant or the base material has fluorescence. This can make it difficult to manage color issues if, for example, one or more of the printing inks is fluorescent. Where a colorant contains fluorescence, a bispectral fluorescent spectrophotometer is used. There are two major setups for visual-spectrum spectrophotometers, d/8 (spherical) and 0/45. The names are due to the geometry of the light source, observer and interior of the measurement chamber. Scientists use this machine to measure the amount of compounds in a sample. If the compound is more concentrated, more light will be absorbed by the sample; within small ranges, the Beer-Lambert law holds and the absorbance varies linearly with concentration. In the case of printing measurements, two alternative settings are commonly used, without or with a UV filter, to better control the effect of UV brighteners within the paper stock. Samples are usually prepared in cuvettes; depending on the region of interest, they may be constructed of glass, plastic, or quartz.
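The linear concentration dependence mentioned above is the Beer-Lambert law, A = εcl (absorbance = molar absorptivity x concentration x path length). A small sketch; the dye and its molar absorptivity value here are hypothetical:

```python
# Beer-Lambert law rearranged for concentration: c = A / (epsilon * l).
# Valid only within the linear (dilute) range discussed in the text.
def concentration(absorbance, epsilon, path_cm=1.0):
    """Concentration in mol/L from absorbance, molar absorptivity
    (L mol^-1 cm^-1) and cuvette path length (cm)."""
    return absorbance / (epsilon * path_cm)

# Hypothetical dye, epsilon = 15000 L/(mol*cm), A = 0.45 in a 1 cm cuvette
c = concentration(0.45, 15000.0)
print(f"{c:.2e} mol/L")  # 3.00e-05 mol/L
```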

IR spectrophotometry
Spectrophotometers designed for the main infrared region are quite different because of the technical requirements of measurement in that region. One major factor is the type of photosensors that are available for different spectral regions, but infrared measurement is also challenging because virtually everything emits IR light as thermal radiation, especially at wavelengths beyond about 5 μm. Another complication is that quite a few materials, such as glass and plastic, absorb infrared light, making them unsuitable as optical media. Ideal optical materials are salts, which do not absorb strongly. Samples for IR spectrophotometry may be smeared between two discs of potassium bromide or ground with potassium bromide and pressed into a pellet. Where aqueous solutions are to be measured, insoluble silver chloride is used to construct the cell.

Spectroradiometers
Spectroradiometers, which operate almost like visible-region spectrophotometers, are designed to measure the spectral density of illuminants, in order to evaluate and categorize lighting for sale by the manufacturer, or so that customers can confirm the lamp they have purchased is within specification. Components:

1. The light source shines onto or through the sample.
2. The sample transmits or reflects light.
3. The detector detects how much light was reflected from or transmitted through the sample.
4. The detector then converts how much light the sample transmitted or reflected into a number.

UV-Vis Spectrophotometers
IR Spectrophotometers
Fourier Transform Infrared (FTIR) Spectroscopy


Principle
Infrared (IR) and Raman are vibrational spectroscopy techniques. They are extremely useful for providing structural information about molecules in terms of their functional groups, the orientation of those groups and information on isomers. They can be used to examine most kinds of sample and are non-destructive. They can also be used to provide quantitative information. Here, IR refers to the mid-IR region, which covers the range 4000-400 cm-1 (2500-25,000 nm). Raman radiation spans the range from 4000 down to about zero cm-1. IR and Raman spectroscopies are similar insofar as they both produce spectra because of vibrational transitions within a molecule and use the same region of the electromagnetic spectrum. They differ in how observation and measurement are achieved, since IR is an absorption (transmission) method and Raman is a scattering method.

Many molecules absorb IR radiation, which corresponds to the vibrational and rotational transitions of the molecules. For this absorption to occur, there must be a change in the dipole moment of the molecule. IR radiation is too low in energy to excite electronic transitions. There are a number of vibrations and rotations that the molecule can undergo (a few of these are shown in Figure 2.7), which all result in absorption of IR radiation. In a similar fashion to a UV-Vis spectrum, an IR spectrum is a plot of transmittance versus wavelength. It is normally a complex series of sharp peaks corresponding to the vibrations of structural groups within the molecule. The IR spectrum for aspirin is shown in Figure 2.8, and it can be seen by comparing this with Figure 2.2 that IR yields much more useful qualitative data than the corresponding UV or UV-Vis spectrum. For quantitative


work, IR measurements can deviate from the Beer-Lambert law due to some scattered radiation and the use of relatively wide slits. Hence, a ratio method is often used, in which a peak that is apart from those being used for quantitative measurement is chosen and employed as an internal standard. This strategy serves to minimize relative errors, such as those due to differences in sample size. However, under controlled experimental conditions, IR can comply with the Beer-Lambert law directly for quantitative measurements. Conventional IR spectrometers are known as dispersive instruments but have now been largely replaced by Fourier transform infrared (FTIR) spectrometers. Rather than a grating monochromator, an FTIR instrument uses an interferometer to obtain a spectrum. The advantages are greater signal-to-noise ratio, speed, and simultaneous measurement of all wavelengths. The gain in speed due to the simultaneous acquisition of data is sometimes called the Fellgett advantage, and the gain in sensitivity due to the greater optical throughput of the interferometer is called the Jacquinot advantage. There is a third advantage to be gained by using FTIR over dispersive IR, the Connes advantage, whereby a helium-neon (HeNe) laser is used as an internal calibration standard, which renders these instruments self-calibrating. The Raman effect arises when incident light distorts the electron density in the molecule, which subsequently scatters the light. Most of this scattered light is at the same wavelength as the incident light and is called Rayleigh scatter. However, a very small proportion of the light is scattered at a different wavelength. This inelastically scattered light is called Raman scatter, or the Raman effect. For this to occur, there must be a change in the polarisability of the molecule. It results from the molecule changing its molecular vibrational motions and is a very weak signal.
The energy difference between the incident light and the Raman-scattered light is equal to the energy involved in getting the molecule to vibrate. This energy difference is called the Raman shift. These shifts are small and are known as Stokes and anti-Stokes shifts, which correspond to shifts to lower and higher frequencies respectively. Several different Raman-shifted signals will often be observed, each being associated with different vibrational or rotational motions of molecules in the sample. The Raman signals observed are particular to the molecule under examination. A plot of Raman intensity versus Raman shift is a Raman spectrum. An example of a Raman spectrum for aspirin is shown in the figure. Raman signals can be obscured by fluorescence, so care must be taken when choosing the energy source. Raman is primarily used as a non-contact quantitative technique. Raman and IR are complementary techniques in that molecules that vibrate/rotate in this region of the spectrum will generally give a spectrum in both. However, polar functional groups with low symmetry generally give strong IR signals, while polarisable functional groups with high symmetry generally give strong Raman signals. Hence, strong infrared absorptions usually appear as weak Raman ones and vice versa. Raman is not as sensitive to the environment of the molecule as IR. Also, Raman is relatively insensitive to water, whereas the opposite is true of IR.
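The Raman shift is conventionally reported in wavenumbers (cm^-1), computed from the excitation and scattered wavelengths. A small sketch; the wavelengths used in the example are illustrative, not from a specific spectrum:

```python
# Raman shift in wavenumbers: shift = 1e7/lambda_exc - 1e7/lambda_scat,
# with wavelengths in nm (1e7 converts nm^-1 to cm^-1). A positive result
# is a Stokes shift (scattered light at lower frequency), negative is
# anti-Stokes.
def raman_shift_cm1(lambda_exc_nm, lambda_scat_nm):
    return 1e7 / lambda_exc_nm - 1e7 / lambda_scat_nm

# 488 nm excitation scattered at 526.5 nm: a Stokes shift of ~1498 cm^-1
print(round(raman_shift_cm1(488.0, 526.5)))
```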

Instrument
The components of an FTIR instrument and an FT-Raman instrument are shown in Figures 2.10 and 2.11, respectively.

Source

There are a variety of black-body heat sources used in IR, such as the Nernst glower, the Globar (silicon carbide rod) and even the synchrotron (in research applications). Tunable lasers are also now coming into use. For FT-Raman spectrometers, lasers in the NIR region (to eliminate fluorescence), such as neodymium yttrium aluminum garnet (Nd:YAG) at 1064 nm, are used with high power output; lasers in the visible region, e.g. argon at 488 nm or helium-neon at 632.8 nm, can also be employed. Since the intensity of the Raman lines varies with the fourth power of the exciting frequency, the 488 nm argon line provides scattering that is nearly three times as strong as that produced by the helium-neon laser with the same input power. Visible-region lasers are usually operated at 100 mW, while Nd:YAG lasers can be used up to 350 mW without causing photodecomposition of organic samples. UV sources for Raman are now available.
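The near-threefold factor quoted above comes directly from the fourth-power law, since frequency is inversely proportional to wavelength:

```python
# Raman intensity scales as the fourth power of the exciting frequency,
# i.e. as (1/wavelength)**4. Comparing argon 488 nm to helium-neon 632.8 nm:
ratio = (632.8 / 488.0) ** 4
print(round(ratio, 1))  # 2.8, i.e. nearly three times stronger at 488 nm
```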


Discriminator

Most IR and many Raman instruments today use an interferometer, where the spectral encoding takes place, instead of a monochromator. The Michelson interferometer is a multiplex system with a simple design: a fixed mirror, a moving mirror and an optical beam splitter. It is illustrated in Figure 2.12 and its use within an FTIR instrument is shown in Figure 2.13. The source radiation hits the beam splitter, from where some of the light is reflected to the moving mirror and some is transmitted to the fixed mirror. The mirrors


reflect the light back to the beam splitter, some of which recombines and goes on to the detector. The key point of the moving mirror is to generate a difference in the optical paths of the two beams of light separated by the beam splitter; consequently, one is slightly out of phase with the other, since it travels a slightly different distance. The recombined light produces an interference spectrum of all the wavelengths in the beam before passing through the sample. In other words, the sample sees all the wavelengths simultaneously, and the interference pattern changes with time as the mirror is continuously scanned at a linear velocity. The result of the sample absorbing radiation is a signal in the time domain called an interferogram. Fourier transformation (FT) converts this very complex signal to the frequency domain. Combined FTIR and Raman spectrometers based on the Michelson interferometer are commercially available. Depending on the spectral regions required, the interferometer can be operated with a single calcium fluoride (CaF2) beam splitter or a combination of materials with an automatic changer. As with IR, both dispersive and FT-Raman instruments exist. The main advantage of FT-Raman is the ability to change from visible to NIR excitation, with an associated reduction in broad-band fluorescence. Secondly, an FTIR instrument can be configured to carry out FT-Raman experiments without significant effort. However, Raman instruments that do not employ an interferometer are still commonly used owing to their high radiation throughput and lower cost.

Sample

For IR analysis, the beam is transmitted through (or reflected off the surface of) the sample, depending on the type of analysis being performed. Liquids are sampled in cells, which must be free of interferences, or as a thin film between two IR windows, e.g. sodium chloride (NaCl) or potassium bromide (KBr) for non-aqueous media, which have energy cutoffs at 650 and 350 cm-1 respectively, or calcium fluoride for aqueous samples. Some solids can be dissolved in suitable solvents or cast onto films, but they usually need to be ground, dispersed and compressed into a transmitting medium, e.g. a potassium bromide disk or pellet. Typically 1 mg of sample is mixed with 200 mg of dry potassium bromide. Other solids that are hygroscopic or polar can be ground into a mineral oil such as Nujol to give a mull (dispersion). Hard materials can be compressed between two parallel diamond faces to produce a thin film, as diamond transmits most of the mid-IR. A number of special sampling techniques for solids exist. One that is used to obtain IR spectral information on the surface of a sample is called diffuse reflectance. Here reflected radiation instead of transmitted radiation is measured. Another technique is attenuated total reflectance (ATR), in which the sample is brought into close contact with the surface of a prism made of a material with a high refractive index, e.g. sapphire. A light beam approaching the interface from the optically denser medium at a large enough angle of incidence is totally reflected. However, the beam does penetrate a small distance into the optically thinner medium (the sample). If the sample absorbs IR radiation, an IR spectrum can be obtained. By changing the angle of incidence, depth profiling can be achieved. The film (200 nm) is made thin enough to allow light to pass through to the adjoining aqueous phase. A diagram of such a cell is shown in Figure 2.14. ATR can also be exploited in other types of spectrometry and is useful in probes as well as in cells. The path length for IR samples is in the range 2-3 mm. Gases are easiest to sample in a longer path-length cell, typically 10 cm in length. Fibre-optic cables can be used where the sample is remote from the spectrometer.

For Raman analysis, sample preparation is much easier than with IR. In fact, the source light is simply focussed onto the solid or liquid sample directly. If a cuvette is used, quartz or glass windows can be used. If a slide or surface is used, a background spectrum should be taken to remove the possibility of any interfering peaks. Glass tubes are often used and, since water is a weak Raman scatterer, aqueous samples can be easily analysed. Reflectance measurements, as distinct from the transmission measurements above, can also be made and are useful for studying films on metal surfaces or samples on diamond surfaces. Measurements should also ideally take place in the dark to remove ambient light interferences.

Detector
The most common detectors in IR are thermal, i.e. thermocouples, thermistors and bolometers. A thermocouple is based on the use of two different conductors connected by a junction. When a temperature difference is experienced at the junction, a potential difference can be measured. A series of thermocouples together is called a thermopile. Thermistors and bolometers are based on a change in resistance with temperature. They have a faster response time than thermocouples. With a Fourier Transform IR (FTIR) instrument, where rapid response and improved sensitivity are key, lead sulfide and InGaAs detectors are used, as for NIR. Some arrays are also used. The most common detectors
in Raman instruments are PDAs and CCDs, but for FT-Raman, single-channel detectors are used, e.g. InGaAs. An extra requirement for the FT-Raman instrument is a notch or edge filter; it is included to reject scattered laser light at the strong Rayleigh line, which could otherwise obscure the FT-Raman spectrum.

Output
With an FT instrument, the main function of the PC is to carry out the Fourier transformation of the interferogram, i.e. conversion of the information from the time domain to the frequency domain (see Figure 2.15). However, the PC also carries out both qualitative and quantitative analysis. Library searching, spectral matching, chemometrics and other software are readily available. IR measurements can deviate from the Beer-Lambert law, and so IR is not as easy to use as a quantitative technique. Raman spectral databases are also available but have less coverage of compounds than IR. Raman can also be used as a quantitative technique, although scattering as the basis of signal generation can be difficult to standardise.
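The FT step itself is easy to demonstrate. A minimal sketch (with made-up frequencies standing in for absorption bands) builds an interferogram-like signal from two cosine components and recovers their positions with a fast Fourier transform:

```python
import numpy as np

# Synthesise a toy "interferogram" as a sum of two cosines, then recover
# the two component frequencies in the frequency domain. The frequencies
# (bins 50 and 120) are arbitrary illustrative choices.
n = 2048
x = np.arange(n)  # optical path difference, in sample units
interferogram = (np.cos(2 * np.pi * 50 * x / n)
                 + 0.5 * np.cos(2 * np.pi * 120 * x / n))

spectrum = np.abs(np.fft.rfft(interferogram))
strongest = np.argsort(spectrum)[-2:]  # indices of the two strongest lines
```

A real instrument applies the same transform, together with apodisation and phase correction, to the measured interferogram.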

Information Obtained
The real strength of IR is its ability to identify the structural groups in a molecule, e.g. olefin, carbonyl, and so IR absorption spectroscopy is a powerful identification technique. In particular, the fingerprint region below 1500 cm-1 is very dependent on the molecule's environment, and it may be possible to identify a molecule by comparing its transmission bands in this region with spectra from an IR library. Mid IR can also determine the quality or consistency of a sample and the amount of components in a mixture. Examples of an FTIR and an FT-Raman instrument are shown in Figures 2.16 and 2.17, respectively. The main advantage of Raman as a technique is its ability to obtain spectra through glass or plastic, which is very useful in the food and pharmaceutical industries where Raman spectra are routinely used for quality control purposes after a product has been packaged. Raman is relatively unaffected by water and so is an ideal technique for aqueous samples.

Applications
Infrared spectrometry can be used for testing emissions. For example, IR has been reported for determining carbon monoxide and nitrogen oxide12 and other trace gases13,14, remote sensing of volcanic gases and trace analysis of halocarbons in the atmosphere, among many other applications. IR is also employed in the beverage industry for monitoring alcohol, sugar and water content in drinks15, and sugars, fibres and acidity in juices. In the food industry, IR spectrometry is very useful for determining protein, oil, ash, moisture and particle size in flour16, for following fermentation and microbiological reactions17 and for the study of microorganisms in food products. Raman spectrometry is often used for investigating biological samples since water does not interfere significantly, e.g. transformational changes in proteins and lipids can be readily followed18. Raman spectrometry for inorganic compounds, especially metal complexes, is also employed, as it is a well understood area. Raman spectrometry is also used extensively in less familiar areas such as space research and deep-ocean investigation, and is a useful technique in medical research and healthcare.

Luminescence
Principles
Luminescence is a phenomenon in which light is emitted by a substance and that substance appears to glow. It occurs because an electron returns to the electronic ground state from an excited state and loses its excess energy as a photon of light (Figure). Luminescence techniques are based on the measurement of this emitted radiation, which is characteristic of the molecule under study. Compared to absorption techniques, these methods are more selective and more sensitive and can have much wider linear dynamic ranges, but, unfortunately, the number of compounds which naturally undergo luminescence is small.

Radiation is first absorbed by the molecule. Electrons in the molecule use this energy to attain a higher energy state. The molecule is now excited. There are two possible transitions for the electron, depending on the spin quantum numbers in the excited state. If the spins are opposite, the state is a singlet one, and if they are parallel, the state is a triplet one. After becoming excited, the electrons need to relax back down to the ground state. This emitted radiation, which is slightly lower in energy than the absorbed radiation, is luminescence. There are many types of luminescence: photoluminescence (fluorescence and phosphorescence), radioluminescence,
bioluminescence and chemiluminescence. The most important of these analytically are the photoluminescence modes of fluorescence and phosphorescence. Fluorescence occurs when UV radiation provides the energy needed to excite electrons in the molecule to the excited singlet state and subsequently, after some radiationless decay, emit photons almost instantaneously (about 10^-9 s) in order to return to the ground state. What occurs in the case of a fluorescing molecule is depicted schematically in Figure. Various possible vibrational levels are superimposed on the two electronic levels (excited singlet state and ground state). When this molecule is excited, there are five possible transitions for the electrons, of which the excitation to the v = 1 level is most probable. This transition corresponds to absorption of a photon at 254 nm, which is therefore the λmax

for the excitation spectrum. The emission spectrum occurs at lower energy and longer wavelength due to some radiationless losses of energy, mainly due to vibrations internally in the molecule. Each of the excitation and emission spectra has five peaks corresponding to the five possible transitions. The excitation and emission spectra are also approximate mirror images of each other, shifted by the small loss of energy above (Figure 2.21). Stokes shift is the term given to describe the difference between the excitation maximum and the emission maximum. Only about 10-15% of organic compounds are able to fluoresce naturally, and these tend to have certain features in common, such as a rigid and planar structure, e.g. polyaromatics, heterocyclics and uranyl compounds (Figure 2.22). However, most molecules can be derivatised if fluorescence is the desired technique. Fluorescence does not have wide applicability but has excellent sensitivity, as the signal is measured against a zero background. It can also be a very selective technique when the compound of interest fluoresces and other components in the sample do not, or fluoresce at different wavelengths. The UV absorption spectrum and fluorescence excitation spectrum of a molecule often occur at similar wavelengths. The same phenomenon is responsible for both spectra: absorption of a photon and promotion of an electron from the ground state into a higher
energy level. The fluorescence excitation and emission spectra for aspirin are shown in Figure 2.23. Spectrum (1) can be compared with the UV absorption spectrum for aspirin given in Figure 2.2. Phosphorescence begins in the same way as fluorescence. Radiation provides the energy needed to excite the molecule and the electrons undergo a small amount of internal, radiationless loss of energy. However, instead of emitting fluorescence at this point, there is inter-system crossing (ISC) to the triplet excited state, from where the electrons undergo more radiationless energy loss and finally emit photons of lower energy as phosphorescence as they return to the ground state. This is shown diagrammatically in Figure 2.24. The emission spectrum of a phosphorescent molecule occurs at even longer wavelength than fluorescence. If quenching is prevented, e.g. by keeping the sample as cold or as solid as possible to minimize collisions between molecules, and by ensuring the sample is oxygen-free, phosphorescence can be seen. Phosphorescence is a longer-lasting luminescence than fluorescence and can sometimes be observed even after the excitation source has been removed, lasting for seconds or even minutes. The number of compounds that naturally phosphoresce is also small. Phosphorescence can be observed without interference from fluorescence by a process called time resolution. Instruments for measuring phosphorescence are very similar to those used for fluorescence, but a mechanism that allows the sample to be irradiated and then, after a time delay, allows measurement of phosphorescent intensity (a phosphoroscope) is required as an extra component. The instrument should also have the capability of keeping samples at very low temperatures. Another type of long-lived photoluminescence is time-delayed fluorescence, where electrons in the triplet excited state regain enough energy to cross back to the singlet excited state and then fluoresce.
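Two quantitative points in this section can be sketched numerically: the Stokes shift as an energy difference between excitation and emission maxima, and why the enormous gap in lifetimes makes time resolution work. The 254 nm excitation comes from the text; the 300 nm emission maximum and both lifetimes are hypothetical illustrative values.

```python
import math

def stokes_shift_cm1(excitation_nm, emission_nm):
    """Stokes shift expressed as an energy difference in wavenumbers (cm^-1)."""
    return 1e7 / excitation_nm - 1e7 / emission_nm

# 254 nm excitation maximum as in the text; the 300 nm emission maximum
# is a hypothetical value for illustration:
shift = stokes_shift_cm1(254, 300)  # positive: emission is lower in energy

def fraction_still_emitting(t_s, lifetime_s):
    """Single-exponential decay of the excited-state population."""
    return math.exp(-t_s / lifetime_s)

# One microsecond after the excitation source is removed (lifetimes assumed):
fluorescence = fraction_still_emitting(1e-6, 1e-9)    # ns lifetime: essentially gone
phosphorescence = fraction_still_emitting(1e-6, 1.0)  # ~s lifetime: still emitting
```

The second comparison is the basis of the phosphoroscope: waiting even a microsecond after irradiation leaves only the phosphorescence signal.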

Instrument
A schematic diagram of a fluorescence instrument is given in Figure 2.25.

Source
Generally, a UV source is required, e.g. xenon arc lamps, halogen lamps or lasers. The output of a xenon arc lamp is continuous over a broad wavelength range from UV to NIR. The nitrogen laser (337.1 nm) is a good example of a laser that is used for fluorescence. Tunable lasers can provide multiwavelength excitation.

Discriminator
Filters, monochromators or polychromators can be used. The excitation wavelength selector (discriminator 1) is used to select only the excitation wavelength and to filter out other wavelengths. The emission wavelength selector (discriminator 2) allows passage of only the emission (fluorescence) wavelength. Filters provide better detection limits but do not provide wavelength scanning abilities, which are often required if spectra need to be acquired. Instruments that use filters are called fluorimeters. Those which use monochromators are spectrofluorimeters. These allow scanning of the wavelengths and hence determination of the optimal wavelengths of both excitation and emission. Lasers generally provide monochromatic light, so an excitation wavelength selector (discriminator 1) is not needed in this case.

Sample
Cuvettes are normally 1 cm in path length and made of quartz (UV region) or glass (Vis region) for laboratory instruments. At least two adjacent faces will be transparent. Disposable

plastic cuvettes are sometimes used for work on aqueous solutions in the visible region of the spectrum. Optical fibres can be used to transport light to and from luminescent samples in relatively inaccessible locations, such as in the field, process streams and in vivo. For phosphorescence measurements at room temperature, conventional fluorescence cuvettes work well, though steps may be taken to eliminate oxygen by controlling the environment. If low temperature control of the phosphorescent sample is required, the samples may be contained in a quartz cell surrounded by liquid nitrogen in a Dewar flask. Molecular luminescence measurements can be strongly affected by parameters such as solvent, temperature, pH and ionic strength, so these need to be kept constant from sample to sample.

Detector
In fluorescence detection, the sample absorbs light and then re-emits it in all directions. A 90° arrangement is usually used in the instrument because this minimises any incident light that could interfere. Photomultiplier tubes (PMTs) are the most common detectors and are sensitive in the range 200-600 nm. Multichannel detectors such as PDAs, CCDs and CIDs can also be employed.


Output
The computer collects, stores, manipulates, analyses and displays the spectral data. It can carry out both qualitative and quantitative analysis as well as spectral searching and matching if the libraries are available.

Information Obtained
As spectra for molecules with similar fluorescing groups tend to be very alike, fluorescence is not a very useful identification (qualitative) technique. But fluorescence is an optical technique subject to the Beer-Lambert law, so for substances that do fluoresce, it is very useful for quantitative measurement. As already mentioned, fluorescence is a highly selective and sensitive technique due to very low background interference, so the limits of detection afforded are normally excellent.

Applications
Some cancer drugs are naturally fluorescent, e.g. anthracyclines. Hence, their levels can be measured in the blood of patients undergoing chemotherapy using fluorescence detection after separation by liquid chromatography23,24. This therapeutic drug monitoring can help to understand how the patient is metabolising the drugs. Many other drugs exhibit enough fluorescence to exploit for quantitative purposes25,26. Fluorescence was also used extensively on the human genome project. Native fluorescence has been used to study proteins27, and fluorescent probes have been employed in the study of micelles, nucleic acids and other biological molecules28. Immunochemical methods for compounds of biological interest, such as antibodies and antigens, involving fluorescent or phosphorescent labels have largely replaced older radiolabelling techniques.

Electroluminescence
Electroluminescence (EL) is an optical and electrical phenomenon in which a material emits light in response to an electric current passed through it, or to a strong electric field. This is distinct from light emission resulting from heat (incandescence), chemical reaction (chemiluminescence), sound (sonoluminescence), or other mechanical action (mechanoluminescence).

Mechanism
Electroluminescence is the result of radiative recombination of electrons and holes in a material (usually a semiconductor). The excited electrons release their energy as photons of light. Prior to recombination, electrons and holes are separated either as a result of doping of the material to form a p-n junction (in semiconductor electroluminescent devices such as LEDs), or through excitation by impact of high-energy electrons accelerated by a strong electric field (as with the phosphors in electroluminescent displays).
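The colour of the emitted light follows from the energy released on recombination: a photon carrying the band-gap energy has wavelength lambda = hc/E. A minimal sketch, using assumed textbook band-gap values that are not stated in the text:

```python
# Photon wavelength for radiative recombination across a band gap:
# lambda (nm) = hc / E, with hc ~ 1239.84 eV*nm. The band-gap values
# below are assumed textbook figures, given only for illustration.
def emission_wavelength_nm(band_gap_ev):
    """Wavelength of a photon carrying the full band-gap energy."""
    return 1239.84 / band_gap_ev

gan_nm = emission_wavelength_nm(3.4)    # GaN (~3.4 eV): ultraviolet/blue region
gaas_nm = emission_wavelength_nm(1.42)  # GaAs (~1.42 eV): near-infrared region
```

This is why the wide-gap nitrides listed below emit blue light while narrower-gap III-V materials emit in the red and infrared.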

The most typical inorganic thin-film EL (TFEL) material, for example, is ZnS:Mn, with its yellow-orange emission. Examples of the range of EL materials include:

Powder zinc sulfide doped with copper or silver
Thin film zinc sulfide doped with manganese
Natural blue diamond (diamond with boron as a dopant)
III-V semiconductors, such as InP, GaAs and GaN
Inorganic semiconductors, such as [Ru(bpy)3]2+(PF6-)2, where bpy is 2,2'-bipyridine

Practical implementations
The most common EL devices are either powder (primarily used in lighting applications) or thin film (for information displays). Electroluminescent automotive instrument panel backlighting, with each gauge pointer also an individual light source, entered production on 1960 Chrysler and Imperial passenger cars, and was continued successfully on several Chrysler vehicles through 1967. Sylvania Lighting Division in Salem and Danvers, MA, produced and marketed an EL night lamp under the trade name "Panelescent" at roughly the same time that the Chrysler instrument panels entered production. These lamps have proven incredibly reliable, with some samples known to be still functional after nearly 50 years of continuous operation. Later in the 1960s Sylvania's Electronic Systems Division in Needham, MA, developed and manufactured several instruments for the Apollo Lunar Lander and Command Module using
electroluminescent display panels manufactured by the Electronic Tube Division of Sylvania at Emporium, PA. Raytheon, Sudbury, MA, manufactured the Apollo guidance computer, which used a Sylvania electroluminescent display panel as part of its display-keyboard interface (DSKY). Powder phosphor-based electroluminescent panels are frequently used as backlights to liquid crystal displays. They readily provide a gentle, even illumination to the entire display while consuming relatively little electric power. This makes them convenient for battery-operated devices such as pagers, wristwatches, and computer-controlled thermostats, and their gentle green-cyan glow is a common sight in the technological world. They do, however, require relatively high voltage. For battery-operated devices, this voltage must be generated by a converter circuit within the device; this converter often makes an audible whine or siren sound while the backlight is activated. For line-voltage-operated devices, it may be supplied directly from the power line. Electroluminescent nightlights operate in this fashion. Thin film phosphor electroluminescence was first commercialized during the 1980s by Sharp Corporation in Japan, Finlux (Oy Lohja Ab) in Finland, and Planar Systems in the USA. Here, bright, long-life light emission is achieved in thin film yellow-emitting manganese-doped zinc sulfide material. Displays using this technology were manufactured for medical and vehicle applications where ruggedness and wide viewing angles were crucial, and liquid crystal displays were not well developed. Recently, blue, red, and green emitting thin film electroluminescent materials have been developed that offer the potential for long life and full color electroluminescent displays. In either case, the EL material must be enclosed between two electrodes, and at least one electrode must be transparent to allow the escape of the produced light.
Glass coated with indium oxide or tin oxide is commonly used as the front (transparent) electrode, while the back electrode is coated with reflective metal. Additionally, other transparent conducting materials, such as carbon nanotube coatings or PEDOT, can be used as the front electrode. The display applications are primarily "passive" (i.e. voltages are driven from the edge of the display). Similar to LCD trends, there have also been Active Matrix EL (AMEL) displays demonstrated, where circuitry is added to prolong voltages at each pixel. The solid state nature of TFEL allows for a very rugged and high resolution display fabricated even on silicon substrates. AMEL displays of 1280x1024 at over 1000 lines per inch (lpi) have been demonstrated by a consortium including Planar Systems. Electroluminescent technologies have low power consumption compared to competing lighting technologies, such as neon or fluorescent lamps. This, together with the thinness of the material, has made EL technology valuable to the advertising industry. Relevant advertising applications include electroluminescent billboards and signs. EL manufacturers are able to control precisely which areas of an electroluminescent sheet illuminate, and when. This has given advertisers the ability to create more dynamic advertising which is still compatible with traditional advertising spaces. In principle, EL lamps can be made in any color. However, the commonly used greenish color closely matches the peak sensitivity of human vision, producing the greatest apparent light output for the least electrical power input. Unlike neon and fluorescent lamps, EL lamps are not negative resistance devices, so no extra circuitry is needed to regulate the amount of current flowing through them.

Thermoluminescence
The thermoluminescence method is a scientific method used in the dating of pottery. The clay from which pottery is made contains quartz along with minerals such as calcite and mica. When this quartz is heated, electrons that have accumulated at trap sites within the crystal are released and return to their parent atoms, and the quartz emits light; this emission on heating is the phenomenon of thermoluminescence. Using this method, Jomon-style pottery found in the Republic of Vanuatu was dated at Oxford University in England, and it turned out to be ware that had been fired about 5,000 years ago.
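The dating arithmetic behind this method is simple in outline: the light emitted on heating measures the radiation dose accumulated since the pottery was last fired, and dividing by the annual environmental dose rate gives the age. The dose figures below are hypothetical, chosen only to reproduce the order of age quoted in the text.

```python
# A hedged sketch of the thermoluminescence dating calculation:
# age = equivalent (accumulated) dose / annual dose rate.
# The dose values are hypothetical illustrative numbers.
def tl_age_years(equivalent_dose_gy, annual_dose_gy_per_year):
    """Years since the trapped-charge 'clock' was last reset by firing."""
    return equivalent_dose_gy / annual_dose_gy_per_year

age = tl_age_years(15.0, 0.003)  # 15 Gy accumulated at 3 mGy per year
```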

Near-Field Scanning Optical Microscopy


Introduction
Near-field scanning optical microscopy (NSOM/SNOM) is a microscopic technique for nanostructure investigation that breaks the far-field resolution limit by exploiting the properties of evanescent waves. This is done by placing the detector very close to the specimen surface, at a distance much smaller than the wavelength λ. This allows surface inspection with high spatial, spectral and temporal resolving power. With this technique, the resolution of the image is limited by the size of the detector aperture and not by the wavelength of the illuminating light. In particular, lateral resolution of 20 nm and vertical resolution of 2-5 nm have been demonstrated.[1] As in optical microscopy, the contrast mechanism can be easily adapted to study different properties, such as refractive index, chemical structure and local stress. Dynamic properties can also be studied at a sub-wavelength scale using this technique.
NSOM/SNOM is a form of scanning probe microscopy.

A fundamental principle in diffraction-limited optical microscopy is that the spatial resolution of an image is limited by the wavelength of the incident light and by the numerical apertures of the condenser and objective lens systems. The development of near-field scanning optical microscopy (NSOM), also frequently termed scanning near-field optical microscopy (SNOM), has been driven by the need for an imaging technique that retains the various contrast mechanisms afforded by optical microscopy methods while attaining spatial resolution beyond the classical optical diffraction limit.

Theory
According to Abbe's theory of image formation, developed in 1873, the resolving capability of an optical component is ultimately limited by the spreading out of each image point due to diffraction. Unless the aperture of the optical component is large enough to collect all the diffracted light, the finer aspects of the image will not correspond exactly to the object. The minimum resolution (d) for the optical component is thus limited by its aperture size, as expressed by the Rayleigh criterion:


d = 0.61 λ0 / NA

Here, λ0 is the wavelength in vacuum and NA is the numerical aperture of the optical component (usually 1.3-1.4 for modern objectives). Thus, the resolution limit is usually around λ0/2 for conventional optical microscopy.[9] This treatment considers only the light diffracted into the far field, which propagates without any restrictions. NSOM makes use of evanescent or non-propagating fields that exist only near the surface of the object. These fields carry the high-frequency spatial information about the object and have intensities that drop off exponentially with distance from the object. Because of this, the detector must be placed very close to the sample, in the near-field zone, typically a few nanometers away. As a result, near-field microscopy remains primarily a surface inspection technique. The detector is then rastered across the sample using a piezoelectric stage. The scanning can either be done at a constant height or with regulated height by using a feedback mechanism.
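Both limits in this passage are easy to put numbers on. The sketch below evaluates the Rayleigh criterion for assumed illustrative values (green light, a high-NA objective) and models the exponential fall-off of a near field with an assumed decay length:

```python
import math

# Rayleigh criterion d = 0.61 * lambda / NA; wavelength, NA and the
# evanescent decay length below are assumed illustrative values.
def rayleigh_limit_nm(wavelength_nm, na):
    """Diffraction-limited far-field resolution."""
    return 0.61 * wavelength_nm / na

d = rayleigh_limit_nm(500, 1.4)  # green light, modern high-NA objective: ~lambda/2

def near_field_intensity(z_nm, decay_length_nm=100.0):
    """Relative intensity of a non-propagating field at height z above the surface."""
    return math.exp(-z_nm / decay_length_nm)
```

The exponential term is why the detector must sit within a few nanometers of the surface: at a wavelength-scale separation, the evanescent signal has essentially vanished.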

Near-field scanning optical microscopy is classified among a much broader instrumental group referred to generally as scanning probe microscopes (SPMs). All SPMs owe their existence to the development of the scanning tunneling microscope (STM), which was invented by IBM research scientists Gerd Binnig and Heinrich Rohrer in the early 1980s. The theoretical resolution limit of conventional optical imaging methodology (200 to 300 nanometers for visible light) was the primary factor motivating the development of recent higher-resolution scanning probe techniques, such as STM and atomic force microscopy (AFM), and previously, transmission electron microscopy (TEM) and scanning electron microscopy (SEM). These and related techniques have enabled phenomenal gains in resolution, even to the level of visualizing individual atoms. However, prior to the development of near-field scanning optical methods, the superior resolution capabilities have come at the expense of the wide variety of contrast-enhancing mechanisms available to optical microscopy. Furthermore, the extreme specimen preparation requirements for most of the high-resolution methods have limited their application in many areas of study, particularly in biological investigations involving dynamic or in vitro measurements. The method of near-field scanning optical microscopy combines the extremely high topographic resolution of techniques such as AFM with the significant temporal resolution, polarization characteristics, spectroscopic capabilities, sensitivity, and flexibility inherent in many forms of optical microscopy. The interaction of light with an object, such as a microscope specimen, results in the generation of both near-field and far-field light components. The far-field light propagates through space in an unconfined manner and is the "normal" light utilized in conventional microscopy.
The near-field (or evanescent) light consists of a nonpropagating field that exists near the surface of an object at distances less than a single wavelength of light. Light in the near-field carries more high-frequency information and has its greatest amplitude in the region within the first few tens of nanometers of the specimen surface. Because the near-field light decays exponentially within a distance less than the wavelength of the light, it usually goes undetected. In effect, as the light propagates away from the surface into the far-field region, the highest-frequency spatial information is filtered out, and the well known diffraction-based Abbe limit on resolution is imposed. By detecting and utilizing the near-field light before it undergoes diffraction, the NSOM makes available the full gamut of far-field optical contrast-enhancing mechanisms at much higher spatial resolution. In addition to non-diffraction-limited high-resolution optical imaging, near-field optical techniques can be applied to chemical and structural characterization through spectroscopic analysis at resolutions beneath 100 nanometers. The most recent commercial NSOM instruments combine the scanning force techniques of an AFM with the optical detection capabilities of conventional optical microscopy. The overall NSOM design can vary significantly, depending upon the requirements of the particular research project. One of the most common configurations is to incorporate the NSOM into an inverted fluorescence microscope. By basing the NSOM on a conventional optical instrument, many of the microscopist's familiar optical imaging modes are available in combination with near-field high-resolution capabilities. In addition to the optical information, NSOM can generate topographical or force data from the specimen in the same manner as the atomic force microscope. 
The two separate data sets (optical and topographical) can then be compared to determine the correlation between the physical structures and the optical contrast. The real power of the NSOM technique may rest in this unique capability of combining a topographical data set with a variety of corresponding optical data at resolutions far better than is possible under the diffraction limitations of focused light. Presented in Figure 1 is a near-field scanning instrument configured around a modern inverted optical microscope. Such an arrangement conveniently allows the NSOM head, incorporating the probe and its positioning mechanism, to be mounted at the specimen stage location, with the objective positioned beneath the stage. The system illustrated in the figure includes an external laser to provide illumination, a photomultiplier detector for optical signal collection, and a computer and electronic control unit for management of specimen and probe positioning and image acquisition. Although the scanning probe microscope family encompasses a vast array of specialized and highly varied instruments, their common operational motif is the employment of a local probe in close interaction with the specimen. A typical SPM local probe is equipped with a
nanometer-sized tip whose tip-to-specimen interactions can be sensed and recorded by a variety of mechanisms. Each different type of SPM is characterized by specific properties of the local probe and the nature of its interaction with the specimen surface.

A representation of the typical NSOM imaging scheme is presented in Figure 2, in which an illuminating probe aperture having a diameter less than the wavelength of light is maintained in the near field of the specimen surface. Because close proximity or contact between the specimen and probe (separation less than the wavelength) is a general requirement for non-diffraction-limited resolution, the vast majority of all SPMs require a feedback system that precisely controls the physical separation of the probe and specimen. In addition, an x-y-z scanner (usually piezoelectric) is utilized to control the movement of the probe over the specimen. The NSOM configuration illustrated in Figure 2 positions the objective in the far field, in the conventional manner, for collection of the image-forming optical signal. Depending upon the design of the particular instrument, the x-y-z scanner can either be attached to the specimen or to the local probe. If the scanner and specimen are coupled, then the specimen moves under the fixed probe tip in a raster pattern to generate an image from the signal produced by the tip-specimen interaction. The size of the area imaged is dependent only on the maximum displacement that the scanner can produce. A computer simultaneously evaluates the probe position, incorporating data obtained from the feedback system, and controls the scanning of the tip (or specimen) and the separation of the tip and specimen surface. The information generated as a result of sensing the interaction between the probe and specimen is collected and recorded by the computer point-by-point during the raster movement. The computer then renders this data into two-dimensional data sets (lines). Two-dimensional data sets gathered by the NSOM instrument are subsequently compiled and displayed as a three-dimensional reconstruction on a computer monitor. 
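The point-by-point acquisition described above amounts to a nested loop over scanner positions, sampling the tip-specimen signal at each point and assembling the lines into an image. A minimal sketch; the measurement function here is a stand-in signal, not a model of any real probe:

```python
import numpy as np

# Sketch of raster acquisition: visit each (x, y) scanner position,
# sample the tip-specimen signal, and assemble line-by-line
# two-dimensional data sets into an image.
def raster_scan(nx, ny, measure):
    image = np.zeros((ny, nx))
    for iy in range(ny):          # one scan line (two-dimensional data set)
        for ix in range(nx):      # point-by-point along the line
            image[iy, ix] = measure(ix, iy)
    return image

# Stand-in "interaction" signal: distance from the image centre.
img = raster_scan(64, 64, lambda ix, iy: np.hypot(ix - 32, iy - 32))
```

The compiled array of lines is what the computer then renders as a three-dimensional reconstruction; in a real instrument the feedback loop also records a topography channel alongside the optical one.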
The typical size scale of features measured with a scanning probe microscope ranges from the atomic level (less than 1 nanometer) to more than 100 micrometers. The scanning probe microscopy family includes modalities based on magnetic force, electrical force, electrochemical interactions, mechanical interactions, capacitance, ion conductance, Hall coefficient, thermal properties, and optical properties (NSOM, for example). NSOM images are typically generated by scanning a sub-wavelength aperture over the specimen in a two-dimensional raster pattern and collecting the emitted radiation in the optical far-field, point-by-point. Previously developed high-resolution techniques, such as scanning electron microscopy, transmission electron microscopy, scanning tunneling microscopy, and atomic force microscopy, do not benefit from the wide array of contrast mechanisms available to optical microscopy, and in most cases, are limited to the study of specimen surfaces only. Aside from the available contrast-enhancing techniques of staining, fluorescence, polarization, phase contrast, and differential interference contrast, optical methods have inherent spectroscopic and temporal resolution capabilities. The high-resolution afforded by electron microscopy techniques is achieved at the cost of greater limitations on acceptable specimen types and increased demands of sample preparation, including vacuum compatibility requirements, preparation of thin sections for transmission microscopy, and generally, the application of a conductive coating for non-conducting specimens (STM also has this requirement). For biological materials, specimen preparation is especially demanding, as complete dehydration is generally required prior to carrying out sectioning or coating. 
Although atomic force microscopy is free from many of these specimen preparation considerations, and can be applied to study specimens near the atomic level in ambient conditions, the method does not readily provide spectroscopic information from the specimen. An additional limitation is that AFM is not able to take advantage of the wide array of reporter dyes available to fluorescence microscopy.


The NSOM method is particularly useful to nanotechnologists (physicists, materials scientists, chemists, and biologists) who require ultra-high resolution spatial information from the broad range of materials encountered in their varied disciplines. Although newer near-field instrumentation techniques are being developed to image three-dimensional volume sets, NSOM has typically been limited to specimens that are accessible by a local probe, which is physically attached to a macroscopic scan head. A schematic illustrating the control and information flow of an inverted optical microscope-based NSOM system is presented in Figure 3. The laser excitation source is coupled into a fiber optic probe for specimen illumination, with the probe tip movement being monitored through an optical feedback loop incorporating a second laser focused on the tip. The motion of the probe tip, translational stage movement, and acquisition and display of optical and topographic (or other force) images are controlled through additional electronics and the system computer.

History of NSOM
Edward H. Synge, beginning in 1928, published a series of articles that first conceptualized the idea of an ultra-high resolution optical microscope. Synge's proposal suggested a new type of optical microscope that would bypass the diffraction limit, but required fabrication of a 10-nanometer aperture (much smaller than the light wavelength) in an opaque screen. A stained and embedded specimen would be ground optically flat and scanned in close proximity to the aperture. While scanning, light illuminating one side of the screen and passing through the aperture would be confined by the dimensions of the aperture, and could be used to illuminate the specimen before undergoing diffraction. As long as the specimen remained within a distance less than the aperture diameter, an image with a resolution of 10 nanometers could be generated. In addition, Synge accurately outlined a number of the technical difficulties that building a near-field microscope would present. Included in these were the challenges of fabricating the minute aperture, achieving a sufficiently intense light source, specimen positioning at the nanometer scale, and maintaining the aperture in close proximity to the specimen. The proposal, although visionary and simple in concept, was far beyond the technical capabilities of the time. Experimental verification of the feasibility of Synge's proposals had to wait until 1972 when E. A. Ash and G. Nicholls demonstrated the near-field resolution of a sub-wavelength aperture scanning microscope operating in the microwave region of the electromagnetic spectrum (illustrated in Figure 4). Utilizing microwaves, with a wavelength of 3 centimeters, passing through a probe-forming aperture of 1.5 millimeters, the probe was scanned over a metal grating having periodic line features. 
Both the 0.5-millimeter lines and 0.5-millimeter gaps in the grating were easily resolvable, demonstrating sub-wavelength resolution having approximately one-sixtieth (0.017) the period of the imaging wavelength.

Extension of Synge's concepts to the shorter wavelengths in the visible spectrum presented significantly greater technological challenges (in aperture fabrication and positioning), which were not overcome until 1984 when a research group at IBM Corporation's Zurich laboratory reported optical measurements at a subdiffraction resolution level. An independent group working at Cornell University took a somewhat different approach to overcome the technological barriers of near-field imaging at visible wavelengths, and the two groups' results began the development that has led to the current NSOM instruments. The IBM researchers employed a metal-coated quartz crystal probe on which an aperture was fabricated at the tip, and designated the technique scanning near-field optical microscopy (SNOM). The Cornell group used electron-beam lithography to create apertures, smaller than 50 nanometers, in silicon and metal. The


IBM team was able to claim the highest optical resolution at the time, 25 nanometers, or one-twentieth of the 488-nanometer radiation wavelength, utilizing a test specimen consisting of a fine metal line grating. Although the achievement of non-diffraction-limited imaging at visible light wavelengths had demonstrated the technical feasibility of the near-field aperture scanning approach, it was not until after 1992 that NSOM began to evolve into a scientifically useful instrument. This advance in utility can be primarily attributed to the development of shear-force feedback systems and to the employment of a single-mode optical fiber as the NSOM probe, both of which were adapted for the near-field technique by Eric Betzig while working at AT&T Bell Laboratories.

NSOM Instrumentation
In an aperture-scanning NSOM instrument, the point-spread function in the near-field can be approximated by a Gaussian profile whose 1/e intensity radius is of the same order as the radius of the aperture at the tip of the NSOM probe. The mode of light propagation is primarily evanescent (and parallel to the specimen surface) when the radius of the illuminating source is less than one-third of the imaging light wavelength. In order to achieve an optical resolution beyond the diffraction limit (the resolution limit of conventional optical microscopy), the probe tip must be brought within this near-field region. For NSOM, the separation distance between the probe and the specimen surface is typically on the order of a few nanometers. Radiation near the source is highly collimated within the near-field region, but after propagating a few wavelengths from the specimen, the radiation experiences significant diffraction and enters the far-field regime. There are two fundamental differences between near-field and far-field (conventional) optical microscopy: the size of the specimen area that is illuminated, and the separation distance between the source of radiation and the specimen. In conventional far-field optical microscopy, the distance between the light source and the specimen is typically much greater than the wavelength of the incident light, whereas in NSOM, a necessary condition of the technique is that the illumination source is closer to the specimen than the wavelength of the illuminating radiation.
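The Gaussian approximation of the near-field profile can be put into a few lines of code. The functional form and the 25 nm aperture value below are illustrative assumptions, not measured data.

```python
import math

# A sketch of the near-field profile described above: a Gaussian whose 1/e
# intensity radius equals the aperture radius. Both the exact functional form
# and the 25 nm example aperture are illustrative assumptions.

def near_field_intensity(r_nm, aperture_radius_nm):
    """Normalized intensity at lateral distance r_nm from the aperture axis."""
    return math.exp(-((r_nm / aperture_radius_nm) ** 2))

# At r equal to the aperture radius, the intensity has dropped to 1/e (~37%):
# near_field_intensity(25.0, 25.0) equals 1/e for a 25 nm aperture.
```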

X-Y-Z Scanning
At the heart of all scanning probe microscopy techniques is the scanning system. Its design and function are primary determinants of the attainable scan resolution. The scanner must have low noise (small position fluctuations) and precision positioning capability (typically less than 1 nanometer). The required precision of the probe positioning usually necessitates that the entire instrument rest on a vibration isolation table, or be suspended by some other means, to eliminate the transfer of mechanical vibrations from the building to the instrument. Low-noise electronics and high-voltage amplifiers having large dynamic range are necessary to drive the piezo-electric actuators of the probe and specimen positioning systems. The piezos usually require power supplies providing 0 to +150 or -150 to +150 volts for full range displacement. For most NSOM applications, it is necessary to maintain the probe in constant feedback above the specimen surface being imaged. Precise control of the probe is required because it must be maintained within the narrow near-field regime, but prevented from actually contacting the surface. The stringent requirement of maintaining a constant gap between the probe and the specimen is best met by employing a real-time feedback control system. The advantages of this type of position control are numerous. Perhaps the most important consideration is damage to the probe tip or the specimen, which is likely if the two come into contact. Furthermore, it is possible for the tip to accumulate debris from the specimen surface being scanned if contact is made. Although much less likely, this artifact can occur even with the tip under feedback control, especially if the feedback set point is not correctly chosen. 
A further benefit of operating the probe scanning system with feedback control is to obtain accurate optical signal levels, eliminating the dramatic variations caused by the exponential dependence of these signals on the tip-to-specimen separation. The exponential variation of signal level with changing tip-to-specimen separation can produce artifacts in the image that do not accurately represent optical information related to the specimen. A critical requirement of the near-field techniques is that the probe tip must be positioned and held within a few nanometers of the surface in order to obtain high-resolution and artifact-free optical images, and this is not readily achieved without utilizing some form of feedback mechanism. Several different techniques have been employed to monitor the z-position of the probe tip, and its instantaneous separation from the specimen surface. These methods include:

- Interferometric measurement of the tip amplitude, using either a two-beam interferometer or a fiber interferometer.
- Electron tunneling (limited to conductive specimens).
- Detection of the light emitted through the tip (either in transmission or collection mode), and photon tunneling.
- Constant force (atomic force feedback), the most common method, which can be further subdivided as:
  1. Diffraction of a separate light source by the tip.
  2. Mechanical sensor attached to the tip (for example, a quartz tuning fork).
- Capacitance detection.

To date, the two most commonly employed mechanisms of tip positioning have been optical methods that monitor the tip vibration amplitude (usually interferometric), and a non-optical tuning fork technique. Both of these are versions of the shear-force feedback method and are described in more detail in a following section.
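Whichever detection scheme is used, the regulation loop itself has a common shape. The following is a hedged sketch of a proportional-integral controller that holds the measured (damped) oscillation amplitude at a user-chosen set point; read_amplitude and move_z are hypothetical instrument calls, and the gains are arbitrary illustrative values.

```python
# Hedged sketch of the gap-regulation loop shared by these feedback methods:
# a proportional-integral controller holds the damped oscillation amplitude
# at a set point. read_amplitude and move_z are hypothetical instrument
# calls, and the kp/ki gains are arbitrary illustrative values.

def feedback_step(read_amplitude, move_z, state, set_point, kp=0.5, ki=0.05):
    """One control iteration: steer z so the damped amplitude tracks set_point."""
    error = read_amplitude() - set_point   # > 0 means amplitude too high (tip too far)
    state["integral"] += error
    # Narrow the gap when the amplitude is above the set point, widen it when below.
    move_z(-(kp * error + ki * state["integral"]))
    return error
```

Repeatedly calling feedback_step drives the tip toward the height at which the damped amplitude equals the set point, without ever commanding contact with the surface.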

Oscillatory Feedback Methods


In order to improve signal-to-noise ratios for the feedback signal, the NSOM tip is almost always oscillated at the resonance frequency of the probe. This allows lock-in detection techniques (basically a bandpass filter with the center frequency set at the reference oscillation frequency) to be utilized, which eliminates positional detection problems associated with low-frequency noise and drift. As the oscillating tip approaches the specimen, forces between the tip and specimen damp the amplitude of the tip oscillation.
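The lock-in principle described above can be demonstrated numerically: multiplying the sampled signal by reference waveforms at the dither frequency and averaging rejects noise and drift at other frequencies. All frequencies and amplitudes below are arbitrary example values, not instrument settings.

```python
import math

# Illustrative sketch of lock-in detection: correlate the signal with
# in-phase and quadrature references at the dither frequency and average.
# Example values only; not derived from any particular instrument.

def lock_in_amplitude(signal, fs_hz, f_ref_hz):
    """Recover the amplitude of the f_ref_hz component of a sampled signal."""
    n = len(signal)
    x = sum(s * math.cos(2 * math.pi * f_ref_hz * i / fs_hz) for i, s in enumerate(signal))
    y = sum(s * math.sin(2 * math.pi * f_ref_hz * i / fs_hz) for i, s in enumerate(signal))
    # In-phase and quadrature averages; the factor 2 restores the peak amplitude.
    return 2 * math.hypot(x / n, y / n)

# A weak 32,768 Hz "dither" component buried under a much larger 50 Hz hum:
fs = 1_000_000  # 1 MHz sampling rate (example value)
sig = [0.01 * math.sin(2 * math.pi * 32768 * i / fs)
       + 1.0 * math.sin(2 * math.pi * 50 * i / fs) for i in range(100_000)]
# lock_in_amplitude(sig, fs, 32768) recovers roughly 0.01 despite the hum.
```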


A measure of the mechanical (or electrical) oscillator quality is given by a dimensionless parameter called the quality factor, or Q-factor, or simply Q. The quality factor is defined as the oscillator's resonance frequency divided by its resonance width. It is generally beneficial to maximize the Q of the probe oscillation in order to achieve greater stability and more sensitive tip height regulation. The lower the Q of the oscillating probe, the lower the signal-to-noise ratio, and correspondingly lower quality topographic information is obtained from the oscillatory feedback mechanism. Historically, the letter Q has been used to represent the ratio of reactance to resistance of an electrical circuit element. With regard to oscillator characteristics, the term "quality factor" was introduced after the symbol Q had already been arbitrarily chosen. Typically, both the peak resonance frequency and the Q-factor are found to change upon approach of the probe tip to the specimen surface. The tip oscillation amplitude and frequency can be monitored by several different techniques, which generally fall into two groups. The shear-force mode utilizes lateral shear forces generated between the tip and specimen (parallel to the surface) to control the tip-specimen gap during imaging. In contrast, the tapping mode relies on atomic forces occurring during oscillation of the tip perpendicular to the specimen surface (as in AFM) to generate the feedback signal for tip control. Each oscillatory mode has several advantages and disadvantages.
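The Q definition above can be applied directly to a measured resonance curve: Q = fr / df, where df is the peak width at 1/sqrt(2) (about 70.7 percent) of the maximum amplitude. The Lorentzian test data used in the sketch below is illustrative only, and the estimator assumes a single, well-sampled peak.

```python
import math

# Estimate Q from a sampled resonance curve, per the definition in the text:
# resonance frequency divided by the width at ~70.7 percent of peak amplitude.

def q_from_curve(freqs_hz, amps):
    """Estimate Q from sampled (frequency, amplitude) resonance data."""
    peak = max(amps)
    f_r = freqs_hz[amps.index(peak)]           # frequency at maximum amplitude
    threshold = peak / math.sqrt(2)            # the ~70.7 percent points
    above = [f for f, a in zip(freqs_hz, amps) if a >= threshold]
    return f_r / (above[-1] - above[0])        # width taken across the peak
```

For a resonance centered at 32,768 Hz with a 3.3 Hz width, this returns a Q near 10,000, consistent with the figures quoted for quartz tuning forks.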

Shear-Force Feedback
The shear-force feedback method laterally dithers the probe tip at a mechanical resonance frequency in proximity to the specimen surface. The dither amplitude is usually kept low (less than 10 nanometers) to prevent adversely affecting the optical resolution. For optimum image quality, shear-force feedback techniques are usually restricted to use with specimens that have relatively low surface relief, and longer scan times are required compared to operation in tapping mode. However, the straight probes typically employed in shear-force feedback techniques are easier to fabricate and have a lower cost per probe than their bent probe counterparts. With respect to light throughput, the straight probe has a decided advantage over the bent probe, exhibiting much lower loss in propagation intensity. Shear-force imaging with a straight probe, however, is usually very difficult to perform in a liquid medium because the additional viscous damping of the fluid causes a dramatic decrease of the probe oscillation amplitude. In typical operation, as the oscillating probe approaches the specimen surface, the amplitude, phase, and frequency of oscillation each change, due to dissipative and adiabatic forces present at the tip of the probe. The probe oscillation damping due to tip-specimen interaction increases nonlinearly with decreasing tip-specimen separation. The nature of the shear forces that are responsible for damping the probe tip oscillations during near-field specimen approach is the subject of much research interest. One group of investigators used electron tunneling current measurements between a metallized NSOM probe and specimen, in shear-force feedback mode, to conclude that the probe actually contacts the surface during the approach cycle of the oscillation. 
Measurements of the tunneling current, made as the tip approaches the specimen, indicate that the tip touches the specimen initially as the probe goes into feedback and continues to lightly touch the surface once per oscillatory cycle. From this information it is clear that the most beneficial approach is to make the feedback set-point as high as possible (for example, approximately 99.9 percent of the original undamped signal) in order to reduce the physical interactions between the probe and the specimen. In practice, the upper limit on the feedback set-point is determined by the signal-to-noise ratio of the feedback signal. Optical feedback methods of monitoring the tip vibration amplitude were the most commonly employed during early development of shear-force techniques in NSOM, and can also be applied in the tapping mode. In this approach, for either the straight or bent probe types, a laser is tightly focused as close to the end of the NSOM probe as possible. With the straight probe variation, when under laser illumination, a shadow is cast by the probe onto a split photodiode. In the case of the bent probe method, the laser is reflected from the top surface of the probe to the split photodiode (similar to the optical feedback techniques employed in the AFM). With the laser feedback established, the probe is then vibrated in either tapping mode or shear-force mode, at a known frequency, utilizing a dither piezo (see Figure 7). The split photodiode collects the laser light, and the difference between the signals from each side of the detector is determined. A higher signal-to-noise ratio can be obtained by using a lock-in amplifier to select a portion of the signal that is at the same frequency as the dither piezo drive signal. 
The main problem associated with this type of feedback mechanism is that the light source (for example, a laser), which is used to detect the tip vibration frequency, phase, and amplitude, becomes a potential source of stray photons that can interfere with the detection of the NSOM signal. One mechanism for dealing with this effective increase in background signal is to provide a feedback light source that has a different wavelength (usually longer) than the near-field source. This scheme requires additional filtration in front of the detector to selectively block the unwanted photons originating within the feedback system. In most cases, the added filters also block a small percentage of the near-field photons, resulting in reduced signal levels. A non-optical feedback method is not subject to problems of this nature, and is a primary reason that methods such as the tuning-fork technique (described below) have become increasingly popular. Piezoelectric quartz tuning forks were first introduced into scanning probe microscopy for use in scanning near-field acoustic microscopy. Later, tuning forks were incorporated into the NSOM to serve as inexpensive and simple, non-optical excitation and detection devices in distance control functions. Quartz crystals have the property of generating an electric field when placed under pressure and, conversely, of changing dimensions when an electric field is applied. This property is termed piezoelectric and occurs when the crystal is composed of molecules that lack both centers and planes of symmetry. Quartz crystals suitable for use in precision oscillators (digital


clocks) and highly selective wave filters are mass-produced in huge quantities, making them relatively inexpensive. When quartz tuning forks are utilized for regulation in a feedback loop, their very high mechanical quality factor, Q (as high as approximately 10000), and corresponding high gain, provides the system with high sensitivity to small forces, typically on the order of a piconewton. The basic configuration of the tuning-fork method used for shear-force tip feedback consists of a single mode optical fiber attached to one arm of a quartz crystal tuning fork, which is oscillated at the tuning fork's resonance frequency. The equivalent circuit for the tuning fork is a series RLC resonator in parallel with package capacitance. The most common tuning fork resonance frequency is 32,768 hertz (Hz), but the devices are available with resonances ranging from 10 kilohertz to several tens of megahertz. The single mode optical fiber, routed to the NSOM head, is physically coupled to the crystal tuning fork, which in turn can be driven internally (electrically) or externally by a dither piezo to which the fork is rigidly attached. The mode of oscillation of the tuning fork depends upon the means of excitation. If the fork is (directly) driven electrically, the arms vibrate in opposite directions, whereas external mechanical excitation produces an oscillation in which both arms of the tuning fork move in the same direction. Figure 5 presents a schematic of a quartz tuning fork configured with an attached fiber for shear-force detection. The piezoelectric potential is acquired from electrodes on the fork and then amplified with a gain of approximately 100 (using an instrumentation amplifier) to produce a signal on the order of a few tens of millivolts. The signal is then fed into a lock-in amplifier and referenced to the drive signal of the oscillating tuning fork. 
The output from the lock-in amplifier (amplitude, phase, or a combination of amplitude and phase such as the x or y signals) is then compared to a user-specified reference signal in the control loop to maintain the probe in feedback above the specimen.
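The series-RLC model of the tuning fork mentioned above can be made concrete with a resonance-frequency calculation. The component values below are illustrative assumptions chosen to land near the standard 32,768 Hz watch-crystal resonance, not datasheet figures; real motional parameters vary by device.

```python
import math

# Series-RLC motional branch of a quartz tuning fork (illustrative values).
# Quartz resonators are modeled with unusually large motional inductance
# and tiny motional capacitance; these numbers are assumptions for scale.

def series_rlc_resonance_hz(l_henry, c_farad):
    """Series resonance frequency: f = 1 / (2 * pi * sqrt(L * C))."""
    return 1.0 / (2 * math.pi * math.sqrt(l_henry * c_farad))

f_res = series_rlc_resonance_hz(8000.0, 2.95e-15)   # ~32.8 kHz
```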

An example resonance curve produced by a 32.7-kHz tuning fork with the attached NSOM fiber is illustrated in Figure 6. The fork response was measured by sweeping the frequency from 31 kHz to 33 kHz and simultaneously measuring the amplitude and phase of the signal. Upon attachment of the fiber, the resonance frequency shifts and the Q-factor of the resonance drops from approximately 20,000 to less than 1000. The Q is defined as Q = fr / Δf, where fr is the frequency at the maximum amplitude and Δf is the width of the resonance peak at the points where the amplitude has fallen to the peak amplitude divided by the square root of 2, or approximately 70.7 percent of the peak amplitude (the half-power points). There are several advantages of the tuning fork method that have led to its increased favor over optical techniques of tip regulation. Since the detection of the tip motion is not optical, there is no risk of additional stray light being introduced in the vicinity of the aperture that might interfere with the NSOM signal detection. Additionally, the tuning fork system does not require the tedious alignment procedures of a separate external laser source and associated focusing optical components. Because of its compactness and relative ease of use, the tuning fork method lends itself to applications requiring remote operation, such as those employed in vacuum systems or environmental control chambers.

Tapping-Mode Feedback
Tapping-mode feedback is another popular method for tip-to-specimen distance control, and is implemented using several different probe types. A useful design consists of a modified AFM cantilever and transparent tip, usually fabricated from silicon nitride and coated with metal on the bottom of the probe tip (discussed and illustrated in the accompanying section on near-field probes). The most commonly employed probe for tapping-mode methods is the conventional fiber optic probe having a near 90-degree bend close to the tip aperture.
A representation of a bent optical fiber NSOM probe is presented in Figure 7. The resolution of the tapping-mode near-field image is defined not only by the radius of the tip but also by the amplitude of the oscillation occurring perpendicular to the specimen surface. This is due to the acute sensitivity of the optical signal to the tip-to-specimen separation. In order to maintain high near-field resolution, it is necessary to either maintain a small oscillation amplitude relative to the tip aperture, or to compensate for larger oscillations. A mechanism that has been demonstrated to improve resolution is to synchronize the collected NSOM signal with the cycle of tip oscillation. Modulating the light coupled into the probe, and adjusting the phase such that the specimen is only illuminated when the tip is at its closest approach point, allows maintaining high resolution imaging at fairly large tip oscillation amplitudes. There are several drawbacks in the application of bent optical probes, each of which can be attributed to the bend itself. A significant problem is the increased difficulty of the probe fabrication, especially when applying a metal coating to the tip. An additional disadvantage is the increased optical loss that occurs due to the bend in the probe. This loss in throughput efficiency is significant, and some published measurements indicate that bent optical fiber probes are at least an order of magnitude less efficient than conventional straight fiber probes. In certain operational modes of NSOM, the intensity loss is not a serious limitation because additional light can be coupled into the fiber to compensate, assuming sufficient laser power is available. The increase in optical coupling is an option because the optical losses, as well as increased heating, occur at the bend in the fiber and not at the aperture of the probe, where local heating would present a major problem. 
Another potential drawback with bent probes is a change in certain tip properties that occurs due to the presence of the bend, such as a decrease in the extinction ratio obtained when performing polarized light measurements. Extinction ratios of approximately 70:1 have been measured in the far-field utilizing bent tips, as compared to values of greater than 100:1 with conventional straight fiber probes.


In the bent probe feedback mode, the probe is oscillated perpendicular to the specimen surface similar to tapping-mode AFM. The amplitude of oscillation can be monitored either mechanically, with a piezo-electric device such as a quartz tuning fork, or optically by reflecting a laser from the top surface of the tip cantilever. The probe is excited to oscillate in one of its eigenmodes and the tip-specimen distance is recorded and used dynamically as the feedback signal. The tip of the probe is prevented from adhering to the specimen due to the oscillation, which provides both a short contact time and a reverse driving force due to the cantilever bending. The success of this feedback method is dependent on the increased sensitivity of the tip-specimen distance control resulting from the resonance enhancement of the tip vibration. The sharpness and sensitivity of the tip vibration is characterized by the Q of the cantilever (similar to the measured Q in the shear-force oscillation). The Q value of the probe is substantially reduced by the viscosity of a liquid environment, and this is usually accompanied by a large shift in the resonance frequency. Some of the limitations of near-field optical microscopy include:

- Practically zero working distance and an extremely small depth of field.
- Extremely long scan times for high resolution images or large specimen areas.
- Very low transmissivity of apertures smaller than the incident light wavelength.
- Only features at the surface of specimens can be studied.
- Fiber optic probes are somewhat problematic for imaging soft materials due to their high spring constants, especially in shear-force mode.

NSOM is currently still in its infancy, and more research is needed toward developing improved probe fabrication techniques and more sensitive feedback mechanisms. The future of the technique may actually rest in refinement of apertureless near-field methods (including interferometric), some of which have already achieved resolutions on the order of 1 nanometer. However, typical resolutions for most NSOM instruments range around 50 nanometers, which is only 5 or 6 times better than that achieved by scanning confocal microscopy. This moderate increase in resolution comes at a considerable cost in the time required to set up the NSOM instrument for proper imaging, and in the complexity of operation. The greatest advantage of NSOM probably rests in its ability to provide optical and spectroscopic data at high spatial resolution, in combination with simultaneous topographic information. Combining atomic force measurements and near-field scanning optical microscopy has proven to be an extremely powerful approach in certain areas of research, providing new information about a variety of specimen types that is simply not attainable with far-field microscopy.


U N I T 5


Nuclear Magnetic Resonance (NMR)


Nuclear magnetic resonance spectroscopy, most commonly known as NMR spectroscopy, is the name given to a technique which exploits the magnetic properties of certain nuclei. This phenomenon and its origins are detailed in a separate section on nuclear magnetic resonance. The most important applications for the organic chemist are proton NMR and carbon-13 NMR spectroscopy. In principle, NMR is applicable to any nucleus possessing spin. Many types of information can be obtained from an NMR spectrum. Much like using infrared spectroscopy to identify functional groups, analysis of a 1D NMR spectrum provides information on the number and type of chemical entities in a molecule. However, NMR provides much more information than IR. The impact of NMR spectroscopy on the natural sciences has been substantial. It can, among other things, be used to study mixtures of analytes, to understand dynamic effects such as change in temperature and reaction mechanisms, and is an invaluable tool in understanding protein and nucleic acid structure and function. It can be applied to a wide variety of samples, both in the solution and the solid state.

Basic NMR techniques


When placed in a magnetic field, NMR active nuclei (such as 1H or 13C) absorb at a frequency characteristic of the isotope. The resonant frequency, the energy of the absorption, and the intensity of the signal are proportional to the strength of the magnetic field. For example, in a 21 tesla magnetic field, protons resonate at 900 MHz. It is common to refer to a 21 T magnet as a 900 MHz magnet, although different nuclei resonate at different frequencies at this field strength. In the Earth's magnetic field the same nuclei resonate at audio frequencies. This effect is used in Earth's field NMR spectrometers and other instruments. Because these instruments are portable and inexpensive, they are often used for teaching and field work. Since a nucleus is a charged particle in motion, it develops a magnetic field. 1H and 13C have nuclear spins of 1/2 and so behave like simple, tiny bar magnets. In the absence of a magnetic field, these are randomly oriented, but when a field is applied they line up parallel to the applied field, either spin aligned or spin opposed. The more highly populated state is the lower-energy, spin-aligned state. Two schematic representations of these arrangements are shown below:

In NMR, electromagnetic radiation is used to "flip" the alignment of nuclear spins from the low-energy spin-aligned state to the higher-energy spin-opposed state. The energy required for this transition depends on the strength of the applied magnetic field (see below), but it is small and corresponds to the radio frequency range of the EM spectrum.
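The field-frequency proportionality described above can be put into numbers using the text's proton figures (900 MHz at 21 tesla). Note that the 42.86 MHz/T ratio derived below follows from those rounded numbers; the precise proton value is closer to 42.58 MHz/T.

```python
# Linear field-frequency relation for protons, calibrated from the text's
# figures (900 MHz at 21 T). The precise proton gyromagnetic ratio is
# ~42.58 MHz/T; 42.86 MHz/T is what the rounded textbook numbers imply.

PROTON_MHZ_PER_TESLA = 900.0 / 21.0   # ~42.86 MHz per tesla

def proton_resonance_mhz(field_tesla):
    """Proton resonance frequency at a given field strength."""
    return PROTON_MHZ_PER_TESLA * field_tesla

# A "300 MHz magnet" therefore corresponds to a field of about 7 T:
# proton_resonance_mhz(7.0) gives 300.0 MHz.
```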

As this diagram shows, the energy required for the spin-flip depends on the magnetic field strength at the nucleus. With no applied field, there is no energy difference between the spin states, but as the field


increases so does the separation of energies of the spin states and therefore so does the frequency required to cause the spin-flip, referred to as resonance.

Instrumentation
The basic arrangement of an NMR spectrometer is shown to the left. The sample is positioned in the magnetic field and excited via pulses in the radio frequency input circuit.

The realigned magnetic fields induce a radio signal in the output circuit which is used to generate the output signal. Fourier analysis of the complex output produces the actual spectrum. The pulse is repeated as many times as necessary to allow the signals to be identified from the background noise.

Chemical shift
An NMR spectrum is a plot of the radio frequency applied against absorption. A signal in the spectrum is referred to as a resonance. The frequency of a signal is known as its chemical shift.
The chemical shift in absolute terms is defined by the frequency of the resonance expressed with reference to a standard compound, which is defined to be at 0 ppm. The scale is made more manageable by expressing it in parts per million (ppm), and is independent of the spectrometer frequency.

It is often convenient to describe the relative positions of the resonances in an NMR spectrum. For example, a peak at a chemical shift, δ, of 10 ppm is said to be downfield or deshielded with respect to a peak at 5 ppm, or, if you prefer, the peak at 5 ppm is upfield or shielded with respect to the peak at 10 ppm.

Typically, for a field strength of 4.7 T, the resonance frequency of a proton will occur at around 200 MHz, and that of a carbon at around 50.4 MHz. The reference compound is the same for both: tetramethylsilane (Si(CH3)4, often just referred to as TMS). What would be the chemical shift of a peak that occurs 655.2 Hz downfield of TMS on a spectrum recorded using a 90 MHz spectrometer? At what frequency would the chemical shift of chloroform (CHCl3, δ = 7.28 ppm) occur relative to TMS on a spectrum recorded on a 300 MHz spectrometer? A 1 GHz (1000 MHz) NMR spectrometer is being developed; at what frequency and chemical shift would chloroform occur?
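These three questions can be worked directly from the ppm definition: the shift in ppm is the offset from TMS in hertz divided by the spectrometer frequency in megahertz, since 1 ppm of an X MHz spectrometer is exactly X Hz.

```python
# Working the chemical-shift questions from the ppm definition:
# shift (ppm) = offset from TMS (Hz) / spectrometer frequency (MHz).

def shift_ppm(offset_hz, spectrometer_mhz):
    return offset_hz / spectrometer_mhz

def offset_hz(shift_in_ppm, spectrometer_mhz):
    return shift_in_ppm * spectrometer_mhz

delta = shift_ppm(655.2, 90)       # 7.28 ppm downfield of TMS
nu_300 = offset_hz(7.28, 300)      # 2184 Hz downfield of TMS at 300 MHz
nu_1g = offset_hz(7.28, 1000)      # 7280 Hz at 1 GHz; the chemical shift
                                   # itself is field-independent: 7.28 ppm
```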

Prepared Abhishek Sharma


J-coupling
Some of the most useful information for structure determination in a one-dimensional NMR spectrum comes from J-coupling or scalar coupling (a special case of spin-spin coupling) between NMR-active nuclei. This coupling arises from the interaction of different spin states through the chemical bonds of a molecule and results in the splitting of NMR signals. These splitting patterns can be complex or simple and, likewise, can be straightforwardly interpretable or deceptive. This coupling provides detailed insight into the connectivity of atoms in a molecule. Coupling to n equivalent (spin-1/2) nuclei splits the signal into an n+1 multiplet with intensity ratios following Pascal's triangle, as described on the right. Coupling to additional spins will lead to further splittings of each component of the multiplet, e.g. coupling to two different spin-1/2 nuclei with significantly different coupling constants will lead to a doublet of doublets (abbreviation: dd). Note that coupling between nuclei that are chemically equivalent (that is, have the same chemical shift) has no effect on the NMR spectrum, and couplings between nuclei that are distant (usually more than 3 bonds apart for protons in flexible molecules) are usually too small to cause observable splittings. Long-range couplings over more than three bonds can often be observed in cyclic and aromatic compounds, leading to more complex splitting patterns. For example, in the proton spectrum for ethanol described above, the CH3 group is split into a triplet with an intensity ratio of 1:2:1 by the two neighboring CH2 protons. Similarly, the CH2 is split into a quartet with an intensity ratio of 1:3:3:1 by the three neighboring CH3 protons. In principle, the two CH2 protons would also be split again into a doublet to form a doublet of quartets by the hydroxyl proton, but intermolecular exchange of the acidic hydroxyl proton often results in a loss of coupling information.
Coupling to any spin-1/2 nucleus, such as phosphorus-31 or fluorine-19, works in this fashion (although the magnitudes of the coupling constants may be very different). But the splitting patterns differ from those described above for nuclei with spin greater than 1/2 because the spin quantum number has more than two possible values. For instance, coupling to deuterium (a spin-1 nucleus) splits the signal into a 1:1:1 triplet because spin 1 has three spin states. Similarly, a spin-3/2 nucleus splits a signal into a 1:1:1:1 quartet, and so on. Coupling combined with the chemical shift (and the integration for protons) tells us not only about the chemical environment of the nuclei, but also the number of neighboring NMR-active nuclei within the molecule. In more complex spectra with multiple peaks at similar chemical shifts, or in spectra of nuclei other than hydrogen, coupling is often the only way to distinguish different nuclei.
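
The splitting rules above are easy to reproduce numerically: each equivalent spin-I neighbour contributes 2I+1 equally populated states, so the multiplet pattern is an n-fold convolution of a flat block. A minimal sketch (the function name is illustrative):

```python
def multiplet(n, spin=0.5):
    """Intensity pattern from coupling to n equivalent spin-I nuclei.

    Each nucleus contributes 2I+1 equally likely spin states, so the
    pattern is the n-fold convolution of a flat [1]*(2I+1) list.
    """
    states = int(round(2 * spin + 1))
    pattern = [1]
    for _ in range(n):
        new = [0] * (len(pattern) + states - 1)
        for i, p in enumerate(pattern):
            for j in range(states):
                new[i + j] += p
        pattern = new
    return pattern

print(multiplet(2))          # [1, 2, 1]  CH3 split by two CH2 protons: triplet
print(multiplet(3))          # [1, 3, 3, 1]  CH2 split by three CH3 protons: quartet
print(multiplet(1, spin=1))  # [1, 1, 1]  coupling to one deuterium
```

For spin-1/2 neighbours this reproduces the rows of Pascal's triangle; for higher spins it gives the flat-topped patterns described in the text.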

Second-order (or strong) coupling


The above description assumes that the coupling constant is small in comparison with the difference in NMR frequencies between the inequivalent spins. If the shift separation decreases (or the coupling strength increases), the multiplet intensity patterns are first distorted, and then become more complex and less easily analyzed (especially if more than two spins are involved). Intensification of some peaks in a multiplet is achieved at the expense of the remainder, which sometimes almost disappear in the background noise, although the integrated area under the peaks remains constant. In most high-field NMR, however, the distortions are usually modest and the characteristic distortions (roofing) can in fact help to identify related peaks. Second-order effects decrease as the frequency difference between multiplets increases, so that high-field (i.e. high-frequency) NMR spectra display less distortion than lower frequency spectra. Early spectra at 60 MHz were more prone to distortion than spectra from later machines typically operating at frequencies of 200 MHz or above.

Magnetic inequivalence
More subtle effects can occur if chemically equivalent spins (i.e. nuclei related by symmetry and so having the same NMR frequency) have different coupling relationships to external spins. Spins that are chemically equivalent but are not indistinguishable (based on their coupling relationships) are termed magnetically inequivalent. For example, the 4 H sites of 1,2-dichlorobenzene divide into two chemically equivalent pairs by symmetry, but an individual member of one of the pairs has different couplings to the spins making up the other pair. Magnetic inequivalence can lead to highly complex spectra which can only be analyzed by computational modeling. Such effects are more common in NMR spectra of aromatic and other non-flexible systems, while conformational averaging about C-C bonds in flexible molecules tends to equalize the couplings between protons on adjacent carbons, reducing problems with magnetic inequivalence.

Correlation spectroscopy
Correlation spectroscopy is one of several types of two-dimensional nuclear magnetic resonance (NMR) spectroscopy. This type of NMR experiment is best known by its acronym, COSY. Other types of two-dimensional NMR include J-spectroscopy, exchange spectroscopy (EXSY), Nuclear Overhauser effect spectroscopy (NOESY), total correlation spectroscopy (TOCSY) and heteronuclear correlation experiments, such as HSQC, HMQC, and HMBC. Two-dimensional NMR spectra provide more information about a molecule than one-dimensional NMR spectra and are especially useful in determining the structure of a molecule, particularly for molecules that are too complicated to work with using one-dimensional NMR. The first two-dimensional experiment, COSY, was proposed by Jean Jeener, a professor at Université Libre de Bruxelles, in 1971. This experiment was later implemented by Walter P. Aue, Enrico Bartholdi and Richard R. Ernst, who published their work in 1976.

Solid-state nuclear magnetic resonance


A variety of physical circumstances prevents molecules from being studied in solution, while also ruling out study by other spectroscopic techniques at the atomic level. In solid-phase media, such as crystals, microcrystalline powders, gels, anisotropic solutions, etc., it is in particular the dipolar coupling and chemical shift anisotropy that become dominant in the behaviour of the nuclear spin systems. In conventional solution-state NMR spectroscopy, these additional interactions would lead to a significant broadening of spectral lines. A variety of techniques allows high-resolution conditions to be established that can, at least for 13C spectra, be comparable to solution-state NMR spectra. Two important concepts for high-resolution solid-state NMR spectroscopy are the limitation of possible molecular orientation by sample orientation, and the reduction of anisotropic nuclear magnetic interactions by sample spinning. Of the latter approach, fast spinning around the magic angle is a very prominent method when the system comprises spin-1/2 nuclei. A number of intermediate techniques, with samples of partial alignment or reduced mobility, are currently being used in NMR spectroscopy.

Applications in which solid-state NMR effects occur are often related to structure investigations on membrane proteins, protein fibrils or all kinds of polymers, and chemical analysis in inorganic chemistry, but also include "exotic" applications like plant leaves and fuel cells.

NMR spectroscopy applied to proteins


Much of the recent innovation within NMR spectroscopy has been within the field of protein NMR, which has become a very important technique in structural biology. One common goal of these investigations is to obtain high resolution 3-dimensional structures of the protein, similar to what can be achieved by X-ray crystallography. In contrast to X-ray crystallography, NMR is primarily limited to relatively small proteins, usually smaller than 35 kDa, though technical advances allow ever larger structures to be solved. NMR spectroscopy is often the only way to obtain high resolution information on partially or wholly intrinsically unstructured proteins. Proteins are orders of magnitude larger than the small organic molecules discussed earlier in this article, but the same NMR theory applies. Because of the increased number of each element present in the molecule, the basic 1D spectra become crowded with overlapping signals to an extent where analysis is impossible. Therefore, multidimensional (2, 3 or 4D) experiments have been devised to deal with this problem. To facilitate these experiments, it is desirable to isotopically label the protein with 13C and 15N because the predominant naturally occurring isotope 12C is not NMR-active, whereas the nuclear quadrupole moment of the predominant naturally occurring 14N isotope prevents high-resolution information from being obtained from this nitrogen isotope. The most important method used for structure determination of proteins utilizes NOE experiments to measure distances between pairs of atoms within the molecule. Subsequently, the obtained distances are used to generate a 3D structure of the molecule using a computer program.

Raman Spectroscopy
Raman spectroscopy (named after C. V. Raman) is a spectroscopic technique used to study vibrational, rotational, and other low-frequency modes in a system.[1] It relies on inelastic scattering, or Raman scattering, of monochromatic light, usually from a laser in the visible, near-infrared, or near-ultraviolet range. The laser light interacts with phonons or other excitations in the system, resulting in the energy of the laser photons being shifted up or down. The shift in energy gives information about the phonon modes in the system. Infrared spectroscopy yields similar, but complementary, information. Typically, a sample is illuminated with a laser beam. Light from the illuminated spot is collected with a lens and sent through a monochromator. Wavelengths close to the laser line, due to elastic Rayleigh scattering, are filtered out while the rest of the collected light is dispersed onto a detector. Spontaneous Raman scattering is typically very weak, and as a result the main difficulty of Raman spectroscopy is separating the weak inelastically scattered light from the intense Rayleigh-scattered laser light. Historically, Raman spectrometers used holographic gratings and multiple dispersion stages to achieve a high degree of laser rejection. In the past, photomultipliers were the detectors of choice for dispersive Raman setups, which resulted in long acquisition times. Modern instrumentation, however, almost universally employs notch or edge filters for laser rejection, spectrographs (either axial transmissive (AT) or Czerny-Turner (CT) monochromators, or Fourier-transform (FT) based), and CCD detectors. There are a number of advanced types of Raman spectroscopy, including surface-enhanced Raman, tip-enhanced Raman, polarised Raman, stimulated Raman (analogous to stimulated emission), transmission Raman, spatially-offset Raman, and hyper Raman.

Theory of Raman Spectroscopy


The Raman effect occurs when light impinges upon a molecule and interacts with the electron cloud and the bonds of that molecule. The incident photon excites the molecule into a virtual state. For the spontaneous Raman effect, the molecule is excited from the ground state to a virtual energy state and relaxes into a vibrationally excited state. Two series of lines exist around this central vibrational transition; they correspond to the accompanying rotational transitions. Anti-Stokes lines correspond to rotational relaxation whereas Stokes lines correspond to rotational excitation. A change in the molecular polarizability, or amount of deformation of the electron cloud, with respect to the vibrational coordinate is required for the molecule to exhibit the Raman effect. The magnitude of the polarizability change determines the Raman scattering intensity, whereas the Raman shift is equal to the energy of the vibrational transition involved.
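
Since Raman shifts are conventionally reported in wavenumbers, they can be computed directly from the incident and scattered wavelengths. A sketch with an illustrative (not measured) wavelength pair:

```python
def raman_shift_cm1(lambda_incident_nm, lambda_scattered_nm):
    """Raman shift in wavenumbers (cm^-1) from the incident and
    scattered wavelengths in nm; positive values are Stokes shifts.
    The factor 1e7 converts nm^-1 to cm^-1."""
    return (1.0 / lambda_incident_nm - 1.0 / lambda_scattered_nm) * 1e7

# Hypothetical example: 532 nm excitation with scattered light at
# 563.3 nm corresponds to a Stokes shift near 1044 cm^-1.
print(raman_shift_cm1(532.0, 563.3))
# Elastically (Rayleigh) scattered light has zero shift:
print(raman_shift_cm1(532.0, 532.0))  # 0.0
```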

Alternatively: when a photon of light interacts with a molecule, the photon can be absorbed or scattered. The process of light absorption requires first that the energy of the incident photon be equal to the energy difference between two states of the molecule (this is referred to as the resonance condition) and second, that the transition between the two states is accompanied by a change in the dipole moment of the
molecule. If these conditions are met, the molecule will be found in an electronic excited state (if an ultraviolet, visible, or perhaps near-infrared photon was absorbed) or in a vibrational excited state (if an infrared photon was absorbed). Those photons that interact with a molecule, but are not absorbed, will be scattered, and the incident photons need not be resonant with two states of the molecule for scattering to occur. Under these conditions, the photon merely polarizes the electronic cloud of the molecule and this interaction is said to form a virtual excited state, as shown in the energy level diagram below. The virtual excited state will be extremely short-lived and the energy of the incident photon will be quickly re-radiated. When re-radiation of the photon occurs without any change in the nuclear coordinates of the molecule, the photon is said to be elastically scattered (that is, the energies of the incident and scattered photons are identical). To a molecular spectroscopist, this process is commonly referred to as Rayleigh scattering. Because nuclear motion is much slower than electronic motion, Rayleigh scattering is a dominant process.

However, when nuclear, i.e. vibrational, motion occurs during the lifetime of the virtual state, a quantum of vibrational energy is transferred between the molecule and the incident photon and the remaining energy is scattered inelastically such that the energies of the incident and scattered photons are no longer equal. This inelastic scattering process is known as Raman scattering and because of its requirement that nuclear motion occur during the lifetime of the virtual state, it is typically more than a million times less probable than Rayleigh scattering. But, if one has a laser source available, i.e. a lot of photons, Raman scattering can be observed readily and used to study molecular structure. If the transfer of energy in the virtual state is from the photon to the molecule, the scattered photon will be lower in energy than the incident photon and the phenomenon is referred to as Stokes Raman scattering. Conversely, if the transfer of energy in the virtual state is from the molecule to the photon, the scattered photon will be higher in energy than the incident photon and the phenomenon is referred to as anti-Stokes Raman scattering. The observation of anti-Stokes Raman scattering requires that some of the molecules are present initially in higher-energy vibrational states. The population of vibrational states will be described by a Boltzmann distribution and higher-energy states will become more populated as the temperature is increased. Thus, the intensity of anti-Stokes scattering relative to Stokes scattering will increase with temperature and comparisons of these intensities can be used as a thermometer to determine the Boltzmann temperature of a sample.
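
The Boltzmann thermometer argument above can be sketched numerically. Neglecting the frequency-dependent prefactor of the scattering cross-section, the anti-Stokes/Stokes intensity ratio is governed by the population factor exp(-hcν̃/kT):

```python
import math

H = 6.62607015e-34   # Planck constant, J s
C = 2.99792458e10    # speed of light in cm/s, so wavenumbers work directly
KB = 1.380649e-23    # Boltzmann constant, J/K

def anti_stokes_ratio(shift_cm1, temp_k):
    """Boltzmann population factor exp(-h c nu / k T) that governs the
    anti-Stokes/Stokes intensity ratio (nu^4 prefactor neglected)."""
    return math.exp(-H * C * shift_cm1 / (KB * temp_k))

# For a 1000 cm^-1 mode, the anti-Stokes lines grow with temperature:
for t in (300, 600):
    print(t, anti_stokes_ratio(1000.0, t))
```

At room temperature the anti-Stokes line of a 1000 cm⁻¹ mode is under 1% of the Stokes intensity, and the ratio rises steeply with temperature, which is exactly what makes the comparison usable as a thermometer.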

Applications
Raman spectroscopy is commonly used in chemistry, since vibrational information is specific to the chemical bonds and symmetry of molecules. It therefore provides a fingerprint by which the molecule can be identified. For instance, the vibrational frequencies of SiO, Si2O2, and Si3O3 were identified and assigned on the basis of normal coordinate analyses using infrared and Raman spectra.[3] The fingerprint region of organic molecules is in the (wavenumber) range 500-2000 cm⁻¹. Another way that the technique is used is to study changes in chemical bonding, e.g., when a substrate is added to an enzyme. Raman gas analyzers have many practical applications. For instance, they are used in medicine for real-time monitoring of anaesthetic and respiratory gas mixtures during surgery. In solid state physics, spontaneous Raman spectroscopy is used to, among other things, characterize materials, measure temperature, and find the crystallographic orientation of a sample. As with single molecules, a given solid material has characteristic phonon modes that can help an experimenter identify it. In addition, Raman spectroscopy can be used to observe other low frequency excitations of the solid, such as plasmons, magnons, and superconducting gap excitations. The spontaneous Raman signal gives information on the population of a given phonon mode in the ratio between the Stokes (downshifted) intensity and anti-Stokes (upshifted) intensity. Raman scattering by an anisotropic crystal gives information on the crystal orientation. The polarization of the Raman scattered light with respect to the crystal and the polarization of the laser light can be used to find the orientation of the crystal, if the crystal structure (specifically, its point group) is known. Raman active fibers, such as aramid and carbon, have vibrational modes that show a shift in Raman frequency with applied stress. Polypropylene fibers also exhibit similar shifts.
The radial breathing mode is a commonly used technique to evaluate the diameter of carbon nanotubes. In nanotechnology, a Raman microscope can be used to analyze nanowires to better understand the composition of the structures. Spatially Offset Raman Spectroscopy (SORS), which is less sensitive to surface layers than conventional Raman, can be used to discover counterfeit drugs without opening their internal packaging, and for non-invasive monitoring of biological tissue.[4] Raman spectroscopy can be used to investigate the chemical composition of historical documents such as the Book of Kells and contribute to knowledge of the social and economic conditions at the time the documents were produced.[5] This is especially helpful because Raman spectroscopy offers a non-invasive way to determine the best course of preservation or conservation treatment for such materials. Raman spectroscopy is being investigated as a means to detect explosives for airport security.

Micro spectroscopy
Raman spectroscopy offers several advantages for microscopic analysis. Since it is a scattering technique, specimens do not need to be fixed or sectioned. Raman spectra can be collected from a very small volume (< 1 μm in diameter); these spectra allow the identification of species present in that volume. Water does not generally interfere with Raman spectral analysis. Thus, Raman spectroscopy is suitable for the microscopic examination of minerals, materials such as polymers and ceramics, cells and proteins. A Raman microscope begins with a standard optical microscope, and adds an excitation laser, a monochromator, and a sensitive detector (such as a charge-coupled device (CCD) or photomultiplier tube (PMT)). FT-Raman has also been used with microscopes. In direct imaging, the whole field of view is examined for scattering over a small range of wavenumbers (Raman shifts). For instance, a wavenumber characteristic for cholesterol could be used to record the distribution of cholesterol within a cell culture. The other approach is hyperspectral imaging or chemical imaging, in which thousands of Raman spectra are acquired from all over the field of view. The data can then be used to generate images showing the location and amount of different components. Taking the cell culture example, a hyperspectral image could show the distribution of cholesterol, as well as proteins, nucleic acids, and fatty acids. Sophisticated signal- and image-processing techniques can be used to ignore the presence of water, culture media, buffers, and other interferents. Raman microscopy, and in particular confocal microscopy, has very high spatial resolution. For example, the lateral and depth resolutions were 250 nm and 1.7 μm, respectively, using a confocal Raman microspectrometer with the 632.8 nm line from a He-Ne laser and a pinhole of 100 μm diameter.
Since the objective lenses of microscopes focus the laser beam to several micrometres in diameter, the resulting photon flux is much higher than achieved in conventional Raman setups. This has the added benefit of enhanced fluorescence quenching. However, the high photon flux can also cause sample degradation, and for this reason some setups require a thermally conducting substrate (which acts as a heat sink) in order to mitigate this process. By using Raman microspectroscopy, in vivo time- and space-resolved Raman spectra of microscopic regions of samples can be measured. As a result, the fluorescence of water, media, and buffers can be removed. Consequently, in vivo time- and space-resolved Raman spectroscopy is suitable to examine proteins, cells and organs. Raman microscopy for biological and medical specimens generally uses near-infrared (NIR) lasers (785 nm diodes and 1064 nm Nd:YAG are especially common). This reduces the risk of damaging the specimen by applying higher energy wavelengths. However, the intensity of NIR Raman is low (owing to the ν⁴ dependence of Raman scattering intensity), and most detectors required very long collection times. Recently, more sensitive detectors have become available, making the technique better suited to general use. Raman microscopy of inorganic specimens, such as rocks, ceramics and polymers, can use a broader range of excitation wavelengths.

Polarized analysis
The polarization of the Raman scattered light also contains useful information. This property can be measured using (plane) polarized laser excitation and a polarization analyzer. Spectra acquired with the analyzer set at both perpendicular and parallel to the excitation plane can be used to calculate the depolarization ratio. Study of the technique is pedagogically useful in teaching the connections between group theory, symmetry, Raman activity and peaks in the corresponding Raman spectra. The spectral information arising from this analysis gives insight into molecular orientation and vibrational symmetry. In essence, it allows the user to obtain valuable information relating to the molecular shape, for example in synthetic chemistry or polymorph analysis. It is often used to understand macromolecular orientation in crystal lattices, liquid crystals or polymer samples.

The Spectrometer and Detector


The spectrometer itself is a commercial "triple-grating" system. Physically, it is separated into two stages which are shown schematically here.

The first stage is called a monochromator, but is really used as a filter. Its structure is basically two diffraction gratings, separated by a slit, with input and output focussing mirrors. The incoming signal from the collecting lenses is focussed on the first grating, which separates the different wavelengths. This spread-out light is then passed through a slit. Because light of different wavelengths is now travelling in different directions, the slit width can be tuned to reject wavelengths outside of a user-defined range. This rejection is often used to eliminate the light at the laser frequency. The light which makes it through the slit is then refocussed on the second grating, whose purpose is only to compensate for any wavelength-dependence in the dispersion of the first grating. This grating is oriented such that its dispersion pattern is the mirror image of that from the first grating. Finally the light is refocussed and sent out to the second stage. The second stage focusses the filtered light on the final grating. The dispersed light is now analyzed as a function of position, which corresponds to wavelength. The signal as a function of position is read by the system detector. In the present case the detector is a multichannel charge-coupled device array (CCD) in which the different positions (wavelengths) are read simultaneously. The
wavelength/intensity information is then read to a computer and converted in software to frequency/intensity. This is the Raman spectrum which appears as the raw data.

Secondary ion mass spectrometry


Secondary ion mass spectrometry (SIMS) is a technique used in materials science and surface science to analyze the composition of solid surfaces and thin films by sputtering the surface of the specimen with a focused primary ion beam and collecting and analyzing ejected secondary ions. These secondary ions are measured with a mass spectrometer to determine the elemental, isotopic, or molecular composition of the surface. SIMS is the most sensitive surface analysis technique, being able to detect elements present in the parts per billion range.

Instrumentation
Typically, a secondary ion mass spectrometer consists of:

- a primary ion gun generating the primary ion beam
- a primary ion column, accelerating and focusing the beam onto the sample (and in some devices providing an opportunity to separate the primary ion species by a Wien filter or to pulse the beam)
- a high-vacuum sample chamber holding the sample and the secondary ion extraction lens
- a mass analyser separating the ions according to their mass-to-charge ratio
- an ion detection unit.

Vacuum
SIMS requires a high vacuum with pressures below 10⁻⁴ Pa (roughly 10⁻⁶ mbar or torr). This is needed to ensure that secondary ions do not collide with background gases on their way to the detector (i.e. the mean free path of gas molecules within the detector must be large compared to the size of the instrument), and it also prevents surface contamination by adsorption of background gas particles during measurement.
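
The mean-free-path requirement can be checked with the standard kinetic-theory formula λ = kT/(√2 π d² p); the molecular diameter used below is a rough assumed value for N2, not a quantity from the text:

```python
import math

KB = 1.380649e-23  # Boltzmann constant, J/K

def mean_free_path_m(pressure_pa, temp_k=300.0, diameter_m=3.7e-10):
    """Kinetic-theory mean free path lambda = kT / (sqrt(2) pi d^2 p).
    diameter_m defaults to a rough kinetic diameter for N2 (assumed)."""
    return KB * temp_k / (math.sqrt(2) * math.pi * diameter_m**2 * pressure_pa)

# At the 1e-4 Pa limit quoted above, the mean free path is tens of
# metres, comfortably larger than any SIMS instrument:
print(mean_free_path_m(1e-4))
# At atmospheric pressure it collapses to tens of nanometres:
print(mean_free_path_m(101325.0))
```

This is why the quoted pressure limit, and not a milder rough vacuum, is needed: the mean free path scales inversely with pressure.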

Primary ion sources


There are three basic types of ion guns. In one, ions of gaseous elements are usually generated with Duoplasmatrons or by electron ionization, for instance noble gases (Ar+, Xe+), oxygen (O-, O2+), or even ionized molecules such as SF5+ (generated from SF6) or C60+. This type of ion gun is easy to operate and generates roughly focused but high current ion beams. A second source type, the surface ionization source, generates Cs+ primary ions. Caesium atoms vaporize through a porous tungsten plug and are ionized during evaporation. Depending on the gun design, fine focus or high current can be obtained. A third source type, the liquid metal ion source (LMIG), operates with metals or metallic alloys, which are liquid at room temperature or slightly above. The liquid metal covers a tungsten tip and emits ions under influence of an intense electric field. While a gallium source is able to operate with elemental gallium, recently developed sources for gold, indium and bismuth use alloys which lower their melting points. The LMIG provides a tightly focused ion beam (<50 nm) with moderate intensity and is additionally able to generate short pulsed ion beams. It is therefore commonly used in static SIMS devices. The choice of the ion species and ion gun respectively depends on the required current (pulsed or continuous), the required beam dimensions of the primary ion beam and on the sample which is to be analyzed. Oxygen primary ions are often used to investigate electropositive elements due to an increase of the generation probability of positive secondary ions, while caesium primary ions often are used when electronegative elements are being investigated. For short pulsed ion beams in static SIMS, only LMIGs are deployable, but they are often combined with either an oxygen gun or a caesium gun for sample depletion.

Mass analyzers
Dependent on the SIMS type, there are three basic analyzers available: sector, quadrupole, and time-of-flight. A sector field mass spectrometer uses a combination of an electrostatic analyzer and a magnetic analyzer to separate the secondary ions by their mass-to-charge ratio. A quadrupole mass analyzer separates the masses by resonant electric fields, which allow only the selected masses to pass through. The time-of-flight mass analyzer separates the ions in a field-free drift path according to their velocity: since all ions are accelerated to the same kinetic energy, lighter ions travel faster and arrive earlier. It requires pulsed secondary ion generation using either a pulsed primary ion gun or a pulsed secondary ion extraction. It is the only analyzer type able to detect all generated secondary ions simultaneously, and is the standard analyzer for static SIMS instruments.
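
The time-of-flight principle follows from energy conservation: an ion of mass m and charge q accelerated through a potential U drifts at constant speed, so t = L·√(m/2qU). A sketch with illustrative instrument parameters (drift length and voltage are my assumptions, not from the text):

```python
import math

U_AMU = 1.66053906660e-27  # atomic mass unit, kg
E_CHG = 1.602176634e-19    # elementary charge, C

def tof_us(mass_amu, drift_m=1.5, accel_v=2000.0, charge=1):
    """Flight time in microseconds for an ion accelerated through
    accel_v volts over a field-free drift of drift_m metres:
    t = L * sqrt(m / (2 q U)). Defaults are illustrative values."""
    m = mass_amu * U_AMU
    q = charge * E_CHG
    return drift_m * math.sqrt(m / (2 * q * accel_v)) * 1e6

# Mass 28 vs mass 29 secondary ions arrive a fraction of a
# microsecond apart, which the detector electronics can resolve:
print(tof_us(28.0), tof_us(29.0))
```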

Detectors
A Faraday cup measures the ion current hitting a metal cup, and is sometimes used for high current secondary ion signals. With an electron multiplier, the impact of a single ion starts off an electron cascade, resulting in a pulse of 10⁸ electrons which is recorded directly. A microchannel plate detector is similar to an electron multiplier, with a lower amplification factor but with the advantage of laterally resolved detection. Usually it is combined with a fluorescent screen, and signals are recorded either with a CCD camera or with a fluorescence detector.

Detection limits and sample degradation


Detection limits for most trace elements are between 10¹² and 10¹⁶ atoms per cubic centimeter, depending on the type of instrumentation used, the primary ion beam used and the analytical area, and other factors. Samples as small as individual pollen grains and microfossils can yield results by this technique. The amount of surface cratering created by the process depends on the current (pulsed or continuous) and dimensions of the primary ion beam. While only charged secondary ions emitted from the material surface through the sputtering process are used to analyze the chemical composition of the material, these represent a small fraction of the particles emitted from the sample.

Static and dynamic modes


In the field of surface analysis, it is usual to distinguish static SIMS and dynamic SIMS. Static SIMS is the process involved in surface atomic monolayer analysis, usually with a pulsed ion beam and a time of flight mass spectrometer, while dynamic SIMS is the process involved in bulk analysis, closely related to the sputtering process, using a DC primary ion beam and a magnetic sector or quadrupole mass spectrometer.

Applications
The COSIMA instrument onboard Rosetta will be the first instrument to determine the composition of cometary dust with secondary ion mass spectrometry in 2014.

Auger electron spectroscopy


Auger electron spectroscopy (AES; Auger pronounced [oʒe] in French) is a common analytical technique used specifically in the study of surfaces and, more generally, in the area of materials science. Underlying the spectroscopic technique is the Auger effect, as it has come to be called, which is based on the analysis of energetic electrons emitted from an excited atom after a series of internal relaxation events. The Auger effect was discovered independently by both Lise Meitner and Pierre Auger in the 1920s. Though the discovery was made by Meitner and initially reported in the journal Zeitschrift für Physik in 1922, Auger is credited with the discovery in most of the scientific community. Until the early 1950s Auger transitions were considered nuisance effects by spectroscopists, not containing much relevant material information, but studied so as to explain anomalies in x-ray spectroscopy data. Since 1953, however, AES has become a practical and straightforward characterization technique for probing chemical and compositional surface environments and has found applications in metallurgy, gas-phase chemistry, and throughout the microelectronics industry.

Electron transitions and the Auger effect


The Auger effect is an electronic process at the heart of AES resulting from the inter- and intrastate transitions of electrons in an excited atom. When an atom is probed by an external mechanism, such as a photon or a beam of electrons with energies in the range of 2 keV to 50 keV, a core state electron can be removed leaving behind a hole. As this is an unstable state, the core hole can be filled by an outer shell electron, whereby the electron moving to the lower energy level loses an amount of energy equal to the difference in orbital energies. The transition energy can be coupled to a second outer shell electron, which will be emitted from the atom if the transferred energy is greater than the orbital binding energy. An emitted electron will have a kinetic energy of: Ekin = ECore State − EB − EC′, where ECore State, EB, and EC′ are respectively the core level, first outer shell, and second outer shell electron energies, measured from the vacuum level. The prime (′) denotes a slight modification to the binding energy of the outer shell electrons due to the ionized nature of the atom; often, however, this energy modification is ignored in order to ease calculations. Since orbital energies are unique to an atom of a specific element, analysis of the ejected electrons can yield information about the chemical composition of a surface. Figure 1 illustrates two schematic views of the Auger process.
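
The kinetic-energy expression can be evaluated directly. The binding energies used below are rough, silicon-like illustrative values chosen only to show the arithmetic; they are not tabulated reference data:

```python
def auger_kinetic_ev(e_core, e_b, e_c_prime):
    """E_kin = E_core - E_B - E_C' (all energies in eV, measured from
    the vacuum level, per the expression in the text)."""
    return e_core - e_b - e_c_prime

# KL1L2,3 transition with assumed levels E_K = 1839 eV, E_L1 = 150 eV,
# E_L2,3 = 100 eV (approximate, silicon-like values):
print(auger_kinetic_ev(1839.0, 150.0, 100.0))  # 1589.0
```

Because these level energies are element-specific, measuring the kinetic energy of the emitted electron identifies the emitting element, which is the basis of AES as an analytical technique.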


Figure 1. Two views of the Auger process. (a) illustrates sequentially the steps involved in Auger deexcitation. An incident electron creates a core hole in the 1s level. An electron from the 2s level fills in the 1s hole and the transition energy is imparted to a 2p electron which is emitted. The final atomic state thus has two holes, one in the 2s orbital and the other in the 2p orbital. (b) illustrates the same process using spectroscopic notation, KL1L2,3.

The types of state-to-state transitions available to electrons during an Auger event depend on several factors, ranging from the initial excitation energy to the relative interaction rates, yet are often dominated by a few characteristic transitions. Due to the interaction between an electron's spin and orbital angular momentum (spin-orbit coupling) and the concomitant energy level splitting for the various shells in an atom, there are a variety of transition pathways for filling a core hole. Energy levels are labeled using a number of different schemes, such as the j-j coupling method for heavy elements, the Russell-Saunders L-S method for lighter elements (Z < 20), and a combination of both for intermediate elements. The j-j coupling method, which is historically linked to X-ray notation, is almost always used to denote Auger transitions. Thus for a KL1L2,3 transition, K represents the core level hole, L1 the relaxing electron's initial state, and L2,3 the emitted electron's initial energy state. Figure 1(b) illustrates this transition with the corresponding spectroscopic notation. The energy level of the core hole will often determine which transition types are favored. For single energy levels, i.e. K, transitions can occur from the L levels, giving rise to strong KLL-type peaks in an Auger spectrum. Higher-level transitions can also occur, but are less probable. For multi-level shells, transitions are available from higher-energy orbitals (different n, l quantum numbers) or from energy levels within the same shell (same n, different l number). The result is transitions of the LMM and KLL type, along with faster Coster-Kronig transitions such as LLM. It should be noted that while Coster-Kronig transitions are faster, they are also less energetic and thus harder to locate in an Auger spectrum. As the atomic number Z increases, so too does the number of potential Auger transitions.
Fortunately, the strongest electron-electron interactions are between levels which are close together, giving rise to characteristic peaks in an Auger spectrum. KLL and LMM peaks are some of the most commonly identified transitions during surface analysis. Finally, valence band electrons can also fill core holes or be emitted during KVV-type transitions. Several models, both phenomenological and analytical, have been developed to describe the energetics of Auger transitions. One of the most tractable descriptions, put forth by Jenkins and Chung, estimates the energy of the Auger transition ABC as

E_ABC = E_A(Z) − 0.5[E_B(Z) + E_B(Z + 1)] − 0.5[E_C(Z) + E_C(Z + 1)]

where E_i(Z) are the binding energies of the ith level in the element of atomic number Z and E_i(Z + 1) are the energies of the same levels in the next element up in the periodic table. While useful in practice, a more rigorous model accounting for effects such as screening and relaxation probabilities between energy levels gives the Auger energy as

E_ABC = E_A − E_B − E_C − F(BC:x) + R_x^in + R_x^ex

where F(BC:x) is the energy of interaction between the B and C level holes in a final atomic state x and the R terms represent intra- and extra-atomic transition energies accounting for electronic screening. Auger electron energies can be calculated from measured values of the various E_i and compared to peaks in the secondary electron spectrum in order to identify chemical species. This technique has been used to compile several reference databases used for analysis in current AES setups.
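The Jenkins-Chung estimate is straightforward to evaluate; a minimal sketch, with made-up binding energies used purely to show the arithmetic:

```python
# Jenkins-Chung estimate for the ABC Auger transition energy:
#   E_ABC = E_A(Z) - 0.5*(E_B(Z) + E_B(Z+1)) - 0.5*(E_C(Z) + E_C(Z+1))
def jenkins_chung_energy(e_a_z, e_b_z, e_b_z1, e_c_z, e_c_z1):
    return e_a_z - 0.5 * (e_b_z + e_b_z1) - 0.5 * (e_c_z + e_c_z1)

# Placeholder binding energies (eV) for element Z and its neighbor Z+1.
print(jenkins_chung_energy(1000.0, 100.0, 110.0, 50.0, 60.0))  # 840.0
```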

Instrumentation
Surface sensitivity in AES arises from the fact that emitted electrons usually have energies ranging from 50 eV to 3 keV and at these values, electrons have a short mean free path in a solid. The escape depth of electrons is therefore localized to within a few nanometers of the target surface, giving AES an extreme sensitivity to surface species. Due to the low energy of Auger electrons, most AES setups are run under ultra-high vacuum (UHV) conditions. Such measures prevent electron scattering off of residual gas atoms as well as the formation of a thin "gas (adsorbate) layer" on the surface of the specimen which degrades analytical performance. A typical AES setup is shown schematically in figure 2. In this configuration, focused electrons are incident on a sample and emitted electrons are deflected into a cylindrical mirror analyzer (CMA). In the detection unit, Auger electrons are multiplied and the signal sent to data processing electronics. Collected Auger electrons are plotted as a function of energy against the broad secondary electron background spectrum.

Figure 2. AES experimental setup using a cylindrical mirror analyzer (CMA). An electron beam is focused onto a specimen and emitted electrons are deflected around the electron gun and pass through an aperture towards the back of the CMA. These electrons are then directed into an electron multiplier for analysis. Varying voltage at the sweep supply allows derivative mode plotting of the Auger data. An optional ion gun can be integrated for depth profiling experiments.


Since the intensity of the Auger peaks may be small compared to the noise level of the background, AES is often run in a derivative mode, which serves to highlight the peaks by modulating the electron collection current via a small applied AC voltage. Since this modulation is V = k sin(ωt), the collection current becomes I(V + k sin(ωt)). Taylor expanding gives:

I = I(V) + k sin(ωt) I′(V) + (k²/2) sin²(ωt) I″(V) + ...

Using the setup in figure 2, detecting the signal at frequency ω will give a value for I′, i.e. dN(E)/dE. Plotting in derivative mode also emphasizes Auger fine structure, which appears as small secondary peaks surrounding the primary Auger peak. These secondary peaks, not to be confused with the high-energy satellites discussed later, arise from the presence of the same element in multiple chemical states on a surface (i.e. adsorbate layers) or from relaxation transitions involving valence band electrons of the substrate. Figure 3 illustrates a derivative spectrum from a copper nitride film, clearly showing the Auger peaks. It should be noted that the peak in derivative mode is not the true Auger peak, but rather the point of maximum slope of N(E); this concern is usually ignored.
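The lock-in idea can be checked numerically: modulating the voltage by k sin(ωt) and extracting the Fourier component of the collection current at the modulation frequency recovers I′(V) to first order in the Taylor expansion. A small sketch with a synthetic Gaussian peak (all shapes and numbers are invented for illustration):

```python
import numpy as np

def spectrum(v):
    # Synthetic N(E): a Gaussian "Auger peak" at 100 eV on a flat background.
    return 1.0 + np.exp(-((v - 100.0) / 5.0) ** 2)

k = 0.5                                                   # modulation amplitude (eV)
t = np.linspace(0.0, 2.0 * np.pi, 2000, endpoint=False)   # one period of wt
energies = np.linspace(80.0, 120.0, 81)

# Demodulation: to first order, the mean of I(V + k sin(wt)) * sin(wt)
# over one period equals k * I'(V) / 2, so 2/k times that mean is I'(V).
deriv = [2.0 / k * np.mean(spectrum(v + k * np.sin(t)) * np.sin(t))
         for v in energies]
# deriv now approximates dN(E)/dE, the derivative-mode spectrum
```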

Uses and limitations


There are a number of electron microscopes that have been specifically designed for use in Auger spectroscopy; these are termed scanning Auger microscopes (SAM) and can produce high-resolution, spatially resolved chemical images. SAM images are obtained by stepping a focused electron beam across a sample surface and measuring the intensity of the Auger peak above the background of scattered electrons. The intensity map is correlated to a gray scale on a monitor, with whiter areas corresponding to higher element concentration. In addition, sputtering is sometimes used with Auger spectroscopy to perform depth profiling experiments. Sputtering removes thin outer layers of a surface so that AES can be used to determine the underlying composition. Depth profiles are shown as either Auger peak height vs. sputter time or atomic concentration vs. depth. Precise depth milling through sputtering has made profiling an invaluable technique for the chemical analysis of nanostructured materials and thin films. AES is also used extensively as an evaluation tool on and off fab lines in the microelectronics industry, while the versatility and sensitivity of the Auger process make it a standard analytical tool in research labs. Despite the advantages of high spatial resolution and precise chemical sensitivity attributed to AES, several factors can limit the applicability of this technique, especially when evaluating solid specimens. One of the most common limitations encountered with Auger spectroscopy is charging of non-conducting samples. Charging results when the number of secondary electrons leaving the sample is greater or smaller than the number of incident electrons, giving rise to a net charge at the surface. Both positive and negative surface charges severely alter the yield of electrons emitted from the sample and hence distort the measured Auger peaks.
To complicate matters, neutralization methods employed in other surface analysis techniques, such as secondary ion mass spectrometry (SIMS), are not applicable to AES, as these methods usually involve surface bombardment with either electrons or ions (i.e. a flood gun). Several processes have been developed to combat the issue of charging, though none of them is ideal, and they still make quantification of AES data difficult. One such technique involves depositing conductive pads near the analysis area to minimize regional charging. However, this type of approach limits SAM applications as well as the amount of sample material available for probing. A related technique involves thinning or "dimpling" a non-conductive layer with Ar+ ions and then mounting the sample on a conductive backing prior to AES. This method has been debated, with claims that the thinning process leaves elemental artifacts on the surface and/or creates damaged layers that distort bonding and promote chemical mixing in the sample; as a result, the compositional AES data are considered suspect. The most common setup to minimize charging effects includes the use of a glancing-angle (~10°) electron beam and a carefully tuned bombarding energy (between 1.5 keV and 3 keV). Control of both the angle and the energy can subtly alter the number of emitted electrons vis-à-vis the incident electrons and thereby reduce or altogether eliminate sample charging. In addition to charging effects, AES data can be obscured by the presence of characteristic energy losses in a sample and higher-order atomic ionization events. Electrons ejected from a solid will generally undergo multiple scattering events and lose energy in the form of collective electron density oscillations called plasmons. If plasmon losses have energies near that of an Auger peak, the less intense Auger process may become dwarfed by the plasmon peak.
As Auger spectra are normally weak and spread over many eV of energy, they are difficult to extract from the background, and in the presence of plasmon losses deconvolution of the two peaks becomes extremely difficult. For such spectra, additional analysis through chemically sensitive surface techniques such as x-ray photoelectron spectroscopy (XPS) is often required to disentangle the peaks. Sometimes an Auger spectrum can also exhibit "satellite" peaks at well-defined offset energies from the parent peak. The origin of the satellites is usually attributed to multiple ionization events in an atom or to ionization cascades in which a series of electrons are emitted as relaxation occurs for multiple core level holes. The presence of satellites can distort the true Auger peak and/or obscure small peak-shift information due to chemical bonding at the surface. Several studies have been undertaken to further quantify satellite peaks. Despite these sometimes substantial drawbacks, Auger electron spectroscopy is a widely used surface analysis technique that has been successfully applied to many diverse fields ranging from gas phase chemistry to nanostructure characterization. A very new class of high-resolving electrostatic energy analyzers, the face-field analyzers (FFA), has recently been developed; these can be used for remote electron spectroscopy of distant surfaces, or of surfaces with large roughness or even deep dimples. These instruments are designed specifically for use in combined scanning electron microscopes (SEM). FFAs in principle have no perceptible end-fields, which usually distort focusing in most known analyzers, including the well-known CMA. Sensitivity, quantitative detail, and ease of use have brought AES from an obscure nuisance effect to a functional and practical characterization technique in just over fifty years.
With applications both in the research laboratory and industrial settings, AES will continue to be a cornerstone of surface-sensitive electron-based spectroscopies.

Deep-level transient spectroscopy


Deep-level transient spectroscopy (DLTS) is an experimental tool for studying electrically active defects (known as charge carrier traps) in semiconductors. DLTS makes it possible to establish fundamental defect parameters and measure defect concentrations in the material. Some of these parameters are considered defect fingerprints and are used for defect identification and analysis. DLTS investigates defects present in the space charge (depletion) region of a simple electronic device, most commonly a Schottky diode or a p-n junction. In the measurement process the steady-state diode reverse polarization voltage is disturbed by a voltage pulse. This voltage pulse reduces the electric field in the space charge region and allows free carriers from the semiconductor bulk to penetrate this region and recharge the defects, driving them into a non-equilibrium charge state. After the pulse, when the voltage returns to its steady-state value, the defects begin to emit the trapped carriers by thermal emission. The technique observes the capacitance of the device space charge region, where the recovery of the defect charge state causes a capacitance transient. The voltage pulse followed by the defect charge state recovery is cycled, allowing the application of different signal-processing methods to the analysis of the defect recharging process. The DLTS technique has a higher sensitivity than almost any other semiconductor diagnostic technique; for example, in silicon it can detect impurities and defects at a concentration of one part in 10^12 of the material host atoms. This feature, together with the technical simplicity of its design, has made it very popular in research labs and semiconductor production facilities. The DLTS technique was pioneered by D. V. Lang (David Vern Lang of Bell Laboratories) in 1974; a US patent was awarded to Lang in 1975. DLTS is used to study electrically active impurities and defects, often due to contamination, in semiconductors. It is a destructive technique, as it requires forming either a Schottky diode or a p-n junction on a small sample, usually cut from a complete wafer. Majority carrier traps are observed by the application of a reverse bias pulse, while minority carrier traps can be observed by the application of a forward bias pulse. The technique works by observing the capacitance transient associated with the change in depletion region width as the diode returns to equilibrium from an initial non-equilibrium state.
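The capacitance transient described above is, for a single trap level, a simple exponential recovery; a minimal synthetic sketch (capacitance values and the emission rate are arbitrary examples, not measured data):

```python
import numpy as np

# After the filling pulse the depletion capacitance recovers as
#   C(t) = C0 - dC * exp(-e_n * t),
# where e_n is the thermal emission rate of the trap.
C0, dC, e_n = 100.0, 2.0, 50.0          # pF, pF, 1/s (illustrative values)
t = np.linspace(0.0, 0.1, 1000)         # seconds
C = C0 - dC * np.exp(-e_n * t)

# The emission rate is the negative slope of ln(C0 - C(t)).
slope = np.polyfit(t, np.log(C0 - C), 1)[0]
print(-slope)  # ~50.0
```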

Conventional DLTS
In conventional DLTS the capacitance transients are investigated using a lock-in amplifier or a double box-car averaging technique while the sample temperature is slowly varied (usually over a range from liquid nitrogen temperature to room temperature, 300 K, or above). The equipment reference frequency is the voltage pulse repetition rate. In the conventional DLTS method this frequency, multiplied by some constant (depending on the hardware used), is called the rate window. When, during the sample temperature variation, the emission rate of carriers from some defect equals the rate window, a peak appears in the spectrum. By setting up different rate windows in subsequent DLTS measurements one obtains the different temperatures at which a particular peak appears. From the resulting set of emission-rate and temperature pairs one can make an Arrhenius plot, which allows the deduction of the defect activation energy for the thermal emission process. Usually this energy (sometimes called the defect energy level), together with the plot intercept value, constitutes the defect parameters used for identification and analysis.
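The Arrhenius analysis can be sketched in a few lines. The example below generates synthetic (emission rate, temperature) pairs for a hypothetical trap with E_a = 0.40 eV and recovers the activation energy from the slope; all numerical values are placeholders:

```python
import numpy as np

k_B = 8.617e-5                             # Boltzmann constant (eV/K)
E_a_true, A = 0.40, 1.0e7                  # assumed activation energy, prefactor

# (emission rate, temperature) pairs as obtained from different rate windows,
# generated here from the standard form e_n = A * T^2 * exp(-E_a / (k_B * T)).
T = np.array([180.0, 200.0, 220.0, 240.0, 260.0])   # K
e_n = A * T**2 * np.exp(-E_a_true / (k_B * T))      # 1/s

# Arrhenius plot: ln(e_n / T^2) vs 1/(k_B T) is linear with slope -E_a.
slope, intercept = np.polyfit(1.0 / (k_B * T), np.log(e_n / T**2), 1)
print(-slope)  # ~0.40 eV
```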

Figure: typical conventional DLTS spectra.

MCTS and minority-carrier DLTS


For Schottky diodes, majority carrier traps are observed by applying a reverse bias pulse, while minority carrier traps can be observed when the reverse bias voltage pulses are replaced with light pulses whose photon energy lies above the semiconductor bandgap. This method is called minority carrier transient spectroscopy (MCTS). Minority carrier traps can also be observed in p-n junctions by applying forward bias pulses, which inject minority carriers into the space charge region. In DLTS plots the minority carrier spectra are usually depicted with an amplitude of opposite sign with respect to the majority carrier trap spectra.

Laplace DLTS
There is a high-resolution extension of DLTS known as Laplace transform DLTS (LDLTS). Laplace DLTS is an isothermal technique in which the capacitance transients are digitized and averaged at a fixed temperature. The defect emission rates are then obtained using numerical methods equivalent to an inverse Laplace transformation, and the obtained emission rates are presented as a spectral plot. The main advantage of Laplace DLTS over conventional DLTS is a substantial increase in energy resolution, understood here as the ability to distinguish very similar signals.
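A crude numerical stand-in for the inverse Laplace step: decompose an isothermal transient containing two close exponentials onto a grid of trial emission rates by linear least squares. Real LDLTS software uses regularized inversion; the rates and amplitudes below are invented for illustration:

```python
import numpy as np

t = np.linspace(1e-4, 0.1, 1000)                 # seconds
# Synthetic transient: two traps with emission rates 50 and 100 1/s.
signal = 1.0 * np.exp(-50.0 * t) + 1.0 * np.exp(-100.0 * t)

rates = np.array([25.0, 50.0, 100.0, 200.0])     # trial rate grid (1/s)
basis = np.exp(-np.outer(t, rates))              # columns: exp(-r * t)
amps, *_ = np.linalg.lstsq(basis, signal, rcond=None)
# amps peaks at the 50 and 100 1/s grid points, resolving the two traps
```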


Laplace DLTS in combination with uniaxial stress results in a splitting of the defect energy level. Assuming a random distribution of defects over non-equivalent orientations, the number of split lines and their intensity ratios reflect the symmetry class of the given defect. Application of LDLTS to MOS capacitors requires device polarization voltages in a range where the Fermi level, extrapolated from the semiconductor to the semiconductor-oxide interface, intersects this interface within the semiconductor bandgap. The electronic interface states present at this interface can trap carriers, similarly to the defects described above. If their occupancy with electrons or holes is disturbed by a small voltage pulse, the device capacitance recovers to its initial value after the pulse as the interface states emit the trapped carriers. This recovery process can be analyzed with the LDLTS method for different device polarization voltages. Such a procedure allows one to obtain the energy distribution of the electronic interface states at the semiconductor-oxide (or dielectric) interface.

Constant-Capacitance DLTS
In general, the analysis of capacitance transients in DLTS measurements assumes that the concentration of investigated traps is much smaller than the material doping concentration. When this assumption is not fulfilled, the constant-capacitance DLTS (CCDLTS) method is used for a more accurate determination of the trap concentration. When the defects recharge and their concentration is high, the width of the device space charge region varies, making the analysis of the capacitance transient inaccurate. Additional electronic circuitry that maintains the total device capacitance constant by varying the device bias voltage keeps the depletion region width constant; as a result, the varying device voltage reflects the defect recharging process. An analysis of the CCDLTS system using feedback theory was provided by Lau and Lam in 1982.
