Introduction
Nanotechnology is a topic that spans a range of science and engineering disciplines. It concerns
structures in the range of 1-100 nanometres; a nanometre is about a billionth of a metre. At this
scale, materials exhibit unusual physical and chemical characteristics, because their surface area
increases relative to their volume as particles get smaller and because they become subject to
quantum effects. This means they can behave in different ways and do not follow the same laws
of classical physics that larger objects do. The idea of nanotechnology is usually traced to the
physicist Richard Feynman, who in 1959 imagined that the entire Encyclopaedia Britannica
could be written on the head of a pin. Carbon nanotubes, tiny tubes of carbon atoms which are
very strong yet very light, were first reported in the early 1990s. It was improvements in
microscopy in the 1980s that allowed researchers to see single atoms and then manipulate them
on a surface.
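The surface-area-to-volume effect mentioned above is easy to make concrete: for a sphere the ratio simplifies to 3/r, so shrinking a particle from 100 nm to 1 nm increases its relative surface area a hundredfold. A minimal sketch (the helper name `surface_to_volume` is ours, for illustration only):

```python
# Surface-area-to-volume ratio of a sphere: SA/V = 3/r, so halving the
# radius doubles the ratio -- the effect driving nanoscale reactivity.
import math

def surface_to_volume(radius_nm: float) -> float:
    """Return the surface-area-to-volume ratio (1/nm) of a sphere."""
    area = 4 * math.pi * radius_nm ** 2
    volume = (4 / 3) * math.pi * radius_nm ** 3
    return area / volume  # algebraically equal to 3 / radius_nm

for r in (100, 10, 1):  # radii in nanometres
    print(f"r = {r:>3} nm  ->  SA/V = {surface_to_volume(r):.2f} nm^-1")
```

The 1 nm particle's ratio is 100 times that of the 100 nm particle, which is why so much more of its material sits at a reactive surface.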
Nanotechnology is:
- comprised of nanomaterials with at least one dimension that measures between
approximately 1 and 100 nm
- comprised of nanomaterials that exhibit unique properties as a result of their nanoscale
size
- the manipulation of these nanomaterials to develop new technologies and applications, or
to improve existing ones
- used in a wide range of applications, from electronics to medicine to energy and more.
Nanotechnology is the creation of useful or functional materials, devices and systems
through control of matter on the nanometer length scale, and the exploitation of novel
phenomena and properties that arise because of the nanometer length scale.
In medicine, nanotechnology promises earlier diagnosis, more individualized treatment
options, and better therapeutic success rates.
Nanotechnology is finding application in traditional energy sources and is greatly
enhancing alternative energy approaches to help meet the world’s increasing energy
demands.
Nanotechnology is improving the efficiency of fuel production from raw petroleum
materials through better catalysis. It is also enabling reduced fuel consumption in
vehicles and power plants through higher-efficiency combustion and decreased friction.
Nanotechnology could help meet the need for affordable, clean drinking water through
rapid, low-cost detection and treatment of impurities in water.
Engineers have developed a thin-film membrane with nanopores for energy-efficient
desalination. This molybdenum disulphide (MoS2) membrane filtered two to five times
more water than conventional filters.
Emergence of Nanotechnology
The emergence of nanotechnology has led to the design, synthesis, and manipulation of particles,
creating new opportunities to exploit smaller and more regular structures in various applications.
In recent years, nano-sized metal oxide particles have received much attention in many fields
owing to their unique optical, electrical, magnetic, catalytic and biomedical properties, as well as
their high surface-to-volume ratio and their specific affinity for the adsorption of inorganic
pollutants and the degradation of organic pollutants in aqueous systems.
Bottom-up approach: These approaches build nanomaterials from the smallest components
(down to the atomic level), with a subsequent self-assembly process leading to the formation of
nanostructures.
During self-assembly, the physical forces operating at the nanoscale are used to combine basic
units into larger stable structures.
Typical examples are quantum dot formation during epitaxial growth and the formation of
nanoparticles from colloidal dispersion.
Nanomaterials are synthesized by assembling atoms or molecules together.
Instead of taking material away to make structures, the bottom-up approach selectively adds
atoms to create structures.
Eg) Chemical vapour deposition, sol-gel synthesis
Top-down approach: These approaches start from larger (macroscopic) structures, which are
externally controlled during the processing of nanostructures.
Typical examples are etching through a mask, ball milling, and the application of severe plastic
deformation.
Nanomaterials are synthesized by breaking bulk solids down to nanoscale sizes.
Top-down processing has been, and will remain, the dominant process in semiconductor
manufacturing.
Eg) Ball milling, lithography, plasma etching
Challenges in Nanotechnology
The greatest challenges in nanotechnology concern understanding materials and their properties
at the nanoscale.
At present, nanotechnology is widely applied in drug development, yet some nanoparticles may
be toxic.
Because nanoparticles are so small, they can cross the blood-brain barrier, a membrane that
protects the brain from poisonous chemicals in the bloodstream.
Further needs include:
- instruments to assess environmental exposure to nanomaterials
- methods to evaluate the toxicity of nanomaterials
- models for predicting the potential impact of new, engineered nanomaterials
- ways of evaluating the impact of nanomaterials across their lifecycle
- strategic programs to enable risk-focused research.
Ball Milling
Refractory balls, steel balls or plastic balls can be used, depending on the material to be
synthesized.
When the balls rotate at a particular rpm, the necessary energy is transferred to the powder,
which reduces the coarse-grained powder to ultrafine nanoparticles.
The energy transferred to the powder from the balls depends on many factors, such as:
- rotational speed of the balls
- size of the balls
- number of balls
- milling time
- ratio of ball mass to powder mass
- milling medium/atmosphere
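As a rough illustration of why rotational speed matters, a widely used rule of thumb gives the critical speed above which the balls centrifuge against the mill wall instead of tumbling and impacting the powder: N_c ≈ 42.3/√(D − d) rpm, with the mill diameter D and ball diameter d in metres; mills are typically operated at roughly 65-80% of this speed. A minimal sketch with assumed example dimensions:

```python
# Critical rotation speed of a ball mill: above this speed the balls
# centrifuge against the wall and no impact milling occurs.
# Rule of thumb: N_c = 42.3 / sqrt(D - d) rpm (D, d in metres).
import math

def critical_speed_rpm(mill_diameter_m: float, ball_diameter_m: float) -> float:
    """Critical speed (rpm) from the standard centrifuging condition."""
    return 42.3 / math.sqrt(mill_diameter_m - ball_diameter_m)

d_mill, d_ball = 0.5, 0.025          # assumed example dimensions (metres)
n_c = critical_speed_rpm(d_mill, d_ball)
print(f"critical speed : {n_c:.1f} rpm")
print(f"operating speed: {0.75 * n_c:.1f} rpm (assuming 75% of critical)")
```

The formula follows from equating gravity with the centripetal force on a ball at the top of its circular path; the 42.3 simply packages g and the rad/s-to-rpm conversion.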
Cryogenic liquids can be used to increase the brittleness of the product; one must also take the
necessary steps to prevent oxidation during the milling process.
Advantages:
It produces very fine powder (particle size less than or equal to 10 microns).
It is suitable for milling toxic materials since it can be used in a completely enclosed form.
It has a wide range of applications.
It can be used for continuous operation.
It is used in milling highly abrasive materials.
Disadvantages:
Contamination of product may occur as a result of wear and tear which occurs principally from
the balls and partially from the casing.
High machine noise level, especially if the hollow cylinder is made of metal; much less if
rubber is used.
Relatively long milling time.
It is difficult to clean the machine after use.
Sol-Gel Process
A sol is a colloidal solution of solid particles, a few hundred nm in diameter, suspended in
a liquid phase. A gel can be considered a solid macromolecule immersed in a solvent.
The sol-gel process consists of the chemical transformation of a liquid (the sol) into a gel
state, with subsequent post-treatment and transition into a solid oxide material.
The particles in the sol are polymerized through the removal of the stabilizing components,
producing a gel with a continuous network structure.
The final heat treatment pyrolyzes the remaining organic or inorganic components and forms
an amorphous or crystalline coating.
Advantages
Can produce thin bond-coating to provide excellent adhesion between the metallic
substrate and the top coat.
Can produce thick coating to provide corrosion protection performance.
Can easily shape materials into complex geometries in a gel state.
Can have low temperature sintering capability, usually 200-600°C.
Can provide a simple, economic and effective method to produce high quality coatings.
Schematic representation of the gas-phase process for the synthesis of single-phase
nanomaterials from a heated crucible
Production quantities of metals are below 1 g/day, while quantities of oxides can be as high as
20 g/day for simple oxides. The method is extremely slow.
Schematic representation of a typical set-up for gas-condensation synthesis of nanomaterials
Chemical Vapour Condensation (CVC)
Applications
The SEM is routinely used to generate high-resolution images of the shapes of objects (SEI) and
to show spatial variations in chemical composition:
1) acquiring elemental maps or spot chemical analyses using EDS,
2) discriminating phases based on mean atomic number (commonly related to relative density)
using BSE, and
3) producing compositional maps based on differences in trace-element "activators" (typically
transition metals and rare earth elements) using CL.
4) The SEM is also widely used to identify phases based on qualitative chemical analysis and/or
crystalline structure. Precise measurement of very small features and objects down to 50 nm in
size is also accomplished using the SEM.
5) Backscattered electron (BSE) images can be used for rapid discrimination of phases in
multiphase samples.
6) SEMs equipped with diffracted backscattered electron detectors (EBSD) can be used to
examine microfabric and crystallographic orientation in many materials.
Advantages
1. There is arguably no other instrument with the breadth of applications in the study of solid
materials that compares with the SEM.
2. Although this discussion is most concerned with geological applications, these are a very
small subset of the scientific and industrial applications that exist for this instrumentation.
3. Most SEMs are comparatively easy to operate, with user-friendly "intuitive" interfaces.
4. For many applications, data acquisition is rapid (less than 5 minutes/image for SEI, BSE, and
spot EDS analyses). Modern SEMs generate data in digital formats, which are highly portable.
Limitations
1. Samples must be solid and they must fit into the microscope chamber.
2. Maximum size in horizontal dimensions is usually on the order of 10 cm; vertical dimensions
are generally much more limited and rarely exceed 40 mm.
3. For most instruments, samples must be stable in a vacuum on the order of 10⁻⁵ to 10⁻⁶ torr.
Samples likely to outgas at low pressures (rocks saturated with hydrocarbons, "wet" samples
such as coal, organic materials or swelling clays, and samples likely to decrepitate at low
pressure) are unsuitable for examination in conventional SEMs.
4. However, "low vacuum" and "environmental" SEMs also exist, and many of these types of
samples can be successfully examined in these specialized instruments.
5. Most SEMs use a solid state x-ray detector (EDS); while these detectors are very fast and
easy to use, they have relatively poor energy resolution and sensitivity to elements present in
low abundance compared to the wavelength dispersive x-ray detectors (WDS) found on most
electron probe microanalyzers (EPMA).
6. An electrically conductive coating must be applied to electrically insulating samples for study
in conventional SEMs, unless the instrument is capable of operation in a low vacuum mode.
Transmission Electron Microscopy (TEM): The first electron microscope was built in 1932 by
the German physicist Ernst Ruska, who was awarded the Nobel Prize in 1986 for its invention.
The first commercial TEM appeared in 1939. Resolution is on the order of 1 nm, with typical
accelerating voltages of 100-400 kV (some instruments reach 1-3 MV).
BASIC PRINCIPLES
The design of a transmission electron microscope (TEM) is analogous to that of an optical
microscope. In a TEM high-energy (>100 kV) electrons are used instead of photons and
electromagnetic lenses instead of glass lenses. The electron beam passes through an
electron-transparent sample, and a magnified image is formed using a set of lenses. This image is
projected onto a
fluorescent screen or a CCD camera. Whereas the use of visible light limits the lateral resolution
in an optical microscope to a few tenths of a micrometer, the much smaller wavelength of
electrons allows for a resolution of nm in a TEM.
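The resolution advantage of electrons can be made concrete with the de Broglie relation: at the accelerating voltages quoted above, the electron wavelength is only a few picometres. A short sketch using standard physical constants (the relativistic correction matters at these voltages; the function name is ours):

```python
# De Broglie wavelength of an accelerated electron, with the relativistic
# correction -- this is why a >=100 kV TEM resolves far below visible light.
import math

H = 6.62607015e-34     # Planck constant (J s)
ME = 9.1093837015e-31  # electron rest mass (kg)
E = 1.602176634e-19    # elementary charge (C)
C = 2.99792458e8       # speed of light (m/s)

def electron_wavelength_m(volts: float) -> float:
    """Relativistic electron wavelength for an accelerating voltage in volts."""
    return H / math.sqrt(2 * ME * E * volts * (1 + E * volts / (2 * ME * C ** 2)))

for kv in (100, 200, 300):
    wl = electron_wavelength_m(kv * 1e3)
    print(f"{kv} kV -> {wl * 1e12:.2f} pm")
```

At 100 kV the wavelength is about 3.7 pm, some five orders of magnitude below the wavelength of green light; lens aberrations, not wavelength, are what limit real TEM resolution.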
Instrument components
Working (imaging):
Image contrast is obtained by interaction of the electron beam with the sample. In the
resulting TEM image denser areas and areas containing heavier elements appear darker due to
scattering of the electrons in the sample. In addition, scattering from crystal planes introduces
diffraction contrast. This contrast depends on the orientation of a crystalline area in the sample
with respect to the electron beam. As a result, in a TEM image of a sample consisting of
randomly oriented crystals, each crystal will have its own grey level. In this way one can
distinguish between different materials, as well as image individual crystals and crystal defects.
Because of the high resolution of the TEM, atomic arrangements in crystalline structures can be
imaged in great detail.
ADVANCES IN TEM
CRYO-TEM: Using dedicated equipment, it is possible to freeze 0.1 μm thick water films
and study these films at -170˚C in the TEM. This enables imaging of the natural shape of
organic bilayer structures. Also, agglomeration processes in a dispersion can be studied. In
addition, the application of cryogenic conditions facilitates studies of beam-sensitive
samples.
ENERGY FILTERED TEM (EFTEM): A special filter on the TEM allows for selection of a
very narrow window of energies in the EELS spectrum. Using the corresponding electrons
for imaging, EFTEM is performed. As a result, a qualitative elemental map is obtained.
EFTEM is the only chemical analysis procedure in the TEM that does not use a scanning
beam. As a consequence, it is much faster.
SEM vs TEM
1. SEM is based on scattered electrons; TEM is based on transmitted electrons.
2. In SEM, the image is produced after the microscope collects and counts the scattered
electrons; in TEM, electrons are directed straight through the sample.
3. SEM focuses on the sample's surface and its composition; TEM seeks to see what is inside or
beyond the surface.
4. SEM shows the sample bit by bit; TEM shows the sample as a whole.
5. SEM provides a three-dimensional image; TEM delivers a two-dimensional picture.
6. SEM offers a maximum magnification of about 2 million times; TEM offers up to 50 million
times.
7. The resolution of SEM is about 0.4 nanometres; the resolution of TEM is about 0.5
ångströms.
All diffraction methods are based on generation of X-rays in an X-ray tube. These X-rays are
directed at the sample, and the diffracted rays are collected. A key component of all diffraction is
the angle between the incident and diffracted rays. Powder and single crystal diffraction vary in
instrumentation beyond this.
X-ray Powder Diffraction (XRD) Instrumentation - How Does It Work?
X-ray diffractometers consist of three basic elements: an X-ray tube, a sample holder, and
an X-ray detector. X-rays are generated in a cathode ray tube by heating a filament to produce
electrons, accelerating the electrons toward a target by applying a voltage, and bombarding the
target material with electrons. When electrons have sufficient energy to dislodge inner shell
electrons of the target material, characteristic X-ray spectra are produced. These spectra consist
of several components, the most common being Kα and Kβ. Kα consists, in part, of Kα1 and Kα2.
Kα1 has a slightly shorter wavelength and twice the intensity of Kα2.
The specific wavelengths are characteristic of the target material (Cu, Fe, Mo, Cr).
Filtering, by foils or crystal monochromators, is required to produce the monochromatic X-rays
needed for diffraction. Kα1 and Kα2 are sufficiently close in wavelength such that a weighted
average of the two is used. Copper is the most common target material for single-crystal
diffraction, with CuKα radiation = 1.5418Å. These X-rays are collimated and directed onto the
sample. As the sample and detector are rotated, the intensity of the reflected X-rays is recorded.
When the geometry of the incident X-rays impinging the sample satisfies the Bragg
Equation, constructive interference occurs and a peak in intensity occurs. A detector records and
processes this X-ray signal and converts the signal to a count rate which is then output to a
device such as a printer or computer monitor.
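The Bragg condition nλ = 2d sin θ can be turned into a quick peak-position calculator. The sketch below also reproduces the weighted Cu Kα average quoted above from the Kα1/Kα2 wavelengths and their 2:1 intensity ratio; the quartz (101) d-spacing of ~3.343 Å used in the example is an assumed reference value:

```python
# Bragg's law: n * lambda = 2 * d * sin(theta). Given Cu K-alpha radiation
# and a d-spacing, predict the 2-theta angle of the diffraction peak.
import math

# Weighted Cu K-alpha wavelength (2:1 intensity ratio of K-alpha1 : K-alpha2)
K_ALPHA1, K_ALPHA2 = 1.54056, 1.54439   # angstroms
LAM = (2 * K_ALPHA1 + K_ALPHA2) / 3     # ~1.5418 A, as quoted in the text

def two_theta_deg(d_spacing_angstrom: float, n: int = 1) -> float:
    """2-theta (degrees) at which the (hkl) plane with spacing d diffracts."""
    theta = math.asin(n * LAM / (2 * d_spacing_angstrom))
    return 2 * math.degrees(theta)

print(f"weighted lambda = {LAM:.4f} A")
# Example: the (101) spacing of quartz, ~3.343 A (assumed reference value)
print(f"quartz (101) peak expected near 2-theta = {two_theta_deg(3.343):.1f} deg")
```

In practice one runs this in reverse: the diffractometer records peak positions in 2θ, and d = λ/(2 sin θ) converts them to the d-spacings that are matched against a reference database for phase identification.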
The geometry of an X-ray diffractometer is such that the sample rotates in the path of the
collimated X-ray beam at an angle θ while the X-ray detector is mounted on an arm to collect the
diffracted X-rays and rotates at an angle of 2θ. The instrument used to maintain the angle and
rotate the sample is termed a goniometer. For typical powder patterns, data is collected at 2θ
from ~5° to 70°, angles that are preset in the X-ray scan.
Applications
X-ray powder diffraction is most widely used for the identification of unknown crystalline
materials (e.g. minerals, inorganic compounds). Determination of unknown solids is critical to
studies in geology, environmental science, material science, engineering and biology.
Other applications include:
characterization of crystalline materials
identification of fine-grained minerals such as clays and mixed layer clays that are
difficult to determine optically
determination of unit cell dimensions
measurement of sample purity
With specialized techniques, XRD can be used to:
determine crystal structures using Rietveld refinement
determine modal amounts of minerals (quantitative analysis)
characterize thin film samples by:
o determining lattice mismatch between film and substrate and inferring stress
and strain
o determining dislocation density and quality of the film by rocking curve
measurements
o measuring superlattices in multilayered epitaxial structures
o determining the thickness, roughness and density of the film using glancing
incidence X-ray reflectivity measurements
make textural measurements, such as the orientation of grains, in a polycrystalline sample
Strengths
Powerful and rapid (< 20 min) technique for identification of an unknown mineral
In most cases, it provides an unambiguous mineral determination
Minimal sample preparation is required
XRD units are widely available
Data interpretation is relatively straightforward
Limitations
Homogeneous and single phase material is best for identification of an unknown
Must have access to a standard reference file of inorganic compounds (d-spacings, hkls)
Requires tenths of a gram of material which must be ground into a powder
For mixed materials, detection limit is ~ 2% of sample
For unit cell determinations, indexing of patterns for non-isometric crystal systems is
complicated
Peak overlap may occur, and worsens for high-angle 'reflections'
Sample Preparation
Obtain a few tenths of a gram (or more) of the material, as pure as possible
Grind the sample to a fine powder, typically in a fluid to minimize inducing extra strain
(surface energy) that can offset peak positions, and to randomize orientation. Powder less
than ~10 μm (or 200 mesh) in size is preferred
Place into a sample holder or onto the sample surface:
o smear uniformly onto a glass slide, assuring a flat upper surface
o pack into a sample container
o sprinkle on double sticky tape
Scanning Probe Microscopy (SPM)
Principles: Scanning probe microscopes (SPMs) are a family of tools used to make images of
nanoscale surfaces and structures, including atoms. They use a physical probe to scan back and
forth over the surface of a sample. During this scanning process, a computer gathers data that are
used to generate an image of the surface. In addition to visualizing nanoscale structures, some
kinds of SPMs can be used to manipulate individual atoms and move them to make specific
patterns. SPMs are different from optical microscopes because the user doesn’t “see” the surface
directly. Instead, the tool “feels” the surface and creates an image to represent it.
Working: SPMs are a very powerful family of microscopes, sometimes with a resolution of less
than a nanometer. (A nanometer is a billionth of a meter.) An SPM has a probe tip mounted on
the end of a cantilever. The tip can be as sharp as a single atom. It can be moved precisely and
accurately back and forth across the surface, even atom by atom. When the tip is near the sample
surface, the cantilever is deflected by a force. SPMs can measure deflections caused by many
kinds of forces, including mechanical contact, electrostatic forces, magnetic forces, chemical
bonding, van der Waals forces, and capillary forces. The distance of the deflection is measured
by a laser that is reflected off the top of the cantilever and into an array of photodiodes (similar
to the devices used in digital cameras). SPMs can detect differences in height that are a fraction
of a nanometer, about the diameter of a single atom. The tip is moved across the sample many
times. This is why these are called “scanning” microscopes. A computer combines the data to
create an image. The images are inherently colorless because they are measuring properties other
than the reflection of light. However, the images are often colorized, with different colors
representing different properties (for example, height) along the surface. Scientists use SPMs in a
number of different ways, depending on the information they’re trying to gather from a sample.
The two primary modes are contact mode and tapping mode. In contact mode, the force between
the tip and the surface is kept constant. This allows a scientist to quickly image a surface. In
tapping mode, the cantilever oscillates, intermittently touching the surface. Tapping mode is
especially useful when a scientist is imaging a soft surface. There are several types of SPMs.
Atomic force microscopes (AFMs) measure the forces between the cantilever tip and the sample.
Magnetic force microscopes (MFMs) measure magnetic forces. And scanning tunneling
microscopes (STMs) measure the electrical current flowing between the tip and the sample.
Atomic Force Microscopy (AFM)
Principles:
The AFM consists of a cantilever with a sharp tip (probe) at its end that is used to scan
the specimen surface. The cantilever is typically silicon or silicon nitride with a tip radius of
curvature on the order of nanometers.
When the tip is brought into proximity of a sample surface, forces between the tip and the
sample lead to a deflection of the cantilever according to Hooke's law. Depending on the
situation, forces that are measured in AFM include mechanical contact force, van der Waals
forces, capillary forces, chemical bonding, electrostatic forces, magnetic forces (see magnetic
force microscope, MFM), Casimir forces, solvation forces, etc.
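The Hooke's-law relation behind cantilever deflection is simple enough to sketch: F = k·x, so a soft cantilever turns piconewton-scale tip-sample forces into measurable deflections. The spring constant of 0.1 N/m below is an assumed, but typical, value for a soft contact-mode cantilever:

```python
# Hooke's law for the AFM cantilever: F = k * x. A sub-nanometre deflection
# of a soft cantilever corresponds to forces in the piconewton range.
def cantilever_force_N(spring_constant_N_per_m: float, deflection_m: float) -> float:
    """Restoring force (N) for a cantilever of stiffness k deflected by x."""
    return spring_constant_N_per_m * deflection_m

k = 0.1  # N/m -- assumed typical soft contact-mode cantilever
for x_nm in (0.1, 1.0, 10.0):
    f = cantilever_force_N(k, x_nm * 1e-9)
    print(f"deflection {x_nm:>4} nm -> force {f * 1e12:.1f} pN")
```

Because the optical-lever detector resolves deflections well below a nanometre, forces of tens of piconewtons are routinely measurable.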
Along with force, additional quantities may simultaneously be measured through the use
of specialized types of probes (see scanning thermal microscopy, scanning joule expansion
microscopy, photothermal microspectroscopy, etc.).
An AFM typically consists of the following features; numbers in parentheses, such as "(1)",
refer to the labels in the figure, and the datum coordinate system (0) indicates the x-y-z
directions: a small spring-like cantilever (1), supported on the support (2) by means of a
piezoelectric element (3) that oscillates the cantilever at its eigenfrequency; a sharp tip (4) fixed
to the open end of the cantilever; a detector (5) configured to detect the deflection and motion of
the cantilever; a sample (6) to be measured, mounted on the sample stage (8); an xyz-drive (7),
which permits the sample (6) and sample stage (8) to be displaced in the x, y, and z directions
with respect to the tip apex (4); and controllers and a plotter (not shown).
The small spring-like cantilever (1) is carried by the support (2). Optionally, a
piezoelectric element (typically made of a ceramic material) (3) oscillates the cantilever (1). The
sharp tip (4) is fixed to the free end of the cantilever (1). The detector (5) records the deflection
and motion of the cantilever (1). The sample (6) is mounted on the sample stage (8). An xyz-
drive (7) permits the sample (6) and the sample stage (8) to be displaced in the x, y, and z
directions with respect to the tip apex (4). Although Fig. 3 shows the drive attached to the
sample, the drive can
also be attached to the tip, or independent drives can be attached to both, since it is the relative
displacement of the sample and tip that needs to be controlled. Controllers and plotter are not
shown in Fig.
According to the configuration described above, the interaction between tip and sample,
which can be an atomic-scale phenomenon, is transduced into changes in the motion of the
cantilever, which is a macro-scale phenomenon. Several different aspects of the cantilever
motion can be used to quantify the interaction between the tip and the sample, most commonly
the value of the deflection, the amplitude of an imposed oscillation of the cantilever, or the shift
in the resonance frequency of the cantilever.
Imaging modes
AFM operation is usually described as one of three modes, according to the nature of the
tip motion: contact mode, also called static mode (as opposed to the other two modes, which are
called dynamic modes); tapping mode, also called intermittent contact, AC mode, or vibrating
mode, or, after the detection mechanism, amplitude modulation AFM; non-contact mode, or,
again after the detection mechanism, frequency modulation AFM. It should be noted that despite
the nomenclature, repulsive contact can occur or be avoided both in amplitude modulation AFM
and frequency modulation AFM, depending on the settings.
Contact mode
In contact mode, the tip is "dragged" across the surface of the sample and the contours of
the surface are measured either using the deflection of the cantilever directly or, more
commonly, using the feedback signal required to keep the cantilever at a constant position.
Because the measurement of a static signal is prone to noise and drift, low stiffness cantilevers
(i.e. cantilevers with a low spring constant, k) are used to achieve a large enough deflection
signal while keeping the interaction force low.
Close to the surface of the sample, attractive forces can be quite strong, causing the tip to
"snap-in" to the surface. Thus, contact mode AFM is almost always done at a depth where the
overall force is repulsive, that is, in firm "contact" with the solid surface.
Tapping mode
In ambient conditions, most samples develop a liquid meniscus layer. Because of this,
keeping the probe tip close enough to the sample for short-range forces to become detectable
while preventing the tip from sticking to the surface presents a major problem for contact mode
in ambient conditions. Dynamic contact mode (also called intermittent contact, AC mode or
tapping mode) was developed to bypass this problem. Nowadays, tapping mode is the most
frequently used AFM mode when operating in ambient conditions or in liquids.
In tapping mode, the cantilever is driven to oscillate up and down at or near its resonance
frequency. This oscillation is commonly achieved with a small piezo element in the cantilever
holder, but other possibilities include an AC magnetic field (with magnetic cantilevers),
piezoelectric cantilevers, or periodic heating with a modulated laser beam. The amplitude of this
oscillation usually varies from several nm to 200 nm. In tapping mode, the frequency and
amplitude of the driving signal are kept constant, leading to a constant amplitude of the
cantilever oscillation as long as there is no drift or interaction with the surface.
Forces acting on the cantilever when the tip comes close to the surface (van der Waals forces,
dipole-dipole interactions, electrostatic forces, etc.) cause the amplitude of the cantilever's
oscillation to change (usually decrease) as the tip gets closer to the sample. This
amplitude is used as the parameter that goes into the electronic servo that controls the height of
the cantilever above the sample. The servo adjusts the height to maintain a set cantilever
oscillation amplitude as the cantilever is scanned over the sample. A tapping AFM image is
therefore produced by imaging the force of the intermittent contacts of the tip with the sample
surface.
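The amplitude-feedback idea above can be sketched as a toy simulation rather than a real instrument model: a proportional servo moves the tip so that a clipped oscillation amplitude stays at its setpoint, and the recorded tip height then recovers the surface profile. All numbers and the simple clipped-amplitude model are illustrative assumptions:

```python
# Minimal sketch of the tapping-mode servo loop (toy model, not an
# instrument simulation): keep the oscillation amplitude at a setpoint
# by adjusting tip height, and read topography from the tip position.
def measured_amplitude(free_amp, tip_height, surface_height):
    """Toy model: the oscillation is clipped once the tip reaches the surface."""
    gap = tip_height - surface_height
    return max(0.0, min(free_amp, gap))

free_amp, setpoint = 50.0, 40.0   # nm; setpoint below the free amplitude
tip_height, gain = 100.0, 0.5     # starting height and feedback gain (assumed)

surface = [0.0, 0.0, 5.0, 5.0, 2.0, 2.0]   # "true" sample topography (nm)
trace = []
for z_surf in surface:
    for _ in range(50):               # let the servo settle at each pixel
        error = measured_amplitude(free_amp, tip_height, z_surf) - setpoint
        tip_height -= gain * error    # amplitude too large -> move tip down
    trace.append(round(tip_height - setpoint, 2))  # recovered topography
print(trace)
```

After the loop settles at each pixel, the tip rides at a constant gap above the sample, so subtracting the setpoint from the tip height reproduces the surface profile, which is exactly how a real servo turns amplitude error into a height map.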
When operating in tapping mode, the phase of the cantilever's oscillation with respect to
the driving signal can be recorded as well. This signal channel contains information about the
energy dissipated by the cantilever in each oscillation cycle. Samples that contain regions of
varying stiffness or with different adhesion properties can give a contrast in this channel that is
not visible in the topographic image. Extracting the sample's material properties in a quantitative
manner from phase images, however, is often not feasible.
Non-contact mode
In non-contact atomic force microscopy mode, the tip of the cantilever does not contact
the sample surface. The cantilever is instead oscillated at either its resonant frequency (frequency
modulation) or just above (amplitude modulation) where the amplitude of oscillation is typically
a few nanometers (<10 nm) down to a few picometers. The van der Waals forces, which are
strongest from 1 nm to 10 nm above the surface, or any other long-range force that extends
above the surface acts to decrease the resonance frequency of the cantilever.
This decrease in resonant frequency combined with the feedback loop system maintains a
constant oscillation amplitude or frequency by adjusting the average tip-to-sample distance.
Measuring the tip-to-sample distance at each (x,y) data point allows the scanning software to
construct a topographic image of the sample surface.
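Under one common sign convention and for small force gradients, the resonance shift is approximately Δf ≈ −(f0/2k)·(∂F/∂z): an attractive force gradient lowers the frequency, which the feedback loop then converts into height information. A sketch with assumed cantilever parameters:

```python
# Small-gradient approximation for non-contact AFM: an attractive tip-sample
# force gradient dF/dz shifts the cantilever resonance down by
#   delta_f = -(f0 / (2 * k)) * dF/dz
def frequency_shift_hz(f0_hz: float, k_n_per_m: float, force_gradient: float) -> float:
    """Approximate resonance shift (Hz) for force gradient dF/dz in N/m."""
    return -(f0_hz / (2 * k_n_per_m)) * force_gradient

f0, k = 300e3, 40.0  # assumed values typical of a stiff non-contact cantilever
for grad in (0.01, 0.1, 1.0):  # attractive force gradients in N/m
    df = frequency_shift_hz(f0, k, grad)
    print(f"dF/dz = {grad:>5} N/m -> delta_f = {df:.1f} Hz")
```

Because frequency can be measured with very high precision, even the fractions-of-a-hertz shifts produced by tiny force gradients are detectable, which is why frequency-modulation AFM pairs well with stiff cantilevers.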
Schemes for dynamic mode operation include frequency modulation where a phase-
locked loop is used to track the cantilever's resonance frequency and the more common
amplitude modulation with a servo loop in place to keep the cantilever excitation to a defined
amplitude. In frequency modulation, changes in the oscillation frequency provide information
about tip-sample interactions. Frequency can be measured with very high sensitivity and thus the
frequency modulation mode allows for the use of very stiff cantilevers. Stiff cantilevers provide
stability very close to the surface and, as a result, this technique was the first AFM technique to
provide true atomic resolution in ultra-high vacuum conditions.
Advantages
AFM has several advantages over the scanning electron microscope (SEM). Unlike the
electron microscope, which provides a two-dimensional projection or a two-dimensional image
of a sample, the AFM provides a three-dimensional surface profile. In addition, samples viewed
by AFM do not require any special treatments (such as metal/carbon coatings) that would
irreversibly change or damage the sample, and AFM images do not typically suffer from charging artifacts
in the final image. While an electron microscope needs an expensive vacuum environment for
proper operation, most AFM modes can work perfectly well in ambient air or even a liquid
environment. This makes it possible to study biological macromolecules and even living
organisms. In principle, AFM can provide higher resolution than SEM.
It has been shown to give true atomic resolution in ultra-high vacuum (UHV) and, more
recently, in liquid environments. High resolution AFM is comparable in resolution to scanning
tunneling microscopy and transmission electron microscopy. AFM can also be combined with a
variety of optical microscopy and spectroscopy techniques such as fluorescence microscopy or
infrared spectroscopy, giving rise to scanning near-field optical microscopy and nano-FTIR,
further expanding its applicability. Combined AFM-optical instruments have been applied
primarily in the biological sciences but have recently attracted strong interest in photovoltaics
and energy-storage research, polymer sciences, nanotechnology, and even medical research.
Disadvantages
A disadvantage of AFM compared with the scanning electron microscope (SEM) is the
single scan image size. In one pass, the SEM can image an area on the order of square
millimeters with a depth of field on the order of millimeters, whereas the AFM can only image a
maximum scanning area of about 150×150 micrometers and a maximum height on the order of
10-20 micrometers. One method of improving the scanned area size for AFM is by using parallel
probes in a fashion similar to that of millipede data storage.
The scanning speed of an AFM is also a limitation. Traditionally, an AFM cannot scan
images as fast as an SEM, requiring several minutes for a typical scan, while an SEM is capable
of scanning at near real-time, although at relatively low quality. The relatively slow rate of
scanning during AFM imaging often leads to thermal drift in the image, making the AFM less
suited for measuring accurate distances between topographical features. However, several
fast-scanning designs have been proposed to increase imaging throughput, including so-called
videoAFM, which obtains reasonable-quality images at video rate, faster than the average SEM.
Several methods have also been introduced to eliminate image distortions induced by thermal
drift.
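One common drift-correction approach registers successive scans against each other and removes
the measured offset. As a minimal sketch (not any specific instrument's algorithm), the lateral
drift between two frames can be estimated by FFT phase correlation; the image data here are
synthetic:

```python
import numpy as np

def estimate_drift(frame_a, frame_b):
    """Estimate the (row, col) pixel offset of frame_a relative to frame_b
    via FFT phase correlation."""
    cross = np.fft.fft2(frame_a) * np.conj(np.fft.fft2(frame_b))
    cross /= np.abs(cross) + 1e-12           # keep phase information only
    corr = np.fft.ifft2(cross).real          # delta-like peak at the offset
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # map wrapped indices into the signed range [-N/2, N/2)
    return tuple(int(p) if p < s // 2 else int(p) - s
                 for p, s in zip(peak, corr.shape))

# synthetic test: a random "surface" shifted by (3, -5) pixels
rng = np.random.default_rng(0)
surface = rng.standard_normal((128, 128))
drifted = np.roll(surface, (3, -5), axis=(0, 1))
print(estimate_drift(drifted, surface))  # → (3, -5)
```

Real AFM drift varies continuously during a scan, so in practice an estimate like this is
applied per scan line or interpolated over time rather than as a single whole-frame shift.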
AFM images can also be affected by nonlinearity, hysteresis, and creep of the
piezoelectric material, as well as cross-talk between the x, y, and z axes, which may require
software enhancement and filtering. Such filtering could "flatten" out real topographical
features.
However, newer AFMs utilize real-time correction software (for example, feature-oriented
scanning) or closed-loop scanners, which practically eliminate these problems. Some AFMs also
use separated orthogonal scanners.
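The "flattening" mentioned above is typically a per-line polynomial background subtraction. A
minimal sketch follows (the 64×64 test image and its bump feature are invented for
illustration):

```python
import numpy as np

def flatten_lines(image, order=1):
    """Subtract a least-squares polynomial of the given order from each
    scan line -- the classic AFM 'flatten' filter."""
    x = np.arange(image.shape[1])
    out = np.empty(image.shape, dtype=float)
    for i, line in enumerate(image):
        coeffs = np.polyfit(x, line, order)
        out[i] = line - np.polyval(coeffs, x)
    return out

# synthetic scan: a tilted plane (stand-in for sample tilt) plus a bump
x = np.arange(64)
tilt = 0.05 * x[None, :] + 0.02 * x[:, None]
bump = np.zeros((64, 64))
bump[30:34, 30:34] = 1.0
flat = flatten_lines(tilt + bump)
# the tilt is removed and the bump survives, but the fit introduces a small
# negative bias along the bump's rows -- exactly the "flattening out real
# topography" caveat noted in the text
```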
As with any other imaging technique, there is the possibility of image artifacts, which
could be induced by an unsuitable tip, a poor operating environment, or even by the sample
itself, as depicted on the right. These image artifacts are unavoidable; however, their occurrence
and effect on results can be reduced through various methods. Artifacts from a too-coarse tip,
for example, can result from inappropriate handling or from actual collisions with the sample,
caused by scanning too fast or over an unreasonably rough surface, either of which wears the
tip.
Due to the nature of AFM probes, they cannot normally measure steep walls or
overhangs. Specially made cantilevers and AFMs can be used to modulate the probe sideways as
well as up and down (as with dynamic contact and non-contact modes) to measure sidewalls, at
the cost of more expensive cantilevers, lower lateral resolution and additional artifacts.
The latest efforts in integrating nanotechnology and biological research have been
successful and show much promise for the future. Since nanoparticles are a potential vehicle of
drug delivery, the biological responses of cells to these nanoparticles are continuously being
explored to optimize their efficacy and improve their design.
Pyrgiotakis et al. were able to study the interaction between CeO2 and Fe2O3 engineered
nanoparticles and cells by attaching the engineered nanoparticles to the AFM tip.
Studies have taken advantage of AFM to obtain further information on the behavior of
live cells in biological media. Real-time atomic force spectroscopy (or nanoscopy) and dynamic
atomic force spectroscopy have been used to study live cells and membrane proteins and their
dynamic behavior at high resolution, on the nanoscale. Imaging and obtaining information on the
topography and the properties of the cells has also given insight into chemical processes and
mechanisms that occur through cell-cell interaction and interactions with other signaling
molecules (ex. ligands).
Evans and Calderwood used single-cell force microscopy to study cell adhesion forces,
bond kinetics/dynamic bond strength, and their role in chemical processes such as cell
signaling.
Scheuring, Lévy, and Rigaud reviewed studies in which AFM was used to explore the crystal
structure of membrane proteins of photosynthetic bacteria.
Alsteen et al. have used AFM-based nanoscopy to perform a real-time analysis of the
interaction between live mycobacteria and antimycobacterial drugs (specifically isoniazid,
ethionamide, ethambutol, and streptomycin), which serves as an example of the more in-depth
analysis of pathogen-drug interactions that can be done through AFM.
Electron Probe Micro-Analyzer (EPMA)
An electron probe micro-analyzer (EPMA) is a microbeam instrument used primarily for the in
situ, non-destructive chemical analysis of minute solid samples. EPMA is also informally called
an electron microprobe, or just probe. It is fundamentally the same as an SEM, with the added
capability of chemical analysis. The primary importance of an EPMA is the ability to acquire
precise, quantitative elemental analyses at very small "spot" sizes (as little as 1-2 microns),
primarily by wavelength-dispersive spectroscopy (WDS).
The spatial scale of analysis, combined with the ability to create detailed images of the
sample, makes it possible to analyze geological materials in situ and to resolve complex
chemical variation within single phases (in geology, mostly glasses and minerals). The electron
optics of an SEM or EPMA allow much higher resolution images to be obtained than can be seen
using visible-light optics, so features that are irresolvable under a light microscope can be readily
imaged to study detailed microtextures or provide the fine-scale context of an individual spot
analysis.
An electron microprobe operates under the principle that if a solid material is bombarded
by an accelerated and focused electron beam, the incident electron beam has sufficient energy to
liberate both matter and energy from the sample. These electron-sample interactions mainly
liberate heat, but they also yield both derivative electrons and x-rays. Of most common interest
in the analysis of geological materials are secondary and back-scattered electrons, which are
useful for imaging a surface or obtaining an average composition of the material. X-ray
generation is produced by inelastic collisions of the incident electrons with electrons in the inner
shells of atoms in the sample; when an inner-shell electron is ejected from its orbit, leaving a
vacancy, a higher-shell electron falls into this vacancy and must shed some energy (as an X-ray)
to do so.
These quantized x-rays are characteristic of the element. EPMA analysis is considered
"non-destructive"; that is, the x-rays generated by electron interactions do not lead to volume
loss of the sample, so it is possible to re-analyze the same material more than once.
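The link between atomic number and characteristic X-ray energy is captured by Moseley's law,
which is why each element's Kα line sits at a predictable energy. A back-of-the-envelope
sketch (the ≈10.2 eV prefactor follows from the Rydberg energy and the Kα transition; real
tabulated line energies differ slightly):

```python
# Moseley's law estimate for K-alpha energies: E ≈ 10.2 eV × (Z − 1)²
def kalpha_energy_ev(z):
    """Approximate K-alpha X-ray energy (eV) for atomic number z."""
    return 10.2 * (z - 1) ** 2

for name, z in [("Si", 14), ("Fe", 26), ("Cu", 29)]:
    print(f"{name} K-alpha ~ {kalpha_energy_ev(z) / 1000:.2f} keV")
# Si ~ 1.72 keV, Fe ~ 6.38 keV, Cu ~ 8.00 keV -- close to the tabulated
# values of 1.74, 6.40 and 8.05 keV
```

This monotonic Z-to-energy mapping is what lets a wavelength-dispersive spectrometer assign
each measured line to a specific element.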
Applications
Quantitative EPMA analysis is the most commonly used method for chemical analysis of
geological materials at small scales.
EPMA is typically chosen where individual phases need to be analyzed
(e.g., igneous and metamorphic minerals), or where the material is of small size or
valuable for other reasons (e.g., experimental run product, sedimentary cement, volcanic
glass, matrix of a meteorite, archeological artifacts such as ceramic glazes and tools).
In some cases, it is possible to determine a U-Th age of a mineral such as monazite
without measuring isotopic ratios.
EPMA is also widely used for analysis of synthetic materials such as optical wafers, thin
films, microcircuits, semiconductors, and superconducting ceramics.
Sample Preparation
Unlike an SEM, which can give images of 3D objects, analysis of solid materials by EPMA
requires preparation of flat, polished sections. A brief protocol is provided here:
1. Nearly any solid material can be analyzed. In most cases, samples are prepared as
standard-size 27 × 46 mm rectangular sections or as 1-inch round disks.
2. Rectangular sections of rock or similar materials are most often prepared as 30-micron-
thick sections without cover slips. Alternatively, 1-inch cores can be polished. Chips or
grains can be mounted in epoxy disks, and then polished halfway through to expose a
cross-section of the material.
3. The most critical step prior to analysis is giving the sample a fine polish so that surface
imperfections do not interfere with electron-sample interactions. This is particularly
important for samples containing minerals with different hardnesses; polishing should
yield a flat surface of uniform smoothness.
4. Most silicate minerals are electrical insulators. Directing an electron beam at the sample
can lead to electrical charging of the sample, which must be dissipated. Prior to analysis,
samples are typically coated with a thin film of a conducting material (carbon, gold and
aluminum are most common) by means of evaporative deposition. Once samples are
placed in a holder, the coated sample surface must be put in electrical contact with the
holder (typically done with a conductive paint or tape).
5. Choice of coating depends on the type of analysis to be done; for example, most EPMA
chemical analysis is done on samples coated by C, which is thin and light enough that
interference with the electron beam and emitted X-rays is minimal.
6. Samples are loaded into the sample chamber via a vacuum interlock and mounted on the
sample stage. The sample chamber is then pumped to obtain a high vacuum.
7. To begin a microprobe session, suitable analytical conditions must be selected, such as
accelerating voltage and electron beam current, and the electron beam must be properly
focused.
8. If quantitative analyses are planned, the instrument first must be standardized for the
elements desired.
Strengths
An electron probe is essentially the same instrument as an SEM, but differs in that it is
equipped with a range of crystal spectrometers that enable quantitative chemical analysis
(WDS) at high sensitivity.
An electron probe is the primary tool for chemical analysis of solid materials at small
spatial scales (as small as 1-2 micron diameter); hence, the user can analyze even minute
single phases (e.g., minerals) in a material (e.g., rock) with "spot" analyses.
Spot chemical analyses can be obtained in situ, which allows the user to detect even small
compositional variations within textural context or within chemically zoned materials.
Electron probes commonly have an array of imaging detectors (SEI, BSE, and CL) that
allow the investigator to generate images of the surface and internal compositional
structures that help with analyses.
Limitations
Although electron probes have the ability to analyze for almost all elements, they are
unable to detect the lightest elements (H, He and Li); as a result, for example, the "water"
in hydrous minerals cannot be analyzed.
Some elements generate x-rays with overlapping peak positions (by both energy and
wavelength) that must be separated.
Microprobe analyses are reported as oxides of elements, not as cations; therefore, cation
proportions and mineral formulae must be recalculated following stoichiometric rules.
Probe analysis also cannot distinguish between the different valence states of Fe, so the
ferric/ferrous ratio cannot be determined and must be evaluated by other techniques.
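The oxide-to-cation recalculation above works by converting each oxide weight percent to
moles, summing the oxygens those oxides contribute, and renormalizing to the mineral's
fixed-oxygen basis. A sketch for a hypothetical olivine analysis (the molar masses are
standard; the analysis values are invented for illustration):

```python
# oxide: (molar mass g/mol, cations per formula, oxygens per formula)
OXIDES = {
    "SiO2": (60.084, 1, 2),
    "MgO":  (40.304, 1, 1),
    "FeO":  (71.844, 1, 1),
}

def cations_per_formula(wt_pct, oxygen_basis=4):
    """Recalculate oxide wt% into cations on a fixed-oxygen basis
    (4 oxygens for olivine, (Mg,Fe)2SiO4)."""
    cation_moles, oxygen_moles = {}, 0.0
    for oxide, pct in wt_pct.items():
        molar_mass, n_cat, n_ox = OXIDES[oxide]
        moles = pct / molar_mass
        cation_moles[oxide] = moles * n_cat
        oxygen_moles += moles * n_ox
    scale = oxygen_basis / oxygen_moles
    return {ox: round(m * scale, 3) for ox, m in cation_moles.items()}

# hypothetical forsterite-rich olivine: expect ~1 Si and ~2 (Mg + Fe)
formula = cations_per_formula({"SiO2": 40.8, "MgO": 49.5, "FeO": 9.7})
print(formula)  # → {'SiO2': 0.998, 'MgO': 1.805, 'FeO': 0.198}
```

Stoichiometric ferric/ferrous estimates (e.g., by charge balance) build on this same
calculation, but as noted above, EPMA alone cannot measure the Fe valence state directly.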