from industrial manufacturing and processing activities, both for continuous and discrete processes.
The scope includes: technology and equipment modifications, reformulation or redesign of products,
substitution of alternative materials, and in-process
changes. Although these methods are usually thought of in connection with the chemical, biochemical, and materials process industries, they are appropriate in other industries as well, such as semiconductor manufacturing. Areas of research include:
Biological Applications: Includes bioengineering
techniques such as metabolic engineering and
bioprocessing to prevent pollution. Examples
are conversion of waste biomass to useful products, genetic engineering to produce more specific biocatalysts, increased energy efficiency, decreased use of hazardous reactants or byproducts, and development of more cost-effective methods of producing environmentally benign products.
Fluid and Thermal Systems: Includes improved
manufacturing systems that employ novel thermal, fluid, and/or multiphase/particulate systems, resulting in significantly lower hazardous
effluent production. Examples are novel refrigeration cycles using safe and environmentally
benign working fluids to replace halogenated
hydrocarbons hazardous to upper atmosphere
ozone levels; improved automobile combustion
process design for reduced pollutant production.
Interfacial Transport and Separations: Includes
materials substitutions and process alternatives
which prevent or reduce environmental harm,
such as change of raw materials or the use of
less hazardous solvents, organic coatings, and
metal plating systems where the primary focus
is on non-reactive diffusional and interfacial
phenomena. Examples include: use of special
surfactant systems for surface cleaning and
reactions; novel, cost-effective methods for the
highly efficient in-process separation of useful
materials from the components of process waste
streams (for example, field-enhanced and hybrid separation processes); and novel processes for molecularly controlled chemical and materials synthesis of thin films and membranes.
Design, Manufacturing, and Industrial Innovations: Includes: (a) New and improved manufacturing processes that reduce production of hazardous effluents at the source. Examples include:
machining without the use of cutting fluids that
currently require disposal after they are contaminated; eliminating toxic electroplating solutions by replacing them with ion or plasma-
based dry plating techniques; and new bulk materials and coatings with durability, long life, and other desirable engineering properties that can be manufactured with reduced environmental impact. (b) Optimization of existing discrete-parts manufacturing operations to prevent, reduce, or eliminate waste. Concepts
include: increased in-process or in-plant recycling and improved and intelligent process control and sensing capabilities; in-process techniques that minimize generation of pollutants
in industrial waste incineration processes.
Chemical Processes and Reaction Engineering:
Includes improved reactor, catalyst, or chemical
process design in order to increase product yield,
improve selectivity, or reduce unwanted by-products. Approaches include novel reactors such
as reactor-separator combinations that provide
for product separation during the reaction, alternative energy sources for reaction initiation,
and integrated chemical process design and
operation, including control. Other approaches
are: new multifunctional catalysts that reduce
the number of process stages; novel heterogeneous catalysts that replace state-of-the-art homogeneous ones; new photo- or electrocatalysts that operate at low temperatures with high
selectivity; novel catalysts for currently
uncatalyzed reactions; processes that use renewable resources in place of synthetic intermediates as feedstocks; novel processes for
molecularly controlled materials synthesis and
modification.
3.8 Synphony
Synphony provides the ability to determine all feasible flowsheets from a given set of operating units
and raw materials to produce a given product and
then ranks these by investment and operating costs.
Synphony can significantly reduce investment and operating costs by minimizing byproducts and identifying the best overall design. The
software analyzes both new process designs and
retrofits of existing operations to generate all feasible solutions and ranks the flowsheets based on
investment and operating costs.
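To make the task concrete, here is a minimal sketch in Python of enumerating and ranking feasible flowsheets. The toy operating units, materials, and costs are invented for illustration; this is a brute-force analogue of the task, not Synphony's own algorithm.

    # Enumerate subsets of operating units that can turn the raw
    # materials into the product, then rank feasible subsets by cost.
    from itertools import combinations

    # unit: (name, inputs, outputs, cost) -- all values are made up
    units = [("U1", {"A"},      {"B"},      10.0),
             ("U2", {"B"},      {"P"},      20.0),
             ("U3", {"A"},      {"C"},       5.0),
             ("U4", {"C"},      {"P"},      30.0),
             ("U5", {"B", "C"}, {"P", "W"}, 12.0)]
    raw, product = {"A"}, "P"

    def feasible(subset):
        avail = set(raw)
        changed = True
        while changed:                       # propagate what can be produced
            changed = False
            for _, ins, outs, _ in subset:
                if ins <= avail and not outs <= avail:
                    avail |= outs
                    changed = True
        # every unit must be usable and the product must be made
        return product in avail and all(ins <= avail for _, ins, _, _ in subset)

    flowsheets = [s for r in range(1, len(units) + 1)
                  for s in combinations(units, r) if feasible(s)]
    for fs in sorted(flowsheets, key=lambda fs: sum(u[3] for u in fs))[:3]:
        print([u[0] for u in fs], "cost", sum(u[3] for u in fs))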
The program called Synphony is commercially
available. A case study using Synphony at a manufacturing facility demonstrated a 40% reduction in
waste water and a 10% reduction in operating costs.
Synphony is the first flowsheet synthesis software program to rigorously define all feasible flowsheet structures from a set of feasible unit operations and to rank them by investment and operating costs.
Manufacturing builds objects from their components by placing them in prescribed arrangements.
This technique requires knowledge of the precise
structure needed to serve a desired function, the
ability to create the components with the necessary
tolerances, and the ability to place each component
in its proper location in the final structure.
If such requirements are not met, self-assembly
offers another approach to building structures from
components. This method involves a statistical exploration of many possible structures before settling into a particular one. The structure produced from given components is determined by biases in the exploration, given by component interactions. These biases may arise when the strength of the interactions depends on the components' relative locations in the structure. The interactions can reflect constraints on the desirability of a component being near its neighbors in the final structure. For each possible structure the interactions combine to give a measure of the extent to which the constraints are violated, which can be viewed as a cost or energy for
that structure. Through the biased statistical exploration of structures, each set of components tends
to assemble into that structure with the minimum
energy for that set. Thus, self-assembly can be viewed
as a process using a local specification, in terms of
the components and their interactions, to produce a
resulting global structure. The local specification is,
in effect, a set of instructions that implicitly describes the resulting structure.
We describe here some characteristics of the statistical distributions of self-assembled structures.
Self-assembly can form structures beyond the
current capacity of direct manufacturing. The most
straightforward technique for designing self-assembly is to examine with a computer simulation the
neighbors of each component in the desired global
structure, and then choose the interactions between
components to encourage these neighbors to be close
together.
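The following is a minimal sketch of that design recipe, with invented components and interaction energies: pairs that are neighbors in the desired structure are given a favorable (negative) energy, and a Metropolis-style biased random exploration then tends to settle into the minimum-energy arrangement.

    # Biased statistical exploration of arrangements (illustrative only).
    import math, random

    desired = [0, 1, 2, 3, 4]          # target left-to-right arrangement
    n = len(desired)

    # Interaction energies: reward pairs that are neighbors in the target.
    def pair_energy(a, b):
        return -1.0 if abs(desired.index(a) - desired.index(b)) == 1 else 0.0

    def energy(arr):
        return sum(pair_energy(arr[i], arr[i + 1]) for i in range(n - 1))

    random.seed(0)
    arr = desired[:]
    random.shuffle(arr)                # start from a random structure
    T = 1.0
    for step in range(20000):
        i, j = random.sample(range(n), 2)
        old = energy(arr)
        arr[i], arr[j] = arr[j], arr[i]        # propose swapping two parts
        dE = energy(arr) - old
        if dE > 0 and random.random() > math.exp(-dE / T):
            arr[i], arr[j] = arr[j], arr[i]    # reject: undo the swap
        T = max(0.05, T * 0.9997)              # slowly reduce the noise
    print(arr, energy(arr))  # expect the target (or its mirror), energy -4.0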
A difficulty in designing the self-assembly process
is the indirect or emergent connection between the
interactions and the properties of resulting global
structures. There is a possibility of errors due to
defective components or environmental noise. To
address this problem, it would be useful to arrange
the self-assembly so the desired structure can be
formed in many ways, increasing the likelihood that it will be correctly constructed even with some unexpected changes in the components or their interactions. Such configurations may give rise to some highly designable structures that can be formed in many different ways. If one uses such structures for self-assembly tasks, a general approach to improving their reliability will be realized.
Process designs have traditionally been developed unit by unit, aided by computer-based tools such as process simulators and unit operation design programs. Such designs frequently leave large and expensive scope for improvement. Engineers now realize it
is just as important to assemble the building blocks
correctly as it is to select and design them correctly
as individual components. This led to integrated
process design or process integration, which is a
holistic approach to design that emphasizes the unity
of the whole process. Pinch analysis was an early example of this: it is the definitive way to design heat recovery networks and to select process-wide utility heating and cooling levels, establishing the energy/capital tradeoff for heat recovery equipment. Mass integration is more recent in process integration. It is similar to energy integration but tackles the core of the process and consequently has a more direct and significant impact on process performance. It addresses the
conversion, routing, and separation of mass and
deals directly with the reaction, separation, and
byproduct/waste processing systems. It guides designers in routing all species to their most desirable
destinations and allows them to establish mass-related cost tradeoffs. Mass integration also defines the heating, cooling, and shaft work requirements of the process. It also provides insight into other design issues, such as providing resources (e.g., fuel and water) to break bottlenecks in the utility systems and selecting catalysts and other material utilities.
3.16 LSENS
LSENS, from the NASA Lewis Research Center, has
been developed for solving complex, homogeneous,
gas-phase, chemical kinetics problems. It was motivated by the interest in developing detailed chemical
reaction mechanisms for complex reactions such as
the combustion of fuels and pollutant formation and
destruction. Mathematical descriptions of chemical
kinetics problems constitute sets of coupled, nonlinear, first-order ordinary differential equations (ODEs).
The number of ODEs can be very large because of
the numerous chemical species involved in the reaction mechanism. Further complicating the situation
are the many simultaneous reactions needed to describe the chemical kinetics of practical fuels. For
example, the mechanism describing the oxidation of
the simplest hydrocarbon fuel, methane, involves
over 25 species participating in nearly 100 elementary reaction steps. Validating a chemical reaction
mechanism requires repetitive solutions of the governing ODEs for a variety of reaction conditions.
Consequently, there is a need for fast and reliable
numerical solution techniques for chemical kinetics
problems. In addition to solving the ODEs describing chemical kinetics, it is often necessary to know
what effects variations in either initial condition
values or chemical reaction parameters have on the
solution. Such a need arises in the development of
reaction mechanisms from experimental data. The
rate coefficients are often not known with great
precision and in general, the experimental data are
not sufficiently detailed to accurately estimate the
rate coefficient parameters. The development of reaction mechanisms is facilitated by a systematic sensitivity analysis, which provides the relationships between the predicted solution and the underlying rate parameters.
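As a small illustration of why such solvers matter, the sketch below integrates a toy two-reaction mechanism with widely separated rate constants (a stiff system) using SciPy's implicit BDF method. The mechanism and rate constants are invented; LSENS itself uses its own integrators.

    # Stiff kinetics toy problem: A -> B (fast), B -> C (slow).
    import numpy as np
    from scipy.integrate import solve_ivp

    k1, k2 = 1.0e4, 1.0  # widely separated rate constants -> stiffness

    def rhs(t, y):
        a, b, c = y
        return [-k1 * a, k1 * a - k2 * b, k2 * b]

    sol = solve_ivp(rhs, (0.0, 5.0), [1.0, 0.0, 0.0], method="BDF",
                    rtol=1e-8, atol=1e-12)  # BDF: implicit method for stiff ODEs
    print(sol.y[:, -1])  # concentrations of A, B, C at t = 5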
3.17 Chemkin
Complex chemically reacting flow simulations are
commonly employed to develop a quantitative understanding and to optimize reaction conditions in
systems such as combustion, catalysis, chemical
vapor deposition, and plasma processing. They all
share the need for accurate, detailed descriptions of
the chemical kinetics occurring in the gas-phase or
on reactive surfaces. The Chemkin suite of codes
broadly consists of three packages for dealing with
gas-phase reaction kinetics, heterogeneous reaction
kinetics, and species transport properties. The
Chemkin software was developed to aid the incorporation of complex gas-phase reaction mechanisms
into numerical simulations. Currently, there are a
number of numerical codes based on Chemkin which
solve chemically reacting flows. The Chemkin interface allows the user to specify the necessary input
through a high-level symbolic interpreter, which
parses the information and passes it to a Chemkin
application code. To specify the needed information,
the user writes an input file declaring the chemical
elements in the problem, the name of each chemical
species, thermochemical information about each
chemical species, a list of chemical reactions (written in the same fashion a chemist would write them),
and rate constant information, in the form of modified Arrhenius coefficients. The thermochemical information is entered in a very compact form as a
series of coefficients describing the species entropy
(S), enthalpy (H), and heat capacity (Cp) as a function of temperature. The thermochemical database
is in a form compatible with the widely used NASA
chemical equilibrium code. Because all of the information about the reaction mechanism is parsed and
summarized by the chemical interpreter, if the user
desires to modify the reaction mechanism by adding
species or deleting a reaction, for instance, only the interpreter input file needs to change; the Chemkin application code does not have to be altered. The
modular approach of separating the description of
the chemistry from the set-up and solution of the
reacting flow problem allows the software designer
great flexibility in writing chemical-mechanism-independent code. Moreover, the same mechanism
can be used in different chemically reacting flow
codes without alteration.
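To illustrate the two pieces of information just described, here is a minimal Python sketch of the modified Arrhenius rate expression and of the widely used NASA 7-coefficient polynomial form for Cp, H, and S. The coefficient values shown are placeholders, not real database entries.

    import math

    R = 8.314462618  # universal gas constant, J/(mol K)

    def arrhenius(A, b, Ea, T):
        """Modified Arrhenius rate constant k = A * T**b * exp(-Ea/(R*T))."""
        return A * T**b * math.exp(-Ea / (R * T))

    def nasa7(a, T):
        """Cp, H, S from one temperature range of a NASA 7-coefficient fit:
        Cp/R = a1 + a2*T + a3*T^2 + a4*T^3 + a5*T^4
        H/RT = a1 + a2*T/2 + a3*T^2/3 + a4*T^3/4 + a5*T^4/5 + a6/T
        S/R  = a1*ln(T) + a2*T + a3*T^2/2 + a4*T^3/3 + a5*T^4/4 + a7
        """
        cp = R * (a[0] + a[1]*T + a[2]*T**2 + a[3]*T**3 + a[4]*T**4)
        h = R * T * (a[0] + a[1]*T/2 + a[2]*T**2/3 + a[3]*T**3/4
                     + a[4]*T**4/5 + a[5]/T)
        s = R * (a[0]*math.log(T) + a[1]*T + a[2]*T**2/2 + a[3]*T**3/3
                 + a[4]*T**4/4 + a[6])
        return cp, h, s

    # Placeholder coefficients (illustrative only, not a real species fit):
    a = [3.5, 1.0e-4, -1.0e-8, 0.0, 0.0, -1.0e3, 4.0]
    print(nasa7(a, 1000.0))
    print(arrhenius(1.0e13, 0.0, 1.5e5, 1000.0))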
Once the nature of the desired or substituted
product or intermediate or reactant is known, we
wish to describe how it and the other species change
with time, while obeying thermodynamic laws. In
order to do this we use another program, run by entering a four-line command sequence (the original listing is not reproduced here). After each of the first three lines press Enter; after the fourth line press Enter twice.
If the thermodynamic data are in chemlib.exe\thermdat, substitute that file for sandia.dat.
In every run the file rates.out will be created.
The above is repeated, in both the forward and reverse directions, for every instant of time at which a calculation is made, and the reactions are then ranked according to their rates. Note that when the sign in the fifth column is negative, the reaction proceeds in the reverse direction. Also note that these data are very important in determining the mechanism of the reaction.
Another file, sense.out, is created when the input file ethane.sam indicates that it is desired; it can be produced for up to five species.
There is often a great deal of uncertainty in the
rate constants for some reaction mechanisms. It is,
therefore, desirable to have an ability to quantify the
effect of an uncertain parameter on the solution to
a problem. A sensitivity analysis is a technique which
is used to help achieve this end. Applying sensitivity
analysis to a chemical rate mechanism requires
partial derivatives of the production rates of the
species with respect to parameters in the rate constants for the chemical reactions. The sense.out file shows these partial derivatives, that is, how the increase or decrease of each species changes the rate of each reaction at every interval of time; like the rates.out file, it is very important in determining the mechanism and optimizing the reactions.
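For a concrete picture of such a coefficient, the sketch below computes a normalized sensitivity, S = (k/c)(dc/dk), by finite differences for an invented one-reaction model. Real codes obtain these derivatives far more efficiently, but the quantity is the same.

    # Normalized sensitivity of a concentration to a rate constant.
    import math

    def conc(k, t=1.0, c0=1.0):
        return c0 * math.exp(-k * t)   # first-order decay A -> products

    k, eps = 2.0, 1e-6
    dcdk = (conc(k * (1 + eps)) - conc(k)) / (k * eps)   # dc/dk
    S = (k / conc(k)) * dcdk
    print(S)   # analytically S = -k*t = -2 for this model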
operating unit 4. After executing step 5 and eventually returning to step 2, set g is found to be empty.
As a result only one process structure is generated
by algorithm SSG. This process structure is to be
evaluated (See Figures 20, 22, and 25).
In the design of an industrial process, the number of possible structures in this real example is 3465 for producing material A61 (Folpet) with the operating units listed in Figure 88 and the maximal structure shown in Figure 85. Materials A5, A14, A16,
A22, A24, A25, A48, and A61 belong to class c. If
operating unit 23 is selected for producing material
A14, then 584 different structures remain. With an
additional decision on material A61, the number of
structures is reduced to 9. This number is small
enough so that all the structures can be evaluated
by an available simulation or design program.
Consider an industrial process synthesis problem whose set M of materials has 65 elements, M = {A1, A2, ..., A65}, where R = {A1, A2,
A3, A4, A6, A7, A8, A11, A15, A17, A18, A19, A20,
A23, A27, A28, A29, A30, A34, A43, A47, A49, A52,
A54} is the set of raw materials. Moreover, 35 operating units are available for producing the product,
material A61. The solution structure of the problem
is given in Figure 87. The structure of Synphony, as described by Dr. L. T. Fan, is outlined in Figure 89.
An algorithm and a computer program were developed to facilitate the design decisions of the discrete
parameters of a complex chemical process to reduce
the number of processes to be optimized by a simulation program. They are highly effective for both
hypothetical and real examples.
3.21 Kintecus
After encountering James Ianni's work on Kintecus on the Internet, I arranged an interview with him at Drexel University, where he is a graduate student in Metallurgical Engineering. The program models the reactions of chemical, biological, nuclear, and atmospheric processes. It is extremely fast and can model over 4,000 reactions in less than 8 megabytes of RAM, running in pure high-speed 32-bit mode under DOS.
It has full output of normalized sensitivity coefficients that are selectable at any specified time. They
are used in accurate mechanism reduction, determining which reactions are the main sources and
sinks, which reactions require accurate rate constants, and which ones can have guessed rate constants. The program can use concentration profiles
of any wave pattern for any species or laser profile
for any hv. A powerful parser with a mass and charge balance checker is provided to catch reactions that the OCR or the operator entered incorrectly when the model is yielding incorrect or divergent results.
3.22 SWAMI
The Strategic Waste Minimization Initiative (SWAMI) software program is a user-friendly computer tool that applies process analysis techniques to identify waste minimization opportunities within an industrial setting, promoting waste reduction and pollution prevention at the source.
The software program assists the user in:
- Simplifying the highly complex task of process analysis for hazardous materials use, identification, and tracking.
- Storing process information for future reassessment and evaluation of pollution prevention opportunities due to changes in process design.
- Simulating the effect of waste stream analysis based on process changes in promoting pollution prevention alternatives.
- Developing mass balance calculations for the entire process and for each unit operation by total mass, individual chemical compounds, and specific chemical elements (see the sketch after this list).
- Performing cost-benefit studies for one or more feasible waste reduction or pollution prevention solutions.
- Prioritizing opportunity points by cost of treatment and disposal or by volume of hazardous waste generated.
- Developing flow diagrams of material inputs, process sequencing, and waste output streams.
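The mass balance item above can be pictured with a minimal sketch (made-up streams and a single mixer node, not SWAMI's data model): for a balanced node, the per-component sums of inflows minus outflows are zero.

    # Per-component mass balance around one unit-operation node.
    streams = {  # stream name -> {component: kg/h}
        "feed1": {"water": 900.0, "solvent": 100.0},
        "feed2": {"solvent": 50.0},
        "out":   {"water": 900.0, "solvent": 150.0}}
    node = {"in": ["feed1", "feed2"], "out": ["out"]}  # a mixer node

    def component_balance(node, streams):
        bal = {}
        for s in node["in"]:
            for comp, m in streams[s].items():
                bal[comp] = bal.get(comp, 0.0) + m
        for s in node["out"]:
            for comp, m in streams[s].items():
                bal[comp] = bal.get(comp, 0.0) - m
        return bal  # zero for every component if the node balances

    print(component_balance(node, streams))  # {'water': 0.0, 'solvent': 0.0}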
Process Description
Feeds 1 and 2 and streams 3 and 4 are mixed, react, and are separated into streams 5 and 6. Stream 5 is split into streams 3 and 7. Stream 6 is separated into streams 8 and 9, which is further separated into streams 4 and 11.
Process Flowsheet (figure not reproduced)
Node List
Process contains 14 nodes:
Node 1 (feed)
Node 2 (feed)
Node 3 (mixer)
Node 4 (mixer)
Node 5 (mixer)
Node 6 (reactor)
Node 7 (reactor)
Node 8 (separator)
Node 9 (splitter)
Node 10 (separator)
Node 11 (separator)
Node 12 (product)
Node 13 (product)
Node 14 (product)
Node Information
Node 1 is a feed
Precipitator
Extractor
Component Split
Incinerator
Compressor
Stripper
Reactor
Exchanger
Bioreactor
Manipulate
Controller
Feedforward
Crystallizer
Clarifier
Sensitivity
Membrane (UF, RO)
Electrodialysis
Saturator
Dehydrator
Sensitivity Analysis
The sensitivity block allows the user to determine
easily the sensitivity of output results to changes in
Block Parameters and physical constants.
Environmentalists contend that zero chlorine input to the industrial base means zero chlorinated
toxins discharged to the environment. Industry experts claim that such a far reaching program is
unnecessary and will have large socioeconomic impacts. Environmentalists have responded with the
argument that overall socioeconomic impacts will be
small since there are adequate substitutes for many
of the products that currently contain chlorine.
The effects of coal quality on utility boiler performance are difficult to predict using conventional
methods. As a result of environmental concerns,
more utilities are blending and selecting coals that
are not the design coals for their units. This has led
to a wide range of problems, from grindability and
moisture concerns to fly ash collection. To help
utilities predict the impacts of changing coal quality,
the Electric Power Research Institute (EPRI) and the
U.S. Department of Energy (DOE) have initiated a
program called Coal Quality Expert (CQE). The program quantifies coal quality impacts using data generated in field-, pilot-, and laboratory-scale investigations. Among its results, FOULER is a mechanistic model, implemented as a computer code, that predicts coal ash deposition in a utility boiler, and SLAGGO is a computer model that predicts the effects of furnace slagging in a coal-fired boiler.
In Europe, Prof. Mike Pilling and Dr. Sam Saunders at the Department of Chemistry at the University of Leeds, England, in particular, have worked on
tropospheric chemistry modeling and have had a
large measure of success. They have devised the
MCM (Master Chemical Mechanism), a computer
system for handling large systems of chemical equations, and were responsible for quantifying the potential that each VOC exhibits to the development of
the Photochemical Ozone Creation Potential (POCP)
concept. The goal is to improve and extend the
Photochemical Trajectory Model for the description
of the roles of VOC and NOx in regional scale photooxidant formation over Europe. In their work they
use Burcat's Thermochemical Data for Combustion
Calculations in the NASA format.
Statistical methods, pattern recognition methods,
neural networks, genetic algorithms and graphics
programming are being used for reaction prediction,
synthesis design, acquisition of knowledge on chemical reactions, interpretation of mass spectra, analysis and simulation of infrared spectra, analysis and modeling of biological activity, finding new lead structures, generation of three-dimensional molecular models, assessing molecular similarity,
prediction of physical, chemical, and biological properties, and databases of algorithms and electronic
publishing. Examples include predicting the course of a chemical reaction and its products for given starting materials using EROS (Elaboration of Reactions for
Organic Synthesis) where the knowledge base and
the problem solving techniques are clearly separated. Another case includes methods for defining
appropriate, easily obtainable starting materials for
the synthesis of a desired product. This includes the
individual reaction steps of the entire synthesis plan.
It includes methods to derive the definition of structural similarities between the target structure and available starting materials.
Mathematical areas such as topology, distance geometry, and symbolic computation have begun to play roles in chemical studies.
Many problems in computational chemistry require
a concise description of the large-scale geometry and
topology of a high-dimensional potential surface.
Usually, such a compact description will be statistical, and many questions arise as to the appropriate
ways of characterizing such a surface. Often such
concise descriptions are not what is sought; rather,
one seeks a way of fairly sampling the surface and
uncovering a few representative examples of simulations on the surface that are relevant to the appropriate chemistry. An example is a snapshot or typical configuration or movie of a kinetic pathway.
Several chemical problems demand the solution of
mathematical problems connected with the geometry of the potential surface. Such a global understanding is needed to be able to picture long time
scale complex events in chemical systems. This includes the understanding of the conformation transitions of biological molecules. The regulation of
biological molecules is quite precise and relies on
sometimes rather complicated motions of a biological molecule. The most well studied of these is the
so-called allosteric transition in hemoglobin, but
indeed, the regulation of most genes also relies on
these phenomena. These regulation events involve
rather long time scales from the molecular viewpoint. Their understanding requires navigating through the complete configuration space. Another such long-time-scale process that involves complex organization in the configuration space is biomolecular folding itself.
Similarly, specific kinetic pathways are important.
Some work has been done on how the specific pathways can emerge on a statistical energy landscape.
These ideas are, however, based on the quasi-equilibrium statistical mechanics of such systems, and
there are many questions about the rigor of this
approach. Similarly, a good deal of work has been
carried out to characterize computationally pathways on complicated realistic potential energy surfaces. Techniques based on path integrals have been
used to good effect in studying the recombination of
ligands in biomolecules and in the folding events
involved in the formation of a small helix from a
coiled polypeptide. These techniques tend to focus
on individual optimal pathways, but it is also clear
that sets of pathways are very important in such
problems. How these pathways are related to each
other and how to discover them and count them is
still an open computational challenge.
The weak point in the whole scenario of new drug
discovery has been identification of the lead. There
may not be a good lead in a company's collection.
The wrong choice can doom a project to never finding compounds that merit advanced testing. Using
only literature data to derive the lead may mean that
the company abandons the project because it cannot patent the compounds found. These concerns
have led the industry to focus on the importance of
molecular diversity as a key ingredient in the search
for a lead. Compared to just 10 years ago, orders of
magnitude more compounds can be designed, synthesized, and tested with newly developed strategies. These changes present an opportunity for the
imaginative application of mathematics.
There are three aspects to the problem of selecting
samples from large collections of molecules: First,
what molecular properties will be used to describe
the compounds? Second, how will the similarity of these properties between pairs of molecules be quantified? Third, how will the molecules be grouped or selected?
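One common answer to the second question, sketched below with made-up bit-vector fingerprints, is the Tanimoto coefficient on binary substructure fingerprints; this is a standard measure, not necessarily the one any particular company uses.

    # Tanimoto similarity between two binary molecular fingerprints.
    def tanimoto(fp1, fp2):
        inter = len(fp1 & fp2)
        union = len(fp1 | fp2)
        return inter / union if union else 1.0

    mol_a = {1, 4, 7, 9}      # indices of "on" bits in each fingerprint
    mol_b = {1, 4, 8, 9, 12}
    print(tanimoto(mol_a, mol_b))  # 3 shared bits / 6 total = 0.5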
For naturally occurring biomolecules, one of the
most important approaches is the understanding of
the evolutionary relationships between macromolecules. The study of the evolutionary relationship
between biomolecules has given rise to a variety of
mathematical questions in probability theory and
sequence analysis. Biological macromolecules can
be related to each other by various similarity measures, and at least in simple models of molecular
evolution, these similarity measures give rise to an
ultrametric organization of the proteins. A good deal
of work has gone into developing algorithms that
take the known sequences and infer from these a
parsimonious model of their biological descent.
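A minimal sketch of one such similarity measure is the classical edit distance between sequences, computed by dynamic programming; the short sequences below are invented, and real phylogenetic work uses much richer scoring schemes.

    # Edit distance between two sequences by dynamic programming.
    def edit_distance(a: str, b: str) -> int:
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1,        # deletion
                               cur[j - 1] + 1,     # insertion
                               prev[j - 1] + (ca != cb)))  # substitution
            prev = cur
        return prev[-1]

    print(edit_distance("MKTAYIAK", "MKSAYIRK"))  # 2 substitutions apart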
An emerging technology is the use of multiple
rounds of mutation, recombination, and selection to
obtain interesting macromolecules or combinatorial
covalent structures. Very little is known as yet about
the mathematical constraints on finding molecules
in this way, but the mathematics of such artificial
evolution approaches should be quite challenging.
Understanding the navigational problems in a high-dimensional sequence space may also have great
relevance to understanding natural evolution. Is it
punctuated or is it gradual as many have claimed in
the past? Artificial evolution may obviate the need to
completely understand and design biological molecules, but there will be a large number of interesting mathematical problems connected with the design.
Drug leads binding to a receptor target can be
directly visualized using X-ray crystallography. There
is physical complexity because the change in free
energy is complex as it involves a multiplicity of
factors including changes in ligand bonding (with both solvent water and the target protein) and changes in conformational entropy.
3.34 WMCAPS
A system is proposed here that uses coding theory, cellular automata, the computing power of Envirochemkin, and a program that computes chemical equilibrium by minimization of the chemical potential. The program starts with the input
ingredients defined as the number of gram-atoms of
each chemical element as
b_i, \quad i = 1, \ldots, m.

The unknown mole numbers x_j must satisfy the element-balance constraints

\sum_{j=1}^{n} a_{ij} x_j = b_i, \quad i = 1, \ldots, m, \qquad x_j > 0, \quad j = 1, \ldots, n,

with n \ge m, where a_{ij} denotes the number of gram-atoms of element i in one mole of species j. Subject to these constraints it is desired to minimize the total Gibbs free energy of the system,

f(x) = \sum_{j=1}^{n} c_j x_j + \sum_{j=1}^{n} x_j \log\left( x_j \Big/ \sum_{i=1}^{n} x_i \right),

where c_j = F_j/RT + \log P, F_j is the Gibbs energy per mole of the jth gas at temperature T and unit atmospheric pressure, R is the universal gas constant, and P is the total pressure in atmospheres.
My experience is that this method works like a
charm on a digital computer and is very fast.
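A minimal sketch of this constrained minimization, using SciPy on an invented three-species H2/O2/H2O example (the element matrix, feed, and c_j values are placeholders, and this is not the WMCAPS code itself):

    # Gibbs free energy minimization under element-balance constraints.
    import numpy as np
    from scipy.optimize import minimize

    # Columns: H2, O2, H2O; rows: elements H, O.
    A = np.array([[2.0, 0.0, 2.0],   # gram-atoms of H per mole of species
                  [0.0, 2.0, 1.0]])  # gram-atoms of O per mole of species
    b = np.array([2.0, 1.0])         # gram-atoms of H and O in the feed
    c = np.array([-10.0, -12.0, -40.0])  # c_j = F_j/RT + log P (illustrative)

    def gibbs(x):
        """f(x) = sum c_j x_j + sum x_j log(x_j / sum x_i)."""
        xt = x.sum()
        return np.dot(c, x) + np.sum(x * np.log(x / xt))

    cons = {"type": "eq", "fun": lambda x: A @ x - b}   # element balances
    bounds = [(1e-10, None)] * 3                        # x_j > 0

    x0 = np.array([0.5, 0.25, 0.5])  # a feasible starting point
    res = minimize(gibbs, x0, bounds=bounds, constraints=cons)
    print("equilibrium mole numbers:", res.x)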
Now we have the equilibrium composition at the
given temperature and pressure in our design for
our industrial plant. This is a very important first
step. However, our products must go through a series of other operations at different conditions. Also, these computed compositions are equilibrium values, which the products may not actually reach within the residence time of the reactor. This is where
Envirochemkin comes in. Starting with the equilibrium values of each compound, it has rate constants
for each reaction in the reactor and again at the
proper temperature and pressure will calculate the
concentration of each compound in the mixture.