
Contents:
1. Genetic pollution
2. Synthetic and systems biology
3. Biopiracy
4. Bioreactors
5. DNA barcoding
6. Consortium for the Barcode of Life
7. In vitro fertilization
8. Bioremediation
9. Bio-prospecting
10. Bio-farming
11. Bio-fortification
12. Bio-indicators
13. Bio-dynamics
14. Gene-doping
15. Protein engineering
16. Bio-processing
17. Monoclonal antibodies
18. Cell culture technology
19. Biosensors
20. Nano-biotechnology
21. Microarrays
22. Induced pluripotent stem cells (iPSC)
23. Mining with microbes
24. Petroleum microbiology
25. Persistent organic pollutants
26. Biorobotics
27. Nanorobotics
28. Bioterrorism
29. Bionics
30. Biomagnification
31. Bioaccumulation
32. Molecular nanotechnology
33. Nano-medicine
34. Nanosensors
35. Quantum dots
36. Photodynamic therapy
37. Sonodynamic therapy
38. Impalefection
39. Plastic electronics
40. Graphene
41. Thermal imaging
42. CT scanning
43. Synthetic fuels
44. Hythane
45. Waste to energy
46. Clean coal technology
47. Gas hydrate
48. Enhanced oil recovery
49. UCG (underground coal gasification)
50. Mobile TV
51. Wireless Application Protocol
52. E-waste disposal
53. Nuclear energy developments in India
54. Radiometric dating
55. Generation IV reactors
56. Food irradiation
57. India-based Neutrino Observatory
58. Space-based telescopes
59. IPv4 and IPv6
60. Barcode scanner

Genetic Pollution
The term genetic pollution refers to gene flow between genetically modified (GM) organisms and non-genetically engineered organisms. It is also used to describe gene flow between non-native and native species, as well as between domesticated and wild species.

The problem of genetic pollution
Genetically modified organisms are organisms that have been modified by the insertion of one or more genes. The inserted genes are called transgenes, and these can be taken from a different species of the same kingdom or from an entirely different kingdom. In some cases, already existing genes may simply be tweaked. The whole point is to give the genetically modified organism certain desirable genetic traits that would not occur in it naturally. If these genetically modified organisms were to breed with non-genetically engineered organisms, they could pass on the modified traits to the non-GE organisms. These modified traits would "pollute" the natural genetic traits of these organisms and could create several ecological problems:
(i) They might edge out the local species and drive them to extinction.
(ii) They might cause loss of genetic diversity.
(iii) Herbicide- or pest-resistant strains might spread rapidly and create a nightmare for farmers.

Genetic pollution in plants
Gene flow occurs from genetically modified plants to sexually compatible non-genetically engineered plants. Gene flow from a GM plant to a non-GM plant may occur by wind, water or animal pollination. Genetic pollution may also occur by knowingly or unknowingly providing GM seeds and food as food aid or seed stock to Third World countries.
The most well-known example of genetic pollution, often cited by researchers, was the Quist and Chapela report of the discovery of transgenes from GE maize in landraces of maize in Oaxaca, Mexico. However, this report has since been criticized for insufficient evidence, and genetically modified corn did not show up in later studies in the area.
A clear example of genetic pollution is the genetically modified, herbicide-resistant creeping bentgrass produced by the Scotts Company. In a 2004 study, pollen from this GM bentgrass was found to travel long distances on the wind and cross with naturally occurring bentgrass species.

Genetic pollution in animals
Genetic pollution in animals occurs when transgenic individuals mate with non-GM animals. This is somewhat less common than genetic pollution in plants, as, unlike GM crops, GM animals have not been approved for human consumption and are not as widespread. One example is farmed Atlantic salmon breeding with wild Atlantic salmon. The transgenes may spread within the non-GM populations or be lost over time, depending upon factors such as the following:
(i) size of the non-GM population
(ii) extent of breeding between GM and non-GM species
(iii) whether or not fertile generations are produced
(iv) advantages or disadvantages conferred by the GM organisms on the non-GM ones
Some researchers and environmentalists consider the term 'genetic pollution' controversial and inappropriate. They prefer the term 'genetic mixing', giving the following reasons:
(i) Pure gene pools are not necessarily the better ones.

(ii) Mixing of gene pools does not always lead to genetic decline.
(iii) Hybrids might be biologically fitter than the parental species.

Synthetic Biology

Synthetic biology is the design and construction of new biological parts, devices, and systems, and the re-design of existing natural biological systems for useful purposes. It is a new interdisciplinary endeavour that applies engineering principles to biology. Simple biological elements can be adopted as reusable, well-characterised components for the construction of more complex devices and systems. The approach allows the application of engineering concepts such as modularity, abstraction and insulation from underlying detail to biology. Hierarchical and modular (recurrent) organization is what makes biology understandable and synthetic biology possible.

What is the difference between synthetic biology and systems biology?
Systems biology studies complex biological systems as integrated wholes, using tools of modeling, simulation, and comparison to experiment. The focus tends to be on natural systems, often with some (at least long-term) medical significance. Synthetic biology studies how to build artificial biological systems for engineering applications, using many of the same tools and experimental techniques. But the work is fundamentally an engineering application of biological science, rather than an attempt to do more science. The focus is often on taking parts of natural biological systems, characterizing and simplifying them, and using them as components of a highly unnatural, engineered biological system.

What technologies would benefit synthetic biology?
The key enabling technologies are sequencing, fabrication, modeling and measurement. Fast and cheap DNA sequencing and synthesis would allow rapid design, fabrication, and testing of systems. Software tools that enable system design and simulation are also needed, as are still-better measurement technologies that allow observation of biological system state (i.e., the equivalent of a biological debugger).

Benefits / potential applications:
(i) more efficient ways to produce medical treatments (e.g. against malaria)
(ii) adding features to plants to reduce environmental requirements/impact
(iii) molecular medicine
(iv) cancer-destroying robots
(v) biosensing
(vi) bioremediation
(vii) creating photosynthetic apparatus in the lab

Concerns: Synthetic biology raises questions for ethics, biosecurity, biosafety, health, energy and intellectual property. Potential hazards associated with synthetic biology include (a) the accidental release of an unintentionally harmful organism or system, (b) the purposeful design and release of an intentionally harmful organism or system (e.g., smallpox) by malicious actors,

(c) a future over-reliance on our ability to design and maintain engineered biological systems in an otherwise natural world.

Biopiracy
Biopiracy can refer to:
(i) unauthorised use of biological resources (e.g., plants, animals, organs, micro-organisms, genes);
(ii) unauthorised use of traditional communities' knowledge of biological resources;
(iii) unequal sharing of benefits between a patent holder and the indigenous community whose resource and/or knowledge has been used;
(iv) patenting of biological resources with no respect for the criteria of patentability (novelty, non-obviousness and usefulness).

Analysis: Biopiracy has emerged as a term to describe the ways that corporations from the developed world claim ownership of, or take unfair advantage of, the genetic resources and traditional knowledge and technologies of developing countries. Biopiracy allegedly contributes to inequality between developing countries rich in biodiversity and developed countries served by pharmaceutical industries exploiting those resources.
During the last decades, an erosion of biodiversity was observed. Most observers estimated that the primary cause of this erosion was the lack, or wrong definition, of ownership rights. Indeed, before 1992, living resources were regarded as the Common Heritage of Mankind. As common resources, private companies could take and use any resource without any justification or compensation. The Convention on Biological Diversity (CBD) entered into force in 1993. It gave sovereign national rights over biological resources. One of its advantages was that it would enable developing countries to better benefit from their resources and traditional knowledge. Under these new rules, bioprospecting is expected to involve prior informed consent and must result in a sharing of benefits between the biodiversity-rich country and the prospecting firm. However, some critics believe that the CBD has failed to establish appropriate regulations to prevent biopiracy. The issue of biopiracy is mostly raised by less-developed, biodiversity-rich countries (e.g. India, Brazil and Malaysia, among others) and by some NGOs.

Basmati rice case: In 1997, a Texas company called RiceTec won a patent on "basmati rice lines and grains". The patent covered lines of basmati and basmati-like rice and ways of analyzing that rice. Faced with international outrage over allegations of biopiracy, RiceTec later lost most of the claims of the patent, including, most importantly, the right to call its rice lines "basmati".

Developed countries' arguments: Some companies argue that under-developed countries are themselves guilty of piracy: developing countries do not have adequate and efficient intellectual property protection laws, and the companies say they are losing millions of dollars per year because patents are not respected. These companies have been applying pressure for the strengthening of intellectual property rules within the WTO. Companies say access to biological resources allows them to develop new products that could help solve essential food and health issues. They also argue that research, development and commercialization authorisations have a cost that must be balanced by the protection of the resulting product. Patents offer this much-needed revenue and favour innovation.

One solution suggested to resolve this disagreement between developed and developing countries is to define bilateral contracts between the source country and pharmaceutical or seed companies. These bioprospecting contracts lay down the rules of benefit-sharing and can potentially bring substantial royalties to developing countries.

India's efforts against biopiracy: Enactment of the Biological Diversity Act, 2002, which primarily addresses access to genetic resources and associated knowledge by foreign individuals, institutions or companies, to ensure equitable sharing of the benefits arising out of the use of these resources and knowledge with the country and its people. Further, the approval of the National Biodiversity Authority is required before seeking any IPR based on biological material and associated knowledge obtained from India.

Bioreactors
Bioreactors are vessels in which raw materials are biologically converted into specific products (individual enzymes, etc.) using microbial, plant, animal or human cells. A bioreactor achieves the desired product by providing optimal growth conditions (temperature, pH, substrate, salts, vitamins, oxygen). More generally, a bioreactor is a vessel in which a chemical process involving organisms, or biochemically active substances derived from such organisms, is carried out; this process can be either aerobic or anaerobic. A bioreactor may also refer to a device or system meant to grow cells or tissues in the context of cell culture; these devices are being developed for use in tissue engineering. On the basis of mode of operation, a bioreactor may be classified as batch, fed-batch or continuous. The most commonly used bioreactors are of the stirred-tank type.
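As a rough illustration of the batch mode of operation, here is a minimal growth sketch in Python (assumed Monod kinetics with made-up parameter values): biomass grows until the substrate is exhausted, which is exactly why fed-batch and continuous modes exist.

# Minimal batch-bioreactor sketch (illustrative, assumed Monod kinetics):
# biomass X grows on substrate S until S runs out.
mu_max, Ks, yield_xs = 0.4, 0.5, 0.5   # 1/h, g/L, g biomass per g substrate
X, S, dt = 0.1, 10.0, 0.1              # initial biomass, substrate (g/L), time step (h)

for step in range(480):                # simulate 48 hours at dt = 0.1 h
    mu = mu_max * S / (Ks + S)         # specific growth rate (Monod equation)
    dX = mu * X * dt                   # biomass produced this step
    X += dX
    S = max(S - dX / yield_xs, 0.0)    # substrate consumed to make that biomass

print(round(X, 2), round(S, 2))        # final biomass ~5.1 g/L, substrate ~0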

A stirred-tank reactor is usually cylindrical, often with a curved base, to facilitate mixing of the reactor contents. The stirrer provides even mixing and oxygen availability throughout the bioreactor. Alternatively, air can be bubbled through the reactor. Uses:

Bioreactors are also designed to treat sewage and wastewater.

Photobioreactors (bioreactors that incorporate some type of light source) are used to grow small phototrophic organisms such as cyanobacteria, algae, or mosses. These organisms use light, through photosynthesis, as their energy source and do not require sugars or lipids as an energy source.

NASA tissue-cloning bioreactor: NASA has developed a new type of bioreactor that artificially grows tissue in cell cultures. NASA's tissue bioreactor can grow heart tissue, skeletal tissue, ligaments, cancer tissue for study, and other types of tissue. In bioreactors whose goal is to grow cells or tissues for experimental or therapeutic purposes, the design differs significantly from that of industrial bioreactors.

DNA Barcoding
DNA barcoding is a technique for characterizing species of organisms using a short DNA sequence from a standard, agreed-upon position in the genome. DNA barcode sequences are very short relative to the entire genome, and they can be obtained reasonably quickly and cheaply. The cytochrome c oxidase subunit 1 mitochondrial region (COI) is emerging as the standard barcode region for higher animals. DNA barcoding is already a well-established technique in animals; the method was proposed in 2003 at the University of Guelph in Ontario, Canada.
Recently, a "barcode" gene that can distinguish between the majority of plant species has been identified, making the technique useful for cataloguing plant species as well. This gene can be used to catalogue plant life because its code differs slightly between species but is nearly identical within a species. While the plant barcode gene will not be able to identify every plant species on Earth, it is likely to be able to distinguish between 90% of them. The technique, which currently serves comparative biologists as a quick reference guide, complements conventional taxonomy; in the case of plants it is being refined for greater accuracy.
Progress in species-identification techniques is of great interest to India, which has remarkable megadiversity: its forests in the Western Ghats, the Eastern Himalayas and parts of the Northeast are species-rich.

Barcoding projects have four components:
1. The specimens
2. The laboratory analysis
3. The database
4. The data analysis

Benefits:
(i) Barcoding can identify a species from bits and pieces.
(ii) When established, barcoding will quickly identify undesirable animal or plant material in processed foodstuffs and detect commercial products derived from regulated species.
(iii) Barcoding can distinguish among species that look alike, uncovering dangerous organisms masquerading as harmless ones and enabling a more accurate view of biodiversity.
(iv) Biologists everywhere are racing to classify all plants and animals on Earth before key habitats are degraded or destroyed. With such comprehensive information, they hope to see an encyclopaedia of life hosted on the Internet, explaining and depicting the appearance, features and functional role of millions of species in nature.
(v) It opens the way for an electronic handheld field guide, the Life Barcoder.

Controversy: DNA barcoding is not without controversy, most of which rests on the fact that species identification is based on a single non-nuclear gene. Barcoding can be made more reliable if supplemented with information from one or more nuclear genes. This would reduce the reliance on a single gene and help identify cases where non-nuclear DNA behaves differently from the nuclear genome.
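A minimal Python sketch of the identification step (illustrative only: the reference sequences and the 97% identity threshold are made up, and real projects use alignment-based searches against curated reference libraries such as BOLD):

# Toy DNA-barcode matching: assign a query COI fragment to the species
# with the highest sequence identity, if it clears a threshold.
reference_library = {                      # hypothetical reference barcodes
    "Species A": "ATGGCATTCCTAGGACAT",
    "Species B": "ATGGCTTTACTAGGTCAT",
}

def identity(a, b):
    matches = sum(x == y for x, y in zip(a, b))
    return matches / min(len(a), len(b))

def identify(query, threshold=0.97):
    best = max(reference_library, key=lambda sp: identity(query, reference_library[sp]))
    if identity(query, reference_library[best]) >= threshold:
        return best
    return "no confident match"

print(identify("ATGGCATTCCTAGGACAT"))      # -> Species A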

Consortium for the Barcode of Life (CBOL)
CBOL is an international collaboration of natural history museums, herbaria, biological repositories, and biodiversity inventory sites, together with academic and commercial experts in genomics, taxonomy, electronics, and computer science. Initial organizational support for CBOL is provided by a 2.5-year grant from the Sloan Foundation. The mission of CBOL is to rapidly compile DNA barcodes of known and newly discovered plant and animal species, establish a public library of sequences linked to named specimens, and promote the development of portable devices for DNA barcoding.

In vitro fertilization (IVF)
IVF is the basic assisted-reproduction technique, in which fertilization occurs in vitro (literally, "in glass"). The man's sperm and the woman's egg are combined in a laboratory dish, and after fertilization the resulting embryo is transferred to the woman's uterus. The five basic steps of a cycle are:
(i) superovulation (stimulating the development of more than one egg in a cycle)
(ii) egg retrieval
(iii) fertilization
(iv) embryo culture
(v) embryo transfer

Complications:
(i) Risk of multiple births: multiple births are related to increased risk of pregnancy loss, obstetrical complications, prematurity, and neonatal morbidity with the potential for long-term damage.
(ii) Birth defects: heart defects, cleft lips, etc.

Ethical issues:
(i) IVF has allowed women to become pregnant in their fifties and sixties.
(ii) IVF for lesbian couples.
(iii) Religious objections.

Extra: A sperm bank or cryobank is a facility that collects and stores human sperm, mainly from sperm donors, primarily for the purpose of achieving pregnancies through third-party reproduction, notably by artificial insemination.

Bioremediation
Bioremediation can be defined as any process that uses microorganisms, fungi, green plants or their enzymes to return a natural environment altered by contaminants to its original condition.
Applications:
(i) desalination of agricultural land
(ii) elimination of a wide range of pollutants and wastes from the environment
(iii) protecting plant roots from nematodes and pathogens
(iv) recycling nutrients
(v) advantages over conventional physical and chemical methods such as precipitation, adsorption, electrodialysis and reverse osmosis

In India: Under the biotechnology component of the Indo-U.S. Knowledge Initiative on Agriculture, a strategic alliance has been envisaged for training and research on the development of transgenic crops with resistance to economically important viruses; tolerance to drought, heat and salinity;

and micro-nutrient utilization efficiency. Molecular breeding and genomics in crops and animals, and molecular approaches to plant and animal health protection, have also been agreed upon.

Bio-prospecting
It is the exploration, extraction, and screening of biological diversity and indigenous knowledge for commercially valuable genetic and biochemical resources.
Merits:
(i) important for discovering new drugs
(ii) benefits industry R&D, the host country and the community
(iii) helps pharmaceutical firms in research
(iv) has led to the discovery of several life-saving drugs
Limitations:
(i) excessive exploitation causes imbalance in ecosystems
(ii) companies often patent the products, a betrayal of the indigenous community
(iii) so far there is a lack of regulation in monitoring the agreements
(iv) MNCs often exploit debt-ridden third-world countries
(v) the role of intermediaries often disguises the agency negotiating with the community
Suggestions:
(i) equitable sharing of discoveries
(ii) transparency of agreements
(iii) natives must be trained to provide raw materials to the firms
(iv) the intellectual property of the indigenous people must be protected in conformity with the Convention on Biological Diversity
(v) no patenting should be allowed
(vi) part of the profit must go towards environmental protection

Bio-farming
It is the application of biotechnology to farming, including protection against pests and diseases, or tolerance of herbicides.
Benefits:
(i) reduced costs of farming by reducing dependence on chemicals
(ii) strengthened ability of crops to defend themselves
(iii) harmless to humans
(iv) increased production and income for farmers
(v) reduced soil erosion
(vi) healthier, nutrient-rich soil
(vii) reduced potential for flooding
(viii) low dust and smoke
(ix) low emission of GHGs
(x) improved water quality
In India:

(i) The Bio-Dynamic Association of India (BDAI) is promoting and coordinating the bio-dynamic movement in India.
(ii) NPK fertilizer consumption has increased drastically in India.
(iii) Recent reports of the rejection of large consignments of Indian food exports by the United States and some European countries on sanitary and phytosanitary (SPS) grounds have raised a question mark over the future of the country's agricultural exports.
(iv) With bio-farming we will be in a better position to address the health concerns of our people.

Bio-fortification
Biofortification is a method of breeding crops to increase their nutritional value. This can be done either through conventional selective breeding or through genetic engineering.
Benefits:
(i) increases micronutrient levels in staple crops
(ii) implementation costs are low
(iii) advantageous over other health interventions
(iv) beneficial to rural areas
Problems:
(i) bio-fortified crops may be difficult to accept if they have different characteristics from their unfortified counterparts
(ii) they encourage further simplification of the human diet and food system
(iii) they may work against the diversification of diets in developing countries
In India:
(i) ICRISAT is funding it in India.
(ii) The Bill & Melinda Gates Foundation is also funding it.
(iii) Deficiencies of micronutrients like vitamin A, iron and iodine in staple crops can degrade public health in India.

Bio-indicators / bio-monitors
Biological indicators are species used to monitor the health of an environment or ecosystem through their function, population, or status; for example, the copepods and other small aquatic crustaceans present in many water bodies.
Types:
(i) Plants, e.g. lichens, symbioses of algae and fungi (indicator plants)
(ii) Animals
(iii) Microorganisms
In India:
(i) CPCB officials say that nearly 80 per cent of the pollution in Indian rivers is of bacteriological origin.
(ii) A project, an Indo-Dutch collaboration, will be executed by the CPCB along with the Zoological Survey of India (ZSI) to identify aquatic microorganisms that indicate the state of riverine pollution.
(iii) Sustainable Forest Management (SFM) in India.
Issues:
(i) It is difficult to find one common bioindicator for all the agro-climatic conditions of the entire country.
(ii) It is imperative to identify the species of bioindicators relevant to the different biogeographic zones of India.

Bio-dynamics

Biodynamic agriculture is a method of organic farming that treats farms as unified and individual organisms. It emphasizes:
(i) balanced, holistic development
(ii) the interrelationship of soil, plants and animals as a self-nourishing system without external inputs
Advantages:
(i) it uses very limited external inputs
(ii) it reuses most on-farm waste
(iii) it has a low impact on the environment
(iv) it provides an economical way of farming in which most of the costs are met at the time they are incurred
(v) it offers a solution to conflicts between economics and the environment
(vi) it improves the quality of the produce
In India:
(i) Biodynamic farming started in India in the early 1990s.
(ii) Among the first initiatives were the Kurinji farms near Kodaikanal, the Maikaal cotton project in Madhya Pradesh and the tea projects in Darjeeling (Ambootia, Selimbong, Makaibari) and South India (Singampatti - Oothu).
(iii) There are many other individual farmers practising the BD method.

Gene-doping
The World Anti-Doping Agency defines gene-doping as "the non-therapeutic use of cells, genes, genetic elements, or of the modulation of gene expression, having the capacity to improve athletic performance".
Issue: An example of gene doping could involve the recreational use of gene therapies intended to treat muscle-wasting disorders. Many of the resulting chemicals may be indistinguishable from their natural counterparts. In such cases, nothing unusual would enter the bloodstream, so officials would detect nothing in a blood or urine test.
Response:
(i) The World Anti-Doping Agency (WADA) has already asked scientists to help find ways to prevent gene therapy from becoming the newest means of doping.
(ii) At the WADA meeting, the delegates drafted a declaration on gene doping which, for the first time, included a strong discouragement of the use of genetic testing for performance.

Protein engineering
It is the process of developing useful or valuable proteins. There are two general strategies for protein engineering:
(i) Rational design: using detailed knowledge of the structure and function of the protein to make desired changes. This has the advantage of being generally inexpensive and easy.
(ii) Directed evolution: mimicking natural evolution. It generally produces superior results and requires no prior structural knowledge of the protein, nor the ability to predict what effect a given mutation will have.
Benefits and applications:
(i) crop improvement
(ii) vaccines and biotherapeutics
(iii) facilitating protein design for the production of protein and peptide mimics
(iv) enzyme inhibitors that are effective as pharmaceuticals

(v) further applications ranging from materials technology to bioelectronics, and from ecology to health
In India:
(i) India has world-class facilities for DNA sequencing and protein engineering.
(ii) Protein engineering is an integral part of India's biotechnology policy.

Extra: An additional technique known as DNA shuffling mixes and matches pieces of successful variants in order to produce better results. This process mimics recombination that occurs naturally during sexual reproduction.
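A toy Python sketch of the directed-evolution loop described above, combining random mutation, DNA-shuffling-style recombination and selection. The string "fitness" target is, of course, an artificial stand-in for a real screening assay:

# Toy directed evolution: mutate, recombine ("shuffle") and select
# variants until the population converges towards an arbitrary target.
import random

TARGET = "GATTACA"
ALPHABET = "ACGT"

def fitness(seq):                          # higher = closer to the target
    return sum(a == b for a, b in zip(seq, TARGET))

def mutate(seq):                           # random point mutation
    i = random.randrange(len(seq))
    return seq[:i] + random.choice(ALPHABET) + seq[i + 1:]

def shuffle(a, b):                         # single crossover, like DNA shuffling
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

population = ["".join(random.choice(ALPHABET) for _ in range(7)) for _ in range(20)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    best = population[:5]                  # selection of the fittest variants
    population = best + [mutate(shuffle(random.choice(best), random.choice(best)))
                         for _ in range(15)]

print(max(population, key=fitness))        # usually converges to GATTACA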

Bio-processing
It uses living cells (mostly single-celled microbes like yeast and bacteria) or the molecular components of their manufacturing machinery (often enzymes) to produce desired products, e.g. microbial fermentation, recombinant DNA technology, etc.

Monoclonal antibodies
Monoclonal antibody technology uses immune-system cells that make proteins called antibodies. Antibodies can locate substances that occur in minuscule amounts and measure them with great accuracy.
Uses:
(i) detect harmful microbes in food
(ii) locate environmental contaminants in food
(iii) distinguish between cancer cells and normal cells
(iv) diagnose infectious diseases
(v) provide highly specific therapeutic compounds
(vi) if joined to a toxin, selectively deliver chemotherapy to a cancer cell while avoiding healthy cells
(vii) India is developing them to treat organ-transplant rejection and autoimmune diseases

Cell culture technology
CCT is the growing of cells outside living organisms.
Plant cell culture:
(i) an essential step in creating transgenic crops
(ii) provides an environmentally sound and economically feasible option for obtaining naturally occurring products with therapeutic (healing) value
(iii) an important source of compounds used as flavours, colours and aromas by the food-processing industry
Insect cell culture:
(i) can develop biological control agents that selectively kill harmful insects and pests
(ii) can remove constraints in manufacturing biological control products
(iii) can act as a production method for therapeutic proteins
(iv) also used for the production of VLP (virus-like particle) vaccines against diseases like SARS and influenza, which could lower their costs
Mammalian cell culture:
(i) used in livestock breeding
(ii) can supplement and replace animal testing to assess the safety and efficacy of medicines

(iii) used to synthesise therapeutic compounds, such as certain proteins, that are difficult to make using genetically modified microbes
(iv) serves as a production technology for vaccines

Biosensors
A biosensor is a detecting device that relies on the specificity of cells and molecules to identify and measure substances at extremely low concentrations. It is composed of a biological component, such as a cell, enzyme or antibody, linked to a tiny transducer, a device that converts the biological response into a measurable (typically electrical or optical) signal.

Features:
(i) It couples knowledge of biology with advances in microelectronics.
(ii) The transducer produces an electrical or optical signal proportional to the concentration of the measured substance.
Uses:
(i) measure the nutritional value of vital biological components
(ii) provide measurements of vital blood components
(iii) locate and measure environmental pollutants
(iv) detect and quantify explosives, toxins and biowarfare agents
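As an illustration of feature (ii) above, a minimal Python calibration sketch with made-up numbers: fit the signal-versus-concentration line from known standards, then invert it to read an unknown sample.

# Illustrative only: converting a biosensor readout to a concentration,
# given that the signal is proportional to the analyte concentration.
import numpy as np

conc   = np.array([0.0, 1.0, 2.0, 4.0])      # hypothetical standards, mmol/L
signal = np.array([0.02, 0.51, 1.01, 1.98])  # transducer output, e.g. volts

k, offset = np.polyfit(conc, signal, 1)      # fit signal = k*conc + offset

unknown_signal = 1.40                        # reading from an unknown sample
estimated_conc = (unknown_signal - offset) / k
print(round(estimated_conc, 2))              # ~2.8 mmol/L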

NANO-BIOTECHNOLOGY
The study, manipulation, and manufacture of ultra-small structures and machines made of as few as one molecule. It joins the breakthroughs in nanotechnology to those in molecular biology.
Potential applications:
(i) increasing the power and speed of disease diagnostics
(ii) creating bio-nanostructures for inserting functional molecules into cells
(iii) improving the specificity and timing of drug delivery
(iv) miniaturizing biosensors by integrating the biological and electronic components into a single, microscopic component
In India: Research in nano-biotechnology is being pursued at various institutes such as Bharat Biotech, the Centre for Advanced Research and Development, and the Tata Chemicals Innovation Centre. Research areas include food technology, cancer diagnostics and imaging, cancer therapeutics, targeted drug delivery, and novel materials for nano-bio applications.

Microarrays
Microarray technology is transforming research because it permits the analysis of tens of thousands of samples simultaneously. Scientists use it to study gene structure and function. Thousands of DNA or protein molecules are arranged on glass slides (or beads) to create DNA and protein chips.
A) DNA microarrays
Uses:
(i) detect mutations in disease-related genes
(ii) monitor gene activity
(iii) diagnose infectious diseases and identify the best antibiotic treatment

(iv) identify genes important to crop productivity
(v) improve screening of microbes used in bioremediation
DNA microarrays have played a key role in efforts to interpret the raw genetic data provided by the Human Genome Project and others.
B) Protein microarrays
There is a gradual shift from DNA microarrays to protein microarrays. Each cell type contains thousands of different proteins, some of which are unique to that cell's job.
Uses:
(i) discover protein biomarkers that indicate disease stages
(ii) assess the potential efficacy and toxicity of drugs before clinical trials
(iii) measure differential protein production across cell types and developmental stages, in both healthy and diseased states
(iv) study the relationship between protein structure and function
C) Tissue microarrays
Uses:
(i) analysis of thousands of tissue samples on a single glass slide
(ii) detection of protein profiles in healthy and diseased tissues and validation of potential drug targets
(iii) brain tissue samples arrayed on slides with electrodes allow scientists to measure the electrical activity of nerve cells exposed to certain drugs

Induced pluripotent stem cells
Induced pluripotent stem cells (iPS cells or iPSCs) are a type of pluripotent stem cell artificially derived from a non-pluripotent cell by inducing a "forced" expression of certain genes. They are believed to be identical to natural pluripotent stem cells, such as embryonic stem (ES) cells. Skin cells of mice have been reprogrammed into iPS cells: the Hyderabad-based LV Prasad Eye Institute has successfully converted skin cells of mice into iPS cells that behave like embryonic stem cells.
There are two routes of introduction:
(i) a viral vector, usually a weakened (attenuated) adenovirus that cannot cause disease
(ii) non-viral introduction
Advantages:
(i) no risk of rejection, as the patient's own skin cells are used
(ii) no ethical issues arise, as adult cells from the skin are used
(iii) can treat people with hereditary genetic diseases

Mining with microbes
A new method of extracting metals from their ores is being developed that is more economical and environmentally friendly than earlier techniques. This alternative technique uses the bacteria Thiobacillus ferrooxidans and Thiobacillus thiooxidans to leach metals from their ores.
Method:
(i) the tailings of ore from previous copper mining are laid on an impermeable base
(ii) bacteria are then sprayed onto the piles of ore, in an acidic solution, and left to work
(iii) the T. ferrooxidans and T. thiooxidans microbes oxidise S2- and Fe2+ ions, thus releasing the copper into solution

'Bio-mining' microbe's secrets revealed: Scientists in Chile have described the molecular machinery of a bacterium used to extract copper and other metals from low-grade mineral concentrates in a process called bio-mining.

The findings were announced at the 12th International Biotechnology Symposium, held in Santiago in 2008. Using two sequences of the bacterium's DNA, the scientists identified the molecular processes enabling the microbe to acquire energy from ores, and confirmed their findings in laboratory experiments.
Mechanism:
(i) A. ferrooxidans can form thin layers called 'biofilms' made up of many individual bacteria.
(ii) It can break the bonds between copper and sulphur to obtain energy. This results in the metal's release.
Advantages:
(i) can extract Cu from low-grade ore reserves
(ii) commercial use is less expensive and has fewer environmental impacts than conventional processes
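In simplified form (the exact reactions depend on the ore mineralogy, so these are representative equations rather than a full description), the division of labour is: the bacteria regenerate ferric iron, and the ferric iron chemically leaches the copper sulphide.

4 Fe2+ + O2 + 4 H+ -> 4 Fe3+ + 2 H2O   (bacterial re-oxidation of iron)
CuS + 2 Fe3+ -> Cu2+ + 2 Fe2+ + S      (chemical leaching of the copper sulphide)

The dissolved Cu2+ is then recovered from solution, typically by solvent extraction and electrowinning.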

Petroleum microbiology
Many species of bacteria, fungi, and algae have the enzymatic capability to use petroleum hydrocarbons as food. Some of the hydrocarbons are converted into carbon dioxide and water, and some into cellular materials such as proteins and nucleic acids.
Applications:
(i) oil pollution control
(ii) enhanced oil recovery
(iii) control of microbial contamination of petroleum fuels and oil emulsions
(iv) conversion of petroleum hydrocarbons into microbial products
Advantages:
(i) prevention of oil pollution
(ii) preventing the world's oceans from becoming completely covered with a layer of oil
(iii) the largest potential application of petroleum microbiology is in the field of enhanced oil recovery
(iv) production of a variety of valuable materials, such as amino acids, carbohydrates, nucleotides, vitamins, enzymes and antibiotics
(v) anaerobic degradation of hydrocarbons to produce methane
Examples: Xanthan, Emulsan, Acinetobacter calcoaceticus

Persistent Organic Pollutants
Persistent Organic Pollutants (POPs) are chemical substances that persist in the environment, bioaccumulate through the food web, and pose a risk of causing adverse effects to human health and the environment. They have the ability to travel great distances: the wind may carry chemicals into a country from places that still use them. Thus POPs can be found all over the world, including in areas where they have never been used and in remote regions such as the middle of the oceans and Antarctica.
Exposure can take place through:
(i) diet
(ii) environmental exposure
(iii) accidents
POP exposure can cause death and illnesses, including disruption of the endocrine, reproductive and immune systems; neurobehavioral disorders; and cancers, possibly including breast cancer. The Stockholm Convention on Persistent Organic Pollutants aims to eliminate or restrict the production and use of POPs.

In 1995, the Governing Council of the United Nations Environment Programme (UNEP) called for global action to be taken on POPs.

Biorobotics
Biorobotics is a term that loosely covers the fields of cybernetics, bionics and even genetic engineering as a collective study. It is often used to refer to a real subfield of robotics: studying how to make robots that emulate or simulate living biological organisms mechanically or even chemically. The field is in its infancy and is sometimes known as synthetic biology or bionanotechnology.

Nanorobotics
Nanorobotics is the technology of creating machines or robots at or close to the scale of a nanometre (10^-9 metres). Nanorobots are also known as nanobots, nanoids, nanites or nanomites.

Bioterrorism
Bioterrorism is terrorism by the intentional release or dissemination of biological agents (bacteria, viruses, or toxins); these may be in a naturally occurring or a human-modified form. Modification can involve increasing their ability to cause disease, making them resistant to current medicines, or increasing their ability to spread into the environment. Biological agents can be spread through the air, through water, or in food. Terrorists may use biological agents because they can be extremely difficult to detect and do not cause illness for several hours to several days. Bioterrorism is an attractive weapon because biological agents are relatively easy and inexpensive to obtain or produce, can be easily disseminated, and can cause widespread fear and panic beyond the actual physical damage they cause.

Bionics
Bionics (biomimetics, bio-inspiration, biognosis, biomimicry, or bionical creativity engineering) is the application of biological methods and systems found in nature to the study and design of engineering systems and modern technology. It refers to the flow of concepts from biology to engineering and vice versa. In medicine, bionics means the replacement or enhancement of organs or other body parts by mechanical versions.

Biomagnification
Biomagnification, also known as bioamplification or biological magnification, is the increase in concentration of a substance, such as the pesticide DDT, that occurs along a food chain.

Bioaccumulation
Bioaccumulation occurs within a trophic level, and is the increase in concentration of a substance in certain tissues of organisms' bodies due to absorption from food and the environment.

Molecular nanotechnology
Molecular nanotechnology (MNT) is technology to design machines at the molecular scale and build them atom by atom. It is based on Richard Feynman's vision of miniature factories using nanomachines. It would combine physical principles demonstrated by chemistry, other nanotechnologies, and the molecular machinery of life.
Potential applications:
(i) smart materials and nanosensors
(ii) replicating nanorobots
(iii) medical nanorobots

(iv) utility fog, in which a cloud of networked microscopic robots would change its shape and properties to form macroscopic objects and tools in accordance with software commands
(v) phased-array optics, which would permit the duplication of virtually any optical effect; users could request holograms, sunrises and sunsets, or floating lasers as the mood strikes
Potential social impacts:
(i) it could elicit a strong public-opinion backlash (as with GM crops and cloning)
(ii) if MNT were realized, some resources would remain limited, because unique physical objects are limited
(iii) there is doubt regarding the feasibility of repairing cells
(iv) it could conceivably enable cheaper and more destructive conventional weapons
(v) there are doubts regarding its ability to form machines at atomic scales

Nano-medicine: the medical application of nanotechnology.
Areas:
(i) nanomaterials
(ii) nanoelectronic biosensors
(iii) MNT
Current problems for nanomedicine involve understanding the issues related to the toxicity and environmental impact of nanoscale materials.
Medical uses:
(i) improving the bioavailability of a drug, i.e., delivering it where and when it is required
(ii) protein and peptide delivery, e.g. the use of quantum dots to treat cancers and tumours
(iii) surgery
(iv) visualisation
(v) neuro-electronic interfaces: the construction of nanodevices that will permit computers to be linked to the nervous system
(vi) nano-robots
(vii) cell repair machines
(viii) nanonephrology, dealing with the kidney

Nanosensors
Nanosensors are any biological, chemical, or surgical sensory points used to convey information about nanoparticles to the macroscopic world.
Uses:
(i) various medicinal purposes, and as gateways to building other nanoproducts, such as computer chips that work at the nanoscale, and nanorobots
(ii) by measuring changes in the properties of cells in a body, nanosensors may be able to distinguish between and recognize certain cells
(iii) delivering medicine, or monitoring development, at specific places in the body
(iv) nanosensor quantum dots could be constructed to find only the particular cells the body is at risk from

Quantum dot (QD): a semiconductor whose excitons are confined in all three spatial dimensions.

Other quantum confined semiconductors include: 1. quantum wires, which confine electrons or holes in two spatial dimensions and allow free propagation in the third. 2. quantum wells, which confine electrons or holes in one dimension and allow free propagation in two dimensions. Preparation of QDs (i) Colloidal synthesis (ii) Fabrication (iii) Viral assembly (iv) Electrochemical assembly (v) Cadmium-free quantum dots - CFQD
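The coloration described below follows from this three-dimensional confinement: in the idealised particle-in-a-box picture, the confinement energy scales as 1/L^2, so smaller dots emit at shorter (bluer) wavelengths. A rough, illustrative Python estimate (free-electron mass assumed; real dots require effective masses and the bulk band gap, so the numbers are only indicative):

# Rough particle-in-a-box estimate of how dot size shifts the emission
# energy (idealised infinite-well model, free-electron mass assumed).
h = 6.626e-34          # Planck constant, J*s
m_e = 9.109e-31        # free-electron mass, kg

def confinement_energy_eV(L_nm):
    L = L_nm * 1e-9
    E = 3 * h**2 / (8 * m_e * L**2)   # ground state of a cubic box, n = (1,1,1)
    return E / 1.602e-19              # joules -> electronvolts

for size in (2, 5, 10):               # dot diameter in nm
    print(size, "nm ->", round(confinement_energy_eV(size), 3), "eV")
# Smaller dots -> larger confinement energy -> bluer emission.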

Optical properties: an immediate optical feature of colloidal quantum dots is their coloration.
Applications:
(i) optical applications, due to their theoretically high quantum yield
(ii) they have a sharper density of states than higher-dimensional structures
(iii) solid-state quantum computation
(iv) superior to traditional organic dyes on several counts
(v) highly sensitive cellular imaging
(vi) in vivo toxicity studies
(vii) photovoltaic devices: nanocrystal solar cells
(viii) LEDs: QD-LED, QD-WLED (white LED)

Photodynamic therapy
Photodynamic therapy (PDT), which matured as a feasible medical technology in the 1980s at several institutions throughout the world, is a third-level treatment for cancer involving three key components: a photosensitizer, light, and tissue oxygen.

Sonodynamic therapy
Sonodynamic therapy is an experimental cancer therapy which uses ultrasound to enhance the cytotoxic effects of drugs known as sonosensitizers. It has been tested in vitro and in animals.

Impalefection
Impalefection is a method of gene delivery using nanomaterials such as carbon nanofibers, nanotubes and nanowires.

Plastic electronics
Plastic electronics is a branch of electronics that deals with devices made from organic polymers, or conductive plastics, as opposed to silicon. The basic substrate will be polyethylene terephthalate, commonly used to manufacture plastic bottles; circuits will then be printed onto these sheets. The highly conductive polymers needed for electronic devices were first discovered in the early 1960s, and they are already used in some electronic devices. The plastic chips will be used as the "control circuits" behind large flexible "electronic paper" displays. These devices, currently being developed and sold by firms such as Panasonic and Sony, can hold the equivalent of thousands of books. It is hoped that one day they will become as common as newspapers and books.

How do these differ from traditional electronic devices? Traditionally, semiconductors have been manufactured from inorganic materials such as silicon, which must be processed at high temperatures in expensive clean-room facilities. In contrast, polymers can be printed using traditional inkjet printers or techniques similar to those used to produce magazines and wallpaper. This means they are cheaper, easier and quicker to produce. As the polymers can be printed onto flexible substrates, they can also be used in totally new types of devices such as electronic paper. Plastic electronics are also more robust than delicate silicon devices.

Will plastic ever replace silicon in microchips? Not at the moment. High-speed computer chips require ultra-pure materials and precision design; computer chips routinely use components which are nanometres (billionths of a metre) in size, whereas present printing techniques for plastic components can only create components that are micrometres in size. However, Plastic Logic (a company) says it is currently working on plastic circuits with components 60 nanometres in size. If these are incorporated into working devices, cheap, flexible electronic chips could be built. One final obstacle could be performance: although plastic devices are suitable for electronic paper displays, for example, the speed requirements of modern chips are very different. Teams are now working on overcoming these limitations.

Are other companies working on developing plastic chips? US firm Lucent, Philips of the Netherlands, Samsung of South Korea and Japan's Hitachi are all interested in developing plastic chips.

Graphene
Graphene, a single layer of graphite, was first isolated in 2004 by Andre Geim and Kostya Novoselov at the University of Manchester, United Kingdom. Graphene is the first isolated 2D nanomaterial. The name is given to a single layer of carbon atoms densely packed into a benzene-ring structure. Graphene is the basic structural element of several carbon allotropes, including graphite, carbon nanotubes and fullerenes. Graphene sheets are one-atom-thick, 2D layers of sp2-bonded carbon and are predicted to have unusual properties: they are not only ultra-thin but ultra-strong, and can be made highly insulating or highly conductive. Graphene is quite stable under ambient conditions.
Graphene is the building block for carbon materials of all other dimensionalities, and is therefore the mother of all graphitic materials: the 2D material can be wrapped up into 0D fullerenes, rolled into 1D nanotubes or stacked into 3D graphite.
Another interesting experimental observation on graphene is the anomalous quantum Hall effect (QHE). The QHE is usually observed at very low temperatures, typically below -243 C. The astonishing observation of the QHE in graphene at room temperature, reported in 2007, opens up new vistas for graphene-based resistance standards and quantum devices.

Potential uses
The Manchester group has already developed a graphene-based gas sensor and electronic devices. In March 2006, Professor Walt de Heer at the Georgia Institute of Technology in the U.S. produced graphene-based transistors, loop devices and electronic circuitry. Graphene's high electrical conductivity and high optical transparency make it a candidate for transparent conducting electrodes, required for applications such as touchscreens, liquid crystal displays, organic photovoltaic cells, and organic light-emitting diodes.
Graphene biodevices: Graphene's modifiable chemistry, large surface area, atomic thickness and molecularly gateable structure make antibody-functionalized graphene sheets excellent candidates for mammalian and microbial detection and diagnosis.
Due to graphene's incredibly high surface-area-to-mass ratio, one potential application is in the conductive plates of ultracapacitors, where graphene could be used to produce ultracapacitors with a greater energy-storage density than is currently available.
Graphene layers less than 10 atoms thick can form the basis for revolutionary electronic systems that would manipulate electrons as waves rather than particles, much as photonic systems control light waves. The anticipation that graphene transistors will give life to electronic devices after the death of silicon may not be an exaggeration after all.

Thermal imaging
Infrared thermography (thermal imaging, thermographic imaging) is a type of infrared imaging science. Thermographic cameras detect radiation in the infrared range of the electromagnetic spectrum (roughly 900-14,000 nanometres) and produce images of that radiation, called thermograms. Since infrared radiation is emitted by all objects according to their temperatures, following the black-body radiation law, thermography makes it possible to "see" one's environment with or without visible illumination. Thermal infrared imagers convert the energy in the infrared wavelengths into a visible-light video display.
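As a quick check on the 900-14,000 nm band quoted above, Wien's displacement law (peak wavelength = b / T) puts the peak emission of a warm body well inside it:

# Wien's displacement law: where does a warm body radiate most strongly?
b = 2.898e-3            # Wien's displacement constant, metre-kelvin
T_skin = 305.0          # approximate human skin temperature, K
peak = b / T_skin       # peak emission wavelength, metres
print(peak * 1e9)       # ~9500 nm, well inside the thermographic band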

When viewed with a thermographic camera, warm objects stand out well against cooler backgrounds; humans and other warm-blooded animals become easily visible against the environment, day or night. As a result, thermography's extensive use can historically be ascribed to the military and security services.
Uses:
(i) used by the police and military for night vision, surveillance and navigation aid
(ii) detection of suspected swine flu cases; governments and airport staff used the technology widely during the 2009 pandemic
(iii) firefighters use it to see through smoke, find persons, and localize the base of a fire
(iv) electrical and mechanical system inspection: it can be used to detect flaws in materials or structures
(v) corrosion damage (metal thinning): thermal imaging can detect material thinning of relatively thin structures, since areas with different thermal masses will absorb and radiate heat at different rates
(vi) some physiological activities, particularly responses, in human beings and other warm-blooded animals can also be monitored with thermographic imaging
Advantages of thermography:
(i) it shows a visual picture, so temperatures over a large area can be compared
(ii) it is capable of catching moving targets in real time
(iii) it is able to find deteriorating (i.e., higher-temperature) components prior to their failure
(iv) it can be used to measure or observe in areas inaccessible or hazardous for other methods
(v) it is a non-destructive test method
(vi) it can be used to see better in dark areas
Limitations of thermography:
(i) images can be difficult to interpret accurately for certain objects, specifically objects with erratic temperatures, although this problem is reduced in active thermal imaging
(ii) accurate temperature measurements are hindered by differing emissivities and reflections from other surfaces
(iii) most cameras have an accuracy of 2% or worse and are not as accurate as contact methods
(iv) it can only directly detect surface temperatures

CT scanning
A CT (computerised tomography) scanner is a special kind of X-ray machine. Instead of sending a single X-ray through your body, as with ordinary X-rays, several beams are sent simultaneously from different angles. The X-rays from the beams are detected after they have passed through the body and their strength is measured. Beams that have passed through less dense tissue, such as the lungs, will be stronger, whereas beams that have passed through denser tissue, such as bone, will be weaker. A computer can use this information to work out the relative density of the tissues examined. Each set of measurements made by the scanner is, in effect, a cross-section through the body. The computer processes the results, displaying them as a two-dimensional picture on a monitor. The technique of CT scanning was developed by the British inventor Sir Godfrey Hounsfield, who was awarded the Nobel Prize for his work.

CT scans are far more detailed than ordinary X-rays. Some modern CT scanners can combine the information from the two-dimensional images to reconstruct three-dimensional images, which can be used to produce virtual views showing what a surgeon would see during an operation. CT scans allow doctors to inspect the inside of the body without having to operate or perform unpleasant examinations. CT scanning has also proven invaluable in pinpointing tumours and planning treatment with radiotherapy.
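A minimal Python sketch of the idea described above (illustrative only, not a real reconstruction algorithm): by the Beer-Lambert law, each beam measures the sum of attenuation coefficients along its path, and with enough beams from different angles the computer can solve for the density map. The 2x2 "body" and the beam geometry here are made up for illustration.

# Toy CT reconstruction: each beam records log(I0/I), the sum of the
# attenuation coefficients of the pixels it crosses; solving the
# resulting linear system recovers the density map.
import numpy as np

true_mu = np.array([0.2, 0.9, 0.4, 0.3])   # unknown 2x2 tissue densities, flattened

A = np.array([                             # which pixels each beam passes through
    [1, 1, 0, 0],   # horizontal beam, row 0
    [0, 0, 1, 1],   # horizontal beam, row 1
    [1, 0, 1, 0],   # vertical beam, column 0
    [0, 1, 0, 1],   # vertical beam, column 1
    [1, 0, 0, 1],   # diagonal beam
    [0, 1, 1, 0],   # anti-diagonal beam
], dtype=float)

measurements = A @ true_mu                       # what the detectors record
recovered, *_ = np.linalg.lstsq(A, measurements, rcond=None)
print(recovered.reshape(2, 2))                   # ~ the original density map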

Synthetic Fuels
Synthetic fuels are liquid or gaseous fuels extracted or fabricated from solid earth materials that are rich in hydrocarbons (compounds containing hydrogen and carbon). Although similar in composition to gasoline, synthetic fuels are not refined from petroleum but are extracted instead from coal, oil shale, tar sands, natural gas, and biomass (plants and plant-derived substances). Synthetic fuels are classified by the feedstock used to create them; by far the three most prominent processes are coal-to-liquids (CTL), gas-to-liquids (GTL) and biomass-to-liquids (BTL). Like petroleum-based fuels, synthetic fuels can be used in a variety of applications in transportation, manufacturing, businesses, and homes. Because producing synthetic fuels is more costly than refining petroleum, however, their use is not widespread. One positive defining characteristic of synthetic-fuel production is the ability to use multiple feedstocks (coal, gas, or biomass) to produce the same product from the same plant.

Production of synthetic fuel: There are numerous processes that can be used to produce synthetic fuels. These broadly fall into three categories: indirect, direct, and biofuel processes.

1. Indirect conversion
Indirect conversion has the widest deployment worldwide, with global production totalling around 260,000 barrels per day. It broadly refers to a process in which biomass, coal, or natural gas is converted to a mix of hydrogen and carbon monoxide known as syngas, either through gasification or steam methane reforming, and that syngas is then processed into a liquid transportation fuel using one of a number of different conversion techniques, depending on the desired end product. The primary technologies that produce synthetic fuel from syngas are Fischer-Tropsch synthesis and the Mobil process (also known as methanol-to-gasoline, or MTG).

2. Direct conversion
Direct conversion refers to processes in which coal or biomass feedstocks are converted directly into intermediate or final products, without the intermediate step of conversion to syngas via gasification. Direct conversion processes can be broadly broken up into two methods: pyrolysis and carbonization, and hydrogenation.
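For reference, the overall Fischer-Tropsch reaction producing straight-chain alkanes from syngas is commonly written as:

(2n+1) H2 + n CO -> CnH(2n+2) + n H2O

where n sets the chain length; diesel-range products correspond to roughly n = 10-20.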

3. Oil sand and oil shale processes
Synthetic crude may also be created by upgrading bitumen (a tar-like substance found in oil sands) or by synthesizing liquid hydrocarbons from oil shale. There are a number of processes for extracting shale oil (synthetic crude oil) from oil shale, by pyrolysis, hydrogenation, or thermal dissolution.

4. Biomass to liquid
The Fischer-Tropsch process can be used to produce synfuels from gasified biomass. Carbonaceous material is gasified and the gas is processed to make purified syngas (a mixture of carbon monoxide and hydrogen). Fischer-Tropsch synthesis then polymerizes the syngas into diesel-range hydrocarbons. While biodiesel and bio-ethanol production so far use only parts of a plant (i.e. oil, sugar, starch or cellulose), BTL production uses the whole plant, which is gasified.

a) Coal liquefaction: Coal liquefaction converts coal into a liquid fuel that is similar in composition to crude petroleum. Several techniques are used. In the first method, called indirect liquefaction, coal is gasified, forming carbon monoxide, hydrogen, and methane. The carbon monoxide and hydrogen are extracted and combined in the presence of a catalyst; this reaction produces liquid fuel. A second technique, called catalytic liquefaction, adds hydrogen gas to solid coal in a high-pressure chamber, and this combination is then heated in the presence of a catalyst. When cooled, the mixture forms a liquid fuel.

b) Gas to liquid: Natural gas can be converted into liquid fuels, including gasoline, by using gas-to-liquids technology, which links methane into larger hydrocarbon molecules. Methane that is joined to form carbon chains or rings can be processed into gasoline, diesel fuel, and jet fuel. Adding steam and oxygen to methane links the methane carbon atoms and produces synthesis gas. This synthesis gas is then combined with hydrogen at high temperatures in the presence of a catalyst. The resulting liquid synthetic fuels are typically clean-burning, high-quality fuels.

Why use GTL: Natural gas is four times more expensive to transport than oil, so converting remote natural gas into a liquid before transport is more cost-effective. Declining GTL production costs, growing worldwide diesel demand, stringent diesel exhaust-emission standards, and fuel specifications are driving the petroleum industry to revisit the GTL process for producing higher-quality diesel fuels. Since the late 1990s, major oil companies including ARCO, BP, ConocoPhillips, ExxonMobil, Statoil, Sasol, Sasol Chevron, Shell, and Texaco have announced plans to build GTL plants to produce the fuel.

c) Biomass to liquid: Liquid fuels such as alcohol, ether, and oil can be produced from plants and plant-derived substances, known collectively as biomass. Biofuels can be synthesized from a variety of plants and grains. For example, soybeans and rapeseed can be processed into a diesel-like fuel, while corn and sugarcane can be fermented into alcohol. Other organic matter, such as wood, paper, and grass, can also be converted into alcohol when certain fermentation-triggering fungi (organisms that decompose organic matter) are added. Biomass alcohol is mixed with gasoline (in a 1:10 alcohol-to-gasoline ratio) in certain urban regions to reduce automobile emissions.

Hythane
Hythane is a blend of 20% hydrogen and 80% CNG (compressed natural gas) by volume. Natural gas is generally about 90+% methane, along with small amounts of ethane, propane, higher hydrocarbons, and "inerts" like carbon dioxide or nitrogen.
Hythane
Hythane is a blend of 20% hydrogen and 80% CNG (compressed natural gas) by volume. Natural gas is generally about 90+% methane, along with small amounts of ethane, propane, higher hydrocarbons, and "inerts" like carbon dioxide or nitrogen. Hythane can be used in any CNG vehicle, though modifications are required to realize the emissions benefits. It is expected to bridge the gap between the fossil-fuelled present and the hydrogen future.
Benefits:

Hydrogen is a powerful combustion stimulant that accelerates methane combustion within an engine, and it is also a powerful reducing agent enabling efficient catalysis at lower exhaust temperatures. The main benefit sought by including hydrogen in the alternative fuels mix is emissions reduction, with the eventual aim of eliminating emissions entirely. Methane has a relatively narrow flammability range, which limits the fuel-efficiency and oxides of nitrogen (NOx) improvements that are possible at lean air/fuel ratios; the addition of even a small amount of hydrogen extends the lean flammability range significantly. Unlike hydrocarbons, which produce carbon dioxide on combustion, hydrogen produces only water vapour, making the blend more eco-friendly. Hydrogen as a fuel yields roughly three times more energy than CNG per unit mass. Like CNG, hydrogen produces a lot of heat and requires a special supply of coolant to keep the temperature under control.
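A rough calculation shows why the blend behaves so much like CNG in energy terms. Taking typical lower heating values of about 10.8 MJ/m3 for hydrogen and 35.8 MJ/m3 for methane (approximate figures assumed here for illustration), the 20%-by-volume hydrogen share carries only about 7% of the blend's energy:

\[ \frac{0.2 \times 10.8}{0.2 \times 10.8 + 0.8 \times 35.8} \approx \frac{2.16}{30.8} \approx 0.07 \]

So hythane delivers almost the same energy per cubic metre as CNG, while the small hydrogen fraction does the work of improving combustion and emissions.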

Disadvantages/limitations: Hydrogen is costly; it costs four times more than CNG and 1.8 times more than petroleum. Hythane can be used in any CNG vehicle, but modifications are required to get the emissions benefits; without tuning the engine, burning hythane in a CNG vehicle can increase NOx levels. Hydrogen is stored in a cylinder at a higher pressure than CNG.
India's efforts: IOC has set up its first commercial hythane filling station in the country at Dwarka in Delhi. This station will make hydrogen-blended compressed natural gas (CNG) available for three-wheelers and cars. The fuel will contain 18 per cent hydrogen and 82 per cent CNG. IOC has stated that the 18 per cent blend was chosen to make use of the existing CNG fuelling infrastructure, to gain experience with the storage and fuelling of hydrogen, and to demonstrate its use for running vehicles. IOC initiated this project to diversify the energy mix of the country, which ultimately contributes to energy security in the future.
Waste-to-energy
Waste-to-energy (WtE) or energy-from-waste (EfW) is the process of creating energy in the form of electricity or heat from the incineration of waste. WtE is a form of energy recovery. Most WtE processes produce electricity directly through combustion, or produce a combustible fuel commodity such as methane, methanol, ethanol or synthetic fuels. The main technology options for setting up waste-to-energy projects include anaerobic digestion/biomethanation; combustion/incineration; pyrolysis; gasification; landfill gas recovery; and densification/pelletization for waste preparation. In India, approximately 30 million tonnes of solid waste and 4,400 million cubic metres of liquid waste are generated every year in the urban areas of the country. Most of the waste generated finds its way into land and water bodies without proper treatment, emitting gases like methane (CH4) and carbon dioxide (CO2), resulting in bad odour, air and water pollution, as well as an increase in the emission of greenhouse gases. This problem can be significantly mitigated through the adoption of environment-friendly waste-to-energy technologies for treating and processing wastes before disposal.
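As an illustration of the biomethanation route named above, anaerobic digestion of organic matter can be summarised by the textbook overall reaction for a simple sugar:

\[ \mathrm{C_6H_{12}O_6} \rightarrow 3\,\mathrm{CO_2} + 3\,\mathrm{CH_4} \]

The resulting methane-rich biogas (typically of the order of 55-65% CH4) is then burned to generate heat or electricity.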

Benefits: It not only reduces the quantity of waste but also improves its quality to meet the required pollution-control standards, besides generating a substantial quantity of energy. Demand for landfill sites is reduced. The cost of transporting waste to landfill sites is reduced. In the biomethanation process, the waste slurry can be used as compost, depending upon the waste composition.
Barriers to the growth of this sector: Segregated municipal solid waste is generally not available, due to a low level of compliance with the MSW Rules, 2000. The cost of projects, especially those based on the biomethanation route, is high.
National Programme
The National Programme on Energy Recovery from Urban & Industrial Wastes, launched during the year 1995-96, has the following objectives:
a) To promote the setting up of projects for recovery of energy from wastes of a renewable nature from the urban and industrial sectors;
b) To create conducive conditions and environment, with a fiscal and financial regime, to develop, demonstrate and disseminate the utilisation of wastes for recovery of energy; and
c) To develop and demonstrate new waste-to-energy technologies through R&D projects and pilot plants.
Clean coal technology
Clean coal technology is an umbrella term for technologies being developed to reduce the environmental impact of coal-based energy generation. These include chemically washing minerals and impurities from the coal; coal gasification; treating the flue gases to remove sulfur dioxide; carbon capture and storage technologies to capture the carbon dioxide from the flue gas; and dewatering lower-rank (brown) coals to improve their calorific quality, and thus the efficiency of conversion into electricity.

Clean coal technology usually addresses atmospheric problems resulting from burning coal. Historically, the primary focus was on sulfur dioxide and particulates, since sulfur dioxide is the most important gas in the causation of acid rain. More recent focus has been on carbon dioxide (due to its impact on global warming), as well as other pollutants. Concerns exist regarding the economic viability of these technologies and the timeframe of delivery, potentially high hidden economic costs in terms of social and environmental damage, and the costs and viability of disposing of the removed carbon and other toxic matter.
The world's first "clean coal" power plant went on-line in September 2008 in Spremberg, Germany. The plant was built by the Swedish firm Vattenfall and is state-owned because of the high costs of this technology: private investors are only willing to invest in other sources such as nuclear, solar and wind. The facility captures CO2 and acid-rain-producing sulfides, separates them, and compresses the CO2 into a liquid state. Plans are to inject the CO2 into depleted natural gas fields or other geological formations. This technology is not considered a final solution for CO2 reduction in the atmosphere, but it provides an achievable near-term solution while more desirable alternatives for power generation are made economically practical.
Gas Hydrate
Gas hydrate is methane gas trapped in a cage of water molecules. Hydrates are ice-like crystals that lie 200 to 800 metres below the sea bed, at very high pressures and very low temperatures. Many gases have molecular sizes suitable for forming hydrates, including such naturally occurring gases as carbon dioxide, hydrogen sulfide, and several low-carbon-number hydrocarbons, but most marine gas hydrates that have been analyzed are methane hydrates.

India's position: India has around 2,000 trillion cubic feet of prognostic reserves of gas hydrates off the country's east coast, and has the thickest gas hydrate deposits in the world. India has so far drilled at about 20 sites, almost all of them off the east coast. Hydrates have been found in the Krishna-Godavari, north-east coast, Mahanadi and Andaman basins. If mined and brought to atmospheric conditions, they yield 160 times their volume of methane, but the technology to mine these hydrates is in its infancy. Global reserves of gas hydrates are estimated to be twice the known oil and gas reserves of the world.
India's steps: The National Gas Hydrate Programme was started in 1997 by the Ministry of Petroleum and Natural Gas, with ONGC, GAIL, DGH, OIL, NIO and the Department of Ocean Development as participating agencies. The programme was conceived by the government to explore for gas hydrates in the Indian deep waters as a future source of unconventional hydrocarbons. In its second phase, the National Gas Hydrate Programme will conduct surveys to map the hydrate pools. Under an Indo-American joint project, the American drill ship 'JOIDES Resolution' carried out drilling and coring operations in the Exclusive Economic Zone (EEZ) of India, opening doors to new areas in the microbiology, geochemistry and sedimentology of gas hydrate-bearing sediments.
Challenges: The production of gas from gas hydrates is itself a major challenge before the scientific community. The basic challenge is to find a suitable technology to first dissociate the gas hydrate, which is present in solid form below the sea bed in deep-sea conditions. Another challenge is to produce the dissociated gas at a commercial rate. At the moment the whole activity is uneconomic.

Enhanced Oil Recovery (Improved Oil Recovery)
Ageing of wells is a perpetual and crucial concern for the global oil industry. Thousands of oil wells lie abandoned: they are either unproductive or yield oil in insignificant quantities. An oil well becomes sick when approximately 30% of the oil in place has been recovered. The reason: the natural gas in the reservoir (responsible for pushing oil up to the mouth of the well) diminishes in quantity and loses pressure because of deep extraction. As a result, the oil flow decreases and eventually stops. These so-called dead or sick wells still have a substantial quantity of oil left in them.
Enhanced Oil Recovery (abbreviated EOR) is a generic term for techniques for increasing the amount of crude oil that can be extracted from an oil field. Using EOR, 30-60% or more of the reservoir's original oil can be extracted, compared with 20-40% using primary and secondary recovery. Enhanced oil recovery is also called improved oil recovery or tertiary recovery (as opposed to primary and secondary recovery). Sometimes the term quaternary recovery is used to refer to more advanced, speculative EOR techniques. Enhanced oil recovery is achieved by gas injection, chemical injection, ultrasonic stimulation, microbial injection, or thermal recovery (which includes cyclic steam, steamflooding, and fireflooding).
Gas injection: Gas injection is presently the most commonly used approach to enhanced recovery. A gas is injected into the oil-bearing stratum under high pressure. That pressure pushes the oil into the pipe and up to the surface. In addition to the beneficial effect of the pressure, this method sometimes aids recovery by reducing the viscosity of the crude oil as the gas mixes with it. Gases commonly used include CO2, natural gas and nitrogen.
Chemical injection: Several possible methods have been proposed. Some successful applications are injections of polymers, which can either reduce the crude's viscosity or increase the viscosity of water that has also been injected to force the crude out of the stratum. Detergent-like surfactants such as rhamnolipids are injected to lower the capillary pressure that impedes oil droplets from moving through a reservoir.
Ultrasonic stimulation: It has been proposed to use high-power ultrasonic vibrations from a piezoelectric vibration unit lowered into the drillhead to "shake" the oil droplets from the rock matrices, allowing them to move more freely toward the drillhead. This technique is projected to be most effective immediately around the drillhead.
Thermal recovery: In this approach, various methods are used to heat the crude oil, either during its flow upward in the drillhead or in the pool, allowing it to flow more easily toward the drillhead.
Adding oil recovery methods adds to the cost of oil; in the case of CO2 injection, typically between US$0.5 and US$8.0 per tonne of CO2. The increased extraction of oil, on the other hand, is an economic benefit, with the revenue depending on prevailing oil prices.
Microbially enhanced oil recovery
Conventional methods of recovery are extremely expensive, and costs can vary from 140,000 to 200,000 dollars per well. As oil reserves dry up globally, the depth of wells increases and temperatures inside the reservoirs rise (varying between 80 °C and 120 °C); conventional methods then prove ineffective and the task becomes more challenging.
Mechanism: The MEOR (microbially enhanced oil recovery) mechanism of extracting oil from less productive wells has solved an age-old problem that perplexed the oil industry the world over. Also called the huff-puff method of oil recovery (it involves injecting microbes and then sucking up oil), it extracts over three times as much oil as conventional processes. After these microbes are injected into an oil well, they take close to a fortnight to do their job. To understand what happens, one needs to visualize rock with pores, much like a honeycomb. Oil, being viscous, is trapped in these pores. The microbes produce carbon dioxide and methane, gases that enter the pores and squeeze out the oil. They also produce bio-surfactants (detergent-like compounds) that reduce the tension between oil and the rock surface and help release the oil. The reaction of these microbes in oil also releases alcohol and volatile fatty acids. The alcohol reduces the viscosity of the oil, making it light enough to flow out; the fatty acids solubilize the rock surface and thus push oil off it. The MEOR process thus offers the advantages of conventional methods of oil recovery plus the added strengths of the microbes.
Beneficiaries: Wells of the ONGC (Oil and Natural Gas Corporation) in Gujarat have been revived and are functioning again. The MEOR technology, when applied in 25 oil wells of ONGC, extracted 4,500 cubic metres of oil from one of the sick wells, translating into revenues of more than 675,000 dollars. Its cost-effectiveness and environment-friendly nature have generated interest among oil firms in the Middle East and other oil-producing countries.
Applications/benefits: Oil recovered through microbial technology has helped bring down the cost of oil substantially. Depending upon the nature of recovery, the price of each barrel of oil can decrease by as much as 35-40%. Moreover, the environment-friendly nature of this form of oil recovery gives it an edge over other conventional methods. The possibilities that these invisible, living organisms offer are immense and still not fully tapped.
Underground coal gasification (in-situ gasification)
Underground coal gasification (UCG) is an industrial process which enables coal to be converted into product gas. UCG is an in-situ gasification process carried out in non-mined coal seams by injecting oxidants and bringing the product gas to the surface through production wells drilled from the surface. The product gas can be used as a chemical feedstock or as fuel for power generation. The technique can be applied to resources that are otherwise uneconomical to extract, and it also offers an alternative to conventional coal mining methods for some resources. Compared to traditional coal mining and gasification, UCG has less environmental and social impact. The major environmental advantages of in-situ gasification are the minimal disturbance of the surface and the possibility of using the underground cavities left after gasification to sequester carbon dioxide arising from burning the product gases in a combined cycle power plant. UCG is being studied in a number of coal-rich countries such as Australia, the United Kingdom, Spain, China and Russia, but the process is still at the experimental stage.
Process: The basic underground coal gasification process consists of one injection well drilled into the unmined coal seam for injection of the oxidants, and a production well to bring the product gas to the surface. The coal seam is ignited via the first well and burns at temperatures as high as 1,500 K (about 1,230 °C), generating carbon dioxide (CO2), hydrogen (H2), carbon monoxide (CO) and small quantities of methane (CH4) and hydrogen sulphide (H2S) at high pressure. As the coal face burns and the immediate area is depleted, the operator controls the injected oxidants, with the objective of guiding the burn along the seam. UCG product gas is optimally used to fire combined cycle gas turbine (CCGT) power plants. Underground coal gasification allows access to more coal resources than are economically recoverable by traditional technologies.
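The product gas composition reported above comes from a handful of classic gasification reactions taking place in the burning seam (a simplified sketch; actual proportions depend on pressure, temperature and the oxidant used):

\[ \mathrm{C} + \mathrm{O_2} \rightarrow \mathrm{CO_2} \quad \text{(combustion)} \]
\[ \mathrm{C} + \mathrm{CO_2} \rightarrow 2\,\mathrm{CO} \quad \text{(Boudouard reaction)} \]
\[ \mathrm{C} + \mathrm{H_2O} \rightarrow \mathrm{CO} + \mathrm{H_2} \quad \text{(water-gas reaction)} \]
\[ \mathrm{CO} + \mathrm{H_2O} \rightarrow \mathrm{CO_2} + \mathrm{H_2} \quad \text{(water-gas shift)} \]

The first reaction supplies the heat that drives the other three, which generate the CO and H2 that make the product gas useful as a fuel or chemical feedstock.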
UCG product gas can also be used for: Synthesis of liquid fuels at a predicted cost equivalent to US$17/bbl; Manufacture of chemicals such as ammonia and fertilizers; Enhanced oil recovery (EOR).

In the roles listed above, UCG product gas replaces natural gas and can provide substantial cost savings. Additional cost savings can be made over traditional coal mining and the associated coal transport, because the UCG process produces syngas that can be piped directly to the end-user, reducing the need for rail/road infrastructure, and lowers the cost of environmental clean-up, since the solid waste remains confined underground.

Social impacts of UCG
Because UCG involves no mining, a number of social benefits follow. Firstly, the risk of injury or death to workers is significantly reduced, since they no longer need to enter a mine. Secondly, as the impact on the environment is greatly reduced, local communities are spared the detrimental impacts associated with conventional mining.

Mobile TV
Mobile TV is a service which allows mobile phone owners to watch television on their phones, delivered by a service provider via mobile telecommunications networks. Television data can be obtained either through an existing cellular network or through a proprietary network. Mobile TV over cellular networks allows viewers to enjoy personalized, interactive TV, with content specifically adapted to the mobile medium. Subscribers can watch both live shows and pre-recorded programmes over mobile TV. Live TV is transmitted through DVB-H (digital video broadcasting-handheld) and DMB (digital multimedia broadcasting), while pre-recorded shows can be watched in a manner similar to video-on-demand services.
The services and viewing experience of mobile TV over cellular networks differ in a variety of ways from traditional TV viewing. In addition to mobility, mobile TV delivers a variety of services including video-on-demand, traditional/linear and live TV programmes. Another opportunity for users is mobile TV podcasts, where content is delivered to a user's mobile on demand or by subscription. Stored locally on the handset, this content can then be viewed even when there is no network connection, and a service provider can schedule the delivery for off-peak hours, for example during the night.
Technically, there are currently two main ways of delivering mobile TV. The first is via a two-way cellular network and the second is through a one-way dedicated broadcast network, such as digital video broadcasting-handheld (DVB-H) or digital multimedia broadcasting (DMB). Neither is ideal, as all the options have drawbacks of one kind or another: the spectral frequencies used or needed, the signal strength required, new antennas and towers, the network capacity required, or the business model. Using the 3G network is the fastest and easiest way to get mobile TV off the ground. It allows the quick start an operator needs to grab the initiative and develop relationships with both customers and content providers. There is more than enough capacity in 3G networks to scale up for a mass market of mobile TV services, particularly if an operator has High Speed Packet Access (HSPA), as this provides for several steps of capacity increases.

Present status: Of the 120-plus commercially launched mobile TV services worldwide, more than 90% are based on existing two-way cellular networks, using unicast. With unicast, content is transmitted separately from a single source to a single destination, such as from a server to a mobile handset; that is how each individual can get the content they want. With broadcast, the same content is delivered to a very large number of mobile handsets in a single transmission. The digital broadcast flavour of mobile TV is still in its infancy worldwide: only South Korea has introduced public services based on DMB, while DVB-H trials have begun in some European countries.
Challenges
1. Device manufacturers' challenges:
High power consumption.
Memory: current memory capacities will not suit the high buffer requirements of long hours of mobile TV viewing.
User interface design: a large number of mobile phones do not support mobile TV; users have to purchase new handsets with improved LCD displays and user interfaces that support mobile TV.
Processing power: device manufacturers need to improve processing power significantly to support a MIPS-intensive application like mobile TV.

2. Content providers' challenges: The mobile TV industry opens up a new market for content specifically tailored for mobile TV. This could include making new "mobisodes" (mobile episodes of popular shows, relatively short in length at 3 to 5 minutes) and modifying content to suit mobile TV. Providers need to think of innovative ways of editing content, such as increasing close-up shots for clarity on the small screen.
Status in India: In India, Nokia has launched DVB-H enabled devices with which Delhi users can see eight Doordarshan channels. Every mobile operator in India provides this service, but it is very costly right now. An auction of spectrum for mobile TV services has recently been recommended by TRAI.
Wireless Application Protocol
Wireless Application Protocol (commonly referred to as WAP) is an open international standard for application-layer network communications in a wireless communication environment. Its main use is to enable access to the Mobile Web from a mobile phone or PDA. A WAP browser provides all of the basic services of a computer-based web browser, but simplified to operate within the restrictions of a mobile phone, such as its smaller view screen. WAP sites are websites written in, or dynamically converted to, WML (Wireless Markup Language) and accessed via the WAP browser. Before the introduction of WAP, service providers had extremely limited opportunities to offer interactive data services. Interactive data applications are required to support now-commonplace activities such as:
* Email by mobile phone
* Tracking of stock market prices
* Sports results
* News headlines
* Music downloads
The Japanese i-mode system is another major competing wireless data protocol.

E-waste disposal
Your old computers are piling up around your neighbourhood. Unless the govt enacts e-waste disposal laws fast, things could get alarming, finds Harsimran Singh.
A day in the life of an e-waste recycler: 14-year-old Ram Kumar wakes up at 4 a.m. He starts his search for broken mobile phones, keyboards and CPU cabinets. By 9 a.m., his brown gunny bag is brimming with electronic junk from the stream of sewage and garbage dumps lining a nallah in Patparganj, East Delhi. Broken mobile phones, cathode ray tubes, radiators, mangled printed circuit boards and smashed refrigerator parts: the e-waste bulges out of the gunny bag. He offloads his electronic knick-knacks daily at the nearest assembler's for Rs 100 a bag. On a lucky day, a PC motherboard fetches him Rs 30. As the sun sets on the banks of the nallah, the assemblers ship all the e-junk to Seelampur, where the e-waste recyclers live near the slums. Scores of concrete bath tubs, filled with lead acid, line up in the area. The recyclers dump all the e-waste into the acid baths overnight. As dawn breaks on the banks of the Yamuna, metal scrap from the circuit boards melts away and settles at the bottom of the acid bath. As the acid loses its corrosiveness after 4 to 5 uses, the bath tubs are drained into the Yamuna. From the river, poisonous metals and chemicals find their way into the ground water, from where they reach your drinking water supply.
About 3.3 lakh tonnes of e-waste was generated in 2007, which was dumped into the rivers, nallahs, landfills and sewage drains of the country. An additional 50,000 metric tonnes was illegally imported into the country. While the chemicals seep into the ground water, the e-waste (like junk refrigerator bodies, compressors from air conditioners and waste plastic used to make phones) just keeps piling up. Around residential areas, just off city limits, these dumps are growing. Of the 3.3 lakh MT, only 19,000 MT of the annual e-waste is recycled every year. This is due to the high rate of refurbishing and reuse of electronic products in the country, and also due to poor recycling infrastructure. "E-waste is going to be one of the major problems facing the world after climate change and poverty," says Nokia India MD D. Shivakumar. "At Nokia, we have realised this and have started a programme under which anybody can walk into a Nokia priority care store and put any mobile phone in a box. We then collect all this e-waste and get it recycled via authorised recyclers," he adds. Globally, Nokia has collection points for recycling used mobile phones and accessories across 5,000 Nokia care centres in 85 countries, and engages in collection campaigns with retailers, operators, other manufacturers and local authorities around the world. Nokia's proactive approach has made it the top electronics major in Greenpeace India's Annual Guide to Greener Electronics 2008. In India, Nokia has installed take-back bins in more than 600 care centres across the country, with a free gift for people depositing their old mobile phones. According to a MAIT report, e-waste from discarded computers, television sets and mobile phones is projected to grow to more than 800,000 MT by 2012, at a growth rate of 15% in the country.
THE PILING PROBLEM
Despite the growing concern over the issue, India does not have legislation to mandate authorised recycling of e-waste. "If the situation is not controlled then we may see large landfills of junk e-waste lying in and outside cities 10 years down the line," says Vinnie Mehta, executive director, MAIT, the industry body for electronics and hardware, who has been instrumental in bringing the issue to the government's notice.
He is pushing for legislation with mandatory guidelines for the recycling of e-waste. India has a 27 million PC installed base, 130 million TVs and 380 million mobile phones. The active life of a mobile phone is two years; for a PC it is three years; and for TV sets and fridges it is more than 10 years, as the technology is much more stable. And while mobile phone and computer parts are fairly easy to recycle, recycling junk from TVs and refrigerators is difficult. While PCs are growing at the rate of 8 million additions per annum, mobile phones are growing at the rate of 100 million additions per year.

Go to any large IT company today and you will find warehouses with hundreds of old monitors, CPUs and keyboards lying on top of each other. Companies such as Infosys and TCS, which employ over 80,000 employees each and have about a lakh PCs apiece, cannot figure out what to do with the e-junk. Laws don't permit a sell-off of assets in STP units. Thus, most companies donate the PCs to foundations, NGOs and schools. From there it generally lands in the unauthorised recycling market. Many companies have an end-of-life take-back policy in place, though many don't have clear-cut policies on what they do with their e-waste.
THE GREENPEACE CAMPAIGN
Expressing deep concern over the problem, a Greenpeace India official says that most consumer electronics companies have been slow in getting serious about climate change. Despite much green marketing, many brands, including all Indian brands, still show little engagement with the issue. Motorola, Dell, Apple, Lenovo, Samsung and LG Electronics are notably lagging behind, with no plans to cut absolute emissions from their own operations and no support for the targets and timelines needed to avoid catastrophic climate change. Among Indian brands, Zenith and PCS Technology are yet to address this issue, whereas not much commitment is forthcoming from HCL and Wipro. "These huge companies could make a big difference by doing their part to avoid a climate crisis and asking their governments to do the same," says the Greenpeace India official. But companies disagree with this point of view. HCL, the largest domestic IT hardware company, says it is adopting policies whereby it facilitates consumers to ensure that all end-of-life products manufactured by HCL will be recycled/disposed of in an environmentally safe manner. Says George Paul, executive vice president, HCL Infosystems: "HCL extends the recycling facility to all HCL users regardless of when and where they purchased the product. HCL facilitates its consumers to ensure that all end-of-life products manufactured by HCL shall be recycled/disposed of in an environmentally safe manner." As part of an exchange offer, HCL donates customers' old PCs to NGOs. But Greenpeace India toxics campaigner Abhishek Pratap says that it is unfortunate that most Indian companies lack the systems to implement their policy commitment to make products that are toxin-free, easy to recycle and energy-efficient. In its Greener Electronics India Ranking 2008, Greenpeace has dropped Hewlett-Packard to 13th place for failing to operationalise the principle of individual producer responsibility and for its weak voluntary take-back programme, which is mainly oriented towards business rather than individual customers. But HP disagrees. The company says that this year it announced the expansion of its product return and recycling programme to enterprise customers in India. HP has offered to take back end-of-life HP and non-HP computing equipment like personal computers, laptops, computer monitors, handhelds, notebooks, servers, printers, scanners and fax machines, as well as associated external components such as cables, mice and keyboards, from consumers. "Customers are integral to our commitment to the environment," says Jean-Claude Vanderstraeten, director, environmental management, HP Asia Pacific & Japan. "The number of PCs, servers, print cartridges and other electronics reaching the end of their usable life is growing rapidly.
Plastics and metals recovered from products recycled by HP have been used in new HP products, as well as in a range of other uses, including auto body parts, clothes hangers, plastic toys, fence posts, serving trays and roof tiles," he adds. HP's proactive approach towards product reuse and recycling helps to divert material from landfill to environmentally sound recycling. Customers can now simply follow a four-step process to participate in this programme. Meanwhile, in Greenpeace's annual rankings, Dell has dropped from 5th place in 2005 to 12th position in 2008, albeit with the same score.

The NGO says that Dell loses points for withdrawing from its commitment to eliminate all PVC plastic and brominated flame retardants (BFRs) by the end of 2009. But the world's second-largest computer company disagrees. "Dell has a global recycling policy in place that offers consumers free recycling for any Dell-branded product at any time, and even free recycling for other branded products with the purchase of new Dell equipment. We also offer value-added services to businesses and institutions for recycling of excess IT equipment. We partner with product recycling vendors to manage the recycling process," says a Dell spokesperson.
LACK OF PUNITIVE LAWS
A study by GTZ, an international cooperation enterprise for sustainable development, reveals some significant findings about Indian businesses. Though a lot of business organisations are aware of e-waste, knowledge of proper disposal is lacking. "The lack of holistic knowledge about the problem is the reason for 94% of the organisations not having the relevant IT disposal policy," the report says. The problem, according to sources, is the lack of will on the part of the Ministry of Environment and Forests to enforce new mandatory guidelines for electronic waste. A general guideline for the disposal of normal waste already exists, though no separate law exists for e-waste. The electronics companies are willing to help in the enforcement of guidelines. But their basic contention is that it will make the cost of a PC or mobile phone go up if bought from an authorised dealer. Unauthorised players and sellers of electronic equipment should also be made a part of the guidelines, they feel, or they may start losing market share to the grey market. Mehta says that the situation could assume alarming proportions, and therefore it is high time we paid serious attention to the issue of e-waste and took corrective action to contain the problem. It is essential that the electronics industry encourages the reuse of obsolete electronic items by suitably refurbishing them and providing the necessary service support. Further, institutional users must mandatorily put in place a policy on e-waste management and the disposal of obsolete electronic equipment.
THE SOLUTION
While a guideline for handling hazardous waste already exists in the country, and registration of recyclers is mandatory, no such mandatory registration exists for e-waste, which is different from chemical or fertiliser waste in that the source is not always a single company: a huge, complex value chain and distribution network is involved in the handling of an electronics item. For all this to happen, the government has to define the roles of each stakeholder, including the vendors, the users, the recyclers and the regulator, for environmentally friendly recycling. The informal recyclers should also be included in this model, and severe penalties should be levied for violation of these norms. Otherwise it may go the same way as the disposal of normal waste in the country. The industry is ready, and so are the citizens, who are willing to incur a cost for recycling. But the government's apathy is making the e-garbage pile up and making your air, food and water more poisonous every day. The ball is now in the government's court.
TOP TEN E-WASTE GENERATING STATES: Maharashtra, Tamil Nadu, Andhra Pradesh, Uttar Pradesh, West Bengal, Delhi, Karnataka, Gujarat, MP, Punjab
TOP TEN E-WASTE GENERATING CITIES: Mumbai, Delhi, Bangalore, Chennai, Kolkata, Ahmedabad, Hyderabad, Pune, Surat, Nagpur
HOW GREEN IS YOUR GIZMO: Greenpeace rankings, Guide to Greener Electronics 2008
1. NOKIA: Creative take-back scheme
2. SONY ERICSSON: New environmental warranty
3. TOSHIBA: Reporting use of renewable energy
4. SAMSUNG: Good on toxic chemicals, poor recycling
5. FUJITSU SIEMENS: Good on energy, poor e-waste
6. LG: Improved score on recycling & energy
7. MOTOROLA: Improved on energy & recycling
8. SONY: Has room for improvement on energy
9. PANASONIC: Good energy, poor e-waste criteria
10. SHARP: Reporting of energy efficiency of products weak
11. ACER: To improve on cutting toxicity & recycling
12. DELL: Loses on withdrawing from its commitment to eliminate PVC plastic & BFRs by end-2009
13. HP: Still needs to improve on e-waste
14. APPLE: Now reporting product carbon footprint; new iPods are free of PVC and BFRs
15. PHILIPS: Scores well on toxics and energy but scores zero on other e-waste criteria
16. LENOVO: Scores well on toxic chemicals, poor on recycling & energy
17. MICROSOFT: Poor score on recycling & energy
18. NINTENDO: Zero on most criteria except chemicals management & energy
Source: Greenpeace

Future Fast Reactor
Baldev Raj, Director, IGCAR, has been elected chairman of a seven-country international project to define a future fast reactor with closed nuclear fuel cycle (FR with CNFC) that will contribute to the generation of 300 GWe to 500 GWe of nuclear power by 2050. This futuristic reactor will meet seven specific requirements: safety, economy, non-proliferation, technology, environmental concerns, waste management and infrastructure. The seven countries are India, Russia, China, France, Japan, South Korea and Ukraine; the U.S. and Canada are likely to join the project. The initiative is under the auspices of the International Project on Innovative Nuclear Reactors and Fuel Cycles, called INPRO, of the International Atomic Energy Agency (IAEA). The reactor will have a capacity of 1,000 MWe; the fuel it will use is to be identified soon. Research and development on the reactor will be done by some countries individually and by others in a collaborative mode. Representatives of the seven member countries have met three times, at Obninsk (Russia), Vienna (Austria) and Kalpakkam, to discuss the specifications of the new reactor; the Kalpakkam meeting was held in March 2006. INPRO has prepared a manual listing the scientific methodology for this reference reactor, and the reactor will be assessed jointly as per this manual.
Why such reactors: The cost of uranium, an important fuel for nuclear power reactors, has gone up three-fold in the past 10 years, and uranium resources are limited worldwide. So only FRs with CNFC will provide sustainability: given the world scenario of nuclear power, it has been concluded that FRs with CNFC are an inevitable option if a large amount of energy is to be provided at a reasonable cost and with less waste.

The project to define the characteristics of the new reactor is called the Joint Assessment Study on FRs with CNFC. (Unlike Fast Breeder Reactors (FBRs), FRs will not breed fuel from the fuel they use. A closed nuclear fuel cycle means mastering the technology of reprocessing the spent fuel from the reactors.) There is worldwide interest in FRs and FBRs. FBRs are operational in Russia. In India, the IGCAR has designed the 13 MWe Fast Breeder Test Reactor (FBTR), operational at Kalpakkam, and the 500 MWe Prototype Fast Breeder Reactor, under construction at Kalpakkam. India plans to build four more 500 MWe FBRs before 2020. France has the Phenix breeder reactor, Japan has two breeders, and China is building an experimental breeder reactor.

Nuclear Energy
What are the achievements and failures of the Department of Atomic Energy in the last 60 years?
We have a large, capable human resource pool of scientists and technologists. This, I think, is a very important achievement. The second important achievement is that our programme, on the basis of self-reliance, has demonstrated that we can take our R&D efforts, carried out in our laboratories, to a commercial scale of excellence in the marketplace. The third achievement is that the first stage of India's nuclear power programme, presently consisting of 12 Pressurized Heavy Water Reactors (PHWRs), is completely in the industrial domain. It will grow under its own steam. Lastly, as a result of the consolidation of the entire work done in the last 50 years, we now have a clearly defined roadmap for future R&D and its commercialisation. In terms of failures, I will not call them failures, but we did see several challenges. For example, embargoes have been a major challenge. Embargoes have not deterred us from making progress and, in fact, they have made our self-reliance that much more robust. Obviously, the dimensions of our programme would have been bigger if we had been able to do things at a much faster pace. We have different technologies for various applications: nuclear energy applications in agriculture, health, food security and so on. While we have done this, we have also contributed towards nuclear weapons capability in the country. India today is a country with nuclear weapons to ensure its long-term security. At the same time, we have the domestic capability to guarantee long-term energy security in a manner that will help in preserving the environment and avoiding the adverse impact of climate change.
How important are the fast-breeder reactors in ensuring India's energy security?
Fast-breeder reactors are more important to India than to other countries which have capabilities in nuclear power technology. This is because of the nuclear resource profile we have in the country. Our uranium reserves, as per the present state of exploration, will be able to support 10,000 MWe of generating capacity, which is not large.

But it is the starting point for setting up fast reactors. When the same uranium, which will support 10,000 MWe of generating capacity in the PHWRs, comes out as spent fuel, and we process that spent fuel into plutonium and residual uranium and use it in the fast reactors, we will be able to go to an electricity capacity as large as 5,00,000 MWe. This is due to the breeding potential of the fast reactors using the plutonium-uranium cycle. That is the importance of the fast-breeder reactors under Indian conditions, compared to other countries.
Production of Atomic Energy using Thorium
India has formulated a three-stage nuclear power programme to optimally use its modest uranium and vast thorium resources. Large-scale thorium utilization is contemplated in the third stage of this programme, where uranium-233, bred in the Fast Breeder Reactors of the second stage, will be used together with thorium. The government has taken a number of steps to develop appropriate technologies for the utilization of thorium. A few of the major steps are:
1) Setting up the research reactor Kamini at Kalpakkam, using uranium-233 fuel obtained from irradiated thorium. The reactor has been operating since 1997; its fuel is bred, reprocessed and fabricated indigenously.
2) Irradiation of thorium fuel bundles in the research reactors at Trombay and in Pressurised Heavy Water Reactors (PHWRs).
3) Design and development of the Advanced Heavy Water Reactor (AHWR) using thorium-based fuel. This reactor will serve as a technology demonstrator.
Around 2,00,000 GW-yr of electricity potential exists in India using domestic thorium through the route of breeder technology.
Radiometric dating
Radiometric dating (often called radioactive dating) is a technique used to date materials, usually based on a comparison between the observed abundance of a naturally occurring radioactive isotope and its decay products, using known decay rates. It is the principal source of information about the absolute age of rocks and other geological features, including the age of the Earth itself, and can be used to date a wide range of natural and man-made materials. Radiometric dating has been carried out since 1905, when it was pioneered by Ernest Rutherford as a method by which one might determine the age of the Earth. Dating can now be performed on samples as small as a billionth of a gram using a mass spectrometer. Different methods of radiometric dating vary in the timescale over which they are accurate and the materials to which they can be applied. Among the best-known techniques are radiocarbon dating, potassium-argon dating and uranium-lead dating.
Uses: By allowing the establishment of geological timescales, radiometric dating provides a significant source of information about the ages of fossils and the deduced rates of evolutionary change. It is also used to date archaeological materials, including ancient artifacts.
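The age follows from the radioactive decay law. A minimal worked form, assuming the sample started with no daughter atoms and has remained a closed system:

\[ N(t) = N_0\, e^{-\lambda t}, \qquad \lambda = \frac{\ln 2}{t_{1/2}} \]
\[ t = \frac{1}{\lambda}\,\ln\!\left(1 + \frac{D}{N}\right) \]

Here N is the number of parent atoms remaining, D the number of daughter atoms measured, and t_{1/2} the half-life of the parent isotope (about 5,730 years for carbon-14, for example). Measuring the D/N ratio in a sample therefore fixes its age.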

Generation IV Reactors
Generation IV reactors (Gen IV) are a set of theoretical nuclear reactor designs currently being researched. Most of these designs are generally not expected to be available for commercial construction before 2030, with the exception of a version of the Very High Temperature Reactor (VHTR) called the Next Generation Nuclear Plant (NGNP), which is to be completed by 2021. Research into these reactor types was officially started by the Generation IV International Forum (GIF), established in 2001, based on eight technology goals. The primary goals are to improve nuclear safety, improve proliferation resistance, minimize waste and natural resource utilization, and decrease the cost of building and running such plants. Current reactors in operation around the world are generally considered second- or third-generation systems.
Advantages: Relative to current nuclear power plant technology, the claimed benefits of 4th-generation reactors include:
i) Nuclear waste that lasts decades instead of millennia.
ii) 100-300 times more energy yield from the same amount of nuclear fuel.
iii) The ability to consume existing nuclear waste in the production of electricity.
VHTR: The very-high-temperature reactor is the next step in the evolutionary development of high-temperature reactors. The VHTR is a helium-gas-cooled, graphite-moderated, thermal-neutron-spectrum reactor with a core outlet temperature greater than 900 °C, and a goal of 1,000 °C, sufficient to support the production of hydrogen by thermo-chemical processes. The reference reactor thermal power is set at a level that allows passive decay heat removal, currently estimated to be about 600 MWth. The VHTR is primarily dedicated to the cogeneration of electricity and hydrogen, as well as to other process heat applications. It can produce hydrogen from water by using thermo-chemical, electro-chemical or hybrid processes with reduced emission of CO2. At first, a once-through LEU (<20% U-235) fuel cycle will be adopted, but a closed fuel cycle will be assessed, as well as potential symbiotic fuel cycles with other types of reactors (especially light-water reactors) for waste reduction.
Food Irradiation
Food irradiation is the process of exposing food to ionizing radiation to destroy microorganisms, bacteria, viruses or insects that might be present in the food. The principal effect of processing food by ionizing radiation is damage to DNA, the basic genetic information for life. Micro-organisms can no longer proliferate and continue their malignant or pathogenic activities; spoilage-causing micro-organisms cannot continue their activities; insects do not survive, or become incapable of proliferation; and plants cannot continue the natural ripening or ageing process. Irradiation is known as a cold process: it does not significantly increase the temperature or change the physical or sensory characteristics of most foods, and during irradiation the energy waves affect unwanted organisms but are not retained in the food. Two things are needed for the irradiation process: a source of radiant energy, and a way to confine that energy. For food irradiation, the sources are radioisotopes (radioactive materials) and machines that produce high-energy beams. Three types of radiation are mainly used:
- Gamma radiation (cobalt-60 is the main isotope used)
- X-ray radiation
- Electron radiation
Benefits: Food is irradiated mainly to eliminate or reduce harmful bacteria and insects that cause spoilage, and to increase shelf life by delaying ripening or inhibiting sprouting in the case of fruits and vegetables. Irradiation, however, does not obviate the need for proper food-handling practices.

Irradiation can also replace potentially harmful chemical fumigants when used to eliminate insects from dried grain, legumes, spices, dried nuts, etc. While irradiation for sterilizing medical products has been in use for more than 30 years in India, its application to certain food items was first approved in 1994. India is among the 40-odd countries that allow food irradiation, and the Department of Atomic Energy (DAE) is popularizing the technology, including for Indian farmers. Apart from two demonstration facilities run by the DAE, there are now six private facilities for irradiating medical and food products. An agreement was concluded recently between the Indian and U.S. governments on the use of irradiation before export, to rid mangoes from India of pests and to delay their ripening. Irradiation is also used for non-food items, such as medical hardware, plastics, tubes for gas pipelines, hoses for floor-heating, shrink-foils for food packaging, automobile parts, wires and cables (insulation), tyres, and even gemstones.
Apprehensions: It is sometimes assumed that food products may become radioactive after irradiation. It has been well documented that irradiating food with gamma rays from cobalt-60 or cesium-137 does not induce radioactivity, nor does electron radiation of energy up to 10 MeV. Other apprehensions are that food irradiation might:
- be used to mask spoiled food,
- discourage strict adherence to Good Manufacturing Practices,
- preferentially kill 'good' bacteria and encourage the growth of 'bad' bacteria,
- devitalise and denature irradiated food,
- impair the flavour, and
- not destroy bacterial toxins already present.
India-based Neutrino Observatory (INO)
The India-based Neutrino Observatory (INO) is a proposed particle physics research project, primarily to study atmospheric neutrinos in a cave 1,300 metres (4,265 ft) deep under INO Peak near Masinagudi, Tamil Nadu. The Neutrino Collaboration Group (NCG) has finalised the location for the Rs 500 crore observatory at Udagamandalam (Ooty) in Tamil Nadu. The project is a multi-institute collaboration and one of the biggest experimental particle physics projects undertaken in India. A group of scientists and engineers spread over 25 scientific research institutions and universities in India is actively involved in the creation of INO; it is a unique basic-science collaboration in the country. It has been approved for funding by the Department of Atomic Energy and the Department of Science and Technology, and has been included by the Planning Commission as a mega science project under the Eleventh Five-Year Plan. A huge cavern of size 120 m x 25 m x 30 m will be dug under the Nilgiri mountains, 1.3 km below the peak, and will be accessed through a horizontal tunnel more than 2 km in length. A gigantic magnetised detector weighing 50,000 tonnes will be constructed inside this cavern and will be used to detect and study neutrinos. In the beginning, this detector will be used to study the neutrinos produced by cosmic rays.
Why research on neutrinos is important: Neutrinos are part of the set of elementary particles which form the basic constituents of matter in nature. They fill the Universe in abundance but are very elusive: they are very light (almost massless), have no electric charge, and hardly interact with matter. Trillions of neutrinos pass through our bodies every second without affecting us. They are also among the least understood particles. Very important discoveries have been made recently in neutrino physics and neutrino astronomy.

One of the most important discoveries of the last decade is that neutrinos have mass. Until this discovery, it was thought that neutrinos are massless particles like photons, the quanta of light. This has led to active planning of many more neutrino laboratories around the world, especially considering that a considerable part of neutrino physics is yet to be discovered.
The primary goals of the INO are the following: The primary goal of the project is to study the properties and interactions of weakly interacting, naturally occurring particles called neutrinos. Neutrinos hold the key to several important and fundamental questions about the origin of the universe and energy production in the Sun and other stars. INO could also be used in studying geosciences, material applications, the monitoring of nuclear tests, and the biological activities of microbes. The project is notable in that it is anticipated to provide a precise measurement of neutrino mixing parameters, and to allow the study of charge-conjugation and charge-parity (CP) violation in the leptonic sector, as well as possible charge-conjugation, parity, time-reversal (CPT) violation.
Some of the exciting applications of neutrino technology will be these: Since neutrinos are the most penetrating radiation known to mankind (a typical neutrino can travel through a million Earth diameters of matter without getting stopped), neutrino beams will be the ultimate tools for the tomography of the Earth. A new window on geophysics opened a few years ago when a neutrino detector in Japan detected geoneutrinos emitted by radioactive uranium and thorium buried in the bowels of the Earth. This opens the possibility of mapping the whole Earth as far as its radioactive content is concerned.
Space Based Telescopes
NASA's series of Great Observatories satellites are four large, powerful space-based telescopes. Each of the Great Observatories had a similar size and cost at programme outset, and each has made a substantial contribution to astronomy. The four missions each examined a region of the electromagnetic spectrum to which it was particularly suited.
Great Observatories
a) The Hubble Space Telescope (HST) primarily observes visible light and the near-ultraviolet; a 1997 servicing mission added capability in the near-infrared range. The HST is a telescope with a 2.4-metre-diameter mirror that has been orbiting the Earth since 1990. Because there is no atmosphere at this altitude and no interference from terrestrial light sources, it can take very sharp photographs of moons, planets, stars, gas and dust nebulae, and distant star systems.
b) The Chandra X-ray Observatory (CXO), initially named the Advanced X-ray Astrophysics Facility (AXAF), primarily observes soft X-rays. It has been in Earth orbit since July 1999. Free of atmospheric absorption at this altitude, it can take images in X-rays. Particularly interesting targets are black holes in the cores of galaxies, as well as quasars, pulsars and supernova remnants, objects that make themselves recognizable by emitting intense X-radiation.

c) The Compton Gamma Ray Observatory (CGRO) primarily observed gamma rays, though it extended into hard X-rays as well. It was launched in 1991 aboard the Space Shuttle Atlantis during STS-37 and was deorbited in 2000 after the failure of a gyroscope.
d) The Spitzer Space Telescope (SST), called the Space Infrared Telescope Facility (SIRTF) before launch, observes the infrared spectrum. It was launched in 2003 aboard a Delta II rocket.
GAMMA-RAY LARGE AREA SPACE TELESCOPE (GLAST)
GLAST was designed to probe the most violent events and exotic objects in the cosmos, from gamma-ray bursts to black holes and beyond. The launch was scheduled for May 16, 2008. With GLAST, astronomers will at long last have a superior tool to study how black holes, notorious for pulling matter in, can accelerate jets of gas outward at fantastic speeds. Physicists will be able to study subatomic particles at energies far greater than those seen in ground-based particle accelerators, and cosmologists will gain valuable information about the birth and early evolution of the Universe.
SOHO IN ORBIT AROUND THE SUN
Our largest and best interplanetary weather station is SOHO, an unmanned observatory in orbit around the Sun. The station observes the Sun and interplanetary space. SOHO (Solar & Heliospheric Observatory) was launched in 1995 as a joint project of NASA and ESA. It carries an ultraviolet telescope and many other instruments for detecting all kinds of radiation and particles.
HERSCHEL SPACE OBSERVATORY
This observatory is the largest infrared space observatory yet launched; it was launched in 2009 by ESA. Herschel will observe at wavelengths that have never previously been explored, collecting long-wavelength infrared radiation from some of the coolest and most distant objects in the Universe, and will be the only space observatory to cover the spectral range from the far-infrared to sub-millimetre wavelengths. The questions to which Herschel will seek answers include how galaxies formed and evolved in the early Universe, and how stars form and evolve and interrelate with the interstellar medium. Herschel will also investigate the chemistry of our Galaxy and the molecular chemistry of planetary, cometary and satellite atmospheres in the Solar System.

IPv4 and IPv6


Internet Protocol is a set of technical rules that defines how computers communicate over a network. There are currently two versions: IP version 4 (IPv4) and IP version 6 (IPv6). IPv4 was the first version of Internet Protocol to be widely used, and it accounts for most of today's Internet traffic. There are just over 4 billion IPv4 addresses; while that is a lot of IP addresses, it is not enough to last forever. IPv6 is a newer numbering system that provides a much larger address pool than IPv4. It was deployed in 1999 and should meet the world's IP addressing needs well into the future. The technical functioning of the Internet remains the same with both versions, and it is likely that both versions will continue to operate simultaneously on networks well into the future. To date, most networks that use IPv6 support both IPv4 and IPv6 addresses.

                      IPv4                         IPv6
Deployed              1981                         1999
Address size          32-bit number                128-bit number
Address format        dotted decimal notation      hexadecimal notation
                      (192.149.192.77)             (3FFE:F200:0234:AB00:0123:4567:8901:ABCD)
Prefix notation       192.149.0.0/24               3FFE:F200:0234::/48
Number of addresses   2^32 = ~4,294,967,296        2^128 = ~340,282,366,920,938,463,463,374,607,431,768,211,456

http://www.thehindu.com/sci-tech/technology/internet/centre-unveils-ipv6-roadmap/article4551252.ece
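The difference in address space is easy to verify with Python's standard-library ipaddress module (a minimal sketch using the example addresses from the table above):

```python
import ipaddress

# Parse one address of each version (examples from the table above)
v4 = ipaddress.ip_address("192.149.192.77")
v6 = ipaddress.ip_address("3FFE:F200:0234:AB00:0123:4567:8901:ABCD")
print(v4.version, v6.version)  # 4 6

# Total address space: 2^32 vs 2^128
print(2 ** 32)   # 4294967296
print(2 ** 128)  # 340282366920938463463374607431768211456

# Prefix notation describes a block of addresses
net4 = ipaddress.ip_network("192.149.0.0/24")
net6 = ipaddress.ip_network("3FFE:F200:0234::/48")
print(net4.num_addresses)  # 256
print(net6.num_addresses)  # 2**80 addresses in a single IPv6 /48
```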
