Friday, 3 June 2016

COGNIZANT IS HIRING FRESHERS FOR PROGRAMMER TRAINEE - APPLY NOW

About Company: Cognizant Technology Solutions Corp is an American multinational corporation that provides custom information technology, consulting, and business process outsourcing services. It is headquartered in Teaneck, New Jersey, United States. Over two-thirds of its employees are based in India. Cognizant is listed in the NASDAQ-100 and the S&P 500 indices. Originally founded as an in-house technology unit of Dun & Bradstreet in 1994, Cognizant started serving external clients in 1996.
It made an IPO in 1998, after a series of corporate splits and restructurings of its parent companies. It was the first software services firm listed on the Nasdaq. During the dot-com bust, it grew by accepting the application maintenance work that the bigger players were unwilling to perform.
Job Details:
Education: B.E/B.Tech

Experience: Freshers

Job Location: Bangalore, Chennai, Coimbatore, Gurgaon, Hyderabad, Kochi/Cochin, Kolkata, Pune

Venue Location: Cochin, Coimbatore, Chennai, Bangalore, Hyderabad, Pune, Delhi & Kolkata

Job Description:

Programmer Analyst Trainee
2015 batch of B.E/B.Tech belonging to CSE / IT / ECE / EEE / EIE / E&E / Applied Electronics / Computer & Technology / Electrical / ETE / ICE / Software Engineering (full-time/regular courses only)

Min 60% in UG (for universities in other states except Andhra Pradesh, Telangana, Karnataka, Kerala, Pondicherry & Tamil Nadu)

No standing arrears at the time of the test/interview process


Selection Process

Online Registration

Aptitude Test

Technical Interview

MISYS COMPANY IS HIRING FRESHERS FOR SOFTWARE ENGINEER - APPLY NOW

About Company: The company was founded in 1979 as a computer systems supplier to UK insurance brokers.
The purchase of multiple established players in respective markets by Misys under the leadership of Kevin Lomax (Chairman and/or CEO 1985-2006) allowed Misys to evolve at various times into a software supplier to the US healthcare industry, to banks (worldwide) and to fund managers worldwide. Many of the companies acquired and consolidated by Misys were themselves products of previous mergers.
In 1987 Misys shares were first traded on the Unlisted Securities Market. It was admitted to the Main List of the London Stock Exchange in 1989, leveraging the initial successes of the purchases, cross-selling to an expanded client base and economies of scale arising from consolidation.
In 1994 Misys entered the banking software space by purchasing Kapiti Ltd. In 1995 Misys purchased ACT (which included BIS and Kindle), thus rapidly giving Misys control of 3 of the 4 biggest-selling core banking packages at the time. At the time of purchase, Midas had the biggest installed base of any "off-the-shelf" banking software package.

Misys Company is Hiring Freshers For Software Engineer
Qualification: BE, B.Tech, Any Graduate
Experience: Fresher
Salary: Not Disclosed
Job Location: Bangalore
Job Role: Associate Software Engineer
• Team player with the ability to question the status quo.
• Professional demeanor.
• Builds quality products with a "do it right the first time" mindset.
• Ability to build enterprise software, preferably in a distributed team using Agile/Scrum development methodologies.
• Excellent problem-solving skills.

HONEYWELL COMPANY IS HIRING EXPERIENCED CANDIDATES - APPLY NOW

Honeywell International, Inc. is an American multinational conglomerate company that produces a variety of commercial and consumer products, engineering services and aerospace systems for a wide variety of customers, from private consumers to major corporations and governments. The company operates three Strategic Business Units: Aerospace, Automation and Control Solutions (ACS), and Performance Materials and Technologies.

Honeywell Company is Hiring for Software Engineer

Qualification: BE, B.Tech, MCA

Experience: 2 yrs

Description
• Responsible for software development, configuration, testing and support for Terminal Automation software.
• Understanding of P&ID, logic/interlock diagrams, and logic flow charts.
• Inspection of field devices and Dell servers; installation, cable laying, and termination.
• Providing support for generating Functional Design Specifications, including user interface/database design.
• Communicates with internal customers to understand requirements and other contractual communications.
• Participates in software and hardware Factory Acceptance Tests and Site Acceptance Tests of terminal automation projects.
• Design document and code reviews.
• Startup and commissioning of the terminal automation system.

BMW COMPANY HUGE RECRUITMENT FOR FRESHERS FOR VARIOUS POSITIONS

BMW (Bayerische Motoren Werke) is one of the top motor vehicle companies in the world. BMW was established as a business entity following a restructuring of the Rapp Motorenwerke aircraft manufacturing firm in 1917. After the end of World War I in 1918, BMW was forced to cease aircraft-engine production by the terms of the Versailles Armistice Treaty. The company consequently shifted to motorcycle production as the restrictions of the treaty started to be lifted in 1923, followed by automobiles in 1928-29.

BMW's first significant aircraft engine, and commercial product of any sort, was the BMW IIIa inline-six liquid-cooled engine of 1918, known for good fuel economy and high-altitude performance. With German rearmament in the 1930s, the company again began producing aircraft engines for the Luftwaffe. The factory in Munich made ample use of forced labour: foreign civilians, prisoners of war and inmates of the Dachau concentration camp.
BMW urgently wants fresher/experienced candidates:

Qualification: Any Degree/P.G/B.Tech/M.B.A/M.Tech/M.Sc

No. of Posts: 6738+

Salary: 4.5 - 7.8 lakhs

Last Date: 26/06/2016

Apply Mode: Online

Apply Now

TECH MAHINDRA HUGE RECRUITMENT-2016 FOR FRESHERS APPLY NOW

About Company: Tech Mahindra Limited is an Indian multinational provider of information technology (IT), networking technology solutions and Business Process Outsourcing (BPO) to the telecommunications industry. It is a specialist in digital transformation, consulting and business re-engineering solutions. Anand Mahindra is the founder of Tech Mahindra, which is headquartered at Pune, India.
Tech Mahindra announced the completion of Mahindra Satyam's merger with itself, creating the nation's fifth-largest software services company with a turnover of USD 2.7 billion. Tech Mahindra got the approval from the Registrar of Companies for the merger late in the night, at 11:45 pm on June 24, 2013. July 5, 2013 was set as the record date on which Satyam Computer Services ('Mahindra Satyam') shares would be swapped for Tech Mahindra shares under the scheme, which was approved by both boards.

Qualification: Degree/B.Tech/M.B.A/M.Tech/M.Sc/P.G

No. of Posts: 1140+

Salary: 4.2 - 10.5 lakhs per year

Last Date: 14-06-2016

Apply Mode: Online

Apply Now

Tuesday, 15 December 2015

Lemon Leaves project

1. INTRODUCTION

1.1  Nanoscience and Nanotechnology
In recent years, nanotechnology has driven great scientific advancement in research and technology. Nanotechnology is the study and application of very small objects and can be used across all fields such as chemistry, biology, physics, materials science and engineering. A nanoparticle is a core particle which behaves as a whole unit in terms of transport and properties. As the name indicates, nano means a billionth, or 10⁻⁹, of a unit. Nanoparticle sizes usually range from 1-100 nm; owing to this small size, nanoparticles occupy a position across the fields of nanoscience and nanotechnology. Nano-sized particles are quite unique in nature because the small size increases the surface-to-volume ratio, and their physical, chemical and biological properties differ from those of the bulk material. The main aim in studying this minute scale is thus to exploit the chemical activity that distinct crystallography and increased surface area can trigger. In recent years much research has therefore focused on metallic nanoparticles and their properties, from catalysis and sensing to optics, antibacterial activity and data storage. The concept of nanotechnology emerged in the twentieth century. In 1959, Richard Feynman first gave a talk on the concept of nanotechnology, describing molecular machines built with atomic precision; he discussed nanoparticles in the lecture entitled "There's Plenty of Room at the Bottom". Professor Peter Paul Speiser and his research group were the first to investigate polyacrylic beads for oral administration and to target microcapsules. In the 1960s, nanoparticles were developed for drug delivery and vaccine purposes, which changed the medicinal scenario. A paper published in 1981 by K. Eric Drexler of the Space Systems Laboratory, Massachusetts Institute of Technology, was titled "An approach to the development of general capabilities for molecular manipulation". The term "nanotechnology" was first used for a scientific field by Norio Taniguchi in 1974; his paper "Nanotechnology" mainly concerned the processing of materials by separation, consolidation and deformation, one atom or one molecule at a time.
Nanotechnology is a fast-growing, interdisciplinary area of science and technology that increases the scope for investigating and regulating, at the cellular level, the interface between synthetic materials and biological systems. Nanotechnology proceeds by three processes: separation, consolidation and deformation of materials by one atom or molecule. It is divided into three types: wet nanotechnology, which deals with biological systems such as enzymes, membranes and cellular components; dry nanotechnology, which deals with surface science and physical chemistry and emphasizes the fabrication of structures in carbon, silicon and inorganic materials; and computational nanotechnology, which deals with modeling and simulating complex nanometer-scale structures. These three fields are interdependent. There are two broad methods of synthesizing metallic nanoparticles: chemical and physical. The chemical approach includes chemical reduction, electrochemical techniques and photochemical reduction. The chemical process is further subdivided into the classical chemical method, where a chemical reducing agent (such as hydrazine, sodium borohydride or hydrogen) is used, and the radiation-chemical method, driven by ionizing radiation. The physical approach includes condensation, evaporation and laser ablation for metal nanoparticle synthesis. The biological synthesis of nanoparticles is a challenging concept well known as green synthesis. Biological synthesis of nanomaterials can address environmental challenges in areas such as solar energy conversion, agricultural production, catalysis, electronics, optics and biotechnology. Green synthesis of nanoparticles is cost effective, easily accessible, eco-friendly, nontoxic and scalable, with the biological material acting as both reducing and capping agent; by comparison, the chemical method is costly and emits hazardous by-products that can have deleterious effects on the environment. Biological synthesis utilizes naturally occurring reducing agents such as plant extracts, microorganisms, enzymes and polysaccharides, which are simple and viable alternatives to complex and toxic chemical processes. Plants can be described as nanofactories that provide a potential pathway for bioaccumulation into the food chain and environment. Among the different biological agents, plants provide a safe and beneficial route to the synthesis of metallic nanoparticles: they are easily available, large-scale production is possible, the synthesis route is eco-friendly, and the rate of production is faster than in other biological models such as bacteria, algae and fungi. From the literature it can be stated that the amount of nanoparticle accumulation varies with the reduction potential of the ions, and that the reducing capacity of a plant depends on the presence of various polyphenols and other heterocyclic compounds. Nanoparticle formation by plants has been reported for gold, silver, copper, silicon, zinc, titanium, magnetite and palladium. Colloidal zinc nanoparticles exhibit distinct properties such as catalytic activity, antibacterial action, good conductivity and chemical stability. Silver nanoparticles have applications in biolabeling, sensors, antimicrobials, catalysis, electronics and other medical applications such as drug delivery and disease diagnosis.
Nanotechnology is a rapidly developing field because of its vast array of applications in medical science, technology and other research areas. The word 'nano' originates from a Greek word meaning extremely small, or dwarf. The basic concepts behind nanoscience and nanotechnology were laid out in a talk entitled "There's Plenty of Room at the Bottom" by the physicist Richard Feynman at the California Institute of Technology (Caltech) on December 29, 1959. The term "nanotechnology" was later coined by Professor Norio Taniguchi, building on Feynman's explorations of ultra-precision machining. In 1974, Taniguchi introduced the term in a talk on production engineering at the Tokyo International Conference. He focused on ultra-precision machining; his research was based on mechanisms for machining hard and brittle materials, such as ceramics of silicon, alumina and quartz crystals, by ultrasonic machining. Feynman had suggested that the scanning electron microscope could be improved in resolution and stability to the point where anyone could "see" the atoms. The scanning tunneling microscope, developed in 1981, enables the detection and identification of individual atoms, and modern nanotechnology began with it. Feynman also identified the ability to arrange atoms, within the bounds of chemical stability, into minute structures, which in turn would lead to the synthesis of materials at the molecular and atomic levels.
The study of nanoscience and nanotechnology provides well-developed applications of exceptionally miniature things and is capable of advancing all fields of scientific research and development, including physics, chemistry, materials and metallurgical engineering, biology and biotechnology. Bio-nanotechnology is the conjunction of biotechnology and nanotechnology for developing biosynthetic and environmentally friendly technologies for the synthesis of nanomaterials. Research on the synthesis of nanoparticles has generated great interest in this emerging field of science because nanoparticles have physical and chemical properties distinct from those of macroscopic particles. The development of novel synthesis protocols and characterization techniques is evidence of the rapid advancement of nanotechnology. A brief and general definition of nanotechnology by the US National Science and Technology Council states: "The essence of nanotechnology has the capability to work at the molecular level, atom by atom, for the creation of large structures with essentially innovative molecular organization. The aim is to exploit these properties by gaining control of structures and devices at atomic, molecular, and supramolecular levels and to become skilled at well-organized manufacture and use of these devices." The United States National Science Foundation (USNSF) defines nanoscience or nanotechnology as the study of materials and systems with the following key properties:
1. At least one dimension must be in the range of 1-100 nm.
2. The process can be designed with methodologies that show elementary control over the physical and chemical properties of molecular-scale structures.
3. Smaller building blocks combine to form larger structures. From a microbiological standpoint, nanoscience is relevant because the sizes of many bioparticles, such as bacteria, viruses and enzymes, fall within the nanometer range.
Nanotechnology has the capability, at atomic precision, to make materials, instruments and systems. It is the science and engineering of a recently well-established technology operating at the nanoscale, concerning sizes of about 1 to 100 nanometers. Nanotechnology can also be defined as the study and investigation of the synthesis, characterization, exploration and application of nanosized materials (1-100 nm in size), which will be valuable and functional for the development of science. Recent advances in this emerging field enable the preparation of highly ordered nanoparticles of different sizes and shapes, which has led to the development of new biocidal agents. The prefix nano means a factor of one billionth (10⁻⁹) and can be applied, for example, to time (nanosecond), volume (nanoliter), weight (nanogram) or length (nanometer or nm). In common use "nano" refers to length, and the nanoscale spans from the atomic level at around 1 nm up to 100 nm. Nanotechnology is useful in techniques for diagnostics, drug delivery, sunscreens, antimicrobial bandages and disinfectants, and in friendlier manufacturing processes that reduce waste; it ultimately points to atomically precise molecular manufacturing with minimal waste, serves as a catalyst for greater efficiency in present manufacturing by minimizing or eliminating the use of toxic materials, reduces water and air pollution, and supports alternative energy production such as solar and fuel cells. A goal of nanotechnology is to close the size gap between the smallest lithographically fabricated structures and chemically synthesized large molecules.
            The successful use of nanotechnology in the food industry can take a number of forms, including packaging materials. It is useful for developing antimicrobial packaging for food products. Nanoparticles dispersed throughout the plastic can block oxygen, carbon dioxide and moisture from reaching fresh meats or other foods. Packaging can thus help control microbial growth in foodstuffs, which otherwise leads to spoilage or, in the case of a range of pathogenic microorganisms, to the spread of infection and disease. Nanoparticles are also being investigated for potential use as fungicides in agriculture, as anticancer drugs, and for imaging in biomedical applications. An additional advantage of nanobiotechnology is the development of consistent and more reliable processes for the synthesis of nanomaterials across a range of sizes with superior monodispersity and controlled chemical composition.

1.2. Nanomaterials
            The development of reliable experimental protocols for the synthesis of nanomaterials over a range of chemical compositions, sizes and high monodispersity is one of the most challenging issues in current nanotechnology. The term 'nanomaterial' is now frequently used for conventional materials that are consciously and deliberately engineered into nanostructures in modern applications of nanotechnology. Nanomaterials are forms of matter at the nanoscale. Generally, nanomaterials are regarded as the infrastructure, or building blocks, for nanotechnology. The building blocks for nanomaterials include carbon-based components and organics, semiconductors, metals and metal oxides. Nanomaterials with structural features at the nanoscale can be found in the form of clusters, thin films, multilayers and nanocrystalline materials, often classified by dimensionality as 0D, 1D, 2D and 3D. The materials include metals, amorphous and crystalline alloys, semiconductors, and oxide, nitride and carbide ceramics in the form of clusters, thin films, multilayers and bulk nanocrystalline materials.
Fig.1.1. Classification of nanomaterials according to the dimension
Table 1. Classification based on dimensionality

Dimension    Examples of nanomaterials
0D           Colloids, nanoparticles, nanodots, nanoclusters
1D           Nanowires, nanotubes, nanobelts, nanorods
2D           Quantum wells, superlattices, membranes
3D           Nanocomposites, filamentary composites, cellular materials, porous materials, hybrids, nanocrystal arrays, block polymers


The familiar nanomaterial 'carbon black' entered industrial production over a century ago. Other early nanomaterials are fumed silica (a form of silicon dioxide, SiO2), titanium dioxide (TiO2) and zinc oxide (ZnO). Nanomaterials can have different properties at the nanoscale, as established by quantum effects. Some nanomaterials have better thermal and electrical conductivity, different magnetic properties or different light reflection, and may even change color as their size changes. These properties differ somewhat from those of the bulk materials. Nanomaterials also have larger surface areas than similar volumes of larger-scale materials, which means more surface is available for interactions with other materials around them. Nanomaterials confined to nanocrystalline sizes below 100 nm can show atom-like behaviors. This results from their higher surface energy, due to their large surface area, and a wider band gap between the valence and conduction bands, which appears when they are reduced to near-atomic size. Nanomaterials have been called "a wonder of modern medicine": an antibiotic kills perhaps half a dozen different disease-causing organisms, whereas some nanomaterials are reported to kill around 650. Nanomaterials are being enthusiastically researched for specific functions such as microbial growth inhibition and as carriers of antibiotics and killing agents.

1.2.1 Metal oxide nanoparticles
Nanoparticles produced by chemical and physical methods often show poor morphology. Moreover, these processes generally use toxic chemicals and the elevated temperatures helpful for synthesis, and are thus harmful to our environment. Metal oxides have also served as sorbents for various environmental pollutants. The biological methods of synthesis are more favorable than the chemical and physical methods since they are eco-friendly. Biological synthesis of nanoparticles using various microorganisms, enzymes, plants and their extracts has been suggested as a probable and promising eco-friendly alternative to chemical and physical synthesis. The biocompatibility of nanoparticles is essential for specific biomedical applications and research.

Metal nanoparticles have various functions not observed in the bulk phase. They have already been studied broadly due to their exclusive electronic, magnetic, optical, catalytic and antimicrobial properties, as well as their wound-healing and anti-inflammatory properties. Metal nanoparticles show surface plasmon resonance absorption in the UV-visible region. Over the past few decades, inorganic nanoparticles have been found to exhibit appreciable, novel and highly improved physical, chemical and biological properties, with well-recognized functions arising from their nanoscale size. Recent studies report that nanoparticles of some materials, including metal oxides, can induce cell death in eukaryotic cells and growth inhibition in prokaryotic cells because of their cytotoxicity. Nanostructured transition metal oxides and semiconductors with nanometer dimensions attract interest in physics, chemistry, materials and metallurgical engineering, biotechnology, information technology and environmental science, along with their respective technologies, in several aspects of development and research. Metal oxide nanoparticles are used as major components in specific applications such as sensors, pigments and various medical materials. The dispersion of metal oxide nanoparticles in physiological solutions is also important for biological in vitro and in vivo studies.

1.3 Lemon Leaves:
The lemon leaves are dark green in color and arranged alternately on the stem. What appears to be a slender second leaf (called a wing) is present on the petiole.Lime leaves are small, pale green, and oblong in shape.

Fig: Lemon leaves

Common name: Lemon tree
Scientific name: Citrus limonum Risso, Citrus limon (L.) Burm.
Family: Citrus family (Rutaceae)
Habitat: Cultivated for its fruits and as a garden tree in warm Mediterranean places next to the sea. It probably descends from the species "Citrus medica L.", native to India.
Characteristics: Perennial tree of the citrus family (Rutaceae), up to 3 m tall. Leaves toothed, elliptical or lanceolate, pointed. Flowers white inside, rosy at the margin of the petals. The fruit is a hesperidium up to 12.5 cm wide, with a thick rind, dark yellow when fully ripe.


1.3.1 Active components:
The main components are:
Flavonoids: hesperidoside, limocitrin (in the pericarp of Spanish lemons)
Acids: ascorbic (vitamin C), citric; caffeic (fruit)
Essential oil: rich in isopulegol, alpha-bergamotene, alpha-pinene, alpha-terpinene, alpha-thujene, beta-bisabolene, beta-bergamotene, beta-phellandrene, citral, limonene and sabinene (in the fruit, especially in lemons from California)
Caffeine (leaves, flowers)
Pectin
Minerals: potassium and calcium

1.4 Silver nitrate:
Silver nitrate is an inorganic compound with chemical formula AgNO3. This compound is a versatile precursor to many other silver compounds, such as those used in photography. It is far less sensitive to light than the halides. It was once called lunar caustic because silver was called luna by the ancient alchemists, who believed that silver was associated with the moon.


Silver nitrate can be prepared by reacting silver, such as silver bullion or silver foil, with nitric acid, resulting in silver nitrate, water, and oxides of nitrogen. The reaction byproducts depend upon the concentration of nitric acid used.
3 Ag + 4 HNO3 (cold and diluted) → 3 AgNO3 + 2 H2O + NO
Ag + 2 HNO3 (hot and concentrated) → AgNO3 + H2O + NO2
This is performed under a fume hood because of toxic nitrogen oxide(s) evolved during the reaction.
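As a quick arithmetic check on the first (cold, dilute) reaction, the sketch below, an illustration rather than part of any source protocol, estimates the theoretical silver nitrate yield from a given mass of silver using standard molar masses:

    M_AG = 107.87     # molar mass of Ag, g/mol
    M_AGNO3 = 169.87  # molar mass of AgNO3, g/mol

    def agno3_yield_g(mass_ag_g: float) -> float:
        """Theoretical AgNO3 mass from a mass of Ag (3 Ag -> 3 AgNO3, i.e. 1:1)."""
        return mass_ag_g / M_AG * M_AGNO3

    print(f"{agno3_yield_g(10.0):.2f} g AgNO3 from 10.00 g Ag")  # about 15.75 g

Since the reaction converts silver to silver nitrate in a 1:1 mole ratio, 10 g of silver corresponds to roughly 15.75 g of AgNO3 at 100% yield.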

1.4.1 USES

 

Precursor to other silver compounds

Silver nitrate is the least expensive salt of silver; it offers several other advantages as well. It is non-hygroscopic, in contrast to silver fluoroborate and silver perchlorate. It is relatively stable to light. Finally, it dissolves in numerous solvents, including water. The nitrate can be easily replaced by other ligands, rendering AgNO3 versatile. Treatment with solutions of halide ions gives a precipitate of AgX (X = Cl, Br, I). When making photographic film, silver nitrate is treated with halide salts of sodium or potassium to form insoluble silver halide in situ in photographic gelatin, which is then applied to strips of tri-acetate or polyester. Similarly, silver nitrate is used to prepare some silver-based explosives, such as the fulminate, azide, or acetylide, through a precipitation reaction.
Treatment of silver nitrate with base gives dark grey silver oxide:
2 AgNO3 + 2 NaOH → Ag2O + 2 NaNO3 + H2O

Halide abstraction

The silver cation, Ag+, reacts quickly with halide sources to produce the insoluble silver halide, which is a cream precipitate if Br− is used, a white precipitate if Cl− is used, and a yellow precipitate if I− is used. This reaction is commonly used in inorganic chemistry to abstract halides:
Ag+(aq) + X−(aq) → AgX(s)
where X− = Cl−, Br−, or I−.
Other silver salts with non-coordinating anions, namely silver tetrafluoroborate and silver hexafluorophosphate are used for more demanding applications.
Similarly, this reaction is used in analytical chemistry to confirm the presence of chloride, bromide, or iodide ions, which can be tested for by adding silver nitrate solution. Samples are typically acidified with dilute nitric acid to remove interfering ions, e.g. carbonate ions and sulfide ions. This step avoids confusing silver sulfide or silver carbonate precipitates with silver halides. The color of the precipitate varies with the halide: white (silver chloride), pale yellow/cream (silver bromide), yellow (silver iodide). AgBr and especially AgI photo-decompose to the metal, as evidenced by a grayish color on exposed samples.
The same reaction is used on board ships in order to determine whether or not boiler feedwater has been contaminated with seawater. It is also used to determine if moisture on formerly dry cargo is a result of condensation from humid air, or from seawater leaking through the hull.[13]

Organic synthesis

Silver nitrate is used in many ways in organic synthesis, e.g. for deprotection and oxidations. Ag+ binds alkenes reversibly, and silver nitrate has been used to separate mixtures of alkenes by selective absorption. The resulting adduct can be decomposed with ammonia to release the free alkene.[14]

Biology

In histology, silver nitrate is used for silver staining, for demonstrating reticular fibers, proteins and nucleic acids. For this reason it is also used to demonstrate proteins in PAGE gels. It can be used as a stain in scanning electron microscopy.



1.5 Agar
Agar is derived from the polysaccharide agarose, which forms the supporting structure in the cell walls of certain species of algae, and which is released on boiling. These algae are known as agarophytes and belong to the Rhodophyta (red algae) phylum. Agar is actually the resulting mixture of two components: the linear polysaccharide agarose, and a heterogeneous mixture of smaller molecules called agaropectin.



1.5.1 Agar is used:
• As an impression material in dentistry.
• To make salt bridges for use in electrochemistry.
• In formicariums, as a transparent substitute for sand and a source of nutrition.
• As a natural ingredient to form modelling clay for young children to play with.
• Gelidium agar is used primarily for bacteriological plates; Gracilaria agar is used mainly in food applications.


1.6 Dextrose
Dextrose is the name of a simple sugar chemically identical to glucose (blood sugar) that is made from corn. While dextrose is used in baking products as a sweetener, it also has medical purposes. Dextrose is dissolved in solutions that are given intravenously, which can be combined with other drugs, or used to increase a person’s blood sugar. Dextrose is also available as an oral gel or tablet. Because dextrose is a “simple” sugar, the body can quickly use it for energy.

Dextrose is used in various concentrations for different purposes. For example, a doctor may prescribe dextrose in an IV solution when someone is dehydrated and has low blood sugar. Dextrose IV solutions can also be combined with many drugs, for IV administration. These solutions may be used to reduce the sodium level in the blood. The extra dextrose in a person’s body can cause sodium to go into the cells, reducing the amount in the bloodstream.
Dextrose is a carbohydrate, which is one part of nutrition in a normal diet. Solutions containing dextrose provide calories and may be given intravenously in combination with amino acids and fats. This is called total parenteral nutrition (TPN) and is used to provide nutrition to those who can’t eat normally.

1.7 Potato dextrose agar:
Potato dextrose agar and potato dextrose broth are common microbiological growth media made from potato infusion and dextrose. Potato dextrose agar (abbreviated "PDA") is the most widely used medium for growing fungi and bacteria.
Figure: Potato Dextrose Agar
1.7.1. Required components for PDA:

Ingredient                              Amount
Water                                   1000 g (~1 liter)
Potatoes (sliced, washed, unpeeled)     200 g
Dextrose                                20 g
Agar powder                             20 g
Potato infusion can be made by boiling 200 grams of sliced (washed but unpeeled) potatoes in about 1 liter (0.22 imp gal; 0.26 US gal) of distilled water for 30 minutes and then decanting or straining the broth through cheesecloth. Distilled water is added so that the total volume of the suspension is 1 liter. 20 grams (0.71 oz) of dextrose and 20 grams (0.71 oz) of agar powder are then added, and the medium is sterilized by autoclaving at 15 pounds per square inch for 15 minutes.
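As a small illustration of the arithmetic above (not part of the standard formulation itself), this sketch scales the per-litre PDA recipe to an arbitrary batch volume:

    def pda_recipe(volume_ml: float = 1000.0) -> dict:
        """Scale the per-litre PDA formulation to a target batch volume."""
        per_litre = {"potatoes_g": 200.0, "dextrose_g": 20.0, "agar_g": 20.0}
        factor = volume_ml / 1000.0
        return {name: grams * factor for name, grams in per_litre.items()}

    print(pda_recipe(250.0))  # {'potatoes_g': 50.0, 'dextrose_g': 5.0, 'agar_g': 5.0}
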
A similar growth medium, Potato dextrose broth (abbreviated "PDB") is formulated identically to PDA, omitting the agar. Common organisms that can be cultured on PDB are yeasts such as Candida albicans and Saccharomyces cerevisiae and molds such as Aspergillus niger.
1.8 Centrifugation
Centrifugation is a process that uses centrifugal force to sediment heterogeneous mixtures with a centrifuge; it is used in industry and in laboratory settings. This process can also separate two immiscible liquids. More-dense components of the mixture migrate away from the axis of the centrifuge, while less-dense components migrate towards the axis. Chemists and biologists may thus increase the effective gravitational force on a test tube so as to more rapidly and completely cause the precipitate ("pellet") to gather at the bottom of the tube. The remaining solution is properly called the "supernate" or "supernatant liquid". The supernatant liquid is then either quickly decanted from the tube without disturbing the precipitate, or withdrawn with a Pasteur pipette.
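
The pelleting behaviour described above depends on the relative centrifugal force (RCF). The following sketch uses the standard textbook relation RCF = 1.118 × 10⁻⁵ × r × N² (r = rotor radius in cm, N = speed in rpm), which is not given in the source, to convert rotor speed into multiples of g:

    def rcf(radius_cm: float, rpm: float) -> float:
        """Relative centrifugal force in multiples of g: 1.118e-5 * r * N^2."""
        return 1.118e-5 * radius_cm * rpm ** 2

    print(f"{rcf(8.0, 10000):.0f} x g")  # an 8 cm rotor at 10,000 rpm gives ~8944 x g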


1.8.1. Centrifugation in biological research employs:
• Microcentrifuges
• High-speed centrifuges
• Fractionation processes
• Ultracentrifuges

Figure: Centrifugation

1.11. UV-Vis Spectroscopy
Ultraviolet-visible spectroscopy or ultraviolet-visible spectrophotometry (UV-Vis or UV/Vis) refers to absorption spectroscopy or reflectance spectroscopy in the ultraviolet-visible spectral region. This means it uses light in the visible and adjacent (near-UV and near-infrared [NIR]) ranges. The absorption or reflectance in the visible range directly affects the perceived color of the chemicals involved. In this region of the electromagnetic spectrum, molecules undergo electronic transitions. This technique is complementary to fluorescence spectroscopy, in that fluorescence deals with transitions from the excited state to the ground state, while absorption measures transitions from the ground state to the excited state.

Fig.1.3. UV/Vis spectroscopy
1.11.1. Working Principle
Molecules containing π-electrons or non-bonding electrons (n-electrons) can absorb energy in the form of ultraviolet or visible light to excite these electrons to higher anti-bonding molecular orbitals. The more easily excited the electrons (i.e. the lower the energy gap between the HOMO and the LUMO), the longer the wavelength of light they can absorb.
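
As a worked illustration of this principle (the gap values are hypothetical, not from the source), the relation E = hc/λ converts an electronic energy gap into the wavelength of the absorbed light:

    H = 6.626e-34    # Planck constant, J s
    C = 2.998e8      # speed of light, m/s
    EV = 1.602e-19   # joules per electronvolt

    def absorption_wavelength_nm(gap_ev: float) -> float:
        """Wavelength (in nm) of a photon whose energy matches a gap given in eV."""
        return H * C / (gap_ev * EV) * 1e9

    print(f"{absorption_wavelength_nm(3.0):.0f} nm")  # a 3 eV gap absorbs near 413 nm
    print(f"{absorption_wavelength_nm(2.0):.0f} nm")  # a smaller 2 eV gap absorbs near 620 nm

The smaller gap gives the longer absorption wavelength, as the paragraph states.
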
1.11.2. Applications
UV/Vis spectroscopy is routinely used in analytical chemistry for the quantitative determination of different analytes, such as transition metal ions, highly conjugated organic compounds, and biological macromolecules. Spectroscopic analysis is commonly carried out in solutions, but solids and gases may also be studied.
• Solutions of transition metal ions can be colored (i.e., absorb visible light) because electrons within the metal atoms can be excited from one electronic state to another. The color of metal ion solutions is strongly affected by the presence of other species, such as certain anions or ligands. For instance, the color of a dilute solution of copper sulfate is a very light blue; adding ammonia intensifies the color and changes the wavelength of maximum absorption (λmax).
• Organic compounds, especially those with a high degree of conjugation, also absorb light in the UV or visible regions of the electromagnetic spectrum. The solvents for these determinations are often water for water-soluble compounds, or ethanol for organic-soluble compounds. (Organic solvents may have significant UV absorption; not all solvents are suitable for use in UV spectroscopy. Ethanol absorbs very weakly at most wavelengths.) Solvent polarity and pH can affect the absorption spectrum of an organic compound. Tyrosine, for example, increases in absorption maximum and molar extinction coefficient when pH increases from 6 to 13 or when solvent polarity decreases.
• While charge transfer complexes also give rise to colors, the colors are often too intense to be used for quantitative measurement.
1.11.3. Beer-Lambert's Law
The method is most often used in a quantitative way to determine concentrations of an absorbing species in solution, using the Beer-Lambert law:

A = log10(I0 / I) = ε c L

where A is the measured absorbance, in absorbance units (AU), I0 is the intensity of the incident light at a given wavelength, I is the transmitted intensity, L is the path length through the sample, and c is the concentration of the absorbing species. For each species and wavelength, ε is a constant known as the molar absorptivity or extinction coefficient. This constant is a fundamental molecular property in a given solvent, at a particular temperature and pressure, and has units of L mol⁻¹ cm⁻¹ (often written M⁻¹ cm⁻¹).
The absorbance and extinction ε are sometimes defined in terms of the natural logarithm instead of the base-10 logarithm. The Beer-Lambert law is useful for characterizing many compounds but does not hold as a universal relationship between the concentration and absorption of all substances. A second-order polynomial relationship between absorption and concentration is sometimes encountered for very large, complex molecules such as organic dyes (Xylenol Orange or Neutral Red, for example).
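
A minimal sketch of how the law is applied in practice, assuming a hypothetical molar absorptivity (ε = 1.2 × 10⁴ M⁻¹ cm⁻¹ is chosen purely for illustration) and a 1 cm cuvette:

    import math

    def absorbance_from_percent_t(percent_t: float) -> float:
        """Convert percentage transmittance to absorbance: A = -log10(T)."""
        return -math.log10(percent_t / 100.0)

    def concentration(a: float, epsilon: float, path_cm: float = 1.0) -> float:
        """Beer-Lambert law rearranged: c = A / (epsilon * L), in mol/L."""
        return a / (epsilon * path_cm)

    a = absorbance_from_percent_t(25.0)        # %T = 25 gives A of about 0.602
    c = concentration(a, epsilon=1.2e4)        # epsilon is hypothetical (M^-1 cm^-1)
    print(f"A = {a:.3f}, c = {c:.2e} mol/L")   # about 5.02e-05 mol/L
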
1.11.4. UV-Vis spectrophotometer
The instrument used in ultraviolet-visible spectroscopy is called a UV/Vis spectrophotometer. It measures the intensity of light passing through a sample (I) and compares it to the intensity of the light before it passes through the sample (I0). The ratio I/I0 is called the transmittance, and is usually expressed as a percentage (%T). The absorbance, A, is based on the transmittance:

A = -log10(T) = -log10(%T / 100)

The UV-visible spectrophotometer can also be configured to measure reflectance. In this case, the spectrophotometer measures the intensity of light reflected from a sample (I) and compares it to the intensity of light reflected from a reference material (I0), such as a white tile. The ratio I/I0 is called the reflectance, and is usually expressed as a percentage (%R).
The basic parts of a spectrophotometer are a light source, a holder for the sample, a diffraction grating in a monochromator or a prism to separate the different wavelengths of light, and a detector. The radiation source is often a tungsten filament (300-2500 nm); a deuterium arc lamp, which is continuous over the ultraviolet region (190-400 nm); a xenon arc lamp, which is continuous from 160-2000 nm; or, more recently, light-emitting diodes (LEDs)[8] for the visible wavelengths. The detector is typically a photomultiplier tube, a photodiode, a photodiode array or a charge-coupled device (CCD). Single photodiode detectors and photomultiplier tubes are used with scanning monochromators, which filter the light so that only light of a single wavelength reaches the detector at one time. The scanning monochromator moves the diffraction grating to "step through" each wavelength so that its intensity may be measured as a function of wavelength. Fixed monochromators are used with CCDs and photodiode arrays. As both of these devices consist of many detectors grouped into one- or two-dimensional arrays, they are able to collect light of different wavelengths on different pixels or groups of pixels simultaneously.

Fig.1.4. Schematic UV- visible spectrophotometer.
A spectrophotometer can be either single beam or double beam. In a single beam instrument (such as the Spectronic 20), all of the light passes through the sample cell. I0 must be measured by removing the sample. This was the earliest design and is still in common use in both teaching and industrial labs.
In a double-beam instrument, the light is split into two beams before it reaches the sample. One beam is used as the reference; the other beam passes through the sample. The reference beam intensity is taken as 100% transmission (or 0 absorbance), and the measurement displayed is the ratio of the two beam intensities. Some double-beam instruments have two detectors (photodiodes), and the sample and reference beams are measured at the same time. In other instruments, the two beams pass through a beam chopper, which blocks one beam at a time. The detector alternates between measuring the sample beam and the reference beam in synchronism with the chopper. There may also be one or more dark intervals in the chopper cycle. In this case, the measured beam intensities may be corrected by subtracting the intensity measured in the dark interval before the ratio is taken.
Samples for UV/Vis spectrophotometry are most often liquids, although the absorbance of gases and even of solids can also be measured. Samples are typically placed in a transparent cell, known as a cuvette. Cuvettes are typically rectangular in shape, commonly with an internal width of 1 cm. (This width becomes the path length, L, in the Beer-Lambert law.) Test tubes can also be used as cuvettes in some instruments. The type of sample container used must allow radiation to pass over the spectral region of interest. The most widely applicable cuvettes are made of high-quality fused silica or quartz glass because these are transparent throughout the UV, visible and near-infrared regions. Glass and plastic cuvettes are also common, although glass and most plastics absorb in the UV, which limits their usefulness to visible wavelengths. Specialized instruments have also been made, including spectrophotometers attached to telescopes to measure the spectra of astronomical features.
UV-visible microspectrophotometers consist of a UV-visible microscope integrated with a UV-visible spectrophotometer. A complete spectrum of the absorption at all wavelengths of interest can often be produced directly by a more sophisticated spectrophotometer. In simpler instruments the absorption is determined one wavelength at a time and then compiled into a spectrum by the operator. By removing the concentration dependence, the extinction coefficient (ε) can be determined as a function of wavelength.
1.12. FTIR Spectra Analysis
FT-IR stands for Fourier Transform Infrared, the preferred method of infrared spectroscopy. In infrared spectroscopy, IR radiation is passed through a sample. Some of the infrared radiation is absorbed by the sample and some of it is passed through (transmitted). The resulting spectrum represents the molecular absorption and transmission, creating a molecular fingerprint of the sample. Like a fingerprint, no two unique molecular structures produce the same infrared spectrum. This makes infrared spectroscopy useful for several types of analysis.

Fig.1.5.Fourier transform infrared spectroscopy
Fig.1.6. Spectrometer
So, FT-IR can provide information:
• It can identify unknown materials
• It can determine the quality or consistency of a sample
• It can determine the amount of components in a mixture
1.12.1. Infrared Spectroscopy                                                                            
Infrared spectroscopy has been a workhorse technique for materials analysis in the laboratory for over seventy years. An infrared spectrum represents a fingerprint of a sample, with absorption peaks which correspond to the frequencies of vibrations between the bonds of the atoms making up the material. Because each different material is a unique combination of atoms, no two compounds produce the exact same infrared spectrum. Therefore, infrared spectroscopy can result in a positive identification (qualitative analysis) of every different kind of material. In addition, the size of the peaks in the spectrum is a direct indication of the amount of material present. With modern software algorithms, infrared is an excellent tool for quantitative analysis.
1.12.2. Older Technology             
The original infrared instruments were of the dispersive type. These instruments separated the individual frequencies of energy emitted from the infrared source. This was accomplished by the use of a prism or grating. An infrared prism works exactly the same as a visible prism, which separates visible light into its colors (frequencies). A grating is a more modern dispersive element which better separates the frequencies of infrared energy. The detector measures the amount of energy at each frequency which has passed through the sample. This results in a spectrum, which is a plot of intensity vs. frequency.
Fourier transform infrared spectroscopy is preferred over dispersive or filter methods of infrared spectral analysis for several reasons:
• It is a non-destructive technique
• It provides a precise measurement method which requires no external calibration
• It can increase speed, collecting a scan every second
• It can increase sensitivity – one second scans can be co-added together to ratio out random noise
• It has greater optical throughput
• It is mechanically simple with only one moving part

1.12.3. Importance of FT-IR
            Fourier Transform Infrared (FT-IR) spectrometry was developed in order to overcome the limitations encountered with dispersive instruments. The main difficulty was the slow scanning process. A method for measuring all of the infrared frequencies simultaneously, rather than individually, was needed. A solution was developed which employed a very simple optical device called an interferometer. The interferometer produces a unique type of signal which has all of the infrared frequencies "encoded" into it. The signal can be measured very quickly, usually on the order of one second or so. Thus, the time element per sample is reduced to a matter of a few seconds rather than several minutes.
Most interferometers employ a beam splitter which takes the incoming infrared beam and divides it into two optical beams. One beam reflects off of a flat mirror which is fixed in place. The other beam reflects off of a flat mirror which is on a mechanism which allows this mirror to move a very short distance (typically a few millimeters) away from the beam splitter. The two beams reflect off of their respective mirrors and are recombined when they meet back at the beam splitter. Because the path that one beam travels is a fixed length and the other is constantly changing as its mirror moves, the signal which exits the interferometer is the result of these two beams "interfering" with each other. The resulting signal is called an interferogram, which has the unique property that every data point (a function of the moving mirror position) which makes up the signal has information about every infrared frequency which comes from the source.
Fig.Interferogram

This means that as the interferogram is measured, all frequencies are being measured simultaneously. Thus, the use of the interferometer results in extremely fast measurements. Because the analyst requires a frequency spectrum (a plot of the intensity at each individual frequency) in order to make an identification, the measured interferogram signal cannot be interpreted directly. A means of "decoding" the individual frequencies is required. This can be accomplished via a well-known mathematical technique called the Fourier transformation. This transformation is performed by the computer, which then presents the user with the desired spectral information for analysis.
Fig. Conversion of signal into spectrum by using FFT
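
To make the decoding step concrete, the following numpy sketch, an illustration rather than the instrument's actual processing chain, builds a toy interferogram from two cosine components and recovers their positions with a fast Fourier transform:

    import numpy as np

    n = 2048
    x = np.arange(n)  # mirror-position samples (arbitrary units)
    interferogram = (np.cos(2 * np.pi * 50 * x / n)             # strong component at bin 50
                     + 0.5 * np.cos(2 * np.pi * 120 * x / n))   # weaker component at bin 120

    spectrum = np.abs(np.fft.rfft(interferogram))   # "decode" the encoded frequencies
    peaks = sorted(np.argsort(spectrum)[-2:].tolist())
    print(peaks)  # [50, 120]: both components recovered from the summed signal
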
1.12.4. The Sample Analysis Process                                                                 
The normal instrumental process is as follows:
1. The Source: Infrared energy is emitted from a glowing black-body source. This beam passes through an aperture which controls the amount of energy presented to the sample (and, ultimately, to the detector).
2. The Interferometer: The beam enters the interferometer where the "spectral encoding" takes place. The resulting interferogram signal then exits the interferometer.
3. The Sample: The beam enters the sample compartment where it is transmitted through or reflected off of the surface of the sample, depending on the type of analysis being accomplished. This is where specific frequencies of energy, which are uniquely characteristic of the sample, are absorbed.
4. The Detector: The beam finally passes to the detector for final measurement. The detectors used are specially designed to measure the special interferogram signal.
5. The Computer: The measured signal is digitized and sent to the computer where the Fourier transformation takes place. The final infrared spectrum is then presented to the user for interpretation and any further manipulation.
Fig.1.9.Sample analysis process
Because there needs to be a relative scale for the absorption intensity, a background spectrum must also be measured. This is normally a measurement with no sample in the beam. This can be compared to the measurement with the sample in the beam to determine the "percent transmittance." This technique results in a spectrum which has all of the instrumental characteristics removed. Thus, all spectral features which are present are strictly due to the sample. A single background measurement can be used for many sample measurements because this spectrum is characteristic of the instrument itself.

1.12.5. Advantages of FT-IR
Some of the major advantages of FT-IR over the dispersive technique include:
Speed: Because all of the frequencies are measured simultaneously, most measurements by FT-IR are made in a matter of seconds rather than several minutes. This is sometimes referred to as the Fellgett advantage.
Sensitivity: Sensitivity is dramatically improved with FT-IR for many reasons. The detectors employed are much more sensitive; the optical throughput is much higher (referred to as the Jacquinot advantage), which results in much lower noise levels; and the fast scans enable the addition of several scans in order to reduce the random measurement noise to any desired level (referred to as signal averaging).
Mechanical Simplicity: The moving mirror in the interferometer is the only continuously moving part in the instrument. Thus, there is very little possibility of mechanical breakdown.
Internally Calibrated: These instruments employ a HeNe laser as an internal wavelength calibration standard (referred to as the Connes advantage). These instruments are self-calibrating and never need to be calibrated by the user.
These advantages, along with several others, make measurements made by FT-IR extremely accurate and reproducible. Thus, it is a very reliable technique for positive identification of virtually any sample. The sensitivity benefits enable identification of even the smallest of contaminants. This makes FT-IR an invaluable tool for quality control or quality assurance applications, whether it is batch-to-batch comparison to quality standards or analysis of an unknown contaminant. In addition, the sensitivity and accuracy of FT-IR detectors, along with a wide variety of software algorithms, have dramatically increased the practical use of infrared for quantitative analysis. Quantitative methods can be easily developed and calibrated and can be incorporated into simple procedures for routine analysis.
Thus, the Fourier Transform Infrared (FT-IR) technique has brought significant practical advantages to infrared spectroscopy. It has made possible the development of many new sampling techniques which were designed to tackle challenging problems which were impossible by older technology. It has made the use of infrared analysis virtually limitless.

1.13. SEM measurement  
The surface morphology of the as-synthesized silver nanoparticles was observed by Scanning Electron Microscope (SEM) (Zeiss Scanning electron microscope). The sample was coated on the conductive carbon tape.

Fig.1.10. SEM Microscope
A scanning electron microscope (SEM) is a type of electron microscope that produces images of a sample by scanning it with a focused beam of electrons. The electrons interact with atoms in the sample, producing various signals that can be detected and that contain information about the sample's surface topography and composition. The electron beam is generally scanned in a raster scan pattern, and the beam's position is combined with the detected signal to produce an image. SEM can achieve resolution better than 1 nanometer. Specimens can be observed in high vacuum, in low vacuum, in wet conditions (in environmental SEM), and at a wide range of cryogenic or elevated temperatures. The most common mode of detection is by secondary electrons emitted by atoms excited by the electron beam. On a flat surface, the plume of secondary electrons is mostly contained by the sample, but on a tilted surface, the plume is partially exposed and more electrons are emitted. By scanning the sample and detecting the secondary electrons, an image displaying the topography of the surface is created.
1.13.1. History                                                                                                        
An account of the early history of SEM has been presented by McMullan. Although Max Knoll produced a photo with a 50 mm object-field-width showing channeling contrast by the use of an electron beam scanner, it was Manfred von Ardenne who in 1937 invented a true microscope with high magnification by scanning a very small raster with a demagnified and finely focused electron beam. Ardenne applied the scanning principle not only to achieve magnification but also to purposefully eliminate the chromatic aberration otherwise inherent in the electron microscope. He further discussed the various detection modes, possibilities and theory of SEM, together with the construction of the first high-magnification SEM. Further work was reported by Zworykin's group, followed by the Cambridge groups in the 1950s and early 1960s headed by Charles Oatley, all of which finally led to the marketing of the first commercial instrument, the "Stereoscan", by the Cambridge Scientific Instrument Company in 1965 (delivered to DuPont).
1.13.2. Working principle                                                                                    
The types of signals produced by an SEM include secondary electrons (SE), back-scattered electrons (BSE), characteristic X-rays, light (cathodoluminescence, CL), specimen current and transmitted electrons. Secondary electron detectors are standard equipment in all SEMs, but it is rare for a single machine to have detectors for all possible signals. The signals result from interactions of the electron beam with atoms at or near the surface of the sample. In the most common or standard detection mode, secondary electron imaging (SEI), the SEM can produce very high-resolution images of a sample surface, revealing details less than 1 nm in size. Due to the very narrow electron beam, SEM micrographs have a large depth of field, yielding a characteristic three-dimensional appearance useful for understanding the surface structure of a sample. A wide range of magnifications is possible, from about 10 times (about equivalent to that of a powerful hand-lens) to more than 500,000 times, about 250 times the magnification limit of the best light microscopes. Back-scattered electrons (BSE) are beam electrons that are reflected from the sample by elastic scattering. BSE are often used in analytical SEM along with the spectra made from the characteristic X-rays, because the intensity of the BSE signal is strongly related to the atomic number (Z) of the specimen. BSE images can provide information about the distribution of different elements in the sample. For the same reason, BSE imaging can image colloidal gold immunolabels of 5 or 10 nm diameter, which would otherwise be difficult or impossible to detect in secondary electron images of biological specimens. Characteristic X-rays are emitted when the electron beam removes an inner-shell electron from the sample, causing a higher-energy electron to fill the shell and release energy. These characteristic X-rays are used to identify the composition and measure the abundance of elements in the sample.
1.13.3. Sample preparation                                                                                             
All samples must be of an appropriate size to fit in the specimen chamber and are generally mounted rigidly on a specimen holder called a specimen stub. Several models of SEM can examine any part of a 6-inch (15 cm) semiconductor wafer, and some can tilt an object of that size to 45°. For conventional imaging in the SEM, specimens must be electrically conductive, at least at the surface, and electrically grounded to prevent the accumulation of electrostatic charge at the surface. Metal objects require little special preparation for SEM except for cleaning and mounting on a specimen stub. Nonconductive specimens tend to charge when scanned by the electron beam, and especially in secondary electron imaging mode this causes scanning faults and other image artifacts. Such specimens are therefore usually coated with an ultrathin layer of electrically conducting material, deposited on the sample either by low-vacuum sputter coating or by high-vacuum evaporation.
An alternative to coating for some biological samples is to increase the bulk conductivity of the material by impregnation with osmium using variants of the OTO staining method (O-osmium, T-thiocarbohydrazide, O-osmium).
Nonconducting specimens may also be imaged uncoated using environmental SEM (ESEM) or the low-voltage mode of SEM operation. Environmental SEM instruments place the specimen in a relatively high-pressure chamber where the working distance is short, and the electron optical column is differentially pumped to keep the pressure adequately low at the electron gun. The high-pressure region around the sample in the ESEM neutralizes charge and provides an amplification of the secondary electron signal. Low-voltage SEM is typically conducted in an FEG-SEM, because a field emission gun (FEG) can produce high primary electron brightness and a small spot size even at low accelerating potentials. To prevent charging of non-conductive specimens, operating conditions must be adjusted so that the incoming beam current equals the sum of the outgoing secondary and backscattered electron currents; this condition usually occurs at accelerating voltages of 0.3–4 kV.
Embedding in a resin, followed by polishing to a mirror-like finish, can be used for both biological and materials specimens when imaging in backscattered electrons or when doing quantitative X-ray microanalysis.
The main preparation techniques are not required for the environmental SEM described above, although some biological specimens can still benefit from fixation.
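To make the charge-balance condition concrete, the following minimal Python sketch estimates the beam energy at which the total emitted-electron yield crosses unity. It assumes a generic empirical "universal curve" fit for the secondary electron yield plus a roughly energy-independent backscatter coefficient; the parameter values (delta_max, e_max_kev, eta) are illustrative, not material-specific.

```python
from math import exp

def total_yield(e_kev, delta_max=1.3, e_max_kev=0.4, eta=0.2):
    """Total emitted-electron yield (secondary + backscattered) per
    incident electron, using a common empirical 'universal curve' for
    the secondary yield. Charge balance occurs where the yield is 1."""
    r = e_kev / e_max_kev
    delta = 1.28 * delta_max * r**-0.67 * (1.0 - exp(-1.614 * r**1.67))
    return delta + eta

# Scan beam energies to bracket the upper crossover energy; for these
# illustrative parameters it falls between 1 and 2 keV, inside the
# 0.3-4 kV window quoted above.
for e_kev in (0.5, 1.0, 2.0, 3.0, 4.0):
    print(f"{e_kev:.1f} keV -> yield {total_yield(e_kev):.2f}")
```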
1.13.4. Scanning process and image formation

Fig.1.11. Schematic of an SEM
In a typical SEM, an electron beam is thermionically emitted from an electron gun fitted with a tungsten filament cathode. Tungsten is normally used in thermionic electron guns because it has the highest melting point and lowest vapor pressure of all metals, thereby allowing it to be heated for electron emission, and because of its low cost. The electron beam, which typically has an energy ranging from 0.2 keV to 40 keV, is focused by one or two condenser lenses to a spot about 0.4 nm to 5 nm in diameter. The beam passes through pairs of scanning coils or pairs of deflector plates in the electron column, typically in the final lens, which deflect the beam in the x and y axes so that it scans in a raster fashion over a rectangular area of the sample surface.
When the primary electron beam interacts with the sample, the electrons lose energy by repeated random scattering and absorption within a teardrop-shaped volume of the specimen known as the interaction volume, which extends from less than 100 nm to approximately 5 µm into the surface. The size of the interaction volume depends on the electrons' landing energy, the atomic number of the specimen and the specimen's density. The energy exchange between the electron beam and the sample results in the reflection of high-energy electrons by elastic scattering, the emission of secondary electrons by inelastic scattering and the emission of electromagnetic radiation, each of which can be detected by specialized detectors. The beam current absorbed by the specimen can also be detected and used to create images of the distribution of specimen current.
Electronic amplifiers of various types are used to amplify the signals, which are displayed as variations in brightness on a computer monitor (or, for vintage models, on a cathode ray tube). Each pixel of computer video memory is synchronized with the position of the beam on the specimen in the microscope, and the resulting image is therefore a distribution map of the intensity of the signal being emitted from the scanned area of the specimen. In older microscopes the image could be captured by photography from a high-resolution cathode ray tube, but in modern machines it is saved to computer data storage.
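As a rough illustration of how the raster scan builds an image, the following Python sketch steps a beam over an nx-by-ny grid and records one detector reading per beam position; the sample_signal callable is a stand-in for a real detector, not any instrument API.

```python
import numpy as np

def raster_scan(sample_signal, nx=256, ny=256):
    """Toy model of SEM image formation: each pixel is synchronized
    with one beam position, and the stored value is whatever the
    detector reported while the beam dwelt there."""
    image = np.zeros((ny, nx))
    for iy in range(ny):              # slow scan axis (y deflection)
        for ix in range(nx):          # fast scan axis (x deflection)
            image[iy, ix] = sample_signal(ix / nx, iy / ny)
    return image

# Example: a fake surface whose emission varies sinusoidally with
# position, standing in for topography-dependent secondary yield.
img = raster_scan(lambda x, y: 0.5 + 0.5 * np.sin(8 * np.pi * x) * np.cos(8 * np.pi * y))
print(img.shape)  # (256, 256)
```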
1.13.5. Magnification
Magnification in a SEM can be controlled over a range of up to six orders of magnitude, from about 10 to 500,000 times. Unlike optical and transmission electron microscopes, image magnification in the SEM is not a function of the power of the objective lens. SEMs may have condenser and objective lenses, but their function is to focus the beam to a spot, not to image the specimen. Provided the electron gun can generate a beam with a sufficiently small diameter, a SEM could in principle work entirely without condenser or objective lenses, although it might not be very versatile or achieve very high resolution. In a SEM, as in scanning probe microscopy, magnification results from the ratio of the dimensions of the raster on the specimen to the dimensions of the raster on the display device. Assuming that the display screen has a fixed size, higher magnification results from reducing the size of the raster on the specimen, and vice versa. Magnification is therefore controlled by the current supplied to the x, y scanning coils, or the voltage supplied to the x, y deflector plates, and not by the objective lens power.
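Because magnification is simply the ratio of the display raster to the specimen raster, it reduces to a one-line calculation; this small sketch (names are illustrative) assumes a fixed-size display, as in the text.

```python
def sem_magnification(display_width_m, specimen_raster_width_m):
    """SEM magnification = display raster size / specimen raster size.
    No lens power enters the calculation."""
    return display_width_m / specimen_raster_width_m

# A 30 cm wide display showing a 30 um wide scanned area gives 10,000x;
# halving the scanned area to 15 um doubles the magnification.
print(sem_magnification(0.30, 30e-6))  # 10000.0
print(sem_magnification(0.30, 15e-6))  # 20000.0
```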
1.13.6. Resolution of SEM
Fig.1.12. Resolution of SEM
Fig.1.12 illustrates the typical practical magnification range of a scanning electron microscope designed for biological specimens, from 25x, about 6 mm across the whole field of view, down to 12,000x, about 12 μm across the whole field of view. The spherical objects are glass beads with a diameter of 10 μm, similar in diameter to a red blood cell.
The spatial resolution of the SEM depends on the size of the electron spot, which in turn depends on both the wavelength of the electrons and the electron-optical system that produces the scanning beam. The resolution is also limited by the size of the interaction volume, the volume of specimen material that interacts with the electron beam. The spot size and the interaction volume are both large compared to the distances between atoms, so the resolution of the SEM is not high enough to image individual atoms, as is possible in the shorter-wavelength (i.e. higher-energy) transmission electron microscope (TEM). The SEM has compensating advantages, though, including the ability to image a comparatively large area of the specimen; the ability to image bulk materials (not just thin films or foils); and the variety of analytical modes available for measuring the composition and properties of the specimen. Depending on the instrument, the resolution can fall somewhere between less than 1 nm and 20 nm. As of 2009, the world's highest SEM resolution at high beam energies (0.4 nm at 30 kV) was obtained with the Hitachi SU-9000.
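One widely used estimate of how far the interaction volume extends is the Kanaya-Okayama range. The sketch below implements that empirical formula (energy in keV, density in g/cm³, result in micrometres), with copper as an example; the function name is ours.

```python
def kanaya_okayama_range_um(e_kev, atomic_weight, z, density_g_cm3):
    """Kanaya-Okayama electron penetration range in micrometres:
    R = 0.0276 * A * E^1.67 / (Z^0.89 * rho)."""
    return 0.0276 * atomic_weight * e_kev**1.67 / (z**0.89 * density_g_cm3)

# Copper (A = 63.5, Z = 29, rho = 8.96 g/cm^3) at 20 keV gives roughly
# 1.5 um: the interaction volume, not the sub-nm spot size, can set the
# effective resolution for signals generated deep in the sample.
print(f"{kanaya_okayama_range_um(20, 63.5, 29, 8.96):.2f} um")
```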
1.14. Transmission electron microscopy
Transmission electron microscopy (TEM) is a microscopy technique in which a beam of electrons is transmitted through an ultra-thin specimen, interacting with the specimen as it passes through. An image is formed from the interaction of the electrons transmitted through the specimen; the image is magnified and focused onto an imaging device, such as a fluorescent screen or a layer of photographic film, or is detected by a sensor such as a CCD camera.
TEMs are capable of imaging at a significantly higher resolution than light microscopes, owing to the small de Broglie wavelength of electrons. This enables the instrument's user to examine fine detail, even as small as a single column of atoms, which is thousands of times smaller than the smallest resolvable object in a light microscope. TEM is a major analysis method in a range of scientific fields, in both the physical and biological sciences. TEMs find application in cancer research, virology and materials science, as well as in pollution, nanotechnology and semiconductor research.
At smaller magnifications, TEM image contrast is due to absorption of electrons in the material, governed by the thickness and composition of the material. At higher magnifications, complex wave interactions modulate the intensity of the image, requiring expert analysis of the observed images. Alternative modes of use allow the TEM to observe modulations in chemical identity, crystal orientation, electronic structure and sample-induced electron phase shift, as well as the regular absorption-based imaging.
The first TEM was built by Max Knoll and Ernst Ruska in 1931; the same group developed the first TEM with a resolution greater than that of light in 1933 and the first commercial TEM in 1939.
Fig.1.13. A TEM image of the polio virus; the virus is about 30 nm in size.
1.14.1. Electrons
Theoretically, the maximum resolution d that one can obtain with a light microscope is limited by the wavelength of the photons being used to probe the sample, λ, and the numerical aperture of the system, NA:

\[ d = \frac{\lambda}{2\,\mathrm{NA}} \]
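For a feel of the numbers, a minimal sketch of this diffraction-limit calculation (the function name is illustrative):

```python
def abbe_limit_m(wavelength_m, numerical_aperture):
    """Abbe diffraction limit: d = wavelength / (2 * NA)."""
    return wavelength_m / (2 * numerical_aperture)

# Green light (550 nm) with a good oil-immersion objective (NA = 1.4)
# resolves only ~200 nm, which is why shorter-wavelength electrons
# were sought as a probe.
print(f"{abbe_limit_m(550e-9, 1.4) * 1e9:.0f} nm")
```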
Early twentieth century scientists theorized ways of getting around the limitations of the relatively large wavelength of visible light (wavelengths of 400–700 nanometers) by using electrons. Like all matter, electrons have both wave and particle properties (as theorized by Louis-Victor de Broglie), and their wave-like properties mean that a beam of electrons can be made to behave like a beam of electromagnetic radiation. The wavelength of electrons is related to their kinetic energy via the de Broglie equation. An additional correction must be made to account for relativistic effects, as in a TEM an electron's velocity approaches the speed of light, c.
\[ \lambda_e \approx \frac{h}{\sqrt{2 m_0 E \left(1 + \frac{E}{2 m_0 c^2}\right)}} \]

where h is Planck's constant, m0 is the rest mass of an electron and E is the energy of the accelerated electron. Electrons are usually generated in an electron microscope by a process known as thermionic emission from a filament, usually tungsten, in the same manner as a light bulb, or alternatively by field electron emission. The electrons are then accelerated by an electric potential (measured in volts) and focused by electrostatic and electromagnetic lenses onto the sample. The transmitted beam contains information about electron density, phase and periodicity; this beam is used to form an image.
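A short sketch evaluating this relativistic wavelength for a typical accelerating voltage; the constants are standard CODATA values and the function name is ours.

```python
from math import sqrt

H = 6.62607015e-34           # Planck's constant, J s
M0 = 9.1093837015e-31        # electron rest mass, kg
E_CHARGE = 1.602176634e-19   # elementary charge, C
C_LIGHT = 2.99792458e8       # speed of light, m/s

def electron_wavelength_m(accel_voltage_v):
    """Relativistically corrected de Broglie wavelength (m) of an
    electron accelerated through accel_voltage_v volts."""
    e_j = E_CHARGE * accel_voltage_v   # kinetic energy E in joules
    return H / sqrt(2 * M0 * e_j * (1 + e_j / (2 * M0 * C_LIGHT**2)))

# At 100 kV the wavelength is about 3.7 pm, some five orders of
# magnitude below the 400-700 nm wavelengths of visible light.
print(f"{electron_wavelength_m(100e3) * 1e12:.2f} pm")
```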
1.14.2. Source formation
Fig.1.14. Layout of optical components in a basic TEM. Emission sources include single-crystal LaB6 filaments and hairpin-style tungsten filaments.
From the top down, the TEM consists of an emission source, which may be a tungsten filament or a lanthanum hexaboride (LaB6) source.[19] For tungsten, this takes the form of either a hairpin-style filament or a small spike-shaped filament; LaB6 sources utilize small single crystals. By connecting the gun to a high-voltage source (typically ~100–300 kV), the gun will, given sufficient current, begin to emit electrons by either thermionic or field electron emission into the vacuum. This extraction is almost always aided by a Wehnelt cylinder, which provides preliminary focus by consolidating and directing the emitted electrons into a beam. The upper lenses of the TEM then further focus the electron beam to the desired size and location for subsequent interaction with the sample.[20]
Manipulation of the electron beam is performed using two physical effects. First, the interaction of electrons with a magnetic field causes electrons to move according to the left-hand rule, allowing electromagnets to manipulate the electron beam. The use of magnetic fields allows the formation of a magnetic lens of variable focusing power, the lens shape originating from the distribution of magnetic flux. Second, electrostatic fields can deflect the electrons through a constant angle. Coupling two deflections in opposing directions with a small intermediate gap allows a shift to be formed in the beam path; this is used in the TEM for beam shifting and is extremely important to STEM. From these two effects, as well as the use of an electron imaging system, sufficient control over the beam path is possible for TEM operation. The optical configuration of a TEM can be rapidly changed, unlike that of an optical microscope, because lenses in the beam path can be enabled, have their strength changed, or be disabled entirely via rapid electrical switching, the speed of which is limited by effects such as the magnetic hysteresis of the lenses.
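The deflection direction follows directly from the Lorentz force, F = q v x B; a minimal sketch with illustrative values (not instrument parameters):

```python
import numpy as np

E_CHARGE = 1.602176634e-19  # elementary charge, C

def lorentz_force_n(velocity_m_s, b_field_t, charge_c=-E_CHARGE):
    """Force in newtons on a charge moving through a magnetic field:
    F = q v x B. The electron's negative charge is why beam-deflection
    rules are quoted 'left-handed'."""
    return charge_c * np.cross(velocity_m_s, b_field_t)

# An electron travelling along +z through a field along +x is pushed
# along -y: v x B points along +y, and the negative charge flips it.
print(lorentz_force_n(np.array([0.0, 0.0, 1e7]), np.array([1e-3, 0.0, 0.0])))
```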
1.14.3. Optics
The lenses of a TEM allow for beam convergence, with the angle of convergence as a variable parameter, giving the TEM the ability to change magnification simply by modifying the amount of current that flows through the coils of the quadrupole or hexapole lenses. A quadrupole lens is an arrangement of electromagnetic coils at the vertices of a square, enabling the generation of lensing magnetic fields; the hexapole configuration simply enhances the lens symmetry by using six rather than four coils.
Typically a TEM consists of three stages of lensing: the condenser lenses, the objective lenses and the projector lenses. The condenser lenses are responsible for primary beam formation, while the objective lenses focus the beam that comes through the sample itself (in STEM scanning mode, there are also objective lenses above the sample to make the incident electron beam convergent). The projector lenses are used to expand the beam onto the phosphor screen or another imaging device, such as film. The magnification of the TEM is due to the ratio of the distances between the specimen and the objective lens' image plane.[21] Additional quadrupole or hexapole lenses allow for the correction of asymmetrical beam distortions, known as astigmatism. Note that TEM optical configurations differ significantly with implementation, with manufacturers using custom lens configurations, such as in spherical aberration corrected instruments, or TEMs utilizing energy filtering to correct electron chromatic aberration.
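As a toy illustration of how successive imaging stages multiply up to the final magnification (the per-stage values and the assumed intermediate stage are purely illustrative, not taken from any instrument):

```python
def stage_magnification(object_dist_m, image_dist_m):
    """Thin-lens magnification of a single imaging stage: M = v / u."""
    return image_dist_m / object_dist_m

def total_magnification(stage_mags):
    """Overall TEM magnification is the product of the successive
    imaging stages (e.g. objective, intermediate, projector)."""
    total = 1.0
    for m in stage_mags:
        total *= m
    return total

# An objective at 50x followed by intermediate and projector stages at
# 20x and 100x yields 100,000x overall.
print(total_magnification([50, 20, 100]))  # 100000.0
```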