Friday, June 3, 2011
APPLIED PHYSICS
Applied physics is a general term for physics which is intended for a particular technological or practical use. "Applied" is distinguished from "pure" by a subtle combination of factors such as the motivation and attitude of researchers and the nature of the relationship to the technology or science that may be affected by the work. It usually differs from engineering in that an applied physicist may not be designing something in particular, but rather is using physics or conducting physics research with the aim of developing new technologies or solving an engineering problem. This approach is similar to that of applied mathematics. In other words, applied physics is rooted in the fundamental truths and basic concepts of the physical sciences but is concerned with the utilization of these scientific principles in practical devices and systems. Applied physicists can also be interested in the use of physics for scientific research. For instance, people working on accelerator physics seek to build better accelerators for research in theoretical physics.
Fields and areas of research
* Accelerator physics
* Acoustics
* Agrophysics
* Analog Electronics
* Force microscopy and imaging
* Ballistics
* Biophysics
* Communication Physics
* Computational physics
* Control Theory
* Digital Electronics
* Econophysics
* Engineering physics
* Fiber Optics
* Fluid dynamics
* Geophysics
* Laser physics
* Medical physics
* Metrological Physics
* Microfluidics
* Nanotechnology
* Nondestructive testing
* Nuclear engineering
* Nuclear technology
* Optics
* Optoelectronics
* Photovoltaics
* Plasma physics
* Quantum electronics
* Semiconductor physics and devices
* Soil Physics
* Solid state physics
* Space physics
* Spintronics
* Superconductors
* Vehicle dynamics
Accelerator physics
Accelerator physics deals with the problems of building and operating particle accelerators.
The experiments conducted with particle accelerators are not regarded as part of accelerator physics. These belong (according to the objectives of the experiments) to particle physics, nuclear physics, condensed matter physics, materials physics, etc. as well as to other sciences and technical fields. The types of experiments done at a particular accelerator and/or its other uses are largely constrained by the characteristics of the accelerator itself, such as energy (per particle), types of particles, beam intensity, beam quality, etc.
Accelerator physics itself is the study of the motion of the particle beam through the machine, control and manipulation of the beam, interaction with the machine itself, and measurements of the various parameters associated with particle beams.
Equations of motion
The motion of charged particles through an accelerator is controlled using applied electro-magnetic fields, and the equations of motion may be derived from (or, since in many cases a general solution is not possible, approximated from) relativistic Hamiltonian mechanics. Typically, a separate Hamiltonian is written down for each element (e.g. for a single quadrupole magnet, or accelerating structure) to allow the equations of motion to be solved for this one element. Once this has been done for each element encountered in the machine, the full trajectory of each particle may be calculated for the entire machine.
In many cases a general solution of the full Hamiltonian is not possible, so it is necessary to make approximations. This may take the form of the Paraxial approximation (a Taylor series in the dynamical variables, truncated to low order), however, even in the cases of strongly non-linear magnetic fields, a Lie transform may be used to construct an integrator with a high degree of accuracy, and the paraxial approximation is not necessary.
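Under the paraxial approximation, each beamline element reduces to a linear transfer matrix acting on a particle's transverse coordinates (x, x'). The following sketch tracks a single particle through a drift, a focusing quadrupole, and another drift; the element lengths and quadrupole strength are invented for illustration, not taken from any real machine:

```python
import math

def drift(L):
    # Transfer matrix of a field-free drift of length L (paraxial approximation)
    return [[1.0, L], [0.0, 1.0]]

def quad_focusing(k, L):
    # Thick focusing quadrupole: strength k [m^-2], length L [m]
    s = math.sqrt(k) * L
    return [[math.cos(s), math.sin(s) / math.sqrt(k)],
            [-math.sqrt(k) * math.sin(s), math.cos(s)]]

def apply(M, state):
    # Multiply the 2x2 transfer matrix into the (x, x') state vector
    x, xp = state
    return (M[0][0] * x + M[0][1] * xp,
            M[1][0] * x + M[1][1] * xp)

# Track a particle with a 1 mm offset through drift -> quadrupole -> drift
state = (1e-3, 0.0)  # (x [m], x' [rad])
for element in (drift(2.0), quad_focusing(1.2, 0.3), drift(2.0)):
    state = apply(element, state)
print(state)
```

Concatenating one matrix per element in this way is exactly the "solve each element, then compose" procedure described above; the determinant of each matrix is 1, reflecting the symplectic nature of the underlying Hamiltonian dynamics.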
Diagnostics
A vital component of any accelerator are the diagnostic devices that allow various properties of the particle bunches to be measured.
A typical machine may use many different types of measurement device in order to measure different properties. These include (but are not limited to) Beam Position Monitors (BPMs) to measure the position of the bunch, screens (fluorescent screens, Optical Transition Radiation (OTR) devices) to image the profile of the bunch, wire-scanners to measure its cross-section, and toroids or ICTs to measure the bunch charge (i.e. the number of particles per bunch).
While many of these devices rely on well understood technology, designing a device capable of measuring a beam for a particular machine is a complex task requiring much expertise. Not only is a full understanding of the physics of the operation of the device necessary, but it is also necessary to ensure that the device is capable of measuring the expected parameters of the machine under consideration.
Success of the full range of beam diagnostics often underpins the success of the machine as a whole.
Machine tolerances
Errors in the alignment of components, field strength, etc., are inevitable in machines of this scale, so it is important to consider the tolerances under which a machine may operate.
Engineers will provide the physicists with expected tolerances for the alignment and manufacture of each component to allow full physics simulations of the expected behaviour of the machine under these conditions. In many cases it will be found that the performance is degraded to an unacceptable level, requiring either re-engineering of the components, or the invention of algorithms that allow the machine performance to be 'tuned' back to the design level.
This may require many simulations of different error conditions in order to determine the relative success of each tuning algorithm, and to allow recommendations for the collection of algorithms to be deployed on the real machine.
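This workflow can be sketched as a Monte Carlo loop over random error seeds, with a toy figure of merit standing in for a full tracking simulation and a hypothetical tuning algorithm; every number below is illustrative:

```python
import random

def simulate_performance(misalignment_rms, seed):
    # Toy stand-in for a full tracking simulation: returns a figure of merit
    # (e.g. emittance growth) for one random set of component misalignments.
    rng = random.Random(seed)
    errors = [rng.gauss(0.0, misalignment_rms) for _ in range(100)]
    return sum(e * e for e in errors)

def tune(misalignment_rms, seed, correction=0.9):
    # Hypothetical tuning algorithm assumed to remove 90% of the error effect.
    return simulate_performance(misalignment_rms, seed) * (1.0 - correction)

# Average over many error seeds to judge the tuning algorithm statistically
seeds = range(50)
before = sum(simulate_performance(1e-4, s) for s in seeds) / 50
after = sum(tune(1e-4, s) for s in seeds) / 50
print(before, after)
```

Comparing the averaged figure of merit before and after tuning, over the same ensemble of error seeds, is how competing tuning algorithms are ranked before deployment on the real machine.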
Interactions between the beam and the machine
Due to the strong electro-magnetic fields that follow the beam, it is possible for it to interact with any electrical impedance in the walls of the beam pipe. This may be in the form of a resistive impedance (i.e. the finite resistivity of the beam pipe material) or an inductive/capacitive impedance (due to the geometric changes in the beam pipe's cross section).
These impedances will induce so called 'wake-fields' (a strong warping of the electromagnetic field of the beam) that can interact with later particles. Since this interaction may have a negative effect, it must be studied to determine its magnitude, and to determine any actions that may be taken to mitigate it.
Acoustics
Acoustics is the interdisciplinary science that deals with the study of all mechanical waves in gases, liquids, and solids including vibration, sound, ultrasound and infrasound. A scientist who works in the field of acoustics is an acoustician while someone working in the field of acoustics technology may be called an acoustical engineer. The application of acoustics can be seen in almost all aspects of modern society with the most obvious being the audio and noise control industries.
Hearing is one of the most crucial means of survival in the animal world, and speech is one of the most distinctive characteristics of human development and culture. So it is no surprise that the science of acoustics spreads across so many facets of our society—music, medicine, architecture, industrial production, warfare and more. Art, craft, science and technology have provoked one another to advance the whole, as in many other fields of knowledge. Lindsay's 'Wheel of Acoustics' is a well accepted overview of the various fields in acoustics.
The word "acoustic" is derived from the Greek word ἀκουστικός (akoustikos), meaning "of or for hearing, ready to hear" and that from ἀκουστός (akoustos), "heard, audible", which in turn derives from the verb ἀκούω (akouo), "I hear".
The Latin synonym is "sonic", after which the term sonics used to be a synonym for acoustics and later a branch of acoustics. After acousticians had extended their studies to frequencies above and below the audible range, it became conventional to identify these frequency ranges as "ultrasonic" and "infrasonic" respectively, while letting the word "acoustic" refer to the entire frequency range without limit.
History of acoustics
Early research in acoustics
The fundamental and the first 6 overtones of a vibrating string. The earliest records of the study of this phenomenon are attributed to the philosopher Pythagoras in the 6th century BC.
In the 6th century BC, the Greek philosopher Pythagoras wanted to know why some intervals seemed more beautiful than others, and he found answers in terms of numerical ratios representing the harmonic overtone series on a string. He is reputed to have observed that when the lengths of vibrating strings are expressible as ratios of integers (e.g. 2 to 3, 3 to 4), the tones produced will be harmonious. If, for example, a string sounds the note C when plucked, a string twice as long will sound the same note an octave lower. The tones in between are then given by 16:9 for D, 8:5 for E, 3:2 for F, 4:3 for G, 6:5 for A, and 16:15 for B, in ascending order. Aristotle (384-322 BC) understood that sound consisted of contractions and expansions of the air "falling upon and striking the air which is next to it...", a very good expression of the nature of wave motion. In about 20 BC, the Roman architect and engineer Vitruvius wrote a treatise on the acoustic properties of theatres including discussion of interference, echoes, and reverberation—the beginnings of architectural acoustics.
Principles of acoustics have been applied since ancient times: the Roman theatre in the city of Amman.
The physical understanding of acoustical processes advanced rapidly during and after the Scientific Revolution. Galileo Galilei (1564–1642) and, independently, Marin Mersenne (1588–1648) discovered the complete laws of vibrating strings (completing what Pythagoras and the Pythagoreans had started 2000 years earlier). Galileo wrote "Waves are produced by the vibrations of a sonorous body, which spread through the air, bringing to the tympanum of the ear a stimulus which the mind interprets as sound", a remarkable statement that points to the beginnings of physiological and psychological acoustics. Experimental measurements of the speed of sound in air were carried out successfully between 1630 and 1680 by a number of investigators, prominently Mersenne. Meanwhile, Newton (1642–1727) derived the relationship for wave velocity in solids, a cornerstone of physical acoustics (Principia, 1687).
The Age of Enlightenment and onward
The eighteenth century saw major advances in acoustics as mathematicians applied the new techniques of calculus to elaborate theories of sound wave propagation. In the nineteenth century the major figures of mathematical acoustics were Helmholtz in Germany, who consolidated the field of physiological acoustics, and Lord Rayleigh in England, who combined the previous knowledge with his own copious contributions to the field in his monumental work "The Theory of Sound". Also in the 19th century, Wheatstone, Ohm, and Henry developed the analogy between electricity and acoustics.
The twentieth century saw a burgeoning of technological applications of the large body of scientific knowledge that was by then in place. The first such application was Sabine’s groundbreaking work in architectural acoustics, and many others followed. Underwater acoustics was used for detecting submarines in the first World War. Sound recording and the telephone played important roles in a global transformation of society. Sound measurement and analysis reached new levels of accuracy and sophistication through the use of electronics and computing. The ultrasonic frequency range enabled wholly new kinds of application in medicine and industry. New kinds of transducers (generators and receivers of acoustic energy) were invented and put to use.
Fundamental concepts of acoustics
At Jay Pritzker Pavilion, a LARES system is combined with a zoned sound reinforcement system, both suspended on an overhead steel trellis, to synthesize an indoor acoustic environment outdoors.
The study of acoustics revolves around the generation, propagation and reception of mechanical waves and vibrations.
The fundamental acoustical process
The same basic steps can be found in any acoustical event or process. There are many kinds of cause, both natural and volitional. There are many kinds of transduction process that convert energy from some other form into acoustic energy, producing the acoustic wave. There is one fundamental equation that describes acoustic wave propagation, but the phenomena that emerge from it are varied and often complex. The wave carries energy throughout the propagating medium. Eventually this energy is transduced again into other forms, in ways that again may be natural and/or volitionally contrived. The final effect may be purely physical or it may reach far into the biological or volitional domains. The five basic steps are found equally well whether we are talking about an earthquake, a submarine using sonar to locate its foe, or a band playing in a rock concert.
The central stage in the acoustical process is wave propagation. This falls within the domain of physical acoustics. In fluids, sound propagates primarily as a pressure wave. In solids, mechanical waves can take many forms including longitudinal waves, transverse waves and surface waves.
Acoustics looks first at the pressure levels and frequencies in the sound wave. Transduction processes are also of special importance.
Wave propagation: pressure levels
Spectrogram of a young girl saying "oh, no"
In fluids such as air and water, sound waves propagate as disturbances in the ambient pressure level. While this disturbance is usually small, it is still noticeable to the human ear. The smallest sound that a person can hear, known as the threshold of hearing, is nine orders of magnitude smaller than the ambient pressure. The loudness of these disturbances is called the sound pressure level (SPL), and is measured on a logarithmic scale in decibels.
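The decibel scale mentioned above is defined relative to a standard reference pressure of 20 µPa in air, via SPL = 20·log10(p/p_ref). A minimal worked example:

```python
import math

P_REF = 20e-6  # reference pressure in pascals (threshold of hearing in air)

def spl_db(p_rms):
    # Sound pressure level in decibels relative to 20 micropascals
    return 20.0 * math.log10(p_rms / P_REF)

print(spl_db(20e-6))  # threshold of hearing -> 0 dB
print(spl_db(1.0))    # 1 Pa -> ~94 dB, a typical calibrator level
```

The logarithmic definition is what compresses the nine-orders-of-magnitude range of audible pressures into a manageable 0 to ~180 dB scale.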
Wave propagation: frequency
Physicists and acoustic engineers tend to discuss sound pressure levels in terms of frequencies, partly because this is how our ears interpret sound. What we experience as "higher pitched" or "lower pitched" sounds are pressure vibrations having a higher or lower number of cycles per second. In a common technique of acoustic measurement, acoustic signals are sampled in time, and then presented in more meaningful forms such as octave bands or time frequency plots. Both these popular methods are used to analyze sound and better understand the acoustic phenomenon.
The entire spectrum can be divided into three sections: audio, ultrasonic, and infrasonic. The audio range falls between 20 Hz and 20,000 Hz. This range is important because its frequencies can be detected by the human ear. This range has a number of applications, including speech communication and music. The ultrasonic range refers to the very high frequencies: 20,000 Hz and higher. This range has shorter wavelengths which allows better resolution in imaging technologies. Medical applications such as ultrasonography and elastography rely on the ultrasonic frequency range. On the other end of the spectrum, the lowest frequencies are known as the infrasonic range. These frequencies can be used to study geological phenomena such as earthquakes.
Analytic instruments such as the Spectrum analyzer facilitate visualization and measurement of acoustic signals and their properties. The Spectrogram produced by such an instrument is a graphical display of the time varying pressure level and frequency profiles which give a specific acoustic signal its defining character.
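As a toy illustration of this kind of analysis, a naive discrete Fourier transform (far slower than the FFTs real analyzers use, but equivalent for a short signal) can pick out the dominant frequency of a sampled tone:

```python
import math

def dominant_frequency(samples, sample_rate):
    # Naive DFT: find the frequency bin with the largest magnitude.
    # A minimal stand-in for the spectrum analyzer described above.
    n = len(samples)
    best_bin, best_mag = 0, 0.0
    for k in range(1, n // 2):
        re = sum(samples[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(samples[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        mag = math.hypot(re, im)
        if mag > best_mag:
            best_bin, best_mag = k, mag
    return best_bin * sample_rate / n

# A 440 Hz tone sampled at 8 kHz for 0.1 s
rate = 8000
tone = [math.sin(2 * math.pi * 440 * t / rate) for t in range(800)]
print(dominant_frequency(tone, rate))  # → 440.0
```

A spectrogram is essentially this same decomposition repeated over successive short windows of the signal, so that the frequency content can be followed through time.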
Transduction in acoustics
An inexpensive low fidelity 3.5 inch driver, typically found in small radios
A transducer is a device for converting one form of energy into another. In an acoustical context, this usually means converting sound energy into electrical energy (or vice versa). For nearly all acoustic applications, some type of acoustic transducer is necessary. Acoustic transducers include loudspeakers, microphones, hydrophones and sonar projectors. These devices convert an electric signal to or from a sound pressure wave. The most widely used transduction principles are electromagnetism, electrostatics and piezoelectricity.
The transducers in most common loudspeakers (e.g. woofers and tweeters), are electromagnetic devices that generate waves using a suspended diaphragm driven by an electromagnetic voice coil, sending off pressure waves. Electret microphones and condenser microphones employ electrostatics. As the sound wave strikes the microphone's diaphragm, it moves and induces a voltage change. The ultrasonic systems used in medical ultrasonography employ piezoelectric transducers. These are made from special ceramics in which elastic vibrations and electrical fields are interlinked through a property of the material itself.
Divisions of acoustics
The table below shows seventeen major subfields of acoustics established in the PACS classification system. These have been grouped into three domains: physical acoustics, biological acoustics and acoustical engineering.
Physical acoustics
* Aeroacoustics
* General linear acoustics
* Nonlinear acoustics
* Structural acoustics and vibration
* Underwater sound
Biological acoustics
* Bioacoustics
* Musical acoustics
* Physiological acoustics
* Psychoacoustics
* Speech communication (production; perception; processing and communication systems)
Acoustical engineering
* Acoustic measurements and instrumentation
* Acoustic signal processing
* Architectural acoustics
* Environmental acoustics
* Transduction
* Ultrasonics
* Room acoustics
Agrophysics
Agrophysics is a branch of science on the border between agronomy and physics. Its object of study is the agroecosystem: the biological objects, biotope and biocoenosis affected by human activity, studied and described using the methods of the physical sciences.
Agrophysics is closely related to biophysics, but is restricted to the plants, animals, soil and atmosphere involved in agricultural activities and biodiversity. It differs from biophysics in having to take into account the specific features of the biotope and biocoenosis, which requires knowledge of nutritional science, agroecology, agricultural technology, biotechnology, genetics and so on.
Principles of physical sciences
Agrophysics is close to certain fundamental sciences such as biology, whose methods and knowledge it utilizes (especially in the fields of environmental ecology and plant physiology), and physics, from which it acquires its research methods, especially those of physical experiment and modelling.
The scope of agrophysics is not limited to technical problems in agronomy or to the practical application of other sciences; these are the aspects that distinguish it from agricultural engineering, and they provide grounds for classifying agrophysics among the fundamental sciences.
Its physical models, closely related to those of biophysics, can address both global and local aspects of the behaviour of the complex ecosystems under study, including energy consumption, food safety and related issues.
Principles of history
The needs of agriculture, concerning the study of the complex soil and plant-atmosphere systems, lay at the root of the emergence of this new branch, which approaches these problems with the methods of experimental physics. The scope of the branch, starting from soil science (soil physics) and originally limited to the study of relations within the soil environment, expanded over time to cover the properties of agricultural crops and produce, as foods and raw post-harvest materials, and the issues of quality, safety and labelling, considered distinct from the field of nutrition for application in food science.
A research centre focused on the development of this field is the Institute of Agrophysics, Polish Academy of Sciences in Lublin, which describes it as follows: "Agrophysics, utilizing the achievements of Exact Sciences for solving major problems of Agriculture, is involved in study of materials and processes occurring in the production and processing of agricultural crops, with particular emphasis on the condition of the environment and the quality of farming materials and food productions."
Analogue electronics
Analogue electronics (or analog in American English) are electronic systems with a continuously variable signal, in contrast to digital electronics where signals usually take only two different levels. The term "analogue" describes the proportional relationship between a signal and a voltage or current that represents the signal. The word analogue is derived from the Greek word ανάλογος (analogos) meaning "proportional".
Analogue signals
An analogue signal uses some attribute of the medium to convey the signal's information. For example, an aneroid barometer uses the angular position of a needle as the signal to convey the information of changes in atmospheric pressure. Electrical signals may represent information by changing their voltage, current, frequency, or total charge. Information is converted from some other physical form (such as sound, light, temperature, pressure, position) to an electrical signal by a transducer which converts one type of energy into another (e.g. a microphone).
The signals take any value from a given range, and each unique signal value represents different information. Any change in the signal is meaningful, and each level of the signal represents a different level of the phenomenon that it represents. For example, suppose the signal is being used to represent temperature, with one volt representing one degree Celsius. In such a system 10 volts would represent 10 degrees, and 10.1 volts would represent 10.1 degrees.
Another method of conveying an analogue signal is to use modulation. In this, some base carrier signal has one of its properties altered: amplitude modulation (AM) involves altering the amplitude of a sinusoidal voltage waveform by the source information, frequency modulation (FM) changes the frequency. Other techniques, such as phase modulation or changing the phase of the carrier signal, are also used.
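Amplitude modulation can be sketched in a few lines. The carrier frequency, sample rate and modulation depth below are illustrative, not standard broadcast values:

```python
import math

def am_modulate(message, carrier_freq, sample_rate, depth=0.5):
    # Amplitude modulation: the carrier's amplitude is varied in
    # proportion to the message signal (assumed normalized to [-1, 1]).
    return [(1.0 + depth * m) * math.cos(2 * math.pi * carrier_freq * t / sample_rate)
            for t, m in enumerate(message)]

# Modulate a slow 10 Hz tone onto a 1 kHz carrier, sampled at 8 kHz
rate = 8000
message = [math.sin(2 * math.pi * 10 * t / rate) for t in range(rate)]
signal = am_modulate(message, 1000, rate)
print(max(signal), min(signal))
```

The envelope of the resulting waveform traces the message: with a depth of 0.5 it swings between 0.5 and 1.5 times the unmodulated carrier amplitude, which is exactly the proportional relationship the word "analogue" describes.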
In an analogue sound recording, the variation in pressure of a sound striking a microphone creates a corresponding variation in the current passing through it or voltage across it. An increase in the volume of the sound causes the fluctuation of the current or voltage to increase proportionally while keeping the same waveform or shape.
Mechanical, pneumatic, hydraulic and other systems may also use analogue signals.
Inherent noise
Analogue systems invariably include noise; that is, random disturbances or variations, some caused by the random thermal vibrations of atomic particles. Since all variations of an analogue signal are significant, any disturbance is equivalent to a change in the original signal and so appears as noise.[5] As the signal is copied and re-copied, or transmitted over long distances, these random variations become more significant and lead to signal degradation. Other sources of noise may include external electrical signals or poorly designed components. These disturbances are reduced by shielding, and using low-noise amplifiers (LNA).
Analogue vs. digital electronics
Since the information is encoded differently in analogue and digital electronics, the way they process a signal is consequently different. All operations that can be performed on an analogue signal such as amplification, filtering, limiting, and others, can also be duplicated in the digital domain. Every digital circuit is also an analogue circuit, in that the behaviour of any digital circuit can be explained using the rules of analogue circuits.
The first electronic devices invented and mass produced were analogue. The use of microelectronics has reduced the cost of digital techniques and now makes digital methods feasible and cost-effective such as in the field of human-machine communication by voice.
The main differences between analogue and digital electronics are listed below:
Noise
Because of the way information is encoded in analogue circuits, they are much more susceptible to noise than digital circuits, since a small change in the signal can represent a significant change in the information present in the signal and can cause the information present to be lost. Since digital signals take on one of only two different values, a disturbance would have to be about one-half the magnitude of the digital signal to cause an error; this property of digital circuits can be exploited to make signal processing noise-resistant. In digital electronics, because the information is quantized, as long as the signal stays inside a range of values, it represents the same information. Digital circuits use this principle to regenerate the signal at each logic gate, lessening or removing noise.
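This regeneration principle can be demonstrated with a toy model: as long as the added noise stays below half the logic swing, thresholding at each stage recovers the original bits exactly (the noise amplitude and seed below are arbitrary):

```python
import random

def regenerate(level, threshold=0.5):
    # A logic gate "snaps" its input back to a clean 0 or 1, discarding
    # any disturbance smaller than the decision threshold.
    return 1 if level >= threshold else 0

rng = random.Random(42)
bits = [rng.randint(0, 1) for _ in range(1000)]
# Add noise of up to +/-0.4 of full scale: below half the logic swing
noisy = [b + rng.uniform(-0.4, 0.4) for b in bits]
recovered = [regenerate(v) for v in noisy]
print(recovered == bits)  # → True
```

An analogue signal subjected to the same disturbance would carry the noise forward indefinitely; the digital signal is restored to its ideal levels at every gate.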
Precision
A number of factors affect how precise a signal is, mainly the noise present in the original signal and the noise added by processing. See signal-to-noise ratio. Fundamental physical limits such as the shot noise in components limit the resolution of analogue signals. In digital electronics additional precision is obtained by using additional digits to represent the signal; the practical limit in the number of digits is determined by the performance of the analogue-to-digital converter (ADC), since digital operations can usually be performed without loss of precision. The ADC takes an analogue signal and changes it into a series of binary numbers. The ADC may be used in simple digital display devices (e.g. thermometers, light meters), but it may also be used in digital sound recording and in data acquisition. Conversely, a digital-to-analogue converter (DAC) is used to change a digital signal to an analogue signal. A DAC takes a series of binary numbers and converts it to an analogue signal. It is common to find a DAC in the gain-control system of an op-amp, which in turn may be used to control digital amplifiers and filters.
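An idealized ADC/DAC pair illustrates the quantization involved (real converters add further error sources such as nonlinearity and thermal noise; the 8-bit resolution and 1 V reference here are just example parameters):

```python
def adc(value, bits=8, v_ref=1.0):
    # Ideal analogue-to-digital converter: map [0, v_ref) to an n-bit code
    code = int(value / v_ref * (2 ** bits))
    return max(0, min(2 ** bits - 1, code))

def dac(code, bits=8, v_ref=1.0):
    # Ideal digital-to-analogue converter: map the code back to a voltage
    return code / (2 ** bits) * v_ref

v_in = 0.300
v_out = dac(adc(v_in))
# The round-trip error is bounded by one least significant bit (1/256 V here)
print(abs(v_in - v_out) < 1.0 / 256)  # → True
```

Adding one bit of resolution halves this quantization step, which is why converter performance, rather than the digital arithmetic downstream, sets the practical precision limit.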
Design difficulty
Analogue circuits are harder to design, requiring more skill, than comparable digital systems. This is one of the main reasons why digital systems have become more common than analogue devices. An analogue circuit must be designed by hand, and the process is much less automated than for digital systems. However, if a digital electronic device is to interact with the real world, it will always need an analogue interface. For example, every digital radio receiver has an analogue preamplifier as the first stage in the receive chain.
Atomic force microscopy
Atomic force microscopy (AFM) or scanning force microscopy (SFM) is a very high-resolution type of scanning probe microscopy, with demonstrated resolution on the order of fractions of a nanometer, more than 1000 times better than the optical diffraction limit. The precursor to the AFM, the scanning tunneling microscope, was developed by Gerd Binnig and Heinrich Rohrer in the early 1980s at IBM Research - Zurich, a development that earned them the Nobel Prize for Physics in 1986. Binnig, Quate and Gerber invented the first atomic force microscope (also abbreviated as AFM) in 1986. The first commercially available atomic force microscope was introduced in 1989. The AFM is one of the foremost tools for imaging, measuring, and manipulating matter at the nanoscale. The information is gathered by "feeling" the surface with a mechanical probe. Piezoelectric elements that facilitate tiny but accurate and precise movements on (electronic) command enable the very precise scanning. In some variations, electric potentials can also be scanned using conducting cantilevers. In newer more advanced versions, currents can even be passed through the tip to probe the electrical conductivity or transport of the underlying surface, but this is much more challenging with very few research groups reporting reliable data.
Basic principles
Electron micrograph of a used AFM cantilever; image widths ~100 micrometers and ~30 micrometers.
The AFM consists of a cantilever with a sharp tip (probe) at its end that is used to scan the specimen surface. The cantilever is typically silicon or silicon nitride with a tip radius of curvature on the order of nanometers. When the tip is brought into proximity of a sample surface, forces between the tip and the sample lead to a deflection of the cantilever according to Hooke's law. Depending on the situation, forces that are measured in AFM include mechanical contact force, van der Waals forces, capillary forces, chemical bonding, electrostatic forces, magnetic forces (see magnetic force microscope, MFM), Casimir forces, solvation forces, etc. Along with force, additional quantities may simultaneously be measured through the use of specialized types of probe (see scanning thermal microscopy, scanning joule expansion microscopy, photothermal microspectroscopy, etc.). Typically, the deflection is measured using a laser spot reflected from the top surface of the cantilever into an array of photodiodes. Other methods that are used include optical interferometry, capacitive sensing or piezoresistive AFM cantilevers. These cantilevers are fabricated with piezoresistive elements that act as a strain gauge. Using a Wheatstone bridge, strain in the AFM cantilever due to deflection can be measured, but this method is not as sensitive as laser deflection or interferometry.
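The Hooke's law relationship at the heart of the measurement is simply F = kx. The spring constant used below is merely a typical order of magnitude for a soft contact-mode cantilever, not a value from the text:

```python
def cantilever_force(deflection_m, spring_constant_n_per_m):
    # Hooke's law: the restoring force is proportional to the deflection
    return spring_constant_n_per_m * deflection_m

# Illustrative values: a soft cantilever (k ~ 0.1 N/m) deflected by
# 10 nm experiences a force of about 1 nN
force = cantilever_force(10e-9, 0.1)
print(force)
```

Because nanometre-scale deflections are measurable with the optical lever, forces down to the piconewton range become accessible, which is what makes the instrument sensitive to the weak tip-sample interactions listed above.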
Atomic force microscope topographical scan of a glass surface. The micro- and nano-scale features of the glass can be observed, portraying the roughness of the material. The image space is (x, y, z) = (20 µm × 20 µm × 420 nm).
If the tip was scanned at a constant height, a risk would exist that the tip collides with the surface, causing damage. Hence, in most cases a feedback mechanism is employed to adjust the tip-to-sample distance to maintain a constant force between the tip and the sample. Traditionally, the sample is mounted on a piezoelectric tube, that can move the sample in the z direction for maintaining a constant force, and the x and y directions for scanning the sample. Alternatively a 'tripod' configuration of three piezo crystals may be employed, with each responsible for scanning in the x,y and z directions. This eliminates some of the distortion effects seen with a tube scanner. In newer designs, the tip is mounted on a vertical piezo scanner while the sample is being scanned in X and Y using another piezo block. The resulting map of the area z = f(x,y) represents the topography of the sample.
The AFM can be operated in a number of modes, depending on the application. In general, possible imaging modes are divided into static (also called contact) modes and a variety of dynamic (or non-contact) modes where the cantilever is vibrated.
Imaging modes
The primary modes of operation for an AFM are static mode and dynamic mode. In static mode, the cantilever is "dragged" across the surface of the sample and the contours of the surface are measured directly using the deflection of the cantilever. In the dynamic mode, the cantilever is externally oscillated at or close to its fundamental resonance frequency or a harmonic. The oscillation amplitude, phase and resonance frequency are modified by tip-sample interaction forces. These changes in oscillation with respect to the external reference oscillation provide information about the sample's characteristics.
Contact mode
In the static mode operation, the static tip deflection is used as a feedback signal. Because the measurement of a static signal is prone to noise and drift, low stiffness cantilevers are used to boost the deflection signal. However, close to the surface of the sample, attractive forces can be quite strong, causing the tip to "snap-in" to the surface. Thus static mode AFM is almost always done in contact where the overall force is repulsive. Consequently, this technique is typically called "contact mode". In contact mode, the force between the tip and the surface is kept constant during scanning by maintaining a constant deflection.
Non-contact mode
AFM - non-contact mode
In this mode, the tip of the cantilever does not contact the sample surface. The cantilever is instead oscillated at a frequency slightly above its resonant frequency, where the amplitude of oscillation is typically a few nanometers (<10 nm). The van der Waals forces, which are strongest from 1 nm to 10 nm above the surface, or any other long-range force extending above the surface, act to decrease the resonance frequency of the cantilever. A feedback loop responds to this decrease in resonant frequency by adjusting the average tip-to-sample distance so as to maintain a constant oscillation amplitude or frequency. Measuring the tip-to-sample distance at each (x,y) data point allows the scanning software to construct a topographic image of the sample surface. Non-contact mode AFM does not suffer from the tip or sample degradation effects that are sometimes observed after taking numerous scans with contact AFM. This makes non-contact AFM preferable to contact AFM for measuring soft samples. In the case of rigid samples, contact and non-contact images may look the same. However, if a few monolayers of adsorbed fluid are lying on the surface of a rigid sample, the images may look quite different. An AFM operating in contact mode will penetrate the liquid layer to image the underlying surface, whereas in non-contact mode an AFM will oscillate above the adsorbed fluid layer to image both the liquid and the surface. Schemes for dynamic mode operation include frequency modulation and the more common amplitude modulation. In frequency modulation, changes in the oscillation frequency provide information about tip-sample interactions. Frequency can be measured with very high sensitivity, and thus the frequency modulation mode allows for the use of very stiff cantilevers.
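The sensitivity of frequency modulation follows from the standard small-perturbation relation Δf ≈ -(f0/2k)·(∂F/∂z); a brief numeric sketch with typical textbook values (the specific numbers are illustrative assumptions):

```python
# Sketch of the standard small-perturbation result for FM-AFM: a tip-sample
# force gradient k_ts (= dF/dz) shifts the cantilever resonance by
#     delta_f ~= -(f0 / (2 * k)) * k_ts
# valid when the force gradient is much smaller than the cantilever
# stiffness. The numbers below are typical textbook values, not measurements.

def frequency_shift(f0_hz, k_n_per_m, force_gradient_n_per_m):
    """Approximate resonance shift for a force gradient << cantilever stiffness."""
    return -f0_hz * force_gradient_n_per_m / (2.0 * k_n_per_m)

# A stiff cantilever (f0 = 300 kHz, k = 40 N/m) in an attractive force
# gradient of 1 N/m shifts down by a few kilohertz:
df = frequency_shift(300e3, 40.0, 1.0)
print(round(df))  # -3750
```

Because frequency can be tracked to small fractions of a hertz, even tiny force gradients produce a measurable shift, which is why stiff cantilevers remain usable in this mode.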
Stiff cantilevers provide stability very close to the surface and, as a result, this technique was the first AFM technique to provide true atomic resolution in ultra-high vacuum conditions.[1] In amplitude modulation, changes in the oscillation amplitude or phase provide the feedback signal for imaging. Changes in the phase of oscillation can also be used to discriminate between different types of materials on the surface. Amplitude modulation can be operated either in the non-contact or in the intermittent contact regime. In dynamic contact mode, the cantilever is oscillated such that the separation distance between the cantilever tip and the sample surface is modulated. Amplitude modulation has also been used in the non-contact regime to image with atomic resolution by using very stiff cantilevers and small amplitudes in an ultra-high vacuum environment.
Tapping mode
Single polymer chains (0.4 nm thick) recorded in a tapping mode under aqueous media with different pH.
In ambient conditions, most samples develop a liquid meniscus layer. Because of this, keeping the probe tip close enough to the sample for short-range forces to become detectable while preventing the tip from sticking to the surface presents a major problem for non-contact dynamic mode in ambient conditions. Dynamic contact mode (also called intermittent contact or tapping mode) was developed to bypass this problem.
In tapping mode, the cantilever is driven to oscillate up and down at or near its resonance frequency by a small piezoelectric element mounted in the AFM tip holder, similar to non-contact mode. However, the amplitude of this oscillation is greater than 10 nm, typically 100 to 200 nm. When the tip comes close to the surface, interaction forces acting on the cantilever (van der Waals forces, dipole-dipole interactions, electrostatic forces, etc.) cause the amplitude of this oscillation to decrease. An electronic servo uses the piezoelectric actuator to control the height of the cantilever above the sample: it adjusts the height to maintain a set cantilever oscillation amplitude as the cantilever is scanned over the sample. A tapping AFM image is therefore produced by imaging the force of the intermittent contacts of the tip with the sample surface.
This method of "tapping" lessens the damage done to the surface and the tip compared to the amount done in contact mode. Tapping mode is gentle enough even for the visualization of supported lipid bilayers or adsorbed single polymer molecules (for instance, 0.4 nm thick chains of synthetic polyelectrolytes) under liquid medium. With proper scanning parameters, the conformation of single molecules can remain unchanged for hours.
AFM cantilever deflection measurement
AFM beam deflection detection
Laser light from a solid-state diode is reflected off the back of the cantilever and collected by a position-sensitive detector (PSD) consisting of two closely spaced photodiodes, whose output signal is collected by a differential amplifier. Angular displacement of the cantilever results in one photodiode collecting more light than the other, producing an output signal (the difference between the photodiode signals normalized by their sum) that is proportional to the deflection of the cantilever. This method detects cantilever deflections of less than 10 nm (thermal noise limited), and a long beam path (several centimeters) amplifies changes in beam angle.
Force spectroscopy
Another major application of AFM (besides imaging) is force spectroscopy, the direct measurement of tip-sample interaction forces as a function of the gap between the tip and sample (the result of this measurement is called a force-distance curve). For this method, the AFM tip is extended towards and retracted from the surface while the deflection of the cantilever is monitored as a function of piezoelectric displacement. These measurements have been used to measure nanoscale contacts, atomic bonding, van der Waals forces, Casimir forces, dissolution forces in liquids, and single-molecule stretching and rupture forces. Furthermore, AFM has been used to measure, in an aqueous environment, the dispersion force due to polymer adsorbed on the substrate. Forces of the order of a few piconewtons can now be routinely measured with a vertical distance resolution of better than 0.1 nanometers. Force spectroscopy can be performed with either static or dynamic modes. In dynamic modes, information about the cantilever vibration is monitored in addition to the static deflection.
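The conversion from a measured deflection to a force is just Hooke's law; a minimal illustration, with an assumed cantilever stiffness and deflection:

```python
# Force spectroscopy reduces the cantilever to a spring: the tip-sample
# force is the cantilever stiffness times the measured deflection, F = k*d.
# The stiffness and deflection values below are illustrative assumptions.

def tip_sample_force(stiffness_n_per_m, deflection_m):
    """Force (N) inferred from cantilever deflection via Hooke's law."""
    return stiffness_n_per_m * deflection_m

# A soft cantilever (k = 0.05 N/m) deflected by 2 nm implies about
# 1e-10 N, i.e. roughly 100 piconewtons:
f = tip_sample_force(0.05, 2e-9)
```

This is why piconewton-scale forces are accessible: a soft cantilever converts tiny forces into nanometer deflections that the beam-deflection detector can resolve.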
Problems with the technique include no direct measurement of the tip-sample separation and the common need for low stiffness cantilevers which tend to 'snap' to the surface. The snap-in can be reduced by measuring in liquids or by using stiffer cantilevers, but in the latter case a more sensitive deflection sensor is needed. By applying a small dither to the tip, the stiffness (force gradient) of the bond can be measured as well.
Identification of individual surface atoms
The AFM can be used to image and manipulate atoms and structures on a variety of surfaces. The atom at the apex of the tip "senses" individual atoms on the underlying surface when it forms incipient chemical bonds with each atom. Because these chemical interactions subtly alter the tip's vibration frequency, they can be detected and mapped. This principle was used to distinguish between atoms of silicon, tin and lead on an alloy surface, by comparing these 'atomic fingerprints' to values obtained from large-scale density functional theory (DFT) simulations.
The trick is to first measure these forces precisely for each type of atom expected in the sample, and then to compare with forces given by DFT simulations. The team found that the tip interacted most strongly with silicon atoms, and interacted 23% and 41% less strongly with tin and lead atoms, respectively. Thus, each different type of atom can be identified in the matrix as the tip is moved across the surface.
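The comparison step can be sketched as follows, using the relative forces quoted above (tin 23% and lead 41% weaker than silicon); the tolerance and the example force values are illustrative assumptions:

```python
# Sketch of the 'atomic fingerprint' idea: each species is identified by the
# ratio of its maximum attractive force to that measured over a reference
# silicon atom. The ratios follow the text (Sn 23% weaker, Pb 41% weaker
# than Si); the matching tolerance and inputs are illustrative assumptions.

FINGERPRINTS = {"Si": 1.00, "Sn": 0.77, "Pb": 0.59}  # force relative to Si

def identify_atom(measured_force, si_reference_force, tolerance=0.05):
    """Return the species whose relative force best matches the measurement."""
    ratio = measured_force / si_reference_force
    species, expected = min(FINGERPRINTS.items(),
                            key=lambda kv: abs(kv[1] - ratio))
    if abs(expected - ratio) > tolerance:
        return "unknown"
    return species

print(identify_atom(0.78, 1.00))  # Sn
print(identify_atom(0.60, 1.00))  # Pb
```

In practice the reference forces come from the DFT simulations mentioned above; the lookup itself is the simple nearest-match step shown here.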
Advantages and disadvantages
The first atomic force microscope
Just like any other tool, an AFM has limitations. When determining whether analyzing a sample with an AFM is appropriate, various advantages and disadvantages must be considered.
Advantages
AFM has several advantages over the scanning electron microscope (SEM). Unlike the electron microscope, which provides a two-dimensional projection or image of a sample, the AFM provides a three-dimensional surface profile. Additionally, samples viewed by AFM do not require any special treatments (such as metal/carbon coatings) that would irreversibly change or damage the sample. While an electron microscope needs an expensive vacuum environment for proper operation, most AFM modes work perfectly well in ambient air or even in a liquid environment. This makes it possible to study biological macromolecules and even living organisms. In principle, AFM can provide higher resolution than SEM: it has been shown to give true atomic resolution in ultra-high vacuum (UHV) and, more recently, in liquid environments. High-resolution AFM is comparable in resolution to scanning tunneling microscopy and transmission electron microscopy.
Disadvantages
A disadvantage of AFM compared with the scanning electron microscope (SEM) is the single-scan image size. In one pass, the SEM can image an area on the order of square millimeters with a depth of field on the order of millimeters, whereas the AFM can only image a maximum height on the order of 10-20 micrometers and a maximum scanning area of about 150×150 micrometers. One method of improving the scanned area size for AFM is to use parallel probes in a fashion similar to that of millipede data storage.
The scanning speed of an AFM is also a limitation. Traditionally, an AFM cannot scan images as fast as an SEM, requiring several minutes for a typical scan, while an SEM is capable of scanning at near real-time, although at relatively low quality. The relatively slow rate of scanning during AFM imaging often leads to thermal drift in the image,[9][10] making the AFM less suited for measuring accurate distances between topographical features. However, several fast-acting designs[11][12] have been suggested to increase scanning productivity, including what is termed videoAFM (reasonable-quality images are obtained with videoAFM at video rate, faster than the average SEM). To eliminate image distortions induced by thermal drift, several methods have been introduced.
AFM images can also be affected by hysteresis of the piezoelectric material[13] and cross-talk between the x, y, z axes that may require software enhancement and filtering. Such filtering could "flatten" out real topographical features. However, newer AFMs utilize closed-loop scanners which practically eliminate these problems. Some AFMs also use separated orthogonal scanners (as opposed to a single tube) which also serve to eliminate part of the cross-talk problems.
As with any other imaging technique, there is the possibility of image artifacts, which could be induced by an unsuitable tip, a poor operating environment, or even by the sample itself. These image artifacts are unavoidable; however, their occurrence and effect on results can be reduced through various methods.
Due to the nature of AFM probes, they cannot normally measure steep walls or overhangs. Specially made cantilevers and AFMs can be used to modulate the probe sideways as well as up and down (as with dynamic contact and non-contact modes) to measure sidewalls, at the cost of more expensive cantilevers, lower lateral resolution and additional artifacts.
Piezoelectric scanners
AFM scanners are made from piezoelectric material, which expands and contracts proportionally to an applied voltage. Whether they elongate or contract depends upon the polarity of the voltage applied. The scanner is constructed by combining independently operated piezo electrodes for X, Y, and Z into a single tube, forming a scanner which can manipulate samples and probes with extreme precision in 3 dimensions.
Scanners are characterized by their sensitivity which is the ratio of piezo movement to piezo voltage, i.e., by how much the piezo material extends or contracts per applied volt. Because of differences in material or size, the sensitivity varies from scanner to scanner. Sensitivity varies non-linearly with respect to scan size. Piezo scanners exhibit more sensitivity at the end than at the beginning of a scan. This causes the forward and reverse scans to behave differently and display hysteresis between the two scan directions. This can be corrected by applying a non-linear voltage to the piezo electrodes to cause linear scanner movement and calibrating the scanner accordingly.
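The linearization idea can be sketched with a toy quadratic response model; the sensitivity and nonlinearity numbers are purely illustrative, not values for any real scanner:

```python
# Sketch of why open-loop piezo scans need linearization: extension is not
# proportional to voltage, so a linear voltage ramp gives a non-linear
# displacement. Pre-distorting the drive voltage compensates. The quadratic
# response model and its coefficients are illustrative assumptions.

def piezo_extension(voltage, sensitivity=10.0, nonlinearity=0.02):
    """Toy model: nm of extension for a given drive voltage."""
    return sensitivity * voltage * (1.0 + nonlinearity * voltage)

def linearizing_voltage(target_nm, sensitivity=10.0, nonlinearity=0.02):
    """Invert the toy model: equal displacement steps need unequal voltages."""
    # Solve s*n*v^2 + s*v - target = 0 for v (take the positive root).
    s, n = sensitivity, nonlinearity
    return (-s + (s * s + 4 * s * n * target_nm) ** 0.5) / (2 * s * n)

# Driving with the corrected voltage reproduces the requested displacement:
v = linearizing_voltage(500.0)
print(round(piezo_extension(v), 6))  # 500.0
```

Closed-loop scanners achieve the same end with position sensors and feedback instead of a pre-computed correction curve.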
The sensitivity of piezoelectric materials decreases exponentially with time, so most of the change in sensitivity occurs in the initial stages of the scanner's life. Piezoelectric scanners are run for approximately 48 hours before they are shipped from the factory so that they are past the point where they may have large changes in sensitivity. As the scanner ages, the sensitivity changes less with time and the scanner seldom requires recalibration.
Ballistics
Ballistics (from Greek βάλλειν ba'llein, "to throw") is the science of mechanics that deals with the flight, behavior, and effects of projectiles, especially bullets, gravity bombs, rockets, and the like; it is also the science or art of designing and accelerating projectiles so as to achieve a desired performance.
A ballistic body is a body which is free to move, behave, and be modified in appearance, contour, or texture by ambient conditions, substances, or forces, as by the pressure of gases in a gun, by rifling in a barrel, by gravity, by temperature, or by air particles. A ballistic missile is a missile only guided during the relatively brief initial powered phase of flight, whose course is subsequently governed by the laws of classical mechanics.
Gun ballistics
Gun ballistics is the study of projectiles from the time of shooting to the time of impact with the target. It is often broken down into the following four categories:
* Internal ballistics (sometimes called interior ballistics): the study of the processes originally accelerating the projectile, for example the passage of a bullet through the barrel of a rifle.
* Transition ballistics (sometimes called intermediate ballistics): the study of the projectile's behavior when it leaves the barrel and the pressure behind the projectile is equalized.
* External ballistics (sometimes called exterior ballistics): the study of the passage of the projectile through a medium, most commonly the Earth's atmosphere.[4]
* Terminal ballistics: the study of the interaction of a projectile with its target, whether that be flesh (for a hunting bullet), steel (for an anti-tank round), or even furnace slag (for an industrial slag disruptor).
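As a minimal illustration of external ballistics, the following sketch integrates a point-mass trajectory with quadratic air drag; all projectile parameters are invented round numbers, not data for any real round:

```python
# Minimal external-ballistics sketch: integrate a point-mass trajectory
# with quadratic air drag using explicit Euler steps. The drag coefficient
# lumps air density, cross-section, and mass into one illustrative number.

import math

def trajectory(v0=300.0, angle_deg=30.0, drag_coeff=0.001, dt=0.001, g=9.81):
    """Return (range_m, flight_time_s) for a launch from ground level."""
    vx = v0 * math.cos(math.radians(angle_deg))
    vy = v0 * math.sin(math.radians(angle_deg))
    x = y = t = 0.0
    while y >= 0.0:
        speed = math.hypot(vx, vy)
        vx -= drag_coeff * speed * vx * dt        # drag opposes velocity
        vy -= (g + drag_coeff * speed * vy) * dt  # gravity plus drag
        x += vx * dt
        y += vy * dt
        t += dt
    return x, t

rng, t = trajectory()
# With drag the projectile falls well short of the vacuum range
# v0^2 * sin(2*theta) / g:
vac = 300.0**2 * math.sin(math.radians(60.0)) / 9.81
print(rng < vac)  # True
```

Real exterior-ballistics codes add wind, altitude-dependent air density, and standardized drag models, but the structure is the same stepwise integration.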
Forensic ballistics
Forensic ballistics involves analysis of bullets and bullet impacts to determine information of use to a court or other part of a legal system. Separately from ballistics information, firearm and tool mark examinations ("ballistic fingerprinting") involve analysing firearm, ammunition, and tool mark evidence in order to establish whether a certain firearm or tool was used in the commission of a crime.
Biophysics
Biophysics is an interdisciplinary science that uses the methods of physical science to study biological systems. Studies included under the branches of biophysics span all levels of biological organization, from the molecular scale to whole organisms and ecosystems. Biophysical research shares significant overlap with biochemistry, nanotechnology, bioengineering, agrophysics and systems biology.
Molecular biophysics typically addresses biological questions that are similar to those in biochemistry and molecular biology, but the questions are approached quantitatively. Scientists in this field conduct research concerned with understanding the interactions between the various systems of a cell, including the interactions between DNA, RNA and protein biosynthesis, as well as how these interactions are regulated. A great variety of techniques is used to answer these questions.
Fluorescent imaging techniques, as well as electron microscopy, x-ray crystallography, NMR spectroscopy and atomic force microscopy (AFM) are often used to visualize structures of biological significance. Conformational change in structure can be measured using techniques such as dual polarisation interferometry and circular dichroism. Direct manipulation of molecules using optical tweezers or AFM can also be used to monitor biological events where forces and distances are at the nanoscale. Molecular biophysicists often consider complex biological events as systems of interacting units which can be understood through statistical mechanics, thermodynamics and chemical kinetics. By drawing knowledge and experimental techniques from a wide variety of disciplines, biophysicists are often able to directly observe, model or even manipulate the structures and interactions of individual molecules or complexes of molecules.
In addition to traditional (i.e. molecular and cellular) biophysical topics like structural biology or enzyme kinetics, modern biophysics encompasses an extraordinarily broad range of research, from bioelectronics to quantum biology. It is becoming increasingly common for biophysicists to apply the models and experimental techniques derived from physics, as well as mathematics and statistics, to larger systems such as tissues, organs, populations and ecosystems.
Focus as a subfield
Biophysics often does not have university-level departments of its own, but has a presence as groups across departments within the fields of molecular biology, biochemistry, chemistry, computer science, mathematics, medicine, pharmacology, physiology, physics, and neuroscience. What follows is a list of examples of how each department applies its efforts toward the study of biophysics. This list is not exhaustive, nor does each subject of study belong exclusively to any particular department. Each academic institution makes its own rules, and there is much overlap between departments.
* Biology and molecular biology - Almost all forms of biophysics are represented in some biology department somewhere. Examples include gene regulation, single-protein dynamics, bioenergetics, patch clamping, and biomechanics.
* Structural biology - Ångstrom-resolution structures of proteins, nucleic acids, lipids, carbohydrates, and complexes thereof.
* Biochemistry and chemistry - biomolecular structure, siRNA, nucleic acid structure, structure-activity relationships.
* Computer science - Neural networks, biomolecular and drug databases.
* Computational chemistry - molecular dynamics simulation, molecular docking, quantum chemistry
* Bioinformatics - sequence alignment, structural alignment, protein structure prediction
* Mathematics - graph/network theory, population modeling, dynamical systems, phylogenetics.
* Medicine and neuroscience - tackling neural networks experimentally (brain slicing) as well as theoretically (computer models), membrane permittivity, gene therapy, understanding tumors.
* Pharmacology and physiology - channel biology, biomolecular interactions, cellular membranes, polyketides.
* Physics - biomolecular free energy, stochastic processes, covering dynamics.
* Quantum biophysics involves quantum information processing of coherent states, entanglement between coherent protons and transcriptase components, and replication of decohered isomers to yield time-dependent base substitutions. These studies imply applications in quantum computing.
* Agronomy and agriculture
Many biophysical techniques are unique to this field. Research efforts in biophysics are often initiated by scientists who were traditional physicists, chemists, and biologists by training.
Engineering physics
Engineering physics or engineering science is a multidisciplinary and interdisciplinary field that combines the physical sciences with traditional engineering disciplines such as aerospace engineering, electrical engineering, or mechanical engineering. Unlike traditional engineering disciplines, engineering science/physics is not necessarily confined to a particular branch of science or physics. Instead, engineering science/physics is meant to provide a more thorough grounding in applied physics for a selected specialty such as optics, materials science, applied mechanics, nanotechnology, microfabrication, mechanical engineering, electrical engineering, control theory, aerodynamics, energy or solid-state physics. It is the discipline devoted to creating and optimizing engineering solutions through enhanced understanding and integrated application of mathematical, scientific, statistical, and engineering principles. The discipline is also meant for cross-functionality and bridges the gap between theoretical science and practical engineering with emphasis on research and development, design, and analysis.
Engineering physics or engineering science degrees are respected academic degrees awarded in many countries. It is notable that in many languages the term for "engineering physics" would be directly translated into English as "technical physics". In some countries, both what would be translated as "engineering physics" and what would be translated as "technical physics" are disciplines leading to academic degrees, with the former specializing in nuclear power research and the latter closer to engineering physics. In some institutions, an engineering (or applied) physics major is a discipline or specialization within the scope of engineering science, or applied science.
In many universities, engineering science programs may be offered at the levels of B.Tech, B.Sc., M.Sc. and Ph.D. Usually, a core of basic and advanced courses in mathematics, physics, chemistry, and biology forms the foundation of the curriculum, while typical elective areas may include fluid dynamics, solid mechanics, operations research, information technology and engineering, dynamical systems, bioengineering, environmental engineering, computational engineering, engineering mathematics and statistics, solid-state devices, materials science, electromagnetism, nanoscience, nanotechnology, energy, and optics. While typical undergraduate engineering programs generally focus on the application of established methods to the design and analysis of engineering solutions, an undergraduate program in engineering science focuses on the creation and use of more advanced experimental or computational techniques where standard approaches are inadequate (i.e., development of engineering solutions to contemporary problems in the physical and life sciences by applying fundamental principles). Due to the rigorous nature of the academic curriculum, an undergraduate major in engineering science is an honors program at some universities such as the University of Toronto and Pennsylvania State University.
Geophysics
Geophysics is the physics of the Earth and its environment in space. Its subjects include the shape of the Earth, its gravitational and magnetic fields, the dynamics of the Earth as a whole and of its component parts, the Earth's internal structure, composition and tectonics, the generation of magmas, volcanism and rock formation, the hydrological cycle including snow and ice, all aspects of the oceans, the atmosphere, ionosphere, magnetosphere and solar-terrestrial relations, and analogous problems associated with the Moon and other planets.
Geophysics is also applied to societal needs, such as mineral resources, mitigation of natural hazards and environmental protection. Geophysical survey data are used to analyze potential petroleum reservoirs and mineral deposits, to locate groundwater, to locate archaeological finds, to find the thicknesses of glaciers and soils, and for environmental remediation.
The gravitational pull of the Moon and Sun give rise to two high tides and two low tides every lunar day, or every 24 hours and 50 minutes. Therefore, there is a gap of 12 hours and 25 minutes between every high tide and between every low tide.[6]
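The quoted intervals follow from simple arithmetic, as a quick check shows:

```python
# Check of the tidal arithmetic above: two high and two low tides per lunar
# day means successive high tides are separated by half a lunar day.

lunar_day_min = 24 * 60 + 50          # 24 h 50 min, in minutes
high_tide_gap = lunar_day_min // 2    # two high tides per lunar day
print(divmod(high_tide_gap, 60))      # (12, 25) -> 12 h 25 min
```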
Gravitational forces make rocks press down on deeper rocks, increasing their density as the depth increases. Measurements of gravitational acceleration and gravitational potential at the Earth's surface and above it can be used to look for mineral deposits (see also gravity anomaly and gravimetry). They also reflect the dynamics of tectonic plates. The geopotential surface called the geoid is one definition of the shape of the Earth. The geoid would be the global mean sea level if the oceans were in equilibrium and could be extended through the continents (such as with very narrow canals).
Heat flow
The Earth is cooling, and the resulting heat flow generates the Earth's magnetic field through the geodynamo and plate tectonics through mantle convection. The main sources of heat are the primordial heat and radioactivity, although there are also contributions from phase transitions. Heat is mostly carried to the surface by thermal convection, although there are two thermal boundary layers - the core-mantle boundary and the lithosphere - in which heat is transported by conduction. Some heat is carried up from the bottom of the mantle by mantle plumes. The heat flow at the Earth's surface is about 4.2 × 10¹³ W, and it is a potential source of geothermal energy.
Vibrations
Seismic waves are vibrations that travel through the Earth's interior or along its surface. The entire Earth can also oscillate in forms that are called normal modes. Ground motions from waves or normal modes are measured using seismographs. If the waves come from a localized source such as an earthquake or explosion, measurements at more than one location can be used to locate the source. The locations of earthquakes provide information on plate tectonics and mantle convection.
Seismic waves can also provide information on the region that the waves travel through. If the density or composition of the rock changes suddenly, some of the waves are reflected. Reflections can provide information on near-surface structure. Changes in the travel direction, called refraction, can be used to infer the deep structure of the Earth.
Earthquakes pose a risk to humans. Understanding their mechanisms, which depend on the type of earthquake (e.g., intraplate or deep focus), can lead to better estimates of earthquake risk and improvements in earthquake engineering.
Radioactivity
Further information: Radiometric dating and geotherm
Example of a radioactive decay chain (see Radiometric dating).
Radioactive decay, in addition to being the main source of heat in the Earth (see geotherm), is an invaluable tool for geochronology. Unstable isotopes decay at predictable rates, and the decay rates of different isotopes cover several orders of magnitude, so radioactive decay can be used to accurately date both recent events and events in past geologic eras.
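The decay law behind radiometric dating can be illustrated with a short sketch; the half-life used is the commonly quoted value for carbon-14, and the surviving fraction is an assumed example:

```python
# Sketch of radiometric age determination: an unstable parent isotope decays
# as N(t) = N0 * exp(-lambda * t), with lambda = ln(2) / half-life, so the
# age follows from the fraction of the parent that remains.

import math

def age_from_fraction(remaining_fraction, half_life_years):
    """Years elapsed, given the fraction of the parent isotope remaining."""
    decay_constant = math.log(2) / half_life_years
    return -math.log(remaining_fraction) / decay_constant

# A sample retaining 25% of its carbon-14 is two half-lives old:
print(round(age_from_fraction(0.25, 5730)))  # 11460
```

Because different isotopes have half-lives spanning many orders of magnitude, the same relation dates events from centuries to billions of years.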
Electricity
Although we mainly notice electricity during thunderstorms, there is always a downward electric field near the surface that averages 120 V/m.[8] Relative to the solid Earth, the atmosphere has a net positive charge due to bombardment by cosmic rays. A current of about 1800 A flows in the global circuit.[8] It flows downward from the ionosphere over most of the Earth and back upwards through thunderstorms. The flow is manifested by lightning below the clouds and sprites above.
A variety of electric methods are used in geophysical survey. Some measure spontaneous potential, a potential that arises in the ground because of man-made or natural disturbances. Telluric currents flow in Earth and the oceans. They have two causes: electromagnetic induction by the time-varying, external-origin geomagnetic field and motion of conducting bodies (such as seawater) across the Earth's permanent magnetic field.[9] The distribution of telluric current density can be used to detect variations in electrical resistivity of underground structures. Geophysicists can also provide the electric current themselves (see induced polarization and electrical resistivity tomography).
Electromagnetic waves
Electromagnetic waves occur in the ionosphere and magnetosphere as well as the Earth's outer core. Dawn chorus is caused by high-energy electrons that get caught in the Van Allen radiation belt. Whistlers are produced by lightning strikes. Hiss may be generated by both. Electromagnetic waves may also be generated by earthquakes (see seismo-electromagnetics).
In the Earth's outer core, electric currents in the highly conductive liquid iron create magnetic fields by electromagnetic induction (see geodynamo). Alfvén waves are magnetohydrodynamic waves in the magnetosphere or the Earth's core. In the core, they probably have little observable effect on the geomagnetic field, but slower waves such as magnetic Rossby waves may be one source of geomagnetic secular variation.
Electromagnetic methods that are used for geophysical survey include transient electromagnetics and magnetotellurics.
Magnetism
Further information: Geomagnetism and paleomagnetism
The variation between magnetic north and "true" north (see Earth's magnetic field).
The Earth's magnetic field protects the Earth from the deadly solar wind and has long been used for navigation. It originates in the fluid motions of the Earth's outer core (see geodynamo). The magnetic field in the upper atmosphere gives rise to the auroras.
The Earth's field is roughly like a tilted dipole, but it changes over time (a phenomenon called geomagnetic secular variation). Mostly the geomagnetic pole stays near the geographic pole, but at random intervals averaging a million years or so, the polarity of the Earth's field reverses. These geomagnetic reversals are recorded in rocks (see natural remanent magnetization) and their signature can be seen as parallel linear magnetic anomaly stripes on the seafloor. These stripes provide quantitative information on seafloor spreading, a part of plate tectonics. In addition, the magnetization in rocks can be used to measure the motion of continents.
Fluid dynamics
Main article: Geophysical fluid dynamics
Fluid motions occur in the magnetosphere, atmosphere, ocean, mantle and core. Even the mantle, though it has an enormous viscosity, flows like a fluid over long time intervals (see geodynamics). This flow is reflected in phenomena such as isostasy, post-glacial rebound and mantle plumes. The mantle flow drives plate tectonics and the flow in the Earth's core drives the geodynamo.
Geophysical fluid dynamics is a primary tool in physical oceanography and meteorology. The rotation of the Earth has profound effects on the Earth's fluid dynamics, often due to the Coriolis effect. In the atmosphere it gives rise to large-scale patterns like Rossby waves and determines the basic circulation patterns of storms. In the ocean they drive large-scale circulation patterns as well as Kelvin waves and Ekman spirals at the ocean surface. In the Earth's core, the circulation of the molten iron is structured by Taylor columns.
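Rotation enters these dynamics through the Coriolis parameter f = 2Ω sin(φ), where Ω is the Earth's rotation rate and φ the latitude; a small sketch (the example latitude is chosen arbitrarily):

```python
# The Coriolis parameter f = 2 * omega * sin(latitude) sets the strength of
# rotational effects in geophysical fluid dynamics. The latitude below is
# an arbitrary example value.

import math

OMEGA = 7.2921e-5  # Earth's angular velocity, rad/s

def coriolis_parameter(latitude_deg):
    """Coriolis parameter f (1/s) at the given latitude."""
    return 2.0 * OMEGA * math.sin(math.radians(latitude_deg))

print(f"{coriolis_parameter(45.0):.2e}")  # about 1.03e-04 per second
print(coriolis_parameter(0.0))            # 0.0 at the equator
```

The vanishing of f at the equator and its growth toward the poles is why large-scale patterns such as Rossby waves and Ekman spirals depend so strongly on latitude.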
Waves and other phenomena in the magnetosphere can be modeled using magnetohydrodynamics.
Condensed matter physics
The physical properties of minerals must be understood to infer the composition of the Earth's interior from seismology, the geothermal gradient and other sources of information. Mineral physicists study the elastic properties of minerals as well as their high-pressure phase diagrams, melting points and equations of state at high pressure. Studies of creep determine how rocks that are brittle at the surface can flow deep down. These properties determine the rheology that determines the geodynamics.
Water is a very complex substance and its unique properties are essential for life. Its physical properties shape the hydrosphere and are an essential part of the water cycle and climate. Its thermodynamic properties determine evaporation and the thermal gradient in the atmosphere. The many types of precipitation involve a complex mixture of processes such as coalescence, supercooling and supersaturation. Some of the precipitated water becomes groundwater, and groundwater flow includes phenomena such as percolation, while the conductivity of water makes electrical and electromagnetic methods useful for tracking groundwater flow. Physical properties of water such as salinity have a large effect on its motion in the oceans.
The many phases of ice form the cryosphere and come in forms like ice sheets, glaciers, sea ice, freshwater ice, snow, and frozen ground (or permafrost).
Regions of the Earth
Size and form of the Earth
The Earth is roughly spherical, but it bulges towards the Equator, so it is roughly in the shape of an ellipsoid (see Earth ellipsoid). This bulge is due to its rotation and is nearly consistent with an Earth in hydrostatic equilibrium. The detailed shape of the Earth, however, is also affected by the distribution of continents and ocean basins, and to some extent by the dynamics of the plates.[12]
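The size of the equatorial bulge can be quantified by the ellipsoid's flattening. A small sketch, using the standard WGS84 reference-ellipsoid radii for illustration (these particular values are not taken from the text):

```python
# Flattening of a reference ellipsoid from its equatorial and polar radii.
# WGS84 values, used here purely for illustration.
a = 6378.137   # equatorial radius, km
b = 6356.752   # polar radius, km

f = (a - b) / a            # flattening
print(round(1 / f))        # ≈ 298, the familiar 1/298 figure
print(round(a - b, 1))     # ≈ 21.4 km difference between the radii
```

In other words, the rotational bulge makes the equatorial radius about 21 km larger than the polar radius, a deviation of roughly one part in 300.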
Structure of the Earth
Mapping the interior of the Earth with earthquake waves.
Evidence from seismology, heat flow at the surface, and mineral physics is combined with the Earth's mass and moment of inertia to infer models of the Earth's interior: its composition, density, temperature and pressure. The Earth's mass is M = 5.975 × 10^24 kg and its mean radius is R = 6371 km, so its mean specific gravity is ⟨ρ⟩ = 5.515. This is substantially higher than the typical specific gravity (2.7–3.3) of rocks at the surface. Its moment of inertia is 0.33 MR², whereas it would be 0.4 MR² if the Earth were a sphere of constant density. Both lines of evidence point to a concentration of mass near the center. However, the density of rock also increases with depth because of the increasing pressure; the Adams–Williamson equation is used to determine how large this effect is. The conclusion is that pressure alone cannot account for the increase in density. Instead, the Earth's core must be composed of a denser material, an alloy of iron and other elements.
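As a quick back-of-the-envelope check, the mean specific gravity and the moment-of-inertia comparison can be reproduced from just the mass and mean radius quoted above:

```python
import math

# Values quoted in the text.
M = 5.975e24   # Earth's mass, kg
R = 6.371e6    # Earth's mean radius, m

volume = (4.0 / 3.0) * math.pi * R**3     # m^3, treating the Earth as a sphere
mean_density = M / volume                 # kg/m^3
specific_gravity = mean_density / 1000.0  # relative to water (1000 kg/m^3)
print(round(specific_gravity, 2))         # ≈ 5.52, consistent with the quoted 5.515

# Observed moment of inertia (0.33 M R^2) vs. a uniform sphere (0.4 M R^2):
ratio = 0.33 / 0.4
print(round(ratio, 3))   # 0.825: less than 1, so mass is concentrated toward the center
```

Both numbers tell the same story: the bulk Earth is much denser than surface rock, and the "missing" moment of inertia places that extra mass near the center.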
Reconstructions of seismic waves in the deep interior of the Earth show that there are no S-waves in the outer core. This indicates that the outer core is liquid, because liquids cannot support shear. The outer core is liquid, and the motion of this highly conductive fluid generates the Earth's field (see geodynamo). The inner core, however, is solid because of the enormous pressure.
Reconstruction of seismic reflections in the deep interior indicate some major discontinuities in seismic velocities that demarcate the major zones of the Earth: inner core, outer core, mantle, lithosphere and crust. The mantle itself is divided into the upper mantle, transition zone, lower mantle and D′′ layer. Between the crust and the mantle is the Mohorovičić discontinuity.
The seismic model of the Earth does not by itself determine the composition of the layers. For a complete model of the Earth, mineral physics is needed to interpret seismic velocities in terms of composition. The mineral properties are temperature-dependent, so the geotherm must also be determined. This requires physical theory for thermal conduction and convection and the heat contribution of radioactive elements. The main model for the radial structure of the interior of the Earth is the Preliminary Reference Earth Model (PREM). Some parts of this model have been updated by recent findings in mineral physics (see post-perovskite) and supplemented by seismic tomography. The mantle is mainly composed of silicates, and the boundaries between layers of the mantle are probably due to phase transitions.
The mantle acts as a solid for seismic waves, but under high pressures and temperatures it deforms so that over millions of years it acts like a liquid. This makes plate tectonics possible. Geodynamics is the study of the fluid flow in the mantle and core.
The magnetosphere
The solar wind is deflected by the magnetosphere. If a planet's magnetic field is strong enough, its interaction with the solar wind forms a magnetosphere around the planet. Early space probes mapped out the gross dimensions of the terrestrial magnetic field, which extends about 10 Earth radii towards the Sun. The solar wind, a stream of charged particles, streams out and around the terrestrial magnetic field and continues downstream in the magnetotail, which extends hundreds of Earth radii behind the Earth. Inside the magnetosphere there are relatively dense regions of solar wind particles, the Van Allen radiation belts.
Other fields and related disciplines
Fields
* Geodesy, measurement of the Earth: GPS, vertical and horizontal motions of the Earth's surface, navigation, the study of the Earth's gravitational field, and the size and form of the Earth
* The study of large-scale motions of the Earth's surface and interior, including:
* Tectonophysics, the study of the physical processes that cause and result from plate tectonics
* Geodynamics, the study of modes of deformation and transport within the Earth: rock deformation, mantle flow and convection, heat flow, lithosphere dynamics
* Geomagnetism, the study of the Earth's magnetic field, including its origin, telluric currents driven by the magnetic field, the Van Allen belts, and the interaction between the magnetosphere and the solar wind. This field is associated with paleomagnetism, or the measurement of the orientation of the Earth's magnetic field over the geologic past.
* Seismology, the study of the structure and composition of the Earth through seismic waves, and of surface deformations during earthquakes and seismic hazards
* Mathematical geophysics, the development and application of mathematical methods and techniques for the solution of geophysical problems
* Geophysical surveying:
* Exploration and engineering geophysics, using surface methods to detect or infer the presence and position of concentrations of ore minerals and hydrocarbons
* Archaeological geophysics, for archaeological imaging or mapping
* Environmental and engineering geophysics, for locating underground storage tanks (USTs) or utilities, unexploded ordnance (UXO), delineating landfills, locating voids or potential subsidence, finding the depth to bedrock (or its P-wave or S-wave velocity, or its rippability), and tracing the pathway of groundwater movement
* Shallow seismology is used in exploration geophysics (to find oil and gas) and for environmental characterization of the subsurface
Related disciplines
* Volcanology, the study of volcanoes, volcanic features (hot springs, geysers, fumaroles), volcanic rock, and heat flow related to volcanoes
* Atmospheric sciences, which includes:
* Atmospheric electricity and the ionosphere
* Aeronomy, the study of the physical structure and chemistry of the atmosphere.
* Meteorology and Climatology, which both involve studies of the weather.
* The study of water on the Earth, hydrology, physical oceanography and glaciology
* Geological and geophysical engineering and Engineering geology, applying geophysics to the engineering design of facilities including roads, tunnels, and mines
* The study of the rocks and minerals, including petrophysics and aspects of mineralogy such as physical mineralogy and crystal structure
Methods of geophysics
Space probes
Space probes made it possible to collect data not only in the visible-light region but in other areas of the electromagnetic spectrum. The planets can be characterized by their force fields, gravitational and magnetic, which are studied through geophysics and space physics.
Measuring the changes in acceleration experienced by spacecraft as they orbit has allowed fine details of the planets' gravity fields to be mapped. For example, in the 1970s the gravity-field disturbances above the lunar maria were measured by lunar orbiters, which led to the discovery of concentrations of mass, mascons, beneath the Imbrium, Serenitatis, Crisium, Nectaris and Humorum basins.
In 2002, NASA launched the Gravity Recovery and Climate Experiment (GRACE), in which twin satellites map variations in Earth's gravity field by measuring the distance between the two satellites using GPS and a microwave ranging system. Gravity variations detected by GRACE include those caused by changes in ocean currents, runoff and groundwater depletion, and melting ice sheets and glaciers.
Medical physics
Medical physics is a division of Healthcare science concerning the application of physics to medicine. It generally concerns physics as applied to medical imaging and radiotherapy, although a medical physicist may also work in many other areas of healthcare. A medical physics department may be based in either a hospital or a university and its work is likely to include research, technical development, and clinical healthcare.
Of the large body of medical physicists in academia and clinics, roughly 85% practice or specialize in various forms of therapy, 10% in diagnostic imaging, and 5% in nuclear medicine. However, areas of specialty in medical physics vary widely in scope and breadth.
Areas of specialty
Medical imaging
* Diagnostic radiology, including x-rays, fluoroscopy, mammography, dual energy X-ray absorptiometry, angiography and computed tomography
* Ultrasound, including intravascular ultrasound
* Non-ionizing radiation (lasers, ultraviolet, etc.)
* Nuclear medicine, including single photon emission computed tomography (SPECT) and positron emission tomography (PET)
* Magnetic resonance imaging (MRI), including functional magnetic resonance imaging (fMRI) and other methods for functional neuroimaging of the brain.
o For example, nuclear magnetic resonance imaging (usually referred to as magnetic resonance imaging to avoid the public's association of the word "nuclear" with ionizing radiation) uses the phenomenon of nuclear resonance to image the human body.
* Magnetoencephalography
* Electrical impedance tomography
* Diffuse optical imaging
* Optical coherence tomography
Treatment of disease
* Defibrillation
* High intensity focussed ultrasound, including lithotripsy
* Interventional radiology
* Non-ionising radiation (lasers, ultraviolet, etc.), including photodynamic therapy and LASIK
* Nuclear medicine, including unsealed source radiotherapy
* Photomedicine, the use of light to treat and diagnose disease
* Radiotherapy
o TomoTherapy
o Cyberknife
o Gamma knife
o Proton therapy
o Brachytherapy
o Boron Neutron Capture Therapy
* Sealed source radiotherapy
* Terahertz radiation
Physiological measurement techniques
ECG trace
These techniques are used to monitor and measure various physiological parameters. Many physiological measurement techniques are non-invasive and can be used in conjunction with, or as an alternative to, invasive methods.
* Electrocardiography
* Electromyography
* Electroencephalography
* Electronystagmography
* Endoscopy
* Medical ultrasonography
* Non-ionising radiation (lasers, ultraviolet, etc.)
* Near infrared spectroscopy
* Pulse oximetry
* Blood gas monitor
* Blood pressure measurement
Radiation protection
* Background radiation
* Radiation protection
* Dosimetry
* Health Physics
* Radiological Protection of Patients
Medical computing and mathematics
CT image reconstruction
* Medical informatics
* Telemedicine
* Picture archiving and communication systems (PACS)
* DICOM
* Tomographic reconstruction, an ill-posed inverse problem
Education and training
In North America
In North America, medical physics training is offered at the master's, doctoral, post-doctoral and residency levels. Several universities in Canada and the United States offer these degrees.
As of October 2010, twenty-seven universities in North America have medical physics graduate programs that are accredited by The Commission on Accreditation of Medical Physics Education Programs (CAMPEP). The same organization has accredited forty-three medical physics clinical residency programs.
Professional certification is obtained from the American Board of Radiology, the American Board of Medical Physics, the American Board of Science in Nuclear Medicine, and the Canadian College of Physicists in Medicine. As of 2012, enrollment in a CAMPEP-accredited residency or graduate program is required to start the ABR certification process. Starting in 2014, completion of a CAMPEP-accredited residency will be required to advance to part 2 of the ABR certification process.
In the United Kingdom
Prospective trainees must first gain a first- or upper-second-class honours degree in a physical or engineering science subject before they can start Part I medical physics training within the National Health Service.
Trainees can complete Part I training in fifteen months provided they hold an MSc from an IPEM-accredited centre in the United Kingdom or the Republic of Ireland (National University of Ireland, Galway). For these candidates, Part I training consists of pure clinical experience. Trainees who hold only a degree in an engineering or physical science subject must undertake a combined study and clinical training programme. This programme consists of two years of clinical placement, during which the trainee studies for an MSc in Medical Physics approved by the Institute of Physics and Engineering in Medicine (IPEM). The MSc will be at University College London, Swansea, Sheffield, Surrey, Birmingham, Leeds, Manchester, Aberdeen, Glasgow, King's or Queen Mary's. The Open University also offers a Master of Science in Medical Physics, but prospective students should first check that this degree satisfies the accreditation requirements before embarking on it. Successful completion of the Part I training programme leads to an IPEM Diploma. The trainee can then apply for a Part II position, which consists of the IPEM's Programme of Advanced Training (PAT); this takes a further two years and, if successful, leads to Corporate Membership of the IPEM and registration as a Clinical Scientist.
Sunday, May 29, 2011
SOLAR PANEL
A solar panel (photovoltaic module or photovoltaic panel) is a packaged interconnected assembly of solar cells, also known as photovoltaic cells. The solar panel can be used as a component of a larger photovoltaic system to generate and supply electricity in commercial and residential applications.
Because a single solar panel can produce only a limited amount of power, many installations contain several panels. A photovoltaic system typically includes an array of solar panels, an inverter and interconnection wiring, and may contain a battery.
Theory and construction
Solar panels use light energy (photons) from the sun to generate electricity through the photovoltaic effect. The structural (load carrying) member of a module can either be the top layer or the back layer. The majority of modules use wafer-based crystalline silicon cells or thin-film cells based on cadmium telluride or silicon. The conducting wires that take the current off the panels may contain silver, copper or other conductive (but generally not magnetic) transition metals.
The cells must be connected electrically to one another and to the rest of the system. Cells must also be protected from mechanical damage and moisture. Most solar panels are rigid, but semi-flexible ones are available, based on thin-film cells.
Electrical connections are made in series to achieve a desired output voltage and/or in parallel to provide a desired current capability.
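A minimal sketch of this rule: panels in series add their voltages, parallel strings add their currents. The module ratings below (30 V, 8 A) are hypothetical values for illustration, not taken from the text.

```python
# Hypothetical module ratings (illustrative only).
panel_voltage = 30.0   # volts at the operating point
panel_current = 8.0    # amperes at the operating point

panels_in_series = 10      # one series "string"
strings_in_parallel = 3    # strings wired in parallel

array_voltage = panel_voltage * panels_in_series     # series connections add voltage
array_current = panel_current * strings_in_parallel  # parallel connections add current
array_power = array_voltage * array_current

print(array_voltage, array_current, array_power)   # 300.0 24.0 7200.0
```

Note that the total power is the same however the 30 panels are wired; the series/parallel split only trades voltage against current to match the inverter's input window.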
Separate diodes may be needed to avoid reverse currents in case of partial or total shading, and at night. The p-n junctions of mono-crystalline silicon cells may have reverse-current characteristics good enough that these diodes are unnecessary. Reverse currents waste power and can also lead to overheating of shaded cells. Solar cells become less efficient at higher temperatures, so installers try to provide good ventilation behind solar panels.
Some recent solar panel designs include concentrators in which light is focused by lenses or mirrors onto an array of smaller cells. This enables the use of cells with a high cost per unit area (such as gallium arsenide) in a cost-effective way.
Depending on construction, photovoltaic panels can produce electricity from a range of frequencies of light, but usually cannot cover the entire solar range (specifically, ultraviolet, infrared and low or diffused light). Hence much of the incident sunlight energy is wasted by solar panels, and they can give far higher efficiencies if illuminated with monochromatic light. Therefore another design concept is to split the light into different wavelength ranges and direct the beams onto different cells tuned to those ranges. This has been projected to be capable of raising efficiency by 50%. The use of infrared photovoltaic cells has also been proposed to increase efficiencies, and perhaps produce power at night.
Sunlight conversion rates (solar panel efficiencies) can vary from 5% to 18% in commercial products, typically lower than the efficiencies of their cells in isolation. Panels with conversion rates around 18% are in development, incorporating innovations such as power generation on both the front and back sides.[citation needed] The energy density of a solar panel is its efficiency described in terms of peak power output per unit of surface area, commonly expressed in watts per square foot (W/ft²). The most efficient mass-produced solar panels have energy density values greater than 13 W/ft².
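The W/ft² figure follows directly from the efficiency, assuming output at the standard test irradiance of 1,000 W/m² and converting area units:

```python
STC_IRRADIANCE = 1000.0   # W/m^2, the standard test irradiance
M2_PER_FT2 = 0.09290304   # square metres per square foot (exact)

def energy_density_w_per_ft2(efficiency):
    """Peak output per unit area, in W/ft^2, for a panel of the given efficiency."""
    return STC_IRRADIANCE * efficiency * M2_PER_FT2

print(round(energy_density_w_per_ft2(0.18), 1))   # 16.7
print(round(energy_density_w_per_ft2(0.14), 1))   # 13.0
```

So a 13 W/ft² panel corresponds to roughly 14% efficiency, consistent with the efficiency range quoted above.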
Crystalline silicon modules
Most solar modules are currently produced from silicon PV cells. These are typically categorized into either monocrystalline or multicrystalline modules.
Thin-film modules
Third-generation solar cells are advanced thin-film cells. They aim to deliver high conversion efficiency at low cost.
Rigid thin-film modules
In rigid thin film modules, the cell and the module are manufactured in the same production line.
The cell is created on a glass substrate or superstrate, and the electrical connections are created in situ, a so called "monolithic integration". The substrate or superstrate is laminated with an encapsulant to a front or back sheet, usually another sheet of glass.
The main cell technologies in this category are CdTe, a-Si, a-Si+uc-Si tandem, and CIGS (or a variant). Amorphous silicon has a sunlight conversion rate of 6–12%.
Flexible thin-film modules
Flexible thin film cells and modules are created on the same production line by depositing the photoactive layer and other necessary layers on a flexible substrate.
If the substrate is an insulator (e.g. polyester or polyimide film) then monolithic integration can be used.
If it is a conductor then another technique for electrical connection must be used.
The cells are assembled into modules by laminating them to a transparent colourless fluoropolymer on the front side (typically ETFE or FEP) and a polymer suitable for bonding to the final substrate on the other side. The only commercially available (in MW quantities) flexible module uses amorphous silicon triple junction (from Unisolar).
So-called inverted metamorphic (IMM) multijunction solar cells made on compound-semiconductor technology are just becoming commercialized in July 2008. The University of Michigan's solar car that won the North American Solar challenge in July 2008 used IMM thin-film flexible solar cells.
Residential and commercial installations have different requirements. Residential needs are relatively simple and can be packaged, but as solar-cell technology progresses, the baseline equipment such as the battery, inverter and voltage-sensing transfer switch still needs to be compacted and unitized for residential use. Commercial use, depending on the size of the service, may be more limited in the photovoltaic-cell arena, where more complex parabolic reflectors and solar concentrators are becoming the dominant technology.
The global flexible and thin-film photovoltaic (PV) market, despite caution in the overall PV industry, is expected to experience a CAGR of over 35% to 2019, surpassing 32 GW, according to a major new study by IntertechPira.
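To put that figure in perspective, compound annual growth works like compound interest. A hedged sketch (the eight-year horizon is illustrative, since the study's baseline year is not given here):

```python
def compound_growth(initial, annual_rate, years):
    """Size after `years` of growth at a constant annual rate (0.35 = 35% CAGR)."""
    return initial * (1.0 + annual_rate) ** years

# At a 35% CAGR, a market grows roughly elevenfold in eight years.
print(round(compound_growth(1.0, 0.35, 8), 1))   # ≈ 11.0
```

This is why even a cautious industry can forecast a leap to tens of gigawatts: a 35% annual rate, sustained, multiplies the market more than tenfold within a decade.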
Module embedded electronics
Several companies have begun embedding electronics into PV modules. This enables performing Maximum Power Point Tracking (MPPT) for each module individually, and the measurement of performance data for monitoring and fault detection at module level. Some of these solutions make use of Power Optimizers, a DC to DC converter technology developed to maximize the power harvest from solar photovoltaic systems.
Module performance and lifetime
Module performance is generally rated under Standard Test Conditions (STC): irradiance of 1,000 W/m², a solar spectrum of AM 1.5, and a module temperature of 25 °C.
Electrical characteristics include nominal power (PMAX, measured in W), open circuit voltage (VOC), short circuit current (ISC, measured in amperes), maximum power voltage (VMPP), maximum power current (IMPP), peak power (kWp), and module efficiency (%).
Nominal voltage refers to the voltage of the battery that the module is best suited to charge; this is a leftover term from the days when solar panels were used only to charge batteries. The actual voltage output of the panel changes as lighting, temperature and load conditions change, so there is never one specific voltage at which the panel operates. Nominal voltage allows users, at a glance, to make sure the panel is compatible with a given system.
Open circuit voltage or VOC is the maximum voltage that the panel can produce when not connected to an electrical circuit or system. VOC can be measured with a meter directly on an illuminated panel's terminals or on its disconnected cable.
The peak power rating, kWp, is the maximum output according to STC (not the maximum possible output).
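The listed parameters are linked by two standard relations: PMAX = VMPP × IMPP, and the fill factor (a standard figure of merit not named in the text) compares PMAX with the VOC × ISC product. The module values below are hypothetical but typical of crystalline silicon:

```python
# Hypothetical module values, typical of crystalline silicon (not from the text).
V_OC, I_SC = 38.0, 8.8      # open-circuit voltage (V), short-circuit current (A)
V_MPP, I_MPP = 30.5, 8.2    # voltage and current at the maximum power point

P_MAX = V_MPP * I_MPP                  # nominal (maximum) power, W
fill_factor = P_MAX / (V_OC * I_SC)    # dimensionless, roughly 0.7-0.8 for good cells

print(round(P_MAX, 1))         # 250.1
print(round(fill_factor, 2))   # 0.75
```

The panel never delivers VOC and ISC simultaneously; the fill factor expresses how close the real maximum power point gets to that ideal rectangle.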
Solar panels must withstand heat, cold, rain and hail for many years. Many crystalline silicon module manufacturers offer a warranty that guarantees electrical production for 10 years at 90% of rated power output and 25 years at 80%.
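One way to read such a warranty is as a floor on output that declines roughly linearly: about 1% of rated power per year to year 10, then about 0.67% per year to year 25. This piecewise-linear curve is an interpretation for illustration, not the manufacturers' actual warranty terms:

```python
# One reading of the warranty: a piecewise-linear floor on rated output
# (an assumption for illustration, not an actual warranty curve).
def warranted_fraction(years):
    if years <= 10:
        return 1.0 - 0.10 * years / 10.0        # falls to 90% at year 10
    return 0.90 - 0.10 * (years - 10) / 15.0    # falls to 80% at year 25

print(round(warranted_fraction(10), 2))   # 0.9
print(round(warranted_fraction(25), 2))   # 0.8
```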
Production
15.9 GW of solar PV system installations were completed in 2010. Solar PV pricing survey and market research company PVinsights reported 117.8% year-on-year growth in solar PV installations. With over 100% year-on-year growth in PV system installation, PV module makers dramatically increased their solar panel shipments in 2010, actively expanding their capacity and turning themselves into gigawatt (GW) players. According to PVinsights, five of the top ten PV module companies in 2010 were GW players: Suntech, First Solar, Sharp, Yingli and Trina Solar, most of which doubled their shipments in 2010.
Top ten
Top ten solar panel producers (by MW shipments) in 2010 were:
1. Suntech
2. First Solar
3. Sharp Solar
4. Yingli
5. Trina Solar
6. Canadian Solar
7. Hanwha Solarone
8. Sunpower
9. Renewable Energy Corporation
10. Solarworld
Price
Average pricing information divides into three pricing categories: small-quantity buyers (modules of all sizes in the kilowatt range annually), mid-range buyers (typically up to 10 MWp annually), and large-quantity buyers (self-explanatory, with access to the lowest prices). Over the long term, and only in the long term, there is a clear systematic reduction in the price of cells and modules. For example, in 1998 the quantity cost per watt was estimated at about $4.50, roughly one thirty-third of the 1970 cost of $150.
According to RMI, balance-of-system (BoS) elements, that is, the non-module costs of non-microinverter solar panels (such as wiring, converters, racking systems and various components), make up about half of the total cost of installations. Standardizing these technologies could encourage greater adoption of solar panels and, in turn, economies of scale.
Mounting Systems
Trackers
Solar trackers increase the amount of energy produced per panel, at the cost of mechanical complexity and a need for maintenance.
Fixed Racks
Fixed racks hold panels in a single location as the sun moves across the sky.
The fixed rack sets the angle at which the panel is held. A tilt angle equal to the installation's latitude is common.
Standards
Standards generally used in photovoltaic panels:
* IEC 61215 (crystalline silicon performance), 61646 (thin film performance) and 61730 (all modules, safety)
* ISO 9488 Solar energy—Vocabulary.
* UL 1703
* CE mark
* Electrical Safety Tester (EST) Series (EST-460, EST-22V, EST-22H, EST-110).
Devices with photovoltaic modules
Further information: Solar panels on spacecraft and Solar charger
Electric devices that include solar panels:
* Solar cell phone: Sharp announced that its first solar-powered cell phone would be released in the summer of 2009.
* Solar lamp
* Solar notebook: IUNIKA makes the first Solar Powered Netbook, the Gyy.[10]
* Solar-pumped laser
* Solar vehicle
* Solar plane
Space stations and various spacecraft employ, or have employed, photovoltaic panels to generate power.
* Soyuz spacecraft
* International Space Station
* Skylab space laboratory
* Mir space station
A PV MODULE ON ISS
Thursday, May 26, 2011
BIOGRAPHY OF ALBERT EINSTEIN
Albert Einstein (March 14, 1879 – April 18, 1955) was a German-born American theoretical physicist who is widely regarded as the greatest scientist of the 20th century. He proposed the theory of relativity and also made major contributions to the development of quantum mechanics, statistical mechanics, and cosmology. He was awarded the 1921 Nobel Prize for Physics for his explanation of the photoelectric effect and "for his services to Theoretical Physics".
The parents of Albert Einstein, Pauline Koch and Hermann Einstein
After his general theory of relativity was formulated in November 1915, Einstein became world famous, an unusual achievement for a scientist. In his later years, his fame exceeded that of any other scientist in history, and in popular culture, Einstein has become a byword for great intelligence or even genius.
Einstein himself was deeply concerned with the social impact of scientific discovery. An individual of monumental intellectual achievement, he remains the most influential theoretical physicist of the modern era. Einstein's reverence for all creation, his belief in the grandeur, beauty, and sublimity of the universe (the primary source of inspiration in science), his awe for the scheme that is manifested in the material universe—all of these show through in his work and philosophy. To this day Einstein receives popular recognition unprecedented for a scientist.
Biography
Youth and college
First Photo of Albert Einstein
Young Einstein before the Einsteins moved from Germany to Italy.
Einstein was born in Ulm, in Württemberg, Germany, about 100 km east of Stuttgart. His parents were Hermann Einstein, a featherbed salesman who later ran an electrochemical works, and Pauline, whose maiden name was Koch. They were married in Stuttgart-Bad Cannstatt. The family was Jewish (though non-observant); Albert attended a Catholic elementary school and, at the insistence of his mother, was given violin lessons.
When Einstein was five, his father showed him a pocket compass, and Einstein realized that something in "empty" space acted upon the needle; he would later describe the experience as one of the most revelatory of his life. Though he built models and mechanical devices for fun, he was considered a slow learner, possibly due to dyslexia, simple shyness, or the unusual structure of his brain (examined after his death). He later credited his development of the theory of relativity to this slowness, saying that by pondering space and time later than most children, he was able to apply a more developed intellect. Another, more recent, theory about his mental development is that he had Asperger's syndrome, a condition related to autism.
Einstein began to learn mathematics around age twelve. There is a recurring rumor that he failed mathematics later in his education, but this is untrue; a change in the way grades were assigned caused confusion years later. Two of his uncles fostered his intellectual interests during his late childhood and early adolescence by suggesting and providing books on science and mathematics.
In 1894, following the failure of Hermann's electrochemical business, the Einsteins moved from Munich to Pavia, Italy (near Milan). During this year Einstein wrote his first scientific work, "The Investigation of the State of Aether in Magnetic Fields". Albert remained behind in Munich lodgings to finish school, but completed only one term before leaving the gymnasium in spring 1895 to rejoin his family in Pavia. He quit without telling his parents, a year and a half before final examinations, convincing the school to let him go with a medical note from a friendly doctor; this meant he had no secondary-school certificate.
The following year he excelled in the mathematics and science portion of the entrance exam for the Eidgenössische Technische Hochschule (Swiss Federal Institute of Technology, in Zurich) but failed the liberal arts portion, a setback; his family sent him to Aarau, Switzerland, to finish secondary school, and he received his diploma in September 1896. During this time he lodged with the family of Professor Jost Winteler and became enamoured of their daughter Marie, his first sweetheart. Albert's sister Maja later married the Wintelers' son Paul, and his friend Michele Besso married their other daughter, Anna. Einstein subsequently enrolled at the Eidgenössische Technische Hochschule in October and moved to Zurich, while Marie moved to Olsberg for a teaching post. The same year, he renounced his Württemberg citizenship, becoming stateless.
Einstein's wife Mileva with their sons Eduard and Hans Albert
In the spring of 1896, the Serbian Mileva Marić (an acquaintance of Nikola Tesla) began as a medical student at the University of Zurich, but after a term switched to the same section as Einstein to study for the same diploma, the only woman admitted that year. Einstein's relationship with Mileva developed into romance over the next few years.
In 1900, he was granted a teaching diploma by the Eidgenössische Technische Hochschule and was accepted as a Swiss citizen in 1901. During this time Einstein discussed his scientific interests with a group of close friends, including Mileva. He and Mileva had a daughter Lieserl, born in January 1902. Lieserl, at the time, was considered illegitimate because the parents were unwed.
Work and doctorate
Einstein, in 1905, when he wrote the "Annus Mirabilis Papers"
Upon graduation, Einstein could not find a teaching post, mostly because his brashness as a young man had apparently irritated most of his professors. The father of a classmate helped him obtain employment as a technical assistant examiner at the Swiss Patent Office [3] in 1902. There, Einstein judged the worth of inventors' patent applications for devices that required a knowledge of physics to understand. He also learned how to discern the essence of applications despite sometimes poor descriptions, and was taught by the director how "to express myself correctly". He occasionally rectified their design errors while evaluating the practicality of their work.
Einstein married Mileva Marić on January 6, 1903. Einstein's marriage to Marić, who was a mathematician, was both a personal and intellectual partnership: Einstein referred to Mileva as "a creature who is my equal and who is as strong and independent as I am". Ronald W. Clark, a biographer of Einstein, claimed that Einstein depended on the distance that existed in his and Mileva's marriage in order to have the solitude necessary to accomplish his work. Abram Joffe, a Soviet physicist who knew Einstein, in an obituary of Einstein, wrote, "The author of [the papers of 1905] was ... a bureaucrat at the Patent Office in Bern, Einstein-Marić" and this has recently been taken as evidence of a collaborative relationship. However, according to Alberto A. Martínez of the Center for Einstein Studies at Boston University, Joffe only ascribed authorship to Einstein, as he believed that it was a Swiss custom at the time to append the spouse's last name to the husband's name.[4] Whatever the truth, the extent of her influence on Einstein's work is a highly controversial and debated question.
On May 14, 1904, the couple's first son, Hans Albert Einstein, was born. In 1904, Einstein's position at the Swiss Patent Office was made permanent. He obtained his doctorate after submitting his thesis "A new determination of molecular dimensions" ("Eine neue Bestimmung der Moleküldimensionen") in 1905.
That same year, he wrote four articles that provided the foundation of modern physics, without much scientific literature to which he could refer or many scientific colleagues with whom he could discuss the theories. Most physicists agree that three of those papers (on Brownian motion, the photoelectric effect, and special relativity) deserved Nobel Prizes. Only the paper on the photoelectric effect would win one. This is ironic, not only because Einstein is far better-known for relativity, but also because the photoelectric effect is a quantum phenomenon, and Einstein became somewhat disenchanted with the path quantum theory would take. What makes these papers remarkable is that, in each case, Einstein boldly took an idea from theoretical physics to its logical consequences and managed to explain experimental results that had baffled scientists for decades.
Annus Mirabilis Papers
For a more detailed treatment of this topic, see the subarticle Annus Mirabilis Papers.
Max Planck and Einstein
Einstein submitted the series of papers to the "Annalen der Physik". They are commonly referred to as the "Annus Mirabilis Papers" (from Annus mirabilis, Latin for 'year of wonders'). The International Union of Pure and Applied Physics (IUPAP) declared 2005 the 'World Year of Physics' to commemorate the 100th anniversary of the publication of Einstein's 1905 papers.
Photoelectric effect, Physicist / Astronomers Stamps
The first paper, named "On a Heuristic Viewpoint Concerning the Production and Transformation of Light", ("Über einen die Erzeugung und Verwandlung des Lichtes betreffenden heuristischen Gesichtspunkt") proposed the idea of "energy quanta" (which underlies the concept of what are now called photons) and showed how it could be used to explain such phenomena as the photoelectric effect.
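The energetics behind the light-quantum idea can be sketched numerically. In this illustration (the 400 nm wavelength and the roughly cesium-like work function are example values, not figures from the original paper), each photon carries energy E = hf = hc/λ, and an absorbed photon can eject an electron only if that energy exceeds the metal's work function:

```python
H = 6.62607015e-34    # Planck constant, J*s (exact SI value)
C = 2.99792458e8      # speed of light in vacuum, m/s
EV = 1.602176634e-19  # joules per electronvolt

def photon_energy_ev(wavelength_m: float) -> float:
    """Energy of a single light quantum, E = h*c / wavelength, in eV."""
    return H * C / wavelength_m / EV

# Violet light at 400 nm carries about 3.1 eV per photon. On a metal with
# a ~2.1 eV work function (roughly that of cesium), each absorbed photon
# can eject an electron with at most ~1.0 eV of kinetic energy.
e_photon = photon_energy_ev(400e-9)
work_function_ev = 2.1  # illustrative value
print(round(e_photon, 2), round(e_photon - work_function_ev, 2))
```

The key point of the paper is visible in the arithmetic: the ejected electron's energy depends on the light's frequency, not its intensity, which classical wave theory could not explain.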
His second article in 1905, named "On the Motion—Required by the Molecular Kinetic Theory of Heat—of Small Particles Suspended in a Stationary Liquid", ("Über die von der molekularkinetischen Theorie der Wärme geforderte Bewegung von in ruhenden Flüssigkeiten suspendierten Teilchen") covered his study of Brownian motion, and provided empirical evidence for the existence of atoms.
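The diffusive signature Einstein predicted, mean-square displacement growing linearly with time, can be illustrated with a toy one-dimensional random walk (a sketch of the statistical behavior, not the paper's actual analysis; the walker counts and step sizes are arbitrary):

```python
import random

random.seed(42)  # fixed seed so the simulation is reproducible

def mean_square_displacement(n_walkers: int, n_steps: int) -> float:
    """Average squared displacement of 1-D random walkers taking +/-1 steps."""
    total = 0.0
    for _ in range(n_walkers):
        x = 0
        for _ in range(n_steps):
            x += random.choice((-1, 1))
        total += x * x
    return total / n_walkers

# For unbiased unit steps, <x^2> after n steps averages exactly n:
# displacement grows like the square root of time, the diffusive law
# Einstein derived for suspended particles bombarded by molecules.
print(mean_square_displacement(2000, 100))  # close to 100
```

Perrin's later measurements of exactly this linear growth of mean-square displacement with time were what turned the paper into empirical evidence for atoms.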
Einstein's third paper that year, "On the Electrodynamics of Moving Bodies" ("Zur Elektrodynamik bewegter Körper"), was published on June 30, 1905. While developing this paper, Einstein wrote to Mileva about "our work on relative motion", and this has led some to ask whether Mileva played a part in its development. This paper introduced the special theory of relativity, a theory of time, distance, mass and energy which was consistent with electromagnetism, but omitted the force of gravity.
A fourth paper, "Does the Inertia of a Body Depend Upon Its Energy Content?", ("Ist die Trägheit eines Körpers von seinem Energieinhalt abhängig?") published late in 1905, showed one further deduction from relativity's axioms, the famous equation that the energy of a body at rest (E) equals its mass (m) times the speed of light (c) squared.
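In modern notation the deduction reads E = mc^2. A quick numeric illustration of the scale involved (the one-gram mass is an example value chosen here, not from the original paper):

```python
C = 2.99792458e8  # speed of light in vacuum, m/s

def rest_energy_joules(mass_kg: float) -> float:
    """Rest energy of a body, E = m * c**2."""
    return mass_kg * C ** 2

# One gram of mass is equivalent to roughly 9e13 joules, which is why
# even tiny mass differences in nuclear reactions release enormous energy.
print(rest_energy_joules(0.001))
```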
Middle years
Marcel Grossmann
In 1906, Einstein was promoted to technical examiner second class. In 1908, he was licensed in Bern, Switzerland, as a Privatdozent (an unsalaried university lecturer). Einstein's second son, Eduard, was born on July 28, 1910. In 1911, Einstein became an associate professor at the University of Zurich and, shortly afterwards, a full professor at the (German) University of Prague, only to return the following year to Zurich as a full professor at the ETH Zurich. At that time, he worked closely with the mathematician Marcel Grossmann. In 1912, Einstein started to refer to time as the fourth dimension.
In 1914, just before the start of World War I, Einstein settled in Berlin as professor at the local university and became a member of the Prussian Academy of Sciences. He took German citizenship. His pacifism and Jewish origins irritated German nationalists. After he became world-famous, nationalistic hatred of him grew and for the first time he was the subject of an organized campaign to discredit his theories. From 1914 to 1933, he served as director of the Kaiser Wilhelm Institute for Physics in Berlin, and it was during this time that he was awarded his Nobel Prize and made his most groundbreaking discoveries.
Einstein divorced Mileva on February 14, 1919, and married his cousin Elsa Löwenthal (née Einstein: Löwenthal was the surname of her first husband, Max) on June 2, 1919. Elsa was Albert's first cousin (maternally) and his second cousin (paternally). She was three years older than Albert, and had nursed him back to health after he had suffered a partial nervous breakdown combined with a severe stomach ailment. There were no children from this marriage. The fate of Albert and Mileva's first child, Lieserl, is unknown: some believe she died in infancy, while others believe she was given up for adoption. Eduard was institutionalized for schizophrenia and died in an asylum, while Hans became a professor of hydraulic engineering at the University of California, Berkeley, having little interaction with his father. In 1922, Einstein and his wife Elsa boarded the SS Kitano Maru bound for Japan. The trip also took them to other ports, including Singapore, Hong Kong and Shanghai.
General relativity
"Einstein theory triumphs," declared the New York Times on November 10, 1919.
In November 1915, Einstein presented a series of lectures before the Prussian Academy of Sciences in which he described his theory of general relativity. The final lecture climaxed with his introduction of an equation that replaced Newton's law of gravity. This theory considered all observers to be equivalent, not only those moving at a uniform speed. In general relativity, gravity is no longer a force (as it is in Newton's law of gravity) but is a consequence of the curvature of space-time.
The theory provided the foundation for the study of cosmology and gave scientists the tools to understand many features of the universe that were discovered well after Einstein's death. A truly revolutionary theory, general relativity has so far passed every test posed to it, unlike many other scientific theories, and has become a framework through which all of physics can be viewed.
Initially, scientists were skeptical because the theory had been derived by mathematical reasoning and rational analysis, not by experiment or observation. But in 1919, predictions made using the theory were confirmed by Arthur Eddington's measurements, taken during a solar eclipse, of how much the light emanating from a star was bent by the Sun's gravity as it passed close to the Sun. On November 7, The Times reported the confirmation, cementing Einstein's fame.
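The prediction Eddington tested can be reproduced with the standard general-relativistic deflection formula, alpha = 4GM/(c^2 R), for a light ray grazing the Sun's limb (the constants below are approximate published values, not figures from this text):

```python
G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30  # solar mass, kg
C = 2.99792458e8  # speed of light, m/s
R_SUN = 6.957e8   # solar radius, m
ARCSEC_PER_RAD = 180.0 / 3.141592653589793 * 3600.0

def light_deflection_arcsec(mass_kg: float, radius_m: float) -> float:
    """GR deflection angle 4GM/(c^2 R) for light grazing a mass, in arcseconds."""
    return 4.0 * G * mass_kg / (C ** 2 * radius_m) * ARCSEC_PER_RAD

# General relativity predicts about 1.75 arcseconds at the solar limb,
# twice the value a Newtonian corpuscular calculation gives, and the
# figure Eddington's 1919 eclipse expedition confirmed.
print(round(light_deflection_arcsec(M_SUN, R_SUN), 2))
```

The factor-of-two difference from the Newtonian prediction is what made the eclipse observation a decisive test between the two theories.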
However, many scientists were still unconvinced for various reasons, ranging from disagreement with Einstein's interpretation of the experiments, to not being able to tolerate the absence of an absolute frame of reference. In Einstein's view, many of them simply could not understand the mathematics involved. Einstein's public fame which followed the 1919 article created resentment among these scientists, some of which lasted well into the 1930s.
In the early 1920s, Einstein was the lead figure in a famous weekly physics colloquium at the University of Berlin. On March 30, 1921, Einstein went to New York to give a lecture on his new theory. In the same year, he was finally awarded the Nobel Prize. Though he is now most famous for his work on relativity, it was for his earlier work on the photoelectric effect that he was given the Prize, because his work on relativity was still disputed and the Nobel committee decided that citing his less-contested theory would be a better political move.
The "Copenhagen" interpretation
Einstein's relationship with quantum physics was quite remarkable. He was the first to say that quantum theory was revolutionary. His idea of light quanta, now known as photons, marked a landmark break with classical physics. In 1909, Einstein presented his first paper on the subject to a gathering of physicists and told them that they must find some way to understand waves and particles together.
In the mid-1920s, as the original quantum theory was replaced with a new quantum mechanics, Einstein balked at the Copenhagen interpretation of the new equations because it settled for a probabilistic, non-visualizable account of physical behavior. Einstein agreed that the theory was the best available, but he looked for a more "complete" explanation, i.e., more deterministic. He could not abandon the belief that physics described the laws that govern "real things", the belief which had led to his successes with atoms, photons, and gravity.
In a 1926 letter to Max Born, Einstein made a remark that is now famous:
Quantum mechanics is certainly imposing. But an inner voice tells me it is not yet the real thing. The theory says a lot, but does not really bring us any closer to the secret of the Old One. I, at any rate, am convinced that He does not throw dice.
To this, Bohr, who sparred with Einstein on quantum theory, retorted, "Stop telling God what He must do!" The Bohr-Einstein debates on the foundational aspects of quantum mechanics took place during the Solvay conferences.
This was not a rejection of probabilistic theories per se. Einstein had used statistical analysis in his work on Brownian motion and photoelectricity, and in papers published before the miraculous year 1905; he had even discovered Gibbs ensembles on his own. But he believed that, at its core, physical reality behaved deterministically. Experimental evidence against this belief was found only much later, with the discovery of Bell's Theorem and Bell's inequality. However, there is still room for debate about the interpretation of quantum mechanics.
Bose-Einstein statistics
In 1924, Einstein received a short paper from a young Indian physicist named Satyendra Nath Bose describing light as a gas of photons and asking for Einstein's assistance in publication. Einstein realized that the same statistics could be applied to atoms, and published an article in German (then the lingua franca of physics) which described Bose's model and explained its implications. Bose-Einstein statistics now describe any assembly of these indistinguishable particles known as bosons. The Bose-Einstein condensate phenomenon was predicted in the 1920s by Bose and Einstein, based on Bose's work on the statistical mechanics of photons, which was then formalized and generalized by Einstein. The first such condensate was produced by Eric Cornell and Carl Wieman in 1995 at the University of Colorado at Boulder.
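The statistics in question give the mean number of bosons occupying a single-particle state of energy epsilon: n = 1 / (exp((epsilon - mu) / kT) - 1). A minimal sketch (the chemical potential of zero and the 300 K temperature are illustrative choices, not values from the text):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K (exact SI value)

def bose_einstein_occupancy(energy_j: float, mu_j: float, temp_k: float) -> float:
    """Mean occupation of a bosonic mode: 1 / (exp((E - mu)/(kT)) - 1)."""
    x = (energy_j - mu_j) / (K_B * temp_k)
    return 1.0 / (math.exp(x) - 1.0)

# A mode sitting one thermal quantum (k_B * T) above the chemical
# potential holds on average 1/(e - 1), i.e. about 0.582 particles.
# Unlike fermions, this occupancy is unbounded: as E approaches mu
# the denominator vanishes and the occupation diverges, which is the
# mathematical seed of Bose-Einstein condensation.
n = bose_einstein_occupancy(K_B * 300.0, 0.0, 300.0)
print(round(n, 4))
```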
Einstein also assisted Erwin Schrödinger in the development of the Quantum Boltzmann distribution, a mixed classical and quantum mechanical gas model—although he realized that this was less significant than the Bose-Einstein model, and declined to have his name included on the paper.
Later years
Einstein and Szilárd's patent diagram.
Einstein and former student Leó Szilárd co-invented a unique type of refrigerator (usually called the Einstein Refrigerator) in 1926. [5] [6] On November 11, 1930, U.S. Patent 1,781,541 was awarded to Albert Einstein and Leó Szilárd. The patent covered a thermodynamic refrigeration cycle providing cooling with no moving parts, at a constant pressure, with only heat as an input. The refrigeration cycle used ammonia, butane, and water.
After Adolf Hitler came to power in 1933, expressions of hatred for Einstein reached new levels. He was accused by the National Socialist regime of creating "Jewish physics" in contrast with Deutsche Physik, German or "Aryan" physics. Nazi physicists (notably including the Nobel laureates Johannes Stark and Philipp Lenard) continued the attempts to discredit his theories and to politically blacklist German physicists who taught them (such as Werner Heisenberg). Einstein renounced his German citizenship and fled to the United States, where he was given permanent residency. He accepted a position at the newly founded Institute for Advanced Study in Princeton Township, New Jersey. He became an American citizen in 1940, though he still retained Swiss citizenship.
Einstein spent the last fourteen years of his life trying to unify gravity and electromagnetism in a classical framework, hoping that such a unification would also yield a new, subtler understanding of quantum mechanics.
Institute for Advanced Study
His work at the Institute for Advanced Study focused on the unification of the laws of physics, which he referred to as the Unified Field Theory. He attempted to construct a model, under the appropriate conditions, which described all of the fundamental forces as different manifestations of a single force. His attempt was in a way doomed to failure because the strong and weak nuclear forces were not understood independently until around 1970, fifteen years after Einstein's death. Einstein's goal survives in the current drive for unification of the forces, embodied most notably by string theory.
Generalized theory
In his first attempt to demonstrate the unification and simplification of the fundamental forces, Einstein set out to form a generalized theory of gravitation embracing both the universal law of gravitation and the electromagnetic force. In 1950, he described this work in a Scientific American article. He was guided by the conviction that a single framework should underlie the entire set of physical laws, and he investigated the parallel properties of the electromagnetic and gravitational forces: both have infinite range and both obey inverse-square laws.
Einstein's generalized theory of gravitation is a universal mathematical approach to field theory, in which he sought to reduce the different phenomena, by a process of logic, to something already known or evident.
Einstein assumed a four-dimensional space-time continuum, expressed in axioms represented by five-component vectors. Particles appear in his research as limited regions of space in which the field strength or the energy density is particularly high. He treated subatomic particles as objects embedded in the unified field, influencing it and existing as essential constituents of it rather than independently of it. He also investigated a natural generalization of symmetric tensor fields, treating the combination of the two parts of the field as one natural feature of the total field rather than handling the symmetric and antisymmetric parts separately. Finally, he searched for a way to derive the field equations from a variational principle.
Einstein became increasingly isolated in his research on a generalized theory of gravitation and was ultimately unsuccessful in his attempts.
Final years
Einstein's two-story house, white frame with front porch in Greek revival style, in Princeton (112 Mercer Street).
In 1948, Einstein served on the original committee whose work led to the founding of Brandeis University. A portrait of Einstein was taken by Yousuf Karsh on February 11 of that same year. In 1952, the Israeli government proposed that Einstein take the post of its second president. He declined the offer, reportedly making him the only United States citizen ever to be offered the position of a foreign head of state. On March 30, 1953, Einstein released a revised unified field theory.
He died in his sleep at a hospital in Princeton, New Jersey, on April 18, 1955, leaving the Generalized Theory of Gravitation unsolved. The only person present at his deathbed, a hospital nurse, said that just before his death he mumbled several words in German that she did not understand. He was cremated without ceremony on the same day he died at Trenton, New Jersey, in accordance with his wishes. His ashes were scattered at an undisclosed location.
His brain was preserved in a jar by Dr. Thomas Stoltz Harvey, the pathologist who performed the autopsy on Einstein. Harvey found nothing unusual in the brain at the time, but in 1999 further analysis by a team at McMaster University revealed that the parietal operculum region was missing and that, perhaps in compensation, the inferior parietal lobe was 15% wider than normal [7]. The inferior parietal region is associated with mathematical thought, visuospatial cognition, and imagery of movement.
Personality
Albert Einstein Stamps
Albert Einstein was much respected for his kind and friendly demeanor rooted in his pacifism. He was modest about his abilities, and had distinctive attitudes and fashions—for example, he minimized his wardrobe so that he would not need to waste time in deciding on what to wear. He occasionally had a playful sense of humor, and enjoyed sailing and playing the violin. He was also the stereotypical "absent-minded professor"; he was often forgetful of everyday items, such as keys, and would focus so intently on solving physics problems that he would often become oblivious to his surroundings.
Religious views
Although he was raised Jewish, he was not a believer in Judaism; rather, he admired the beauty of nature and the universe. In a letter written in English, dated March 24, 1954, Einstein wrote: "It was, of course, a lie what you read about my religious convictions, a lie which is being systematically repeated. I do not believe in a personal God and I have never denied this but have expressed it clearly. If something is in me which can be called religious then it is the unbounded admiration for the structure of the world so far as our science can reveal it."
He also said (in an essay reprinted in Living Philosophies, vol. 13 (1931)): "A knowledge of the existence of something we cannot penetrate, our perceptions of the profoundest reason and the most radiant beauty, which only in their most primitive forms are accessible to our minds - it is this knowledge and this emotion that constitute true religiosity; in this sense, and this alone, I am a deeply religious man."
In response to Rabbi Herbert Goldstein of the International Synagogue in New York, Einstein wrote, "I believe in Spinoza's God who reveals himself in the orderly harmony of what exists, not in a God who concerns himself with the fates and actions of human beings." After being pressed on his religious views by Martin Buber, Einstein exclaimed, "What we [physicists] strive for is just to draw His lines after Him." Summarizing his religious beliefs, he once said: "My religion consists of a humble admiration of the illimitable superior spirit who reveals himself in the slight details we are able to perceive with our frail and feeble mind."
He also expressed admiration for Buddhism, which he said "has the characteristics of what would be expected in a cosmic religion for the future: It transcends a personal God, avoids dogmas and theology; it covers both the natural and the spiritual, and it is based on a religious sense aspiring from the experience of all things, natural and spiritual, as a meaningful unity."
Victor J. Stenger, author of Has Science Found God? (2001), wrote of Einstein's presumed pantheism, "Both deism and traditional Judeo-Christian-Islamic theism must also be contrasted with pantheism, the notion attributed to Baruch Spinoza that the deity is associated with the order of nature or the universe itself. This also crudely summarizes the Hindu view and that of many indigenous religions around the world. When modern scientists such as Einstein and Stephen Hawking mention 'God' in their writings, this is what they seem to mean: that God is Nature."
He was also a great admirer of Mahatma Gandhi and his political views.
Political views
Einstein considered himself a pacifist [8] and humanitarian [9], and in later years, a committed democratic socialist. He once said, "I believe Gandhi's views were the most enlightened of all the political men of our time. We should strive to do things in his spirit: not to use violence in fighting for our cause, but by non-participation in anything you believe is evil." Einstein's views on other issues, including socialism, McCarthyism and racism, were controversial (see Einstein on socialism). Einstein was a co-founder of the liberal German Democratic Party.
The U.S. FBI kept a 1,427-page file on his activities and recommended that he be barred from immigrating to the United States under the Alien Exclusion Act, alleging that Einstein "believes in, advises, advocates, or teaches a doctrine which, in a legal sense, as held by the courts in other cases, 'would allow anarchy to stalk in unmolested' and result in 'government in name only'", among other charges. They also alleged that Einstein "was a member, sponsor, or affiliated with thirty-four communist fronts between 1937 and 1954" and "also served as honorary chairman for three communist organizations."[10]
Einstein opposed tyrannical forms of government, and for this reason (and his Jewish background), opposed the Nazi regime and fled Germany shortly after it came to power. He initially favored construction of the atomic bomb, in order to ensure that Hitler did not do so first, and even sent a letter [11] to President Roosevelt (dated August 2, 1939, before World War II broke out, and likely authored by Leó Szilárd) encouraging him to initiate a program to create a nuclear weapon. Roosevelt responded to this by setting up a committee for the investigation of using uranium as a weapon, which in a few years was superseded by the Manhattan Project.
After the war, though, Einstein lobbied for nuclear disarmament and a world government: "I know not with what weapons World War III will be fought, but World War IV will be fought with sticks and stones."
Einstein was a supporter of Zionism. He supported Jewish settlement of the ancient seat of Judaism and was active in the establishment of the Hebrew University in Jerusalem, which published (1930) a volume titled About Zionism: Speeches and Lectures by Professor Albert Einstein, and to which Einstein bequeathed his papers. However, he opposed nationalism and expressed skepticism about whether a Jewish nation-state was the best solution. He may have imagined Jews and Arabs living peacefully in the same land. In later life he was offered the post of second president of the newly created state of Israel, but declined the offer, claiming that he lacked the necessary people skills.
Einstein, along with Albert Schweitzer and Bertrand Russell, fought against nuclear tests and bombs. As his last public act, and just days before his death, he signed the Russell-Einstein Manifesto, which led to the Pugwash Conferences on Science and World Affairs. His letter to Russell read:
Dear Bertrand Russell,
Thank you for your letter of April 5. I am gladly willing to sign your excellent statement. I also agree with your choice of the prospective signers.
With kind regards, A. Einstein
Popularity and cultural impact
Einstein's popularity has led to widespread use of Einstein in advertising and merchandising, including the registration of "Albert Einstein" as a trademark.
"He who joyfully marches to music in rank and file has already earned my contempt. He has been given a large brain by mistake, since for him the spinal cord would fully suffice. This disgrace to civilization should be done away with at once. Heroism at command, senseless brutality, and all the loathsome nonsense that goes by the name of patriotism, how violently I hate all this, how despicable and ignoble war is; I would rather be torn to shreds than be part of so base an action! It is my conviction that killing under the cloak of war is nothing but an act of murder."
This photo (a detail from the original) of Einstein's humorous expression was taken on his birthday, March 14, 1951 (UPI).
Entertainment
Albert Einstein has become the subject of a number of novels, films and plays, including Nicolas Roeg's film Insignificance, Fred Schepisi's film I.Q., Alan Lightman's novel Einstein's Dreams, and Steve Martin's comedic play "Picasso at the Lapin Agile". He was the subject of Philip Glass's groundbreaking 1976 opera Einstein on the Beach. Since 1978, Einstein's humorous side has been the subject of a live stage presentation, Albert Einstein: The Practical Bohemian, a one-man show performed by actor Ed Metzger.
He is often used as a model for depictions of eccentric scientists in works of fiction; his own character and distinctive hairstyle suggest eccentricity, electricity, or even lunacy and are widely copied or exaggerated.
On Einstein's 72nd birthday in 1951, the UPI photographer Arthur Sasse was trying to coax him into smiling for the camera. Having done this for the photographer many times that day, Einstein stuck out his tongue instead [12]. The image has become an icon in pop culture for its contrast of the genius scientist displaying a moment of levity. Yahoo Serious, an Australian film maker, used the photo as an inspiration for the intentionally anachronistic movie Young Einstein.
Licensing
The Roger Richman Agency, Inc. licenses the commercial use of the name "Albert Einstein" and associated imagery and likenesses of Einstein, as agent for the Hebrew University of Jerusalem. Einstein actively supported the university during his life, and this support continues through the royalties received from licensing activities. As exclusive licensing agent, the agency can block commercial uses of Einstein's name that do not comply with certain standards (e.g., when Einstein's name is used as a trademark, the ™ symbol must be used [13]).
Honors
Einstein has received a number of posthumous honors, including:
100 Years Relativity - Atoms- Quanta, 2005 German Stamp
* In 1999, he was named "Person of the Century" by TIME magazine.
* The year 2005 was designated the "World Year of Physics" by UNESCO, marking the centennial of the "Annus Mirabilis" papers, and was celebrated at the Einstein Symposium.
Among Einstein's many namesakes are:
* a unit used in photochemistry, the einstein.
* the chemical element 99, einsteinium.
* the asteroid 2001 Einstein.
* the Albert Einstein Peace Prize.
* the Albert Einstein College of Medicine of Yeshiva University was named after Einstein upon his death in 1955.