
MY PHYSICS WORLD

EVERYTHING IS ALL ABOUT PHYSICS. THINK PHYSICS, THINK POSSIBILITY!

Friday, June 3, 2011

APPLIED PHYSICS


Applied physics is a general term for physics which is intended for a particular technological or practical use. "Applied" is distinguished from "pure" by a subtle combination of factors such as the motivation and attitude of researchers and the nature of the relationship to the technology or science that may be affected by the work. It usually differs from engineering in that an applied physicist may not be designing something in particular, but rather is using physics or conducting physics research with the aim of developing new technologies or solving an engineering problem. This approach is similar to that of applied mathematics. In other words, applied physics is rooted in the fundamental truths and basic concepts of the physical sciences but is concerned with the utilization of these scientific principles in practical devices and systems. Applied physicists can also be interested in the use of physics for scientific research. For instance, people working on accelerator physics seek to build better accelerators for research in theoretical physics.

Fields and areas of research
* Accelerator physics
* Acoustics
* Agrophysics
* Analog Electronics
* Force microscopy and imaging
* Ballistics
* Biophysics
* Communication Physics
* Computational physics
* Control Theory
* Digital Electronics
* Econophysics
* Engineering physics
* Fiber Optics
* Fluid dynamics
* Geophysics
* Laser physics
* Medical physics
* Metrological Physics
* Microfluidics
* Nanotechnology
* Nondestructive testing
* Nuclear engineering
* Nuclear technology
* Optics
* Optoelectronics
* Photovoltaics
* Plasma physics
* Quantum electronics
* Semiconductor physics and devices
* Soil Physics
* Solid state physics
* Space physics
* Spintronics
* Superconductors
* Vehicle dynamics

Accelerator physics
Accelerator physics deals with the problems of building and operating particle accelerators.
The experiments conducted with particle accelerators are not regarded as part of accelerator physics. These belong (according to the objectives of the experiments) to particle physics, nuclear physics, condensed matter physics, materials physics, etc. as well as to other sciences and technical fields. The types of experiments done at a particular accelerator and/or its other uses are largely constrained by the characteristics of the accelerator itself, such as energy (per particle), types of particles, beam intensity, beam quality, etc.
Accelerator physics itself is the study of the motion of the particle beam through the machine, control and manipulation of the beam, interaction with the machine itself, and measurements of the various parameters associated with particle beams.

Equations of motion
The motion of charged particles through an accelerator is controlled using applied electro-magnetic fields, and the equations of motion may be derived from (or, since in many cases a general solution is not possible, approximated from) relativistic Hamiltonian mechanics. Typically, a separate Hamiltonian is written down for each element (e.g. for a single quadrupole magnet, or accelerating structure) to allow the equations of motion to be solved for this one element. Once this has been done for each element encountered in the machine, the full trajectory of each particle may be calculated for the entire machine.
In many cases a general solution of the full Hamiltonian is not possible, so it is necessary to make approximations. These may take the form of the paraxial approximation (a Taylor series in the dynamical variables, truncated to low order); however, even in the case of strongly non-linear magnetic fields, a Lie transform may be used to construct an integrator with a high degree of accuracy, in which case the paraxial approximation is not necessary.
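As a minimal illustration of this element-by-element approach, the sketch below tracks a single particle through a toy beamline using 2×2 linear (paraxial) transfer matrices for a drift and a thin quadrupole. The lattice, lengths and focal lengths are illustrative values, not taken from any particular machine.

```python
# Minimal sketch: element-by-element linear (paraxial) tracking in one
# transverse plane. Each element is reduced to a 2x2 transfer matrix acting
# on the phase-space vector (x, x'); all names and values are illustrative.
import numpy as np

def drift(L):
    """Transfer matrix of a field-free drift of length L (metres)."""
    return np.array([[1.0, L],
                     [0.0, 1.0]])

def thin_quad(f):
    """Thin-lens quadrupole with focal length f (metres); f > 0 focuses."""
    return np.array([[1.0, 0.0],
                     [-1.0 / f, 1.0]])

# A toy FODO-like line: focusing quad, drift, defocusing quad, drift.
lattice = [thin_quad(2.0), drift(1.0), thin_quad(-2.0), drift(1.0)]

# Propagate a particle with 1 mm offset and zero divergence through the line.
state = np.array([1e-3, 0.0])            # (x [m], x' [rad])
for element in lattice:
    state = element @ state
print(state)                              # position and angle at the exit
```

Real tracking codes extend this idea to the full six-dimensional phase space and to non-linear maps, but the element-by-element composition is the same.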

Diagnostics
Vital components of any accelerator are the diagnostic devices that allow various properties of the particle bunches to be measured.
A typical machine may use many different types of measurement device in order to measure different properties. These include (but are not limited to) Beam Position Monitors (BPMs) to measure the position of the bunch, screens (fluorescent screens, Optical Transition Radiation (OTR) devices) to image the profile of the bunch, wire-scanners to measure its cross-section, and toroids or ICTs to measure the bunch charge (i.e. the number of particles per bunch).
While many of these devices rely on well understood technology, designing a device capable of measuring a beam for a particular machine is a complex task requiring much expertise. Not only is a full understanding of the physics of the operation of the device necessary, but it is also necessary to ensure that the device is capable of measuring the expected parameters of the machine under consideration.
Success of the full range of beam diagnostics often underpins the success of the machine as a whole.

Machine tolerances
Errors in the alignment of components, field strength, etc., are inevitable in machines of this scale, so it is important to consider the tolerances under which a machine may operate.
Engineers will provide the physicists with expected tolerances for the alignment and manufacture of each component to allow full physics simulations of the expected behaviour of the machine under these conditions. In many cases it will be found that the performance is degraded to an unacceptable level, requiring either re-engineering of the components, or the invention of algorithms that allow the machine performance to be 'tuned' back to the design level.
This may require many simulations of different error conditions in order to determine the relative success of each tuning algorithm, and to allow recommendations for the collection of algorithms to be deployed on the real machine.
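A hedged sketch of how such a tolerance study might be organised is shown below: random alignment errors are seeded, a placeholder tuning step is applied, and the fraction of error seeds that still meet a performance target is recorded. The figure of merit, the correction step and the thresholds are all hypothetical stand-ins, not the procedure of any specific laboratory.

```python
# Hedged sketch of a tolerance study: seed random alignment errors, apply a
# (placeholder) tuning algorithm, and record how often the machine meets a
# performance target. All functions and thresholds here are hypothetical.
import random

def simulate_performance(misalignments):
    # Placeholder figure of merit: performance degrades with rms misalignment.
    rms = (sum(m * m for m in misalignments) / len(misalignments)) ** 0.5
    return 1.0 / (1.0 + 1e6 * rms ** 2)

def tune(misalignments):
    # Placeholder correction: an orbit-correction-like step halving each error.
    return [m * 0.5 for m in misalignments]

n_seeds, n_magnets, tolerance = 1000, 50, 100e-6   # 100 micron rms tolerance
successes = 0
for _ in range(n_seeds):
    errors = [random.gauss(0.0, tolerance) for _ in range(n_magnets)]
    if simulate_performance(tune(errors)) > 0.9:    # arbitrary target
        successes += 1
print(f"fraction of seeds meeting target: {successes / n_seeds:.2f}")
```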

Interactions between the beam and the machine
Due to the strong electro-magnetic fields that follow the beam, it is possible for it to interact with any electrical impedance in the walls of the beam pipe. This may be in the form of a resistive impedance (i.e. the finite resistivity of the beam pipe material) or an inductive/capacitive impedance (due to the geometric changes in the beam pipe's cross section).
These impedances will induce so called 'wake-fields' (a strong warping of the electromagnetic field of the beam) that can interact with later particles. Since this interaction may have a negative effect, it must be studied to determine its magnitude, and to determine any actions that may be taken to mitigate it.

Acoustics
Acoustics is the interdisciplinary science that deals with the study of all mechanical waves in gases, liquids, and solids including vibration, sound, ultrasound and infrasound. A scientist who works in the field of acoustics is an acoustician while someone working in the field of acoustics technology may be called an acoustical engineer. The application of acoustics can be seen in almost all aspects of modern society with the most obvious being the audio and noise control industries.
Hearing is one of the most crucial means of survival in the animal world, and speech is one of the most distinctive characteristics of human development and culture. So it is no surprise that the science of acoustics spreads across so many facets of our society—music, medicine, architecture, industrial production, warfare and more. Art, craft, science and technology have provoked one another to advance the whole, as in many other fields of knowledge. Lindsay's 'Wheel of Acoustics' is a well accepted overview of the various fields in acoustics.
The word "acoustic" is derived from the Greek word ἀκουστικός (akoustikos), meaning "of or for hearing, ready to hear" and that from ἀκουστός (akoustos), "heard, audible", which in turn derives from the verb ἀκούω (akouo), "I hear".
The Latin synonym is "sonic", after which the term sonics used to be a synonym for acoustics and later a branch of acoustics. After acousticians had extended their studies to frequencies above and below the audible range, it became conventional to identify these frequency ranges as "ultrasonic" and "infrasonic" respectively, while letting the word "acoustic" refer to the entire frequency range without limit.

History of acoustics
Early research in acoustics
[Figure: The fundamental and the first six overtones of a vibrating string. The earliest records of the study of this phenomenon are attributed to the philosopher Pythagoras in the 6th century BC.]
In the 6th century BC, the Greek philosopher Pythagoras wanted to know why some intervals seemed more beautiful than others, and he found answers in terms of numerical ratios representing the harmonic overtone series on a string. He is reputed to have observed that when the lengths of vibrating strings are expressible as ratios of integers (e.g. 2 to 3, 3 to 4), the tones produced will be harmonious. If, for example, a string sounds the note C when plucked, a string twice as long will sound the same note an octave lower. The tones in between are then given by 16:9 for D, 8:5 for E, 3:2 for F, 4:3 for G, 6:5 for A, and 16:15 for B, in ascending order. Aristotle (384-322 BC) understood that sound consisted of contractions and expansions of the air "falling upon and striking the air which is next to it...", a very good expression of the nature of wave motion. In about 20 BC, the Roman architect and engineer Vitruvius wrote a treatise on the acoustic properties of theatres including discussion of interference, echoes, and reverberation—the beginnings of architectural acoustics.
[Figure: Principles of acoustics have been applied since ancient times: Roman theatre in the city of Amman.]
The physical understanding of acoustical processes advanced rapidly during and after the Scientific Revolution. Galileo Galilei (1564–1642) and, independently, Marin Mersenne (1588–1648) discovered the complete laws of vibrating strings (completing what Pythagoras and the Pythagoreans had started 2000 years earlier). Galileo wrote "Waves are produced by the vibrations of a sonorous body, which spread through the air, bringing to the tympanum of the ear a stimulus which the mind interprets as sound", a remarkable statement that points to the beginnings of physiological and psychological acoustics. Experimental measurements of the speed of sound in air were carried out successfully between 1630 and 1680 by a number of investigators, prominently Mersenne. Meanwhile, Newton (1642–1727) derived the relationship for wave velocity in solids, a cornerstone of physical acoustics (Principia, 1687).

The Age of Enlightenment and onward
The eighteenth century saw major advances in acoustics as mathematicians applied the new techniques of calculus to elaborate theories of sound wave propagation. In the nineteenth century the major figures of mathematical acoustics were Helmholtz in Germany, who consolidated the field of physiological acoustics, and Lord Rayleigh in England, who combined the previous knowledge with his own copious contributions to the field in his monumental work "The Theory of Sound". Also in the 19th century, Wheatstone, Ohm, and Henry developed the analogy between electricity and acoustics.
The twentieth century saw a burgeoning of technological applications of the large body of scientific knowledge that was by then in place. The first such application was Sabine's groundbreaking work in architectural acoustics, and many others followed. Underwater acoustics was used for detecting submarines in the First World War. Sound recording and the telephone played important roles in a global transformation of society. Sound measurement and analysis reached new levels of accuracy and sophistication through the use of electronics and computing. The ultrasonic frequency range enabled wholly new kinds of application in medicine and industry. New kinds of transducers (generators and receivers of acoustic energy) were invented and put to use.

Fundamental concepts of acoustics
[Figure: At the Jay Pritzker Pavilion, a LARES system is combined with a zoned sound reinforcement system, both suspended on an overhead steel trellis, to synthesize an indoor acoustic environment outdoors.]
The study of acoustics revolves around the generation, propagation and reception of mechanical waves and vibrations.

The fundamental acoustical process

The steps of this fundamental acoustical process can be found in any acoustical event or process. There are many kinds of cause, both natural and volitional. There are many kinds of transduction process that convert energy from some other form into acoustic energy, producing the acoustic wave. There is one fundamental equation that describes acoustic wave propagation, but the phenomena that emerge from it are varied and often complex. The wave carries energy throughout the propagating medium. Eventually this energy is transduced again into other forms, in ways that again may be natural and/or volitionally contrived. The final effect may be purely physical or it may reach far into the biological or volitional domains. The five basic steps are found equally well whether we are talking about an earthquake, a submarine using sonar to locate its foe, or a band playing at a rock concert.
The central stage in the acoustical process is wave propagation. This falls within the domain of physical acoustics. In fluids, sound propagates primarily as a pressure wave. In solids, mechanical waves can take many forms including longitudinal waves, transverse waves and surface waves.
Acoustics looks first at the pressure levels and frequencies in the sound wave. Transduction processes are also of special importance.

Wave propagation: pressure levels


[Figure: Spectrogram of a young girl saying "oh, no".]

In fluids such as air and water, sound waves propagate as disturbances in the ambient pressure level. While this disturbance is usually small, it is still noticeable to the human ear. The smallest sound that a person can hear, known as the threshold of hearing, is nine orders of magnitude smaller than the ambient pressure. The loudness of these disturbances is called the sound pressure level (SPL), and is measured on a logarithmic scale in decibels.
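For reference, the decibel scale mentioned above relates an rms acoustic pressure to the conventional reference pressure of 20 micropascals, roughly the threshold of hearing in air. The short sketch below simply evaluates that standard formula, SPL = 20·log10(p/p_ref), for a few illustrative pressures.

```python
# Sound pressure level (SPL) on the standard decibel scale, using the
# conventional reference pressure of 20 micropascals (threshold of hearing in air).
import math

P_REF = 20e-6  # Pa

def spl_db(p_rms):
    """SPL in dB for an rms acoustic pressure p_rms (Pa)."""
    return 20.0 * math.log10(p_rms / P_REF)

print(spl_db(20e-6))   # 0 dB: threshold of hearing
print(spl_db(1.0))     # ~94 dB: a common calibrator level
print(spl_db(20.0))    # ~120 dB: near the threshold of pain
```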

Wave propagation: frequency
Physicists and acoustic engineers tend to discuss sound pressure levels in terms of frequencies, partly because this is how our ears interpret sound. What we experience as "higher pitched" or "lower pitched" sounds are pressure vibrations having a higher or lower number of cycles per second. In a common technique of acoustic measurement, acoustic signals are sampled in time, and then presented in more meaningful forms such as octave bands or time frequency plots. Both these popular methods are used to analyze sound and better understand the acoustic phenomenon.
The entire spectrum can be divided into three sections: audio, ultrasonic, and infrasonic. The audio range falls between 20 Hz and 20,000 Hz. This range is important because its frequencies can be detected by the human ear. This range has a number of applications, including speech communication and music. The ultrasonic range refers to the very high frequencies: 20,000 Hz and higher. This range has shorter wavelengths which allows better resolution in imaging technologies. Medical applications such as ultrasonography and elastography rely on the ultrasonic frequency range. On the other end of the spectrum, the lowest frequencies are known as the infrasonic range. These frequencies can be used to study geological phenomena such as earthquakes.
Analytic instruments such as the Spectrum analyzer facilitate visualization and measurement of acoustic signals and their properties. The Spectrogram produced by such an instrument is a graphical display of the time varying pressure level and frequency profiles which give a specific acoustic signal its defining character.
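The sketch below shows one way such a time-frequency picture can be produced in software, using SciPy's spectrogram routine on a synthetic swept tone; the sample rate, the signal and the window length are illustrative choices rather than part of any standard measurement procedure.

```python
# Hedged sketch: computing a spectrogram (time-varying frequency content) of a
# synthetic test tone with SciPy. All parameters are illustrative.
import numpy as np
from scipy.signal import spectrogram

fs = 8000                                  # sample rate in Hz
t = np.arange(0, 2.0, 1.0 / fs)
# A tone sweeping upward in pitch, standing in for a recorded sound.
signal = np.sin(2 * np.pi * (200 + 300 * t) * t)

f, times, Sxx = spectrogram(signal, fs=fs, nperseg=256)
print(f.shape, times.shape, Sxx.shape)     # frequency bins, time frames, power
```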

Transduction in acoustics
[Figure: An inexpensive low-fidelity 3.5 inch driver, typically found in small radios.]
A transducer is a device for converting one form of energy into another. In an acoustical context, this usually means converting sound energy into electrical energy (or vice versa). For nearly all acoustic applications, some type of acoustic transducer is necessary. Acoustic transducers include loudspeakers, microphones, hydrophones and sonar projectors. These devices convert an electric signal to or from a sound pressure wave. The most widely used transduction principles are electromagnetism, electrostatics and piezoelectricity.

The transducers in most common loudspeakers (e.g. woofers and tweeters), are electromagnetic devices that generate waves using a suspended diaphragm driven by an electromagnetic voice coil, sending off pressure waves. Electret microphones and condenser microphones employ electrostatics. As the sound wave strikes the microphone's diaphragm, it moves and induces a voltage change. The ultrasonic systems used in medical ultrasonography employ piezoelectric transducers. These are made from special ceramics in which elastic vibrations and electrical fields are interlinked through a property of the material itself.

Divisions of acoustics
The listing below shows seventeen major subfields of acoustics established in the PACS classification system. These have been grouped into three domains: physical acoustics, biological acoustics and acoustical engineering.
Physical acoustics
* Aeroacoustics
* General linear acoustics
* Nonlinear acoustics
* Structural acoustics and vibration
* Underwater sound

Biological acoustics
* Bioacoustics
* Musical acoustics
* Physiological acoustics
* Psychoacoustics
* Speech communication (production; perception; processing and communication systems)

Acoustical engineering
* Acoustic measurements and instrumentation
* Acoustic signal processing
* Architectural acoustics
* Environmental acoustics
* Transduction
* Ultrasonics
* Room acoustics

Agrophysics
Agrophysics is a branch of science bordering on agronomy and physics. Its objects of study are the agroecosystem: the biological objects, biotope and biocoenosis affected by human activity, studied and described using the methods of the physical sciences.
Agrophysics is closely related to biophysics, but is restricted to the biology of the plants, animals, soil and atmosphere involved in agricultural activities and biodiversity. It differs from biophysics in the necessity of taking into account the specific features of the biotope and biocoenosis, which involves knowledge of nutritional science and agroecology, agricultural technology, biotechnology, genetics, etc.

Principles of physical sciences
Agrophysics is close to certain fundamental sciences such as biology, whose methods and knowledge it utilizes (especially in the fields of environmental ecology and plant physiology), and physics, from which it acquires its research methods, especially those of physical experiment and modelling.
The scope of agrophysics is not focused solely on the technical problems of agronomy or on the practical implementation of science; these aspects distinguish it from agricultural engineering and provide grounds for classifying agrophysics among the fundamental sciences.
Physical models, closely related to those of biophysics, can address both global and local aspects of the behaviour of the complex ecosystems under study, including energy consumption, food safety, etc.

Principles of history
The needs of agriculture, first the study of the local, complex soil environment and later of soil-plant-atmosphere systems, lay at the root of the emergence of this new branch, agrophysics, which addresses them with the methods of experimental physics. The scope of the branch, starting from soil science (soil physics) and originally limited to the study of relations within the soil environment, expanded over time to cover the properties of agricultural crops and produce, as foods and raw post-harvest materials, and the issues of quality, safety and labelling, considered distinct from the field of nutrition, for application in food science.
A research centre focused on the development of this science is the Institute of Agrophysics, Polish Academy of Sciences, in Lublin. To quote: "Agrophysics, utilizing the achievements of the exact sciences for solving major problems of agriculture, is involved in the study of materials and processes occurring in the production and processing of agricultural crops, with particular emphasis on the condition of the environment and the quality of farming materials and food products."

Analogue electronics
Analogue electronics (or analog in American English) are electronic systems with a continuously variable signal, in contrast to digital electronics where signals usually take only two different levels. The term "analogue" describes the proportional relationship between a signal and a voltage or current that represents the signal. The word analogue is derived from the Greek word ανάλογος (analogos) meaning "proportional".

Analogue signals
An analogue signal uses some attribute of the medium to convey the signal's information. For example, an aneroid barometer uses the angular position of a needle as the signal to convey the information of changes in atmospheric pressure. Electrical signals may represent information by changing their voltage, current, frequency, or total charge. Information is converted from some other physical form (such as sound, light, temperature, pressure, position) to an electrical signal by a transducer which converts one type of energy into another (e.g. a microphone).
The signals take any value from a given range, and each unique signal value represents different information. Any change in the signal is meaningful, and each level of the signal represents a different level of the phenomenon that it represents. For example, suppose the signal is being used to represent temperature, with one volt representing one degree Celsius. In such a system 10 volts would represent 10 degrees, and 10.1 volts would represent 10.1 degrees.
Another method of conveying an analogue signal is to use modulation. In this, some base carrier signal has one of its properties altered: amplitude modulation (AM) involves altering the amplitude of a sinusoidal voltage waveform by the source information, while frequency modulation (FM) changes the frequency. Other techniques, such as phase modulation (changing the phase of the carrier signal), are also used.
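A minimal sketch of the two schemes, assuming nothing more than a sinusoidal carrier and a single-tone message, is given below; the frequencies, modulation depth and modulation index are arbitrary illustrative values.

```python
# Illustrative sketch of amplitude and frequency modulation of a sinusoidal
# carrier by a low-frequency message signal (all values are arbitrary).
import numpy as np

fs = 48000                                   # sample rate, Hz
t = np.arange(0, 0.01, 1.0 / fs)
fc, fm = 5000.0, 200.0                       # carrier and message frequencies, Hz
message = np.sin(2 * np.pi * fm * t)

# AM: the message varies the carrier's amplitude (50% modulation depth).
am = (1.0 + 0.5 * message) * np.sin(2 * np.pi * fc * t)

# FM: the message varies the carrier's instantaneous phase/frequency
# (modulation index of 2.0).
fm_signal = np.sin(2 * np.pi * fc * t + 2.0 * np.sin(2 * np.pi * fm * t))
```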
In an analogue sound recording, the variation in pressure of a sound striking a microphone creates a corresponding variation in the current passing through it or voltage across it. An increase in the volume of the sound causes the fluctuation of the current or voltage to increase proportionally while keeping the same waveform or shape.
Mechanical, pneumatic, hydraulic and other systems may also use analogue signals.

Inherent noise
Analogue systems invariably include noise; that is, random disturbances or variations, some caused by the random thermal vibrations of atomic particles. Since all variations of an analogue signal are significant, any disturbance is equivalent to a change in the original signal and so appears as noise.[5] As the signal is copied and re-copied, or transmitted over long distances, these random variations become more significant and lead to signal degradation. Other sources of noise may include external electrical signals or poorly designed components. These disturbances are reduced by shielding, and using low-noise amplifiers (LNA).

Analogue vs. digital electronics
Since the information is encoded differently in analogue and digital electronics, the way they process a signal is consequently different. All operations that can be performed on an analogue signal such as amplification, filtering, limiting, and others, can also be duplicated in the digital domain. Every digital circuit is also an analogue circuit, in that the behaviour of any digital circuit can be explained using the rules of analogue circuits.
The first electronic devices invented and mass produced were analogue. The use of microelectronics has reduced the cost of digital techniques and now makes digital methods feasible and cost-effective such as in the field of human-machine communication by voice.
The main differences between analogue and digital electronics are listed below:

Noise
Because of the way information is encoded in analogue circuits, they are much more susceptible to noise than digital circuits, since a small change in the signal can represent a significant change in the information present in the signal and can cause the information present to be lost. Since digital signals take on one of only two different values, a disturbance would have to be about one-half the magnitude of the digital signal to cause an error; this property of digital circuits can be exploited to make signal processing noise-resistant. In digital electronics, because the information is quantized, as long as the signal stays inside a range of values, it represents the same information. Digital circuits use this principle to regenerate the signal at each logic gate, lessening or removing noise.

Precision
A number of factors affect how precise a signal is, mainly the noise present in the original signal and the noise added by processing. See signal-to-noise ratio. Fundamental physical limits such as the shot noise in components limit the resolution of analogue signals. In digital electronics additional precision is obtained by using additional digits to represent the signal; the practical limit in the number of digits is determined by the performance of the analogue-to-digital converter (ADC), since digital operations can usually be performed without loss of precision. The ADC takes an analogue signal and changes it into a series of binary numbers. The ADC may be used in simple digital display devices, e.g. thermometers or light meters, but it may also be used in digital sound recording and in data acquisition. Conversely, a digital-to-analogue converter (DAC) is used to change a digital signal to an analogue signal. A DAC takes a series of binary numbers and converts it to an analogue signal. It is common to find a DAC in the gain-control system of an op-amp, which in turn may be used to control digital amplifiers and filters.
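As a rough illustration of how the number of digits (bits) sets the precision of the digital representation, the sketch below quantizes a signal on an ideal uniform grid and reports the step size of one least-significant bit. This is a simplified model of an ideal ADC, not the behaviour of any particular converter.

```python
# Sketch of ideal ADC quantization: more bits give finer resolution, and the
# step size (one least-significant bit) sets the precision floor.
import numpy as np

def quantize(signal, bits, full_scale=1.0):
    """Round a signal in [-full_scale, full_scale] onto a bits-bit grid."""
    levels = 2 ** bits
    step = 2.0 * full_scale / levels                    # one LSB
    return np.round(signal / step) * step, step

x = np.linspace(-1.0, 1.0, 1000)                        # ideal analogue ramp
for bits in (4, 8, 12):
    xq, lsb = quantize(x, bits)
    print(bits, "bits, LSB =", lsb, "max error =", np.max(np.abs(x - xq)))
```

The maximum rounding error is half a step, so every extra bit roughly halves the quantization error, which is the sense in which "additional digits" buy additional precision.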

Design difficulty
Analogue circuits are harder to design, requiring more skill, than comparable digital systems. This is one of the main reasons why digital systems have become more common than analogue devices. An analogue circuit must be designed by hand, and the process is much less automated than for digital systems. However, if a digital electronic device is to interact with the real world, it will always need an analogue interface. For example, every digital radio receiver has an analogue preamplifier as the first stage in the receive chain.

Atomic force microscopy
Atomic force microscopy (AFM) or scanning force microscopy (SFM) is a very high-resolution type of scanning probe microscopy, with demonstrated resolution on the order of fractions of a nanometer, more than 1000 times better than the optical diffraction limit. The precursor to the AFM, the scanning tunneling microscope, was developed by Gerd Binnig and Heinrich Rohrer in the early 1980s at IBM Research - Zurich, a development that earned them the Nobel Prize for Physics in 1986. Binnig, Quate and Gerber invented the first atomic force microscope (also abbreviated as AFM) in 1986. The first commercially available atomic force microscope was introduced in 1989. The AFM is one of the foremost tools for imaging, measuring, and manipulating matter at the nanoscale. The information is gathered by "feeling" the surface with a mechanical probe. Piezoelectric elements that facilitate tiny but accurate and precise movements on (electronic) command enable the very precise scanning. In some variations, electric potentials can also be scanned using conducting cantilevers. In newer more advanced versions, currents can even be passed through the tip to probe the electrical conductivity or transport of the underlying surface, but this is much more challenging with very few research groups reporting reliable data.

Basic principles
[Figure: Electron micrographs of a used AFM cantilever; image widths ~100 micrometers and ~30 micrometers.]
The AFM consists of a cantilever with a sharp tip (probe) at its end that is used to scan the specimen surface. The cantilever is typically silicon or silicon nitride with a tip radius of curvature on the order of nanometers. When the tip is brought into proximity of a sample surface, forces between the tip and the sample lead to a deflection of the cantilever according to Hooke's law. Depending on the situation, forces that are measured in AFM include mechanical contact force, van der Waals forces, capillary forces, chemical bonding, electrostatic forces, magnetic forces (see magnetic force microscope, MFM), Casimir forces, solvation forces, etc. Along with force, additional quantities may simultaneously be measured through the use of specialized types of probe (see scanning thermal microscopy, scanning joule expansion microscopy, photothermal microspectroscopy, etc.). Typically, the deflection is measured using a laser spot reflected from the top surface of the cantilever into an array of photodiodes. Other methods that are used include optical interferometry, capacitive sensing or piezoresistive AFM cantilevers. These cantilevers are fabricated with piezoresistive elements that act as a strain gauge. Using a Wheatstone bridge, strain in the AFM cantilever due to deflection can be measured, but this method is not as sensitive as laser deflection or interferometry.
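Since the cantilever obeys Hooke's law to a good approximation, a measured deflection converts directly into a tip-sample force. The short sketch below applies F = k·x with an illustrative spring constant typical of a soft contact-mode cantilever; the numbers are examples, not values from the text.

```python
# Hooke's-law sketch: converting a measured cantilever deflection into a
# tip-sample force. The spring constant is an illustrative soft-cantilever value.
k = 0.2            # cantilever spring constant, N/m
deflection = 5e-9  # measured deflection, 5 nm

force = k * deflection
print(f"force = {force:.2e} N ({force * 1e9:.2f} nN)")   # 1.0e-09 N = 1 nN
```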
[Figure: Atomic force microscope topographical scan of a glass surface. The micro- and nano-scale features of the glass can be observed, portraying the roughness of the material. The image space is (x, y, z) = 20 µm × 20 µm × 420 nm.]
If the tip were scanned at a constant height, a risk would exist that the tip collides with the surface, causing damage. Hence, in most cases a feedback mechanism is employed to adjust the tip-to-sample distance to maintain a constant force between the tip and the sample. Traditionally, the sample is mounted on a piezoelectric tube that can move the sample in the z direction for maintaining a constant force, and in the x and y directions for scanning the sample. Alternatively a 'tripod' configuration of three piezo crystals may be employed, with each responsible for scanning in the x, y and z directions. This eliminates some of the distortion effects seen with a tube scanner. In newer designs, the tip is mounted on a vertical piezo scanner while the sample is being scanned in x and y using another piezo block. The resulting map of the area z = f(x, y) represents the topography of the sample.
The AFM can be operated in a number of modes, depending on the application. In general, possible imaging modes are divided into static (also called contact) modes and a variety of dynamic (or non-contact) modes where the cantilever is vibrated.

Imaging modes
The primary modes of operation for an AFM are static mode and dynamic mode. In static mode, the cantilever is "dragged" across the surface of the sample and the contours of the surface are measured directly using the deflection of the cantilever. In the dynamic mode, the cantilever is externally oscillated at or close to its fundamental resonance frequency or a harmonic. The oscillation amplitude, phase and resonance frequency are modified by tip-sample interaction forces. These changes in oscillation with respect to the external reference oscillation provide information about the sample's characteristics.

Contact mode
In the static mode operation, the static tip deflection is used as a feedback signal. Because the measurement of a static signal is prone to noise and drift, low stiffness cantilevers are used to boost the deflection signal. However, close to the surface of the sample, attractive forces can be quite strong, causing the tip to "snap-in" to the surface. Thus static mode AFM is almost always done in contact where the overall force is repulsive. Consequently, this technique is typically called "contact mode". In contact mode, the force between the tip and the surface is kept constant during scanning by maintaining a constant deflection.

Non-contact mode
[Figure: AFM in non-contact mode.]
In this mode, the tip of the cantilever does not contact the sample surface. The cantilever is instead oscillated at a frequency slightly above its resonant frequency where the amplitude of oscillation is typically a few nanometers (<10 nm). The van der Waals forces, which are strongest from 1 nm to 10 nm above the surface, or any other long-range force which extends above the surface, act to decrease the resonance frequency of the cantilever. This decrease in resonant frequency combined with the feedback loop system maintains a constant oscillation amplitude or frequency by adjusting the average tip-to-sample distance. Measuring the tip-to-sample distance at each (x,y) data point allows the scanning software to construct a topographic image of the sample surface.

Non-contact mode AFM does not suffer from tip or sample degradation effects that are sometimes observed after taking numerous scans with contact AFM. This makes non-contact AFM preferable to contact AFM for measuring soft samples. In the case of rigid samples, contact and non-contact images may look the same. However, if a few monolayers of adsorbed fluid are lying on the surface of a rigid sample, the images may look quite different. An AFM operating in contact mode will penetrate the liquid layer to image the underlying surface, whereas in non-contact mode an AFM will oscillate above the adsorbed fluid layer to image both the liquid and surface.

Schemes for dynamic mode operation include frequency modulation and the more common amplitude modulation. In frequency modulation, changes in the oscillation frequency provide information about tip-sample interactions. Frequency can be measured with very high sensitivity and thus the frequency modulation mode allows for the use of very stiff cantilevers. Stiff cantilevers provide stability very close to the surface and, as a result, this technique was the first AFM technique to provide true atomic resolution in ultra-high vacuum conditions.[1] In amplitude modulation, changes in the oscillation amplitude or phase provide the feedback signal for imaging. In amplitude modulation, changes in the phase of oscillation can be used to discriminate between different types of materials on the surface. Amplitude modulation can be operated either in the non-contact or in the intermittent contact regime. In dynamic contact mode, the cantilever is oscillated such that the separation distance between the cantilever tip and the sample surface is modulated. Amplitude modulation has also been used in the non-contact regime to image with atomic resolution by using very stiff cantilevers and small amplitudes in an ultra-high vacuum environment.

Tapping mode
Single polymer chains (0.4 nm thick) recorded in a tapping mode under aqueous media with different pH.
In ambient conditions, most samples develop a liquid meniscus layer. Because of this, keeping the probe tip close enough to the sample for short-range forces to become detectable while preventing the tip from sticking to the surface presents a major problem for non-contact dynamic mode in ambient conditions. Dynamic contact mode (also called intermittent contact or tapping mode) was developed to bypass this problem.
In tapping mode, the cantilever is driven to oscillate up and down at near its resonance frequency by a small piezoelectric element mounted in the AFM tip holder, similar to non-contact mode. However, the amplitude of this oscillation is greater than 10 nm, typically 100 to 200 nm. Forces acting on the cantilever when the tip comes close to the surface (van der Waals forces, dipole-dipole interactions, electrostatic forces, etc.) cause the amplitude of the oscillation to decrease as the tip gets closer to the sample. An electronic servo uses the piezoelectric actuator to control the height of the cantilever above the sample. The servo adjusts the height to maintain a set cantilever oscillation amplitude as the cantilever is scanned over the sample. A tapping AFM image is therefore produced by imaging the force of the intermittent contacts of the tip with the sample surface.
This method of "tapping" lessens the damage done to the surface and the tip compared to the amount done in contact mode. Tapping mode is gentle enough even for the visualization of supported lipid bilayers or adsorbed single polymer molecules (for instance, 0.4 nm thick chains of synthetic polyelectrolytes) under liquid medium. With proper scanning parameters, the conformation of single molecules can remain unchanged for hours.

AFM cantilever deflection measurement

[Figure: AFM beam-deflection detection.]

Laser light from a solid-state diode is reflected off the back of the cantilever and collected by a position-sensitive detector (PSD) consisting of two closely spaced photodiodes whose output signal is collected by a differential amplifier. Angular displacement of the cantilever results in one photodiode collecting more light than the other photodiode, producing an output signal (the difference between the photodiode signals normalized by their sum) which is proportional to the deflection of the cantilever. It detects cantilever deflections <10 nm (thermal noise limited). A long beam path (several centimeters) amplifies changes in beam angle.

Force spectroscopy
Another major application of AFM (besides imaging) is force spectroscopy, the direct measurement of tip-sample interaction forces as a function of the gap between the tip and sample (the result of this measurement is called a force-distance curve). For this method, the AFM tip is extended towards and retracted from the surface as the deflection of the cantilever is monitored as a function of piezoelectric displacement. These measurements have been used to measure nanoscale contacts, atomic bonding, Van der Waals forces, and Casimir forces, dissolution forces in liquids and single molecule stretching and rupture forces. Furthermore, AFM was used to measure, in an aqueous environment, the dispersion force due to polymer adsorbed on the substrate. Forces of the order of a few piconewtons can now be routinely measured with a vertical distance resolution of better than 0.1 nanometers. Force spectroscopy can be performed with either static or dynamic modes. In dynamic modes, information about the cantilever vibration is monitored in addition to the static deflection.
Problems with the technique include no direct measurement of the tip-sample separation and the common need for low stiffness cantilevers which tend to 'snap' to the surface. The snap-in can be reduced by measuring in liquids or by using stiffer cantilevers, but in the latter case a more sensitive deflection sensor is needed. By applying a small dither to the tip, the stiffness (force gradient) of the bond can be measured as well.
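The conversion from raw signals to a force-distance curve follows from Hooke's law together with a geometric correction: the tip-sample gap is the piezo displacement adjusted by the cantilever deflection. The sketch below applies these relations to synthetic data; the spring constant, the toy deflection profile and the sign convention are illustrative assumptions (instruments differ in how they define the signs).

```python
# Sketch of turning raw force-spectroscopy data into a force-distance curve:
# force follows Hooke's law, and the tip-sample separation is the piezo
# displacement corrected by the cantilever deflection. Data here are synthetic.
import numpy as np

k = 0.1                                   # spring constant, N/m (illustrative)
z_piezo = np.linspace(20e-9, 0.0, 200)    # nominal gap as the piezo extends, m
# Toy deflection profile: negligible far away, rising steeply near contact.
deflection = np.where(z_piezo < 5e-9, (5e-9 - z_piezo) * 0.8, 0.0)

force = k * deflection                    # Hooke's law, N
separation = z_piezo + deflection         # gap reopened by repulsive bending
# (sign conventions for deflection and separation vary between instruments)
print(force[-1], separation[-1])          # force and gap at closest approach
```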

Identification of individual surface atoms
The AFM can be used to image and manipulate atoms and structures on a variety of surfaces. The atom at the apex of the tip "senses" individual atoms on the underlying surface when it forms incipient chemical bonds with each atom. Because these chemical interactions subtly alter the tip's vibration frequency, they can be detected and mapped. This principle was used to distinguish between atoms of silicon, tin and lead on an alloy surface, by comparing these 'atomic fingerprints' to values obtained from large-scale density functional theory (DFT) simulations.
The trick is to first measure these forces precisely for each type of atom expected in the sample, and then to compare with forces given by DFT simulations. The team found that the tip interacted most strongly with silicon atoms, and interacted 23% and 41% less strongly with tin and lead atoms, respectively. Thus, each different type of atom can be identified in the matrix as the tip is moved across the surface.

Advantages and disadvantages
[Figure: The first atomic force microscope.]
Just like any other tool, an AFM's usefulness has limitations. When determining whether or not analyzing a sample with an AFM is appropriate, there are various advantages and disadvantages that must be considered.

Advantages
AFM has several advantages over the scanning electron microscope (SEM). Unlike the electron microscope which provides a two-dimensional projection or a two-dimensional image of a sample, the AFM provides a three-dimensional surface profile. Additionally, samples viewed by AFM do not require any special treatments (such as metal/carbon coatings) that would irreversibly change or damage the sample. While an electron microscope needs an expensive vacuum environment for proper operation, most AFM modes can work perfectly well in ambient air or even a liquid environment. This makes it possible to study biological macromolecules and even living organisms. In principle, AFM can provide higher resolution than SEM. It has been shown to give true atomic resolution in ultra-high vacuum (UHV) and, more recently, in liquid environments. High resolution AFM is comparable in resolution to scanning tunneling microscopy and transmission electron microscopy.

Disadvantages
A disadvantage of AFM compared with the scanning electron microscope (SEM) is the single-scan image size. In one pass, the SEM can image an area on the order of square millimeters with a depth of field on the order of millimeters, whereas the AFM can only image a maximum height on the order of 10-20 micrometers and a maximum scanning area of about 150×150 micrometers. One method of improving the scanned area size for AFM is by using parallel probes in a fashion similar to that of millipede data storage.
The scanning speed of an AFM is also a limitation. Traditionally, an AFM cannot scan images as fast as a SEM, requiring several minutes for a typical scan, while a SEM is capable of scanning at near real-time, although at relatively low quality. The relatively slow rate of scanning during AFM imaging often leads to thermal drift in the image[9][10] making the AFM microscope less suited for measuring accurate distances between topographical features on the image. However, several fast-acting designs [11][12] were suggested to increase microscope scanning productivity including what is being termed videoAFM (reasonable quality images are being obtained with videoAFM at video rate: faster than the average SEM). To eliminate image distortions induced by thermal drift, several methods have been introduced.
AFM images can also be affected by hysteresis of the piezoelectric material[13] and cross-talk between the x, y, z axes that may require software enhancement and filtering. Such filtering could "flatten" out real topographical features. However, newer AFMs utilize closed-loop scanners which practically eliminate these problems. Some AFMs also use separated orthogonal scanners (as opposed to a single tube) which also serve to eliminate part of the cross-talk problems.
As with any other imaging technique, there is the possibility of image artifacts, which could be induced by an unsuitable tip, a poor operating environment, or even by the sample itself. These image artifacts are unavoidable; however, their occurrence and effect on results can be reduced through various methods.
Due to the nature of AFM probes, they cannot normally measure steep walls or overhangs. Specially made cantilevers and AFMs can be used to modulate the probe sideways as well as up and down (as with dynamic contact and non-contact modes) to measure sidewalls, at the cost of more expensive cantilevers, lower lateral resolution and additional artifacts.

Piezoelectric scanners
AFM scanners are made from piezoelectric material, which expands and contracts proportionally to an applied voltage. Whether they elongate or contract depends upon the polarity of the voltage applied. The scanner is constructed by combining independently operated piezo electrodes for X, Y, and Z into a single tube, forming a scanner which can manipulate samples and probes with extreme precision in 3 dimensions.
Scanners are characterized by their sensitivity which is the ratio of piezo movement to piezo voltage, i.e., by how much the piezo material extends or contracts per applied volt. Because of differences in material or size, the sensitivity varies from scanner to scanner. Sensitivity varies non-linearly with respect to scan size. Piezo scanners exhibit more sensitivity at the end than at the beginning of a scan. This causes the forward and reverse scans to behave differently and display hysteresis between the two scan directions. This can be corrected by applying a non-linear voltage to the piezo electrodes to cause linear scanner movement and calibrating the scanner accordingly.
The sensitivity of piezoelectric materials decreases exponentially with time. This causes most of the change in sensitivity to occur in the initial stages of the scanner’s life. Piezoelectric scanners are run for approximately 48 hours before they are shipped from the factory so that they are past the point where they may have large changes in sensitivity. As the scanner ages, the sensitivity will change less with time and the scanner would seldom require recalibration.

Ballistics
Ballistics (from the Greek βάλλειν, ballein, "to throw") is the science of mechanics that deals with the flight, behavior, and effects of projectiles, especially bullets, gravity bombs, rockets, or the like; the science or art of designing and accelerating projectiles so as to achieve a desired performance.
A ballistic body is a body which is free to move, behave, and be modified in appearance, contour, or texture by ambient conditions, substances, or forces, as by the pressure of gases in a gun, by rifling in a barrel, by gravity, by temperature, or by air particles. A ballistic missile is a missile only guided during the relatively brief initial powered phase of flight, whose course is subsequently governed by the laws of classical mechanics.

Gun ballistics
Gun ballistics concerns the behavior of projectiles from the time of firing to the time of impact with the target. It is often broken down into the following four categories:
* Internal ballistics, (sometimes called interior ballistics) the study of the processes originally accelerating the projectile, for example the passage of a bullet through the barrel of a rifle.
* Transition ballistics, (sometimes called intermediate ballistics) the study of the projectile's behavior when it leaves the barrel and the pressure behind the projectile is equalized.
* External ballistics, (sometimes called exterior ballistics) the study of the passage of the projectile through a medium, most commonly the Earth's atmosphere (see the sketch after this list). [4]
* Terminal ballistics, the study of the interaction of a projectile with its target, whether that be flesh (for a hunting bullet), steel (for an anti-tank round), or even furnace slag (for an industrial slag disruptor).
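As a toy illustration of external ballistics, the sketch below integrates the flight of a point-mass projectile under gravity and a simple quadratic drag force. The mass, launch speed, launch angle and drag coefficient are arbitrary assumptions; real exterior-ballistics work uses measured drag functions rather than this simplified model.

```python
# Toy external-ballistics sketch: a point-mass projectile under gravity and
# quadratic air drag, integrated with simple Euler steps. All parameters are
# illustrative.
import math

m, g = 0.01, 9.81            # projectile mass (kg), gravity (m/s^2)
c_drag = 1e-5                # lumped drag coefficient, N per (m/s)^2
vx = 300.0 * math.cos(math.radians(30))   # muzzle velocity components, m/s
vy = 300.0 * math.sin(math.radians(30))
x, y, dt = 0.0, 0.0, 1e-3

while y >= 0.0:
    v = math.hypot(vx, vy)
    ax = -(c_drag / m) * v * vx           # drag opposes the velocity
    ay = -g - (c_drag / m) * v * vy
    vx, vy = vx + ax * dt, vy + ay * dt
    x, y = x + vx * dt, y + vy * dt

print(f"range with drag: {x:.0f} m")      # compare with ~7950 m in vacuum
```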

Forensic ballistics
Forensic ballistics involves analysis of bullets and bullet impacts to determine information of use to a court or other part of a legal system. Separately from ballistics information, firearm and tool mark examinations ("ballistic fingerprinting") involve analysing firearm, ammunition, and tool mark evidence in order to establish whether a certain firearm or tool was used in the commission of a crime.

Biophysics
Biophysics is an interdisciplinary science that uses the methods of physical science to study biological systems. Studies included under the branches of biophysics span all levels of biological organization, from the molecular scale to whole organisms and ecosystems. Biophysical research shares significant overlap with biochemistry, nanotechnology, bioengineering, agrophysics and systems biology.
Molecular biophysics typically addresses biological questions that are similar to those in biochemistry and molecular biology, but the questions are approached quantitatively. Scientists in this field conduct research concerned with understanding the interactions between the various systems of a cell, including the interactions between DNA, RNA and protein biosynthesis, as well as how these interactions are regulated. A great variety of techniques is used to answer these questions.
Fluorescent imaging techniques, as well as electron microscopy, x-ray crystallography, NMR spectroscopy and atomic force microscopy (AFM) are often used to visualize structures of biological significance. Conformational change in structure can be measured using techniques such as dual polarisation interferometry and circular dichroism. Direct manipulation of molecules using optical tweezers or AFM can also be used to monitor biological events where forces and distances are at the nanoscale. Molecular biophysicists often consider complex biological events as systems of interacting units which can be understood through statistical mechanics, thermodynamics and chemical kinetics. By drawing knowledge and experimental techniques from a wide variety of disciplines, biophysicists are often able to directly observe, model or even manipulate the structures and interactions of individual molecules or complexes of molecules.
In addition to traditional (i.e. molecular and cellular) biophysical topics like structural biology or enzyme kinetics, modern biophysics encompasses an extraordinarily broad range of research, from bioelectronics to quantum biology. It is becoming increasingly common for biophysicists to apply the models and experimental techniques derived from physics, as well as mathematics and statistics, to larger systems such as tissues, organs, populations and ecosystems.

Focus as a subfield
Biophysics often does not have university-level departments of its own, but has presence as groups across departments within the fields of molecular biology, biochemistry, chemistry, computer science, mathematics, medicine, pharmacology, physiology, physics, and neuroscience. What follows is a list of examples of how each department applies its efforts toward the study of biophysics. This list is hardly all inclusive. Nor does each subject of study belong exclusively to any particular department. Each academic institution makes its own rules and there is much overlap between departments.
* Biology and molecular biology - Almost all forms of biophysics efforts are included in some biology department somewhere. To include some: gene regulation, single protein dynamics, bioenergetics, patch clamping, biomechanics.
* Structural biology - Ångstrom-resolution structures of proteins, nucleic acids, lipids, carbohydrates, and complexes thereof.
* Biochemistry and chemistry - biomolecular structure, siRNA, nucleic acid structure, structure-activity relationships.
* Computer science - Neural networks, biomolecular and drug databases.
* Computational chemistry - molecular dynamics simulation, molecular docking, quantum chemistry
* Bioinformatics - sequence alignment, structural alignment, protein structure prediction
* Mathematics - graph/network theory, population modeling, dynamical systems, phylogenetics.
* Medicine and neuroscience - tackling neural networks experimentally (brain slicing) as well as theoretically (computer models), membrane permittivity, gene therapy, understanding tumors.
* Pharmacology and physiology - channel biology, biomolecular interactions, cellular membranes, polyketides.
* Physics - biomolecular free energy, stochastic processes, covering dynamics.
* Quantum biophysics involves quantum information processing of coherent states, entanglement between coherent protons and transcriptase components, and replication of decohered isomers to yield time-dependent base substitutions. These studies imply applications in quantum computing.
* Agronomy and agriculture
Many biophysical techniques are unique to this field. Research efforts in biophysics are often initiated by scientists who were traditional physicists, chemists, and biologists by training.

Engineering physics
Engineering physics or engineering science is a multidisciplinary and interdisciplinary field that combines the physical sciences with traditional engineering disciplines such as aerospace engineering, electrical engineering, or mechanical engineering. Unlike traditional engineering disciplines, engineering science/physics is not necessarily confined to a particular branch of science or physics. Instead, engineering science/physics is meant to provide a more thorough grounding in applied physics for a selected specialty such as optics, materials science, applied mechanics, nanotechnology, microfabrication, mechanical engineering, electrical engineering, control theory, aerodynamics, energy or solid-state physics. It is the discipline devoted to creating and optimizing engineering solutions through enhanced understanding and integrated application of mathematical, scientific, statistical, and engineering principles. The discipline is also meant for cross-functionality and bridges the gap between theoretical science and practical engineering with emphasis in research and development, design, and analysis.
Engineering physics or engineering science degrees are respected academic degrees awarded in many countries. It is notable that in many languages the term for "engineering physics" would be directly translated into English as "technical physics". In some countries, both what would be translated as "engineering physics" and what would be translated as "technical physics" are disciplines leading to academic degrees, with the former specializing in nuclear power research and the latter being closer to engineering physics. In some institutions, an engineering (or applied) physics major is a discipline or specialization within the scope of engineering science, or applied science.
In many universities, engineering science programs may be offered at the levels of B.Tech, B.Sc., M.Sc. and Ph.D. Usually, a core of basic and advanced courses in mathematics, physics, chemistry, and biology forms the foundation of the curriculum, while typical elective areas may include fluid dynamics, solid mechanics, operations research, information technology and engineering, dynamical systems, bioengineering, environmental engineering, computational engineering, engineering mathematics and statistics, solid-state devices, materials science, electromagnetism, nanoscience, nanotechnology, energy, and optics. While typical undergraduate engineering programs generally focus on the application of established methods to the design and analysis of engineering solutions, an undergraduate program in engineering science focuses on the creation and use of more advanced experimental or computational techniques where standard approaches are inadequate (i.e., development of engineering solutions to contemporary problems in the physical and life sciences by applying fundamental principles). Due to the rigorous nature of the academic curriculum, an undergraduate major in engineering science is an honors program at some universities such as the University of Toronto and Pennsylvania State University.

Geophysics
Geophysics is the physics of the Earth and its environment in space. Its subjects include the shape of the Earth, its gravitational and magnetic fields, the dynamics of the Earth as a whole and of its component parts, the Earth's internal structure, composition and tectonics, the generation of magmas, volcanism and rock formation, the hydrological cycle including snow and ice, all aspects of the oceans, the atmosphere, ionosphere, magnetosphere and solar-terrestrial relations, and analogous problems associated with the Moon and other planets.
Geophysics is also applied to societal needs, such as mineral resources, mitigation of natural hazards and environmental protection. Geophysical survey data are used to analyze potential petroleum reservoirs and mineral deposits, to locate groundwater, to locate archaeological finds, to find the thicknesses of glaciers and soils, and for environmental remediation.
The gravitational pull of the Moon and Sun give rise to two high tides and two low tides every lunar day, or every 24 hours and 50 minutes. Therefore, there is a gap of 12 hours and 25 minutes between every high tide and between every low tide.[6]
Gravitational forces make rocks press down on deeper rocks, increasing their density as the depth increases. Measurements of gravitational acceleration and gravitational potential at the Earth's surface and above it can be used to look for mineral deposits (see also gravity anomaly and gravimetry). They also reflect the dynamics of tectonic plates. The geopotential surface called the geoid is one definition of the shape of the Earth. The geoid would be the global mean sea level if the oceans were in equilibrium and could be extended through the continents (such as with very narrow canals).

Heat flow
The Earth is cooling, and the resulting heat flow generates the Earth's magnetic field through the geodynamo and plate tectonics through mantle convection. The main sources of heat are the primordial heat and radioactivity, although there are also contributions from phase transitions. Heat is mostly carried to the surface by thermal convection, although there are two thermal boundary layers - the core-mantle boundary and the lithosphere - in which heat is transported by conduction. Some heat is carried up from the bottom of the mantle by mantle plumes. The heat flow at the Earth's surface is about 4.2 × 10¹³ W, and it is a potential source of geothermal energy.
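The quoted total can be checked with a one-line estimate: multiplying a representative mean surface heat flux by the Earth's surface area. The mean flux of about 82 mW/m² used below is an assumed round number, not a value from this text.

    import math

    # Back-of-the-envelope check of the Earth's total surface heat flow.
    # The mean heat flux (~82 mW/m^2) is an assumed representative value.
    mean_heat_flux = 0.082          # W/m^2
    earth_radius = 6.371e6          # m
    surface_area = 4 * math.pi * earth_radius**2
    total_heat_flow = mean_heat_flux * surface_area
    print("surface area: %.2e m^2" % surface_area)       # ~5.1e14 m^2
    print("total heat flow: %.2e W" % total_heat_flow)   # ~4.2e13 W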

Vibrations
Seismic waves are vibrations that travel through the Earth's interior or along its surface. The entire Earth can also oscillate in forms that are called normal modes. Ground motions from waves or normal modes are measured using seismographs. If the waves come from a localized source such as an earthquake or explosion, measurements at more than one location can be used to locate the source. The locations of earthquakes provide information on plate tectonics and mantle convection.
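A related single-station estimate uses the lag between the P-wave and S-wave arrivals: since the faster P waves arrive first, the lag grows with distance to the source. The velocities in the sketch below are assumed representative values, not figures from this text.

    # Single-station distance estimate from the P-minus-S arrival-time lag.
    # The wave speeds are assumed representative crustal/upper-mantle values.
    def epicentral_distance_km(ts_minus_tp_s, vp_km_s=8.0, vs_km_s=4.5):
        # Both waves travel the same distance d:
        #   d/vs - d/vp = lag  =>  d = lag * vp*vs / (vp - vs)
        return ts_minus_tp_s * vp_km_s * vs_km_s / (vp_km_s - vs_km_s)

    print(epicentral_distance_km(30.0))  # ~309 km for a 30 s lag

Distances estimated this way at three or more stations can then be intersected to triangulate the source, which is the multi-station location just described.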
Seismic waves can also provide information on the region that the waves travel through. If the density or composition of the rock changes suddenly, some of the waves are reflected. Reflections can provide information on near-surface structure. Changes in the travel direction, called refraction, can be used to infer the deep structure of the Earth.
Earthquakes pose a risk to humans. Understanding their mechanisms, which depend on the type of earthquake (e.g., intraplate or deep focus), can lead to better estimates of earthquake risk and improvements in earthquake engineering.

Radioactivity
Further information: Radiometric dating and geotherm
[Figure: example of a radioactive decay chain (see Radiometric dating).]
Radioactive decay, in addition to being the main source of heat in the Earth (see geotherm), is an invaluable tool for geochronology. Unstable isotopes decay at predictable rates, and the decay rates of different isotopes cover several orders of magnitude, so radioactive decay can be used to accurately date both recent events and events in past geologic eras.
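As a minimal sketch of the idea, assuming that no daughter atoms were present initially, the age follows directly from the exponential decay law. The rubidium-87 half-life below is an approximate literature value and the measured ratio is invented for illustration.

    import math

    # Age from the radioactive decay law N(t) = N0 * exp(-lambda * t),
    # assuming no daughter atoms were present when the rock formed.
    def age_years(daughter_per_parent, half_life_years):
        decay_const = math.log(2) / half_life_years
        return math.log(1.0 + daughter_per_parent) / decay_const

    # Example with rubidium-87 -> strontium-87 (half-life ~48.8 Gyr, an
    # approximate literature value) and an assumed measured ratio of 0.01:
    print("%.2e years" % age_years(0.01, 48.8e9))   # ~7.0e8 years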

Electricity
Although we mainly notice electricity during thunderstorms, there is always a downward electric field near the surface that averages 120 V m⁻¹.[8] Relative to the solid Earth, the atmosphere has a net positive charge due to bombardment by cosmic rays. A current of about 1800 A flows in the global circuit.[8] It flows downward from the ionosphere over most of the Earth and back upwards through thunderstorms. The flow is manifested by lightning below the clouds and sprites above.
A variety of electric methods are used in geophysical survey. Some measure spontaneous potential, a potential that arises in the ground because of man-made or natural disturbances. Telluric currents flow in Earth and the oceans. They have two causes: electromagnetic induction by the time-varying, external-origin geomagnetic field and motion of conducting bodies (such as seawater) across the Earth's permanent magnetic field.[9] The distribution of telluric current density can be used to detect variations in electrical resistivity of underground structures. Geophysicists can also provide the electric current themselves (see induced polarization and electrical resistivity tomography).
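A basic quantity in such surveys is the apparent resistivity obtained from an injected current and the measured potential difference. The sketch below uses the standard Wenner-array formula; the spacing and readings are invented for illustration.

    import math

    # Apparent resistivity for a Wenner electrode array (standard formula);
    # the spacing, voltage and current below are made-up illustrative values.
    def wenner_apparent_resistivity(spacing_m, voltage_v, current_a):
        return 2 * math.pi * spacing_m * voltage_v / current_a

    print(wenner_apparent_resistivity(10.0, 0.25, 0.1), "ohm-m")  # ~157 ohm-m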

Electromagnetic waves
Electromagnetic waves occur in the ionosphere and magnetosphere as well as the Earth's outer core. Dawn chorus is caused by high-energy electrons that get caught in the Van Allen radiation belt. Whistlers are produced by lightning strikes. Hiss may be generated by both. Electromagnetic waves may also be generated by earthquakes (see seismo-electromagnetics).
In the Earth's outer core, electric currents in the highly conductive liquid iron create magnetic fields by electromagnetic induction (see geodynamo). Alfvén waves are magnetohydrodynamic waves in the magnetosphere or the Earth's core. In the core, they probably have little observable effect on the geomagnetic field, but slower waves such as magnetic Rossby waves may be one source of geomagnetic secular variation.
Electromagnetic methods that are used for geophysical survey include transient electromagnetics and magnetotellurics.

Magnetism
Further information: Geomagnetism and paleomagnetism
[Figure: the variation between magnetic north and "true" north (see Earth's magnetic field).]
The Earth's magnetic field protects the Earth from the deadly solar wind and has long been used for navigation. It originates in the fluid motions of the Earth's outer core (see geodynamo). The magnetic field in the upper atmosphere gives rise to the auroras.
The Earth's field is roughly like a tilted dipole, but it changes over time (a phenomenon called geomagnetic secular variation). Mostly the geomagnetic pole stays near the geographic pole, but at random intervals averaging a million years or so, the polarity of the Earth's field reverses. These geomagnetic reversals are recorded in rocks (see natural remanent magnetization) and their signature can be seen as parallel linear magnetic anomaly stripes on the seafloor. These stripes provide quantitative information on seafloor spreading, a part of plate tectonics. In addition, the magnetization in rocks can be used to measure the motion of continents.
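As a minimal sketch of the tilted-dipole approximation, the surface field strength can be estimated from the dipole formula; the dipole moment used below is an approximate present-day value.

    import math

    # Surface field strength of a centred-dipole approximation to the
    # geomagnetic field. The dipole moment is an approximate modern value.
    MU0_OVER_4PI = 1e-7          # T m / A
    DIPOLE_MOMENT = 8.0e22       # A m^2, approximate
    EARTH_RADIUS = 6.371e6       # m

    def dipole_field_tesla(magnetic_latitude_deg, r=EARTH_RADIUS):
        lam = math.radians(magnetic_latitude_deg)
        return (MU0_OVER_4PI * DIPOLE_MOMENT / r**3) * math.sqrt(1 + 3 * math.sin(lam)**2)

    print("equator: %.1f microtesla" % (dipole_field_tesla(0) * 1e6))   # ~31
    print("pole:    %.1f microtesla" % (dipole_field_tesla(90) * 1e6))  # ~62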

Fluid dynamics
Main article: Geophysical fluid dynamics
Fluid motions occur in the magnetosphere, atmosphere, ocean, mantle and core. Even the mantle, though it has an enormous viscosity, flows like a fluid over long time intervals (see geodynamics). This flow is reflected in phenomena such as isostasy, post-glacial rebound and mantle plumes. The mantle flow drives plate tectonics and the flow in the Earth's core drives the geodynamo.
Geophysical fluid dynamics is a primary tool in physical oceanography and meteorology. The rotation of the Earth has profound effects on the Earth's fluid dynamics, often through the Coriolis effect. In the atmosphere it gives rise to large-scale patterns like Rossby waves and determines the basic circulation patterns of storms; in the ocean it drives large-scale circulation patterns as well as Kelvin waves and Ekman spirals at the ocean surface. In the Earth's core, the circulation of the molten iron is structured by Taylor columns.
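One way to quantify how strongly rotation controls a flow is the Rossby number, the ratio of inertial to Coriolis accelerations. In the sketch below the flow speed and length scale are assumed values, typical of a mid-latitude weather system.

    import math

    # Coriolis parameter f = 2*Omega*sin(latitude) and the Rossby number
    # Ro = U / (f*L). A small Ro means rotation dominates the dynamics.
    # The flow speed and length scale are assumed illustrative values.
    OMEGA = 7.292e-5   # Earth's rotation rate, rad/s

    def coriolis_parameter(latitude_deg):
        return 2 * OMEGA * math.sin(math.radians(latitude_deg))

    def rossby_number(speed_m_s, length_m, latitude_deg):
        return speed_m_s / (coriolis_parameter(latitude_deg) * length_m)

    print(rossby_number(10.0, 1.0e6, 45.0))  # ~0.1 for a mid-latitude weather system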
Waves and other phenomena in the magnetosphere can be modeled using magnetohydrodynamics.

Condensed matter physics
The physical properties of minerals must be understood to infer the composition of the Earth's interior from seismology, the geothermal gradient and other sources of information. Mineral physicists study the elastic properties of minerals as well as their high-pressure phase diagrams, melting points and equations of state at high pressure. Studies of creep determine how rocks that are brittle at the surface can flow deep down. These properties determine the rheology that determines the geodynamics.
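A commonly used equation of state in this context is the third-order Birch-Murnaghan form, sketched below with assumed illustrative parameters that are roughly of the order expected for a lower-mantle silicate.

    # Third-order Birch-Murnaghan equation of state, a standard form used in
    # mineral physics; the parameters below are assumed illustrative values.
    def birch_murnaghan_pressure(v_over_v0, k0_gpa=250.0, k0_prime=4.0):
        x = (1.0 / v_over_v0) ** (1.0 / 3.0)   # (V0/V)^(1/3)
        return (1.5 * k0_gpa * (x**7 - x**5)
                * (1.0 + 0.75 * (k0_prime - 4.0) * (x**2 - 1.0)))

    # Compressing the mineral to 90% of its zero-pressure volume:
    print("%.1f GPa" % birch_murnaghan_pressure(0.90))   # ~33 GPa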
Water is a very complex substance and its unique properties are essential for life. Its physical properties shape the hydrosphere and are an essential part of the water cycle and climate. Its thermodynamic properties determine evaporation and the thermal gradient in the atmosphere. The many types of precipitation involve a complex mixture of processes such as coalescence, supercooling and supersaturation. Some of the precipitated water becomes groundwater, and groundwater flow includes phenomena such as percolation, while the conductivity of water makes electrical and electromagnetic methods useful for tracking groundwater flow. Physical properties of water such as salinity have a large effect on its motion in the oceans.
The many phases of ice form the cryosphere and come in forms like ice sheets, glaciers, sea ice, freshwater ice, snow, and frozen ground (or permafrost).

Regions of the Earth
Size and form of the Earth
The Earth is roughly spherical, but it bulges towards the Equator, so it is roughly in the shape of an ellipsoid (see Earth ellipsoid). This bulge is due to its rotation and is nearly consistent with an Earth in hydrostatic equilibrium. The detailed shape of the Earth, however, is also affected by the distribution of continents and ocean basins, and to some extent by the dynamics of the plates.[12]
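The size of the bulge is usually expressed as the flattening of the reference ellipsoid, f = (a − b)/a; the short check below uses the WGS84 semi-axes.

    # Flattening of the reference ellipsoid, f = (a - b) / a.
    # The semi-axes below are the WGS84 values.
    a = 6378.137   # equatorial radius, km
    b = 6356.752   # polar radius, km
    f = (a - b) / a
    print("flattening = %.6f (about 1/%.0f)" % (f, 1 / f))   # ~1/298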
Structure of the Earth
[Figure: mapping the interior of the Earth with earthquake waves.]
Evidence from seismology, heat flow at the surface, and mineral physics is combined with the Earth's mass and moment of inertia to infer models of the Earth's interior - its composition, density, temperature and pressure. The Earth's mass is M = 5.975 × 10²⁴ kg and its mean radius is R = 6371 km, so its mean specific gravity is ⟨ρ⟩ = 5.515. This is substantially higher than the typical specific gravity (2.7–3.3) of rocks at the surface. Its moment of inertia is 0.33 MR², whereas it would be 0.4 MR² if the Earth were a sphere of constant density. Both lines of evidence point to a concentration of mass near the center. However, the density of rock also increases with depth because of the increasing pressure. To determine how large this effect is, the Adams–Williamson equation is used to determine how density increases with pressure. The conclusion is that pressure alone cannot account for the increase in density; instead, the Earth's core must be made of a denser material, an alloy of iron and other elements.
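The quoted mean specific gravity follows directly from the mass and radius, as the short check below shows.

    import math

    # Check of the mean density (specific gravity) quoted above.
    M = 5.975e24        # kg, Earth's mass
    R = 6.371e6         # m, mean radius
    volume = (4.0 / 3.0) * math.pi * R**3
    mean_density = M / volume                                 # kg/m^3
    print("mean density: %.0f kg/m^3" % mean_density)         # ~5516
    print("specific gravity: %.3f" % (mean_density / 1000))   # ~5.52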
Reconstructions of seismic waves in the deep interior of the Earth show that there are no S-waves in the outer core. This indicates that the outer core is liquid, because liquids cannot support shear. The motion of this highly conductive fluid generates the Earth's magnetic field (see geodynamo). The inner core, however, is solid because of the enormous pressure.
Reconstruction of seismic reflections in the deep interior indicate some major discontinuities in seismic velocities that demarcate the major zones of the Earth: inner core, outer core, mantle, lithosphere and crust. The mantle itself is divided into the upper mantle, transition zone, lower mantle and D′′ layer. Between the crust and the mantle is the Mohorovičić discontinuity.
The seismic model of the Earth does not by itself determine the composition of the layers. For a complete model of the Earth, mineral physics is needed to interpret seismic velocities in terms of composition. The mineral properties are temperature-dependent, so the geotherm must also be determined. This requires physical theory for thermal conduction and convection and the heat contribution of radioactive elements. The main model for the radial structure of the interior of the Earth is the Preliminary Reference Earth Model (PREM). Some parts of this model have been updated by recent findings in mineral physics (see post-perovskite) and supplemented by seismic tomography. The mantle is mainly composed of silicates, and the boundaries between layers of the mantle are probably due to phase transitions.
The mantle acts as a solid for seismic waves, but under high pressures and temperatures it deforms so that over millions of years it acts like a liquid. This makes plate tectonics possible. Geodynamics is the study of the fluid flow in the mantle and core.

The magnetosphere
If a planet's magnetic field is strong enough, its interaction with the solar wind forms a magnetosphere that deflects the wind around the planet. Early space probes mapped out the gross dimensions of the terrestrial magnetic field, which extends about 10 Earth radii towards the Sun. The solar wind, a stream of charged particles, flows out and around the terrestrial magnetic field and draws it out into a magnetotail that extends hundreds of Earth radii downstream. Inside the magnetosphere there are relatively dense regions of solar wind particles, the Van Allen radiation belts.

Other fields and related disciplines
Fields
* Geodesy, measurement of the Earth: GPS, vertical and horizontal motions of the Earth's surface, navigation, the study of the Earth's gravitational field, and the size and form of the Earth
* The study of large-scale motions of the Earth's surface and interior, including:

* Tectonophysics, the study of the physical processes that cause and result from plate tectonics
* Geodynamics, the study of modes of transport within the Earth: rock deformation, mantle flow and convection, heat flow, lithosphere dynamics

* Geomagnetism, the study of the Earth's magnetic field, including its origin, telluric currents driven by the magnetic field, the Van Allen belts, and the interaction between the magnetosphere and the solar wind. This field is associated with paleomagnetism, or the measurement of the orientation of the Earth's magnetic field over the geologic past.
* Seismology, the study of the structure and composition of the Earth through seismic waves, and of surface deformations during earthquakes and seismic hazards
* Mathematical geophysics, the development and application of mathematical methods and techniques for the solution of geophysical problems
* Geophysical surveying:

* Exploration and engineering geophysics, using surface methods to detect or infer the presence and position of concentrations of ore minerals and hydrocarbons
* Archaeological geophysics, for archaeological imaging or mapping
* Environmental and engineering geophysics, for locating underground storage tanks (USTs), utilities and unexploded ordnance (UXO), delineating landfills, locating voids or potential subsidence, finding the depth to bedrock and its P-wave or S-wave velocity or rippability, and tracing pathways of groundwater movement
* Shallow seismology, used in exploration geophysics (to find oil and gas) and for environmental characterization of the subsurface

Related disciplines
* Volcanology, the study of volcanoes, volcanic features (hot springs, geysers, fumaroles), volcanic rock, and heat flow related to volcanoes
* Atmospheric sciences, which include:

* Atmospheric electricity and the ionosphere
* Aeronomy, the study of the physical structure and chemistry of the atmosphere.
* Meteorology and Climatology, which both involve studies of the weather.

* The study of water on the Earth: hydrology, physical oceanography and glaciology
* Geological and geophysical engineering and Engineering geology, applying geophysics to the engineering design of facilities including roads, tunnels, and mines
* The study of the rocks and minerals, including petrophysics and aspects of mineralogy such as physical mineralogy and crystal structure

Methods of geophysics
Space probes
Space probes made it possible to collect data not only in the visible-light region but also in other parts of the electromagnetic spectrum. The planets can be characterized by their force fields: their gravity and their magnetic fields, which are studied through geophysics and space physics.
Measuring the changes in acceleration experienced by spacecraft as they orbit has allowed fine details of the planets' gravity fields to be mapped. For example, in the 1970s, the gravity field disturbances above the lunar maria were measured by lunar orbiters, which led to the discovery of concentrations of mass, mascons, beneath the Imbrium, Serenitatis, Crisium, Nectaris and Humorum basins.
In 2002, NASA launched the Gravity Recovery and Climate Experiment (GRACE), in which twin satellites map variations in Earth's gravity field by measuring the distance between them using GPS and a microwave ranging system. Gravity variations detected by GRACE include those caused by changes in ocean currents, runoff and groundwater depletion, and melting ice sheets and glaciers.

Medical physics
Medical physics is a division of Healthcare science concerning the application of physics to medicine. It generally concerns physics as applied to medical imaging and radiotherapy, although a medical physicist may also work in many other areas of healthcare. A medical physics department may be based in either a hospital or a university and its work is likely to include research, technical development, and clinical healthcare.
Of the large body of medical physicists working in academia and clinics, roughly 85% practice or specialize in various forms of therapy, 10% in diagnostic imaging, and 5% in nuclear medicine. However, areas of specialty in medical physics vary widely in scope and breadth.

Areas of specialty
Medical imaging
* Diagnostic radiology, including x-rays, fluoroscopy, mammography, dual energy X-ray absorptiometry, angiography and computed tomography
* Ultrasound, including intravascular ultrasound
* Non-ionizing radiation (Lasers, Ultraviolet etc.)
* Nuclear medicine, including single photon emission computed tomography (SPECT) and positron emission tomography (PET)
* Magnetic resonance imaging (MRI), including functional magnetic resonance imaging (fMRI) and other methods for functional neuroimaging of the brain.
o For example, magnetic resonance imaging is based on nuclear magnetic resonance (the word "nuclear" is usually dropped to avoid the common concerns about radiation) and uses the phenomenon of nuclear resonance to image the human body.
* Magnetoencephalography
* Electrical impedance tomography
* Diffuse optical imaging
* Optical coherence tomography

Treatment of disease
* Defibrillation
* High intensity focussed ultrasound, including lithotripsy
* Interventional radiology
* Non-ionising radiation (Lasers, Ultraviolet etc.), including photodynamic therapy and LASIK
* Nuclear medicine, including unsealed source radiotherapy
* Photomedicine, the use of light to treat and diagnose disease
* Radiotherapy
o TomoTherapy
o Cyberknife
o Gamma knife
o Proton therapy
o Brachytherapy
o Boron Neutron Capture Therapy
* Sealed source radiotherapy
* Terahertz radiation

Physiological measurement techniques
[Figure: ECG trace.]
Physiological measurement techniques are used to monitor and measure various physiological parameters. Many of them are non-invasive and can be used in conjunction with, or as an alternative to, invasive methods.
* Electrocardiography
* Electromyography
* Electroencephalography
* Electronystagmography
* Endoscopy
* Medical ultrasonography
* Non-ionising radiation (Lasers, Ultraviolet etc.)
* Near infrared spectroscopy
* Pulse oximetry
* Blood gas monitor
* Blood pressure measurement

Radiation protection
* Background radiation
* Dosimetry
* Health Physics
* Radiological Protection of Patients

Medical computing and mathematics
[Figure: CT image reconstruction.]

* Medical informatics
* Telemedicine
* Picture archiving and communication systems (PACS)
* DICOM
* Tomographic reconstruction, an ill-posed inverse problem (see the sketch below)
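As a minimal illustration of why tomographic reconstruction is treated as an inverse problem, the sketch below recovers a tiny 2x2 "image" from its ray sums with the Kaczmarz (algebraic reconstruction) iteration. The image and the ray geometry are invented for the example; real CT systems typically use filtered back-projection or more sophisticated iterative solvers.

    import numpy as np

    # Kaczmarz / algebraic reconstruction technique (ART) on a toy problem:
    # recover a flattened 2x2 "image" x from a few ray sums b = A @ x.
    # The image and the projection matrix are made up for the example.
    true_image = np.array([1.0, 2.0, 3.0, 4.0])   # pixels (row-major 2x2)
    A = np.array([
        [1, 1, 0, 0],   # sum of the top row
        [0, 0, 1, 1],   # sum of the bottom row
        [1, 0, 1, 0],   # sum of the left column
        [0, 1, 0, 1],   # sum of the right column
        [1, 0, 0, 1],   # main-diagonal sum
    ], dtype=float)
    b = A @ true_image

    x = np.zeros(4)                       # start from an empty image
    for _ in range(200):                  # sweep repeatedly over all rays
        for a_i, b_i in zip(A, b):
            # Project x onto the hyperplane defined by this ray measurement.
            x += (b_i - a_i @ x) / (a_i @ a_i) * a_i

    print(x.round(3))   # approaches [1, 2, 3, 4]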

Education and training
In North America
In North America, medical physics training is offered at a master's, doctorate, post-doctorate and/or residency levels. Several universities offer these degrees in Canada and the United States.
As of October 2010, twenty-seven universities in North America have medical physics graduate programs that are accredited by The Commission on Accreditation of Medical Physics Education Programs (CAMPEP). The same organization has accredited forty-three medical physics clinical residency programs.
Professional certification is obtained from the American Board of Radiology, the American Board of Medical Physics, the American Board of Science in Nuclear Medicine, and the Canadian College of Physicists in Medicine. As of 2012, enrollment in a CAMPEP-accredited residency or graduate program is required to start the ABR certification process. Starting in 2014, completion of a CAMPEP-accredited residency will be required to advance to part 2 of the ABR certification process.

In the United Kingdom
The person concerned must first gain a first or upper second-class honours degree in a physical or engineering science subject before they can start the Part I medical physics training within the National Health Service.
Trainees can complete Part I training in fifteen months provided they hold an MSc from an IPEM-accredited centre in the United Kingdom or the Republic of Ireland (National University of Ireland, Galway). For these candidates, the Part I training consists purely of clinical experience. Trainees applying for Part I who hold only a degree in an engineering or physical science subject must undertake a combined study and clinical training programme. This programme consists of two years of clinical placement, during which the trainee studies for an MSc in Medical Physics approved by the Institute of Physics and Engineering in Medicine (IPEM). The MSc will be taken at University College London, Swansea, Sheffield, Surrey, Birmingham, Leeds, Manchester, Aberdeen, Glasgow, King's or Queen Mary's. The Open University also offers a Master of Science in Medical Physics, but prospective students should first check that this degree satisfies the accreditation requirements before embarking on it. Successful completion of the Part I training programme leads to an IPEM Diploma. The trainee can then apply for a Part II position, which consists of the IPEM's Programme of Advanced Training (PAT); this takes a further two years and, if successful, leads to Corporate Membership of the IPEM and registration as a Clinical Scientist.