
Heat and Temperature

Heat energy is most intense in substances whose molecules are moving rapidly in a very disorderly way. Such a
substance will give up some of its heat to another substance whose molecules are less agitated. When this happens,
the heat is said to flow from one substance to another (or from one body to another). The energy transfer is
indicated by a change in temperature.
Temperature, therefore, is not the same thing as heat, although the two words are often used interchangeably.
Temperature can be defined as the degree of intensity of hotness or coldness. Hotness and coldness, however,
are comparative terms. A flame, for example, is hot when compared with ice but cold when compared with the sun.
This definition of temperature, therefore, is vague and unscientific, although it does convey the correct impression
that temperature is a measure of relative intensity rather than of quantity.
A more specific definition is: temperature is the ability of one body to give up heat energy to another body. A hot body
becomes cooler, and a cold body becomes warmer, as long as heat is flowing from one to the other. The hot body
has a greater ability to give up heat and therefore has a higher temperature. After a time the two bodies may reach a
condition of heat equilibrium, or balance of heat intensity. Then, heat flow ceases. At the point of equilibrium both
bodies can be said to be at the same temperature.
Measurement of Temperature
Temperature is measured by means of instruments called thermometers. Several temperature scales have been
devised for relating the hotness and coldness of bodies to fixed temperatures, such as the freezing point and boiling
point of water. On most temperature scales, the unit of temperature is called a degree. The Kelvin scale is an
exception; its unit of temperature is the kelvin.
The Fahrenheit, Celsius (or centigrade), and Réaumur scales are used in the range of temperatures important for
human comfort, laboratory experiments, and industrial processes.
The Rankine scale and the Kelvin scale are based on the concept of absolute zero; all temperature readings on these
scales are positive numbers. The Kelvin scale is widely used in scientific work. The Rankine scale is used primarily
by British and American engineers.
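All of these scales are linear, so any reading converts to any other with simple arithmetic. The sketch below (Python; the function names are ours, purely illustrative) shows the standard conversion formulas, taking Celsius as the common reference:

```python
# Conversions from Celsius to the other scales named above.
# Constants: 273.15 is the Celsius-to-Kelvin offset; 9/5 reflects the
# smaller Fahrenheit/Rankine degree; 4/5 the larger Reaumur degree.

def celsius_to_kelvin(c):
    return c + 273.15            # same degree size, zero shifted to absolute zero

def celsius_to_fahrenheit(c):
    return c * 9 / 5 + 32        # water freezes at 32, boils at 212

def celsius_to_rankine(c):
    return (c + 273.15) * 9 / 5  # Fahrenheit-sized degrees counted from absolute zero

def celsius_to_reaumur(c):
    return c * 4 / 5             # water freezes at 0, boils at 80

for c in (0.0, 100.0, -273.15):  # freezing point, boiling point, absolute zero
    print(c, celsius_to_kelvin(c), celsius_to_fahrenheit(c),
          celsius_to_rankine(c), celsius_to_reaumur(c))
```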
Absolute Zero
Experiments have shown that every 1° C. increase or decrease in temperature causes the pressure exerted by a gas
(held at constant volume) to increase or decrease at the constant rate of 1/273.15 of its pressure at 0° C. This means that at -273.15° C. an
ideal (theoretical) gas would exert no pressure at all. Since experiments with real gases have shown a clear relation
between pressure and temperature, zero pressure would indicate that the ideal gas had lost all its ability to give up
heat. Its molecules would be absolutely motionless. This is impossible, since molecules are always agitated to some
extent, and therefore the absolute zero of temperature remains a theoretical concept. The concept is, however, a
useful one, for it gives a base point to which all temperature measurements may be referred, in positive numbers.
The idea that absolute zero can never be reached is sometimes considered important enough to be called the third
law of thermodynamics. Scientists have succeeded in cooling substances to within a small fraction of a degree above
absolute zero. The study of the behavior of substances at very low temperatures is called cryogenics.
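The relation just described can be written compactly: if P0 is the pressure of the gas at 0° C, then at a Celsius temperature t its pressure is P = P0(1 + t/273.15). A minimal sketch (variable names are ours):

```python
def ideal_gas_pressure(p0, t_celsius):
    """Pressure of an ideal gas held at constant volume, given its
    pressure p0 at 0 C. Each 1 C change alters the pressure by p0/273.15."""
    return p0 * (1 + t_celsius / 273.15)

print(ideal_gas_pressure(101325.0, 100.0))    # hotter gas, higher pressure
print(ideal_gas_pressure(101325.0, -273.15))  # 0.0: no pressure at absolute zero
```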
High Temperatures
Absolute zero is the lower limit for temperature, but there is no upper limit. The hottest substances known are ionized
gases in certain stars, with temperatures of a billion degrees or more.
Measurement of Heat
The heat released or absorbed in a physical or chemical process can be measured with an instrument called a
calorimeter. Commonly used units for measuring heat are the calorie and the British thermal unit, or Btu. Heat is also
measured in such other units as the joule (the unit of energy in the SI, or metric system).
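These units are related by fixed conversion factors: one (thermochemical) calorie is exactly 4.184 joules, and one Btu is about 1,055 joules. A small illustrative sketch:

```python
CALORIE_J = 4.184    # thermochemical calorie, in joules (exact by definition)
BTU_J = 1055.06      # British thermal unit, in joules (approximate)

def calories_to_joules(cal):
    return cal * CALORIE_J

def btu_to_joules(btu):
    return btu * BTU_J

print(calories_to_joules(1.0))  # 4.184 J
print(btu_to_joules(1.0))       # about 1055 J, roughly 252 calories
```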

The existence of electricity, the phenomenon associated with stationary or moving electric
charges, has been known since the Greeks discovered that amber, rubbed with fur, attracted
light objects such as feathers. Benjamin Franklin demonstrated the electrical nature of lightning (the
famous kite-and-key experiment) and also established the convention of labeling the two
types of charges negative and positive.

It was also known that certain materials, called insulators, blocked the flow of electric charge;
examples are glass and cork. Other materials, called conductors, such as metals,
transferred electric charge with ease.
In the late 18th century, the French physicist Charles-Augustin de Coulomb defined the quantity of electricity
later known as the coulomb, and determined the force law between electric charges,
known as Coulomb's law. Coulomb's law is similar to the law of gravity in that the
electrical force is inversely proportional to the square of the distance between the charges and
directly proportional to the product of the charges.
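In symbols, Coulomb's law is F = k q1 q2 / r², with k approximately 8.99 × 10^9 N·m²/C². A minimal sketch of the calculation (the function name is ours):

```python
K = 8.9875e9  # Coulomb constant, N*m^2/C^2

def coulomb_force(q1, q2, r):
    """Magnitude of the electric force between two point charges (coulombs)
    separated by r meters. Positive result: repulsion (like charges);
    negative: attraction (opposite charges)."""
    return K * q1 * q2 / r**2

# Two charges of 1 microcoulomb each, 10 cm apart:
print(coulomb_force(1e-6, 1e-6, 0.1))  # about 0.9 N
```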
By the end of the 18th century, we had determined that electric charge could be
stored in a conducting body if it was insulated from its surroundings. The first of these
devices was the Leyden jar. It consisted of a glass vial, partly filled with sheets of
metal foil, the top of which was closed by a cork pierced with a wire or nail. To
charge the jar, the exposed end of the wire was brought into contact with a friction
device.
Modern atomic theory explains this as the ability of atoms to either lose or gain an
outer electron and thus exhibit a net positive or a net negative charge
(since the electron is negative). Today we know that the basic quantity of electric
charge is the charge of the electron, and one coulomb is about 6.24 × 10^18 electron charges.
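That figure follows directly from the charge carried by a single electron, about 1.602 × 10^-19 coulomb:

```python
ELECTRON_CHARGE = 1.602176634e-19  # coulombs (exact, by the 2019 SI definition)

electrons_per_coulomb = 1 / ELECTRON_CHARGE
print(electrons_per_coulomb)  # about 6.24e18, as stated above
```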
The battery was invented in the 19th century, and electric current and static
electricity were shown to be manifestations of the same phenomenon, i.e. current is
the motion of electric charge. Once a laboratory curiosity, electricity became the
focus of industrial concerns when it was shown that electrical power could be
transmitted efficiently from place to place, and with the invention of the
incandescent lamp.
The discovery of Coulomb's law, and the study of the motion of charged particles
near other charged particles, led to the development of the electric field concept.

A field can be considered a type of energy in space, or energy with position. A field is
usually visualized as a set of lines surrounding a body; however, these lines do not
physically exist. They are strictly a mathematical construct for describing motion. Fields are used
in electricity, magnetism, gravity, and almost all aspects of modern physics.
An electric field is the region around an electric charge in which an electric force is exerted
on another charge. Instead of considering the electric force as a direct interaction of two
electric charges at a distance from each other, one charge is considered the source of an
electric field that extends outward into the surrounding space, and the force exerted on a
second charge in this space is considered as a direct interaction between the electric field
and the second charge.
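The two pictures agree numerically: the field of a point charge Q at distance r has magnitude E = kQ/r², and the force on a second charge q placed there is F = qE, which reproduces Coulomb's law exactly. A sketch under those formulas (names are ours):

```python
K = 8.9875e9  # Coulomb constant, N*m^2/C^2

def field_of_point_charge(q_source, r):
    """Electric field magnitude (N/C) at distance r from a point charge."""
    return K * q_source / r**2

def force_on_charge(q_test, e_field):
    """Force (N) on a test charge sitting in field e_field."""
    return q_test * e_field

# Field-then-force gives the same answer as the direct Coulomb calculation:
E = field_of_point_charge(1e-6, 0.1)
print(force_on_charge(1e-6, E))  # about 0.9 N, matching the earlier example
```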

Magnetism is the phenomenon associated with the motion of electric charges, although the
study of magnets was quite confused before the 19th century because of the existence of
ferromagnets, substances such as iron bar magnets that maintain a magnetic field where
no obvious electric current is present (see below). Basic magnetism is the existence of
magnetic fields, which deflect moving charges or other magnets. By analogy with electric
charges, magnetic objects are said to have 'poles' (north and south, instead of
positive and negative charge). However, magnetic poles are always found in pairs;
isolated poles do not exist in nature.
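The deflection of a moving charge is given by the magnetic part of the Lorentz force, F = q v × B; the passage above does not name this law, so we add it here only for concreteness. The force is always perpendicular to both the velocity and the field:

```python
def cross(a, b):
    """Cross product of two 3-vectors."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def magnetic_force(q, v, b):
    """Magnetic force F = q (v x B) on a charge q (coulombs) moving at
    velocity v (m/s) in magnetic field b (teslas)."""
    return tuple(q * c for c in cross(v, b))

# An electron moving along +x through a field along +z is pushed along +y:
print(magnetic_force(-1.602e-19, (1e6, 0, 0), (0, 0, 1e-3)))
```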

Although conceived of as distinct phenomena until the 19th century, electricity and
magnetism are now known to be components of the unified theory of electromagnetism.
A connection between electricity and magnetism had long been suspected, and in
1820 the Danish physicist Hans Christian Ørsted showed that an electric current
flowing in a wire produces its own magnetic field. André-Marie Ampère of France
immediately repeated Ørsted's experiments and within weeks was able to express
the magnetic forces between current-carrying conductors in a simple and elegant
mathematical form. He also demonstrated that a current flowing in a loop of wire
produces a magnetic dipole indistinguishable at a distance from that produced by a
small permanent magnet; this led Ampère to suggest that magnetism is caused by
currents circulating on a molecular scale, an idea remarkably near the modern
understanding.
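For the simplest geometry, two long parallel wires, Ampère's result reduces to a force per unit length of F/L = μ0 I1 I2 / (2πd), attractive when the currents flow the same way. A minimal sketch:

```python
import math

MU_0 = 4 * math.pi * 1e-7  # permeability of free space, T*m/A

def force_per_length(i1, i2, d):
    """Force per meter (N/m) between two long parallel wires carrying
    currents i1 and i2 (amperes), separated by d meters."""
    return MU_0 * i1 * i2 / (2 * math.pi * d)

# Two 1 A currents one meter apart: 2e-7 N/m (the old definition of the ampere)
print(force_per_length(1.0, 1.0, 1.0))
```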

In 1831 Faraday showed that a changing magnetic field produces an electric
current, a phenomenon known as electromagnetic induction; conversely, a changing
electric field produces a magnetic field. The electromagnet, an iron core that
enhances the magnetic field generated by a current flowing through a coil, was
invented by William Sturgeon in England during the mid-1820s. It later became a
vital component of both motors and generators.
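Faraday's induction law has a compact form: the voltage induced in a coil of N turns is N times the rate of change of the magnetic flux through it, EMF = -N dΦ/dt. A rough numerical sketch (the numbers are illustrative, not from the text):

```python
def induced_emf(n_turns, flux_change, time_interval):
    """Average EMF (volts) induced in a coil of n_turns when the magnetic
    flux through it changes by flux_change webers over time_interval seconds.
    The minus sign (Lenz's law) says the induced current opposes the change."""
    return -n_turns * flux_change / time_interval

# 100-turn coil, flux rising by 0.01 Wb over 0.1 s -> 10 V opposing EMF
print(induced_emf(100, 0.01, 0.1))
```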
The unification of electric and magnetic phenomena in a complete mathematical
theory was the achievement of the Scottish physicist James Clerk Maxwell in the
1860s. In a set of four elegant equations, Maxwell formalized the relationship
between electric and magnetic fields. In addition, he showed that oscillating
electric and magnetic fields can be self-reinforcing and must propagate at a
particular velocity, the speed of light. Thus he concluded that light is energy
carried in the form of mutually perpendicular, mutually supporting electric and
magnetic fields, i.e. self-propagating electromagnetic waves.
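The particular velocity Maxwell found is fixed entirely by two measurable constants of electricity and magnetism: c = 1/√(μ0 ε0). Plugging in the modern values reproduces the measured speed of light:

```python
import math

MU_0 = 1.25663706212e-6       # permeability of free space, T*m/A
EPSILON_0 = 8.8541878128e-12  # permittivity of free space, F/m

c = 1 / math.sqrt(MU_0 * EPSILON_0)
print(c)  # about 2.998e8 m/s, the measured speed of light
```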
