
Heat transfer

Heat transfer is the transition of thermal energy from a hotter object to a cooler object
("object" in this sense designating a complex collection of particles which is capable of
storing energy in many different ways). When an object or fluid is at a different temperature
than its surroundings or another object, transfer of thermal energy, also known as heat
transfer, or heat exchange, occurs in such a way that the body and the surroundings reach
thermal equilibrium; this means that they are at the same temperature. Heat transfer always
occurs from the higher-temperature object to the lower-temperature one, as described by the
second law of thermodynamics (the Clausius statement). Where there is a temperature
difference between objects in proximity, heat transfer between them can never be stopped; it
can only be slowed.

Conduction
Main article: Heat conduction
Conduction is the transfer of heat by direct contact of particles of matter. The energy may be
transferred primarily by elastic impact (as in fluids), by free-electron diffusion (predominant
in metals), or by phonon vibration (predominant in insulators). In other words, heat is
transferred by conduction when adjacent atoms vibrate against one another, or as electrons
move from atom to atom. Conduction is greatest in solids, where atoms are in constant
contact. In liquids (except liquid metals) and gases, the molecules are usually further apart,
giving a lower chance of molecules colliding and passing on thermal energy.
Heat conduction is directly analogous to the diffusion of particles in a fluid, in the situation
where there are no fluid currents. Heat diffusion differs from mass diffusion only insofar as it
can occur in solids, whereas mass diffusion is mostly limited to fluids.
Metals (e.g. copper, platinum, gold, iron, etc.) are usually the best conductors of thermal
energy. This is due to the way that metals are chemically bonded: metallic bonds (as opposed
to covalent or ionic bonds) have free-moving electrons which are able to transfer thermal
energy rapidly through the metal.
As density decreases, so does conduction; therefore fluids (and especially gases) are less
conductive. This is due to the large distance between atoms in a gas: fewer collisions between
atoms means less conduction. Conductivity of gases increases with temperature. Conductivity
also increases with pressure, from vacuum up to a critical point at which the density of the
gas is such that its molecules may be expected to collide with each other before they travel
from one surface to another. Beyond this density, conductivity increases only slightly with
increasing pressure and density.
To quantify the ease with which a particular medium conducts heat, engineers employ the
thermal conductivity, also known as the conductivity constant or conduction coefficient, k.
Thermal conductivity k is defined as "the quantity of heat, Q, transmitted in time (t) through a
thickness (L), in a direction normal to a surface of area (A), due to a temperature difference
(ΔT) [...]." Thermal conductivity is a material property that is primarily dependent on the
medium's phase, temperature, density, and molecular bonding.
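The definition above corresponds to the relation Q = k·A·t·ΔT/L. A minimal sketch in Python, with illustrative values that are not from the text:

```python
# Steady-state conduction through a slab: Q = k * A * t * dT / L
# All numeric values below are hypothetical, chosen only for illustration.

def conducted_heat(k, area, time, delta_t, thickness):
    """Heat (J) conducted through a slab of the given thickness."""
    return k * area * time * delta_t / thickness

# Example: 1 m^2 copper plate (k ~ 400 W/m·K), 1 cm thick,
# 10 K temperature difference, over 1 second.
q = conducted_heat(k=400.0, area=1.0, time=1.0, delta_t=10.0, thickness=0.01)
print(f"{q:.0f} J")  # 400000 J
```

The same function illustrates why thin, high-k layers conduct strongly: halving the thickness or doubling k doubles the conducted heat.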
A heat pipe is a passive device that is constructed in such a way that it acts as though it has
extremely high thermal conductivity.
Transient conduction vs. steady-state conduction
Steady-state conduction is the form of conduction that occurs when the temperature
difference is constant, so that after an equilibration time the spatial distribution of
temperatures in an object no longer changes (for example, a bar may be cold at one end and
hot at the other, but the temperature gradient along the bar does not change with time). In
short, the temperature at any section remains constant and varies linearly along the direction
of heat transfer. In steady state, the amount of heat entering a section equals the amount of
heat leaving it, and the laws of direct-current electric circuits can be applied to heat flow as
well. There also exist situations in which the temperature drops or rises more drastically, such
as when a hot copper ball is dropped into oil at a low temperature, and the interest is in
analysing the spatial change of temperature in the object over time. This mode of heat
conduction is referred to as unsteady or transient conduction. Analysis of these systems is
more complex and (except for simple shapes) calls for the application of approximation
theories and/or numerical analysis by computer.
Lumped system analysis
A common approximation in transient conduction, which may be used whenever heat
conduction within an object is much faster than heat conduction across the boundary of the
object, is lumped system analysis. This is a method of approximation that suitably reduces
one aspect of the transient conduction system (that within the object) to an equivalent steady
state system (that is, it is assumed that the temperature within the object is completely
uniform, although its value may be changing in time). In this method, a term known as the
Biot number is calculated, which is defined as the ratio of resistance to heat transfer across
the object's boundary with a uniform bath of different temperature, to the conductive heat
resistance within the object. When the thermal resistance to heat transferred into the object is
less than the resistance to heat being diffused completely within the object, the Biot number
is small, and the approximation of a spatially uniform temperature within the object can be
used. As a rule of thumb, the Biot number should be less than 0.1 for the approximation and
the resulting heat transfer analysis to be accurate. The mathematical solution to the lumped-
system approximation gives Newton's law of cooling, discussed below.
This mode of analysis has been applied to forensic science, to estimate the time of death of
humans. It can also be applied to HVAC (heating, ventilating and air-conditioning, or
building climate control), to ensure more nearly instantaneous effects of a change in a
comfort-level setting.[1]
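The Biot-number criterion described above can be sketched in a few lines. The material and geometry values below are hypothetical, and the characteristic length Lc = V/A (r/3 for a sphere) is a common textbook convention, not something stated in the text:

```python
def biot_number(h, char_length, k):
    """Bi = h * Lc / k: ratio of surface convective transfer to internal conduction."""
    return h * char_length / k

def lumped_analysis_valid(h, char_length, k, limit=0.1):
    """Rule of thumb: lumped-system analysis is acceptable when Bi < 0.1."""
    return biot_number(h, char_length, k) < limit

# Example: small steel sphere (k ~ 50 W/m·K), radius 1 cm, Lc = V/A = r/3.
r = 0.01
lc = r / 3
print(lumped_analysis_valid(h=100.0, char_length=lc, k=50.0))  # True
```

For this sphere Bi is well below 0.1, so treating its interior as a single uniform temperature is a good approximation.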

Convection
Convection is the transfer of heat energy between a solid surface and the nearby liquid or gas
in motion. The faster the fluid motion, the greater the convective heat transfer: the presence
of bulk motion of the fluid enhances the heat transfer between the solid surface and the
fluid.[2]
There are two types of convective heat transfer:
• Natural convection: when the fluid motion is caused by buoyancy forces that result
from density variations due to temperature variations in the fluid. For example, in the
absence of an external source, when a mass of fluid is in contact with a hot surface its
molecules separate and scatter, causing that mass of fluid to become less dense. The
less dense fluid rises, while the cooler, denser fluid sinks; thus the hotter volume
transfers heat towards the cooler volume of the fluid.[3]
• Forced convection: when the fluid is forced to flow over the surface by an external
source such as a fan or pump, creating an artificially induced convection current.[4]
Convection can also be classified as internal or external flow. Internal flow occurs when the
fluid is enclosed by a solid boundary, such as flow through a pipe; external flow occurs when
the fluid extends indefinitely without encountering a solid surface. Convection, whether
natural or forced, can be either internal or external, since these classifications are
independent of each other.[3]
The formula for Rate of Convective Heat Transfer:[5]
q = hA(Ts − Tb)
A is the surface area of heat transfer, Ts is the surface temperature, and Tb is the bulk
temperature of the fluid, i.e. the temperature of the fluid "far" away from the surface (its
precise meaning varies with each situation). The heat transfer coefficient h depends on
physical properties of the fluid, such as temperature, and on the physical situation in which
convection occurs; therefore the heat transfer coefficient must be derived or found
experimentally for every system analyzed. Formulae and correlations to calculate heat
transfer coefficients for typical configurations and fluids are available in many references.
For laminar flows the heat transfer coefficient is rather low compared to turbulent flows; this
is due to turbulent flows having a thinner stagnant fluid film layer on the heat transfer
surface.[6]
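The rate equation q = hA(Ts − Tb) above is a one-line computation; the plate size, temperatures, and the coefficient h below are hypothetical illustration values:

```python
def convective_heat_rate(h, area, t_surface, t_bulk):
    """q = h * A * (Ts - Tb), in watts."""
    return h * area * (t_surface - t_bulk)

# Example: 0.5 m^2 plate at 80 °C in air at 20 °C,
# with an assumed forced-air coefficient h = 25 W/m^2·K.
q = convective_heat_rate(h=25.0, area=0.5, t_surface=80.0, t_bulk=20.0)
print(f"{q:.0f} W")  # 750 W
```

Note that in practice h itself is the hard part: as the text says, it must be taken from correlations or experiment for each configuration.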
Radiation
Radiation is the transfer of heat energy through empty space. All objects with a temperature
above absolute zero radiate energy at a rate equal to their emissivity multiplied by the rate at
which energy would radiate from them if they were a black body. No medium is necessary
for radiation to occur, for it is transferred through electromagnetic waves; radiation works
even in and through a perfect vacuum. The energy from the Sun travels through the vacuum
of space before warming the Earth.
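The rule stated above (emissivity multiplied by the black-body rate) is the Stefan–Boltzmann law, P = ε·σ·A·T⁴. A short sketch with hypothetical surface values:

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/m^2·K^4

def radiated_power(emissivity, area, temp_kelvin):
    """Power radiated by a grey body: emissivity * sigma * A * T^4."""
    return emissivity * SIGMA * area * temp_kelvin ** 4

# Example: 1 m^2 surface at 300 K with emissivity 0.9 (illustrative values).
p = radiated_power(0.9, 1.0, 300.0)
print(f"{p:.1f} W")  # 413.4 W
```

The T⁴ dependence is why radiation, negligible at room temperature differences, dominates at furnace temperatures.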
Both the reflectivity and emissivity of all bodies are wavelength dependent. The temperature
determines the wavelength distribution of the electromagnetic radiation, as limited in
intensity by Planck's law of black-body radiation. For any body, the reflectivity depends on
the wavelength distribution of incoming electromagnetic radiation and therefore on the
temperature of the source of the radiation; the emissivity depends on the wavelength
distribution and therefore on the temperature of the body itself. For example, fresh snow,
which is highly reflective to visible light (reflectivity about 0.90), appears white due to
reflecting sunlight with a peak energy wavelength of about 0.5 micrometres. Its emissivity,
however, at a temperature of about −5 °C and a peak energy wavelength of about 12
micrometres, is 0.99.
Gases absorb and emit energy in characteristic wavelength patterns that are different for each
gas.
Visible light is simply another form of electromagnetic radiation with a shorter wavelength
(and therefore a higher frequency) than infrared radiation. The difference between visible
light and the radiation from objects at conventional temperatures is a factor of about 20 in
frequency and wavelength; the two kinds of emission are simply different "colours" of
electromagnetic radiation.
Clothing and building surfaces, and radiative transfer
Lighter colors and also whites and metallic substances absorb less illuminating light, and thus
heat up less; but otherwise color makes little difference as regards heat transfer between an
object at everyday temperatures and its surroundings, since the dominant emitted
wavelengths are nowhere near the visible spectrum, but rather in the far infrared. Emissivities
at those wavelengths have little to do with visual emissivities (visible colours); in the far
infrared, most objects have high emissivities. Thus, except in sunlight, the color of clothing
makes little difference as regards warmth; likewise, paint color of houses makes little
difference to warmth except when the painted part is sunlit. The main exception to this is
shiny metal surfaces, which have low emissivities both in the visible wavelengths and in the
far infrared. Such surfaces can be used to reduce heat transfer in both directions; an example
of this is the multi-layer insulation used to insulate spacecraft. Low-emissivity windows in
houses are a more complicated technology, since they must have low emissivity at thermal
wavelengths while remaining transparent to visible light.
Newton's law of cooling
A related principle, Newton's law of cooling, states that the rate of heat loss of a body is
proportional to the difference in temperatures between the body and its surroundings. The
law is

    dQ/dt = −h · A · ΔT(t) = −h · A · (T(t) − Tenv)

where
Q = Thermal energy in joules


h = Heat transfer coefficient
A = Surface area across which the heat is transferred
T = Temperature of the object's surface and interior (since these are the same in this
approximation)
Tenv = Temperature of the environment
ΔT(t) = T(t) − Tenv is the time-dependent thermal gradient between environment and
object
This form of heat loss principle is sometimes not very precise; an accurate formulation may
require analysis of heat flow, based on the (transient) heat transfer equation in a
nonhomogeneous, or else poorly conductive, medium. An analog for continuous gradients is
Fourier's Law.
The following simplification (called lumped system thermal analysis and other similar terms)
may be applied, so long as it is permitted by the Biot number, which relates surface
conductance to interior thermal conductivity in a body. If this ratio permits, it shows that the
body has relatively high internal conductivity, such that (to good approximation) the entire
body is at the same uniform temperature throughout, even as this temperature changes as it is
cooled from the outside, by the environment. If this is the case, these conditions give the
behavior of exponential decay with time, of temperature of a body.
In such cases, the entire body is treated as a lumped-capacitance heat reservoir, with total heat
content proportional to its total heat capacity C and its temperature T, i.e. Q = C·T. From the
definition of heat capacity C comes the relation C = dQ/dT.
Differentiating this equation with regard to time gives the identity (valid so long as
temperatures in the object are uniform at any given time): dQ/dt = C (dT/dt). This expression
may be used to replace dQ/dt in the first equation which begins this section, above. Then, if
T(t) is the temperature of such a body at time t , and Tenv is the temperature of the
environment around the body:

    dT(t)/dt = −r (T(t) − Tenv)
where
r = hA/C is a positive constant characteristic of the system, which must be in units of 1/time,
and is therefore sometimes expressed in terms of a characteristic time constant t0 given by
r = 1/t0. Thus, in thermal systems, t0 = C/(hA). (The total heat capacity C of a system may be
further represented by its mass-specific heat capacity cp multiplied by its mass m, so that the
time constant t0 is also given by m·cp/(hA).)
The solution of this differential equation, by standard methods of integration and substitution
of boundary conditions, gives:

    T(t) = Tenv + (T(0) − Tenv) · e^(−r t)
Here, T(t) is the temperature at time t, and T(0) is the initial temperature at zero time, or t =
0.
If ΔT(t) = T(t) − Tenv is defined as the temperature difference at time t, where
ΔT(0) = T(0) − Tenv is the initial temperature difference at time 0, then the Newtonian
solution is written as:

    ΔT(t) = ΔT(0) · e^(−r t)
Uses: For example, simplified climate models may use Newtonian cooling instead of a full
(and computationally expensive) radiation code to maintain atmospheric temperatures.
One dimensional application, using thermal circuits
A very useful concept used in heat transfer applications is the representation of thermal
transfer by what is known as thermal circuits. A thermal circuit is the representation of the
resistance to heat flow as though it were an electric resistor. The heat transferred is analogous
to the current and the thermal resistance is analogous to the electric resistor. The value of the
thermal resistance for each mode of heat transfer is the denominator of the corresponding rate
equation. The thermal resistances of the different modes of heat transfer are used in analyzing
combined modes of heat transfer. The equations describing the three heat transfer modes and
their thermal resistances, as discussed previously, are summarized in the table below:

    Mode          Rate equation         Thermal resistance
    Conduction    q = ΔT · k·A / L      R = L / (k·A)
    Convection    q = h·A·ΔT            R = 1 / (h·A)
    Radiation     q = hr·A·ΔT           R = 1 / (hr·A)   (hr = radiative coefficient)
In cases where there is heat transfer through different media (for example, through a
composite material), the equivalent resistance is the sum of the resistances of the components
that make up the composite. Likewise, in cases where there are different heat transfer modes,
the total resistance is the sum of the resistances of the different modes. Using the thermal
circuit concept, the amount of heat transferred through any medium is the quotient of the
temperature change and the total thermal resistance of the medium. As an example, consider
a composite wall of cross-sectional area A. The composite is made of cement plaster of
length L1 with thermal conductivity k1, and paper-faced fibreglass of length L2 with thermal
conductivity k2. The left surface of the wall is at Ti and exposed to air with a convective
coefficient hi; the right surface is at To and exposed to air with convective coefficient ho.

Using the thermal resistance concept, heat flow through the composite is as follows:

    q = (Ti − To) / [ 1/(hi·A) + L1/(k1·A) + L2/(k2·A) + 1/(ho·A) ]
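Summing series resistances and dividing the temperature difference by the total can be sketched directly. Every material value below (wall area, layer thicknesses, conductivities, convective coefficients) is hypothetical:

```python
def series_resistance(*resistances):
    """Total resistance of thermal resistances in series (like resistors)."""
    return sum(resistances)

def heat_flow(t_hot, t_cold, total_resistance):
    """q = dT / R_total, in watts."""
    return (t_hot - t_cold) / total_resistance

# Hypothetical composite wall, 10 m^2: indoor air film, cement plaster,
# fibreglass, outdoor air film.
A = 10.0
r_conv_in  = 1 / (10.0 * A)      # 1/(hi*A), hi = 10 W/m^2·K
r_plaster  = 0.02 / (0.7 * A)    # L1/(k1*A): 2 cm, k1 = 0.7 W/m·K
r_glass    = 0.05 / (0.04 * A)   # L2/(k2*A): 5 cm, k2 = 0.04 W/m·K
r_conv_out = 1 / (25.0 * A)      # 1/(ho*A), ho = 25 W/m^2·K
r_total = series_resistance(r_conv_in, r_plaster, r_glass, r_conv_out)
print(f"{heat_flow(21.0, -5.0, r_total):.1f} W")  # 183.3 W
```

Notice that the fibreglass layer dominates the total resistance, so the other terms barely matter: this is exactly the intuition the thermal-circuit analogy is meant to give.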
Insulation and radiant barriers


Main articles: Thermal insulation and Radiant barrier
Thermal insulators are materials specifically designed to reduce the flow of heat by limiting
conduction, convection, or both. Radiant barriers are materials which reflect radiation and
therefore reduce the flow of heat from radiation sources. Good insulators are not necessarily
good radiant barriers, and vice versa. Metal, for instance, is an excellent reflector and poor
insulator.
The effectiveness of an insulator is indicated by its R- (resistance) value. The R-value of a
material is the thickness (d) of the insulator multiplied by the inverse of its conduction
coefficient (k). In SI units, the resistance value is expressed in K·m²/W:

    R = d / k
Rigid fiberglass, a common insulation material, has an R-value of 4 per inch, while poured
concrete, a poor insulator, has an R-value of 0.08 per inch.[7]
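The relation just described (thickness divided by conduction coefficient) is a one-liner; the layer thickness and conductivity below are hypothetical:

```python
def r_value(thickness, k):
    """R = d / k: insulator thickness divided by its conduction coefficient (K·m²/W)."""
    return thickness / k

# Example: 10 cm of an insulation material with assumed k = 0.04 W/m·K.
print(r_value(0.1, 0.04))  # 2.5
```

Doubling the thickness doubles R, while choosing a material with half the k does the same, which is why both routes are used in practice.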
The effectiveness of a radiant barrier is indicated by its reflectivity, which is the fraction of
radiation reflected. A material with a high reflectivity (at a given wavelength) has a low
emissivity (at that same wavelength), and vice versa (at any specific wavelength, reflectivity
= 1 - emissivity). An ideal radiant barrier would have a reflectivity of 1 and would therefore
reflect 100% of incoming radiation. Vacuum bottles (Dewars) are 'silvered' to approach this.
In space vacuum, satellites use multi-layer insulation which consists of many layers of
aluminized (shiny) mylar to greatly reduce radiation heat transfer and control satellite
temperature.
Critical insulation thickness
To reduce the rate of heat transfer, one would add insulating materials i.e with low thermal
conductivity (k). The smaller the k value, the larger the corresponding thermal resistance (R)
value.
The units of thermal conductivity(k) are W·m-1·K-1 (watts per meter per kelvin), therefore
increasing width of insulation (x meters) decreases the k term and as discussed increases
resistance.
This follows logic as increased resistance would be created with increased conduction path
(x).
However, adding this layer of insulation also has the potential of increasing the surface area
and hence thermal convection area (A).
An obvious example is a cylindrical pipe:
• As insulation gets thicker, outer radius increases and therefore surface area increases.
• The point where the added resistance of increasing insulation thickness becomes
overshadowed by the effect of the increasing surface area is called the critical
insulation thickness. In simple cylindrical pipes:[8]

    r_critical = k / h
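For a cylindrical pipe the standard result r_critical = k/h is a single division. The insulation conductivity and outer convective coefficient below are hypothetical:

```python
def critical_radius_cylinder(k_insulation, h_outer):
    """Critical insulation radius for a cylindrical pipe: r_cr = k / h."""
    return k_insulation / h_outer

# Example: insulation with assumed k = 0.05 W/m·K in still air with h = 5 W/m^2·K.
r_cr = critical_radius_cylinder(0.05, 5.0)
print(f"{r_cr * 1000:.0f} mm")  # 10 mm
```

If the bare pipe's outer radius is already larger than r_cr, adding insulation reduces heat loss from the first layer onward; if it is smaller, thin insulation can actually increase heat loss until the critical radius is passed.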
Heat exchangers
Main article: Heat exchanger
A heat exchanger is a device built for efficient heat transfer from one fluid to another,
whether the fluids are separated by a solid wall so that they never mix, or the fluids are
directly contacted. Heat exchangers are widely used in refrigeration, air conditioning, space
heating, power generation, and chemical processing. One common example of a heat
exchanger is the radiator in a car, in which the hot radiator fluid is cooled by the flow of air
over the radiator surface.
Common types of heat exchanger flows include parallel flow, counter flow, and cross flow.
In parallel flow, both fluids move in the same direction while transferring heat; in counter
flow, the fluids move in opposite directions and in cross flow the fluids move at right angles
to each other. Common constructions for heat exchangers include shell and tube, double
pipe, extruded finned pipe, spiral fin pipe, U-tube, and stacked plate.
When engineers calculate the theoretical heat transfer in a heat exchanger, they must contend
with the fact that the driving temperature difference between the two fluids varies with
position. To account for this in simple systems, the log mean temperature difference (LMTD)
is often used as an 'average' temperature. In more complex systems, direct knowledge of the
LMTD is not available and the number of transfer units (NTU) method can be used instead.
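The LMTD mentioned above is LMTD = (ΔT₁ − ΔT₂) / ln(ΔT₁/ΔT₂), where ΔT₁ and ΔT₂ are the hot–cold temperature differences at the two ends of the exchanger. A minimal sketch; the stream temperatures are hypothetical:

```python
import math

def lmtd(dt_end1, dt_end2):
    """Log mean temperature difference between hot and cold streams."""
    if dt_end1 == dt_end2:
        return dt_end1  # limiting case as the two end differences converge
    return (dt_end1 - dt_end2) / math.log(dt_end1 / dt_end2)

# Counter-flow example: hot stream 100 -> 60 °C, cold stream 20 -> 50 °C.
dt_end1 = 100 - 50   # difference at one end
dt_end2 = 60 - 20    # difference at the other end
print(f"{lmtd(dt_end1, dt_end2):.1f} K")  # 44.8 K
```

The LMTD is always less than or equal to the arithmetic mean of the two end differences, which is why using a simple average overestimates the driving force.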
Boiling heat transfer
See also: boiling and critical heat flux
Heat transfer in boiling fluids is complex but of considerable technical importance. It is
characterised by an S-shaped curve relating heat flux to surface temperature difference (see,
for example, Kay & Nedderman, Fluid Mechanics & Transfer Processes, CUP, 1985, p. 529).
At low driving temperatures, no boiling occurs and the heat transfer rate is controlled by the
usual single-phase mechanisms. As the surface temperature is increased, local boiling occurs
and vapour bubbles nucleate, grow into the surrounding cooler fluid, and collapse. This is
sub-cooled nucleate boiling and is a very efficient heat transfer mechanism. At high bubble
generation rates the bubbles begin to interfere and the heat flux no longer increases rapidly
with surface temperature (this is the departure from nucleate boiling DNB). At higher
temperatures still, a maximum in the heat flux is reached (the critical heat flux). The regime
of falling heat transfer which follows is not easy to study but is believed to be characterised
by alternate periods of nucleate and film boiling. Nucleate boiling slows the heat transfer
because of the creation of a gas phase (bubbles) on the heater surface; as mentioned, gas-
phase thermal conductivity is much lower than liquid-phase thermal conductivity, so the
outcome is a kind of "gas thermal barrier".
At higher temperatures still, the hydrodynamically quieter regime of film boiling is reached.
Heat fluxes across the stable vapour layers are low, but rise slowly with temperature. Any
contact between the fluid and the surface probably leads to the extremely rapid nucleation of
a fresh vapour layer ('spontaneous nucleation').
Condensation heat transfer
Condensation occurs when a vapor is cooled and changes its phase to a liquid. Condensation
heat transfer, like boiling, is of great significance in industry. During condensation, the latent
heat of vaporization must be released. The amount of the heat is the same as that absorbed
during vaporization at the same fluid pressure.
There are several modes of condensation:
• Homogeneous condensation (as during a formation of fog).
• Condensation in direct contact with subcooled liquid.
• Condensation on direct contact with a cooling wall of a heat exchanger; this is the
most common mode used in industry:
○ Filmwise condensation (when a liquid film is formed on the subcooled
surface, usually occurs when the liquid wets the surface).
○ Dropwise condensation (when liquid drops are formed on the subcooled
surface, usually occurs when the liquid does not wet the surface). Dropwise
condensation is difficult to sustain reliably; therefore, industrial equipment is
normally designed to operate in filmwise condensation mode.
Heat transfer in education
Heat transfer is typically studied as part of a general chemical engineering or mechanical
engineering curriculum. Typically, thermodynamics is a prerequisite to undertaking a course
in heat transfer, as the laws of thermodynamics are essential in understanding the mechanism
of heat transfer. Other courses related to heat transfer include energy conversion,
thermofluids and mass transfer.
Heat transfer methodologies are used in the following disciplines, among others:
• Automotive engineering
• Thermal management of electronic devices and systems
• HVAC
• Insulation
• Materials processing
• Power plant engineering

Thermodynamics
In physics, thermodynamics (from the Greek θέρμη, therme, meaning "heat",[1] and
δύναμις, dynamis, meaning "power") is the study of the conversion of energy into work and
heat and its relation to macroscopic variables such as temperature, volume and pressure. A
closely related field, statistical thermodynamics (or statistical mechanics), a branch of
statistical physics, derives these macroscopic relations from statistical predictions of the
collective motion of particles based on their microscopic behaviour.[2][3][4] Historically,
thermodynamics developed out of the need to increase the efficiency of early steam
engines.[5]
Typical thermodynamic system, showing input from a heat source (boiler) on the left and
output to a heat sink (condenser) on the right. Work is extracted, in this case by a series of
pistons.

Introduction

The starting point for most thermodynamic considerations is the set of laws of
thermodynamics, which postulate that energy can be exchanged between physical systems as
heat or work.[6]
They also postulate the existence of a quantity named entropy, which can be defined for any
isolated system that is in thermodynamic equilibrium.[7] In thermodynamics, interactions
between large ensembles of objects are studied and categorized. Central to this are the
concepts of system and surroundings. A system is composed of particles, whose average
motions define its properties, which in turn are related to one another through equations of
state. Properties can be combined to express internal energy and thermodynamic potentials,
which are useful for determining conditions for equilibrium and spontaneous processes.
With these tools, thermodynamics describes how systems respond to changes in their
surroundings. This can be applied to a wide variety of topics in science and engineering,
such as engines, phase transitions, chemical reactions, transport phenomena, and even black
holes. The results of thermodynamics are essential for other fields of physics and for
chemistry, chemical engineering, aerospace engineering, mechanical engineering, cell
biology, biomedical engineering, materials science, and economics to name a few.[8][9]

Developments
Sadi Carnot (1796-1832), the father of thermodynamics
Main article: History of thermodynamics
The history of thermodynamics as a scientific discipline generally begins with Otto von
Guericke who, in 1650, built and designed the world's first vacuum pump and demonstrated a
vacuum using his Magdeburg hemispheres. Guericke was driven to make a vacuum in order
to disprove Aristotle's long-held supposition that 'nature abhors a vacuum'. Shortly after
Guericke, the Irish physicist and chemist Robert Boyle had learned of Guericke's designs and,
in 1656, in coordination with English scientist Robert Hooke, built an air pump.[10] Using this
pump, Boyle and Hooke noticed a correlation between pressure, temperature, and volume. In
time, Boyle's Law was formulated, which states that pressure and volume are inversely
proportional. Then, in 1679, based on these concepts, an associate of Boyle's named Denis
Papin built a bone digester, which was a closed vessel with a tightly fitting lid that confined
steam until a high pressure was generated.
Later designs implemented a steam release valve that kept the machine from exploding. By
watching the valve rhythmically move up and down, Papin conceived of the idea of a piston
and a cylinder engine. He did not, however, follow through with his design. Nevertheless, in
1697, based on Papin's designs, engineer Thomas Savery built the first engine. Although
these early engines were crude and inefficient, they attracted the attention of the leading
scientists of the time. Their work led 127 years later to Sadi Carnot, the "father of
thermodynamics", who, in 1824, published Reflections on the Motive Power of Fire, a
discourse on heat, power, and engine efficiency. The paper outlined the basic energetic
relations between the Carnot engine, the Carnot cycle, and Motive power. It marked the start
of thermodynamics as a modern science.[3] The term thermodynamics was coined by James
Joule in 1849 to designate the science of relations between heat and power.[3] By 1858,
"thermo-dynamics", as a functional term, was used in William Thomson's paper An Account
of Carnot's Theory of the Motive Power of Heat.[11] The first thermodynamic textbook was
written in 1859 by William Rankine, originally trained as a physicist and a civil and
mechanical engineering professor at the University of Glasgow.[12]
Classical thermodynamics is the original early 1800s variation of thermodynamics concerned
with thermodynamic states, and properties as energy, work, and heat, and with the laws of
thermodynamics, all lacking an atomic interpretation. In precursory form, classical
thermodynamics derives from chemist Robert Boyle’s 1662 postulate that the pressure P of a
given quantity of gas varies inversely as its volume V at constant temperature; i.e. in equation
form: PV = k, a constant. From here, a semblance of a thermo-science began to develop with
the construction of the first successful atmospheric steam engines in England by Thomas
Savery in 1697 and Thomas Newcomen in 1712. The first and second laws of
thermodynamics emerged simultaneously in the 1850s, primarily out of the works of William
Rankine, Rudolf Clausius, and William Thomson (Lord Kelvin).
With the development of atomic and molecular theories in the late 1800s and early 1900s,
thermodynamics was given a molecular interpretation. This field, called statistical mechanics
or statistical thermodynamics, relates the microscopic properties of individual atoms and
molecules to the macroscopic or bulk properties of materials that can be observed in everyday
life, thereby explaining thermodynamics as a natural result of statistics and mechanics
(classical and quantum) at the microscopic level. The statistical approach is in contrast to
classical thermodynamics, which is a more phenomenological approach that does not include
microscopic details. The foundations of statistical thermodynamics were set out by physicists
such as James Clerk Maxwell, Ludwig Boltzmann, Max Planck, Rudolf Clausius and J.
Willard Gibbs.
Chemical thermodynamics is the study of the interrelation of energy with chemical reactions
or with a physical change of state within the confines of the laws of thermodynamics. During
the years 1873-76 the American mathematical physicist Josiah Willard Gibbs published a
series of three papers, the most famous being On the Equilibrium of Heterogeneous
Substances, in which he showed how thermodynamic processes could be graphically
analyzed, by studying the energy, entropy, volume, temperature and pressure of the
thermodynamic system, in such a manner to determine if a process would occur
spontaneously.[13] During the early 20th century, chemists such as Gilbert N. Lewis, Merle
Randall, and E. A. Guggenheim began to apply the mathematical methods of Gibbs to the
analysis of chemical processes.[14]
The Four Laws
Main article: Laws of thermodynamics
The present article focuses on classical thermodynamics, which deals with systems in
thermodynamic equilibrium. It should be distinguished from non-equilibrium
thermodynamics, which is concerned with systems that are not in thermodynamic
equilibrium.
In thermodynamics, there are four laws that do not depend on the details of the systems under
study or how they interact. Hence these laws are very generally valid and can be applied to
systems about which one knows nothing other than the balance of energy and matter transfer.
Examples of such systems include Einstein's prediction, around the turn of the 20th century,
of spontaneous emission, and ongoing research into the thermodynamics of black holes.
These four laws are:
• Zeroth law of thermodynamics, about thermal equilibrium:
If two thermodynamic systems are separately in thermal equilibrium with a third, they
are also in thermal equilibrium with each other.
If we grant that all systems are (trivially) in thermal equilibrium with themselves, the
Zeroth law implies that thermal equilibrium is an equivalence relation on the set of
thermodynamic systems. This law is tacitly assumed in every measurement of
temperature. Thus, if we want to know if two bodies are at the same temperature, it is
not necessary to bring them into contact and to watch whether their observable
properties change with time.[15]
• First law of thermodynamics, about the conservation of energy:
The change in the internal energy of a closed thermodynamic system is equal to the
sum of the heat supplied to or removed from the system and the work done on or by
the system; equivalently, the energy of an isolated system is constant.
• Second law of thermodynamics, about entropy:
The total entropy of any isolated thermodynamic system always increases over time,
approaching a maximum value; equivalently, in an isolated system the entropy never
decreases.
• Third law of thermodynamics, about the absolute zero of temperature:
As a system asymptotically approaches absolute zero of temperature all processes
virtually cease and the entropy of the system asymptotically approaches a minimum
value; also stated as: "the entropy of all systems and of all states of a system is zero at
absolute zero" or equivalently "it is impossible to reach the absolute zero of
temperature by any finite number of processes".
See also: Bose–Einstein condensate and negative temperature.
Potentials
Main article: Thermodynamic potentials
As can be derived from the energy balance equation on a thermodynamic system, there exist
energetic quantities called thermodynamic potentials, which are quantitative measures of the
stored energy in the system. The five most well known potentials
are:
• Internal energy
• Helmholtz free energy
• Enthalpy
• Gibbs free energy
• Grand potential
Other thermodynamic potentials can be obtained through Legendre transformation. Potentials
are used to measure energy changes in systems as they evolve from an initial state to a final
state. The potential used depends on the constraints of the system, such as constant
temperature or pressure. Internal energy is the energy contained within the system itself;
enthalpy is the internal energy plus the energy associated with pressure-volume work; and the
Helmholtz and Gibbs energies are the energies available in a system to do useful work when
the temperature and volume, or the pressure and temperature, are fixed, respectively.
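In symbols, the listed potentials are related by Legendre transformations of the internal energy U; a sketch using the conventional variables (T temperature, S entropy, p pressure, V volume, μ chemical potential, N particle number):

```latex
\begin{aligned}
U &  &&\text{(internal energy)} \\
H &= U + pV &&\text{(enthalpy)} \\
F &= U - TS &&\text{(Helmholtz free energy)} \\
G &= U + pV - TS &&\text{(Gibbs free energy)} \\
\Omega &= U - TS - \mu N &&\text{(grand potential)}
\end{aligned}
```

Each transformation swaps one variable for its conjugate, which is what makes a given potential convenient under particular constraints (e.g. G for fixed pressure and temperature).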
System models
Main article: Thermodynamic system
An important concept in thermodynamics is the “system”. A system is the region of the
universe under study; everything in the universe except the system is known as the surroundings. A
system is separated from the remainder of the universe by a boundary which may be
imaginary or not, but which by convention delimits a finite volume. The possible exchanges
of work, heat, or matter between the system and the surroundings take place across this
boundary. Boundaries are of four types: fixed, moveable, real, and imaginary.
Basically, the “boundary” is simply an imaginary dotted line drawn around a volume of
something when there is going to be a change in the internal energy of that something.
Anything that passes across the boundary that effects a change in the internal energy of the
something needs to be accounted for in the energy balance equation. That something can be
the volumetric region surrounding a single atom resonating energy, such as Max Planck
defined in 1900; it can be a body of steam or air in a steam engine, such as Sadi Carnot
defined in 1824; it can be the body of a tropical cyclone, such as Kerry Emanuel theorized in
1986 in the field of atmospheric thermodynamics; it could also be just one nuclide (i.e. a
system of quarks) as some are theorizing presently in quantum thermodynamics.
For an engine, a fixed boundary means the piston is locked at its position; as such, a constant
volume process occurs. In that same engine, a moveable boundary allows the piston to move
in and out. For closed systems, boundaries are real, while for open systems boundaries are
often imaginary. There are five dominant classes of systems:
1. Isolated Systems – matter and energy may not cross the boundary
2. Adiabatic Systems – heat may not cross the boundary
3. Diathermic Systems – heat may cross the boundary
4. Closed Systems – matter may not cross the boundary
5. Open Systems – heat, work, and matter may cross the boundary (often called a control
volume in this case)
As time passes in an isolated system, internal differences in the system tend to even out and
pressures and temperatures tend to equalize, as do density differences. A system in which all
equalizing processes have gone practically to completion is considered to be in a state of
thermodynamic equilibrium.
In thermodynamic equilibrium, a system's properties are, by definition, unchanging in time.
Systems in equilibrium are much simpler and easier to understand than systems which are not
in equilibrium. Often, when analysing a thermodynamic process, it can be assumed that each
intermediate state in the process is at equilibrium. This will also considerably simplify the
situation. Thermodynamic processes which develop so slowly as to allow each intermediate
step to be an equilibrium state are said to be reversible processes.
Conjugate variables
Main article: Conjugate variables (thermodynamics)
The central concept of thermodynamics is that of energy, the ability to do work. By the First
Law, the total energy of a system and its surroundings is conserved. Energy may be
transferred into a system by heating, compression, or addition of matter, and extracted from a
system by cooling, expansion, or extraction of matter. In mechanics, for example, energy
transfer equals the product of the force applied to a body and the resulting displacement.
Conjugate variables are pairs of thermodynamic concepts, with the first being akin to a
"force" applied to some thermodynamic system, the second being akin to the resulting
"displacement," and the product of the two equalling the amount of energy transferred. The
common conjugate variables are:
• Pressure-volume (the mechanical parameters);
• Temperature-entropy (thermal parameters);
• Chemical potential-particle number (material parameters).
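These pairs appear together in the fundamental thermodynamic relation, where each "force" multiplies the differential of its conjugate "displacement":

```latex
dU = T\,dS - p\,dV + \mu\,dN
```

Here T dS is the heat term, −p dV the mechanical work term, and μ dN the matter-transfer term, matching the three pairs listed above.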
Instrumentation
Main article: Thermodynamic instruments
There are two types of thermodynamic instruments, the meter and the reservoir. A
thermodynamic meter is any device which measures any parameter of a thermodynamic
system. In some cases, the thermodynamic parameter is actually defined in terms of an
idealized measuring instrument. For example, the zeroth law states that if two bodies are in
thermal equilibrium with a third body, they are also in thermal equilibrium with each other.
This principle, as noted by James Maxwell in 1872, asserts that it is possible to measure
temperature. An idealized thermometer is a sample of an ideal gas at constant pressure. From
the ideal gas law PV=nRT, the volume of such a sample can be used as an indicator of
temperature; in this manner it defines temperature. Although pressure is defined
mechanically, a pressure-measuring device, called a barometer, may also be constructed from
a sample of an ideal gas held at a constant temperature. A calorimeter is a device which is
used to measure and define the internal energy of a system.
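The ideal-gas thermometer described above can be sketched numerically; a minimal illustration assuming SI units and the gas constant R ≈ 8.314 J/(mol·K):

```python
# Ideal-gas thermometer: at constant pressure, the volume of a fixed
# amount of ideal gas indicates temperature via PV = nRT.
R = 8.314  # gas constant, J/(mol K)

def temperature_from_volume(pressure, volume, moles):
    """Infer the temperature (K) of an ideal-gas sample from PV = nRT."""
    return pressure * volume / (moles * R)

# One mole at atmospheric pressure (101325 Pa) occupying 22.4 L:
T = temperature_from_volume(101325.0, 0.0224, 1.0)
print(round(T))  # 273 (near the ice point, as expected)
```

The point of the idealization is that the instrument's reading (the volume) maps to temperature through a known law, rather than through a calibration against some other thermometer.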
A thermodynamic reservoir is a system which is so large that it does not appreciably alter its
state parameters when brought into contact with the test system. It is used to impose a
particular value of a state parameter upon the system. For example, a pressure reservoir is a
system at a particular pressure, which imposes that pressure upon any test system that it is
mechanically connected to. The Earth's atmosphere is often used as a pressure reservoir.
It is important that these two types of instruments are distinct. A meter does not perform its
task accurately if it behaves like a reservoir of the state variable it is trying to measure. If, for
example, a thermometer were to act as a temperature reservoir it would alter the temperature
of the system being measured, and the reading would be incorrect. Ideal meters have no
effect on the state variables of the system they are measuring.
States & processes
When a system is at equilibrium under a given set of conditions, it is said to be in a definite
state. The thermodynamic state of the system can be described by a number of intensive
variables and extensive variables. The properties of the system can be described by an
equation of state which specifies the relationship between these variables. State may be
thought of as the instantaneous quantitative description of a system with a set number of
variables held constant.
A thermodynamic process may be defined as the energetic evolution of a thermodynamic
system proceeding from an initial state to a final state. Typically, each thermodynamic
process is distinguished from other processes, in energetic character, according to which
parameters, such as temperature, pressure, or volume, are held fixed. Furthermore, it is useful
to group these processes into pairs, in which each variable held constant is one member of a
conjugate pair. The seven most common thermodynamic processes are shown below:
1. An isobaric process occurs at constant pressure.
2. An isochoric process, or isometric/isovolumetric process, occurs at constant volume.
3. An isothermal process occurs at a constant temperature.
4. An adiabatic process occurs without loss or gain of energy by heat.
5. An isentropic process (reversible adiabatic process) occurs at a constant entropy.
6. An isenthalpic process occurs at a constant enthalpy.
7. A steady state process occurs without a change in the internal energy of a system.

Strength of materials
In materials science, the strength of a material is its ability to withstand an applied stress
without failure. Yield strength refers to the point on the engineering stress-strain curve (as
opposed to true stress-strain curve) beyond which the material begins deformation that cannot
be reversed upon removal of the loading. Ultimate strength refers to the point on the
engineering stress-strain curve corresponding to the maximum stress. The applied stress may
be tensile, compressive, shear, or torsional.
A material's strength is dependent on its microstructure. The engineering processes to which
a material is subjected can alter this microstructure. The variety of strengthening mechanisms
that alter the strength of a material includes work hardening, solid solution strengthening,
precipitation hardening and grain boundary strengthening and can be quantified and
qualitatively explained. However, strengthening mechanisms are accompanied by the caveat
that some mechanical properties of the material may degenerate in an attempt to make the
material stronger. For example, in grain boundary strengthening, although yield strength is
maximized with decreasing grain size, ultimately, very small grain sizes make the material
brittle. In general, the yield strength of a material is an adequate indicator of the material's
mechanical strength. Considered in tandem with the fact that the yield strength is the
parameter that predicts plastic deformation in the material, one can make informed decisions
on how to increase the strength of a material depending on its microstructural properties and the
desired end effect. Strength is considered in terms of compressive strength, tensile strength,
and shear strength, namely the limit states of compressive stress, tensile stress and shear
stress, respectively. The effects of dynamic loading are probably the most important practical
part of the strength of materials, especially the problem of fatigue. Repeated loading often
initiates brittle cracks, which grow slowly until failure occurs.
However, the term strength of materials most often refers to various methods of calculating
stresses in structural members, such as beams, columns and shafts. The methods that can be
employed to predict the response of a structure under loading and its susceptibility to various
failure modes may take into account various properties of the materials other than material
(yield or ultimate) strength. For example, failure in buckling is dependent on material stiffness
(Young's Modulus).


Definitions
Stress terms

A material being loaded in a) compression, b) tension, c) shear.


Uniaxial stress is expressed by

\sigma = \frac{F}{A}

where F is the force [N] acting on an area A [m²]. The area can be the undeformed area or the
deformed area, depending on whether engineering stress or true stress is used.
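The engineering/true stress distinction is just a choice of reference area; a small sketch with hypothetical test values:

```python
def engineering_stress(force, original_area):
    """Engineering stress: force over the undeformed cross-section (Pa)."""
    return force / original_area

def true_stress(force, current_area):
    """True stress: force over the instantaneous (deformed) cross-section (Pa)."""
    return force / current_area

# Hypothetical tensile test: 50 kN on a bar of 100 mm^2 original area
# that has necked down to 80 mm^2.
F = 50e3                 # N
A0, A = 100e-6, 80e-6    # m^2
print(round(engineering_stress(F, A0) / 1e6, 1))  # 500.0 MPa
print(round(true_stress(F, A) / 1e6, 1))          # 625.0 MPa
```

As the specimen necks, the true stress grows faster than the engineering stress because the load-bearing area shrinks.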
• Compressive stress (or compression) is the stress state caused by an applied load that
acts to reduce the length of the material (compression member) in the axis of the
applied load, in other words the stress state caused by squeezing the material. A
simple case of compression is the uniaxial compression induced by the action of
opposing, pushing forces. The compressive strength of materials is generally higher than
their tensile strength. However, structures loaded in compression are subject to
additional failure modes dependent on geometry, such as Euler buckling.
• Tensile stress is the stress state caused by an applied load that tends to elongate the
material in the axis of the applied load, in other words the stress caused by pulling the
material. The strength of structures of equal cross sectional area loaded in tension is
independent of cross section geometry. Materials loaded in tension are susceptible to
stress concentrations such as material defects or abrupt changes in geometry.
However, materials exhibiting ductile behavior (metals, for example) can tolerate some
defects, while brittle materials (such as ceramics) can fail well below their ultimate
stress.
• Shear stress is the stress state caused by a pair of opposing forces acting along
parallel lines of action through the material, in other words the stress caused by
sliding faces of the material relative to one another. An example is cutting paper with
scissors.
Strength terms
• Yield strength is the lowest stress that gives permanent deformation in a material. In
some materials, like aluminium alloys, the point of yielding is hard to define, thus it is
usually given as the stress required to cause 0.2% plastic strain. This is called a 0.2%
proof stress.
• Compressive strength is a limit state of compressive stress that leads to compressive
failure in the manner of ductile failure (infinite theoretical yield) or in the manner of
brittle failure (rupture as the result of crack propagation, or sliding along a weak plane
- see shear strength).
• Tensile strength or ultimate tensile strength is a limit state of tensile stress that leads
to tensile failure in the manner of ductile failure (yield as the first stage of failure,
some hardening in the second stage and break after a possible "neck" formation) or in
the manner of brittle failure (sudden breaking in two or more pieces with a low stress
state). Tensile strength can be given as either true stress or engineering stress.
• Fatigue strength is a measure of the strength of a material or a component under
cyclic loading, and is usually more difficult to assess than the static strength
measures. Fatigue strength is given as stress amplitude or stress range (Δσ = σmax −
σmin), usually at zero mean stress, along with the number of cycles to failure.
• Impact strength is the capability of a material to withstand suddenly applied loads,
expressed in terms of energy. It is often measured with the Izod impact strength test or
the Charpy impact test, both of which measure the impact energy required to fracture a
sample.
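The quantities used to report fatigue data above (stress range Δσ = σmax − σmin, amplitude, mean stress) can be written down directly; the ±150 MPa loading below is an assumed example, not data from the text:

```python
def stress_range(sigma_max, sigma_min):
    """Fatigue stress range: delta_sigma = sigma_max - sigma_min (Pa)."""
    return sigma_max - sigma_min

def stress_amplitude(sigma_max, sigma_min):
    """Stress amplitude: half the stress range (Pa)."""
    return (sigma_max - sigma_min) / 2

def mean_stress(sigma_max, sigma_min):
    """Mean of the cyclic stress (Pa); zero for fully reversed loading."""
    return (sigma_max + sigma_min) / 2

# Fully reversed loading at +/- 150 MPa:
print(stress_range(150e6, -150e6) / 1e6)  # 300.0 MPa
print(mean_stress(150e6, -150e6) / 1e6)   # 0.0 (zero mean stress)
```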

Strain (deformation) terms


• Deformation of the material is the change in geometry when stress is applied (in the
form of force loading, gravitational field, acceleration, thermal expansion, etc.).
Deformation is expressed by the displacement field of the material.
• Strain, or reduced deformation, is a mathematical term expressing how the
deformation varies across the material. For uniaxial loading – displacements of
a specimen (for example a bar element) – it is expressed as the quotient of the
displacement and the length of the specimen. For 3D displacement fields it is
expressed as derivatives of the displacement functions in terms of a second-order tensor
(with six independent elements).
• Deflection is a term to describe the magnitude to which a structural element bends
under a load.
Stress-strain relations
• Elasticity is the ability of a material to return to its previous shape after stress is
released. In many materials, the relation between applied stress and the resulting
strain is directly proportional (up to a certain limit), and a graph representing those
two quantities is a straight line.
The slope of this line is known as Young's Modulus, or the "Modulus of Elasticity." The
Modulus of Elasticity can be used to determine stress-strain relationships in the linear-elastic
portion of the stress-strain curve. The linear-elastic region is taken to be between 0 and 0.2%
strain, and is defined as the region of strain in which no yielding (permanent deformation)
occurs.
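Within the linear-elastic region, Hooke's law lets stress and strain be interconverted through the modulus; a sketch assuming a typical steel modulus of about 200 GPa (an illustrative value, not from the text):

```python
E_STEEL = 200e9  # Young's modulus of a typical steel, Pa (assumed value)

def stress_from_strain(strain, modulus=E_STEEL):
    """Hooke's law in the linear-elastic region: sigma = E * epsilon."""
    return modulus * strain

def strain_from_stress(stress, modulus=E_STEEL):
    """Inverse relation: epsilon = sigma / E."""
    return stress / modulus

# 0.1% strain, well inside the ~0.2% linear-elastic region:
sigma = stress_from_strain(0.001)
print(round(sigma / 1e6, 1))  # 200.0 MPa
```

The linearity only holds up to the proportional limit; past yield, this relation no longer applies.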
• Plasticity or plastic deformation is the opposite of elastic deformation and is defined
as unrecoverable strain. Plastic deformation is retained even after the relaxation of the
applied stress. Most materials in the linear-elastic category are usually capable of
plastic deformation. Brittle materials, like ceramics, do not experience any plastic
deformation and will fracture under relatively low stress. Materials such as metals
usually experience a small amount of plastic deformation before failure, while soft or
ductile polymers will plastically deform much more.
Consider the difference between a carrot and chewed bubble gum. The carrot will stretch
very little before breaking, but nevertheless will still stretch. The chewed bubble gum, on the
other hand, will plastically deform enormously before finally breaking.
Design terms
Ultimate strength is an attribute of a material itself, rather than just of a specific
specimen of the material, and as such is quoted as force per unit of cross-sectional area (N/m²).
For example, the ultimate tensile strength (UTS) of AISI 1018 Steel is 440 MN/m². In
general, the SI unit of stress is the pascal, where 1 Pa = 1 N/m². In Imperial units, the unit of
stress is given as lbf/in² or pounds-force per square inch. This unit is often abbreviated as psi.
One thousand psi is abbreviated ksi.
Factor of safety is a design constraint that an engineered component or structure must
achieve: FS = UTS / R, where FS is the factor of safety, R is the applied stress, and UTS is
the ultimate stress.
Margin of safety is also sometimes used as a design constraint. It is defined as
MS = FS − 1.
For example, to achieve a factor of safety of 4, the allowable stress in an AISI 1018 steel
component can be worked out as R = UTS / FS = 440/4 = 110 MPa, or R = 110×10⁶ N/m².
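The worked example above can be reproduced directly; the 440 MPa ultimate tensile strength of AISI 1018 steel is taken from the text:

```python
def allowable_stress(uts, factor_of_safety):
    """Allowable applied stress R = UTS / FS (same units as UTS)."""
    return uts / factor_of_safety

def margin_of_safety(factor_of_safety):
    """Margin of safety: MS = FS - 1."""
    return factor_of_safety - 1

UTS_AISI_1018 = 440e6  # Pa, from the text
R = allowable_stress(UTS_AISI_1018, 4)
print(R / 1e6)              # 110.0 MPa, matching the worked example
print(margin_of_safety(4))  # 3
```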

Creep (deformation)

Creep is the tendency of a solid material to slowly move or deform permanently under the
influence of stresses. It occurs as a result of long term exposure to levels of stress that are
below the yield strength of the material. Creep is more severe in materials that are subjected
to heat for long periods, and near the melting point. Creep always increases with temperature.
The rate of this deformation is a function of the material properties, exposure time, exposure
temperature and the applied structural load. Depending on the magnitude of the applied stress
and its duration, the deformation may become so large that a component can no longer
perform its function — for example creep of a turbine blade will cause the blade to contact
the casing, resulting in the failure of the blade. Creep is usually of concern to engineers and
metallurgists when evaluating components that operate under high stresses or high
temperatures. Creep is a deformation mechanism that may or may not constitute a failure
mode. Moderate creep in concrete is sometimes welcomed because it relieves tensile stresses
that might otherwise lead to cracking.
Unlike brittle fracture, creep deformation does not occur suddenly upon the application of
stress. Instead, strain accumulates as a result of long-term stress. Creep deformation is "time-
dependent" deformation.
The temperature range in which creep deformation may occur differs in various materials.
For example, tungsten requires a temperature in the thousands of degrees before creep
deformation can occur, while ice formations will creep at freezing temperatures.[1] As a rule of
thumb, the effects of creep deformation generally become noticeable at approximately 30%
of the melting point for metals and 40–50% of melting point for ceramics. Virtually any
material will creep upon approaching its melting temperature. Since the minimum
temperature is relative to melting point, creep can be seen at relatively low temperatures for
some materials. Plastics and low-melting-temperature metals, including many solders, creep
at room temperature as can be seen markedly in old lead hot-water pipes. Planetary ice is
often at a high temperature relative to its melting point, and creeps.
Creep deformation is important not only in systems where high temperatures are endured
such as nuclear power plants, jet engines and heat exchangers, but also in the design of many
everyday objects. For example, metal paper clips are stronger than plastic ones because
plastics creep at room temperatures. Aging glass windows are often erroneously used as an
example of this phenomenon: measurable creep would only occur at temperatures above the
glass transition temperature around 500 °C (900 °F). While glass does exhibit creep under the
right conditions, apparent sagging in old windows may instead be a consequence of obsolete
manufacturing processes, such as that used to create crown glass, which resulted in
inconsistent thickness.[2][3]
An example of an application involving creep deformation is the design of tungsten light bulb
filaments. Sagging of the filament coil between its supports increases with time due to creep
deformation caused by the weight of the filament itself. If too much deformation occurs, the
adjacent turns of the coil touch one another, causing an electrical short and local overheating,
which quickly leads to failure of the filament. The coil geometry and supports are therefore
designed to limit the stresses caused by the weight of the filament, and a special tungsten
alloy with small amounts of oxygen trapped in the crystallite grain boundaries is used to slow
the rate of coble creep.
In steam turbine power plants, pipes carry steam at high temperatures (566 °C or 1050 °F)
and pressures (above 24.1 MPa or 3500 psi). In jet engines, temperatures can reach up to
1400 °C (2550 °F) and initiate creep deformation in even advanced-coated turbine blades.
Hence, it is crucial for correct functionality to understand the creep deformation behavior of
materials.

Stages of creep
In the initial stage, or primary creep, the strain rate is relatively high, but slows with
increasing strain. This is due to work hardening. The strain rate eventually reaches a
minimum and becomes near constant. This is due to the balance between work hardening and
annealing (thermal softening). This stage is known as secondary or steady-state creep. This
stage is the most understood. The characterized "creep strain rate" typically refers to the rate
in this secondary stage. Stress dependence of this rate depends on the creep mechanism. In
tertiary creep, the strain rate exponentially increases with strain because of necking
phenomena.
Mechanisms of creep
The mechanism of creep depends on temperature and stress. The various mechanisms are:
• Bulk diffusion
• Climb — here the strain is actually accomplished by climb
• Climb-assisted glide — here the climb is an enabling mechanism, allowing
dislocations to get around obstacles
• Grain boundary diffusion
• Thermally activated glide — e.g., via cross-slip
General creep equation

\frac{d\varepsilon}{dt} = \frac{C\,\sigma^{m}}{d^{b}}\, e^{-Q/kT}

where ε is the creep strain, C is a constant dependent on the material and the particular creep
mechanism, m and b are exponents dependent on the creep mechanism, Q is the activation
energy of the creep mechanism, σ is the applied stress, d is the grain size of the material, k is
Boltzmann's constant, and T is the absolute temperature.
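The general creep equation lends itself to a direct numerical sketch. The parameter values below are purely illustrative, not material data; b = 0 mimics the dislocation-creep case discussed next, which has no grain-size dependence:

```python
import math

K_BOLTZMANN = 1.380649e-23  # Boltzmann's constant, J/K

def creep_rate(stress, grain_size, temperature, C, m, b, Q):
    """General creep rate: d(eps)/dt = C * sigma^m / d^b * exp(-Q / (k T))."""
    return (C * stress**m / grain_size**b
            * math.exp(-Q / (K_BOLTZMANN * temperature)))

# Illustrative (non-physical) parameters at two temperatures:
rate_hot = creep_rate(stress=1e8, grain_size=1e-5, temperature=1200,
                      C=1e-40, m=5, b=0, Q=4e-19)
rate_cold = creep_rate(stress=1e8, grain_size=1e-5, temperature=900,
                       C=1e-40, m=5, b=0, Q=4e-19)
assert rate_hot > rate_cold  # creep accelerates with temperature
```

The exponential Arrhenius factor is what makes creep so strongly temperature dependent: a modest temperature rise can change the rate by orders of magnitude.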
Dislocation creep
At high stresses (relative to the shear modulus), creep is controlled by the movement of
dislocations. For dislocation creep, Q = Q(self diffusion), m = 4-6, and b = 0. Therefore,
dislocation creep has a strong dependence on the applied stress and no grain size dependence.
Some alloys exhibit a very large stress exponent (n > 10), and this has typically been
explained by introducing a "threshold stress," σth, below which creep can't be measured. The
modified power law equation then becomes:

\frac{d\varepsilon}{dt} = A\,(\sigma - \sigma_{th})^{n}\, e^{-Q/kT}

where A, Q and n can all be explained by conventional mechanisms (so 3 ≤ n ≤ 10).


Nabarro-Herring creep
Nabarro-Herring creep is a form of diffusion controlled creep. In Nabarro-Herring creep,
atoms diffuse through the lattice causing grains to elongate along the stress axis; k is related
to the diffusion coefficient of atoms through the lattice, Q = Q(self diffusion), m = 1, and b =
2. Therefore Nabarro-Herring creep has a weak stress dependence and a moderate grain size
dependence, with the creep rate decreasing as grain size is increased.
Nabarro-Herring creep is strongly temperature dependent. For lattice diffusion of atoms to
occur in a material, neighboring lattice sites or interstitial sites in the crystal structure must be
free. A given atom must also overcome the energy barrier to move from its current site (it lies
in an energetically favorable potential well) to the nearby vacant site (another potential well).
The general form of the diffusion equation is D = D0 exp(−E/kT), where D0 has a dependence
on both the attempted jump frequency and the number of nearest neighbor sites and the
probability of the sites being vacant. Thus there is a double dependence upon temperature. At
higher temperatures the diffusivity increases due to the direct temperature dependence of the
equation, the increase in vacancies through Schottky defect formation, and an increase in the
average energy of atoms in the material. Nabarro-Herring creep dominates at very high
temperatures relative to a material's melting temperature.
Coble creep
Coble creep is a second form of diffusion controlled creep. In Coble creep the atoms diffuse
along grain boundaries to elongate the grains along the stress axis. This causes Coble creep to
have a stronger grain size dependence than Nabarro-Herring creep. For Coble creep k is
related to the diffusion coefficient of atoms along the grain boundary, Q = Q(grain boundary
diffusion), m = 1, and b = 3. Because Q(grain boundary diffusion) < Q(self diffusion), Coble
creep occurs at lower temperatures than Nabarro-Herring creep. Coble creep is still
temperature dependent, as the temperature increases so does the grain boundary diffusion.
However, since the number of nearest neighbors is effectively limited along the interface of
the grains, and thermal generation of vacancies along the boundaries is less prevalent, the
temperature dependence is not as strong as in Nabarro-Herring creep. It also exhibits the
same linear dependence on stress as Nabarro-Herring creep.
Creep of polymers

Strain as a function of time due to constant stress over an extended period for a viscoelastic
material.
a) Applied stress and b) induced strain as functions of time over a short period for a
viscoelastic material.
Creep can occur in polymers and metals which are considered viscoelastic materials. When a
polymeric material is subjected to an abrupt force, the response can be modeled using the
Kelvin-Voigt model. In this model, the material is represented by a Hookean spring and a
Newtonian dashpot in parallel. The creep strain is given by

\varepsilon(t) = \sigma C_{0} + \sigma C \int_{0}^{\infty} f(\tau)\left(1 - e^{-t/\tau}\right) d\tau

where:
• σ = applied stress
• C0 = instantaneous creep compliance
• C = creep compliance coefficient
• τ = retardation time
• f(τ) = distribution of retardation times
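For a single retardation time (f(τ) collapsed to one value), the compliance integral reduces to a single exponential term; a sketch with illustrative parameter values, not measured data:

```python
import math

def kelvin_voigt_creep_strain(stress, t, c0, c, tau):
    """Creep strain for a single retardation time:
    eps(t) = sigma * (C0 + C * (1 - exp(-t / tau)))."""
    return stress * (c0 + c * (1 - math.exp(-t / tau)))

# Illustrative values: instantaneous compliance C0, delayed compliance C,
# retardation time tau = 10 s, under 1 MPa of constant stress.
sigma, c0, c, tau = 1e6, 1e-9, 5e-9, 10.0
print(kelvin_voigt_creep_strain(sigma, 0.0, c0, c, tau))  # instantaneous part: 0.001
eps_late = kelvin_voigt_creep_strain(sigma, 100.0, c0, c, tau)
assert eps_late < sigma * (c0 + c)  # strain approaches, never exceeds, its limit
```

The exponential term is the "delayed" viscoelastic response: at t = 0 only the instantaneous compliance contributes, and the strain then rises asymptotically toward σ(C0 + C).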
When subjected to a step constant stress, viscoelastic materials experience a time-dependent
increase in strain. This phenomenon is known as viscoelastic creep.
At a time t0, a viscoelastic material is loaded with a constant stress that is maintained for a
sufficiently long time period. The material responds to the stress with a strain that increases
until the material ultimately fails. When the stress is maintained for a shorter time period, the
material undergoes an initial strain until a time t1 at which the stress is relieved, at which time
the strain immediately decreases (discontinuity) then continues decreasing gradually to a
residual strain.
Viscoelastic creep data can be presented in one of two ways. Total strain can be plotted as a
function of time for a given temperature or temperatures. Below a critical value of applied
stress, a material may exhibit linear viscoelasticity. Above this critical stress, the creep rate
grows disproportionately faster. The second way of graphically presenting viscoelastic creep
in a material is by plotting the creep modulus (constant applied stress divided by total strain
at a particular time) as a function of time.[4] Below its critical stress, the viscoelastic creep
modulus is independent of stress applied. A family of curves describing strain versus time
response to various applied stress may be represented by a single viscoelastic creep modulus
versus time curve if the applied stresses are below the material's critical stress value.
Additionally, the molecular weight of the polymer of interest is known to affect its creep
behavior. The effect of increasing molecular weight tends to promote secondary bonding
between polymer chains and thus make the polymer more creep resistant. Similarly, aromatic
polymers are even more creep resistant due to the added stiffness from the rings. Both
molecular weight and aromatic rings add to polymers' thermal stability, increasing the creep
resistance of a polymer.[5]
Both polymers and metals can creep. Polymers experience significant creep at all
temperatures above ca. −200 °C; however, there are three main differences between
polymeric and metallic creep.[6]
Polymers show creep basically in two different ways. At typical work loads (5% up to 50%),
ultra-high-molecular-weight polyethylene (Spectra, Dyneema) will show time-linear creep,
whereas polyester or aramids (Twaron, Kevlar) will show time-logarithmic creep.
Other examples
• Though mostly due to the reduced yield stress at higher temperatures, the collapse of
the World Trade Center was due in part to creep at elevated temperatures.[7]
• The creep rate of hot pressure-loaded components in a nuclear reactor at power can be
a significant design-constraint, since the creep rate is enhanced by the flux of
energetic particles.
• Creep was blamed for the Big Dig tunnel ceiling collapse in Boston, Massachusetts.


Mechanical engineering

Mechanical engineers design and build engines and power plants...

...structures and vehicles of all sizes...


Mechanical engineering is an engineering discipline that was developed from the
application of principles from physics and materials science. According to the American
Heritage Dictionary, it is the branch of engineering that encompasses the generation and
application of heat and mechanical power and the design, production, and use of machines
and tools. It is one of the oldest and broadest engineering disciplines.
The field requires a solid understanding of core concepts including mechanics, kinematics,
thermodynamics, fluid mechanics, heat transfer, materials science, and energy. Mechanical
engineers use the core principles as well as other knowledge in the field to design and analyze
manufacturing plants, industrial equipment and machinery, heating and cooling systems,
motor vehicles, aircraft, watercraft, robotics, medical devices and more.
Development

...and mechanisms, machines, and robots.


Applications of mechanical engineering are found in the records of many ancient and
medieval societies throughout the globe. In ancient Greece, the works of Archimedes (287
BC–212 BC) and Heron of Alexandria (c. 10–70 AD) deeply influenced mechanics in the
Western tradition. In China, Zhang Heng (78–139 AD) improved a water clock and invented
a seismometer, and Ma Jun (200–265 AD) invented a chariot with differential gears. The
medieval Chinese horologist and engineer Su Song (1020–1101 AD) incorporated an
escapement mechanism into his astronomical clock tower two centuries before any
escapement could be found in clocks of medieval Europe, as well as the world's first known
endless power-transmitting chain drive.[1]
During the 7th through 15th centuries, the era called the Islamic Golden Age, Muslims made
remarkable contributions to the field of mechanical technology. Al-Jazari, one of the most
notable among them, presented many mechanical designs in his famous "Book of Knowledge
of Ingenious Mechanical Devices" of 1206. He is also considered the inventor of mechanical
devices which now form the basis of many mechanisms, such as the crankshaft and
camshaft.[2]
During the early 19th century in England and Scotland, the development of machine tools led
mechanical engineering to develop as a separate field within engineering, providing
manufacturing machines and the engines to power them.[3] The first British professional
society of mechanical engineers was formed in 1847, thirty years after civil engineers formed
the first such professional society.[4] In the United States, the American Society of
Mechanical Engineers (ASME) was formed in 1880, becoming the third such professional
engineering society, after the American Society of Civil Engineers (1852) and the American
Institute of Mining Engineers (1871).[5] The first schools in the United States to offer an
engineering education were the United States Military Academy in 1817, an institution now
known as Norwich University in 1819, and Rensselaer Polytechnic Institute in 1825.
Education in mechanical engineering has historically been based on a strong foundation in
mathematics and science.[6]
The field of mechanical engineering is considered among the broadest of engineering
disciplines. The work of mechanical engineering ranges from the depths of the ocean to outer
space.
Education
Degrees in mechanical engineering are offered at universities worldwide. In Bangladesh,
China, India, Nepal, North America, and Pakistan, mechanical engineering programs
typically take four to five years and result in a Bachelor of Science (B.Sc), Bachelor of
Technology (B.Tech), Bachelor of Engineering (B.Eng), or Bachelor of Applied Science
(B.A.Sc) degree, in or with emphasis in mechanical engineering. In Spain, Portugal and most
of South America, where neither BSc nor BTech programs have been adopted, the formal
name for the degree is "Mechanical Engineer", and the course work is based on five or six
years of training. In Italy the course work is based on five years of training, but in order to
qualify as an engineer one must pass a state exam at the end of the course.
In the U.S., most undergraduate mechanical engineering programs are accredited by the
Accreditation Board for Engineering and Technology (ABET) to ensure similar course
requirements and standards among universities. The ABET web site lists 276 accredited
mechanical engineering programs as of June 19, 2006.[7] Mechanical engineering programs in
Canada are accredited by the Canadian Engineering Accreditation Board (CEAB),[8] and most
other countries offering engineering degrees have similar accreditation societies.
Some mechanical engineers go on to pursue a postgraduate degree such as a Master of
Engineering, Master of Technology, Master of Science, Master of Engineering Management
(MEng.Mgt or MEM), a Doctor of Philosophy in engineering (EngD, PhD) or an engineer's
degree. The master's and engineer's degrees may or may not include research. The Doctor of
Philosophy includes a significant research component and is often viewed as the entry point
to academia.[9]
Coursework
Standards set by each country's accreditation society are intended to provide uniformity in
fundamental subject material, promote competence among graduating engineers, and to
maintain confidence in the engineering profession as a whole. Engineering programs in the
U.S., for instance, are required by ABET to show that their students can "work professionally
in both thermal and mechanical systems areas."[10] The specific courses required to graduate,
however, may differ from program to program. Universities will often combine multiple
subjects into a single class or split a subject into multiple classes, depending on the faculty
available and the university's major area(s) of research. Fundamental subjects of mechanical
engineering usually include:
• Statics and dynamics
• Strength of materials and solid mechanics
• Instrumentation and measurement
• Thermodynamics, heat transfer, energy conversion, and HVAC
• Fluid mechanics and fluid dynamics
• Mechanism design (including kinematics and dynamics)
• Manufacturing technology or processes
• Hydraulics and pneumatics
• Engineering design
• Mechatronics and control theory
• Materials engineering
• Drafting, CAD (including solid modeling), and CAM[11][12]
Mechanical engineers are also expected to understand and be able to apply basic concepts
from chemistry, chemical engineering, electrical engineering, civil engineering, and physics.
Most mechanical engineering programs include several semesters of calculus, as well as
advanced mathematical concepts which may include differential equations and partial
differential equations, linear and modern algebra, and differential geometry, among others.
In addition to the core mechanical engineering curriculum, many mechanical engineering
programs offer more specialized programs and classes, such as robotics, transport and
logistics, cryogenics, fuel technology, automotive engineering, biomechanics, vibration,
optics and others, if a separate department does not exist for these subjects.[13]
Most mechanical engineering programs also require varying amounts of research or
community projects to gain practical problem-solving experience. In the United States it is
common for mechanical engineering students to complete one or more internships while
studying, though this is not typically mandated by the university.
License
Engineers may seek licensure from a state, provincial, or national government. The purpose of
this process is to ensure that engineers possess the necessary technical knowledge, real-world
experience, and knowledge of the local legal system to practice engineering at a professional
level. Once certified, the engineer is given the title of Professional Engineer (in the United
States, Canada, Japan, South Korea, Bangladesh and South Africa), Chartered Engineer (in
the UK, Ireland, India and Zimbabwe), Chartered Professional Engineer (in Australia and
New Zealand) or European Engineer (much of the European Union). Not all mechanical
engineers choose to become licensed; those that do can be distinguished as Chartered or
Professional Engineers by the post-nominal title P.E., P. Eng., or C.Eng., as in: John Doe,
P.Eng.
In the U.S., to become a licensed Professional Engineer, an engineer must pass the
comprehensive FE (Fundamentals of Engineering) exam, work a given number of years as an
Engineering Intern (EI) or Engineer-in-Training (EIT), and finally pass the "Principles and
Practice of Engineering" or PE (Professional Engineer) exam.
In the United States, the requirements and steps of this process are set forth by the National
Council of Examiners for Engineering and Surveying (NCEES), a national non-profit
representing all states. In the UK, current graduates require a BEng plus an appropriate
master's degree, or an integrated MEng degree, plus a minimum of four years of postgraduate
on-the-job competency development, in order to become chartered through the Institution of
Mechanical Engineers.
In most modern countries, certain engineering tasks, such as the design of bridges, electric
power plants, and chemical plants, must be approved by a Professional Engineer or a
Chartered Engineer. "Only a licensed engineer, for instance, may prepare, sign, seal and
submit engineering plans and drawings to a public authority for approval, or to seal
engineering work for public and private clients."[14] This requirement can be written into state
and provincial legislation, such as Quebec's Engineer Act.[15] In other countries, such as
Australia, no such legislation exists; however, practically all certifying bodies maintain a
code of ethics independent of legislation that they expect all members to abide by or risk
expulsion.[16]
Further information: FE Exam, Professional Engineer, Chartered Engineer, Incorporated
Engineer, and Washington Accord
Salaries and workforce statistics
The total number of engineers employed in the U.S. in 2004 was roughly 1.4 million. Of
these, 226,000 were mechanical engineers (15.6%), second only to civil engineers in size at
237,000 (16.4%). The total number of mechanical engineering jobs in 2004 was projected to
grow 9% to 17%, with average starting salaries being $50,256 with a bachelor's degree,
$59,880 with a master's degree, and $68,299 with a doctorate degree. This places mechanical
engineering at 8th of 14 among engineering bachelors degrees, 4th of 11 among masters
degrees, and 6th of 7 among doctorate degrees in average annual salary.[17] The median
annual income of mechanical engineers in the U.S. workforce is roughly $63,000. This
number is highest when working for the government ($72,500), and lowest when doing
general purpose machinery manufacturing in the private sector ($55,850).[18]
Canadian engineers make an average of $29.83 per hour with 4% unemployed. The average
for all occupations is $18.07 per hour with 7% unemployed. Twelve percent of these
engineers are self-employed, and since 1997 the proportion of female engineers has risen to
6%.[19]
Mechanical Engineering is the second highest paid profession in the UK behind medicine. A
Mechanical Engineer with a CEng Status earns an average of £55,000 a year.[citation needed]
Modern tools
Many mechanical engineering companies, especially those in industrialized nations, have
begun to incorporate computer-aided engineering (CAE) programs into their existing design
and analysis processes, including 2D and 3D solid modeling computer-aided design (CAD).
This method has many benefits, including easier and more exhaustive visualization of
products, the ability to create virtual assemblies of parts, and the ease of use in designing
mating interfaces and tolerances.
Other CAE programs commonly used by mechanical engineers include product lifecycle
management (PLM) tools and analysis tools used to perform complex simulations. Analysis
tools may be used to predict product response to expected loads, including fatigue life and
manufacturability. These tools include finite element analysis (FEA), computational fluid
dynamics (CFD), and computer-aided manufacturing (CAM).
Using CAE programs, a mechanical design team can quickly and cheaply iterate the design
process to develop a product that better meets cost, performance, and other constraints. No
physical prototype need be created until the design nears completion, allowing hundreds or
thousands of designs to be evaluated, instead of a relative few. In addition, CAE analysis
programs can model complicated physical phenomena which cannot be solved by hand, such
as viscoelasticity, complex contact between mating parts, and non-Newtonian flows.
As mechanical engineering begins to merge with other disciplines, as seen in mechatronics,
multidisciplinary design optimization (MDO) is being used with other CAE programs to
automate and improve the iterative design process. MDO tools wrap around existing CAE
processes, allowing product evaluation to continue even after the analyst goes home for the
day. They also utilize sophisticated optimization algorithms to more intelligently explore
possible designs, often finding better, innovative solutions to difficult multidisciplinary
design problems.
Subdisciplines
The field of mechanical engineering can be thought of as a collection of many mechanical
disciplines. Several of these subdisciplines which are typically taught at the undergraduate
level are listed below, with a brief explanation and the most common application of each.
Some of these subdisciplines are unique to mechanical engineering, while others are a
combination of mechanical engineering and one or more other disciplines. Most work that a
mechanical engineer does uses skills and techniques from several of these subdisciplines, as
well as specialized subdisciplines. Specialized subdisciplines, as used in this article, are more
likely to be the subject of graduate studies or on-the-job training than undergraduate research.
Several specialized subdisciplines are discussed at the end of this section.
Mechanics

Mohr's circle, a common tool to study stresses in a mechanical element


Mechanics is, in the most general sense, the study of forces and their effect upon matter.
Typically, engineering mechanics is used to analyze and predict the acceleration and
deformation (both elastic and plastic) of objects under known forces (also called loads) or
stresses. Subdisciplines of mechanics include
• Statics, the study of non-moving bodies under known loads
• Dynamics (or kinetics), the study of how forces affect moving bodies
• Mechanics of materials, the study of how different materials deform under various
types of stress
• Fluid mechanics, the study of how fluids react to forces[20]
• Continuum mechanics, a method of applying mechanics that assumes that objects are
continuous (rather than discrete)
Mechanical engineers typically use mechanics in the design or analysis phases of
engineering. If the engineering project were the design of a vehicle, statics might be
employed to design the frame of the vehicle, in order to evaluate where the stresses will be
most intense. Dynamics might be used when designing the car's engine, to evaluate the forces
in the pistons and cams as the engine cycles. Mechanics of materials might be used to choose
appropriate materials for the frame and engine. Fluid mechanics might be used to design a
ventilation system for the vehicle (see HVAC), or to design the intake system for the engine.
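Mohr's circle, pictured above, reduces a plane-stress state to a centre and radius from which
the principal stresses follow directly. A minimal numeric sketch (the stress values are
illustrative, not taken from any referenced problem):

```python
import math

def principal_stresses(sx, sy, txy):
    """Principal stresses and maximum in-plane shear for a plane-stress
    state: the centre and radius of Mohr's circle.
    sx, sy: normal stresses; txy: shear stress (any consistent units)."""
    center = (sx + sy) / 2.0
    radius = math.hypot((sx - sy) / 2.0, txy)  # Mohr's circle radius
    return center + radius, center - radius, radius

# Illustrative stress state in MPa.
s1, s2, tmax = principal_stresses(80.0, 20.0, 40.0)
print(s1, s2, tmax)  # 100.0 0.0 50.0
```

The same two numbers (centre and radius) that locate the circle graphically give the principal
stresses algebraically, which is why the construction remains a standard teaching tool.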
Kinematics
Kinematics is the study of the motion of bodies (objects) and systems (groups of objects),
while ignoring the forces that cause the motion. The movement of a crane and the oscillations
of a piston in an engine are both simple kinematic systems. The crane is a type of open
kinematic chain, while the piston is part of a closed four-bar linkage.
Mechanical engineers typically use kinematics in the design and analysis of mechanisms.
Kinematics can be used to find the possible range of motion for a given mechanism, or,
working in reverse, can be used to design a mechanism that has a desired range of motion.
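The piston mechanism mentioned above has a simple closed-form kinematic description. A
sketch of the standard slider-crank position equation follows; the crank and rod dimensions
are assumed for illustration:

```python
import math

def piston_position(r, l, theta):
    """Piston displacement from the crank centre for a slider-crank
    mechanism: x = r cos(theta) + sqrt(l^2 - (r sin(theta))^2).
    r: crank radius, l: connecting-rod length, theta: crank angle (rad)."""
    return r * math.cos(theta) + math.sqrt(l**2 - (r * math.sin(theta))**2)

r, l = 0.05, 0.15  # 50 mm crank, 150 mm rod (assumed dimensions)
for deg in (0, 90, 180):
    x = piston_position(r, l, math.radians(deg))
    print(f"theta = {deg:3d} deg   x = {x:.4f} m")
```

Sweeping the crank angle through a full revolution this way recovers the piston's range of
motion (here a stroke of 2r = 0.10 m), which is exactly the "range of motion" analysis the
paragraph describes.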
Mechatronics and robotics
Training FMS with learning robot SCORBOT-ER 4u, workbench CNC Mill and CNC Lathe
Mechatronics is an interdisciplinary branch of mechanical engineering, electrical engineering
and software engineering that is concerned with integrating electrical and mechanical
engineering to create hybrid systems. In this way, machines can be automated through the use
of electric motors, servo-mechanisms, and other electrical systems in conjunction with
special software. A common example of a mechatronics system is a CD-ROM drive.
Mechanical systems open and close the drive, spin the CD and move the laser, while an
optical system reads the data on the CD and converts it to bits. Integrated software controls
the process and communicates the contents of the CD to the computer.
Robotics is the application of mechatronics to create robots, which are often used in industry
to perform tasks that are dangerous, unpleasant, or repetitive. These robots may be of any
shape and size, but all are preprogrammed and interact physically with the world. To create a
robot, an engineer typically employs kinematics (to determine the robot's range of motion)
and mechanics (to determine the stresses within the robot).
Robots are used extensively in industrial engineering. They allow businesses to save money
on labor, perform tasks that are either too dangerous or too precise for humans to perform
economically, and to ensure better quality. Many companies employ assembly lines of
robots, and some factories are so robotized that they can run by themselves. Outside the
factory, robots have been employed in bomb disposal, space exploration, and many other
fields. Robots are also sold for various residential applications.
Structural analysis
Main articles: Structural analysis and Failure analysis
Structural analysis is the branch of mechanical engineering (and also civil engineering)
devoted to examining why and how objects fail, and to fixing the objects and improving their
performance.
Structural failures occur in two general modes: static failure, and fatigue failure. Static
structural failure occurs when, upon being loaded (having a force applied) the object being
analyzed either breaks or is deformed plastically, depending on the criterion for failure.
Fatigue failure occurs when an object fails after a number of repeated loading and unloading
cycles. Fatigue failure occurs because of imperfections in the object: a microscopic crack on
the surface of the object, for instance, will grow slightly with each cycle (propagation) until
the crack is large enough to cause ultimate failure.
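The cycle-by-cycle crack growth described above is commonly modelled with the Paris law,
da/dN = C(ΔK)^m, where ΔK = Δσ√(πa) is the stress-intensity range. The law is not named in
the text; it is the standard engineering model for this regime, and the material constants
below are hypothetical, steel-like values chosen only for illustration:

```python
import math

def crack_length_after(a0, delta_sigma, C, m, n_cycles):
    """Integrate the Paris law da/dN = C * (delta_K)**m one cycle at a
    time, with delta_K = delta_sigma * sqrt(pi * a).
    a0: initial crack length (m); delta_sigma: stress range (MPa);
    C, m: material constants (hypothetical values used below)."""
    a = a0
    for _ in range(n_cycles):
        delta_K = delta_sigma * math.sqrt(math.pi * a)
        a += C * delta_K**m  # growth this cycle (propagation)
    return a

# Illustrative run: 1 mm surface crack, 100 MPa stress range.
a = crack_length_after(a0=1e-3, delta_sigma=100.0, C=1e-11, m=3.0, n_cycles=10_000)
print(f"crack length after 10,000 cycles: {a:.4e} m")
```

Because ΔK grows with √a, each cycle's increment is slightly larger than the last, which is
why fatigue cracks accelerate toward ultimate failure rather than growing at a fixed rate.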
Failure is not simply defined as when a part breaks, however; it is defined as when a part
does not operate as intended. Some systems, such as the perforated top sections of some
plastic bags, are designed to break. If these systems do not break, failure analysis might be
employed to determine the cause.
Structural analysis is often used by mechanical engineers after a failure has occurred, or when
designing to prevent failure. Engineers often use online documents and books such as those
published by ASM[21] to aid them in determining the type of failure and possible causes.
Structural analysis may be used in the office when designing parts, in the field to analyze
failed parts, or in laboratories where parts might undergo controlled failure tests.
Thermodynamics and thermo-science

Main article: Thermodynamics


Thermodynamics is an applied science used in several branches of engineering, including
mechanical and chemical engineering. At its simplest, thermodynamics is the study of
energy, its use and transformation through a system. Typically, engineering thermodynamics
is concerned with changing energy from one form to another. As an example, automotive
engines convert chemical energy (enthalpy) from the fuel into heat, and then into mechanical
work that eventually turns the wheels.
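As a minimal example of the energy-conversion bookkeeping involved, the Carnot relation
bounds how much of an engine's heat input can become mechanical work. The temperatures
and heat input below are illustrative, not data for any particular engine:

```python
def carnot_efficiency(t_hot, t_cold):
    """Maximum (Carnot) efficiency of a heat engine operating between
    two reservoirs at absolute temperatures t_hot and t_cold (kelvin)."""
    return 1.0 - t_cold / t_hot

def work_from_heat(q_in, efficiency):
    """Mechanical work extracted from heat input q_in at a given efficiency."""
    return q_in * efficiency

eta = carnot_efficiency(t_hot=900.0, t_cold=300.0)  # assumed temperatures
print(f"Carnot limit: {eta:.1%}; work from 100 kJ of heat: "
      f"{work_from_heat(100.0, eta):.1f} kJ")
```

Real engines fall well short of this bound, but the calculation shows the direction of the
trade-off: the hotter the source (or the colder the sink), the larger the fraction of heat that
can be turned into work.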
Thermodynamics principles are used by mechanical engineers in the fields of heat transfer,
thermofluids, and energy conversion. Mechanical engineers use thermo-science to design
engines and power plants, heating, ventilation, and air-conditioning (HVAC) systems, heat
exchangers, heat sinks, radiators, refrigeration, insulation, and others.
Drafting
Main articles: Technical drawing and CNC

A CAD model of a mechanical double seal


Drafting or technical drawing is the means by which mechanical engineers create instructions
for manufacturing parts. A technical drawing can be a computer model or hand-drawn
schematic showing all the dimensions necessary to manufacture a part, as well as assembly
notes, a list of required materials, and other pertinent information. A U.S. mechanical
engineer or skilled worker who creates technical drawings may be referred to as a drafter or
draftsman. Drafting has historically been a two-dimensional process, but computer-aided
design (CAD) programs now allow the designer to create in three dimensions.
Instructions for manufacturing a part must be fed to the necessary machinery, either
manually, through programmed instructions, or through the use of a computer-aided
manufacturing (CAM) or combined CAD/CAM program. Optionally, an engineer may also
manually manufacture a part using the technical drawings, but this is becoming an increasing
rarity with the advent of computer numerically controlled (CNC) manufacturing. Engineers
now manufacture parts manually mainly in the areas of applied spray coatings, finishes, and
other processes that cannot economically or practically be done by a machine.
Drafting is used in nearly every subdiscipline of mechanical engineering, and by many other
branches of engineering and architecture. Three-dimensional models created using CAD
software are also commonly used in finite element analysis (FEA) and computational fluid
dynamics (CFD).
Frontiers of research
Mechanical engineers are constantly pushing the boundaries of what is physically possible in
order to produce safer, cheaper, and more efficient machines and mechanical systems. Some
technologies at the cutting edge of mechanical engineering are listed below (see also
exploratory engineering).
Micro electro-mechanical systems (MEMS)
Micron-scale mechanical components such as springs, gears, fluidic and heat transfer devices
are fabricated from a variety of substrate materials such as silicon, glass and polymers like
SU8. Examples of MEMS components include the accelerometers used as car airbag
sensors, gyroscopes for precise positioning, and microfluidic devices used in biomedical
applications.
Friction stir welding (FSW)
Main article: Friction stir welding
Friction stir welding, a new type of welding, was developed in 1991 by The Welding
Institute (TWI). This innovative steady-state (non-fusion) welding technique joins materials
previously considered un-weldable, including several aluminum alloys. It may play an
important role in the future construction of airplanes, potentially replacing rivets. Current
uses of this technology include welding the seams of the aluminum main Space Shuttle
external tank, the Orion Crew Vehicle test article, the Boeing Delta II and Delta IV
expendable launch vehicles and the SpaceX Falcon 1 rocket, armor plating for amphibious
assault ships, and welding the wings and fuselage panels of the new Eclipse 500 aircraft
from Eclipse Aviation, among a growing pool of uses.[22][23][24][25]
Composites

Composite cloth consisting of woven carbon fiber.


Main article: Composite material
Composites or composite materials are combinations of materials which provide physical
characteristics different from those of either material separately. Composite material research within
mechanical engineering typically focuses on designing (and, subsequently, finding
applications for) stronger or more rigid materials while attempting to reduce weight,
susceptibility to corrosion, and other undesirable factors. Carbon fiber reinforced composites,
for instance, have been used in such diverse applications as spacecraft and fishing rods.
Mechatronics
Main article: Mechatronics
Mechatronics is the synergistic combination of mechanical engineering, electronic
engineering, and software engineering. This interdisciplinary engineering field studies
automata from an engineering perspective and serves the purpose of controlling advanced
hybrid systems.
Nanotechnology
Main article: Nanotechnology
At the smallest scales, mechanical engineering becomes nanotechnology and molecular
engineering—one speculative goal of which is to create a molecular assembler to build
molecules and materials via mechanosynthesis. For now that goal remains within exploratory
engineering.
Finite element analysis
Main article: Finite element analysis
This field is not new: the basis of finite element analysis (FEA), or the finite element
method (FEM), dates back to 1941, but the evolution of computers has made FEM a viable
option for the analysis of structural problems. Many commercial codes, such as ANSYS,
Nastran and ABAQUS, are widely used in industry for research and the design of components.
Other techniques, such as the finite difference method (FDM) and the finite volume method
(FVM), are employed to solve problems involving heat and mass transfer, fluid flow,
fluid-surface interaction, etc.
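As an illustration of the finite difference method mentioned above, here is a minimal explicit
(FTCS) scheme for one-dimensional heat conduction. The grid size, diffusivity, time step and
boundary conditions are arbitrary demonstration choices, not values from any cited problem:

```python
def heat_1d_step(u, alpha, dx, dt):
    """One explicit forward-time, centred-space (FTCS) step of the 1-D
    heat equation u_t = alpha * u_xx, with the end temperatures held
    fixed. The scheme is stable when alpha * dt / dx**2 <= 0.5."""
    r = alpha * dt / dx**2
    new = u[:]  # end points (boundary conditions) are left unchanged
    for i in range(1, len(u) - 1):
        new[i] = u[i] + r * (u[i - 1] - 2 * u[i] + u[i + 1])
    return new

# A rod that is cold except for a hot midpoint, ends held at 0 (illustrative).
u = [0.0] * 11
u[5] = 100.0
for _ in range(50):
    u = heat_1d_step(u, alpha=1.0, dx=1.0, dt=0.4)  # r = 0.4, stable
print([round(x, 2) for x in u])
```

The initial spike diffuses outward and decays, mimicking conduction; FEM, FDM and FVM
codes used in industry solve the same classes of equations on far larger and more irregular
meshes.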

See also

• Building officials
• Building services engineering
• Electric vehicle conversion
• List of historic mechanical engineering landmarks
• List of mechanical engineering topics
• Related journals
• Mechanical engineering technology
• Fields of engineering
• Simple machine
• List of mechanical engineers
• List of inventors
• Patent
• Retrofit
Associations
• ASHRAE (American Society of Heating, Refrigerating and Air-Conditioning
Engineers)
• ASME (American Society of Mechanical Engineers)
• Pi Tau Sigma (Mechanical Engineering Honor Society)
• SAE (Society of Automotive Engineers)
• SWE (The Society of Women Engineers)
• IMechE (Institution of Mechanical Engineers) (British)
• Chartered Institution of Building Services Engineers (CIBSE) (British)
• Pakistan Engineering Council (PEC) (Pakistan)
