
Phasor

For the acronym Frequency Addition Source of Optical Radiation, see FASOR.

Phasor diagram of the impedance of different circuit elements. The red phasor is the total series impedance, the sum of the other three phasors.

A phasor is a graphical representation of a complex number used to represent an oscillation, in such a way that the phasor obtained as the sum of several phasors represents the magnitude and phase of the oscillation resulting from the superposition of those oscillations in an interference process.
Phasors are used directly in electrical engineering, optics, telecommunications engineering and acoustics. The length of the phasor gives the amplitude, and the angle between it and the x-axis gives the phase angle. Because of the mathematical properties of oscillations, phasors are commonly used in electronics for the elementary analysis of AC circuits. Finally, phasors can be used to describe the motion of an oscillator; the projections of the phasor onto the x and y axes have different physical meanings.
Phasors are used above all to solve visually problems of the type: "several waves of the same frequency but different phases and amplitudes interfere at a point; what is the resulting intensity?". To solve this problem, a phasor is drawn for each of the oscillations at that point and the phasor sum (similar to a vector sum) is applied to them, as in the sketch below. The length of the resulting phasor is the amplitude of the resulting oscillation, and its length can be squared to obtain the intensity. Note that, while the sum of several sinusoidal oscillations is not necessarily another sinusoidal oscillation, the sum of several sinusoidal oscillations of the same frequency is, which allows the resulting phase to be read off as the angle of the resulting phasor.
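As a concrete illustration of this procedure (not part of the original article; the amplitudes and phases are invented), the following Python sketch adds two equal-frequency oscillations by summing their phasors as complex numbers and reads off the resulting amplitude, phase and intensity:

    import cmath
    import math

    # Two oscillations of the same frequency, given as (amplitude, phase in radians).
    waves = [(2.0, 0.0), (1.5, math.pi / 3)]

    # Represent each oscillation as a phasor (a complex number) and add them.
    total = sum(a * cmath.exp(1j * phase) for a, phase in waves)

    amplitude = abs(total)        # length of the resulting phasor
    phase = cmath.phase(total)    # angle of the resulting phasor
    intensity = amplitude ** 2    # intensity is proportional to the squared amplitude

    print(f"amplitude = {amplitude:.3f}, phase = {math.degrees(phase):.1f} deg, "
          f"intensity = {intensity:.3f}")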

Contents
1 Definition
2 Circuit laws
3 Phasor transform
3.1 Inverse phasor transform
4 Phasor arithmetic
5 See also
6 References
7 External links

Definition[edit]

Evolution of two sinusoidal quantities of the same frequency, and of their sum, in the time domain and in phasor form.

A sinusoid or sinusoidal oscillation is defined as a function of the form

    y(t) = A sin(ωt + θ)

where

• y is the quantity that varies (oscillates) with time,
• θ is a constant (in radians) known as the phase angle of the sinusoid,
• A is a constant known as the amplitude of the sinusoid; it is the peak value of the sinusoidal function,
• ω is the angular frequency, given by ω = 2πf, where f is the frequency,
• t is time.

This can be expressed as

    y(t) = A·Im(e^(i(ωt + θ)))

where

• i is the imaginary unit, defined by i² = −1. In electrical engineering and telecommunications "j" is used instead of "i" to avoid the confusion that would arise with the symbol used for electric current.
• Im(z) gives the imaginary part of the complex number z.

Equivalently, by Euler's formula,

    y(t) = A·Im(e^(iθ) e^(iωt))

Y, the phasor representation of this sinusoid, is defined as

    Y = A e^(iθ)

so that

    y(t) = Im(Y e^(iωt))

Thus the phasor Y is the constant complex number that carries the magnitude and phase of the sinusoid. To simplify the notation, phasors are usually written in angle notation:

    Y = A∠θ

In electrical engineering the phase angle is usually given in degrees rather than radians, and the magnitude is usually the RMS value rather than the peak value of the sinusoid (that is, A divided by √2).
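A minimal Python sketch of this definition (illustrative only; the amplitude, phase and frequency are arbitrary) builds the phasor Y = A·e^(jθ) and recovers the time-domain signal as the imaginary part of Y·e^(jωt):

    import cmath
    import math

    A = 10.0                      # peak amplitude
    theta = math.radians(30)      # phase angle in radians
    f = 50.0                      # frequency in Hz
    omega = 2 * math.pi * f       # angular frequency, omega = 2*pi*f

    Y = A * cmath.exp(1j * theta)                     # the phasor: a constant complex number
    print(abs(Y), math.degrees(cmath.phase(Y)))       # magnitude A and angle theta

    # Recover the sinusoid at any instant t from the phasor.
    def y(t):
        return (Y * cmath.exp(1j * omega * t)).imag   # y(t) = Im(Y e^{j omega t}) = A sin(omega t + theta)

    print(y(0.0), A * math.sin(theta))                # both print the same value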

Circuit laws[edit]
Using phasors, the techniques for solving DC circuits can be applied to solve AC circuits. The basic laws are listed below; a numerical sketch follows at the end of this section.

• Ohm's law for resistors: a resistor introduces no time delay and therefore does not change the phase of a signal, so V = IR remains valid.
• Ohm's law for resistors, inductors and capacitors: V = IZ, where Z is the complex impedance.
• An AC circuit has an active power (P), which represents the average power in the circuit, and a reactive power (Q), which describes the power flowing back and forth. The complex power S = P + jQ and the apparent power, which is the magnitude of S, can also be defined. Expressed with phasors, the power law for an AC circuit is S = VI* (where I* is the complex conjugate of I).
• Kirchhoff's laws hold with phasors in complex form.

Given this, the analysis techniques for resistive circuits can be applied with phasors to analyse single-frequency AC circuits containing resistors, inductors and capacitors. AC circuits with more than one frequency, or with other waveforms, can be analysed for voltages and currents by decomposing all the waveforms into their sinusoidal components and then analysing each frequency separately. This method, a direct result of applying the superposition principle, cannot be used to compute power, since power is the product of voltages and currents and therefore does not decompose linearly. It is, however, valid to solve the circuit by superposition and, once the total V and I have been obtained, to compute the power from them. This applies to sinusoidal signals in steady state, that is, after the transients have died out.[1]
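The sketch below applies these laws numerically (an illustrative example; the component values, frequency and source voltage are invented) to a series RLC branch: it builds the complex impedance Z, obtains the current phasor from V = IZ, and computes the complex power S = VI*.

    import cmath
    import math

    f = 50.0                        # supply frequency in Hz (assumed)
    w = 2 * math.pi * f
    R, L, C = 10.0, 0.1, 100e-6     # ohms, henries, farads (assumed values)

    # Complex impedances of the individual elements.
    Z_R = R
    Z_L = 1j * w * L
    Z_C = 1 / (1j * w * C)
    Z = Z_R + Z_L + Z_C             # total series impedance (sum of the element impedances)

    V = 230.0 * cmath.exp(1j * 0)   # RMS voltage phasor, 230 V at 0 degrees (assumed)
    I = V / Z                       # Ohm's law with phasors: V = I Z

    S = V * I.conjugate()           # complex power S = V I* = P + jQ
    print(f"|Z| = {abs(Z):.2f} ohm, |I| = {abs(I):.2f} A")
    print(f"P = {S.real:.1f} W, Q = {S.imag:.1f} var, |S| = {abs(S):.1f} VA")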

Phasor transform[edit]
The phasor transform or phasor representation changes the trigonometric form into the complex form:

    Y = P{A sin(ωt + θ)} = A e^(iθ) = A∠θ

where the notation P{X} is read as "phasor transform of X".
The phasor transform carries the sinusoidal function from the time domain to the domain of complex numbers, also called the frequency domain.
Inverse phasor transform[edit]
The inverse phasor transform P⁻¹ returns from the phasor domain to the time domain:

    P⁻¹{A∠θ} = A sin(ωt + θ)

Phasor arithmetic[edit]
Digital electronics, digital technology or digital (electronic) circuits are electronics that
operate on digital signals. In contrast, analog circuits manipulate analog signals whose
performance is more subject to manufacturing tolerance, signal attenuation and noise.
Digital techniques are helpful because it is much easier to get an electronic device to switch
into one of a number of known states than to accurately reproduce a continuous range of
values.
Digital electronic circuits are usually made from large assemblies of logic gates (often printed on integrated circuits), simple electronic representations of Boolean logic functions.[1]

Contents
1 History
1.1 Digital revolution and digital age
2 Properties
3 Construction
4 Design
4.1 Representation
4.2 Synchronous systems
4.3 Asynchronous systems
4.4 Register transfer systems
4.5 Computer design
4.6 Computer architecture
4.7 Design issues in digital circuits
4.8 Automated design tools
4.9 Design for testability
4.10 Trade-offs
4.10.1 Cost
4.10.2 Reliability
4.10.3 Fanout
4.10.4 Speed
5 Logic families
6 Recent developments
7 See also
8 Notes
9 References
10 Further reading
11 External links

History[edit]
The binary number system was refined by Gottfried Wilhelm Leibniz (published in 1705)
and he also established that by using the binary system, the principles of arithmetic and
logic could be joined. Digital logic as we know it was the brain-child of George Boole in the
mid 19th century. In an 1886 letter, Charles Sanders Peirce described how logical
operations could be carried out by electrical switching circuits. [2] Eventually, vacuum
tubes replaced relays for logic operations. Lee De Forest's modification, in 1907, of
the Fleming valve can be used as an AND gate. Ludwig Wittgenstein introduced a version
of the 16-row truth table as proposition 5.101 of Tractatus Logico-
Philosophicus (1921). Walther Bothe, inventor of the coincidence circuit, shared the
1954 Nobel Prize in physics, for the first modern electronic AND gate in 1924.
Mechanical analog computers started appearing in the first century and were later used in
the medieval era for astronomical calculations. In World War II, mechanical analog
computers were used for specialized military applications such as calculating torpedo
aiming. During this time the first electronic digital computers were developed. Originally
they were the size of a large room, consuming as much power as several hundred
modern personal computers (PCs).[3]
The Z3 was an electromechanical computer designed by Konrad Zuse. Finished in 1941, it
was the world's first working programmable, fully automatic digital computer. [4] Its operation
was facilitated by the invention of the vacuum tube in 1904 by John Ambrose Fleming.
At the same time that digital calculation replaced analog, purely electronic circuit elements
soon replaced their mechanical and electromechanical equivalents. John
Bardeen and Walter Brattain invented the point-contact transistor at Bell Labs in 1947,
followed by William Shockley inventing the bipolar junction transistor at Bell Labs in 1948.[5]
[6]
At the University of Manchester, a team under the leadership of Tom Kilburn designed and
built a machine using the newly developed transistors instead of vacuum tubes.[7] Their
first transistorised computer, and the first in the world, was operational by 1953, and a
second version was completed there in April 1955. From 1955 onwards, transistors
replaced vacuum tubes in computer designs, giving rise to the "second generation" of
computers. Compared to vacuum tubes, transistors were smaller, more reliable, had
indefinite lifespans, and required less power, thereby giving off less heat and allowing
much denser concentrations of circuits, up to tens of thousands in a relatively compact
space.
While working at Texas Instruments in July 1958, Jack Kilby recorded his initial ideas
concerning the integrated circuit (IC), then successfully demonstrated the first working
integrated circuit on 12 September 1958.[8] Kilby's chip was made of germanium. The following
year, Robert Noyce at Fairchild Semiconductor invented the silicon integrated circuit. The
basis for Noyce's silicon IC was the planar process, developed in early 1959 by Jean
Hoerni, who was in turn building on Mohamed Atalla's silicon surface passivation method
developed in 1957.[9] This new technique, the integrated circuit, allowed for quick, low-cost
fabrication of complex circuits by having a set of electronic circuits on one small plate
("chip") of semiconductor material, normally silicon.

Digital revolution and digital age[edit]


Further information: Digital Revolution and Digital Age
The metal–oxide–semiconductor field-effect transistor (MOSFET), also known as the MOS
transistor, was invented by Mohamed Atalla and Dawon Kahng at Bell Labs in 1959.[10][11][12]
The MOSFET's advantages include high scalability,[13] affordability,[14] low power
consumption, and high transistor density.[15] Its rapid on–off electronic switching speed also
makes it ideal for generating pulse trains,[16] the basis for electronic digital signals,[17][18] in
contrast to BJTs, which more slowly generate analog signals resembling sine waves.[16]
Along with MOS large-scale integration (LSI), these factors make the MOSFET an
important switching device for digital circuits.[19] The MOSFET revolutionized the electronics
industry,[20][21] and is the most common semiconductor device.[11][22] MOSFETs have been the
fundamental building blocks of digital electronics during the Digital Revolution of the late
20th to early 21st centuries,[12][23][24] paving the way for the Digital Age of the early 21st
century.[12]
In the early days of integrated circuits, each chip was limited to only a few transistors, and
the low degree of integration meant the design process was relatively simple.
Manufacturing yields were also quite low by today's standards. The wide adoption of the
MOSFET transistor by the early 1970s led to the first large-scale integration (LSI) chips
with more than 10,000 transistors on a single chip. [25] Following the wide adoption of CMOS,
a type of MOSFET logic, by the 1980s, millions and then billions of MOSFETs could be
placed on one chip as the technology progressed, [26] and good designs required thorough
planning, giving rise to new design methods. As of 2013, billions of MOSFETs are
manufactured every day.[11]
The wireless revolution, the introduction and proliferation of wireless networks, began in the
1990s and was enabled by the wide adoption of MOSFET-based RF power
amplifiers (power MOSFET and LDMOS) and RF circuits (RF CMOS).[27][28][29] Wireless
networks allowed for public digital transmission without the need for cables, leading
to digital television (digital TV), GPS, satellite radio, wireless Internet and mobile
phones through the 1990s–2000s.
Discrete cosine transform (DCT) coding, a data compression technique first proposed
by Nasir Ahmed in 1972,[30] enabled practical digital media transmission,[31][32][33] with image
compression formats such as JPEG (1992), video coding formats such as H.26x (1988
onwards) and MPEG (1993 onwards),[34] audio coding standards such as Dolby
Digital (1991)[35][36] and MP3 (1994),[34] and digital TV standards such as video-on-
demand (VOD)[31] and high-definition television (HDTV).[37] Internet video was popularized
by YouTube, an online video platform founded by Chad Hurley, Jawed Karim and Steve
Chen in 2005, which enabled the video streaming of MPEG-4 AVC (H.264) user-generated
content from anywhere on the World Wide Web.[38]

Properties[edit]
An advantage of digital circuits when compared to analog circuits is that signals
represented digitally can be transmitted without degradation caused by noise.[39] For
example, a continuous audio signal transmitted as a sequence of 1s and 0s, can be
reconstructed without error, provided the noise picked up in transmission is not enough to
prevent identification of the 1s and 0s.
In a digital system, a more precise representation of a signal can be obtained by using
more binary digits to represent it. While this requires more digital circuits to process the
signals, each digit is handled by the same kind of hardware, resulting in an
easily scalable system. In an analog system, additional resolution requires fundamental
improvements in the linearity and noise characteristics of each step of the signal chain.
With computer-controlled digital systems, new functions can be added through software
revision, with no hardware changes needed. Often this can be done outside of the factory by
updating the product's software, so design errors can be corrected even after the product is
in a customer's hands.
Information storage can be easier in digital systems than in analog ones. The noise
immunity of digital systems permits data to be stored and retrieved without degradation. In
an analog system, noise from aging and wear degrades the information stored. In a digital
system, as long as the total noise is below a certain level, the information can be recovered
perfectly. Even when more significant noise is present, the use of redundancy permits the
recovery of the original data provided too many errors do not occur.
In some cases, digital circuits use more energy than analog circuits to accomplish the same
tasks, thus producing more heat, which adds to the complexity of the circuits, for example
through the inclusion of heat sinks. In portable or battery-powered systems this can limit the use of digital
systems. For example, battery-powered cellular telephones often use a low-power analog
front-end to amplify and tune in the radio signals from the base station. However, a base
station has grid power and can use power-hungry, but very flexible software radios. Such
base stations can be easily reprogrammed to process the signals used in new cellular
standards.
Many useful digital systems must translate from continuous analog signals to discrete
digital signals. This causes quantization errors. Quantization error can be reduced if the
system stores enough digital data to represent the signal to the desired degree of fidelity.
The Nyquist–Shannon sampling theorem provides an important guideline as to how much
digital data is needed to accurately portray a given analog signal.
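As a rough illustration of quantization error (not from the article; the sample rate, tone and bit depths are arbitrary), the following sketch samples a sine wave and rounds each sample to a given number of bits; using more bits shrinks the worst-case error:

    import math

    def quantize(x, bits):
        """Round x (assumed in [-1, 1]) onto a signed grid with the given bit depth."""
        levels = 2 ** (bits - 1) - 1
        return round(x * levels) / levels

    fs = 8000                       # sample rate in Hz (assumed)
    samples = [math.sin(2 * math.pi * 440 * n / fs) for n in range(fs)]

    for bits in (4, 8, 16):
        err = max(abs(s - quantize(s, bits)) for s in samples)
        print(f"{bits:2d} bits -> worst-case quantization error ~ {err:.6f}")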
In some systems, if a single piece of digital data is lost or misinterpreted, the meaning of
large blocks of related data can completely change. For example, a single-bit error in audio
data stored directly as linear pulse code modulation causes, at worst, a single click.
Instead, many people use audio compression to save storage space and download time,
even though a single bit error may cause a larger disruption.
Because of the cliff effect, it can be difficult for users to tell if a particular system is right on
the edge of failure, or if it can tolerate much more noise before failing. Digital fragility can
be reduced by designing a digital system for robustness. For example, a parity bit or
other error management method can be inserted into the signal path. These schemes help
the system detect errors, and then either correct the errors, or request retransmission of the
data.
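The following sketch (illustrative only) shows the simplest of these schemes, a single even-parity bit appended to each data word: it detects any single-bit error, although it cannot correct it or detect an even number of flipped bits.

    def add_parity(bits):
        """Append an even-parity bit so the total number of 1s is even."""
        return bits + [sum(bits) % 2]

    def check_parity(word):
        """Return True if the received word still has even parity."""
        return sum(word) % 2 == 0

    data = [1, 0, 1, 1, 0, 1, 0]
    sent = add_parity(data)
    print(check_parity(sent))            # True: no error

    corrupted = sent.copy()
    corrupted[2] ^= 1                    # flip a single bit in transmission
    print(check_parity(corrupted))       # False: the error is detected, retransmission can be requested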

Construction[edit]
A binary clock, hand-wired on breadboards

A digital circuit is typically constructed from small electronic circuits called logic gates that
can be used to create combinational logic. Each logic gate is designed to perform a
function of boolean logic when acting on logic signals. A logic gate is generally created
from one or more electrically controlled switches, usually transistors, although thermionic
valves have seen historic use. The output of a logic gate can, in turn, control or feed into
more logic gates.
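A small sketch of this idea (illustrative, not from the article): each gate is a Boolean function of its inputs, and gates feed other gates. Here a NAND gate, easy to build from a pair of switches, is used to construct NOT, AND and OR.

    def nand(a, b):
        return 0 if (a and b) else 1

    # Other gates built only from NAND, as in real gate-level designs.
    def not_(a):
        return nand(a, a)

    def and_(a, b):
        return not_(nand(a, b))

    def or_(a, b):
        return nand(not_(a), not_(b))

    # Print the truth tables of the derived gates.
    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "-> AND:", and_(a, b), " OR:", or_(a, b))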
Another form of digital circuit is constructed from lookup tables (many sold as
"programmable logic devices", though other kinds of PLDs exist). Lookup tables can
perform the same functions as machines based on logic gates, but can be easily
reprogrammed without changing the wiring. This means that a designer can often repair
design errors without changing the arrangement of wires. Therefore, in small volume
products, programmable logic devices are often the preferred solution. They are usually
designed by engineers using electronic design automation software.
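A lookup table can be sketched as nothing more than an indexed list of output bits, one entry per input combination (an illustrative model; a real PLD stores such a table in configuration memory). Reprogramming the function means rewriting the table, not rewiring gates:

    # 2-input lookup table: index = (a << 1) | b, value = output bit.
    XOR_LUT = [0, 1, 1, 0]          # contents implementing XOR
    AND_LUT = [0, 0, 0, 1]          # same "hardware", different contents -> AND

    def lut(table, a, b):
        return table[(a << 1) | b]

    print([lut(XOR_LUT, a, b) for a in (0, 1) for b in (0, 1)])   # [0, 1, 1, 0]
    print([lut(AND_LUT, a, b) for a in (0, 1) for b in (0, 1)])   # [0, 0, 0, 1]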
Integrated circuits consist of multiple transistors on one silicon chip, and are the least
expensive way to make large numbers of interconnected logic gates. Integrated circuits are
usually interconnected on a printed circuit board, which holds the electrical components
and connects them together with copper traces.

Design[edit]
Engineers use many methods to minimize logic redundancy in order to reduce the circuit
complexity. Reduced complexity reduces component count and potential errors and
therefore typically reduces cost. Logic redundancy can be removed by several well-known
techniques, such as binary decision diagrams, Boolean algebra, Karnaugh maps,
the Quine–McCluskey algorithm, and the heuristic computer method. These operations are
typically performed within a computer-aided design system.
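As a toy example of removing logic redundancy (illustrative only; real tools use the algorithms named above), the sketch below checks by exhaustive enumeration that A·B + A·(not B) reduces to just A, so one signal suffices where three gates were written:

    from itertools import product

    def original(a, b):
        return (a and b) or (a and not b)   # A*B + A*~B

    def simplified(a, b):
        return a                            # the redundancy-free form

    # Verify equivalence over the whole truth table.
    assert all(bool(original(a, b)) == bool(simplified(a, b))
               for a, b in product((False, True), repeat=2))
    print("equivalent: the two extra gates are redundant")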
Embedded systems with microcontrollers and programmable logic controllers are often
used to implement digital logic for complex systems that don't require optimal performance.
These systems are usually programmed by software engineers or by electricians,
using ladder logic.

Representation[edit]
Representations are crucial to an engineer's design of digital circuits. To choose
representations, engineers consider types of digital systems.
The classical way to represent a digital circuit is with an equivalent set of logic gates. Each
logic symbol is represented by a different shape. The actual set of shapes was introduced
in 1984 under IEEE/ANSI standard 91-1984 and is now in common use by integrated circuit
manufacturers.[40] Another way is to construct an equivalent system of electronic switches
(usually transistors). This can be represented as a truth table.
Most digital systems divide into combinational and sequential systems. A combinational
system always presents the same output when given the same inputs. A sequential system
is a combinational system with some of the outputs fed back as inputs. This makes the
digital machine perform a sequence of operations. The simplest sequential system is
probably a flip flop, a mechanism that represents a binary digit or "bit". Sequential systems
are often designed as state machines. In this way, engineers can design a system's gross
behavior, and even test it in a simulation, without considering all the details of the logic
functions.
Sequential systems divide into two further subcategories. "Synchronous" sequential
systems change state all at once, when a "clock" signal changes state. "Asynchronous"
sequential systems propagate changes whenever inputs change. Synchronous sequential
systems are made of well-characterized asynchronous circuits such as flip-flops, that
change only when the clock changes, and which have carefully designed timing margins.
For logic simulation, digital circuit representations have digital file formats that can be
processed by computer programs.

Synchronous systems[edit]

A 4-bit ring counter using D-type flip flops is an example of synchronous logic. Each device is
connected to the clock signal, and they update together.

Main article: synchronous logic


The usual way to implement a synchronous sequential state machine is to divide it into a
piece of combinational logic and a set of flip flops called a "state register." Each time a
clock signal ticks, the state register captures the feedback generated from the previous
state of the combinational logic, and feeds it back as an unchanging input to the
combinational part of the state machine. The fastest rate of the clock is set by the most
time-consuming logic calculation in the combinational logic.
The state register is just a representation of a binary number. If the states in the state
machine are numbered (easy to arrange), the logic function is some combinational logic
that produces the number of the next state.
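A minimal software model of this structure (illustrative; the state encoding is invented) separates the design into a state register and a purely combinational next-state function, and advances the register once per clock tick. This example simply counts 0-1-2-3 and wraps around:

    # Combinational logic: from the current numbered state, compute the next one.
    def next_state(state):
        return (state + 1) % 4               # a 2-bit counter

    state_register = 0                        # the flip flops holding the state

    for tick in range(8):                     # each iteration models one clock edge
        print(f"clock {tick}: state = {state_register:02b}")
        state_register = next_state(state_register)   # the register captures the new value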

Asynchronous systems[edit]
As of 2014, most digital logic is synchronous because it is easier to create and verify a
synchronous design. However, asynchronous logic has the advantage of its speed not
being constrained by an arbitrary clock; instead, it runs at the maximum speed of its logic
gates.[a] Building an asynchronous system using faster parts makes the circuit faster.
Nevertheless, most systems need circuits that allow external unsynchronized signals to
enter synchronous logic circuits. These are inherently asynchronous in their design and
must be analyzed as such. Examples of widely used asynchronous circuits include
synchronizer flip-flops, switch debouncers and arbiters.
Asynchronous logic components can be hard to design because all possible states, in all
possible timings, must be considered. The usual method is to construct a table of the
minimum and maximum time that each such state can exist, and then adjust the circuit to
minimize the number of such states. Then the designer must force the circuit to periodically
wait for all of its parts to enter a compatible state (this is called "self-resynchronization").
Without such careful design, it is easy to accidentally produce asynchronous logic that is
"unstable," that is, real electronics will have unpredictable results because of the
cumulative delays caused by small variations in the values of the electronic components.

Register transfer systems[edit]


Example of a simple circuit with a toggling output. The inverter forms the combinational logic in this
circuit, and the register holds the state.

Many digital systems are data flow machines. These are usually designed using
synchronous register transfer logic, using hardware description languages such
as VHDL or Verilog.
In register transfer logic, binary numbers are stored in groups of flip flops called registers.
The outputs of each register are a bundle of wires called a "bus" that carries that number to
other calculations. A calculation is simply a piece of combinational logic. Each calculation
also has an output bus, and these may be connected to the inputs of several registers.
Sometimes a register will have a multiplexer on its input, so that it can store a number from
any one of several buses. Alternatively, the outputs of several items may be connected to a
bus through buffers that can turn off the output of all of the devices except one. A
sequential state machine controls when each register accepts new data from its input.
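The toggling circuit in the figure above can be sketched in the same register-transfer style (a software analogy, not a hardware description): a one-bit register, an inverter as the combinational "calculation", and a clocked update that writes the calculation's output bus back into the register.

    register = 0                         # one-bit register holding the state

    def calculation(bus_in):
        return bus_in ^ 1                # the inverter: combinational logic on the bus

    for clock in range(6):
        bus_out = calculation(register)  # output bus of the combinational block
        register = bus_out               # on the clock edge, the register accepts the bus
        print(f"cycle {clock}: register = {register}")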
Asynchronous register-transfer systems (such as computers) have a general solution. In
the 1980s, some researchers discovered that almost all synchronous register-transfer
machines could be converted to asynchronous designs by using first-in-first-out
synchronization logic. In this scheme, the digital machine is characterized as a set of data
flows. In each step of the flow, an asynchronous "synchronization circuit" determines when
the outputs of that step are valid, and presents a signal that says, "grab the data" to the
stages that use that stage's inputs. It turns out that just a few relatively simple
synchronization circuits are needed.

Computer design[edit]

Intel 80486DX2 microprocessor

The most general-purpose register-transfer logic machine is a computer. This is basically
an automatic binary abacus. The control unit of a computer is usually designed as
a microprogram run by a microsequencer. A microprogram is much like a player-piano roll.
Each table entry or "word" of the microprogram commands the state of every bit that
controls the computer. The sequencer then counts, and the count addresses the memory
or combinational logic machine that contains the microprogram. The bits from the
microprogram control the arithmetic logic unit, memory and other parts of the computer,
including the microsequencer itself. A "specialized computer" is usually a conventional
computer with special-purpose control logic or microprogram.
In this way, the complex task of designing the controls of a computer is reduced to a
simpler task of programming a collection of much simpler logic machines.
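A heavily simplified sketch of this arrangement (entirely illustrative; the control-word layout is made up): the microprogram is a table of control words, and the microsequencer is just a counter that steps through the table, with each word setting the control bits for that cycle.

    # Each "word" of the microprogram sets the control bits for one step.
    # Invented format for this sketch: (alu_op, write_memory, jump, jump_target)
    microprogram = [
        ("load", 0, 0, 0),
        ("add",  0, 0, 0),
        ("nop",  1, 0, 0),     # write the result to memory
        ("nop",  0, 1, 0),     # jump back to the start of the loop
    ]

    pc = 0                      # the microsequencer's counter
    for _ in range(8):
        alu_op, write_mem, jump, target = microprogram[pc]
        print(f"step: alu={alu_op:<4} write_mem={write_mem}")
        pc = target if jump else (pc + 1) % len(microprogram)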
Almost all computers are synchronous. However, true asynchronous computers have also
been designed. One example is the Aspida DLX core.[42] Another was offered by ARM
Holdings. Speed advantages have not materialized, because modern computer designs
already run at the speed of their slowest component, usually memory. These do use
somewhat less power because a clock distribution network is not needed. An unexpected
advantage is that asynchronous computers do not produce spectrally-pure radio noise, so
they 
