
Art Bell

Born: Arthur William Bell, III, June 17, 1945 (age 71), Camp Lejeune, North Carolina[1]
Occupation: Broadcaster, author
Home town: Pahrump, Nevada
Spouse(s): Sachiko Toguchi Bell Pontius (1965–1968; divorced); Vickie L. Baker Bell (1981–1991; divorced); Ramona Lee Hayes Bell (1991–2006; died); Airyn Ruiz Bell (2006–present)[2]
Parent(s): Arthur William Bell, Jr. (d. 2000); Jane Lee Gumaer Bell (d. December 23, 2008)
Call sign: W6OBB / 4F1AB

Arthur William "Art" Bell, III (born June 17, 1945) is an American broadcaster and author known as
one of the founders and the original host of the paranormal-themed radio program Coast to Coast
AM.[3] He also created and formerly hosted its companion show Dreamland.
Semi-retired from Coast to Coast AM since 2003, he hosted the show many weekends over the
following four years. He announced his retirement from weekend hosting on July 1, 2007, but
occasionally served as a guest host through 2010. He attributed his retirement to a desire to spend
time with his new wife and their daughter, born May 30, 2007. He added that unlike his previous
"retirements," this one was permanent, while leaving open the option to return.
Classic Bell-hosted episodes of Coast to Coast AM can be heard in some markets on Saturday
nights under the name Somewhere in Time. He started a new nightly show, Art Bell's Dark Matter,
on Sirius XM Radio; it began on September 16, 2013,[4] and ended six weeks later, on November
4, 2013.[5][6]
He returned to radio on July 20, 2015 with a new show, Midnight in the Desert, available online, via
TuneIn, and on some terrestrial radio stations. He retired yet again on December 11, 2015, citing
security concerns at his home: he and his family had been subject to repeated intrusions on his
property in Pahrump. In fear for his family's safety, he opted to leave the air, and ostensibly public
life, as he believed the intruder or intruders wanted him off the air.
Bell founded and was the original owner of Pahrump, Nevada-based radio station KNYE 95.1 FM.
His broadcast studio and transmitter were located near his home in Pahrump while he hosted Coast
to Coast AM except from June to December 2006, when he lived in the Philippines. He and his
family returned to the Philippines in March 2009, after having significant difficulties obtaining a U.S.
visa for his wife, Airyn.[7]

Early life
Art Bell III was born in Jacksonville, North Carolina to Arthur Bell, Jr., a United States Marine
Corps Captain, and Jane Gumaer Bell, a Marine drill instructor. Arthur Bell, Jr. died in 2000, and
Jane Bell died December 23, 2008.
Bell has always been interested in radio, and at the age of 13 became a licensed amateur radio
operator. Bell now holds an Amateur Extra Class license, the highest U.S. Federal
Communications Commission license class. His call sign is W6OBB.
Bell served in the U.S. Air Force as a medic during the Vietnam War and in his free time operated
a pirate radio station at Amarillo Air Force Base. He would make a point of playing anti-war music
(like "Eve of Destruction" and "Fortunate Son") that was not aired on the American Forces Network.[8]
After leaving military service he stayed in Asia, living on the Japanese island of Okinawa where he
worked as a disc jockey for KSBK, the only non-military English-language station in Japan. While
there, he set a Guinness World Record by staying on the air for 116 hours and 15 minutes. The
money raised there allowed Bell to charter a DC-8, fly to Vietnam, and rescue 130
Vietnamese orphans stranded in Saigon at the war's end. They were eventually brought to the
United States and adopted by American families.[9]
Bell returned to the United States and studied engineering at the University of Maryland, College
Park. He dropped out and returned to radio as a board operator and chief engineer, with occasional
opportunities to be on the air. For several years he worked both behind and in front of the
microphone. After a period of working in cable television, in 1986 the 50,000-watt KDWN in Las
Vegas, Nevada offered Bell a five-hour time slot in the

Artificial neural network


From Wikipedia, the free encyclopedia

"Neural network" redirects here. For networks of living neurons, see Biological neural network. For
the journal, see Neural Networks (journal). For the evolutionary concept, see Neutral network
(evolution).

"Neural computation" redirects here. For the journal, see Neural Computation (journal).


An artificial neural network is an interconnected group of nodes, akin to the vast network of neurons in a brain.
Here, each circular node represents an artificial neuron and an arrow represents a connection from the output
of one neuron to the input of another.

Neural networks (also referred to as connectionist systems) are a computational approach based
on a large collection of neural units that loosely model the way a biological brain solves
problems with large clusters of biological neurons connected by axons. Each neural unit is
connected with many others, and links can be reinforcing or inhibitory in their effect on the activation
state of connected neural units. Each individual neural unit may have a summation function which
combines the values of all its inputs. There may be a threshold or limiting function on each
connection and on the unit itself, which the combined signal must surpass before it can propagate to
other neurons. These systems are self-learning and trained rather than explicitly programmed, and
excel in areas where the solution or feature detection is difficult to express in a traditional computer
program.
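The summation-and-threshold behaviour of a single unit, as described above, can be sketched in a few lines of Python. The function name, the logistic squashing function, and the threshold value are illustrative choices, not part of any particular library:

```python
import math

def neural_unit(inputs, weights, bias=0.0, threshold=0.5):
    """One artificial neural unit: a summation function combines the
    inputs, a limiting (logistic) function squashes the result, and the
    unit only propagates a signal if the threshold is surpassed."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias  # summation
    activation = 1.0 / (1.0 + math.exp(-total))                 # limiting
    return activation if activation > threshold else 0.0        # threshold

# A strongly excited unit fires (output close to 1);
# an inhibited unit stays silent (output 0.0).
excited = neural_unit([1.0, 1.0], [2.0, 2.0])
silent = neural_unit([1.0, 1.0], [-2.0, -2.0])
```

Reinforcing links correspond to positive weights and inhibitory links to negative ones, matching the description above.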
Neural networks typically consist of multiple layers or a cube design, and the signal path traverses
from input to output. Backpropagation, used in training where the correct result is known, propagates
the error at the output backward through the network to adjust the weights of the earlier ("front")
neural units. More modern networks are freer flowing in terms of stimulation and inhibition, with
connections interacting in a more chaotic and complex fashion. Dynamic neural networks are the
most advanced, in that they can, based on rules, dynamically form new connections and even new
neural units while disabling others.
The goal of the neural network is to solve problems in the same way that the human brain would,
although some neural networks are much more abstract. Modern neural network projects typically
work with a few thousand to a few million neural units and millions of connections, still several
orders of magnitude less complex than the human brain and closer to the computing power of a
worm.

New brain research often stimulates new architectures in neural networks. One new approach uses
connections which span much further and link processing layers, rather than always being localized
to adjacent neurons. Other research explores the different types of signal that axons propagate over
time, which are more complex than a simple on or off.
Neural networks are based on real numbers, with the value of the core and of the axon typically
being a number between 0.0 and 1.0.
An interesting facet of these systems is that they are unpredictable in their success with self-
learning: after training, some become great problem solvers while others don't perform as well.
Training them typically requires several thousand cycles of interaction.
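Those training cycles can be illustrated with the classic perceptron learning rule on a toy problem. The setup below, a single unit learning the logical AND of its inputs, is a hypothetical minimal example, not drawn from the article:

```python
import random

def train_unit(examples, epochs=2000, lr=0.1):
    """Repeatedly present labelled examples and nudge the weights toward
    the known correct result: the training regime described above."""
    random.seed(0)
    w, b = [random.uniform(-1, 1), random.uniform(-1, 1)], 0.0
    for _ in range(epochs):                       # thousands of cycles
        for (x1, x2), target in examples:
            out = 1.0 if w[0] * x1 + w[1] * x2 + b > 0 else 0.0
            err = target - out                    # error vs. known answer
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Teach a single unit the logical AND of its two inputs
examples = [((0, 0), 0.0), ((0, 1), 0.0), ((1, 0), 0.0), ((1, 1), 1.0)]
w, b = train_unit(examples)
```

After the loop finishes, the learned weights classify all four input patterns correctly; which random initializations converge quickly is exactly the kind of unpredictability the paragraph above mentions.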
Like other machine learning methods (systems that learn from data), neural networks have been
used to solve a wide variety of tasks, like computer vision and speech recognition, that are hard to
solve using ordinary rule-based programming.
Historically, the use of neural network models marked a directional shift in the late eighties from
high-level (symbolic) artificial intelligence, characterized by expert systems with knowledge embodied
in if-then rules, to low-level (sub-symbolic) machine learning, characterized by knowledge embodied
in the parameters of a dynamical system.

History
Warren McCulloch and Walter Pitts[1] (1943) created a computational model for neural networks
based on mathematics and algorithms called threshold logic. This model paved the way for neural
network research to split into two distinct approaches. One approach focused on biological
processes in the brain and the other focused on the application of neural networks to artificial
intelligence.
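A threshold logic unit of the McCulloch–Pitts kind is simple enough to sketch directly. The weights and thresholds below are illustrative choices showing how such units reproduce basic logic gates:

```python
def threshold_unit(inputs, weights, threshold):
    """McCulloch-Pitts style threshold logic: the unit fires (outputs 1)
    exactly when the weighted sum of its binary inputs reaches the
    threshold; otherwise it stays silent (outputs 0)."""
    return 1 if sum(x * w for x, w in zip(inputs, weights)) >= threshold else 0

# Suitable weights and thresholds turn the unit into familiar logic gates
AND = lambda a, b: threshold_unit([a, b], [1, 1], 2)
OR  = lambda a, b: threshold_unit([a, b], [1, 1], 1)
NOT = lambda a:    threshold_unit([a], [-1], 0)
```

Because networks of such gates can compute arbitrary Boolean functions, this model supported both of the research directions mentioned above.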

Hebbian learning
In the late 1940s psychologist Donald Hebb[2] created a hypothesis of learning based on the
mechanism of neural plasticity that is now known as Hebbian learning. Hebbian learning is
considered to be a 'typical' unsupervised learning rule and its later va
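Hebb's rule, often summarized as "cells that fire together wire together", strengthens a connection in proportion to the correlated activity of the units it joins. A minimal sketch, in which the learning rate and the function name are illustrative:

```python
def hebbian_update(weights, pre, post, lr=0.1):
    """Hebbian learning: each weight grows in proportion to the product of
    its pre-synaptic input and the post-synaptic output. It is unsupervised:
    no target value or error signal is involved."""
    return [w + lr * x * post for w, x in zip(weights, pre)]

# A connection whose input was active while the unit fired is strengthened;
# connections with inactive inputs are left unchanged.
w = hebbian_update([0.0, 0.0], pre=[1.0, 0.0], post=1.0)
```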

Artificial intelligence
From Wikipedia, the free encyclopedia

"AI" redirects here. For other uses, see AI and Artificial intelligence (disambiguation).

Artificial intelligence (AI) is intelligence exhibited by machines. In computer science, an ideal
"intelligent" machine is a flexible rational agent that perceives its environment and takes actions that
maximize its chance of success at some goal.[1] Colloquially, the term "artificial intelligence" is applied
when a machine mimics "cognitive" functions that humans associate with other human minds, such
as "learning" and "problem solving".[2] As machines become increasingly capable, mental faculties
once thought to require intelligence are removed from the definition. For example, optical character
recognition is no longer perceived as an exemplar of "artificial intelligence", having become a routine
technology.[3] Capabilities currently classified as AI include successfully understanding human
speech,[4] competing at a high level in strategic game systems (such as chess and Go[5]), self-driving
cars, and interpreting complex data. Some people also consider AI a danger to humanity if it
progresses unabated.[6] AI research is divided into subfields[7] that focus on specific problems, on
specific approaches, on the use of a particular tool, or on satisfying particular applications.
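The rational-agent definition above reduces to a one-line decision rule: choose the action with the highest expected payoff given what has been perceived. The thermostat scenario and every name below are hypothetical illustrations, not standard AI library code:

```python
def rational_act(percepts, actions, expected_utility):
    """A rational agent per the definition above: given what it has
    perceived so far, choose the action that maximizes its expected
    chance of success at its goal."""
    return max(actions, key=lambda a: expected_utility(percepts, a))

# Toy environment: a thermostat whose goal is a room temperature of 21 C
TARGET = 21.0
EFFECT = {"heat": +1.0, "cool": -1.0, "idle": 0.0}

def utility(percepts, action):
    # Higher utility means the predicted temperature is closer to the goal
    predicted = percepts[-1] + EFFECT[action]
    return -abs(predicted - TARGET)

choice = rational_act([19.0], list(EFFECT), utility)  # a cold room
```

Most of the subfields listed below can be read as filling in one piece of this loop: perception supplies the percepts, knowledge and reasoning shape the utility estimate, and planning extends the choice over sequences of actions.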
The central problems (or goals) of AI research
include reasoning, knowledge, planning, learning, natural language
processing (communication), perception, and the ability to move and manipulate objects.[8] General
intelligence is among the field's long-term goals.[9] Approaches include statistical
methods, computational intelligence, soft computing (e.g. machine learning), and traditional symbolic
AI. Many tools are used in AI, including versions of search and mathematical
optimization, logic, and methods based on probability and economics. The AI field draws upon computer
science, mathematics, psychology, linguistics, philosophy, neuroscience and artificial psychology.
The field was founded on the claim that human intelligence "can be so precisely described that a
machine can be made to simulate it".[10] This raises philosophical arguments about the nature of
the mind and the ethics of creating artificial beings endowed with human-like intelligence, issues
which have been explored by myth, fiction and philosophy since antiquity.[11] Attempts to create
artificial intelligence have experienced many setbacks, including the ALPAC report of 1966, the
abandonment of perceptrons in 1970, the Lighthill Report of 1973, the second AI winter of
1987–1993, and the collapse of the Lisp machine market in 1987. In the twenty-first century AI
techniques became an essential part of the technology industry, helping to solve many challenging
problems in computer science.[12]

History
Main articles: History of artificial intelligence and Timeline of artificial intelligence
While thought-capable artificial beings appeared as storytelling devices in antiquity,[13] the idea of
actually trying to build a machine to perform useful reasoning may have begun with Ramon Llull (c.
1300 CE). With his calculus ratiocinator, Gottfried Leibniz extended the concept of the calculating
machine (Wilhelm Schickard engineered the first one around 1623), intending to perform operations
on concepts rather than numbers.[14] Since the 19th century, artificial beings have been common in
fiction, as in Mary Shelley's Frankenstein or Karel Čapek's R.U.R. (Rossum's Universal Robots).[15]
The study of mechanical or "formal" reasoning began with philosophers and mathematicians in
antiquity. In the 19th century, George Boole refined those ideas into propositional logic and Gottlob
Frege developed a notational system for mechanical reasoning (a "predicate calculus").[16] Around
the 1940s, Alan Turing's theory of computation suggested that a machine, by shuffling symbols as
simple as "0" and "1", could simulate any conceivable act of mathematical deduction. This insight,
that digital computers can simulate any process of formal reasoning, is known as the Church–Turing
thesis.[17][page needed] Along with concurrent discoveries in neurology, information theory and cybernetics,
this led researchers to consider the possibility of building an electronic brain.[18] The first work that is
now generally recognized as AI was McCulloch and Pitts' 1943 formal design for Turing-complete
"artificial neurons".[14]
The field of AI research was founded at a conference at Dartmouth College in 1956.[19] The
attendees, including John McCarthy, Marvin Minsky, Allen Newell, Arthur Samuel and Herbert
Simon, became the leaders of AI research.[20] They and their students wrote programs that were, to
most people, simply astonishing:[21] computers were winning at checkers, solving word problems in
algebra, proving logical theorems and speaking English.[22] By the middle of the 1960s, research in
the U.S. was heavily funded by the Department of Defense[23] and laboratories had been established
around the world.[24] AI's founders were optimistic about the future: Herbert Simon predicted,
"machines will be capable, within twenty years, of doing any work a man can do." Marvin
Minsky agreed, writing, "within a generation ... the problem of creating 'artificial intelligence' will
substantially be solved."[25]
They failed to recognize the difficulty of some of the remaining tasks. Progress slowed and in 1974,
in response to the criticism of Sir James Lighthill[26] and ongoing pressure from the US Congress to
fund more productive projects, both the U.S. and British governments cut off exploratory research in
AI. The next few years would later be called an "AI winter",[27] a period when funding for AI projects
was hard to find.
In the early 1980s, AI research was revived by the commercial success of expert systems,[28] a form
of AI program that simulated the knowledge and analytical skills of human experts. By 1985 the
market for AI had reached over a billion dollars. At the same time, Japan's fifth generation
computer project inspired the U.S. and British governments to restore funding for academic
research.[29] However, beginning with the collapse of the Lisp machine market in 1987, AI once again
fell into disrepute, and a second, longer-lasting hiatus began.[30]
In the late 1990s and early 21st century, AI began to be used for logistics, data mining, medical
diagnosis and other areas.[12] The success was due to increasing computational power (see Moore's
law), greater emphasis on solving specific problems, new ties between AI and other fields, and a
commitment by researchers to mathematical methods and scientific standards.[31] Deep Blue became
the first computer chess-playing system to beat a reigning world chess champion, Garry
Kasparov, on 11 May 1997.[32]
Advanced statistical techniques (loosely known as deep learning), access to large amounts of
data, and faster computers enabled advances in machine learning and perception.[33] By the
mid-2010s, machine learning applications were used throughout the world.[34] In a Jeopardy! quiz
show exhibition match, IBM's question answering system, Watson, defeated the two greatest
Jeopardy! champions, Brad Rutter and Ken Jennings, by a significant margin.[35] The Kinect, which
provides a 3D body-motion interface for the Xbox 360 and the Xbox One, uses algorithms that
emerged from lengthy AI research,[36] as do intelligent personal assistants in smartphones.[37] In March
2016, AlphaGo won 4 out of 5 games of Go in a match with Go champion Lee Sedol, becoming the
first computer Go-playing system to beat a professional Go player without handicaps.[5][38]
According to Bloomberg's Jack Clark, 2015 was a landmark year for artificial intelligence, with the
number of software projects that use AI within Google increasing from "sporadic usage" in 2012 to
more than 2,700 projects. Clark also presents data indicating that error rates in image
processing tasks have fallen significantly since 2011.[39] He attributes this to an increase in
affordable neural networks, due to a rise in cloud computing infrastructure and to an increase in
research tools and datasets. Other cited examples include Microsoft's development of a Skype
system that can automatically translate from one language to another and Facebook's system that
can describe images to blind people.[39]

Arthropod
From Wikipedia, the free encyclopedia

Arthropod
Temporal range: 540–0 Ma (Cambrian–Holocene)

Extinct and modern arthropods

Scientific classification
Kingdom: Animalia
Subkingdom: Eumetazoa
(unranked): Bilateria
Superphylum: Ecdysozoa
(unranked): Tactopoda
Phylum: Arthropoda (von Siebold, 1848)[1]

Subphyla and Classes
Subphylum Trilobitomorpha
- Trilobita: trilobites (extinct)
Subphylum Chelicerata
- Arachnida: spiders, scorpions, etc.
- Merostomata: horseshoe crabs, eurypterids
- Pycnogonida: sea spiders
Subphylum Myriapoda
- Chilopoda: centipedes
- Diplopoda: millipedes
- Pauropoda: sister group to millipedes
- Symphyla: resemble centipedes
Subphylum Crustacea
- Branchiopoda: brine shrimp, etc.
- Remipedia: blind crustaceans
- Cephalocarida: horseshoe shrimp
- Maxillopoda: barnacles, copepods, fish lice, etc.
- Ostracoda: seed shrimp
- Malacostraca: lobsters, crabs, shrimp, etc.
Subphylum Hexapoda
- Insecta: insects
- Entognatha: springtails, etc.
Incertae sedis
- Camptophyllia (extinct)[2]
- Marrellomorpha (extinct)
- Acanthomeridion (extinct)

An arthropod (from Greek arthro-, "joint", and podos, "foot") is an invertebrate animal having
an exoskeleton (external skeleton), a segmented body, and paired jointed appendages. Arthropods
form the phylum Arthropoda, which includes the insects, arachnids, myriapods, and crustaceans.
Arthropods are characterized by their jointed limbs and a cuticle made of chitin, often mineralised
with calcium carbonate. The arthropod body plan consists of segments, each with a pair of
appendages. The rigid cuticle inhibits growth, so arthropods replace it periodically by moulting. Their
versatility has enabled them to become the most species-rich members of all ecological guilds in
most environments. They have over a million described species, making up more than 80% of all
described living animal species, and some of them, unlike most animals, are very successful in dry
environments.
Arthropods range in size from the microscopic crustacean Stygotantulus up to the Japanese spider
crab. Arthropods' primary internal cavity is a hemocoel, which accommodates their internal organs
and through which their haemolymph, an analogue of blood, circulates; they have open circulatory
systems. Like their exteriors, the internal organs of arthropods are generally built of repeated
segments. Their nervous system is "ladder-like", with paired ventral nerve cords running through all
segments and forming paired ganglia in each segment. Their heads are formed by fusion of varying
numbers of segments, and their brains are formed by fusion of the ganglia of these segments and
encircle the esophagus. The respiratory and excretory systems of arthropods vary, depending as
much on their environment as on the subphylum to which they belong.
Their vision relies on various combinations of compound eyes and pigment-pit ocelli: in most species
the ocelli can only detect the direction from which light is coming, and the compound eyes are the
main source of information, but the main eyes of spiders are ocelli that can form images and, in a
few cases, can swivel to track prey. Arthropods also have a wide range of chemical and mechanical
sensors, mostly based on modifications of the many setae (bristles) that project through their
cuticles. Arthropods' methods of reproduction and development are diverse; all terrestrial species
use internal fertilization, but this is often by indirect transfer of the sperm via an appendage or the
ground, rather than by direct injection. Aquatic species use either internal or external fertilization.
Almost all arthropods lay eggs, but scorpions give birth to live young after the eggs have hatched
inside the mother. Arthropod hatchlings vary from miniature adults to grubs and caterpillars that lack
jointed limbs and eventually undergo a total metamorphosis to produce the adult form. The level of
maternal care for hatchlings varies from nonexistent to the prolonged care provided by scorpions.
The evolutionary ancestry of arthropods dates back to the Cambrian period. The group is generally
regarded as monophyletic, and many analyses support the placement of arthropods
with cycloneuralians (or their constituent clades) in a superphylum Ecdysozoa. Overall, however,
the basal relationships of Metazoa are not yet well resolved. Likewise, the relationships between
various arthropod groups are still actively debated.
Arthropods contribute to the human food supply both directly as food, and more importantly
as pollinators of crops. Some specific species are known to spread severe disease to
humans, livestock, and crops.

Seminole
From Wikipedia, the free encyclopedia

For other uses, see Seminole (disambiguation).



Seminole

Seminole portraits

Total population: est. 18,600
- Seminole Nation of Oklahoma: 15,572 enrolled
- Seminole Tribe of Florida
- Miccosukee Tribe of Indians of Florida
Regions with significant populations: United States (Oklahoma, Florida, Georgia)
Languages: English, Mikasuki, Creek
Religion: Protestant, Catholic, Green Corn Ceremony
Related ethnic groups: Miccosukee, Choctaw, Muscogee (Creek)

The Seminole are a Native American tribe that emerged in a process of ethnogenesis from various
Native American groups who settled in Florida in the 18th century, most significantly Creeks from
what is now Georgia and Alabama.
They comprise three federally recognized tribes and independent groups, most living
in Oklahoma with a minority in Florida. The word Seminole is a corruption of cimarrón, a Spanish
term for "runaway" or "wild one".[1]
During their early decades, the Seminole became increasingly independent of other Creek groups
and established their own identity. They developed a thriving trade network during
the British and second Spanish periods (roughly 1767–1821).[2] The tribe expanded considerably
during this time, and was further supplemented from the late 18th century by free black people and
escaped enslaved people who settled near and paid tribute to Seminole towns. The latter became
known as Black Seminoles, although they kept their own Gullah culture of the Low Country.[3] They
developed the Afro-Seminole Creole language, which they spoke through the 19th century after the
move to Indian Territory.
Seminole culture is largely derived from that of the Creek; the most important ceremony is the Green
Corn Dance; other notable traditions include use of the black drink and ritual tobacco. As the
Seminole adapted to Florida environs, they developed local traditions, such as the construction of
open-air, thatched-roof houses known as chickees.[4] Historically the Seminole
spoke Mikasuki and Creek, both Muskogean languages.[5]
After the independent United States acquired Florida from Spain in 1819, its settlers increased
pressure on Seminole lands. During the period of the Seminole Wars (1818–1858), the tribe was first
confined to a large reservation in the center of the Florida peninsula by the Treaty of Moultrie
Creek (1823) and then evicted from the territory altogether according to the Treaty of Payne's
Landing (1832).[3] By 1842, most Seminoles and Black Seminoles had been coerced or forced to
move to Indian Territory west of the Mississippi River. During the American Civil War, most of the
Oklahoma Seminole allied with the Confederacy, after which they had to sign a new treaty with the
U.S., including freedom and tribal membership for the Black Seminole. Today residents of the
reservation are enrolled in the federally recognized Seminole Nation of Oklahoma, while others
belong to unorganized groups.
Perhaps fewer than 200 Seminoles remained in Florida after the Third Seminole War (1855–1858),
but they fostered a resurgence in traditional customs and a culture of staunch independence.[6] In the
late 19th century, the Florida Seminole re-established limited relations with the U.S. government and
in 1930 received 5,000 acres (20 km2) of reservation lands. Few Seminole moved to reservations
until the 1940s; they reorganized their government and received federal recognition in 1957 as
the Seminole Tribe of Florida. The more traditional people near the Tamiami Trail received federal
recognition as the Miccosukee Tribe in 1962.[7]
The Oklahoma and Florida Seminole filed land claim suits in the 1950s, which were combined in the
government's settlement of 1976. The tribes and Traditionals took until 1990 to negotiate an
agreement as to division of the settlement, a judgment trust against which members can draw for
education and other benefits. The Florida Seminole founded a high-stakes bingo game on their
reservation in the late 1970s, winning court challenges to initiate Indian Gaming, which many tribes
have adopted to generate revenues for welfare, education and development.

Semiconductor
From Wikipedia, the free encyclopedia

For devices using semiconductors and their history, see Semiconductor device. For other uses,
see Semiconductor (disambiguation).
Semiconductors are crystalline or amorphous solids with distinct electrical characteristics.[1] Their
electrical resistance is higher than that of typical conductive materials, but still much lower than that
of insulators. Their resistance decreases as their temperature increases, behavior opposite to that
of a metal. Finally, their conducting properties may be altered in useful ways by the deliberate,
controlled introduction of impurities ("doping") into the crystal structure, which lowers resistance but
also permits the creation of semiconductor junctions between differently-doped regions of the
extrinsic semiconductor crystal. The behavior of charge carriers, which include electrons, ions and
electron holes, at these junctions is the basis of diodes, transistors and all modern electronics.
Semiconductor devices can display a range of useful properties such as passing current more easily
in one direction than the other, showing variable resistance, and sensitivity to light or heat. Because
the electrical properties of a semiconductor material can be modified by doping, or by the application
of electrical fields or light, devices made from semiconductors can be used for amplification,
switching, and energy conversion.
The modern understanding of the properties of a semiconductor relies on quantum physics to
explain the movement of charge carriers in a crystal lattice.[2] Doping greatly increases the number of
charge carriers within the crystal. When a doped semiconductor contains mostly free holes it is
called "p-type", and when it contains mostly free electrons it is known as "n-type". The
semiconductor materials used in electronic devices are doped under precise conditions to control the
concentration and regions of p- and n-type dopants. A single semiconductor crystal can have many
p- and n-type regions; the pn junctions between these regions are responsible for the useful
electronic behavior.
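The relationship between doping and carrier populations described above can be illustrated numerically. The sketch below is not from the article: it applies the textbook mass-action law n·p = ni², with an assumed intrinsic carrier concentration for silicon near room temperature and the usual simplifications (full dopant ionization, net doping far above ni).

```python
# Illustrative sketch (values are assumptions, not from the article):
# carrier concentrations in a doped semiconductor via the mass-action
# law n * p = ni**2.

NI_SILICON = 1.0e10  # assumed intrinsic carrier concentration of Si at ~300 K, cm^-3

def carrier_concentrations(n_donor, n_acceptor, ni=NI_SILICON):
    """Approximate majority/minority carrier concentrations (cm^-3),
    assuming full dopant ionization and |net doping| >> ni."""
    net = n_donor - n_acceptor
    if net > 0:                       # n-type: electrons are the majority carrier
        n = net
        return {"type": "n", "electrons": n, "holes": ni**2 / n}
    elif net < 0:                     # p-type: holes are the majority carrier
        p = -net
        return {"type": "p", "electrons": ni**2 / p, "holes": p}
    return {"type": "intrinsic", "electrons": ni, "holes": ni}

# Phosphorus-doped silicon with 1e16 donors per cm^3:
result = carrier_concentrations(n_donor=1e16, n_acceptor=0)
print(result["type"])   # n
print(result["holes"])  # ~1e4 cm^-3: a tiny minority-carrier population
```

Note how heavily one-sided the populations become: a modest donor density suppresses the hole concentration by twelve orders of magnitude, which is exactly the asymmetry that p–n junctions exploit.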
Although some pure elements and many compounds display semiconductor
properties, silicon, germanium, and compounds of gallium are the most widely used in electronic
devices. Elements near the so-called "metalloid staircase", where the metalloids are located on the
periodic table, are usually used as semiconductors.
Some of the properties of semiconductor materials were observed throughout the mid-19th and first
decades of the 20th century. The first practical application of semiconductors in electronics was the
1904 development of the cat's-whisker detector, a primitive semiconductor diode widely used in
early radio receivers. Developments in quantum physics in turn allowed the development of
the transistor in 1947[3] and the integrated circuit in 1958.
Contents
1 Properties
2 Materials
2.1 Preparation of semiconductor materials
3 Physics of semiconductors
3.1 Energy bands and electrical conduction
3.2 Charge carriers (electrons and holes)
3.2.1 Carrier generation and recombination
3.3 Doping
4 Early history of semiconductors
5 See also
6 References
7 Further reading
8 External links
Properties
Variable conductivity
Semiconductors in their natural state are poor conductors because a current requires the
flow of electrons, and semiconductors have their valence bands filled, preventing the flow
of new electrons. Several developed techniques allow semiconducting materials to behave
like conducting materials, such as doping or gating. These modifications have two
outcomes: n-type and p-type, referring to an excess or shortage of electrons, respectively.
An unbalanced number of electrons causes a current to flow through the material.[4]
Heterojunctions
Heterojunctions occur when two differently doped semiconducting materials are joined
together. For example, a configuration could consist of p-doped and n-doped germanium.
This results in an exchange of electrons and holes between the differently doped
semiconducting materials. The n-doped germanium would have an excess of electrons, and
the p-doped germanium would have an excess of holes. The transfer occurs until equilibrium
is reached by a process called recombination, which causes the migrating electrons from the
n-type to come in contact with the migrating holes from the p-type. A product of this process
is charged ions, which result in an electric field.[2][4]
Excited Electrons
A difference in electric potential on a semiconducting material would cause it to leave thermal
equilibrium and create a non-equilibrium situation. This introduces electrons and holes to the
system, which interact via a process called ambipolar diffusion. Whenever thermal
equilibrium is disturbed in a semiconducting material, the number of holes and electrons
changes. Such disruptions can occur as a result of a temperature difference or of photons,
which can enter the system and create electrons and holes. The processes that create and
annihilate electrons and holes are called generation and recombination.[4]
Light emission
In certain semiconductors, excited electrons can relax by emitting light instead of producing
heat.[5] These semiconductors are used in the construction of light-emitting diodes and
fluorescent quantum dots.
Thermal energy conversion
Semiconductors have large thermoelectric power factors, making them useful
in thermoelectric generators, as well as high thermoelectric figures of merit, making them
useful in thermoelectric coolers.[6]
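The thermoelectric figure of merit mentioned above is commonly written ZT = S²σT/κ, where S is the Seebeck coefficient, σ the electrical conductivity, κ the thermal conductivity, and T the absolute temperature; the power factor is S²σ. The sketch below is not from the article, and its input values are rough assumptions in the range quoted for bismuth-telluride-like materials near room temperature.

```python
# Illustrative sketch: the thermoelectric figure of merit ZT = S^2 * sigma * T / kappa.
# All numeric values below are assumptions for a Bi2Te3-like material,
# not data from the article.

def figure_of_merit(seebeck_v_per_k, conductivity_s_per_m, kappa_w_per_mk, temp_k):
    """Dimensionless ZT; S^2 * sigma is the thermoelectric power factor."""
    power_factor = seebeck_v_per_k ** 2 * conductivity_s_per_m
    return power_factor * temp_k / kappa_w_per_mk

zt = figure_of_merit(seebeck_v_per_k=200e-6,      # 200 uV/K (assumed)
                     conductivity_s_per_m=1.0e5,  # assumed
                     kappa_w_per_mk=1.5,          # assumed
                     temp_k=300.0)
print(round(zt, 2))  # 0.8
```

A ZT near 1 is typical of good room-temperature thermoelectric materials, which is why semiconductors, rather than metals (high κ, low S), dominate thermoelectric cooler designs.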

Materials
Main article: List of semiconductor materials
Silicon crystals are the most common semiconducting materials used in microelectronics and photovoltaics.
A large number of elements and compounds have semiconducting properties, including:[7]

- Certain pure elements found in Group 14 of the periodic table; the most commercially important of these elements are silicon and germanium. Silicon and germanium are used effectively here because they have 4 valence electrons in their outermost shell, which gives them the ability to gain or lose electrons equally at the same time.
- Binary compounds, particularly between elements in Groups 13 and 15, such as gallium arsenide, Groups 12 and 16, Groups 14 and 16, and between different Group 14 elements, e.g. silicon carbide.
- Certain ternary compounds, oxides, and alloys.
- Organic semiconductors, made of organic compounds.

The most common semiconducting materials are crystalline solids, but amorphous
and liquid semiconductors are also known. These include hydrogenated
amorphous silicon and mixtures of arsenic, selenium, and tellurium in a variety
of proportions. These compounds share with better-known semiconductors the
properties of intermediate conductivity and a rapid variation of conductivity with
temperature, as well as occasional negative resistance. Such disordered
materials lack the rigid crystalline structure of conventional semiconductors
such as silicon. They are generally used in thin film structures, which do not
require material of higher electronic quality, being relatively insensitive to
impurities and radiation damage.

Semi-automatic transmission
From Wikipedia, the free encyclopedia


A semi-automatic transmission (SAT), also known as a clutchless manual transmission, automated
manual transmission, trigger shift, flappy-paddle gear shift, or paddle-shift gearbox, is an automobile
transmission that does not change gears automatically, but rather facilitates manual gear changes
by dispensing with the need to press a clutch pedal at the same time as changing gears. It uses
electronic sensors, pneumatics, processors, and actuators to execute gear shifts on input from the
driver or by a computer. The clutch itself is actuated by electronic equipment that can synchronise
the timing and torque required to make quick, smooth gear shifts, so there is no clutch pedal for the
driver to depress before making a gear change. The system was designed by automobile
manufacturers to provide a better driving experience through fast overtaking maneuvers on
highways. Some motorcycles also use a system with a conventional gear change but without the
need for manual clutch operation.
Contents
1 Early semi-automatic transmissions
2 Comparison to other automated transmissions
3 Operation
4 History
4.1 Alfa Romeo
4.2 Chevrolet
4.3 Chrysler
4.4 Citroën (and Peugeot)
4.5 Daihatsu
4.6 Ferrari
4.7 Ford
4.8 General Motors
4.9 Honda
4.10 Hudson
4.11 Isuzu
4.12 Mercedes-Benz
4.13 NSU
4.14 Opel
4.15 Packard
4.16 Plymouth
4.17 Renault
4.18 Reo
4.19 SAAB
4.20 Simca
4.21 Smart
4.22 Volkswagen
5 Other applications
5.1 Racing
5.2 Trucks, buses, and trains
5.3 Bristol/Daimler/Leyland buses
5.4 Motorcycles
5.5 ATVs
6 Marketing names
7 Types
8 See also
9 References
Early semi-automatic transmissions
See also: Preselector gearbox
In the 1930s, automakers began to market cars with some sort of device that would reduce the
amount of clutching and de-clutching and shifting required in stop and go driving. Most typically, a
fluid coupling or a centrifugal clutch replaced the standard manual clutch to allow for stop and go
driving without using the clutch pedal every time the car was brought to a stop. More sophisticated
systems allowed for shifting while driving without using the clutch, and some systems did away with
the clutch pedal altogether. Semi-automatic transmissions were phased out as technology advanced
and automatic controls were developed to take care of changing gear ratios. Smaller, lower-powered
cars used semi-automatic transmissions with a dry clutch because the mechanical connection
offered a more efficient powertrain compared to a fluid coupling.
Another early semi-automatic transmission was the Sinclair S.S.S. (synchro-self-shifting) Powerflow
gearbox, which was applied to Huwood-Hudswell diesel mine locomotives.[1] It was also applied to
some road vehicles.[2] It is covered by US patent 2505842.[3]

Comparison to other automated transmissions
Modern semi-automatic transmissions usually have a fully automatic mode, in which the driver does
not need to change gears at all, operating in the same manner as a conventional automatic
transmission by allowing the transmission's computer to change gear automatically if, for example,
the driver were redlining the engine. The semi-automatic transmission can be engaged in manual
mode, wherein one can up-shift or down-shift using the console-mounted shift selector or the
paddle shifters just behind the steering wheel, without the need of a clutch pedal. The ability to shift
gears manually, often via paddle shifters, can also be found on certain automatic transmissions
(manumatics such as Tiptronic) and continuously variable transmissions (CVTs) (such
as Lineartronic).
Despite superficial similarity to other automated transmissions, semi-automatic transmissions differ
significantly in internal operation and driver's "feel" from manumatics and CVTs. A manumatic, like a
standard automatic transmission, uses a torque converter instead of a clutch to manage the link
between the transmission and the engine, while a CVT uses a belt instead of a fixed number of
gears. A semi-automatic transmission offers a more direct connection between the engine and
wheels than a manumatic, and this responsiveness is preferred in high-performance driving
applications, while a manumatic is better for street use because its fluid coupling makes it easier for
the transmission to consistently perform smooth shifts,[4][5] and CVTs are generally found in
gasoline-electric hybrid engine applications.
Typically, semi-automatic transmissions are more expensive than manumatics and CVTs; for
instance, BMW's 7-speed Double Clutch Transmission is a CAD 3900 upgrade from the standard
6-speed manual, while the 6-speed Steptronic Automatic was only a CAD 1600 option in 2007.[6] In
a given market, very few models have two choices of automated transmissions; for instance, the BMW
545i (E60) and BMW 645Ci/650i (E63/64) (standard 6-speed manual) had as an option a 6-speed
automatic "Steptronic" transmission or a 7-speed Getrag SMG III single-clutch semi-automatic
transmission until after the 2008 model year, when the SMG III was dropped.[7] Many sport luxury
manufacturers such as BMW offer the manumatic transmissions for their mainstream lineup (such as
the BMW 328i and BMW 535i) and the semi-automatic gearbox for their high-performance models
(the BMW M3 and BMW M5).[6]
The semi-automatic transmission may be derived from a conventional automatic; for
instance, Mercedes-Benz's AMG Speedshift MCT semi-automatic transmission is based on the
7G-Tronic manumatic, but the latter's torque converter has been replaced with a wet, multi-plate
launch clutch.[8] Other semi-automatic transmissions have their roots in a conventional manual; the
SMG II drivelogic (found in the BMW M3 (E46)) is a Getrag 6-speed manual transmission, but with
an electrohydraulically actuated clutch, similar to a Formula One style transmission.[9][10][11] The
most common type of semi-automatic transmission in recent years has been the dual-clutch type,
since single-clutch types such as the SMG III have been criticized for their general lack of
smoothness in everyday driving (although being responsive at the track).[12]

Operation
Semi-trailer truck
From Wikipedia, the free encyclopedia

"18 wheeler" and "eighteen wheeler" redirect here. For other uses, see 18 wheeler (disambiguation).
"Big Rig" redirects here. For other uses, see Big Rig (disambiguation).
Semi-trailer tractor with sleeper behind the cab and oversize load on lowboy trailer
Tractor with a dump trailer
A tractor with an auto-transport semi-trailer
FAW semi-trailer truck in China
A semi-trailer truck is the combination of a tractor unit and one or more semi-trailers to carry
freight. It is variously known as a transport (truck) in Canada; semi or single in
Australia; semi, tractor-trailer, big rig, or eighteen-wheeler in the United States; and articulated lorry,
abbreviated artic, in Britain and Ireland.
A semi-trailer attaches to the tractor with a fifth wheel hitch, with much of its weight borne by the
tractor. The result is that both tractor and semi-trailer will have a distinctly different design than a
rigid truck and trailer.
Contents
1 Regional configurations
1.1 North America
1.2 Europe
1.2.1 United Kingdom
1.2.2 Continental Europe
1.2.3 Scandinavia
1.3 Australia
2 Construction
2.1 Types of trailers
2.2 Coupling and uncoupling
2.3 Braking
2.4 Transmission
2.5 Lights
2.6 Wheels and tires
2.7 Skirted trailers
2.8 Underride guard
2.9 Semi-truck manufacturers
3 Driver's license
3.1 Canada
3.2 United States
3.3 Taiwan
3.4 Europe
3.5 Australia
3.6 New Zealand
4 Role in trade
5 Media
5.1 Television
5.2 Films
5.3 Music
5.4 Video games
6 See also
7 References
8 External links
Regional configurations
North America
Tractor unit hauling tractor units in Idaho
In North America, the combination vehicles made up of a powered truck and one or more
semitrailers are known as "semis", "semitrailers",[1] "tractor-trailers", "big rigs", "semi trucks",
"eighteen-wheelers", or "semi-tractor trailers".
The tractor unit typically has two or three axles; those built for hauling heavy-duty
commercial-construction machinery may have as many as five, some often being lift axles.
The most common tractor-cab layout has a forward engine, one steering axle, and two drive axles.
The fifth-wheel trailer coupling on most tractor trucks is movable fore and aft, to allow adjustment in
the weight distribution over its rear axle(s).
Ubiquitous in Europe, but less common in North America since

Electronic publishing
From Wikipedia, the free encyclopedia

Electronic publishing (also referred to as e-publishing, digital publishing, or online
publishing) includes the digital publication of e-books, digital magazines, and the development
of digital libraries and catalogues. Electronic publishing has become common in scientific
publishing where it has been argued that peer-reviewed scientific journals are in the process of
being replaced by electronic publishing. It is also becoming common to distribute books, magazines,
and newspapers to consumers through tablet reading devices, a market that is growing by millions
each year,[1] generated by online vendors such as Apple's iTunes bookstore, Amazon's bookstore for
Kindle, and books in the Google Play Bookstore. Market research suggests that half of all magazine
and newspaper circulation will be via digital delivery by the end of 2015[2] and that half of all reading
in the United States will be done without paper by 2015. [3]
Although distribution via the Internet (also known as online publishing or web publishing when in
the form of a website) is nowadays strongly associated with electronic publishing, there are many
non-network electronic publications such as encyclopedias on CD and DVD, as well as technical and
reference publications relied on by mobile users and others without reliable and high speed access
to a network. Electronic publishing is also being used in the field of test preparation, in developed
as well as developing economies, for student education (thus partly replacing conventional books),
for it enables content to be combined with analytics for the benefit of students. The use of electronic
publishing for textbooks may become more prevalent with iBooks from Apple Inc. and Apple's
negotiation with the three largest textbook suppliers in the U.S.[4] Electronic publishing is increasingly
popular in works of fiction. Electronic publishers are able to respond quickly to changing market
demand, because the companies do not have to order printed books and have them delivered.
E-publishing is also making a wider range of books available, including books that customers would
not find in standard book retailers, due to insufficient demand for a traditional "print run".
E-publication is enabling new authors to release books that would be unlikely to be profitable for
traditional publishers. While the term "electronic publishing" is primarily used in the 2010s to refer to
online and web-based publishers, the term has a history of being used to describe the development
of new forms of production, distribution, and user interaction in regard to computer-based production
of text and other interactive media.
Contents
1 Process
2 Academic publishing
3 Copyright
4 Examples
5 Business models
6 See also
7 References
8 External links
Process
The electronic publishing process follows some aspects of the traditional paper-based publishing
process[5] but differs from traditional publishing in two ways: 1) it does not include using an offset
printing press to print the final product, and 2) it avoids the distribution of a physical product (e.g.,
paper books, paper magazines, or paper newspapers). Because the content is electronic, it may be
distributed over the Internet and through electronic bookstores, and users can read the material on a
range of electronic and digital devices, including desktop computers, laptops, tablet computers,
smartphones, and e-reader tablets. The consumer may read the published content online on a
website, in an application on a tablet device, or in a PDF document on a computer. In some cases,
the reader may print the content onto paper using a consumer-grade ink-jet or laser printer or via a
print-on-demand system. Some users download digital content to their devices, enabling them to
read the content even when their device is not connected to the Internet (e.g., on an airplane flight).
Distributing content electronically as software applications ("apps") has become popular in the
2010s, due to the rapid consumer adoption of smartphones and tablets. At first, native apps for each
mobile platform were required to reach all audiences, but in an effort toward universal device
compatibility, attention has turned to using HTML5 to create web apps that can run on any browser
and function on many devices. The benefit of electronic publishing comes from using three attributes
of digital technology: XML tags to define content,[6] style sheets to define the look of content,
and metadata (data about data) to describe the content for search engines, thus helping users to
find and locate the content. (A common example of metadata is the information about a
song's songwriter, composer, and genre that is electronically encoded along with most CDs and
digital audio files; this metadata makes it easier for music lovers to find the songs they are looking
for.) Together, tags, style sheets, and metadata enable "reflowable" content that adapts to various
reading devices (tablet, smartphone, e-reader, etc.) or electronic delivery methods.
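As a rough illustration of the three attributes described above, the sketch below tags a snippet of content in XML, attaches metadata, and reads the metadata back without touching the body. The element names are invented for this example and do not follow any real e-book schema.

```python
# A minimal sketch of tagged, metadata-carrying content (element names
# are invented for illustration, not a real publishing schema).
import xml.etree.ElementTree as ET

doc = ET.Element("article")

# Metadata describes the content for search engines and reading apps.
meta = ET.SubElement(doc, "metadata")
ET.SubElement(meta, "title").text = "Reflowable Content 101"
ET.SubElement(meta, "author").text = "A. Writer"

# The body carries the tagged content; a stylesheet rule keyed on the
# "class" attribute would control its appearance per device.
body = ET.SubElement(doc, "body")
para = ET.SubElement(body, "p", attrib={"class": "lead"})
para.text = "Tagged text can reflow to fit any screen."

# A search engine or library catalogue can pull the metadata alone,
# without rendering the body:
title = doc.find("metadata/title").text
print(title)  # Reflowable Content 101
```

Because the structure (tags) is separate from the presentation (style sheets), the same document can be rendered differently on a phone, a tablet, or an e-reader, which is the essence of "reflowable" content.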
Because electronic publishing often requires text markup (e.g., Hypertext Markup Language or
some other markup language) to develop online delivery methods, the traditional roles of typesetters
and book designers, who created the printing set-ups for paper books, have changed. Designers of
digitally published content must have a strong knowledge of markup languages, the variety of
reading devices and computers available, and the ways in which consumers read, view, or access
the content. However, in the 2010s, new user-friendly design software is becoming available for
designers to publish content in this standard without needing to know detailed programming
techniques, such as Adobe Systems' Digital Publishing Suite and Apple's iBooks

e-commerce
From Wikipedia, the free encyclopedia
(Redirected from Electronic commerce)



E-commerce is the activity of buying or selling products or services online. Electronic commerce
draws on technologies such as mobile commerce, electronic funds transfer, supply chain
management, Internet marketing, online transaction processing, electronic data
interchange (EDI), inventory management systems, and automated data collection systems. Modern
electronic commerce typically uses the World Wide Web for at least one part of the transaction's life
cycle, although it may also use other technologies such as e-mail.
E-commerce businesses may employ some or all of the following:

- Online shopping web sites for retail sales direct to consumers
- Providing or participating in online marketplaces, which process third-party business-to-consumer or consumer-to-consumer sales
- Business-to-business buying and selling
- Gathering and using demographic data through web contacts and social media
- Business-to-business (B2B) electronic data interchange
- Marketing to prospective and established customers by e-mail or fax (for example, with newsletters)
- Engaging in pretail for launching new products and services
- Online financial exchanges for currency exchanges or trading purposes

Contents
1 Timeline
2 Business application
3 Governmental regulation
4 Forms
5 Global trends
6 Impact on markets and retailers
7 Impact on supply chain management
8 Social impact
9 Distribution channels
10 Examples of new systems
11 See also
12 References
13 Further reading
14 External links
Timeline
A timeline for the development of e-commerce:

- 1971 or 1972: The ARPANET is used to arrange a cannabis sale between students at the Stanford Artificial Intelligence Laboratory and the Massachusetts Institute of Technology, later described as "the seminal act of e-commerce" in John Markoff's book What the Dormouse Said.[1]
- 1979: Michael Aldrich demonstrates the first online shopping system.[2]
- 1981: Thomson Holidays UK is the first business-to-business online shopping system to be installed.[3]
- 1982: Minitel was introduced nationwide in France by France Télécom and used for online ordering.
- 1983: California State Assembly holds first hearing on "electronic

Data storage device
From Wikipedia, the free encyclopedia
(Redirected from Electronic storage)

Many different consumer electronic devices can store data.

Edison cylinder phonograph ca. 1899. The phonograph cylinder is a storage medium. The phonograph may be
considered a storage device.

On a reel-to-reel tape recorder (Sony TC-630), the recorder is data storage equipment and the magnetic tape is
a data storage medium.

RNA might be the oldest data storage medium.[1]

A data storage device is a device for recording (storing) information (data). Recording can be done
using virtually any form of energy, spanning from manual muscle power in handwriting, to acoustic
vibrations in phonographic recording, to electromagnetic energy modulating magnetic
tape and optical discs.
A storage device may hold information, process information, or both. A device that only holds
information is a recording medium. Devices that process information (data storage equipment) may
either access a separate portable (removable) recording medium or a permanent component to
store and retrieve data.
Electronic data storage requires electrical power to store and retrieve that data. Most storage
devices that do not require vision and a brain to read data fall into this category. Electromagnetic
data may be stored in either an analog data or digital data format on a variety of media. This type of
data is considered to be electronically encoded data, whether it is electronically stored in
a semiconductor device, for it is certain that a semiconductor device was used to record it on its
medium. Most electronically processed data storage media (including some forms of computer data
storage) are considered permanent (non-volatile) storage, that is, the data will remain stored when
power is removed from the device. In contrast, most electronically stored information within most
types of semiconductor microcircuits (computer chips) is volatile memory, for it vanishes if power is
removed.
Except for barcodes, optical character recognition (OCR), and magnetic ink character
recognition (MICR) data, electronic data storage is easier to revise and may be more cost effective
than alternative methods due to smaller physical space requirements and the ease of replacing
(rewriting) data on the same medium.[2]

Contents
1 Global capacity, digitization, and trends
2 See also
3 References
4 Further reading
5 External links

Electronic design automation
From Wikipedia, the free encyclopedia
(Redirected from ECAD)

"ECAD" redirects here. For the Brazilian music licensing organization, see Escritório Central de
Arrecadação e Distribuição. For other uses, see ECAD (disambiguation).
Electronic design automation (EDA), also referred to as electronic computer-aided
design (ECAD),[1] is a category of software tools for designing electronic systems such as integrated
circuits and printed circuit boards. The tools work together in a design flow that chip designers use to
design and analyze entire semiconductor chips. Since a modern semiconductor chip can have
billions of components, EDA tools are essential for their design.
This article describes EDA specifically with respect to integrated circuits.
Contents
1 History
1.1 Early days
1.2 Birth of commercial EDA
2 Current status
3 Software focuses
3.1 Design
3.2 Simulation
3.3 Analysis and verification
3.4 Manufacturing preparation
4 Companies
4.1 Old companies
4.2 Acquisitions
5 Table of quarterly EDA industry worldwide revenue
6 See also
7 References
History
Early days
Before EDA, integrated circuits were designed by hand, and manually laid out. Some advanced
shops used geometric software to generate the tapes for the Gerber photoplotter, but even those
copied digital recordings of mechanically drawn components. The process was fundamentally
graphic, with the translation from electronics to graphics done manually. The best known company
from this era was Calma, whose GDSII format survives.
By the mid-1970s, developers started to automate the design along with the drafting. The first
placement and routing (Place and route) tools were developed. The proceedings of the Design
Automation Conference cover much of this era.
The next era began about the time of the publication of "Introduction to VLSI Systems" by Carver
Mead and Lynn Conway in 1980. This groundbreaking text advocated chip design with
programming languages that compiled to silicon. The immediate result was a considerable increase
in the complexity of the chips that could be designed, with improved access to design
verification tools that used logic simulation. Often the chips were easier to lay out and more likely to
function correctly, since their designs could be simulated more thoroughly prior to construction.
Although the languages and tools have evolved, this general approach of specifying the desired
behavior in a textual programming language and letting the tools derive the detailed physical design
remains the basis of digital IC design today.
The earliest EDA tools were produced academically. One of the most famous was the "Berkeley
VLSI Tools Tarball", a set of UNIX utilities used to design early VLSI systems. Still widely used are
the Espresso heuristic logic minimizer and Magic.
Another crucial development was the formation of MOSIS, a consortium of universities and
fabricators that developed an inexpensive way to train student chip designers by producing real
integrated circuits. The basic concept was to use reliable, low-cost, relatively low-technology IC
processes, and pack a large number of projects per wafer, with just a few copies of each project's
chips. Cooperating fabricators either donated the processed wafers, or sold them at cost, seeing the
program as helpful to their own long-term growth.

Birth of commercial EDA
1981 marks the beginning of EDA as an industry. For many years, the larger electronic companies,
such as Hewlett Packard, Tektronix, and Intel, had pursued EDA internally. In 1981, managers and
developers spun out of these companies to concentrate on EDA as a business. Daisy
Systems, Mentor Graphics, and Valid Logic Systems were all founded around this time, and
collectively referred to as DMV. Within a few years there were many companies specializing in EDA,
each with a slightly different emphasis. The first trade show for EDA was held at the Design
Automation Conference in 1984.

In 1981, the U.S. Department of Defense began funding VHDL as a hardware description
language. In 1986, Verilog, another popular high-level design language, was first introduced as a
hardware description language by Gateway Design Automation. Simulators quickly followed these
introductions, permitting direct simulation of chip designs: executable specifications. In a few more
years, back-ends were developed to perform logic synthesis.
3D PCB layout
Current status
Current digital flows are extremely modular (see Integrated circuit design, Design closure,
and Design flow (EDA)). The front ends produce standardized design descriptions that compile into
invocations of "cells", without regard to the cell technology. Cells implement logic or other electronic
functions using a particular integrated circuit technology. Fabricators generally provide libraries of
components for their production processes, with simulation models that fit standard simulation tools.
Analog EDA tools are far less modular, since many more functions are required, they interact more
strongly, and the components are (in general) less ideal.
EDA for electronics has rapidly increased in importance with the continuous scaling
of semiconductor technology.[2] Some users are foundry operators, who operate the semiconductor
fabrication facilities, or "fabs", and design-service companies who use EDA software to evaluate an
incoming design for manufacturing readiness. EDA tools are also used for programming design
functionality into FPGAs.

Software focuses
Design
Main article: Design flow (EDA)

- High-level synthesis (or behavioural synthesis, algorithmic synthesis): a high-level design description (e.g. in C/C++) is converted into RTL.
- Logic synthesis: translation of an RTL design description (e.g. written in Verilog or VHDL) into a discrete netlist of logic gates.
- Schematic capture: for standard cell digital, analog, and RF, like Capture CIS in Orcad by Cadence and ISIS in Proteus.
- Layout: usually schematic-driven layout, like Layout in Orcad by Cadence and ARES in Proteus.
Simulation[edit]
Main article: Electronic circuit simulation

Transistor simulation – low-level transistor simulation of a schematic/layout's behavior, accurate at device level.
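The numerical core of transistor-level simulators such as SPICE is solving the circuit's differential equations over time. As a minimal stand-in for that idea (the circuit, component values, and `rc_step_response` helper are illustrative, not from any real tool), the sketch below integrates the charging of an RC low-pass filter toward a 1 V step using explicit Euler steps:

```python
# Minimal device-level-style transient simulation: explicit Euler
# integration of an RC circuit charging toward a step input.
# Real simulators use implicit methods and nonlinear device models.

def rc_step_response(r, c, v_in, t_end, dt):
    """Return a list of (time, voltage) samples for the capacitor."""
    v, t, trace = 0.0, 0.0, []
    while t < t_end:
        dv = (v_in - v) / (r * c) * dt   # C dv/dt = (v_in - v) / R
        v += dv
        t += dt
        trace.append((t, v))
    return trace

# 1 kΩ, 1 µF → time constant 1 ms; simulate five time constants.
trace = rc_step_response(r=1e3, c=1e-6, v_in=1.0, t_end=5e-3, dt=1e-5)
print(trace[-1][1])  # close to 1.0 after ~5 time constants
```

After five time constants the output has settled to roughly 99% of the input, matching the analytic result 1 − e⁻⁵.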

Logic simulation – digital simulation of an RTL or gate-netlist's digital (boolean 0/1) behavior, accurate at boolean level.
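The essence of gate-level logic simulation is evaluating a netlist of boolean cells for a given input vector. The following Python sketch shows only that core (the netlist structure and gate names are invented for illustration; production simulators are event-driven and handle unknown/high-impedance values, delays, and full HDL semantics):

```python
# Minimal gate-level logic simulation: evaluate a topologically
# ordered netlist of boolean cells for one input vector.

GATES = {
    "AND": lambda a, b: a & b,
    "OR":  lambda a, b: a | b,
    "NOT": lambda a: 1 - a,
}

def simulate(netlist, inputs):
    """netlist: list of (gate, [input nets], output net) in
    topological order; inputs: dict of primary-input values (0/1)."""
    nets = dict(inputs)
    for gate, ins, out in netlist:
        nets[out] = GATES[gate](*(nets[n] for n in ins))
    return nets

netlist = [("AND", ["a", "b"], "n1"),
           ("NOT", ["c"], "n2"),
           ("OR",  ["n1", "n2"], "y")]
print(simulate(netlist, {"a": 1, "b": 0, "c": 0})["y"])  # -> 1
```

This computes y = (a AND b) OR (NOT c), so the shown input vector yields 1 because c is 0.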

Behavioral simulation – high-level simulation of a design's architectural operation, accurate at cycle level or interface level.

Hardware emulation – use of special-purpose hardware to emulate the logic of a proposed
design. It can sometimes be plugged into a system in place of a yet-to-be-built chip; this is
called in-circuit emulation.

Technology CAD – simulation and analysis of the underlying process technology. Electrical
properties of devices are derived directly from device physics.

Electromagnetic field solvers, or just field solvers – solve Maxwell's equations directly for
cases of interest in IC and PCB design. They are known for being slower but more accurate than
layout extraction.

Schematic capture program

Analysis and verification[edit]

Functional verification

Clock domain crossing verification (CDC check): similar to linting, but these checks/tools
specialize in detecting and reporting potential issues, such as data loss and metastability,
caused by the use of multiple clock domains in the design.

Formal verification, also model checking: Attempts to prove, by mathematical methods, that
the system has certain desired properties, and that certain undesired effects (such as deadlock)
cannot occur.
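One simple flavor of model checking is explicit-state exploration: exhaustively enumerate every reachable state and confirm the property in each. The sketch below (the two-bit toggle system, the `successors` function, and the `check` helper are all made up for illustration; real model checkers use symbolic techniques to handle enormous state spaces) also flags deadlock, i.e. a reachable state with no successors:

```python
# Toy explicit-state model checking: breadth-first exploration of a
# small transition system, verifying a safety property in every
# reachable state and detecting deadlocks along the way.
from collections import deque

def successors(state):
    """Transitions of a made-up system: toggle either of two bits."""
    a, b = state
    return [((a + 1) % 2, b), (a, (b + 1) % 2)]

def check(initial, prop):
    seen, queue = {initial}, deque([initial])
    while queue:
        s = queue.popleft()
        if not prop(s):
            return f"property violated in {s}"
        nxt = successors(s)
        if not nxt:
            return f"deadlock in {s}"
        for t in nxt:
            if t not in seen:
                seen.add(t)
                queue.append(t)
    return "ok"          # property holds in every reachable state

print(check((0, 0), lambda s: s[0] + s[1] <= 2))  # -> ok
```

Because the search visits every reachable state, a returned "ok" is a proof of the property for this finite system, not just evidence from sampled runs.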

Equivalence checking: algorithmic comparison between a chip's RTL description and its synthesized gate netlist, to ensure functional equivalence at the logical level.
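For purely combinational logic with few inputs, equivalence can be checked by brute force: compare the two descriptions on every input assignment. The Python sketch below does exactly that (the `equivalent` helper and the NAND example are illustrative; industrial equivalence checkers rely on SAT solvers and BDDs to scale to millions of gates):

```python
# Brute-force combinational equivalence check: two boolean functions
# are equivalent iff they agree on all input assignments.
from itertools import product

def equivalent(f, g, n_inputs):
    return all(f(*v) == g(*v) for v in product([0, 1], repeat=n_inputs))

# RTL-level intent vs. a synthesized gate structure (De Morgan form):
rtl   = lambda a, b: 1 - (a & b)        # NAND as specified
gates = lambda a, b: (1 - a) | (1 - b)  # NOT/OR implementation
print(equivalent(rtl, gates, 2))  # -> True
```

The two functions differ structurally but agree on all four input combinations, which is precisely what equivalence checking establishes.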

Static timing analysis: analysis of the timing of a circuit in an input-independent manner, hence finding the worst case over all possible inputs.
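The input-independence of static timing analysis comes from treating timing as a longest-path problem on the gate graph rather than simulating vectors. A minimal sketch (the netlist structure, gate delays, and `arrival_times` helper are invented illustrative values; real tools model slew, interconnect, and setup/hold constraints):

```python
# Static timing analysis in miniature: worst-case arrival time at
# each net is a longest-path computation over a topologically
# ordered gate netlist, with no input vectors involved.

def arrival_times(netlist, delays, primary_inputs):
    """netlist: list of (gate, [input nets], output net) in
    topological order; delays: per-gate delay in ns (illustrative)."""
    at = {n: 0.0 for n in primary_inputs}   # inputs arrive at t = 0
    for gate, ins, out in netlist:
        at[out] = max(at[n] for n in ins) + delays[gate]
    return at

netlist = [("AND", ["a", "b"], "n1"),
           ("NOT", ["c"], "n2"),
           ("OR",  ["n1", "n2"], "y")]
delays = {"AND": 1.5, "OR": 1.2, "NOT": 0.6}
print(arrival_times(netlist, delays, ["a", "b", "c"])["y"])  # -> 2.7
```

The critical path here runs through the AND and OR gates (1.5 + 1.2 = 2.7 ns); the faster NOT branch cannot make the output later, so it does not set the worst case.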

Physical verification, PV: checking whether a design is physically manufacturable, that the
resulting chips will not have any function-preventing physical defects, and that they will meet the
original specifications.

Hybrid vehicle
From Wikipedia, the free encyclopedia

For other types of hybrid transportation, see Hybrid vehicle (disambiguation).


"Hybrid technology" redirects here. For the company formerly known as Hybrid Technologies, see Li-ion Motors.

A hybrid vehicle uses two or more distinct types of power, such as an internal combustion
engine plus an electric motor,[1] e.g. in diesel-electric trains using diesel engines and electricity from
overhead lines, and submarines that use diesels when surfaced and batteries when submerged.
Other means of storing energy include pressurized fluid, as in hydraulic hybrids.
Contents

1 Power
2 Vehicle type
   2.1 Two-wheeled and cycle-type vehicles
   2.2 Heavy vehicles
3 Engine type
   3.1 Hybrid electric-petroleum vehicles
   3.2 Continuously outboard recharged electric vehicle (COREV)
   3.3 Hybrid fuel (dual mode)
   3.4 Fluid power hybrid
   3.5 Electric-human power hybrid vehicle
4 Hybrid vehicle power train configurations
   4.1 Parallel hybrid
   4.2 Mild parallel hybrid
   4.3 Power-split or series-parallel hybrid
   4.4 Series hybrid
   4.5 Plug-in hybrid electric vehicle (PHEV)
   4.6 Road safety for cyclists, pedestrians
5 Environmental issues
   5.1 Fuel consumption and emissions reductions
   5.2 Hybrid vehicle emissions
   5.3 Environmental impact of hybrid car battery
   5.4 Charging
   5.5 Raw materials increasing costs
6 How hybrid-electric vehicles work
7 Alternative green vehicles
8 Peugeot/Citroën Hybrid Vehicle
9 Marketing
10 Adoption rate
11 European Union 2020 Regulation Standards
12 See also
13 References
14 External links

Power[edit]
Power sources for hybrid vehicles include:

Coal, wood or other solid combustibles

Compressed or liquefied natural gas

Petrol (gasoline) or Diesel fuel

Human power, e.g. pedaling or rowing

Electromagnetic fields, radio waves

Electric batteries/capacitors

Overhead electricity

Hydraulic accumulator

Hydrogen

Flywheel

Solar

Wind

Vehicle type[edit]

A biodiesel hybrid bus in Montreal

Two-wheeled and cycle-type vehicles[edit]


Mopeds, electric bicycles, and even electric kick scooters are a simple form of hybrid, powered by
an internal combustion engine or electric motor and the rider's muscles. Early prototype motorcycles
in the late 19th century used the same principle.

In a parallel hybrid bicycle, human and motor torques are mechanically coupled at the
pedal or one of the wheels, e.g. using a hub motor, a roller pressing onto a tire, or a connection
to a wheel using a transmission element. Most motorized bicycles and mopeds are of this type.[2]

In a series hybrid bicycle (SHB) (a kind of chainless bicycle), the user pedals a generator,
charging a battery or feeding the motor, which delivers all of the torque required. They are
commercially available, being simple in theory and manufacture.[3]

The first published prototype of an SHB is by Augustus Kinzel (US Patent 3,884,317) in 1975. In
1994 Bernie Macdonalds conceived the Electrilite[4] SHB with power electronics allowing regenerative
braking and pedaling while stationary. In 1995 Thomas Müller designed and built a "Fahrrad mit
elektromagnetischem Antrieb" (bicycle with electromagnetic drive) for his 1995 diploma thesis. In 1996 Jürg Blatter and Andreas Fuchs
of Berne University of Applied Sciences built an SHB and in 1998 modified a Leitra tricycle
(European patent EP 1165188). Until 2005 they built several prototype
SH tricycles and quadricycles.[5] In 1999 Harald Kutzke described an "active bicycle": the aim is to
approach the ideal bicycle, weighing nothing and having no drag, by electronic compensation.

A series hybrid electric-petroleum bicycle (SHEPB) is powered by pedals, batteries, a
petrol generator, or a plug-in charger, providing flexibility and range enhancements over electric-only bicycles.

A SHEPB prototype made by David Kitson in Australia[6] in 2014 used a lightweight brushless DC
electric motor from an aerial drone and small hand-tool sized internal combustion engine, and a 3D
printed drive system an