A standard 8P8C (often called RJ45) connector used most commonly on cat5
cable, a type of cabling used primarily in Ethernet networks.
Contents
1 History
2 Standardization
3 General description
4 Dealing with multiple clients
o 4.1 CSMA/CD shared medium Ethernet
4.1.1 Main procedure
4.1.2 Collision detected procedure
o 4.2 Repeaters and hubs
o 4.3 Bridging and switching
o 4.4 Dual speed hubs
o 4.5 More advanced networks
5 Autonegotiation and duplex mismatch
6 Physical layer
7 Ethernet frame types and the EtherType field
o 7.1 Runt frames
8 Varieties of Ethernet
o 8.1 Early varieties
o 8.2 10Mbit/s Ethernet
o 8.3 Fast Ethernet
o 8.4 Gigabit Ethernet
o 8.5 10-gigabit Ethernet
o 8.6 100-gigabit Ethernet
9 Related standards
10 See also
11 References
12 External links
History
The experimental Ethernet described in Metcalfe and Boggs's original paper ran at 3 Mbit/s and had eight-bit destination and source address fields, so the original Ethernet addresses were not the MAC addresses they are today. By software convention, the 16 bits after the destination and source address fields were a packet type field, but, as the paper says, "different protocols use disjoint sets of packet types", so these were packet types within a given protocol, rather than the packet type in current Ethernet, which specifies the protocol being used.
Metcalfe left Xerox in 1979 to promote the use of personal computers and local
area networks (LANs), forming 3Com. He convinced DEC, Intel, and Xerox to
work together to promote Ethernet as a standard, the so-called "DIX" standard,
for "Digital/Intel/Xerox"; it specified the 10 megabits/second Ethernet, with 48-
bit destination and source addresses and a global 16-bit type field. The first
standard draft was published on September 30, 1980 within IEEE. It competed with two largely proprietary systems, Token Ring and Token Bus. Finalization of the Ethernet CSMA/CD standard was delayed by the difficult decision processes in the "open" IEEE and by the competing Token Ring proposal, strongly supported by IBM; support for CSMA/CD in other standardization bodies (ECMA, IEC and ISO) was therefore instrumental to its success. Proprietary systems soon found themselves buried under a tidal wave of Ethernet products. In the process, 3Com became a major company. 3Com built the first 10 Mbit/s Ethernet adapter (1983), followed quickly by Digital Equipment's Unibus to Ethernet adapter.
Standardization
In February 1980, IEEE started a project IEEE 802 for the standardization of
Local Area Networks (LAN).
The "DIX-group" with Gary Robinson (DEC), Phil Arst (Intel) and Bob Printis
(Xerox) submitted the so-called "Blue Book" CSMA/CD specification as a candidate for the LAN specification. Since IEEE membership is open to all
professionals including students, the group received countless comments on this
brand-new technology.
In the Ethernet camp, the delay in standardization put at risk the market introduction of the Xerox Star computing system and 3Com's Ethernet LAN products. With such business
implications in mind, David Liddle (GM Xerox Office Systems) and Bob
Metcalfe (3Com) strongly supported a proposal of Fritz Röscheisen (Siemens
Private Networks) for an alliance in the emerging office communication market,
including Siemens' support for the international standardization of Ethernet
(April 10, 1981). Ingrid Fromm, Siemens' representative to IEEE 802, quickly
achieved broader support for Ethernet beyond IEEE by the establishment of a
competing Task Group "Local Networks" within the European standards body
ECMA TC24. As early as March 1982 ECMA TC24 with its corporate
members reached agreement on a standard for CSMA/CD based on the IEEE
802 draft. The speedy action taken by ECMA decisively contributed to the
conciliation of opinions within IEEE and approval of IEEE 802.3 CSMA/CD by
the end of 1982.
General description
A 1990s network interface card. This combination card supports both coaxial-based 10BASE2 (BNC connector, left) and twisted-pair-based 10BASE-T (8P8C modular connector, often called RJ45, right).
From this early and comparatively simple concept, Ethernet evolved into the
complex networking technology that today underlies most LANs. The coaxial
cable was replaced with point-to-point links connected by Ethernet hubs and/or
switches to reduce installation costs, increase reliability, and enable point-to-
point management and troubleshooting. StarLAN was the first step in the
evolution of Ethernet from a coaxial cable bus to a hub-managed, twisted-pair
network. The advent of twisted-pair wiring dramatically lowered installation
costs relative to competing technologies, including the older Ethernet
technologies.
Above the physical layer, Ethernet stations communicate by sending each other
data packets, blocks of data that are individually sent and delivered. As with
other IEEE 802 LANs, each Ethernet station is given a single 48-bit MAC
address, which is used to specify both the destination and the source of each
data packet. Network interface cards (NICs) or chips normally do not accept
packets addressed to other Ethernet stations. Adapters generally come
programmed with a globally unique address, but this can be overridden, either
to avoid an address change when an adapter is replaced, or to use locally
administered addresses.
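The two special-purpose bits of the 48-bit MAC address format can be illustrated with a short sketch. The helper name `mac_flags` is ours, not from any library; the facts it encodes are standard: the least significant bit of the first octet is the individual/group (multicast) bit, and the next bit marks locally administered addresses.

```python
def mac_flags(mac: str) -> dict:
    """Inspect the first octet of a 48-bit MAC address.

    Bit 0 (least significant) of the first octet is the I/G bit:
    0 = unicast, 1 = multicast. Bit 1 is the U/L bit:
    0 = globally unique (burned-in), 1 = locally administered.
    """
    first_octet = int(mac.split(":")[0], 16)
    return {
        "multicast": bool(first_octet & 0x01),
        "locally_administered": bool(first_octet & 0x02),
    }

print(mac_flags("00:1A:2B:3C:4D:5E"))  # globally unique unicast address
print(mac_flags("02:00:00:00:00:01"))  # locally administered unicast address
```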
Despite the significant changes in Ethernet from a thick coaxial cable bus
running at 10 Mbit/s to point-to-point links running at 1 Gbit/s and beyond, all
generations of Ethernet (excluding early experimental versions) share the same
frame formats (and hence the same interface for higher layers), and can be
readily interconnected.
Dealing with multiple clients
CSMA/CD shared medium Ethernet
Ethernet originally used a shared coaxial cable (the shared medium) winding around a building or campus to every attached machine. A scheme known as carrier sense multiple access with collision detection (CSMA/CD) governed the
way the computers shared the channel. This scheme was simpler than the
competing token ring or token bus technologies. When a computer wanted to
send some information, it used the following algorithm:
Main procedure
This can be likened to what happens at a dinner party, where all the guests talk
to each other through a common medium (the air). Before speaking, each guest
politely waits for the current speaker to finish. If two guests start speaking at the
same time, both stop and wait for short, random periods of time (in Ethernet,
this time is generally measured in microseconds). The hope is that by each
choosing a random period of time, both guests will not choose the same time to
try to speak again, thus avoiding another collision. Exponentially increasing
back-off times (determined using the truncated binary exponential backoff
algorithm) are used when there is more than one failed attempt to transmit.
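The main procedure and the truncated binary exponential backoff described above can be sketched roughly as follows. This is a simulation-style outline, not real driver code: the `medium` object and its methods are hypothetical stand-ins for the physical layer, while the constants are those of classic 10 Mbit/s Ethernet.

```python
import random

SLOT_TIME_US = 51.2   # slot time of 10 Mbit/s Ethernet (512 bit times)
MAX_ATTEMPTS = 16     # transmission is abandoned after 16 attempts
BACKOFF_LIMIT = 10    # the exponent is capped ("truncated") at 10

def backoff_slots(attempt: int) -> int:
    """Pick a random wait, in slot times, after failed attempt number n.

    After the n-th collision, a station waits a random number of slots
    drawn uniformly from 0 .. 2**min(n, 10) - 1.
    """
    k = min(attempt, BACKOFF_LIMIT)
    return random.randrange(2 ** k)

def transmit(frame, medium) -> bool:
    """Sketch of the CSMA/CD main procedure (medium is hypothetical)."""
    for attempt in range(1, MAX_ATTEMPTS + 1):
        while medium.carrier_sensed():       # 1. wait until the wire is idle
            pass
        medium.start_sending(frame)          # 2. begin transmission
        if not medium.collision_detected():  # 3. listen while sending
            return True                      #    success
        medium.send_jam_signal()             # 4. enforce the collision
        wait_us = backoff_slots(attempt) * SLOT_TIME_US
        medium.wait_microseconds(wait_us)    # 5. back off, then retry
    return False                             # give up after 16 attempts
```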
Since all communications happen on the same wire, any information sent by
one computer is received by all, even if that information is intended for just one
destination. The network interface card interrupts the CPU only when
applicable packets are received: the card ignores information not addressed to it
unless it is put into "promiscuous mode". This "one speaks, all listen" property
is a security weakness of shared-medium Ethernet, since a node on an Ethernet
network can eavesdrop on all traffic on the wire if it so chooses. Use of a single
cable also means that the bandwidth is shared, so that network traffic can slow
to a crawl when, for example, the network and nodes restart after a power
failure.
Repeaters and hubs
For signal degradation and timing reasons, coaxial Ethernet segments had a restricted size which depended on the medium used. For example, 10BASE5
coax cables had a maximum length of 500 meters (1,640 ft). Also, as was the
case with most other high-speed buses, Ethernet segments had to be terminated
with a resistor at each end. For coaxial-cable-based Ethernet, each end of the
cable had a 50 ohm (Ω) resistor attached. Typically this resistor was built into a
male BNC or N connector and attached to the last device on the bus, or, if
vampire taps were in use, to the end of the cable just past the last device. If
termination was not done, or if there was a break in the cable, the AC signal on
the bus was reflected, rather than dissipated, when it reached the end. This
reflected signal was indistinguishable from a collision, and so no
communication would be able to take place.
Despite the physical star topology, hubbed Ethernet networks still use half-
duplex and CSMA/CD, with only minimal activity by the hub, primarily the
Collision Enforcement signal, in dealing with packet collisions. Every packet is
sent to every port on the hub, so bandwidth and security problems aren't
addressed. The total throughput of the hub is limited to that of a single link and
all links must operate at the same speed.
Collisions reduce throughput by their very nature. In the worst case, when there
are lots of hosts with long cables that attempt to transmit many short frames,
excessive collisions can reduce throughput dramatically. However, a Xerox
report in 1980 summarized the results of having 20 fast nodes attempting to
transmit packets of various sizes as quickly as possible on the same Ethernet
segment.[4] The results showed that, even for the smallest Ethernet frames (64 bytes), 90% throughput on the LAN was the norm. This is in comparison with token
passing LANs (token ring, token bus), all of which suffer throughput
degradation as each new node comes into the LAN, due to token waits.
Bridging and switching
While repeaters could isolate some aspects of Ethernet segments, such as cable breakages, they still forwarded all traffic to all Ethernet devices. This created practical limits on how many machines could communicate on an Ethernet
network. Also, because the entire network was one collision domain and all hosts had to be able to detect collisions anywhere on the network, the number of repeaters between the farthest nodes was limited. Finally, segments joined by repeaters all had to operate at the same speed, making phased-in upgrades impossible.
Early bridges examined each packet one by one using software on a CPU, and
some of them were significantly slower than hubs (multi-port repeaters) at
forwarding traffic, especially when handling many ports at the same time. This
was in part due to the fact that the entire Ethernet packet would be read into a
buffer, the destination address compared with an internal table of known MAC
addresses and a decision made as to whether to drop the packet or forward it to
another or all segments.
Since packets are typically only delivered to the port they are intended for,
traffic on a switched Ethernet is slightly less public than on shared-medium
Ethernet. Despite this, switched Ethernet should still be regarded as an insecure
network technology, because it is easy to subvert switched Ethernet systems by
means such as ARP spoofing and MAC flooding. The bandwidth advantages,
the slightly better isolation of devices from each other, the ability to easily mix
different speeds of devices and the elimination of the chaining limits inherent in
non-switched Ethernet have made switched Ethernet the dominant network
technology.
When a twisted pair or fiber link segment is used and neither end is connected
to a hub, full-duplex Ethernet becomes possible over that segment. In full
duplex mode both devices can transmit and receive to/from each other at the
same time, and there is no collision domain. This doubles the aggregate
bandwidth of the link and is sometimes advertised as double the link speed (e.g.
200 Mbit/s) to account for this. However, this is misleading as performance will
only double if traffic patterns are symmetrical (which in reality they rarely are).
The elimination of the collision domain also means that all the link's bandwidth
can be used and that segment length is not limited by the need for correct
collision detection (this is most significant with some of the fiber variants of
Ethernet).
Dual speed hubs
In the early days of Fast Ethernet, Ethernet switches were relatively expensive devices. Hubs suffered from the problem that if any 10BASE-T devices were connected, the whole network had to run at 10 Mbit/s. Therefore, a compromise between a hub and a switch was developed, known as a dual
speed hub. These devices consisted of an internal two-port switch, dividing the
10BASE-T (10 Mbit/s) and 100BASE-T (100 Mbit/s) segments. The device
would typically consist of more than two physical ports. When a network device
becomes active on any of the physical ports, the device attaches it to either the
10BASE-T segment or the 100BASE-T segment, as appropriate. This avoided the need for an all-or-nothing migration from 10BASE-T to 100BASE-T
networks. These devices are hubs because the traffic between devices connected
at the same speed is not switched.
More advanced networks
Simple switched Ethernet networks still suffer from a number of problems:
They suffer from single points of failure. If any link fails, some devices will be unable to communicate with other devices, and if the failed link is in a central location, many users can be cut off from the resources they require.
It is possible to trick switches or hosts into sending data to a machine even if it is not intended for it (see switch vulnerabilities).
Large amounts of broadcast traffic, whether malicious, accidental, or simply a side effect of network size, can flood slower links and/or systems.
o It is possible for any host to flood the network with broadcast traffic, forming a denial-of-service attack against any hosts that run at the same or lower speed as the attacking device.
o As the network grows, normal broadcast traffic takes up an ever greater amount of bandwidth.
o If switches are not multicast aware, multicast traffic will end up treated like broadcast traffic, due to being directed at a MAC address with no associated port.
o If switches discover more MAC addresses than they can store (either through network size or through an attack), some addresses must inevitably be dropped, and traffic to those addresses is then treated the same way as traffic to unknown addresses, that is, essentially the same as broadcast traffic (this issue is known as failopen).
They suffer from bandwidth choke points where a lot of traffic is forced down a single link.
Autonegotiation and duplex mismatch
The autonegotiation standard contained a mechanism for detecting the speed, but not the duplex setting, of an Ethernet peer that did not use autonegotiation. When the remote device does not negotiate, an autonegotiating device defaults to half duplex, as the remote peer is assumed to be a hub (which always has autonegotiation disabled and supports only half-duplex mode). If the remote device is operating in half-duplex mode, this works; but if it is operating in full-duplex mode, this creates a duplex mismatch. When two connected interfaces are set to different duplex modes, the effect of the mismatch is a network that works, but is much slower than its nominal speed and generates far more collisions. The primary rule for avoiding this is never to set one end of a connection to a forced full-duplex setting and the other end to autonegotiation. Because of the resulting wait times, a link with a duplex mismatch is not completely broken but is extremely slow. This behaviour may be tolerable on a low-traffic link, but under heavy load it becomes dramatic and can bring traffic to a complete stop.
Physical layer
The first Ethernet networks, 10BASE5, used thick yellow cable with vampire
taps as a shared medium (using CSMA/CD). Later, 10BASE2 Ethernet used
thinner coaxial cable (with BNC connectors) as the shared CSMA/CD medium.
The later StarLAN 1BASE5 and 10BASE-T used twisted pair connected to
Ethernet hubs with 8P8C modular connectors (not to be confused with FCC's
RJ45).
Currently Ethernet has many varieties that vary both in speed and physical
medium used. Perhaps the most common forms used are 10BASE-T, 100BASE-
TX, and 1000BASE-T. All three utilize twisted pair cables and 8P8C modular
connectors (often called RJ45). They run at 10 Mbit/s, 100 Mbit/s, and 1 Gbit/s,
respectively. However each version has become steadily more selective about
the cable it runs on and some installers have avoided 1000BASE-T for
everything except short connections to servers.
Ethernet frame types and the EtherType field
A data packet on the wire is called a frame. A frame viewed on the actual physical wire would show the Preamble and Start Frame Delimiter in addition to the other data. These are required by all physical hardware. They are not displayed by packet-sniffing software, because these bits are removed by the Ethernet adapter before being passed on to the host (in contrast, it is often the device driver that removes the CRC32 (FCS) from the packets seen by the user).
The table below shows the complete Ethernet frame, as transmitted, for the MTU of 1500 bytes (some implementations of Gigabit Ethernet and higher speeds support larger jumbo frames).

Field                      Size (octets)
Preamble                   7
Start Frame Delimiter      1
MAC destination address    6
MAC source address         6
EtherType / Length         2
Payload                    46-1500
CRC32 (FCS)                4
Interframe gap             12

Note that the bit patterns in the preamble and start of frame delimiter are written as bit strings, with the first bit transmitted on the left (not as byte values, which in Ethernet are transmitted least-significant bit first). This notation matches the one used in the IEEE 802.3 standard. One octet is eight bits of data (i.e., a byte on most modern computers). After a frame has been sent, transmitters are required to transmit 12 octets of idle characters before transmitting the next frame; at 10 Mbit/s this takes 9600 ns, at 100 Mbit/s 960 ns, and at 1000 Mbit/s 96 ns.
From this table, we may calculate the maximum net bit rate of 10 Mbit/s
Ethernet to be approximately 9.75 Mbit/s, assuming a continuous stream of
maximum-sized packets (containing 1500 payload bytes each):
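This calculation can be checked by summing the per-frame overhead (preamble, SFD, addresses, type field, FCS and the 12-octet interframe gap) against the 1500-byte payload; a quick sketch:

```python
# On-the-wire cost per maximum-sized Ethernet frame, in octets.
PREAMBLE, SFD, DST, SRC, TYPE, FCS, GAP = 7, 1, 6, 6, 2, 4, 12
PAYLOAD = 1500

wire_octets = PREAMBLE + SFD + DST + SRC + TYPE + PAYLOAD + FCS + GAP
efficiency = PAYLOAD / wire_octets   # fraction of wire time carrying payload

print(wire_octets)                # 1538 octets on the wire per frame
print(round(10 * efficiency, 2))  # 9.75 (net Mbit/s on 10 Mbit/s Ethernet)
```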
10/100M transceiver chips (MII PHY) work with four bits (one nibble) at a
time. Therefore the preamble will be 7 instances of 0101 + 0101, and the Start
Frame Delimiter will be 0101 + 1101. 8-bit values are sent low 4-bit and then
high 4-bit. 1000M transceiver chips (GMII) work with 8 bits at a time, and 10
Gbit/s (XGMII) PHY works with 32 bits at a time.
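The nibble ordering can be illustrated with a small helper (the function name is ours). The preamble octet has the byte value 0x55 and the Start Frame Delimiter 0xD5; splitting each into low nibble first reproduces the patterns given above.

```python
def mii_nibbles(octet: int) -> tuple:
    """Split an octet into the two 4-bit nibbles in the order an MII PHY
    sends them: low nibble first, then high nibble."""
    return format(octet & 0x0F, "04b"), format(octet >> 4, "04b")

print(mii_nibbles(0x55))  # preamble octet -> ('0101', '0101')
print(mii_nibbles(0xD5))  # SFD octet      -> ('0101', '1101')
```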
In addition, all four Ethernet frame types may optionally contain an IEEE 802.1Q tag to identify which VLAN a frame belongs to and its IEEE 802.1p priority (quality of service). This encapsulation is defined in the IEEE 802.3ac specification and increases the maximum frame size by 4 bytes to 1522 bytes.
The different frame types have different formats and MTU values, but can
coexist on the same physical medium.
The most common Ethernet Frame format, type II
Novell's "raw" 802.3 frame format was based on early IEEE 802.3 work. Novell
used this as a starting point to create the first implementation of its own IPX
Network Protocol over Ethernet. They did not use any LLC header but started
the IPX packet directly after the length field. This does not conform to the IEEE
802.3 standard, but since IPX always has FF as its first two bytes (while in
IEEE 802.2 LLC that pattern is theoretically possible but extremely unlikely), in
practice this mostly coexists on the wire with other Ethernet implementations,
with the notable exception of some early forms of DECnet which got confused
by this.
Novell NetWare used this frame type by default until the mid-nineties, and since NetWare was very widespread then, while IP was not, at some point most of the world's Ethernet traffic ran over "raw" 802.3 carrying IPX. Since NetWare 4.10, NetWare defaults to IEEE 802.2 with LLC (NetWare Frame Type Ethernet_802.2) when using IPX. (See "Ethernet Framing" in
References for details.)
The 802.2 variants of Ethernet are not in widespread use on common networks today, with the exception of large corporate NetWare installations that have not yet migrated to NetWare over IP. In the past, many corporate networks
supported 802.2 Ethernet to support transparent translating bridges between
Ethernet and IEEE 802.5 Token Ring or FDDI networks. The most common
framing type used today is Ethernet Version 2, as it is used by most Internet
Protocol-based networks, with its EtherType set to 0x0800 for IPv4 and
0x86DD for IPv6.
The IEEE 802.1Q tag, if present, is placed between the Source Address and the
EtherType or Length fields. The first two bytes of the tag are the Tag Protocol
Identifier (TPID) value of 0x8100. This is located in the same place as the
EtherType/Length field in untagged frames, so an EtherType value of 0x8100
means the frame is tagged, and the true EtherType/Length is located after the Q-
tag. The TPID is followed by two bytes containing the Tag Control Information
(TCI) (the IEEE 802.1p priority (quality of service) and VLAN id). The Q-tag is
followed by the rest of the frame, using one of the types described above.
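Unpacking such a tag follows directly from the field layout above; a minimal sketch (the function name `parse_tag` is ours, and the example frame bytes are made up for illustration):

```python
def parse_tag(frame: bytes):
    """Check octets 12-13 of an Ethernet frame for the 802.1Q TPID
    (0x8100) and, if present, unpack the Tag Control Information."""
    tpid = int.from_bytes(frame[12:14], "big")
    if tpid != 0x8100:
        return None                # untagged: this field is EtherType/Length
    tci = int.from_bytes(frame[14:16], "big")
    return {
        "priority": tci >> 13,     # IEEE 802.1p priority (3 bits)
        "cfi": (tci >> 12) & 1,    # CFI bit (later renamed DEI)
        "vlan_id": tci & 0x0FFF,   # VLAN identifier (12 bits)
        "ethertype": int.from_bytes(frame[16:18], "big"),
    }

# Made-up tagged header: zero addresses, TPID 0x8100, TCI, EtherType 0x0800.
frame = bytes(12) + bytes.fromhex("8100") + bytes.fromhex("A064") + bytes.fromhex("0800")
print(parse_tag(frame))  # priority 5, VLAN 100, EtherType 0x0800 (IPv4)
```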
Runt frames
A runt frame is an Ethernet frame that is less than the IEEE 802.3 minimum
length of 64 bytes. Possible causes include collisions, underruns, and faulty network cards or software.[11][12]
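The check itself is just a length comparison against the 64-byte minimum (which excludes the preamble and SFD); a trivial sketch with a name of our own choosing:

```python
MIN_FRAME = 64  # minimum frame size: dst + src + type + payload + FCS

def is_runt(frame: bytes) -> bool:
    """Return True if the frame is shorter than the IEEE 802.3 minimum,
    i.e. a runt, typically the residue of a collision or an underrun."""
    return len(frame) < MIN_FRAME

print(is_runt(bytes(60)))  # True: too short, likely a collision fragment
print(is_runt(bytes(64)))  # False: minimum legal frame
```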
Varieties of Ethernet
Early varieties
10BASE5: The original standard, which used a single coaxial cable into which a connection was tapped by literally drilling into the cable to contact the core and screen. Largely obsolete, though due to its widespread deployment in the early days, some systems may still be in use.
10BROAD36: Obsolete. An early standard supporting Ethernet over
longer distances. It utilized broadband modulation techniques, similar to
those employed in cable modem systems, and operated over coaxial
cable.
1BASE5: An early attempt to standardize a low-cost LAN solution, it
operates at 1 Mbit/s and was a commercial failure.
10Mbit/s Ethernet
Fast Ethernet
100BASE-T: A term for any of the three standards for 100 Mbit/s Ethernet over twisted-pair cable: 100BASE-TX, 100BASE-T4 and 100BASE-T2.
o 100BASE-TX: Uses two pairs, but requires Category 5 cable.
Similar star-shaped configuration to 10BASE-T. 100 Mbit/s.
o 100BASE-T4: 100 Mbit/s Ethernet over Category 3 cabling (as
used for 10BASE-T installations). Uses all four pairs in the cable.
Now obsolete, as Category 5 cabling is the norm. Limited to half-
duplex.
o 100BASE-T2: No products exist. 100 Mbit/s Ethernet over
Category 3 cabling. Supports full-duplex, and uses only two pairs.
It is functionally equivalent to 100BASE-TX, but supports old
cable.
100BASE-FX: 100 Mbit/s Ethernet over fiber.
Gigabit Ethernet
10-gigabit Ethernet
100-gigabit Ethernet
Related standards
Networking standards that are not part of the IEEE 802.3 Ethernet
standard, but support the Ethernet frame format, and are capable of
interoperating with it.
o LattisNet—A SynOptics pre-standard twisted-pair 10 Mbit/s
variant.
o 100BaseVG—An early contender for 100 Mbit/s Ethernet. It runs
over Category 3 cabling. Uses four pairs. Commercial failure.
o TIA 100BASE-SX—Promoted by the Telecommunications
Industry Association. 100BASE-SX is an alternative
implementation of 100 Mbit/s Ethernet over fiber; it is
incompatible with the official 100BASE-FX standard. Its main
feature is interoperability with 10BASE-FL, supporting
autonegotiation between 10 Mbit/s and 100 Mbit/s operation – a
feature lacking in the official standards due to the use of differing
LED wavelengths. It is targeted at the installed base of 10 Mbit/s
fiber network installations.
o TIA 1000BASE-TX—Promoted by the Telecommunications
Industry Association, it was a commercial failure, and no products
exist. 1000BASE-TX uses a simpler protocol than the official
1000BASE-T standard so the electronics can be cheaper, but
requires Category 6 cabling.
o G.hn—A standard developed by ITU-T and promoted by
HomeGrid Forum for high-speed (up to 1 Gbit/s) local area
networks over existing home wiring (coaxial cables, power lines
and phone lines). G.hn defines an Application Protocol
Convergence (APC) layer that accepts Ethernet frames and
encapsulates them into G.hn MSDUs.
Networking standards that do not use the Ethernet frame format but can
still be connected to Ethernet using MAC-based bridging.
o 802.11—A standard for wireless local area networks (LANs), often
paired with an Ethernet backbone.
o 802.16—A standard for wireless metropolitan area networks
(MANs), including WiMAX
10BaseS—Ethernet over VDSL
Long Reach Ethernet
Avionics Full-Duplex Switched Ethernet
TTEthernet — Time-Triggered Ethernet for design of mixed-criticality
embedded systems
Metro Ethernet
It has been observed that Ethernet traffic has self-similar properties, with
important consequences for traffic engineering.[citation needed]
See also
ALOHAnet
Broadband Internet access
Chipcom
List of device bandwidths
Chaosnet
Ethernet Automatic Protection Switching
Ethernet crossover cable
Ethernet Way versus IEEE Way
Fully switched network
Green Ethernet
MII and PHY
Network isolator
Power line communication
Power over Ethernet
Spanning tree protocol
Virtual LAN
Wake-on-LAN
Synchronous Ethernet
References