
Web browser

A web browser is a software application for retrieving, presenting, and traversing information resources on the World Wide Web. An information resource is identified by a Uniform Resource Identifier (URI) and may be a web page, image, video, or other piece of content. Hyperlinks present in resources enable users to easily navigate their browsers to related resources. A web browser can also be defined as application software designed to enable users to access, retrieve and view documents and other resources on the Internet. Although browsers are primarily intended to access the World Wide Web, they can also be used to access information provided by web servers in private networks or files in file systems. The major web browsers are Firefox, Google Chrome, Internet Explorer, Opera, and Safari.

The first web browser was invented in 1990 by Tim Berners-Lee. It was called WorldWideWeb (no spaces) and was later renamed Nexus.

In 1993, browser software was further innovated by Marc Andreessen with the release of Mosaic (later Netscape), "the world's first popular browser", which made the World Wide Web system easy to use and more accessible to the average person. Andreessen's browser sparked the Internet boom of the 1990s. The introduction of the NCSA Mosaic web browser in 1993, one of the first graphical web browsers, led to an explosion in web use. Marc Andreessen, the leader of the Mosaic team at NCSA, soon started his own company, named Netscape, and released the Mosaic-influenced Netscape Navigator in 1994, which quickly became the world's most popular browser, accounting for 90% of all web use at its peak (see usage share of web browsers). Microsoft responded with its Internet Explorer in 1995 (also heavily influenced by Mosaic), initiating the industry's first browser war. Bundled with Windows, Internet Explorer gained dominance in the web browser market; its usage share peaked at over 95% by 2002.

Function
The primary purpose of a web browser is to bring information resources to the user. This process begins when the user inputs a Uniform Resource Locator (URL), for example http://en.wikipedia.org/, into the browser. The prefix of the URL, the Uniform Resource Identifier or URI, determines how the URL will be interpreted. The most commonly used kind of URI starts with http: and identifies a resource to be retrieved over the Hypertext Transfer Protocol (HTTP). Many browsers also support a variety of other prefixes, such as https: for HTTPS, ftp: for the File Transfer Protocol, and file: for local files. Prefixes that the web browser cannot directly handle are often handed off to another application entirely. For example, mailto: URIs are usually passed to the user's default e-mail application, and news: URIs are passed to the user's default newsgroup reader.

In the case of http, https, file, and others, once the resource has been retrieved the web browser will display it. HTML is passed to the browser's layout engine to be transformed from markup into an interactive document. Aside from HTML, web browsers can generally display any kind of content that can be part of a web page. Most browsers can display images, audio, video, and XML files, and often have plugins to support Flash applications and Java applets. Upon encountering a file of an unsupported type, or a file that is set up to be downloaded rather than displayed, the browser prompts the user to save the file to disk.

Information resources may contain hyperlinks to other information resources. Each link contains the URI of a resource to go to. When a link is clicked, the browser navigates to the resource indicated by the link's target URI, and the process of bringing content to the user begins again.
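The scheme-based dispatch described above can be sketched in a few lines of Python using the standard library's urllib.parse. This is an illustration only: the function name and the scheme tables are invented for the example, not taken from any real browser.

```python
from urllib.parse import urlparse

# Schemes a browser typically retrieves and displays itself, versus
# schemes it hands off to another application (illustrative lists).
BROWSER_SCHEMES = {"http", "https", "ftp", "file"}
HANDOFF_SCHEMES = {"mailto": "e-mail client", "news": "newsgroup reader"}

def classify_uri(uri):
    """Decide, browser-style, how a URI should be handled based on its prefix."""
    scheme = urlparse(uri).scheme
    if scheme in BROWSER_SCHEMES:
        return "retrieve and display in browser"
    if scheme in HANDOFF_SCHEMES:
        return f"hand off to {HANDOFF_SCHEMES[scheme]}"
    return "unknown scheme"

print(classify_uri("http://en.wikipedia.org/"))   # retrieve and display in browser
print(classify_uri("mailto:user@example.com"))    # hand off to e-mail client
```

The point is simply that the URI prefix, not the rest of the address, determines which application handles the resource.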

Features
Available web browsers range in features from minimal, text-based user interfaces with bare-bones support for HTML to rich user interfaces supporting a wide variety of file formats and protocols. Browsers which include additional components to support e-mail, Usenet news, and Internet Relay Chat (IRC) are sometimes referred to as "Internet suites" rather than merely "web browsers". All major web browsers allow the user to open multiple information resources at the same time, either in different browser windows or in different tabs of the same window. Major browsers also include pop-up blockers to prevent unwanted windows from "popping up" without the user's consent. Most web browsers can display a list of web pages that the user has bookmarked so that the user can quickly return to them. Bookmarks are also called "Favorites" in Internet Explorer. In addition, all major web browsers have some form of built-in web feed aggregator. In Firefox, web feeds are formatted as "live bookmarks" and behave like a folder of bookmarks corresponding to recent entries in the feed.[18] In Opera, a more traditional feed reader is included which stores and displays the contents of the feed. Furthermore, most browsers can be extended via plug-ins, downloadable components that provide additional features.

User interface
Most major web browsers have these user interface elements in common:

- Back and forward buttons to go back to the previous resource and forward again.
- A refresh or reload button to reload the current resource.
- A stop button to cancel loading the resource. In some browsers, the stop button is merged with the reload button.
- A home button to return to the user's home page.
- An address bar to input the Uniform Resource Identifier (URI) of the desired resource and display it.
- A search bar to input terms into a search engine. In some browsers, the search bar is merged with the address bar.
- A status bar to display progress in loading the resource and also the URI of links when the cursor hovers over them, and page zooming capability.

Privacy and security


Most browsers support HTTP Secure and offer quick and easy ways to delete the web cache, cookies, and browsing history. For a comparison of the current security vulnerabilities of browsers, see comparison of web browsers.

Email
Electronic mail, commonly known as email or e-mail, is a method of exchanging digital messages from an author to one or more recipients. Modern email operates across the Internet or other computer networks. Some early email systems required that the author and the recipient both be online at the same time, in common with instant messaging. Today's email systems are based on a store-and-forward model. Email servers accept, forward, deliver and store messages. Neither the users nor their computers are required to be online simultaneously; they need connect only briefly, typically to an email server, for as long as it takes to send or receive messages. An email message consists of three components: the message envelope, the message header, and the message body. The message header contains control information, including, minimally, an originator's email address and one or more recipient addresses. Usually descriptive information is also added, such as a subject header field and a message submission date/time stamp.

Originally a text-only (7-bit ASCII and others) communications medium, email was extended to carry multi-media content attachments, a process standardized in RFC 2045 through 2049. Collectively, these RFCs have come to be called Multipurpose Internet Mail Extensions (MIME). Electronic mail predates the inception of the Internet, and was in fact a crucial tool in creating it,[2] but the history of modern, global Internet email services reaches back to the early ARPANET. Standards for encoding email messages were proposed as early as 1973 (RFC 561). Conversion from ARPANET to the Internet in the early 1980s produced the core of the current services. An email sent in the early 1970s looks quite similar to a basic text message sent on the Internet today.

Network-based email was initially exchanged on the ARPANET in extensions to the File Transfer Protocol (FTP), but is now carried by the Simple Mail Transfer Protocol (SMTP), first published as Internet standard 10 (RFC 821) in 1982. In the process of transporting email messages between systems, SMTP communicates delivery parameters using a message envelope separate from the message (header and body) itself. Internet email messages consist of two major sections:

Header: Structured into fields such as From, To, CC, Subject, Date, and other information about the email.

Body: The basic content, as unstructured text, sometimes containing a signature block at the end. This is exactly the same as the body of a regular letter. The header is separated from the body by a blank line.
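The header/body structure described above can be seen directly with Python's standard email library. The addresses and subject below are made-up placeholders; the example only shows how the structured header fields and the unstructured body are joined by a blank line.

```python
from email.message import EmailMessage

# Build a minimal message: structured header fields plus a text body.
msg = EmailMessage()
msg["From"] = "author@example.com"       # originator address (placeholder)
msg["To"] = "recipient@example.com"      # one recipient address (placeholder)
msg["Subject"] = "Store-and-forward example"
msg.set_content("Hello,\n\nThis is the body.")

# In the serialized message, the header is separated from the body
# by the first blank line, exactly as described above.
raw = msg.as_string()
header, _, body = raw.partition("\n\n")
print(header)
```

Running this prints the From, To, Subject and MIME-related fields; everything after the first blank line is the body.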

Web search engine


A web search engine is designed to search for information on the World Wide Web and FTP servers. The search results are generally presented in a list, often referred to as search engine results pages (SERPs). The information may consist of web pages, images, information and other types of files. Some search engines also mine data available in databases or open directories. Unlike web directories, which are maintained only by human editors, search engines also maintain real-time information by running an algorithm on a web crawler.

During the early development of the web, there was a list of webservers edited by Tim Berners-Lee and hosted on the CERN webserver. One historical snapshot from 1992 remains.[1] As more webservers went online the central list could not keep up. On the NCSA site new servers were announced under the title "What's New!"[2] The very first tool used for searching on the Internet was Archie.[3] The name stands for "archive" without the "v". It was created in 1990 by Alan Emtage, Bill Heelan and J. Peter Deutsch, computer science students at McGill University in Montreal. The program downloaded the directory listings of all the files located on public anonymous FTP (File Transfer Protocol) sites, creating a searchable database of file names; however, Archie did not index the contents of these sites, since the amount of data was so limited it could be readily searched manually.

How web search engines work


A search engine operates in the following order:

1. Web crawling
2. Indexing
3. Searching

Web search engines work by storing information about many web pages, which they retrieve from the HTML itself. These pages are retrieved by a Web crawler (sometimes also known as a spider), an automated Web browser which follows every link on the site. Exclusions can be made by the use of robots.txt. The contents of each page are then analyzed to determine how it should be indexed (for example, words are extracted from the titles, headings, or special fields called meta tags). Data about web pages are stored in an index database for use in later queries. A query can be a single word. The purpose of an index is to allow information to be found as quickly as possible.

Some search engines, such as Google, store all or part of the source page (referred to as a cache) as well as information about the web pages, whereas others, such as AltaVista, store every word of every page they find. This cached page always holds the actual search text since it is the one that was actually indexed, so it can be very useful when the content of the current page has been updated and the search terms are no longer in it. This problem might be considered to be a mild form of linkrot, and Google's handling of it increases usability by satisfying user expectations that the search terms will be on the returned webpage. This satisfies the principle of least astonishment, since the user normally expects the search terms to be on the returned pages. Increased search relevance makes these cached pages very useful, even beyond the fact that they may contain data that may no longer be available elsewhere.

When a user enters a query into a search engine (typically by using key words), the engine examines its index and provides a listing of best-matching web pages according to its criteria, usually with a short summary containing the document's title and sometimes parts of the text. The index is built from the information stored with the data and the method by which the information is indexed.
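The robots.txt exclusion mechanism mentioned above can be exercised with Python's standard urllib.robotparser. The rules and URLs below are hypothetical; the point is that a well-behaved crawler consults them before fetching each page.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt a site might publish to exclude crawlers
# from part of its content.
robots_txt = """\
User-agent: *
Disallow: /private/
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# A polite spider checks each URL against the rules before crawling it.
print(rp.can_fetch("*", "http://example.com/index.html"))   # True
print(rp.can_fetch("*", "http://example.com/private/x"))    # False
```

Pages under /private/ are skipped; everything else remains crawlable.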
Unfortunately, there are currently no known public search engines that allow documents to be searched by date. Most search engines support the use of the boolean operators AND, OR and NOT to further specify the search query. Boolean operators are for literal searches that allow the user to refine and extend the terms of the search. The engine looks for the words or phrases exactly as entered. Some search engines provide an advanced feature called proximity search, which allows users to define the distance between keywords. There is also concept-based searching, where the research involves using statistical analysis on pages containing the words or phrases you search for. As well, natural language queries allow the user to type a question in the same form one would ask it to a human; Ask.com is an example of such a site.

The usefulness of a search engine depends on the relevance of the result set it gives back. While there may be millions of web pages that include a particular word or phrase, some pages may be more relevant, popular, or authoritative than others. Most search engines employ methods to rank the results to provide the "best" results first. How a search engine decides which pages are the best matches, and what order the results should be shown in, varies widely from one engine to another. The methods also change over time as Internet usage changes and new techniques evolve.

There are two main types of search engine that have evolved: one is a system of predefined and hierarchically ordered keywords that humans have programmed extensively. The other is a system that generates an "inverted index" by analyzing texts it locates. This second form relies much more heavily on the computer itself to do the bulk of the work. Most Web search engines are commercial ventures supported by advertising revenue and, as a result, some employ the practice of allowing advertisers to pay money to have their listings ranked higher in search results. Those search engines which do not accept money for their search engine results make money by running search-related ads alongside the regular search engine results. The search engines make money every time someone clicks on one of these ads.
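The inverted index mentioned above, and the boolean AND operator, can be sketched in a few lines. The page contents here are invented for illustration; a real engine would build its index from crawled HTML.

```python
# Toy corpus: page identifier -> page text (invented for the example).
pages = {
    "page1": "web search engines index pages",
    "page2": "search engines rank results",
    "page3": "browsers display pages",
}

# Inverted index: for each word, the set of pages containing it.
index = {}
for url, text in pages.items():
    for word in text.split():
        index.setdefault(word, set()).add(url)

def search_and(*terms):
    """Boolean AND query: intersect the posting sets of each term."""
    sets = [index.get(t, set()) for t in terms]
    return set.intersection(*sets) if sets else set()

print(sorted(search_and("search", "engines")))  # ['page1', 'page2']
print(sorted(search_and("search", "pages")))    # ['page1']
```

Because the index maps words to pages rather than pages to words, a query touches only the posting sets for its terms, which is what makes lookup fast regardless of corpus size.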

Internet
The Internet is a global system of interconnected computer networks that use the standard Internet protocol suite (TCP/IP) to serve billions of users worldwide. It is a network of networks that consists of millions of private, public, academic, business, and government networks, of local to global scope, that are linked by a broad array of electronic, wireless and optical networking technologies. The Internet carries an extensive range of information resources and services, such as the inter-linked hypertext documents of the World Wide Web (WWW) and the infrastructure to support email.

Most traditional communications media, including telephone, music, film, and television, are reshaped or redefined by the Internet, giving birth to new services such as Voice over Internet Protocol (VoIP) and Internet Protocol Television (IPTV). Newspaper, book and other print publishing are adapting to Web site technology, or are reshaped into blogging and web feeds. The Internet has enabled or accelerated new forms of human interaction through instant messaging, Internet forums, and social networking. Online shopping has boomed both for major retail outlets and small artisans and traders. Business-to-business and financial services on the Internet affect supply chains across entire industries.

The origins of the Internet reach back to research of the 1960s, commissioned by the United States government in collaboration with private commercial interests to build robust, fault-tolerant, and distributed computer networks. The funding of a new U.S. backbone by the National Science Foundation in the 1980s, as well as private funding for other commercial backbones, led to worldwide participation in the development of new networking technologies, and the merger of many networks. The commercialization of what was by the 1990s an international network resulted in its popularization and incorporation into virtually every aspect of modern human life.
As of 2011, more than 2.1 billion people, nearly a third of Earth's population, use the services of the Internet.[1] The Internet has no centralized governance in either technological implementation or policies for access and usage; each constituent network sets its own standards. Only the overreaching definitions of the two principal name spaces in the Internet, the Internet Protocol address space and the Domain Name System, are directed by a maintainer organization, the Internet Corporation for Assigned Names and Numbers (ICANN). The technical underpinning and standardization of the core protocols (IPv4 and IPv6) is an activity of the Internet Engineering Task Force (IETF), a non-profit organization of loosely affiliated international participants that anyone may associate with by contributing technical expertise.

Terminology
See also: Internet capitalization conventions. Internet is a short form of the technical term internetwork,[2] the result of interconnecting computer networks with special gateways or routers. The Internet is also often referred to as the Net. The term the Internet, when referring to the entire global system of IP networks, has been treated as a proper noun and written with an initial capital letter. In the media and popular culture, a trend has also developed to regard it as a generic term or common noun and thus write it as "the internet", without capitalization. Some guides specify that the word should be capitalized as a noun but not capitalized as an adjective.[3][4] The terms Internet and World Wide Web are often used in everyday speech without much distinction. However, the Internet and the World Wide Web are not one and the same. The Internet establishes a global data communications system between computers. In contrast, the Web is one of the services communicated via the Internet. It is a collection of interconnected documents and other resources, linked by hyperlinks and URLs.[5]

Advantages of the Internet


The Internet provides opportunities galore, and can be used for a variety of things. Some of the things that you can do via the Internet are:

- E-mail: E-mail is an online correspondence system. With e-mail you can send and receive instant electronic messages, which works like writing letters. Your messages are delivered instantly to people anywhere in the world, unlike traditional mail that takes a lot of time.
- Access Information: The Internet is a virtual treasure trove of information. Any kind of information on any topic under the sun is available on the Internet. The search engines on the Internet can help you to find data on any subject that you need.
- Shopping: Along with getting information on the Internet, you can also shop online. There are many online stores and sites that can be used to look for products as well as buy them using your credit card. You do not need to leave your house and can do all your shopping from the convenience of your home.
- Online Chat: There are many chat rooms on the web that can be accessed to meet new people, make new friends, as well as to stay in touch with old friends.
- Downloading Software: This is one of the most happening and fun things to do via the Internet. You can download innumerable games, music, videos, movies, and a host of other entertainment software from the Internet, most of which are free.

Disadvantages of the Internet


There are certain cons and dangers relating to the use of Internet that can be summarized as:

- Personal Information: If you use the Internet, your personal information such as your name, address, etc. can be accessed by other people. If you use a credit card to shop online, then your credit card information can also be stolen, which could be akin to giving someone a blank check.
- Pornography: This is a very serious issue concerning the Internet, especially when it comes to young children. There are thousands of pornographic sites on the Internet that can be easily found and can be a detriment to letting children use the Internet.
- Spamming: This refers to sending unsolicited e-mails in bulk, which serve no purpose and unnecessarily clog up the entire system.

If you come across any illegal activity on the Internet, such as child pornography or even spammers, then you should report these people and their activities so that they can be controlled and other people deterred from carrying them out. Child pornography can be reported to:

- Your Internet service provider
- Local police station
- Cyber Angels (program to report cyber crime)

Such illegal activities are frustrating for all Internet users, and so instead of just ignoring it, we should make an effort to try and stop these activities so that using the Internet can become that much safer. That said, the advantages of the Internet far outweigh the disadvantages, and millions of people each day benefit from using the Internet for work and for pleasure.

Intranet
An intranet is a computer network that uses Internet Protocol technology to securely share any part of an organization's information or network operating system within that organization. It is the connection of computer networks in a local area. The term is used in contrast to internet, a network between organizations, and instead refers to a network within an organization. Sometimes, the term refers only to the organization's internal website, but may be a more extensive part of the organization's information technology infrastructure. It may host multiple private websites and constitute an important component and focal point of internal communication and collaboration. Any of the well-known Internet protocols may be found in an intranet, such as HTTP (web services), SMTP (e-mail), and FTP (file transfer protocol). Internet technologies are often deployed to provide modern interfaces to legacy information systems hosting corporate data. An intranet can be understood as a private analog of the Internet, or as a private extension of the Internet confined to an organization. The first intranet websites and home pages began to appear in organizations in 1996-1997. Although not officially noted, the term intranet first became commonplace among early adopters, such as universities and technology corporations, in 1992.

Intranets can also be contrasted with extranets. While intranets are generally restricted to employees of the organization, extranets may also be accessed by customers, suppliers, or other approved parties.[1] Extranets extend a private network onto the Internet with special provisions for authentication, authorization and accounting (AAA protocol). In many organizations, intranets are protected from unauthorized external access by means of a network gateway and firewall. For smaller companies, intranets may be created simply by using private IP ranges, such as 192.168.*.*. In these cases, the intranet can only be directly accessed from a computer in the local network; however, companies may provide access to offsite employees by using a virtual private network. Other security measures may be used, such as user authentication and encryption.
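Whether an address falls in a private range like 192.168.*.* can be checked with Python's standard ipaddress module. The blocks listed are the reserved private ranges from RFC 1918; the sample addresses are illustrative.

```python
import ipaddress

# The RFC 1918 private blocks; 192.168.0.0/16 covers 192.168.*.*
PRIVATE_NETS = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def is_private(addr):
    """True if the address belongs to one of the reserved private ranges."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in PRIVATE_NETS)

print(is_private("192.168.1.10"))   # True  - reachable only on the local network
print(is_private("209.33.27.100"))  # False - a publicly routable address
```

Addresses in these ranges are never routed on the public Internet, which is why an intranet built on them is reachable only from inside the local network (or through a VPN).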

Benefits
Workforce productivity: Intranets can help users to locate and view information faster and use applications relevant to their roles and responsibilities. With the help of a web browser interface, users can access data held in any database the organization wants to make available, at any time and, subject to security provisions, from anywhere within the company's workstations, increasing employees' ability to perform their jobs faster, more accurately, and with confidence that they have the right information. It also helps to improve the services provided to the users.

Time: Intranets allow organizations to distribute information to employees on an as-needed basis; employees may link to relevant information at their convenience, rather than being distracted indiscriminately by electronic mail.

Communication: Intranets can serve as powerful tools for communication within an organization, vertically and horizontally. From a communications standpoint, intranets are useful to communicate strategic initiatives that have a global reach throughout the organization. The type of information that can easily be conveyed is the purpose of the initiative and what the initiative is aiming to achieve, who is driving the initiative, results achieved to date, and who to speak to for more information. By providing this information on the intranet, staff have the opportunity to keep up to date with the strategic focus of the organization. Some examples of communication would be chat, email, and blogs. A real-world example of an intranet helping a company communicate is Nestle, which had a number of food processing plants in Scandinavia. Their central support system had to deal with a number of queries every day.[3] When Nestle decided to invest in an intranet, they quickly realized the savings. McGovern says the savings from the reduction in query calls was substantially greater than the investment in the intranet.

Web publishing allows cumbersome corporate knowledge to be maintained and easily accessed throughout the company using hypermedia and Web technologies. Examples include employee manuals, benefits documents, company policies, business standards, news feeds, and even training, all of which can be accessed using common Internet standards (Acrobat files, Flash files, CGI applications). Because each business unit can update the online copy of a document, the most recent version is usually available to employees using the intranet.

Business operations and management: Intranets are also being used as a platform for developing and deploying applications to support business operations and decisions across the internetworked enterprise.

Cost-effective: Users can view information and data via a web browser rather than maintaining physical documents such as procedure manuals, internal phone lists and requisition forms. This can potentially save the business money on printing, duplicating documents, and document maintenance overhead, and benefit the environment as well. For example, PeopleSoft "derived significant cost savings by shifting HR processes to the intranet".[3] McGovern goes on to say the manual cost of enrolling in benefits was found to be USD 109.48 per enrollment. "Shifting this process to the intranet reduced the cost per enrollment to $21.79; a saving of 80 percent". Another company that saved money on expense reports was Cisco. "In 1996, Cisco processed 54,000 reports and the amount of dollars processed was USD 19 million".[3]
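The quoted 80 percent figure follows directly from the two per-enrollment costs:

```python
# Per-enrollment costs quoted above (USD).
manual, intranet = 109.48, 21.79

# Percentage saving from shifting the process to the intranet.
saving_pct = (manual - intranet) / manual * 100
print(round(saving_pct))  # 80
```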

Enhance collaboration: Information is easily accessible by all authorised users, which enables teamwork.

Cross-platform capability: Standards-compliant web browsers are available for Windows, Mac, and UNIX.

Built for one audience: Many companies dictate computer specifications which, in turn, may allow intranet developers to write applications that only have to work on one browser (no cross-browser compatibility issues). Being able to specifically address your "viewer" is a great advantage. Since intranets are user-specific (requiring database/network authentication prior to access), you know exactly who you are interfacing with and can personalize your intranet based on role (job title, department) or individual ("Congratulations Jane, on your 3rd year with our company!").

Promote common corporate culture: Every user has the ability to view the same information within the intranet.

Immediate updates: When dealing with the public in any capacity, laws, specifications, and parameters can change. Intranets make it possible to provide your audience with "live" changes so they are kept up to date, which can limit a company's liability.

Supports a distributed computing architecture: The intranet can also be linked to a company's management information system, for example a time keeping system.

Disadvantages of Intranets
Management concerns:
- Management fears loss of control
- Hidden or unknown complexity and costs
- Potential for chaos

Security concerns:
- Unauthorized access
- Abuse of access
- Denial of service
- Packet sniffing

Productivity concerns:
- Overabundance of information
- Information overload lowers productivity
- Users set up own web pages

Extranet
An extranet is a computer network that allows controlled access from the outside, for specific business or educational purposes. In a business-to-business context, an extranet can be viewed as an extension of an organization's intranet that is extended to users outside the organization, usually partners, vendors, and suppliers, in isolation from all other Internet users. In contrast, business-to-consumer (B2C) models involve known servers of one or more companies, communicating with previously unknown consumer users. An extranet is similar to a DMZ in that it provides access to needed services for channel partners, without granting access to an organization's entire network.

Enterprise applications
During the late 1990s and early 2000s, several industries started to use the term 'extranet' to describe centralized repositories of shared data (and supporting applications) made accessible via the web only to authorized members of particular work groups - for example, geographically dispersed, multi-company project teams. Some applications are offered on a Software as a Service (SaaS) basis. For example, in the construction industry, project teams may access a project extranet to share drawings, photographs and documents, and use online applications to mark-up and make comments and to manage and report on project-related communications. In 2003 in the United Kingdom, several of the leading vendors formed the Network for Construction Collaboration Technology Providers (NCCTP) to promote the technologies and to establish data exchange standards between the different data systems. The same type of construction-focused technologies have also been developed in the United States, Australia and mainland Europe.[3]

Specially secured extranets are used to provide virtual data room services to help manage transactions between companies in sectors such as law and accountancy.

Advantages

- Exchange large volumes of data using Electronic Data Interchange (EDI)
- Share product catalogs exclusively with trade partners
- Collaborate with other companies on joint development efforts
- Jointly develop and use training programs with other companies
- Provide or access services provided by one company to a group of other companies, such as an online banking application managed by one company on behalf of affiliated banks

Disadvantages

Extranets can be expensive to implement and maintain within an organization (e.g., hardware, software, employee training costs), if hosted internally rather than by an application service provider.

Security of extranets can be a concern when hosting valuable or proprietary information.

Internet
This is the world-wide network of computers accessible to anyone who knows their Internet Protocol (IP) address - the IP address is a unique set of numbers (such as 209.33.27.100) that defines the computer's location. Most will have accessed a computer using a name such as http://www.hcidata.com. Before this named computer can be accessed, the name needs to be resolved (translated) into an IP address. To do this your browser (for example Netscape or Internet Explorer) will access a Domain Name Server (DNS) computer to look up the name and return an IP address - or issue an error message to indicate that the name was not found. Once your browser has the IP address it can access the remote computer. The actual server (the computer that serves up the web pages) does not reside behind a firewall - if it did, it would be an Extranet. It may implement security at a directory level so that access is via a username and password, but otherwise all the information is accessible. To see typical security, have a look at a sample secure directory - the username is Dr and the password is Who (both username and password are case sensitive).

Intranet
This is a network that is not available to the world outside of the Intranet. If the Intranet network is connected to the Internet, the Intranet will reside behind a firewall and, if it allows access from the Internet, will be an Extranet. The firewall helps to control access between the Intranet and Internet to permit access to the Intranet only to people who are members of the same company or organisation. In its simplest form, an Intranet can be set up on a networked PC without any PC on the network having access via the Intranet network to the Internet. For example, consider an office with a few PCs and a few printers all networked together. The network would not be connected to the outside world. On one of the drives of one of the PCs there would be a directory of web pages that comprise the Intranet. Other PCs on the network could access this Intranet by pointing their browser (Netscape or Internet Explorer) to this directory - for example U:\inet\index.htm. From then onwards they would navigate around the Intranet in the same way as they would get around the Internet.

Extranet
An Extranet is actually an Intranet that is partially accessible to authorised outsiders. The actual server (the computer that serves up the web pages) will reside behind a firewall. The firewall helps to control access between the Intranet and Internet, permitting access to the Intranet only to people who are suitably authorised. The level of access can be set to different levels for individuals or groups of outside users. The access can be based on a username and password or an IP address (a unique set of numbers such as 209.33.27.100 that defines the computer that the user is on).
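Directory-level username/password protection of this kind is commonly implemented with HTTP Basic authentication, where the browser sends the credentials base64-encoded in an Authorization header. A minimal sketch of building that header, using the sample Dr/Who credentials mentioned above (the function name is ours, not from any library):

```python
import base64

def basic_auth_header(username: str, password: str) -> str:
    """Build the HTTP Basic 'Authorization' header value for a username/password pair."""
    token = base64.b64encode(f"{username}:{password}".encode("utf-8")).decode("ascii")
    return f"Basic {token}"

# Credentials for the sample secure directory mentioned above (case sensitive).
header = basic_auth_header("Dr", "Who")
print(header)  # Basic RHI6V2hv
```

Note that base64 is an encoding, not encryption: anyone who can see the header can decode the password, which is why such logins matter for the privacy discussion below.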

Privacy Threats
Most people browse the web, send email, or take part in online chats or IRC without realizing how easily their data can be accessed by a third party. To understand the need for privacy on the net, it is best to take a look at the dangers first.
Sniffing

Your data passes over various lines on its way to, say, a webserver: first your phone line, then the lines of your ISP, various transit ISPs, and finally the ISP of the webserver. Many people can theoretically gain access to it while it travels along its way. This includes:

Anybody able to tap the actual lines (e.g. telecommunication companies, government agencies, and in the case of cable internet perhaps your neighbours)
Anybody with access to any of the hosts/routers along the way (e.g. your ISP's sysadmin, the sysadmins at the ISP where the webserver is hosted, the admin of the webserver itself)

Email

Your email travels around the internet in the clear, just like a postcard without an envelope. It takes even less effort to gain access to it than to tap your line.

People that can easily get hold of them include:

Anybody with access to either your mailserver or the mailserver on the receiving end (the sysadmin of your ISP, the sysadmin of the recipient's ISP)
Anybody who can sniff your email while it is being transmitted (see Sniffing)

Serverlogs

When you access a server, for example a webserver, the server can always see which computer you're coming from and what you're trying to access. When you ftp a file, the server will usually 'remember' your IP address (that's the number of your computer) and which file you retrieved. It puts this in a so-called logfile. Sometimes it will also log your email address, if you configured your browser to use this as the password for ftp logins. People with access to those logfiles are the sysadmins of the server.
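Such logfiles typically record one line per request. A sketch of parsing a line in the widely used Common Log Format shows how much a sysadmin can read off each visit (the sample line is made up for illustration):

```python
import re

# Common Log Format: host ident authuser [date] "request" status bytes
LOG_PATTERN = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<date>[^\]]+)\] "(?P<request>[^"]*)" (?P<status>\d{3}) (?P<size>\d+|-)'
)

def parse_log_line(line: str) -> dict:
    """Extract the visitor's IP, the request, and the status from one log line."""
    match = LOG_PATTERN.match(line)
    return match.groupdict() if match else {}

# A made-up sample entry of the kind a webserver might record.
sample = '209.33.27.100 - - [10/Oct/2000:13:55:36 -0700] "GET /index.html HTTP/1.0" 200 2326'
entry = parse_log_line(sample)
print(entry["ip"], entry["request"], entry["status"])
```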
Websites

Websites in particular tend to pose even greater dangers to your privacy. This is because your browser usually transmits a lot of additional information to the website.

This includes things like:

The URL of the website you came from (the "referer"). If you follow a link on www.yahoo.com to www.xs4all.nl, for example, the webserver at xs4all could see that you're coming from yahoo. In this case you probably won't care about it, but there are more questionable websites you could be coming from.
The OS (operating system, e.g. Windows, Linux, BSD) you're running and the name and version of your webbrowser (e.g. Netscape).
In some cases your name and your email address, if you configured your browser this way.
Any cookies it previously stored on your machine. If you access a website, it will sometimes store some information on your computer, so that if you access it again, it'll know you've been there before.

A cookie contains a variable (say, name) and a value (say, Your Name) and the name of the website it goes with. If you run BSD or Linux with Netscape, take a look at ~/.netscape/cookies. You might wonder who'd be interested in that kind of information. In most cases it is mainly used for advertisement purposes. If you give out your email address, this is also very interesting for spammers.
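The headers carrying this information can be inspected directly. A sketch that builds the kind of request headers a browser might send and parses the Cookie header with Python's standard library (all header values here are illustrative, not captured from a real browser):

```python
from http.cookies import SimpleCookie

# Headers of the kind a browser typically attaches to a request (values illustrative).
request_headers = {
    "Referer": "http://www.yahoo.com/",          # the page the link was followed from
    "User-Agent": "Mozilla/4.7 (X11; Linux)",    # browser name/version and OS
    "Cookie": "name=YourName; visited=yes",      # cookies previously stored by the site
}

# Parsing the Cookie header back into variable/value pairs, as the server would.
cookies = SimpleCookie()
cookies.load(request_headers["Cookie"])
for key, morsel in sorted(cookies.items()):
    print(key, "=", morsel.value)
```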

Network economics
Network economics refers to business economics that benefit from the network effect: the value of a good or service increases when others buy the same good or service. Examples are websites such as eBay or iVillage, where the community comes together and shares thoughts to help the website become a better business organization. In sustainability, network economics refers to multiple professionals (architects, designers, or related businesses) all working together to develop sustainable products and technologies. The more companies are involved in environmentally friendly production, the easier and cheaper it becomes to produce new sustainable products. For instance, if no one produces sustainable products, it is difficult and expensive to design a sustainable house with custom materials and technology. But due to network economics, the more industries are involved in creating such products, the easier it is to design an environmentally sustainable building. Another benefit of network economics in a certain field is improvement that results from competition and networking within an industry. The network economy may be viewed from a number of perspectives: transition from the industrial economy, digital and information infrastructure, global scale, value networks, and intellectual property rights.

From a transitional point of view, Malone and Laubacher (1998) indicate that the Information Revolution has changed the nature of business activity. Because information can be shared instantly and inexpensively on a global scale, the value of centralized decision making and expensive bureaucracies is greatly diminished. Brand (1999) points out that commerce is being accelerated by the digital and network revolutions and that the role of commerce is to both exploit and absorb these shocks. Some effort must focus on developing new infrastructure while other activity will emphasize governance and evolving culture. Rifkin (2000) notes that real estate has become a business burden in network-based markets.

From an infrastructure perspective, Tapscott (1996) compares information networks of the new economy to highways and the power grid of the industrial economy. He suggests that no country can succeed without state-of-the-art electronic infrastructure. Schwartz (1999) writes that in the future, large companies will manage their purchasing, invoicing, document exchange, and logistics through global networks that connect a billion computing devices.

At global scales, Tapscott (1996) indicates that companies can provide 24-hour service as customer requests are transferred from one time zone to another without customers being aware that the work is being done on the other side of the world. Boyett and Boyett (2001) point out that the larger the network, the greater its value and desirability. In a networked economy, success begets more success. Kelly (1998) states that in a network economy, value is created and shared by all members of a network rather than by individual companies and that economies of scale stem from the size of the network - not the enterprise. Similarly, because value flows from connectivity, Boyett and Boyett (2001) point out that an open system is preferable to a closed system because the former typically has more nodes. They also indicate that such networks are blurring the boundaries between a company and its environment.

A network economy raises important issues with respect to intellectual property. Shapiro and Varian (1999) explain that once a first copy of information has been produced, producing additional copies costs virtually nothing. Rifkin (2000) proposes that as markets make way for networks, ownership is being replaced by access rights because ownership becomes increasingly marginal to business success and economic progress.

TCP/IP PROTOCOL SUITE


Communication between computers on a network is done through protocol suites. The most widely used and most widely available protocol suite is the TCP/IP protocol suite. A protocol suite consists of a layered architecture where each layer provides some functionality which can be carried out by a protocol. Each layer usually has more than one protocol option to carry out the responsibility that the layer adheres to. TCP/IP is normally considered to be a 4-layer system. The 4 layers are as follows:

1. Application layer
2. Transport layer
3. Network layer
4. Data link layer

1. Application layer
This is the top layer of TCP/IP protocol suite. This layer includes applications or processes that use transport layer protocols to deliver the data to destination computers. At each layer there are certain protocol options to carry out the task designated to that particular layer. So, application layer also has various protocols that applications use to communicate with the second layer, the transport layer. Some of the popular application layer protocols are :

HTTP (Hypertext Transfer Protocol)
FTP (File Transfer Protocol)
SMTP (Simple Mail Transfer Protocol)
SNMP (Simple Network Management Protocol), etc.

2. Transport Layer
This layer provides the backbone to data flow between two hosts. It receives data from the application layer above it. Many protocols work at this layer, but the two most commonly used transport layer protocols are TCP and UDP. TCP is used where a reliable connection is required, while UDP is used in the case of unreliable connections.

TCP divides the data (coming from the application layer) into properly sized chunks and then passes these chunks onto the network. It acknowledges received packets, waits for acknowledgements of the packets it sent, and sets a timeout to resend packets if acknowledgements are not received in time. The term reliable connection is used where it is not desired to lose any information being transferred over the network through this connection. So the protocol used for this type of connection must provide a mechanism to achieve this characteristic. For example, while downloading a file, it is not desired to lose any information (bytes), as that may lead to corruption of the downloaded content.

UDP provides a comparatively simpler but unreliable service by sending packets from one host to another. UDP does not take any extra measures to ensure that the data sent is actually received by the target host. The term unreliable connection is used where loss of some information does not hamper the task being fulfilled through the connection. For example, while streaming a video, the loss of a few bytes of information is acceptable, as this does not harm the user experience much.
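The acknowledge-and-retransmit idea can be illustrated with a toy simulation. This is a deliberately simplified model of TCP's behaviour, not real TCP; the function and its loss model are made up for the example:

```python
def deliver(packets, drop_first_try):
    """Toy model of TCP-style reliability: packets whose sequence number is
    in drop_first_try are 'lost' on their first transmission; the missing
    acknowledgement makes the sender retransmit until an ACK (seq + 1) arrives."""
    received = []
    transmissions = 0
    for seq, data in enumerate(packets):
        attempts = 0
        ack = None
        while ack != seq + 1:          # wait for the ACK; resend on 'timeout'
            transmissions += 1
            attempts += 1
            lost = seq in drop_first_try and attempts == 1
            if not lost:               # receiver got the packet, so it ACKs seq + 1
                ack = seq + 1
        received.append(data)
    return received, transmissions

data = ["chunk0", "chunk1", "chunk2"]
got, sent = deliver(data, drop_first_try={1})
print(got)   # ['chunk0', 'chunk1', 'chunk2'] - nothing lost
print(sent)  # 4 transmissions: 3 packets plus 1 retransmission of seq 1
```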

3. Network Layer
This layer is also known as the Internet layer. Its main purpose is to organize or handle the movement of data on the network; by movement of data we generally mean routing of data over the network. The main protocol used at this layer is IP, while ICMP (used by the popular ping command) and IGMP are also used at this layer.

4. Data Link Layer


This layer is also known as the network interface layer. It normally consists of the device drivers in the OS and the network interface card attached to the system. Both the device drivers and the network interface card take care of the communication details with the media being used to transfer the data over the network. In most cases, this media is in the form of cables. Some of the well-known protocols used at this layer include ARP (Address Resolution Protocol) and PPP (Point-to-Point Protocol).

TCP/IP CONCEPT EXAMPLE


One thing worth noting is that the interaction between two computers over the network through the TCP/IP protocol suite takes place in the form of a client-server architecture: the client requests a service, while the server processes the request for the client.

Now that we have discussed the underlying layers which help data flow from host to target over a network, let's take a very simple example to make the concept clearer. Consider the data flow when you open a website.

As seen in the above figure, the information flows downward through each layer on the host machine. At the first layer, since the HTTP protocol is being used, an HTTP request is formed and sent to the transport layer. Here the protocol TCP assigns some more information (like sequence number, source port number, destination port number, etc.) to the data coming from the upper layer so that the communication remains reliable, i.e. a track of sent and received data can be maintained.

At the next lower layer, IP adds its own information over the data coming from the transport layer. This information helps the packet travel over the network. Lastly, the data link layer makes sure that the data transfer to/from the physical media is done properly. Here again, the communication done at the data link layer can be reliable or unreliable. This information travels over the physical media (like Ethernet) and reaches the target machine.

At the target machine (which in our case is the machine hosting the website), the same series of interactions happens, but in reverse order. The packet is first received at the data link layer. At this layer the information that was added by the data link layer protocol of the host machine is read, and the rest of the data is passed to the upper layer. Similarly, at the network layer, the information set by the network layer protocol of the host machine is read, and the rest of the information is passed to the next upper layer. The same happens at the transport layer, and finally the HTTP request sent by the host application (your browser) is received by the target application (the website server).

One might wonder what happens when information particular to each layer is read by the corresponding protocol at the target machine, or why it is required. Let's understand this with the example of the TCP protocol at the transport layer. At the host machine this protocol adds information like a sequence number to each packet sent by this layer. At the target machine, when the packet reaches this layer, TCP makes note of the sequence number of the packet and sends an acknowledgement (which is the received sequence number + 1). Now, if the host TCP does not receive the acknowledgement within some specified time, it resends the same packet. This is how TCP makes sure that no packet gets lost. So we see that the protocol at every layer reads the information set by its counterpart to achieve the functionality of the layer it represents.
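This wrapping and unwrapping of headers at each layer (encapsulation) can be sketched as follows. The header fields here are simplified stand-ins for the real protocol headers, and the addresses are illustrative:

```python
# A toy model of encapsulation: each layer wraps the payload from the
# layer above with its own (simplified) header on the way down, and the
# receiving side strips the headers in reverse order on the way up.

def encapsulate(http_request):
    tcp_segment = {"src_port": 49152, "dst_port": 80, "seq": 1, "payload": http_request}
    ip_packet = {"src_ip": "10.0.0.1", "dst_ip": "93.184.216.34", "payload": tcp_segment}
    frame = {"src_mac": "aa:bb:cc:dd:ee:ff", "payload": ip_packet}
    return frame

def decapsulate(frame):
    ip_packet = frame["payload"]        # data link layer strips its header
    tcp_segment = ip_packet["payload"]  # network layer strips its header
    return tcp_segment["payload"]       # transport layer hands the data to the app

request = "GET / HTTP/1.1\r\nHost: example.com\r\n\r\n"
frame = encapsulate(request)
print(decapsulate(frame) == request)  # True - the request survives the round trip
```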

Hypertext Transfer Protocol


The Hypertext Transfer Protocol (HTTP) is an application protocol for distributed, collaborative, hypermedia information systems.[1] HTTP is the foundation of data communication for the World Wide Web.

HTTP functions as a request-response protocol in the client-server computing model. In HTTP, a web browser, for example, acts as a client, while an application running on a computer hosting a web site functions as a server. The client submits an HTTP request message to the server. The server, which stores content, or provides resources, such as HTML files, or performs other functions on behalf of the client, returns a response message to the client. A response contains completion status information about the request and may contain any content requested by the client in its message body.

A web browser (or client) is often referred to as a user agent (UA). Other user agents can include the indexing software used by search providers, known as web crawlers, or variations of the web browser such as voice browsers, which present an interactive voice user interface.

The HTTP protocol is designed to permit intermediate network elements to improve or enable communications between clients and servers. High-traffic websites often benefit from web cache servers that deliver content on behalf of the original, so-called origin server, to improve response time. HTTP proxy servers at network boundaries facilitate communication when clients without a globally routable address are located in private networks by relaying the requests and responses between clients and servers.

HTTP is an Application Layer protocol designed within the framework of the Internet Protocol Suite. The protocol definitions presume a reliable Transport Layer protocol for host-to-host data transfer.[2] The Transmission Control Protocol (TCP) is the dominant protocol in use for this purpose. However, HTTP has found application even with unreliable protocols, such as the User Datagram Protocol (UDP) in methods such as the Simple Service Discovery Protocol (SSDP).
HTTP resources are identified and located on the network by Uniform Resource Identifiers (URIs) or, more specifically, Uniform Resource Locators (URLs), using the http or https URI schemes. URIs and the Hypertext Markup Language (HTML) form a system of inter-linked resources, called hypertext documents, on the Internet, which led to the establishment of the World Wide Web in 1990 by English computer scientist and innovator Tim Berners-Lee.

HTTP session
An HTTP session is a sequence of network request-response transactions. An HTTP client initiates a request by establishing a Transmission Control Protocol (TCP) connection to a particular port on a server (typically port 80; see List of TCP and UDP port numbers). An HTTP server listening on that port waits for a client's request message. Upon receiving the request, the server sends back a status line, such as "HTTP/1.1 200 OK", and a message of its own, the body of which is perhaps the requested resource, an error message, or some other information.
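The full request-response exchange can be demonstrated end to end with Python's standard library, running a throwaway server on a local port. The handler and its one-line body are illustrative; a real server would of course serve actual resources:

```python
import http.server
import threading
import urllib.request

class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        # Serve a tiny fixed body for any GET request.
        body = b"hello"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request logging for the demo

# Bind to port 0 so the OS picks a free port, then serve in the background.
server = http.server.HTTPServer(("127.0.0.1", 0), Handler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# The client side: open a TCP connection, send a GET, read the status line.
with urllib.request.urlopen(f"http://127.0.0.1:{port}/") as resp:
    status_line = f"HTTP/1.1 {resp.status} {resp.reason}"
    content = resp.read()

print(status_line)  # HTTP/1.1 200 OK
server.shutdown()
```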

Secure HTTP
There are three methods of establishing a secure HTTP connection: HTTP Secure, Secure Hypertext Transfer Protocol, and the HTTP/1.1 Upgrade header. Browser support for the latter two is, however, nearly non-existent, so HTTP Secure is the dominant method of establishing a secure HTTP connection.

File Transfer Protocol(FTP)


File Transfer Protocol (FTP) is a standard network protocol used to transfer files from one host to another host over a TCP-based network, such as the Internet. It is often used to upload web pages and other documents from a private development machine to a public web hosting server. FTP is built on a client-server architecture and uses separate control and data connections between the client and the server.[1] FTP users may authenticate themselves using a clear-text sign-in protocol, normally in the form of a username and password, but can connect anonymously if the server is configured to allow it. The first FTP client applications were interactive command-line tools, implementing standard commands and syntax. Graphical user interface clients have since been developed for many of the popular desktop operating systems in use today,[2][3] including general web design programs like Microsoft Expression Web, and specialist FTP clients such as CuteFTP.

Differences from HTTP


FTP operates on the application layer of the OSI model, and is used to transfer files using TCP/IP.[3] To do so, an FTP server has to be running and waiting for incoming requests.[3] The client computer is then able to communicate with the server on port 21.[3][4] This connection, called the control connection,[5] remains open for the duration of the session. A second connection, called the data connection,[2][5] can either be opened by the server from its port 20 to a negotiated client port (active mode), or by the client from an arbitrary port to a negotiated server port (passive mode), as required to transfer file data.[2][4] The control connection is used for session administration, for example commands, identification and passwords exchanged between the client and the server using a telnet-like protocol.[6] For example, "RETR filename" would transfer the specified file from the server to the client. Due to this two-port structure, FTP is considered an out-of-band protocol, as opposed to an in-band protocol such as HTTP.[6]
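In passive mode, the server tells the client where to open the data connection in its reply to the PASV command: a 227 response carrying the host address and a port encoded as two bytes. A sketch of parsing such a reply (the reply text is a typical example, not captured from a specific server):

```python
import re

def parse_pasv_reply(reply: str):
    """Extract the data-connection host and port from an FTP 227 reply.
    The port is encoded as two bytes: port = p1 * 256 + p2."""
    match = re.search(r"\((\d+),(\d+),(\d+),(\d+),(\d+),(\d+)\)", reply)
    if not match:
        raise ValueError("not a PASV reply")
    h1, h2, h3, h4, p1, p2 = (int(g) for g in match.groups())
    return f"{h1}.{h2}.{h3}.{h4}", p1 * 256 + p2

reply = "227 Entering Passive Mode (209,33,27,100,195,149)"
host, port = parse_pasv_reply(reply)
print(host, port)  # 209.33.27.100 50069
```

The client would then open a second TCP connection to that host and port to receive the file data, while the original control connection stays open for commands.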

What is an FTP Client?
An FTP Client is software that is designed to transfer files back and forth between two computers over the Internet. It needs to be installed on your computer and can only be used with a live connection to the Internet. The classic FTP Client look is a two-pane design: the pane on the left displays the files on your computer and the pane on the right displays the files on the remote computer. File transfers are as easy as dragging and dropping files from one pane to the other, or highlighting a file and clicking one of the direction arrows located between the panes. Additional features of the FTP Client include: multiple file transfer; the auto re-get or resuming feature; a queuing utility; the scheduling feature; an FTP find utility; a synchronize utility; and, for the advanced user, a scripting utility.

Domain Name System(DNS)


The Domain Name System (DNS) is a hierarchical distributed naming system for computers, services, or any resource connected to the Internet or a private network. It associates various information with domain names assigned to each of the participating entities. A Domain Name Service translates queries for domain names (which are meaningful to humans) into IP addresses for the purpose of locating computer services and devices worldwide. An often-used analogy to explain the Domain Name System is that it serves as the phone book for the Internet by translating human-friendly computer hostnames into IP addresses. For example, the domain name www.example.com translates to the addresses 192.0.43.10 (IPv4) and 2620:0:2d0:200::10 (IPv6).

The Domain Name System makes it possible to assign domain names to groups of Internet resources and users in a meaningful way, independent of each entity's physical location. Because of this, World Wide Web (WWW) hyperlinks and Internet contact information can remain consistent and constant even if the current Internet routing arrangements change or the participant uses a mobile device. Internet domain names are easier to remember than IP addresses such as 208.77.188.166 (IPv4) or 2001:db8:1f70::999:de8:7648:6e8 (IPv6). Users take advantage of this when they recite meaningful Uniform Resource Locators (URLs) and e-mail addresses without having to know how the computer actually locates them.

The Domain Name System distributes the responsibility of assigning domain names and mapping those names to IP addresses by designating authoritative name servers for each domain. Authoritative name servers are assigned to be responsible for their particular domains, and in turn can assign other authoritative name servers for their sub-domains. This mechanism has made the DNS distributed and fault tolerant and has helped avoid the need for a single central register to be continually consulted and updated. In general, the Domain Name System also stores other types of information, such as the list of mail servers that accept email for a given Internet domain. By providing a worldwide, distributed keyword-based redirection service, the Domain Name System is an essential component of the functionality of the Internet.
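Programs normally delegate this name-to-address lookup to the operating system's resolver. A minimal sketch using Python's standard library, resolving localhost so no network access is needed (real hostnames resolve the same way, via DNS):

```python
import socket

def resolve(hostname: str) -> list:
    """Ask the system resolver for the IPv4 addresses of a hostname."""
    infos = socket.getaddrinfo(hostname, None, family=socket.AF_INET)
    # Each entry is (family, type, proto, canonname, sockaddr);
    # the IP address is the first element of sockaddr.
    return sorted({info[4][0] for info in infos})

print(resolve("localhost"))
```

If the name cannot be resolved, getaddrinfo raises socket.gaierror, which corresponds to the "name was not found" error a browser reports.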

Enterprise resource planning


Enterprise resource planning (ERP) systems integrate internal and external management information across an entire organization, embracing finance/accounting, manufacturing, sales and service, customer relationship management, etc. ERP systems automate this activity with an integrated software application. Their purpose is to facilitate the flow of information between all business functions inside the boundaries of the organization and manage the connections to outside stakeholders.[1] ERP systems can run on a variety of computer hardware and network configurations, typically employing a database as a repository for information.

Characteristics
ERP (Enterprise Resource Planning) systems typically include the following characteristics:

An integrated system that operates in real time (or near real time), without relying on periodic updates.

A common database, which supports all applications.
A consistent look and feel throughout each module.

Installation of the system without elaborate application/data integration by the Information Technology (IT) department.[3]

Functional areas typically covered include:

Finance/Accounting: general ledger, payables, cash management, fixed assets, receivables, budgeting, consolidation
Human resources: payroll, training, benefits, 401K, recruiting, diversity management
Manufacturing: engineering, bill of materials, work orders, scheduling, capacity, workflow management, quality control, cost management, manufacturing process, manufacturing projects, manufacturing flow, activity based costing, product lifecycle management
Supply chain management: order to cash, inventory, order entry, purchasing, product configurator, supply chain planning, supplier scheduling, inspection of goods, claim processing, commissions
Project management: costing, billing, time and expense, performance units, activity management
Customer relationship management: sales and marketing, commissions, service, customer contact, call center support
Data services: various "self-service" interfaces for customers, suppliers and/or employees
Access control: management of user privileges for various processes

Components

Transactional database
Management portal/dashboard
Business intelligence system
Customizable reporting
External access via technology such as web services
Search
Document management
Messaging/chat/wiki
Workflow management

Advantages
The fundamental advantage of ERP is that integrating the myriad processes by which businesses operate saves time and expense. Decisions can be made more quickly and with fewer errors. Data becomes visible across the organization. Tasks that benefit from this integration include:

Sales forecasting, which allows inventory optimization

Chronological history of every transaction through relevant data compilation in every area of operation
Order tracking, from acceptance through fulfillment
Revenue tracking, from invoice through cash receipt

Matching purchase orders (what was ordered), inventory receipts (what arrived), and costing (what the vendor invoiced)

ERP systems centralize business data, bringing the following benefits:

They eliminate the need to synchronize changes between multiple systems, consolidating finance, marketing and sales, human resource, and manufacturing applications.

They bring legitimacy and transparency to each bit of statistical data.
They enable standard product naming/coding.

They provide a comprehensive enterprise view (no "islands of information").
They make real-time information available to management anywhere, any time, to support proper decisions.
They protect sensitive data by consolidating multiple security systems into a single structure.[30]

Disadvantages

Customization is problematic.

Reengineering business processes to fit the ERP system may damage competitiveness and/or divert focus from other critical activities.
ERP can cost more than less integrated and/or less comprehensive solutions.
High switching costs associated with ERP can increase the ERP vendor's negotiating power, which can result in higher support, maintenance, and upgrade expenses.
Overcoming resistance to sharing sensitive information between departments can divert management attention.
Integration of truly independent businesses can create unnecessary dependencies.
Extensive training requirements take resources from daily operations.

Due to ERP's architecture (OLTP, On-Line Transaction Processing), ERP systems are not well suited for production planning and supply chain management (SCM). The limitations of ERP have been recognized, sparking new trends in ERP application development. The four significant developments being made in ERP are: creating a more flexible ERP, Web-enabled ERP, inter-enterprise ERP, and e-business suites, each of which will potentially address the failings of current ERP.

Supply chain management


Supply chain management (SCM) is the management of a network of interconnected businesses involved in the ultimate provision of product and service packages required by end customers (Harland, 1996).[2] Supply chain management spans all movement and storage of raw materials, work-in-process inventory, and finished goods from point of origin to point of consumption (supply chain). Another definition is provided by the APICS Dictionary when it defines SCM as the "design, planning, execution, control, and monitoring of supply chain activities with the objective of creating net value, building a competitive infrastructure, leveraging worldwide logistics, synchronizing supply with demand and measuring performance globally."

Problems addressed by supply chain management


Supply chain management must address the following problems:

Distribution Network Configuration: number, location and network missions of suppliers, production facilities, distribution centers, warehouses, cross-docks and customers.

Distribution Strategy: questions of operating control (centralized, decentralized or shared); delivery scheme, e.g., direct shipment, pool point shipping, cross docking, DSD (direct store delivery), closed loop shipping; mode of transportation, e.g., motor carrier, including truckload, LTL, parcel; railroad; intermodal transport, including TOFC (trailer on flatcar) and COFC (container on flatcar); ocean freight; airfreight; replenishment strategy (e.g., pull, push or hybrid); and transportation control (e.g., owner-operated, private carrier, common carrier, contract carrier, or 3PL).

Trade-Offs in Logistical Activities: The above activities must be well coordinated in order to achieve the lowest total logistics cost. Trade-offs may increase the total cost if only one of the activities is optimized. For example, full truckload (FTL) rates are more economical on a cost per pallet basis than less than truckload (LTL) shipments. If, however, a full truckload of a product is ordered to reduce transportation costs, there will be an increase in inventory holding costs which may increase total logistics costs. It is therefore imperative to take a systems approach when planning logistical activities. These trade-offs are key to developing the most efficient and effective Logistics and SCM strategy.

Information: Integration of processes through the supply chain to share valuable information, including demand signals, forecasts, inventory, transportation, potential collaboration, etc.

Inventory Management: Quantity and location of inventory, including raw materials, work-in-process (WIP) and finished goods.

Cash-Flow: Arranging the payment terms and methodologies for exchanging funds across entities within the supply chain.
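The FTL-versus-LTL trade-off discussed above can be made concrete with a back-of-the-envelope calculation. All rates and quantities below are hypothetical, chosen only to show how a cheaper per-pallet transport rate can be outweighed by inventory holding costs:

```python
# Hypothetical rates: FTL is cheaper per pallet to ship, but ordering a
# full truckload means carrying extra inventory, which costs money to hold.
PALLETS_PER_TRUCK = 26
FTL_COST_PER_TRUCK = 1300.0      # flat rate for a full truckload
LTL_COST_PER_PALLET = 95.0       # higher per-pallet rate for partial loads
HOLDING_COST_PER_PALLET = 30.0   # cost of carrying one extra pallet in stock

def total_cost_ftl(pallets_needed):
    """Ship a full truck; pallets beyond current demand incur holding cost."""
    extra = PALLETS_PER_TRUCK - pallets_needed
    return FTL_COST_PER_TRUCK + extra * HOLDING_COST_PER_PALLET

def total_cost_ltl(pallets_needed):
    """Ship exactly what is needed, at the higher LTL rate."""
    return pallets_needed * LTL_COST_PER_PALLET

for needed in (10, 20):
    ftl, ltl = total_cost_ftl(needed), total_cost_ltl(needed)
    better = "FTL" if ftl < ltl else "LTL"
    print(f"{needed} pallets: FTL {ftl:.0f} vs LTL {ltl:.0f} -> {better}")
```

Under these made-up numbers, LTL wins at low volume and FTL wins at high volume, which is exactly why the text argues for a systems view: optimizing transport cost alone would always pick FTL and inflate total logistics cost.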

Importance of supply chain management


Organizations increasingly find that they must rely on effective supply chains, or networks, to compete in the global market and networked economy.[7] In Peter Drucker's (1998) new management paradigms, this concept of business relationships extends beyond traditional enterprise boundaries and seeks to organize entire business processes throughout a value chain of multiple companies.

During the past decades, globalization, outsourcing and information technology have enabled many organizations, such as Dell and Hewlett Packard, to successfully operate solid collaborative supply networks in which each specialized business partner focuses on only a few key strategic activities (Scott, 1993). This inter-organizational supply network can be acknowledged as a new form of organization. However, with the complicated interactions among the players, the network structure fits neither "market" nor "hierarchy" categories (Powell, 1990). It is not clear what kind of performance impacts different supply network structures could have on firms, and little is known about the coordination conditions and trade-offs that may exist among the players.

From a systems perspective, a complex network structure can be decomposed into individual component firms (Zhang and Dilts, 2004). Traditionally, companies in a supply network concentrate on the inputs and outputs of the processes, with little concern for the internal management working of other individual players. Therefore, the choice of an internal management control structure is known to impact local firm performance (Mintzberg, 1979).

In the 21st century, changes in the business environment have contributed to the development of supply chain networks.
First, as an outcome of globalization and the proliferation of multinational companies, joint ventures, strategic alliances and business partnerships, significant success factors were identified, complementing the earlier "JustIn-Time", "Lean Manufacturing" and "Agile Manufacturing" practices.[8] Second, technological changes, particularly the dramatic fall in information communication costs, which are a significant component of transaction costs, have led to changes in coordination among the members of the supply chain network (Coase, 1998).

What is a Data Warehouse?


A data warehouse is a relational database that is designed for query and analysis rather than for transaction processing. It usually contains historical data derived from transaction data, but it can include data from other sources. It separates the analysis workload from the transaction workload and enables an organization to consolidate data from several sources. In addition to a relational database, a data warehouse environment includes an extraction, transportation, transformation, and loading (ETL) solution, an online analytical processing (OLAP) engine, client analysis tools, and other applications that manage the process of gathering data and delivering it to business users.
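The separation of the transaction system from the analysis system can be sketched in a few lines. This is only an illustrative example, not a description of any particular product: the table and column names are invented, and Python's built-in sqlite3 module stands in for both the OLTP database and the warehouse.

```python
import sqlite3

# Source OLTP system: order rows as captured by the transaction application.
oltp = sqlite3.connect(":memory:")
oltp.execute("CREATE TABLE orders (id INTEGER, cust TEXT, amount REAL, day TEXT)")
oltp.executemany(
    "INSERT INTO orders VALUES (?, ?, ?, ?)",
    [(1, "acme", 120.0, "2024-08-01"),
     (2, "acme", 80.0, "2024-08-02"),
     (3, "globex", 200.0, "2024-09-01")],
)

# Target warehouse: a separate database, so analytic queries never
# compete with the transaction workload.
dwh = sqlite3.connect(":memory:")
dwh.execute("CREATE TABLE fact_sales (cust TEXT, amount REAL, month TEXT)")

# Extract from the source, transform (derive a month key from the order
# date), and load into the warehouse fact table.
rows = oltp.execute("SELECT cust, amount, day FROM orders").fetchall()
dwh.executemany(
    "INSERT INTO fact_sales VALUES (?, ?, ?)",
    [(cust, amount, day[:7]) for cust, amount, day in rows],
)

# Analysis now runs against the warehouse copy of the data.
total = dwh.execute(
    "SELECT SUM(amount) FROM fact_sales WHERE month = '2024-08'"
).fetchone()[0]
print(total)  # 200.0
```

In a real environment the extract, transform, and load steps are performed by a dedicated ETL tool on a schedule, but the division of labor is the same: the warehouse holds a consolidated, query-oriented copy of the transactional data.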

Data Warehouse Architectures


Data warehouses and their architectures vary depending upon the specifics of an organization's situation. Three common architectures are:

Data Warehouse Architecture (Basic)

Data Warehouse Architecture (with a Staging Area)

Data Warehouse Architecture (with a Staging Area and Data Marts)

Data Warehouse Architecture (Basic)

Figure 1-2 shows a simple architecture for a data warehouse. End users directly access data derived from several source systems through the data warehouse.

Figure 1-2 Architecture of a Data Warehouse


In Figure 1-2, the metadata and raw data of a traditional OLTP system are present, as is an additional type of data, summary data. Summaries are very valuable in data warehouses because they pre-compute long operations in advance. For example, a typical data warehouse query is to retrieve something like August sales. A summary in Oracle is called a materialized view.
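The value of a summary can be sketched with an ordinary pre-computed table. In Oracle the database maintains the materialized view itself; here the table names are invented and sqlite3 (which has no materialized views) stands in, so the summary is just created once by hand:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (region TEXT, amount REAL, month TEXT)")
con.executemany(
    "INSERT INTO sales VALUES (?, ?, ?)",
    [("north", 100.0, "2024-08"),
     ("north", 50.0, "2024-08"),
     ("south", 70.0, "2024-09")],
)

# Pre-compute the long aggregation once, as a materialized view would.
con.execute("""CREATE TABLE sales_by_month AS
               SELECT month, SUM(amount) AS total
               FROM sales GROUP BY month""")

# "Retrieve August sales" now reads one pre-aggregated row instead of
# scanning and summing the whole sales table.
row = con.execute(
    "SELECT total FROM sales_by_month WHERE month = '2024-08'"
).fetchone()
print(row[0])  # 150.0
```

On a warehouse holding millions of sales rows, this is the difference between a full table scan per query and a single indexed lookup.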
Data Warehouse Architecture (with a Staging Area)

In Figure 1-2, you need to clean and process your operational data before putting it into the warehouse. You can do this programmatically, although most data warehouses use a staging area instead. A staging area simplifies building summaries and general warehouse management. Figure 1-3 illustrates this typical architecture.

Figure 1-3 Architecture of a Data Warehouse with a Staging Area
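The role of the staging area can be sketched as: load raw extracts as-is into a staging table, clean them there, and only then insert them into the warehouse tables. The table names, the specific cleanups, and the use of sqlite3 are all illustrative assumptions:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE staging_orders (cust TEXT, amount TEXT)")
con.execute("CREATE TABLE fact_orders (cust TEXT, amount REAL)")

# Raw extract, loaded untouched: inconsistent case, duplicate rows,
# amounts still stored as text.
con.executemany(
    "INSERT INTO staging_orders VALUES (?, ?)",
    [("Acme", "120.0"), ("ACME", "120.0"), ("Globex", "80.5")],
)

# Clean inside the staging area: normalize customer codes, cast types,
# drop duplicates - then load the cleaned rows into the warehouse.
con.execute("""INSERT INTO fact_orders
               SELECT DISTINCT LOWER(cust), CAST(amount AS REAL)
               FROM staging_orders""")

rows = con.execute(
    "SELECT cust, amount FROM fact_orders ORDER BY cust"
).fetchall()
print(rows)  # [('acme', 120.0), ('globex', 80.5)]
```

Keeping the messy intermediate state in a dedicated staging table means the warehouse tables only ever contain cleaned, consistent data.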


Data Warehouse Architecture (with a Staging Area and Data Marts)

Although the architecture in Figure 1-3 is quite common, you may want to customize your warehouse's architecture for different groups within your organization. You can do this by adding data marts, which are systems designed for a particular line of business. Figure 1-4 illustrates an example where purchasing, sales, and inventories are separated. In this example, a financial analyst might want to analyze historical data for purchases and sales.

Figure 1-4 Architecture of a Data Warehouse with a Staging Area and Data Marts
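A data mart can be sketched as a subject-area extract of the warehouse. In this hypothetical example (invented table names, sqlite3 standing in for the warehouse), a combined fact table is split into separate sales and purchasing marts like those in Figure 1-4:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE fact_all (area TEXT, item TEXT, amount REAL)")
con.executemany(
    "INSERT INTO fact_all VALUES (?, ?, ?)",
    [("sales", "widget", 300.0),
     ("purchasing", "steel", 120.0),
     ("sales", "gadget", 150.0)],
)

# Each mart carries only one line of business, so a department's
# queries run against a small, purpose-built table.
con.execute("""CREATE TABLE mart_sales AS
               SELECT item, amount FROM fact_all WHERE area = 'sales'""")
con.execute("""CREATE TABLE mart_purchasing AS
               SELECT item, amount FROM fact_all WHERE area = 'purchasing'""")

total = con.execute("SELECT SUM(amount) FROM mart_sales").fetchone()[0]
print(total)  # 450.0
```

A financial analyst studying historical purchases and sales would query the two marts rather than the full warehouse, which is exactly the customization the figure describes.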

Benefits of a data warehouse


A data warehouse maintains a copy of information from the source transaction systems. This architectural complexity provides the opportunity to:

Maintain data history, even if the source transaction systems do not.

Integrate data from multiple source systems, enabling a central view across the enterprise. This benefit is always valuable, but particularly so when the organization has grown by merger.

Improve data quality, by providing consistent codes and descriptions, flagging or even fixing bad data.

Present the organization's information consistently.

Provide a single common data model for all data of interest regardless of the data's source.

Restructure the data so that it makes sense to the business users.

Restructure the data so that it delivers excellent query performance, even for complex analytic queries, without impacting the operational systems.

Add value to operational business applications, notably customer relationship management (CRM) systems.
Disadvantages of a data warehouse


However, there are considerable disadvantages involved in moving data from multiple, often highly disparate, data sources to one data warehouse. These translate into long implementation time, high cost, lack of flexibility, dated information and limited capabilities:

Major data schema transformations from each of the data sources to one schema in the data warehouse, which can represent more than 50% of the total data warehouse effort.

Data owners lose control over their data, raising ownership (responsibility and accountability), security and privacy issues.

Long initial implementation time and associated high cost.

Adding new data sources takes time and involves a high cost.

Limited flexibility of use and types of users - requires multiple separate data marts for multiple uses and types of users.

Typically, data is static and dated.

Typically, no data drill-down capabilities.

Difficult to accommodate changes in data types and ranges, data source schema, indexes and queries.

Typically, cannot actively monitor changes in data.

Please check the data warehousing application from the MDU study book.

Network economy (Unit III) - from the MDU book.
