
INFORMATION TECHNOLOGY MANAGEMENT

MULTIMEDIA AND WEB DEVELOPMENT


Q1. Explain Data Compression techniques? Describe different authoring tools?
Q2. Write short notes on:-
a) JPEG and MPEG b) Web Designing
c) DVI and MIDI d) Video on demand

SOFTWARE ENGINEERING
Q1. What is software engineering? Explain the role and responsibilities of a
software engineer?
Q2. Explain software metrics, process models, testing techniques and software
quality factors?

SYSTEM ANALYSIS
Q1. Explain the Systems Development Life Cycle (SDLC)? Explain different types of
feasibilities?
Q2. Describe Data Flow Diagrams (DFD), project management, Warnier-Orr
diagrams and Nassi-Shneiderman charts?

In computing, JPEG (pronounced /ˈdʒeɪpɛɡ/, JAY-peg) is a commonly used method of lossy
compression for photographic images. The degree of compression can be adjusted,
allowing a selectable tradeoff between storage size and image quality. JPEG typically
achieves 10:1 compression with little perceptible loss in image quality.

JPEG compression is used in a number of image file formats. JPEG/Exif is the most
common image format used by digital cameras and other photographic image capture
devices; along with JPEG/JFIF, it is the most common format for storing and transmitting
photographic images on the World Wide Web. These format variations are often
not distinguished, and are simply called JPEG.
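As a concrete illustration of the selectable quality/size tradeoff described above, the following minimal Java sketch re-encodes an image as JPEG at an explicit compression quality using the standard javax.imageio API. The file names input.png and output.jpg are placeholders invented for the example.

import javax.imageio.IIOImage;
import javax.imageio.ImageIO;
import javax.imageio.ImageWriteParam;
import javax.imageio.ImageWriter;
import javax.imageio.stream.ImageOutputStream;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;

public class JpegQualityDemo {
    public static void main(String[] args) throws IOException {
        BufferedImage image = ImageIO.read(new File("input.png"));   // placeholder input file

        // Obtain a JPEG writer and set an explicit compression quality.
        ImageWriter writer = ImageIO.getImageWritersByFormatName("jpg").next();
        ImageWriteParam param = writer.getDefaultWriteParam();
        param.setCompressionMode(ImageWriteParam.MODE_EXPLICIT);
        param.setCompressionQuality(0.75f);   // 0.0 = smallest file, 1.0 = best quality

        try (ImageOutputStream out = ImageIO.createImageOutputStream(new File("output.jpg"))) {
            writer.setOutput(out);
            writer.write(null, new IIOImage(image, null, null), param);
        } finally {
            writer.dispose();
        }
    }
}

Lowering the quality value shrinks the file at the cost of more visible compression artifacts.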

The JPEG standard

The name "JPEG" stands for Joint Photographic Experts Group, the name of the committee
that created the JPEG standard and also other standards. It is one of two sub-groups of
ISO/IEC Joint Technical Committee 1, Subcommittee 29, Working Group 1 (ISO/IEC JTC
1/SC 29/WG 1) - titled as Coding of still pictures.[1][2][3] The group was organized in 1986,
[4]
issuing the first JPEG standard in 1992, which was approved in September 1992 as
ITU-T Recommendation T.81[5] and in 1994 as ISO/IEC 10918-1.

The JPEG standard specifies the codec, which defines how an image is compressed into a
stream of bytes and decompressed back into an image, but not the file format used to
contain that stream.[6] The Exif and JFIF standards define the commonly used formats for
interchange of JPEG-compressed images.

Typical usage

The JPEG compression algorithm is at its best on photographs and paintings of realistic
scenes with smooth variations of tone and color. For web usage, where the bandwidth
used by an image is important, JPEG is very popular. JPEG/Exif is also the most common
format saved by digital cameras.

On the other hand, JPEG is not as well suited for line drawings and other textual or iconic
graphics, where the sharp contrasts between adjacent pixels cause noticeable artifacts.
Such images are better saved in a lossless graphics format such as TIFF, GIF, PNG, or a raw
image format.

JPEG files

The file format known as 'JPEG Interchange Format' (JIF) is specified in Annex B of the
standard. However, this "pure" file format is rarely used, primarily because of the
difficulty of programming encoders and decoders that fully implement all aspects of the
standard and because of certain shortcomings of the standard:

• Color space definition
• Component sub-sampling registration
• Pixel aspect ratio definition

Video on demand

Video on Demand (VOD) or Audio Video on Demand (AVOD) are systems that
allow users to select and watch/listen to video or audio content on demand. IPTV
technology is often used to bring video on demand to televisions and PCs.[1]

Television VOD systems either stream content through a set-top box, a computer or other
device, allowing viewing in real time, or download it to a device such as a computer,
digital video recorder (also called a personal video recorder) or portable media player for
viewing at any time. The majority of cable- and telco-based television providers offer both
VOD streaming, including pay-per-view and free content, whereby a user buys or selects a
movie or television program and it begins to play on the television set almost
instantaneously, and VOD downloading, either to a DVR rented from the provider or to a
PC, for viewing in the future. Internet television, using the Internet, is an increasingly
popular form of video on demand.

Some airlines offer AVOD as in-flight entertainment to passengers through individually
controlled video screens embedded in seatbacks or armrests, or offered via portable media
players. Airline AVOD systems offer passengers the opportunity to select specific stored
video or audio content and play it on demand, including pause, fast forward, and rewind.

Functionality

Download and streaming video on demand systems provide the user with a large subset
of VCR functionality including pause, fast forward, fast rewind, slow forward, slow rewind,
jump to previous/future frame etc. These functions are called trick modes. For disk-based
streaming systems which store and stream programs from hard disk drive, trick modes
require additional processing and storage on the part of the server, because separate files
for fast forward and rewind must be stored. Memory-based VOD streaming systems have
the advantage of being able to perform trick modes directly from RAM, which requires no
additional storage or CPU cycles.

It is possible to put video servers on LANs, in which case they can provide very rapid
response to users. Streaming video servers can also serve a wider community via a WAN,
in which case the responsiveness may be reduced. Download VOD services are practical
for homes equipped with cable modems or DSL connections. Servers for traditional cable
and telco VOD services are usually placed at the cable head-end serving a particular
market as well as cable hubs in larger markets. In the telco world, they are placed in
either the central office, or a newly created location called a Video Head-End Office
(VHO).

Web design is the skill of creating presentations of content (usually hypertext
or hypermedia) that is delivered to an end-user through the World Wide Web, by way of a
Web browser or other Web-enabled software like Internet television clients, microblogging
clients and RSS readers.

The intent of web design[1] is to create a web site—a collection of electronic documents
and applications that reside on one or more web servers and present content and interactive
features/interfaces to the end user in the form of Web pages once requested. Such elements
as text, bit-mapped images (GIFs, JPEGs) and forms can be placed on the page using
HTML/XHTML/XML tags. Displaying more complex media (vector graphics, animations,
videos, sounds) requires plug-ins such as Adobe Flash, QuickTime, the Java run-time
environment, etc. Plug-ins are also embedded into web pages by using HTML/XHTML tags.

Improvements in browsers' compliance with W3C standards prompted a widespread
acceptance and usage of XHTML/XML in conjunction with Cascading Style Sheets (CSS) to
position and manipulate web page elements and objects. The latest standards and proposals
aim to give browsers the ability to deliver a wide variety of content and accessibility
options to the client, possibly without employing plug-ins.

Typically web pages are classified as static or dynamic:

• Static pages don’t change content and layout with every request unless a human
(web master/programmer) manually updates the page. A simple HTML page is an
example of static content.
• Dynamic pages adapt their content and/or appearance depending on the end-user's
input/interaction or changes in the computing environment (user, time, database
modifications, etc.). Content can be changed on the client side (end-user's
computer) by using client-side scripting languages (JavaScript, JScript, ActionScript,
etc.) to alter DOM elements (DHTML). Dynamic content is often compiled on the
server utilizing server-side scripting languages (Perl, PHP, ASP, JSP, ColdFusion,
etc.). Both approaches are usually used in complex applications; a minimal
server-side sketch follows this list.
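As an illustration of the server-side approach, the following minimal Java sketch uses the JDK's built-in com.sun.net.httpserver API to return a page whose content changes on every request. The port number 8080 and the page content are arbitrary choices made for this example, not requirements of any framework.

import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;
import java.time.LocalTime;

public class DynamicPageDemo {
    public static void main(String[] args) throws IOException {
        // Serve a dynamic page: the HTML is generated anew for each request.
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/", exchange -> {
            String html = "<html><body><h1>Server time: " + LocalTime.now() + "</h1></body></html>";
            byte[] body = html.getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().set("Content-Type", "text/html; charset=utf-8");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);   // the response differs from request to request
            }
        });
        server.start();
    }
}

A static page, by contrast, would simply return the same stored HTML file for every request.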

With growing specialization in the information technology field there is a strong
tendency to draw a clear line between web design and web development.

Web design is a kind of graphic design intended for development and styling of objects
of the Internet's information environment to provide them with high-end consumer
features and aesthetic qualities. The offered definition separates web design from web
programming, emphasizing the functional features of a web site, as well as positioning
web design as a kind of graphic design.[2]

The process of designing web pages, web sites, web applications or multimedia for the Web
may utilize multiple disciplines, such as animation, authoring, communication design,
corporate identity, graphic design, human-computer interaction, information architecture,
interaction design, marketing, photography, search engine optimization and typography.

• Markup languages (such as HTML, XHTML and XML)
• Style sheet languages (such as CSS and XSL)
• Client-side scripting (such as JavaScript)
• Server-side scripting (such as PHP and ASP)
• Database technologies (such as MySQL and PostgreSQL)
• Multimedia technologies (such as Flash and Silverlight)

Web pages and web sites can be static pages, or can be programmed to be dynamic pages
that automatically adapt content or visual appearance depending on a variety of factors,
such as input from the end-user, input from the Webmaster or changes in the computing
environment (such as the site's associated database having been modified).

With growing specialization within communication design and information technology
fields, there is a strong tendency to draw a clear line between web design specifically for
web pages and web development for the overall logistics of all web-based services.
SOFTWARE ENGINEERING
Q1. What is software engineering? Explain the role and responsibilities of a
software engineer?
Q2. Explain software metrics, process models, testing techniques and software
quality factors?

Software engineering

The Airbus A380 uses a substantial amount of software to create a "paperless" cockpit;
software engineering maps and plans the millions of lines of code constituting the plane's
software.

Software engineering is a profession and field of study dedicated to designing,
implementing, and modifying software so that it is of higher quality, more affordable,
maintainable, and faster to build. The term software engineering first appeared in the
1968 NATO Software Engineering Conference, and was meant to provoke thought
regarding the perceived "software crisis" at the time.[1][2] Since the field is still relatively
young compared to its sister fields of engineering, there is still much debate around what
software engineering actually is, and whether it conforms to the classical definition of
engineering. Some people argue that the development of computer software is more art
than science,[3] and that attempting to impose engineering disciplines over a type of art is
an exercise in futility because what represents good practice in the creation of software is
not even defined.[4] Others, such as Steve McConnell, argue that engineering's blend of art
and science to achieve practical ends provides a useful model for software development.[5]

The IEEE Computer Society's Software Engineering Body of Knowledge defines "software
engineering" as the application of a systematic, disciplined, quantifiable approach to the
development, operation, and maintenance of software, and the study of these approaches;
that is, the application of engineering to software.[6]

Software development, a much used and more generic term, does not necessarily subsume
the engineering paradigm. Although it is questionable what impact it has had on actual
software development over more than 40 years,[7][8] the field's future looks bright
according to Money Magazine and Salary.com, who rated "software engineering" as the
best job in the United States in 2006.

Role and responsibilities of a software engineer: Dr. Tan Nguyen, Principal Systems
Engineering Manager

Dr. Nguyen has more than 27 years of technical and management experience in IT, data
communication, systems engineering, networks, software engineering, and computer
science R&D. He has more than 20 years of teaching experience in computer science,
information systems, data communication, and mathematics. He specializes in applying
object-oriented rapid prototyping methodologies to systems software design and
development. As an IT expert, he has applied cost-effective, innovative solutions to
efficiently improve client network environments.

From November 2005 to present, Dr. Tan Nguyen is a Principal Systems Engineering Manager for the Army
Knowledge Online (AKO) Enterprise Services (ES) and was a Chief Architect for the Simulation and
Information Technology Operation (SITO) Group at SAIC. His responsibility is to oversee a complex
interdisciplinary software development project and to provide architectural analysis and design support to assist
in implementing technical capabilities to satisfy functional requirements and interoperability needs for the AKO,
Future Combat Systems (FCS), and Theater Effect Based Operations (TEBO) projects. Dr. Nguyen also works in
a team environment with principal architects and data modelers to analyze and optimize IT systems and business
process requirements. Dr. Nguyen is currently teaching a graduate level course, the Systems Management and
Evaluation (SYST530), at George Mason University, Fairfax, Virginia, http://classweb.gmu.edu/tnguy1
From May 2003 to November 2005, Dr. Tan Nguyen was a Program Management Director of Systems
Enterprise Architecture at Lockheed Martin (LM) Information Technology (IT). He also served as a member of
the LM Corporate Architecture Development User Group. He led several Information Technology Technical
groups at the LMIT DoD Services. Dr. Nguyen was also an executive mentor of the Executive Mentoring
Program at LMIT. From October 1996 to May 2003, at EDS, Dr. Tan Nguyen served as chief systems architect
for the Telecommunications Group, software development team lead for the U.S. Army Knowledge Online
(AKO) program, and as software engineering manager and chief systems architect for the Defense Logistics
Agency Corporate Data Center (DLACDC). He directed numerous projects in deriving Enterprise Architecture
concepts, enterprise network design, network management, and implementing AKO functions in Java in a J2EE
environment using ATG Dynamo Portal. He served as the principal DLACDC consultant for integrating and
consolidating mid-tier applications and hardware into a single data center. From 1978 to 1996, Dr. Nguyen held
technical management positions at Software Productivity Consortium, Network Imaging Corporation, Infodata
Systems Inc., MITRE Corporation, U.S. House of Representatives, and Control Data Corporation.

Contributing ideas that work


Dr. Nguyen successfully applies his extensive experience in computer science, software systems engineering,
data communications and network, and enterprise architecture to a wide range of IT requirements, as illustrated
by the following successful projects:

• As senior technical manager at the U.S. House of Representatives, Dr. Nguyen designed and
developed the inventive Electronic Voting System (EVS), including developing and implementing
numerous operating system programs, hardware drivers, data communication modules, and applications
in high-level and low-level computer languages. The EVS is still in operation at the U.S. House.
• While working as a task leader at MITRE Corporation, Dr. Nguyen successfully led several
large-scale development projects for the Federal Aviation Administration (FAA), including software
development, integration, and communications systems that earned him Outstanding Achievement
Recognition from the FAA as well as the MITRE Program Achievement Award.

• As a senior web developer, Dr. Nguyen, within a very short period, successfully developed,
implemented, and delivered the DLA Web Survey for over a million users worldwide.

• As a system and network architect at the Defense Logistics Agency (DLA), Dr. Nguyen
successfully resolved the asynchronous versus synchronous distributed application problems among the
IBM mainframe, Windows, and midrange computer network. He successfully devised the message
queue model and used MQ Series to implement the solution.

• As an enterprise architect at EDS Federal Systems, Dr. Nguyen successfully redesigned part
of the Department of Navy (DoN) Portal to improve the network performance.

• As an enterprise architect at EDS Army-Pentagon Group, Dr. Nguyen successfully derived the
Army Architecture Concept that is currently being implemented in the DISC4 Installation Information
Infrastructure Architecture (I3A).

Experience that brings insight


• Dr. Nguyen has served as chief system architect, senior systems engineer, and software
engineer for clients in U.S. government agencies, computer manufacturing companies, research and
development (R&D) firms, and universities, including the following projects:

• During more than 25 years as an adjunct professor at George Mason University, Dr. Nguyen
has taught doctoral level courses on Information Technology, computer science, software engineering,
data communications, computer organization and architecture, operating systems, and language
processors and compilers. He conducts doctoral level courses in Information Technology and Software
Engineering and is a member of several doctoral dissertation committees.

• As director of Systems Enterprise Architecture at LMIT, Dr. Nguyen developed technical
solutions for numerous Business Development projects.

• Since joining EDS, Dr. Nguyen served the DLA as a technical manager and senior systems
developer, producing the large and complex cataloguing reengineering systems. He derived an
enterprise architectural solution for the U.S. Department of Health and Human Services (HHS) unified
backbone network. He also was a principal information technologist and senior enterprise integration
adviser for the Defense Medical Information Management/Systems Integration Design, Development,
Operations, and Maintenance Services (D/SIDDOMS) program. He managed numerous software
development and network engineering teams that developed large, complex Web applications for the
DLIS, CEIS, and TriCare programs. His teams also designed and implemented—and in one case
reverse-engineered—complex systems that incorporated Windows NT servers, UNIX servers, and
mainframe computers.

• As director of IT development for the Software Productivity Consortium, Dr. Nguyen
managed a staff of 128 and formulated enterprisewide IT management standards. He created quality
metrics, cost estimates, and workloads for products developed by his team.

• As Web products development manager for Network Imaging Corporation, Dr. Nguyen
directed and implemented all aspects of software and product development, including designing and
releasing the Web Multimedia Object Management System for diverse platforms such as Windows NT,
Sun Solaris, and IBM AIX.
• As project manager and principal computer scientist for Infodata Systems, Inc., Dr. Nguyen
was responsible for network design and system software development life cycle activities. He
performed data modeling, using manual and automated Integrated Computer-Aided Software
Engineering (ICASE) tools. He also designed, installed, and implemented APIs.

• Dr. Nguyen previously served as MITRE Corporation task leader for the FAA Federal
Research Center. He led software development for an air traffic management simulation system and an
arrival rate monitoring system. He evaluated and tested new relational database management systems
(RDBMSs) and Ada compilers, developed a generic applications design methodology, and designed
and completed the integration of numerous FAA systems.

Influence in the industry


Dr. Nguyen is listed in the International Who’s Who and is a member of the American Association of University
Professors, Institute of Electrical and Electronics Engineers (IEEE) Computer Science Society, and IEEE
Software Engineering Society. He also serves on the Technical Committee of the IEEE Computer Science
Society. He has published numerous papers and IEEE articles on systems and network engineering, including a
conceptual paper that innovatively defines the entire Army Enterprise Architecture concept currently being
implemented at the Pentagon.

Proficiencies and expertise


Dr. Nguyen offers proven proficiency in the following roles:

• Program Management Director, Principal Systems Engineering Manager, Enterprise Architect, Chief
Systems Engineer, and software development manager.

• University professor teaching Computer Science, Data Communications, Systems and Software
Engineering.

• Dr. Nguyen is an expert in the use of diverse hardware and software products such as the following:

• Windows, UNIX, Linux, Sun Solaris, IBM AIX, OS390, and HP-UX

• J2EE, Spring Framework, Struts, Hibernate, .NET, MS Windows Environment.

• C/C++, Java, JavaScript, Delphi, Visual Basic, Ada, HTML, XML, Perl, COBOL, UNIX
Shell, Oracle SQL, and standard SQL

• Object-oriented graphic tools, ICASE, Sybase, Geographic Information System, Electronic
Document Management Systems (EDMS) and Workflow products (OpenDocs, Verity, Documentum),
and RDBMSs.

• Dr. Nguyen has published several technical papers.

Dr. Nguyen earned a Ph.D. in Information Technology, with a concentration in Computer Science, from George
Mason University, 1991. He also earned an M.S. in Operations Research and Mathematics from George Mason
University, 1980, and a B.S. in Physics and Chemistry from Saigon University, 1972. Dr. Nguyen currently
holds a DoD Secret Clearance and an interim DoD Top Secret.

Explain software metrics, process models, testing techniques and software quality
factors?
In the context of software engineering, software quality measures how well software is
designed (quality of design), and how well the software conforms to that design (quality
of conformance),[1] although there are several different definitions.

Whereas quality of conformance is concerned with implementation (see Software Quality
Assurance), quality of design measures how valid the design and requirements are in
creating a worthwhile product.[2]

Definition

One of the challenges of Software Quality is that "everyone feels they understand it".[3]

A definition in Steve McConnell's Code Complete divides software into two pieces:
internal and external quality characteristics. External quality characteristics are those
parts of a product that face its users, whereas internal quality characteristics are those that
do not.[4]

Another definition by Dr. Tom DeMarco says "a product's quality is a function of how
much it changes the world for the better."[5] This can be interpreted as meaning that user
satisfaction is more important than anything in determining software quality.[1]

Another definition, coined by Gerald Weinberg in Quality Software Management: Systems
Thinking, is "Quality is value to some person." This definition stresses that quality is
inherently subjective - different people will experience the quality of the same software
very differently. One strength of this definition is the questions it invites software teams
to consider, such as "Who are the people we want to value our software?" and "What will
be valuable to them?"

History
Software product quality

• Product quality
o conformance to requirements or program specification; related to Reliability
• Scalability
• Correctness
• Completeness
• Absence of bugs
• Fault-tolerance
o Extensibility
o Maintainability
• Documentation

Source code quality

A computer has no concept of "well-written" source code. However, from a human point
of view source code can be written in a way that has an effect on the effort needed to
comprehend its behavior. Many source code programming style guides, which often stress
readability and usually language-specific conventions, are aimed at reducing the cost of
source code maintenance. Some of the issues that affect code quality include:

• Readability
• Ease of maintenance, testing, debugging, fixing, modification and portability
• Low complexity
• Low resource consumption: memory, CPU
• Number of compilation or lint warnings
• Robust input validation and error handling, established by software fault injection

Methods to improve the quality:


• Refactoring (a small sketch follows this list)
• Code Inspection or software review
• Documenting code
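As an illustration of refactoring, the short Java sketch below extracts duplicated discount logic into a single helper method. The class and method names are invented for the example and do not refer to any real codebase.

// Before refactoring, the same discount calculation was repeated in every
// price method. Extracting it shortens the code and gives the shared
// behaviour a single place to change.
public class PriceCalculator {

    // Extracted helper: one subroutine now carries the shared logic.
    private static double applyDiscount(double price, double rate) {
        if (rate < 0 || rate > 1) throw new IllegalArgumentException("rate must be in [0,1]");
        return price * (1.0 - rate);
    }

    public static double bookPrice(double listPrice) {
        return applyDiscount(listPrice, 0.10);   // books: 10% off
    }

    public static double electronicsPrice(double listPrice) {
        return applyDiscount(listPrice, 0.05);   // electronics: 5% off
    }

    public static void main(String[] args) {
        System.out.println(bookPrice(20.0));         // 18.0
        System.out.println(electronicsPrice(100.0)); // 95.0
    }
}

The observable behaviour is unchanged; only readability and maintainability improve.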

Software reliability

Software reliability is an important facet of software quality. It is defined as "the
probability of failure-free operation of a computer program in a specified environment for
a specified time".[6]

One of reliability's distinguishing characteristics is that it is objective, measurable, and
can be estimated, whereas much of software quality is based on subjective criteria.[7] This
distinction is especially important in the discipline of Software Quality Assurance. These
measured criteria are typically called software metrics.

History

With software embedded into many devices today, software failure has caused more than
inconvenience. Software errors have even caused human fatalities. The causes have
ranged from poorly designed user interfaces to direct programming errors. An example of
a programming error that led to multiple deaths is discussed in Dr. Leveson's paper [1]
(PDF). This has resulted in requirements for the development of some types of software. In the
United States, both the Food and Drug Administration (FDA) and Federal Aviation
Administration (FAA) have requirements for software development.

The goal of reliability

The need for a means to objectively determine software quality comes from the desire to
apply the techniques of contemporary engineering fields to the development of software.
That desire is a result of the common observation, by both lay-persons and specialists,
that computer software does not work the way it ought to. In other words, software is
seen to exhibit undesirable behaviour, up to and including outright failure, with
consequences for the data which is processed, the machinery on which the software runs,
and by extension the people and materials which those machines might negatively affect.
The more critical the application of the software to economic and production processes,
or to life-sustaining systems, the more important is the need to assess the software's
reliability.

Regardless of the criticality of any single software application, it is also more and more
frequently observed that software has penetrated deeply into almost every aspect of modern
life through the technology we use. It is only expected that this infiltration will continue,
along with an accompanying dependency on the software by the systems which maintain
our society. As software becomes more and more crucial to the operation of the systems
on which we depend, the argument goes, it only follows that the software should offer a
concomitant level of dependability. In other words, the software should behave in the
way it is intended, or even better, in the way it should.
The challenge of reliability

The circular logic of the preceding sentence is not accidental—it is meant to illustrate a
fundamental problem in the issue of measuring software reliability, which is the difficulty
of determining, in advance, exactly how the software is intended to operate. The problem
seems to stem from a common conceptual error in the consideration of software, which is
that software in some sense takes on a role which would otherwise be filled by a human
being. This is a problem on two levels. Firstly, most modern software performs work
which a human could never perform, especially at the high level of reliability that is often
expected from software in comparison to humans. Secondly, software is fundamentally
incapable of most of the mental capabilities of humans which separate them from mere
mechanisms: qualities such as adaptability, general-purpose knowledge, a sense of
conceptual and functional context, and common sense.

Nevertheless, most software programs could safely be considered to have a particular,
even singular purpose. If the possibility can be allowed that said purpose can be well or
even completely defined, it should present a means for at least considering objectively
whether the software is, in fact, reliable, by comparing the expected outcome to the actual
outcome of running the software in a given environment, with given data. Unfortunately,
it is still not known whether it is possible to exhaustively determine either the expected
outcome or the actual outcome of the entire set of possible environment and input data to
a given program, without which it is probably impossible to determine the program's
reliability with any certainty.

However, various attempts are in the works to rein in the vastness of the space
of software's environmental and input variables, both for actual programs and theoretical
descriptions of programs. Such attempts to improve software reliability can be applied at
different stages of a program's development, in the case of real software. These stages
principally include: requirements, design, programming, testing, and runtime evaluation.
The study of theoretical software reliability is predominantly concerned with the concept
of correctness, a mathematical field of computer science which is an outgrowth of
language and automata theory.

Reliability in program development

Requirements

A program cannot be expected to work as desired if the developers of the program do not,
in fact, know the program's desired behaviour in advance, or if they cannot at least
determine its desired behaviour in parallel with development, in sufficient detail. What
level of detail is considered sufficient is hotly debated. The idea of perfect detail is
attractive, but may be impractical, if not actually impossible. This is because the desired
behaviour tends to change as the possible range of the behaviour is determined through
actual attempts, or more accurately, failed attempts, to achieve it.
Whether a program's desired behaviour can be successfully specified in advance is a
moot point if the behaviour cannot be specified at all, and this is the focus of attempts to
formalize the process of creating requirements for new software projects. In situ with the
formalization effort is an attempt to help inform non-specialists, particularly non-
programmers, who commission software projects without sufficient knowledge of what
computer software is in fact capable of. Communicating this knowledge is made more
difficult by the fact that, as hinted above, even programmers cannot always know what is
actually possible for software in advance of trying.

Design

While requirements are meant to specify what a program should do, design is meant, at
least at a high level, to specify how the program should do it. The usefulness of design is
also questioned by some, but those who look to formalize the process of ensuring
reliability often offer good software design processes as the most significant means to
accomplish it. Software design usually involves the use of more abstract and general
means of specifying the parts of the software and what they do. As such, it can be seen as
a way to break a large program down into many smaller programs, such that those
smaller pieces together do the work of the whole program.

The purposes of high-level design are as follows. It separates what are considered to be
problems of architecture, or overall program concept and structure, from problems of
actual coding, which solve problems of actual data processing. It applies additional
constraints to the development process by narrowing the scope of the smaller software
components, and thereby—it is hoped—removing variables which could increase the
likelihood of programming errors. It provides a program template, including the
specification of interfaces, which can be shared by different teams of developers working
on disparate parts, such that they can know in advance how each of their contributions
will interface with those of the other teams. Finally, and perhaps most controversially, it
specifies the program independently of the implementation language or languages,
thereby removing language-specific biases and limitations which would otherwise creep
into the design, perhaps unwittingly on the part of programmer-designers.

Programming

The history of computer programming language development can often be best understood
in the light of attempts to master the complexity of computer programs, which otherwise
becomes more difficult to understand in proportion (perhaps exponentially) to the size of
the programs. (Another way of looking at the evolution of programming languages is
simply as a way of getting the computer to do more and more of the work, but this may
be a different way of saying the same thing.) Lack of understanding of a program's
overall structure and functionality is a sure way to fail to detect errors in the program, and
thus the use of better languages should, conversely, reduce the number of errors by
enabling a better understanding.
Improvements in languages tend to provide incrementally what software design has
attempted to do in one fell swoop: consider the software at ever greater levels of
abstraction. Such inventions as statement, sub-routine, file, class, template, library,
component and more have allowed the arrangement of a program's parts to be specified
using abstractions such as layers, hierarchies and modules, which provide structure at
different granularities, so that from any point of view the program's code can be imagined
to be orderly and comprehensible.

In addition, improvements in languages have enabled more exact control over the shape
and use of data elements, culminating in the abstract data type. These data types can be
specified to a very fine degree, including how and when they are accessed, and even the
state of the data before and after it is accessed.
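A small illustrative example of such an abstract data type in Java: the internal representation is hidden, and the state can only be inspected or changed through the operations the type itself defines. The BankBalance class below is invented purely for illustration.

// A minimal abstract data type: callers never touch the internal field directly.
public final class BankBalance {
    private long cents;  // internal representation, hidden from callers

    public BankBalance(long initialCents) {
        if (initialCents < 0) throw new IllegalArgumentException("balance cannot be negative");
        this.cents = initialCents;
    }

    public void deposit(long amountCents) {
        if (amountCents <= 0) throw new IllegalArgumentException("deposit must be positive");
        cents += amountCents;
    }

    public long balanceCents() {
        return cents;  // state is observable only through this accessor
    }
}

Because every access goes through these methods, the invariants (a non-negative balance, positive deposits) are enforced in one place.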

Software Build and Deployment

Many programming languages such as C and Java require the program "source code" to
be translated into a form that can be executed by a computer. This translation is done by
a program called a compiler. Additional operations may be involved to associate, bind,
link or package files together in order to create a usable runtime configuration of the
software application. The totality of the compiling and assembly process is generically
called "building" the software.

The software build is critical to software quality because if any of the generated files are
incorrect the software build is likely to fail. And, if the incorrect version of a program is
inadvertently used, then testing can lead to false results.

Software builds are typically done in a work area unrelated to the runtime area, such as the
application server. For this reason, a deployment step is needed to physically transfer the
software build products to the runtime area. The deployment procedure may also involve
technical parameters, which, if set incorrectly, can also prevent software testing from
beginning. For example, a Java application server may have options for parent-first or
parent-last class loading. Using the incorrect parameter can cause the application to fail to
execute on the application server.

The technical activities supporting software quality, including build, deployment, change
control and reporting, are collectively known as software configuration management. A
number of software tools have arisen to help meet the challenges of configuration
management, including file control tools and build control tools.

Testing

Software testing, when done correctly, can increase overall software quality of
conformance by testing that the product conforms to its requirements. Testing includes,
but is not limited to:
1. Unit Testing
2. Functional Testing
3. Regression Testing
4. Performance Testing
5. Failover Testing
6. Usability Testing

A number of agile methodologies use testing early in the development cycle to ensure
quality in their products. For example, the test-driven development practice, where tests
are written before the code they will test, is used in Extreme Programming to ensure
quality.
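For example, a test-first workflow might begin with a unit test like the sketch below, written before the production code exists; the add() method is then implemented (or fixed) until the test passes. The sketch assumes the JUnit 5 library, and the calculator code is invented for the example.

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class CalculatorTest {

    // Stand-in for production code; in test-driven development this method
    // would be written only after the test below was seen to fail.
    static int add(int a, int b) {
        return a + b;
    }

    @Test
    void addReturnsSumOfOperands() {
        assertEquals(5, add(2, 3));
        assertEquals(0, add(-4, 4));
    }
}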

Runtime

Runtime reliability determinations are similar to tests, but go beyond simple confirmation
of behaviour to the evaluation of qualities such as performance and interoperability with
other code or particular hardware configurations.

Software quality factors

A software quality factor is a non-functional requirement for a software program which is
not called up by the customer's contract, but nevertheless is a desirable requirement
which enhances the quality of the software program. Note that none of these factors are
binary; that is, they are not “either you have it or you don’t” traits. Rather, they are
characteristics that one seeks to maximize in one’s software to optimize its quality. So
rather than asking whether a software product “has” factor x, ask instead the degree to
which it does (or does not).

Some software quality factors are listed here:

Understandability
Clarity of purpose. This goes further than just a statement of purpose; all of the
design and user documentation must be clearly written so that it is easily
understandable. This is obviously subjective in that the user context must be taken
into account: for instance, if the software product is to be used by software
engineers it is not required to be understandable to the layman.
Completeness
Presence of all constituent parts, with each part fully developed. This means that
if the code calls a subroutine from an external library, the software package must
provide reference to that library and all required parameters must be passed. All
required input data must also be available.
Conciseness
Minimization of excessive or redundant information or processing. This is
important where memory capacity is limited, and it is generally considered good
practice to keep lines of code to a minimum. It can be improved by replacing
repeated functionality by one subroutine or function which achieves that
functionality. It also applies to documents.
Portability
Ability to be run well and easily on multiple computer configurations. Portability
can mean both between different hardware—such as running on a PC as well as a
smartphone—and between different operating systems—such as running on both
Mac OS X and GNU/Linux.
Consistency
Uniformity in notation, symbology, appearance, and terminology within itself.
Maintainability
Propensity to facilitate updates to satisfy new requirements. Thus the software
product that is maintainable should be well-documented, should not be complex,
and should have spare capacity for memory, storage and processor utilization and
other resources.
Testability
Disposition to support acceptance criteria and evaluation of performance. Such a
characteristic must be built-in during the design phase if the product is to be easily
testable; a complex design leads to poor testability.
Usability
Convenience and practicality of use. This is affected by such things as the human-
computer interface. The component of the software that has most impact on this is
the user interface (UI), which for best usability is usually graphical (i.e. a GUI).
Reliability
Ability to be expected to perform its intended functions satisfactorily. This
implies a time factor in that a reliable product is expected to perform correctly
over a period of time. It also encompasses environmental considerations in that
the product is required to perform correctly in whatever conditions it finds itself
(sometimes termed robustness).
Structuredness
Organisation of constituent parts in a definite pattern. A software product written
in a block-structured language such as Pascal will satisfy this characteristic.
Efficiency
Fulfillment of purpose without waste of resources, such as memory, space and
processor utilization, network bandwidth, time, etc.
Security
Ability to protect data against unauthorized access and to withstand malicious or
inadvertent interference with its operations. Besides the presence of appropriate
security mechanisms such as authentication, access control and encryption,
security also implies resilience in the face of malicious, intelligent and adaptive
attackers.

Measurement of software quality factors

There are varied perspectives within the field on measurement. There are a great many
measures that are valued by some professionals—or in some contexts, that are decried as
harmful by others. Some believe that quantitative measures of software quality are
essential. Others believe that contexts where quantitative measures are useful are quite
rare, and so prefer qualitative measures. Several leaders in the field of software testing
have written about the difficulty of measuring what we truly want to measure well.[8][9]

One example of a popular metric is the number of faults encountered in the software.
Software that contains few faults is considered by some to have higher quality than
software that contains many faults. Questions that can help determine the usefulness of
this metric in a particular context include:

1. What constitutes “many faults?” Does this differ depending upon the purpose of
the software (e.g., blogging software vs. navigational software)? Does this take
into account the size and complexity of the software?
2. Does this account for the importance of the bugs (and the importance to the
stakeholders of the people those bugs bug)? Does one try to weight this metric by
the severity of the fault, or the incidence of users it affects? If so, how? And if
not, how does one know that 100 faults discovered is better than 1000?
3. If the count of faults being discovered is shrinking, how do I know what that
means? For example, does that mean that the product is now higher quality than it
was before? Or that this is a smaller/less ambitious change than before? Or that
fewer tester-hours have gone into the project than before? Or that this project was
tested by less skilled testers than before? Or that the team has discovered that
fewer faults reported is in their interest?

This last question points to an especially difficult one to manage. All software quality
metrics are in some sense measures of human behavior, since humans create software.[8]
If a team discovers that they will benefit from a drop in the number of reported bugs,
there is a strong tendency for the team to start reporting fewer defects. That may mean
that email begins to circumvent the bug tracking system, or that four or five bugs get
lumped into one bug report, or that testers learn not to report minor annoyances. The
difficulty is measuring what we mean to measure, without creating incentives for
software programmers and testers to consciously or unconsciously “game” the
measurements.

Software quality factors cannot be measured because of their vague definitions. It is
necessary to find measurements, or metrics, which can be used to quantify them as non-
functional requirements. For example, reliability is a software quality factor, but cannot
be evaluated in its own right. However, there are related attributes to reliability, which
can indeed be measured. Some such attributes are mean time to failure, rate of failure
occurrence, and availability of the system. Similarly, an attribute of portability is the
number of target-dependent statements in a program.
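The sketch below illustrates how such reliability attributes might be computed from observed data. The failure times and repair time are made-up sample values, and availability is taken as MTTF / (MTTF + MTTR), a common formulation.

public class ReliabilityMetrics {
    public static void main(String[] args) {
        double[] hoursBetweenFailures = {120.0, 95.0, 200.0, 150.0};  // hypothetical observations
        double meanTimeToRepairHours = 2.5;                           // hypothetical MTTR

        // Mean time to failure: average of the observed failure-free intervals.
        double mttf = 0;
        for (double h : hoursBetweenFailures) mttf += h;
        mttf /= hoursBetweenFailures.length;

        double availability = mttf / (mttf + meanTimeToRepairHours);

        System.out.printf("MTTF = %.1f hours%n", mttf);
        System.out.printf("Availability = %.4f%n", availability);
    }
}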

A scheme that could be used for evaluating software quality factors is given below. For
every characteristic, there is a set of questions which are relevant to that characteristic.
Some type of scoring formula could be developed based on the answers to these
questions, from which a measurement of the characteristic can be obtained.
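One possible scoring scheme, purely illustrative and not standardized, is sketched below: each factor is scored as the fraction of its checklist questions (listed in the following subsections) answered "yes", and the factor scores are combined using weights chosen by the assessor. All numbers are hypothetical.

import java.util.Map;

public class QualityScore {
    // Fraction of checklist questions answered "yes" for one factor.
    static double factorScore(int yesAnswers, int totalQuestions) {
        return (double) yesAnswers / totalQuestions;
    }

    public static void main(String[] args) {
        // Hypothetical answers for three of the factors described above.
        Map<String, Double> scores = Map.of(
                "Understandability", factorScore(3, 4),
                "Portability",       factorScore(2, 4),
                "Reliability",       factorScore(4, 4));
        Map<String, Double> weights = Map.of(
                "Understandability", 0.3,
                "Portability",       0.2,
                "Reliability",       0.5);

        // Overall score: weighted average of the per-factor scores.
        double overall = scores.entrySet().stream()
                .mapToDouble(e -> e.getValue() * weights.get(e.getKey()))
                .sum();
        System.out.printf("Overall quality score = %.2f%n", overall);
    }
}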
Understandability

Are variable names descriptive of the physical or functional property represented? Do
uniquely recognisable functions contain adequate comments so that their purpose is
clear? Are deviations from forward logical flow adequately commented? Are all elements
of an array functionally related?...

Completeness

Are all necessary components available? Does any process fail for lack of resources or
programming? Are all potential pathways through the code accounted for, including
proper error handling?

Conciseness

Is all code reachable? Is any code redundant? How many statements within loops could
be placed outside the loop, thus reducing computation time? Are branch decisions too
complex?

Portability

Does the program depend upon system or library routines unique to a particular
installation? Have machine-dependent statements been flagged and commented? Has
dependency on internal bit representation of alphanumeric or special characters been
avoided? How much effort would be required to transfer the program from one
hardware/software system or environment to another?

Consistency

Is one variable name used to represent different logical or physical entities in the
program? Does the program contain only one representation for any given physical or
mathematical constant? Are functionally similar arithmetic expressions similarly
constructed? Is a consistent scheme used for indentation, nomenclature, the color palette,
fonts and other visual elements?

Maintainability

Has some memory capacity been reserved for future expansion? Is the design cohesive—
i.e., does each module have distinct, recognisable functionality? Does the software allow
for a change in data structures (object-oriented designs are more likely to allow for this)?
If the code is procedure-based (rather than object-oriented), is a change likely to require
restructuring the main program, or just a module?
Testability

Are complex structures employed in the code? Does the detailed design contain clear
pseudo-code? Is the pseudo-code at a higher level of abstraction than the code? If tasking
is used in concurrent designs, are schemes available for providing adequate test cases?

Usability

Is a GUI used? Is there adequate on-line help? Is a user manual provided? Are
meaningful error messages provided?

Reliability

Are loop indexes range-tested? Is input data checked for range errors? Is divide-by-zero
avoided? Is exception handling provided? Reliability is the probability that the software
performs its intended functions correctly over a specified period of time under stated
operating conditions; note that a failure may also stem from a problem in the
requirements document.

Structuredness

Is a block-structured programming language used? Are modules limited in size? Have the
rules for transfer of control between modules been established and followed?

Efficiency

Have functions been optimized for speed? Have repeatedly used blocks of code been
formed into subroutines? Has the program been checked for memory leaks or overflow
errors?

Security

Does the software protect itself and its data against unauthorized access and use? Does it
allow its operator to enforce security policies? Are security mechanisms appropriate,
adequate and correctly implemented? Can the software withstand attacks that can be
anticipated in its intended environment? Is the software free of errors that would make it
possible to circumvent its security mechanisms? Does the architecture limit the potential
impact of yet unknown errors?

User's perspective

In addition to the technical qualities of software, the end user's experience also
determines the quality of software. This aspect of software quality is called usability. It is
hard to quantify the usability of a given software product. Some important questions to be
asked are:
SYSTEM ANALYSIS
Q1. Explain the Systems Development Life Cycle (SDLC)? Explain different types of
feasibilities?
Q2. Describe Data Flow Diagrams (DFD), project management, Warnier-Orr
diagrams and Nassi-Shneiderman charts?

Explain the Systems Development Life Cycle (SDLC)? Explain different types of
feasibilities?

Systems Development Life Cycle

Model of the Systems Development Life Cycle with the Maintenance bubble highlighted.

The Systems Development Life Cycle (SDLC), or Software Development Life Cycle in
systems engineering and software engineering, is the process of creating or altering
systems, and the models and methodologies that people use to develop these systems. The
concept generally refers to computer or information systems.

In software engineering the SDLC concept underpins many kinds of software development
methodologies. These methodologies form the framework for planning and controlling the
creation of an information system[1]: the software development process.

Overview

Systems Development Life Cycle (SDLC) is a logical process used by a systems analyst
to develop an information system, including requirements, validation, training, and user
(stakeholder) ownership. Any SDLC should result in a high quality system that meets or
exceeds customer expectations, reaches completion within time and cost estimates, works
effectively and efficiently in the current and planned Information Technology infrastructure,
and is inexpensive to maintain and cost-effective to enhance.[2]

Computer systems are complex and often (especially with the recent rise of Service-
Oriented Architecture) link multiple traditional systems potentially supplied by different
software vendors. To manage this level of complexity, a number of SDLC models have
been created: "waterfall"; "fountain"; "spiral"; "build and fix"; "rapid prototyping";
"incremental"; and "synchronize and stabilize".[citation needed]

SDLC models can be described along a spectrum of agile to iterative to sequential. Agile
methodologies, such as XP and Scrum, focus on light-weight processes which allow for
rapid changes along the development cycle. Iterative methodologies, such as Rational
Unified Process and Dynamic Systems Development Method, focus on limited project scopes
and expanding or improving products by multiple iterations. Sequential or big-design-
upfront (BDUF) models, such as Waterfall, focus on complete and correct planning to
guide large projects and risks to successful and predictable results.

Some agile and iterative proponents confuse the term SDLC with sequential or "more
traditional" processes; however, SDLC is an umbrella term for all methodologies for the
design, implementation, and release of software.[3][4]

In project management a project can be defined both with a project life cycle (PLC) and an
SDLC, during which slightly different activities occur. According to Taylor (2004) "the
project life cycle encompasses all the activities of the project, while the systems
development life cycle focuses on realizing the product requirements".[5]

History

The systems development lifecycle (SDLC) is a type of methodology used to describe the
process for building information systems, intended to develop information systems in a
very deliberate, structured and methodical way, reiterating each stage of the life cycle. The
systems development life cycle, according to Elliott & Strachan & Radford (2004),
"originated in the 1960s to develop large scale functional business systems in an age of
large scale business conglomerates. Information systems activities revolved around heavy
data processing and number crunching routines".[6]

Several systems development frameworks have been partly based on SDLC, such as the
Structured Systems Analysis and Design Method (SSADM) produced for the UK government
Office of Government Commerce in the 1980s. Ever since, according to Elliott (2004), "the
traditional life cycle approaches to systems development have been increasingly replaced
with alternative approaches and frameworks, which attempted to overcome some of the
inherent deficiencies of the traditional SDLC".[6]

Systems development phases

Systems Development Life Cycle (SDLC) adheres to important phases that are essential
for developers, such as planning, analysis, design, and implementation, and are explained in
the section below. There are several Systems Development Life Cycle Models in
existence. The oldest model, that was originally regarded as "the Systems Development
Life Cycle" is the waterfall model: a sequence of stages in which the output of each stage
becomes the input for the next. These stages generally follow the same basic steps but
many different waterfall methodologies give the steps different names and the number of
steps seems to vary between 4 and 7. There is no definitively correct Systems
Development Life Cycle model, but the process can be characterized and divided into
several steps.
The SDLC can be divided into ten phases during which defined IT work products are
created or modified. The tenth phase occurs when the system is disposed of and the task
performed is either eliminated or transferred to other systems. The tasks and work
products for each phase are described in subsequent chapters. Not every project will
require that the phases be sequentially executed. However, the phases are interdependent.
Depending upon the size and complexity of the project, phases may be combined or may
overlap.[7]

Initiation/planning

To generate a high-level view of the intended project and determine the goals of the
project. The feasibility study is sometimes used to present the project to upper
management in an attempt to gain funding. Projects are typically evaluated in three areas
of feasibility: economical, operational or organizational, and technical. Furthermore, it is
also used as a reference to keep the project on track and to evaluate the progress of the
MIS team.[8] The MIS is also a complement of those phases. This phase is also called the
analysis phase.
Requirements gathering and analysis

The goal of systems analysis is to determine where the problem is in an attempt to fix the
system. This step involves breaking down the system into different pieces and drawing
diagrams to analyze the situation, analyzing project goals, breaking down what needs to
be created and attempting to engage users so that definite requirements can be defined.
Requirements Gathering sometimes requires individuals/teams from client as well as
service provider sides to get detailed and accurate requirements.

Design

In systems design functions and operations are described in detail, including screen
layouts, business rules, process diagrams and other documentation. The output of this
stage will describe the new system as a collection of modules or subsystems.

The design stage takes as its initial input the requirements identified in the approved
requirements document. For each requirement, a set of one or more design elements will
be produced as a result of interviews, workshops, and/or prototype efforts. Design
elements describe the desired software features in detail, and generally include functional
hierarchy diagrams, screen layout diagrams, tables of business rules, business process
diagrams, pseudocode, and a complete entity-relationship diagram with a full data
dictionary. These design elements are intended to describe the software in sufficient
detail that skilled programmers may develop the software with minimal additional input.

Build or coding

Modular and subsystem program code is written during this stage. Unit testing and module testing are done in this stage by the developers. This stage is intermingled with the next, in that individual modules will need testing before integration into the main project.
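
A minimal sketch of developer-level testing during the build stage, using Python's standard unittest module. The module function apply_discount and the test cases are hypothetical, chosen only to illustrate a module being tested by its developer before integration.

import unittest

def apply_discount(price: float, percent: float) -> float:
    """Hypothetical module-level function under development."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountUnitTest(unittest.TestCase):
    """Unit test written by the developer alongside the module."""
    def test_typical_value(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main()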

Testing

The code is tested at various levels in software testing. Unit, system and user acceptance testing are often performed. This is a grey area, as many different opinions exist as to what the stages of testing are and how much, if any, iteration occurs. Iteration is not generally part of the waterfall model, but some usually occurs at this stage.

Types of testing (a brief black-box versus white-box sketch follows the list):

• Data set testing
• Unit testing
• System testing
• Integration testing
• Black box testing
• White box testing
• Regression testing
• Automation testing
• User acceptance testing
• Performance testing
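
To make two of the listed types concrete, the following hypothetical Python sketch contrasts a black-box test (derived only from the specification) with a white-box test (chosen by inspecting the code so that every branch is exercised). The function is_leap_year and the chosen cases are illustrative assumptions, not taken from the text.

import unittest

def is_leap_year(year: int) -> bool:
    """Hypothetical routine used to contrast black-box and white-box tests."""
    if year % 400 == 0:
        return True
    if year % 100 == 0:
        return False
    return year % 4 == 0

class BlackBoxTests(unittest.TestCase):
    """Derived purely from the specification, without looking at the code."""
    def test_specified_examples(self):
        self.assertTrue(is_leap_year(2024))
        self.assertFalse(is_leap_year(2023))

class WhiteBoxTests(unittest.TestCase):
    """Chosen by inspecting the code so that every branch is exercised."""
    def test_century_branch(self):
        self.assertFalse(is_leap_year(1900))   # hits the 'divisible by 100' branch

    def test_four_century_branch(self):
        self.assertTrue(is_leap_year(2000))    # hits the 'divisible by 400' branch

if __name__ == "__main__":
    unittest.main()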

Operations and maintenance

The deployment of the system includes changes and enhancements before the
decommissioning or sunset of the system. Maintaining the system is an important aspect of
SDLC. As key personnel change positions in the organization, new changes will be
implemented, which will require system updates.

Systems development life cycle topics


Management and control

SDLC Phases Related to Management Controls.[9]

The Systems Development Life Cycle (SDLC) phases serve as a programmatic guide to
project activity and provide a flexible but consistent way to conduct projects to a depth
matching the scope of the project. Each of the SDLC phase objectives is described in this section with key deliverables, a description of recommended tasks, and a summary of related control objectives for effective management. It is critical for the project manager
to establish and monitor control objectives during each SDLC phase while executing
projects. Control objectives help to provide a clear statement of the desired result or
purpose and should be used throughout the entire SDLC process. Control objectives can
be grouped into major categories (Domains), and relate to the SDLC phases as shown in
the figure.[9]

To manage and control any SDLC initiative, each project will be required to establish
some degree of a Work Breakdown Structure (WBS) to capture and schedule the work
necessary to complete the project. The WBS and all programmatic material should be
kept in the “Project Description” section of the project notebook. The WBS format is
mostly left to the project manager to establish in a way that best describes the project
work. There are some key areas that must be defined in the WBS as part of the SDLC
policy. The following diagram describes three key areas that will be addressed in the
WBS in a manner established by the project manager.[9]
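
As a rough illustration of how WBS tasks, their measurable outputs and durations might be captured, the following Python sketch is hypothetical; the task names, durations and the sequential roll-up rule are assumptions made for the example, not part of any SDLC policy.

from dataclasses import dataclass, field
from typing import List

@dataclass
class WBSTask:
    """Hypothetical WBS element: a task with a measurable output and optional sub-tasks."""
    name: str
    output: str                      # e.g. a document, decision, or analysis
    duration_weeks: int = 2          # tasks usually span two weeks or more
    subtasks: List["WBSTask"] = field(default_factory=list)

    def total_weeks(self) -> int:
        """Rough roll-up of effort, assuming sub-tasks run sequentially."""
        return self.duration_weeks + sum(t.total_weeks() for t in self.subtasks)

project = WBSTask("Billing system", "Approved project plan", 2, [
    WBSTask("Requirements analysis", "Requirements document", 4),
    WBSTask("Design", "Design specification", 4, [
        WBSTask("Data model", "Entity-relationship diagram", 2),
    ]),
])
print(project.total_weeks())   # -> 12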

Work breakdown structure organization

Work Breakdown Structure.[9]

The upper section of the Work Breakdown Structure (WBS) should identify the major
phases and milestones of the project in a summary fashion. In addition, the upper section
should provide an overview of the full scope and timeline of the project and will be part
of the initial project description effort leading to project approval. The middle section of
the WBS is based on the seven Systems Development Life Cycle (SDLC) phases as a
guide for WBS task development. The WBS elements should consist of milestones and
“tasks” as opposed to “activities” and have a definitive period (usually two weeks or
more). Each task must have a measurable output (e.g. document, decision, or analysis). A
WBS task may rely on one or more activities (e.g. software engineering, systems
engineering) and may require close coordination with other tasks, either internal or
external to the project. Any part of the project needing support from contractors should
have a Statement of work (SOW) written to include the appropriate tasks from the SDLC
phases. The development of a SOW does not occur during a specific phase of SDLC but
is developed to include the work from the SDLC process that may be conducted by
external resources such as contractors.[9]

Baselines in the SDLC

Baselines are an important part of the Systems Development Life Cycle (SDLC). These
baselines are established after four of the five phases of the SDLC and are critical to the
iterative nature of the model [10]. Each baseline is considered as a milestone in the SDLC.

• Functional Baseline: established after the conceptual design phase.
• Allocated Baseline: established after the preliminary design phase.
• Product Baseline: established after the detail design and development phase.
• Updated Product Baseline: established after the production construction phase.

Complementary to SDLC

Complementary software development methods to the Systems Development Life Cycle (SDLC) are:

• Software Prototyping
• Joint Applications Design (JAD)
• Rapid Application Development (RAD)
• Extreme Programming (XP); extension of earlier work in Prototyping and RAD.
• Open Source Development
• End-user development
• Object Oriented Programming

Comparison of Methodologies (Post & Anderson 2006)[11]

                            SDLC         RAD      Open Source  Objects     JAD      Prototyping  End User
Control                     Formal       MIS      Weak         Standards   Joint    User         User
Time Frame                  Long         Short    Medium       Any         Medium   Short        Short
Users                       Many         Few      Few          Varies      Few      One or Two   One
MIS staff                   Many         Few      Hundreds     Split       Few      One or Two   None
Transaction/DSS             Transaction  Both     Both         Both        DSS      DSS          DSS
Interface                   Minimal      Minimal  Weak         Windows     Crucial  Crucial      Crucial
Documentation and training  Vital        Limited  Internal     In Objects  Limited  Weak         None
Integrity and security      Vital        Vital    Unknown      In Objects  Limited  Weak         Weak
Reusability                 Limited      Some     Maybe        Vital       Limited  Weak         None

Strengths and weaknesses

Few people in the modern computing world would use a strict waterfall model for their Systems Development Life Cycle (SDLC), as many modern methodologies have superseded this thinking. Some argue that the SDLC no longer applies to models like Agile development, but the term is still widely used in technology circles. The SDLC practice has advantages in traditional models of software development that lend themselves to a structured environment. The disadvantage of using the SDLC methodology arises when there is a need for iterative development (e.g. web development or e-commerce), where stakeholders need to review the software being designed on a regular basis. Rather than viewing the SDLC purely in terms of strengths and weaknesses, it is far more important to take the best practices from the SDLC model and apply them to whatever is most appropriate for the software being designed.

A comparison of the strengths and weaknesses of SDLC:

Strengths and Weaknesses of SDLC [11]

Strengths                                 Weaknesses
Control.                                  Increased development time.
Monitor large projects.                   Increased development cost.
Detailed steps.                           Systems must be defined up front.
Evaluate costs and completion targets.    Rigidity.
Documentation.                            Hard to estimate costs; project overruns.
Well-defined user input.                  User input is sometimes limited.
Ease of maintenance.
Development and design standards.
Tolerates changes in MIS staffing.

An alternative to the SDLC is Rapid Application Development (RAD).

Describe Data Flow Diagrams (DFD), Project management, Warnier-Orr diagrams and Nassi-Shneiderman charts?

Data flow diagram


Data Flow Diagram example.[1]

A data-flow diagram (DFD) is a graphical representation of the "flow" of data through an information system. DFDs can also be used for the visualization of data processing (structured design).

On a DFD, data items flow from an external data source or an internal data store to an
internal data store or an external data sink, via an internal process.

A DFD provides no information about the timing or ordering of processes, or about whether processes will operate in sequence or in parallel. It is therefore quite different
from a flowchart, which shows the flow of control through an algorithm, allowing a reader
to determine what operations will be performed, in what order, and under what
circumstances, but not what kinds of data will be input to and output from the system, nor
where the data will come from and go to, nor where the data will be stored (all of which
are shown on a DFD).

Overview

It is common practice to draw a context-level data flow diagram first, which shows the
interaction between the system and external agents which act as data sources and data
sinks. On the context diagram (also known as the Level 0 DFD) the system's interactions
with the outside world are modelled purely in terms of data flows across the system
boundary. The context diagram shows the entire system as a single process, and gives no
clues as to its internal organization.

This context-level DFD is next "exploded", to produce a Level 1 DFD that shows some
of the detail of the system being modeled. The Level 1 DFD shows how the system is
divided into sub-systems (processes), each of which deals with one or more of the data
flows to or from an external agent, and which together provide all of the functionality of
the system as a whole. It also identifies internal data stores that must be present in order
for the system to do its job, and shows the flow of data between the various parts of the
system.

Data-flow diagrams were invented by Larry Constantine, the original developer of structured design,[2] based on Martin and Estrin's "data-flow graph" model of computation.

Data-flow diagrams (DFDs) are one of the three essential perspectives of the structured-
systems analysis and design method SSADM. The sponsor of a project and the end users
will need to be briefed and consulted throughout all stages of a system's evolution. With a
data-flow diagram, users are able to visualize how the system will operate, what the
system will accomplish, and how the system will be implemented. The old system's
dataflow diagrams can be drawn up and compared with the new system's data-flow
diagrams to draw comparisons to implement a more efficient system. Data-flow diagrams
can be used to provide the end user with a physical idea of where the data they input
ultimately has an effect upon the structure of the whole system from order to dispatch to
report. How any system is developed can be determined through a data-flow diagram.

In the course of developing a set of levelled data-flow diagrams, the analyst/designer is forced to address how the system may be decomposed into component sub-systems, and to identify the transaction data in the data model.

There are different notations to draw data-flow diagrams, defining different visual
representations for processes, data stores, data flow, and external entities.[3]

Data-flow diagrams ("bubble charts") are directed graphs in which the nodes specify processing activities and the arcs specify data items transmitted between processing nodes.
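
A minimal Python sketch of this directed-graph view of a DFD follows; the node names, data items and the inputs_to helper are illustrative assumptions, not part of any notation standard.

# A hypothetical representation of a data-flow diagram as a directed graph:
# each arc is (source node, data item, destination node).
nodes = {
    "Customer":      "external entity",
    "Process order": "process",
    "Orders":        "data store",
}

flows = [
    ("Customer",      "order details", "Process order"),
    ("Process order", "stored order",  "Orders"),
    ("Process order", "confirmation",  "Customer"),
]

# List every data item flowing into a given node.
def inputs_to(node: str):
    return [(data, src) for src, data, dst in flows if dst == node]

print(inputs_to("Process order"))   # -> [('order details', 'Customer')]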

Developing a data-flow diagram

data-flow diagram example

data-flow diagram - Yourdon/DeMarco notation

Top-down approach

1. The system designer makes "a context level DFD" or Level 0, which shows the
"interaction" (data flows) between "the system" (represented by one process) and
"the system environment" (represented by terminators).
2. The system is "decomposed in lower-level DFD (Level 1)" into a set of
"processes, data stores, and the data flows between these processes and data
stores".
3. Each process is then decomposed into an "even-lower-level diagram containing
its subprocesses".
4. This approach "then continues on the subsequent subprocesses", until a necessary
and sufficient level of detail is reached which is called the primitive process (aka
chewable in one bite).

A DFD is also a diagram that describes, technically or diagrammatically, the inflow and outflow of data or information provided by the external entities.

• At Level 0, the diagram does not contain any data stores.

Event partitioning approach

Event partitioning was described by Edward Yourdon in Just Enough Structured Analysis.[4]
A context level Data flow diagram created using Select SSADM.

This level shows the overall context of the system and its operating environment, and shows the whole system as just one process. It does not usually show data stores, unless they are "owned" by external systems, i.e. accessed by but not maintained by this system; in that case they are often shown as external entities.[5]

Level 1 (high level diagram)

A Level 1 Data flow diagram for the same system.

This level (Level 1) shows all processes at the first level of numbering, the data stores, the external entities and the data flows between them. The purpose of this level is to show the major high-level processes of the system and their interrelation. A process model will have one, and only one, level-1 diagram. A level-1 diagram must be balanced with its parent context-level diagram, i.e. there must be the same external entities and the same data flows; these can be broken down into more detail in level 1. For example, the "inquiry" data flow could be split into "inquiry request" and "inquiry results" and still be valid.[5]
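
The balancing rule can be illustrated with a small, hypothetical Python sketch that compares the boundary-crossing flows of a context diagram with those of its level-1 decomposition. The entity and flow names are assumptions, and the strict equality check shown here would need to be relaxed to allow a parent flow to be split into finer child flows (as in the "inquiry request"/"inquiry results" example above).

# Hypothetical balancing check: flows crossing the system boundary in the
# level-1 diagram must match the flows shown on the context (level-0) diagram.
context_flows = {
    ("Customer", "inquiry", "System"),
    ("System", "inquiry response", "Customer"),
}

# In the level-1 diagram, "System" has been decomposed into numbered processes.
level1_flows = {
    ("Customer", "inquiry", "1 Process enquiry"),
    ("1 Process enquiry", "inquiry response", "Customer"),
}

def boundary_view(flows, externals):
    """Reduce flows to (external entity, data item, direction) triples."""
    view = set()
    for src, data, dst in flows:
        if src in externals:
            view.add((src, data, "in"))
        elif dst in externals:
            view.add((dst, data, "out"))
    return view

externals = {"Customer"}
balanced = boundary_view(context_flows, externals) == boundary_view(level1_flows, externals)
print("balanced" if balanced else "NOT balanced")   # -> balanced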

Level 2 (low level diagram)

A Level 2 Data flow diagram showing the "Process Enquiry" process for the same system.

This level is a decomposition of a process shown in a level-1 diagram; as such, there should be a level-2 diagram for each and every process shown in a level-1 diagram. In this example, processes 1.1, 1.2 and 1.3 are all children of process 1; together they wholly and completely describe process 1, and combined they must perform the full capacity of the parent process. As before, a level-2 diagram must be balanced with its parent level-1 diagram.[5]

See also

• Control flow diagram


• Data island
• Dataflow
• Functional flow block diagram
• Function model
• IDEF0
• Pipeline
• System context diagram
• Structured Analysis and Design Technique
• Structure chart

65 Action diagrams
65.1 Purpose
65.2 Strengths, weaknesses, and limitations
65.3 Inputs and related ideas
65.4 Concepts
65.4.1 Conventions
65.4.2 Some examples
65.4.3 Input, output, and database operations
65.5 Key terms
65.6 Software
65.7 References

65.1 Purpose

Action diagrams are used in Martin’s information engineering methodology2 to plan and
document both an overview of program logic and the detailed program logic.

65.2 Strengths, weaknesses, and limitations

Action diagrams are relatively easy to draw and require no special tools. Unlike most
software design tools, action diagrams can be used to describe both an overview of
program logic and the detailed program logic. In addition to documenting logical
relationships and structures, action diagrams provide details about tests and conditions.
The action diagrams are relatively easy to convert into program code. The structure of an
action diagram helps to reduce such errors as infinite loops.

Often, program logic is more easily described by using such tools as pseudocode (# 59)
and structured English (# 60). Relatively few analysts or information systems consultants
are familiar with action diagrams. Some advanced features require knowledge of data
normalization.

65.3 Inputs and related ideas

Programs are designed in the context of a system. The system is planned during the
systems analysis stage of the system development life cycle (Part IV). Pseudocode (# 59)
or structured English (# 60) are used within the context of an action diagram to describe
detailed program logic. The basic logical structures (sequence, selection, and iteration)
are discussed in # 62.

Other tools for documenting or planning routines or processes include logic flowcharts (#
55), Nassi-Shneiderman charts (# 56), decision trees (# 57), decision tables (# 58),
pseudocode (# 59), structured English (# 60), and input/process/output (IPO) charts (#
64). Tools for documenting or planning program structure include Warnier-Orr diagrams
(# 33), structure charts (# 63), and HIPO (# 64).
65.4 Concepts

Action diagrams are used in Martin’s information engineering methodology to plan and
document both an overview of program logic and the detailed program logic.

65.4.1 Conventions

The basic building block of an action diagram is a bracket that represents a program
module. Within the bracket, the module’s code is designed using pseudocode, structured
English, or fourth-generation language statements. Action diagrams are assembled from
sets of brackets. The brackets can be any length, and they can be nested (Figure 65.1).

Figure 65.1 The basic building block of an action diagram is a bracket that represents a program module. The brackets can be any length, and they can be nested.

Figure 65.2 shows the action diagram notation for a simple IF-THEN-ELSE block and for
a case structure. Note how horizontal lines are used to partition the bracket into mutually
exclusive routines.
Figure 65.2 Decision (or selection) logic.

Figure 65.3 shows three repetition structures. A double line at the top of the bracket
indicates a DO WHILE loop, while a double line at the bottom indicates a DO UNTIL
loop. Some designers use an arrow pointing inside the bracket to indicate the next
iteration of a loop (Figure 65.3, bottom).
Figure 65.3 Repetition (or iteration) logic.
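
To make the two repetition brackets concrete, the following hypothetical Python sketch shows a pre-tested loop (DO WHILE) and an emulated post-tested loop (DO UNTIL); the data and loop bodies are illustrative only.

# DO WHILE: the condition is tested before each pass, so the body may run zero times
# (double line at the top of the bracket).
items = ["A100", "A200"]
i = 0
while i < len(items):
    print("processing", items[i])
    i += 1

# DO UNTIL: the test comes after the body, so the body always runs at least once
# (double line at the bottom of the bracket). Python has no built-in DO UNTIL,
# so it is emulated with a post-tested loop.
attempts = 0
while True:
    attempts += 1              # body of the bracket
    if attempts >= 3:          # test at the bottom of the bracket
        break
print("attempts:", attempts)   # -> 3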

An arrow drawn through a bracket (or set of brackets) represents a termination action,
such as EXIT, QUIT, or BREAK (Figure 65.4). A dotted arrow represents an intentional
break such as a GOTO statement.
Figure 65.4 An arrow drawn through a bracket (or set of brackets) represents a termination action.

Subprocesses, subprocedures, subroutines and subsystems are shown by round-cornered rectangles (Figure 65.5). A vertical line near the left of the round-cornered rectangle
indicates a common subprocedure (e.g., a square root function). Some designers add a
wavy line at the right of the rectangle to indicate a not-yet-designed subprocedure. The
detailed logic associated with the subprocedure is documented in a separate action
diagram.

Figure 65.5 Subprocesses are shown by round-cornered rectangles.

65.4.2 Some examples

Figure 65.6 shows an overview action diagram for a sales database maintenance program that documents the primary options available on the program's main menu.

Figure 65.6 An overview action diagram for a sales database maintenance program.

Given an overview diagram, the designer decomposes the high-level routine by creating
an action diagram for each primary function; for example, Figure 65.7 shows the
Maintain customer function. The subprocesses are documented in lower-level action
diagrams.
Figure 65.7 A detailed action diagram for the Maintain customer function.
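
A possible, purely illustrative translation of such an overview action diagram into code is sketched below in Python; the menu options, function names and their bodies are assumptions, since the figures themselves are not reproduced here.

# Hypothetical translation of the overview action diagram: the outer bracket is the
# main-menu loop, and each round-cornered rectangle becomes a separate function.
def maintain_customer():
    # Detailed logic would come from the lower-level action diagram (Figure 65.7).
    print("add / modify / delete customer records")

def maintain_sales():
    print("record and adjust sales transactions")

def print_reports():
    print("produce summary and exception reports")

MENU = {"1": maintain_customer, "2": maintain_sales, "3": print_reports}

def main_menu():
    while True:                              # outer bracket: repeat until the user exits
        choice = input("1) Customers  2) Sales  3) Reports  0) Exit: ").strip()
        if choice == "0":                    # arrow through the bracket: termination
            break
        action = MENU.get(choice)
        if action:
            action()                         # sub-procedure invoked from the overview level
        else:
            print("Unknown option")

if __name__ == "__main__":
    main_menu()
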
65.4.3 Input, output, and database operations

Sometimes, a bracket is expanded into a rectangle to show the data entering and leaving a
process (Figure 65.8). By convention, input data are noted at the top right and output data
are noted at the bottom right of the rectangle.

Figure 65.8 A bracket can be expanded into a rectangle to show the data entering and leaving a process.

Simple database actions (e.g., CREATE, READ, UPDATE, or DELETE a single record
or transaction) are represented by a rectangular box inside the bracket (Figure 65.9). The
type of action is noted to the left of the box, and the record is identified inside the box.
Figure 65.9 Database actions.

Compound database actions (CREATE, READ, UPDATE, or DELETE a whole file, and
such functions as SEARCH, SORT, SELECT, JOIN, PROJECT, and DUPLICATE) are
represented as a double rectangular box (Figure 65.9, bottom). The type of action is noted
to the left of the box, the record is identified inside the box, and any conditions are noted
to the right of the box.
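
As an illustration of the distinction, the following hypothetical Python sketch (using the standard sqlite3 module and an in-memory table) performs simple actions on a single record and a compound action over a whole set; the table, column names and data values are assumptions.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT, balance REAL)")
conn.executemany("INSERT INTO customer VALUES (?, ?, ?)",
                 [(1, "Ada", 120.0), (2, "Grace", 0.0), (3, "Edsger", 75.5)])

# Simple actions operate on a single record (single rectangular box in the bracket).
conn.execute("UPDATE customer SET balance = ? WHERE id = ?", (150.0, 1))                 # UPDATE one record
row = conn.execute("SELECT name, balance FROM customer WHERE id = ?", (1,)).fetchone()   # READ one record
print(row)   # -> ('Ada', 150.0)

# Compound actions operate on a whole set (double rectangular box), with the
# condition noted to the right of the box.
active = conn.execute("SELECT name FROM customer WHERE balance > 0 ORDER BY name").fetchall()  # SELECT + SORT
print([name for (name,) in active])   # -> ['Ada', 'Edsger']

conn.close()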

A concurrency relationship exists between two processes that can be performed concurrently. An arc connecting the two processes’ brackets designates a concurrency relationship.
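
A minimal, hypothetical Python sketch of two such processes executed concurrently with the standard concurrent.futures module; the process names and bodies are assumptions.

from concurrent.futures import ThreadPoolExecutor

# Hypothetical pair of processes joined by a concurrency arc: neither depends on
# the other's output, so they may be executed at the same time.
def validate_orders():
    return "orders validated"

def update_inventory():
    return "inventory updated"

with ThreadPoolExecutor(max_workers=2) as pool:
    results = [f.result() for f in (pool.submit(validate_orders), pool.submit(update_inventory))]
print(results)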

65.5 Key terms


Action diagram —
A tool used in Martin’s information engineering methodology to plan and
document both an overview of program logic and the detailed program logic.
Bracket —
The basic building block of an action diagram.
Concurrency relationship —
A relationship between two (or more) processes that can be performed
concurrently.

65.6 Software

The action diagrams in this # were created using Visio.
