The Internet
The Internet was originally conceived as a distributed, fault-tolerant network that could
connect computers together and be resistant to any single point of failure; the
Internet cannot be totally destroyed in one event, and if large areas are disabled,
the information is easily re-routed. It was created mainly by ARPA; its initial
software applications were email and computer file transfer.
It was with the invention of the World Wide Web in 1989 that the Internet truly
became a global network. Today the Internet has become the ultimate platform for
accelerating the flow of information and is the fastest-growing form of media.
Progression
In 1956 in the United States, researchers noticed that the number of people
holding "white collar" jobs had just exceeded the number of people holding "blue
collar" jobs. These researchers realized that this was an important change, as it
signaled that the Industrial Age was coming to an end and that a new era, which
came to be called "the Information Age", was beginning.
At that time, relatively few jobs had much to do with computers and computer-
related technology. There was a steady trend away from people holding Industrial
Age manufacturing jobs. An increasing number of people held jobs as clerks in
stores, office workers, teachers, nurses, etc. The Western world was shifting into a
service economy.
Eventually, Information and Communication Technology—computers,
computerized machinery, fiber optics, communication satellites, Internet, and
other ICT tools—became a significant part of the economy. Microcomputers were
developed and many businesses and industries were greatly changed by ICT.
Nicholas Negroponte captured the essence of these changes in his 1995 book,
Being Digital. His book discusses similarities and differences between products
made of atoms and products made of bits. In essence, one can very cheaply make a
copy of a product made of bits and ship it across the country or around the world
both quickly and at very low cost.
Thus, the term "Information Age" is often applied in relation to the use of cell
phones, digital music, high definition television, digital cameras, the Internet,
computer games, and other relatively new products and services that have come
into widespread use.
The classical waterfall model proceeds through the sequence: Analysis --> Design --> Coding --> Testing
Advantages
• Simple and a desirable approach when the requirements are clear and
well understood at the beginning.
• It provides a clear-cut template for analysis, design, coding, testing and
support.
• It enforces a disciplined approach.
Disadvantages
• It is difficult for customers to state the requirements clearly at the
beginning; there is always a certain degree of natural uncertainty at the
start of each project.
• Changes are difficult and costly to incorporate when they occur at later stages.
• The customer can see a working version only at the end. Any changes
suggested at that point are not only difficult to incorporate but also expensive, and
this may result in disaster if undetected problems have been carried through to this
stage.
Since software reliability is one of the most important aspects of software quality,
reliability engineering approaches are practiced in the software field as well. Software
Reliability Engineering (SRE) is the quantitative study of the operational behavior
of software-based systems with respect to user requirements concerning reliability.
Software Reliability Models
A proliferation of software reliability models has emerged as people try to
understand the characteristics of how and why software fails, and try to quantify
software reliability. Over 200 models have been developed since the early 1970s,
but how to quantify software reliability still remains largely unsolved. Despite the
many models available and the many more emerging, none of them can capture a
satisfying amount of the complexity of software; constraints and assumptions have
to be made for the quantifying process. Therefore, there is no single model that
can be used in all situations. No model is complete or even representative. One
model may work well for one set of software, but may be completely off track
for other kinds of problems.
Most software reliability models contain the following parts: assumptions, factors, and a
mathematical function that relates reliability to the factors. The
mathematical function is usually a higher-order exponential or logarithmic function.
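The text does not commit to a particular model, but as one hedged illustration, a widely cited exponential form is the Goel-Okumoto model, whose mean-value function is μ(t) = a(1 − e^(−bt)). The sketch below assumes made-up values for the parameters a and b; in practice they would be fitted to observed failure data.

```python
import math

def expected_failures(t, a, b):
    """Goel-Okumoto NHPP model: expected cumulative failures by time t.

    a -- total expected number of failures (assumed, normally fitted from data)
    b -- per-fault detection rate (assumed)
    """
    return a * (1.0 - math.exp(-b * t))

def reliability(t, x, a, b):
    """Probability of no failure in the interval (t, t + x]."""
    return math.exp(-(expected_failures(t + x, a, b) - expected_failures(t, a, b)))

if __name__ == "__main__":
    a, b = 120.0, 0.05                               # illustrative parameter values only
    print(round(expected_failures(40, a, b), 1))     # failures expected by t = 40
    print(round(reliability(40, 8, a, b), 3))        # reliability over the next 8 time units
```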
Models also differ in their data reference: some use historical data, while others use data
from the current software development effort.
• Product metrics
Software size is thought to be reflective of complexity, development effort and
reliability. Lines Of Code (LOC), or LOC in thousands (KLOC), is an intuitive initial
approach to measuring software size, but there is no standard way of counting.
Typically, source code is used (SLOC, KSLOC) and comments and other non-
executable statements are not counted. This method cannot faithfully compare
software written in different languages. The advent of new technologies for
code reuse and code generation also casts doubt on this simple method.
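As a minimal sketch of the LOC idea (not a standard counting tool), the following counts non-blank, non-comment lines; the file name and comment prefix are hypothetical, and real counters also handle block comments, strings, and language-specific rules.

```python
def count_sloc(path, comment_prefix="#"):
    """Rough source-lines-of-code count: skips blank lines and full-line comments."""
    sloc = 0
    with open(path, encoding="utf-8") as f:
        for line in f:
            stripped = line.strip()
            if stripped and not stripped.startswith(comment_prefix):
                sloc += 1
    return sloc

# Example (hypothetical file name):
# print(count_sloc("payroll.py"), "SLOC")
```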
Function point metric is a method of measuring the functionality of a proposed
software development based upon a count of inputs, outputs, master files,
inquiries, and interfaces. The method can be used to estimate the size of a
software system as soon as these functions can be identified. It is a measure of
the functional complexity of the program. It measures the functionality delivered
to the user and is independent of the programming language. It is used primarily
for business systems; it is not proven in scientific or real-time applications.
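A rough sketch of an unadjusted function point count is shown below. The component counts are hypothetical, and the weights are the commonly quoted average values; real function point analysis also applies per-item complexity ratings and an overall adjustment factor.

```python
# Commonly quoted average weights for the five function types
# (inputs 4, outputs 5, inquiries 4, master files 10, interfaces 7).
WEIGHTS = {"inputs": 4, "outputs": 5, "inquiries": 4, "files": 10, "interfaces": 7}

def unadjusted_function_points(counts):
    """Sum each counted function type multiplied by its average weight."""
    return sum(WEIGHTS[kind] * n for kind, n in counts.items())

# Hypothetical counts for a small business system:
counts = {"inputs": 12, "outputs": 8, "inquiries": 5, "files": 3, "interfaces": 2}
print(unadjusted_function_points(counts))  # 12*4 + 8*5 + 5*4 + 3*10 + 2*7 = 152
```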
Complexity is directly related to software reliability, so representing complexity is
important. Complexity-oriented metrics determine the complexity of a program's
control structure by simplifying the code into a graphical representation. A
representative metric is McCabe's Complexity Metric.
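As a small illustration, McCabe's metric can be computed directly from a control-flow graph using V(G) = E − N + 2P; the edge and node counts below are for a hypothetical function.

```python
def cyclomatic_complexity(edges, nodes, connected_components=1):
    """McCabe's metric for a control-flow graph: V(G) = E - N + 2P."""
    return edges - nodes + 2 * connected_components

# Hypothetical graph for a function with one if/else and one loop:
# 9 edges, 8 nodes, a single connected component -> V(G) = 3.
print(cyclomatic_complexity(edges=9, nodes=8))
```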
Test coverage metrics are a way of estimating fault and reliability by performing
tests on software products, based on the assumption that software reliability is a
function of the portion of software that has been successfully verified or tested.
A detailed discussion of various software testing methods can be found in the topic
Software Testing.
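For example, a simple statement-coverage figure can be derived from the sets of executable and executed lines reported by a coverage tool; the line numbers below are hypothetical.

```python
def statement_coverage(executed_lines, executable_lines):
    """Fraction of executable statements exercised by the test suite."""
    covered = executed_lines & executable_lines
    return len(covered) / len(executable_lines)

# Hypothetical line numbers recorded by a coverage tool:
executable = {1, 2, 3, 5, 6, 8, 9, 10}
executed = {1, 2, 3, 5, 8, 9}
print(f"{statement_coverage(executed, executable):.0%}")  # 75%
```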
• Project management metrics
Researchers have realized that good management can result in better products.
Research has demonstrated that a relationship exists between the development
process and the ability to complete projects on time and within the desired quality
objectives. Costs increase when developers use inadequate processes. Higher
reliability can be achieved by using a better development process, risk management
process, configuration management process, and so on.
• Process metrics
Based on the assumption that the quality of the product is a direct function of the
process, process metrics can be used to estimate, monitor and improve the
reliability and quality of software. ISO 9000 certification, or "quality management
standards", is the generic reference for a family of standards developed by the
International Organization for Standardization (ISO).
• Fault and failure metrics
The goal of collecting fault and failure metrics is to be able to determine when the
software is approaching failure-free execution. Minimally, both the number of
faults found during testing (i.e., before delivery) and the failures (or other
problems) reported by users after delivery are collected, summarized and
analyzed to achieve this goal. The test strategy strongly affects the effectiveness
of fault metrics, because if the testing scenario does not cover the full functionality
of the software, the software may pass all tests and yet be prone to failure once
delivered. Usually, failure metrics are based upon customer information regarding
failures found after release of the software. The failure data collected is therefore
used to calculate failure density, Mean Time between Failures (MTBF) or other
parameters to measure or predict software reliability.
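A minimal sketch of these calculations, assuming hypothetical field data, is given below; real programs would also track repair times and operating hours more carefully.

```python
def mtbf(failure_timestamps):
    """Mean time between failures from a sorted list of failure times (hours)."""
    gaps = [t2 - t1 for t1, t2 in zip(failure_timestamps, failure_timestamps[1:])]
    return sum(gaps) / len(gaps)

def failure_density(failure_count, size_kloc):
    """Failures reported per thousand lines of code."""
    return failure_count / size_kloc

# Hypothetical field data:
times = [120.0, 310.0, 560.0, 870.0, 1260.0]               # hours at which failures occurred
print(round(mtbf(times), 1))                                # average gap = 285.0 hours
print(failure_density(failure_count=14, size_kloc=35.0))   # 0.4 failures per KLOC
```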
Good engineering methods can greatly improve software reliability. Before the
deployment of software products, testing, verification and validation are necessary
steps. Software testing is heavily used to trigger, locate and remove software
defects. Software testing is still in its infancy; testing is crafted to suit specific
needs in various software development projects in an ad-hoc manner. Various
analysis techniques such as trend analysis, fault-tree analysis, Orthogonal Defect
Classification and formal methods can also be used to minimize the possibility
of defect occurrence after release and therefore improve software reliability. After
deployment of the software product, field data can be gathered and analyzed to
study the behavior of software defects. Fault tolerance and fault/failure forecasting
techniques provide helpful guidelines for minimizing fault occurrence or the impact
of faults on the system.
The SDLC can be divided into ten phases during which defined IT work products
are created or modified. The tenth phase occurs when the system is disposed of
and the task performed is either eliminated or transferred to other systems. The
tasks and work products for each phase are described in subsequent chapters. Not
every project will require that the phases be sequentially executed. However, the
phases are interdependent. Depending upon the size and complexity of the
project, phases may be combined or may overlap.
Initiation/planning
The goal of this phase is to generate a high-level view of the intended project and
determine its goals. The feasibility study is sometimes used to present the project to upper
management in an attempt to gain funding. Projects are typically evaluated in
three areas of feasibility: economic, operational, and technical. Furthermore, it is
also used as a reference to keep the project on track and to evaluate the progress
of the MIS team.[8] The MIS is also a complement to those phases. This phase is
also called the analysis phase.
Design
In systems design functions and operations are described in detail, including
screen layouts, business rules, process diagrams and other documentation. The
output of this stage will describe the new system as a collection of modules or
subsystems.
The design stage takes as its initial input the requirements identified in the
approved requirements document. For each requirement, a set of one or more
design elements will be produced as a result of interviews, workshops, and/or
prototype efforts. Design elements describe the desired software features in
detail, and generally include functional hierarchy diagrams, screen layout
diagrams, tables of business rules, business process diagrams, pseudo code, and a
complete entity-relationship diagram with a full data dictionary. These design
elements are intended to describe the software in sufficient detail that skilled
programmers may develop the software with minimal additional input.
Build or coding
Modular and subsystem programming code is produced during this stage.
Unit testing and module testing are done in this stage by the developers. This
stage is intermingled with the next in that individual modules will need testing
before integration into the main project; code is tested in every section.
Testing
The code is tested at various levels in software testing. Unit, system and user
acceptance testing are often performed. This is a grey area as many different
opinions exist as to what the stages of testing are and how much if any iteration
occurs. Iteration is not generally part of the waterfall model, but usually some
occurs at this stage.
Types of testing:
• Data set testing.
• Unit testing
• System testing
• Integration testing
• Black box testing
• White box testing
• Module testing
• Back to back testing
• Automation testing
• User acceptance testing
• Performance testing
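As a brief illustration of the unit-testing level listed above, the sketch below tests a hypothetical apply_discount function with both a positive and a negative case; the function and the values are invented for the example.

```python
import unittest

def apply_discount(price, percent):
    """Function under test (hypothetical): apply a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_typical_discount(self):             # positive case: intended outcome
        self.assertEqual(apply_discount(200.0, 15), 170.0)

    def test_invalid_percentage_rejected(self):  # negative case: bad input
        with self.assertRaises(ValueError):
            apply_discount(200.0, 150)

if __name__ == "__main__":
    unittest.main()
```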
Complementary to SDLC
Software development methods complementary to the Systems Development Life
Cycle (SDLC) are:
• Software Prototyping
• Joint Applications Design (JAD)
• Rapid Application Development (RAD)
• Extreme Programming (XP); extension of earlier work in Prototyping and
RAD.
• Open Source Development
• End-user development
• Object Oriented Programming
Comparison of Methodologies (Post & Anderson 2006)
|                            | SDLC        | RAD     | Open Source | Objects    | JAD     | Prototyping | End User |
|----------------------------|-------------|---------|-------------|------------|---------|-------------|----------|
| Control                    | Formal      | MIS     | Weak        | Standards  | Joint   | User        | User     |
| Time frame                 | Long        | Short   | Medium      | Any        | Medium  | Short       | Short    |
| Users                      | Many        | Few     | Few         | Varies     | Few     | One or two  | One      |
| MIS staff                  | Many        | Few     | Hundreds    | Split      | Few     | One or two  | None     |
| Transaction/DSS            | Transaction | Both    | Both        | Both       | DSS     | DSS         | DSS      |
| Interface                  | Minimal     | Minimal | Weak        | Windows    | Crucial | Crucial     | Crucial  |
| Documentation and training | Vital       | Limited | Internal    | In Objects | Limited | Weak        | None     |
| Integrity and security     | Vital       | Vital   | Unknown     | In Objects | Limited | Weak        | Weak     |
| Reusability                | Limited     | Some    | Maybe       | Vital      | Limited | Weak        | None     |
From a business perspective, Object Oriented Design refers to the objects that
make up that business. For example, in a certain company, a business object can
consist of people, data files and database tables, artifacts, equipment, vehicles,
etc. What follows is a description of the class-based subset of object-oriented
design, which does not include object prototype-based approaches where objects
are not typically obtained by instantiating classes but by cloning other (prototype)
objects.
Input (sources) for object-oriented design
• Relational data model (if applicable): A data model is an abstract model that
describes how data is represented and used. If an object database is not
used, the relational data model should usually be created before the design,
since the strategy chosen for object-relational mapping is an output of the
OO design process. However, it is possible to develop the relational data
model and the object-oriented design artifacts in parallel, and the growth of
an artifact can stimulate the refinement of other artifacts.
Object-oriented concepts
The five basic concepts of object-oriented design are the implementation-level
features that are built into the programming language. These features are often
referred to by these common names:
• Object/Class
• Information hiding
• Inheritance
• Interface
• Polymorphism
Designing concepts
• Identifying attributes.
• Use design patterns (if applicable): A design pattern is not a finished design,
it is a description of a solution to a common problem, in a context [1]. The
main advantage of using a design pattern is that it can be reused in multiple
applications. It can also be thought of as a template for how to solve a
problem that can be used in many different situations and/or applications.
Object-oriented design patterns typically show relationships and interactions
between classes or objects, without specifying the final application classes
or objects that are involved.
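As one hedged illustration of the idea (not a pattern named in the text), the sketch below uses the Strategy pattern: the context class is written against an interchangeable pricing rule, so new rules can be added without modifying the class itself. All names and figures are hypothetical.

```python
from typing import Callable

# Two interchangeable pricing strategies (hypothetical example).
def regular_price(amount: float) -> float:
    return amount

def member_price(amount: float) -> float:
    return amount * 0.9          # members get 10% off

class Checkout:
    """Context class: the pricing rule is supplied as a strategy object."""
    def __init__(self, pricing: Callable[[float], float]):
        self.pricing = pricing

    def total(self, amount: float) -> float:
        return round(self.pricing(amount), 2)

print(Checkout(regular_price).total(50.0))  # 50.0
print(Checkout(member_price).total(50.0))   # 45.0
```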
Overview
Computer software is often regarded as anything but hardware, meaning that the
"hard" parts are those that are tangible while the "soft" parts are the intangible objects
inside the computer. Software encompasses an extremely wide array of products
and technologies developed using different techniques like programming
languages, scripting languages, microcode, or an FPGA configuration. The types of
software include web pages developed by technologies like HTML, PHP, Perl, JSP,
ASP.NET, XML, and desktop applications like OpenOffice, Microsoft Word
developed by technologies like C, C++, Java, C#, or Smalltalk. Software usually
runs on an underlying operating system such as Linux or Microsoft
Windows. Software also includes video games and the logic systems of modern
consumer devices such as automobiles, televisions, and toasters.
Software Characteristics
• Software is developed and engineered.
• Software doesn't "wear-out".
• Most software continues to be custom built.
Types of software
[Figure: a layer structure showing where the operating system sits in generally used software systems on desktops.]
Practical computer systems divide software systems into three major classes:
system software, programming software and application software, although the
distinction is arbitrary and often blurred.
System software
System software helps run the computer hardware and
computer system. It includes a combination of the
following:
• device drivers
• operating systems
• servers
• utilities
• windowing systems
The purpose of systems software is to unburden the applications programmer from
the often complex details of the particular computer being used, including such
accessories as communications devices, printers, device readers, displays and
keyboards, and also to partition the computer's resources such as memory and
processor time in a safe and stable manner. Examples are Windows XP, Linux, and
Mac OS.
Programming software
Programming software usually provides tools that assist a programmer in writing
computer programs and software in different programming languages in a
more convenient way. The tools include:
• compilers
• debuggers
• interpreters
• linkers
• text editors
An Integrated development environment (IDE) is a single application that attempts
to manage all these functions.
Application software
Application software allows end users to accomplish one or more specific (not
directly computer development related) tasks. Typical applications include:
• industrial automation
• business software
• computer games
• quantum chemistry and solid state physics software
• telecommunications (i.e., the internet and everything that flows on it)
• databases
• educational software
• medical software
• military software
• molecular modeling software
• image editing
• spreadsheet
• simulation software
• Word processing
• Decision making software
Application software exists for and has impacted a wide variety of topics.
Uses
Regression testing can be used not only for testing the correctness of a program,
but often also for tracking the quality of its output. For instance, in the design of a
compiler, regression testing should track the code size, the simulation time, and the
running time of the test suite cases.
Traditionally, in the corporate world, back to back testing has been performed by a
software quality assurance team after the development team has completed work.
However, defects found at this stage are the most costly to fix. This problem is
being addressed by the rise of developer testing. Although developers have always
written test cases as part of the development cycle, these test cases have
generally been either functional tests or unit tests that verify only intended
outcomes. Developer testing compels a developer to focus on unit testing and to
include both positive and negative test cases.
Uses
Back to back testing can be used not only for testing the correctness of a program,
but often also for tracking the quality of its output. For instance, in the design of a
compiler, back to back testing should track the code size, the simulation time, and the
running time of the test suite cases.
History
The separation of debugging from testing was initially introduced by Glenford J.
Myers in 1979. Although his attention was on breakage testing ("a successful test
is one that finds a bug"), it illustrated the desire of the software engineering
community to separate fundamental development activities, such as debugging,
from that of verification. Dave Gelperin and William C. Hetzel classified in 1988 the
phases and goals in software testing in the following stages:
• Until 1956 - Debugging oriented
• 1957–1978 - Demonstration oriented
• 1979–1982 - Destruction oriented
• 1983–1987 - Evaluation oriented
• 1988–2000 - Prevention oriented
Compatibility
A frequent cause of software failure is incompatibility with another application, a
new operating system, or, increasingly, a new web browser version. In the case of a
lack of backward compatibility, this can occur because the programmers have only
considered coding their programs for, or testing the software upon, "the latest
version of" this-or-that operating system. The unintended consequence is that
their latest work might not be fully compatible with earlier combinations of software
and hardware, or with another important operating system. In any case, these
differences, whatever they might be, may result in software failures, as witnessed
by some significant population of computer users. This could be considered a
"prevention oriented strategy" that fits well with the latest testing phase suggested
by Dave Gelperin and William C. Hetzel, as cited above.
Acceptance Testing
Acceptance testing generally involves running a suite of tests on the completed
system. Each individual test, known as a case, exercises a particular operating
condition of the user's environment or feature of the system, and will result in a
pass or fail boolean outcome. There is generally no degree of success or failure.
The test environment is usually designed to be identical, or as close as possible, to
the anticipated user's environment, including extremes of such. These test cases
must each be accompanied by test case input data or a formal description of the
operational activities (or both) to be performed—intended to thoroughly exercise
the specific case—and a formal description of the expected results.
Regression Testing
Regression testing is any type of software testing that seeks to uncover software
regressions. Such regressions occur whenever previously working software
functionality stops working as intended. Typically, regressions occur as
an unintended consequence of program changes. Common methods of regression
testing include rerunning previously run tests and checking whether previously
fixed faults have re-emerged.
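A minimal sketch of such a regression test is shown below; the parse_quantity function, the whitespace bug, and the ticket number are all hypothetical, but the pattern of re-running a test that guards a previously fixed fault is the point.

```python
import unittest

def parse_quantity(text):
    """Hypothetical function that once crashed on input with surrounding spaces."""
    return int(text.strip())     # the earlier fix: strip whitespace before parsing

class QuantityRegressionTest(unittest.TestCase):
    def test_bug_1234_whitespace_input(self):
        # Regression test for a previously fixed fault (hypothetical ticket number):
        # re-run on every build to confirm the fault has not re-emerged.
        self.assertEqual(parse_quantity(" 42 "), 42)

if __name__ == "__main__":
    unittest.main()
```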
Logically speaking, then, it is possible for engineers to test their own programs if all
of the above is in place, but for practical reasons professional projects handle
testing professionally, based on defined test case functions.
August 2009
Bachelor of Science in Information Technology (BScIT) – Semester 4
BT0049 – Software Engineering – 4 Credits
(Book ID: B0808)
Assignment Set – 2 (60 Marks)
System Software
System software is a collection of programs written to service other programs.
Some system programs (e.g., compilers, editors, and file management utilities)
process complex, but determinate, information structures. Other system
applications (e.g. operating system components, drivers, telecommunications
processors) process largely indeterminate data. In either case, the system
software area is characterized by heavy interaction with computer hardware;
heavy usage by multiple users; concurrent operation that requires scheduling,
resource sharing, and sophisticated process management; complex data
structures; and multiple external interfaces.
Real-time Software
Software that monitors/analyzes/controls real world events as they occur is called
real time. Elements of real-time software include a data gathering component that
collects and formats information from an external environment, an analysis
component that transforms information as required by the application, a
control/output component that responds to the external environment, and a
monitoring component that coordinates all other components so that real-time
response (typically ranging from 1 millisecond to 1 second) can be maintained.
Business Software
Business information processing is the largest single software application area.
Discrete “systems” (e.g., payroll, accounts receivable/payable, inventory) have
evolved into management information system (MIS) software that accesses one or
more large databases containing business information. Applications in this area
restructure existing data in a way that facilitates business operations or
management decision-making. In addition to conventional data processing
applications, business software applications also encompass interactive computing
(e.g., point-of-sale transaction processing).
Embedded Software
Intelligent products have become commonplace in nearly every consumer and
industrial market. Embedded software resides in read-only memory and is used to
control products and systems for the consumer and industrial markets. Embedded
software can perform very limited and esoteric functions (e.g., keypad control for a
microwave oven) or provide significant function and control capability (e.g., digital
functions in an automobile such as fuel control, dashboard displays, and braking
systems).
Web-based Software
The web pages retrieved by a browser are software that incorporates executable
instructions (e.g., CGI, HTML, Perl, or Java) and data (e.g., hypertext and a variety of
visual and audio formats). In essence, the network becomes a massive computer
providing an almost unlimited software resource that can be accessed by anyone
with a modem.
The concurrent process model defines a series of events that will trigger
transitions
from state to state for each of the software engineering activities.
• Inefficiency is predictable.
Programs can take a long time to execute and users can adjust their work to take
this into account. Unreliability, by contrast, usually surprises the users. Software
that is unreliable can have hidden errors which can violate system and user
data without warning and whose consequences are not immediately obvious.
For example, a fault in a CAD program used to design aircraft might not be
discovered until several plane crashes occur.
• Unreliable systems may cause information loss.
Information is very expensive to collect and maintain; it may sometimes be
worth more than the computer system on which it is processed. A great deal of
effort and money is spent duplicating valuable data to guard against corruption
caused by unreliable software.
The software processes used to develop a product influence the reliability of
the software product. A repeatable process which is oriented towards defect
avoidance is likely to produce a reliable system. However, there is not a simple
relationship between product and process reliability.
Users often complain that systems are unreliable. This may be due to poor
software engineering. However, a common cause of perceived unreliability is
incomplete specifications: the system performs as specified, but the specifications
do not set out how the software should behave in exceptional situations.
Professional software engineers must do their best to produce reliable systems
that take meaningful and useful actions in such situations.
• A telephone
• A bank account
• A library catalogue
Classes and objects are separate but related concepts. Every object belongs to a
class and every class contains one or more related objects.
• A Class is static. All of the attributes of a class are fixed before, during, and
after the execution of a program. The attributes of a class don't change.
• The class to which an object belongs is also (usually) static. If a particular
object belongs to a certain class at the time that it is created then it almost
certainly will still belong to that class right up until the time that it is
destroyed.
• An Object on the other hand has a limited lifespan. Objects are created
and eventually destroyed. Also during that lifetime, the attributes of the
object may undergo significant change.
So let's now use an example to clarify what the differences are between a class
and an object.
Let us consider the class car. Cars have a body, wheels, an engine, seats, are used
to transport people between locations, and require at least one person in the car
for it to move by its own power. These are some of the attributes of the class - car
- and all members that this class has ever or will ever have share these attributes.
The members of the class - car - are objects and the objects are individual and
specific cars. Each individual car has a creation date (an example of an object
having an attribute that is static), an owner, a registered address (examples of
attributes that may or may not change), a current location, current occupants,
current fuel level (examples of attributes that change quickly), and it may be
covered by insurance (an example of an attribute that may or may not exist).
To use a more programming related example, the class window has edges, a title
bar, maximize and minimize buttons, and an area to display the window contents.
A specific window has a location on the screen, a size, a title, and may or may not
have something in the content area.
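The car example can be sketched in code roughly as follows; the attribute names and values are hypothetical, but they show the class as a fixed template and the objects as individual instances with their own changing state.

```python
class Car:
    """Class: the fixed template that every car object shares."""
    wheels = 4                            # class-level attribute, same for all cars

    def __init__(self, owner, fuel_litres):
        # Object-level attributes: each individual car has its own values.
        self.owner = owner
        self.fuel_litres = fuel_litres

    def drive(self, litres_used):
        self.fuel_litres = max(0.0, self.fuel_litres - litres_used)

# Two distinct objects of the same class, each with its own changing state:
car_a = Car(owner="Priya", fuel_litres=40.0)
car_b = Car(owner="Tomas", fuel_litres=15.0)
car_a.drive(12.5)
print(car_a.fuel_litres, car_b.fuel_litres)   # 27.5 15.0
```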
VALIDATION TESTING
At the culmination of integration testing, when the software is complete as a package
and the interfacing errors have been uncovered and corrected, a final series of tests,
validation testing, may begin. Validation tests succeed when the software performs
exactly in the manner expected by the user.
Software validation is done by a series of black-box tests that demonstrate
conformance with requirements. Alpha and beta testing fall in this category. We
will not do beta testing, but alpha testing will certainly be done.
• Performance – does the input of good data generate good data out?
• Failure modes – if the setup is wrong, do the test results reflect it?
• Repeatability – if one tests with the same input vectors, does one get the
same output results time after time?
• Special Case – completely test dependent for specific requirements.
For a software developer, it is difficult to foresee how the customer will really use a
program. Instructions for use may be misinterpreted; strange combinations of data
may be regularly used; and output that seemed clear to the tester may be
unintelligible to a user in the field. When custom software is built for one
customer, a series of acceptance tests are conducted to enable the customer to
validate all requirements. Acceptance tests are conducted by the customer rather
than by the developer. They can range from an informal “test drive” to a planned and
systematically executed series of tests. In fact, acceptance testing can be
conducted over a period of weeks or months, thereby uncovering cumulative
errors that might degrade the system over time. If software is developed as a
product to be used by many customers, it is impractical to perform formal
acceptance tests with each one. Most software product builders use a process
called alpha and beta testing to uncover errors that only the end user seems able
to find.
The customer conducts alpha testing at the developer’s site. The software is used
in a natural setting, with the developer present. The developer records errors and usage
problems. Alpha tests are conducted in a controlled environment. The beta test is
conducted at one or more customer sites by the end user(s) of the software. Here,
the developer is not present. Therefore, the beta test is a live application of the
software in an environment that cannot be controlled by the developer. The
customer records all problems that are encountered during beta testing and
reports these to the developer at regular intervals. Because of problems reported
during beta testing, the software developer makes modifications and then prepares
for release of the software product to the entire customer base.
A strategy for software testing may also be viewed in the context of the spiral
(Figure 18.1).
Unit testing begins at the vortex of the spiral and concentrates on each unit (i.e.,
component) of the software as implemented in source code. Testing progresses by
moving outward along the spiral to integration testing, where the focus is on
design and the construction of the software architecture. Taking another turn
outward on the spiral, we encounter validation testing, where requirements
established as part of software requirements analysis are validated against the
software that has been constructed. Finally, we arrive at system testing, where the
software and other system elements are tested as a whole. To test computer
software, we spiral out along streamlines that broaden the scope of testing with
each turn. Considering the process from a procedural point of view, testing within
the context of software engineering is actually a series of four steps that are
implemented sequentially. The steps are shown in Figure 18.2. Initially, tests focus
on each component individually, ensuring that it functions properly as a unit.
Hence the name unit testing.
Unit testing makes heavy use of white-box testing techniques, exercising specific
paths in a module's control structure to ensure complete coverage and maximum
error detection. Next, components must be assembled or integrated to form the
complete software package. Integration testing addresses the issues associated
with the dual problems of verification and program construction. Black-box test
case design techniques are the most prevalent during integration, although a
limited amount of white-box testing may be used to ensure coverage of major
control paths. After the software has been integrated (constructed), a set of high-
order tests is conducted. Validation criteria (established during requirements
analysis) must be tested. Validation testing provides final assurance that software
meets all functional, behavioral, and performance requirements. Black-box testing
techniques are used exclusively during validation. The last high-order testing step
falls outside the boundary of software engineering and into the broader context of
computer system engineering. Software, once validated, must be combined with
other system elements (e.g., hardware, people, databases). System testing verifies
that all elements mesh properly and that overall system function/performance is
achieved.
Question 10: Explain the process of Top-down integration and Bottom-up
Integration.
Advantages
• Drivers (which simulate higher-level calling modules) do not have to be
written when top-down testing is used.
• It provides an early working version of the program, so design
defects can be found and corrected early.
Disadvantages
• Stubs have to be written with utmost care, as they must simulate the
setting of output parameters (a minimal stub sketch follows this list).
• It is difficult to have other people or third parties perform this
testing; mostly the developers will have to spend time on this.
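A minimal sketch of such a stub, assuming a hypothetical balance-reporting module, might look like this:

```python
def fetch_balance_stub(account_id):
    """Stub standing in for the real lower-level module during top-down testing.

    It simulates the setting of output values, so it must return data that is
    plausible for the callers being exercised (the values here are made up).
    """
    return {"account_id": account_id, "balance": 1000.0}

def report_balance(account_id, fetch_balance=fetch_balance_stub):
    """Higher-level module under test; the lower-level call is injected."""
    record = fetch_balance(account_id)
    return f"Account {record['account_id']}: {record['balance']:.2f}"

print(report_balance("A-17"))   # exercised before the real fetch module exists
```

Once the real data-access module is written and integrated, the stub is simply replaced by it.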
Bottom Up Testing is an approach to integration testing where the lowest level
components are tested first, then used to facilitate the testing of higher level
components. The process is repeated until the component at the top of the
hierarchy is tested.
All the bottom or low-level modules, procedures or functions are integrated and
then tested. After the integration testing of lower level integrated modules, the
next level of modules will be formed and can be used for integration testing. This
approach is helpful only when all or most of the modules of the same development
level are ready. This method also helps to determine the levels of software
developed and makes it easier to report testing progress in the form of a
percentage.
In bottom-up integration testing, modules at the lowest level are developed first,
and other modules which go towards the 'main' program are integrated and tested
one at a time. Bottom-up integration also uses test drivers to drive and pass
appropriate data to the lower level modules. As and when code for other modules
gets ready, these drivers are replaced with the actual modules. In this approach,
lower level modules are tested extensively, thus making sure that the most heavily
used modules are tested properly.
Advantages
• The behavior of the interaction points is crystal clear, as components are added in
a controlled manner and tested repetitively.
• Appropriate for applications where bottom up design methodology is used.
Disadvantages
• Writing and maintaining test drivers or harnesses is more difficult than writing
stubs (see the driver sketch after this list).
• This approach is not suitable for software development using a top-down
approach.
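A minimal sketch of such a test driver, assuming a hypothetical low-level compute_tax module, might look like this:

```python
# Low-level module already implemented and being integrated bottom-up (hypothetical).
def compute_tax(amount, rate=0.18):
    return round(amount * rate, 2)

def driver():
    """Temporary test driver: it stands in for the not-yet-written higher-level
    module, feeding representative data to the low-level component and checking
    the results."""
    cases = [(100.0, 18.0), (0.0, 0.0), (19.99, 3.6)]
    for amount, expected in cases:
        actual = compute_tax(amount)
        assert actual == expected, f"compute_tax({amount}) -> {actual}, expected {expected}"
    print("all driver checks passed")

if __name__ == "__main__":
    driver()
```

Once the real higher-level caller exists and is integrated, the driver is discarded.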