
August 2012

Master of Computer Application (MCA) Semester 3 MC0071 Software Engineering 4 Credits (Book ID: B0808 & B0809)
Assignment Set 1 (60 Marks)

1.

Explain Iterative Development Model in detail

ANS: The iterative enhancement model counters the third limitation of the waterfall model and tries to combine the benefits of both prototyping and the waterfall model. The basic idea is that the software should be developed in increments, each increment adding some functional capability to the system until the full system is implemented. At each step, extensions and design modifications can be made. An advantage of this approach is that it can result in better testing, because testing each increment is likely to be easier than testing the entire system as in the waterfall model. The incremental model also provides feedback to the client that is useful for determining the final requirements of the system.

In the first step of this model, a simple initial implementation is done for a subset of the overall problem. This subset contains some of the key aspects of the problem that are easy to understand and implement, and which form a useful and usable system. A project control list is created that contains, in order, all the tasks that must be performed to obtain the final implementation. This project control list gives an idea of how far the project is, at any given step, from the final system. Each step consists of removing the next task from the list, designing the implementation for the selected task, coding and testing the implementation, performing an analysis of the partial system obtained after this step, and updating the list as a result of the analysis. These three phases are called the design phase, implementation phase and analysis phase. The process is iterated until the project control list is empty, at which time the final implementation of the system will be available. The iterative enhancement process model is shown in the diagram below:

Iterative Development Model


The project control list guides the iteration steps and keeps track of all tasks that must be done. Based on the analysis, one of the tasks in the list can include redesign of defective components or redesign of the entire system. Redesign of the system will occur only in the initial steps; in the later steps, the design will have stabilized and there is less chance of redesign. Each entry in the list is a task that should be performed in one step of the iterative enhancement process and should be completely understood. Selecting tasks in this manner minimizes the chance of error and reduces the redesign work. The design and implementation phases of each step can be performed in a top-down manner or by using some other technique. One effective use of this type of model is for product development, in which the developers themselves provide the specifications and therefore have a lot of control over which specifications go into the system and which stay out.
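The design-implement-analyse loop over the project control list described above can be sketched as follows. This is an illustrative outline only; the task names and the three phase functions are placeholders, not part of any real process tool.

```python
def design(task):
    # Design phase: produce a design for the selected task.
    return "design for " + task

def implement_and_test(plan):
    # Implementation phase: code and test the design.
    return "tested " + plan

def analyse(partial_system, control_list):
    # Analysis phase: examine the partial system and update the list.
    # Analysis may append redesign tasks; this sketch adds none.
    return control_list

# The ordered project control list: all tasks needed for the final system.
project_control_list = ["core data model", "basic reporting", "user preferences"]
partial_system = []

while project_control_list:
    task = project_control_list.pop(0)                # remove the next task
    plan = design(task)                               # design phase
    partial_system.append(implement_and_test(plan))   # implementation phase
    project_control_list = analyse(partial_system, project_control_list)  # analysis phase

# When the list is empty, partial_system is the final implementation.
```

Each pass through the loop delivers one increment, so the distance from the final system is always visible as the number of tasks remaining on the list.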

In customized software development, where the client has to essentially provide and approve the specifications, it is not always clear how this process can be applied. Another practical problem with this type of development project comes in generating the business contract: how will the cost of additional features be determined and negotiated, particularly because the client organization is likely to be tied to the original vendor who developed the first version? Overall, in these types of projects, this process model can be useful if the "core" of the application to be developed is well understood and the "increments" can be easily defined and negotiated. In a client-oriented project, this process has the major advantage that the client's organization does not have to pay for the entire software at once; it can get the main part of the software developed and perform a cost-benefit analysis for it before enhancing the software with more capabilities.

2. Describe the Object Interface Design ANS: Object interface design is concerned with specifying the detail of the object interfaces. This means defining the types of the object attributes and the signatures and semantics of the object operations. If an object-oriented programming language is being used for implementation, it is natural to use it to express the interface design. Designers should avoid including representation information in their interface design. Rather, the representation should be hidden and object operations provided to access and update the data. If the representation is hidden, it can be changed without affecting the objects that use these attributes. This leads to a design which is inherently more maintainable. For example, an array representation of a stack may be changed to a list representation without affecting other objects which use the stack.
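The stack example above can be sketched in a few lines. This is an illustrative sketch: clients depend only on the operations, so the hidden representation (a Python list here, standing in for the array) could be swapped for a linked list without changing any caller.

```python
class Stack:
    """Interface: push, pop, peek. The representation is hidden behind
    the operations, so it can change without affecting client objects."""

    def __init__(self):
        self._items = []            # hidden representation (array-like)

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()

    def peek(self):
        return self._items[-1]

# Client code uses only the interface, never the representation.
s = Stack()
s.push("a")
s.push("b")
```

Because no client ever touches `_items` directly, replacing it with another data structure is a purely internal change, which is exactly the maintainability benefit described above.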

Design Evolution
An important advantage of an object-oriented approach to design is that it simplifies the problem of making changes to the design. The reason for this is that object state representation does not influence the design. Changing the internal details of an object is unlikely to affect any other system objects. Furthermore, because objects are loosely coupled, it is usually straightforward to introduce new objects without significant effects on the rest of the system. To illustrate the robustness of the object-oriented approach, assume that pollution-monitoring capabilities are to be added to each weather station. This involves adding an air quality meter to compute the amount of various pollutants in the atmosphere. The pollution readings are transmitted at the same time as the weather data. The figure below shows the weather station and the new objects added to the system. The abbreviation NO in Air quality stands for nitrous oxide. To modify the design, the following changes must be made:

Fig: New objects to support pollution monitoring

(1) An object Air quality should be introduced as part of Weather station, at the same level as Weather data.
(2) An operation Transmit pollution data should be added to Weather station to send the pollution information to the central computer. The weather station control software must be modified so that pollution readings are automatically collected when the system is switched on.
(3) Objects representing the types of pollution which can be monitored should be added. Levels of nitrous oxide, smoke and benzene can be measured.
(4) A hardware control object Air quality meter should be added as a sub-object to Air quality. This has attributes representing each of the types of measurement which can be made.

The addition of pollution data collection does not affect weather data collection in any way. Data representations are encapsulated in objects, so they are not affected by the additions to the design.
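The weather-station extension above can be sketched in code. The class and attribute names below follow the figure's vocabulary but are otherwise illustrative; the point is that `AirQuality` and `transmit_pollution_data` are added without touching `WeatherData` at all.

```python
class WeatherData:
    # Existing object: unchanged by the extension.
    def collect(self):
        return {"temperature": 21.5, "humidity": 0.60}

class AirQuality:
    # New object at the same level as WeatherData.
    # NO stands for nitrous oxide, as in the figure.
    def collect(self):
        return {"NO": 0.0, "smoke": 0.0, "benzene": 0.0}

class WeatherStation:
    def __init__(self):
        self.weather_data = WeatherData()
        self.air_quality = AirQuality()      # new sub-object added

    def transmit_weather_data(self):
        return self.weather_data.collect()

    def transmit_pollution_data(self):       # new operation added
        return self.air_quality.collect()

station = WeatherStation()
pollution = station.transmit_pollution_data()
```

Because the representations are encapsulated, the only code that changes is the station's constructor and the one new operation; weather data collection is untouched, just as the text describes.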

Function Oriented Design


A function-oriented design strategy relies on decomposing the system into a set of interacting functions with a centralized system state shared by these functions as shown in figure below. Functions may also maintain local state information but only for the duration of their execution.

Fig: A function-oriented view of design


Function-oriented design has been practiced informally since programming began. Programs were decomposed into subroutines, which were functional in nature. In the late 1960s and early 1970s, several books were published that described top-down functional design and specifically proposed it as a "structured" design strategy. These led to the development of many design methods based on functional decomposition. Function-oriented design conceals the details of an algorithm in a function, but system state information is not hidden. This can cause problems, because a function can change the state in a way which other functions do not expect. Changes to a function and the way in which it uses the system state may cause unanticipated changes in the behavior of other functions. A functional approach to design is therefore most likely to be successful when the amount of system state information is minimized and information sharing is explicit. Systems whose responses depend on a single stimulus or input, and which are not affected by input histories, are naturally function-oriented. Many transaction-processing systems and business data-processing systems fall into this class. In essence, they are concerned with record processing, where the processing of one record is not dependent on any previous processing.
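The function-oriented view above, i.e. a set of interacting functions around a centralized shared state, can be sketched as follows. The state layout and function names are illustrative placeholders.

```python
# Centralized system state, shared by all functions.
system_state = {"records": ["r1", "r2"], "processed": 0}

def add_record(record):
    # Writes to the shared state.
    system_state["records"].append(record)

def process_next():
    # Each record is processed independently of input history (the
    # record-processing case described above), but every function reads
    # and writes the shared state, so an unexpected change here could
    # surprise other functions; this is the risk the text notes.
    record = system_state["records"].pop(0)
    system_state["processed"] += 1
    return record.upper()

add_record("r3")
result = process_next()
```

Note how nothing in the state is hidden: any function may mutate `system_state`, which is precisely why function-oriented design works best when shared state is minimized and sharing is explicit.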

3. Explain why it is important to describe software designs. ANS:


When we talk about software system design, we are talking about something larger than just a piece of software. Often software systems are large programs, or systems comprised of several programs and subsystems. In many cases, systems of larger sizes need to be well designed in order to perform sufficiently well under heavy load. So software system design is the practice of extracting the needed information about what a system should do, and then arriving, via design, research and prototyping, at a way to solve the problem that yields sufficient performance without wasting time or resources in the process. It is excruciatingly hard, or perhaps impossible, to create a software system that works well without enough design. One way to design systems is through prototyping, but it still is not really possible to arrive at a prototype that is good enough without understanding the actual problem well enough. So, no matter what kind of approach you take to solving the problem of creating a well-working software system, you cannot live without design; there are just different approaches to it. The design might be thoroughly thought out up front, or it may emerge, but the time to think it out has to be put in one way or the other.

Software is what enables us to use computer hardware effectively, and it is essential to modern life. A software designer turns people's needs into computer programmes that enable people to meet their objectives. Most electrical items have software in them; some, such as light bulbs, do not. Software is a set of instructions that tells hardware what to do, and a software designer designs those instructions so that a computer will do helpful things for people. A software designer produces computer programmes, a functional specification to agree the design with the customer, and a programme to realise the design. Because software is involved with almost everything, software designers work both for companies and freelance; it is very hard to find a company which does not employ software designers. To be a software designer, maths is essential: employers look for maths and science skills as well as good communication skills. You can become a software designer without qualifications, but it is very unlikely. Software designers plan and write computer programmes to meet functional requirements, and they design the features of programmes, such as the features of Microsoft Word or the features of an oven. Everything we do on a computer is a result of software design.

4. Write an overview on the Rational Unified Process. Ans: The Rational Unified Process (RUP) provides six guidelines for implementing successful projects. These six best practices were developed from Rational's experience in developing large and complex systems, and they were also designed to help drive the use of the tools offered in Rational's product line. The designers of the RUP continue to evolve the process as methods and practices mature through their application [1]. The six basic principles are outlined below in figure 1.

The Rational Unified Process supports an iterative approach to development that helps in identifying risks proactively, reducing re-factoring cost, and building models with an easy exit strategy. The RUP recommends using use-cases and scenarios to capture functional requirements. It supports component-based software development: components are non-trivial modules or subsystems that fulfill a clear function, and the RUP provides a systematic approach to defining an architecture using new and existing components [2]. The RUP encourages visual software models to depict architectures and components. Quality should be verified frequently, with reviews against the requirements for reliability, functionality, application performance and system performance. The process also recommends controlling changes to software; it describes how to control, track and monitor changes to enable iterative development.

Per Kroll and Walker Royce updated the six best practices in the IBM Rational e-zine, October 2005, as follows:
1. Adapt the process.
2. Balance competing stakeholder priorities.
3. Collaborate across teams.
4. Demonstrate value iteratively.
5. Elevate the level of abstraction.
6. Focus continually on quality.

Every project is different in character. Complexity, size, dependent variables and distributed teams are some of the factors which impact a project, and identifying these dependent factors helps in adjusting the process to fit it. Project size must be optimal and must be decided based on the requirements. The process should be introduced slowly in the initial part of the project and applied more intensively in its later parts. Periodic reviews in the RUP help to identify risks proactively and continuously improve the process. The appropriate strength of the process depends on project size, distributed teams, complexity, stakeholders and other factors. More formal control is required when a project meets the following criteria:
* Project members are distributed in different places.
* The user community is large and distributed.
* The project is large.
* There are many stakeholders.

Defining and understanding business needs is an important aspect of implementing the RUP. Key players need to identify the business needs and try to prioritize business and stakeholder needs. Per Kroll and Walker Royce argue strongly for centering development activities around stakeholder needs; performing value analysis on the leverage of assets, balancing user needs and reusing assets are some of their suggested best practices. Collaboration across the team is required to build a highly efficient team. Per Kroll and Walker Royce mention that effective collaboration can be obtained by

* Motivating individuals on the team.
* Encouraging cross-functional collaboration.
* Integrating collaboration across the business, software and operations teams.
* Providing a feasible environment for effective collaboration.
Iterative, value-based development is one of these best practices. Delivering incremental value helps in obtaining early and continuous feedback, and this early feedback helps in reducing re-factoring cost. Per Kroll and Walker Royce argue for elevating the level of abstraction by reusing existing assets. Reusability can be impacted by system complexity, loose coupling, and cryptic architectures, and it can be obtained by having a modular design and a simple architecture.

5. Describe the Capability Maturity Model. Ans: The Capability Maturity Model developed by the Software Engineering Institute (SEI) at Carnegie Mellon University is a model for identifying the organizational processes required to ensure software process quality. The Capability Maturity Model (CMM) (see Table 3.1) is a multistaged, process definition model intended to characterize and guide the engineering excellence or maturity of an organization's software development processes. The Capability Maturity Model: Guidelines for Improving the Software Process (1995) contains an authoritative description. See also Paulk et al. (1993) and Curtis, Hefley, and Miller (1995) and, for general remarks on continuous process improvement, Sommerville, Sawyer, and Viller (1999) (see Table 3.2). The model prescribes practices for planning, engineering, and managing software development and maintenance, and addresses the usual goals of organizational system engineering processes: namely, quality improvement, risk reduction, cost reduction, predictable process, and statistical quality control (Oshana & Linger 1999).

Table 3.1: Profile of Capability Maturity Model

Table 3.2: Profile of Process Improvement Models

However, the model is not merely a program for how to develop software in a professional, engineering-based manner; it prescribes an evolutionary improvement path from an ad hoc, immature process to a mature, disciplined process (Oshana & Linger 1999). Walnau, Hissam, and Seacord (2002) observe that the ISO and CMM process standards established the context for improving the practice of software development by identifying roles and behaviors that define a software factory.

The CMM identifies five levels of software development maturity in an organization. At level 1, the organization's software development follows no formal development process. The process maturity is said to be at level 2 if software management controls have been introduced and some software process is followed. A decisive feature of this level is that the organization's process is supposed to be such that it can repeat the level of performance that it achieved on similar successful past projects. This is related to a central purpose of the CMM: namely, to improve the predictability of the development process significantly. The major technical requirement at level 2 is the incorporation of configuration management into the process. Configuration management (or change management, as it is sometimes called) refers to the processes used to keep track of the changes made to the development product (including all the intermediate deliverables) and the multifarious impacts of these changes. These impacts range from the recognition of development problems and identification of the need for changes, through alteration of previous work, to verification that agreed-upon modifications have corrected the problem and that corrections have not had a negative impact on other parts of the system. An organization is said to be at level 3 if the development process is standard and consistent.
The project management practices of the organization are supposed to have been formally agreed on, defined, and codified at this stage of process maturity. Organizations at level 4 are presumed to have put into place qualitative and quantitative measures of organizational process. These process metrics are intended to monitor development and to signal trouble and indicate where and how a development is going wrong when problems occur. Organizations at maturity level 5 are assumed to have established mechanisms designed to ensure continuous process improvement and optimization. The metric feedbacks at this stage are not just applied to recognize and control problems with the current project as they were in level-4 organizations. They are intended to identify possible root causes in the process that have allowed the problems to occur and to guide the evolution of the process so as to prevent the recurrence of such problems in future projects, such as through the introduction of appropriate new technologies and tools.

The higher the CMM maturity level is, the more disciplined, stable, and well-defined the development process is expected to be; the environment is also assumed to make more use of automated tools and of the experience gained from many past successes (Zhiying 2003). The staged character of the model lets organizations progress up the maturity ladder by setting process targets for the organization. Each advance reflects a further degree of stabilization of an organization's development process, with each level "institutionaliz[ing] a different aspect of the process" (Oshana & Linger 1999).

Each CMM level has associated key process areas (KPAs) that correspond to activities that must be formalized to attain that level. For example, the KPAs at level 2 include configuration management, quality assurance, project planning and tracking, and effective management of subcontracted software. The KPAs at level 3 include intergroup communication, training, process definition, product engineering, and integrated software management. Quantitative process management and development quality define the required KPAs at level 4. Level 5 institutionalizes process and technology change management and optimizes defect prevention.

The CMM model is not without its critics. For example, Hamlet and Maybee (2001) object to its overemphasis on managerial supervision as opposed to technical focus. They observe that agreement on the relation between the goodness of a process and the goodness of the product is by no means universal. They present an interesting critique of the CMM from the point of view of the so-called process-versus-product controversy: to what extent should software engineers focus their efforts on the design of the software product being developed, as opposed to the characteristics of the software process used to develop that product?
The usual engineering approach has been to focus on the product, using relatively straightforward processes, such as the standard practice embodied in the Waterfall Model, adapted to help organize the work on developing the product. A key point of dispute is that no one has really demonstrated whether a good process leads to a good product. Indeed, good products have been developed with little process, and poor products have been developed under the guidance of many purportedly good processes. Furthermore, adopting complex managerial processes to oversee development may distract from the underlying objective of developing a superior product. Hamlet and Maybee (2001) agree that, at the extremes of project size, there is no particular argument about the planning process to follow. They observe that for small-scale projects, the cost of a heavy process management structure far outweighs the benefits; however, for very large-scale projects that will develop multimillion-line systems with long lifetimes, significant project management is clearly a necessity. In the midrange of projects with a few hundred thousand lines of code, however, the trade-offs between the managed model of development and the technical model, in which the management hierarchy is kept to an absolute minimum, are less obvious; indeed, the technical model may possibly be the superior and more creative approach.

Bamberger (1997), one of the authors of the Capability Maturity Model, addresses what she believes are some misconceptions about the model. For example, she observes that the motivation for the second level, in which the organization must have a repeatable software process, arises as a direct response to the historical experience of developers whose software development is out of control (Bamberger 1997). Often this is for reasons having to do with configuration management, or rather mismanagement. Among the many symptoms of configuration mismanagement are: confusion over which version of a file is the current official one; inadvertent side effects when repairs by one developer obliterate the changes of another developer; and inconsistencies among the efforts of different developers.

6. Describe the round-trip problem-solving approach. Ans: Round-Trip Problem-Solving Approach: The software engineering process represents a round-trip framework for problem solving in a business context, in several senses. The software engineering process is a problem-solving process, which entails that software engineering should incorporate or utilize the problem-solving literature, regardless of its interdisciplinary sources.

The value of software engineering derives from its success in solving business and human problems. This entails establishing strong relationships between the software process and the business metrics used to evaluate business processes in general. The software engineering process is a round-trip approach: it has a bidirectional character, which frequently requires adopting forward and reverse engineering strategies to restructure and reengineer information systems. It uses feedback control loops to ensure that specifications are accurately maintained across multiple process phases; reflective quality assurance is a critical metric for the process in general. The nonterminating, continuing character of the software development process is necessary to respond to ongoing changes in customer requirements and environmental pressures.
