Estimating Duration and Cost - Summary


Estimating product size, staff effort, project schedule, and cost are inextricably entwined during the project planning process, which takes place for the first time near the beginning of the project life cycle and many times thereafter, generally at the end of each major phase. Estimating accuracy is often very poor at the beginning of a project, but it improves with the project knowledge gained in every phase, until at project completion the estimates converge on the actual size, effort, schedule, and cost (Figure 1).


Effort (person-hours) and duration (calendar days) for the project must be estimated to enable managers to assess product costs, return on investment, time to market, and quality. Estimation is difficult because projects must often satisfy conflicting goals: provide specified functionality within specified performance criteria, within a specified cost and schedule, and with some desired level of quality. Moreover, estimates are required before the specifications are well known. An architectural or high-level design barely scratches the surface in terms of providing quantifiable, testable requirements, yet the first estimate normally precedes this activity.

The estimating process falls primarily in the realm of project management skills of documenting plans, estimating cost, estimating effort, scheduling, and selecting metrics. Yet, as with all software project management activities, most of the people management skills come into play as well.

An organization that documents, follows, and continually improves its estimating process will satisfy a goal of the SEI CMM Level 2 key process area, software project planning. In particular, Goal 1 states, "Software estimates are documented for use in planning and tracking the software project."

The biggest challenge in software development is to deliver a fully functional, high-quality product on time and within budget, using the forecasted resources. These constraints are referred to as the software estimation paradigm.

The first step in software estimation is to determine the size of the product. Size is generally expressed in KLOC, but it may also be expressed in function points, feature points, object points, or other units of measurement. A unit other than KLOC is generally translated into KLOC before moving on to the next step of estimating effort, schedule, and cost. Translation tables exist for translating almost any modern widely used language into a LOC count. Methods for estimating the size include Wideband Delphi, analogy, expert judgment, function point analysis, feature point analysis, object point analysis, and so on.
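The translation step described above can be sketched in code. The gearing factors (average LOC per function point) below are illustrative placeholders, not authoritative values; published "backfiring" tables should be consulted and calibrated to local data.

```python
# Sketch: translating a function-point count into KLOC via gearing
# factors (LOC per function point). Values are illustrative only.
GEARING_FACTORS = {
    "C": 128,
    "Java": 53,
    "Python": 27,
}

def function_points_to_kloc(fp: float, language: str) -> float:
    """Convert a function-point count to estimated KLOC."""
    loc_per_fp = GEARING_FACTORS[language]
    return fp * loc_per_fp / 1000.0

print(function_points_to_kloc(200, "Java"))  # 200 FP * 53 LOC/FP = 10.6 KLOC
```

With the size expressed in KLOC, the effort-, schedule-, and cost-estimating models that follow can be applied.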

Empirical models, regression models, and mathematical models are used to estimate effort, the amount of labor required to create a software product of a given size. Here, we discussed regression models only. All the effort-estimating models depend on size.

When the effort, generally expressed in staff-months, is known, then the schedule and cost may be derived. Schedule is based on productivity factors, number of staff available, and phase distribution of effort.

COCOMO, a regression model, is the most widely used and best known of all the estimating models. COCOMO supplies three modes - organic, semidetached, and embedded - and three levels of complexity - basic, intermediate, and detailed. A "mode" describes the type of project - big, small, simple, complex, and so on. A "level" determines the amount of input - more input results in a more accurate estimate. The original author of COCOMO is Dr. Barry Boehm; his work began at TRW and continues at the University of Southern California.

Basic COCOMO relies on a formula arrived at by curve fitting to a set of observed data points and applying simple regression analysis. Intermediate COCOMO improves the basic model by adding cost drivers, or environmental factors that increase or decrease the effort involved in developing the product. Detailed COCOMO provides additional analysis tools by estimating according to WBS levels (three-level hierarchy: system, subsystem, module) and by adjusting formulas for each life cycle phase. Most people use the intermediate model, which can easily be performed with a spreadsheet. Numerous automated tools exist to calculate the estimations.
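The basic model (with the intermediate model's effort adjustment factor as a multiplier) can be sketched as follows. The mode coefficients are Boehm's published basic-COCOMO values; a real estimate should use locally calibrated coefficients and the full set of cost drivers.

```python
# Sketch: basic COCOMO effort and schedule, with an optional effort
# adjustment factor (EAF) as in the intermediate model.
MODES = {  # mode: (a, b, c, d) - Boehm's basic-COCOMO coefficients
    "organic":      (2.4, 1.05, 2.5, 0.38),
    "semidetached": (3.0, 1.12, 2.5, 0.35),
    "embedded":     (3.6, 1.20, 2.5, 0.32),
}

def cocomo(kloc: float, mode: str = "organic", eaf: float = 1.0):
    """Return (effort in staff-months, schedule in calendar months).

    `eaf` is the product of the intermediate-model cost-driver
    multipliers (1.0 = nominal).
    """
    a, b, c, d = MODES[mode]
    effort = a * kloc ** b * eaf   # staff-months
    schedule = c * effort ** d     # calendar months
    return effort, schedule

# A 32-KLOC organic-mode project: roughly 91 staff-months, 14 months.
effort, schedule = cocomo(32, "organic")
```

This is exactly the kind of calculation that fits comfortably in a spreadsheet, which is why the intermediate model is the one most people use.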

COCOMO, like all estimating models and tools, has advantages and disadvantages. It is easy to use and provides a common language for communication in the software project planning / management community. On the other hand, it must be calibrated to an organization's historical data to be really useful. Probably its greatest weakness is that it is founded upon an estimate of product size in LOC. However, neither Dr. Boehm nor the creators of other models and tools have offered a better starting point - even when starting with functionality instead of size, a translation of functions into size must take place before effort, schedule, and cost estimations can be continued.

COCOMO may be tailored for a specific organization. The means of calibration depends on historical data, and the formulas are in the public domain. Boehm's classic (though dense) book, Software Engineering Economics, contains all the details.

The original work that Boehm and his colleagues did at TRW in 1975 - 1980 must be translated into today's terms. Dr. Boehm, a professor at the University of Southern California, continues to update his findings and publishes regularly on the Web. A new work titled Software Cost Estimation with Cocomo II was published in 2000. A new user of the COCOMO model (I or II) should:

1. Collect data.

2. Run the COCOMO model.

3. If a mismatch is suspected, then recalibrate the model for the environment (and restructure equations).

This section simply explains how Boehm did his now-famous research, to explain the metamodel and offer directions for constructing organization-specific models. COCOMO II is more modern: It supports evolutionary, risk-driven, and collaborative software processes; fourth-generation languages and application generators; commercial-off-the-shelf and reuse-driven software approaches; fast-track software development approaches; and software process maturity initiatives.

Another model, the mathematical model, is best known as the Software Lifecycle Management (SLIM) tool from QSM. With SLIM, the estimator can invoke a linear programming analysis that considers development constraints on both cost and effort and that provides a month-by-month distribution of effort and a consistency check with data collected for software systems of similar size.

The SLIM model is based on Putnam's own analysis of the software life cycle in terms of the Rayleigh distribution of project personnel level versus time. The algorithm used is:

Size = Ck × K^(1/3) × t^(4/3)

where Ck is a technology constant reflecting the development environment.
Size is the lines of code, K is the total life-cycle effort (in working years), and t is development time (in years). Estimating the size, effort, schedule, and cost of a new software project is an inexact science, so experts strongly recommend that more than one method be used for sizing as well as for estimating.
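The software equation above can be rearranged to show the schedule/effort trade-off it implies: solving for K gives K = (Size / (Ck × t^(4/3)))^3, so effort varies with the inverse fourth power of development time. The Ck value below is an illustrative placeholder; QSM derives real values from calibrated productivity data.

```python
# Sketch: Putnam's software equation rearranged for effort.
# Size = Ck * K**(1/3) * t**(4/3)  =>  K = (Size / (Ck * t**(4/3))) ** 3
def lifecycle_effort(size_loc: float, t_years: float, ck: float) -> float:
    """Total life-cycle effort K (staff-years) for a given size and schedule."""
    return (size_loc / (ck * t_years ** (4.0 / 3.0))) ** 3

# Halving the schedule multiplies effort by 2**4 = 16 (the fourth-power law).
k_slow = lifecycle_effort(100_000, 2.0, ck=10000)
k_fast = lifecycle_effort(100_000, 1.0, ck=10000)
```

The extreme sensitivity of effort to schedule compression is one reason the mathematical model is so often used as a consistency check against the other methods.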

A summary for an estimating checklist might contain:

●  Has a firm foundation been established for the estimate of software size?

●  Have customers, managers, and developers been trained in the basics of sizing and estimating?

●  Has historical data, if used, been applied correctly?

●  Have models been used correctly?

●  Has the estimation method been mapped/calibrated to the development process?

●  Have reasonable assumptions been made about product and process factors that affect productivity?

●  Do aggressive goals have realistic strategies for accomplishment?

●  Have any alternative estimating methods been used for comparison?

●  Have multiple methods been used? No one method is reliable enough, in most cases. When two methods disagree, you may be missing some facts.

●  Has the model clearly defined the costs it is estimating and the costs it is excluding?

●  In tracking estimates, are the estimates close to the actual costs expended on the projects?

●  Is the model objective? Does it avoid allocating most of the software cost variance to poorly calibrated subjective factors (such as complexity)? An objective model cannot be manipulated into producing whatever result the planner wants.

●  Does the model accommodate the estimation of subsystems and units? Does it give (accurate) phase and activity breakdown?

●  Is the tool stable? Do small differences in inputs produce small differences in output cost estimates?

●  Does the model cover the class of software projects whose costs you need to estimate?

●  Does the model avoid the use of information that will not be well known until the project is complete?

●  Does the model avoid the use of highly redundant factors or factors that make no appreciable contribution to the results?

In addition to taking care with tool selection, Boehm and other experts have given us good advice to heed:

●  Measure the difference between the actuals and the plan to determine the amount of variance.

●  Establish a baseline (phases, milestones, size, effort/staffing/cost, defects).

●  Use templates for efficiency and consistency.

●  Back up positions with data.

●  Try to negotiate solutions with a better than 50% probability of success.

●  Collect data from completed projects in a nonintrusive and automated way.

●  Maintain a master database that contains the data from every completed project.

●  As the number of collected projects grows, use statistical analysis techniques to stratify the data into various meaningful subsets.

●  Budget and plan for re-estimation. Execute re-estimates based upon new and/or revised information (predictors, assumptions). The uncertainty present at the beginning of a project warrants regular reviews, coupled with risk re-assessment.

●  Don't let the forecast get stale.

There are similarities in the models. Even though we call one "mathematical", they all rely on mathematical formulas (see Table 1). Although we call one "empirical", they all rely on the observation of actual projects. Each estimating model has its own specific strengths and weaknesses, which is one of the reasons to use more than one for the sizing activity as well as for the effort-, schedule-, and cost-estimating activity. Estimating is a process, even a mini-project, to be documented, taught, and consistently followed.

COCOMO SLIM Equation and Parameter Comparison


Copyright 2017 SPMInfoBlog.