Getting Started with Software Sizing: Estimating Begins with Planning


Sizing is the prediction of product deliverables (internal and external) needed to fulfill the project requirements. Estimation is the prediction of effort (resources) needed to produce the deliverables.

Sizing and estimating activities fall in the middle of the sequence of planning tasks for a software project. As shown in Figure (a), "Defining the Goal and Scope of the Software Project", "Creating the Work Breakdown Structure" and "Identifying the Tasks and Activities" precede the sizing task. Following the prediction of software size are the tasks of estimating duration and cost ("Estimating Duration and Cost"), which are used to assign resources ("Assigning Resources"), consider dependencies ("Considering Dependencies"), and schedule the work ("Scheduling the Work").

The Big Picture of Software Estimation

The WBS - "Decomposing" a Project into Tasks

Remember from "Creating the Work Breakdown Structure" that a WBS is a description of the work to be carried out, broken down into key elements or tasks. A task may be managerial, administrative, integral, or developmental. By partitioning the project into these manageable pieces, each element, or task, may be sized and its effort (expressed in person hours, weeks, months, etc.) may be estimated. The WBS identifies tasks at a level useful in locating available staff with the proper skills. After the number of staff members and the skills of each are determined, the effort estimates may then be applied to a calendar to determine the duration of the project and project milestones - the resulting project schedule is usually portrayed as a Gantt chart. Finally, each task is described, estimated, and tracked throughout the project.

There are many views of the WBS - a product view shows hierarchical relationships among product elements (routines, modules, subsystems, etc.), while a project view represents hierarchical relationships among work activities (process elements). In sizing and estimating for schedule predictions, both sets of elements and activities need to be considered. Development (product) tasks may include analysis, high-level design, low-level design, coding, testing, and so on, which take place in the life cycle phases. Managerial tasks may include project planning, tracking, control, risk analysis, and so on.

Support tasks may include documentation of deliverables such as users' manuals, operations guides, and network communications. We call managerial, administrative, and support tasks integral tasks; they traverse one or more (sometimes all) of the development tasks. Other integral tasks include: configuration management procedures, software quality assurance practices, risk analysis and management, software development tools and techniques, methods of conducting reviews of in-process deliverables, metrics to be collected and analyzed, definition and documentation of standards (rules for in-process deliverables), and any other activities required to meet customer requirements, such as creation of documents, training programs, tool development, or acquisition.

The WBS drives planning, providing the foundation for tracking of work activities, cost, and schedule by giving the engineer or manager a global view. It permits us to organize and demonstrate the work to be performed; ensure that all necessary work has been identified; divide the work into small, well-defined tasks; facilitate planning, estimating, and scheduling of the project; provide a basis for data monitoring and historical data collection; and identify contractual tasks and deliverables. It allows us to dismiss "We're 90% done" and replace it with "We've completed 97 of 234 tasks." Weighted tasks become meaningful and can be associated with a cost estimate.

The WBS becomes our table of contents, a hierarchical list of the work activities required to complete a project. As such, it becomes an indispensable tool in sizing and estimating process activities.

Recall from "Process Overview" that every process has inputs, transformations, and outputs. The following additional inputs are useful to the sizing and estimating process (where outputs are estimated size, effort, duration [schedule], and cost):

●  Project proposal or statement of work (SOW);
●  Project initiation documentation;
●  Statement of requirements:
    - Performance to be achieved;
    - Specific features to be included;
    - How the results will be evaluated;
●  Constraints;
●  Processes and standards to be used:
    - How the software will be developed;
    - Rules that will be followed and quality criteria that will be used;
●  Contract for services, if applicable;
●  Prior experience on similar tasks;
●  Historical estimating and actual data for your organization;
●  Programming languages to be used;
●  Reusable software information;
●  System design information;
●  Initial concepts of software architecture - major components;
●  Goal and scope of the project, including tasks to be performed and products to be delivered.

Naturally, it is unlikely that all of these inputs will be available to the size estimation process, but the more, the better. Once what to size has been identified, as rendered in the WBS, the business of sizing software can begin in earnest.

Estimating the Size of Software Under Development (Sizing)

Conversions for values of length, volume, capacity, mass, and area to and from the metric system remind us that what we choose for units and unit names is not as important as that we communicate about them in a common language. The same is true of software sizing - as long as we choose a measure and stick with it, comparisons of actual data to planned estimates may be tracked over time to provide feedback for improvement. Some of the more popular units of measure for software include the following examples. If you are not familiar with all the terms, no matter - the point is that any "observable" physical incarnation of software, even before it becomes software, that is countable will be sufficient for a unit of size.

Examples of Size Measures

In our exploration of different units of measurement for software, we'll consider some of the most commonly used ones, including:

●  Lines of code (LOC);
●  Function points;
●  Feature points;
●  Number of bubbles on a data flow diagram (DFD);
●  Number of entities on an entity relationship diagram (ERD);
●  Count of process/control (PSPEC/CSPEC) boxes on a structure chart;
●  Number of "shalls" versus "wills" in a government specification;
●  Amount of documentation;
●  Number of objects, attributes, and services on an object diagram.

There are lots of academic arguments over which is best, and there is merit to measuring whichever fits your application best. It doesn't matter which one you choose as long as you use it consistently and remember that historical data is your best friend.

Lines of Code as a Unit of Size

How can you know how many LOC will be needed before the code is written or even designed? Why does anyone think that LOC has any bearing on how much effort will be required when product complexity, programmer ability/style, and the power of the programming language are not taken into consideration? How can a vast difference in the number of LOC required to do equivalent work be explained? Questions like these have made the LOC measure infamous, yet it is still the most widely used size metric. The relationship between LOC and effort is not linear. Despite the introduction of new languages, average programmer productivity has remained, over the last two decades, at about 3,000 delivered LOC per programmer-year. This tells us that no matter how languages improve, how hard managers push, or how fast or how much overtime programmers work, cycle time improvements cannot come directly from squeezing more out of programming productivity. The real concerns involve software functionality and quality, not the number of LOC produced.

Estimating LOC Using Expert Opinions and Bottom-Up Summations

We will assume that our WBS contains many levels of decomposition in the product/project hierarchy. Requirements on the WBS product hierarchy have been decomposed into actual software system components, beginning at a generic subsystem level (such as Accounts Receivable or Accounts Payable, in a financial system) and refined into a very precise level or primitive level of abstraction (such as Editor or GET/PUT I/O routines or Screen Formatter). This lowest level can rarely be known at the time of the initial sizing; it is usually completed much later in the project. However, typically several levels of abstraction can be determined even at very early stages of project planning. The most complete WBS that can be derived will be the most helpful in accurate sizing that leads to accurate estimating.

When the WBS has been decomposed to the lowest level possible at this time, a "statistical" size may be created through a sizing and summing process. The size of each component may be obtained by asking experts who have developed similar systems, or by asking potential developers of this system to estimate the size of each box on the lower levels of the WBS. When the sizes are summed, the total is called a "bottom-up" size estimate. A much better size estimate is usually obtained if each estimator is asked to provide an optimistic, pessimistic, and realistic size estimate. A beta distribution may then be formed by multiplying the realistic size estimate by 4, adding the optimistic and pessimistic estimates, and dividing the total by 6. This weighted average helps account for the inherent uncertainty of estimating. For example, if a given window object appears on the WBS for a system, the supporting code required to process the editing for that window might be estimated at between 200 and 400 lines of code, with a belief that it will be closer to 200. Asking the estimator to think about optimistic and pessimistic scenarios might produce this final estimate:

    Estimated size = (optimistic + (4 x realistic) + pessimistic) / 6
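As a quick check of the arithmetic, here is a minimal sketch in Python. The 200 and 400 LOC figures come from the window example above; the realistic value of 250 LOC is purely a hypothetical placeholder supplied by the estimator.

    def weighted_size(optimistic, realistic, pessimistic):
        """Beta-distribution weighted average described above."""
        return (optimistic + 4 * realistic + pessimistic) / 6.0

    # 200 and 400 LOC are the optimistic and pessimistic figures from the window
    # example; the realistic figure of 250 LOC is a hypothetical placeholder.
    print(weighted_size(200, 250, 400))   # about 267 LOC, pulled toward the realistic value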
The number of thousands of source lines of code (KSLOC) delivered is a common metric, carried through to estimations of productivity, which are generally expressed as KSLOC/SM or KLOC/SM (where SM = staff-month). Barry Boehm, one of the most highly regarded researchers in this area, has been looking for many years for a better product metric to correlate with effort and schedule, but he has not found one. LOC is a universal metric because all software products are essentially made of them.

Guidelines for Counting LOC

Counting lines of existing code manually is far too tedious and time-consuming, so most organizations purchase or build an automated LOC counter. This can raise some tricky questions about what exactly constitutes a line of code. Again, it doesn't matter so much how you define LOC, as long as the definition is used consistently. The following counting guidelines have been in use for many years, both for recording the size of existing programs and for estimating the size of programs to be developed (a minimal counting sketch follows the list):

●  Ensure that each "source code line" counted contains only one source statement (if two executable statements appear on one line, separated by a semicolon, then the count is two; if one executable statement is spread across two "physical" lines, then the count is one). Programming languages allow for all manner of coding options, but it is usually pretty easy to determine a single executable statement because the compiler or interpreter has to do it.

●  Count all delivered, executable statements - the end user may not directly use every statement, but the product may need it for support (i.e., utilities).

●  Count data definitions once.

●  Do not count lines that contain only comments.

●  Do not count debug code or other temporary code such as test software, test cases, development tools, prototyping tools, and so on.

●  Count each invocation, call, or inclusion (sometimes called compiler directive) of a macro as part of the source in which it appears (don't count reused source statements).

●  Translate the number of lines of code to assembly language equivalent lines so that comparisons may be made across projects.
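The sketch below shows how a few of these rules might be automated for a C-like language. It is only a rough illustration, not a production counter: real tools must handle string literals, block comments, and compiler directives far more carefully, and the file name is hypothetical.

    def count_logical_loc(lines):
        """Rough logical-LOC count for a C-like language: skip blank and
        comment-only lines, then count executable statements by semicolon,
        so a statement split across physical lines is still counted once."""
        count = 0
        for line in lines:
            stripped = line.strip()
            if not stripped or stripped.startswith("//") or stripped.startswith("/*"):
                continue                      # blank or comment-only line: not counted
            count += stripped.count(";")      # two statements on one line count as two
        return count

    with open("module_a.c") as f:             # hypothetical source file
        print(count_logical_loc(f.readlines()))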

The first and second columns of Table 1 represent a widely used method of translating SLOC in various languages to the average number of basic assembler SLOC. (Note that SLOC and LOC are used interchangeably.) Many project managers want a translation of all languages into basic assembler so that an apples-to-apples comparison may be made across projects. Another use of this data is to project the size of a system from a known language into a target conversion language. For instance, suppose a 50,000 LOC system written in C will be converted to C++. Using numbers from Table 1, the basic Assembler SLOC for C is 2.5, so the 50,000 SLOC system written in C would be equivalent to 125,000 SLOC if written in Assembler (50,000 x 2.5). A 125,000 Assembler language system, if written in C++, would be equivalent to 125,000/6, or 20,833 SLOC.
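The conversion arithmetic is simple enough to capture in a few lines. In this sketch the 2.5 and 6 factors are the ones used in the worked example above; the small dictionary is only a stand-in for your own copy of Table 1.

    # Stand-in for the second column of Table 1 (basic Assembler SLOC per source statement)
    ASSEMBLER_EQUIV = {"C": 2.5, "C++": 6.0}   # values from the example above

    def convert_sloc(sloc, from_lang, to_lang):
        """Convert a SLOC count from one language to another via Assembler equivalence."""
        assembler_sloc = sloc * ASSEMBLER_EQUIV[from_lang]
        return assembler_sloc / ASSEMBLER_EQUIV[to_lang]

    print(convert_sloc(50_000, "C", "C++"))    # roughly 20,833 SLOC, as in the worked example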

Estimating LOC by Analogy

One way to estimate the size of an undeveloped software system is to compare its functionality with existing ones. Imagine that you have an existing software component, Module A, which will have to be rebuilt for a new system. A is 2,345 LOC, and you believe that the new Module A will be more efficient (you've learned through maintaining the original A how to make the code tighter), yet you also know that there are some additional features that can be added. Then, A may be estimated at 3,000 LOC.

This is certainly not a very accurate method because A may be written in a different programming language, in a different application domain, using different algorithms, with a different level of complexity, with untried functionality, in a different level of reality (simulation, emulation, actual application).

Consider another example: software converted from COBOL, using no design technique, to software written in C++, using an object-oriented design. The size decreased because it was designed better the second time, and the functionality and quality went up. However, the cost per line of code was 10% higher. Is this a productivity loss as it might appear? Of course it is not. It was an improvement in productivity as well as functionality and maintainability.

Advantages of Using LOC as a Unit of Measure

Advantages of using lines of code as a unit of software measurement include:

●  It is widely used and universally accepted.
●  It permits comparison of size and productivity metrics between diverse development groups.
●  It directly relates to the end product.
●  LOC are easily measured upon project completion.
●  It measures software from the developer's point of view - what he actually does (write lines of code).
●  Continuous improvement activities exist for estimation techniques - the estimated size can be easily compared with the actual size during post-project analysis. (How accurate was the estimate? Why was it off by a certain percent? What can be learned for the next project's size estimation?)

Conversion from Programming Language to Basic Assembler SLOC to SLOC per Function Point


Disadvantages of Using LOC

Disadvantages of using lines of code as a unit of software measurement include the following:

●  LOC is difficult to estimate for new software early in the life cycle.

●  Source instructions vary with the type of coding languages, with design methods, and with programmer style and ability.

●  There are no industry standards (such as ISO) for counting lines of code.

●  Software involves many costs that may not be considered when just sizing code - "fixed costs" such as requirements specifications and user documents are not included with coding.

●  Programmers may be rewarded for large LOC counts if management mistakes them for productivity; this penalizes concise design. Source code is not the essence of the desired product - functionality and performance are.

●  LOC count should distinguish between generated code and hand-crafted code - this is more difficult than a "straight count" that could be obtained from a compiler listing or code-counting utility.

●  LOC cannot be used for normalizing if platforms or languages are different.

●  The only way to predict a LOC count for new software to be developed is by analogy to functionally similar existing software products and by expert opinion, both imprecise methods.

●  Code generators often produce excess code that inflates or otherwise skews the LOC count.

Unfortunately, productivity is often measured by LOC produced. If a programmer's average output increases from 200 LOC per month to 250 LOC per month, a manager may be tempted to conclude that productivity has improved. This is a dangerous perception that frequently results in encouraging developers to produce more LOC per design. Not only is the developer rewarded with a seemingly higher productivity rating, but he is also perceived to produce cleaner code. Many organizations use this metric to measure quality:

    Quality (defect density) = Number of defects / KLOC delivered
If the denominator is inflated, then the quality may appear artificially high. The coding phase of most projects typically consumes as little as 7% of total effort and at most about 20%. It is, of course, the quality of code that is important, not the volume.
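A short sketch makes the distortion obvious: the same number of defects looks "better" when the LOC denominator is padded. The numbers are hypothetical.

    def defect_density(defects, kloc):
        """Defects per thousand delivered lines of code."""
        return defects / kloc

    # Hypothetical module: 30 defects in 10 KLOC of tight code
    print(defect_density(30, 10))    # 3.0 defects/KLOC
    # Same functionality and same 30 defects, padded out to 15 KLOC
    print(defect_density(30, 15))    # 2.0 defects/KLOC - quality appears artificially higher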

These issues led the thinkers of the software revolution to cast about for another way to measure. Enter function points.

Function Points as a Unit of Size

The function point (FP) method is based on the idea that software size is better measured in terms of the number and complexity of the functions that it performs than by the number of lines of code that represent it. The first work published about function points was written in the late 1970s by A.J. Albrecht of IBM, for transaction-oriented systems. Capers Jones, of Software Productivity Research, Inc., expanded Albrecht's ideas into a large and widely recognized body of knowledge. In 1986, a nonprofit group, the International Function Point User Group (IFPUG), was formed to disseminate information about the metric. In 1987, the British government adopted a modified function point as its standard software productivity metric. 1994 saw the publication of Release 4.0 of the IFPUG Standard Function Point Counting Practices Manual and Release 1.0 of the IFPUG Standard Guidelines to Software Measurement.

Function points measure categories of end-user business functions. They are determined in a more methodical way than are LOC counts. A straightforward analogy compares a physical house to software: the number of square feet is to the house as LOC is to software; the number of bedrooms and bathrooms is to the house as function points are to software. The former looks only at size; the latter looks at size and function.

Function points are intended to do the following:

●  Measure categories of end-user business functions;

●  Address the problem of attempting to estimate LOC too early in the life cycle;

●  Determine the number and complexity of outputs, inputs, database inquiries, files or data structures, and external interfaces associated with the software system.

A quick overview of the function point process is:

1.  Count the functions in each category (categories are: outputs, inputs, inquiries, data structures, and interfaces).

2.  Establish the complexity of each - simple, medium, complex.

3.  Establish weights for each complexity.

4.  Multiply each function by its weight and then sum up to get total function points.

5.  Convert function points to LOC using the formula:

     LOC = Points x ADJ x Conversion factor
   
    where ADJ is an adjustment for the general characteristics of the application.

The conversion factor, based on historical experience for the application and programming language, represents the average number of lines of code to implement a simple function. Why do this last step? Because most automated tools that estimate effort, cost, and schedule require LOC as input. Now we'll describe the FP process in more detail.

Guidelines for Counting Function Points

Figure (b) shows the basic steps in counting function points; each will be explained later. Each step has an output that is used in the next step. Table 2 shows the input, transformation, and output of each step in FP counting. This worksheet is left blank, as a template for your future use.

Basic Steps in Function Point Analysis

Step 1. Count Number of Functions in Each Category

General Guidelines for Counting

●  Count only software requirements functions.

●  Count logical representations. When any input, output, and so on requires different processing logic, each one of those logical representations is a unique function point.

The first rough cut at estimating the size of the system to be developed entails examination of the major system components. How much output is produced? How much input is necessary to produce the output? How much data is stored?

Count the number of items: outputs, inputs, inquiries, and files. The preliminary architecture provides the basis for this counting activity. Some people can begin with architecture in the form of textual requirements, but having an architecture in graphical form is very helpful. The weighting factors applied to all of these visibly external aspects of software are a set of empirical constants derived from trial and error.

Function Point Analysis Worksheet


Counting Outputs

The following list contains "hints" to keep in mind when counting outputs:

●  External outputs are things produced by the software that go to the outside of the system.

●  Outputs are units of business information produced by the software for the end user (application-oriented).

●  Examples include screen data, report data, error messages, and so on.

●  Count each unique unit of output that leaves the application boundary. An output unit is unique if it has a different format and/or requires different processing logic.

●  For those using structured methods ("Analysis and Design Methods"), an output is a data flow of information produced by the software for the end-user. The number of outputs leaving the application boundary may easily be counted on a context or source/sink diagram.

Each output is added to one of three totals, depending on its complexity: a total for simple outputs, a total for average outputs, and a total for complex outputs. The separation allows each type to be multiplied by a weighting factor - a complex output will require more effort to create than will an average or a simple output. Guidelines for determining complexity are found in Table 3.

Function Point Analysis Outputs Weighting Factors

Counting Inputs

Remember the following when counting inputs:

●  External inputs are things received by the software from outside of the system.
●  Inputs are units of business information input by the user to the software for processing or storage.
●  Count each unique unit of input.

As with outputs, inputs are separated into simple, average, and complex for weighting. Guidelines for determining complexity are found in Table 4.

Function Point Analysis Inputs Weighting Factors

Counting Inquiries (Output/Input)

When counting inquiries, keep the following in mind:

●  External inquiries are specific commands or requests that the software performs, generated from the outside. It is online input that causes a software response.

●  Inquiries are direct accesses to a database that retrieve specific data, use simple keys, are real-time (require an immediate response), and perform no update functions.

●  Count each unique unit of inquiry. An inquiry is considered unique in either of two cases:

    - It has a format different from others in either its input or output portions.
    - It has the same format, both input and output, as another inquiry but requires different processing logic in either.

●  Inquiries with different input and output portions will have different complexity weighting factors, as explained later.

●  Queries are not inquiries. Queries are usually classified as either inputs or outputs because they often use many keys and include operations or calculations on data.

Inquiries are separated into simple, average, and complex. Guidelines for determining complexity are found in Table 5.

Function Point Analysis Inquiries Weighting Factors

Counting Data Structures (Files)

Things to keep in mind when counting data structures (files) include:

●  Internal files are logical files within the program.

●  Data structures (previously known as "files") are each primary logical group of user data permanently stored entirely within the software system boundary.

●  Data structures are available to users via inputs, outputs, inquiries, or interfaces.

Data structures are separated into simple, average, and complex. Guidelines for determining complexity are found in Table 6.

Counting Interfaces

When counting interfaces, keep these thoughts in mind:

●  External files are machine-generated files used by the program.

●  Interfaces are data (and control) stored outside the boundary of the software system being evaluated.

●  Data structures shared between systems are counted as both interfaces and data structures.

●  Count each data and control flow in each direction as a unique interface.

Function Point Analysis Files Weighting Factors

Interfaces are separated into simple, average, and complex. Guidelines for determining complexity are found in Table 7.

Function Point Analysis Interfaces Weighting Factors

Step 2. Apply Complexity Weighting Factors

●  Multiply the number of each type (simple, average, complex) within each category (outputs, inputs, inquiries [output/input], data structures [files], interfaces) by the appropriate weighting factor. The weighting factors given in Tables 3 through 7 and shown in Table 2 are time-tested values, but they may certainly be changed if deemed necessary. (A small computation sketch follows this list.)

●  Add the totals for each category. When filled out, Steps 1 and 2 will look like the top section of Table 10.

●  Notice that the total results in a "raw function point" count.
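To make Steps 1 and 2 concrete, here is a minimal sketch that multiplies counts by weights and sums the results. The weights shown are typical published values and should be replaced with the figures in your own Tables 3 through 7; the counts themselves are hypothetical.

    # Typical complexity weights (simple, average, complex) - substitute your Tables 3-7 values
    WEIGHTS = {
        "outputs":    (4, 5, 7),
        "inputs":     (3, 4, 6),
        "inquiries":  (3, 4, 6),
        "files":      (7, 10, 15),
        "interfaces": (5, 7, 10),
    }

    def raw_function_points(counts):
        """counts maps category -> (simple, average, complex) counts from Step 1."""
        return sum(c * w
                   for category, triple in counts.items()
                   for c, w in zip(triple, WEIGHTS[category]))

    # Hypothetical counts from Step 1
    counts = {"outputs": (5, 3, 1), "inputs": (6, 2, 0),
              "inquiries": (2, 1, 0), "files": (1, 2, 0), "interfaces": (0, 1, 0)}
    print(raw_function_points(counts))   # 112 raw function points for these counts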

Step 3. Apply Environmental Factors

Adjust the raw function point total to account for environmental factors that affect the entire software development process. Many aspects of your surroundings might affect the software development process. Some of these aspects affect the project positively, and others tip the scales in the negative direction; all are considered as they uniquely apply to a specific project.

Table 8 includes a detailed definition of each of the 14 environmental factors, or influential adjustment factors, as well as guidelines for choosing the weight of the environmental factor.

Here's how the environmental weighting works. Using Table 8, rate each factor on a scale of 0 to 5 (where 0 means not applicable). To help get a feel for one end of the rating spectrum, Table 9 contains examples of software systems that would rate high - a rating of 4 or 5 on the scale.

Sum the factor ratings (Fn) to calculate a total environmental influence factor (N).

N = sum (Fn)

Use the Function Point Worksheet in Table 2 to record the values.

Refer to Table 10 to see how Step 3 looks when filled in.

Function Points Analysis Environmental Factors Descriptions

Step 4. Calculate Complexity Adjustment Factor (CAF)

As stated earlier in this section, Barry Boehm postulated that the level of uncertainty in estimates is a function of the life cycle phase. Capers Jones supports the theory with empirical data, stating that environmental factors would have a maximum impact of +/- 35% on the raw function point total. It is considered maximum impact because if the FP analysis is conducted at the beginning of the life cycle, there is the largest swing in potential inaccuracy, as illustrated in "Problems and Risks with Estimating Software Size" Figure (a). To account for this uncertainty when the level of knowledge is low, a complexity adjustment factor (CAF) is applied to the environmental factors total.

CAF = 0.65 + (0.01 x N)

    where N is the sum of the weighted environmental factors.

Function Points Analysis Environmental Factors, Examples of Systems with High Scores

Because there are 14 suggested environmental factors, each weighted on a scale of 0-5, the smallest value for N would be 0 (none of the 14 factors is applicable); the largest value for N would be 70 (each of the 14 factors is high - a rating of 5). Plugging in these boundary conditions, minimum CAF = 0.65 + (0.01 x 0) = 0.65, and maximum CAF = 0.65 + (0.01 x 70) = 1.35. Because 1.35 - 0.65 = 0.70, the earliest estimates of size and effort may be off by +/- 35%.

Step 4 is illustrated in Table 10.

Table 8 shows Jones's suggestions for environmental factors, but you may create your own values if you feel that straight function point analysis is too generic for use in your case. Our advice, however, is to keep your metrics simple so that metrics-gathering does not become a significant life cycle phase all on its own. Metrics, like other good software engineering techniques, should speed you up, not slow you down.

Step 5. Compute Adjusted Function Points

adjusted function points (AFP) = raw function points x CAF

Step 5 may be observed in Table 10.
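Steps 3 through 5 reduce to a few arithmetic operations, sketched below. The 14 environmental ratings are hypothetical, and the raw function point total is the one from the earlier counting sketch.

    def complexity_adjustment_factor(environmental_ratings):
        """CAF = 0.65 + 0.01 * N, where N is the sum of the 14 ratings (each 0-5)."""
        return 0.65 + 0.01 * sum(environmental_ratings)

    def adjusted_function_points(raw_fp, environmental_ratings):
        """Step 5: adjusted function points = raw function points x CAF."""
        return raw_fp * complexity_adjustment_factor(environmental_ratings)

    ratings = [3, 2, 0, 4, 1, 0, 2, 3, 1, 0, 2, 1, 0, 2]   # hypothetical Table 8 ratings; N = 21
    print(adjusted_function_points(112, ratings))           # 112 x (0.65 + 0.21) = 96.32 AFP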

Step 6. Convert to LOC (Optional)

Function points give us a way of predicting the size of potential software programs or systems through analysis of its intended functionality from the user's point of view. Programming languages have varying but characteristic levels, where the level is the average number of executable statements required to implement one function point. We may choose to convert function points to LOC for several reasons, including:

●  To measure and compare the productivity or size of programs or systems that are written in multiple languages;

●  To use the standard unit of measure for input into estimating tools (discussed in "Estimating Duration and Cost");

●  To convert the size of a program or application in any language to the equivalent size if the application were written in a different language.

Upon completion of Steps 1-5, sufficient data is available to permit a reasonably accurate conversion from function points to LOC.

A partial function-point-to-language conversion is shown in Table 1 to illustrate the translation of function points to LOC (the first and third columns). Not all of the IFPUG-approved language conversions are listed in the table (it is quite lengthy), and it is continually evolving as new languages are developed.

LOC = adjusted function points (AFP) x number of LOC per adjusted function point

The example of the completed Function Points Analysis Worksheet can again be noted in Table 10.

Advantages of Function Point Analysis

Some of the advantages to the use of function points as a unit of software measurement include:

●  It can be applied early in the software development life cycle - project sizing can occur in the requirements or design phase.

●  It is independent of programming language, technology, and techniques, except for the adjustments at the end.

●  Function points provide a reliable relationship to effort (if you can determine the right functions to measure).

●  Creation of more function points per hour (week or month) is an easily understood, desirable productivity goal (as opposed to the creation of more LOC per hour [week or month], which is less meaningful, perhaps paradoxically).

●  Users can relate more easily to this measure of size. They can more readily understand the impact of a change in functional requirements.

●  The productivity of projects written in multiple languages may be measured.

●  Function points provide a mechanism to track and monitor scope creep. Function points may be counted early and often - function point counts at the end of requirements, analysis, design, and implementation can be compared. If the number of function points is increasing with each count, then the project has become better defined or the project has grown in size (dangerous unless the schedule and/or cost is renegotiated).

●  Function points can be used for graphical user interface (GUI) systems, for client/server systems, and with object-oriented development.

●  Function points may be counted by senior-level users (clients or customers) as well as technicians.

●  Environmental factors are considered.

Function Points Analysis Worksheet Example


As with all sizing and estimating models, adaptations and calibrations are encouraged. What is counted, how weights are applied, and what environmental factors are considered are all modifiable. For example, in the silicon chip industry, where physical units are tested via software, device components could be counted instead of inputs and outputs.

Disadvantages of Function Point Analysis

Disadvantages to the use of function point analysis include the following:

●  It requires subjective evaluations, with much judgment involved.
●  Results depend on technology used to implement it.
●  Many effort and cost models depend on LOC, so function points must be converted.
●  There is more research data on LOC than on function points.
●  It is best performed after the creation of a design specification.
●  It is not well-suited to non-MIS applications (use feature points instead).

Table 1 has another use in that an existing program may be examined for its function point count. For example, if you had an existing 500 SLOC application written in C++, dividing 500 by the SLOC-per-function-point value listed for C++ in the third column of Table 1 would give its approximate function point count. This technique, called "backfiring," can be used to build a rough size measure for a portfolio of applications. The portfolio can become an extremely useful historical database, which can be used for estimating future projects as well as calibrating sizing and estimating models.
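A backfiring sketch follows, assuming you have your own copy of the Table 1 SLOC-per-function-point column; the value of 53 shown for C++ is only a placeholder, not the document's table value.

    SLOC_PER_FP = {"C++": 53}   # placeholder - use the third column of your Table 1

    def backfire(sloc, language):
        """Approximate function points for an existing program from its SLOC count."""
        return sloc / SLOC_PER_FP[language]

    print(backfire(500, "C++"))  # rough function point count for a 500 SLOC C++ program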

How many function points are in a system that is considered to be very large? Some large military applications approach 400,000, the full SAP/R3 is 300,000, Windows 98 is about 100,000, and IBM's MVS is also about 100,000. Software Productivity Research, Inc., generated this data from its database of 8,500 projects from more than 600 organizations.

Feature Points as a Unit of Size

Feature points are an extension of the function point method designed to deal with different kinds of applications, such as embedded and/or real-time systems. In 1986, Software Productivity Research developed feature point analysis for system software. Pure function point counts applied to non-MIS software can result in a misleading metric because the applications are usually heavy in algorithmic complexity but light on external inputs and outputs. A feature point is a new category of function that represents complex algorithms and control (stimulus/response). The complexity of the algorithm is defined in terms of the number of "rules" required to express that algorithm. Feature points are generally used for:

●  Real-time software such as missile defense systems;

●  Systems software (e.g., operating systems, compilers);

●  Embedded software such as radar navigation packages or chips in automobile air bags;

●  Engineering applications such as Computer-Aided Design (CAD), Computer-Integrated Manufacturing (CIM), and mathematical software;

●  Artificial intelligence (AI) software;

●  Communications software (e.g., telephone switching systems);

●  Process control software such as refinery drivers.

Feature points are basically function points that are sensitive to high algorithmic complexity, where an algorithm is a bounded set of rules (executable statements) required to solve a computational problem.

Guidelines for Counting Feature Points

Figure (c) shows the basic steps in counting feature points; each will be described later. The Feature Point Worksheet appears in Table 11.

Basic Steps in Feature Point Analysis

Step 1. Count Feature Points

This is the same as counting function points - count inputs, outputs, files (data structures), inquiries, and interfaces.

The filled-out Feature Point Worksheet in Table 13 serves as an example for each of the seven steps.

Step 2. Continue the Feature Point Count by Counting the Number of Algorithms

An algorithm is a bounded computational problem that is included within a specific computer program.

Significant and countable algorithms deal with a definite, solvable, bounded problem with a single entry and a single exit point.

Developers who use data flow diagrams or structure charts in design often equate an algorithm to a basic process specification or module specification.

Step 3. Weigh Complexity

Use "average" weights instead of simple, average, or complex (note that the average for feature points is different from the average for function points) for inputs, outputs, files (data structures), inquiries, and interfaces. Weigh algorithms with a simple, average, and complex multiplier.

The average complexity factor for "files" is reduced from 10 to 7 to reflect the reduced significance of logical files in computation-intensive software.

The default weighting factor for algorithms is 3. The value can vary over a range of 1 to 10. Algorithms that require basic arithmetic operations and few decision rules are assigned a value of 1. Algorithms requiring complex equations, matrix operations, and difficult logical processing are assigned a value of 10. Algorithms that are significant, and therefore should be counted, have these characteristics:

●  Deals with a solvable, bounded, definite problem;

●  Must be finite and have an end;

●  Is precise and unambiguous;

●  Has an input or starting value;

●  Has output or produces a result;

●  Is implementable - each step is capable of executing on a computer;

●  Is capable of representation via one of the standard programming constructs: sequence, if-then-else, do-case, do-while, and do-until.

Feature Point Analysis Worksheet


Step 4. Evaluate Environmental Factors

Instead of the 14 environmental factors used in function point analysis, feature point analysis uses only two: logic complexity and data complexity. Each is rated on a scale from 1 to 5.

Logic Values

1 - Simple algorithms and calculations
2 - Majority of simple algorithms
3 - Average complexity of algorithms
4 - Some difficult algorithms
5 - Many difficult algorithms

Data Values

1 - Simple data
2 - Numerous variables, but simple relationships
3 - Multiple fields, files, and interactions
4 - Complex file structures
5 - Very complex files and data relationships

Sum the logic and data complexity factor values, yielding a number between 2 and 10.
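The feature point arithmetic up to this point can be sketched in a few lines. The weights of 7 for files and 3 for algorithms come from the text above; the remaining "average" weights and all of the counts are hypothetical placeholders for the values in your own Table 11, and the final complexity adjustment in the next step still comes from Table 12.

    # Weights of 7 for files and 3 for algorithms come from the text above;
    # the other "average" weights are hypothetical placeholders - check Table 11.
    WEIGHTS = {"inputs": 4, "outputs": 5, "inquiries": 4,
               "files": 7, "interfaces": 7, "algorithms": 3}

    def raw_feature_points(counts):
        """Sum of category counts times their average weights."""
        return sum(counts[k] * WEIGHTS[k] for k in counts)

    counts = {"inputs": 10, "outputs": 8, "inquiries": 3,
              "files": 4, "interfaces": 2, "algorithms": 12}   # hypothetical counts
    raw = raw_feature_points(counts)

    logic_complexity, data_complexity = 4, 3     # Step 4 ratings, each 1-5
    n = logic_complexity + data_complexity       # between 2 and 10
    print(raw, n)   # the adjustment factor for the next step is looked up in Table 12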

Step 5. Calculate the Complexity Adjustment Factor

Use Table 12 to calculate the complexity adjustment factor.

Step 6. Multiply the Raw Feature Point Count by the Complexity Adjustment Factor

Feature Point Complexity Adjustment Factor

Step 7. Convert to Lines of Code Using the Function Point Translation Table (Optional)

Feature Point Analysis Worksheet Example


Advantages of Feature Point Analysis

Advantages of feature point analysis are essentially the same as those for function point analysis, with the additional advantage of being an excellent approach to use in the size estimation of algorithmically intensive systems.

Disadvantages of Feature Point Analysis

The primary disadvantage of feature point analysis is the subjective classification of algorithmic complexity.

Object Points

Counting "object points" to determine software size is an approach developed for object-oriented technology. Conducted at a more macro level than function points, it assigns one object point to each unique class or object, such as a screen, output report, and so on. The rest of the process is similar to that of function and feature points, but the conversion factors are different.

Model Blitz

Estimating gets better with each passing phase because more knowledge about project needs is gained in each phase. A great deal of knowledge is revealed in the analysis and design phase, in which models are produced ("Analysis and Design Methods") that allow for increasingly accurate size estimates. Before that phase is reached, the planning phase usually yields some useful but very high-level analysis models. They may be used as another simple but quick method for estimating size.

The concept of blitz modeling is based on Tom DeMarco's bang metric. Counting the component pieces of the system (design elements) and multiplying the count by a productivity factor (on average, how many lines of procedural code each element takes to implement, based on historical precedent) results in a rough estimate. For example, if high-level data flow diagrams or object models are produced as part of concept exploration or planning, their components may be used to gauge size. Imagine that there are 20 object classes and it is known from existing systems that classes are implemented, on average, as five procedural programs per class. Also imagine that it is known from existing systems that the average procedural program (in the C language) is 75 LOC. Then the size can quickly be calculated as:

Number of processes (object classes) x Number of programs per class x Average Program Size = Estimated Size

20 object classes x 5 programs per class x 75 LOC per program = 7,500 LOC estimated

This is known as a "blitz" of early feasibility documents. Components of any model (process bubbles, data flows, data repositories, entities, relationships, objects, attributes, services, etc.) may be multiplied by a factor that has been developed as a result of previous projects. Other examples are as follows: If it is known that each process bubble on a Level 0 DFD roughly corresponds to four actual SQL language programs, and it is also known that the average size for programs in the SQL library is 350 LOC, then a simple multiplication will suffice for the initial size count. Say that there are seven major process bubbles:

Number of processes (DFD bubbles) x Number of programs per bubble x Average Program Size = Estimated Size

7 bubbles x 4 programs per bubble x 350 LOC per program = 9,800 LOC estimated

If a high-level global object model is produced during the feasibility scoping phase, and it is known from historical evidence that each service corresponds to two C++ programs and that company standards encourage confining each service packet to 100 LOC or less, then multiplying as follows will provide a good start in the estimation of the size of the system in LOC:

Number of services x 2 x 100
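Because the blitz arithmetic is the same in every case - a component count times two historically derived factors - it can be wrapped in one small helper. The first two calls replay the examples from the text; the count of 25 services in the third call is hypothetical.

    def blitz_size(component_count, programs_per_component, avg_loc_per_program):
        """Blitz estimate: model components x programs per component x average program size."""
        return component_count * programs_per_component * avg_loc_per_program

    print(blitz_size(20, 5, 75))    # object classes example  -> 7,500 LOC
    print(blitz_size(7, 4, 350))    # Level 0 DFD example     -> 9,800 LOC
    print(blitz_size(25, 2, 100))   # services example (25 services is hypothetical)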

The key phrase here is "known from historical evidence". A historical database is essential to improving estimating accuracy. The database should contain a record of the actual delivered size for each software component. The amount of effort expended to create a component of that size must also be tracked. As the number of data points grows, so does the accuracy of the average number of LOC per program and of the average amount of effort required to build a component. When actual component sizes and their corresponding amounts of development effort are known, then average "productivity" is also known (size divided by effort).

DeMarco suggests, with the bang metric, that function-strong systems (e.g., real-time) be computed separately from data-strong systems. Function-strong systems rely on a count of indivisible functional primitives as defined by a data flow diagram. Data-strong systems rely on a count of objects in the system-global data model. Each will have a weighting factor (WF) applied.

An example with function-strong systems is this: WF (average number of modules needed to complete this function) is three, the number of processes plus control specifications (functions) is eight, and the average size per function is 78 LOC. Then:

WF x (Number of process and control specifications) x average LOC for this type of module = LOC

3 modules needed for function x 8 functions x 78 LOC = 1,872 LOC

How does this differ from the feature point analysis presented during the feasibility scoping phase? Not by much. A project manager may choose to perform feature point analysis during the feasibility scoping phase, when only high-level models such as context-level DFDs exist, and then refine that estimation during the planning phase, when there is more project knowledge and more documentation, such as a Level 0 DFD, along with a Level 1 DFD for a few of the major subsystems. Any of these models may be used during any phase. If they are applied consistently, the expectation is that sizing and estimating accuracy will increase.

Advantages of Model Blitz

Some of the advantages of using the Model Blitz method include:

●  It is easy to use with structured methods (data flow diagrams, entity relationship diagrams, etc.) and with object-oriented classes, services, and so on.

●  Accuracy increases with use of historical data.

●  Continuous improvement activities are used for estimation techniques - the estimated size can be easily compared with the actual size during post-project analysis. (How accurate was the estimate? Why was it off by a certain percent? What can be learned for the next project's size estimation?)

Disadvantages of Model Blitz

Disadvantages of using Model Blitz include:

●  It requires use of design methodology.

●  Estimation cannot begin until design is complete.

●  It requires historical data.

●  It does not evaluate environmental factors.

Wideband Delphi

Another popular and simple technique for estimating size and for estimating effort is the Wideband Delphi group consensus approach. The Delphi technique originated at the Rand Corporation decades ago; the name was derived from the Oracle of Delphi in Greek mythology. It was used successfully at Rand to predict the future of major world technologies.

This is a disciplined method of using the experience of several people to reach an estimate that incorporates all of their knowledge.

In software engineering circles, the original Delphi approach has been modified. The "pure" approach is to collect expert opinion in isolation, feed back anonymous summary results, and iterate until consensus is reached (without group discussion).

Guidelines for Conducting Wideband Delphi Group Consensus

Because the Delphi approach can take a very long time, the concept of Wideband Delphi was introduced to speed up the process. This improved approach uses group discussion.

Steps in Conducting Wideband Delphi

There are six major steps in conducting Wideband Delphi:

1.  Present experts with a problem and a response form.
2.  Conduct a group discussion.
3.  Collect expert opinion anonymously.
4.  Feed back a summary of results to each expert.
5.  Conduct another group discussion.
6.  Iterate as necessary until consensus is reached.

Group discussions are the primary difference between pure Delphi and Wideband Delphi. The summary of results in Step 4 is presented in Figure (d).

Delphi Software Size Estimation Results Summary Form
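The results summary form can be approximated with a few lines of code that collect each expert's anonymous minimum/expected/maximum estimates and report the spread to feed back before the next discussion round. All of the figures are hypothetical.

    # Each tuple is one expert's anonymous (minimum, expected, maximum) size estimate in KLOC
    round_estimates = [(30, 45, 70), (25, 40, 60), (35, 55, 90), (28, 42, 65)]

    def summarize(estimates):
        """Summary fed back anonymously to the experts after each round."""
        expected = [e for (_, e, _) in estimates]
        return {
            "lowest minimum": min(lo for (lo, _, _) in estimates),
            "highest maximum": max(hi for (_, _, hi) in estimates),
            "mean expected": sum(expected) / len(expected),
            "spread of expected": max(expected) - min(expected),   # shrinks as consensus nears
        }

    print(summarize(round_estimates))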

Here's another way to look at the Wideband Delphi process:

●  Get a few experts (typically three to five). Include experience in all of the "risk" areas - application domain, programming language, algorithms, target hardware, operating systems, and so on.

●  Meet with them to discuss issues and describe the software to them. Bring specifications, other source documents, WBS, and so on. Let them add their own information and questions. Have everyone take notes.

●  Ask each expert to develop an estimate, including a minimum, expected, and maximum rating. Allow the experts to remain independent and anonymous.

●  Record anonymous estimates on a graph.

●  Meet and have each expert discuss his estimate, assumptions, and rationale.

●  Seek consensus on assumptions. This may result in action items to gather factual data.

●  If possible, reach a consensus estimate.

●  If no consensus can be reached, break until you can gather additional data; then repeat.

●  Stop repeating when you reach consensus or two consecutive cycles do not change much and there is no significant additional data available (agree to disagree). At the end, there is a consensus estimate on an expected value. There should also be a minimum and a maximum so that the degree of confidence in the estimate can be understood.

Advantages of Wideband Delphi

The advantages of Wideband Delphi include the following:

●  Implementation is easy and inexpensive.

●  It takes advantage of the expertise of several people.

●  All participants become better educated about the software.

●  It does not require historical data, although it is useful if available.

●  It is used for high-level and detailed estimation.

●  Results are more accurate and less "dangerous" than LOC estimating.

●  It aids in providing a global view of the project to team members.

Disadvantages of Wideband Delphi

The disadvantages of Wideband Delphi include the following:

●  It is difficult to repeat with a different group of experts.

●  You can reach consensus on an incorrect estimate. Because you all "buy in", you may not be skeptical enough when actual data shows it is wrong.

●  You can develop a false sense of confidence.

●  You may fail to reach a consensus.

●  Experts may all be biased in the same subjective direction.

