Abstract: Providing a timely estimation of the likely software development effort has been the focus of intensive research investigations in the field of software engineering, especially software project management. As a result, various cost estimation techniques have been proposed and validated. Due to the nature of the software engineering domain, software project attributes are often measured in terms of linguistic values, such as very low, low, high and very high. The imprecise nature of such attributes constitutes uncertainty and vagueness in their subsequent interpretation. We feel that software cost estimation models should be able to deal with the imprecision and uncertainty associated with such values. However, there are no cost estimation models that can directly tolerate such imprecision and uncertainty when describing software projects, without taking the classical interval and numeric-value approaches. This chapter presents a new technique based on fuzzy logic, linguistic quantifiers, and analogy-based reasoning to estimate the cost or effort of software projects when they are described by either numerical data or linguistic values. We refer to this approach as Fuzzy Analogy. In addition to presenting the proposed technique, this chapter also illustrates an empirical validation based on the historical COCOMO'81 software projects data set.
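The linguistic-value idea in the abstract above can be sketched with simple fuzzy sets. The triangular membership functions and the breakpoints below are illustrative assumptions, not the actual Fuzzy Analogy definitions.

```python
def triangular(x, a, b, c):
    """Triangular membership function: 0 outside [a, c], 1 at the peak b."""
    if x < a or x > c:
        return 0.0
    if x == b:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Hypothetical fuzzy sets for one cost driver rated on a 0..5 scale.
LINGUISTIC_SETS = {
    "very low":  (0.0, 0.0, 1.5),
    "low":       (0.5, 1.5, 2.5),
    "high":      (2.5, 3.5, 4.5),
    "very high": (3.5, 5.0, 5.0),
}

def degrees(x):
    """Membership degree of a numeric rating in each linguistic term."""
    return {term: triangular(x, *abc) for term, abc in LINGUISTIC_SETS.items()}
```

With these sets, a numeric rating of 2.0 belongs to "low" with degree 0.5 and to the other terms with degree 0, which lets a project described numerically and one described linguistically be compared on the same footing.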
Abstract: In a globally competitive environment, adequate software is a crucial factor in the development of complex, high-technology consumer products. As a consequence, software developers are faced with an increasing demand for better quality and safety. At the same time, they are subject to growing pressure for cost reduction. In order to cope with this challenge, reliable measurement is a key issue. This volume presents articles on the problems, benefits, and new directions of software measurement. This book is essential reading for researchers and students in the fields of information systems, software metrics, and quality assessment, as well as for professionals responsible for software development and quality assurance in companies.
Abstract: This volume presents the findings of the 6th International Workshop on Software Metrics. Continuing the Workshop's tradition, the focus is on the combination of theoretical and practical contributions. The wide range of topics includes articles on the evaluation of the maintenance process, the measurement of object-oriented software development, metrics for class libraries, and the evaluation of Java applications.
Abstract: The accurate measurement of the functional size of applications that are automatically
generated in MDA environments is a challenge for the software development industry. This
paper introduces the OO-Method COSMIC Function Points (OOmCFP) procedure, which has
been systematically designed to measure the functional size of object-oriented applications
generated from their conceptual models by means of model transformations. The
OOmCFP procedure is structured in three phases: a strategy phase, a mapping phase,
and a measurement phase. Finally, a case study is presented to illustrate the use of
OOmCFP, as well as an analysis of the results obtained.
Abstract: Agile estimation approaches usually start by sizing the user stories to be developed by comparing them to one another. Various techniques, with varying degrees of formality, are used to perform the comparisons: plain contrasts, triangulation, planning poker, and voting. This article proposes the use of a modified paired comparison method in which a reduced number of comparisons is selected according to an incomplete cyclic design. Using two sets of data, the authors show that the proposed method produces good estimates, even when the number of comparisons is reduced to half of those required by the original formulation of the method.
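The reduced-comparison idea above can be sketched by generating an incomplete cyclic design: each story is compared only with a few of its cyclic neighbours instead of with every other story. The offsets chosen below are an illustrative assumption, not the design used by the authors.

```python
from itertools import combinations

def cyclic_pairs(n, offsets):
    """Pairs (i, (i + d) % n): each of the n stories is compared only
    with its d-th cyclic neighbours, for each offset d in `offsets`."""
    pairs = set()
    for i in range(n):
        for d in offsets:
            pairs.add(tuple(sorted((i, (i + d) % n))))
    return sorted(pairs)

full = list(combinations(range(6), 2))      # complete paired comparison: 15 pairs
reduced = cyclic_pairs(6, offsets=(1, 2))   # incomplete cyclic design: 12 pairs
```

For six stories the saving is modest, but it grows quickly: with 20 stories and the same two offsets, only 40 of the 190 possible comparisons are scheduled.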
Abstract: At the core of any engineering discipline is the use of measures, based on ISO standards or on widely recognized conventions, for the development and analysis of the artifacts produced by engineers. In the software domain, many alternatives have been proposed to measure the same attributes, but there is no consensus on a framework for how to analyze or choose among these measures. Furthermore, there is often not even a consensus on the characteristics of the attributes to be measured. In this paper, a framework is proposed for a software measurement life cycle with a particular focus on the design phase of a software measure. The framework includes definitions of the verification criteria that can be used to understand the stages of software measurement design. This framework also integrates the different perspectives of existing measurement approaches. In addition to inputs from the software measurement literature, the framework integrates the concepts and vocabulary of metrology. This metrological approach provides a clear definition of the concepts, as well as the activities and products, related to measurement. The aim is to give an integrated view, involving the practical side and the theoretical side, as well as the basic underlying concepts of measurement.
Abstract: Over the past few years, a number of Domain Specific Modeling Languages (DSMLs) have been developed, and their use has increased in approaches such as Model Driven Engineering (MDE), software factories and even MDA (Model Driven Architecture). However, developing a DSML is still a challenging and time-consuming task. Issues to tackle include the DSML development process, DSML quality and DSML model verification and validation (V&V). Therefore, techniques and solutions are needed to make DSML development easier and more accessible to software developers and domain experts. This paper recommends a list of success factors to consider when developing or choosing a DSML for those developing it, and for software developers and domain experts interested in using it. The paper then maps these success factors to a set of assessment criteria that can be used to assess DSML quality.
Abstract: The International Software Benchmarking Standards Group (ISBSG) provides the Software Engineering community with a repository of project data which, up to now, have been used mostly for benchmarking and for estimating project effort. The 2005 version of the ISBSG repository includes data on more than 3,000 projects from various countries, sized with different functional size measurement methods and including a number of quality-related variables. ISO/IEC 9126 is a series of ISO documents for the evaluation of the quality of software products: it proposes three quality models (internal quality, external quality and quality in use) together with the ISO taxonomy of quality characteristics and subcharacteristics from different viewpoints throughout the whole Software Life Cycle (SLC); it also includes an inventory of over two hundred measures of the quality subcharacteristics. This paper investigates the extent to which the current ISBSG repository can be of use for benchmarking software product quality characteristics on the basis of ISO 9126. It also identifies the subset of quality-related data fields made available by the ISBSG to industry and researchers, and illustrates its use for quality analysis.
Abstract: Material measurement standard etalons are widely recognized as critical for accurate measurement in the sciences and engineering. However, there are no measurement standard etalons in software engineering yet. The absence of such a concept in software measurement can have a negative impact on software engineers and managers when they use measurement results in decision-making. Software measurement standard etalons would help verify measurement results, and they should be included in the design of every software measure proposed. Since the process for establishing standard etalons for software measures has not yet been investigated, this paper addresses this issue and proposes a seven-step design process using ISO 19761: COSMIC-FFP.
Abstract: While metrology has a long tradition of use in physics and chemistry, it is rarely referred to in software engineering measurement and, in particular, in the design and documentation of software measures. Using the ISO 9126-4 Technical Report on the measurement of software quality in use as a case study, this paper reports on the extent to which this ISO series addresses the metrology criteria typical of classic measurement. Areas for improvement in the design and documentation of the measures proposed in ISO 9126-4 are identified based on the ISO International Vocabulary of Basic and General Terms in Metrology (VIM) and ISO 15939.
Abstract: Walter G. Vincenti, in his book "What Engineers Know and How They Know It", has proposed a taxonomy of engineering knowledge. Software Engineering, as a discipline, is certainly not yet as mature as other engineering disciplines, and some authors have even challenged the notion that Software Engineering is indeed engineering. To investigate this issue, Vincenti's categories of engineering knowledge are used to analyze the SWEBOK (Software Engineering Body of Knowledge) Guide from an engineering perspective. This paper presents an overview of Vincenti's categories of engineering knowledge, followed by an analysis of the engineering design concept in Vincenti vs. the design concept in the SWEBOK Guide: this highlights in particular the fact that Vincenti's engineering design concept is not limited to the design-phase knowledge area in the SWEBOK Guide, but that it pervades many of the SWEBOK knowledge areas. Finally, the SWEBOK Software Quality knowledge area is selected as a case study and analyzed using Vincenti's classification of engineering knowledge.
Abstract: The International Software Benchmarking Standards Group (ISBSG) provides the Software Engineering community with a repository of project data which, up to now, have been used mostly for benchmarking and for estimating project effort. The 2005 version of the ISBSG repository includes data on more than 3,000 projects from various countries, sized with different functional size measurement methods and including a number of quality-related variables. ISO/IEC 9126 (International Organization for Standardization/International Electrotechnical Commission) is a series of ISO documents for the evaluation of the quality of software products: it proposes three quality models (internal quality, external quality and quality in use), together with the ISO taxonomy of quality characteristics and subcharacteristics. ISO 9126 also includes an inventory of over two hundred measures of the quality subcharacteristics. The goal of this paper is to identify whether or not the current ISBSG repository can be of use for benchmarking software product quality on the basis of ISO 9126.
Abstract: The functional size measurement method, COSMIC-FFP, adopted in 2003 as the ISO/IEC 19761 standard, measures software functionality in terms of the data movements across and within the software boundary. It focuses on the functional user requirements of the software and is applicable throughout the development life cycle, from the requirements phase up to and including the implementation and maintenance phases. This article extends the use of COSMIC-FFP for testing purposes by combining the functions measured by the COSMIC-FFP measurement procedure with a black box testing strategy. Such a testing strategy leverages a COSMIC-FFP advantage, that is, its applicability during the early development phase once the specifications have been documented. This article also investigates the applicability of a functional complexity measure, based on entropy measurement, for assigning priorities to test cases and, ultimately, applying those concepts in a case study.
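The COSMIC-FFP counting rule referred to above assigns one unit of functional size to each data movement (Entry, Exit, Read, Write) of a functional process. The sketch below illustrates only that counting rule; the process names and movement lists are invented for illustration.

```python
MOVEMENT_TYPES = {"Entry", "Exit", "Read", "Write"}

def functional_size(processes):
    """Sum the data movements of all functional processes (1 unit each)."""
    size = 0
    for name, movements in processes.items():
        unknown = set(movements) - MOVEMENT_TYPES
        if unknown:
            raise ValueError(f"{name}: unknown data movements {unknown}")
        size += len(movements)
    return size

# Invented example: two functional processes of a small control system.
sample = {
    "report water level": ["Entry", "Read", "Exit"],
    "raise alarm":        ["Entry", "Write", "Exit", "Exit"],
}
# functional_size(sample) == 7
```

Because such a list of movements can be drawn up as soon as the functional user requirements are documented, the count is available early enough to drive the black-box test planning described in the abstract.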
Abstract: This paper discusses the issue of outliers in the repository of software projects of the International Software Benchmarking Standards Group (ISBSG). The criterion used for the identification of outliers is whether productivity is significantly lower or higher, that is, with significant economies or diseconomies of scale, in relatively homogeneous samples. Once the outliers are identified, other project variables are investigated by heuristics to identify candidate explanatory variables that might explain such outlier behaviors.
Abstract: Within the context of the current ISO project to upgrade the set of technical reports on the measurement of the quality of software products (ISO 9126), the ISO working group concerned has come up with proposals for various documents (standards or technical reports) in the new ISO 25000 series to improve the interpretation and use of the quality measures. This paper investigates some of the harmonization issues arising with the addition of new documents like ISO 25021, in particular with respect to previously published measurement standards for software engineering, such as ISO 9126, ISO 15939, ISO 14143-1 and ISO 19761.
Abstract: Software now accounts for an increasing share of the content of modern equipment and tools, and must similarly be maintained to ensure its continued operational efficiency. Although the maintenance of equipment is discussed extensively, very little is published about software maintenance and how it affects us. This paper presents an overview of key topics in software maintenance.
Abstract: Too often, software-intensive organizations can only track the initial assignment of a software asset to a resource, but not necessarily thereafter. In such organizations, Software Asset Management (SAM) is often a reactive process. The lack of defined software asset management processes limits the ability of many organizations to manage the whereabouts of software once it is assigned to a resource. This puts the organization in a passive role, so it is important to add planning and control processes, including for the retirement of software. To improve the management of assets, the IT industry can learn from other disciplines, in particular from public works engineering. Through active asset management, an organization will be better positioned to make choices that optimize and tune its software asset portfolio while complying with corporate policies.
Abstract: The usability of a software product has recently become a key software quality factor. The International Organization for Standardization (ISO) has developed a variety of models to specify and measure software usability, but these individual models do not support all usability aspects. Furthermore, they are not yet well integrated into current software engineering practices and lack tool support. The aim of this research is to survey the actual representation (meanings and interpretations) of usability in ISO standards, indicate some of the existing limitations and address them by proposing an enhanced, normative model for the evaluation of software usability.
Abstract: A Balanced Scorecard (BSC) presents the quantitative goals selected from multiple perspectives for implementing the organizational strategy and vision. However, in most current BSC frameworks, including those developed for the Information and Communication Technology field, each perspective is handled separately. None of these perspectives is integrated automatically into a consolidated view, and so these frameworks do not tackle, either in relative or in absolute terms, the contribution of each goal to the whole BSC. Here, this issue is highlighted, candidate consolidation techniques are reviewed and the preferred technique, the QEST model, is selected; more specifically, three options are presented for incorporating the QEST model into a BSC framework.
Abstract: A set of fundamental principles can act as an enabler in the establishment of a discipline; however, software engineering still lacks a set of universally recognized fundamental principles. This article presents a progress report on an attempt to identify and develop a consensus on a set of candidate fundamental principles. A fundamental principle is less specific and more enduring than methodologies and techniques. It should be phrased to withstand the test of time. It should not contradict a more general engineering principle and should have some correspondence with "best practice". It should be precise enough to be capable of support and contradiction and should not conceal a tradeoff. It should also relate to one or more computer science or engineering concepts. The proposed candidate set consists of fundamental principles which were identified through two workshops, two Delphi studies and a web-based survey.
Abstract: Process and product measurement is one of the key topics in the Software Engineering field. There already exists a significant number of one-dimensional models of performance, which integrate all individual measurements into a single performance index. However, these types of models are too over-simplified to adequately reflect the multi-dimensional nature of performance. Similarly, one-dimensional models do not meet the analytical requirements of management when various "viewpoints" must be taken into account simultaneously. This paper proposes a multi-dimensional measurement model capable of handling, concurrently, distinct but related areas of interest, each representing a dimension of performance. The proposed model is based on an open model called QEST (Quality factor + Economic, Social & Technical dimensions), which had been developed to handle, simultaneously and concurrently, a three-dimensional perspective of performance: the economic dimension (the perspective of managers), the social dimension (the perspective of users), and the technical dimension (the perspective of developers). A more generic form of this model has been developed to handle a greater number of perspectives, as required by, for instance, several Performance Management frameworks such as the Balanced Scorecard, the Intangible Asset Monitor and the Skandia Navigator. This paper presents the generic form derived from the QEST model, referred to as QEST nD, with the ability to handle n possible dimensions. The generic model is also verified for the particular case of three dimensions using sample data previously applied to the original QEST software performance model.
Abstract: Even though a significant number of estimation models have been proposed for development projects, few have been proposed for software maintenance. This paper reports on two field studies carried out on the use of functional size measures in building estimation models for sets of maintenance projects implementing small functional enhancements in existing software. The first field study reports on models built with 15 projects making functional enhancements to an internet-based software program for linguistic applications. The second field study analyses 19 maintenance projects on a single real-time embedded software program in the defense industry. Both field studies collected functional size measures using version 2.0 of the COSMIC-FFP functional size measurement method. Both field studies also classified projects into two classes of project difficulty in order to aid in identifying subsets of projects with greater homogeneity in the relationship of project effort to functional size. This is the first published paper reporting on the use of this second generation of functional size measurement methods in a maintenance-estimation context.
Abstract: The IEEE Computer Society and the Association for Computing Machinery are working on a joint project to develop a guide to the Software Engineering Body of Knowledge (SWEBOK). Articulating a body of knowledge is an essential step toward developing a profession because it represents a broad consensus regarding the contents of the discipline. Without such a consensus, there is no way to validate a licensing examination, set a curriculum to prepare individuals for the examination, or formulate criteria for accrediting the curriculum. The SWEBOK project (http://www.swebok.org) is now nearing the end of the second of its three phases. Here we summarize the results to date and provide an overview of the project and its status.
Abstract: This work presents the geometrical and statistical foundations of a three-dimensional software project performance model called QEST (Quality factor + Economic, Social and Technical dimensions). In this model, the three dimensions taken into consideration are combined through the use of a regular tetrahedron as the geometrical representation of a pyramid, the sides of which represent the normalised values of each of the project dimensions. This paper presents the three geometrical concepts used for assessing project performance progress, namely distance, area and volume, and describes how the corresponding geometrical formulae are derived. The relative merit of each is also discussed, and an analysis is included of the multiple combinations of values along the three axes which can be used to assess the respective adequacy of each in order to convey maximum information in the greatest number of instances along all axes.
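The distance/area/volume idea can be illustrated with a simplified sketch on orthogonal axes; this is an assumption for illustration only, not the published QEST formulae, which are built on a regular tetrahedron.

```python
import math

def performance_indicators(e, s, t):
    """e, s, t: normalised economic, social and technical values in [0, 1].

    Returns (distance, area, volume) on orthogonal axes; each indicator
    reaches its maximum when all three dimensions are at 1.
    """
    # Distance of (e, s, t) from the origin, normalised so (1, 1, 1) -> 1.0.
    distance = math.sqrt(e * e + s * s + t * t) / math.sqrt(3.0)
    # Area of the triangle with vertices (e,0,0), (0,s,0), (0,0,t):
    # half the norm of the cross product of two of its edges.
    area = 0.5 * math.sqrt((s * t) ** 2 + (e * t) ** 2 + (e * s) ** 2)
    # Volume of the tetrahedron cut off by that triangle.
    volume = e * s * t / 6.0
    return distance, area, volume
```

The three indicators trade off differently: distance degrades gently when one dimension lags, while volume collapses to zero if any single dimension is zero, which is one way to compare their adequacy for conveying performance information.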
Abstract: A requirement for software productivity analysis and estimation is the ability to measure the size of a software product from the user's viewpoint, that is, from a functional perspective rather than from a technical perspective. One example of such a measurement technique is Function Points (FP). FP are now widely used in the MIS domain, where they have become the industry's de facto standard. However, FP have not had the same acceptance in other domains, such as real-time software. This article reports on work carried out to adapt FP to the specific functional characteristics of real-time software. The extension proposed, called Full Function Points (FFP), is described and the results of field tests are discussed.
Abstract: Function Point Analysis (FPA) was initially designed on the basis of expert judgements, without explicit reference to any theoretical foundation. From the point of view of the measurement scales used in its measurement process, FPA constitutes a pot-pourri of scales that are not admissible without the transformations embedded in the implicit models of expert judgements. The results of this empirical study demonstrate that, in a homogeneous environment not burdened with major differences in productivity factors, there is a clear relationship between FPA's primary components and work effort. This empirical study also indicates that there is such a relationship for each step of the FPA measurement process prior to the mixing of scales and the assignment of weights. Comparisons with FPA productivity models based on weights confirm, on the one hand, that the weights do not add information and, on the other, that the weights are fairly robust and can be used when little historical data is available. The full data set is provided for future studies.
Abstract: Standards are designed to promote the efficient use of technology; they can be seen as structured and prepackaged, agreed-upon best practices for specific technologies. Teaching can be viewed as a technology transfer process, and the use of standards can facilitate this process. This paper discusses the uses of both ISO standards and work-in-progress documents in designing and teaching graduate courses in software engineering; it also discusses the approach selected to illustrate to graduate students how an accepted body of knowledge is developed and agreed upon by a group of domain experts. The teaching method involves class simulations of the review process of ISO work sessions and international voting. Lessons learned from both learning and teaching perspectives are also presented.
Abstract: The paper is concerned with the identification and measurement of reuse within projects in which functional enhancements have been added to existing software applications. The proposed approach is based on the measurement of reuse from a functional perspective rather than from a technical perspective. Two key concepts are introduced: a reuse indicator and a predictor ratio. The reuse indicator is derived from an analysis of the function types as currently defined in function point analysis. The predictor ratio is derived from an understanding of the avoided-cost concept and of how it can be captured using historical databases of function points from previous development projects. The paper indicates how, in functional enhancement projects, the predictor ratio can be combined with the reuse indicator to derive an alternative size measure which takes into account functions reused and not redeveloped. The paper also demonstrates how these ratios can then be integrated into a maintenance productivity model to analyse the benefits of reuse by taking into account the avoided cost of functions reused. A case study based on an industrial data set is provided to illustrate the measurement of functional reuse in an enhancement project and its impact on maintenance productivity analysis.
Abstract: Function point metrics were initially designed through expert judgements. The underlying measurement model has not been clearly stated, and this has generated some confusion as to the true nature of these metrics and their usefulness in fields other than their initial management information system domain. When viewed without reference to the implicit models hidden in the expert judgements, function points constitute a pot-pourri of measurement scales. This suggests that each step of the measurement process could represent a distinct measure, making it possible to transcend the mix of measurement scales and to maintain or improve the desired relationship with development effort.
Abstract: While various figures have been published on the workload distribution of maintenance activities, this information is at best indicative of management perceptions, most of it originating from surveys, and almost none based on actual data. This article presents empirical data from a two-year measurement effort in the maintenance environment of a Canadian financial institution. Based on the supply/demand paradigm, maintenance data have been collected and analysed to investigate the basis of productivity analyses through such concepts as the product group, the product mix and the product mix changes on the demand side, as well as resource allocation by product classification and quarterly and yearly distribution changes. This paper includes a discussion on the measurement program implemented, and illustrates how insights into the maintenance process are gained through various measurements. The paper also presents hard data on the demand side and on the supply side of the maintenance process, as well as an analysis of the data collected.
Abstract: During the past 10 years, the amount of effort put into setting up benchmarking repositories has increased considerably at the organizational, national and even international levels, to help software managers determine the performance of software activities and make better software estimates. This has enabled a number of studies with an emphasis on the relationship between software product size, effort and cost drivers, in order either to measure the average performance for similar software projects or to develop estimation models and then refine them using the collected data. However, despite these efforts, none of these methods is yet deemed to be universally applicable, and there is still no agreement on which cost drivers are significant in the estimation process. This study discusses some of the possible reasons why, in software engineering, practitioners and researchers have not yet been able to come up with reasonable and well-quantified relationships between effort and cost drivers, although considerable amounts of data on software projects have been collected. An improved classification of application types in benchmarking repositories is also proposed.
Abstract: The main objective of this paper is to explore, through a case study, the issue of the measurement adequacy of the COSMIC and IFPUG FPA measurement methods to capture the functional size of real-time software. The key issue for practitioners is that the measurement result adequately represents functional size. More specifically, this measure, which is a number, should take into consideration the particularities of specific real-time software and be sensitive to small variations in functionality. These two functional size measurement methods were applied separately to measure the same real-time software, and their results compared and analyzed.
Abstract: Evaluation and continuous improvement of software maintenance are key contributors to improved software quality. The software maintenance function suffers from a scarcity of management models that would facilitate its evaluation, management and continuous improvement. This paper presents an overview of the measurement practices being introduced at level 3 and higher of the software maintenance maturity model (S3m).
Abstract: Software reuse is essential to improving efficiency and productivity in the software development process. This paper analyses reuse within the requirements engineering phase by taking and adapting a standard functional size measurement method, COSMIC-FFP. Our proposal attempts to quantify reusability from object-oriented requirements specifications by identifying potential primitives with a high level of reusability and applying a reuse indicator. These requirements are specified using OO-Method, an automatic software production method based on model transformations. We illustrate the application of our proposal in a real Car Rental system.
Abstract: In recent years, a number of well-known groups have developed sets of best practices on software measurement, but from different perspectives. These best practices have been published in various documents, such as ISO 15939, the CMMI model and the ISBSG data repository. However, these documents were developed independently and for a software engineering organization initiating a measurement program. As a result, it is a challenge to work out a strategy to leverage the benefits of each, while at the same time offsetting gaps. First, although ISO 15939 (Software Measurement Process) is an international standard which defines the activities and tasks that are necessary to implement a software measurement process, because its activities and tasks are defined at a very high level, additional support is necessary for ease of implementation. Second, while CMMI (Capability Maturity Model Integration) is a model which contains the essential elements of an effective software engineering process, it is now strongly measurement-oriented, which means that it provides guidance on which elements need measurement, but does not provide specific guidelines for defining specific measures and does not support an international repository of project measurement results. Third, the International Software Benchmarking Standards Group (ISBSG) provides a repository of project data which may be used for benchmarking and development of estimation models. This paper proposes an approach to integrating resources such as ISO 15939, CMMI and the ISBSG data repository in support of a software engineering measurement program.
Abstract: COSMIC-FFP (ISO 19761) is a functional size measurement method developed by the Common Software Measurement International Consortium (COSMIC). The COSMIC-FFP measurement model and related definitions are generic, and this paper investigates the feasibility of their application in specification languages. More specifically, it proposes a formalization of the COSMIC-FFP definition for the Autonomic Systems Timed Reactive Object Model (AS-TRM). This would allow the integration of functional complexity and functional size monitoring during autonomic system specification construction and/or evolution. The Steam Boiler case study is introduced to demonstrate the applicability of functional size measurement in terms of AS-TRM modeling.
Abstract: A number of Web design problems continue to arise, such as: (1) decoupling the various aspects of Web applications (for example, business logic, the user interface, navigation and information architecture) and (2) isolating platform specifics from the concerns common to all Web applications. In the context of a proposal for a pattern-oriented architecture for Web applications, this paper identifies an extensive list of patterns aimed at providing a pool of proven solutions to these problems. The patterns span several levels of abstraction, from information architecture and interoperability patterns to navigation, interaction, visualization and presentation patterns. The proposed architecture will show how several individual patterns can be combined at different levels of abstraction into heterogeneous structures, which can be used as building blocks in the development of Web applications.
Abstract: A number of Web design problems continue to arise, such as: (1) decoupling the various aspects of Web applications (for example, business logic, the user interface, navigation and information architecture) and (2) isolating platform specifics from the concerns common to all Web applications. In the context of a proposal for a model-driven architecture for Web applications, this paper identifies an extensive list of models aimed at providing a pool of proven solutions to these problems. The models span several levels of abstraction, such as business, task, dialog, presentation and layout. The proposed architecture shows how several individual models can be combined at different levels of abstraction into heterogeneous structures which can be used as building blocks in the development of Web applications.
Abstract: Software engineering, as a discipline, is not yet as mature as other engineering disciplines, and it lacks criteria to assess, from an engineering perspective, the current content of its body of knowledge as embedded in the SWEBOK Guide. What, then, is the engineering knowledge that should be embedded within software engineering? Vincenti, in his book "What Engineers Know and How They Know It", has proposed a taxonomy of engineering knowledge types. To investigate software engineering from an engineering perspective, Vincenti's categories of engineering knowledge are used to identify relevant engineering criteria and their presence in SWEBOK.
Abstract: Since 1984, the International Function Point Users Group (IFPUG) has produced a set of standards and technical documents about a functional size measurement method, known as IFPUG, based on Albrecht Function Points. On the other hand, in 1998, the Common Software Measurement International Consortium (COSMIC) proposed an improved measurement method known as Full Function Points (FFP). The IFPUG and COSMIC methods both measure the functional size of software, but produce different results. In this paper, we propose a model to convert functional size measures obtained with the IFPUG method to the corresponding COSMIC measures. We also present the validation of the model using 33 software projects measured with both methods. This approach may be beneficial to companies using both methods or migrating to COSMIC, such that past data in IFPUG can be considered for future estimates using COSMIC and as a validation procedure.
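The conversion model itself is not detailed in the abstract; in convertibility studies of this kind, a common form is a line fitted by least squares to projects sized with both methods. A minimal sketch of that idea, using invented paired measurements (all values below are hypothetical, not the paper's 33-project data set):

```python
import numpy as np

# Hypothetical paired measurements: the same projects sized with both methods.
ifpug_ufp = np.array([100, 150, 200, 250, 300, 400], dtype=float)
cosmic_cfp = np.array([105, 160, 220, 270, 330, 430], dtype=float)

# Ordinary least-squares line: cfp ~ slope * ufp + intercept
slope, intercept = np.polyfit(ifpug_ufp, cosmic_cfp, 1)

def convert_ufp_to_cfp(ufp: float) -> float:
    """Estimate a COSMIC size from an IFPUG size using the fitted line."""
    return slope * ufp + intercept
```

Validation would then compare the converted sizes against the actual COSMIC measurements of held-out projects, for example via the residuals' magnitude.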
Abstract: A functional profile provides information about the distribution of functionality within specific software and permits comparison of its functional distribution with that of a sample of projects. This study investigates the impact of functional profiles on effort estimation models and compares the results with estimation models based only on total functional size. The data set used in this study includes the projects from the International Software Benchmarking Standards Group (ISBSG) repository that have had their functional size measured with COSMIC-FFP, the ISO 19761 standard.
Abstract: From the mid '90s on, a number of Agile Methodologies have been proposed, most of them based on the basic values and principles summarized in the 2001 "Agile Manifesto". These agile methodologies were aimed at small teams with severe project constraints (i.e. small project teams in the same location, the customer as a member of the project team, informal communication, a test-driven approach, etc.). Compared to more traditional project methodologies, Agile (or Lightweight) Methodologies are more detailed on Construction and Testing practices, but much less specific about other topics, such as Estimation. Currently, in most Agile Methodologies the experience of the team represents the basis for estimating from the high-level requirements. The application of a Functional Size Measurement Method (FSMM) for estimation purposes raises a number of technical problems in Agile projects (i.e. unstable requirements, an iterative SLC, non-functional requirements). A candidate solution is to combine an early sizing method for an agile project with a full FSMM to be applied later in the SLC, when User Stories (the way XP labels high-level functional requirements) become available and are more stable. The goal of the paper is to identify estimation issues in the best-known and most widely adopted agile methodologies, looking at possible improvements at the organizational level.
Abstract: This paper makes a first attempt towards improving the testing process in ERP projects by using a metric-based approach [ABU06] based on functional size measurement. The paper reports on how this approach was adapted to an ERP-package-specific project context, how it was applied to five settings in a mid-sized project, and what was learned from doing it.
Abstract: Within the context of use of the Unified Software Method (USM), traceability links are identified between all data elements of a software project that have a relation with another data element. Knowledge of these links provides complete traceability, which in turn means that synchronization of the information in a software project can be maintained. In this article, we propose a measurement method based on the USM, which is aimed at quantifying the amount of information related to a maintenance action planned on an existing element in a software project. This measurement method makes it possible to quantify the ratio of project information to be maintained to total project information, as well as the amount of information involved in the maintenance project being considered.
Abstract: This paper is concerned with the use of Radial Basis Function (RBF) neural networks for software cost estimation. The study is devoted to the design of these networks, especially their middle layer composed of receptive fields, using two clustering techniques: the C-means and the APC-III algorithms. A comparison between an RBFN using C-means and an RBFN using APC-III, in terms of estimation accuracy, is hence presented. This study is based on the COCOMO'81 dataset.
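As a sketch of the architecture the study compares, an RBF network places Gaussian receptive fields at centres found by a clustering step (plain C-means/k-means here) and fits its output layer by least squares. Everything below (data, number of centres, field width) is an illustrative assumption, not the paper's actual design or the COCOMO'81 data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for cost data: inputs X (project attributes), targets y (effort).
X = rng.uniform(0, 10, size=(60, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]

def kmeans(X, k, iters=20):
    """Plain k-means (C-means) to place the receptive-field centres."""
    centres = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centres) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centres[j] = X[labels == j].mean(axis=0)
    return centres

centres = kmeans(X, k=8)
width = 2.0  # shared receptive-field width (a design choice)

def hidden(X):
    """Gaussian activations of the middle layer, one column per centre."""
    d2 = ((X[:, None] - centres) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

# Output layer: linear least squares on the hidden activations.
H = hidden(X)
w, *_ = np.linalg.lstsq(H, y, rcond=None)
pred = hidden(X) @ w
```

Swapping the clustering step (e.g. for APC-III) changes only how `centres` is produced, which is exactly the comparison the paper describes.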
Abstract: This paper investigates the fuzzy representation of software project attributes. The aim is to generate fuzzy sets and their membership functions from numerical data of software project attributes. The proposed fuzzy sets generation process consists of two main steps: First, we use the well-known Fuzzy C-Means algorithm (FCM) and the Xie-Beni validity criterion to decide on the number of fuzzy sets. Second, we use a Real Coded Genetic Algorithm (RCGA) to build membership functions for these fuzzy sets. Membership functions can be trapezoidal, triangular or Gaussian. This study uses the software attributes given in the COCOMO'81 dataset.
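The first step described (Fuzzy C-Means plus the Xie-Beni validity index to choose the number of fuzzy sets) can be sketched as follows on invented one-dimensional attribute data; the RCGA step that shapes the final membership functions is omitted:

```python
import numpy as np

rng = np.random.default_rng(1)
# Invented 1-D attribute values with three visible groups (stand-in data).
x = np.concatenate([rng.normal(2, 0.3, 40), rng.normal(5, 0.3, 40), rng.normal(8, 0.3, 40)])

def fcm(x, c, m=2.0, iters=50):
    """Fuzzy C-Means on 1-D data; returns centres v and membership matrix u (c x n)."""
    u = rng.dirichlet(np.ones(c), size=len(x)).T  # columns sum to 1
    for _ in range(iters):
        um = u ** m
        v = (um @ x) / um.sum(axis=1)                 # fuzzy cluster centres
        d = np.abs(x[None, :] - v[:, None]) + 1e-12   # c x n distances
        u = 1.0 / (d ** (2 / (m - 1)) * (d ** (-2 / (m - 1))).sum(axis=0))
    return v, u

def xie_beni(x, v, u, m=2.0):
    """Xie-Beni index: lower means a more compact, better-separated partition."""
    d2 = (x[None, :] - v[:, None]) ** 2
    sep = min((vi - vj) ** 2 for i, vi in enumerate(v)
              for j, vj in enumerate(v) if i != j)
    return ((u ** m) * d2).sum() / (len(x) * max(sep, 1e-12))

# Pick the number of fuzzy sets that minimises the Xie-Beni index.
scores = {c: xie_beni(x, *fcm(x, c)) for c in range(2, 6)}
best_c = min(scores, key=scores.get)
```

In the paper's process, the memberships found for `best_c` clusters would then seed the genetic algorithm that fits trapezoidal, triangular or Gaussian membership functions.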
Abstract: "Tracking and control" activities in software projects are most often based, in industry, on just two dimensions of analysis: time and cost. Most often, these activities exclude other dimensions (such as quality, risks, impact on society, and the stakeholders' viewpoint in a broader sense) taken into account in Performance Management models such as EFQM or the Malcolm Baldrige model. How can these multiple concurrent control mechanisms across several dimensions of analysis be balanced? Balancing Multiple Perspectives (BMP) is a procedure designed to help project managers choose a set of project indicators from several concurrent viewpoints. This paper also presents the initial results from a BMP application, using a list of 14 candidate measures, with the objectives of representing the "as is" situation and determining what the "to be" situation will be, including cost figures to be possibly considered in future project budgets. The results presented, gathered from both an industrial and an academic sample, make it possible to look at several potential viewpoints and provide suggestions for improving measurement plans.
Abstract: A functional size measurement method, COSMIC-FFP, which was adopted in 2003 as the ISO/IEC 19761 standard, measures software functionality in terms of the data movements across and within the software boundary. It focuses on the functional user requirements of the software and is applicable throughout the development life cycle, from the requirements phase up to and including the implementation and maintenance phases. This paper extends the use of COSMIC-FFP for testing purposes by combining the functions measured by the COSMIC-FFP measurement procedure with the black box testing strategy. It leverages the advantage of COSMIC-FFP, which is its applicability during the early development phase, once the specifications have been documented. This paper also investigates the applicability of Entropy measurement in terms of its use with COSMIC-FFP for assigning priorities to test cases.
Abstract: Software component technology has become a major pillar of the IT evolution. The benefits of this technology, such as reuse, enhanced quality and relatively short application development time, have been key drivers of its industrial adoption. However, in its progress towards maturity, component technology has suffered from a number of limitations, such as unused component members (data and functionalities). For instance, a reusable software component incorporates a set of members, a size-varying subset of which is actually used to satisfy the functional requirements of a particular software application. This means that a complementary subset of unused members will persist in the deployed application, where this subset provides no functional value to the host application. Furthermore, these unused members can consume memory and network resources and might compromise application integrity and/or security if they are exploited inappropriately. In this paper, we propose CoMet, a prototype tool which applies the CUMM (Component Unused Member Measurement) method to measure unused component members (attributes and functionalities) and their usage percentages in a software application.
Abstract: Software components have been a mainstream technology used to tackle issues such as software reuse, software quality and software development complexity. In spite of the proliferation of component models (CORBA, .Net, JavaBeans), certain issues and limitations inherent to components are still not addressed adequately. For instance, composing software components, especially those provided by different suppliers, may result in faulty behavior. This behavior might be the result of incompatibilities between aging components and/or freshly released components and their respective interfaces. This paper presents an approach to tackling component interface incompatibilities via the use of a component and interface versioning scheme. This approach is designed as an extension to the Compositional Structured Component Model (CSCM), an ongoing research project. The implementation of this extension makes use of code annotations to provide interface versioning information useful in detecting interface incompatibilities.
Abstract: The increasing popularity of use-case driven development methodologies has generated an industrial interest in software size and effort estimation based on Use Case Points (UCP). This paper presents an evaluation of the design of the UCP measurement method. The analysis looks into the concepts, as well as the explicit and implicit principles of the method, the correspondence between its measurements and empirical reality and the consistency of its system of points and weights.
Abstract: Up until recently 'software metrics' have been most often proposed as the quantitative tools of choice in software engineering, and the analysis of these had been most often discussed from the perspective referred to as 'measurement theory'. However, in other disciplines, it is the domain of knowledge referred to as 'metrology' that is the foundation for the development and use of measurement instruments. This paper presents an overview of the set of metrology concepts as documented in the ISO Vocabulary of Basic and General Terms in Metrology (VIM) and its use in analyzing 'software metrics'. It also presents the measurement coverage within the Guide to the Software Engineering Body of Knowledge (SWEBOK) as well as a proposed measurement body of knowledge. Throughout these analyses some gaps are identified which need to be addressed for software measurement to mature.
Abstract: COSMIC-FFP (ISO 19761) represents the second generation of functional size measurement; based on its ease of understanding and use, it is applicable to various kinds of software applications, and this new method has rapidly achieved recognition as an ISO international standard as well as market acceptance in various countries. Several organizations are therefore interested in using convertibility ratios between COSMIC-FFP and the first generation of functional size measurement (in particular, Function Point Analysis (FPA), ISO 20926), in order to leverage data from their historical databases of software measures. Previous convertibility studies have indicated that convertibility of FPA to COSMIC-FFP can be simple, with a very good correlation for most MIS projects, but that there are some outliers for which convertibility is less straightforward. This study analyzes a new data set of 14 projects measured with both sizing methods, for which measurement results are available at the detailed level. The analysis reported here identifies reasons why, for some MIS projects, convertibility is not so straightforward. This analysis also provides lead indicators to identify outliers for convertibility purposes.
Abstract: The ISO International Vocabulary of Basic and General Terms in Metrology (VIM) represents the international consensus on a common and general terminology of metrology concepts. However, until recently, it was not usual practice in software engineering measurement to take into account metrology concepts and criteria in the design of software measures. Using the ISO 9126-4 Technical Report on the measurement of software quality in use as a case study, this paper reports on the extent to which this ISO series addresses the metrology criteria typical of classic measurement. Areas for improvement in the design and documentation of measures proposed in ISO 9126 are identified.
Abstract: The aim of this paper is to evaluate the accuracy of Fuzzy Analogy for software cost estimation on a Web software dataset. Fuzzy Analogy is based on reasoning by analogy and fuzzy logic to estimate effort when software projects are described by linguistic values such as low and high. Linguistic values are represented in the Fuzzy Analogy estimation process with fuzzy sets. However, the descriptions given of the Web software attributes used are insufficient to empirically build their fuzzy representations. Hence, we have suggested the use of the Fuzzy C-Means clustering technique (FCM) and a Real Coded Genetic Algorithm (RCGA) to build these fuzzy representations.
Abstract: A major difficulty with current organizational performance models in software engineering management is to represent many possible viewpoints quantitatively and in a consolidated manner, while at the same time keeping track of the values of the individual dimensions of performance. The models currently proposed do not meet the analytical requirements of software engineering management when various viewpoints must be taken into account concurrently. This paper presents a selection of multidimensional models of performance in software engineering and in management. It then describes the proposed concepts for a tool for multidimensional performance modeling in software engineering management. The tool would adopt an organizational framework of performance and build upon an open, generic and geometrical approach to performance modeling called QEST. It would also enable the user to select different visualization techniques to analyze data. In addition, the proposed tool would allow the user to iteratively define, collect and analyze multidimensional measures at each life cycle phase, and even enter potential results for subsequent phases. The initial test bed of the proposed tool would be the repository of project data of the International Software Benchmarking Standards Group (ISBSG).
Abstract: The S3m maintenance maturity assessment model is divided into four process domains containing 18 "Key Process Areas", each in turn containing "Roadmaps". Roadmaps are bodies of knowledge containing recommended practices that are linked to one another. Using the S3m software maintenance maturity model, this paper describes the assessment process and results of an individual maintainer process maintaining a key software application within a larger software maintenance organization.
Abstract: This paper discusses and analyzes possible solutions for achieving an effective process improvement in one specific key process area: measurement, whatever the maturity level and without the constraints of a software process improvement model staged representation. It investigates in particular a Support Process Area, that is, Causal Analysis & Resolution (CAR), together with Orthogonal Defect Classification.
Abstract: A major difficulty with current organizational performance models in software engineering management is to represent many possible viewpoints quantitatively and in a consolidated manner, while at the same time keeping track of the values of the individual dimensions of performance. The models currently proposed do not meet the analytical requirements of software engineering management when various viewpoints must be taken into account concurrently. This difficulty is compounded by the fact that the underlying quantitative data is of high dimensionality, and therefore the usual two- and three-dimensional approaches to visualization are generally not sufficient for representing such models. This paper describes the proposed concepts for a tool for multidimensional performance modeling in software engineering management. Due to the continuously increasing amount and the high dimensionality of the data underlying these models, a particular focus is given in this paper to potential visualization concepts and techniques that could be incorporated into the proposed tool.
Abstract: The measurement of software usability is recommended in ISO 9126-2 to assess the external quality of software by allowing the user to test its usefulness before it is delivered to the client. Later, during the operation and maintenance phases, usability should be maintained, otherwise the software will have to be retired. This then raises harmonization issues about the proper positioning of the usability characteristic: does usability really belong to the external quality view of ISO 9126-2, and should the external quality characteristic of usability be harmonized with that of the quality in use model defined in ISO 9126-1 and ISO 9126-4? This paper analyzes these two questions: first, we identify and analyze the subset of ISO 9126-2 quality subcharacteristics and measures of usability that can be useful for quality in use, and then we recommend improvements to the harmonization of these ISO 9126 models.
Abstract: Many software tools have been developed to support the implementation of the ISO-19761 COSMIC-FFP standard on functional size measurement. This paper presents a reference framework made up of the set of functions that is of interest to practitioners who implement ISO functional size measurement standards. It also includes a 2006 survey of COSMIC-related tools available both on the market and in the research community. Finally, a gap analysis is presented in which the functions that still need to be addressed by tool vendors are identified.
Abstract: This research extends the architecture-based software reliability prediction model to the COSMIC-FFP context. This model is based on Markov chains and is applicable prior to implementation, with the ability to build reliability models much earlier, at the requirements phase or based on the specifications for the design. In essence, each component of the system is modeled by a discrete time Markov chain. If this can be done, then a probabilistic analysis by Markov chains can be performed to evaluate the product reliability in the early phases of software development and to improve the reliability process for large software systems. This approach of applying a Markov model in the COSMIC-FFP context is illustrated with the railroad crossing case study.
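The architecture-based idea (model each component as a state, discount each transition by the component's reliability, and solve for the probability of successful termination) can be sketched as follows. The three-component architecture, transition probabilities and per-component reliabilities are all invented for illustration, not taken from the railroad crossing case study:

```python
import numpy as np

# Hypothetical 3-component architecture.
# P[i, j]: probability that control moves from component i to component j.
P = np.array([
    [0.0, 0.7, 0.2],
    [0.0, 0.0, 0.9],
    [0.0, 0.0, 0.0],
])
p_exit = 1.0 - P.sum(axis=1)       # probability of terminating from each component
R = np.array([0.99, 0.98, 0.97])   # per-component reliabilities (assumed)

# Success from state i: s_i = R_i * (sum_j P[i, j] * s_j + p_exit_i)
# => (I - diag(R) @ P) s = R * p_exit; system reliability = s at the start state.
s = np.linalg.solve(np.eye(3) - np.diag(R) @ P, R * p_exit)
system_reliability = s[0]
```

In the COSMIC-FFP extension the paper describes, the transition structure would be derived from the data movements identified during functional size measurement rather than assumed, as it is here.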
Abstract: Software reuse is often recommended for improving the productivity of the development process. However, recognizing opportunities for reuse remains a challenge. This work proposes a technique to identify opportunities for reuse based on the similarity between software functions. This technique, referred to here as "functional similarity", is based on functional information collected by the COSMIC-FFP measurement method during the measurement of the software. The proposed approach is applied to a set of measurement case studies for which opportunities for functional reuse have been identified and quantified.
Abstract: Some software measures are still not widely used in industry, despite the fact that they were defined many years ago, and some additional insights might be gained by revisiting them today with the benefit of recent lessons learned about how to analyze their design. In this paper, we analyze the design and definitions of Halstead's metrics, the set of which is commonly referred to as 'software science'. This analysis is based on a measurement analysis framework defined to structure, compare, analyze and provide an understanding of the various measurement approaches presented in the software engineering measurement literature.
Abstract: This paper introduces an approach to estimating the test volume and related effort required to perform verification and validation activities on software projects. This approach first uses size measures of functional requirements to estimate this volume, and then applies effort estimation models based on these test volumes. This estimation approach takes into account other types of non-functional requirements, as documented in ECSS-E-40, Part 1B.
Abstract: The "Guide to the Software Engineering Body of Knowledge" (SWEBOK, 2004 version) contains ten distinct Knowledge Areas (KAs) and three common themes: Quality, Tools and Measurement. Since measurement is present in all the KAs, an initial taxonomy for measurement had been proposed as a foundation for the addition of a new specific KA on Software Measurement. To verify the feasibility of such a proposal, this paper presents an overview of the level of empirical support for each measurement topic identified. The types of empirical support adopted are from the Zelkowitz & Wallace taxonomy.
Abstract: This paper presents an overview of some measurement concepts across both COSMIC-FFP, an ISO standard (ISO/IEC 19761) for functional size measurement, and Functional Complexity (FC), an entropy-based measure. It investigates in particular three metrological properties (scale, unit and scale type) in both of these measurement methods.
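As an illustration of the two kinds of measures being compared, the following sketch computes a COSMIC functional size (one CFP per data movement) and a Shannon-entropy figure over the distribution of movement types. The tally is invented, and the entropy formula here is a toy rendering of the entropy idea, not the published FC definition:

```python
import math
from collections import Counter

# Hypothetical tally of COSMIC data movements for one piece of software.
movements = ["Entry"] * 12 + ["Exit"] * 10 + ["Read"] * 6 + ["Write"] * 4

# COSMIC functional size: one CFP per data movement.
cfp = len(movements)

# Entropy-style complexity over the distribution of movement types, in bits.
counts = Counter(movements)
probs = [c / cfp for c in counts.values()]
entropy_bits = -sum(p * math.log2(p) for p in probs)
```

The contrast the paper examines is visible even here: `cfp` sits on a ratio scale with an explicit unit (CFP), while the entropy figure has a different scale type and unit (bits), so the two cannot be meaningfully added or compared directly.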
Abstract: Software measurement represents an important topic, heavily discussed within the software engineering community. For thirty years, software measurement has been an important domain in which interesting debates have occurred. Internal measurements of software do not require any execution. Since these measurements are automated, it is commonly accepted that errors cannot occur during such measurements; indeed, such measurements have no random or probabilistic aspect. The current paper aims at showing that other sources of error or uncertainty exist in software measurement. Sources of uncertainty can appear before the measurement itself, that is, at the measurement design level. Indeed, mistakes related to the design of a measurement can occur, and therefore affect the measurement results when the measures are executed. The current paper extends the notion of uncertainty to the measurement design level, and highlights the impact of design uncertainty on the measurement result.
Abstract: Several organizations are interested in using convertibility ratios between COSMIC-FFP (ISO 19761), the second generation of functional size measurement of software, and Function Point Analysis (FPA, ISO 20926). This paper presents a survey of previous convertibility studies and reports on findings from an additional data set. In summary, these studies indicate that convertibility can be simple and straightforward when only human users are taken into account in the measurement viewpoint. It also provides indications that convertibility can be less straightforward in some instances.
Abstract: Various measures have been proposed in software engineering for evaluating the quality of object-oriented software systems, many of them aimed at measuring the structural properties of the design of the software, such as coupling, cohesion and inheritance. There is diversity among the current proposals for coupling measurements and models, reflecting a lack of consensus on coupling and the need for a reference framework. This paper investigates the design of coupling measures based on Abran and Jacquet's model of the process for designing a measurement method. This analysis is illustrated with a case study using measures of one type of coupling, suggested by Chidamber and Kemerer: Coupling Between Objects (CBO). This case study verifies whether or not this CBO measure includes all the design elements of a measurement method.
Abstract: Within the ISO's mandate to upgrade its set of technical reports on the measurement of the quality of software products (ISO 9126), the ISO working group associated with it has come up with a proposed new structure, with some interesting contributions. This paper investigates the maturity of two new concepts proposed (measurement primitives and quality measures), highlights some of their weaknesses and proposes a way to address these using the measurement information model of ISO 15939 on the software measurement process.
Abstract: The Guide to the Software Engineering Body of Knowledge (SWEBOK, ISO TR 19759) provides a consensually validated characterization of the bounds of the software engineering discipline, as well as topical access to the Body of Knowledge supporting that discipline. This Body of Knowledge is currently organized as a taxonomy subdivided into ten Knowledge Areas designed to discriminate among the various important concepts, but only at the top level. Of course, software engineering knowledge is much richer than this high-level taxonomy and currently resides in the textual descriptions of each knowledge area. Such textual descriptions vary widely in style and content. The ontology approach is therefore used to analyze the richness of this body of knowledge and to improve its structuring. This paper presents the proto-ontology developed in the first phase of the construction of a domain ontology for this new engineering discipline. Overall, some six thousand (6,000) software engineering concepts and about 400 relationship types between concepts have been identified. Some of the major results obtained to this point are detailed and discussed.
Abstract: To develop adequate software project estimation models using statistical techniques, the consistency of historical data is important. This paper investigates this issue by looking into the consistency of the information contained in one of the most important fields in the International Software Benchmarking Standards Group (ISBSG) repository, that is, the project effort data field. This paper also presents an example of how effort data from projects that include a large number of project phases can be used for extrapolation, through a normalization process, to projects with fewer phases. The ISBSG organization has attempted to tackle this issue of the variability of phases included in the project effort field by deriving a normalized work effort field. This paper investigates this problem and reports on a number of related issues.
Abstract: "Tracking & Control" activities in software projects are most often based, in industry, on just two dimensions of analysis: time and cost. Most often, 'tracking & control' excludes other dimensions (such as quality, risks and impact on society, and the stakeholders' viewpoint in a broader sense) taken into account in Performance Management models such as EFQM or the Malcolm Baldrige model. How can balancing those multiple concurrent control mechanisms across several dimensions of analysis be done? Balancing Multiple Perspectives (BMP) is a procedure designed to help project managers choose a set of project indicators from several concurrent viewpoints. This paper also presents the related questionnaire, with a list of 14 candidate measures helping to compare the "as-is" situation and to figure out the desired one, including cost figures to be possibly considered in the budget for future projects.
Abstract: Traditional cost estimation models in software engineering are based on the concept of productivity, defined as the ratio of output to input; for instance, detailed software estimation models, such as COCOMO, can take multiple factors into account, but their multipliers lead to a single perspective based on the productivity concept. A less explored relationship in software engineering is the one between productivity and performance. This paper presents some classic concepts on the multidimensionality of performance, and proposes some suggestions to implement multidimensional performance models in software engineering based on certain fundamental concepts from geometry, that is, the QEST/LIME family of models.
Abstract: This paper discusses, within the context of the Open Source phenomenon, an issue crucial to the software industry in terms of the dynamics of technology adoption. In particular, it investigates and compares the distinct rates of penetration of the Linux operating system in two different markets: in summary, Linux is very competitive in the server market, while its presence is still very limited in the mass market, despite its technical features, which make it comparable to the MS-Windows standard. The reasons for these distinct penetration rates are investigated, using in particular the Increasing Returns Theory proposed by Brian Arthur and the differentiated characteristics of the Open Source processes of production and diffusion (openness, modularity and cooperative development). Finally, this paper explores the role of the user's technical knowledge of the network topology of Linux processes of production and diffusion.
Abstract: The Guide to the Software Engineering Body of Knowledge (SWEBOK) has been developed to represent an international consensus formed through broad public participation in the review process and is now close to final approval as ISO/IEC TR 19759. This guide constitutes an integrated structuring of a large set of software engineering concepts developed individually over the past forty years from a large number of distinct viewpoints. The absence of a recognized consensus on software engineering terminology has made building the SWEBOK Guide, and achieving this international consensus, a challenging task. This paper presents a first ontological approach to building domain-specific ontologies as a part of the Semantic Web, and shows how it can be used to build the SWEBOK ontology and to increase its internal consistency and clarity. Finally, new ideas on how a SWEBOK ontology can help in developing an e-learning system on software engineering are presented.
Abstract: The Software Engineering Body of Knowledge (SWEBOK) project of the IEEE Computer Society has developed an international consensus on a Guide to the key knowledge in the Software Engineering domain. This SWEBOK Guide is being adopted by the international standardization community as ISO 19759. The SWEBOK Guide includes 10 distinct Knowledge Areas (KAs) and three common themes: Quality, Tools and Measurement. As Measurement is present in all the KAs, some reviewers have suggested representing Measurement as a distinct KA. A recent analysis of software measurement topics comparing SWEBOK to the ISO standard on metrology and the Abran/Jacquet measurement process model has highlighted a lack of "generally accepted" sources, as well as some missing knowledge types, even in the area of exploitation of measurement results in quality and prediction models. According to the "generally accepted" criteria of the Project Management Institute in the PMBOK, software engineering measurement, as of 2003, would still be considered rather immature in terms of knowledge maturity. At the same time, the pace of research on software measurement has recently been on the increase, and several international standards on software measurement are coming out, both for software processes (CMMI, ISO 15504, 15939) and for software products (ISO 14143, 19761, 9126, etc.). Such results are strengthening the knowledge developed over the last 30 years in terms of measurement processes and methods. We are therefore of the opinion that such recent work is rapidly closing important gaps in software-related measurement knowledge, which could move relatively quickly towards the "generally accepted" threshold for establishing a new KA in the SWEBOK. This paper therefore proposes, on the basis of the Trial version of the SWEBOK Guide, of recent work and of the SWEBOK editorial criteria, an initial taxonomy for a Software Measurement body of knowledge.
Abstract: Maintaining and supporting the software of an organization is not an easy task, and software maintenance managers do not currently have access to tools to evaluate strategies for improving the specific activities of software maintenance. This article presents the new architecture (version 2.0) of the software maintenance capability maturity model (SMCMM). The contributions of this paper are: 1) to present a categorization of the software maintenance processes using a representation similar to that in ISO 12207; and 2) to present the new architecture of the model, which highlights the processes unique to maintainers.
Abstract: A list of fundamental principles of software engineering is viewed as needed to solidify the foundations of the field, thereby enabling and hastening the maturation of the discipline. Most of the authors who have investigated software engineering principles note that the discipline has a foundation. Some talk about principles, others about concepts, laws or notions, while they all agree with the view that a stable basis for the discipline is formed by their own individual set of these. However, this paper illustrates that there is a clear lack of consensus about which of the proposed principles are indeed fundamental. Furthermore, authors do not share the same definition of software engineering, nor do they share the same definition of the term 'principle'. In summary, over 250 statements on what is meant by a principle are inventoried, and most are based only on the author's opinion or point of view. Therefore, significant effort is still required to pursue this research topic relating to the foundation of software engineering. In particular, more work is required to design an appropriate research methodology, including precise definitions of the terms being used.
Abstract: Software maintenance constitutes an important part of the total cost of the lifecycle of software. Some even argue this is the most important fraction of the cost (50-80 percent according to Tony Scott [14], 75% according to Rand P. Hall [5]). The added value of software maintenance is often not fully understood by the customer, leading to a perception that software maintenance organizations are costly and inefficient. A common view of maintenance is that it is merely fixing bugs. However, studies over the years have indicated that in many organizations the majority, over 80%, of the maintenance effort is dedicated to value-added activities (Sommerville [15], Pressman [13], Pigoski [12]). To improve customer perceptions of software maintenance, it is important to provide customers with better insights into the activities performed by the maintenance organization and to document such performance with objective measures of software maintenance activities. In this paper, the prerequisites for software maintenance productivity analysis are described using the experiences of the Bahrain Telecommunications Company (Batelco) during the years 2001-2002. First, the differences between software maintenance activities and IS development projects are described. Then a basic trend model is applied, as well as ways to manage the expectations of the customers. To conclude, some remarks are made regarding the application of productivity analysis by software maintenance managers.
Abstract: The business value of a software product results from its ultimate quality as seen by both acquirers and end users. An integrated life cycle quality model, further called the complement model for software product quality, combines the high-level quality view of the TL9000 Handbook and the detailed view from ISO/IEC 9126 in the process of defining, measuring, evaluating and finally achieving the appropriate quality of a user-centered software product. This paper presents how the use of TL9000 product operational (in-the-field) quality measures can bring benefits to setting up, measuring and evaluating the quality of the software product being developed, through its entire life cycle. The process of building quality into a software product is discussed and illustrated by the TL9000-ISO complement model as well as by an application process walk-through.
Abstract: This paper presents the state of the art of the new international standard ISO/IEC 19761, the COSMIC-FFP method for functional size measurement, and its future perspectives.
Abstract: ISO 19761 (COSMIC-FFP) defines the standard for measuring the functional size of software and represents the second generation of functional size measurement methods. While there is a large offering of software tools to support the development process, including integrated environments such as RUP, measurement of functional size had until recently been almost exclusively a manual process. This paper presents an approach and a tool for the automation of functional size measurement with RUP. The design of this tool is based on a direct mapping between COSMIC-FFP and UML concepts and notation, a foundation from which the required Rational Rose artifacts can be extracted to proceed to the software project measurement operation. This makes it possible not only to derive an accurate functional size once all specifications have been completed, but also to derive early size indicators when only high-level information is available.
Abstract: Software component technology has been promoted as an innovative means to tackle the issues of software reuse, software quality and software development complexity. Several component models (CORBA, .Net, JavaBeans) have been introduced, yet certain issues and limitations inherent to components still need to be addressed. As software components with hosts of functionalities tend to be coarse- to large-grained in size, and since the set of functionalities required by an application varies according to the particular application context, an excessive number of unwanted functionalities might be generated by such components within the application. In this paper, we present the Compositional Structured Component Model (CSCM), designed to handle the issue of unwanted component functionalities and to provide a flexible approach for easier customization, adaptation, and reuse. The CSCM model is designed to handle this issue via component functional composition using metadata composition instances, which allow selective composition of a component's required functionalities.
Abstract: Currently, component technology represents a major step in the evolution of software technology as a whole. Although it has been undergoing continuous enhancement, this technology suffers from a number of limitations: in particular, components' unused functionalities. For instance, a software component incorporates a set of functions of which a size-varying subset is actually used to satisfy the functional requirements of a particular software application. Consequently, a subset of unused functionalities will persist in the deployed application. This subset of unused functionalities provides no functional value to the hosting application. Furthermore, these unused functionalities consume memory and network resources and might compromise application security if they are exploited inappropriately. In this paper, we propose CUMM (Components' Unused Member Measurement), a method to measure components' unused members (attributes and functionalities), and their memory consumption inside a software application. Furthermore, we present a set of analysis models which use the results of the CUMM to determine percentages of unused members as well as the degree of generality of a component's members.
Abstract: Software measurement is still emerging as a field of knowledge, and, most often, traditional quality criteria of measurement methods such as repeatability, reproducibility, accuracy and convertibility are not even investigated by software measurement method designers. In Software Engineering, the Functional Size Measurement (FSM) community has been the first to recognize the importance of such quality criteria for measurement, as illustrated in the recently adopted ISO document 14143-3; these criteria represent, however, only a subset of the metrology criteria which includes, for instance, measurement units and internationally recognized measurement references (e.g. 'etalons'). In this paper, a design for building a set of normalized baseline measurement references for COSMIC-FFP (ISO 19761), the 2nd generation of FSM methods, is proposed. The goal is to design, for the first time in Software Engineering, a system of references for software FSM methods.
Abstract: Even though measurement is considered an essential concept in recognized engineering disciplines, measures in software engineering are still far from being widely used. To figure out why software measurement has not yet gained enough peer recognition, this paper presents a set of issues that still have to be addressed adequately by the software measurement community. These issues were derived from the analysis of comments obtained during two Delphi studies and a Web-based survey conducted to identify and reach a consensus on the fundamental principles of the discipline within the international software engineering community. The paper also discusses the application of metrology concepts as a research direction to address some of the measurement issues identified.
Abstract: Requirements traceability is recognized as a concern in an increasing number of standards and guidelines for requirements engineering. However, there are many challenges in applying current traceability approaches to complex and dynamic software system development processes. In this paper, functional traceability is introduced, and difficulties related to implementing traditional traceability methods are discussed. To address these challenges, we propose a model aimed at building functionality into a logic-based graphical framework in order to define the interrelationships between software-based system components, functions, and sub-functions, as well as the interrelationships between software life cycle phases. The proposed model provides functional traceability for a large-scale software development process. The architecture of the Functional Traceability Model (FTM) captures both system and software specifications and design attributes into a multi-level hierarchy. Implementation of the model to assess functional requirements traceability is illustrated as a case study.
Abstract: On the basis of a framework defined to understand, structure, compare and analyze the different measurement approaches presented in the software engineering literature, this paper develops a detailed analysis of the well-known McCabe cyclomatic complexity number. It highlights in particular some misconceptions underlying this measurement approach, and also points out the need for well-grounded definitions and models for the measurement methods practitioners are applying in the software industry.
Abstract: As a software functional size method, Function Point Analysis (FPA) has been used in many organizations for measuring productivity and building estimation models [1]. FPA is based on a number of function types (external inputs, external outputs, external inquiries, internal logical files and external interface files), the set of which we refer to as a functional profile. It has been observed that a majority of projects within a sample are close to having an average functional profile, while some, of course, can be considered as outliers. This paper investigates the functional profiles within the ISBSG international repository, and whether or not productivity varies with such profiles. The results of the statistical analyses lead to distinct estimation models, depending on whether or not a project functional profile is within a reasonable range of the average functional profile for any particular sample.
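The notion of a "functional profile" in the abstract above can be made concrete with a small sketch. The average-complexity weights below are the commonly cited IFPUG values; the helper names and the sample project are illustrative assumptions, not taken from the paper (a real FPA count weighs each function individually by its complexity):

```python
# Sketch: an unadjusted function point count from a "functional
# profile" (counts of the five FPA function types), using the
# commonly cited IFPUG average-complexity weights.
AVG_WEIGHTS = {"EI": 4, "EO": 5, "EQ": 4, "ILF": 10, "EIF": 7}

def unadjusted_fp(profile):
    """profile maps a function type to the number of functions of that type."""
    return sum(AVG_WEIGHTS[t] * n for t, n in profile.items())

def profile_shares(profile):
    """Relative contribution of each function type to the total size;
    comparing these shares across projects is what distinguishes an
    'average' functional profile from an outlier."""
    total = unadjusted_fp(profile)
    return {t: AVG_WEIGHTS[t] * n / total for t, n in profile.items()}

# Hypothetical project: 20 inputs, 10 outputs, 5 inquiries, 8 ILFs, 2 EIFs
project = {"EI": 20, "EO": 10, "EQ": 5, "ILF": 8, "EIF": 2}
print(unadjusted_fp(project))  # 244
```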
Abstract: The "Guide to the SWEBOK" (2001 Trial version) currently contains ten distinct software engineering Knowledge Areas (KAs) and three common themes: Quality, Tools and Measurement. The Measurement topic is pervasive throughout all the KAs (in both the 2001 and 2004 editions). An initial taxonomy for a new specific KA on Software Measurement had been proposed in 2003. To improve this initial proposal, Vincenti's classification of engineering knowledge types was used as an analytical tool. This paper presents a revised breakdown for a body of knowledge on Software Measurement.
Abstract: This paper presents a V&V Measurement Management Tool (V&V MMT) to support the management of V&V activities in the context of safety-critical software. We illustrate how the V&V MMT can facilitate the quantification of the V&V processes, activities and tasks recommended in the IEEE Standard for Software Verification and Validation (IEEE Std. 1012-1998), facilitating the establishment of V&V measurement indicators for: (1) the Software Verification and Validation Plan (SVVP); (2) baseline change assessment; (3) management review of V&V; (4) management and technical review support; and (5) the interface with organizational and supporting processes.
Abstract: Software development effort estimation with the aid of artificial neural networks (ANN) attracted considerable research interest at the beginning of the nineties. However, the lack of a natural interpretation of their estimation process has prevented them from being accepted as common practice in cost estimation. Indeed, they have generally been viewed with skepticism by a majority of the software cost estimation community. In this paper, we investigate the use and the interpretation of the Radial Basis Function Networks (RBFN) in the software cost estimation field. We first apply the RBFN to estimating the costs of software projects, and then study the interpretation of cost estimation models based on an RBFN using a method which maps this neural network to a fuzzy rule-based system, taking the view that, if the fuzzy rules obtained are easily interpreted, then the RBFN will also be easy to interpret. Our case study is based on the COCOMO'81 dataset.
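As a rough illustration of the technique named above (a sketch under assumptions, not the paper's implementation), a minimal RBFN can be built from Gaussian hidden units with fixed centers and least-squares output weights; each hidden unit then reads as a fuzzy rule of the form "IF the project is near center c_j THEN effort is about w_j", which is the interpretability link the abstract discusses. The toy data and function names are invented for the example:

```python
import numpy as np

def rbf_design(X, centers, width):
    """Gaussian activations of each hidden unit for each input row."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2 * width ** 2))

def fit_rbfn(X, y, centers, width):
    """Fit the linear output weights by least squares."""
    Phi = rbf_design(X, centers, width)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return w

def predict_rbfn(X, centers, width, w):
    return rbf_design(X, centers, width) @ w

# Toy data: effort roughly proportional to a single size attribute
X = np.array([[1.0], [2.0], [3.0], [4.0]])
y = np.array([10.0, 19.0, 31.0, 40.0])
centers = X.copy()  # one hidden unit (one "fuzzy rule") per training project
w = fit_rbfn(X, y, centers, width=1.0)
pred = predict_rbfn(X, centers, width=1.0, w=w)
```

With one center per training point, the Gaussian design matrix is invertible, so the fitted network interpolates the training efforts exactly; in practice fewer centers (e.g. from clustering) give smoother, more readable rules.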
Abstract: The software maintenance function suffers from a scarcity of management models that would facilitate its evaluation, management and continuous improvement. This paper is part of a series of papers that presents a Software Maintenance Capability Maturity Model (SMCMM). The contributions of this specific paper are: 1) to describe the key references of software maintenance; 2) to present the model update process conducted during 2003; and 3) to present, for the first time, the updated architecture of the model.
Abstract: In the software engineering literature, various methods of software size measurement have been proposed (such as COSMIC-FFP, IFPUG, and MarkII). Despite the fact that these methods have the same objectives, their designs are vastly different, as are, of course, their results. This creates confusion, making it difficult for industrial organizations, which rely more and more on software and on the need to manage it, to make the choices that best suit their requirements, as they have neither the time nor the analytical tools to verify each method individually. We propose here an approach for the analysis of some aspects of the quality of software measures. This approach is based on our modeling of the set of metrology concepts documented in the ISO International Vocabulary of Basic and General Terms in Metrology (VIM). It is illustrated with a case study using one specific functional size measurement method recognized as an ISO standard: COSMIC-FFP (ISO 19761). This case study documents which of the metrology concepts are being addressed, either in the design of this measurement method or in some of its practical uses. The result of this analysis indicates, for instance, that the design of COSMIC-FFP encompasses a large number of these concepts. From this case study, it can also be inferred that our approach can be used to analyze any other software functional size measurement method, as well as other software measures suggested to industry.
Abstract: This paper presents an exploratory study of related concepts across information theory-based measures and functional size measures. Information theory-based software measurement has been used in the design of an entropy-based measure of functional complexity in terms of an amount of information based on some abstraction of the interactions among software components. As a functional size measurement method, COSMIC-FFP, adopted in 2003 as the ISO/IEC 19761 standard, measures software functionality in terms of the data movements across and within the software boundary. In this paper, we explore some of the links between the two types of measures, and, in particular, the similarities (and differences) between their generic model of software functionality, their detailed model components taken into account in their respective measurement processes and, finally, their measurement function. Some further investigation avenues are also identified for extending the use of functional size measures for reliability estimation purposes and for scenario-based black-box testing.
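The entropy-based view of functional complexity mentioned above can be sketched in a few lines. This is an assumed, simplified formulation for illustration only (the exact measure analyzed in the paper differs): each observed inter-component interaction is treated as a symbol, and complexity is the Shannon entropy, in bits, of the empirical interaction distribution:

```python
import math
from collections import Counter

def interaction_entropy(interactions):
    """Shannon entropy (bits) of the empirical distribution of
    inter-component interactions; higher entropy means the amount
    of information carried by the interactions is larger."""
    counts = Counter(interactions)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Hypothetical trace: four distinct interactions, uniformly
# distributed, so the entropy is log2(4) = 2 bits.
trace = ["A->B", "B->C", "A->C", "C->A"]
print(interaction_entropy(trace))  # 2.0
```

By contrast, COSMIC-FFP simply counts data movements (Entry, Exit, Read, Write) crossing the software boundary; the exploratory link is that both measures abstract software functionality into interactions, but aggregate them differently.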
Abstract: In the real world, a knowledge-based system (KBS) must often accommodate a considerable number of references which support the particular knowledge domain. The size of such a knowledge repository makes its detailed verification challenging and subsequent maintenance onerous. New technology can help improve both the verification and maintenance of these knowledge repositories. To investigate the effectiveness of new technologies for verification and maintenance, we developed two successive versions of a KBS designed to improve the consistency of software measurement using ISO 19761 (the COSMIC-FFP measurement method for software functional size) and the COSMIC-FFP guide [3]. The COSMIC-FFP KBS consists of a hybrid knowledge system built on case-based and rule-based approaches. The first prototype was built in 2000 using Microsoft technology (Visual Basic 6, Access 2000, hyperlink facilities for RTF files). This prototype included 105 case problems and almost 800 files (hyperlinks) for the required references. Because of the high number of files, the verification and validation of this KBS was, of course, very challenging. This led us to design a second KBS, a Web-based prototype (XML, XSL, Java Server Pages) which is much easier to verify and validate, leading to considerably improved maintainability and expandability. This paper presents an overview of the selected hybrid KBS approach and of the first and second prototypes. It also illustrates the transitioning and quantitative benefits for the detailed KBS verification and validation process and the lessons learned during this process.
Abstract: Quality needs for both customer and software supplier have become more complex and critical than ever. This paper presents the current ISO software product and process quality standards and our positioning of these standards as software quality engineering instruments, including the phases of product development to which they map. The first generation of these product-related and process-related standards are currently in their final ISO publication stage but, having been developed independently, their usage by practitioners will be particularly challenging. While ISO software experts are already at work defining strategies to develop the next generation of these standards, help is needed by practitioners to understand, deploy and leverage ISO standards that are now becoming available to them. This paper addresses first the immediate need for integrating these process- and product-related standards in the development process through our quality engineering approach, which maps them at the detailed level of the life cycle. Then, work in progress at the ISO level to develop the next generation of these software quality related standards is presented.
Abstract: Understanding, predicting, and controlling performance is a continuous challenge, and static measurement systems are inadequate in dynamic and rapidly changing business environments. In this paper, we propose a generic, flexible and integrated Measurement System Repository to handle continuously changing business conditions, and we report our experience in its design and development at Ericsson Research Canada. This Performance Measurement Repository has been developed based on the concept of a data warehouse environment. Reporting features are based on the definition of queries to On Line Analytical Process (OLAP) cubes. OLAP cubes are created as materialized views of the measurement data, and the user functionalities are implemented as analytical drill-down/roll-up capabilities and as Indicator and Trend Analysis capabilities.
Abstract: We propose a measurement repository for collecting, storing, analyzing and reporting measurement data based on the requirements of the Capability Maturity Model Integrated (CMMI). Our repository is generic, flexible and integrated, supporting a dynamic measurement system. It was originally designed to support Ericsson Research Canada's business information needs. Our multidimensional repository can relate measurement information needs to CMMI process and product requirements. The data model is based on a hierarchical and multidimensional definition of measurement data. It has been developed based on the concept of a data warehouse environment. Reporting features are based on the definition of queries to On Line Analytical Process (OLAP) cubes. OLAP cubes are created as materialized views of the measurement data, and the user functionalities are implemented as analytical drill-down/roll-up capabilities and as Indicator and Trend Analysis capabilities.
Abstract: Software maintenance constitutes an important part of the total cost of the lifecycle of software. Some even argue that this might be the most important component of the cost, even though customers often do not perceive the added value of software maintenance. A proposed approach to highlighting the added value of maintenance is to provide the customer with process performance measures aligned with the key activities performed by the maintenance organization. Such performance measures could then form the basis for a clear agreement on the expectations, and outcomes, of these activities. Process performance management and measurement requires that processes be chosen based on their impact on the quality and the performance of the software maintenance organization. It also requires that measures be identified and established, and that a reference point (baseline) and a target be set for each measure. Finally, it requires that data be collected in order to develop and use process performance prediction models. In this paper, we introduce best practices, for the first three maturity levels, to help the maintenance organization assess its process performance. These practices constitute a subset of our proposed Software Maintenance Capability Maturity Model (SM-CMM).
Abstract: In the software engineering literature, numerous practitioners and researchers have proposed hundreds of "software measures", or "software metrics". To help industry assess the quality of these proposed measures, various researchers have proposed various approaches to software measurement validation, none of which has yet been widely used by either designers or users of software measures. To tackle this diversity of validation approaches, Kitchenham et al. proposed a framework for software measurement validation and suggested a critical review of their proposed framework. This paper performs such a review using a key ISO document on measurement, the ISO Vocabulary on Metrology, as well as a measurement process model derived from an analysis of the individual validation proposals. The metrology concepts in particular have facilitated a greater understanding of the set of measurement sub-concepts that must be included in each of the steps from the design of a measurement method to the use of the measurement results.
Abstract: Most commercial estimation tools can be considered as black boxes in that they do not provide details of the samples used to build their estimates. With the availability of the ISBSG international repository of 2000+ software projects, it is now feasible to develop white box estimation models which provide additional insights into the strengths and limitations of software estimates. This paper presents two Web-based software prototypes developed to support white box software project estimation.
Abstract: Measurement is progressively becoming a mainstream management tool to help ICT organizations plan, monitor and control. However, measurement itself is not a mature domain of knowledge in software engineering. The assessment of measurement indicators proposed in process improvement models is investigated, and a methodology is proposed for the design of a measurement indicator assessment grid. A case study on the use of this assessment grid is presented and its results discussed.
Abstract: The objective of Empirical Software Engineering (ESE) is to improve the software development and maintenance processes and, consequently, the quality of their various deliverables. This can be achieved by evaluating, controlling and predicting some important attributes of software projects, such as development effort, software reliability, and programmer productivity. One of the most interesting sub-fields of ESE is software estimation models. Software estimation models are used to predict some critical attributes of entities that do not yet exist. For example, we often need to predict how much a development project will cost, or how much time and effort will be needed, so that we can allocate the appropriate resources to the project. In general, estimation models relate the attribute to be predicted to some other attributes that we can measure now, by using mathematical formulas or other techniques such as neural networks, case-based reasoning, regression trees and rule-based induction. Currently, our research concerns software cost estimation models. We have developed an innovative approach, referred to as Fuzzy Analogy, for software cost estimation. Nevertheless, this approach can be used to evaluate and predict other attributes such as reliability, quality, safety, and maintainability. In this paper, we present some results of our recent research related to the cost estimation field.
Abstract: Articulating a body of knowledge is an essential step toward developing a profession because it represents a broad consensus regarding the contents of the discipline. The IEEE Computer Society, with the support of a consortium of industrial sponsors, has recently published the Guide to the Software Engineering Body of Knowledge (SWEBOK). Throughout this Guide, measurement is pervasive as a fundamental engineering tool. What, then, is the level of maturity of measurement in software engineering? Until recently, software 'metrics' had most often been proposed as the quantitative tools of choice in software engineering, and their analysis had most often been discussed from the perspective referred to as 'measurement theory'. However, in other disciplines, it is the domain of knowledge referred to as 'metrology' that is the foundation for the development and use of measurement instruments and measurement processes. In this paper, our initial modelling of the sets of measurement concepts documented in the ISO International Vocabulary of Basic and General Terms in Metrology is used to investigate and position the measurement concepts referred to in the Guide to the Software Engineering Body of Knowledge. This structured analysis reveals that within the generally accepted body of knowledge on measurement within software engineering, there are still large gaps with respect to the body of knowledge in metrology.
Abstract: The SWEBOK Guide is a project of the IEEE Computer Society and is designed to characterize the discipline of software engineering and to provide a topical guide to the literature describing the generally accepted knowledge within the discipline. The Trial version of the SWEBOK Guide was published in December 2001. The paper reviews a selection of the trial usages of the SWEBOK Guide in the field of software engineering education. It also describes the links between the SWEBOK Guide project and other initiatives in the field, notably the Computing Curricula - Software Engineering.
Abstract: The Guide to the Software Engineering Body of Knowledge (SWEBOK) has been developed to represent an international consensus formed through broad public participation in the review process and is now close to final approval as ISO/IEC TR 19759. This guide constitutes an integrated structuring of a large set of software engineering concepts developed individually over the past forty years from a large number of distinct viewpoints. The absence of a recognized consensus on software engineering terminology has made building the SWEBOK Guide, and achieving an international consensus, a challenging task. While major consensus has been reached at the broad taxonomy level of SWEBOK, some work remains to increase terminology consistency at a more detailed level. This paper briefly presents SWEBOK and related terminology issues. We then present the ontology approach to building domain-specific ontologies and show how it can be used to build the SWEBOK ontology and to increase its internal consistency and clarity. A specific example of the benefits of an ontology is presented, along with an analysis of the use of the term 'quality' in the current version of the SWEBOK Guide.
Abstract: This paper presents and describes a Web-based implementation of a three-dimensional software quality measurement model. The implementation is based on the 2003 version of the ISO quality model for software products: ISO 9126. The prototype presented includes all the 120+ measures proposed in the ISO standard, as well as weight assignments, target values, current project values and automated calculations for a three-dimensional representation of quality performance, based on the geometrical tetrahedron formula of the QEST model.
Abstract: The generic concepts of Function Points Analysis were published in the late 1970s, and later more detailed measurement rules were developed to improve consistency of measurement. Due to lack of good software documentation, it is not always possible to apply all the detailed rules, and measurers must fall back on approximation techniques. This paper presents an analysis of two such techniques: Function Points Simplified, and 'backfiring' with a ratio of lines of code per Function Point. Two verification criteria were selected from ISO 14143-3: accuracy and convertibility. Results from empirical studies with five data sets are reported.
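The 'backfiring' approximation analysed in the abstract above divides a project's lines of code by a language-specific LOC-per-Function-Point ratio. A minimal sketch, in which the ratios are purely illustrative assumptions (published backfiring tables vary by language, dialect and survey year, and are not taken from this paper):

```python
# Illustrative LOC-per-Function-Point ratios -- assumed values for the
# sketch only; real published tables differ.
LOC_PER_FP = {"COBOL": 100, "C": 130, "Java": 55}

def backfire(loc: int, language: str) -> float:
    """Approximate a Function Point count from a line-of-code count
    ('backfiring'): FP ~= LOC / (LOC per FP for the language)."""
    return loc / LOC_PER_FP[language]

# With these assumed ratios, a 13,000-line C program backfires to 100 FP.
```

The simplicity of the calculation is exactly why the abstract's verification criteria (accuracy, convertibility) matter: the technique is only as good as the ratio table behind it.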
Abstract: Estimation models in software engineering are used to predict some important attributes of future entities such as software development effort, software reliability and programmers' productivity. Among these models, those estimating software effort have motivated considerable research in recent years. The prediction procedure used by these software-effort models can be based on a mathematical function or on other techniques such as analogy-based reasoning, neural networks, regression trees, and rule induction models. Estimation by analogy is one of the most attractive techniques in the software effort estimation field. However, the procedure used in estimation by analogy is not yet able to correctly handle linguistic values (categorical data) such as 'very low', 'low' and 'high'. In this paper, we propose a new approach based on reasoning by analogy, fuzzy logic and linguistic quantifiers to estimate software project effort when it is described by either numerical or linguistic values; this approach is referred to as Fuzzy Analogy. This paper also presents an empirical validation of our approach based on the COCOMO'81 dataset. We conclude this paper with a discussion about how we can 'humanize' the Fuzzy Analogy approach by using the concept of Soft Computing in its estimation process.
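For the purely numerical case, classical estimation by analogy (the starting point that Fuzzy Analogy extends to linguistic values) finds the historical projects most similar to the new one and averages their effort. A minimal sketch, with invented toy data rather than COCOMO'81 values:

```python
import math

# Toy historical projects: (numeric feature vector, actual effort in
# person-months). Values are invented for illustration.
HISTORY = [((10.0, 3.0), 24.0), ((12.0, 3.5), 30.0), ((40.0, 8.0), 120.0)]

def estimate_by_analogy(target, k=2):
    """Classical estimation by analogy: rank historical projects by
    Euclidean distance to the target and average the effort of the
    k closest analogues."""
    ranked = sorted(HISTORY, key=lambda p: math.dist(target, p[0]))
    return sum(effort for _, effort in ranked[:k]) / k

# estimate_by_analogy((11.0, 3.2)) -> 27.0 (mean of the two closest projects)
```

The limitation the abstract points out is visible here: Euclidean distance is undefined when a feature is 'very low' rather than a number, which is the gap the fuzzy similarity measures are meant to fill.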
Abstract: Every project - whatever the application field - should be managed taking into account at least four dimensions: Time, Cost, Quality and Risk. To manage these dimensions, a key tool for a Project Manager is to increase project visibility, defined as the amount of information about the project associated with its probability of occurrence. This paper uses the 'iceberg' metaphor to introduce the ICEBERG (Improvement after Control and Evaluation-BasEd Rules and Guidelines) approach, which can help Project Managers through the use of standard (de jure and de facto) ICT methods and techniques. This approach focuses not only on the management, and measurement, of resources, process and product, but also of the project and the organization itself. A list of candidate measures related to these 5 entities is suggested for a comprehensive software measurement plan in order to reduce project risk.
Abstract: Assessing software product quality has become more and more relevant and important to managers, even though it is still challenging to define and measure the detailed quality criteria and to integrate them into quality models. Software engineering standards can help establish a common language for these detailed criteria and, in parallel, implement a model of quality from its high-level concepts down to its lowest level of measurable details; in particular, the revised ISO/IEC 9126 suite of standards represents a useful taxonomy and framework for specifying software product quality. Several frameworks and techniques are being built on the basis of these standards. In particular, the GDQA (Graphical Dynamic Quality Assessment) framework and the QF2D (Quality Factor through QFD) technique have been proposed to tackle software product quality analysis and measurement. This paper examines the structure of both and integrates them into an Integrated Graphical Assessment of Quality (IGQ) technique supporting quality assessments and related improvements through the full software lifecycle.
Notes: Refereed Conference Proceedings Papers, 20021028, Research Notes: 445
Abstract: This paper describes and explains a few significant changes which have been made to the COSMIC FFP method of functional sizing of software, targeted to be published in the Measurement Manual Version 2.2 (October 2002) and in the draft ISO/IEC 19761 standard version of the method. These changes have been made to help understanding and consistent use of the method. None of the changes alter the principles of the method. These have never needed to be changed since the method was first defined, and have been confirmed by successful practical use in many organisations on different types of software. All the changes have arisen because it was found either that certain terms and definitions could be misunderstood and needed clarification, or because of the need to ensure consistency with existing ISO/IEC standard terminology and definitions (a design goal of the COSMIC FFP method).
Abstract: Articulating a body of knowledge is an essential step toward developing a profession because it represents a broad consensus regarding the contents of the discipline. The IEEE Computer Society, with the support of a consortium of industrial sponsors, has recently published the Guide to the Software Engineering Body of Knowledge (SWEBOK). Throughout this Guide, measurement is pervasive as a fundamental engineering tool. In addition, ISO is at present in the process of adopting this Guide as an ISO Technical Report. This presentation will provide overviews of the development process that was followed and of the current version of this Guide. In addition, the topic of measurement will be highlighted, both in terms of its presence throughout the ten SWEBOK knowledge areas and of its depth of treatment.
Abstract: Articulating a body of knowledge is an essential step toward developing a profession because it represents a broad consensus regarding the contents of the discipline. The IEEE Computer Society, with the support of a consortium of industrial sponsors, has recently published the Guide to the Software Engineering Body of Knowledge (SWEBOK) and, throughout this Guide, the engineering of quality into software is pervasive. In addition, ISO is currently in the process of adopting this Guide as an ISO Technical Report. This presentation will provide overviews of the development process that was followed, of the current version of this Guide and of its usage throughout the world.
Abstract: The use of neural networks to estimate software development effort has been viewed with skepticism by the majority of the cost estimation community. Although neural networks have shown their strengths in solving complex problems, their shortcoming of being 'black boxes' has prevented them from being accepted as a common practice for cost estimation. In this paper, we study the interpretation of cost estimation models based on a back-propagation three-layer Multilayer Perceptron network. Our idea consists in using a method that maps this neural network to a fuzzy rule-based system. Consequently, if the obtained fuzzy rules are easily interpreted, the neural network will also be easy to interpret. Our experiment is made using the COCOMO'81 dataset.
Abstract: Many standards mandating verification of requirements correctness do not comprehensively state what information should be captured and used for verification and quality assurance activities. Therefore, a wide range of methods, from simplistic checklists to comprehensive formal methods, is used to verify correctness of system and software requirements. In this paper, a semi-formal method to verify functional requirements using a graphical logic-based structured architecture, referred to as Graphical Requirement Analysis, is proposed and illustrated with a case study. Its architecture makes it possible to trace functional system requirements and to show correctness (non-ambiguity, consistency, completeness) of specifications. The support of graphical system engineering descriptions greatly facilitates the simulation of requirement specifications and designs. Such a capability is believed by many to be an essential aspect of developing and assuring the quality of highly complex systems requiring high integrity.
Abstract: Both ISO and industry-led forums, such as QuEST, have tackled the measurement of software product quality and proposed corresponding quality views. This paper presents how both quality views make distinct contributions to software product quality and how they can be implemented jointly to verify that the quality requirements have indeed been built in and the quality targets achieved in the product's use. More specifically, it presents how the QuEST TL9000 Handbook and ISO/IEC 9126 can be jointly used for defining, measuring, evaluating and finally achieving appropriate quality of a user-centered software product.
Abstract: 'Software metrics' are most often proposed as the measurement tools of choice in empirical studies in software engineering, and the field of 'software metrics' is most often discussed from the perspective referred to as 'measurement theory'. However, in other disciplines, it is the domain of knowledge referred to as 'metrology' that is the foundation for the development and use of measurement instruments and measurement processes. In this paper, our initial modeling of the sets of measurement concepts documented in the ISO International Vocabulary of Basic and General Terms in Metrology is used to investigate and position the measurement concepts referred to in the Guide to the Software Engineering Body of Knowledge. This structured analysis reveals that much work remains to be done to introduce the full set of measurement and metrology concepts as fundamental tools for empirical studies in software engineering.
Abstract: The editorial team of the SWEBOK Guide received feedback about its use at the National Technical University (NTU) confirming the usefulness of the Guide, with the exception of chapter four, Software Construction, which did not map easily to industry practices or to actual academic curricula. After analysis of this specific SWEBOK chapter, some issues were identified, such as inconsistencies between the textual descriptions and the visual representation. Furthermore, the analysis of this chapter using the Vincenti classification of engineering knowledge types made it possible to identify some further weaknesses and provided some guidance on how the structure of this chapter could be improved. This paper proposes a revised breakdown of topics that is more aligned with an engineering perspective.
Abstract: This document presents the high-level design of a knowledge-based system to assist measurers in applying a functional measurement method consistently and systematically to often quite complex software applications which, moreover, may be from various application domains. The knowledge model underlying the proposed system is built on the key concepts of the software development process itself, as well as on the key concepts of a specific measurement method. The concepts describing the development process originate from the ontology of the SWEBOK [5] project and those related to the functional measurement method from the ontology of the COSMIC-FFP [6] method. The task originates from our modeling of the types of knowledge embedded in the measurement process.
Abstract: This document presents the design of a diagnostic tool to assist measurers in applying a functional measurement method consistently and systematically. The design of the diagnostic tool is based on the UML (Unified Modeling Language) method [7] and a specific application of van Heijst's knowledge modeling method [3]. The result is a hybrid diagnostic tool using CBR and rule-based techniques.
Abstract: The field of software metrics is usually discussed from the perspective referred to as 'measurement theory'. However, in other disciplines, the domain of knowledge referred to as 'metrology' is the foundation for the development and use of measurement instruments and measurement processes. This paper presents an initial modelling of the sets of measurement concepts documented in the ISO International Vocabulary of Basic and General Terms in Metrology. In particular, this modelling illustrates the various levels of abstraction of the concepts as well as the relationships across related concepts and sub-concepts. We refer to this representation type as the topology of the concepts within the ISO Vocabulary. These models will provide the basis for analysing the current status of the field of 'software metrics' and to suggest improvements along the classical path of the field of metrology.
Abstract: The editorial team of the SWEBOK Guide received feedback about its use at the National Technological University (NTU), confirming the usefulness of the Guide with the exception of chapter four, Software Construction, which did not map easily either to industry practices or to current academic curricula. An initial analysis of this specific SWEBOK chapter enabled us to propose an initial revision of the structure of topics in this knowledge area. In addition, we conducted a review, presented here, of the chapter to identify the level of experimental support for each topic mentioned in it. In order to classify the level of support, the classification into twelve experimental methods for validating technology by Zelkowitz and Wallace is used. It permits the identification of some of the chapter's weaknesses and provides further guidance on content improvements.
Abstract: Estimation models in software engineering are used to predict some important attributes of future entities such as software development effort, software reliability and programmers' productivity. Among these models, those estimating software effort have motivated considerable research in recent years. Estimation by analogy is one of the most attractive techniques in the software effort estimation field. However, the procedure used in estimation by analogy is not yet able to correctly handle categorical data such as 'very low', 'complex' and 'average'. In this paper, we propose a new approach based on reasoning by analogy, fuzzy logic and linguistic quantifiers to estimate effort when the software project is described by either categorical or numerical data.
Abstract: The software project similarity attribute has not yet been the subject of in-depth study, even though it is often used when estimating software development effort by analogy. Among the inadequacies identified (Shepperd et al.) in most of the proposed measures for the software project similarity attribute, the most critical is that they are used only when the software projects are described by numerical variables (interval, ratio or absolute scale). However, in practice, many factors which describe software projects, such as the experience of programmers and the complexity of modules, are measured in terms of an ordinal scale composed of qualifications such as 'very low', 'low' and 'high'. To overcome this limitation, we propose a set of new measures for similarity when the software projects are described by categorical data. These measures are based on fuzzy logic: the categorical data are represented by fuzzy sets and the process of computing the various measures uses fuzzy reasoning. In this work, the proposed measures are validated by means of an axiomatic validation approach, using a set of axioms representing our intuition about the similarity attribute and verifying whether or not each measure contradicts any of the axioms. We also present in this paper the results of an empirical validation of our similarity measures, based on the COCOMO'81 database.
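One common fuzzy-logic device for comparing two linguistic values, in the spirit of the measures the abstract above describes, is to represent each value as a fuzzy set and score the height of their intersection. The triangular membership shapes below are assumptions for illustration, not the paper's actual definitions:

```python
def tri(a, b, c):
    """Triangular membership function peaking at b, zero outside (a, c)."""
    def mu(x):
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)
    return mu

# Assumed fuzzy sets for two linguistic values of an ordinal cost driver.
LOW = tri(0.0, 0.25, 0.5)
HIGH = tri(0.5, 0.75, 1.0)

def similarity(mu_a, mu_b, grid=1000):
    """Height of the intersection of two fuzzy sets: max over the domain
    of min(mu_a, mu_b). Identical sets score 1.0; disjoint sets score 0.0."""
    return max(min(mu_a(i / grid), mu_b(i / grid)) for i in range(grid + 1))
```

With these shapes, similarity(LOW, LOW) is 1.0 and similarity(LOW, HIGH) is 0.0; overlapping sets such as 'low' and 'nominal' would fall in between, which is what makes the measure usable for ordinal cost drivers.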
Abstract: The specific analysis of FPA, from a complexity viewpoint, leads us to propose an initial model of functional complexity in which software complexity is a function of component complexity and system complexity. In this paper, we will use the next generation of functional size methods proposed by the COSMIC team [1][24], and we will look at it from the complexity perspective to identify some factors that affect complexity. Based on the analysis of such factors, we will propose a model for measuring a specific perspective of software complexity, which we will refer to as functional complexity. This model of functional complexity has two parts: component complexity, that is, the complexity of a functional process (COSMIC terminology) that comes from both the data movements and the data manipulation; and system complexity, that is, the complexity coming from relationships between the functional processes, such as communication, concurrency and multiple instances. Measuring these factors independently gives us a set of indicators or baselines for assessing software complexity from a functional perspective. Such a measure of functional complexity will then be used in future empirical studies to investigate its contribution to the improvement of estimation models, which sometimes fare poorly when based only on functional size.
Abstract: A diversity of software measurement and evaluation approaches has been proposed to the software engineering community to improve the quality of software products, development processes, and resources, such as CMM level assessment, ISO 9000 certification, SPICE-based evaluation, and GQM-based and PSP-based improvement. These approaches are characterised by questions or criteria based on the implicit structure of the relationships between the IT personnel, process methodologies, product documents, platforms, software components, developer artefacts, quality strategies, etc. Researchers have initiated investigations to identify and to define some of the explicit structures of the software measurement domain, including, for example, measurement maturity [4], measurement procedure [2] and measurement classification [15]. Our new approach makes it possible to explicitly model the software measurement and evaluation process ([14], [1]). The generality of our approach provides the flexibility to choose and clarify the measured aspects under consideration. In addition, four steps were defined for implementing software metrics programs in the IT area [5], including the selection of the area to be measured, the analysis of the chosen metrics, the implementation of a measurement process and the identification of tool-based support to facilitate its implementation. This four-step approach includes empirical criteria to evaluate the explicit level of software metrics applications in a selected IT area. The determination of such empirical criteria is a key issue for the definition and the use of software metrics, and it is particularly challenging with new applications or paradigms emerging in the research and practitioner communities, such as multi-agent systems or e-commerce adaptations. Based on our experiments in these new areas, this paper proposes an improved method to derive such empirical criteria considering existing values from other components.
The explicit model of our approach is a combination of both object diagrams and semantic networks. Initial empirical evaluations of the efficiency of CMM, PSP and ISO 9126 applications will also be presented and discussed.
Abstract: The management of software cost, development effort and project planning are key aspects of software development. Functional size measurement (FSM) has been proposed as a tool for these management requirements. Function Point Analysis (FPA) can be considered the first published FSM method. Based on FPA, other methods have been proposed as improvements and alternatives that differ in their respective views on functional size. FPA is an intuitive approach without a theoretical foundation and without a measurement model. It is therefore unclear what FPA actually measures and what the differences between the FSM methods are. We use an axiomatic approach based on measurement theory to develop a model for existing FSM methods. In this paper, we propose a model as a generalized representation for a set of methods: IFPUG FPA, Mark II FPA, and FFP. This view can be used as a basis for the analysis of FSM methods and for a discussion of their differences.
Abstract: In recent years, some software organizations have been successful at improving their maturity level, thanks to the application of methods and techniques which help them to achieve better performance and more consistent production processes. Models such as the Sw-CMM (and its evolutions and derived models) have provided roadmaps to process improvements. Creativity and innovation have been placed at Level 5 of the CMMI and the P-CMM, respectively. A suggestion is made in this paper to consider creativity and innovation management earlier on in such SPI models. Also in this paper, we propose, in an exploratory way, a method for mapping, tracing and measuring creativity, based on two entities: the CA matrix and the Creativity Indices.
Abstract: Managers of software development initiatives must routinely make crucial decisions in contexts where there are many unknowns regarding the expected outcomes, for example when making project estimations for both the effort and duration of software development. It often happens that software projects are more expensive than estimated and are completed late. Many of these serious consequences are the outcome of badly informed decisions earlier on in the development process. It is therefore important to obtain good estimates early in a software project's life, and to understand the potential range of variation of such estimates. To support managers in estimation, many parametric estimation models have been proposed in the literature and estimation software tools are available in the marketplace; however, very little is known about the quality of the estimates of such models, including the SLIM model (Software Life-Cycle Model), one such estimation tool available in the marketplace. This paper presents exploratory research whose main purpose is to evaluate the quality of the estimates produced by the SLIM software estimation tool (Putnam's model, 1978). The results of this research will be useful to any manager in computer science and to any practitioner who has to estimate software development costs. This study was developed in three phases. In the first, the projects in the repository of the International Software Benchmarking Standards Group were studied and data samples were created based on the criterion of the programming language types existing in the ISBSG database. In the second phase, the size and duration of each project were used as entry parameters in the SLIM tool to estimate each project's effort. Finally, in the last part of this study, this SLIM estimated effort was compared to the one already estimated by the automated environment of the ISBSG database, which had been developed by Stroian in 1999 at the Software Engineering Management Research Laboratory. The results of this comparison show the differences between SLIM's estimates and the real effort of development. To verify these results, estimated effort and real effort were correlated. In summary, the results indicate that SLIM does not meet the criteria of 'good models' in software engineering; that is, a productivity model is considered 'good' if it is able to meet the criterion of a mean relative error of ±25% for 75% of cases (Conte, 1986; Verner, 1992; Abran and Robillard, 1993).
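The 'good model' criterion cited at the end of the abstract above (relative error within ±25% for at least 75% of cases, often written PRED(0.25) >= 0.75) is straightforward to compute; a minimal sketch:

```python
def mre(actual, estimated):
    """Magnitude of relative error for one project."""
    return abs(actual - estimated) / actual

def is_good_model(actuals, estimates, threshold=0.25, coverage=0.75):
    """Conte-style criterion: the model is 'good' if MRE <= threshold
    for at least the given fraction of projects (PRED(threshold))."""
    within = sum(1 for a, e in zip(actuals, estimates)
                 if mre(a, e) <= threshold)
    return within / len(actuals) >= coverage

# Example: errors of 0%, 10%, 20%, 60% -> 3 of 4 projects within 25%,
# so PRED(0.25) = 0.75 and the criterion is just met.
```

Applied to the study's data, this is the test that SLIM's estimates failed against the real ISBSG efforts.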
Abstract: The Balanced Scorecard (BSC) represents one of the performance management frameworks adopted with great success in business circles in recent years. One of its most valuable strengths is its linkage of the strategic and operational levels, through quantitative and qualitative management using a series of indicators from four different perspectives: Financial, Customer, Internal Process, and Learning & Growth. The success of this framework in the business world has led to some tailored extensions in the ICT world, with a few examples developed in the second half of the '90s. A key issue that needs to be addressed in the design and implementation of a BSC for ICT companies is measurement of the software itself. To build a BSC, once the overall strategic direction has been identified, Goals, Drivers and Indicators (GDI elements) must be selected for each perspective. Even though significant attention has already been paid to the first two elements (Goals and Drivers), the last (Indicators) has been largely neglected. To address this measurement issue in the ICT field, we propose that Functional Size Measurement (FSM) be used as a key measure to normalise other measurement results across reference values. In summary, this paper illustrates how the use of Functional Size Measurement can strengthen an ICT BSC, from the operational point of view of measurement.
Abstract: Although software measurement is a key factor in managing, controlling and improving the software development process, software quality criteria are neither well defined nor easily measurable. This paper proposes a new logic-based graphical technique for modeling the dynamic interactions of the variables that affect software quality within a whole system production process. The framework presented here describes the properties of a complex quality assessment system composed of human-software-hardware interactions in terms of their quality requirements, and is designed to address the following issues: (1) What are the relationships between software and system measurable characteristics in terms of their contribution to whole-system quality? (2) What are the relationships between quality requirements and their measurable characteristics? (3) What are the common measures used to compute more than one quality attribute? (4) How can software-quality-related measures be combined to produce an overall assessment of quality?
Abstract: Software projects are often described by linguistic variables such as the experience of programmers and the complexity of modules. Because the existing software project similarity measures take into account only numerical data, we have proposed a set of measures based on fuzzy logic to evaluate the similarity between two software projects when they are described by linguistic values. In this work, we improve the proposed measures by using linguistic quantifiers such as 'most', 'many' and 'few' in the computing process for the various measures.
Abstract: The Common Software Measurement International Consortium (COSMIC) was formed in 1998 to design and bring to market a new generation of software measurement methods. The COSMIC group reviewed existing functional size measurement methods, studied their commonalities, and proposed the basic principles on which a new generation of software functional size measurement methods could be based. In November 1999, the group published version 2.0 of COSMIC-FFP, a measurement method implementing these principles, and put its measurement manual on the Web for public access. Over the past year, industrial organizations have contributed data in the context of COSMIC field trials. This report presents an overview of the field trial results, including an analysis of the relationship of effort with respect to the software functional size, measured in COSMIC-FFP size units. The data set is described, as well as the constraints for the interpretation of the results.
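At its core, the COSMIC-FFP size referred to above is a count of identified data movements of four types (Entry, Exit, Read, Write), each contributing one size unit; the intellectually hard part, identifying the movements from the functional user requirements, is the measurer's task. A minimal sketch of the aggregation step only:

```python
# The four COSMIC-FFP data movement types.
MOVEMENT_TYPES = {"Entry", "Exit", "Read", "Write"}

def cosmic_size(data_movements):
    """Functional size = one COSMIC size unit per identified data movement."""
    assert all(m in MOVEMENT_TYPES for m in data_movements)
    return len(data_movements)

# A functional process with one Entry, two Reads and one Exit sizes at 4 units.
```

It is precisely this size, summed over the measured software, that the field-trial analysis relates to development effort.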
Abstract: Applying a software functional measurement method [2] is an intellectual process realized by a human expert on a complex intellectual object. Two types of difficulties occur: (1) the software models are not always available in industrial contexts, nor complete, and, when available, most often do not use the same formalism; (2) applying the specific formalisms of a measurement method to a wide variety of intellectual products, most often not fully documented, can lead to difficulties in the homogeneity (coherence) of interpretation, even among experts [13]. The activity of mapping the functional measurement method to any type of software model, and then instantiating the specific measurement rules in a specific context, represents the characteristics of an 'expert task' that can in turn be modeled itself within a knowledge system. The functional size measurement method used to investigate the feasibility of the approach is COSMIC-FFP [2], and the knowledge system used is Help CPR [5]. This paper presents: (1) the different phases of the measurement process, and more specifically the step that applies the measurement method; (2) the problem of applying the COSMIC-FFP functional measure in the context of an organization, with two examples of problems; (3) the experimental procedure proposed to resolve the problems; and (4) some examples of problem resolution using a CBR (Case-Based Reasoning) tool, Help CPR, from Haley Enterprise.
Abstract: Software maintenance constitutes an important part of the total cost of the lifecycle of a software application. Some even argue this might be the most important fraction of the cost (50-80% according to Tony Scott, 75% according to Rand P. Hall, 60% according to Freedman). The added value of software maintenance is often not perceived by the customers. While the introduction of a new software application clearly shows new benefits, the work being done to maintain an existing application is usually only apparent when the application breaks down or small changes are being implemented (which sometimes also causes some downtime). This results in a negative perception of the software maintenance section. A proposed approach to turn this around is to provide the customer with insights into the activities performed by the maintenance section and to come to a clear agreement on the results and expectations of these activities. The Service Level Agreement (SLA) originates from the practice of specifying results found in the contractual agreements of the large computing centres of the 1950s (McBride 1990). Service Level Agreements could be used by software maintenance for better managing customers' expectations by specifying with the customer what the service results will be. Until a few years ago, this management practice had been limited to operations and support services: the literature search about agreements on software maintenance turned up some references to Software Maintenance Agreements (for instance Mueller 1994), but most of the agreements reported were limited to helpdesk support, bug fixes and the distribution of new releases. No detailed agreements were reported to include the full spectrum of maintenance services, including the management of the quality of the service. In this paper, the application of Service Level Agreements to the field of software maintenance is described, based on the experiences at Batelco. First, key differences between software maintenance and IT development are described, together with the difficulties of viewing software maintenance as an IT service, and the related challenges of tackling them in the design of an SLA. The context at Batelco is presented next, together with a description of the various aspects of the SLA implemented. Lessons learned on the application of SLAs to software maintenance are presented, as well as recommendations for future improvements.
Abstract: This paper describes and illustrates a methodology for identifying the correctness of software functional requirements on the basis of a logic-based dynamic framework. It focuses on the issues related to user and/or system functional requirements; quality attributes, measures and analysis methods; and integrates the core concepts of the Graphical Requirement Analysis (GRA) and COSMIC-FFP techniques. The proposed approach provides a structured procedure for arranging functional software requirements into a graphical framework, thereby providing a means for evaluating their clarity and their presence/absence. Moreover, the architecture of this approach makes it possible to trace specific entities forwards, from system/user requirements to design, and backwards. The way in which the proposed Integrated Measure for Functional Requirements (IMFR) captures critical aspects of functional requirements, such as ambiguous or incomplete requirements and incomplete linkages from software requirements to system requirements and to design and/or to test cases, is illustrated. Using a sub-system of the Generic Westinghouse Reactor Protection (GWRP) control system case study as an example, we identify and demonstrate various ambiguities of textual software requirements.
Abstract: The IEEE Computer Society and the Association for Computing Machinery are working on a joint project to develop a guide to the Software Engineering Body of Knowledge (SWEBOK). Articulating a body of knowledge is an essential step toward developing a profession because it represents a broad consensus regarding the contents of the discipline. Without such a consensus, there is no way to validate a licensing examination, set a curriculum to prepare individuals for the examination, or formulate criteria for accrediting the curriculum. At the time of writing this paper in September 2000, the SWEBOK project (http://www.swebok.org) is nearing the end of the second of its three phases. Here we summarize the results to date and provide an overview of the project.
Abstract: One of the means organisations use to adequately measure the performance of their software engineering process is to try to identify how much reuse has actually occurred. In this paper, the COSMIC-FFP (COSMIC-Full Function Points) measurement method is proposed as a method for quantifying reuse from a functional perspective rather than from a technical perspective. The COSMIC-FFP method has been developed to improve the measurement of the functional size of various software types: real-time, technical, system and MIS software. By using functional user requirements as input, the method makes it possible to measure the size of software from the user's viewpoint. When other functional perspectives are taken into account in the measurement process, the results may be used as complementary information related to the measured software. The value of this new information includes the ability to quantify reuse from a functional perspective, and as such it would be worth taking it into account in the software productivity model. Some practical results on industrial software are presented, along with the concepts involved.
Abstract: Measuring the functional size of software was proposed by Albrecht, more than 20 years ago, as a solution to the limitations of source lines of code (SLOC) when quantifying the output of the software engineering process. While this approach has been, and still is, successful when applied to the MIS type of software, it has not enjoyed the same success for measuring the size of non-MIS software, as demonstrated in publications from a number of authors in the past 15 years. Three types of approach have been proposed in the literature to apply Albrecht's concepts to non-MIS types of software. Although some of these approaches offered interesting insights, none has gained sufficiently wide usage to be recognized as a de facto standard. Building on the strengths of previous work in this field, the Common Software Measurement International Consortium (COSMIC) proposed a set of principles in 1998 on which a new generation of functional size measurement methods could be built. The COSMIC group then published, in 1999, version 2.0 of COSMIC-FFP, an example of a functional size measurement method built on those principles. Key concepts of its design and of the structure of its measurement process are presented.
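The core counting idea behind COSMIC-FFP can be sketched in a few lines: the size of a functional process is the count of its data movements (Entries, Exits, Reads, Writes), each contributing one unit. The two example processes below are invented for illustration; they are not taken from the papers above.

```python
# Hedged sketch of COSMIC-FFP counting: one size unit per data movement.
# The functional processes and their data movements are hypothetical examples.

# Data movements identified for two hypothetical functional processes.
processes = {
    "record temperature": ["Entry", "Write", "Exit"],
    "raise alarm":        ["Entry", "Read", "Exit", "Exit"],
}

def functional_size(procs):
    """Total size: one unit per data movement, summed over all processes."""
    return sum(len(movements) for movements in procs.values())

sizes = {name: len(m) for name, m in processes.items()}
print(sizes)                       # {'record temperature': 3, 'raise alarm': 4}
print(functional_size(processes))  # 7
```

Because each data movement counts equally, sizes of individual processes can be summed directly, which is what makes the aggregation function of the method scalable across layers and processes.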
Notes: 20001023, Notes: Keynotes, Research Notes: 373
Abstract: An open model called QEST (Quality factor + Economic, Social & Technical dimensions) has been developed to handle, simultaneously and concurrently, three-dimensional viewpoints of performance. This model was developed initially to represent multiple views of the performance of completed projects, and thus originally represented a static view of projects. This paper presents an extension to the QEST model which allows it to be used dynamically throughout a project's life, with the flexibility to represent, for example, distinct views of quality depending on the phase of the lifecycle considered. This model is referred to as the LIME (LIfecycle MEasurement) model and can accommodate a lifecycle model where each phase can have distinct relative distributions across the three viewpoints.
Abstract: The IEEE Computer Society and the Association for Computing Machinery are working on a joint project to develop a guide to the Software Engineering Body of Knowledge (SWEBOK). Articulating a body of knowledge is an essential step toward developing a profession because it represents a broad consensus regarding the contents of the discipline. Without such a consensus, there is no way to validate a licensing examination, set a curriculum to prepare individuals for the examination, or formulate criteria for accrediting the curriculum. At the time of writing this paper in February 2000, the SWEBOK project (http://www.swebok.org) is nearing the end of the second of its three phases. Here we summarize the results to date and provide an overview of the project.
Abstract: The Quality Function Deployment (QFD) technique was developed in the context of Total Quality Management, and it has been experimented with in the software engineering domain. This paper illustrates how key constructs from QFD contributed to the development of a second version of a Quality Factor (QF) for qualitative software evaluation, considering three distinctive but connected areas of interest, each of them representing a dimension of performance: the economic dimension, from the managers' viewpoint; the social dimension, from the users' viewpoint; and the technical dimension, from the developers' viewpoint. This new version of the original QF technique, referred to as QF2D (Quality Factor through QFD), has the following features: it can be used for both a priori and a posteriori evaluations of the software product; it makes use of the set of quality sub-characteristics proposed in the upcoming ISO/IEC 9126:2000 standard; it has a variable number of elements taking into account the three viewpoints for the evaluation; and it offers the visual clarity of QFD for external and internal benchmarking. An implementation of this new version of the technique in quality models is also discussed.
Abstract: Software measurement plays a key role in software engineering and, to improve its performance, an organisation needs to measure software at each stage of the development life cycle. Recently, the COSMIC-FFP measurement method has been developed to improve the measurement of the functional size of a large array of software types. By quantifying software's functional user requirements, the method makes it possible to measure software from the user's viewpoint. The COSMIC-FFP measurement method has been designed based on a software functional model that can represent the functional user requirements at many levels of functional abstraction, such as software layers, functional processes and data movement sub-processes. Developers in general, however, need to know the size of the software early in the development process to support the estimation and project planning process. While the measurement rules of the COSMIC-FFP method have been designed to be applied when the details of the software functions are known, the method has the required flexibility to capture an estimate of the functional size of software early in the life cycle and to offer added value to the software engineers preparing the development plans. This paper investigates the applicability of COSMIC-FFP for measuring the size of software at early stages of the development life cycle.
Abstract: The implementation of a measurement program in a large software development organization requires significant teamwork. However, such implementations are a challenging task; indeed, it has been reported that 80% of measurement programs implemented in the USA have a life expectancy of less than two years. The complementary capabilities of team members are often discussed in the various approaches for implementing a measurement program, but these approaches rarely take into account the distinct personalities of individuals within a team or an organization. This paper presents an overview of Ned Herrmann's cognitive approach and discusses how it can contribute to facilitating the implementation of measurement programs in a software organization. Its use is illustrated in the Desharnais-Abran approach to measurement program implementation. Taking this cognitive approach into consideration in the implementation of software measurement programs could contribute to an increase in their success rate.
Abstract: When COCOMO (the Constructive Cost Model) was published at the beginning of the eighties, fuzzy logic was not grounded on solid theoretical foundations; this was not achieved until Zadeh and others did so in the nineties. Thus, it is not surprising that some of the concepts defined or used in COCOMO are somewhat incompatible with fuzzy logic. In our work, we investigate the issue of the compatibility of COCOMO with fuzzy logic. In software metrics, specifically in software cost estimation, many factors (linguistic variables in fuzzy logic), such as the experience of programmers and the complexity of modules, are measured on an ordinal scale composed of qualifications such as 'very low' and 'low' (linguistic values in fuzzy logic). In our work, we study the COCOMO'81 model, specifically its intermediate version. Our work is still applicable to COCOMO II.
Abstract: The attribute of similarity of software projects has not been the subject of in-depth studies, even though it is often used when estimating software development effort by analogy. Most of the proposed measures of project attributes are described by numerical variables (interval, ratio or absolute scale). However, in practice many factors which describe software projects, such as the experience of programmers and the complexity of modules, are measured on the basis of an ordinal scale composed of qualifications such as 'very low' and 'low'. Many of these qualifications (linguistic values in fuzzy logic) of these attributes can also be represented by fuzzy sets. This enables us to measure the similarity between software projects which are described by linguistic values (ordinal scale). Furthermore, the proposed measures can, of course, also be used when projects are described by numerical values.
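The idea of representing ordinal linguistic values as fuzzy sets and comparing them can be sketched as follows. The triangular membership functions and their parameters below are illustrative assumptions, not the calibrated values of any published model; the similarity is computed as the height of the intersection of the two fuzzy sets (a common max-min measure).

```python
# Hedged sketch: linguistic qualifications ('very low' .. 'very high') as
# triangular fuzzy sets on a normalized 0..1 axis, with a max-min similarity.
# Membership parameters are invented for illustration.

def triangular(a, b, c):
    """Return a triangular membership function with support (a, c), peak at b."""
    def mu(x):
        if x <= a or x >= c:
            return 0.0
        if x <= b:
            return (x - a) / (b - a)
        return (c - x) / (c - b)
    return mu

# Illustrative fuzzy sets for five linguistic qualifications.
LEVELS = {
    "very low":  triangular(-0.25, 0.0, 0.25),
    "low":       triangular(0.0, 0.25, 0.5),
    "nominal":   triangular(0.25, 0.5, 0.75),
    "high":      triangular(0.5, 0.75, 1.0),
    "very high": triangular(0.75, 1.0, 1.25),
}

def similarity(label_a, label_b, steps=101):
    """Height of the intersection of the two fuzzy sets, on a discrete grid."""
    mu_a, mu_b = LEVELS[label_a], LEVELS[label_b]
    return max(min(mu_a(i / (steps - 1)), mu_b(i / (steps - 1)))
               for i in range(steps))

print(similarity("low", "low"))        # identical labels -> 1.0
print(similarity("low", "nominal"))    # adjacent labels overlap partially
print(similarity("low", "very high"))  # disjoint labels -> 0.0
```

Because the same function also accepts numerical values once they are fuzzified onto the same axis, this style of measure covers both the linguistic and numerical project descriptions mentioned in the abstract.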
Abstract: This paper compares two quantitative approaches recommended for developing and supporting software process improvements, that is, the Goal-Question-Metric (GQM) technique and the Balanced Scorecard (BSc) framework. While both offer the opportunity to implement a quantitative analysis of software projects, they are often misinterpreted as either interchangeable or, on the contrary, mutually exclusive. After summarising the key aspects of the two approaches, three main characteristics are proposed as a basis of comparison: measurement object, nature of the approach and strategy. These make it possible to identify similarities as well as key differences. In particular, it will be illustrated that strategy is the key point of differentiation between the two. More specifically, the added value in the BSc approach resides in its structuring of a causal relationship chain among the business goals of the various perspectives, which allows for a proper alignment of business and operative goals for achieving success. Examples of the research effort on the joint use of GQM and BSc are presented, as well as the way in which they can contribute to improving the extensions of BSc to the IT field, such as improvements to the ESI-Balanced IT Scorecard (BITS).
Abstract: Field trials of the COSMIC-FFP functional size measurement method were initiated at the end of 1999 with the aim of advancing the method from a 'proposal' status to a 'proven' status by demonstrations and tests with real data on development projects for software from a variety of functional domains in a variety of organizations. Data has been collected in a number of organizations since then and the analysis of the first results started in July 2000. This paper summarizes the context of the COSMIC-FFP field trials and presents some of the key observations obtained to date. Parts of the analysis focused on the relationship between software size and project variables like effort and schedule, while other parts of the analysis focused on the relationship between the components contributing to the functional size of the software. Notably, the relevance of considering the count of data attributes as a contributor to functional size, and the distribution and variation of the size displayed by the functional processes of real-time software, were investigated. The paper concludes with the status of the COSMIC-FFP measurement method, outlining the key events and further results to be expected by early 2001.
Abstract: This paper presents a confirmatory analysis of empirical models that predict software engineering project duration from project effort. The results are based on a more recent and much larger sample than those of previous studies. The models are based on the analysis of project data provided by release 4 of the International Software Benchmarking Standards Group (ISBSG) repository. Duration models are built for subsets of projects using personal computer, mid-range and mainframe development platforms.
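Duration-from-effort models of the kind described above are commonly fitted as a power law, duration = a * effort^b, estimated by ordinary least squares in log-log space. The sketch below illustrates that fitting procedure; the data points are invented for illustration and are not ISBSG data, and the power-law form is an assumption about the models' shape.

```python
# Hedged sketch: fitting duration = a * effort^b by OLS in log-log space.
# The (effort, duration) pairs below are invented illustrative values.
import math

# (effort in person-hours, duration in months) -- illustrative values only.
projects = [(500, 4.0), (1200, 6.5), (3000, 9.0), (8000, 14.0), (20000, 22.0)]

xs = [math.log(e) for e, _ in projects]
ys = [math.log(d) for _, d in projects]
n = len(projects)
mean_x, mean_y = sum(xs) / n, sum(ys) / n

# Slope b and intercept log(a) of the least-squares line in log space.
b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)
a = math.exp(mean_y - b * mean_x)

def predicted_duration(effort):
    """Predicted duration (months) for a given effort (person-hours)."""
    return a * effort ** b

print(f"duration ~ {a:.2f} * effort^{b:.2f}")
```

An exponent b below 1, as typically reported, means duration grows more slowly than effort: doubling the effort does not double the elapsed time, reflecting the effect of larger teams on bigger projects.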
Abstract: The importance of software system representation through models and visual diagrams is increasing with the steady growth of systems complexity and criticality. Since no single representation is best suited to address all the documentation, communication and expression needs of a typical software development project, the issues related to conversion and coherence between different representations are having a significant impact on team productivity and on product as well as process quality. This paper explores the types of relationships that exist between representations and the impact they have on mapping, generation and synchronization processes. We propose a characterization of those relationships as being parallel, hierarchical or orthogonal. Examples and comments on mapping or transformation processes and automation prospects in the context of software size measurement are also provided.
Abstract: In the literature, the expression 'metrics validation' is used in many ways with different meanings. This paper analyzes and compares some of the validation approaches currently proposed. The basis for this analysis is a process model for software measurement methods which identifies the distinct steps involved, from the design of a measurement method to the exploitation of the measurement results. This process model for software measurement methods is used to position various authors' validation criteria according to the measurement process to which they apply. This positioning enables the establishment of relationships among the various validation approaches. It also makes it possible to show that, because none of the validation approaches proposed to date in the literature covers the full spectrum of the process of measurement methods, a complete and practical validation framework does not yet exist.
Abstract: This work starts from the analysis of the increasing importance for management of having available tools for the quality measurement of company resources, in particular of software. This work presents the concept of the design of a Quality Factor (QF) for qualitative software evaluation, considering three distinctive but connected areas of interest, each of them representing a dimension of performance: the economic dimension, from the managers' viewpoint; the social dimension, from the users' viewpoint; and the technical dimension, from the developers' viewpoint. An implementation of this QF, based on the ISO/IEC 9126 standard, in quality models is also discussed.
Abstract: This paper presents an analysis of contractual outsourcing agreements in the field of Information Technology based on the postulates of the Agency Theory. This analysis reveals that the design of many outsourcing agreements, referred to as procurement contracts, is incomplete from an economic perspective. It is postulated that this degree of contractual incompleteness is the result of a trade-off between the benefits of mitigating the ex-post opportunism of agents and the costs of additional resources expended in ex-ante design. The magnitude of these opposing forces can be predicted based on the characteristics of the suppliers and the software services. From this postulate, as well as from previous findings in the literature on manufacturing procurements, this paper suggests a model which links the degree of contractual completeness with some variables related to the potential opportunism of suppliers and the uncertainty surrounding software services. A subsequent research phase will test this model in software outsourcing environments.
Abstract: The Full Function Points functional size measurement method was first released in the fall of 1997. Since this initial presentation, significant improvements to the description of functional size measurement concepts have been achieved by the COSMIC group. In its second release, the Full Function Points method has been enhanced significantly in order to implement the findings of the COSMIC group. Highlights of the improvements will be presented, including the clarifications to the measurement process model as well as enhancements to the functional size model and to the measurement procedures.
Abstract: Although there is broad agreement on the sorts of things to take into account when measuring functional size, there is a variety of opinion about how to do it. This is partly because several different views of functionality are addressed in functional size measurement. Some are better understood than others. In particular, the "general systems characteristics" (GSCs) and the "value adjustment factor" (VAF) are poorly understood. Our aim is to provide a foundation for research that may improve this aspect of functional size measurement. A survey of the evolution and state of practice of the GSCs and VAF leads us to identify various aspects of software that are important in functional size measurement. We relate these aspects of software to different views of functionality. A spectrum of viewpoints is seen, with core functionality at one end, effort estimation at the other, and different user viewpoints in between. By noting how the GSCs and VAF contribute to these viewpoints, we see how value may be gained from them, and we identify directions for future research.
Abstract: Organizational performance models are usually based on accounting systems, and therefore take into account mostly the economic-financial viewpoint, or the tangible-asset part of it, using performance management terminology. In the IT field, the Earned Value model has been promoted to represent project performance during the project life cycle. However, these types of models oversimplify performance representation with a single performance index, while in reality multiple viewpoints must be managed simultaneously for proper performance management. This work shows how an open three-dimensional measurement model of software project performance functions. Called LIME (LIfecycle MEasurement), it extends the structure of a previous model to a dynamic context. It applies to software production during all SLC phases, which are classified following a generic six-step waterfall scheme. A quantitative and qualitative analysis of the project is carried out considering three distinctive but connected areas of interest, each of them representing a dimension of performance: the economic dimension, from the managers' viewpoint, with particular attention to cost and schedule drivers; the social dimension, from the users' viewpoint, with particular attention to the quality-in-use drivers; and the technical dimension, from the developers' viewpoint, with particular attention to technical quality, which has a different impact during each SLC phase.
Abstract: The paper proposes a general framework for building a model for automatic Function Point Analysis (FPA) from the source code of a COBOL system using program-slicing techniques. The COBOL system source code is scanned by the model to produce Function Point counts. The application's source files are used to define the application's boundary for the count. The model takes into account the structure of the COBOL language to identify physical files and transactions. Reserved words such as FD, file input/output statements (READ and WRITE) and user interface and data manipulation statements (ACCEPT, DISPLAY and MOVE) are used as basic information for the program-slicing technique to identify candidate physical files and transactions. Some heuristic rules are proposed in order to map candidate physical files and transactions into candidate logical files and transactions. These candidate files and transactions are then assessed with regard to the IFPUG identification rules in order to identify the data function types and transactional function types to be counted. The proposed framework helps to build models for automating Function Point Analysis from source code in compliance with the IFPUG Counting Practices Manual.
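The first scanning step described in this abstract can be sketched as a keyword pass over COBOL source: flagging FD entries and READ/WRITE statements as candidate physical files, and ACCEPT/DISPLAY/MOVE statements as candidate transaction material. A real FPA automation needs full program slicing and the heuristic mapping rules; the fragment and patterns below are only an illustration of the candidate-identification idea, on an invented COBOL snippet.

```python
# Hedged sketch of the candidate-identification pass: a keyword scan over
# COBOL source. The COBOL fragment is invented for illustration; real FPA
# automation requires program slicing and IFPUG mapping rules on top of this.
import re

COBOL_SOURCE = """\
       FD  CUSTOMER-FILE.
       01  CUSTOMER-RECORD PIC X(80).
       PROCEDURE DIVISION.
           READ CUSTOMER-FILE.
           MOVE CUSTOMER-RECORD TO WS-BUFFER.
           DISPLAY WS-BUFFER.
           WRITE REPORT-LINE.
"""

# FD entries and file I/O statements point at candidate physical files.
FILE_STMTS = re.compile(r"^\s*(FD|READ|WRITE)\b", re.MULTILINE)
# User interface and data manipulation statements point at candidate transactions.
UI_STMTS = re.compile(r"^\s*(ACCEPT|DISPLAY|MOVE)\b", re.MULTILINE)

candidate_files = FILE_STMTS.findall(COBOL_SOURCE)
candidate_transactions = UI_STMTS.findall(COBOL_SOURCE)

print(candidate_files)         # ['FD', 'READ', 'WRITE']
print(candidate_transactions)  # ['MOVE', 'DISPLAY']
```

The heuristic rules the paper proposes would then group these raw hits, e.g. merging the FD and its READ/WRITE statements into a single candidate logical file, before applying the IFPUG identification rules.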
Abstract: Release 1.0 of the Full Function Points measurement method was proposed in 1997 to measure the functional size of real-time or embedded software. Since then, field tests have shown the applicability and usefulness of this measurement method not only for real-time or embedded software, but also for other types of software like system software and MIS software. This paper investigates the issue of measurement compatibility between the designs of both the FFP and IFPUG measurement methods. Such compatibility is required to perform mathematical operations involving results from both methods and mixing them into a single functional size measure. The compatibility of both measurement objects and measurement processes is analyzed, and the accuracy of their aggregation function is identified as being dependent on the level of granularity at which the measurement functions are applied. Comparing the two approaches, we find that the precision of this aggregation function corresponds to the lowest common denominator of the two approaches.
Abstract: Decision-making is a difficult task per se. This inherent difficulty is exacerbated by the complexity and fast pace of the changes that characterize software engineering. Critical decisions impacting the success of a project or even an entire organization must be made quickly based on information that is either limited to the point of being insufficient or so abundant that it is virtually unmanageable. Either way, the information is more often than not of questionable quality. This paper proposes an evolutionary framework to support efficient and justifiable decision-making throughout the implementation phase. This approach covers the necessity to make decisions quickly without complete, reliable information, as well as to integrate new data as it becomes available.
Abstract: Software functional size measurement is regarded as a key aspect in the production, calibration and use of software engineering productivity models because of its independence of technologies and of implementation decisions. In 1997, Full Function Points (FFP) was proposed as a method for measuring the functional size of real-time and embedded software. Since its introduction, the FFP measurement method has been field-tested in many organizations which have provided feedback on ways to improve it. Based on this feedback and in association with the Common Software Measurement International Consortium (COSMIC), version 2.0 of the COSMIC-FFP measurement method will be released in October 1999 for field-testing. This paper describes the new features of COSMIC-FFP version 2.0, including: a generic software model adapted for the purpose of functional size measurement, a two-phase approach to functional size measurement (mapping and measurement), a simplified set of base functional components (BFC) and a scalable aggregation function. Through its generic software model of functional user requirements, version 2.0 of the COSMIC-FFP measurement method is applicable to a broad range of software, including embedded, MIS, middleware and system software.
Abstract: Software has become a key component of most automated process control devices. It offers a high degree of flexibility in adjusting the behavior of those devices. Proper management of the development and maintenance of process control software is therefore a key issue, be it for reasons related to internal organization performance or for benchmarking against the best in the industry. Measures are essential for quantitative management; they are needed to analyze both the quality and the productivity of the software processes. For instance, technical measures are useful to quantify the performance of a product's design through efficiency analysis. On the other hand, functional measures are needed for quantifying products from a user perspective and are well suited for productivity analysis. For instance, process control projects that exhibit cost or schedule difficulties originating from the work related to the software components could benefit from functional measures to alleviate such difficulties. Since functional measures are independent of technical and implementation decisions, they can be used to compare the productivity of different techniques and technologies. This paper presents a measure of a fundamental dimension of software: its size. Although software functional size measures are not new, the most popular one, called function points, has often been described as ill-suited for the quantification of real-time or embedded software for a number of reasons. The measure presented in this paper, called Full Function Points (FFP), has been specifically designed for real-time or embedded types of software. The paper explains the criteria for designing an adequate software functional size measure. The characteristics of FFP and the associated measurement method are then presented, along with references to relevant documentation.
Results are introduced which demonstrate that, from a practitioner's perspective, Full Function Points is a functional measure that adequately captures the perceived functional size of real-time or embedded software. The paper concludes with the evolution perspectives for this size measure.
Abstract: Systems rarely run alone. They are usually part of a complex system of software layers (e.g. database managers, network drivers, operating systems and device drivers). Software layers constitute a specific way of grouping functionalities on a level of abstraction. When measuring the functionality of a system, practitioners usually consider one type of layer: the user application, or the highest-level layer. They consider the other layers as technical. This approach might work with Management Information Systems, where there is often no business need to consider layers other than the highest-level one. This is because the other layers are usually already developed (e.g. Windows, UNIX, printer drivers). However, this is often not the case for real-time and embedded systems. Embedded system development projects involve developing or modifying operating systems, drivers and user applications as well. Not considering software layers can result in misleading measurements, as measuring only the highest-level layer may lead to misrepresentation of the size of a project or application. This paper covers the definition of software layers and how to identify them, and by extension the identification of peer systems: systems residing on the same layer.
Abstract: During 1997, a large Information System (IS) Division of a Canadian phone company implemented formal process assurance in its Quality Assurance group. This status report presents a new perspective on the measurement of process assurance and the lessons learned after one year of assessing the individual conformance [1] of software development projects to the Corporate Software Development Process (CSDP) of the organization. This status report presents the assurance process overview, goals, benefits and scope, as well as an overview of the 1997 results, followed by the lessons learned for the 1998 audit program.
Abstract: During 1997, the Information System (IS) Division of the Bahrain Telecommunications Company (Batelco) implemented a millennium compliance program. This experience report presents additional configuration management requirements implemented to manage the millennium project associated with the IBM-MVS applications. Included are a definition of compliance for Year 2000 projects, conversion approaches, additional configuration management requirements, and an overview of the Year 2000 Components Tracking System (Y2KCTS) process, followed by the lessons learned. At the time of writing this paper there are 1 year, 8 months and 24 days left before the new century. The year 2000 project is a significant undertaking and an absolutely no-choice project for Batelco.
Abstract: This work presents an improved version of an open multi-dimensional model of performance, called QEST (Quality factor + Economic, Social and Technical dimensions) [8]. Performance is defined here as productivity adjusted by quality, both of which can be represented from multiple viewpoints. The QEST model integrates into a single representation three dimensions, each one represented by a productivity measurement value derived from an instrument-based measurement process, a value which is then adjusted by a perception-based measurement of the quality achieved. Both components of performance, that is, productivity and quality, take into account the same three distinct viewpoints of performance: the economic dimension, from the managers' viewpoint, with particular attention paid to cost and schedule drivers; the social dimension, from the users' viewpoint, with particular attention paid to the quality-in-use drivers; and the technical dimension, from the developers' viewpoint, with particular attention paid to technical quality.
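The shape of the QEST idea, productivity per dimension adjusted by quality and combined into a single indicator, can be sketched as follows. The input values, the multiplicative adjustment and the normalized Euclidean combination rule are all illustrative assumptions for this sketch, not the model's published formulation.

```python
# Hedged sketch of a QEST-style combination: per dimension (economic, social,
# technical), adjust a productivity value by a quality level, then combine the
# three into one normalized indicator. Values and rules are illustrative only.
import math

# Normalized (0..1) productivity and quality assessments per viewpoint.
dimensions = {
    "economic":  {"productivity": 0.70, "quality": 0.80},
    "social":    {"productivity": 0.60, "quality": 0.90},
    "technical": {"productivity": 0.85, "quality": 0.75},
}

def performance(dims):
    """Quality-adjusted productivity per dimension, combined as a normalized
    Euclidean norm so the result stays in the 0..1 range."""
    adjusted = [d["productivity"] * d["quality"] for d in dims.values()]
    return math.sqrt(sum(v * v for v in adjusted)) / math.sqrt(len(adjusted))

p = performance(dimensions)
print(round(p, 3))
```

Keeping the three adjusted values available before combining them is what lets the model report the economic, social and technical viewpoints separately as well as the single overall indicator.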
Abstract: In most software cost estimation models, software size is the key cost driver. Such models use either a technical measure of software size, based on lines of code, or alternatively a functional size measure which can be known earlier in the software life cycle. However, even though Function Points is the most widely used functional size measure in the MIS domain, practitioners have often pointed out its limitations for measuring the size of real-time or embedded software; therefore, it is not currently considered as an adequate input parameter for estimating real-time software effort. In 1997, a new extension to Function Points (referred to as Full Function Points, FFP) was introduced for measuring the functional size of real-time software in order to address the most obvious weaknesses of IFPUG's Function Points while retaining compatibility with traditional Function Points for MIS software. Full Function Points was also recently accepted as a new measurement standard for real-time software by the International Software Benchmarking Standards Group. This paper reports on the key concepts of this extension, as well as on the preliminary results of the measurement field tests carried out in different organizations. The ability of FFP to adequately capture the functional size of real-time software is illustrated by FFP and FPA measurements taken on the same software products. Preliminary results, using additional collected data, to support exploratory analysis of the unit effort and the schedule delivery date based on FFP are also presented.
Abstract: IT projects continue to be canceled, delivered late and over budget, fail to deliver what was expected or deliver error-prone results. This state of affairs prevails today despite more than 30 years of evolution in the methods, techniques and tools of information technology, software engineering and project management. It is clear that software professionals and organizations have not adequately harnessed the bodies of knowledge required to avoid these project delivery pitfalls. This talk presents a recently begun Canadian Government initiative to develop a non-proprietary and open data collection framework for information technology governance. The following topics are discussed: the underlying business model and the major guidelines for putting in place the infrastructure to implement this data collection framework.
Notes: 19980403, Notes: Incomplete, Research Notes: 129
Abstract: Function Point Analysis (FPA) is a technique designed to measure the functional size of software products. The technique measures product size from the user's point of view rather than from a technical perspective. FPA is now widely used in the MIS domain, where it has become the "de facto" standard. However, FPA has not enjoyed the same degree of acceptance in other domains, such as real-time software. This paper reports on work carried out to adapt FPA to the specific functional characteristics of real-time software. The extension proposed, called Full Function Points (FFP), is described and the results of field tests are discussed.
Abstract: Function Point Analysis measures user-requested functionality independent of the technology used for implementation. Software applications are represented in an abstract model that contains the items that contribute to the functional size. When Function Point Analysis is applied to object-oriented software, the concepts of the development method have to be mapped into that abstract model. This article proposes a mapping of the use-case-driven Object-Oriented Software Engineering method by Jacobson et al. into the abstract Function Point model. The mapping has been formulated as a small set of concise rules that support the actual measurement process. Our work demonstrates the applicability of Function Point Analysis as a measure of functional software size to the OO-Jacobson approach. This supports the thesis that Function Point Analysis measures independent of the technology used for implementation and that it can be used in the object-oriented paradigm.
Abstract: The Function Point software measure does not require the use of a particular development technique. However, the high-level concepts of object-oriented development methods cannot be mapped directly to the concepts of Function Point Analysis. In order to apply this software measure early in the development process, the object-oriented concepts corresponding to transactional and data function types have to be determined. Object-oriented methods differ, especially in their early development phases. The Object-Oriented Software Engineering method of Jacobson et al. is based on so-called use cases. The viewpoint of this method is similar to Function Point Analysis in the sense that it concentrates on the application's functionality from the user's perspective. The OO-Jacobson approach identifies the functionality of an application with the requirements use case model. Data types are described with a domain or analysis object model on the requirements level. Our work proposes rules to map these models to the Function Point counting procedures. With the proposed rules, it is possible to count software developed with the OO-Jacobson method. Experimental counts have been conducted for three industry projects.
Abstract: This paper presents a process model for software measurement methods. The proposed model details the distinct steps from the design of a measurement method, to its application, then to the analysis of its measurement results and, last, to the exploitation of these results in subsequent models, such as quality and estimation models. From this model, a validation framework can be designed for analyzing whether or not a software metric could qualify as a measurement method. The model can also be used for analyzing the coverage of the validation methods proposed for software metrics.
Notes: http://saturne.info.uqam.ca/Labo_Recherche/Lrgl/publi/confproc/LRGL-1997-001/LRGL-1997-001.htm, 19970522, Research Notes: 45
Abstract: Many parametric models based on estimates of project effort have been proposed in the literature to predict the duration of software development projects. Among these, COCOMO has received wide attention. A comparison of the duration estimates obtained from this model with those from an empirical model derived from a set of historical data maintained by the International Software Benchmarking Standards Group (ISBSG) is presented in this paper. It is shown that the COCOMO duration estimates are "optimistic" when compared to the empirical model estimates. Using quantitative evaluation criteria, this paper also shows that the goodness of the COCOMO duration models is very close to the goodness of the empirical model in spite of the fact that the data used to derive the COCOMO duration models are roughly 20 years old.
Notes: http://saturne.info.uqam.ca/Labo_Recherche/Lrgl/publi/otherpub/LRGL-1997-010A.pdf, http://saturne.info.uqam.ca/Labo_Recherche/Lrgl/publi/otherpub/LRGL-1997-010B.pdf, m:\mig-lrgl\livrables\LRGL-1997-010A.pdf, m:\mig-lrgl\livrables\LRGL-1997-010B.pdf, 19970903, Research Notes: 122
Abstract: Function Point Analysis (FPA) is a method for measuring the functional size of a software system. The rules governing FPA are described in textual format in IFPUG's Counting Practices Manual (CPM). These textual descriptions need to be transformed into very structured requirements if the tools to be built are to have some chance of producing accurate results. Ideally, it should be possible to represent the FP rules in a decision table that can then be programmed and fully tested.
Abstract: This position paper addresses the measurement of software reuse from a functional perspective rather than from a technical perspective. Many studies have observed that the potential for reuse in software goes far beyond the reuse of source lines of code and includes data, architecture, design, program and common subsystem modules, documentation, test data and various intangibles. These issues are not tackled by reuse metrics based only on lines of code as the unit of measurement. In 1995, Abran and Desharnais proposed the first version of functional reuse metrics based on the Function Point Analysis (FPA) technique. They illustrated how these metrics could be used to take into account the benefits of reuse in a cost-benefit analysis. We are currently working on the refinement and extension of these functional reuse metrics. This empirical research project includes three phases: (1) test of the proposed functional metrics with other industrial datasets; (2) exploration of their limitations and potential extensions, through the design of much more complex simulated case studies using the data collected in the first phase of the project; and (3) design and test of an improved version of the proposed metrics. In this position paper we present our current work in progress on this subject.
Notes: http://saturne.info.uqam.ca/Labo_Recherche/Lrgl/publi/confproc/LRGL-1997-012A.pdf, 19970521, Research Notes: 603
Abstract: Function Point Analysis measures user-requested functionality independent of the technology used for implementation. Software applications are represented in an abstract model that contains the items that contribute to the functional size. When Function Point Analysis is applied to object-oriented software, the concepts of the development method have to be mapped into that abstract model. This article proposes a mapping of the use-case-driven Object-Oriented Software Engineering method by Jacobson et al. into the abstract Function Point model. The mapping has been formulated as a small set of concise rules that support the actual measurement process. Our work demonstrates the applicability of Function Point Analysis as a measure of functional software size to the OO-Jacobson approach. This supports the thesis that Function Point Analysis measures independent of the technology used for implementation and that it can be used in the object-oriented paradigm.
Abstract: Function Points are generally used for measuring software functional size from a user perspective. This paper is concerned with the problem of counting function points from source code using the Function Point Analysis proposed by the International Function Point Users Group (IFPUG) 1994 standards. This paper presents the scope and objective of automated FP counting, a presentation of an existing semi-formal model and the required extensions for the definition of four IFPUG rules. We then propose reverse engineering techniques to address those four rules.
Abstract: Multiple solutions to problems of software development have been proposed, such as development methodologies, management models and software tools. The software maintenance function, on the other hand, has not received such attention despite its share of the software budget in organizations: between 50% and 70%. The software maintenance function suffers from a scarcity of management models that would facilitate its evaluation, its management and its continuous improvement. This paper proposes a model for evaluating the quality of the software maintenance process. The proposed model is based on the CMM-SEI* model developed by Carnegie Mellon University to evaluate and improve the software development process. The architecture of the CMM model has been retained almost as is, while its content, which was specific to the development process, has been either modified or extended to take into account the characteristics specific to the maintenance function. These characteristics specific to software maintenance were identified based on both practitioners' experience and seminal literature publications on software maintenance; these characteristics were then organized into key process areas within the CMM ordinal-scale structure, which promotes a path of gradual, progressive improvement of software functions.
Abstract: A requirement for productivity models and productivity analysis is to know the size of the product, or the output, of a work process. In software engineering, the product is the software itself. Function Points Analysis (FPA) has been designed to measure the functional size of software applications from a user's perspective. While it is being used extensively to measure either medium or large software development or enhancement projects, it has not been used to measure very small functional enhancements: its current measurement structure does not allow it to discriminate small size increments. This paper describes an extended version of FPA which is proposed to address this measurement issue of lack of sensitivity to small size changes. It also presents the design and the results of an empirical study carried out using this extended version.
Notes: http://saturne.info.uqam.ca/Labo_Recherche/Lrgl/publi/otherpub/mm199601.zip, 19970521, Research Notes: 389
Abstract: The Function Point software measure does not require the use of a particular development technique. However, the high-level concepts of object-oriented development methods cannot be mapped directly to the concepts of Function Point Analysis. In order to apply this software measure early in the development process, the object-oriented concepts corresponding to transactional and data function types have to be determined. Object-oriented methods differ, especially in their early development phases. The Object-Oriented Software Engineering method of Jacobson et al. is based on so-called use cases. The viewpoint of this method is similar to Function Point Analysis in the sense that it concentrates on the application's functionality from the user's perspective. The OO-Jacobson approach identifies the functionality of an application with the requirements use case model. Data types are described with a domain or analysis object model on the requirements level. Our work proposes rules to map these models to the Function Point counting procedures. With the proposed rules, it is possible to count software developed with the OO-Jacobson method. Experimental counts have been conducted for three industry projects.
Abstract: Industrial production firms have over time developed tools and models to ensure that productivity is measured and understood. This article suggests the use of such a model, the SIMAP model, for software maintenance. This article also shows how data could be organized and categorized in order to fully benefit from the SIMAP productivity model.
Abstract: Material to teach evaluation and selection of new technologies is often geared towards major organizations and research centers. However, software engineers in small to medium sized organizations are often faced with the same challenge of selecting new technologies. For graduate courses in software engineering, there is currently a lack of teaching material geared towards the needs of small to medium sized organizations. The paper discusses the redesign of a graduate course in software engineering using as base material the work in progress of an ISO subcommittee in software engineering. The test of the redesign was carried out through a class simulation of the review process of an ISO working group. Lessons learned from both the learning and teaching perspectives are presented. (2 Refs.)
Notes: none, Notes: 0 8186 7137 8, Research Notes: 322
Abstract: A requirement for productivity models and productivity analysis is the ability to define a sizing technique of the product of a work process. There is currently a lack of measurement techniques for sizing the work product of software maintenance activities. This paper reports on research work carried out to define a sizing technique for the work product of the maintenance activities of the adaptive category. The proposed sizing technique is based on an extension to the Function Points technique which has been designed to measure the functional size of software applications from a user's perspective. However, it has been used mostly to measure either medium or large development or enhancement projects. An extended version of Function Points is proposed to take into account a finer level of granularity congruent with the small maintenance work products. This extension provides the ability to adequately size products that would have been previously bundled within the same size interval when using the conventional technique. This extension was field tested in an organization over a four-year period and preliminary results are discussed. (18 Refs.)
Abstract: The objective of this article is to illustrate the use of productivity models for enhancement projects and to report on the reliability of Function Points-based models. The results of a field study at a major Canadian financial institution indicate that Function Points-based productivity models are within the range of the recommended criteria for good models in software engineering: a mean relative error of +/-25% in 75% of cases.
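The "good model" criterion cited above (relative error within +/-25% in 75% of cases) is commonly operationalized as the MMRE and PRED(0.25) statistics. As a minimal sketch, using hypothetical actual and estimated effort values (the data below are illustrative, not from the study):

```python
def mre(actual, estimated):
    """Magnitude of relative error for one project."""
    return abs(actual - estimated) / actual

def mmre(actuals, estimates):
    """Mean magnitude of relative error over a set of projects."""
    errors = [mre(a, e) for a, e in zip(actuals, estimates)]
    return sum(errors) / len(errors)

def pred(actuals, estimates, level=0.25):
    """Fraction of projects whose MRE is within the given level."""
    errors = [mre(a, e) for a, e in zip(actuals, estimates)]
    return sum(1 for err in errors if err <= level) / len(errors)

# Hypothetical actual vs. model-estimated effort (person-hours)
actual = [400, 950, 1200, 300, 780, 560, 2100, 150]
estimate = [380, 1100, 1150, 390, 700, 590, 1900, 160]

# A model is conventionally judged "good" when PRED(0.25) >= 0.75
print(round(mmre(actual, estimate), 3))
print(pred(actual, estimate) >= 0.75)
```

With these illustrative numbers, seven of the eight projects fall within the +/-25% band, so the PRED(0.25) criterion is met.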
Notes: 14, 19971112, Notes: incomplete, Research Notes: 393
Abstract: This document presents an analysis of and proposed amendments to the Trial version of the SWEBOK Guide describing the Software Maintenance Knowledge Area. The scope of the proposed amendments is, first, to broaden the set of maintenance topics identified to improve coverage of the body of knowledge on software maintenance, as well as to recommend additional references, and, second, to propose for each topic within the Software Maintenance Knowledge Area (KA) the expected level of knowledge for a graduate plus four years of experience.
Abstract: Software measurement programs are now widely recommended in software engineering, more specifically in support of larger continuous process improvement programs. However, software measurement programs exhibit some of the undesirable characteristics of software development projects in the sense that they are very risky undertakings in themselves. Measurement programs need to be brought under control, and methods are needed, and must be designed, for the identification and the management of their own risks in order to increase their implementation success rate. This paper presents the development of a risk assessment grid or questionnaire for software measurement programs and a risk assessment method enabling the actual usage of this grid in an organization. Four major risk areas are covered by the grid. They are: 1) the context surrounding the software measurement program; 2) the program organizational structure; 3) the program components; and 4) the program results. Results of field-testing are also discussed. This risk assessment method and grid can be used early in the design phase of a software measurement program as well as throughout its implementation. The research work for this project was conducted using Basili's framework for empirical research in software engineering and it is described accordingly.
Notes: Type of Work: Technical Report, 19970904, Notes: Written in 1997, Research Notes: 702
Abstract: Under teaming agreements between ISL and UQAM to perform an independent quality assurance review of the TRAC-M code, work has been started for the United States Nuclear Regulatory Commission (USNRC). Under the terms of this agreement, UQAM has agreed to help ensure that the TRAC-M code is of the expected quality and can be used with confidence. This report documents the independent verification activities in the requirements phase of the TRAC-M code development process. The technical approach for this project included three group activities. The first activity consisted of a review of the TRAC-M development process and of software engineering techniques/standards related to software requirements IV&V activities. The first technical report, on the planning of IV&V, summarizes the process overview, documentation traceability, and the applicability and limitations of existing V&V methods on the TRAC-M code. The second technical report focuses on the traceability of modules. The third technical report examines the functional traceability of one module. To assess functional requirement correctness, the control system module and the containment module will be studied. To analyze the software requirements interface, documentation of communication between the consolidation code and other processes will be examined. The Independent Verification and Validation Activities of the Requirements and Design Phase project consisted of 6 separate but …
Notes: Type of Work: IV&V Activities for NRC TRAC-M Code Project No.NRC-O4-97-039 Task Order #2, Technical, 20020107, Research Notes: 52
Abstract: Recently, the Full Function Point (FFP) method and its newest development, COSMIC-FFP, have been developed in order to improve the measurement of functional size for a wide range of software, such as MIS, real-time, embedded and technical software. This paper presents a comparative study of these two functional size measurement methods (Full Function Points and COSMIC-Full Function Points) with respect to the traditional Function Points method (i.e. IFPUG Function Points). The study compares the designs of these three measurement methods through a common framework, from the software models to the measurement processes. A case study on a Warehouse Software Portfolio illustrates in detail an empirical comparison of measurement with these three methods.
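COSMIC-FFP sizes each functional process by counting its data movements of four types (Entry, Exit, Read, Write), each contributing one unit of functional size. The sketch below illustrates that counting scheme on a hypothetical set of warehouse processes; the process names and movement counts are assumptions for illustration, not measurements from the paper's case study:

```python
# COSMIC-FFP sizes a functional process as the count of its data
# movements: Entries, Exits, Reads and Writes, one size unit each.
DATA_MOVEMENT_TYPES = ("entry", "exit", "read", "write")

def process_size(movements):
    """Functional size of one process = total of its data movements."""
    return sum(movements.get(t, 0) for t in DATA_MOVEMENT_TYPES)

def software_size(processes):
    """Functional size of the software = sum over all its processes."""
    return sum(process_size(m) for m in processes.values())

warehouse = {  # hypothetical processes with hypothetical counts
    "receive shipment": {"entry": 1, "read": 2, "write": 1, "exit": 1},
    "pick order":       {"entry": 1, "read": 3, "write": 2, "exit": 1},
    "stock report":     {"entry": 1, "read": 2, "exit": 1},
}

print(software_size(warehouse))
```

The additive structure (no weights, no adjustment factors) is one of the design differences between COSMIC-FFP and IFPUG Function Points that such a comparison framework exposes.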
Notes: Type of Work: Technical Report, 20001109, Research Notes: 329
Abstract: This paper presents a white-box approach for developing models that predict software engineering project duration based on project effort. These models are based on the analysis of empirical data contained in the 1997 release of the International Software Benchmarking Standards Group (ISBSG) repository. Duration models are built for the entire data set and for subsets of projects developed for personal computer, mid-range and mainframe platforms. Duration models are also constructed for projects requiring fewer than 400 person-hours of effort and for projects requiring more than 400 person-hours of effort. The usefulness of adding the maximum number of assigned resources as a second independent variable to explain duration is also analyzed. The opportunity of building duration models directly from project functional size in function points is investigated as well.
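Duration-from-effort models of the kind described above are typically fitted as a power law, duration = a * effort^b, via ordinary least squares on log-transformed data. A minimal sketch, using hypothetical (not ISBSG) project data:

```python
import math

def fit_power_law(efforts, durations):
    """Fit duration = a * effort**b by least squares on log-log data."""
    xs = [math.log(e) for e in efforts]
    ys = [math.log(d) for d in durations]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # slope of the log-log regression is the exponent b
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = math.exp(my - b * mx)  # intercept back-transformed to a
    return a, b

# Hypothetical pairs: effort (person-hours), duration (months)
effort = [120, 400, 900, 2000, 5000, 12000]
duration = [2.1, 3.8, 5.5, 8.0, 12.5, 19.0]

a, b = fit_power_law(effort, duration)
predicted = a * 1000 ** b  # duration predicted for a 1000 person-hour project
```

An exponent b well below 1 reflects the commonly observed diseconomy between effort and calendar time: doubling effort far less than doubles duration.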
Notes: Type of Work: Technical Report, 19970904, 19990305, Research Notes: 123
Abstract: A requirement for software productivity analysis and estimation is the ability to measure the size of a software product from the user's viewpoint, that is, from a functional perspective rather than from a technical perspective. One example of such a measurement method is Function Point Analysis (FPA). FPA is now widely used in the Management Information Systems (MIS) domain, where it has become a "de facto" standard. However, FPA has not achieved the same degree of acceptance in other domains of software engineering, such as real-time software. The general opinion is that when FPA is applied to such software the results do not constitute an adequate size measurement of this type of software. This paper reports on a research project carried out to adapt FPA to the specific functional characteristics of real-time software. The proposed extension, called Full Function Points (FFP), is described and the results of field-testing are discussed. This research was conducted using an adaptation of Basili's framework for empirical research in software engineering and it is described accordingly.
Notes: Type of Work: Technical Report, 19990316, Research Notes: 615
Abstract: The rules governing Function Point Analysis (FPA) are described in textual format in the Counting Practices Manual by the International Function Point Users Group. These textual descriptions need to be transformed into very structured requirements if the tools to be built are to have some chance of producing accurate results. Ideally, it should be possible to represent the FP rules in decision tables that can then be programmed and fully tested. By using a formal notation system and decision tables, the FPA counting process can be expressed in 17 tasks, of which 12 are algorithmic and five require human judgment. For function point counters, this captures the mechanics of counting function points as defined in the CPM; for the tool builder, it provides the detail required to start building the physical design of an interactive tool for counting function points.
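As an illustration of how such textual CPM rules can be recast as decision tables, the IFPUG complexity matrix for one transaction type, the External Input (EI), can be encoded directly: complexity is looked up from the counts of file types referenced (FTR) and data element types (DET), then mapped to a weight. The thresholds below follow the IFPUG CPM matrix as commonly published, but should be checked against the manual; this is a sketch, not the paper's own 17-task formalization:

```python
# IFPUG complexity matrix for an External Input (EI), written as a
# decision-table lookup: (FTR, DET) counts -> complexity -> weight.
EI_WEIGHTS = {"low": 3, "average": 4, "high": 6}

def ei_complexity(ftr, det):
    """Decision-table lookup of EI complexity from FTR and DET counts."""
    if ftr <= 1:
        return "low" if det <= 15 else "average"
    if ftr == 2:
        if det <= 4:
            return "low"
        return "average" if det <= 15 else "high"
    # ftr >= 3
    return "average" if det <= 4 else "high"

def ei_function_points(ftr, det):
    """Unadjusted function points contributed by one EI."""
    return EI_WEIGHTS[ei_complexity(ftr, det)]

print(ei_function_points(1, 10))  # a low-complexity EI
print(ei_function_points(3, 20))  # a high-complexity EI
```

This is one of the "algorithmic" tasks; identifying that a piece of functionality is an EI in the first place remains a human-judgment task.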
Abstract: This document presents the Full Function Points (FFP) counting for a rice cooker example. First, the user requirement specifications are presented. Next, the boundary for the count is defined. Then, the control data function types are identified followed by the control transactional function types. For each function type identified, we explain the interpretation of the requirement specifications as well as the assumptions made. Finally, a figure and a table showing the summary of the count and the calculation of the final count are presented.
Notes: Type of Work: Technical Report, 19970904, Research Notes: 391
Abstract: This paper is concerned with the identification and measurement of reuse within projects in which functional enhancements have been added to existing software applications. The proposed approach is based on the measurement of reuse from a functional perspective rather than from a technical perspective. Two key concepts are introduced: a reuse indicator and a predictor ratio. The reuse indicator is derived from an analysis of the function types as currently defined in Function Points Analysis. The predictor ratio is derived from an understanding of the avoided-cost concept and of how it can be captured using historical databases of function points from previous development projects. This paper indicates how, in functional enhancement projects, the predictor ratio can be combined with the reuse indicator to derive an alternative size measure which takes into account functions reused and not redeveloped. The paper also demonstrates how these ratios can then be integrated in a maintenance productivity model to analyze the benefits of reuse by taking into account the avoided cost of functions reused. A case study based on an industrial data set is provided to illustrate the measurement of functional reuse in an enhancement project and its impact on maintenance productivity analysis.
Notes: Type of Work: Research, Notes: Manual entry, Research Notes: 700
Abstract: Article analysis of C. G. Low and D. R. Jeffery, "Function Points in the Estimation and Evaluation of the Software Process", IEEE Transactions on Software Engineering, vol. 16, no. 1, January 1990.
Abstract: Function Point metrics were developed by Allan Albrecht in 1979 to study software development productivity. There is, however, considerable ambiguity in their interpretation: in the literature they are reported to be either a size measurement, a productivity unit, a functional complexity factor, a dimensionless number or a multidimensional number. Simultaneously, there is ample evidence in the literature of an empirical relationship between function points and development work effort. An analysis of the measurement processes embedded within Function Points (FP) reveals that the FP metrics include both a measurement model and a productivity model. Within the measurement model, the software user deliverables are identified and decomposed into functions and elementary components which are individually counted within the same measurement scales. This is then followed by the assignment of weights, through a set of algorithms, based on the implicit models hidden in the expert judgments of Albrecht's initial experiment: these latter steps within FP represent in fact a productivity model. Therefore, the end result of the FP model (e.g. the total of function points) constitutes a mixed bag of measurement scales without precise meaning. This analytical view of function points, from a measurement perspective, indicates that they should not be seen as a dimensionless number but rather as a system of relationships between the measure of typical functions (as defined by the rules of the measurement model) and the effort to develop them.
The research hypothesis is that this relationship is valid not only for the total of function points but also for the FP measurement model as defined above, and that such a relationship exists for each of the steps embedded within the FP metrics, within both types of FP models. To investigate this hypothesis through an empirical data set of projects from an industrial site, the research approach must be able to analyze this issue, and only this one, to the exclusion of other productivity variables. The research approach selected is based on the recommendations of MacDonell (1991) and Ramsey and Basili (1989) for empirical studies in software engineering. The following two types of homogeneity were defined and investigated in this software engineering research:
A) External homogeneity: defined based on the homogeneity of the productivity factors outside the domain of function points. This type of homogeneity eliminates from the empirical analysis the variations caused by production factors, which then allows analysis of only the impact of the internal structure of Function Point metrics on the work-effort relationship. Within this context, two types of regression models are built and their results analyzed:
A1 - Regressions with components of the FP productivity model as independent variables;
A2 - Regressions with components of the FP measurement model as independent variables.
B) Internal homogeneity: defined based on characteristics internal to the domain of FP, such as the distribution of points by function types or groups of functions, within the FP productivity model. Through concepts borrowed from statistical process control techniques, the behavior of projects within selection intervals, and outside of them, is then analyzed and the quality of their work-effort relationships compared.
These selection intervals can be defined on either the functional distribution of the independent variables within the FP measurement model or within the FP productivity model. The regression models designed based on the external homogeneity criteria confirm that there is an FP/work-effort relationship not only with the total number of function points, but also with each of the intermediate steps within the FP metrics. They also confirm that, for this empirical analysis, each step of the FP measurement model contributes to improving the relationship and that the steps of the FP productivity model (with the algorithms and the weights) contribute little to the quality of the relationship. They further confirm that this holds for a sample other than the initial Albrecht data set, and that it meets the criteria generally accepted in software engineering for good productivity models: +/-25% accuracy in 75% of cases. While other research on the FP structure had taken for granted the functional homogeneity of its data sets across projects and across time, either formally (Bock et al., 1989) or informally (Kitchenham et al., 1993), this research work looks into the real functional distribution of the data set and analyzes its impact on the work-effort relationship. This is done through the identification of internally homogeneous data subsets using selection intervals from either the FP productivity model or the FP measurement model. Regression models based on internal homogeneity characteristics of the FP productivity model confirm that subsets of data points within the selection intervals (e.g. a homogeneous profile with respect to the FP productivity model) can reach a coefficient of variation of +/-15% for the standard error and a coefficient of regression (R2) of 0.90.
Finally, regression models based on internal homogeneity characteristics of the FP measurement model confirm that an even greater level of accuracy can be reached, with a near-perfect regression model under some conditions. The research methodology also illustrates the importance of the distribution and homogeneity of FP function types in productivity studies. These results confirm the usefulness in productivity studies of the various measurement processes embedded within FP metrics. The major contributions of this research work to the field of software engineering can be summarized as follows:
1. Within the empirical constraints of the industrial data set available, the worth of Function Point metrics with respect to the work-effort relationship does not depend on the algorithms and weights; in fact, the end results of the FP measurement model, by themselves, can be successfully used as the independent variable in a productivity model.
2. The constraints of external homogeneity of the data set do not allow, a priori, inference of generic behavior of FP productivity models for other environments and constraints. However, this suggests interesting avenues for further research within the initial M.I.S. domain of function points as well as for other domains of software engineering.
3. In software engineering it is important to take into account the external homogeneity of the empirical data sets when analyzing the structure of productivity models and the impact of variations of their components and parameters.
4. The results of this empirical analysis indicate that not only is the functional size, as described by Function Point metrics, an important variable in explaining the work-effort relationship, but also that the distribution of the function types is an important variable, depending on whether the functional distribution is homogeneous or heterogeneous.
For the data set analyzed, the analysis of the functional distribution of each observation then allows the selection of the regression model most appropriate to explain its work-effort relationship (always within the same constraints of externally homogeneous conditions). 5. Finally, the measurement-system perspective should be used when verifying the validity of the structure of software engineering productivity models and their metrics.
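The acceptance criterion cited in the abstract above, +/- 25% accuracy within 75% of the time, is commonly expressed as PRED(0.25) >= 0.75, computed from the magnitude of relative error (MRE) of each estimate. The following is a minimal sketch of that computation; the effort values are illustrative placeholders, not data from the study.

```python
def mre(actual, estimated):
    """Magnitude of relative error for a single observation."""
    return abs(actual - estimated) / actual

def pred(actuals, estimates, level=0.25):
    """Fraction of estimates whose MRE is at or below `level`."""
    hits = sum(1 for a, e in zip(actuals, estimates) if mre(a, e) <= level)
    return hits / len(actuals)

# Illustrative effort values (e.g., person-hours); not taken from the study.
actuals = [100, 200, 150, 400, 250]
estimates = [110, 180, 200, 390, 260]

# The model meets the criterion when PRED(0.25) >= 0.75.
print(pred(actuals, estimates))  # 0.8 for this illustrative sample
```

A productivity model satisfying this check estimates at least three projects out of four to within a quarter of their actual effort.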
Notes: Type of Work: Ph. D. Thesis, 2011-12-14, Notes: Manual entry, Research Notes: 11
Abstract: A methodology for establishing quality requirements and for identifying, implementing, analyzing, and validating process and product software quality metrics is defined. The methodology spans the entire software life cycle.
Abstract: If software engineering is to mature into a recognized engineering discipline, it needs to be supported by measures, measurement methods, and well-tested descriptive and quantitative models. Other disciplines have developed a considerable body of knowledge with respect to measures, measurement instruments, and quantitative models that use measurement results to analyze relationships across objects and attributes. How does software engineering compare to other fields in this respect? This position paper highlights some current high-level ambiguities in the domain of software metrics, a term that is often used with multiple definitions; the same is true of the expression "metrics validation", which is used in many ways with different meanings, leaving practitioners confused and researchers with a considerable challenge in leveraging other researchers' contributions on similarly named but distinct issues. To reach maturity, the software engineering knowledge area referred to as software metrics must mature into software metrology, as in other disciplines. This position paper concludes with recommendations for paths to be explored in order to tackle this issue with contributions from the metrology discipline.
Abstract: In recent years software usability has become a major research theme within the software engineering community. However, up to now, only a few software quality models have addressed the usability evaluation and measurement aspects in a detailed and structured way. In particular, one of the key forces within the software usability community, the International Organization for Standardization (ISO), has developed a variety of models to specify and measure software usability; however, none of these individual models covers all usability aspects. Furthermore, they are not well integrated into current software engineering practices, and no tool exists to support them. The motivation of this research is to address some of these limitations by proposing a consolidated and normative model for the evaluation of software usability.