On the relationship of concern metrics and requirements maintainability

Chapter: Software quality management


Maintainability has become one of the most essential software quality attributes. Concern metrics, such as interface size, have been proposed to identify maintainability problems in a software project [3, 8, 19, 20] and to support code smell detection in information systems. Reliability, maintainability, and availability (RAM) are three related system attributes; the discipline's first concerns were electronic and mechanical components.

Where failure rates are not known, as is often the case for unique or custom-developed components, assemblies, or software, developmental testing may be undertaken to assess the reliability of custom-developed components. Markov models and Petri nets are of particular value for computer-based systems that use redundancy. Evaluations based on qualitative analyses assess vulnerability to single points of failure, failure containment, recovery, and maintainability.
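As an illustrative sketch (not taken from any specific source model), the following Python fragment computes the steady-state availability of a hypothetical two-unit redundant system with a single repair crew using a continuous-time Markov model; the failure and repair rates are assumed values.

```python
import numpy as np

# States: 0 = both units up, 1 = one unit failed, 2 = both failed (system down).
# lam = per-unit failure rate (1/h), mu = repair rate (1/h); illustrative values only.
lam, mu = 1e-3, 0.1

# Generator matrix Q for a continuous-time Markov chain with a single repair crew.
Q = np.array([
    [-2 * lam,       2 * lam,  0.0],
    [      mu, -(lam + mu),    lam],
    [     0.0,          mu,    -mu],
])

# Steady-state probabilities pi satisfy pi @ Q = 0 with sum(pi) = 1.
A = Q.T.copy()
A[-1, :] = 1.0                      # replace one balance equation with the normalization
b = np.array([0.0, 0.0, 1.0])
pi = np.linalg.solve(A, b)

availability = pi[0] + pi[1]        # the system is up unless both units are down
print(f"Steady-state availability: {availability:.6f}")
```

Solving the balance equations with a normalization row is a standard way to obtain steady-state probabilities without simulation, and the same approach extends to larger state spaces, which is one reason Markov models suit redundant computer-based systems.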

Analyses from related disciplines during design also affect RAM. Human factors analyses are necessary to ensure that operators and maintainers can interact with the system in a manner that minimizes failures and the restoration times when failures do occur.

There is also a strong link between RAM and cybersecurity in computer-based systems. On the one hand, defensive measures reduce the frequency of failures due to malicious events; on the other hand, failures of the security mechanisms themselves can reduce the availability of the system to its users.

Production and manufacturing practices also affect RAM. The most important of these are ensuring repeatability and uniformity of production processes and complete, unambiguous specifications for items from the supply chain. Others are related to design for manufacturability, storage, and transportation (Kapur; Eberlin). Large software-intensive systems and information systems are affected by issues related to configuration management, integration testing, and installation testing.


Depending on organizational considerations, this may be the same system used during design or a separate one. Monitoring During Operation and Use. After systems are fielded, their reliability and availability should be monitored to assess whether the system or product has met its RAM objectives, to identify unexpected failure modes, to record fixes, to assess the utilization of maintenance resources, and to assess the operating environment.

In order to assess RAM, it is necessary to maintain an accurate record not only of failures but also of operating time and the duration of outages. Systems that report only on repair actions and outage incidents may not be sufficient for this purpose. An organization should have an integrated data system that allows reliability data to be considered with logistical data, such as parts, personnel, tools, bays, transportation and evacuation, queues, and costs, allowing a total awareness of the interplay of logistical and RAM issues.
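As a minimal sketch of why operating time and outage durations both need to be recorded, the following fragment derives MTBF, MTTR, and operational availability from a hypothetical outage log; the record layout, dates, and observation period are assumptions for illustration.

```python
from datetime import datetime, timedelta

# Hypothetical outage log: (start, end) of each outage; the layout is an assumption.
outages = [
    (datetime(2023, 1, 10, 8, 0), datetime(2023, 1, 10, 11, 30)),
    (datetime(2023, 3, 2, 14, 0), datetime(2023, 3, 2, 15, 15)),
    (datetime(2023, 6, 21, 2, 0), datetime(2023, 6, 21, 9, 45)),
]
observation_period = timedelta(days=365)

downtime = sum((end - start for start, end in outages), timedelta())
uptime = observation_period - downtime
n_failures = len(outages)

mtbf = uptime / n_failures            # mean operating time between failures
mttr = downtime / n_failures          # mean time to restore
availability = uptime / observation_period

print(f"MTBF: {mtbf}, MTTR: {mttr}, availability: {availability:.4f}")
```

A log that recorded only the repair actions, without the operating time and outage durations, would not support this calculation.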

These issues in turn must be integrated with management and operational systems to allow the organization to reap the benefits that can occur from complete situational awareness with respect to RAM. Reliability and Maintainability Testing. Reliability testing can be performed at the component, subsystem, and system level throughout the product or system lifecycle.

Reliability Life Tests are used to empirically assess the time to failure for non-repairable products and systems and the times between failure for repairable or restorable systems.

Termination criteria for such tests can be based on a planned duration or planned number of failures.
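For a time-terminated test of this kind, a standard chi-square bound (assuming exponentially distributed times between failures) converts the total accumulated test time and the observed failure count into a point estimate and a one-sided lower confidence bound on MTBF; the numbers below are illustrative assumptions.

```python
from scipy.stats import chi2

# Illustrative test outcome (assumed numbers): total accumulated unit-hours and failures.
total_unit_hours = 12_000.0   # sum of operating hours across all units on test
failures = 4                  # failures observed before the planned test end (time-terminated)
confidence = 0.90             # one-sided confidence level for the lower bound

mtbf_point = total_unit_hours / failures

# Standard chi-square lower bound for a time-terminated (Type I censored) exponential test.
mtbf_lower = 2.0 * total_unit_hours / chi2.ppf(confidence, 2 * failures + 2)

print(f"Point estimate of MTBF: {mtbf_point:.0f} h")
print(f"{confidence:.0%} one-sided lower bound: {mtbf_lower:.0f} h")
```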

Accelerated life testing is performed by subjecting the items under test, usually electronic parts, to temperatures well above the expected operating temperature and extrapolating the results using an Arrhenius relation. Stability tests are life tests for integrated hardware and software systems. The goal of such testing is to determine the integrated system failure rate and to assess operational suitability. Test conditions must include accurate simulation of the operating environment, including workload, and a means of identifying and recording failures.
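A minimal sketch of the extrapolation step, assuming a single thermally activated failure mechanism; the activation energy and the temperatures are illustrative assumptions.

```python
import math

# Arrhenius acceleration factor between a stress temperature and the use temperature.
E_A = 0.7                    # activation energy in eV (assumed; depends on the failure mechanism)
K_BOLTZMANN = 8.617e-5       # Boltzmann constant in eV/K

def kelvin(celsius: float) -> float:
    return celsius + 273.15

def acceleration_factor(t_use_c: float, t_stress_c: float, e_a: float = E_A) -> float:
    """Arrhenius acceleration factor: AF = exp[(Ea/k) * (1/T_use - 1/T_stress)]."""
    return math.exp((e_a / K_BOLTZMANN) * (1.0 / kelvin(t_use_c) - 1.0 / kelvin(t_stress_c)))

af = acceleration_factor(t_use_c=40.0, t_stress_c=125.0)
print(f"Acceleration factor: {af:.1f}")
# A failure observed at hour t under stress extrapolates to roughly af * t hours in normal use.
```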

A further form of testing assesses the fault tolerance of a system by measuring the probability of successful switchover for redundant systems. Failures are simulated, and the ability of the hardware and software to detect the condition and reconfigure the system to remain operational is tested.
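A simple way to summarize such fault-insertion trials is a binomial estimate of the switchover probability; the trial counts below are assumptions, and the normal-approximation interval is only one of several possible interval methods.

```python
import math

# Estimate the probability of successful switchover from fault-injection trials.
trials = 200          # simulated failures injected into the redundant configuration (assumed)
successes = 193       # trials in which the system detected the fault and reconfigured (assumed)

p_hat = successes / trials

# Simple normal-approximation (Wald) 95% interval; small-sample methods may be preferable.
z = 1.96
half_width = z * math.sqrt(p_hat * (1.0 - p_hat) / trials)
print(f"Estimated switchover probability: {p_hat:.3f} "
      f"(approx. 95% CI: {p_hat - half_width:.3f} to {p_hat + half_width:.3f})")
```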

Maintainability testing assesses the system's diagnostic capabilities, physical accessibility, and maintainer training by simulating hardware or software failures that require maintainer action for restoration.

Because of its potential impact on cost and schedule, reliability testing should be coordinated with the overall system engineering effort. Test planning considerations include the number of test units, duration of the tests, environmental conditions, and the means of detecting failures.
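One common planning calculation, assuming an exponential time-to-failure model, is the total test time needed to demonstrate a target MTBF with zero failures; the target, confidence level, and number of units below are illustrative assumptions.

```python
import math

def zero_failure_test_time(mtbf_target: float, confidence: float) -> float:
    """Total unit-hours needed to demonstrate mtbf_target at the given confidence
    with zero failures, assuming an exponential time-to-failure model."""
    return mtbf_target * math.log(1.0 / (1.0 - confidence))

# Illustrative planning question (assumed numbers): demonstrate a 5,000 h MTBF at 80% confidence.
required_hours = zero_failure_test_time(mtbf_target=5_000.0, confidence=0.80)
n_units = 10
print(f"Required total test time: {required_hours:.0f} unit-hours "
      f"(about {required_hours / n_units:.0f} h on each of {n_units} units)")
```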

Data on a given system is assumed or collected, used to select a life distribution for a model, and then used to fit the parameters of that distribution. This process differs significantly from the one usually taught in an introductory statistics course. First, the normal distribution is seldom used as a life distribution, since it assigns probability to negative times, which cannot occur.

Second, and more importantly, reliability data is different from classic experimental data. Reliability data is often censored, biased, observational, and missing information about covariates such as environmental conditions. Data from testing is often expensive, resulting in small sample sizes.

These problems with reliability data require sophisticated strategies and processes to mitigate them.
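Before turning to those strategies, the sketch below illustrates the basic select-and-fit workflow described above on a small set of assumed, complete (uncensored) failure times, comparing a Weibull and an exponential model by log-likelihood.

```python
import numpy as np
from scipy import stats

# Illustrative, complete (uncensored) times to failure in hours; values are assumptions.
times = np.array([212.0, 355.0, 480.0, 560.0, 695.0, 820.0, 1010.0, 1290.0])

# Fit candidate life distributions with the failure-free location fixed at zero.
shape, _, scale = stats.weibull_min.fit(times, floc=0)
_, expon_scale = stats.expon.fit(times, floc=0)

# Compare fits via log-likelihood (in practice, AIC or probability plots would also be used).
ll_weibull = np.sum(stats.weibull_min.logpdf(times, shape, loc=0, scale=scale))
ll_expon = np.sum(stats.expon.logpdf(times, loc=0, scale=expon_scale))

print(f"Weibull: shape={shape:.2f}, scale={scale:.0f} h, log-likelihood={ll_weibull:.1f}")
print(f"Exponential: MTTF={expon_scale:.0f} h, log-likelihood={ll_expon:.1f}")
```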


One consequence of these issues is that estimates based on limited data can be very imprecise. Discipline Management. In most large programs, RAM experts report to the system engineering organization. At project or product conception, top-level goals are defined for RAM based on operational needs, lifecycle cost projections, and warranty cost estimates. These lead to RAM-derived requirements and allocations that are approved and managed by the system engineering requirements management function.
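One simple illustration of how a top-level goal flows down into allocations is equal apportionment for a series system; this is only one allocation scheme among many, and the goal and subsystem count below are assumptions.

```python
# Equal-apportionment allocation of a system reliability goal to n subsystems in series.
system_reliability_goal = 0.99    # required probability of mission success (assumed)
n_subsystems = 5                  # assumed number of subsystems in series

# In a series system the subsystem reliabilities multiply, so each subsystem
# is allocated the n-th root of the system goal.
subsystem_allocation = system_reliability_goal ** (1.0 / n_subsystems)

print(f"Each of {n_subsystems} subsystems must achieve R >= {subsystem_allocation:.5f}")
print(f"Check: {subsystem_allocation ** n_subsystems:.5f} ~= {system_reliability_goal}")
```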

RAM testing is coordinated with other product or system testing through the testing organization, and test failures are evaluated by the RAM function through joint meetings such as a Failure Review Board.


In some cases, the RAM function may recommend design or development process changes as a result of the evaluation of test results or software discrepancy reports, and these proposals must be adjudicated by the system engineering organization or, in some cases, by the acquiring customer if cost increases are involved. Post-Production Management Systems. Once a system is fielded, its reliability and availability should be tracked.

Such a system captures data on failures and on the improvements made to correct them. This database is separate from a warranty database, which is typically run by the financial function of an organization and tracks costs only. Unfortunately, a lack of careful consideration of the backward flow from decision to analysis to model to required data too often leads to inadequate data collection systems and missing essential information.

Proper prior planning prevents this poor performance. Of particular importance is a plan to track data on units that have not failed.

Units whose precise times of failure are unknown are referred to as censored units. Inexperienced analysts frequently do not know how to analyze censored data, and they omit the censored units as a result, which biases the analysis.
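A minimal sketch of how censored units can be retained rather than omitted: the likelihood uses the probability density for observed failures and the survival function for units still operating at the end of observation. The data and starting values are assumptions.

```python
import numpy as np
from scipy import optimize, stats

# Illustrative data (assumed values): exact failure times, plus right-censored times
# for units that were still operating when observation ended.
failure_times = np.array([180.0, 340.0, 510.0, 770.0, 960.0])
censored_times = np.array([600.0, 1000.0, 1000.0, 1000.0])

def neg_log_likelihood(params):
    """Weibull likelihood: pdf for observed failures, survival function for censored units."""
    log_shape, log_scale = params                 # optimize on the log scale to keep both positive
    shape, scale = np.exp(log_shape), np.exp(log_scale)
    ll = np.sum(stats.weibull_min.logpdf(failure_times, shape, scale=scale))
    ll += np.sum(stats.weibull_min.logsf(censored_times, shape, scale=scale))
    return -ll

result = optimize.minimize(neg_log_likelihood, x0=[0.0, np.log(800.0)], method="Nelder-Mead")
shape_hat, scale_hat = np.exp(result.x)
print(f"Weibull shape: {shape_hat:.2f}, scale: {scale_hat:.0f} h")
```

Dropping the censored times from this example would typically pull the estimated characteristic life downward, which is exactly the bias the surrounding text warns about.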


Discipline Relationships and Interactions. RAM interacts with nearly all aspects of the system development effort. Specific dependencies and interactions include the following: RAM interacts with systems engineering as described in the previous section. RAM interacts with the product or system lifecycle cost and warranty management organizations by assisting in the calculation of expected repair rates, downtimes, and warranty costs.

RAM may work with those organizations to perform tradeoff analyses to determine the most cost efficient solution and to price service contracts. RAM may also interact with the procurement and quality assurance organizations with respect to selection and evaluation of materials, components, and subsystems.

RAM and system safety engineers have many common concerns with respect to managing the failure behavior of a system. RAM and safety engineers use similar analysis techniques, with safety being concerned about failures affecting life or unique property, and RAM being concerned with those failures as well as with lower-severity events that disrupt operations.

In systems or products integrating computers and software, cybersecurity and RAM engineers have common concerns relating to the availability of cyberdefenses and system event monitoring. However, there are also tradeoffs with respect to access control, boundary devices, and authentication where security device failures could impact the availability of the product or system to users.


Quality reviews. A quality review carries out a technical analysis of product components or documentation to find mismatches between the specification and the component design, code, or documentation, and to ensure that the defined quality standards of the organization have been followed.

Software measurement and metrics. Product metrics. Software measurement provides a numeric value for some quality attribute of a software product or a software process. Comparing these numerical values with each other, or with standards, supports conclusions about the quality of the software or of the software processes.


Software product measurements can be used to make general predictions about a software system and to identify anomalous software components. A software metric is a measurement that relates to a quality attribute of the software system or process. It is often impossible to measure external software quality attributes, such as maintainability or understandability, directly. In such cases, the external attribute is related to some internal attribute, on the assumption that a relationship exists between them, and the internal attribute is measured in order to predict the external software characteristic.
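A widely quoted example of this approach is the maintainability index, which predicts the external attribute of maintainability from internal measurements (Halstead volume, cyclomatic complexity, and lines of code). The coefficients below follow one commonly cited form; tools vary in the exact formula they use, and the input values are assumptions for a hypothetical module.

```python
import math

def maintainability_index(halstead_volume: float, cyclomatic_complexity: float, loc: int) -> float:
    """One commonly quoted form of the maintainability index, which predicts the
    external attribute 'maintainability' from three internal measurements."""
    return (171.0
            - 5.2 * math.log(halstead_volume)
            - 0.23 * cyclomatic_complexity
            - 16.2 * math.log(loc))

# Illustrative internal measurements for a hypothetical module (assumed values).
mi = maintainability_index(halstead_volume=1250.0, cyclomatic_complexity=14, loc=320)
print(f"Maintainability index: {mi:.1f}")   # lower values suggest harder-to-maintain code
```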

Three conditions must hold in this case: the internal attribute must be measured accurately; a relationship must exist between what we can measure and the external behavioural attribute; and this relationship must be well understood, validated, and expressible as a mathematical formula. The measurement process. A software measurement process forms part of the quality control process. The steps of the measurement process are the following:

  • Select measurements to be made: choose measurements that are relevant to answering the quality-assessment questions.
  • Select components to be assessed: choose the software components to be measured.
  • Measure the components: measure the selected components and compute the associated software metric values.
  • Identify anomalous measurements: if any metric exhibits unusually high or low values, the corresponding component may have quality problems (see the sketch below).
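As a minimal sketch of the final step, the fragment below flags components whose metric values are unusually far from the rest; the component names and complexity values are hypothetical.

```python
import statistics

# Hypothetical metric values (e.g., cyclomatic complexity) per component; names and
# numbers are assumptions for illustration.
complexity = {
    "parser": 12, "scheduler": 15, "auth": 11, "report": 14,
    "exporter": 13, "legacy_io": 48, "ui": 10, "cache": 16,
}

mean = statistics.mean(complexity.values())
stdev = statistics.stdev(complexity.values())

# Flag components whose metric is unusually far from the rest; a high or low value
# does not prove a defect, but marks the component for closer review.
for name, value in complexity.items():
    z = (value - mean) / stdev
    if abs(z) > 2.0:
        print(f"{name}: complexity {value} is anomalous (z = {z:.1f}) -- review this component")
```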