STSARCES - Annex 1 : Software engineering tasks - Case tools

Annex 1

Software engineering tasks - Case tools

Final Report of WP1.1






1.1 Dependable electronic systems

1.1.1   Need for a system approach

1.1.2   Operation safety process

1.1.3   Obtaining and validating a dependable system

1.1.4   Explicit operation safety for mechatronic systems

1.2      Evaluating safety software

1.2.1   Specific features of safety software

1.2.2   Evaluating safety software

1.3      Requirements for safety software


2.1      Requirements for specification

2.2      Methods of specification

2.3      CASE Tools for specification

2.4      Specification and validation procedure

3.         CONCLUSION 

4.         APPENDIXES

4.1      “Specification methods” sheets

4.2      “Specification tools” sheets

4.3      Bibliography 



The work described in this report was done within the framework of the European project STSARCES, an acronym for “Standards for Safety Related Complex Electronic Systems”.  This project brings together the main French and European organisations directly concerned by the safety of industrial systems, INRS, INERIS, CETIM, HSE, BIA, INSHT, SP, TÜV and VTT, as well as the companies JAY ELECTRONIQUE and SICK AG.  Five themes are treated by the various partners : software safety, equipment safety, validation of the safety of complex components, the connection between the European standard EN 954 and the draft international standard IEC 61508, and the consideration of technological innovations.

An in-depth study of development methods and techniques for systems was also carried out.  The contribution of the work done by CETIM concerns the drafting of safety software specifications and their validation.  It highlights the importance of the global system approach.

Over the past years, traditional control systems have been replaced by programmed systems at an accelerated rate.  The functionalities of these programmed systems have increased and become increasingly sophisticated, making them complex to produce.  This complexity indirectly causes an increase in potential design faults and therefore in failures of the systems designed.

The complexity and size of present systems are such that it is impossible to eliminate all faults through a final inspection alone.  To ensure that software development is mastered, it is necessary to establish a suitable process.

Unlike mechanical systems, it is difficult to foresee the various failure modes of software.  At the design office level, analysis of uncertain behaviour is not exhaustive, and it is difficult to control and eliminate risks.

Since software behaviour cannot be predicted and potential incorrect behaviour cannot be quantified, it is impossible to analyse failure modes as is done for mechanical systems.  Unlike equipment, which may break down due to physical faults, software does not age and is affected only by design faults of human origin.

Introducing digital technology thus demands that designers make fundamental changes in their methods of approaching problems to be treated.

During the software life cycle, one of the most delicate steps is to express needs in specifications.  Drafting and evaluating specifications are important steps, especially for safety software.

Industrial fields using advanced technology (aeronautics, energy, railway transport, telecommunications) have integrated all of these concerns into the framework of their software development.  For other sectors, such as mechanical engineering, practical and easy-to-use assistance documents must be made available to design offices, encouraging the adoption of safety software development techniques and, more especially, of their specification.

The main work today is essentially done in universities.  It is not easily accessible to non-specialists, and its implementation requires an intellectual effort and a significant investment in training and in computer tools.

The CETIM work is part of this “Safety software specification” vision.  Our goal is to reduce the distance between the state of the art and present practices, thus making methods and tools easier to use.


1.1      Dependable electronic systems

1.1.1   Need for a system approach

Safety needs, and the verification of their implementation, are often tasks accomplished after the fact.  Taking the safety aspect into account when expressing needs implies a modelling of the system which is different from that obtained when only the performance and cost aspects are considered.

In fact, while defining the functional needs of a system provides a description of the service the system is to render, defining safety needs describes the behaviour which the system must avoid.  This description leads to identifying the functions which the system must fulfil in order to reduce the possibility that the behaviour to be avoided occurs.

From a system point of view, the safety concept may be considered on various levels.  At the global level, which is that of the mission, safety expresses the absence of accidents or incidents concerning people, property or the environment, and it is associated with the safety function.  At the component level, it expresses the absence of behaviour which could cause an accident for the specified mission.  Finally, the safety concept may also be considered at the level of an output driven by a component.

To ensure a determined level of safety, risks must first be analysed.  This analysis process is continuous and iterative, and it intervenes early in system development.  The idea is to identify dangerous phenomena and attempt to eliminate them.  To do this, the dangerous state must be eliminated at the system operational level, or the dangerous phenomena themselves must be suppressed.

Since all dangerous phenomena cannot be eliminated, the associated risks must be evaluated and estimated.  When a risk is considered high, measures must be devised to reduce it, either by decreasing its severity or by decreasing its probability of occurrence.

Risk analysis is carried out on four levels :

  • Very early in the life cycle : a preliminary risk analysis identifies critical functions (safety functions) and highlights dangers.
  • At the system level : the analysis makes it possible to identify risks introduced by the interfaces between sub-systems and risks of human error.
  • At the sub-system level : each sub-system is analysed, and the safety criteria concerning the sub-system, during normal operation or in a degraded mode, are identified.
  • At the support and operations level : the analysis identifies the procedures which reduce danger during use and maintenance of the system.

1.1.2   Operation safety process

The most traditional process for ensuring the operational safety of systems, in the automobile industry and in mechanics, is based on experience feedback.  Engineers collect and analyse operational dependability data in order to eliminate and control risks of failure.  This system safety approach stems from an industrial culture mainly based on system testing, rather than on analysis.

A second process is based on system dependability.  This approach measures the probability of random failures, rather than the probability of a risk of accident.  Testing alone is not an efficient way to demonstrate the safety of systems and software.

A consequence of these two approaches is the use of well-tried components.  Safety, however, is not the property of an isolated element, but of the combination of the equipment, the software and the environment in which the system is used.

The system approach to safety mainly consists in identifying risks as early as possible and classifying them, in order to undertake corrective actions to eliminate or minimise them before system design choices are made firm.

Several models of life cycles have been developed :

The cascade (waterfall) model is simple.  A certain number of steps (or phases) is agreed upon.  A step must end with the production of certain documents or software.  The results of the step are thoroughly reviewed, and the next step is taken only when this review is considered satisfactory.

The V model, which is more recent, presents a more realistic approach to the relationship between development activities and verification activities, when there is a software code.

Cascade and V models have disciplined the software development process by identifying its main activities and by specifying their sequencing.  However, the linear vision introduced by these models, and their rigidity, have called for modifications and extensions of these models.

The first evolutionary model is the incremental model.  Only one sub-assembly is developed at a given time.  Core software is first developed, then increments are successively developed and integrated.

Another form of evolutionary development consists in relying on modelling, a common practice in the field of engineering.  Producing models makes it possible to specify the needs and desires of the user, either globally or by focussing on certain functions.

A representative model of this approach is the spiral model.  Development according to this model begins with a preliminary analysis of needs which is refined during the initial cycles, taking into account constraints and risk analysis.  The originality of this model is to surround the development itself with phases devoted to risk analysis and to the determination of safety objectives.

At present, there is a strong tendency to prefer the definition of system development models in order to master the development of complex systems (the standard MIL-STD-499B prepared by the American Department of Defense, whose version EIA/IS-632 applies to commercial systems).

In order to harmonise system safety evaluation methods, the ITSEC criteria (Information Technology Security Evaluation Criteria) and an evaluation method, ITSEM (Information Technology Security Evaluation Manual), have been developed.  In this method, the evaluation process is based on two aspects :

  • The study of dependability, which analyses whether or not the system, in its design principle, is apt to fulfil its safety objectives,
  • The study of compliance, which analyses whether or not the safety functions and mechanisms are correctly implemented.

The standard IEC 61508 presents a development model for critical electrical/electronic/programmable electronic systems.  This model presents a generic development process.  The approach adopted distinguishes four levels of criticality.

Depending on the levels of criticality identified for a system and for the software, this standard recommends the application of operation safety methods and techniques.

The various models described above mix fundamentally different activities, development itself and verification, and retain a strict sequencing of activities.

The standard DO-178B, specific to the aeronautics industry, makes this separation between development and verification.  It recommends system structures which use design techniques allowing for partitioning, heterogeneous redundancy and monitoring.  It offers a new software development model, the process model [LAP95].  Its revision B considers that system-level information is a necessary entry point into the software development process.

An explicit operation safety development model is proposed by LAAS-CNRS [LAP95].  It presents a global view and summarises the main activities required to develop a dependable system : fault prevention, fault tolerance, fault elimination and fault forecasting.

The state of the art shows that developing a dependable system requires the integration of operation safety activities throughout the life cycle.  It therefore becomes necessary to be able to certify critical systems, no longer only software.

1.1.3 Obtaining and validating a dependable system

Figure 1.1 shows the relationship between the means used in a “traditional” quality process, which strives for a system free from faults, and the operation safety process, which implements additional operation safety means in order to strive for a system free from failures.

While fault prevention attempts to prevent faults from occurring or from being introduced, fault forecasting attempts to estimate the presence, creation and consequences of faults.

Certain methods of evaluation are entirely ordinal, such as AMDEC (Analyse des Modes de Défaillance, de leurs Effets et de leur Criticité – Analysis of Failure Modes, their Effects and their Criticality) or APR (Analyse Préliminaire des Risques – Preliminary Risk Analysis) ;  others are entirely probabilistic, such as Markov chains.  Finally, certain methods may be used for both aspects, such as dependability diagrams and fault trees.
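As a minimal illustration of the probabilistic methods mentioned above, a repairable component may be modelled as a two-state Markov chain (up/down).  The sketch below is not taken from the report ; the failure and repair rates are invented, and it computes the steady-state availability mu / (lambda + mu) together with a transient availability obtained by numerical integration.

```python
# Hedged sketch: availability of a repairable component modelled as a
# two-state Markov chain.  lambda_f is the failure rate and mu_r the
# repair rate (both per hour); the values used below are invented.

def steady_state_availability(lambda_f: float, mu_r: float) -> float:
    """Long-run fraction of time in the 'up' state: mu / (lambda + mu)."""
    return mu_r / (lambda_f + mu_r)

def transient_availability(lambda_f: float, mu_r: float, t: float,
                           steps: int = 100_000) -> float:
    """Probability of being 'up' at time t, by Euler integration of
    dP/dt = -lambda_f * P + mu_r * (1 - P), starting from P = 1."""
    p_up, dt = 1.0, t / steps
    for _ in range(steps):
        p_up += (-lambda_f * p_up + mu_r * (1.0 - p_up)) * dt
    return p_up

lam, mu = 1e-4, 1e-2                          # illustrative rates only
a_inf = steady_state_availability(lam, mu)    # ~0.990099
a_t = transient_availability(lam, mu, t=1000.0)
```

For a single component the steady state can of course be written in closed form ; the value of Markov models lies in larger state spaces (redundant channels, repair crews), where the same transition-rate reasoning still applies.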


Figure 1.1 : Elements comprising operation safety.

Three types of processes may be distinguished from among the forecasting methods :

  1. The inductive process moves from a particular situation to a more general one.  This is a detailed study of the effects that failures have on a system,
  2. The deductive process moves from a more general situation to a more particular one.  This is the study of the causes of a failure of a system,
  3. The hybrid process is a combination of the two preceding processes.

Methods of evaluation based on probability require a modelling activity which consists in elaborating an analytical model parameterised with the rate of failure of each component in the system.

The two most recognised and most used models are MIL-HDBK-217F and RdF93.  The MIL-HDBK-217F model is recognised internationally and remains the reference in the electronics industry.  The RdF93 model is a reliability data handbook published by CNET in France, intended more particularly for the telecommunications sector.
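The parts-count style of such parameterised analytical models can be sketched as follows.  The component names and failure rates below are invented illustrations, not values from MIL-HDBK-217F or RdF93 ; under a constant failure rate and a series assumption, the system failure rate is simply the sum of the component rates.

```python
import math

# Hedged sketch of a parts-count reliability prediction.  The component
# names and failure rates are invented illustrations, not handbook values.

component_failure_rates = {        # failures per 10**6 hours (assumed)
    "microcontroller": 0.50,
    "power_supply":    1.20,
    "relay_output":    0.80,
    "sensor_input":    0.30,
}

def system_failure_rate(rates: dict) -> float:
    """Series assumption: the failure of any component fails the system."""
    return sum(rates.values())

def reliability(lambda_per_1e6h: float, hours: float) -> float:
    """R(t) = exp(-lambda * t), assuming a constant failure rate."""
    return math.exp(-lambda_per_1e6h * 1e-6 * hours)

lam = system_failure_rate(component_failure_rates)   # 2.8 per 10**6 h
mtbf_hours = 1e6 / lam                               # ~357 000 h
r_one_year = reliability(lam, 8760.0)                # ~0.976
```

The handbooks refine this by multiplying a base rate per component by environment, quality and stress factors ; the summation over components remains the same.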

Fault elimination attempts to reduce the presence of faults, in number and severity, through three phases : verification, diagnosis and correction.  After the correction phase, non-regression must be verified, in order to ensure that eliminating the fault did not have any undesirable consequences.

Fault tolerance attempts to provide a service capable of fulfilling the function or functions despite the faults.

In many sectors of activity, cost restrictions do not allow for hardware redundancy.  However, some fault tolerance techniques may be implemented in order to improve the dependability and safety of electronic systems :

  • Watchdogs to check that the process is not blocked,
  • Timer supervision to ensure processing speed,
  • Input filter networks to remove interference,
  • Protection diodes against overvoltage (transients or load dump),
  • Input tests (limit values or loss of information),
  • Output tests (intelligent power circuits for diagnosis),
  • A checksum on the read-only memory to detect storage errors which affect the software.
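The last technique in the list, the checksum on read-only memory, can be sketched as follows.  The additive 16-bit sum is an illustrative choice only ; real systems often use CRCs, and the ROM contents below are invented.

```python
# Illustrative sketch of the "checksum on read-only memory" technique:
# a checksum of the ROM image is computed and stored at build time; at
# start-up (and periodically) it is recomputed and compared.  The simple
# additive 16-bit sum below stands in for the CRCs used in practice.

def rom_checksum(rom: bytes) -> int:
    """Additive checksum of all bytes, modulo 2**16."""
    return sum(rom) & 0xFFFF

def rom_is_intact(rom: bytes, stored_checksum: int) -> bool:
    """Compare the recomputed checksum with the stored one."""
    return rom_checksum(rom) == stored_checksum

rom_image = bytes([0x12, 0x34, 0x56, 0x78])     # invented ROM contents
stored = rom_checksum(rom_image)                # computed at build time

assert rom_is_intact(rom_image, stored)
corrupted = bytes([0x12, 0x34, 0x56, 0x79])     # one bit flipped
assert not rom_is_intact(corrupted, stored)
```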

1.1.4 Explicit operation safety for mechatronic systems

A global mechatronic system is composed of two main sub-assemblies : the physical system, which comprises the various mechanical, hydraulic, pneumatic, electric, etc. parts, and the electronic system, which integrates the actuators and the sensors, as well as the electronic control unit (equipment and software).  Structuring the control system consists in expressing a global functional view (white box) of the system.

Operation safety studies must be applied when developing the system.  The solution chosen must be justified and accompanied by the traceability of operation safety requirements on each of the three levels of knowledge of the system (global system, electronic control system and software).


Figure 1.2 : Explicit operation safety process.

Establishing safety objectives

A safety objective for the global system may be assigned on the basis of experience feedback from systems previously developed, relying on expert judgement.  The operation safety study must set a realistic objective.  The objective for electronic systems is expressed as a failure rate.

Assigning a safety objective to each function must be in keeping with the dependability objective expressed for the overall electronic system.  Other complementary objectives may be necessary for maintenance actions.

Mechatronic system specification

Once the various feared events have been identified for the mechatronic system, the operation safety requirements must be specified.  The first step is to express the operation safety assurance criteria which will be used to establish confidence in the operation safety level of the system.

The second step is to define the operation safety means which must be implemented to ensure control of operation safety when designing the electronic system.

The operation safety study of the functional and physical structures of the electronic system comprises various activities.  It is mainly accomplished by AEEL (analyse des effets des erreurs sur le logiciel – analysis of the effects of errors on the software) and by fault trees.

Using software fault trees makes it possible to complete the AEEL, in order to guard against design faults.  The study is run on the software structure (specification and/or general design), in order to study the potential failure combinations which cause feared events in the software.  Analysis of minimal cut sets makes it possible to rank the critical elements of the software.
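The minimal cut analysis mentioned above can be sketched on a toy fault tree.  The gate structure and event names below are invented for illustration ; a cut set is a set of basic events whose joint occurrence causes the feared top event, and a cut set is minimal when no strict subset of it is also a cut set.

```python
from itertools import product

# Hedged sketch: minimal cut sets of a small fault tree.  A node is a
# basic-event name (str) or a tuple ('AND'|'OR', [children]); the gate
# structure and event names are invented for illustration.

def cut_sets(node):
    """Return the cut sets of a node as a list of frozensets of events."""
    if isinstance(node, str):
        return [frozenset([node])]
    gate, children = node
    child_sets = [cut_sets(c) for c in children]
    if gate == "OR":            # any child's cut set causes the event
        return [cs for sets in child_sets for cs in sets]
    # AND gate: one cut set per combination of child cut sets
    return [frozenset().union(*combo) for combo in product(*child_sets)]

def minimal_cut_sets(node):
    """Discard any cut set that strictly contains another."""
    sets = set(cut_sets(node))
    return sorted((s for s in sets if not any(o < s for o in sets)),
                  key=lambda s: (len(s), sorted(s)))

# Feared event: unintended output, if the watchdog has failed AND
# (the CPU is faulty OR the software hangs).
tree = ("AND", ["watchdog_failed", ("OR", ["cpu_fault", "sw_hang"])])
mcs = minimal_cut_sets(tree)
# two minimal cut sets of order 2: {watchdog_failed, cpu_fault}
# and {watchdog_failed, sw_hang}
```

Ranking is then immediate : the shorter a minimal cut set, and the more cut sets an event appears in, the more critical the corresponding element.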

Types of software errors – Classes of errors used

  • Calculation error : evaluation of an incorrect equation, incorrect result of an operation.
  • Algorithm error : error in instruction sequencing, incorrect (un)conditional branching, incorrect processing loop.
  • Error in task synchronisation : incorrect synchronisation primitive type, unexpected synchronisation parameter.
  • Error in data processed : error in definition, error in initialisation, error in manipulation, modification of the value of data.
  • Error in interfacing between procedures : error in procedure call, error in procedure output, error in parameter transmission between procedures.
  • Error in transferring data with the environment : error in defining data, error in data transmission, incorrect transfer periodicity.

Table 1.2 : Example of software error typology.

1.2       Evaluating safety software

1.2.1 Specific characteristics of safety software

Software is an intellectual creation comprising the programs, procedures, rules and all associated documents related to the implementation of the programmed system.  Software is materialised by specifications, a code (program) and documentation.

Software development is often difficult to control.  Moreover, software is rarely a finished product;  it evolves from one version to another, within very short periods of time.  It is a paradoxical product, which may become obsolete, but is not subject to wear.  On the contrary, it is best when used frequently.  Finally, software development is essentially devoted to product design and testing and little emphasis is placed on series production.

One of the most important characteristics of software is that it is a product with countless inputs, processing combinations far beyond what the human mind can grasp.  As a consequence, software behaviour cannot be fully apprehended by man.  It is therefore separated into different modules.  Nevertheless, it remains difficult to fully control the complexity of the product.

1.2.2   Evaluating safety software

The problem raised by software evaluation is to obtain justified confidence in the software behaviour.  The software is often analysed according to the method used for its development.

The evaluation is then based on a wide variety of criteria such as its structure, its development process, or the manner in which it was written, even though, in fact, only its behaviour should be evaluated.  This is why it is rather difficult to distinguish between development methods and evaluation methods.  These two types of methods increasingly overlap one another.

Finally, it is interesting to note that there are no specific methods for critical software.  The methods used for critical software and those used for traditional software differ only in the requirements imposed by the standards.  The major difference, in fact, resides in the budget and the time devoted.

Evaluating software may have highly varied meanings.  In general, two levels of evaluation are frequently distinguished : validation and verification.

1.3      Software safety requirements

Expressing software safety requirements, as well as taking these requirements into account and following them up throughout the software development cycle, remains to date within the field of avant-garde projects.  Information concerning these requirements is not widely disseminated and often remains limited to a circle of experts.

Work concerning software safety requirements has been initiated by organisations such as ISdF (Institut de Sûreté de Fonctionnement – Operation Safety Institute), INRS (Institut National de Recherche et de Sécurité – National Institute for Research and Safety), as well as by INRETS (Institut National de REcherche sur les Transports et leur Sécurité – National Institute for Research concerning Transport and Safety), within the framework of research projects such as CASCADE (Certification and Assessment of Safety Critical Application Development) and ACRUDA.  In general, all of this work addresses the needs of the avionics, nuclear and railway transport fields.

Thus, based on information collected concerning industrial practices in the field of software safety, mainly in the defence, transport and space sectors, and on the work and reflections of European groups (PDCS and EWICS/TC7) and national groups (AFCET and ISdF), the ISdF reflection groups have elaborated two synthesis documents : an initial guide to elaborating the safety requirements for the software, intended for the provider, and a second guide to developing software with strict safety requirements, intended for the contracting party.


2.1      Specification requirements

A study run by HSE concerning the primary causes of failure in a population of 34 accidents shows that the largest share (44.1%) was caused by poor specification.

Special attention must be paid to :

  • Adequacy faults.  The match between the need as recognised and the actual need must be ensured,
  • Over-specification.  This may lead to unnecessary restrictions and exclude certain solutions,
  • Under-specification.  This may allow too wide a margin of manoeuvre in the choice of solutions, and may lead to unacceptable choices,
  • Unfortunate consequences of certain requests.  Impossible or non-verifiable objectives should not be specified.  Requests to apply standards or guides should only be made once their contributions and negative impacts have been carefully considered,
  • The form.  It is recommended that the wording remain precise, that phrases such as “in keeping with the rules of the art” be avoided, that terminology be defined, that references from one document to another be avoided, and that there be a constant concern for traceability and verification.

Requirements concerning the product and the processes, which may be applied to the software, its entities and auxiliary services, must be established by contract.  A fair compromise must be found between contractual requirements, which are necessarily severe, and the minimum freedom to be granted to the developer, which preserves his responsibility and motivation.

The formulation of operation safety objectives must be quantitative, in terms of the rate of critical failures, and/or qualitative, with a list of feared events, a qualitative analysis, and the respect of specific procedures and regulations.

From the point of view of software output safety, feared events must be described precisely.  It is recommended that a preliminary list of accepted degraded operating modes be established as early as possible.

It is highly inadvisable to simply formulate quantitative requirements for the software.

2.2      Validation and specification methods

2.2.2 Specification methods

Three types of specification language may be distinguished : specification in ordinary language, semi-formal specification and formal specification.

Ordinary language is often chosen as the specification language because it is the language in everyday use.  It may prove to be ambiguous, contradictory and incomplete, since two major problems persist :

  1. Difficulty of expression : man does not think with words only.
  2. Difficulty of interpretation : a text need not be complex to be difficult to interpret.  A specification may thus be interpreted differently depending on the definitions one possesses, or on those one considers to be the definitions used by the drafter.

Informal specifications written in ordinary language are generally incomplete, incoherent, ambiguous, contradictory and erroneous;  at best, errors introduced are discovered late in the life cycle of the software.  As a consequence, it seems reasonable that they not be used for safety software.

Table 1.3 provides several specification methods and their main characteristics.

Contrary to ordinary language, specifications produced with formal methods are unambiguous and precise, since the semantics of the notations is clearly defined.  For those familiar with the representation used, formal methods are a good means of communication and documentation.

In fact, formal methods are more than a tool for representation ;  they are also a technique for drafting specifications which compels the designer to make abstractions and finally results in a better comprehension and modelling of the specifications.  It is sometimes even possible to run simulations.

Use of a formal method requires considerable investments in time and training.  Formal methods are a significant step forward for the development and evaluation of critical software.
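The pre- and post-condition style found in formal notations can be emulated, very roughly, with executable assertions, which conveys the flavour at a fraction of the investment.  The function and its safety envelope below are invented examples, not taken from the report ; a formal method would state the conditions declaratively and prove them rather than check them at run time.

```python
# Rough emulation of pre/post-conditions with executable assertions.
# The function and its safety envelope are invented examples; formal
# notations such as VDM or B state such conditions declaratively and
# discharge them by proof rather than by run-time checks.

def command_speed(requested: float, max_safe: float) -> float:
    """Clamp a requested speed to the safe envelope.

    pre:  max_safe > 0
    post: 0 <= result <= max_safe
    """
    assert max_safe > 0, "precondition violated"
    result = min(max(requested, 0.0), max_safe)
    assert 0.0 <= result <= max_safe, "postcondition violated"
    return result

assert command_speed(120.0, 90.0) == 90.0   # clamped to the envelope
assert command_speed(-5.0, 90.0) == 0.0     # negative requests rejected
```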

Table 1.4 provides a non exhaustive list of formal methods.


Method – Place in the life cycle and purpose of the method – Description

  • Petri net (Réseau de Pétri).  Method based on transition systems, using tokens and places.  It makes it possible to demonstrate properties such as absence of blocking, liveness or fairness of a set of co-operating processes.  It is often used to specify parallelism and synchronisation.

  • Specification method based on transition systems.

  • SADT (Structured Analysis Design Technique).  Specification, design.  Graphic specification method.  It uses boxes to represent data or activities and arrows to represent the flows between these data or activities.  It is sometimes described as a semi-formal design method and is often used in industry.

  • SA-RT (Structured Analysis Real Time).  Real-time extension proposed for the structured analysis (SA) method of E. Yourdon and T. De Marco.  One of the most widely used structured software analysis methods for real-time applications.

  • Z.  Specification, design.  Formal specification language based on the Zermelo theory of sets.  It makes it possible to express the functional conditions of the problem, translated into set notation.

  • SDL.  Specification and functional description language.  It is the subject of a CCITT standard.

  • CCS.  Formalism used to describe the semantics of parallelism.  It is based on process algebra and remains very abstract, and can hardly be used to draw useful conclusions.

  • CSP.  Presents the same characteristics as CCS.

Table 1.3 : Specification methods which contribute to evaluating software.
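The token-and-place semantics of Petri nets can be sketched in a few lines : a transition may fire when each of its input places holds a token, and firing consumes those tokens and produces tokens in the output places.  The mutual-exclusion net below, two processes sharing one resource, is an invented example.

```python
# Hedged sketch of Petri net token semantics.  The mutual-exclusion net
# below (two processes A and B sharing one resource) is invented for
# illustration.

marking = {"idle_A": 1, "idle_B": 1, "mutex": 1, "crit_A": 0, "crit_B": 0}

transitions = {                      # name: (input places, output places)
    "enter_A": (["idle_A", "mutex"], ["crit_A"]),
    "exit_A":  (["crit_A"], ["idle_A", "mutex"]),
    "enter_B": (["idle_B", "mutex"], ["crit_B"]),
    "exit_B":  (["crit_B"], ["idle_B", "mutex"]),
}

def enabled(name: str) -> bool:
    """A transition is enabled when every input place holds a token."""
    inputs, _ = transitions[name]
    return all(marking[p] >= 1 for p in inputs)

def fire(name: str) -> None:
    """Consume the input tokens and produce the output tokens."""
    assert enabled(name), f"{name} is not enabled"
    inputs, outputs = transitions[name]
    for p in inputs:
        marking[p] -= 1
    for p in outputs:
        marking[p] += 1

fire("enter_A")
assert not enabled("enter_B")   # mutual exclusion: B cannot enter
fire("exit_A")
assert enabled("enter_B")       # the resource is free again
```

Properties such as absence of blocking can then be checked by exhaustively exploring the reachable markings, which is what Petri net analysis tools automate.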


Method – Place in the life cycle and purpose of the method – Description

  • VDM (Vienna Definition Method).  Static evaluation.  The oldest and best established formal specification language ;  it is also a development method.  It combines concrete notions, such as data types, and abstract notions, such as the theory of sets.  Before-after predicates (pre- and post-conditions), in which what does not change must be stated explicitly, guide the refinement of the specifications.  Proofs are required and written using a three-valued logic (True, False and Undefined) ;  this particular logic does not simplify the establishment or the verification of proofs.  There is no mechanism for decomposing or composing specifications or refinements.  VDM has been chosen by the EEC, the British Standards Institution committee and ISO as the basis for developing a specification language standard.

  • B.  Static evaluation.  Formal specification method based on the theory of sets and first-order logic.  Specifications are modelled using abstract machines.  These machines, inspired by the object-oriented design approach, have three parts : the first describes the state and properties of the machine ;  the second specifies the operations which make it possible to modify the state ;  and the third records the composition links with other machines.  Specifications are developed using vertical iterations, by refinement, and horizontal iterations, by machine construction.  Proof obligations are obtained by a substitution calculus.  The B method is implemented by Atelier B, which strives to cover the entire development of the software, from the specifications and the production of proof obligations to code generation.

  • RAISE.  Static evaluation.  Set of tools which uses a specification formalism referred to as RSL and which combines the VDM method and process algebra.

  • Static and dynamic evaluation.  Combines a formal method and a traditional software workshop.  Specifications are written in PDL (Program Description Language), making it possible to define abstract machines.  Development is done manually, using refinement and the generation of proof obligations.  Finally, a series of statistical tests makes it possible to evaluate the dependability of the software developed.

  • FDM (Formal Development Methodology).  Static evaluation.  Combines a specification language (Ina Jo) and an assertion drafting language (Ina Mod).  It implements abstract machines and refinements justified by proof obligations.  It is supported by an interactive theorem prover, ITP, which handles the proofs but remains rather limited.  FDM is certified by the US National Computer Security Centre for safety applications.

Table 1.4 : Formal methods which contribute to software evaluation.

2.2.3 Methods of validation

The first step is to make certain that the software specifications comply with the user needs.  This verification is relatively difficult to perform, since the user often expresses his needs in an informal, incomplete, imprecise or even incoherent manner.  This activity therefore rests mainly on the experience and know-how of experts in the field.

The second level of evaluation corresponds to moving from the specifications to the final code ;  the final code must comply with the software specifications.  This evaluation is in fact devoted to the software development process.  Its success depends on the methods and tools provided by software engineering.

The third level of evaluation, corresponding to moving from the final code to the software behaviour, consists in executing the final code to check the software behaviour.  This level of evaluation is based on dynamic methods and is largely automated.

The fourth level of evaluation consists in making certain that the final code is in compliance with the user needs.  For the same reasons as those expressed for the first level of evaluation, which are inherent to the nature of the user needs, this compliance is very difficult to demonstrate.

The fifth level of evaluation, corresponding to moving from the specifications to the software behaviour, consists in checking that the software behaviour complies with what is described in the specifications.  This activity was previously referred to as verification.  It is also described in the literature as the answer to the question, “Have we built the software correctly?”.  To date, this evaluation may be done by tests whose input sets have been elaborated from the specifications.

In the longer term, and through more elaborate methods, this evaluation may be performed by program synthesis techniques or specification simulation techniques.
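As a minimal, hypothetical illustration of the fifth level of evaluation (test cases derived directly from a specification), consider a specification rule stating that an emergency stop must always force the output to a de-energised state.  The function name, inputs and rule below are assumptions for illustration only, not taken from the report:

```python
def actuator_output(emergency_stop: bool, demand: bool) -> bool:
    """Compute the actuator output.

    Hypothetical specification rule: whenever the emergency stop is
    active, the output must be de-energised (False), regardless of
    the operator demand.
    """
    if emergency_stop:
        return False
    return demand


# Test cases derived directly from the specification text,
# covering every combination of the two inputs.
for e_stop in (False, True):
    for demand in (False, True):
        out = actuator_output(e_stop, demand)
        if e_stop:
            assert out is False, "safety rule violated"
        else:
            assert out is demand, "functional rule violated"
```

Because the test set is enumerated from the specification rather than from the code, it checks the behaviour against “what to do”, not against the implementation’s own structure.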

Finally, the sixth level represents the total evaluation activity.



Figure 1.3 : Software evaluation activities.

2.3      CASE tools for specification and validation

The first CASE tools (Computer Aided Software Engineering) were developed from 1985 onwards to help software developers better understand and apply functional analysis methods and specification methods.

Despite the diversity of the tools available on the market today, our research to identify those which enable safety software specification proved fruitless.

Appendix 4.2 contains sheets for the nine best-known products.

Although oriented towards safety software development, the SCADE product (Safety Critical Application Development Environment) by VERILOG does not have a sheet, since it is not well adapted to small and medium-sized applications.

2.4      Specification and validation procedure

The role played by specification for operation safety is to make explicit “what to do”, as a result of refining the specifications after functional analysis and preliminary risk analysis.  It forms the interface between the analyst and the software designer, specifying the safety restrictions, such as execution time, inputs and outputs, the behaviour desired in case of failure, etc.
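As a hypothetical sketch of how such safety restrictions might be recorded in a machine-checkable form, the fragment below captures a timing constraint and a required failure behaviour per requirement.  All identifiers, field names and values are assumptions for illustration, not drawn from the report:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class SafetyRequirement:
    """One safety restriction from the specification (hypothetical fields)."""
    identifier: str
    description: str
    max_execution_ms: float   # timing constraint on the safety function
    failure_behaviour: str    # required state on detected failure


# Illustrative entries only.
requirements = [
    SafetyRequirement("SR-01", "Guard interlock response",
                      max_execution_ms=50.0,
                      failure_behaviour="de-energise outputs"),
    SafetyRequirement("SR-02", "Two-hand control simultaneity check",
                      max_execution_ms=20.0,
                      failure_behaviour="inhibit cycle start"),
]


def check_timing(req: SafetyRequirement, measured_ms: float) -> bool:
    """Return True if a measured execution time meets the requirement."""
    return measured_ms <= req.max_execution_ms


assert check_timing(requirements[0], 42.0)
assert not check_timing(requirements[1], 25.0)
```

Keeping the restrictions in a structured form like this makes them traceable from the specification through to the acceptance tests, which is the interface role described above.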

We contacted 7 French companies to survey the methods and tools used in the software development process, notably for critical software.  Very few companies use specific methods.  Among the companies which systematically implement software engineering methods and tools are those involved in the automobile and nuclear industries.

It was very difficult for us to establish a procedure based on industrial practices concerning critical software specification.  The interviews conducted with specialists from various backgrounds enabled us to synthesise the following specification procedure:

The specification phase is often based on the experience of the person in charge of drafting the specification.

It is necessary to :

-    provide a precise definition of the composition and role of the analysis and specification team

-    have final users intervene early

-    plan project reviews and their contents

-    have the client express his needs as extensively as possible

-    facilitate client/developer communication

  • for specification, in so far as possible, use an adequate method and possibly an adequate tool (CASE tools ensure the consistency of the data dictionary and help avoid systematic transcription faults)

-    a good drawing is worth 1,000 words

-    imagine being in the client’s position and adopt his logic and his manner of expressing himself

-    use neutral vocabulary for both parties, client / specification person or team

-    encourage the client to enter into the developer’s logic, so that he may formalise his need better

-    the client will thus express the needs he considers “evident” and therefore not necessary to state

-    begin by making a functional analysis and a specification of the overall system

-    make as complete a description as possible of the environment-operator-application interactions

-    define the role of the operator

-    take into account the application ergonomics : screens, alarm control, diagnostic, interventions for maintenance, etc.

-    divide and conquer

-    wait for the right moment in the system description before dividing specification tasks among the members of a team

  • decrease the complexity by carefully chosen divisions
  • minimise the information exchange flows

-    do not decompose the system into more than three levels, since complexity increases quickly and overall control may be lost

-    decompose critical functions into primitive functions.  Impasses may thus be highlighted.

-    in parallel, during specification, specify how the expected objectives will be checked (acceptance files) and the means necessary to complete the verification

-    take the reference documents fully into consideration (standards, guides, technical documents, etc.) before referring to them in the specifications

-    select elements which can be realised and measured, in relation to the size of the application and the structure and culture of the company which develops the software

-    validate the specifications by an internal audit type action done by a person / team other than that concerned by specification and development

-    include the final user of the application

3.         CONCLUSION

Given that few software fault models exist (software faults being essentially design faults), the software operation safety procedure to be established must be based on the combined implementation of various complementary techniques.

Over and above the development process, it has been shown that organisational and management dimensions play an important role in safety software projects.  Expressing safety requirements for the software, and taking these requirements into account and tracking them throughout the development cycle, is absolutely necessary.  In addition to operation safety activities, the procedure must also include “quality” activities depending on the criticality of the software developed.

There is a great difference between software development practices and theoretical works which treat the subject.

It is difficult to elaborate an operation safety methodology for small and medium-sized companies.  Organisational restrictions, and more particularly the lack of an operation safety culture, greatly complicate the elaboration of an operation safety process.  There are no specific safety software specification tools for small and medium-sized applications.

Safety software specification is a very important phase in the life cycle.  The procedure proposed highlights the prime importance of the role of communication between the person (team) doing the specification and the final user (client) and the necessity to use adapted methods and tools.

4.         APPENDIXES

(see pdf file)