Peter B. Ladkin

Abstract: I consider the relative paucity of philosophical-logical studies on the foundations of system engineering. I enumerate some of the questions, and indicate why I think them important. I describe some of the temporal and intellectual history which led me to consider these questions, and acknowledge my intellectual debts, as well as explaining why this work is written as a collection of relatively independent essays.


The Questions

It seems to me that there is a dearth of studies on the logical and philosophical foundations of engineering. This is all the more surprising in a world in which computers are ubiquitous and people are concerned about their use, trustworthiness and safety - and in which the general public is increasingly unwilling to take the experts' word for it that everything is in order. Do we experts in fact know what we're doing? Unless we can answer some fundamental questions about our work, the answer must honestly be no. I believe we have a moral obligation to try to remedy this situation. This book is an attempt to open the discussion by introducing and answering some of those fundamental questions.

So, what is the logical situation of specification, requirements, design, software, and digital hardware, with respect to `the world' - or even with respect to each other, as components of an artifact? What can we prove mathematically, and what features stand no chance of being mathematically proven? What does it mean for a system to fail, and how do we tell? Conversely, what does it mean for a system to be `correct'? How can we explain or reason about failure? What are the features of the world that are relevant in considering these questions? What ontology do we need; what reasoning power do we need; what is the logical form of our assertions about systems? All these are natural questions to a philosophical logician, and in my view they have scarcely begun to be answered. I offer views, and arguments for those views, in this series of essays.

Although I am acting the philosopher, I shall be content with playing the investigative journalist: whilst I hope that my arguments are correct, I shall consider myself to have succeeded simply if this work opens up discussion. For it is the discussion and attention which matter most in answering these questions, and it is of less intellectual relevance who actually turns out to be right. I do strongly believe, however, that there is a fact of the matter as to whether certain reasoning is correct or incorrect, and whether premises (and therefore conclusions, if the reasoning is correct) are true.

The History

So much for the rationale. What about the history? It starts vaguely in early 1993, when I wrote a couple of short essays on these topics, but further reflection indicates a longer genesis. Basically, I have always been interested in the topics and methods of philosophical logic, and have formally studied and then kept abreast of them since the early 1970s. However, I produced no written work in the area, and my interest in applications meant that I had become a computer scientist by the 1980s. In 1988, the Communications of the ACM published an article by a philosopher, James Fetzer, which became very controversial. The editor, Peter Denning, had noted the reactions to the work before publication and concluded that the issues raised were significant. Fetzer claimed that program verification was impossible, produced a number of rather complicated arguments to that effect, and also wanted to conclude that governments should be aware that they were contributing money to the pursuit of the impossible. Although I felt that Fetzer's philosophical reasoning was flawed (from which it follows that his political argument loses its force), much of the correspondence had, I felt, the intent of defending a fruitful area of computer science against the funding axe. This concern was justified, both because verification is very hard and technically detailed, so that even good progress can seem slow, and in light of the significant funding cuts applied in this era, both in Britain (Lighthill) and in the US (DARPA ISTO under Traub), to computer science studies falling under the heading of `artificial intelligence'. While these policy discussions are important, it seemed to me then, and still does, that no one had handled Fetzer's philosophical arguments in very much detail. Being funded by the US DoD at the time to work on various aspects of `new-generation' software engineering, I had little time then to devote to the philosophical-logical issues, even though I considered them of paramount importance.

In late 1992, when I was visiting various European research sites, I joined a private discussion group on computers and aircraft, which included many of the computer scientists I knew working in verification and safety, as well as people with aeronautical and avionics backgrounds, both pilots and engineers. Questions about what systems are and how to think about them came up regularly, and now I had the opportunity to think about them. I produced a counterexample to Fetzer's argument, and gave a talk on it to computer scientists at Newcastle. To my (continuing) surprise, and especially in light of a counterexample, I found that many computer scientists think that Fetzer's argument is fundamentally correct. Leslie Lamport confirmed to me that many computer scientists had strong views on the subject, even though these views and the arguments for them were not necessarily aired in technical publications. In my view, arguments which are not aired and not publicly discussed are less likely to have their strengths and weaknesses generally understood. On submitting my article to a `formal methods' journal, I also found that the informal methods of argument were such that conclusions seemed to drive reasoning rather than the other way round - reviewers (of two versions of the work) suggested that my conclusions against Fetzer were inappropriate, yet their critiques of my arguments were rather easily, some of them trivially, countered. This entire situation puzzled me, and still does. I accept that it occurs, but I do not understand it.

I was by now convinced of the need for thorough discussion of the logical foundations of software and system engineering. Denning and Fetzer (along with Brian Smith, of whose work I was unaware at the time) on one side, and Hoare on the other side, had taken the original steps. The argument had generated a brief spurt of activity and faltered. Meanwhile I was aware of parallel efforts on related themes. Here are the three that I think most influenced me, in alphabetical order. Leslie Lamport was constructing a system for algorithm verification, based on temporal logic, designed down to a level of detail which attempted to dot all the i's and cross the t's. Leslie's treatment of basic questions of engineering in the syntax and semantics of TLA is impressive. Using TLA more and more, I also became aware that this work had potentially wider application than that which Leslie wanted to consider, and that there was more work to do in this direction. Leslie's views on temporal logic and its application, and on applied logic in general, are always interesting, and mostly right. Except, of course, where he disagrees with me. Nancy Leveson was working on a book which discussed software and computer safety in the context of engineering, and had found many discrepancies in the way that engineers and computer scientists considered and handled many common issues. Leveson's discussion of how the fundamental concepts of engineering failure, such as fault, failure, accident, and hazard, apply to digital-logic-based systems I find uniquely illuminating. However, contrary to Leveson's intention, I believe her discussion illustrates some fundamental semantic difficulties with these notions, and I hope to sort these out soon (I have written on this topic, and an essay may appear in this collection shortly - many others also believe that these concepts should be better understood). Although I read her book only recently, I have engaged in lengthy, if sporadic, email discussions on these themes with Nancy. Finally, I became aware, inexcusably belatedly, in 1995, of Brian Smith's influential and oft-reprinted article, `The Limits of Correctness in Computers', which predates the publication of Fetzer's article. Some of Smith's arguments follow lines similar to some of Fetzer's, and are in my view mostly mistaken, as I show in my essay in this volume.

Another impetus came from the introduction of so-called `fly-by-wire' aircraft into scheduled passenger service with airlines. Being a keen pilot myself, I became involved in technical discussions of the risks and of the initial accidents. Companies such as Airbus claimed that this was a routine evolutionary step in the development of commercial aircraft. Some computer scientists and a large proportion of the public worried that it was more like a leap into the relatively unknown. The argument on one side was that well-engineered hydro-mechanical components had simply been replaced by lighter, and often more reliable, electrical components. The argument on the other side was that programmers and designers back in the factory, rather than pilots on the spot, were now deciding how the aircraft should behave in response to difficult situations. Who's right? Either side? Both sides? Neither side? There is no hope of beginning to answer the question satisfactorily until the fundamental philosophical-logical issues have been addressed.

While I am interested in foundational questions, my roving eye constantly peeks at applications. I'm not really happy with the one without the other (this explains in part the waning of my early intellectual interest in axiomatic set theory - there aren't too many applications for the very high cardinal infinities, even in pure logic, though if my colleague Shaughan Lavine is right (Lav95), maybe we can translate them away and get something useful out of it after all). Not content with discussing whether and how programs can be proven correct, or what the right relation of requirements to design specification is, I observed that reasoning about failure, as exemplified in accident reports, was more art than science. While investigation itself remains a fine engineering art, I believe there is objectivity to reasoning, so I set about trying to categorise and formalise this reasoning. Accident reporting is about determining causal factors, the so-called `probable cause' and `contributing factors'. I was lucky to be interested in aviation, because aviation accident reporting has the highest investigative standards of any engineering domain subject to regular accidents (luckily for us, nuclear power accidents are relatively few and far between). I was also lucky to have had the logical groundwork laid for me already by David Lewis, Donald Davidson, Georg Henrik von Wright and their commentators. Using this work, I have tried to demonstrate by example what I mean by an objective causal history, and how writing one would improve on current reporting practice.

The Essays

Why is this book a collection of essays, rather than a single continuous argument? The essays treat diverse but related topics, which leads to a certain amount of repetition. But it also means that each essay-chapter may be read independently. I chose modularity, and therefore, I hope, intellectual efficiency, over spatial efficiency. There are several reasons for this choice.

Firstly, flipping backward and forward in an electronic document is not yet as easy as in a book. Keeping one's fingers in the pages, looking in an index, even scanning text rapidly for that crucial sought-after argument, still seems to be easier with paper than with keyboard and mouse. The comparative discomfort of electronic indexing may also be alleviated by repeating crucial points where they are needed or referred to - essential in electronic essays, but also valid in paper books.

Secondly, repetition of an argument or a point can be intellectually efficient. Even in purely formal-logical arguments in natural-deduction systems, it is recognised that an assertion may be repeated as often as needed, in order to contribute to the clarity of the formal argument. So too with arguments in natural language.
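For readers who like to see the formal analogue, here is a minimal sketch, in a Fitch-style layout of my own choosing (not tied to any particular proof system discussed in these essays), of the reiteration rule, under which a previously established assertion may simply be restated wherever it is needed within a derivation:

\[
\begin{array}{lll}
1. & P                 & \text{premise} \\
2. & \quad Q           & \text{assumption} \\
3. & \quad P           & \text{reiteration of line 1} \\
4. & Q \rightarrow P   & \text{conditional proof, lines 2--3}
\end{array}
\]

The repetition at line 3 adds nothing logically new; it simply makes the local stretch of argument self-contained, which is exactly the role repetition plays in these essays.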

Thirdly, repetition of an argument wherever it is used gives a clear feeling for its relative importance in the general scheme of related argumentation. Davidson's repeated point, in his Essays on Actions and Events, about what is meant by the logical form of a sentence and how one may determine whether one has an adequate form, bears no scars from its constant retelling. One obtains a feeling for its fundamental place in Davidson's arguments.

Fourthly, an argument is rarely repeated verbatim. Subtle, and perhaps seemingly unimportant, differences can help a reader understand what the deep form of an argument is, as contrasted with the variable form necessitated in each instance by constraints of syntax, writing style, and context. Repetition of an argument also allows one to see what it is used for (for example, as an end in itself, to establish its conclusion, to ascertain its level of generality, or to generate funding). Without varying the context, one cannot see what is fundamental to an argument and what is not.

Fifthly, these essays were originally written as such, and a lot of work could have gone into converting them into a monograph without thereby clarifying any of the substantial points made, or even improving the style in which the essays are written. I often prefer reading related but independent essays, for their modularity, style and flow of argument, to reading an entire book. Call this the argument from laziness if you will; it still has force, and it is also traditional in philosophy. Furthermore, I have not come across a reader who has read or even considered more than one of these essays at a time. There may even be an argument here for splitting them up further. But I like the current structure.

Acknowledgements

Thanks are due to my continuing correspondents on these issues, sympathetic or not, in particular Robert Dorsett, Chris Johnson, Leslie Lamport, Nancy Leveson, Peter Mellor, and Peter Neumann, as well as to those with whom I have discussed particular papers - Leslie and Chris again, as well as John Dobson, Cliff Jones, Jonathan Moffett, Ev Palmer and Barry Strauch.

I owe an obvious intellectual debt to Peter Denning, James Fetzer, Tony Hoare, and Brian Smith for starting the discussion, as well as to David Lewis, Donald Davidson, and the US National Transportation Safety Board for their contributions to some of the answers.

The essay analysing the causal history in the Cali accident report is joint work with my students Thorsten Gerdsmeier and Karsten Loer. Thorsten has also devised a method of generating Why-Because Graphs automatically by writing them in the programming language DATR. Use of the name `Why-Because Graph', rather than the earlier `causal hypergraph', was suggested by Ev Palmer.

References

(Lav95): Shaughan Lavine, Understanding the Infinite, Harvard University Press, 1995.

