[SIGCIS-Members] Correctness and verification

Troy Astarte Troy.Astarte at newcastle.ac.uk
Mon Jul 6 04:10:24 PDT 2020


Dear Matt, and all,

The questions of what is considered “right” and “wrong” are always deeply sociological in nature, and the role of the machine in enforcing strict categories frequently leads to the creation of additional error states: sometimes inadvertently, and sometimes as a reflection of wider societal norms. For a recent discussion of this concept in relation to gender, see Hicks, Marie, “Hacking the Cis-tem,” IEEE Annals of the History of Computing 41.1 (2019): 20-33. Last year’s (October 2019) SIGCIS conference was on precisely this topic; you can find the programme here: http://meetings.sigcis.org/uploads/6/3/6/8/6368912/program_final.pdf.

Let me jump to a totally different way of looking at this problem, one closer to both your initial mention of the FDIV bug and Tom’s and Brian’s emails below. As discussed, “formal methods” is a part of computer science that primarily concerns itself with reducing errors in computing—particularly of the “mathematical” kind you mentioned. A recent workshop, also in October 2019, was on the topic of the history of formal methods; you can find the workshop webpages here: https://sites.google.com/view/hfm2019/. Proceedings are in press and will appear in Lecture Notes in Computer Science. (If you let me know off-list, I can send you a notification when they are published). You may also find interesting the special issue of Formal Aspects of Computing which contains some personal accounts of formal methods from a historical perspective: https://link.springer.com/journal/165/31/6. Finally, my own work is in the history of formal methods, and you can find abstracts and pre-prints on my webpage: http://homepages.cs.ncl.ac.uk/troy.astarte/.

I hope some of this is of interest to you!

Best,

Troy Astarte

School of Computing
Newcastle University

On 4 Jul 2020, at 23:16, Brian Randell <brian.randell at newcastle.ac.uk> wrote:


Hi Matt:

Thomas Haigh’s pair of messages does an excellent job of outlining and describing some of a large body of computer science literature relevant to Matt’s question. However, the topics he covers, such as software errors, formal methods, correctness proofs, etc., can themselves be seen as part of a more general study, called dependability or trustworthiness, which provides a conceptual basis and body of knowledge for considering and dealing with such all-too-real possibilities as inadequate or absent specifications and incomplete or erroneous “proofs”, whether of software or hardware. Some references:

Schneider, Fred B. (Ed.). Trust in Cyberspace: Report of the Committee on Information Systems Trustworthiness, Computer Science and Telecommunications Board, Commission on Physical Sciences, Mathematics, and Applications, National Research Council. Washington, D.C.: National Academy Press (1999), 332 pp. [Full text available at: http://www.nap.edu/readingroom/books/trust/]

Cofta, Piotr. Trust, Complexity and Control, John Wiley (2007) 310 pp. [ISBN 0470061308]

and if I may be permitted:

Avizienis, A., Laprie, J.-C., Randell, B. and Landwehr, C. “Basic Concepts and Taxonomy of Dependable and Secure Computing,” IEEE Transactions on Dependable and Secure Computing, vol. 1, no. 1 (2004), pp. 11-33.

Cheers

Brian Randell



On 4 Jul 2020, at 18:58, thomas.haigh at gmail.com wrote:

Returning for part II of the answer, on actual bugs.

There is a huge computer science literature relevant to Matt’s question, but the key words are “correctness,” “formal methods,” “specification language” and “verification” rather than “error” or “bug.”

With the Pentium FDIV bug, IIRC the problem was a handful of missing entries in a lookup table used by the chip’s floating-point division algorithm, which caused certain divisions to return slightly wrong answers. That’s an example of a situation where a finished system doesn’t perform in line with requirements. But how are requirements expressed? Typically with a written specification that leaves many things ambiguous. Going back to the 1950s, a lot of work in systems analysis was aimed at coming up with methods to get specifications right so that the systems built from them would do what was required.
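(As an aside, the check that exposed the flaw fits in a few lines. Below is a minimal sketch in Python, using the operand pair widely attributed to Thomas Nicely’s original report; the figures quoted for the flawed behaviour are as reported at the time, not re-verified here.)

    # Nicely's check, in miniature: divide, multiply back, and look at the residual.
    # On correct hardware the residual is essentially zero (at most a rounding-error
    # sized quantity); on the affected Pentiums the division itself reportedly came
    # back wrong around the fifth significant digit, leaving a residual of about 256.
    x, y = 4195835.0, 3145727.0
    ratio = x / y              # correct value is about 1.33382044...
    residual = x - ratio * y   # expect ~0.0 on a correct FPU
    print(ratio, residual)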

A problem in the final system might be a result of a mistake in specification or in implementation. The computer science answer to this was to express specifications completely and unambiguously in mathematical terms and then prove that the final software/hardware would always do what the specification said. Both tasks were enormously difficult. A lot of the history is told in MacKenzie, Donald. Mechanizing Proof: Computing, Risk, and Trust. Cambridge, MA: MIT Press, 2001, which is an extremely good book full of very clear explanations of difficult topics. This history includes the intervention of a philosopher, J. H. Fetzer, who IIRC said that claiming to prove any correspondence between a mathematical specification and a material object is a category error. So I’m sure there are pointers there to follow up for more recent philosophical work on the topic.
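To make the specification/implementation distinction concrete, here is a minimal sketch of my own (not drawn from MacKenzie) in Python. The assertions stand in for a mathematical specification of integer square root; a correctness proof would have to show that every execution of the loop terminates in a state satisfying the postcondition, and Fetzer’s point was that such a proof concerns the abstract program text, not the physical machine that runs it.

    # Specification: given n >= 0 (precondition), return r such that
    # r*r <= n < (r+1)*(r+1) (postcondition). The proof obligation is to show
    # that the loop always terminates with the postcondition holding.
    def isqrt(n: int) -> int:
        assert n >= 0                              # precondition
        r = 0
        while (r + 1) * (r + 1) <= n:              # invariant: r*r <= n
            r += 1
        assert r * r <= n < (r + 1) * (r + 1)      # postcondition
        return r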

I gave some autobiographically grounded thoughts on some of this in a recent paper, “Assembling a Prehistory for Formal Methods,” which also points to some papers by Niklaus Wirth and Cliff Jones giving insider views on the history: http://www.tomandmaria.com/Tom/Writing/FormalMethodsHistoryPreprint.pdf. My suggestion is that the formal methods movement was in large part an outgrowth of the original impetus and core community behind both the Algol effort and the 1968 Software Engineering Conference (which had less to do than is generally suggested with what was offered under the banner of “Software Engineering” by the 1980s). That paper appeared in a special issue on the history of formal methods, which includes other, more technical, items of possible interest: https://link.springer.com/journal/165/31/6.

Editing the Turing Award website (particularly ongoing work to add video clips to profiles) has also reminded me that several awards were given for work in this area, and in particular that the “model checking” approach has had a huge impact on how chip designs are tested. So Matt might want to look at the following entries:

https://amturing.acm.org/award_winners/hoare_4622167.cfm
https://amturing.acm.org/award_winners/sifakis_1701095.cfm
https://amturing.acm.org/award_winners/emerson_1671460.cfm
https://amturing.acm.org/award_winners/clarke_1167964.cfm
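To give a flavour of what the model-checking idea amounts to, here is a toy sketch (in Python, and nothing like the symbolic techniques those awards recognise): it exhaustively enumerates every interleaving of a deliberately naive two-process locking protocol and searches the reachable states for a violation of mutual exclusion.

    from collections import deque

    # Program counters: 0 = idle, 1 = has seen the flag free, 2 = in critical section.
    # The (intentional) bug: testing the shared flag and setting it are separate steps.
    def successors(state):
        pcs, flag = list(state[:2]), state[2]
        for i in (0, 1):
            step = None
            if pcs[i] == 0 and flag == 0:
                step = (1, flag)       # observe that the flag is free
            elif pcs[i] == 1:
                step = (2, 1)          # set the flag and enter the critical section
            elif pcs[i] == 2:
                step = (0, 0)          # leave and release the flag
            if step is not None:
                new_pcs = pcs.copy()
                new_pcs[i], new_flag = step
                yield (new_pcs[0], new_pcs[1], new_flag)

    # Breadth-first search of the whole state space for a state satisfying `bad`.
    def check(initial, bad):
        seen, frontier = {initial}, deque([initial])
        while frontier:
            s = frontier.popleft()
            if bad(s):
                return s               # a reachable counterexample
            for t in successors(s):
                if t not in seen:
                    seen.add(t)
                    frontier.append(t)
        return None                    # the property holds in every reachable state

    # Mutual exclusion is violated: the checker finds a state like (2, 2, 1).
    print(check((0, 0, 0), lambda s: s[0] == 2 and s[1] == 2))

Real tools work over vastly larger state spaces, and symbolically rather than by brute enumeration, but the underlying idea of exhaustively checking a property over all reachable states is the same.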

Best wishes,

Tom


From: Members <members-bounces at lists.sigcis.org> On Behalf Of Matthew Kirschenbaum
Sent: Friday, July 3, 2020 12:55 PM
To: members <members at sigcis.org>
Subject: [SIGCIS-Members] the nature of computational error

Hello all,

I am interested in a better understanding of the nature of computational error. My sense is that actual, literal (mathematical) mistakes in modern computers are quite rare; the notorious Pentium bug of the early 1990s is the exception that proves the rule. Most bugs are, rather, code proceeding to a perfectly correct logical outcome that just so happens to be inimical or intractable to the user and/or other dependent elements of the system. The Y2K "bug," for instance, was actually code executing in ways that were entirely internally self-consistent, however much havoc the code would wreak (or was expected to wreak).
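A tiny illustration of the point (purely hypothetical, in Python): with years stored as two digits, the arithmetic below is perfectly self-consistent; it simply answers a different question than the one the programmer had in mind.

    # Years stored as two digits, as many legacy systems did.
    birth_year, current_year = 75, 0    # i.e. 1975 and 2000
    age = current_year - birth_year
    print(age)                          # -75: exactly what the code specifies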

Can anyone recommend reading that will help me formulate such thoughts with greater confidence and accuracy? Or serve as a corrective? I'd like to read something fundamental and even philosophical about, as my subject line has it, the nature of computational error. I'd also be interested in collecting other instances comparable to the Pentium bug--bugs that were actual flaws and mistakes hardwired at the deepest levels of a system.

Thank you-- Matt


--
Matthew Kirschenbaum
Professor of English and Digital Studies
Director, Graduate Certificate in Digital Studies
Printer's Devil, BookLab
University of Maryland
mgk at umd.edu

—

School of Computing, Newcastle University, 1 Science Square,
Newcastle upon Tyne, NE4 5TG
EMAIL = Brian.Randell at ncl.ac.uk   PHONE = +44 191 208 7923
URL = http://www.ncl.ac.uk/computing/people/profile/brianrandell.html

_______________________________________________
This email is relayed from members at sigcis.org, the email discussion list of SHOT SIGCIS. Opinions expressed here are those of the member posting and are not reviewed, edited, or endorsed by SIGCIS. The list archives are at http://lists.sigcis.org/pipermail/members-sigcis.org/ and you can change your subscription options at http://lists.sigcis.org/listinfo.cgi/members-sigcis.org


