[SIGCIS-Members] the nature of computational error

Allan Olley allan.olley at utoronto.ca
Sat Jul 4 11:26:05 PDT 2020


Hello,
 	I would second Paul Edwards's suggestion and add that, depending 
on what kind of error and what kind of analysis you are looking for, 
it might be worth checking out Donald MacKenzie's book, which expands on 
the subject of the essay he mentioned: Mechanizing Proof: Computing, Risk, 
and Trust (MIT Press, 2001).
 	My vague sense is that errors like the Pentium error are 
not actually that rare. Flaws or suboptimal arrangements of hardwired code 
are noticed, and workarounds developed, now and then in various 
iterations of popular chip designs; they are usually just much more 
obscure technical problems. Looking at the Wikipedia page for the Pentium 
error, I see reference to at least one other Pentium chip error that had 
been discovered: https://en.wikipedia.org/wiki/Pentium_FDIV_bug 
https://en.wikipedia.org/wiki/Pentium_F00F_bug
 	Depending on what you mean by error, one of the deadlier errors 
known in computing may actually be bad interface design. There was a brand 
of automatic morphine pump that, in at least a few cases, was misprogrammed 
to deliver a lethal dose to patients. If I understand correctly, other 
morphine pumps never saw this human error committed, suggesting that poor 
interface design/human factors engineering was responsible. I learned about 
this case from a popular introduction to human factors engineering: Kim 
Vicente, The Human Factor (Alfred A. Knopf, 2003); pages 142-150 give an 
account of the morphine pump.
 	Human factors engineering also plays a role in aviation disasters, 
nuclear disasters, and so on. More recent accounts might relate these more 
to computing practice and discuss the morphine pump case more 
definitively; I am afraid I have limited familiarity with the literature.
 	In terms of obscure debates about error in the earlier history of 
computing, one that I have come across that might be of interest is the 
concern among some early pioneers that in fixed-point arithmetic, errors 
led almost inevitably to results of the wrong order of magnitude, and so 
were clearly wrong, whereas with floating point, because the order of 
magnitude is accumulated separately from the rest of the result, errors 
could creep into the significant figures without affecting the magnitude, 
leading to difficult-to-detect errors. I am not clear that this fear was 
well founded or widespread, but I know of at least two researchers who 
talked about it (I don't know of any source that summarizes discussion of 
the worry; it is just something I noticed in my primary-source reading and 
never really followed up). Mathematical physicist Martin Schwarzschild 
briefly explains the worry, with regard to working on the IAS machine, in 
an interview (page 20 of the PDF here: 
https://conservancy.umn.edu/handle/11299/107629 ). Herb Grosch mentions in 
his memoir his opposition to floating point in the 1940s and 50s but never 
quite explains it; I am pretty sure it was motivated by what Schwarzschild 
articulated (here is an instance on page 120 where he alludes to his 
misgivings: 
http://www.columbia.edu/cu/computinghistory/computer.html#[-120-] 
but there are only 13 instances of "floating point" in the book, so you can 
quickly find them by searching for that phrase if you are interested). 
Sorry to offer such an unworked-out thought/case.
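 	The worry can be illustrated in a few lines of modern Python (my 
own sketch, not anything from the sources above): a fixed-point blunder 
tends to wreck the order of magnitude and announce itself, while 
floating-point rounding drifts quietly into the significant figures.

```python
# Sketch (hypothetical example) of the fixed- vs floating-point worry.

N = 10_000_000
TRUE = 1_000_000.0  # the exact value of 0.1 summed N times

# Floating point: each addition rounds the significand. The exponent
# (order of magnitude) stays plausible while the trailing digits drift.
fp_sum = 0.0
for _ in range(N):
    fp_sum += 0.1
fp_error = abs(fp_sum - TRUE)  # tiny, hidden in the significant figures

# Fixed point: hold tenths as scaled integers, so the sum is exact.
SCALE = 10
tenths = N * 1          # N increments of one tenth each
exact = tenths / SCALE  # exactly 1_000_000.0

# A characteristic fixed-point blunder -- forgetting the scale factor --
# is off by a whole power of ten and therefore obvious at a glance.
blunder = float(tenths)  # 10_000_000.0: wrong order of magnitude

print(f"float sum:   {fp_sum!r} (error {fp_error:.2e})")
print(f"fixed point: {exact!r}")
print(f"mis-scaled:  {blunder!r}")
```

The floating-point result looks entirely plausible unless you inspect its 
trailing digits, which is exactly the hard-to-detect failure mode 
Schwarzschild and Grosch seem to have had in mind.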

-- 
Yours Truly,
Allan Olley, PhD

http://individual.utoronto.ca/fofound/

On Fri, 3 Jul 2020, Paul N. Edwards wrote:

> Rounding error is ubiquitous and unavoidable in digital computers, but with
> high precision computing (64-bit, 128-bit) it’s so small as to be
> negligible. 
> However, in cases where the same computation is performed many thousands or
> millions of times, it can still accumulate to a point that it’s significant. 
> 
> MacKenzie, D. (1993). Negotiating Arithmetic, Constructing Proof: The
> Sociology of Mathematics and Information Technology. Social Studies of
> Science, 23(1), 37-65.
> 
> Also see the short examples of this in my book A Vast Machine: Computer
> Models, Climate Data, and the Politics of Global Warming (MIT Press, 2010),
> pages 177-178.
> 
> Best,
> 
> Paul
> 
> 
>
>       On Jul 3, 2020, at 10:54, Matthew Kirschenbaum
>       <mkirschenbaum at gmail.com> wrote:
> 
> Hello all,
> 
> I am interested in a better understanding of the nature of
> computational error. My sense is that actual, literal (mathematical)
> mistakes in modern computers are quite rare; the notorious Pentium bug
> of the early 1990s is the exception that proves the rule. Most bugs
> are, rather, code proceeding to a perfectly correct logical outcome
> that just so happens to be inimical or intractable to the user and/or
> other dependent elements of the system. The Y2K "bug," for instance,
> was actually code executing in ways that were entirely internally
> self-consistent, however much havoc the code would wreak (or was
> expected to wreak).
> 
> Can anyone recommend reading that will help me formulate such thoughts
> with greater confidence and accuracy? Or serve as a corrective? I'd
> like to read something fundamental and even philosophical about, as my
> subject line has it, the nature of computational error. I'd also be
> interested in collecting other instances comparable to the Pentium
> bug--bugs that were actual flaws and mistakes hardwired at the deepest
> levels of a system.
> 
> Thank you-- Matt
> 
> 
> --
> Matthew Kirschenbaum
> Professor of English and Digital Studies
> Director, Graduate Certificate in Digital Studies
> Printer's Devil, BookLab
> University of Maryland
> mgk at umd.edu
> _______________________________________________
> This email is relayed from members at sigcis.org, the email discussion
> list of SHOT SIGCIS. Opinions expressed here are those of the member
> posting and are not reviewed, edited, or endorsed by SIGCIS. The list
> archives are at http://lists.sigcis.org/pipermail/members-sigcis.org/
> and you can change your subscription options at
> http://lists.sigcis.org/listinfo.cgi/members-sigcis.org
> 
> 
> ________________________
> Paul N. Edwards
> 
> Director, Program on Science, Technology & Society
> William J. Perry Fellow in International Security and Senior Research Scholar
> Center for International Security and Cooperation
> Co-Director, Stanford Existential Risks Initiative
> Stanford University
> 
> Professor of Information and History (Emeritus)
> University of Michigan
> 
> 
>

