From thomas.haigh at gmail.com Thu Jul 2 15:48:33 2020 From: thomas.haigh at gmail.com (thomas.haigh at gmail.com) Date: Thu, 2 Jul 2020 17:48:33 -0500 Subject: [SIGCIS-Members] New evidence re von Neumann's view of Turing in 1945 Message-ID: <051801d650c2$ea8aa2c0$bf9fe840$@gmail.com>

Hello SIGCIS, ACM has turned off its free access to the digital library, but I did notice that my most recent Communications of the ACM column is still open access, which reminded me that I never sent out an alert when it was published back in January. Title: "Von Neumann Thought Turing's Universal Machine was 'Simple and Neat.' But That Didn't Tell Him How to Design a Computer" https://cacm.acm.org/magazines/2020/1/241712-von-neumann-thought-turings-universal-machine-was-simple-and-neat/fulltext

It is written with Mark Priestley, based on a document that Mark found on a visit to the (newly catalogued) Herman Goldstine papers at the American Philosophical Society to gather material for his recent book, Routines of Substitution. Those of you who have been in the field for a while, or tend to the computer science side, may know that the question of whether von Neumann had read Turing's classic paper introducing Turing machines (1936) when he was writing "First Draft of a Report on the EDVAC" and initiating the design of the computer at the Institute for Advanced Study (1945-6) has been much discussed, particularly by people trying to show that the fundamental idea of what is often called the "stored program computer" was somehow stolen from Turing. We always thought that discussion was a little off base, because I've never seen anyone identify an architectural feature, appearing in both the First Draft version of EDVAC and in Turing's paper, to which von Neumann wasn't independently exposed in other contexts (his wide-ranging work in logic, the ENIAC project and the ideas of its members, the Harvard computer group, Bell Labs, etc.)
(See my earlier column on this: http://www.tomandmaria.com/Tom/Writing/CACMActuallyTuringDidNotInventTheComputer.pdf) Also, there was no reason to doubt that von Neumann had read the paper back in the 1930s, even though the earliest proof of his knowledge came from November 1946. There is an indirect link via a paper by McCulloch and Pitts, which introduced the pseudo-neuron notation for what we'd now think of as logic gates, which von Neumann adapted for the First Draft.

Providence and careful archival work delivered to Mark a document headed "High Speed Computing," written in the form of a lecture to a general audience, or the beginning of a primer on computer technology. We believe it dates from mid-1945. In it von Neumann discusses his thoughts on Turing's universal machine and on what he calls "The Logic of Pitts," and their usefulness for computer designers. This contains what may be the first discussion of the universal Turing machine by anyone other than Turing (Church's famous review synopsis named the "Turing machine" but did not focus on the universal machine - thanks to Andrew Hodges, who knows a lot more about Turing than we do, for highlighting this point to us). In fact von Neumann's summary is so similar to what you'd find in later textbooks that it takes an effort of historical will to remember how unusual this was in 1945. (For reasons we describe in the paper, we believe that this material was written just after the First Draft.) On the other hand, the text also shows that while von Neumann was understandably impressed with Turing's work, he didn't think it a useful template for the construction of an actual machine: "Here the analogy to a high-speed computing machine breaks down, for one cannot wait for the machine to go all eternity for his answer." So he moved on to the Pitts neuron notation, showing how it could be used to describe the building blocks of digital logic.
In an effort to figure out when this text was written and whether it was delivered, we found an article by Calvin Mooers in Annals, and I followed up with a look at his diary and papers at CBI. This revealed a dated journal entry from October 28, 1945, describing a similar ad-hoc lecture von Neumann gave to visitors from the Naval Ordnance Lab's computer project, discussing Turing's paper, Pitts' neuron notation, and his ongoing work on instruction set design. As far as I know, this is the earliest dated reference by von Neumann to Turing's paper.

How people interpret this new evidence will probably depend on their previous convictions. We point out that the fact that von Neumann could summarize Turing's paper so well, but ignored it in other work of the period, confirms that he compartmentalized it away from his work on computer architecture, instead lumping it with his passionate but unfinished work on cellular automata, in which context he freely acknowledged his debt to Turing. So we try to thread the needle by acknowledging the importance of the discovery without buying into the idea that this is the great unsolved mystery of early computing. Those who think that EDVAC was somehow a slightly more practical Turing machine will seize on von Neumann's suggestion that Turing's "system of logic could be used in building a computing machine" and that "The problem of developing a computing machine can be considered as a problem in logic." (Hmm, and then we can all argue about what "logic" means there - is it in the sense of "digital logic" or "switching logic," or the "logical control" von Neumann brings up in the next sentence, or does it privilege abstraction?) There's also the perplexing question of what the text was supposed to be. It is split into three very short "lectures," with the first two giving background and basic technologies and the third exploring Turing and Pitts.
We suspect that the payoff would have come in later, unwritten lectures in which von Neumann built on this foundation to introduce the new approach he'd taken in the First Draft and his ongoing work at IAS, just as he did in person when Mooers visited. If the lectures were supposed to be delivered at a specific venue, we weren't able to identify it. The timeline doesn't seem to fit the proto-cybernetic meetings going on around that time, and the text is coy about ENIAC-related matters that were shared openly with that community. I could imagine him delivering them at some kind of Los Alamos colloquium. I even imagined him spending a train ride sketching out the first pages of what was supposed to be a popular science text on the emerging technology, following a similar impulse to his Los Alamos colleague George Gamow in One Two Three... Infinity (published 1947), but Mark is not at all convinced by that idea. To me, it explained the conceit of addressing the concerns of a computer designer while assuming none of the background knowledge a real computer designer would have. Best wishes, Tom -------------- next part -------------- An HTML attachment was scrubbed... URL:

From mkirschenbaum at gmail.com Fri Jul 3 10:54:44 2020 From: mkirschenbaum at gmail.com (Matthew Kirschenbaum) Date: Fri, 3 Jul 2020 13:54:44 -0400 Subject: [SIGCIS-Members] the nature of computational error Message-ID:

Hello all, I am interested in a better understanding of the nature of computational error. My sense is that actual, literal (mathematical) mistakes in modern computers are quite rare; the notorious Pentium bug of the early 1990s is the exception that proves the rule. Most bugs are, rather, code proceeding to a perfectly correct logical outcome that just so happens to be inimical or intractable to the user and/or other dependent elements of the system.
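As an aside, that Pentium division flaw can still be probed concretely, using the classic test values that exposed it. A minimal sketch in Python (the arithmetic runs on any IEEE 754 machine; a flawed Pentium famously returned 256 from this expression instead of roughly zero):

```python
# Classic check for the 1994 Pentium FDIV bug, using Thomas Nicely's
# well-known test pair. On a correct divider the remainder below is
# essentially zero; the flawed Pentium got the quotient wrong around the
# fourth significant digit, so the same expression yielded 256.
x = 4195835.0
y = 3145727.0
remainder = x - (x / y) * y
print(remainder)  # ~0.0 on correct hardware
assert abs(remainder) < 1e-6, "FDIV-style divider flaw detected!"
```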
The Y2K "bug," for instance, was actually code executing in ways that were entirely internally self-consistent, however much havoc the code would wreak (or was expected to wreak). Can anyone recommend reading that will help me formulate such thoughts with greater confidence and accuracy? Or serve as a corrective? I'd like to read something fundamental and even philosophical about, as my subject line has it, *the nature of computational error*. I'd also be interested in collecting other instances comparable to the Pentium bug--bugs that were actual flaws and mistakes hardwired at the deepest levels of a system. Thank you-- Matt -- Matthew Kirschenbaum Professor of English and Digital Studies Director, Graduate Certificate in Digital Studies Printer's Devil, BookLab University of Maryland mgk at umd.edu

From lb at laurentbloch.org Fri Jul 3 11:06:32 2020 From: lb at laurentbloch.org (Laurent Bloch) Date: Fri, 3 Jul 2020 20:06:32 +0200 Subject: [SIGCIS-Members] the nature of computational error In-Reply-To: References: Message-ID: <20200703200632.634aca3d@olga>

Hi, You should have a look at this paper: https://hal.archives-ouvertes.fr/hal-01340384/document https://hal.archives-ouvertes.fr/hal-01340384 I believe it could meet your interest. Cheers!

-- Laurent Bloch - https://www.laurentbloch.net - lb at laurentbloch.org If you find that education is expensive, try ignorance! (A. Lincoln)

From housec1839 at gmail.com Fri Jul 3 11:37:04 2020 From: housec1839 at gmail.com (Chuck House) Date: Fri, 03 Jul 2020 11:37:04 -0700 Subject: [SIGCIS-Members] the nature of computational error In-Reply-To: References: Message-ID:

Matthew, the "famous" error before the Pentium bug was the "Inverse Log of 2.02" error in the original HP35 handheld calculator. We wound up replacing a lot of firmware as a result. The bug is described well down in this article: http://www.hpcc.org/calculators/wmjarts.html For my talk at the ACM History of Personal Computing, January 1986, here is the video: https://www.computerhistory.org/collections/catalog/102695114 In it, at minute 51:00, Tom Osborne, the key creator of HP's 9100 and HP 35, describes the issue surrounding this "Inverse log of 2.02" error.
This is the only description I've ever heard. Chuck House www.innovascapesinstitute.com www.anywhereanytime.io/covid19 http://innovascapes.blogspot.com 805-570-6706

_______________________________________________ This email is relayed from members at sigcis.org, the email discussion list of SHOT SIGCIS. Opinions expressed here are those of the member posting and are not reviewed, edited, or endorsed by SIGCIS.
The list archives are at http://lists.sigcis.org/pipermail/members-sigcis.org/ and you can change your subscription options at http://lists.sigcis.org/listinfo.cgi/members-sigcis.org

From annettevee at gmail.com Fri Jul 3 12:02:30 2020 From: annettevee at gmail.com (Annette Vee) Date: Fri, 3 Jul 2020 15:02:30 -0400 Subject: [SIGCIS-Members] the nature of computational error In-Reply-To: References: Message-ID:

Hi Matt, This article might be useful to you, from the special issue of *Computational Culture* on Rhetoric and Computation that Jim Brown and I co-edited: Matthew Bellinger. "The Rhetoric of Error in Digital Media." *Computational Culture* 5 (15th January 2016). http://computationalculture.net/the-rhetoric-of-error-in-digital-media-2/. Good luck! Annette

From pedwards at stanford.edu Fri Jul 3 13:18:59 2020 From: pedwards at stanford.edu (Paul N. Edwards) Date: Fri, 3 Jul 2020 20:18:59 +0000 Subject: [SIGCIS-Members] the nature of computational error In-Reply-To: References: Message-ID:

Rounding error is ubiquitous and unavoidable in digital computers, but with high-precision computing (64-bit, 128-bit) it's so small as to be negligible. However, in cases where the same computation is performed many thousands or millions of times, it can still accumulate to a point where it's significant. MacKenzie, D. (1993). Negotiating Arithmetic, Constructing Proof: The Sociology of Mathematics and Information Technology. Social Studies of Science, 23(1), 37-65. Also see the short examples of this in my book A Vast Machine: Computer Models, Climate Data, and the Politics of Global Warming (MIT Press, 2010), pages 177-178.
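That accumulation effect is easy to reproduce. A quick sketch in Python (the particular numbers are just an illustration): 0.1 has no exact binary representation, so each addition rounds by roughly one part in 10^16, which is invisible in isolation but surfaces after a million repetitions.

```python
# Each += rounds by ~1e-17 (0.1 is inexact in binary); a million
# additions let those tiny errors accumulate into the sixth decimal.
total = 0.0
for _ in range(1_000_000):
    total += 0.1
print(total)  # close to, but not exactly, 100000.0 on IEEE 754 doubles
error = abs(total - 100000.0)
assert 1e-7 < error < 1e-4  # negligible per step, visible in bulk
```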
Best, Paul

________________________ Paul N. Edwards Director, Program on Science, Technology & Society William J.
Perry Fellow in International Security and Senior Research Scholar Center for International Security and Cooperation Co-Director, Stanford Existential Risks Initiative Stanford University Professor of Information and History (Emeritus) University of Michigan

From vahrenkamp2 at gmx.de Sat Jul 4 07:03:17 2020 From: vahrenkamp2 at gmx.de (Richard Vahrenkamp) Date: Sat, 4 Jul 2020 16:03:17 +0200 Subject: [SIGCIS-Members] the nature of computational error In-Reply-To: References: Message-ID: <9d0eae03-6254-273a-6c05-2fa8fa93ae9e@gmx.de>

Although physicists often use rules of thumb, the precision of calculation became an important point in John von Neumann's case against the analog computer. The precision of individual calculations on analog machines is lower than on digital machines. But the final results of integrating a differential equation were the same on analog machines as on digital ones, as comparisons in the 1950s showed; see my paper The Computing Boom in the US Aeronautical Industry, 1945-1965, in: ICON - The Journal of the International Committee for the History of Technology, volume 24, 2019, pp. 127-149. Best, Richard

-- ******************************************** Prof. Dr. Richard Vahrenkamp Logistik Consulting Berlin Phone 0177-628 3325 E-Mail: Vahrenkamp2016 at gmx.de Web: www.vahrenkamp.org Trendelenburgstr. 16 14057 Berlin *********************************************

From thomas.haigh at gmail.com Sat Jul 4 10:05:44 2020 From: thomas.haigh at gmail.com (thomas.haigh at gmail.com) Date: Sat, 4 Jul 2020 12:05:44 -0500 Subject: [SIGCIS-Members] Numerical errors Message-ID: <05e401d65225$5b8a34a0$129e9de0$@gmail.com>

Hello Matt, Great question. I'm going to reply first on the normal treatment of error in numerical applications, and separately on the larger question of design mistakes in hardware and software. You are correct that the Pentium bug fits into the second category, but many of the replies have focused on the first, and they are both relevant. I'm not actually competent in numerical mathematics, but a spell in 2004-6 conducting full-career oral history interviews with numerical software specialists, as a subcontractor for the Society for Industrial and Applied Mathematics on a DOE grant, exposed me to a lot of the history of this area in ways that have occasionally surfaced in my other work.
The oral histories from the project are at http://history.siam.org/oralhistories.htm. One of the things it taught me is that the question of a "correct" numerical answer is not nearly as straightforward as most of us assume. In integer mathematics, sure, 2+2=4. But the kinds of problems scientists needed early computers for invariably involved some very large and very small quantities. So even though the hardware didn't support floating point, they basically had to do the same thing manually, storing a certain number of significant digits and tracking the scaling factor that related these to the actual quantity. If you look at the ENIAC flow diagram on our poster, you'll see little notations tracking the power-of-ten scaling factors in front of many of the variable names in the program boxes: https://eniacinaction.com/docs/MonteCarloPoster.pdf. That manual process was itself a major source of error and frustration, so from the 1950s onward all large computers intended for scientific use included hardware floating point, so that if, for example, a very small quantity was multiplied by a very large constant, the computer would figure out both the significant digits and the power of ten (or two) needed to scale them. But whether done manually or automatically, the numbers being represented are only approximations of the actual quantities. When doing calculations manually, scientists and engineers had always had to decide how many digits to use, and doing that responsibly required some knowledge of how reliable the final answer would be, based on the initial rounding and the potential for compounding errors as surplus digits were thrown away each time numbers were multiplied. The other important thing to understand here is that in real-world computing even things like differential equations, which college calculus might fool you into thinking can be solved exactly, are solved approximately with numerical methods.
These methods are usually iterative, based on measuring how far off target the current answer is so that an initial guess eventually converges on an accurate approximation. The conventional numerical methods found in textbooks were not well suited to automatic computers. Digital computers could carry out operations thousands of times faster than human computers, which in the worst case allowed errors to compound thousands of times faster. The new field of "numerical analysis" grew up at the intersection of computing and applied mathematics to address this. It included new methods to track the compounding of numerical errors through computations, and the development of more efficient and accurate algorithms for common mathematical chores such as calculating matrix eigenvalues. I heard the terms "overflow," "underflow," "truncation error," and "rounding error" a lot in the interviews, as well as more esoteric terms such as "successive overrelaxation." One stream of work, on "backward error analysis," led to an early Turing Award for Jim Wilkinson (https://amturing.acm.org/award_winners/wilkinson_0671216.cfm). Those methods were also more complex and harder for non-specialists to implement reliably, which led to some of the earliest initiatives in software libraries (SHARE), peer review of software, portable software (BLAS, PFORT), and software packaging and distribution (LINPACK and EISPACK). One side of the story I did tell was through biographies of Cleve Moler (https://tomandmaria.com/Tom/Writing/MolerBio.pdf) and Jack Dongarra (https://tomandmaria.com/Tom/Writing/DongarraBio.pdf). Moler founded Mathworks (which you probably hear sponsoring things on NPR). The specialists also complained that ordinary scientists and engineers didn't want to develop the skills needed to understand which methods could safely be applied to which classes of equation, and so would introduce errors by grabbing the code for an inappropriate method.
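One celebrated example of an algorithm from this tradition is compensated summation, now usually credited to Kahan: a second variable recovers the low-order bits that each addition would otherwise discard, and feeds them back into the next step. A textbook sketch in Python (illustrative only, not drawn from the interviews), compared against Python's exactly rounded math.fsum:

```python
import math

def kahan_sum(values):
    """Compensated (Kahan) summation: carry the rounding error forward."""
    total = 0.0
    c = 0.0                   # running compensation for lost low-order bits
    for x in values:
        y = x - c             # apply the correction from the previous step
        t = total + y         # big + small: low bits of y are lost here...
        c = (t - total) - y   # ...and recovered here, for the next step
        total = t
    return total

values = [0.1] * 1_000_000
naive = sum(values)                 # accumulates rounding error
compensated = kahan_sum(values)
reference = math.fsum(values)       # exactly rounded sum, for comparison
assert abs(compensated - reference) < abs(naive - reference)
```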
(A very popular book, Numerical Recipes, was accused of encouraging this and earned the disdain of some of my interviewees.) Doing the interviews, I was struck by the very strong and personal aesthetic preferences the numerical software producers expressed for the floating point arithmetic of particular machines. The IBM 709X machines were acclaimed, whereas the CDC supercomputers were disdained. I sneaked a little of this into the Revised History of Modern Computing, with Paul Ceruzzi, in terms of the terrible step back introduced with the IBM System/360 arithmetic. This needed expensive fixes to installed computers, like the Pentium bug, but it wasn't a bug - just the result of the design engineers making decisions without a good idea of how they would impact scientific users. Although System/360 was intended to work equally well for scientific and data processing applications, it was much more successful for data processing. The problems began with the System/360 floating point. It used hexadecimal (base 16) rather than binary, which was efficient for smaller, business-oriented machines but created major problems with rounding errors for scientific users. The new general-purpose registers raised more problems with the handling of single- and double-precision numbers. When IBM described its new architecture, William Kahan, then of the University of Waterloo, and others "went nuts" as they "recognized something really perverse about the arithmetic." IBM found ways to work around some of the issues in software libraries, but Kahan recalls that after the full scale of the problem was acknowledged in 1966, following lobbying by SHARE, the company spent millions tweaking the hardware of machines already installed. The wide range of approaches to floating point arithmetic was also a threat to portability. FORTRAN code could be moved from one system to another, but it would give different answers when run on them.
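The hexadecimal penalty can be sketched numerically: a six-hex-digit significand holds 24 bits, but base-16 normalization can leave up to three leading zero bits in the first hex digit, so precision "wobbles" between about 21 and 24 bits. A toy model in Python (my own simplified rounding routine, not IBM's actual format, which also differs in exponent handling and other details):

```python
import struct

def round_hex6(x):
    """Round positive x to a 6-hex-digit significand with a base-16
    exponent: a toy model of System/360 single precision."""
    e = 0
    m = x
    while m >= 1.0:            # normalize m into [1/16, 1)
        m /= 16.0; e += 1
    while m < 1.0 / 16.0:
        m *= 16.0; e -= 1
    return round(m * 16**6) / 16**6 * 16.0**e

def round_bin24(x):
    """Round x to IEEE single precision (24 significand bits)."""
    return struct.unpack('f', struct.pack('f', x))[0]

x = 0.1
hex_err = abs(round_hex6(x) - x) / x
bin_err = abs(round_bin24(x) - x) / x
# 0.1 normalizes with leading hex digit 1, wasting three significand
# bits, so the hexadecimal format rounds several times more coarsely
# than a binary format of the same overall width.
assert hex_err > bin_err
```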
Portability aside, there might also be relatively large shifts in answers based on tiny variations in initial inputs. So the question of error gets complicated, as the "right" answer depends on the machine the code is being run on. Also, an algorithm might run accurately but give misleading answers because it is being applied to an equation with unsuitable characteristics. Kahan is the central figure in addressing these problems, leading the IEEE standards effort to come up with an optimal floating point design that could be standardized across manufacturers. Luckily, Intel was the first adopter, thanks to a consulting contract Kahan had. He's a fascinating figure (I wrote the profile at https://amturing.acm.org/award_winners/kahan_1023746.cfm) but relatively little known, because floating point is seen as such a niche area. When I showed up for the interview he talked for 24 hours spread over four days (http://history.siam.org/pdfs2/Kahan_final.pdf). Here's how we tell that story in the Revised History: Doing engineering calculations or financial modelling cost a lot less with a personal computer, such as the Apple II, than with a mainframe or timesharing system. But only small jobs would fit into its limited memory and run acceptably quickly. Complex models still needed big computers. That began to change with the IBM PC. Even the original IBM PC could be expanded to much larger memory capacities than the Apple. The other big difference was floating point. Since the 1950s, capable floating-point hardware support had been the defining characteristic of large scientifically oriented computers. The 8088 used in the original PC did not support floating point, and its performance on technical calculations was mediocre. But every PC included an empty socket waiting for a new kind of chip, the 8087 "floating point coprocessor." The 8087 was the first chip to implement a new approach to floating point, proposed by William Kahan and later formalized in the standard IEEE 754.
Its adoption by firms including DEC and IBM was a major advance for scientific computing. Code, even in a standard language like FORTRAN, had previously produced inconsistent floating-point results when run on different computers. According to Jerome Coonen, a student of Kahan's who managed software development for the original Macintosh, this standardization on robust mechanisms was a "huge step forward" from the previous "dismal situation." Kahan's achievement was having floating point "taken for granted for 40 years." The 8087 was announced in 1980 but trickled onto the market because it pushed the limits of Intel's production processes. Writing in Byte, Steven S. Fried called it "a full-blown 80-bit processor that performs numerical operations up to 100 times faster ... at the same speed as a medium-sized minicomputer, while providing more accuracy than most mainframes." The 8088 itself had only 29,000 transistors, but its coprocessor needed 45,000 to implement its own registers and stack. Code had to be rewritten to use special floating-point instructions, which were executed in parallel with whatever the main processor was doing. Scientific users quickly embraced the 8087, which made the PC a credible alternative to minicomputers. Fried had promised that "the 8087 can also work wonders with business applications," but software support was limited. Even Lotus 1-2-3, which existed only to crunch numbers, did not utilize it. Fried began a business selling patches to add coprocessor support to such packages. Over time, IEEE-style floating point became a core part of every processor. By the time Intel launched the 80486 in 1989, its factories were just about able to manufacture a one-million-transistor chip with the coprocessor built in. Software developers, particularly videogame programmers, began to use floating point instructions. By the late 1990s, PC processors competed largely on the strength of their floating-point capabilities.
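The standard's legacy is visible in any modern language. A small sketch of my own (not from the book) shows the fixed binary64 layout and the special values IEEE 754 mandated, behavior that is now identical across vendors:

```python
import math
import struct

# The binary64 layout (1 sign bit, 11 exponent bits, 52 fraction bits)
# is the same on every conforming machine: 1.0 always packs to these bytes.
print(struct.pack(">d", 1.0).hex())   # 3ff0000000000000

# Special values the standard requires:
print(5e-324)                # smallest subnormal: gradual underflow, not an abrupt jump to zero
print(math.inf - math.inf)   # nan: invalid operations produce NaN rather than halting
print(1e308 * 10)            # inf: overflow saturates to infinity
```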
So that's two big kinds of error to dig into: errors related to the handling of arithmetic in a particular machine, and errors introduced by the algorithm (or, as they call it, "methods") chosen to solve an equation numerically. Thanks to reliance on IEEE standard floating point and the eclipse of FORTRAN by modern systems like MATLAB, both have been largely black-boxed from typical scientific users. Best wishes, Tom From: Members On Behalf Of Matthew Kirschenbaum Sent: Friday, July 3, 2020 12:55 PM To: members Subject: [SIGCIS-Members] the nature of computational error Hello all, I am interested in a better understanding of the nature of computational error. My sense is that actual, literal (mathematical) mistakes in modern computers are quite rare; the notorious Pentium bug of the early 1990s is the exception that proves the rule. Most bugs are, rather, code proceeding to a perfectly correct logical outcome that just so happens to be inimical or intractable to the user and/or other dependent elements of the system. The Y2K "bug," for instance, was actually code executing in ways that were entirely internally self-consistent, however much havoc the code would wreak (or was expected to wreak). Can anyone recommend reading that will help me formulate such thoughts with greater confidence and accuracy? Or serve as a corrective? I'd like to read something fundamental and even philosophical about, as my subject line has it, the nature of computational error. I'd also be interested in collecting other instances comparable to the Pentium bug--bugs that were actual flaws and mistakes hardwired at the deepest levels of a system. Thank you-- Matt -- Matthew Kirschenbaum Professor of English and Digital Studies Director, Graduate Certificate in Digital Studies Printer's Devil, BookLab University of Maryland mgk at umd.edu -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From johannah.rodgers at gmail.com Sat Jul 4 10:26:59 2020 From: johannah.rodgers at gmail.com (Johannah Rodgers) Date: Sat, 4 Jul 2020 13:26:59 -0400 Subject: [SIGCIS-Members] the nature of computational error In-Reply-To: <9d0eae03-6254-273a-6c05-2fa8fa93ae9e@gmx.de> References: <9d0eae03-6254-273a-6c05-2fa8fa93ae9e@gmx.de> Message-ID: If you are interested in the human side of the discussion of error, I'd suggest taking a look at chapter 4 of Otte and Mlynarczyk's 2010 book on Basic Writing and Stuart Moulthrop's essay "Error 1337" in the collection edited by Mark Nunes entitled "Error: Glitch, Noise, and Jam in New Media" (Bloomsbury, 2010). For a 19th c. perspective, you might want to take a look at Kuno Fischer's chapter on "The Origin of Error" in his History of Modern Philosophy (1854-77; the link is to an 1887 English translation). All best, Johannah On Sat, Jul 4, 2020 at 10:04 AM Richard Vahrenkamp wrote: > Although physicists often use rules of thumb, the precision of calculation > became an important point in John von Neumann's policy against the analogue > computer. The precision of individual calculations on analog machines is > lower than on digital machines. But the final results of integrating a > differential equation were the same on analog machines as on digital ones, > as comparisons in the 1950s showed, see my paper The Computing Boom in the > US Aeronautical Industry, 1945-1965, in: ICON: The Journal of the > International Committee for the History of Technology, volume 24, 2019, pp. > 127-149. > > > Best, Richard > > > > On 03.07.2020 22:18, Paul N. Edwards wrote: > > Rounding error is ubiquitous and unavoidable in digital computers, but with high precision computing (64-bit, 128-bit) it's so small as to be negligible. > > However, in cases where the same computation is performed many thousands or millions of times, it can still accumulate to a point that it's significant. > > MacKenzie, D. (1993). 
Negotiating Arithmetic, Constructing Proof: The Sociology of Mathematics and Information Technology. Social Studies of Science, 23(1), 37-65. > > Also see the short examples of this in my book A Vast Machine: Computer Models, Climate Data, and the Politics of Global Warming (MIT Press, 2010), pages 177-178. > > Best, > > Paul > > > > On Jul 3, 2020, at 10:54, Matthew Kirschenbaum > wrote: > > Hello all, > > I am interested in a better understanding of the nature of computational error. My sense is that actual, literal (mathematical) mistakes in modern computers are quite rare; the notorious Pentium bug of the early 1990s is the exception that proves the rule. Most bugs are, rather, code proceeding to a perfectly correct logical outcome that just so happens to be inimical or intractable to the user and/or other dependent elements of the system. The Y2K "bug," for instance, was actually code executing in ways that were entirely internally self-consistent, however much havoc the code would wreak (or was expected to wreak). > > Can anyone recommend reading that will help me formulate such thoughts with greater confidence and accuracy? Or serve as a corrective? I'd like to read something fundamental and even philosophical about, as my subject line has it, the nature of computational error. I'd also be interested in collecting other instances comparable to the Pentium bug--bugs that were actual flaws and mistakes hardwired at the deepest levels of a system. > > Thank you-- Matt > > > -- > Matthew Kirschenbaum > Professor of English and Digital Studies > Director, Graduate Certificate in Digital Studies > Printer's Devil, BookLab > University of Marylandmgk at umd.edu > _______________________________________________ > This email is relayed from members at sigcis.org , the email discussion list of SHOT SIGCIS. Opinions expressed here are those of the member posting and are not reviewed, edited, or endorsed by SIGCIS. 
The list archives are at http://lists.sigcis.org/pipermail/members-sigcis.org/ and you can change your subscription options at http://lists.sigcis.org/listinfo.cgi/members-sigcis.org > > ________________________ > Paul N. Edwards > > Director, Program on Science, Technology & Society > William J. Perry Fellow in International Security and Senior Research Scholar > Center for International Security and Cooperation > Co-Director, Stanford Existential Risks Initiative > Stanford University > > Professor of Information and History (Emeritus) > University of Michigan > > > > _______________________________________________ > This email is relayed from members at sigcis.org, the email discussion list of SHOT SIGCIS. Opinions expressed here are those of the member posting and are not reviewed, edited, or endorsed by SIGCIS. The list archives are at http://lists.sigcis.org/pipermail/members-sigcis.org/ and you can change your subscription options at http://lists.sigcis.org/listinfo.cgi/members-sigcis.org > > -- > ******************************************** > Prof. Dr. Richard Vahrenkamp > Logistik Consulting Berlin > Phone 0177- 628 3325 > E-Mail: Vahrenkamp2016 at gmx.de > Web: www.vahrenkamp.org > Trendelenburgstr. 16 > 14057 Berlin > > ********************************************* > > _______________________________________________ > This email is relayed from members at sigcis.org, the email discussion > list of SHOT SIGCIS. Opinions expressed here are those of the member > posting and are not reviewed, edited, or endorsed by SIGCIS. The list > archives are at http://lists.sigcis.org/pipermail/members-sigcis.org/ and > you can change your subscription options at > http://lists.sigcis.org/listinfo.cgi/members-sigcis.org -- johannahrodgers at gmail.com www.johannahrodgers.net -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From thomas.haigh at gmail.com Sat Jul 4 10:58:05 2020 From: thomas.haigh at gmail.com (thomas.haigh at gmail.com) Date: Sat, 4 Jul 2020 12:58:05 -0500 Subject: [SIGCIS-Members] Correctness and verification Message-ID: <05f201d6522c$abc26210$03472630$@gmail.com> Returning for part II of the answer, on actual bugs. There is a huge computer science literature relevant to Matt's question, but the key words are "correctness," "formal methods," "specification language" and "verification" rather than "error" or "bug." With the Pentium bug, IIRC some bogus value in a lookup table deep in the processor caused it to give wrong answers. That's an example of a situation where a finished system doesn't perform in line with requirements. But how are requirements expressed? Typically with a written specification that leaves many things ambiguous. Going back to the 1950s, a lot of work in systems analysis was aimed at coming up with methods to get specifications right so that the systems built from them would do what was required. A problem in the final system might be a result of a mistake in specification or in implementation. The computer science answer to this was to express specifications completely and unambiguously in mathematical terms and then prove that the final software/hardware would always do what the specification said. Both tasks were enormously difficult. A lot of the history is told in MacKenzie, Donald. Mechanizing Proof. Cambridge, MA: MIT Press, 2001, which is an extremely good book full of very clear explanations of difficult topics. This history includes the intervention of a philosopher, J.H. Fetzer, who IIRC said that claiming to prove any correspondence between a mathematical specification and a material object is a category error. So I'm sure there are pointers there to follow up for more recent philosophical work on the topic. 
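As a concrete aside, the flawed lookup table had a famous, widely circulated test case (the figures below are as commonly reported at the time; this sketch is mine, not Tom's):

```python
# The widely publicized Pentium FDIV test division.  On a conforming
# (or patched) FPU it gives the correct value, about 1.333820449136241;
# flawed Pentiums reportedly returned about 1.33373906890 -- an error in
# the fifth significant digit, caused by a handful of missing entries in
# the divider's lookup table.
q = 4195835 / 3145727
print(q)
```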
I gave some autobiographically grounded thoughts on some of this in a recent paper, "Assembling a Prehistory for Formal Methods," which also points to some papers by Niklaus Wirth and Cliff Jones giving insider views on the history. http://www.tomandmaria.com/Tom/Writing/FormalMethodsHistoryPreprint.pdf My suggestion is that the formal methods movement was in large part an outgrowth of the original impetus and core community behind both the Algol effort and the 1968 Software Engineering Conference (which had less to do than is generally suggested with what was offered under the banner of "Software Engineering" by the 1980s). That appeared in a special issue on the history of formal methods, which includes other, more technical, items of possible interest: https://link.springer.com/journal/165/31/6. Editing the Turing Award website (particularly ongoing work to add video clips to profiles) has also reminded me that several awards were given for work in this area, and in particular that the "model checking" approach has had a huge impact on how chip designs are tested. So Matt might want to look at the following entries: https://amturing.acm.org/award_winners/hoare_4622167.cfm https://amturing.acm.org/award_winners/sifakis_1701095.cfm https://amturing.acm.org/award_winners/emerson_1671460.cfm https://amturing.acm.org/award_winners/clarke_1167964.cfm Best wishes, Tom From: Members On Behalf Of Matthew Kirschenbaum Sent: Friday, July 3, 2020 12:55 PM To: members Subject: [SIGCIS-Members] the nature of computational error Hello all, I am interested in a better understanding of the nature of computational error. My sense is that actual, literal (mathematical) mistakes in modern computers are quite rare; the notorious Pentium bug of the early 1990s is the exception that proves the rule. Most bugs are, rather, code proceeding to a perfectly correct logical outcome that just so happens to be inimical or intractable to the user and/or other dependent elements of the system. 
The Y2K "bug," for instance, was actually code executing in ways that were entirely internally self-consistent, however much havoc the code would wreak (or was expected to wreak). Can anyone recommend reading that will help me formulate such thoughts with greater confidence and accuracy? Or serve as a corrective? I'd like to read something fundamental and even philosophical about, as my subject line has it, the nature of computational error. I'd also be interested in collecting other instances comparable to the Pentium bug--bugs that were actual flaws and mistakes hardwired at the deepest levels of a system. Thank you-- Matt -- Matthew Kirschenbaum Professor of English and Digital Studies Director, Graduate Certificate in Digital Studies Printer's Devil, BookLab University of Maryland mgk at umd.edu -------------- next part -------------- An HTML attachment was scrubbed... URL: From allan.olley at utoronto.ca Sat Jul 4 11:26:05 2020 From: allan.olley at utoronto.ca (Allan Olley) Date: Sat, 4 Jul 2020 14:26:05 -0400 (Eastern Daylight Time) Subject: [SIGCIS-Members] the nature of computational error In-Reply-To: References: Message-ID: Hello, I would reiterate Paul Edwards's suggestion and add that, depending on what kind of error and what kind of analysis you are looking for, it might be worth checking out Donald MacKenzie's book expanding on the subject of the mentioned essay: Mechanizing Proof: Computing, Risk, and Trust. MIT Press, 2001. My vague sense is that errors like the Pentium error are not actually that rare: flaws or suboptimal arrangements of hardwired code are noticed, and workarounds developed, now and then in various iterations of popular chip designs; they are usually just much more obscure technical problems. 
Looking at the Wikipedia page for the Pentium error I see reference to at least one other Pentium chip error that had been discovered: https://en.wikipedia.org/wiki/Pentium_FDIV_bug https://en.wikipedia.org/wiki/Pentium_F00F_bug Depending on what you mean by error, one of the deadlier errors known in computing may actually be bad interface design. There was a brand of automatic morphine pumps which in at least a few cases were misprogrammed to deliver a lethal dose to patients. If I understand correctly, other morphine pumps never saw this human error committed, suggesting that poor interface design/human factors engineering was responsible. I learned about this case from a popular introduction to human factors engineering, Kim Vicente's The Human Factor (Alfred A. Knopf, 2003); see pages 142-150 for an account of the morphine pump. Human factors engineering also plays a role in aviation disasters, nuclear disasters and so on. More recent accounts might relate these more to computing practice and discuss the morphine pump case more definitively; I am afraid I have limited familiarity with the literature. In terms of obscure debates about error in the earlier history of computing, one that I have come across that might be of interest is the concern among some early pioneers that in fixed point, errors led almost inevitably to results of the wrong order of magnitude, and so were clearly wrong; whereas with floating point, the order of magnitude being accumulated separately from the rest of the result, errors could creep into the significant figures without affecting the magnitude, leading to errors that were difficult to detect. I am not clear that this fear was well founded or widespread, but I know of at least two researchers who talk about it (I don't know of any source that summarizes discussion of the worry; it is just something I noticed in my primary source reading and never really followed up). 
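A toy illustration of the worry (entirely my own construction, not from Olley's sources): in fixed point, a slipped scale factor wrecks the order of magnitude, which someone eyeballing the output catches at once; in floating point, the exponent is carried automatically, so corruption confined to the significand leaves a plausible-looking number:

```python
import struct

# Fixed point: values held as integers scaled by 10_000.
SCALE = 10_000
pi_fixed = 31416                  # represents 3.1416
slipped = pi_fixed * 10           # a misplaced scale factor
print(slipped / SCALE)            # 31.416 -- wrong magnitude, glaringly so

def flip_fraction_bit(x, bit):
    """Flip one bit of the binary64 fraction field (hypothetical helper)."""
    (n,) = struct.unpack(">Q", struct.pack(">d", x))
    return struct.unpack(">d", struct.pack(">Q", n ^ (1 << bit)))[0]

# Floating point: corrupt a mid-significand bit of pi.  The exponent is
# untouched, so the magnitude stays right and the damage is easy to miss.
corrupted = flip_fraction_bit(3.141592653589793, 40)
print(corrupted)                  # still 3.14-ish, but wrong past a few digits
```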
The astrophysicist Martin Schwarzschild explains the worry briefly in an interview (page 20 of the pdf seen here https://conservancy.umn.edu/handle/11299/107629 ) with regard to working on the IAS machine. Herb Grosch mentions in his memoir his opposition to floating point in the 1940s and 50s, but never quite explains that opposition; I am pretty sure it was motivated by what Schwarzschild articulated (here is an instance on page 120 where he alludes to his misgivings http://www.columbia.edu/cu/computinghistory/computer.html#[-120-] but there are only 13 instances of "floating point" in the book, so you can quickly find instances by searching for that if you are interested). Sorry to give such an un-worked-out thought/case. -- Yours Truly, Allan Olley, PhD http://individual.utoronto.ca/fofound/ On Fri, 3 Jul 2020, Paul N. Edwards wrote: > Rounding error is ubiquitous and unavoidable in digital computers, but with > high precision computing (64-bit, 128-bit) it's so small as to be > negligible. > However, in cases where the same computation is performed many thousands or > millions of times, it can still accumulate to a point that it's significant. > > MacKenzie, D. (1993). Negotiating Arithmetic, Constructing Proof: The > Sociology of Mathematics and Information Technology. Social Studies of > Science, 23(1), 37-65. > > Also see the short examples of this in my book A Vast Machine: Computer > Models, Climate Data, and the Politics of Global Warming (MIT Press, 2010), > pages 177-178. > > Best, > > Paul > > > > On Jul 3, 2020, at 10:54, Matthew Kirschenbaum > wrote: > > Hello all, > > I am interested in a better understanding of the nature of > computational error. My sense is that actual, literal (mathematical) > mistakes in modern computers are quite rare; the notorious Pentium bug > of the early 1990s is the exception that proves the rule. 
Most bugs > are, rather, code proceeding to a perfectly correct logical outcome > that just so happens to be inimical or intractable to the user and/or > other dependent elements of the system. The Y2K "bug," for instance, > was actually code executing in ways that were entirely internally > self-consistent, however much havoc the code would wreak (or was > expected to wreak). > > Can anyone recommend reading that will help me formulate such thoughts > with greater confidence and accuracy? Or serve as a corrective? I'd > like to read something fundamental and even philosophical about, as my > subject line has it, the nature of computational error. I'd also be > interested in collecting other instances comparable to the Pentium > bug--bugs that were actual flaws and mistakes hardwired at the deepest > levels of a system. > > Thank you-- Matt > > > -- > Matthew Kirschenbaum > Professor of English and Digital Studies > Director, Graduate Certificate in Digital Studies > Printer's Devil, BookLab > University of Maryland > mgk at umd.edu > _______________________________________________ > This email is relayed from members at sigcis.org, the email discussion > list of SHOT SIGCIS. Opinions expressed here are those of the member > posting and are not reviewed, edited, or endorsed by SIGCIS. The list > archives are at http://lists.sigcis.org/pipermail/members-sigcis.org/ > and you can change your subscription options at > http://lists.sigcis.org/listinfo.cgi/members-sigcis.org > > > ________________________ > Paul N. Edwards > > Director,?Program on Science, Technology & Society > William J. 
Perry Fellow in International Security and Senior Research Scholar > Center for International Security and Cooperation > Co-Director, Stanford Existential Risks Initiative > Stanford University > > Professor of Information and History (Emeritus) > University of Michigan > > > From samkellogg at gmail.com Sat Jul 4 11:36:37 2020 From: samkellogg at gmail.com (Sam Kellogg) Date: Sat, 4 Jul 2020 14:36:37 -0400 Subject: [SIGCIS-Members] the nature of computational error Message-ID: Hello Matt, all, What an exciting email thread! For the many of you who don't know me, my name is Sam Kellogg; I'm a PhD student in Media, Culture, and Communication at NYU and a long-time lurker on this list. My current dissertation research actually revolves around this very question of computational error, in a historical, philosophical, and linguistic sense. Matt, I think that you're absolutely right in your characterization of 'literal' mathematical error being a rather rare occurrence, though what precisely we mean by error is perhaps what most needs unpacking. After all, many of the 'error' messages we encounter seem to signify that things are still working as intended, at least on some level: the machine is still humming along! Anyways, I thoroughly agree this question needs a good deal more attention. I presented a paper on a specific class of error at SIGCIS in Milan last year. This was based on a series of contemporary empirical cases of errors I encountered that were functioning in overtly political ways in the context of the U.S. embargo on Cuba, but which were still appearing *as though* they had purely technical origins. Sanctions and political disjunctures, I argued, were being internalized within software systems as 'errors' (sometimes with accompanying bug reports and maintenance tickets!) with all kinds of bizarre outcomes. 
The fieldwork, undertaken a few years ago when travel between New York and Havana was a bit simpler, led me down a rabbit hole of questioning the role of error in the history of computation more broadly, as well as in mathematics and philosophy, and I've been trying to think through these questions in view of some of the political consequences that we see downstream. I'd be happy to send you, and anyone else on this list, my slides and notes, with the proviso that the project has progressed quite a bit since then. Laurent, Chuck, Annette, Paul, Tom, and everyone else who has already responded (I receive the digest of this list, so apologies if I've missed someone), all of these recommendations are exceedingly valuable. I am in the midst of moving house right now, but if there is interest I could put together a selection from my bibliography, along with the materials already shared, to distribute over the coming week. I myself have become particularly fascinated by the work on error of Turing Award winner Richard Hamming, who I don't think has been mentioned yet (his early paper and accompanying patent for "Error detecting and error correcting codes" is a key touchpoint). The backstory of how he ended up working on error in the first place is a good one, and I'm working on writing up this history, likely as a dissertation chapter, right now. Hamming's notion of what we now refer to as Hamming distance is well worth reading up on: according to this schema, greater deviation from an intended message is rendered as greater 'distance' within a multidimensional space, a powerful transformation of the problem into spatialized terms. There's a great deal more to say here about how error is conceived of metaphorically and visually in order to transform it into something a little more workable or interpretable in different contexts, though for the moment I'll spare this list any further rambling. 
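For readers who haven't met it, the spatialized rendering Sam describes is easy to state in code. A minimal sketch of my own: the Hamming distance between two equal-length codewords is just the number of positions at which they differ, and a code whose valid words keep a minimum distance of 3 from one another can correct any single-bit error:

```python
def hamming_distance(a: str, b: str) -> int:
    """Number of positions at which two equal-length strings differ."""
    if len(a) != len(b):
        raise ValueError("codewords must have equal length")
    return sum(x != y for x, y in zip(a, b))

# A received word lies at distance 1 from what was sent after a single
# bit error; if valid codewords are kept at distance >= 3 from each other,
# the nearest valid word is still unambiguous and the error is correctable.
print(hamming_distance("1011101", "1001001"))   # 2
```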
Matt, and anyone else actively pursuing this topic, please do reach out: I'd love to discuss further, hear more about parallel research, and in general welcome fellow travellers in the theory and history of error! Warm regards, Sam -- Sam P. Kellogg MCC, NYU // PUBLIC CULTURE // samkellogg.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From mkirschenbaum at gmail.com Sat Jul 4 11:21:31 2020 From: mkirschenbaum at gmail.com (Matthew Kirschenbaum) Date: Sat, 4 Jul 2020 14:21:31 -0400 Subject: [SIGCIS-Members] the nature of computational error In-Reply-To: References: <9d0eae03-6254-273a-6c05-2fa8fa93ae9e@gmx.de> Message-ID: I'd like to thank everyone on the list who has responded so generously to my query, both here and back channel. Tom's posts are, as always, marvels. There's a lot for me to work with here. List members might enjoy going back to Ellen Ullman's 2002 novel *The Bug*, which is a dark but loving recreation of early programmer culture from someone who was there. Here's a snippet to enjoy, just as a small thank you: On Sat, Jul 4, 2020 at 1:27 PM Johannah Rodgers wrote: > If you are interested in the human side of the discussion of error, I'd > suggest taking a look at chapter 4 of Otte and Mlynarczyk's 2010 book on Basic > Writing and > Stuart Moulthrop's essay "Error 1337" in the collection edited by Mark > Nunes entitled "Error: Glitch, Noise, and Jam in New Media" > (Bloomsbury, 2010). For a 19th c. perspective, you might want to take a > look at Kuno Fischer's chapter on "The Origin of Error" in his History of > Modern Philosophy > > (1854-77; the link is to an 1887 English translation). > > All best, > > Johannah > > On Sat, Jul 4, 2020 at 10:04 AM Richard Vahrenkamp > wrote: > >> Although physicists often use rules of thumb, the precision of >> calculation became an important point in John von Neumann's policy against >> the analogue computer. 
The precision of individual calculations on analog >> machines is lower than on digital machines. But the final results of >> integrating a differential equation were the same on analog machines as on >> digital ones, as comparisons in the 1950s showed, see my paper The >> Computing Boom in the US Aeronautical Industry, 1945?1965, in: ICON ? The >> Journal of the International Committee for the History of Technology, >> volume 24, 2019, pp. 127?149. >> >> >> Best, Richard >> >> >> >> On 03.07.2020 22:18, Paul N. Edwards wrote: >> >> Rounding error is ubiquitous and unavoidable in digital computers, but with high precision computing (64-bit, 128-bit) it?s so small as to be negligible. >> >> However, in cases where the same computation is performed many thousands or millions of times, it can still accumulate to a point that it?s significant. >> >> MacKenzie, D. (1993). Negotiating Arithmetic, Constructing Proof: The Sociology of Mathematics and Information Technology. Social Studies of Science, 23(1), 37-65. >> >> Also see the short examples of this in my book A Vast Machine: Computer Models, Climate Data, and the Politics of Global Warming (MIT Press, 2010), pages 177-178. >> >> Best, >> >> Paul >> >> >> >> On Jul 3, 2020, at 10:54, Matthew Kirschenbaum > wrote: >> >> Hello all, >> >> I am interested in a better understanding of the nature of computational error. My sense is that actual, literal (mathematical) mistakes in modern computers are quite rare; the notorious Pentium bug of the early 1990s is the exception that proves the rule. Most bugs are, rather, code proceeding to a perfectly correct logical outcome that just so happens to be inimical or intractable to the user and/or other dependent elements of the system. The Y2K "bug," for instance, was actually code executing in ways that were entirely internally self-consistent, however much havoc the code would wreak (or was expected to wreak). 
>> >> Can anyone recommend reading that will help me formulate such thoughts with greater confidence and accuracy? Or serve as a corrective? I'd like to read something fundamental and even philosophical about, as my subject line has it, the nature of computational error. I'd also be interested in collecting other instances comparable to the Pentium bug--bugs that were actual flaws and mistakes hardwired at the deepest levels of a system. >> >> Thank you-- Matt >> >> >> -- >> Matthew Kirschenbaum >> Professor of English and Digital Studies >> Director, Graduate Certificate in Digital Studies >> Printer's Devil, BookLab >> University of Marylandmgk at umd.edu >> _______________________________________________ >> This email is relayed from members at sigcis.org , the email discussion list of SHOT SIGCIS. Opinions expressed here are those of the member posting and are not reviewed, edited, or endorsed by SIGCIS. The list archives are at http://lists.sigcis.org/pipermail/members-sigcis.org/ and you can change your subscription options at http://lists.sigcis.org/listinfo.cgi/members-sigcis.org >> >> ________________________ >> Paul N. Edwards >> >> Director, Program on Science, Technology & Society >> William J. Perry Fellow in International Security and Senior Research Scholar >> Center for International Security and Cooperation >> Co-Director, Stanford Existential Risks Initiative >> Stanford University >> >> Professor of Information and History (Emeritus) >> University of Michigan >> >> >> >> _______________________________________________ >> This email is relayed from members at sigcis.org, the email discussion list of SHOT SIGCIS. Opinions expressed here are those of the member posting and are not reviewed, edited, or endorsed by SIGCIS. 
The list archives are at http://lists.sigcis.org/pipermail/members-sigcis.org/ and you can change your subscription options at http://lists.sigcis.org/listinfo.cgi/members-sigcis.org >> >> -- >> ******************************************** >> Prof. Dr. Richard Vahrenkamp >> Logistik Consulting Berlin >> Phone 0177- 628 3325 >> E-Mail: Vahrenkamp2016 at gmx.de >> Web: www.vahrenkamp.org >> Trendelenburgstr. 16 >> 14057 Berlin >> >> ********************************************* >> >> _______________________________________________ >> This email is relayed from members at sigcis.org, the email discussion >> list of SHOT SIGCIS. Opinions expressed here are those of the member >> posting and are not reviewed, edited, or endorsed by SIGCIS. The list >> archives are at http://lists.sigcis.org/pipermail/members-sigcis.org/ >> and you can change your subscription options at >> http://lists.sigcis.org/listinfo.cgi/members-sigcis.org > > > > -- > johannahrodgers at gmail.com > www.johannahrodgers.net > > -- Matthew Kirschenbaum Professor of English and Digital Studies Director, Graduate Certificate in Digital Studies Printer's Devil, BookLab University of Maryland mgk at umd.edu 
From william.mcmillan at cuaa.edu Sat Jul 4 11:43:23 2020 From: william.mcmillan at cuaa.edu (McMillan, William W) Date: Sat, 4 Jul 2020 18:43:23 +0000 Subject: [SIGCIS-Members] Correctness and verification In-Reply-To: <05f201d6522c$abc26210$03472630$@gmail.com> References: <05f201d6522c$abc26210$03472630$@gmail.com> Message-ID: Adding to Tom's comments on this, especially the importance of getting requirements and specifications right to begin with, it's estimated that 80% of software errors result from incorrect requirements definition (though this would likely be far less in defining requirements for purely numerical calculations). In support of Fetzer's philosophical complaints about using math and logic to prove a correspondence between a specification and an implemented algorithm, we can point to human error in the proof process. (Maybe he did this himself -- haven't read it.) A good example was in a textbook that used formal proof techniques to show that a binary search algorithm was correct. The problem was that the code was actually wrong (a confusion between integer division and a modulus operation); the author simply misread the code he had written and proved correct what he didn't write. (Fully automated proof techniques might prevent such errors.) (BTW, many distinguish between "requirements" and "specifications," with the latter usually meaning something more formal, but, like all efforts to fix technical meanings to words in human languages, the distinction doesn't always take hold very well when people talk about software.) About errors in numerical precision, it's important to note the problem of accumulated errors over many operations, not just in one-shot representations or calculations. You can estimate pi by adding up a bunch of chords of a half circle. As you increase the number of chords, getting a finer approximation, you get a better estimate of pi. For a while. 
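A sketch of this in code (my illustration, using the classic Archimedes side-doubling recurrence rather than Bill's exact chord setup; the subtraction 4 - s*s cancels catastrophically as the sides shrink):

```python
import math

def chord_estimates(doublings=30):
    """Estimate pi from inscribed polygons in a unit circle, starting
    from a hexagon (6 sides of length 1) and doubling the side count.
    The recurrence is mathematically exact but numerically unstable."""
    s, n = 1.0, 6
    estimates = []
    for _ in range(doublings):
        # side length of the polygon with twice as many sides
        s = math.sqrt(2.0 - math.sqrt(4.0 - s * s))
        n *= 2
        estimates.append(n * s / 2.0)  # half-perimeter approximates pi
    return estimates

errors = [abs(e - math.pi) for e in chord_estimates()]
```

On a double-precision machine the error shrinks for roughly a dozen doublings and then grows as cancellation destroys significant digits; eventually the computed side length collapses to zero.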
Then the estimate gets progressively worse as you chop up the circle into smaller pieces. Accumulated errors in representation and calculation pollute the final result. - Bill ________________________________ From: Members [members-bounces at lists.sigcis.org] on behalf of thomas.haigh at gmail.com [thomas.haigh at gmail.com] Sent: Saturday, July 04, 2020 1:58 PM To: 'Matthew Kirschenbaum'; 'members' Subject: [SIGCIS-Members] Correctness and verification Returning for part II of the answer, on actual bugs. There is a huge computer science literature relevant to Matt's question, but the key words are "correctness," "formal methods," "specification language" and "verification" rather than "error" or "bug." With the Pentium bug, IIRC some bogus value in a lookup table deep in the processor caused it to give wrong answers. That's an example of a situation where a finished system doesn't perform in line with requirements. But how are requirements expressed? Typically with a written specification that leaves many things ambiguous. Going back to the 1950s a lot of work in systems analysis was aimed at coming up with methods to get specifications right so that the systems built from them would do what was required. A problem in the final system might be a result of a mistake in specification or in implementation. The computer science answer to this was to express specifications completely and unambiguously in mathematical terms and then prove that the final software/hardware would always do what the specification said. Both tasks were enormously difficult. A lot of the history is told in MacKenzie, Donald. Mechanizing Proof. Cambridge, MA: MIT Press, 2001, which is an extremely good book full of very clear explanations of difficult topics. This history includes the intervention of a philosopher, J.H. Fetzer, who IIRC said that claiming to prove any correspondence between a mathematical specification and a material object is a category error. 
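Bill's binary-search anecdote suggests a miniature version of checking an implementation against a specification: instead of a hand proof, exhaustively test every input in a small finite domain, which is the brute-force spirit of model checking. This is my own hypothetical reconstruction of that class of bug (a slip of % for //), not the textbook's actual code:

```python
from itertools import combinations_with_replacement

def bsearch(a, x, midpoint, max_steps=64):
    """Binary search for x in sorted list a: return an index holding x,
    or -1 if absent. `midpoint` computes mid from lo + hi; the step cap
    guards against a wrong midpoint rule looping forever."""
    lo, hi = 0, len(a) - 1
    for _ in range(max_steps):
        if lo > hi:
            return -1
        mid = midpoint(lo + hi)
        if a[mid] == x:
            return mid
        elif a[mid] < x:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1  # gave up; counts as a wrong answer if x is present

correct = lambda s: s // 2  # integer division: the intended midpoint
buggy = lambda s: s % 2     # modulus: the kind of slip a paper proof can miss

def violates_spec(midpoint):
    """Check every sorted list of length <= 4 over {0..3} and every
    target against the specification of search: the result must be a
    valid index of x, or -1 exactly when x is absent."""
    for n in range(5):
        for a in combinations_with_replacement(range(4), n):
            for x in range(4):
                i = bsearch(list(a), x, midpoint)
                ok = (i == -1 and x not in a) or (i != -1 and a[i] == x)
                if not ok:
                    return True
    return False
```

The correct midpoint rule passes every case; the modulus version fails (it can even cycle forever without the step cap), which a checker catches mechanically where a misread proof did not.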
So I'm sure there are pointers there to follow up for more recent philosophical work on the topic. I gave some autobiographically grounded thoughts on some of this in a recent paper "Assembling a Prehistory for Formal Methods" which also points to some papers by Niklaus Wirth and Cliff Jones giving insider views on the history. http://www.tomandmaria.com/Tom/Writing/FormalMethodsHistoryPreprint.pdf My suggestion is that the formal methods movement was in large part an outgrowth of the original impetus and core community behind both the Algol effort and the 1968 Software Engineering Conference (which had less to do than is generally suggested with what was offered under the banner of "Software Engineering" by the 1980s). That appeared in a special issue on the history of formal methods, which includes other, more technical, items of possible interest: https://link.springer.com/journal/165/31/6. Editing the Turing Award website (particularly ongoing work to add video clips to profiles) has also reminded me that several awards were given for work in this area, and in particular that the "model checking" approach has had a huge impact on how chip designs are tested. So Matt might want to look at the following entries: https://amturing.acm.org/award_winners/hoare_4622167.cfm https://amturing.acm.org/award_winners/sifakis_1701095.cfm https://amturing.acm.org/award_winners/emerson_1671460.cfm https://amturing.acm.org/award_winners/clarke_1167964.cfm Best wishes, Tom From: Members On Behalf Of Matthew Kirschenbaum Sent: Friday, July 3, 2020 12:55 PM To: members Subject: [SIGCIS-Members] the nature of computational error Hello all, I am interested in a better understanding of the nature of computational error. My sense is that actual, literal (mathematical) mistakes in modern computers are quite rare; the notorious Pentium bug of the early 1990s is the exception that proves the rule. 
Most bugs are, rather, code proceeding to a perfectly correct logical outcome that just so happens to be inimical or intractable to the user and/or other dependent elements of the system. The Y2K "bug," for instance, was actually code executing in ways that were entirely internally self-consistent, however much havoc the code would wreak (or was expected to wreak). Can anyone recommend reading that will help me formulate such thoughts with greater confidence and accuracy? Or serve as a corrective? I'd like to read something fundamental and even philosophical about, as my subject line has it, the nature of computational error. I'd also be interested in collecting other instances comparable to the Pentium bug--bugs that were actual flaws and mistakes hardwired at the deepest levels of a system. Thank you-- Matt -- Matthew Kirschenbaum Professor of English and Digital Studies Director, Graduate Certificate in Digital Studies Printer's Devil, BookLab University of Maryland mgk at umd.edu From CeruzziP at si.edu Sat Jul 4 14:44:23 2020 From: CeruzziP at si.edu (Ceruzzi, Paul) Date: Sat, 4 Jul 2020 21:44:23 +0000 Subject: [SIGCIS-Members] Numerical errors In-Reply-To: <05e401d65225$5b8a34a0$129e9de0$@gmail.com> References: <05e401d65225$5b8a34a0$129e9de0$@gmail.com> Message-ID: I have two contributions to this fascinating discussion -- I hope you find them of interest. Something for the Fourth of July. One is serious, the other less so (but you be the judge); As most of you know, the Space Shuttle was equipped with five identical IBM 4-pi computers, with a voting circuit to override any hardware fault in one of them. The fifth computer was programmed by a separate team. The reasoning was that an error in the software would vitiate the redundancy of the hardware, since all five would possibly have the same "bug." 
After the Shuttle was in operation for a while, NASA realized that an error in the _specifications_ would have been common to all five, regardless of who programmed them or whether they were programmed correctly. In other words, the "belt and suspenders" philosophy was perhaps flawed. The second is my recollection of a meeting at the Charles Babbage Institute in its inaugural year, when George Stibitz of Bell Labs was in attendance. While at Bell Labs, Stibitz was active in developing error-detecting codes for relay computers, which were notorious for their tendency to encounter intermittent hardware faults. As others mentioned, Hamming extended this work, and error-detecting and error-correcting codes are now common in most digital systems. After retiring from Bell Labs, Stibitz moved to Vermont and took a post at Dartmouth, across the Connecticut River. He told us that when he went to get a Vermont driver's license, he was told that they couldn't give him the license that day because "the computer made an error." He replied "That's impossible. I invented the computer, and when I did I made sure that it could not make errors." He had a wonderful sense of humor, so perhaps he was just messin' with us. If anyone else was there & remembers the story, let me know. Paul Ceruzzi ________________________________ From: Members on behalf of thomas.haigh at gmail.com Sent: Saturday, July 4, 2020 1:05 PM To: 'Matthew Kirschenbaum' ; 'members' Subject: [SIGCIS-Members] Numerical errors External Email - Exercise Caution Hello Matt, Great question. I'm going to reply first on the normal treatment of error in numerical applications, and separately on the larger question of design mistakes in hardware and software. You are correct that the Pentium bug fits into the second category, but many of the replies have focused on the first and they are both relevant. 
I'm not actually competent in numerical mathematics, but a spell in 2004-6 conducting full career oral history interviews with numerical software specialists as a subcontractor for the Society for Industrial and Applied Mathematics on a DOE grant exposed me to a lot of the history of this area in ways that have occasionally surfaced in my other work. The oral histories from the project are at http://history.siam.org/oralhistories.htm. One of the things it taught me is that the question of a "correct" numerical answer is not nearly as straightforward as most of us assume. In integer mathematics sure 2+2=4 etc. But the kinds of problems scientists needed early computers for invariably involved some very large and small quantities. So even though the hardware didn't support floating point they basically had to do the same thing manually, storing a certain number of significant digits and tracking the scaling factor that related these to the actual quantity. If you look at the ENIAC flow diagram on our poster, you'll see little notations tracking the power of ten scaling factors in front of many of the variable names in the program boxes https://eniacinaction.com/docs/MonteCarloPoster.pdf. That manual process was itself a major source of error and frustration, so from the 1950s onward all large computers intended for scientific use included hardware floating point so that if, for example, a very small quantity was multiplied by a very large constant the computer would figure out both the significant digits and the power of ten (or two) needed to scale them. But whether done manually or automatically, the numbers being represented are only approximations of the actual quantities. 
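The manual scaling just described can be sketched in a few lines. This toy representation (entirely my illustration) keeps six significant decimal digits plus an explicit power-of-ten scale factor, the bookkeeping a human computer, or an ENIAC flow diagram, had to carry by hand:

```python
def normalize(sig, exp, digits=6):
    """Keep only `digits` significant decimal digits of sig * 10**exp,
    rounding half up; the discarded digits are lost for good."""
    drop = len(str(abs(sig))) - digits
    if drop > 0:
        scale = 10 ** drop
        sign = 1 if sig >= 0 else -1
        # integer round-half-up: floor(|sig| / scale + 1/2)
        sig = sign * ((abs(sig) * 2 + scale) // (2 * scale))
        exp += drop
    return sig, exp

def fx_mul(a, b):
    """Multiply two scaled numbers given as (significand, power-of-ten)."""
    return normalize(a[0] * b[0], a[1] + b[1])

# 3.14159 is stored as (314159, -5); 2.0 as (200000, -5).
product = fx_mul((314159, -5), (200000, -5))   # 6.28318 -> (628318, -5)
squared = fx_mul((314159, -5), (314159, -5))   # 9.86959 -> (986959, -5)
```

Every multiplication silently discards digits in `normalize`; hardware floating point automated exactly this significand-plus-exponent bookkeeping, but the discarding, and hence the rounding error, remained.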
When doing calculations manually, scientists and engineers have always had to make a decision on how many digits to use, and doing that responsibly required some knowledge of how reliable the final answer would be based on the initial rounding and the potential of compounding errors as surplus digits were thrown away each time numbers were multiplied. The other important thing to understand here is that in real world computing even things like differential equations, which college calculus might fool you into thinking can be solved exactly, are solved approximately with numerical methods. These methods are usually iterative, based on measuring how far off target the current answer is so that an initial guess eventually converges on an accurate approximation. The conventional numerical methods found in textbooks, etc. were not well suited for automatic computers. Digital computers could carry out operations thousands of times faster than human computers, which in the worst case allowed errors to compound thousands of times faster. The new field of "numerical analysis" grew up at the intersection of computing and applied mathematics to address this. It included new methods to track the compounding of numerical errors through computations, and the development of more efficient and accurate algorithms for common mathematical chores such as calculating matrix eigenvalues. I heard the terms "overflow," "underflow," "truncation error" and "rounding error" a lot in the interviews as well as more esoteric terms such as "successive overrelaxation." One stream of work on "backward error analysis" led to an early Turing Award, for Jim Wilkinson (https://amturing.acm.org/award_winners/wilkinson_0671216.cfm). 
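The compounding described above is easy to exhibit (my example, not from the interviews): the decimal 0.1 has no exact binary representation, so every addition both inherits that representation error and adds fresh rounding error:

```python
import math

def naive_sum(value, times):
    """Accumulate `value` by repeated addition into a running total."""
    total = 0.0
    for _ in range(times):
        total += value
    return total

# Ten additions of 0.1 are already inexact; a million drift much further.
small_err = abs(naive_sum(0.1, 10) - 1.0)
big_err = abs(naive_sum(0.1, 10 ** 6) - 10 ** 5)

# math.fsum tracks the low-order bits that plain addition throws away
# and returns the correctly rounded sum of its inputs.
compensated = math.fsum([0.1] * 10)
```

The per-operation error is tiny; as the backward-error tradition made precise, what matters is how it grows over many operations.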
Those methods were also more complex and harder for non-specialists to reliably implement, which led to some of the earliest initiatives in software libraries (SHARE), peer review of software, portable software (BLAS, PFORT), and software packaging and distribution (LINPACK and EISPACK). One side of the story I did tell was through biographies of Cleve Moler (https://tomandmaria.com/Tom/Writing/MolerBio.pdf) and Jack Dongarra (https://tomandmaria.com/Tom/Writing/DongarraBio.pdf). Moler founded Mathworks (which you probably hear sponsoring things on NPR). The specialists also complained that ordinary scientists and engineers didn't want to develop the skills needed to understand which methods could safely be applied to which classes of equation, and so would introduce errors by grabbing the code for an inappropriate method. (A very popular book, Numerical Recipes, was accused of encouraging this and earned the disdain of some of my interviewees). Doing the interviews, I was struck by the very strong and personal aesthetic preferences the numerical software producers expressed for the floating point arithmetic of particular machines. The IBM 709X machines were acclaimed, whereas the CDC supercomputers were disdained. I sneaked a little of this into the Revised History of Modern Computing with Paul Ceruzzi, in terms of the terrible step back introduced with the IBM System/360 arithmetic. This needed expensive fixes to installed computers, like the Pentium bug, but it wasn't a bug, just the result of the design engineers making decisions without a good idea of how they would impact scientific users. Although System/360 was intended to work equally well for scientific and data processing applications it was much more successful for data processing. The problems began with the System/360 floating point. 
It used hexadecimal (base 16) rather than binary, which was efficient for smaller, business-oriented machines but would create major problems with rounding errors for scientific users. The new general-purpose registers raised more problems with the handling of single and double precision numbers. When IBM described its new architecture, William Kahan, then of the University of Waterloo, and others "went nuts" as they "recognized something really perverse about the arithmetic." IBM found ways to work around some of the issues in software libraries, but Kahan recalls that after the full scale of the problem was acknowledged in 1966, following lobbying by SHARE, the company spent millions tweaking the hardware of machines already installed. The wide range of approaches to floating point arithmetic was also a threat to portability. FORTRAN code could be moved from one system to another, but it would give different answers when run on them. There might also be relatively large shifts in answers based on tiny variations in initial inputs. So the question of error gets complicated as the "right" answer depends on the machine the code is being run on. Also an algorithm might run accurately but give misleading answers because it is being applied to an equation with unsuitable characteristics. Kahan is the central figure in addressing these problems, leading the IEEE standards effort to come up with an optimal floating point design that could be standardized across manufacturers. Luckily, Intel was the first adopter thanks to a consulting contract Kahan had. He's a fascinating figure (I wrote the profile at https://amturing.acm.org/award_winners/kahan_1023746.cfm) but relatively little known because floating point is seen as such a niche area. 
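The base-16 rounding problem can be made concrete without any IBM hardware. System/360 short floats kept six hexadecimal digits of fraction, and because normalization moves in four-bit steps, the gap between adjacent representable numbers stays constant across each whole power of 16; relative precision therefore "wobbles" by a factor of up to 16, versus 2 for a binary format. A sketch based on my reading of the format, not production code:

```python
import math

HEX_DIGITS = 6  # System/360 short float: 6 hex digits of fraction

def s360_ulp(x):
    """Gap between adjacent representable magnitudes near x > 0 for a
    base-16 float whose fraction holds HEX_DIGITS hex digits."""
    e = math.floor(math.log(x, 16)) + 1   # 16**(e-1) <= x < 16**e
    return 16.0 ** (e - HEX_DIGITS)

# 1.01 and 15.9 share the same exponent, hence the same absolute gap,
# so their *relative* precision differs by nearly a factor of 16:
wobble = (s360_ulp(1.01) / 1.01) / (s360_ulp(15.9) / 15.9)
```

A value whose leading hex digit is 1 effectively carries three fewer significant bits than one whose leading digit is 8-15, which is one reason scientific users found the format so perverse.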
When I showed up for the interview he talked for 24 hours spread over four days (http://history.siam.org/pdfs2/Kahan_final.pdf). Here's how we tell that story in the Revised History: Doing engineering calculations or financial modelling cost a lot less with a personal computer, such as the Apple II, than with a mainframe or timesharing system. But only small jobs would fit into its limited memory and run acceptably quickly. Complex models still needed big computers. That began to change with the IBM PC. Even the original IBM PC could be expanded to much larger memory capacities than the Apple. The other big difference was floating point. Since the 1950s capable floating-point hardware support had been the defining characteristic of large scientifically-oriented computers. The 8088 used in the original PC did not support floating point and its performance on technical calculations was mediocre. But every PC included an empty socket waiting for a new kind of chip, the 8087 "floating point coprocessor." The 8087 was the first chip to implement a new approach to floating point, proposed by William Kahan and later formalized in the standard IEEE 754. Its adoption by firms including DEC and IBM was a major advance for scientific computing. Code, even in a standard language like FORTRAN, had previously produced inconsistent floating-point results when run on different computers. According to Jerome Coonen, a student of Kahan's who managed software development for the original Macintosh, this standardization on robust mechanisms was a "huge step forward" from the previous "dismal situation." Kahan's achievement was having floating point taken for granted for 40 years. The 8087 was announced in 1980 but trickled onto the market because it pushed the limits of Intel's production processes. Writing in Byte, Steven S. Fried called it "a full-blown 80-bit processor that performs numerical operations up to 100 times faster" --
at the same speed as a medium-sized minicomputer, while providing more accuracy than most mainframes. The 8088 itself had only 29,000 transistors, but its coprocessor needed 45,000 to implement its own registers and stack. Code had to be rewritten to use special floating-point instructions, which were executed in parallel with whatever the main processor was doing. Scientific users quickly embraced the 8087, which made the PC a credible alternative to minicomputers. Fried had promised that "the 8087 can also work wonders with business applications" but software support was limited. Even Lotus 1-2-3, which existed only to crunch numbers, did not utilize it. Fried began a business selling patches to add coprocessor support to such packages. Over time, IEEE-style floating point became a core part of every processor. By the time Intel launched the 80486 in 1989, its factories were just about able to manufacture a one million transistor chip with a coprocessor built in. Software developers, particularly videogame programmers, began to use floating point instructions. By the late 1990s PC processors competed largely on the strength of their floating-point capabilities. So that's two big kinds of error to dig into: errors related to the handling of arithmetic in a particular machine and errors introduced by the algorithm (or as they call them, "methods") chosen to solve an equation numerically. Thanks to reliance on IEEE standard floating point and the eclipse of FORTRAN by modern systems like MATLAB both have been largely black-boxed from typical scientific users. Best wishes, Tom From: Members On Behalf Of Matthew Kirschenbaum Sent: Friday, July 3, 2020 12:55 PM To: members Subject: [SIGCIS-Members] the nature of computational error Hello all, I am interested in a better understanding of the nature of computational error. 
My sense is that actual, literal (mathematical) mistakes in modern computers are quite rare; the notorious Pentium bug of the early 1990s is the exception that proves the rule. Most bugs are, rather, code proceeding to a perfectly correct logical outcome that just so happens to be inimical or intractable to the user and/or other dependent elements of the system. The Y2K "bug," for instance, was actually code executing in ways that were entirely internally self-consistent, however much havoc the code would wreak (or was expected to wreak). Can anyone recommend reading that will help me formulate such thoughts with greater confidence and accuracy? Or serve as a corrective? I'd like to read something fundamental and even philosophical about, as my subject line has it, the nature of computational error. I'd also be interested in collecting other instances comparable to the Pentium bug--bugs that were actual flaws and mistakes hardwired at the deepest levels of a system. Thank you-- Matt -- Matthew Kirschenbaum Professor of English and Digital Studies Director, Graduate Certificate in Digital Studies Printer's Devil, BookLab University of Maryland mgk at umd.edu From brian.randell at newcastle.ac.uk Sat Jul 4 15:16:13 2020 From: brian.randell at newcastle.ac.uk (Brian Randell) Date: Sat, 4 Jul 2020 22:16:13 +0000 Subject: [SIGCIS-Members] Correctness and verification In-Reply-To: <05f201d6522c$abc26210$03472630$@gmail.com> References: <05f201d6522c$abc26210$03472630$@gmail.com> Message-ID: Hi Matt: Thomas Haigh's pair of messages do an excellent job of outlining and describing some of a large body of computer science literature relevant to Matt's question. 
However, the topics he covers, such as software errors, formal methods, correctness proofs, etc., can themselves be seen as part of a more general study, called dependability or trustworthiness, which provides a conceptual basis and body of knowledge for considering and dealing also with such all too real possibilities as inadequate or absent specifications and incomplete or erroneous "proofs", whether of software or hardware. Some references: Schneider, Fred B., (Ed.). Trust in Cyberspace: Report of the Committee on Information Systems Trustworthiness, Computer Science and Telecommunications Board, Commission on Physical Sciences, Mathematics, and Applications, National Research Council, Washington, D.C., National Academy Press (1999) 332 pp. [Full text available at: http://www.nap.edu/readingroom/books/trust/] Cofta, Piotr. Trust, Complexity and Control, John Wiley (2007) 310 pp. [ISBN 0470061308] and if I may be permitted: Avizienis, A., Laprie, J.-C., Randell, B. and Landwehr, C. Basic Concepts and Taxonomy of Dependable and Secure Computing, IEEE Transactions on Dependable and Secure Computing, vol. 1, no. 1, (2004) pp. 11-33. Cheers Brian Randell > On 4 Jul 2020, at 18:58, thomas.haigh at gmail.com wrote: > > Returning for part II of the answer, on actual bugs. > > There is a huge computer science literature relevant to Matt's question, but the key words are "correctness," "formal methods," "specification language" and "verification" rather than "error" or "bug." > > With the Pentium bug, IIRC some bogus value in a lookup table deep in the processor caused it to give wrong answers. That's an example of a situation where a finished system doesn't perform in line with requirements. But how are requirements expressed? Typically with a written specification that leaves many things ambiguous. 
Going back to the 1950s a lot of work in systems analysis was aimed at coming up with methods to get specifications right so that the systems built from them would do what was required. > > A problem in the final system might be a result of a mistake in specification or in implementation. The computer science answer to this was to express specifications completely and unambiguously in mathematical terms and then prove that the final software/hardware would always do what the specification said. Both tasks were enormously difficult. A lot of the history is told in MacKenzie, Donald. Mechanizing Proof. Cambridge, MA: MIT Press, 2001, which is an extremely good book full of very clear explanations of difficult topics. This history includes the intervention of a philosopher, J.H. Fetzer, who IIRC said that claiming to prove any correspondence between a mathematical specification and a material object is a category error. So I'm sure there are pointers there to follow up for more recent philosophical work on the topic. > > I gave some autobiographically grounded thoughts on some of this in a recent paper "Assembling a Prehistory for Formal Methods" which also points to some papers by Niklaus Wirth and Cliff Jones giving insider views on the history. http://www.tomandmaria.com/Tom/Writing/FormalMethodsHistoryPreprint.pdf My suggestion is that the formal methods movement was in large part an outgrowth of the original impetus and core community behind both the Algol effort and the 1968 Software Engineering Conference (which had less to do than is generally suggested with what was offered under the banner of "Software Engineering" by the 1980s). That appeared in a special issue on the history of formal methods, which includes other, more technical, items of possible interest: https://link.springer.com/journal/165/31/6. 
> > Editing the Turing Award website (particularly ongoing work to add video clips to profiles) has also reminded me that several awards were given for work in this area, and in particular that the "model checking" approach has had a huge impact on how chip designs are tested. So Matt might want to look at the following entries: > > https://amturing.acm.org/award_winners/hoare_4622167.cfm > https://amturing.acm.org/award_winners/sifakis_1701095.cfm > https://amturing.acm.org/award_winners/emerson_1671460.cfm > https://amturing.acm.org/award_winners/clarke_1167964.cfm > > Best wishes, > > Tom > > > From: Members On Behalf Of Matthew Kirschenbaum > Sent: Friday, July 3, 2020 12:55 PM > To: members > Subject: [SIGCIS-Members] the nature of computational error > > Hello all, > > I am interested in a better understanding of the nature of computational error. My sense is that actual, literal (mathematical) mistakes in modern computers are quite rare; the notorious Pentium bug of the early 1990s is the exception that proves the rule. Most bugs are, rather, code proceeding to a perfectly correct logical outcome that just so happens to be inimical or intractable to the user and/or other dependent elements of the system. The Y2K "bug," for instance, was actually code executing in ways that were entirely internally self-consistent, however much havoc the code would wreak (or was expected to wreak). > > Can anyone recommend reading that will help me formulate such thoughts with greater confidence and accuracy? Or serve as a corrective? I'd like to read something fundamental and even philosophical about, as my subject line has it, the nature of computational error. I'd also be interested in collecting other instances comparable to the Pentium bug--bugs that were actual flaws and mistakes hardwired at the deepest levels of a system. 
> > Thank you-- Matt > > > -- > Matthew Kirschenbaum > Professor of English and Digital Studies > Director, Graduate Certificate in Digital Studies > Printer's Devil, BookLab > University of Maryland > mgk at umd.edu > _______________________________________________ > This email is relayed from members at sigcis.org, the email discussion list of SHOT SIGCIS. Opinions expressed here are those of the member posting and are not reviewed, edited, or endorsed by SIGCIS. The list archives are at http://lists.sigcis.org/pipermail/members-sigcis.org/ and you can change your subscription options at http://lists.sigcis.org/listinfo.cgi/members-sigcis.org -- School of Computing, Newcastle University, 1 Science Square, Newcastle upon Tyne, NE4 5TG EMAIL = Brian.Randell at ncl.ac.uk PHONE = +44 191 208 7923 URL = http://www.ncl.ac.uk/computing/people/profile/brianrandell.html From carolyn.kane at ryerson.ca Sun Jul 5 05:00:00 2020 From: carolyn.kane at ryerson.ca (Carolyn Kane) Date: Sun, 5 Jul 2020 08:00:00 -0400 Subject: [SIGCIS-Members] the nature of computational error In-Reply-To: References: Message-ID: Hi all, Just copying the email I sent to Matthew a few days ago to contribute to the scholarship on this subject. "....a copy of my recent book (*High-Tech Trash: Glitch, Noise, and Aesthetic Failure*, University of California Press, 2019) which addresses themes of error, accident, and failure in computational systems, albeit primarily from the perspective of aesthetics." The book can also be downloaded for free, through open access here: *High-Tech Trash: Glitch, Noise, and Aesthetic Failure* Best, Carolyn L. 
Kane, Author of *High-Tech Trash: Glitch, Noise, and Aesthetic Failure* (University of California Press, 2019) and *Chromatic Algorithms: Synthetic Color, Computer Art, and Aesthetics after Code* (University of Chicago Press, 2014) https://www.ryerson.ca/kane/ Associate Professor, School of Professional Communication Faculty of Communication and Design Ryerson University 80 Gould Street Toronto, Ontario Canada M5B 2K3 On Fri, Jul 3, 2020 at 1:55 PM Matthew Kirschenbaum wrote: > Hello all, > > I am interested in a better understanding of the nature of computational > error. My sense is that actual, literal (mathematical) mistakes in modern > computers are quite rare; the notorious Pentium bug of the early 1990s is > the exception that proves the rule. Most bugs are, rather, code proceeding > to a perfectly correct logical outcome that just so happens to be inimical > or intractable to the user and/or other dependent elements of the system. > The Y2K "bug," for instance, was actually code executing in ways that were > entirely internally self-consistent, however much havoc the code would > wreak (or was expected to wreak). > > Can anyone recommend reading that will help me formulate such thoughts > with greater confidence and accuracy? Or serve as a corrective? I'd like to > read something fundamental and even philosophical about, as my subject line > has it, *the nature of computational error*. 
I'd also be interested in > collecting other instances comparable to the Pentium bug--bugs that were > actual flaws and mistakes hardwired at the deepest levels of a system. > > Thank you-- Matt > > > -- > Matthew Kirschenbaum > Professor of English and Digital Studies > Director, Graduate Certificate in Digital Studies > Printer's Devil, BookLab > University of Maryland > mgk at umd.edu > _______________________________________________ > This email is relayed from members at sigcis.org, the email discussion > list of SHOT SIGCIS. Opinions expressed here are those of the member > posting and are not reviewed, edited, or endorsed by SIGCIS. The list > archives are at http://lists.sigcis.org/pipermail/members-sigcis.org/ and > you can change your subscription options at > http://lists.sigcis.org/listinfo.cgi/members-sigcis.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From brianberg at gmail.com Sun Jul 5 06:45:25 2020 From: brianberg at gmail.com (Brian Berg) Date: Sun, 5 Jul 2020 06:45:25 -0700 Subject: [SIGCIS-Members] Wed, 7/22: Video Conferencing - Silicon Valley's 50-Year History In-Reply-To: References: Message-ID: Hello, With all the online events happening these days that have replaced in-person events and meetings, video conferencing has gone mainstream with the general public. We are pleased to promote the upcoming Wed, July 22 webinar of the IEEE Silicon Valley Technical History Committee, which will detail how Silicon Valley has advanced this technology over the last 50+ years. Committee Chair (Tom Coughlin) and Vice Chair (myself) are both CNSV Board members. CNSV member Ken Pyle is also a committee member and event videographer, and he originated this webinar and will moderate the panel of speakers. Brian Berg *Wed, July 22, 1:30-2:30pm PDT* *Webinar: Video Conferencing - 
Silicon Valley's 50-Year History * [image: NY-VideoPhone.jpg] The COVID-19 pandemic looks like it will be the catalyst that turns video conferencing into an everyday communications tool for use by everyday people. However, this 50+ year overnight success story has roots dating back to the 19th century, as well as the 1964 World's Fair with AT&T's vision of a videophone that would be as simple to use as a telephone. To realize that vision, many different technologies would have to be invented, refined and cost-reduced, including video capture and associated screens, broadband infrastructure and what came to be known as the cloud, and the digitalization of audio and video. The compression algorithms and methodologies that were used to dramatically reduce the amount of audio and video data needed to be transmitted were an essential enabler, and they will be the focus of this event. Silicon Valley's history in this realm goes back over 50 years, and companies that no longer exist that built the foundation for video conferencing include Compression Labs, C-Cube and Divicom. However, their inventions were foundational, and companies including 8x8 and Intel still exist and are active in support of video conferencing as we use it today. Join the IEEE Silicon Valley Technical History Committee as we look back at the history of how we arrived at today's technology, including how hype and marketing often raced ahead of the technology and infrastructure. Our speakers will discuss the lessons they learned along the way, and will explain how the various building blocks slowly came together. We will also provide a picture of what the future holds for richer communication experiences, including virtual reality and improved security. About the speaker, Dave House of House Family Vineyards Dave House joined Intel Corp. in 1974, and led the company's microprocessor division from the 80386 through the Pentium II in 1985-2003. 
He also launched Intel's still very successful Server Products Division, and managed the team that introduced the "Intel Inside" marketing program. Dave was also instrumental in Intel's 1988 purchase of Digital Video Interactive (DVI) technology from David Sarnoff Research Center Laboratories. DVI brought multimedia initially to DOS-based PCs, and later became a fundamental building block for Intel's ProShare video conferencing system. Dave went on to lead several Silicon Valley telecom and networking stalwarts such as Bay Networks, Nortel, and Brocade, and he is currently the proprietor of the highly acclaimed House Family Vineyards. About the speaker, Eric Dorsey Eric Dorsey's primary expertise is in creating consumer products in the audio, video, neural networks and mobile space. He is currently managing projects for the US government using AI and Machine Learning in the area of national security. Eric was Director of Engineering at San Jose-based Compression Laboratories, a pioneer in video compression for both video conferencing and television distribution networks. He was involved in the initial meetings of the MPEG standard committees, and went on to senior roles at notable set-top companies such as Thomson and San Jose-based TiVo. More recently, he worked on a project for preserving Dr. Stephen Hawking's synthetic voice. About the speaker, Bryan Martin of 8x8, Inc. Bryan Martin is Chairman and CTO of 8x8, Inc. in Campbell. He led the company's transition from hardware to its Unified Communications as a Service offering, which integrates voice and video over IP in a single platform to enable cloud phone, video collaboration, team chat, contact center and analytics functionalities. 
Bryan's early work in compression was at Santa Clara-based Integrated Information Technology (IIT), where he helped develop the Vision Processor chips that allowed JPEG image encoding and decoding at video rates, and which could also perform both MPEG and H.261 compression and decompression of audio and video data. San Jose-based Compression Labs used these chips as a replacement for discrete circuits in its systems. About the speaker, Ken Pyle of Viodi, LLC Ken Pyle will moderate the above panel of speakers. Ken is a CNSV member who has been a videographer of many past CNSV events. He began working in field service and product management roles in the cable television and video industries starting in the 1980s, including Catel and Comlux/C-Cor. As these companies were providers of uni- and bi-directional analog and digital video transmission systems, this led Ken into work with video-on-demand (VOD) providers. In 2002, he founded Viodi, a premier provider of information and assistance to independent communications service providers and their vendors. Besides creating well over 100 video interviews and "documonials" annually, Ken is managing editor of the Viodi View newsletter and producer of the ViodiTV YouTube channel. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: NY-VideoPhone.jpg Type: image/jpeg Size: 138219 bytes Desc: not available URL: From j-coopersmith at tamu.edu Mon Jul 6 02:21:17 2020 From: j-coopersmith at tamu.edu (Jonathan Coopersmith) Date: Mon, 6 Jul 2020 04:21:17 -0500 Subject: [SIGCIS-Members] AI progress measuring Message-ID: Interesting piece that raises some neat questions ranging from comparison to upgrading and hype: "Core progress in AI has stalled in some fields," Matthew Hutson, Science, 29 May 2020: Vol. 368, Issue 6494, pp. 
927. DOI: 10.1126/science.368.6494.927 Stay sane, keep washing those hands, and practice social solidarity as well as distancing, JC Jonathan Coopersmith Professor Department of History Texas A&M University College Station, TX 77843-4236 979.739.4708 (cell) 979.862.4314 (fax) To teach or not to teach: https://www.tact.org/post/to-teach-in-person-or-not-that-is-the-question Most recent oped: "Will Artemis fail in the halls of Congress?" https://www.thespacereview.com/article/3836/1 Apollo thoughts: https://today.tamu.edu/2019/07/19/would-apollo-11-have-happened-without-russia/ *FAXED. The Rise and Fall of the Fax Machine* (Johns Hopkins University Press) is the co-recipient of the 2016 Business History Conference Hagley Prize for best book in business history. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: AI progress stalled 2020.pdf Type: application/pdf Size: 113117 bytes Desc: not available URL: From mkirschenbaum at gmail.com Mon Jul 6 08:06:20 2020 From: mkirschenbaum at gmail.com (Matthew Kirschenbaum) Date: Mon, 6 Jul 2020 11:06:20 -0400 Subject: [SIGCIS-Members] Correctness and verification In-Reply-To: <651F9096-E504-4E06-ADE4-20EEF47291D6@ncl.ac.uk> References: <05f201d6522c$abc26210$03472630$@gmail.com> <651F9096-E504-4E06-ADE4-20EEF47291D6@ncl.ac.uk> Message-ID: Hi all, Again, I'm so appreciative of the many generous and deeply informed responses to my query. (This will be quite the footnote to write!) I hope the discussion has also been worthwhile for others. Best, Matt On Mon, Jul 6, 2020 at 7:10 AM Troy Astarte wrote: > Dear Matt, and all, > > The questions of what is considered "right" and "wrong" 
are always deeply > sociological in nature; and the role of the machine in enforcing strict > categories frequently leads to the creation of additional error states: > sometimes inadvertently, and sometimes as a reflection of greater societal > norms. See Hicks, Marie. "Hacking the Cis-tem." IEEE Annals of the History > of Computing 41.1 (2019): 20-33, for a recent discussion of this concept in > relation to gender. Last year's (October 2019) SIG-CIS conference was on > precisely this topic: you can find the programme here: > http://meetings.sigcis.org/uploads/6/3/6/8/6368912/program_final.pdf. > > Let me jump to a totally different way of looking at this problem, and one > closer to both your initial mention of the FDIV bug, and Tom's and Brian's > emails below. As discussed, "formal methods" is a part of computer science > that primarily concerns itself with reducing errors in > computing, particularly of the "mathematical" kind you mentioned. A recent > workshop, also in October 2019, was on the topic of the history of formal > methods. You can find the workshop webpages here: > https://sites.google.com/view/hfm2019/. Proceedings are in press and will > appear in *Lecture Notes in Computer Science*. (If you let me know > off-list, I can send you a notification when they are published). You may > also find interesting the special issue of *Formal Aspects of > Computing* which contains some personal accounts of formal methods from a > historical perspective: https://link.springer.com/journal/165/31/6. > Finally, my own work is in the history of formal methods and you can find > abstracts and pre-prints on my webpage: > http://homepages.cs.ncl.ac.uk/troy.astarte/. > > I hope some of this is of interest to you! > > Best, > > Troy Astarte > > School of Computing > Newcastle University > > On 4 Jul 2020, at 23:16, Brian Randell > wrote: > > External sender. Take care when opening links or attachments. Do not > provide your login details. 
> > Hi Matt: > > Thomas Haigh's pair of messages do an excellent job of outlining and > describing some of a large body of computer science literature relevant to > Matt's question. However, the topics he covers, such as software errors, > formal methods, correctness proofs, etc., can themselves be seen as part of > a more general study, called dependability or trustworthiness, which > provides a conceptual basis and body of knowledge for considering and > dealing also with such all too real possibilities as inadequate or absent > specifications, incomplete or erroneous "proofs", whether of software or > hardware. Some references: > > Schneider, Fred B., (Ed.). Trust in Cyberspace: Report of the Committee > on Information Systems Trustworthiness, Computer Science and > Telecommunications Board, Commission on Physical Sciences, Mathematics, and > Applications, National Research Council, Washington, D.C., National Academy > Press (1999) 332 pp. [Full text available at: > http://www.nap.edu/readingroom/books/trust/] > > Cofta, Piotr. Trust, Complexity and Control, John Wiley (2007) 310 pp. > [ISBN 0470061308] > > and if I may be permitted: > > Avizienis, A., Laprie, J.-C., Randell, B. and Landwehr, C. Basic Concepts > and Taxonomy of Dependable and Secure Computing, IEEE Transactions on > Dependable and Secure Computing, vol. 1, no. 1, (2004) pp. 11-33. > > Cheers > > Brian Randell > > > > On 4 Jul 2020, at 18:58, thomas.haigh at gmail.com wrote: > > Returning for part II of the answer, on actual bugs. > > There is a huge computer science literature relevant to Matt's question, > but the key words are "correctness," "formal methods," "specification > language" and "verification" rather than "error" or "bug." > > With the Pentium bug, IIRC some bogus value in a lookup table deep in the > processor caused it to give wrong answers. That's an example of a situation > where a finished system doesn't perform in line with requirements. But how > are requirements expressed? 
Typically with a written specification that > leaves many things ambiguous. Going back to the 1950s a lot of work in > systems analysis was aimed at coming up with methods to get specifications > right so that the systems built from them would do what was required. > > A problem in the final system might be a result of a mistake in > specification or in implementation. The computer science answer to this was > to express specifications completely and unambiguously in mathematical > terms and then prove that the final software/hardware would always do what > the specification said. Both tasks were enormously difficult. A lot of the > history is told in MacKenzie, Donald. Mechanizing Proof. Cambridge, MA: MIT > Press, 2001, which is an extremely good book full of very clear explanations > of difficult topics. This history includes the intervention of a > philosopher, J.H. Fetzer, who IIRC said that claiming to prove any > correspondence between a mathematical specification and a material object > is a category error. So I'm sure there are pointers there to follow up for > more recent philosophical work on the topic. > > I gave some autobiographically grounded thoughts on some of this in a > recent paper, "Assembling a Prehistory for Formal Methods," which also points > to some papers by Niklaus Wirth and Cliff Jones giving insider views on the > history. > http://www.tomandmaria.com/Tom/Writing/FormalMethodsHistoryPreprint.pdf > My suggestion is that the formal methods movement was in large part an > outgrowth of the original impetus and core community behind both the Algol > effort and the 1968 Software Engineering Conference (which had less to do > than is generally suggested with what was offered under the banner of > "Software Engineering" by the 1980s). That appeared in a special issue on the > history of formal methods, which includes other, more technical, items of > possible interest: https://link.springer.com/journal/165/31/6. 
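[The lookup-table flaw Tom describes could be observed with a single division. The following is an editorial sketch, not from the thread; the constants are the widely reported test values circulated after Thomas Nicely's 1994 discovery, where a flawed Pentium returned a quotient off by roughly 2.56e-5, making the residue below come out near 256 instead of near zero.]

```python
# Canonical FDIV probe: divide, multiply back, and compare with the
# original numerator. A correct FPU leaves essentially no residue;
# the flawed Pentium's SRT division table produced a visibly wrong
# quotient for this operand pair.
x, y = 4195835.0, 3145727.0
residue = x - (x / y) * y
print(residue)  # near 0.0 on correct hardware; reportedly ~256 on a flawed Pentium
assert abs(residue) < 1.0, "division unit returned a wrong quotient"
```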
> > Editing the Turing Award website (particularly ongoing work to add video > clips to profiles) has also reminded me that several awards were given for > work in this area, and in particular that the "model checking" approach has > had a huge impact on how chip designs are tested. So Matt might want to > look at the following entries: > > https://amturing.acm.org/award_winners/hoare_4622167.cfm > https://amturing.acm.org/award_winners/sifakis_1701095.cfm > https://amturing.acm.org/award_winners/emerson_1671460.cfm > https://amturing.acm.org/award_winners/clarke_1167964.cfm > > Best wishes, > > Tom > > > From: Members On Behalf Of Matthew > Kirschenbaum > Sent: Friday, July 3, 2020 12:55 PM > To: members > Subject: [SIGCIS-Members] the nature of computational error > > Hello all, > > I am interested in a better understanding of the nature of computational > error. My sense is that actual, literal (mathematical) mistakes in modern > computers are quite rare; the notorious Pentium bug of the early 1990s is > the exception that proves the rule. Most bugs are, rather, code proceeding > to a perfectly correct logical outcome that just so happens to be inimical > or intractable to the user and/or other dependent elements of the system. > The Y2K "bug," for instance, was actually code executing in ways that were > entirely internally self-consistent, however much havoc the code would > wreak (or was expected to wreak). > > Can anyone recommend reading that will help me formulate such thoughts > with greater confidence and accuracy? Or serve as a corrective? I'd like to > read something fundamental and even philosophical about, as my subject line > has it, the nature of computational error. I'd also be interested in > collecting other instances comparable to the Pentium bug--bugs that were > actual flaws and mistakes hardwired at the deepest levels of a system. 
> > Thank you-- Matt > > > -- > Matthew Kirschenbaum > Professor of English and Digital Studies > Director, Graduate Certificate in Digital Studies > Printer's Devil, BookLab > University of Maryland > mgk at umd.edu > _______________________________________________ > This email is relayed from members at sigcis.org, the email discussion > list of SHOT SIGCIS. Opinions expressed here are those of the member > posting and are not reviewed, edited, or endorsed by SIGCIS. The list > archives are at http://lists.sigcis.org/pipermail/members-sigcis.org/ and > you can change your subscription options at > http://lists.sigcis.org/listinfo.cgi/members-sigcis.org > > > ? > > School of Computing, Newcastle University, 1 Science Square, > Newcastle upon Tyne, NE4 5TG > EMAIL = Brian.Randell at ncl.ac.uk PHONE = +44 191 208 7923 > URL = http://www.ncl.ac.uk/computing/people/profile/brianrandell.html > > _______________________________________________ > This email is relayed from members at sigcis.org, the email discussion > list of SHOT SIGCIS. Opinions expressed here are those of the member > posting and are not reviewed, edited, or endorsed by SIGCIS. The list > archives are at http://lists.sigcis.org/pipermail/members-sigcis.org/ and > you can change your subscription options at > http://lists.sigcis.org/listinfo.cgi/members-sigcis.org > > > -- Matthew Kirschenbaum Professor of English and Digital Studies Director, Graduate Certificate in Digital Studies Printer's Devil, BookLab University of Maryland mgk at umd.edu -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Troy.Astarte at newcastle.ac.uk Mon Jul 6 04:10:24 2020 From: Troy.Astarte at newcastle.ac.uk (Troy Astarte) Date: Mon, 6 Jul 2020 11:10:24 +0000 Subject: [SIGCIS-Members] Correctness and verification In-Reply-To: References: <05f201d6522c$abc26210$03472630$@gmail.com> Message-ID: <651F9096-E504-4E06-ADE4-20EEF47291D6@ncl.ac.uk> Dear Matt, and all, The questions of what is considered "right" and "wrong" are always deeply sociological in nature; and the role of the machine in enforcing strict categories frequently leads to the creation of additional error states: sometimes inadvertently, and sometimes as a reflection of greater societal norms. See Hicks, Marie. "Hacking the Cis-tem." IEEE Annals of the History of Computing 41.1 (2019): 20-33, for a recent discussion of this concept in relation to gender. Last year's (October 2019) SIG-CIS conference was on precisely this topic: you can find the programme here: http://meetings.sigcis.org/uploads/6/3/6/8/6368912/program_final.pdf. Let me jump to a totally different way of looking at this problem, and one closer to both your initial mention of the FDIV bug, and Tom's and Brian's emails below. As discussed, "formal methods" is a part of computer science that primarily concerns itself with reducing errors in computing, particularly of the "mathematical" kind you mentioned. A recent workshop, also in October 2019, was on the topic of the history of formal methods. You can find the workshop webpages here: https://sites.google.com/view/hfm2019/. Proceedings are in press and will appear in Lecture Notes in Computer Science. (If you let me know off-list, I can send you a notification when they are published). You may also find interesting the special issue of Formal Aspects of Computing which contains some personal accounts of formal methods from a historical perspective: https://link.springer.com/journal/165/31/6. 
Finally, my own work is in the history of formal methods and you can find abstracts and pre-prints on my webpage: http://homepages.cs.ncl.ac.uk/troy.astarte/. I hope some of this is of interest to you! Best, Troy Astarte School of Computing Newcastle University On 4 Jul 2020, at 23:16, Brian Randell > wrote: External sender. Take care when opening links or attachments. Do not provide your login details. Hi Matt: Thomas Haigh's pair of messages do an excellent job of outlining and describing some of a large body of computer science literature relevant to Matt's question. However, the topics he covers, such as software errors, formal methods, correctness proofs, etc., can themselves be seen as part of a more general study, called dependability or trustworthiness, which provides a conceptual basis and body of knowledge for considering and dealing also with such all too real possibilities as inadequate or absent specifications, incomplete or erroneous "proofs", whether of software or hardware. Some references: Schneider, Fred B., (Ed.). Trust in Cyberspace: Report of the Committee on Information Systems Trustworthiness, Computer Science and Telecommunications Board, Commission on Physical Sciences, Mathematics, and Applications, National Research Council, Washington, D.C., National Academy Press (1999) 332 pp. [Full text available at: http://www.nap.edu/readingroom/books/trust/] Cofta, Piotr. Trust, Complexity and Control, John Wiley (2007) 310 pp. [ISBN 0470061308] and if I may be permitted: Avizienis, A., Laprie, J.-C., Randell, B. and Landwehr, C. Basic Concepts and Taxonomy of Dependable and Secure Computing, IEEE Transactions on Dependable and Secure Computing, vol. 1, no. 1, (2004) pp. 11-33. Cheers Brian Randell On 4 Jul 2020, at 18:58, thomas.haigh at gmail.com wrote: Returning for part II of the answer, on actual bugs. There is a huge computer science literature relevant to Matt's question, but the key words are "correctness," "formal methods," 
"specification language" and "verification" rather than "error" or "bug." With the Pentium bug, IIRC some bogus value in a lookup table deep in the processor caused it to give wrong answers. That's an example of a situation where a finished system doesn't perform in line with requirements. But how are requirements expressed? Typically with a written specification that leaves many things ambiguous. Going back to the 1950s a lot of work in systems analysis was aimed at coming up with methods to get specifications right so that the systems built from them would do what was required. A problem in the final system might be a result of a mistake in specification or in implementation. The computer science answer to this was to express specifications completely and unambiguously in mathematical terms and then prove that the final software/hardware would always do what the specification said. Both tasks were enormously difficult. A lot of the history is told in MacKenzie, Donald. Mechanizing Proof. Cambridge, MA: MIT Press, 2001, which is an extremely good book full of very clear explanations of difficult topics. This history includes the intervention of a philosopher, J.H. Fetzer, who IIRC said that claiming to prove any correspondence between a mathematical specification and a material object is a category error. So I'm sure there are pointers there to follow up for more recent philosophical work on the topic. I gave some autobiographically grounded thoughts on some of this in a recent paper, "Assembling a Prehistory for Formal Methods," which also points to some papers by Niklaus Wirth and Cliff Jones giving insider views on the history. 
http://www.tomandmaria.com/Tom/Writing/FormalMethodsHistoryPreprint.pdf My suggestion is that the formal methods movement was in large part an outgrowth of the original impetus and core community behind both the Algol effort and the 1968 Software Engineering Conference (which had less to do than is generally suggested with what was offered under the banner of "Software Engineering" by the 1980s). That appeared in a special issue on the history of formal methods, which includes other, more technical, items of possible interest: https://link.springer.com/journal/165/31/6. Editing the Turing Award website (particularly ongoing work to add video clips to profiles) has also reminded me that several awards were given for work in this area, and in particular that the "model checking" approach has had a huge impact on how chip designs are tested. So Matt might want to look at the following entries: https://amturing.acm.org/award_winners/hoare_4622167.cfm https://amturing.acm.org/award_winners/sifakis_1701095.cfm https://amturing.acm.org/award_winners/emerson_1671460.cfm https://amturing.acm.org/award_winners/clarke_1167964.cfm Best wishes, Tom From: Members On Behalf Of Matthew Kirschenbaum Sent: Friday, July 3, 2020 12:55 PM To: members Subject: [SIGCIS-Members] the nature of computational error Hello all, I am interested in a better understanding of the nature of computational error. My sense is that actual, literal (mathematical) mistakes in modern computers are quite rare; the notorious Pentium bug of the early 1990s is the exception that proves the rule. Most bugs are, rather, code proceeding to a perfectly correct logical outcome that just so happens to be inimical or intractable to the user and/or other dependent elements of the system. The Y2K "bug," for instance, was actually code executing in ways that were entirely internally self-consistent, however much havoc the code would wreak (or was expected to wreak). 
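[The Y2K case can be reduced to a few lines: arithmetic on two-digit year fields is internally self-consistent but stops matching the world once dates cross the century boundary. An editorial toy sketch, not from the thread; the function name and scenario are illustrative only:]

```python
# Two-digit year storage, as in much pre-2000 code: "00" is 2000 to a
# human, but to the program it subtracts and compares as plain 0.
def years_elapsed(start_yy: int, end_yy: int) -> int:
    """Elapsed years on two-digit fields: consistent logic, wrong model."""
    return end_yy - start_yy

# A record opened in 1960 ("60"), processed in 1999 ("99"): correct.
assert years_elapsed(60, 99) == 39
# The same arithmetic run in 2000 ("00") reports -60 years. No hardware
# fault and no broken logic, just a representation whose implicit
# "19xx" assumption the calendar outgrew.
assert years_elapsed(60, 0) == -60
```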
Can anyone recommend reading that will help me formulate such thoughts with greater confidence and accuracy? Or serve as a corrective? I'd like to read something fundamental and even philosophical about, as my subject line has it, the nature of computational error. I'd also be interested in collecting other instances comparable to the Pentium bug--bugs that were actual flaws and mistakes hardwired at the deepest levels of a system. Thank you-- Matt -- Matthew Kirschenbaum Professor of English and Digital Studies Director, Graduate Certificate in Digital Studies Printer's Devil, BookLab University of Maryland mgk at umd.edu _______________________________________________ This email is relayed from members at sigcis.org, the email discussion list of SHOT SIGCIS. Opinions expressed here are those of the member posting and are not reviewed, edited, or endorsed by SIGCIS. The list archives are at http://lists.sigcis.org/pipermail/members-sigcis.org/ and you can change your subscription options at http://lists.sigcis.org/listinfo.cgi/members-sigcis.org ? School of Computing, Newcastle University, 1 Science Square, Newcastle upon Tyne, NE4 5TG EMAIL = Brian.Randell at ncl.ac.uk PHONE = +44 191 208 7923 URL = http://www.ncl.ac.uk/computing/people/profile/brianrandell.html _______________________________________________ This email is relayed from members at sigcis.org, the email discussion list of SHOT SIGCIS. Opinions expressed here are those of the member posting and are not reviewed, edited, or endorsed by SIGCIS. The list archives are at http://lists.sigcis.org/pipermail/members-sigcis.org/ and you can change your subscription options at http://lists.sigcis.org/listinfo.cgi/members-sigcis.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From dcb at dcbrock.net Wed Jul 8 13:18:40 2020 From: dcb at dcbrock.net (David C. 
Brock) Date: Wed, 8 Jul 2020 16:18:40 -0400 Subject: [SIGCIS-Members] Expert Systems and the Commercialization of Artificial Intelligence Message-ID: <5CB82FA4-0DC3-453C-A99D-E843856737A1@dcbrock.net> Burt Grad and I are pleased to announce that we will be guest editing a special issue of the IEEE Annals of the History of Computing for early 2022 on "Expert Systems and the Commercialization of Artificial Intelligence." We have most of the issue already lined up with contributions from key figures in various commercialization efforts with expert systems, describing and analyzing their technology and business experiences. We are currently seeking one or two additional articles that address the commercialization of artificial intelligence (expert systems or otherwise) from the 1970s to the 1990s. If members of this list have work that they feel might be a good fit for this special issue, please email David C. Brock directly at dbrock at computerhistory.org +++++++++++++++ David C. Brock dcb at dcbrock.net 40 Russell Street, Greenfield, MA 01301 Mobile: 413-522-3578 Skype: dcbrock Twitter: @dcbrock Pronouns: he, him, his From brian.randell at newcastle.ac.uk Thu Jul 9 03:22:02 2020 From: brian.randell at newcastle.ac.uk (Brian Randell) Date: Thu, 9 Jul 2020 10:22:02 +0000 Subject: [SIGCIS-Members] How the Digital Camera Transformed Our Concept of History Message-ID: <6AAD4F4A-1F9C-45D4-8BF7-711D247E9AB7@newcastle.ac.uk> Hi: "How the Digital Camera Transformed Our Concept of History" is the title of a paper by Allison Marsh that has just been published by IEEE Spectrum. It starts: > For an inventor, the main challenge might be technical, but sometimes it's timing that determines success. Steven Sasson had the technical talent but developed his prototype for an all-digital camera a couple of decades too early. 
> > A CCD from Fairchild was used in Kodak's first digital camera prototype > It was 1974, and Sasson, a young electrical engineer at Eastman Kodak Co., in Rochester, N.Y., was looking for a use for Fairchild Semiconductor's new type 201 charge-coupled device. His boss suggested that he try using the 100-by-100-pixel CCD to digitize an image. So Sasson built a digital camera to capture the photo, store it, and then play it back on another device. > > Sasson's camera was a kluge of components. He salvaged the lens and exposure mechanism from a Kodak XL55 movie camera to serve as his camera's optical piece. The CCD would capture the image, which would then be run through a Motorola analog-to-digital converter, stored temporarily in a DRAM array of a dozen 4,096-bit chips, and then transferred to audio tape running on a portable Memodyne data cassette recorder. The camera weighed 3.6 kilograms, ran on 16 AA batteries, and was about the size of a toaster. > > After working on his camera on and off for a year, Sasson decided on 12 December 1975 that he was ready to take his first picture. Lab technician Joy Marshall agreed to pose. The photo took about 23 seconds to record onto the audio tape. But when Sasson played it back on the lab computer, the image was a mess; although the camera could render shades that were clearly dark or light, anything in between appeared as static. So Marshall's hair looked okay, but her face was missing. She took one look and said, "Needs work." > > Sasson continued to improve the camera, eventually capturing impressive images of different people and objects around the lab. He and his supervisor, Garreth Lloyd, received U.S. Patent No. 4,131,919 for an electronic still camera in 1978, but the project never went beyond the prototype stage. Sasson estimated that image resolution wouldn't be competitive with chemical photography until sometime between 1990 and 1995, and that was enough for Kodak to mothball the project. 
The article ends: > Digital cameras also changed how historians conduct their research > For professional historians, the advent of digital photography has had other important implications. Lately, there's been a lot of discussion about how digital cameras in general, and smartphones in particular, have changed the practice of historical research. At the 2020 annual meeting of the American Historical Association, for instance, Ian Milligan, an associate professor at the University of Waterloo, in Canada, gave a talk in which he revealed that 96 percent of historians have no formal training in digital photography and yet the vast majority use digital photographs extensively in their work. About 40 percent said they took more than 2,000 digital photographs of archival material in their latest project. W. Patrick McCray of the University of California, Santa Barbara, told a writer with The Atlantic that he'd accumulated 77 gigabytes of digitized documents and imagery for his latest book project [an aspect of which he recently wrote about for Spectrum]. > > So let's recap: In the last 45 years, Sasson took his first digital picture, digital cameras were brought into the mainstream and then embedded into another pivotal technology, the cellphone and then the smartphone, and people began taking photos with abandon, for any and every reason. And in the last 25 years, historians went from thinking that looking at a photograph within the past year was a significant marker of engagement with the past to themselves compiling gigabytes of archival images in pursuit of their research. > So are those 1.4 trillion digital photographs that we'll collectively take this year a part of history? I think it helps to consider how they fit into the overall historical narrative. A century ago, nobody, not even a science fiction writer, predicted that someone would take a photo of a parking lot to remember where they'd left their car. 
A century from now, who knows if people will still be doing the same thing. In that sense, even the most mundane digital photograph can serve as both a personal memory and a piece of the historical record. Full story at https://spectrum.ieee.org/tech-history/silicon-revolution/how-the-digital-camera-transformed-our-concept-of-history Cheers Brian Randell -- School of Computing, Newcastle University, 1 Science Square, Newcastle upon Tyne, NE4 5TG EMAIL = Brian.Randell at ncl.ac.uk PHONE = +44 191 208 7923 URL = http://www.ncl.ac.uk/computing/people/profile/brianrandell.html From thomas.haigh at gmail.com Thu Jul 9 22:33:58 2020 From: thomas.haigh at gmail.com (thomas.haigh at gmail.com) Date: Fri, 10 Jul 2020 00:33:58 -0500 Subject: [SIGCIS-Members] How the Digital Camera Transformed Our Concept of History In-Reply-To: <6AAD4F4A-1F9C-45D4-8BF7-711D247E9AB7@newcastle.ac.uk> References: <6AAD4F4A-1F9C-45D4-8BF7-711D247E9AB7@newcastle.ac.uk> Message-ID: <088001d6567b$b6488360$22d98a20$@gmail.com> It's not quite the same thing, but in the Revised History of Modern Computing (with Paul Ceruzzi, coming soon from MIT Press) we've tried to integrate the history of digital imaging into the history of computing. It seems necessary, not least because digital cameras are computers in disguise (and because the images were stored, edited, and transmitted on more recognizable kinds of computer). The topic comes back in the later discussion of smartphones and device convergence in the final chapter, but as a sneak preview here is the subsection "Digital Cameras" from Chapter 10: The Computer Becomes a Universal Media Device. Would be happy to hear of any errors while there is still time to fix them. Tom

Digital Cameras

Television worked by dividing a picture into a grid of dots. Even the term "pixel" (for picture element), which we now associate with computer equipment, originated in the television equipment industry. Back in 1945, working on the "First Draft" 
EDVAC design, John von Neumann was fascinated by the potential of the iconoscope, an electronic tube used in early television cameras, as a storage device. In television, however, the intensity of each dot was transmitted as an analog value. Turning pixels into numbers was the job of the frame grabber. This captured a single frame from a video input and turned it into a bitmap image. Frame grabbers were used for video production work and were built into specialist video manipulation hardware to create special effects. A related piece of hardware, the genlock, synchronized computer displays with a video image so that computer-generated titles and graphics could be added to videos. These devices were expensive, purchased mostly by video production companies to liven up music videos, advertisements, and wedding footage with titles and special effects.[1] Today digital video sensors are everywhere. The crucial development was the charge-coupled device (CCD), which combined a semiconductor with a light-sensitive layer. Fairchild Semiconductor began to sell a 100x100 light sensor in 1974. That provided the basis for an experimental digital camera at Kodak. When light was focused onto the sensor matrix, numbers could be read off the chip. Space missions had a particular need for tiny and reliable digital imaging technologies, creating pictures that could be beamed back to Earth. Techniques had been developed back in the 1960s, originally for spy satellites, to expose film and then scan it and transmit images digitally back to Earth. Being able to take high-quality digital still images directly was much simpler and faster. By 1978 a KH-11 spy satellite was using a CCD with, reportedly, an 800x800 resolution.[2] The Hubble Space Telescope, launched in 1990, used a similar-sized mirror but gave much higher resolution CCD sensors a very public showcase.[3] Back on Earth, the first big market was for cheaper "one-dimensional" sensors able to scan a single line. 
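The line-scan idea is simple enough to sketch in a few lines of code. This is purely illustrative (the scene, dimensions, and 8-bit quantization are assumptions for the sketch, not details of any particular scanner):

```python
# Sketch: how a one-dimensional sensor builds a two-dimensional image.
# A line sensor reads one row of light intensities at a time; moving it
# (or the page) step by step and stacking the rows yields a bitmap.

WIDTH, HEIGHT = 8, 4  # illustrative scene dimensions

def read_line(scene, row):
    """Read one row of analog intensities and quantize each to an 8-bit value."""
    return [min(255, int(v)) for v in scene[row]]

def scan(scene):
    """Move the line sensor down the scene, one row per step."""
    return [read_line(scene, row) for row in range(len(scene))]

# A synthetic analog "scene": brightness rises from left to right.
scene = [[col * 255 / (WIDTH - 1) for col in range(WIDTH)]
         for _ in range(HEIGHT)]

bitmap = scan(scene)
print(bitmap[0])  # one scanned row, dark (0) to bright (255)
```

The same stacking-of-rows logic applies whether the sensor moves (flatbed scanner), the page moves (fax machine), or the scene changes under a fixed sensor.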
Flatbed scanners and fax machines moved the sensor across the page to capture the entire image gradually. (A similar digital scanning approach had been pioneered with the photo diode cameras of the Viking Mars landers. It worked well, albeit slowly, as neither the platform nor the landscape was moving.) Commercializing digital cameras took longer, because many more sensor elements were needed to capture an entire image at once. The technology made a brief consumer appearance in 1987, in the PXL-2000 "PixelVision" camera produced by toy company Fisher-Price. It recorded highly pixelated video onto standard audio cassettes, later becoming a favorite of hipster artists.[4] CCDs were also used in some of the analog camcorders of the 1980s, bulky devices that combined a video cassette recorder and a television camera into a single box. By the mid-1990s higher resolution sensors, and the chips and memories to deal with the large files they produced, were becoming affordable. They made their way into two related kinds of product. Digital video cameras could store one hour of crisp, high-resolution footage on special tapes as 13 gigabytes of computer data. Computers fitted with a FireWire connection (also used by early iPods) could extract digital video, edit it, and write the results back to the tape without any loss of quality. The other kind of digital camera was patterned after traditional cameras. Camera manufacturers competed on "megapixels": how many millions of pixels the sensor element handled. At the end of the 1990s most had just one or two megapixels, capturing images that looked good on screen but would appear jagged when printed out. Because they were optimized for still images, which took less space than video, most still cameras used chip-based flash memory cards rather than tape (though some early models used floppy disks or CDs). Flash retained data when power was turned off but could be quickly and selectively overwritten. 
It was introduced by Toshiba in 1987, finding early applications in computers, where it stored configuration settings and held BIOS code in an easily updatable form. The cards used in early digital cameras could store only a few megabytes but, as with other memory chips, their capacities rose into the gigabytes as transistors shrank. Because it was compact and power efficient, high-capacity flash memory was a crucial enabling technology for the creation of new portable devices. Flash memories able to store hundreds of gigabytes ultimately replaced hard disk storage in most PCs, though this took longer than expected because magnetic disk capacities increased even faster than chip densities during the 1990s and early 2000s. The digital cameras of the late 1990s were bulky, had small screens, and would deplete their batteries and fill their memory cards after taking just a few dozen images. Compared to the models available even a few years later they were terrible, but the relevant comparison was with consumer film cameras. Conventional film cartridges held only 24 or 36 pictures. Seeing those pictures cost at least ten dollars and usually took three trips to a drugstore: to buy the film, to drop it off for processing, and to collect the prints. Pocket-sized cameras forced users to squint through a plastic window, giving a vague idea of what might appear in a photograph. Larger, more expensive single-lens reflex cameras took better pictures and showed whether an image was in focus. Little wonder that most people took out their camera only for vacation trips and special occasions.

Even the most primitive digital cameras enabled new photographic practices. Digital cameras caught on fastest for businesses that needed to shoot images and use them immediately, for real estate sales, corporate newsletters, or identity cards. Their direct competition was Polaroid instant cameras, which had high running costs and mostly took small pictures. 
As prices dropped and picture quality improved, consumers began to buy digital cameras, and to take far more pictures than ever before. Vacations were now captured with hundreds of pictures, not just one or two films. Teenagers could mimic the practices of fashion photographers by taking a few dozen shots of a friend and using the best one. Since the early 2000s, daily life has been visually recorded on a scale unmatched in earlier history, a phenomenon known as "ubiquitous photography."[5] Early memory cards held only a few megabytes, needing aggressive compression to hold even a dozen images. That was provided by a new image format, the JPEG (named for the Joint Photographic Experts Group), which, like its cousin the MP3 format, used lossy transform-based compression (in JPEG's case, the discrete cosine transform) to achieve similarly impressive reductions in file size. In 1991, when libjpeg, a widely used open source code module for JPEG compression, was released, it took a powerful PC to create these files. By the late 1990s the computer power could be put into a camera, though early models would be tied up for several seconds processing each image. Once the memory card was full, users moved the files onto a computer. Digital photography was another of the practices made possible by the arrival of PCs with voluminous hard drives as a standard feature of middle-class households. People who wanted to print out their photographs could still go to the drugstore, or purchase an affordable little color printer, but photographs were viewed more and more on screens. They were shared with friends and family by email, or by copying them onto a Zip disk or burning them onto a CD, rather than by handing over an envelope full of duplicate prints. Screens got bigger, images sharper, battery life longer, camera bodies smaller, and sensors better. By the early 2000s sensors with a dozen megapixels were common, enough that the image quality would be limited primarily by the quality of the camera's optics. 
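The storage arithmetic behind that compression can be made concrete with a back-of-the-envelope sketch. All figures here are illustrative assumptions, not measurements from any particular camera: a 1-megapixel, 24-bit-color image and a roughly 10:1 JPEG compression ratio.

```python
# Back-of-the-envelope storage arithmetic for an early digital camera.
pixels = 1_000_000           # assumed 1-megapixel sensor
bytes_per_pixel = 3          # 24-bit color, uncompressed
raw_bytes = pixels * bytes_per_pixel      # 3,000,000 bytes per raw image

jpeg_ratio = 10              # assumed typical JPEG compression ratio
jpeg_bytes = raw_bytes // jpeg_ratio      # ~300 KB per compressed image

card_bytes = 8 * 1024 * 1024              # an assumed 8 MB memory card
images_raw = card_bytes // raw_bytes      # images that fit uncompressed
images_jpeg = card_bytes // jpeg_bytes    # images that fit as JPEGs

print(images_raw, images_jpeg)  # prints 2 27
```

Under these assumptions, an 8 MB card holds only two uncompressed images but more than two dozen JPEGs, which is why aggressive compression was a precondition for usable cameras.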
Cameras began to use a different sensor technology, called CMOS after the chip technology it is based on. CMOS imaging was prototyped at the Jet Propulsion Laboratory, a centerpiece of the US space probe program. The new technology made camera sensors cheaper, smaller, and lower-powered than those based on CCDs. By 2006 a camera costing a few hundred dollars would fit in a trouser pocket, take hundreds of images without changing a battery or a memory card, and offer better image quality than any compact film-based consumer camera. Improvements under low light conditions, taking photographs at night or indoors without a flash, were particularly dramatic. -----Original Message----- From: Members On Behalf Of Brian Randell Sent: Thursday, July 9, 2020 5:22 AM To: Sigcis Subject: [SIGCIS-Members] How the Digital Camera Transformed Our Concept of History Hi: "How the Digital Camera Transformed Our Concept of History" is the title of a paper by Allison Marsh that has just been published by IEEE Spectrum. It starts: > For an inventor, the main challenge might be technical, but sometimes it's timing that determines success. Steven Sasson had the technical talent but developed his prototype for an all-digital camera a couple of decades too early. 
Full story at https://spectrum.ieee.org/tech-history/silicon-revolution/how-the-digital-camera-transformed-our-concept-of-history Cheers Brian Randell -- 
School of Computing, Newcastle University, 1 Science Square, Newcastle upon Tyne, NE4 5TG EMAIL = Brian.Randell at ncl.ac.uk PHONE = +44 191 208 7923 URL = http://www.ncl.ac.uk/computing/people/profile/brianrandell.html _______________________________________________ This email is relayed from members at sigcis.org, the email discussion list of SHOT SIGCIS. Opinions expressed here are those of the member posting and are not reviewed, edited, or endorsed by SIGCIS. The list archives are at http://lists.sigcis.org/pipermail/members-sigcis.org/ and you can change your subscription options at http://lists.sigcis.org/listinfo.cgi/members-sigcis.org _____ [1] Commodore's Amiga was well suited to video production, thanks to high resolution video modes that functioned well with inexpensive genlock and frame grabber hardware. Maher, The Future Was Here: The Commodore Amiga, ch. 5. [2] On the history of spy satellites, see William E. Burrows, Deep Black: Space Espionage and National Security (New York: Random House, 1987). [3] R. W. Smith and J. N. Tatarewicz, "Replacing a Technology: The Large Space Telescope and CCDs," Proceedings of the IEEE 73, no. 7 (July 1985): 1221-1235. [4] Chris O'Falt, "Pixelvision: How a Failed '80s Fisher-Price Toy Became One of Auteurs' Favorite '90s Tools," IndieWire, 2018, https://www.indiewire.com/2018/08/pixelvision-pxl-2000-fisher-price-toy-experimental-film-camera-lincoln-center-series-1201991348/. [5] Martin Hand, Ubiquitous Photography (Malden, MA: Polity Press, 2012). -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From brian.randell at newcastle.ac.uk Fri Jul 10 14:47:55 2020 From: brian.randell at newcastle.ac.uk (Brian Randell) Date: Fri, 10 Jul 2020 21:47:55 +0000 Subject: [SIGCIS-Members] Ideas Embodied in Metal: Babbage's Engines Dismembered and Remembered Message-ID: <19D3BBC2-C363-45D3-9E46-B721A3F284BF@newcastle.ac.uk> Hi: I've just come across and read a fascinating and thought-provoking account that adds significantly to the Babbage canon. Ideas Embodied in Metal: Babbage's Engines Dismembered and Remembered Chap. 6 of The Whipple Museum of the History of Science (Cambridge University Press, August 2019), pp. 119-158. Simon Schaffer Abstract This chapter seeks to extend previous studies of Charles Babbage's celebrated difference engine through an examination of a rare surviving fragment in the Whipple Museum's collection. Constructed from scrap parts inherited by Charles's son, Henry, in order to relaunch the engine project, the fragment has much to reveal about the afterlife of Babbage's most famous failure. An appendix to the chapter also publishes for the first time key documents relating to Henry Babbage's project. Full text: https://www.cambridge.org/core/books/whipple-museum-of-the-history-of-science/ideas-embodied-in-metal-babbages-engines-dismembered-and-remembered/F1CFD1014E5C02B61C0CE57E2D350CBF/core-reader Cheers Brian Randell -- 
School of Computing, Newcastle University, 1 Science Square, Newcastle upon Tyne, NE4 5TG EMAIL = Brian.Randell at ncl.ac.uk PHONE = +44 191 208 7923 URL = http://www.ncl.ac.uk/computing/people/profile/brianrandell.html From ddouglas at mit.edu Fri Jul 10 08:45:11 2020 From: ddouglas at mit.edu (Deborah Douglas) Date: Fri, 10 Jul 2020 15:45:11 +0000 Subject: [SIGCIS-Members] How the Digital Camera Transformed Our Concept of History In-Reply-To: <088001d6567b$b6488360$22d98a20$@gmail.com> References: <6AAD4F4A-1F9C-45D4-8BF7-711D247E9AB7@newcastle.ac.uk> <088001d6567b$b6488360$22d98a20$@gmail.com> Message-ID: <459C2637-597C-46A0-BA35-7FF83D374D8E@mit.edu> Fascinating! For those interested in Polaroid and digital photography, Peter Buse provides a nice synopsis in Chapter 3 of his book "The Camera Does the Rest: How Polaroid Changed Photography". Preoccupied with the introduction of integral film and the SX-70 in the 1970s, it seems Polaroid did not get serious about electronic imaging until the late 1970s/early 1980s. (You start reading about digital imaging ("filmless cameras" or "electronic cameras") in Polaroid's press around the mid-1980s.) Most of the records are at Baker Library at Harvard. As for the cost of film versus digital images, you might find it interesting to include the cost of Polaroid "instant film," as that eliminated two of those drugstore trips and provided more immediate gratification. (The SX-70 camera was released in 1972 at a retail price of $180; a pack of film with 10 images was $6.90.) Most serious historians of Polaroid consider the overnight Fotomat, followed by 1-hour mini-lab technology, to have been the main "killers" of instant photography for consumer use. (Another big factor: the "nod to the pod" obsession that became gospel during the Polaroid v. Kodak lawsuit created a stultifying atmosphere within the company that worked against the development of digital technologies, or any other technologies.) 
Probably more than you wanted to know about Polaroid but the great stories below got me thinking! Debbie Douglas On Jul 10, 2020, at 1:33 AM, thomas.haigh at gmail.com wrote: It's not quite the same thing, but in the Revised History of Modern Computing (with Paul Ceruzzi, coming soon from MIT Press) we've tried to integrate the history of digital imaging into the history of computing. It seems necessary, not least because digital cameras are computers in disguise (and because the images were stored, edited, and transmitted on more recognizable kinds of computer). The topic comes back in the later discussion of smartphones and device convergence in the final chapter but as a sneak preview here is the subsection ?Digital Cameras? from Chapter 10: The Computer Becomes a Universal Media Device. Would be happy to hear of any errors while there is still time to fix them?. Tom Digital Cameras Television worked by dividing a picture into a grid of dots. Even the term ?pixel? (for picture element) which we now associate with computer equipment originated in the television equipment industry. Back in 1945, working on the ?First Draft? EDVAC design, John von Neumann was fascinated by the idea potential of the iconoscope, an electronic tube used in early television cameras, as a storage device. In television, however, intensity of each dot was transmitted as an analog value. Turning pixels into numbers was the job of the frame grabber. This captured a single frame from a video input and turned it into a bitmap image. Frame grabbers were used for video production work and were built into specialist video manipulation hardware to create special effects. A related piece of hardware, the gen lock, synchronized computer displays with video image so that computer generate titles and graphics could be added to videos. 
These devices were expensive, purchased mostly by video production companies to liven up music videos, advertisements, and wedding footage with titles and special effects.[1] Today digital video sensors are everywhere. The crucial development was the charged coupled device (CCD), which combined a semiconductor with a light sensitive layer. Fairchild Semiconductor began to sell a 100x100 light sensor in 1974. That provided the basis for an experimental digital camera at Kodak. When light was focused onto the sensor matrix numbers could be read off the chip. Space missions had a particular need for tiny and reliable digital imaging technologies, creating pictures that could be beamed back to Earth. Techniques had been developed back in the 1960s, original for spy satellites, to expose film and then scan it and transmit images digitally back to earth. Being able to take high quality digital still images directly was much simpler and faster. By 1978 a KH-11 spy satellite was using a CCD with, reportedly, an 800x800 resolution.[2] The Hubble Space Telescope, launched in 1986, used a similar size mirror but gave much higher resolution CCD sensors a very public showcase.[3] Back on earth, the first big market was for cheaper ?one dimensional? sensors able to scan a single line. Flatbed scanners and fax machines moved the scanner across against the page to capture the entire image gradually. (A similar digital scanning approach had been pioneered with the photo diode cameras of the Viking Mars landers. It worked well, albeit slowly, as neither the platform nor the landscape was moving). Commercializing digital cameras took longer, because many more sensor elements were needed to capture an entire image at once. The technology made a brief consumer appearance in 1987, in the PXL-2000 ?PixelVision? camera produced by toy company Fisher Price. 
It recorded highly pixelated video onto standard audio cassettes, later becoming a favorite of hipster artists.[4] CCDs were also used in some of the analog camcorders of the 1980s, bulky devices that combined a video cassette recorder and a television camera into a single box. By the mid-1990s higher resolution sensors and the chips and memories to deal with the large files they produced were becoming affordable. They made their way into two related kinds of product. Digital video cameras could store one hour of crisp, high resolution footage on special tapes as 13 gigabytes of computer data. Computers fitted with a Firewire connection (also used by early iPods) could extract digital video, edit it, and write the results back to the tape without any loss of quality. The other kind of digital camera was patterned after traditional cameras. Camera manufacturers competed on ?megapixels? ? how many millions of pixels the sensor element handled. At the end of the 1990s most had just one or two megapixels, capturing images that looked good on screen but would appear jagged when printed out. Because they were optimized for still images, which took less space than video, most still cameras used chip-based flash memory cards rather than tape (though some early models used floppy disks or CDs). Flash retained data when power was turned off but could be quickly and selectively overwritten. It was introduced by Toshiba in 1987, finding early applications in computers to store configuration settings for computers and to hold BIOS code in an easily updatable form. The cards used in early digital cameras could store only a few megabytes but, as with other memory chips, their capacities rose into the gigabytes as transistors shrank. Because it was very compact and power efficient, high capacity flash memory capacities was a crucial enabling technology for the creation of new portable devices. 
Flash memories able to store hundreds of gigabytes ultimately replaced hard disk storage in most PCs, though this took longer than expected because magnetic disk capacities increased even faster than chip densities during the 1990s and early-2000s. The digital cameras of the late-1990s were bulky, had small screens, and would deplete their batteries and fill their memory cards after taking just a few dozen images. Compared to the models available even a few years later they were terrible, but the relevant comparison was with consumer film cameras. Conventional film cartridges held only 24 or 36 pictures. Seeing those pictures cost at least ten dollars and usually took three trips to a drugstore, to buy the film, to drop it off for processing, and to collect the prints. Pocket sized camera forced users to squint through a plastic window, giving a vague idea of what might appear in a photograph. Larger, more expensive single lens reflex cameras took better pictures and showed whether an image was in focus. Little wonder that most people took out their camera only for vacation trips and special occasions. Even the most primitive digital cameras enabled new photographic practices Digital cameras caught on fastest for business that needed to shoot images and use them immediately, for real estate sales, corporate newsletters, or identity cards. Their direct competition was Polaroid instant cameras, which had high running costs and mostly took small pictures. As prices dropped and picture quality improved, consumers began to buy digital cameras, and to take far more pictures than ever before. Vacations were now captured with hundreds of pictures, not just one or two films. Teenagers could mimic the practices of fashion photographers by taking a few dozen shots of a friend and using the best one. 
Since the early 2000s, daily life has been visually recorded on a scale unmatched in earlier history, a phenomenon known as ?ubiquitous photography.?[5] Early memory cards held only a few megabytes, needing aggressive compression to hold even a dozen images. That was provided by a new image format, the JPEG (named for the Joint Photographic Experts Group), a cousin to the MP3 format that used a fractal compression algorithm to achieve similarly impressive reductions in file size. In 1991, when libjpeg, a widely used open source code module for JPEG compression, was released, it took a powerful PC to create these files. By the late 1990s the computer power could be put into a camera, though early models would be tied up for several seconds processing each image. Once the memory card was full, users moved the files onto a computer. Digital photography was another of the practices made possible by the arrival of PCs with voluminous hard drives as a standard feature of middle-class households. People who wanted to print out their photographs could still go to the drug store, or purchase an affordable little color printer, but photographs were viewed more and more on screens. They were shared with friends and family by email, or by copying them onto a zip disk or burning onto a CD rather than by handing over an envelope full of duplicate prints. Screens got bigger, images sharper, battery life longer, camera bodies smaller, and sensors better. By the early 2000s sensors with a dozen megapixels were common, enough that the image quality would be limited primarily by the quality of the camera?s optics. Cameras began to use a different sensor technology, called CMOS after the chip technology it is based on. CMOS imaging was prototyped at the Jet Propulsion Laboratory, a centerpiece of the US space probe program. The new technology produced camera sensors cheaper, smaller, and lower powered than those based on CCDs. 
By 2006 a camera costing a few hundred dollars would fit in a trouser pocket, take hundreds of images without changing a battery or a memory card, and offer better image quality than any compact film-based consumer camera. Improvements under low light conditions, taking photographs at night or indoors without a flash, were particularly dramatic. -----Original Message----- From: Members > On Behalf Of Brian Randell Sent: Thursday, July 9, 2020 5:22 AM To: Sigcis > Subject: [SIGCIS-Members] How the Digital Camera Transformed Our Concept of History Hi: "How the Digital Camera Transformed Our Concept of History" is the title of a paper by Allison Marsh that has just been published by IEEE Spectrum. It starts: > For an inventor, the main challenge might be technical, but sometimes it?s timing that determines success. Steven Sasson had the technical talent but developed his prototype for an all-digital camera a couple of decades too early. > > A CCD from Fairchild was used in Kodak?s first digital camera > prototype It was 1974, and Sasson, a young electrical engineer at Eastman Kodak Co., in Rochester, N.Y., was looking for a use for Fairchild Semiconductor?s new type 201 charge-coupled device. His boss suggested that he try using the 100-by-100-pixel CCD to digitize an image. So Sasson built a digital camera to capture the photo, store it, and then play it back on another device. > > Sasson?s camera was a kluge of components. He salvaged the lens and exposure mechanism from a Kodak XL55 movie camera to serve as his camera?s optical piece. The CCD would capture the image, which would then be run through a Motorola analog-to-digital converter, stored temporarily in a DRAM array of a dozen 4,096-bit chips, and then transferred to audio tape running on a portable Memodyne data cassette recorder. The camera weighed 3.6 kilograms, ran on 16 AA batteries, and was about the size of a toaster. 
> > After working on his camera on and off for a year, Sasson decided on 12 December 1975 that he was ready to take his first picture. Lab technician Joy Marshall agreed to pose. The photo took about 23 seconds to record onto the audio tape. But when Sasson played it back on the lab computer, the image was a mess?although the camera could render shades that were clearly dark or light, anything in between appeared as static. So Marshall?s hair looked okay, but her face was missing. She took one look and said, ?Needs work.? > > Sasson continued to improve the camera, eventually capturing impressive images of different people and objects around the lab. He and his supervisor, Garreth Lloyd, received U.S. Patent No. 4,131,919 for an electronic still camera in 1978, but the project never went beyond the prototype stage. Sasson estimated that image resolution wouldn?t be competitive with chemical photography until sometime between 1990 and 1995, and that was enough for Kodak to mothball the project. The article ends: > Digital cameras also changed how historians conduct their research For > professional historians, the advent of digital photography has had other important implications. Lately, there?s been a lot of discussion about how digital cameras in general, and smartphones in particular, have changed the practice of historical research. At the 2020 annual meeting of the American Historical Association, for instance, Ian Milligan, an associate professor at the University of Waterloo, in Canada, gave a talk in which he revealed that 96 percent of historians have no formal training in digital photography and yet the vast majority use digital photographs extensively in their work. About 40 percent said they took more than 2,000 digital photographs of archival material in their latest project. W. 
Patrick McCray of the University of California, Santa Barbara, told a writer with The Atlantic that he'd accumulated 77 gigabytes of digitized documents and imagery for his latest book project [an aspect of which he recently wrote about for Spectrum]. > > So let's recap: In the last 45 years, Sasson took his first digital picture, digital cameras were brought into the mainstream and then embedded into another pivotal technology - the cellphone and then the smartphone - and people began taking photos with abandon, for any and every reason. And in the last 25 years, historians went from thinking that looking at a photograph within the past year was a significant marker of engagement with the past to themselves compiling gigabytes of archival images in pursuit of their research. > So are those 1.4 trillion digital photographs that we'll collectively take this year a part of history? I think it helps to consider how they fit into the overall historical narrative. A century ago, nobody, not even a science fiction writer, predicted that someone would take a photo of a parking lot to remember where they'd left their car. A century from now, who knows if people will still be doing the same thing. In that sense, even the most mundane digital photograph can serve as both a personal memory and a piece of the historical record. Full story at https://spectrum.ieee.org/tech-history/silicon-revolution/how-the-digital-camera-transformed-our-concept-of-history Cheers Brian Randell - School of Computing, Newcastle University, 1 Science Square, Newcastle upon Tyne, NE4 5TG EMAIL = Brian.Randell at ncl.ac.uk PHONE = +44 191 208 7923 URL = http://www.ncl.ac.uk/computing/people/profile/brianrandell.html _______________________________________________ This email is relayed from members at sigcis.org, the email discussion list of SHOT SIGCIS. Opinions expressed here are those of the member posting and are not reviewed, edited, or endorsed by SIGCIS. 
The list archives are at http://lists.sigcis.org/pipermail/members-sigcis.org/ and you can change your subscription options at http://lists.sigcis.org/listinfo.cgi/members-sigcis.org ________________________________ [1] Commodore's Amiga was well suited to video production, thanks to high resolution video modes that functioned well with inexpensive genlock and frame grabber hardware. Maher, The Future Was Here: The Commodore Amiga, ch. 5. [2] On the history of spy satellites, see William E. Burrows, Deep Black: Space Espionage and National Security (New York: Random House, 1987). [3] R. W. Smith and J. N. Tatarewicz, "Replacing a Technology: The Large Space Telescope and CCDs," Proceedings of the IEEE 73, no. 7 (July 1985): 1221-1235. [4] Chris O'Falt, "Pixelvision: How a Failed '80s Fisher-Price Toy Became One of Auteurs' Favorite '90s Tools," IndieWire, 2018, https://www.indiewire.com/2018/08/pixelvision-pxl-2000-fisher-price-toy-experimental-film-camera-lincoln-center-series-1201991348/. [5] Martin Hand, Ubiquitous Photography (Malden, MA: Polity Press, 2012). _______________________________________________ This email is relayed from members at sigcis.org, the email discussion list of SHOT SIGCIS. Opinions expressed here are those of the member posting and are not reviewed, edited, or endorsed by SIGCIS. The list archives are at http://lists.sigcis.org/pipermail/members-sigcis.org/ and you can change your subscription options at http://lists.sigcis.org/listinfo.cgi/members-sigcis.org Deborah G. Douglas, PhD - Director of Collections and Curator of Science and Technology, MIT Museum; Research Associate, Program in Science, Technology, and Society - Room N51-209 - 265 Massachusetts Avenue - Cambridge, MA 02139-4307 - ddouglas at mit.edu - 617-253-1766 telephone - 617-253-8994 facsimile - http://mitmuseum.mit.edu - she/her/hers -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From thomas.haigh at gmail.com Wed Jul 15 10:14:49 2020 From: thomas.haigh at gmail.com (thomas.haigh at gmail.com) Date: Wed, 15 Jul 2020 12:14:49 -0500 Subject: [SIGCIS-Members] Opinions on "Spacewar" vs. "Spacewar!" Message-ID: <0b1b01d65acb$72792050$576b60f0$@gmail.com> Hello SIGCIS, I'm canvassing opinion on a small topic, as I work with Paul Ceruzzi to finalize revisions on the revised History of Modern Computing. There are a lot of style choices like FORTRAN vs Fortran, Internet vs internet, etc. where dominant usage has evolved over the past twenty years, generally in the clear direction of not capitalizing things that aren't acronyms. We're planning to follow that, while still capitalizing Internet and Web to respect the historical context. On the other hand, since the first edition the pioneering PDP-1 video game formerly known as Spacewar has grown an exclamation point to become Spacewar!. This causes problems with punctuation, as in the previous sentence. It did not have one in either of the accounts that made it famous, Levy's Hackers (1984) and Brand's 1972 article "SPACEWAR: Fanatic Life and Symbolic Death Among the Computer Bums." So one might assume that any punctuation attached to it in its original MIT incarnation had fallen by the wayside as it spread. The original Modern History followed this then-standard usage. Wikipedia now has the exclamation point, and so does the Computer History Museum: https://www.computerhistory.org/pdp-1/spacewar/. The Smithsonian appears to have endorsed it in its writeup of the NMAH anniversary event, https://www.smithsonianmag.com/smithsonian-institution/how-first-popular-video-game-kicked-off-generations-virtual-adventure-180971020/. So I assume that this general shift must reflect some kind of movement in video game studies to reattach a lost piece of punctuation. 
On the other hand, there's a precedent for not using an exclamation point in books or articles even when it is part of a company self presentation: Yahoo!! (The second exclamation point there is my own excitement). The company used it consistently, but the AP Style Guide tells journalists to drop it when writing about Yahoo. We're following that in the revised history. So I'm torn about whether to follow the trend and use "Spacewar!" throughout, despite the punctuation problems it causes, or to apply the same logic as Yahoo and use "Spacewar" with an initial parenthetical observation that the official name is "Spacewar!". Best wishes, Tom -------------- next part -------------- An HTML attachment was scrubbed... URL: From ddouglas at mit.edu Wed Jul 15 10:47:25 2020 From: ddouglas at mit.edu (Deborah Douglas) Date: Wed, 15 Jul 2020 17:47:25 +0000 Subject: [SIGCIS-Members] Opinions on "Spacewar" vs. "Spacewar!" In-Reply-To: <0b1b01d65acb$72792050$576b60f0$@gmail.com> References: <0b1b01d65acb$72792050$576b60f0$@gmail.com> Message-ID: <46B490D8-BCAE-4DAE-829D-10390F9C8C25@mit.edu> Friends, When MIT marked the 50th anniversary of the game the name had the exclamation point. (http://gambit.mit.edu/updates/2012/01/spacewar_turns_50_gambit_celeb.php). I used it in the label for our display of a Spacewar! emulator in our MIT150 exhibition in 2011 (http://museum.mit.edu/150/25). I did so in deference to the preference of the game's creators (in the manner I had deferred to Oliver Smoot on the question of plus-or-minus an ear in the measurement of the Harvard Bridge). It is now in our database as well. At the same time, I will confess to having dropped the exclamation point in informal communications. My recommendation, Tom, is to strive for consistency in your text; the computer search engines will find it with or without excitement! 
Cheers, Debbie Douglas On Jul 15, 2020, at 1:14 PM, thomas.haigh at gmail.com wrote: Hello SIGCIS, I'm canvassing opinion on a small topic, as I work with Paul Ceruzzi to finalize revisions on the revised History of Modern Computing. There are a lot of style choices like FORTRAN vs Fortran, Internet vs internet, etc. where dominant usage has evolved over the past twenty years, generally in the clear direction of not capitalizing things that aren't acronyms. We're planning to follow that, while still capitalizing Internet and Web to respect the historical context. On the other hand, since the first edition the pioneering PDP-1 video game formerly known as Spacewar has grown an exclamation point to become Spacewar!. This causes problems with punctuation, as in the previous sentence. It did not have one in either of the accounts that made it famous, Levy's Hackers (1984) and Brand's 1972 article "SPACEWAR: Fanatic Life and Symbolic Death Among the Computer Bums." So one might assume that any punctuation attached to it in its original MIT incarnation had fallen by the wayside as it spread. The original Modern History followed this then-standard usage. Wikipedia now has the exclamation point, and so does the Computer History Museum: https://www.computerhistory.org/pdp-1/spacewar/. The Smithsonian appears to have endorsed it in its writeup of the NMAH anniversary event, https://www.smithsonianmag.com/smithsonian-institution/how-first-popular-video-game-kicked-off-generations-virtual-adventure-180971020/. So I assume that this general shift must reflect some kind of movement in video game studies to reattach a lost piece of punctuation. On the other hand, there's a precedent for not using an exclamation point in books or articles even when it is part of a company self presentation: Yahoo!! (The second exclamation point there is my own excitement). The company used it consistently, but the AP Style Guide tells journalists to drop it when writing about Yahoo. 
We're following that in the revised history. So I'm torn about whether to follow the trend and use "Spacewar!" throughout, despite the punctuation problems it causes, or to apply the same logic as Yahoo and use "Spacewar" with an initial parenthetical observation that the official name is "Spacewar!". Best wishes, Tom _______________________________________________ This email is relayed from members at sigcis.org, the email discussion list of SHOT SIGCIS. Opinions expressed here are those of the member posting and are not reviewed, edited, or endorsed by SIGCIS. The list archives are at http://lists.sigcis.org/pipermail/members-sigcis.org/ and you can change your subscription options at http://lists.sigcis.org/listinfo.cgi/members-sigcis.org Deborah G. Douglas, PhD - Director of Collections and Curator of Science and Technology, MIT Museum; Research Associate, Program in Science, Technology, and Society - Room N51-209 - 265 Massachusetts Avenue - Cambridge, MA 02139-4307 - ddouglas at mit.edu - 617-253-1766 telephone - 617-253-8994 facsimile - http://mitmuseum.mit.edu - she/her/hers -------------- next part -------------- An HTML attachment was scrubbed... URL: From marcweber at att.net Wed Jul 15 11:41:21 2020 From: marcweber at att.net (Marc Weber) Date: Wed, 15 Jul 2020 11:41:21 -0700 Subject: [SIGCIS-Members] Opinions on "Spacewar" vs. "Spacewar!" In-Reply-To: <46B490D8-BCAE-4DAE-829D-10390F9C8C25@mit.edu> References: <0b1b01d65acb$72792050$576b60f0$@gmail.com> <46B490D8-BCAE-4DAE-829D-10390F9C8C25@mit.edu> Message-ID: Dear Tom, Steve Russell, who wrote the game in question, is a docent for the Computer History Museum and until fairly recently gave demos of the program on our PDP-1; perhaps you've met him. I'm sure he'd be happy to give an opinion if useful. Although, as you point out, the creator's wishes are not always relevant to later style. 
Best, Marc > On Jul 15, 2020, at 10:47, Deborah Douglas wrote: > > Friends, > > When MIT marked the 50th anniversary of the game the name had the exclamation point. (http://gambit.mit.edu/updates/2012/01/spacewar_turns_50_gambit_celeb.php). I used it in the label for our display of a Spacewar! emulator in our MIT150 exhibition in 2011 (http://museum.mit.edu/150/25). I did so in deference to the preference of the game's creators (in the manner I had deferred to Oliver Smoot on the question of plus-or-minus an ear in the measurement of the Harvard Bridge). It is now in our database as well. At the same time, I will confess to having dropped the exclamation point in informal communications. > > My recommendation, Tom, is to strive for consistency in your text; the computer search engines will find it with or without excitement! > > Cheers, > > Debbie Douglas > > >> On Jul 15, 2020, at 1:14 PM, thomas.haigh at gmail.com wrote: >> >> Hello SIGCIS, >> >> I'm canvassing opinion on a small topic, as I work with Paul Ceruzzi to finalize revisions on the revised History of Modern Computing. >> >> There are a lot of style choices like FORTRAN vs Fortran, Internet vs internet, etc. where dominant usage has evolved over the past twenty years, generally in the clear direction of not capitalizing things that aren't acronyms. We're planning to follow that, while still capitalizing Internet and Web to respect the historical context. >> >> On the other hand, since the first edition the pioneering PDP-1 video game formerly known as Spacewar has grown an exclamation point to become Spacewar!. This causes problems with punctuation, as in the previous sentence. It did not have one in either of the accounts that made it famous, Levy's Hackers (1984) and Brand's 1972 article "SPACEWAR: Fanatic Life and Symbolic Death Among the Computer Bums." So one might assume that any punctuation attached to it in its original MIT incarnation had fallen by the wayside as it spread. 
The original Modern History followed this then-standard usage. >> >> Wikipedia now has the exclamation point, and so does the Computer History Museum: https://www.computerhistory.org/pdp-1/spacewar/. The Smithsonian appears to have endorsed it in its writeup of the NMAH anniversary event, https://www.smithsonianmag.com/smithsonian-institution/how-first-popular-video-game-kicked-off-generations-virtual-adventure-180971020/. >> >> So I assume that this general shift must reflect some kind of movement in video game studies to reattach a lost piece of punctuation. >> >> On the other hand, there's a precedent for not using an exclamation point in books or articles even when it is part of a company self presentation: Yahoo!! (The second exclamation point there is my own excitement). The company used it consistently, but the AP Style Guide tells journalists to drop it when writing about Yahoo. We're following that in the revised history. >> >> So I'm torn about whether to follow the trend and use "Spacewar!" throughout, despite the punctuation problems it causes, or to apply the same logic as Yahoo and use "Spacewar" with an initial parenthetical observation that the official name is "Spacewar!". >> >> Best wishes, >> >> Tom >> >> >> >> >> _______________________________________________ >> This email is relayed from members at sigcis.org, the email discussion list of SHOT SIGCIS. Opinions expressed here are those of the member posting and are not reviewed, edited, or endorsed by SIGCIS. The list archives are at http://lists.sigcis.org/pipermail/members-sigcis.org/ and you can change your subscription options at http://lists.sigcis.org/listinfo.cgi/members-sigcis.org > Deborah G. Douglas, PhD - Director of Collections and Curator of Science and Technology, MIT Museum; Research Associate, Program in Science, Technology, and Society - Room N51-209 - 265 Massachusetts Avenue - Cambridge, MA 02139-4307 - ddouglas at mit.edu - 617-253-1766 telephone - 
617-253-8994 facsimile - http://mitmuseum.mit.edu - she/her/hers > > > > > > _______________________________________________ > This email is relayed from members at sigcis.org, the email discussion list of SHOT SIGCIS. Opinions expressed here are those of the member posting and are not reviewed, edited, or endorsed by SIGCIS. The list archives are at http://lists.sigcis.org/pipermail/members-sigcis.org/ and you can change your subscription options at http://lists.sigcis.org/listinfo.cgi/members-sigcis.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From marc at webhistory.org Wed Jul 15 12:35:35 2020 From: marc at webhistory.org (Marc Weber) Date: Wed, 15 Jul 2020 12:35:35 -0700 Subject: [SIGCIS-Members] Opinions on "Spacewar" vs. "Spacewar!" In-Reply-To: <46B490D8-BCAE-4DAE-829D-10390F9C8C25@mit.edu> References: <0b1b01d65acb$72792050$576b60f0$@gmail.com> <46B490D8-BCAE-4DAE-829D-10390F9C8C25@mit.edu> Message-ID: <08C88984-00D2-4E9B-A25F-598AA88E7F21@webhistory.org> Dear Tom, Steve Russell, who wrote the game in question, is a docent for the Computer History Museum and until fairly recently gave demos of the program on our PDP-1; perhaps you've met him. I'm sure he'd be happy to give an opinion if useful. Although, as you point out, the creator's wishes are not always relevant to later style. Best, Marc > On Jul 15, 2020, at 10:47, Deborah Douglas wrote: > > Friends, > > When MIT marked the 50th anniversary of the game the name had the exclamation point. (http://gambit.mit.edu/updates/2012/01/spacewar_turns_50_gambit_celeb.php). I used it in the label for our display of a Spacewar! emulator in our MIT150 exhibition in 2011 (http://museum.mit.edu/150/25). I did so in deference to the preference of the game's creators (in the manner I had deferred to Oliver Smoot on the question of plus-or-minus an ear in the measurement of the Harvard Bridge). It is now in our database as well. 
At the same time, I will confess to having dropped the exclamation point in informal communications. > > My recommendation, Tom, is to strive for consistency in your text; the computer search engines will find it with or without excitement! > > Cheers, > > Debbie Douglas > > >> On Jul 15, 2020, at 1:14 PM, thomas.haigh at gmail.com wrote: >> >> Hello SIGCIS, >> >> I'm canvassing opinion on a small topic, as I work with Paul Ceruzzi to finalize revisions on the revised History of Modern Computing. >> >> There are a lot of style choices like FORTRAN vs Fortran, Internet vs internet, etc. where dominant usage has evolved over the past twenty years, generally in the clear direction of not capitalizing things that aren't acronyms. We're planning to follow that, while still capitalizing Internet and Web to respect the historical context. >> >> On the other hand, since the first edition the pioneering PDP-1 video game formerly known as Spacewar has grown an exclamation point to become Spacewar!. This causes problems with punctuation, as in the previous sentence. It did not have one in either of the accounts that made it famous, Levy's Hackers (1984) and Brand's 1972 article "SPACEWAR: Fanatic Life and Symbolic Death Among the Computer Bums." So one might assume that any punctuation attached to it in its original MIT incarnation had fallen by the wayside as it spread. The original Modern History followed this then-standard usage. >> >> Wikipedia now has the exclamation point, and so does the Computer History Museum: https://www.computerhistory.org/pdp-1/spacewar/. The Smithsonian appears to have endorsed it in its writeup of the NMAH anniversary event, https://www.smithsonianmag.com/smithsonian-institution/how-first-popular-video-game-kicked-off-generations-virtual-adventure-180971020/. >> >> So I assume that this general shift must reflect some kind of movement in video game studies to reattach a lost piece of punctuation. 
>> >> On the other hand, there's a precedent for not using an exclamation point in books or articles even when it is part of a company self presentation: Yahoo!! (The second exclamation point there is my own excitement). The company used it consistently, but the AP Style Guide tells journalists to drop it when writing about Yahoo. We're following that in the revised history. >> >> So I'm torn about whether to follow the trend and use "Spacewar!" throughout, despite the punctuation problems it causes, or to apply the same logic as Yahoo and use "Spacewar" with an initial parenthetical observation that the official name is "Spacewar!". >> >> Best wishes, >> >> Tom >> >> >> >> >> _______________________________________________ >> This email is relayed from members at sigcis.org, the email discussion list of SHOT SIGCIS. Opinions expressed here are those of the member posting and are not reviewed, edited, or endorsed by SIGCIS. The list archives are at http://lists.sigcis.org/pipermail/members-sigcis.org/ and you can change your subscription options at http://lists.sigcis.org/listinfo.cgi/members-sigcis.org > Deborah G. Douglas, PhD - Director of Collections and Curator of Science and Technology, MIT Museum; Research Associate, Program in Science, Technology, and Society - Room N51-209 - 265 Massachusetts Avenue - Cambridge, MA 02139-4307 - ddouglas at mit.edu - 617-253-1766 telephone - 617-253-8994 facsimile - http://mitmuseum.mit.edu - she/her/hers > > > > > > _______________________________________________ > This email is relayed from members at sigcis.org, the email discussion list of SHOT SIGCIS. Opinions expressed here are those of the member posting and are not reviewed, edited, or endorsed by SIGCIS. 
The list archives are at http://lists.sigcis.org/pipermail/members-sigcis.org/ and you can change your subscription options at http://lists.sigcis.org/listinfo.cgi/members-sigcis.org Marc Weber | marc at webhistory.org | +1 415 282 6868 Internet History Program Curatorial Director, Computer History Museum 1401 N Shoreline Blvd., Mountain View CA 94043 computerhistory.org/nethistory Co-founder, Web History Center and Project, webhistory.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From thomas.haigh at gmail.com Wed Jul 15 22:53:11 2020 From: thomas.haigh at gmail.com (thomas.haigh at gmail.com) Date: Thu, 16 Jul 2020 00:53:11 -0500 Subject: [SIGCIS-Members] Update on SpaceWar!/Spacewar Message-ID: <0bbb01d65b35$642e0460$2c8a0d20$@gmail.com> Hello SIGCIS, I had some good off-list replies, so here's a wrap up for the benefit of the community. We've decided to use "Spacewar" in the text, with a note or parenthetical aside that Steve Russell prefers the name with an exclamation point. Eagle eyed readers may note that this is a slight variation from my original suggestion of noting that Spacewar! is the "official name." That's because a couple of responses have made me wonder what "official" would even mean. We already knew the game became famous in the 1970s and 1980s as "Spacewar". We also have a generally accepted idea that Steve Russell is on the record as liking the title more with the exclamation point. Peggy Kidwell of the National Museum of American History pointed me towards this catalog entry for a 1962 paper tape. https://americanhistory.si.edu/collections/search/object/nmah_1064201 That's just one year after the game was written. According to the catalog, "in compartment B - Friden Business Systems - Tape-Talk, X Good Space War. Also marked in punches: SPACEWAR 3.1 24 SEP 62 PT. 1" So it was written for humans as "Space War" and coded on the tape itself as "SPACEWAR". 
The tape was a gift from DEC itself, which makes sense as DEC is said to have been instrumental in spreading the game to show off and diagnose its display screens. Arthur Daemmrich at the Lemelson Center was also kind enough to look into this. He supplied me with a DEC brochure that used the game to sell the PDP-1. The cover gives the name as "SPACE/WAR" (on two lines) and the interior calls it both "SPACEWAR" and "Spacewar." I was able to find a copy of the brochure here: https://www.masswerk.at/spacewar/pdp-1-computer-and-spacewar.html. Arthur also raised the question with one of the original MIT programmers of the game, who did not recall the exclamation point being an integral part of the name. (I'm reluctant to share the full message without permission.) I found a detailed technical examination of the game by Norbert Landsteiner at https://www.masswerk.at/spacewar/inside/. The author calls it Spacewar! throughout, but there is a suggestion that the filenames holding the binary contents of the original paper tapes, such as "spacewar2B_2apr.bin", are transcriptions of the labels on the original tapes. These do not contain the point. (Full list at https://www.masswerk.at/spacewar/sources/) The site also mentions code to punch "SPACEWAR" into tape headers. Since paper tapes didn't have a filename, the filename of the digital image would presumably have been derived from a handwritten label, right at the start of the tape. Apparently, someone had written "Spacewar SA 5" on the tape. (Or, since there was a little program to punch a pattern reading "SPACEWAR" onto a tape, it might have been the title-punch and a handwritten "SA 5".) Finally, Arthur sent an early article on the game, from Creative Computing in 1981. This _does_ include the exclamation point in the game's title and is thus the earliest recorded usage of it we've so far stumbled on. Chronologically this is between Brand's 1972 article and Levy's 1984 book. 
See https://archive.org/details/creativecomputing-1981-08/page/n59/mode/2up. The author, J.M. Graetz, was another of the MIT group behind the game. There's an earlier report by Graetz in the 1962 DECUS proceedings: http://bitsavers.org/pdf/dec/decus/confProceedings/DECUS_1962.pdf in which the name does have the point but is also capitalized: "SPACEWAR!" So putting it all together: "Spacewar!" is the preferred name for at least some of the members of the original hacker collective that produced the game at MIT and (as "SPACEWAR!") has been on record as such since 1962. On the other hand, it is not clear that the point was ever part of the name as written on the MIT program tapes holding the game or used when the name was punched as a header onto the tape. We _do_ have clear evidence that the PDP-1 game as distributed and advertised by DEC was called "Spacewar" with no point. So that would seem to be the "official" name to describe its use and influence beyond MIT. As using the name sometimes with and sometimes without the point depending on context would be confusing, we'll standardize on the pointless DEC/Levy/Brand version of the name for the purposes of our book. Some of you may also feel that "pointless" is a good summary of this thread. Best wishes, Tom -------------- next part -------------- An HTML attachment was scrubbed... URL: From thomas.haigh at gmail.com Thu Jul 16 18:33:04 2020 From: thomas.haigh at gmail.com (thomas.haigh at gmail.com) Date: Thu, 16 Jul 2020 20:33:04 -0500 Subject: [SIGCIS-Members] Spacewar(!) contributor Robert A. Saunders says lose the point. Message-ID: <0c5b01d65bda$37e39c30$a7aad490$@gmail.com> Hello SIGCIS, Robert A. Saunders, one of the contributors to the original Spacewar(!) game, has given permission to share his response to yesterday's post. According to Levy (p. 43), "In the later stages of programming, Saunders helped Slug Russell out, and they hacked a few intense six-to-eight hour sessions." 
Levy also credits Saunders as co-constructor of the first computer joysticks (p. 45). Asked by Arthur Daemmrich whether "you have information / evidence that the exclamation mark was an integral part of the name of the program, Spacewar!?" Saunders replied: "I do not recall that anyone made a point of distinction of this at the time. However, the exclamation point usage seems to have become common, and I suppose that it could continue to do so. But, in view of the comment below, it is plausible (and perhaps sensible) to eliminate the exclamation point - which I think I would prefer." Meanwhile, the Talk page for the Wikipedia article references this page in support of what I think the hackers would probably have called the "pointful" position: https://www.crummy.com/2013/02/02/0. It reports on an event at the Museum of the Moving Image in 2013. "I asked Russell the question that's been burning in my mind for years: why does Spacewar! have an exclamation mark in its name? His answer: "Once I got it working, I thought it deserved an exclamation point!" I also asked Russell if he considered any other names for the game. "Nope."" So the Spacewar(!) creators themselves do not appear to be of one mind on this point, but if you believe that creators have a moral right to name things that is strong enough to set aside previously widespread usage (which appears to be the logic behind general adoption of Spacewar! over the past fifteen years), then you might reasonably conclude that, because Russell created more of the game, his wishes should predominate. Best wishes, Tom -------------- next part -------------- An HTML attachment was scrubbed... URL: From evan at snarc.net Thu Jul 16 18:37:05 2020 From: evan at snarc.net (Evan Koblentz) Date: Thu, 16 Jul 2020 21:37:05 -0400 Subject: [SIGCIS-Members] Spacewar(!) contributor Robert A. Saunders says lose the point. 
In-Reply-To: <0c5b01d65bda$37e39c30$a7aad490$@gmail.com> References: <0c5b01d65bda$37e39c30$a7aad490$@gmail.com> Message-ID: <9ee4f223-4833-4f11-bac3-e0b6c763208a@snarc.net> >> you might reasonably conclude because Russell created more of the game his wishes should predominate I was under the impression that Peter Samson also had a lot to do with its creation. You should ask him, too. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ddouglas at mit.edu Fri Jul 17 03:39:28 2020 From: ddouglas at mit.edu (Deborah Douglas) Date: Fri, 17 Jul 2020 10:39:28 +0000 Subject: [SIGCIS-Members] "In Event of Moon Disaster" Screening - Deep Fakes and AI References: Message-ID: <0D434A72-75E4-4286-B94D-BD896D676E73@mit.edu> Friends, Next week the MIT Museum will be presenting a program on "Deep Fakes" and AI. This video was originally released in Amsterdam last fall by the MIT Center for Advanced Virtuality. You may have seen clips or other articles, including the promise of a spring release. The pandemic delayed that release until now. Next week is an opportunity to both screen the video and participate in a Q&A. Debbie Douglas At In Event of Moon Disaster - Live Screening and Q&A, the live stream of the art installation presents a short, seven-minute film of a speech by Richard Nixon that he never actually made. Using deepfake techniques, "In Event of Moon Disaster" will empower and educate the public on how to discern reality from deepfakes on their own. The piece is designed by a collaborative of artists and computer scientists and directed by Francesca Panetta along with co-director Halsey Burgund. In "In Event of Moon Disaster," the team has reimagined the story of the moon landing. In a telecast, President Nixon reads a contingency speech written for him by his speechwriter, Bill Safire, "in event of moon disaster," which he was to read if the Apollo 11 astronauts had not been able to return to Earth. 
In this faked newsreel, Richard Nixon is seen reading the speech from the Oval Office. To recreate this moving elegy that never happened, the team used deep learning techniques such as synthetic speech and video dialogue replacement. The resulting video is highly believable, highlighting the possibilities of deepfake technology today. "Our goal was to use the most advanced artificial intelligence techniques available today to create the most believable result possible - and then point to it and say, 'This is fake; here's how we did it; and here's why we did it,'" says Halsey Burgund. Program: In Event of Moon Disaster - Live Screening and Q&A Date: July 20, 2020 Time: 12 ET (9 PT, 17 BST) Youtube link: https://youtu.be/ur8wl20nt9Y Directors: Francesca Panetta & Halsey Burgund Host: Loren Hammonds, Senior Programmer, Tribeca Film Festival Brought to you by: Mozilla, the MIT Center for Advanced Virtuality, & the MIT Museum Greg DeFrancis - Director of Engagement and Cambridge Science Festival MIT Museum - Massachusetts Institute of Technology gregd1 at mit.edu - 617.253.0527 (o) - 802.299.9075 (m) The MIT Museum is closed to visitors until further notice due to the COVID-19 outbreak. Deborah G. Douglas, PhD - Director of Collections and Curator of Science and Technology, MIT Museum; Research Associate, Program in Science, Technology, and Society - Room N51-209 - 265 Massachusetts Avenue - Cambridge, MA 02139-4307 - ddouglas at mit.edu - 617-253-1766 telephone - 617-253-8994 facsimile - http://mitmuseum.mit.edu - she/her/hers -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
URL: From reg at harbeck.ca Fri Jul 17 12:23:58 2020 From: reg at harbeck.ca (Reg Harbeck) Date: Fri, 17 Jul 2020 15:23:58 -0400 Subject: [SIGCIS-Members] Introduction and Humanity and the IBM System/360-descended mainframe Message-ID: <1595013838.3lslxd5ioow0kgs4@webmail.telushosting.com> Hello, SIGCIS. I am happy to have joined your listserv and to be in such excellent company. I've joined this listserv at the recommendation of Dr. Willard McCarty, founder of the Humanist listserv, which I've also joined, and for the same reason: I'm working on the second draft of my thesis for my Master of Arts (Interdisciplinary Humanities), on the humanity of the IBM System/360-descended mainframe. I've been working on that platform since 1987, the year both of these listservs were founded, as a technologist and, more recently, ecosystem enabler. You can see what I've been up to if you Google "Reg Harbeck" "mainframe" - lots of both technical and cultural content. My research, experience, and perhaps predisposition, lead me to believe that the best of our human and humanities history was brought to bear in the development and announcement of IBM's System/360 mainframe on April 7, 1964. Prior to that, everything from the lessons of deep history (e.g. "measure twice, cut once" and other established practical and philosophical principles), more recent history (e.g. Jacquard, Babbage, WW II, Turing, von Neumann, Fr. Roberto Busa, etc.), and input from experience and experienced users (e.g. the SHARE user group, founded in August of 1955 - still alive at SHARE.org) from the first two decades of electronic computing funnelled into the design and creation of this system. 
Since then, while the actual platform was used by people studying the humanities, including the humanity of computing, until more autonomous systems became generally available, its further advances were driven more by the practical needs of serving humanity - especially business - than by philosophical considerations. Today, the modern mainframe descended from S/360, aka IBM Z, runs the world economy, holding the large majority of credit card, financial, tax, and other government and business data of record. But most personal computing happens on other platforms - for now. Moore's Law has ended, though, and the world is refocusing from novelty to sustainability, just in time for this same mainframe platform to become an increasingly evident option for quality cloud services. All of which leads to my request from this list: I'm still trying to tie the threads together well enough to ensure my thesis statement is logically supportable by the data I've put together, and my current version of that statement, still somewhat in flux, is something like, "The IBM System/360 mainframe and its successors are a definitive manifestation of the best of historical humanity and humanities, and it has continued to develop in a definitive role as part of our shared humanity, now and into the unforeseeable future." So I would be most grateful if anyone has any publications or other sources they can recommend that speak specifically to these origins and this journey. While I have gathered a great deal of data so far, I'd rather have the same thing recommended to me multiple times than miss an important document that could be the missing link in my thinking. Thank you all so much for reading and considering this, and for your anticipated responses. - Reg Harbeck Reg at Harbeck.ca +1.403.605.7986 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ggrider at lanl.gov Fri Jul 17 12:33:36 2020 From: ggrider at lanl.gov (Grider, Gary Alan) Date: Fri, 17 Jul 2020 19:33:36 +0000 Subject: [SIGCIS-Members] [EXTERNAL] Introduction and Humanity and the IBM System/360-descended mainframe In-Reply-To: <1595013838.3lslxd5ioow0kgs4@webmail.telushosting.com> References: <1595013838.3lslxd5ioow0kgs4@webmail.telushosting.com> Message-ID: Interesting thesis topic. If you trace the 360 back a bit further, I think you will find that it descended to some degree from Stretch, or at least from what was learned from Stretch. I walk by the building built for Stretch at Los Alamos every day. While I am not sure that the Cold War represents the best in humanity, it is a slightly different angle on your quest. I am not sure whether the Stretch predecessor of the 360 adds to your humanities angle, but much has been written about the Stretch project, and the people involved were a bit of a who's who in the computing of that era. Gary Grider LANL -------------- next part -------------- An HTML attachment was scrubbed... URL: From reg at harbeck.ca Fri Jul 17 12:37:57 2020 From: reg at harbeck.ca (Reg Harbeck) Date: Fri, 17 Jul 2020 15:37:57 -0400 Subject: [SIGCIS-Members] [EXTERNAL] Introduction and Humanity and the IBM System/360-descended mainframe In-Reply-To: References: <1595013838.3lslxd5ioow0kgs4@webmail.telushosting.com> Message-ID: <1595014677.ksp5t0aumio8wsoo@webmail.telushosting.com> Thank you, Gary. This is an angle I could perhaps spend more time thinking about than I have so far. I appreciate the recommendation!
- Reg Harbeck Reg at Harbeck.ca +1.403.605.7986 -------------- next part -------------- An HTML attachment was scrubbed... URL: From jcortada at umn.edu Fri Jul 17 14:21:39 2020 From: jcortada at umn.edu (James Cortada) Date: Fri, 17 Jul 2020 16:21:39 -0500 Subject: [SIGCIS-Members] [EXTERNAL] Introduction and Humanity and the IBM System/360-descended mainframe In-Reply-To: <1595014677.ksp5t0aumio8wsoo@webmail.telushosting.com> References: <1595013838.3lslxd5ioow0kgs4@webmail.telushosting.com> <1595014677.ksp5t0aumio8wsoo@webmail.telushosting.com> Message-ID: ... and you should probably reach out to Emerson Pugh, who worked on S/360 and wrote four books about IBM and its technologies. Jim _______________________________________________ This email is relayed from members at sigcis.org, the email discussion list of SHOT SIGCIS. Opinions expressed here are those of the member posting and are not reviewed, edited, or endorsed by SIGCIS. The list archives are at http://lists.sigcis.org/pipermail/members-sigcis.org/ and you can change your subscription options at http://lists.sigcis.org/listinfo.cgi/members-sigcis.org -- James W. Cortada Senior Research Fellow Charles Babbage Institute University of Minnesota jcortada at umn.edu 608-274-6382 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From thequeensofcode at gmail.com Fri Jul 17 14:43:03 2020 From: thequeensofcode at gmail.com (Eileen Buckholtz) Date: Fri, 17 Jul 2020 17:43:03 -0400 Subject: [SIGCIS-Members] [EXTERNAL] Introduction and Humanity and the IBM System/360-descended mainframe In-Reply-To: <1595014677.ksp5t0aumio8wsoo@webmail.telushosting.com> References: <1595013838.3lslxd5ioow0kgs4@webmail.telushosting.com> <1595014677.ksp5t0aumio8wsoo@webmail.telushosting.com> Message-ID: Hi Gary and Reg, I have also been researching HARVEST/STRETCH for my Queens of Code project - stories from NSA's computing women, some of whom worked on the Harvest project. NSA was the other government site besides Los Alamos that had one of the first STRETCH computers, and there were a lot of lessons learned from that development. I would appreciate any connections you have to any of the women - or men - who worked on the HARVEST/STRETCH development team. Cheers, Eileen Eileen Buckholtz Queens of Code Project queenofcode.com https://www.facebook.com/queensofcode -------------- next part -------------- An HTML attachment was scrubbed... URL: From thomas.haigh at gmail.com Fri Jul 17 14:43:13 2020 From: thomas.haigh at gmail.com (thomas.haigh at gmail.com) Date: Fri, 17 Jul 2020 16:43:13 -0500 Subject: [SIGCIS-Members] [EXTERNAL] Introduction and Humanity and the IBM System/360-descended mainframe In-Reply-To: <1595014677.ksp5t0aumio8wsoo@webmail.telushosting.com> References: <1595013838.3lslxd5ioow0kgs4@webmail.telushosting.com> <1595014677.ksp5t0aumio8wsoo@webmail.telushosting.com> Message-ID: <00cb01d65c83$463ef2d0$d2bcd870$@gmail.com> Hello Reg, I think your topic is important, but I must admit to doubts regarding the way you have framed the thesis. It is not so much that I disagree with the idea that the System/360 was a wonderful achievement, but you draw the claim so broadly that it can't legitimately be evaluated: "a definitive manifestation of the best of historical humanity and humanities, and it has continued to develop in a definitive role as part of our shared humanity, now and into the unforeseeable future." To mention one issue, to sustain your argument you would have to demonstrate foreknowledge of the "unforeseeable future," which is by definition impossible. You would also run into problems from readers who challenge your assertion that an architecture devised by white men to serve the military-industrial complex, corporate administration, etc. has a "definitive role in our shared humanity." 
I don?t think your thesis needs to assert the inherent primary worth of certain activities over others to make a contribution. So perhaps something more tightly drawn might work better? For example, if you wanted to focus on the technical and infrastructural legacy of the /360 you might argue, ?The System/360 has a complex and underappreciated technological legacy, and the decisions made by its designers shaped taken for granted aspects of our digital world such as A, B and C. Without it, our lives would be very different.? Or if you wanted to focus on the digital humanities side, you might write, ?Although the importance of the System/360 to business data processing, scientific computation, and the evolution of systems software are well established, in this thesis I will argue that its contributions to the development of scholarship in the humanities are just as important. I focus particularly on A, B, and C. Without such accomplishments, the digital humanities movement would never have existed.? Those are still very ambitious claims, but they are better aligned with what your research might plausibly be able to show and would not raise hackles. Best wishes, Tom From: Members On Behalf Of Reg Harbeck Sent: Friday, July 17, 2020 2:38 PM To: ggrider at lanl.gov; members at lists.sigcis.org Subject: Re: [SIGCIS-Members] [EXTERNAL] Introduction and Humanity and the IBM System/360-descended mainframe Thank you, Gary. This is an angle I could perhaps spend more time thinking about than I have so far. I appreciate the recommendation! - Reg Harbeck On Fri, 17 Jul 2020 19:33:36 +0000, "Grider, Gary Alan" < ggrider at lanl.gov> wrote: Interesting thesis topic. If you trace 360 back a bit further I think you will find that the 360 descended in some degree from Stretch or at least what was learned from Stretch. 
Since I walk by the building built for Stretch at Los Alamos every day, While I am not sure that the Cold War represents the best in humanity, it is a slightly different angle to your quest. Not sure if the Stretch 360 predecessor adds to your humanities angle but much has been written about the Stretch project and the people involved were a bit of a who?s who in Computing of that era. Gary Grider LANL From: Members < members-bounces at lists.sigcis.org> on behalf of Reg Harbeck < reg at harbeck.ca> Date: Friday, July 17, 2020 at 1:24 PM To: " members at lists.sigcis.org" < members at lists.sigcis.org> Subject: [EXTERNAL] [SIGCIS-Members] Introduction and Humanity and the IBM System/360-descended mainframe Hello, SIGCIS. I am happy to have joined your listserv and be in such excellent company. I've joined this listserv at the recommendation of Dr. Willard McCarty, founder of the Humanist listserv, which I've also joined, and for the same reason: I'm working on my second draft of my thesis for my Master of Arts (Interdisciplinary Humanities) with a subject of the humanity of the IBM System/360-descended mainframe. I've been working on that platform since 1987, the year both of these listservs were founded, as a technologist and, more recently, ecosystem enabler. You can see what I've been up to if you Google "Reg Harbeck" "mainframe" - lots of both technical and cultural content. My research, experience, and perhaps predisposition, lead me to believe that the best of our human and humanities history were brought to bear in the development and announcement of IBM's System/360 mainframe on April 7, 1964. Prior to that, everything from the lessons of deep history (e.g. "measure twice, cut once" and other established practical and philosophical principles), more recent history (e.g. Jacquard, Babbage, WW II, Turing, Von Neumann, Fr. Roberto Busa, etc.), and input from experience and experienced users (e.g. 
the SHARE user group, founded in August of 1955 - still alive at SHARE.org) from the first two decades of electronic computing, funnelled into the design and creation of this system. Since then, while the actual platform was used by people studying the humanities, including the humanity of computing, until more autonomous systems became generally available, its further advances were more driven by the practical needs of serving humanity - especially business - than by philosophical considerations. Today, the modern mainframe descended from S/360, aka IBM Z, runs the world economy, with the large majority of credit card, financial, tax, and other government and business data of record. But most personal computing happens on other platforms - for now. But Moore's Law has ended, and the world is refocusing from novelty to sustainability, just on time for this same mainframe platform to become an increasingly evident option for quality cloud services. All of which leads to my request from this list: I'm still trying to tie the threads together well enough to ensure my thesis statement is logically supportable by the data I've put together, and my current version of that statement, still somewhat in flux, is something like, "The IBM System/360 mainframe and its successors are a definitive manifestation of the best of historical humanity and humanities, and it has continued to develop in a definitive role as part of our shared humanity, now and into the unforeseeable future." So I would be most grateful if anyone has any publications or other sources they can recommend that speak specifically to these origins and this journey. While I have gathered a great deal of data so far, I'd rather have the same thing recommended to me multiple times than miss an important document that could be the missing link in my thinking. Thank you all so much for reading and considering this, and for your anticipated responses. 
- Reg Harbeck Reg at Harbeck.ca +1.403.605.7986 -------------- next part -------------- An HTML attachment was scrubbed... URL: From reg at harbeck.ca Fri Jul 17 16:32:17 2020 From: reg at harbeck.ca (Reg Harbeck) Date: Fri, 17 Jul 2020 19:32:17 -0400 Subject: [SIGCIS-Members] [EXTERNAL] Introduction and Humanity and the IBM System/360-descended mainframe In-Reply-To: References: <1595013838.3lslxd5ioow0kgs4@webmail.telushosting.com> <1595014677.ksp5t0aumio8wsoo@webmail.telushosting.com> Message-ID: <1595028737.i99ogjt5wcg88w4c@webmail.telushosting.com> James, that is an excellent idea - and I'm embarrassed to admit that it hadn't previously occurred to me, even though I have several of Emerson Pugh's books on my desk in front of me, which I have used in-depth in my thesis. I just Googled around but couldn't readily locate his contact information - does anyone have it, or a good site for getting it, handy? ? - Reg Harbeck On Fri, 17 Jul 2020 16:21:39 -0500, James Cortada wrote: ? ... and you should probably reach out to Emerson Pugh, who worked on S/360 and wrote four books about IBM and its technologies.? Jim ? On Fri, Jul 17, 2020 at 2:38 PM Reg Harbeck wrote: Thank you, Gary. This is an angle I could perhaps spend more time thinking about than I have so far. I appreciate the recommendation! ? - Reg Harbeck On Fri, 17 Jul 2020 19:33:36 +0000, "Grider, Gary Alan" wrote: ? Interesting thesis topic.? If you trace 360 back a bit further I think you will find that the 360 descended in some degree from Stretch or at least what was learned from Stretch. Since I walk by the building built for Stretch at Los Alamos every day, While I am not sure that the Cold War represents the best in humanity, it is a slightly different angle to your quest. Not sure if the Stretch 360 predecessor adds to your humanities angle but much has been written about the Stretch project and the people involved were a bit of a who?s who in Computing of that era. ? Gary Grider LANL ? 
From: Members on behalf of Reg Harbeck Date: Friday, July 17, 2020 at 1:24 PM To: "members at lists.sigcis.org" Subject: [EXTERNAL] [SIGCIS-Members] Introduction and Humanity and the IBM System/360-descended mainframe Hello, SIGCIS. I am happy to have joined your listserv and to be in such excellent company. I've joined this listserv at the recommendation of Dr. Willard McCarty, founder of the Humanist listserv, which I've also joined, and for the same reason: I'm working on the second draft of my thesis for my Master of Arts (Interdisciplinary Humanities) on the subject of the humanity of the IBM System/360-descended mainframe. I've been working on that platform since 1987, the year both of these listservs were founded, as a technologist and, more recently, ecosystem enabler. You can see what I've been up to if you Google "Reg Harbeck" "mainframe" - lots of both technical and cultural content. My research, experience, and perhaps predisposition lead me to believe that the best of our human and humanities history was brought to bear in the development and announcement of IBM's System/360 mainframe on April 7, 1964. Prior to that, everything from the lessons of deep history (e.g. "measure twice, cut once" and other established practical and philosophical principles), more recent history (e.g. Jacquard, Babbage, WW II, Turing, von Neumann, Fr. Roberto Busa, etc.), and input from experience and experienced users (e.g. the SHARE user group, founded in August 1955 and still alive at SHARE.org) from the first two decades of electronic computing funnelled into the design and creation of this system. Since then, although the platform was used by people studying the humanities, including the humanity of computing, until more autonomous systems became generally available, its further advances were driven more by the practical needs of serving humanity - especially business - than by philosophical considerations. 
Today, the modern mainframe descended from S/360, aka IBM Z, runs the world economy, holding the large majority of credit card, financial, tax, and other government and business data of record. But most personal computing happens on other platforms - for now. Moore's Law has ended, however, and the world is refocusing from novelty to sustainability, just in time for this same mainframe platform to become an increasingly evident option for quality cloud services. All of which leads to my request of this list: I'm still trying to tie the threads together well enough to ensure my thesis statement is logically supportable by the data I've put together. My current version of that statement, still somewhat in flux, is something like, "The IBM System/360 mainframe and its successors are a definitive manifestation of the best of historical humanity and humanities, and it has continued to develop in a definitive role as part of our shared humanity, now and into the unforeseeable future." So I would be most grateful if anyone can recommend publications or other sources that speak specifically to these origins and this journey. While I have gathered a great deal of data so far, I'd rather have the same thing recommended to me multiple times than miss an important document that could be the missing link in my thinking. Thank you all so much for reading and considering this, and for your anticipated responses. - Reg Harbeck Reg at Harbeck.ca +1.403.605.7986 _______________________________________________ This email is relayed from members at sigcis.org, the email discussion list of SHOT SIGCIS. Opinions expressed here are those of the member posting and are not reviewed, edited, or endorsed by SIGCIS. The list archives are at http://lists.sigcis.org/pipermail/members-sigcis.org/ and you can change your subscription options at http://lists.sigcis.org/listinfo.cgi/members-sigcis.org -- James W. 
Cortada Senior Research Fellow Charles Babbage Institute University of Minnesota jcortada at umn.edu 608-274-6382 -------------- next part -------------- An HTML attachment was scrubbed... URL: From reg at harbeck.ca Fri Jul 17 16:44:16 2020 From: reg at harbeck.ca (Reg Harbeck) Date: Fri, 17 Jul 2020 19:44:16 -0400 Subject: [SIGCIS-Members] [EXTERNAL] Introduction and Humanity and the IBM System/360-descended mainframe In-Reply-To: References: <1595013838.3lslxd5ioow0kgs4@webmail.telushosting.com> <1595014677.ksp5t0aumio8wsoo@webmail.telushosting.com> Message-ID: <1595029456.0ewq29bvkgcg4gsg@webmail.telushosting.com> Hi Eileen, I interviewed Michael Myers, who was one of the programmers of OS/360 and is still very active. His LinkedIn profile is at https://www.linkedin.com/in/michael-myers-320618122/. You can catch my interview with him at https://ibmsystemsmag.com/IBM-Z/02/2020/mainframe-mike-myers. There's a good likelihood that he has contacts such as you're seeking. Also, the SHARE conference, which is meeting virtually in August (see share.org), has a strong and growing Women in IT focus, and some of the participants in that may have further insights you can use. One outstanding person involved with that is IBM Distinguished Engineer Rosalind Radcliffe (https://www.linkedin.com/in/rosalind-radcliffe/), who may be able to point you to additional resources. I'll keep your request in mind in case anything else occurs to me. - Reg Harbeck On Fri, 17 Jul 2020 17:43:03 -0400, Eileen Buckholtz wrote: Hi Gary and Reg, I have also been researching HARVEST/STRETCH for my Queens of Code project - stories from NSA's computing women, some of whom worked on the Harvest project. NSA was the other government site, besides Los Alamos, that had one of the first STRETCH computers, and there were a lot of lessons learned from that development. 
I would appreciate any connections you have to any of the women - or men - who worked on the HARVEST/STRETCH development team. Cheers, Eileen Eileen Buckholtz Queens of Code Project queenofcode.com https://www.facebook.com/queensofcode On Fri, Jul 17, 2020 at 3:38 PM Reg Harbeck wrote: Thank you, Gary. This is an angle I could perhaps spend more time thinking about than I have so far. I appreciate the recommendation! - Reg Harbeck On Fri, 17 Jul 2020 19:33:36 +0000, "Grider, Gary Alan" wrote: Interesting thesis topic. If you trace the 360 back a bit further, I think you will find that it descended to some degree from Stretch, or at least from what was learned from Stretch. I walk by the building built for Stretch at Los Alamos every day; while I am not sure that the Cold War represents the best in humanity, it is a slightly different angle for your quest. I am not sure whether the Stretch predecessor adds to your humanities angle, but much has been written about the Stretch project, and the people involved were a bit of a who's who in the computing of that era. Gary Grider LANL -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From reg at harbeck.ca Fri Jul 17 16:51:46 2020 From: reg at harbeck.ca (Reg Harbeck) Date: Fri, 17 Jul 2020 19:51:46 -0400 Subject: [SIGCIS-Members] [EXTERNAL] Introduction and Humanity and the IBM System/360-descended mainframe In-Reply-To: <00cb01d65c83$463ef2d0$d2bcd870$@gmail.com> References: <1595013838.3lslxd5ioow0kgs4@webmail.telushosting.com> <1595014677.ksp5t0aumio8wsoo@webmail.telushosting.com> <00cb01d65c83$463ef2d0$d2bcd870$@gmail.com> Message-ID: <1595029906.zgmt7rtqosgk00sg@webmail.telushosting.com> Hi Tom, Thank you for the carefully considered and thorough response. I agree that many people do not think as highly of the mainframe as I do - that's one of my motivators in putting together my thesis: some combination of demonstrating my thoughts and ensuring they are consistent with reality. That said, as I adjust my thesis to be more reasonably demonstrable, my particular interest is in drawing a line all the way back to Socrates and Plato and their thinking about technē and poiēsis, and then forward through history, technology, art, philosophy, and business, leading up to the design of the System/360, and then tracing its interaction with humanity, business, and other technologies until we reach this point, 56 years after its announcement, and see what definitive aspects of it have become embedded in our journey forward. Possibly still too ambitious - that's part of why I'm rewriting my thesis rather than just editing my first draft. In any case, I appreciate the feedback, and will keep it in mind as I fine-tune. - Reg Harbeck On Fri, 17 Jul 2020 16:43:13 -0500, wrote: Hello Reg, I think your topic is important, but must admit to doubts regarding the way you framed the thesis. 
It is not so much that I disagree with the idea that the System/360 was a wonderful achievement, but you draw the claim so broadly that it can't legitimately be evaluated: "a definitive manifestation of the best of historical humanity and humanities, and it has continued to develop in a definitive role as part of our shared humanity, now and into the unforeseeable future." To mention one issue: to sustain your argument you would have to demonstrate foreknowledge of the "unforeseeable future," which is by definition impossible. You would also run into problems with readers who challenge your assertion that an architecture devised by white men to serve the military-industrial complex, corporate administration, etc. has a "definitive role in our shared humanity." I don't think your thesis needs to assert the inherent primary worth of certain activities over others to make a contribution. So perhaps something more tightly drawn might work better? For example, if you wanted to focus on the technical and infrastructural legacy of the /360 you might argue, "The System/360 has a complex and underappreciated technological legacy, and the decisions made by its designers shaped taken-for-granted aspects of our digital world such as A, B and C. Without it, our lives would be very different." Or if you wanted to focus on the digital humanities side, you might write, "Although the importance of the System/360 to business data processing, scientific computation, and the evolution of systems software is well established, in this thesis I will argue that its contributions to the development of scholarship in the humanities are just as important. I focus particularly on A, B, and C. Without such accomplishments, the digital humanities movement would never have existed." Those are still very ambitious claims, but they are better aligned with what your research might plausibly be able to show and would not raise hackles. Best wishes, Tom 
From: Members On Behalf Of Reg Harbeck Sent: Friday, July 17, 2020 2:38 PM To: ggrider at lanl.gov; members at lists.sigcis.org Subject: Re: [SIGCIS-Members] [EXTERNAL] Introduction and Humanity and the IBM System/360-descended mainframe Thank you, Gary. This is an angle I could perhaps spend more time thinking about than I have so far. I appreciate the recommendation! - Reg Harbeck On Fri, 17 Jul 2020 19:33:36 +0000, "Grider, Gary Alan" wrote: Interesting thesis topic. If you trace the 360 back a bit further, I think you will find that it descended to some degree from Stretch, or at least from what was learned from Stretch. I walk by the building built for Stretch at Los Alamos every day; while I am not sure that the Cold War represents the best in humanity, it is a slightly different angle for your quest. I am not sure whether the Stretch predecessor adds to your humanities angle, but much has been written about the Stretch project, and the people involved were a bit of a who's who in the computing of that era. Gary Grider LANL -------------- next part -------------- An HTML attachment was scrubbed... URL: From jcortada at umn.edu Mon Jul 20 12:41:41 2020 From: jcortada at umn.edu (James Cortada) Date: Mon, 20 Jul 2020 14:41:41 -0500 Subject: [SIGCIS-Members] Help on Coffee and Computing Message-ID: The IT community of users, programmers, vendors, and others has for decades had a reputation for being extensive consumers of coffee, especially in the parts of the IT ecosystem where people work odd hours, such as programmers, computer operators, and vendor field engineers. I am studying the corporate ephemera of this industry and its cultural attachments, such as coffee cups and what they tell us about computing. Do any of you have any information, ephemera, or sources and citations on this specific issue of coffee and computing? I can get many industry folks, such as IBM retirees, to wax eloquent on the subject in their private FB accounts, but that is not enough. 
Corporate culture is tough to study. Thanks in advance for your help. Jim -- James W. Cortada Senior Research Fellow Charles Babbage Institute University of Minnesota jcortada at umn.edu 608-274-6382 -------------- next part -------------- An HTML attachment was scrubbed... URL: From StaitiA at si.edu Mon Jul 20 12:53:04 2020 From: StaitiA at si.edu (Staiti, Alana) Date: Mon, 20 Jul 2020 19:53:04 +0000 Subject: [SIGCIS-Members] Help on Coffee and Computing In-Reply-To: References: Message-ID: Hi Jim, The National Museum of American History has some mugs in the computing collection featuring company names. Some include fun little sayings. See the links below for a few examples. I'm not sure I can elaborate on coffee culture, though! We are still working remotely, but if you have specific questions about any of these or other objects I'd be happy to do whatever digging I can from afar for the time being. https://americanhistory.si.edu/collections/search/object/nmah_1281495 https://americanhistory.si.edu/collections/search/object/nmah_1281135 https://americanhistory.si.edu/collections/search/object/nmah_1281136 https://americanhistory.si.edu/collections/search/object/nmah_1281137 Be well, Alana Alana Staiti (she/her/hers) Curator of the History of Computers and Information Sciences National Museum of American History Smithsonian Institution staitia at si.edu ________________________________ From: Members on behalf of James Cortada Sent: Monday, July 20, 2020 3:41 PM To: members at sigcis.org Subject: [SIGCIS-Members] Help on Coffee and Computing External Email - Exercise Caution -------------- next part -------------- An HTML attachment was scrubbed... URL: From kdriscoll at alum.mit.edu Mon Jul 20 14:52:40 2020 From: kdriscoll at alum.mit.edu (Kevin Driscoll) Date: Mon, 20 Jul 2020 17:52:40 -0400 Subject: [SIGCIS-Members] Help on Coffee and Computing In-Reply-To: References: Message-ID: Hello Jim and SIGCIS, Two references come to mind: 1. The "Trojan Room coffee pot" at the U of Cambridge is often cited as the first live camera on the web: - Quentin Stafford-Fraser, "On Site: The Life and Times of the First Web Cam," Communications of the ACM 44, no. 7 (July 1, 2001): 25-26. https://doi.org/10.1145/379300.379327. - Full text of the above without paywall: https://www.cl.cam.ac.uk/coffee/qsf/cacm200107.html - Captured by the Wayback Machine on 10 December 1997: http://web.archive.org/web/19971210230542/http://www.cl.cam.ac.uk/coffee/coffee.html 2. Roy Levin of Microsoft Research published a paper about running an industry lab in which he recommends that managers "INSTALL A WORLD-CLASS COFFEE MACHINE" and notes that "the first capital purchase" at MSR-Silicon Valley was an espresso machine. - Roy Levin, "A Perspective on Computing Research Management," ACM SIGOPS Operating Systems Review 41, no. 2 (April 1, 2007): 3-9, https://doi.org/10.1145/1243418.1243420. 
I've heard other lore about coffee culture at Microsoft involving the proximity of Starbucks in the 1990s. Allegedly, management lobbied for coffee carts in every building to keep programmers from driving to off-campus coffeehouses. No cite for that one, but it would be fun to track down the origin of the story. Looking forward to a caffeinated special issue of the Annals on the transnational history of stimulants and computing. Best of luck, Kevin Driscoll U of Virginia -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From halvormj at plu.edu Mon Jul 20 15:14:56 2020 From: halvormj at plu.edu (Michael Halvorson) Date: Mon, 20 Jul 2020 15:14:56 -0700 Subject: [SIGCIS-Members] Help on Coffee and Computing In-Reply-To: References: Message-ID: Kevin and James, At Microsoft/Redmond in the late 80s and early 90s, there was a lot of lore around the distribution of "free" sodas in refrigerators in most of the break rooms. This was before bottled water became a thing, for the most part. On tours for new employees and guests, there was a lot of admiration for the relatively narrow selection of Coke, Diet Coke, Milk, and Chocolate Milk, which people could freely consume if they wished. Coffee was less popular, but people did venture off "campus" for burgers, ribs, etc. The most popular stimulant beverage by far at Microsoft was Mountain Dew, among developers and the documentation teams. In other circles, Jolt Cola was popular, and it is mentioned in publications like *The Cyberpunk Handbook* (Random House, 1995), edited by R. U. Sirius [Ken Goffman], St. Jude [Jude Milhon], and Bart Nagel. See p. 66. --Michael -- Michael J. Halvorson Benson Family Chair in Business and Economic History Author of: *Code Nation: Personal Computing and the Learn to Program Movement in America (2020)* -------------- next part -------------- An HTML attachment was scrubbed... URL: From jean.graham at stonybrook.edu Mon Jul 20 15:25:37 2020 From: jean.graham at stonybrook.edu (Jean Graham) Date: Mon, 20 Jul 2020 18:25:37 -0400 Subject: [SIGCIS-Members] Help on Coffee and Computing In-Reply-To: References: Message-ID: I am reminded of the mathematician Paul Erdős's comment, "A mathematician is a device for turning coffee into theorems." Of course, he also used amphetamines as an aid to productivity. -------------- next part -------------- An HTML attachment was scrubbed... URL: From james.sumner at manchester.ac.uk Tue Jul 21 02:57:25 2020 From: james.sumner at manchester.ac.uk (James Sumner) Date: Tue, 21 Jul 2020 10:57:25 +0100 Subject: [SIGCIS-Members] Help on Coffee and Computing In-Reply-To: References: Message-ID: <937b1abd-db15-7db6-5edc-fd7263614ac1@manchester.ac.uk> What a wonderful question! The kind of short insider-humour pieces that circulated so readily as email forwards and on Usenet, bulletin boards and early Web forums would no doubt be worth surveying for mentions of coffee dependency. 
(From their nature, of course, it's often hard to firmly identify original authorship, but much easier to document the spread and mutation of these pieces over time.) So, the "BOFH Excuse List" preserved in various places including – which, as far as I can work out, began as an outgrowth of Simon Travaglia's "Bastard Operator From Hell" sysadmin psychosis saga, with fans adding their own suggestions – includes the excuses "operators on strike due to broken coffee machine" and "firmware update in the coffee machine". In "A helpdesk log" as preserved at (often assumed to be another BOFH production, but different in style) the dastardly admin reassigns a crucial server's UPS to the coffee-maker, leaves the phone off the hook while creating an "@CoffeeMake macro", and ends the day by plugging the coffee-maker into an Ethernet hub "to see what happens. Not (too) much." Cheers James On 20/07/2020 20:41, James Cortada wrote: > > The IT community of users, programmers, vendors, etc have for decades > had a reputation for being extensive consumers of coffee. In some > parts of the IT ecosystem, especially among those who work odd hours, > such as programmers, computer operators, and vendor field engineers. > I am studying the corporate ephemera of this industry and its cultural > attachments, such as coffee cups and what they tell us about > computing. Do any of you have any information, ephemera, or sources > and citations on this specific issue of coffee and computing? I can > get many industry folks, such as IBM retirees, to wax eloquently on > the subject in their private FB accounts, but that is not enough. > Corporate culture is tough to study. Thanks in advance for your > help. Jim > -- > James W.
Cortada > Senior Research Fellow > Charles Babbage Institute > University of Minnesota > jcortada at umn.edu > 608-274-6382 > > _______________________________________________ > This email is relayed from members at sigcis.org, the email discussion list of SHOT SIGCIS. Opinions expressed here are those of the member posting and are not reviewed, edited, or endorsed by SIGCIS. The list archives are at http://lists.sigcis.org/pipermail/members-sigcis.org/ and you can change your subscription options at http://lists.sigcis.org/listinfo.cgi/members-sigcis.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From kidwellp at si.edu Tue Jul 21 04:08:38 2020 From: kidwellp at si.edu (Kidwell, Peggy) Date: Tue, 21 Jul 2020 11:08:38 +0000 Subject: [SIGCIS-Members] Help on Coffee and Computing In-Reply-To: References: , Message-ID: I would add to Alana's fine list: https://americanhistory.si.edu/collections/search/object/nmah_1867086 (a photograph) https://americanhistory.si.edu/collections/search/object/NMAH.AC.0324_ref460 (a cartoon - though not much coffee shown) Best - Peggy Kidwell ________________________________ From: Members on behalf of Staiti, Alana Sent: Monday, July 20, 2020 3:53 PM To: James Cortada ; members at sigcis.org Subject: Re: [SIGCIS-Members] Help on Coffee and Computing External Email - Exercise Caution Hi Jim, The National Museum of American History has some mugs in the computing collection featuring company names. Some include fun little sayings. See links below for a few examples. I'm not sure I can elaborate on coffee culture though! We are still working remotely but if you have specific questions about any of these or other objects I'd be happy to do whatever digging I can do from afar, for the time being. 
https://americanhistory.si.edu/collections/search/object/nmah_1281495 https://americanhistory.si.edu/collections/search/object/nmah_1281135 https://americanhistory.si.edu/collections/search/object/nmah_1281136 https://americanhistory.si.edu/collections/search/object/nmah_1281137 Be well, Alana Alana Staiti (she/her/hers) Curator of the History of Computers and Information Sciences National Museum of American History Smithsonian Institution staitia at si.edu ________________________________ From: Members on behalf of James Cortada Sent: Monday, July 20, 2020 3:41 PM To: members at sigcis.org Subject: [SIGCIS-Members] Help on Coffee and Computing External Email - Exercise Caution The IT community of users, programmers, vendors, etc have for decades had a reputation for being extensive consumers of coffee. In some parts of the IT ecosystem, especially among those who work odd hours, such as programmers, computer operators, and vendor field engineers. I am studying the corporate ephemera of this industry and its cultural attachments, such as coffee cups and what they tell us about computing. Do any of you have any information, ephemera, or sources and citations on this specific issue of coffee and computing? I can get many industry folks, such as IBM retirees, to wax eloquently on the subject in their private FB accounts, but that is not enough. Corporate culture is tough to study. Thanks in advance for your help. Jim -- James W. Cortada Senior Research Fellow Charles Babbage Institute University of Minnesota jcortada at umn.edu 608-274-6382 -------------- next part -------------- An HTML attachment was scrubbed... URL: From Ramesh.Subramanian at quinnipiac.edu Tue Jul 21 06:06:50 2020 From: Ramesh.Subramanian at quinnipiac.edu (Subramanian, Ramesh Prof.) 
Date: Tue, 21 Jul 2020 13:06:50 +0000 Subject: [SIGCIS-Members] Help on Coffee and Computing In-Reply-To: References: , , Message-ID: Jim, A possible consideration of the emerging Chai culture in Silicon Valley: Apparently, the large number of Indians working in Silicon Valley like to relive their home-style chai: https://www.deccanchronicle.com/lifestyle/travel/300516/indians-stir-chai-into-silicon-valley-coffee-culture.html One either allows racial inequities to persevere, as a racist, or confronts racial inequities, as an anti-racist. There is no in-between safe space of 'not racist.' – Ibram X. Kendi, How to Be an Antiracist --------------------------------------------------------------------- Ramesh Subramanian, Ph.D. Gabriel Ferrucci Professor of Computer Information Systems Quinnipiac University Hamden, CT 06518. Email: ramesh.subramanian at quinnipiac.edu Web: https://www.qu.edu/student-resources/directory/staff.23345.html & Fellow, Yale Law School - Information Society Project New Haven, CT 06511 Email: ramesh.subramanian at yale.edu Web: https://www.law.yale.edu/ramesh-subramanian ________________________________ From: Members on behalf of Kidwell, Peggy Sent: Tuesday, July 21, 2020 7:08 AM To: Staiti, Alana ; James Cortada ; members at sigcis.org Subject: Re: [SIGCIS-Members] Help on Coffee and Computing I would add to Alana's fine list: https://americanhistory.si.edu/collections/search/object/nmah_1867086 (a photograph) https://americanhistory.si.edu/collections/search/object/NMAH.AC.0324_ref460 (a cartoon - though not much coffee shown) Best - Peggy Kidwell ________________________________ From: Members on behalf of Staiti, Alana Sent: Monday, July 20, 2020 3:53 PM To: James Cortada ; members at sigcis.org Subject: Re: [SIGCIS-Members] Help on Coffee and Computing External Email - Exercise Caution Hi Jim, The National Museum of American History has some mugs in the computing collection featuring company names. Some include fun little sayings.
See links below for a few examples. I'm not sure I can elaborate on coffee culture though! We are still working remotely but if you have specific questions about any of these or other objects I'd be happy to do whatever digging I can do from afar, for the time being. https://americanhistory.si.edu/collections/search/object/nmah_1281495 https://americanhistory.si.edu/collections/search/object/nmah_1281135 https://americanhistory.si.edu/collections/search/object/nmah_1281136 https://americanhistory.si.edu/collections/search/object/nmah_1281137 Be well, Alana Alana Staiti (she/her/hers) Curator of the History of Computers and Information Sciences National Museum of American History Smithsonian Institution staitia at si.edu ________________________________ From: Members on behalf of James Cortada Sent: Monday, July 20, 2020 3:41 PM To: members at sigcis.org Subject: [SIGCIS-Members] Help on Coffee and Computing External Email - Exercise Caution The IT community of users, programmers, vendors, etc have for decades had a reputation for being extensive consumers of coffee. In some parts of the IT ecosystem, especially among those who work odd hours, such as programmers, computer operators, and vendor field engineers. I am studying the corporate ephemera of this industry and its cultural attachments, such as coffee cups and what they tell us about computing. Do any of you have any information, ephemera, or sources and citations on this specific issue of coffee and computing? I can get many industry folks, such as IBM retirees, to wax eloquently on the subject in their private FB accounts, but that is not enough. Corporate culture is tough to study. Thanks in advance for your help. Jim -- James W. Cortada Senior Research Fellow Charles Babbage Institute University of Minnesota jcortada at umn.edu 608-274-6382 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From HintzE at si.edu Tue Jul 21 06:31:38 2020 From: HintzE at si.edu (Hintz, Eric) Date: Tue, 21 Jul 2020 13:31:38 +0000 Subject: [SIGCIS-Members] Help on Coffee and Computing In-Reply-To: References: Message-ID: Hi SIGCIS- I've loved seeing all the great responses to James's query re: coffee culture. I immediately thought of Hewlett Packard's twice-daily coffee breaks as described in David Packard's The HP Way (1995). Re: ephemera, it looks like there are also pamphlets like "The HP way?" (1980) preserved at hpalumni.org that describe the coffee breaks. I would love to hear more from Chuck House or other HP alums about this. Back in career 1.0 as a Bay Area IT consultant in the late 1990s, early 2000s, I can attest that HP and Agilent sites in and around Sunnyvale, Cupertino had outstanding break areas, with coffee machines and hot water, all the coffee, tea bags, sugar, creamer, and little straw stirrers you could want! I suspect there is a military angle here too. I am thinking of military computer operators attending to missile early warning systems on three shifts, 24/7, and needing coffee to stay alert. Best- Eric __________________ Eric S. Hintz, PhD, Historian, Lemelson Center Office +1 202-633-3734 | Mobile +1 610-717-7134 | Fax +1 202-633-4593 Email hintze at si.edu | americanhistory.si.edu | invention.si.edu Co-editor, Does America Need More Innovators? (MIT Press, 2019) From: Members On Behalf Of James Cortada Sent: Monday, July 20, 2020 3:42 PM To: members at sigcis.org Subject: [SIGCIS-Members] Help on Coffee and Computing External Email - Exercise Caution The IT community of users, programmers, vendors, etc have for decades had a reputation for being extensive consumers of coffee. In some parts of the IT ecosystem, especially among those who work odd hours, such as programmers, computer operators, and vendor field engineers.
I am studying the corporate ephemera of this industry and its cultural attachments, such as coffee cups and what they tell us about computing. Do any of you have any information, ephemera, or sources and citations on this specific issue of coffee and computing? I can get many industry folks, such as IBM retirees, to wax eloquently on the subject in their private FB accounts, but that is not enough. Corporate culture is tough to study. Thanks in advance for your help. Jim -- James W. Cortada Senior Research Fellow Charles Babbage Institute University of Minnesota jcortada at umn.edu 608-274-6382 -------------- next part -------------- An HTML attachment was scrubbed... URL: From mabmab at gmail.com Tue Jul 21 07:01:54 2020 From: mabmab at gmail.com (Magnus Boman) Date: Tue, 21 Jul 2020 16:01:54 +0200 Subject: [SIGCIS-Members] Help on Coffee and Computing In-Reply-To: References: Message-ID: Jim, Coffee was always, at least since it turned into a conference proper, essential at TED. I dare mention this conference since "the people that built the Internet" used to be there. At the last Monterey TED, or possibly the first Long Beach one, The Barista Guild got a couple of stands with excellent proto-hipster espresso. Guild members were not only ace coffee makers but also followed all the talks on monitors, so you could discuss coffee AND less serious stuff with them. With an 18-hour daily programme, their coffee sure helped. Making espresso is also a near-perfect percolation process, so it would appeal to all of us that simulate forest fires, epidemics, etc., also at the surface/syntactic level. It all just makes sense, really. M. On Tue, 21 Jul 2020 at 00:15, Michael Halvorson wrote: > Kevin and James, > > At Microsoft/Redmond in the late 80s and early 90s, there was a lot of > lore around the distribution of "free" sodas in refrigerators in most of > the break rooms. This was before bottled water became a thing, for the most > part. 
On tours for new employees and guests, there was a lot of admiration > for the relatively narrow selection of Coke, Diet Coke, Milk, and Chocolate > Milk, which people could freely consume if they wished. Coffee was less > popular, but people did venture off "campus" for burgers, ribs, etc. > > The most popular stimulant beverage by far at Microsoft was Mountain Dew, > among developers and the documentation teams. In other circles, Jolt Cola > was popular, and mentioned in publications like *The Cyberpunk Handbook* > (Random House, 1995), edited by R. U. Sirius [Ken Goffman], St. Jude [Jude > Milhon], and Bart Nagel. See p. 66. > > --Michael > > On Mon, Jul 20, 2020 at 2:52 PM Kevin Driscoll > wrote: > >> Hello Jim and SIGCIS, >> >> Two references come to mind: >> >> 1. The "Trojan Room coffee pot" at the U of Cambridge is often cited as >> the first live camera on the web: >> - Quentin Stafford-Fraser, "On Site: The Life and Times of the First Web >> Cam," Communications of the ACM 44, no. 7 (July 1, 2001): 25-26. >> https://doi.org/10.1145/379300.379327. >> - Full text of above without paywall: >> https://www.cl.cam.ac.uk/coffee/qsf/cacm200107.html >> - Captured by the Wayback Machine on 10 December 1997: >> http://web.archive.org/web/19971210230542/http://www.cl.cam.ac.uk/coffee/coffee.html >> >> 2. Roy Levin of Microsoft Research published a paper about running an >> industry lab in which he recommends that managers "INSTALL A WORLD-CLASS >> COFFEE MACHINE" and notes that "the first capital purchase" at MSR-Silicon >> Valley was an espresso machine. >> - Roy Levin, "A Perspective on Computing Research Management," ACM SIGOPS >> Operating Systems Review 41, no. 2 (April 1, 2007): 3-9, >> https://doi.org/10.1145/1243418.1243420. >> >> I've heard other lore about coffee culture at Microsoft that involves the >> proximity of Starbucks in the 1990s.
Allegedly, management lobbied for >> coffee carts in every building to keep programmers from driving to >> off-campus coffeehouses. No cite for that one but it would be fun to track >> down the origin of the story. >> >> Looking forward to a caffeinated special issue of the Annals on the >> transnational history of stimulants and computing. >> >> Best of luck, >> >> Kevin Driscoll >> U of Virginia >> >> >> On Mon, Jul 20, 2020 at 3:41 PM James Cortada wrote: >> >>> >>> The IT community of users, programmers, vendors, etc have for decades >>> had a reputation for being extensive consumers of coffee. In some parts of >>> the IT ecosystem, especially among those who work odd hours, such as >>> programmers, computer operators, and vendor field engineers. I am studying >>> the corporate ephemera of this industry and its cultural attachments, such >>> as coffee cups and what they tell us about computing. Do any of you have >>> any information, ephemera, or sources and citations on this specific issue >>> of coffee and computing? I can get many industry folks, such as IBM >>> retirees, to wax eloquently on the subject in their private FB accounts, >>> but that is not enough. Corporate culture is tough to study. Thanks in >>> advance for your help. Jim >>> -- >>> James W. Cortada >>> Senior Research Fellow >>> Charles Babbage Institute >>> University of Minnesota >>> jcortada at umn.edu >>> 608-274-6382 >>> _______________________________________________ >>> This email is relayed from members at sigcis.org, the email discussion >>> list of SHOT SIGCIS. Opinions expressed here are those of the member >>> posting and are not reviewed, edited, or endorsed by SIGCIS. 
The list >>> archives are at http://lists.sigcis.org/pipermail/members-sigcis.org/ >>> and you can change your subscription options at >>> http://lists.sigcis.org/listinfo.cgi/members-sigcis.org >> >> _______________________________________________ >> This email is relayed from members at sigcis.org, the email discussion >> list of SHOT SIGCIS. Opinions expressed here are those of the member >> posting and are not reviewed, edited, or endorsed by SIGCIS. The list >> archives are at http://lists.sigcis.org/pipermail/members-sigcis.org/ >> and you can change your subscription options at >> http://lists.sigcis.org/listinfo.cgi/members-sigcis.org > > > > -- > Michael J. Halvorson > Benson Family Chair in Business and Economic History > > > Author of: *Code Nation: Personal Computing and the Learn to Program > Movement in America (2020) * > > > > _______________________________________________ > This email is relayed from members at sigcis.org, the email discussion > list of SHOT SIGCIS. Opinions expressed here are those of the member > posting and are not reviewed, edited, or endorsed by SIGCIS. The list > archives are at http://lists.sigcis.org/pipermail/members-sigcis.org/ and > you can change your subscription options at > http://lists.sigcis.org/listinfo.cgi/members-sigcis.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From James.E.Dobson at dartmouth.edu Tue Jul 21 07:06:07 2020 From: James.E.Dobson at dartmouth.edu (James E. Dobson) Date: Tue, 21 Jul 2020 14:06:07 +0000 Subject: [SIGCIS-Members] Help on Coffee and Computing In-Reply-To: References: Message-ID: Hi, What about the silly RFC 2324: Hyper Text Coffee Pot Control Protocol (HTCPCP/1.0)? https://tools.ietf.org/html/rfc2324 and the CMU Coke machine along with the finger interfaces, etc? 
https://cseweb.ucsd.edu/~bsy/coke.history.txt Jed From: Members on behalf of James Cortada Date: Monday, July 20, 2020 at 3:42 PM To: "members at sigcis.org" Subject: [SIGCIS-Members] Help on Coffee and Computing The IT community of users, programmers, vendors, etc have for decades had a reputation for being extensive consumers of coffee. In some parts of the IT ecosystem, especially among those who work odd hours, such as programmers, computer operators, and vendor field engineers. I am studying the corporate ephemera of this industry and its cultural attachments, such as coffee cups and what they tell us about computing. Do any of you have any information, ephemera, or sources and citations on this specific issue of coffee and computing? I can get many industry folks, such as IBM retirees, to wax eloquently on the subject in their private FB accounts, but that is not enough. Corporate culture is tough to study. Thanks in advance for your help. Jim -- James W. Cortada Senior Research Fellow Charles Babbage Institute University of Minnesota jcortada at umn.edu 608-274-6382 -------------- next part -------------- An HTML attachment was scrubbed... URL: From G.Alberts at uva.nl Tue Jul 21 07:09:32 2020 From: G.Alberts at uva.nl (Gerard Alberts) Date: Tue, 21 Jul 2020 14:09:32 +0000 Subject: [SIGCIS-Members] Help on Coffee and Computing In-Reply-To: References: Message-ID: <2e72b4f0f75a450185d19d3082fa0e7a@uva.nl> Dear Jim, Allow me to chip in my tiny story of ephemera and turn it into a challenge to you. On the edge of my bookshelf was for years a champagne glass, until it fell off into a dozen pieces --as do things in precarious positions. The glass had been engraved with the IBM logo and recalled the date of the opening of some new data center or computing center. I collected it from the flea market. I picked it up, because to me it represented the lore of emblems, decorations, fountain pens.
It is a culture not uncommon in the world of computing, but IBM was particularly good at it, mixing --often macho tainted-- company pride with celebration of technical progress. There was a high culture of walking well suited, of not riding a motorcycle when visiting clients, of not spoiling coffee but finishing a job on ephedrine. Jim, the lore of the coffee mug is all around and, judging from the response, we all have access to this low hanging fruit --I reckon the same would be true of printed T-shirts as ephemera of hacker culture. But few would have access like you have to the high culture of computing, of humans struggling with computing --still tacit knowledge but visible to your exquisite ethnographical perception. Help us by collecting the ephemera and anecdotes on the further branches of the tree. Surprise me with things I would not even have recognized as ephemera of computing culture. Cheers, Gerard ________________________________ From: Members on behalf of James Cortada Sent: Monday, 20 July 2020 21:41 To: members at sigcis.org Subject: [SIGCIS-Members] Help on Coffee and Computing The IT community of users, programmers, vendors, etc have for decades had a reputation for being extensive consumers of coffee. In some parts of the IT ecosystem, especially among those who work odd hours, such as programmers, computer operators, and vendor field engineers. I am studying the corporate ephemera of this industry and its cultural attachments, such as coffee cups and what they tell us about computing. Do any of you have any information, ephemera, or sources and citations on this specific issue of coffee and computing? I can get many industry folks, such as IBM retirees, to wax eloquently on the subject in their private FB accounts, but that is not enough. Corporate culture is tough to study. Thanks in advance for your help. Jim -- James W.
Cortada Senior Research Fellow Charles Babbage Institute University of Minnesota jcortada at umn.edu 608-274-6382 -------------- next part -------------- An HTML attachment was scrubbed... URL: From rgj at dcs.bbk.ac.uk Tue Jul 21 07:27:33 2020 From: rgj at dcs.bbk.ac.uk (Roger Johnson) Date: Tue, 21 Jul 2020 14:27:33 +0000 Subject: [SIGCIS-Members] Help on Coffee and Computing In-Reply-To: <2e72b4f0f75a450185d19d3082fa0e7a@uva.nl> References: <2e72b4f0f75a450185d19d3082fa0e7a@uva.nl> Message-ID: Dear Jim I am not sure if this is exactly what you want to hear but I clearly remember the morning ritual when I joined CAP in London (then one of the UK's major software houses) in 1972. The first person in started up the filter coffee machine which had the large glass flasks filled by passing hot water through the coffee held in a filter paper. However it took a long time to fill the flask and we rapidly worked out that with a little manual dexterity it was possible to whip the flask aside and hold one's mug under the coffee flowing out of the funnel holding the filter paper. When your mug was filled the flask could be switched back to catch the flow. Only years later did I realise that the first coffee through the machine must have been packed with caffeine. No wonder we were a very excitable bunch of programmers! We also consumed a tin box of assorted sweet biscuits during the day so our diet was probably not the best - still we built good systems in COBOL and PL/1! Happy days ... Good wishes Roger From: Members On Behalf Of Gerard Alberts Sent: 21 July 2020 15:10 To: James Cortada ; members at sigcis.org Subject: Re: [SIGCIS-Members] Help on Coffee and Computing Dear Jim, Allow me to chip in my tiny story of ephemera and turn it into a challenge to you. On the edge of my bookshelf was for years a champagne glass, until it fell off into a dozen pieces --as do things in precarious positions.
The glass had been engraved with the IBM logo and recalled the date of the opening of some new data center or computing center. I collected it from the flea market. I picked it up, because to me it represented the lore of emblems, decorations, fountain pens. It is a culture not uncommon in the world of computing, but IBM was particularly good at it, mixing --often macho tainted-- company pride with celebration of technical progress. There was a high culture of walking well suited, of not riding a motorcycle when visiting clients, of not spoiling coffee but finishing a job on ephedrine. Jim, the lore of the coffee mug is all around and, judging from the response, we all have access to this low hanging fruit --I reckon the same would be true of printed T-shirts as ephemera of hacker culture. But few would have access like you have to the high culture of computing, of humans struggling with computing --still tacit knowledge but visible to your exquisite ethnographical perception. Help us by collecting the ephemera and anecdotes on the further branches of the tree. Surprise me with things I would not even have recognized as ephemera of computing culture. Cheers, Gerard ________________________________ From: Members on behalf of James Cortada Sent: Monday, 20 July 2020 21:41 To: members at sigcis.org Subject: [SIGCIS-Members] Help on Coffee and Computing The IT community of users, programmers, vendors, etc have for decades had a reputation for being extensive consumers of coffee. In some parts of the IT ecosystem, especially among those who work odd hours, such as programmers, computer operators, and vendor field engineers. I am studying the corporate ephemera of this industry and its cultural attachments, such as coffee cups and what they tell us about computing. Do any of you have any information, ephemera, or sources and citations on this specific issue of coffee and computing?
I can get many industry folks, such as IBM retirees, to wax eloquently on the subject in their private FB accounts, but that is not enough. Corporate culture is tough to study. Thanks in advance for your help. Jim -- James W. Cortada Senior Research Fellow Charles Babbage Institute University of Minnesota jcortada at umn.edu 608-274-6382 -------------- next part -------------- An HTML attachment was scrubbed... URL: From mail at jeffreythompson.org Tue Jul 21 07:39:16 2020 From: mail at jeffreythompson.org (Jeff Thompson) Date: Tue, 21 Jul 2020 10:39:16 -0400 Subject: [SIGCIS-Members] Help on Coffee and Computing In-Reply-To: References: <2e72b4f0f75a450185d19d3082fa0e7a@uva.nl> Message-ID: Also maybe not exactly what you're after, but there are "coffee clubs" scattered all around the Bell Labs campus, where researchers pool money to buy coffee machines (of differing levels of fanciness) and supplies. The one right near the Unix group is particularly nice! Jeff - - - Jeff Thompson Assistant Professor, Program Director Visual Art & Technology, Stevens Institute of Technology www.jeffreythompson.org @jeffkthompson > On Jul 21, 2020, at 10:27 AM, Roger Johnson wrote: > > Dear Jim > > I am not sure if this is exactly what you want to hear but I clearly remember the morning ritual when I joined CAP in London (then one of the UK's major software houses) in 1972. The first person in started up the filter coffee machine which had the large glass flasks filled by passing hot water through the coffee held in a filter paper. However it took a long time to fill the flask and we rapidly worked out with a little manual dexterity it was possible to whip the flask aside and hold one's mug under the coffee flowing out of the funnel holding the filter paper. When your mug was filled the flask could be switched back to catch the flow. > > Only years later did I realise that the first coffee through the machine must have been packed with caffeine.
No wonder we were a very excitable bunch of programmers! We also consumed a tin box of assorted sweet biscuits during the day so our diet was probably not the best - still we built good systems in COBOL and PL/1! Happy days ... > > Good wishes > > Roger > > From: Members On Behalf Of Gerard Alberts > Sent: 21 July 2020 15:10 > To: James Cortada ; members at sigcis.org > Subject: Re: [SIGCIS-Members] Help on Coffee and Computing > > Dear Jim, > > Allow me to chip in my tiny story of ephemera and turn it into a challenge to you. > > On the edge of my bookshelf was for years a champagne glass, until it fell off into a dozen pieces --as do things in precarious positions. The glass had been engraved with the IBM logo and recalled the date of the opening of some new data center or computing center. I collected it from the flea market. > > I picked it up, because to me it represented the lore of emblems, decorations, fountain pens. It is a culture not uncommon in the world of computing, but IBM was particularly good at it, mixing --often macho tainted-- company pride with celebration of technical progress. There was a high culture of walking well suited, of not riding a motorcycle when visiting clients, of not spoiling coffee but finishing a job on ephedrine. > > Jim, the lore of the coffee mug is all around and, judging from the response, we all have access to this low hanging fruit --I reckon the same would be true of printed T-shirts as ephemera of hacker culture. But few would have access like you have to the high culture of computing, of humans struggling with computing --still tacit knowledge but visible to your exquisite ethnographical perception. Help us by collecting the ephemera and anecdotes on the further branches of the tree. > > Surprise me with things I would not even have recognized as ephemera of computing culture.
> > Cheers, > > Gerard > > Van: Members namens James Cortada > Verzonden: maandag 20 juli 2020 21:41 > Aan: members at sigcis.org > Onderwerp: [SIGCIS-Members] Help on Coffee and Computing > > > The IT community of users, programmers, vendors, etc have for decades had a reputation for being extensive consumers of coffee. In some parts of the IT ecosystem, especially among those who work odd hours, such as programmers, computer operators, and vendor field engineers. I am studying the corporate ephemera of this industry and its cultural attachments, such as coffee cups and what they tell us about computing. Do any of you have any information, ephemera, or sources and citations on this specific issue of coffee and computing? I can get many industry folks, such as IBM retirees, to wax eloquently on the subject in their private FB accounts, but that is not enough. Corporate culture is tough to study. Thanks in advance for your help. Jim > -- > James W. Cortada > Senior Research Fellow > Charles Babbage Institute > University of Minnesota > jcortada at umn.edu > 608-274-6382 > _______________________________________________ > This email is relayed from members at sigcis.org, the email discussion list of SHOT SIGCIS. Opinions expressed here are those of the member posting and are not reviewed, edited, or endorsed by SIGCIS. The list archives are at http://lists.sigcis.org/pipermail/members-sigcis.org/ and you can change your subscription options at http://lists.sigcis.org/listinfo.cgi/members-sigcis.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From CeruzziP at si.edu Tue Jul 21 07:39:38 2020 From: CeruzziP at si.edu (Ceruzzi, Paul) Date: Tue, 21 Jul 2020 14:39:38 +0000 Subject: [SIGCIS-Members] Help on Coffee and Computing In-Reply-To: References: <2e72b4f0f75a450185d19d3082fa0e7a@uva.nl>, Message-ID: In November 1996 I attended a celebration of the 50th anniversary of the ENIAC at the Aberdeen Proving Ground in Maryland. 
They gave out a beautiful mug to attendees. Some of the women who operated the ENIAC were there, as was Herman Goldstine. The coffee mug is one of my favorites, although I'd be willing to donate it to a qualified museum if anyone is interested. At the time Amtrak had one or two trains a day that stopped at Aberdeen. Although Aberdeen is a short drive from my home, naturally I took the train and was met at the platform, thus recreating the legendary encounter between Herman Goldstine and John von Neumann. Cheers, Paul Ceruzzi

-------------- next part -------------- A non-text attachment was scrubbed... Name: ENIAC coffee mug.jpg Type: image/jpeg Size: 40135 bytes Desc: ENIAC coffee mug.jpg URL:

From christine.aicardi at kcl.ac.uk Tue Jul 21 07:47:42 2020 From: christine.aicardi at kcl.ac.uk (Aicardi, Christine) Date: Tue, 21 Jul 2020 14:47:42 +0000 Subject: [SIGCIS-Members] Help on Coffee and Computing In-Reply-To: References: Message-ID: Dear Jim, I was a programmer in the mid-1980s, and I remember that we made so many trips to the drinks machine (for coffee, tea, chocolate, fizzy drinks, you name it) because compiling and link-editing a programme took up so much time that we wouldn't stay glued to our screens and wait. Hence the trips to get something in a cup that would give a kick to our taste buds, and usually (these were still the days) to smoke a cigarette. Coffee + cigarette: that was a match made in heaven (or hell?). So I'm wondering whether you've come across smoking-related memorabilia, from before smoking was (rightfully) banned from the workplace? Good luck with the project! Christine Aicardi

From brian.randell at newcastle.ac.uk Tue Jul 21 07:56:21 2020 From: brian.randell at newcastle.ac.uk (Brian Randell) Date: Tue, 21 Jul 2020 14:56:21 +0000 Subject: [SIGCIS-Members] Help on Coffee and Computing In-Reply-To: <2e72b4f0f75a450185d19d3082fa0e7a@uva.nl> References: <2e72b4f0f75a450185d19d3082fa0e7a@uva.nl> Message-ID: <44C45646-1739-41BA-91D4-0C20F86A76D8@newcastle.ac.uk> Hi Gerard: Your comment about collecting ephemera reminds me of the time I was involved with the (Boston) Computer Museum.
Gordon Bell, in Out of a Closet: The Early Years of The Computer [x]* Museum https://www.researchgate.net/publication/267569839_Bell_Gordon_Out_of_a_Closet_The_Early_Years_of_The_Computer_Museum_Dedicated_to_Brian_Randell_on_the_Occasion_of_his_75th_Birthday very kindly recalled: "As its first Chairman of the Collections and Exhibits Committee, Brian first argued to preserve and display advertisements and ephemera as a significant source for historical understanding and audience recollection". This was probably one of the, no doubt several, causes of two of the Museum's exhibits being in fact very well-done recreations of an office and a teenager's bedroom featuring, respectively, a single PC and a single Mac! :-)

Cheers

Brian Randell

--
School of Computing, Newcastle University, 1 Science Square, Newcastle upon Tyne, NE4 5TG EMAIL = Brian.Randell at ncl.ac.uk PHONE = +44 191 208 7923 URL = http://www.ncl.ac.uk/computing/people/profile/brianrandell.html

From lowood at stanford.edu Tue Jul 21 09:44:10 2020 From: lowood at stanford.edu (Henry E Lowood) Date: Tue, 21 Jul 2020 16:44:10 +0000 Subject: [SIGCIS-Members] Help on Coffee and Computing In-Reply-To: References: Message-ID: Hi Jim, You might want to browse through the Doug Menuez photography collection at Stanford. About 10,000 of the images are online (out of about 200-250,000). He captures quite a bit of the culture in companies like Apple, NeXT, Adobe, etc., mostly 1980s. I am sure you will find many coffee mugs there! Here is a link to the online exhibit created from the images in this collection: https://exhibits.stanford.edu/menuez Hit "browse" to see a selection of companies represented. Henry

Henry Lowood, PhD Harold C. Hohbach Curator, History of Science & Technology Collections; Curator, Film & Media Collections HASG, Green Library, 557 Escondido Mall Stanford University Libraries Stanford CA 94305-6066 PH: 650-723-4602 EM: lowood at stanford.edu

From: Members On Behalf Of Kidwell, Peggy Sent: Tuesday, July 21, 2020 4:09 AM To: Staiti, Alana; James Cortada; members at sigcis.org Subject: Re: [SIGCIS-Members] Help on Coffee and Computing I would add to Alana's fine list: https://americanhistory.si.edu/collections/search/object/nmah_1867086 (a photograph) https://americanhistory.si.edu/collections/search/object/NMAH.AC.0324_ref460 (a cartoon - though not much coffee shown) Best - Peggy Kidwell

From: Members on behalf of Staiti, Alana Sent: Monday, July 20, 2020 3:53 PM To: James Cortada; members at sigcis.org Subject: Re: [SIGCIS-Members] Help on Coffee and Computing Hi Jim, The National Museum of American History has some mugs in the computing collection
featuring company names. Some include fun little sayings. See links below for a few examples. I'm not sure I can elaborate on coffee culture, though! We are still working remotely, but if you have specific questions about any of these or other objects I'd be happy to do whatever digging I can from afar, for the time being. https://americanhistory.si.edu/collections/search/object/nmah_1281495 https://americanhistory.si.edu/collections/search/object/nmah_1281135 https://americanhistory.si.edu/collections/search/object/nmah_1281136 https://americanhistory.si.edu/collections/search/object/nmah_1281137 Be well, Alana

Alana Staiti (she/her/hers) Curator of the History of Computers and Information Sciences National Museum of American History Smithsonian Institution staitia at si.edu

From ft at mur.at Tue Jul 21 10:06:53 2020 From: ft at mur.at (Friedrich Tietjen) Date: Tue, 21 Jul 2020 19:06:53 +0200 Subject: [SIGCIS-Members] Help on Coffee and Computing In-Reply-To: References: Message-ID: <20200721190653.Horde.FQwpRzebYv2Oh5FEp1j5tA1@webmail.mur.at> Hi Jim, great subject for research. It might also be worth looking into the many patents meant to fight coffee (and other beverages) as the bane of the keyboard. All the best Friedrich

From marc at webhistory.org Tue Jul 21 10:08:56 2020 From: marc at webhistory.org (Marc Weber) Date: Tue, 21 Jul 2020 10:08:56 -0700 Subject: [SIGCIS-Members] Help on Coffee and Computing In-Reply-To: References: Message-ID: <4E28F378-1948-45E0-A256-8C72BFD5FF22@webhistory.org> Mugs are also well represented in our collection at the Computer History Museum: you'll get 470 hits when you search on "mug" in our online catalog. In fact we have essentially stopped collecting them as a result. We also have a Peet's Dash Button. Best, Marc

Marc Weber | marc at webhistory.org | +1 415 282 6868 Curatorial Director, Internet History Program Computer History Museum, 1401 N Shoreline Blvd., Mountain View CA 94043 computerhistory.org/nethistory | Co-founder, Web History Center and Project

From mike at willegal.net Tue Jul 21 11:43:16 2020 From: mike at willegal.net (mike at willegal.net) Date: Tue, 21 Jul 2020 14:43:16 -0400 Subject: [SIGCIS-Members] Help on Coffee and Computing In-Reply-To: <4E28F378-1948-45E0-A256-8C72BFD5FF22@webhistory.org> References: <4E28F378-1948-45E0-A256-8C72BFD5FF22@webhistory.org> Message-ID: I can relate to some of these comments. Cisco, where I have been employed since 1997, used to have coolers with a large variety of free beverages available to all employees. I had one friend who said that when the free drinks went away, so would he, and he did leave not long after the free drinks disappeared. I also used to fill my cup directly from the outflow of the old-style brewing machines. At one point Encore Computer charged employees a quarter a cup for coffee on the honor system. Eventually the coffee became free to employees, but management didn't tell us and used the funds to sponsor a year-end holiday party. Here is a story. More than 15 years ago, I fairly frequently travelled back and forth between Boston and San Jose on the "Nerd Bird." Over time, I had established the habit of staying on east coast time, even when out in California. Visiting San Jose during the intense effort of a new hardware bring-up, the team stayed and worked through a weekend. Sunday morning, I woke up, as usual, about 4 or 5 AM local time.
Not having anything to do in the hotel room, I decided to go into the office and get a head start on the day's efforts. I arrived in the large, dark, and apparently empty office building at something like 5 AM that Sunday morning. With the lights out and no one in sight, I found that the coffee machine was in the middle of brewing a fresh pot of coffee. It was a very eerie thing. Eventually I ran across a guy from another team who had come into the office early that morning and needed his cup of "Joe." Eventually I had to give up on all caffeinated beverages, as I often didn't drink them during weekends and would then end up with a splitting headache on Sunday. -Mike Willegal

From thomas.haigh at gmail.com Tue Jul 21 15:31:57 2020 From: thomas.haigh at gmail.com (thomas.haigh at gmail.com) Date: Tue, 21 Jul 2020 17:31:57 -0500 Subject: [SIGCIS-Members] Origin of "Hobbes" figures for Web server growth during the 1990s Message-ID: <039f01d65fae$becd92a0$3c68b7e0$@gmail.com> Hello SIGCIS, One of the final highlighted items in the endnotes for the Revised History of Modern Computing is a note to support figures given for the rapid growth of Web servers during the 1990s. I'm writing to share what I was able to figure out with a few hours of web searching, and to ask whether anyone has more authoritative knowledge of this. When we drafted the relevant part of the text, we just grabbed numbers from the so-called "Hobbes' Internet Timeline" at https://www.zakon.org/robert/internet/timeline/#Growth The 1990s data appears in the tabular inset. 10, 50, and 100,000 are suspiciously round numbers, and 1 is clearly a retroactive data point rather than the result of a count.
Other numbers like 646,162 give the impression of an actual count of some kind. So now the challenge is to figure out where those numbers came from and what was being counted. An archive version from 2001 (https://web.archive.org/web/20010220202319/https://www.zakon.org/robert/int ernet/timeline/#Growth ) has more detailed data for 1996-2000, but lacks the first three data points for 1/90 to 12/92. A note at the bottom reads "WWW growth summary compiled from: - Web growth summary page by Matthew Gray of MIT: http://www.mit.edu/people/mkgray/net/web-growth-summary.html - Netcraft at http://www.netcraft.com/survey/" So then I followed http://www.mit.edu/people/mkgray/net/web-growth-summary.html which, remarkably, is still live. The personal page of Matthew K. Gray provides the source of the Hobbes figures from 1993 to early 1996. The final two rows (not used by Hobbes) are labelled as "est" for estimate, which implies that the other rows are somehow counted. I found more information at http://www.mit.edu/people/mkgray/growth/ which explains "The primary tool used to collect the data presented here was the World Wide Web Wanderer, the first automated Web agent or "spider". The Wanderer was first functional in spring of 1993 and performed regular traversals of the Web from June 1993 to June 1995." That solves the mystery of the round 100,000 number for 1/96 which must also be an estimate, though it is not marked as such. He appears to have carried out the measurement work as an undergraduate physics student, some of it while taking a leave to start a company called "net Genesis" to develop web tools. Gray never got around to posting the month-by-month counts he claimed to have made, just the five data points for the six-monthly intervals. So his link for "Web Growth Data" http://www.mit.edu/people/mkgray/net/web-growth-data.html just goes to a note that "The full data sets on web growth will be published here sometime when I get time. 
Do NOT send me email asking for the data in advance, asking me when it will be available or anything of the sort. It will be available sometime later. It will include the data from the comprehensive list of sites." Gray's MIT site points people to a newer site, which is no longer functional. But I think this is probably the same guy: http://x.gray.org/ and http://matthew.gray.org/ If he still had his original month-by-month lists of all known websites for the period maybe he'd be willing to donate it to an archive. He asked people not to email, but maybe after 23 years it would be OK. Apparently he works for Google now. So the measurements do not come from an official MIT research project, and the data wasn't peer reviewed or even published online except as a one page summary. But on the other hand we can't go back and crawl the early web ourselves, so they may nevertheless be the best numbers available for June 1993-June 1995. Interesting aside: Wikipedia (https://en.wikipedia.org/wiki/WebCrawler) suggests that the first search engine powered by a crawler did not come online until April 1994, but of course crawling the web to count is easier than crawling to produce a searchable public index. This also implies that the rest of the data comes from http://www.netcraft.com/survey/ which is still being updated to this day. The numbers do more or less match. However, the current Netcraft graph shows only "host names" until around the year 2000, at which point it also begins to graph a very much smaller number of "Active sites." https://www.netcraft.com/active-sites/ explains the difference between hosts and active sites thus: In the early days of the web, hostnames were a good indication of actively managed content providing information and services to the internet community. 
The situation is now considerably more blurred - the web includes a great deal of activity, but also a considerable quantity of sites that are untouched by human hand, produced automatically at the point of customer acquisition by domain registration or hosting service companies, advertising providers or speculative domain registrants, or search-engine optimisation companies. The biggest domain registrars are large enough to be significant in the context of the whole survey. For example, GoDaddy (17M hostnames) and 1&1 (10M hostnames) make up 16% of the 168M hostnames surveyed in May 2008. Circa 1996-1997, the number of distinct IP addresses would have been a good approximation to the number of real sites, since hosting companies would typically allocate an IP address to each site with distinct content, and multiple domain names could point to the IP address being used to serve the same site content. However, with the adoption of HTTP/1.1 virtual hosting, and the availability of load balancing technology it is possible to reliably host a great number of active sites on a single (or relatively few) IP addresses. In June 2000, the first month where both numbers are given, the estimate is 7.5 million active sites vs. 17 million host names. So our current plan is to avoid citing the Hobbes page at all, and instead to cite M K Gray's personal page at MIT for the early 1990s numbers and the Netcraft survey estimate of web hostnames for the later ones, with a caveat that the hostname counts for 1998-99 were likely already inflated by domain squatters and spammers. Anyone got anything to add, or any better sources on 1990s web server numbers and counting methodology to point us to? Thanks, Tom -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image005.png Type: image/png Size: 71487 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image006.png Type: image/png Size: 76411 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image003.jpg Type: image/jpeg Size: 55388 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image008.jpg Type: image/jpeg Size: 124770 bytes Desc: not available URL: From marc at webhistory.org Tue Jul 21 16:57:05 2020 From: marc at webhistory.org (Marc Weber) Date: Tue, 21 Jul 2020 16:57:05 -0700 Subject: [SIGCIS-Members] Origin of "Hobbes" figures for Web server growth during the 1990s In-Reply-To: <039f01d65fae$becd92a0$3c68b7e0$@gmail.com> References: <039f01d65fae$becd92a0$3c68b7e0$@gmail.com> Message-ID: <1F257BA5-6185-430B-9450-A0FE58028703@webhistory.org> Hi Tom, There are a couple of figures on the very earliest years in the timeline in Web pioneer Kevin Hughes' "From Webspace to Cyberspace," attached. After 1996, the Internet Archive is likely to have some accurate figures from their Web crawl. Let me know offline if you want me to put you in touch with Kevin or folks at the Archive for further info/ideas. You might also consider contacting ISOC or W3C. Best, Marc > On Jul 21, 2020, at 15:31, wrote: > > Hello SIGCIS, > > One of the final highlighted items in the endnotes for the Revised History of Modern Computing is a note to support figures given for the rapid growth of Web servers during the 1990s. I'm writing to share what I was able to figure out with a few hours of web searching and to ask if anyone has more authoritative knowledge of this. > > When we drafted the relevant part of the text, we just grabbed numbers from the so-called "Hobbes' Internet Timeline" at https://www.zakon.org/robert/internet/timeline/#Growth > > > > The 1990s data appears in the tabular inset. 
10, 50, and 100,000 are suspiciously round numbers, and 1 is clearly a retroactive data point rather than the result of a count. Other numbers like 646,162 give the impression of an actual count of some kind. So now the challenge is to figure out where those numbers came from and what was being counted. > > An archive version from 2001 (https://web.archive.org/web/20010220202319/https://www.zakon.org/robert/internet/timeline/#Growth ) has more detailed data for 1996-2000, but lacks the first three data points for 1/90 to 12/92. > > > > A note at the bottom reads > > "WWW growth summary compiled from: > - Web growth summary page by Matthew Gray of MIT: > http://www.mit.edu/people/mkgray/net/web-growth-summary.html > - Netcraft at http://www.netcraft.com/survey/ " > > So then I followed http://www.mit.edu/people/mkgray/net/web-growth-summary.html which, remarkably, is still live. The personal page of Matthew K. Gray provides the source of the Hobbes figures from 1993 to early 1996. The final two rows (not used by Hobbes) are labelled as "est" for estimate, which implies that the other rows are somehow counted. > > > > I found more information at http://www.mit.edu/people/mkgray/growth/ which explains "The primary tool used to collect the data presented here was the World Wide Web Wanderer, the first automated Web agent or "spider". The Wanderer was first functional in spring of 1993 and performed regular traversals of the Web from June 1993 to June 1995." That solves the mystery of the round 100,000 number for 1/96 which must also be an estimate, though it is not marked as such. He appears to have carried out the measurement work as an undergraduate physics student, some of it while taking a leave to start a company called "net Genesis" to develop web tools. > > Gray never got around to posting the month-by-month counts he claimed to have made, just the five data points for the six-monthly intervals. So his link for "Web Growth Data" 
http://www.mit.edu/people/mkgray/net/web-growth-data.html just goes to a note that "The full data sets on web growth will be published here sometime when I get time. Do NOT send me email asking for the data in advance, asking me when it will be available or anything of the sort. It will be available sometime later. It will include the data from the comprehensive list of sites." > > Gray's MIT site points people to a newer site, which is no longer functional. But I think this is probably the same guy: http://x.gray.org/ and http://matthew.gray.org/ If he still had his original month-by-month lists of all known websites for the period maybe he'd be willing to donate it to an archive. He asked people not to email, but maybe after 23 years it would be OK. Apparently he works for Google now. > > So the measurements do not come from an official MIT research project, and the data wasn't peer reviewed or even published online except as a one page summary. But on the other hand we can't go back and crawl the early web ourselves, so they may nevertheless be the best numbers available for June 1993-June 1995. Interesting aside: Wikipedia (https://en.wikipedia.org/wiki/WebCrawler ) suggests that the first search engine powered by a crawler did not come online until April 1994, but of course crawling the web to count is easier than crawling to produce a searchable public index. > > This also implies that the rest of the data comes from http://www.netcraft.com/survey/ which is still being updated to this day. The numbers do more or less match. However, the current Netcraft graph shows only "host names" until around the year 2000, at which point it also begins to graph a very much smaller number of "Active sites." > > > > https://www.netcraft.com/active-sites/ explains the difference between hosts and active sites thus: > > In the early days of the web, hostnames were a good indication of actively managed content providing information and services to the internet community. 
The situation is now considerably more blurred - the web includes a great deal of activity, but also a considerable quantity of sites that are untouched by human hand, produced automatically at the point of customer acquisition by domain registration or hosting service companies, advertising providers or speculative domain registrants, or search-engine optimisation companies. The biggest domain registrars are large enough to be significant in the context of the whole survey. For example, GoDaddy (17M hostnames) and 1&1 (10M hostnames) make up 16% of the 168M hostnames surveyed in May 2008. > > Circa 1996-1997, the number of distinct IP addresses would have been a good approximation to the number of real sites, since hosting companies would typically allocate an IP address to each site with distinct content, and multiple domain names could point to the IP address being used to serve the same site content. However, with the adoption of HTTP/1.1 virtual hosting, and the availability of load balancing technology it is possible to reliably host a great number of active sites on a single (or relatively few) IP addresses. > > In June 2000, the first month where both numbers are given, the estimate is 7.5 million active sites vs. 17 million host names. > > So our current plan is to avoid citing the Hobbes page at all, and instead to cite M K Gray's personal page at MIT for the early 1990s numbers and the Netcraft survey estimate of web hostnames for the later ones, with a caveat that the hostname counts for 1998-99 were likely already inflated by domain squatters and spammers. > > Anyone got anything to add, or any better sources on 1990s web server numbers and counting methodology to point us to? > > Thanks, > > Tom > > > > > > > _______________________________________________ > This email is relayed from members at sigcis.org , the email discussion list of SHOT SIGCIS. 
Opinions expressed here are those of the member posting and are not reviewed, edited, or endorsed by SIGCIS. The list archives are at http://lists.sigcis.org/pipermail/members-sigcis.org/ and you can change your subscription options at http://lists.sigcis.org/listinfo.cgi/members-sigcis.org Marc Weber | marc at webhistory.org | +1 415 282 6868 Internet History Program Curatorial Director, Computer History Museum 1401 N Shoreline Blvd., Mountain View CA 94043 computerhistory.org/nethistory Co-founder, Web History Center and Project, webhistory.org -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Hughes Webspace cspace_1_1.pdf Type: application/pdf Size: 2068686 bytes Desc: not available URL: -------------- next part -------------- An HTML attachment was scrubbed... URL: From jcortada at umn.edu Thu Jul 23 06:45:42 2020 From: jcortada at umn.edu (James Cortada) Date: Thu, 23 Jul 2020 08:45:42 -0500 Subject: [SIGCIS-Members] Help on Coffee and Computing In-Reply-To: References: <4E28F378-1948-45E0-A256-8C72BFD5FF22@webhistory.org> Message-ID: Everyone has been wonderful and generous with your thoughts and leads. It seems you are as excited about coffee as everyone else in the computer world. It is becoming clearer to me that there are certain "material culture" issues that can guide us to understanding the world of computing. Besides coffee mugs, lapel pins, postcards and all that stuff we would get at COMDEX, for example, just opens up all kinds of avenues for the study of computing culture. And yes, it seems every industry loved its coffee and other trinkets. Thank you for your thoughts, I have a lot to ponder. Jim On Tue, Jul 21, 2020 at 1:43 PM mike at willegal.net wrote: > I can relate to some of these comments. > > Cisco, where I have employed since 1997, used to have coolers with a large > variety of free beverages available to all employees. 
I had one friend > that said that when the free drinks went away, so would he, and he did > leave not so long after the free drinks disappeared. > > I also used to fill my cup directly from the outflow from the old style > brewing machines. > > At one point Encore Computer, charged employees a quarter a cup for coffee > on the honor system. Eventually the coffee became free to employees, but > management didn't tell us and they used the funds to sponsor a year end > holiday party. > > Here is a story. More than 15 years ago, I fairly frequently travelled > back and forth between Boston and San Jose on the "Nerd Bird." Over time, > I had established the habit of staying on east coast time, even when out in > California. Visiting San Jose, during the intense effort of a new hardware > bring up, the team stayed and worked through a weekend. Sunday morning, I > woke up, as usual, about 4 or 5 AM local time. Not having anything to do > in the hotel room, I decided to go into the office and get a head start on > the days efforts. I arrived in the large, dark and apparently empty, > office building at something like 5 AM that Sunday morning. With the > lights out, no one in sight, I found that the coffee machine was in the > middle of brewing a fresh pot of coffee. It was a very eerie thing. > Eventually, I ran across a guy from another team that had come into the > office early that morning and needed his cup of "Joe." > > Eventually I had to give up on all caffeinated beverages, as I often > didn't drink them during weekends and would then end up with a splitting > headache on Sunday. > > > -Mike Willegal > > > > > On Jul 21, 2020, at 1:08 PM, Marc Weber wrote: > > Mugs are also well represented in our collection at the Computer History > Museum... you'll get 470 hits when you search on "mug" in our online catalog > . > In fact we have essentially stopped collecting them as a result. > We also have a Peet's Dash Button > . 
> Best, Marc > > Marc Weber | > marc at webhistory.org | +1 415 282 6868 > Curatorial Director, Internet History Program > Computer History Museum, 1401 N Shoreline Blvd., Mountain View CA 94043 > computerhistory.org/nethistory | Co-founder, Web History Center and > Project > > On Jul 21, 2020, at 09:44, Henry E Lowood wrote: > > Hi Jim, > You might want to browse through the Doug Menuez photography collection at > Stanford. About 10,000 of the images are online (out of about > 200-250,000). He captures quite a bit of the culture in companies like > Apple, NeXT, Adobe, etc., mostly 1980s. I am sure you will find many > coffee mugs there! > Here is a link to the online exhibit created from the images in this > collection: > https://exhibits.stanford.edu/menuez > Hit "browse" to see a selection of companies represented. > Henry > > Henry Lowood, PhD > Harold C. Hohbach Curator, History of Science & Technology Collections; > Curator, Film & Media Collections > HASG, Green Library, 557 Escondido Mall > Stanford University Libraries > Stanford CA 94305-6066 > PH: 650-723-4602 > EM: lowood at stanford.edu > > *From:* Members *On Behalf Of *Kidwell, > Peggy > *Sent:* Tuesday, July 21, 2020 4:09 AM > *To:* Staiti, Alana ; James Cortada ; > members at sigcis.org > *Subject:* Re: [SIGCIS-Members] Help on Coffee and Computing > > I would add to Alana's fine list: > > https://americanhistory.si.edu/collections/search/object/nmah_1867086 (a > photograph) > > https://americanhistory.si.edu/collections/search/object/NMAH.AC.0324_ref460 (a > cartoon - though not much coffee shown) > > Best - > > Peggy Kidwell > ------------------------------ > *From:* Members on behalf of Staiti, > Alana > *Sent:* Monday, July 20, 2020 3:53 PM > *To:* James Cortada ; members at sigcis.org < > members at sigcis.org> > *Subject:* Re: [SIGCIS-Members] Help on Coffee and Computing > > *External Email - Exercise Caution* > Hi Jim, > > The National Museum of American History has some mugs in the 
computing > collection featuring company names. Some include fun little sayings. See > links below for a few examples. I'm not sure I can elaborate on coffee > culture though! We are still working remotely but if you have specific > questions about any of these or other objects I'd be happy to do whatever > digging I can do from afar, for the time being. > > https://americanhistory.si.edu/collections/search/object/nmah_1281495 > > https://americanhistory.si.edu/collections/search/object/nmah_1281135 > > https://americanhistory.si.edu/collections/search/object/nmah_1281136 > > https://americanhistory.si.edu/collections/search/object/nmah_1281137 > > > Be well, > Alana > > *Alana Staiti* (she/her/hers) > Curator of the History of Computers and Information Sciences > National Museum of American History > Smithsonian Institution > staitia at si.edu > > ------------------------------ > *From:* Members on behalf of James > Cortada > *Sent:* Monday, July 20, 2020 3:41 PM > *To:* members at sigcis.org > *Subject:* [SIGCIS-Members] Help on Coffee and Computing > > *External Email - Exercise Caution* > > The IT community of users, programmers, vendors, etc have for decades had > a reputation for being extensive consumers of coffee. In some parts of the > IT ecosystem, especially among those who work odd hours, such as > programmers, computer operators, and vendor field engineers. I am studying > the corporate ephemera of this industry and its cultural attachments, such > as coffee cups and what they tell us about computing. Do any of you have > any information, ephemera, or sources and citations on this specific issue > of coffee and computing? I can get many industry folks, such as IBM > retirees, to wax eloquently on the subject in their private FB accounts, > but that is not enough. Corporate culture is tough to study. Thanks in > advance for your help. Jim > -- > James W. 
Cortada > Senior Research Fellow > Charles Babbage Institute > University of Minnesota > jcortada at umn.edu > 608-274-6382 > _______________________________________________ > This email is relayed from members at sigcis.org, the email discussion > list of SHOT SIGCIS. Opinions expressed here are those of the member > posting and are not reviewed, edited, or endorsed by SIGCIS. The list > archives are at http://lists.sigcis.org/pipermail/members-sigcis.org/ and > you can change your subscription options at > http://lists.sigcis.org/listinfo.cgi/members-sigcis.org > > > > > Marc Weber | > marc at webhistory.org | +1 415 282 6868 > Internet History Program Curatorial Director, Computer History Museum > > 1401 N Shoreline Blvd., Mountain View CA 94043 > computerhistory.org/nethistory > Co-founder, Web History Center and Project, webhistory.org > > _______________________________________________ > This email is relayed from members at sigcis.org, the email discussion > list of SHOT SIGCIS. Opinions expressed here are those of the member > posting and are not reviewed, edited, or endorsed by SIGCIS. The list > archives are at http://lists.sigcis.org/pipermail/members-sigcis.org/ and > you can change your subscription options at > http://lists.sigcis.org/listinfo.cgi/members-sigcis.org > > > -- James W. Cortada Senior Research Fellow Charles Babbage Institute University of Minnesota jcortada at umn.edu 608-274-6382 -------------- next part -------------- An HTML attachment was scrubbed... URL: From mscroggins at ucla.edu Thu Jul 23 14:13:03 2020 From: mscroggins at ucla.edu (MICHAEL SCROGGINS) Date: Thu, 23 Jul 2020 14:13:03 -0700 Subject: [SIGCIS-Members] Help on Coffee and Computing In-Reply-To: References: <4E28F378-1948-45E0-A256-8C72BFD5FF22@webhistory.org> Message-ID: There might be a parallel line of inquiry within folklore. I know there has been some work on the folklore of the office and the folklore of the internet. 
I can't imagine either one lacking references to coffee. Best, Michael Scroggins On Thu, Jul 23, 2020 at 6:47 AM James Cortada wrote: > Everyone has been wonderful and generous with your thoughts and leads. It > seems you are as excited about coffee as everyone else in the computer > world. It is becoming clearer to me that there are certain "material > culture" issues that can guide us to understanding the world of computing. > Besides coffee mugs, lapel pins, postcards and all that stuff we would get > at COMDEX, for example, just opens up all kinds of avenues for the study of > computing culture. And yes, it seems every industry loved its coffee and > other trinkets. Thank you for your thoughts, I have a lot to ponder. Jim > > On Tue, Jul 21, 2020 at 1:43 PM mike at willegal.net > wrote: > >> I can relate to some of these comments. >> >> Cisco, where I have employed since 1997, used to have coolers with a >> large variety of free beverages available to all employees. I had one >> friend that said that when the free drinks went away, so would he, and he >> did leave not so long after the free drinks disappeared. >> >> I also used to fill my cup directly from the outflow from the old style >> brewing machines. >> >> At one point Encore Computer, charged employees a quarter a cup for >> coffee on the honor system. Eventually the coffee became free to >> employees, but management didn't tell us and they used the funds to sponsor >> a year end holiday party. >> >> Here is a story. More than 15 years ago, I fairly frequently travelled >> back and forth between Boston and San Jose on the "Nerd Bird." Over time, >> I had established the habit of staying on east coast time, even when out in >> California. Visiting San Jose, during the intense effort of a new hardware >> bring up, the team stayed and worked through a weekend. Sunday morning, I >> woke up, as usual, about 4 or 5 AM local time. 
Not having anything to do >> in the hotel room, I decided to go into the office and get a head start on >> the days efforts. I arrived in the large, dark and apparently empty, >> office building at something like 5 AM that Sunday morning. With the >> lights out, no one in sight, I found that the coffee machine was in the >> middle of brewing a fresh pot of coffee. It was a very eerie thing. >> Eventually, I ran across a guy from another team that had come into the >> office early that morning and needed his cup of "Joe." >> >> Eventually I had to give up on all caffeinated beverages, as I often >> didn't drink them during weekends and would then end up with a splitting >> headache on Sunday. >> >> >> -Mike Willegal >> >> >> >> >> On Jul 21, 2020, at 1:08 PM, Marc Weber wrote: >> >> Mugs are also well represented in our collection at the Computer History >> Museum... you'll get 470 hits when you search on "mug" in our online >> catalog >> . >> In fact we have essentially stopped collecting them as a result. >> We also have a Peet's Dash Button >> . >> Best, Marc >> >> Marc Weber | >> marc at webhistory.org | +1 415 282 6868 >> Curatorial Director, Internet History Program >> Computer History Museum, 1401 N Shoreline Blvd., Mountain View CA 94043 >> computerhistory.org/nethistory | Co-founder, Web History Center and >> Project >> >> On Jul 21, 2020, at 09:44, Henry E Lowood wrote: >> >> Hi Jim, >> You might want to browse through the Doug Menuez photography collection >> at Stanford. About 10,000 of the images are online (out of about >> 200-250,000). He captures quite a bit of the culture in companies like >> Apple, NeXT, Adobe, etc., mostly 1980s. I am sure you will find many >> coffee mugs there! >> Here is a link to the online exhibit created from the images in this >> collection: >> https://exhibits.stanford.edu/menuez >> Hit "browse" to see a selection of companies represented. >> Henry >> >> Henry Lowood, PhD >> Harold C. 
Hohbach Curator, History of Science & Technology Collections; >> Curator, Film & Media Collections >> HASG, Green Library, 557 Escondido Mall >> Stanford University Libraries >> Stanford CA 94305-6066 >> PH: 650-723-4602 >> EM: lowood at stanford.edu >> >> *From:* Members *On Behalf Of *Kidwell, >> Peggy >> *Sent:* Tuesday, July 21, 2020 4:09 AM >> *To:* Staiti, Alana ; James Cortada ; >> members at sigcis.org >> *Subject:* Re: [SIGCIS-Members] Help on Coffee and Computing >> >> I would add to Alana's fine list: >> >> https://americanhistory.si.edu/collections/search/object/nmah_1867086 (a >> photograph) >> >> https://americanhistory.si.edu/collections/search/object/NMAH.AC.0324_ref460 (a >> cartoon - though not much coffee shown) >> >> Best - >> >> Peggy Kidwell >> ------------------------------ >> *From:* Members on behalf of Staiti, >> Alana >> *Sent:* Monday, July 20, 2020 3:53 PM >> *To:* James Cortada ; members at sigcis.org < >> members at sigcis.org> >> *Subject:* Re: [SIGCIS-Members] Help on Coffee and Computing >> >> *External Email - Exercise Caution* >> Hi Jim, >> >> The National Museum of American History has some mugs in the computing >> collection featuring company names. Some include fun little sayings. See >> links below for a few examples. I'm not sure I can elaborate on coffee >> culture though! We are still working remotely but if you have specific >> questions about any of these or other objects I'd be happy to do whatever >> digging I can do from afar, for the time being. 
>> >> https://americanhistory.si.edu/collections/search/object/nmah_1281495 >> >> https://americanhistory.si.edu/collections/search/object/nmah_1281135 >> >> https://americanhistory.si.edu/collections/search/object/nmah_1281136 >> >> https://americanhistory.si.edu/collections/search/object/nmah_1281137 >> >> >> Be well, >> Alana >> >> *Alana Staiti* (she/her/hers) >> Curator of the History of Computers and Information Sciences >> National Museum of American History >> Smithsonian Institution >> staitia at si.edu >> >> ------------------------------ >> *From:* Members on behalf of James >> Cortada >> *Sent:* Monday, July 20, 2020 3:41 PM >> *To:* members at sigcis.org >> *Subject:* [SIGCIS-Members] Help on Coffee and Computing >> >> *External Email - Exercise Caution* >> >> The IT community of users, programmers, vendors, etc have for decades had >> a reputation for being extensive consumers of coffee. In some parts of the >> IT ecosystem, especially among those who work odd hours, such as >> programmers, computer operators, and vendor field engineers. I am studying >> the corporate ephemera of this industry and its cultural attachments, such >> as coffee cups and what they tell us about computing. Do any of you have >> any information, ephemera, or sources and citations on this specific issue >> of coffee and computing? I can get many industry folks, such as IBM >> retirees, to wax eloquently on the subject in their private FB accounts, >> but that is not enough. Corporate culture is tough to study. Thanks in >> advance for your help. Jim >> -- >> James W. Cortada >> Senior Research Fellow >> Charles Babbage Institute >> University of Minnesota >> jcortada at umn.edu >> 608-274-6382 >> _______________________________________________ >> This email is relayed from members at sigcis.org, the email discussion >> list of SHOT SIGCIS. Opinions expressed here are those of the member >> posting and are not reviewed, edited, or endorsed by SIGCIS. 
The list >> archives are at http://lists.sigcis.org/pipermail/members-sigcis.org/ and >> you can change your subscription options at >> http://lists.sigcis.org/listinfo.cgi/members-sigcis.org >> >> >> >> >> Marc Weber | >> marc at webhistory.org | +1 415 282 6868 >> Internet History Program Curatorial Director, Computer History Museum >> >> 1401 N Shoreline Blvd., Mountain View CA 94043 >> computerhistory.org/nethistory >> Co-founder, Web History Center and Project, webhistory.org >> >> _______________________________________________ >> This email is relayed from members at sigcis.org, the email discussion >> list of SHOT SIGCIS. Opinions expressed here are those of the member >> posting and are not reviewed, edited, or endorsed by SIGCIS. The list >> archives are at http://lists.sigcis.org/pipermail/members-sigcis.org/ and >> you can change your subscription options at >> http://lists.sigcis.org/listinfo.cgi/members-sigcis.org >> >> >> > > -- > James W. Cortada > Senior Research Fellow > Charles Babbage Institute > University of Minnesota > jcortada at umn.edu > 608-274-6382 > _______________________________________________ > This email is relayed from members at sigcis.org, the email discussion > list of SHOT SIGCIS. Opinions expressed here are those of the member > posting and are not reviewed, edited, or endorsed by SIGCIS. The list > archives are at http://lists.sigcis.org/pipermail/members-sigcis.org/ and > you can change your subscription options at > http://lists.sigcis.org/listinfo.cgi/members-sigcis.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From thomas.haigh at gmail.com Tue Jul 28 22:36:04 2020 From: thomas.haigh at gmail.com (thomas.haigh at gmail.com) Date: Wed, 29 Jul 2020 00:36:04 -0500 Subject: [SIGCIS-Members] Was email really already 75% of ARPANET traffic by 1973? 
Message-ID: <051801d650c2$ea8aa2c0$bf9fe840$@gmail.com> Hello SIGCIS, One of the last unsourced footnotes for the Revised History of Modern Computing holds a note to myself concerning a possibly exaggerated factoid from the "Hobbes' Internet Timeline." https://www.zakon.org/robert/internet/timeline/ According to the timeline entry for 1973: "ARPA study shows email composing 75% of all ARPANET traffic." Keep in mind that Tomlinson sent the first network mail in 1971 and mail technologies were rather immature for the first few years. If that is true it's certainly a fact worth including in the book to demonstrate the very rapid spread of email on the ARPANET. But "ARPANET study" is not something I can use to confirm the original source. I haven't been able to find anything so specific in Janet Abbate's book Inventing the Internet, though she features email prominently and agrees that its rise was both rapid and unexpected. Ian Hardy's undergraduate thesis, an early historical look at Internet email, does not include this particular figure. https://www.livinginternet.com/References/Ian%20Hardy%20Email%20Thesis.txt Craig Partridge's IEEE Annals article "Technical Development of Internet Email" didn't, on a recent skim, seem to say anything on this topic either. Does anyone know where this number might be coming from? Or have a well-sourced alternative for a slightly later year like 1975 or 76? Best wishes, Tom -------------- next part -------------- An HTML attachment was scrubbed... URL: From treese at acm.org Wed Jul 29 13:01:40 2020 From: treese at acm.org (Win Treese) Date: Wed, 29 Jul 2020 16:01:40 -0400 Subject: [SIGCIS-Members] Was email really already 75% of ARPANET traffic by 1973? In-Reply-To: <076701d6656a$26f27150$74d753f0$@gmail.com> References: <076701d6656a$26f27150$74d753f0$@gmail.com> Message-ID: <46479343-B69E-42C4-A7FC-08717BD6E279@acm.org> Hi, Tom. Stephen Lukasik's retrospective "Why the Arpanet Was Built"
(IEEE Annals of the History of Computing, July-September 2011, pp. 4-21, vol. 33 https://www.computer.org/csdl/magazine/an/2011/03/man2011030004/13rRUxly9fL) says "A 1974 Mitre study of Arpanet usage showed that about three-quarters of the traffic was email". It seems odd that Licklider and Vezza said a lot about email but didn't include that fact in their 1978 "Applications of Information Networks" paper (Proceedings of the IEEE, Vol. 66, No. 11, November 1978). They wrote (among other statements): "By the fall of 1973, the great effectiveness and convenience of such fast, informed messages services... had been discovered by almost everyone who had worked on the development of the ARPANET -- and especially by the then Director of ARPA, S.J. Lukasik, who soon had most of his office directors and program managers communicating with him and with their colleagues and their contractors via the network. Thereafter, both the number of (intercommunicating) electronic mail systems and the number of users of them on the ARPANET increased rapidly." A Gizmodo article from 2016 (https://paleofuture.gizmodo.com/the-defense-department-got-mad-at-darpa-for-creating-em-1763274070) has: BEGIN QUOTE The explosion of email was swift. In 1974, ARPA asked MITRE to study how the network was being used. They were shocked to find out that roughly 75 percent of the net packets were for email. I reached out to Steve Lukasik, former director of ARPA during the late 1960s and early 1970s, who told me about the bureaucratic hurdles that the agency faced once they had cracked email's technical problems. History books often ignore, or don't fully appreciate, the bureaucratic hurdles that must be jumped to accomplish major technological feats. Al Gore didn't invent the internet, for example, but without him the bureaucratic barriers wouldn't have been overcome to privatize it.
Email's use of 75 percent of network traffic in 1974 "had enormous bureaucratic implications that were initially worrisome," Lukasik told me. "DoD auditors slapped our wrist for violating DoD procedures. They said we had constructed a communication system, but that was the responsibility of the Defense Communication Agency." END QUOTE I couldn't find any actual details on the "MITRE study" beyond that. It's fragmentary, but perhaps helpful. Best, Win Win Treese treese at acm.org > On Jul 29, 2020, at 1:36 AM, thomas.haigh at gmail.com wrote: > > Hello SIGCIS, > > Our of the last unsourced footnotes for the Revised History of Modern Computing holds a note to myself concerning a possibly exaggerated factoid from the "Hobbes' Internet Timeline." https://www.zakon.org/robert/internet/timeline/ > > According to the timeline entry for 1973: "ARPA study shows email composing 75% of all ARPANET traffic." Keep in mind that Tomlinson sent the first network mail in 1971 and mail technologies were rather immature for the first few years. > > If that is true it's certainly a fact worth including in the book to demonstrate the very rapid spread of email on the ARPANET. But "ARPANET study" is not something I can use to confirm the original source. > > I haven't been able to find anything so specific in Janet Abbate's book Inventing the Internet though she features email prominently and agree that its rise was both rapid and unexpected. Ian Hardy's undergraduate thesis, an early historical look at Internet email, does not include this particular figure. https://www.livinginternet.com/References/Ian%20Hardy%20Email%20Thesis.txt Craig Partridge's IEEE Annals article "Technical Development of Internet Email" didn't, on a recent skim, seem to say anything on this topic either. > > Does anyone know where this number might be coming from? Or have a well-sourced alternative for slightly later year like 1975 or 76?
> > Best wishes, > > Tom > > > > > _______________________________________________ > This email is relayed from members at sigcis.org, the email discussion > list of SHOT SIGCIS. Opinions expressed here are those of the member > posting and are not reviewed, edited, or endorsed by SIGCIS. The list > archives are at http://lists.sigcis.org/pipermail/members-sigcis.org/ and > you can change your subscription options at > http://lists.sigcis.org/listinfo.cgi/members-sigcis.org From jean.graham at stonybrook.edu Wed Jul 29 13:12:10 2020 From: jean.graham at stonybrook.edu (Jean Graham) Date: Wed, 29 Jul 2020 16:12:10 -0400 Subject: [SIGCIS-Members] Was email really already 75% of ARPANET traffic by 1973? In-Reply-To: <46479343-B69E-42C4-A7FC-08717BD6E279@acm.org> References: <076701d6656a$26f27150$74d753f0$@gmail.com> <46479343-B69E-42C4-A7FC-08717BD6E279@acm.org> Message-ID: A lot of the material circulated on early networks was pre-publication scholarly literature. That would certainly take up a lot of network traffic, just by its nature. On Wed, Jul 29, 2020 at 4:01 PM Win Treese wrote: > Hi, Tom. Stephen Lukasik's retrospective "Why the Arpanet Was Built" (IEEE > Annals of the History of Computing, > July-September 2011, pp. 4-21, vol. 33 > https://www.computer.org/csdl/magazine/an/2011/03/man2011030004/13rRUxly9fL) > says "A 1974 Mitre study of Arpanet usage showed that about three-quarters > of the traffic was email". > > It seems odd that Licklider and Vezza said a lot about email but didn't > include that fact in their 1978 "Applications of Information Networks" > paper (Proceedings of the IEEE, Vol. 66, No. 11, November 1978). They > wrote (among other statements): > > "By the fall of 1973, the great effectiveness and convenience of > such fast, informed messages services... had been discovered by > almost everyone who had worked on the development of the ARPANET -- > and especially by the then Director of ARPA, S.J.
Lukasik, who soon > had most of his office directors and program managers communicating > with him and with their colleagues and their contractors via the > network. Thereafter, both the number of (intercommunicating) > electronic mail systems and the number of users of them on the > ARPANET increased rapidly." > > A Gizmodo article from 2016 ( > https://paleofuture.gizmodo.com/the-defense-department-got-mad-at-darpa-for-creating-em-1763274070) > has: > > BEGIN QUOTE > The explosion of email was swift. In 1974, ARPA asked MITRE to study how > the network was being used. They were shocked to find out that > roughly 75 percent of the net packets were for email. > > I reached out to Steve Lukasik, former director of ARPA during the late > 1960s and early 1970s, who told me about the bureaucratic hurdles that the > agency faced once they had cracked email's technical problems. History > books often ignore, or don't fully appreciate, the bureaucratic hurdles > that must be jumped to accomplish major technological feats. Al Gore didn't > invent the internet, for example, but without him the bureaucratic barriers > wouldn't have been overcome to privatize it. > > Email's use of 75 percent of network traffic in 1974 "had enormous > bureaucratic implications that were initially worrisome," Lukasik told me. > "DoD auditors slapped our wrist for violating DoD procedures. They said we > had constructed a communication system, but that was the responsibility of > the Defense Communication Agency." > END QUOTE > > I couldn't find any actual details on the "MITRE study" beyond that. It's > fragmentary, but perhaps helpful. > > Best, > > Win > > Win Treese > treese at acm.org > > > > On Jul 29, 2020, at 1:36 AM, thomas.haigh at gmail.com wrote: > > > > Hello SIGCIS, > > > > Our of the last unsourced footnotes for the Revised History of Modern > Computing holds a note to myself concerning a possibly exaggerated factoid > from the "Hobbes' Internet Timeline."
> https://www.zakon.org/robert/internet/timeline/ > > > > According to the timeline entry for 1973: "ARPA study shows email > composing 75% of all ARPANET traffic." Keep in mind that Tomlinson sent the > first network mail in 1971 and mail technologies were rather immature for > the first few years. > > > > If that is true it's certainly a fact worth including in the book to > demonstrate the very rapid spread of email on the ARPANET. But "ARPANET > study" is not something I can use to confirm the original source. > > > > I haven't been able to find anything so specific in Janet Abbate's book > Inventing the Internet though she features email prominently and agree that > its rise was both rapid and unexpected. Ian Hardy's undergraduate thesis, > an early historical look at Internet email, does not include this > particular figure. > https://www.livinginternet.com/References/Ian%20Hardy%20Email%20Thesis.txt > Craig Partridge's IEEE Annals article "Technical Development of Internet > Email" didn't, on a recent skim, seem to say anything on this topic either. > > > > Does anyone know where this number might be coming from? Or have a > well-sourced alternative for slightly later year like 1975 or 76? > > > > Best wishes, > > > > Tom > > > > > > > > > > _______________________________________________ > > This email is relayed from members at sigcis.org, the email discussion > list of SHOT SIGCIS. Opinions expressed here are those of the member > posting and are not reviewed, edited, or endorsed by SIGCIS. The list > archives are at http://lists.sigcis.org/pipermail/members-sigcis.org/ and > you can change your subscription options at > http://lists.sigcis.org/listinfo.cgi/members-sigcis.org > > _______________________________________________ > This email is relayed from members at sigcis.org, the email discussion > list of SHOT SIGCIS. Opinions expressed here are those of the member > posting and are not reviewed, edited, or endorsed by SIGCIS.
The list > archives are at http://lists.sigcis.org/pipermail/members-sigcis.org/ and > you can change your subscription options at > http://lists.sigcis.org/listinfo.cgi/members-sigcis.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From thomas.haigh at gmail.com Wed Jul 29 13:22:44 2020 From: thomas.haigh at gmail.com (thomas.haigh at gmail.com) Date: Wed, 29 Jul 2020 15:22:44 -0500 Subject: [SIGCIS-Members] Was email really already 75% of ARPANET traffic by 1973? In-Reply-To: <46479343-B69E-42C4-A7FC-08717BD6E279@acm.org> References: <076701d6656a$26f27150$74d753f0$@gmail.com> <46479343-B69E-42C4-A7FC-08717BD6E279@acm.org> Message-ID: <002601d665e6$04e97320$0ebc5960$@gmail.com> Great! So that gets us closer to an actual source for the claim, and also moves the year. Thanks to Dave Walden I've received some off-list input from Dave Crocker, Vint Cerf and Dan Lynch. They think that email would have been a majority of bytes sent over the network by the end of 1973 (though possibly not a majority of packets, as telnet would produce a lot of largely empty packets). But they don't recall an actual study. The figure is also in Wikipedia, without a real source, so I'm acutely aware that it's one of those factoids that historians and journalists will find somewhere like Wikipedia or the Hobbes Timeline and then copy, thus creating a "reliable source" that Wikipedia can then cite to support the information. It's like a time paradox. Tom -----Original Message----- From: Win Treese Sent: Wednesday, July 29, 2020 3:02 PM To: thomas.haigh at gmail.com Cc: members Subject: Re: [SIGCIS-Members] Was email really already 75% of ARPANET traffic by 1973? Hi, Tom. Stephen Lukasik's retrospective "Why the Arpanet Was Built" (IEEE Annals of the History of Computing, July-September 2011, pp. 4-21, vol.
33 https://www.computer.org/csdl/magazine/an/2011/03/man2011030004/13rRUxly9fL) says "A 1974 Mitre study of Arpanet usage showed that about three-quarters of the traffic was email". It seems odd that Licklider and Vezza said a lot about email but didn't include that fact in their 1978 "Applications of Information Networks" paper (Proceedings of the IEEE, Vol. 66, No. 11, November 1978). They wrote (among other statements): "By the fall of 1973, the great effectiveness and convenience of such fast, informed messages services... had been discovered by almost everyone who had worked on the development of the ARPANET -- and especially by the then Director of ARPA, S.J. Lukasik, who soon had most of his office directors and program managers communicating with him and with their colleagues and their contractors via the network. Thereafter, both the number of (intercommunicating) electronic mail systems and the number of users of them on the ARPANET increased rapidly." A Gizmodo article from 2016 (https://paleofuture.gizmodo.com/the-defense-department-got-mad-at-darpa-for-creating-em-1763274070) has: BEGIN QUOTE The explosion of email was swift. In 1974, ARPA asked MITRE to study how the network was being used. They were shocked to find out that roughly 75 percent of the net packets were for email. I reached out to Steve Lukasik, former director of ARPA during the late 1960s and early 1970s, who told me about the bureaucratic hurdles that the agency faced once they had cracked email's technical problems. History books often ignore, or don't fully appreciate, the bureaucratic hurdles that must be jumped to accomplish major technological feats. Al Gore didn't invent the internet, for example, but without him the bureaucratic barriers wouldn't have been overcome to privatize it. Email's use of 75 percent of network traffic in 1974 "had enormous bureaucratic implications that were initially worrisome," Lukasik told me.
"DoD auditors slapped our wrist for violating DoD procedures. They said we had constructed a communication system, but that was the responsibility of the Defense Communication Agency." END QUOTE I couldn't find any actual details on the "MITRE study" beyond that. It's fragmentary, but perhaps helpful. Best, Win Win Treese treese at acm.org > On Jul 29, 2020, at 1:36 AM, thomas.haigh at gmail.com wrote: > > Hello SIGCIS, > > Our of the last unsourced footnotes for the Revised History of Modern > Computing holds a note to myself concerning a possibly exaggerated factoid > from the "Hobbes' Internet Timeline." https://www.zakon.org/robert/internet/timeline/ > > According to the timeline entry for 1973: "ARPA study shows email > composing 75% of all ARPANET traffic." Keep in mind that Tomlinson sent the > first network mail in 1971 and mail technologies were rather immature for the first few years. > > If that is true it's certainly a fact worth including in the book to > demonstrate the very rapid spread of email on the ARPANET. But "ARPANET > study" is not something I can use to confirm the original source. > > I haven't been able to find anything so specific in Janet Abbate's book > Inventing the Internet though she features email prominently and agree that > its rise was both rapid and unexpected. Ian Hardy's undergraduate thesis, > an early historical look at Internet email, does not include this > particular figure. > https://www.livinginternet.com/References/Ian%20Hardy%20Email%20Thesis.txt > Craig Partridge's IEEE Annals article "Technical Development of Internet > Email" didn't, on a recent skim, seem to say anything on this topic either. > > Does anyone know where this number might be coming from? Or have a > well-sourced alternative for slightly later year like 1975 or 76? > > Best wishes, > > Tom > > > > > _______________________________________________ > This email is relayed from members at sigcis.org, the email discussion > list of SHOT SIGCIS.
Opinions expressed here are those of the member > posting and are not reviewed, edited, or endorsed by SIGCIS. The list > archives are at http://lists.sigcis.org/pipermail/members-sigcis.org/ > and you can change your subscription options at > http://lists.sigcis.org/listinfo.cgi/members-sigcis.org From treese at acm.org Wed Jul 29 14:18:37 2020 From: treese at acm.org (Win Treese) Date: Wed, 29 Jul 2020 17:18:37 -0400 Subject: [SIGCIS-Members] Was email really already 75% of ARPANET traffic by 1973? In-Reply-To: <002601d665e6$04e97320$0ebc5960$@gmail.com> References: <076701d6656a$26f27150$74d753f0$@gmail.com> <46479343-B69E-42C4-A7FC-08717BD6E279@acm.org> <002601d665e6$04e97320$0ebc5960$@gmail.com> Message-ID: <6B93EECF-4ED6-4079-A0F8-47C75CA48AF2@acm.org> > On Jul 29, 2020, at 4:22 PM, wrote: > [...] > > The figure is also in Wikipedia, without a real source, so I'm acutely aware that it's one of those factoids that historians and journalists will find somewhere like Wikipedia or the Hobbes Timeline and then copy, thus creating a "reliable source" that Wikipedia can then cite to support the information. It's like a time paradox. Of course, there's an XKCD for that: https://xkcd.com/978/. - Win From alexandre.hocquet at univ-lorraine.fr Wed Jul 29 15:54:24 2020 From: alexandre.hocquet at univ-lorraine.fr (Alexandre Hocquet) Date: Thu, 30 Jul 2020 00:54:24 +0200 Subject: [SIGCIS-Members] Was email really already 75% of ARPANET traffic by 1973? In-Reply-To: <6B93EECF-4ED6-4079-A0F8-47C75CA48AF2@acm.org> References: <076701d6656a$26f27150$74d753f0$@gmail.com> <46479343-B69E-42C4-A7FC-08717BD6E279@acm.org> <002601d665e6$04e97320$0ebc5960$@gmail.com> <6B93EECF-4ED6-4079-A0F8-47C75CA48AF2@acm.org> Message-ID: On 7/29/20 11:18 PM, Win Treese wrote: > > >> On Jul 29, 2020, at 4:22 PM, wrote: >> [...]
>> >> The figure is also in Wikipedia, without a real source, so I'm acutely aware that it's one of those factoids that historians and journalists will find somewhere like Wikipedia or the Hobbes Timeline and then copy, thus creating a "reliable source" that Wikipedia can then cite to support the information. It's like a time paradox. > > Of course, there's an XKCD for that: https://xkcd.com/978/. If we know which Wikipedia article exactly, the best thing to do would be to add a [[citation needed]] flag, or even better, to link to this SIGCIS thread in the article talk page with a short explanation. If you point me to the article, I can do that as a contribution to restore XKCD's faith in Wikipedia :) -- *********************************************** Alexandre Hocquet Archives Henri Poincaré & Science History Institute Alexandre.Hocquet at univ-lorraine.fr https://www.sciencehistory.org/profile/alexandre-hocquet https://poincare.univ-lorraine.fr/fr/membre-titulaire/alexandre-hocquet *********************************************** From thomas.haigh at gmail.com Wed Jul 29 16:24:10 2020 From: thomas.haigh at gmail.com (thomas.haigh at gmail.com) Date: Wed, 29 Jul 2020 18:24:10 -0500 Subject: [SIGCIS-Members] Was email really already 75% of ARPANET traffic by 1973? In-Reply-To: References: <076701d6656a$26f27150$74d753f0$@gmail.com> <46479343-B69E-42C4-A7FC-08717BD6E279@acm.org> <002601d665e6$04e97320$0ebc5960$@gmail.com> <6B93EECF-4ED6-4079-A0F8-47C75CA48AF2@acm.org> Message-ID: <004001d665ff$5da664a0$18f32de0$@gmail.com> https://en.wikipedia.org/wiki/ARPANET: "In 1971, Ray Tomlinson, of BBN sent the first network e-mail (RFC 524, RFC 561). By 1973, e-mail constituted 75% of the ARPANET traffic.[9][84]" There are two citations, so a straightforward "citation needed" flag might not convince people. However, one is to a Ray Tomlinson page that does not include the figure. The other is to a book chapter by communications scholar Leah A.
Lievrouw, but the page pointed to by the link doesn't seem to include the figure either. (That page does cite the Ceruzzi history we're working to revise). So the two references go with the first sentence of the short paragraph, not the second, which is unsourced. A better general citation for the creation and rapid adoption of Internet email would actually be pages 106-111 of Abbate. Tom -----Original Message----- From: Members On Behalf Of Alexandre Hocquet Sent: Wednesday, July 29, 2020 5:54 PM To: members at lists.sigcis.org Subject: Re: [SIGCIS-Members] Was email really already 75% of ARPANET traffic by 1973? On 7/29/20 11:18 PM, Win Treese wrote: > > >> On Jul 29, 2020, at 4:22 PM, wrote: >> [...] >> >> The figure is also in Wikipedia, without a real source, so I'm acutely aware that it's one of those factoids that historians and journalists will find somewhere like Wikipedia or the Hobbes Timeline and then copy, thus creating a "reliable source" that Wikipedia can then cite to support the information. It's like a time paradox. > > Of course, there's an XKCD for that: https://xkcd.com/978/. If we know which Wikipedia article exactly, the best thing to do would be to add a [[citation needed]] flag, or even better, to link to this SIGCIS thread in the article talk page with a short explanation. If you point me to the article, I can do that as a contribution to restore XKCD's faith in Wikipedia :) -- *********************************************** Alexandre Hocquet Archives Henri Poincaré & Science History Institute Alexandre.Hocquet at univ-lorraine.fr https://www.sciencehistory.org/profile/alexandre-hocquet https://poincare.univ-lorraine.fr/fr/membre-titulaire/alexandre-hocquet *********************************************** _______________________________________________ This email is relayed from members at sigcis.org, the email discussion list of SHOT SIGCIS.
Opinions expressed here are those of the member posting and are not reviewed, edited, or endorsed by SIGCIS. The list archives are at http://lists.sigcis.org/pipermail/members-sigcis.org/ and you can change your subscription options at http://lists.sigcis.org/listinfo.cgi/members-sigcis.org From alexandre.hocquet at univ-lorraine.fr Wed Jul 29 17:05:59 2020 From: alexandre.hocquet at univ-lorraine.fr (Alexandre Hocquet) Date: Thu, 30 Jul 2020 02:05:59 +0200 Subject: [SIGCIS-Members] Was email really already 75% of ARPANET traffic by 1973? In-Reply-To: <004001d665ff$5da664a0$18f32de0$@gmail.com> References: <076701d6656a$26f27150$74d753f0$@gmail.com> <46479343-B69E-42C4-A7FC-08717BD6E279@acm.org> <002601d665e6$04e97320$0ebc5960$@gmail.com> <6B93EECF-4ED6-4079-A0F8-47C75CA48AF2@acm.org> <004001d665ff$5da664a0$18f32de0$@gmail.com> Message-ID: <4c94e02b-0f33-0860-fb56-1de460cb8059@univ-lorraine.fr> On 7/30/20 1:24 AM, thomas.haigh at gmail.com wrote: > There are two citations, so straight a "citation needed" flag might not convince people. Thanks Tom, you are right, that would be unconvincing. I have now modified it to a more neutral formulation. I have also explained the changes in the talk page: https://en.wikipedia.org/wiki/Talk:ARPANET#%22By_1973,_e-mail_constituted_75%_of_the_ARPANET_traffic%22_as_a_dubious_claim If it's erroneous, or could be better phrased, don't hesitate to tell me or change it. -- *********************************************** Alexandre Hocquet Archives Henri Poincaré
& Science History Institute Alexandre.Hocquet at univ-lorraine.fr https://www.sciencehistory.org/profile/alexandre-hocquet https://poincare.univ-lorraine.fr/fr/membre-titulaire/alexandre-hocquet *********************************************** From brian.randell at newcastle.ac.uk Thu Jul 30 03:38:20 2020 From: brian.randell at newcastle.ac.uk (Brian Randell) Date: Thu, 30 Jul 2020 10:38:20 +0000 Subject: [SIGCIS-Members] Obscure Edwardian science and engineering magazines, and Percy Ludgate Message-ID: Hi: For the past several years I have been involved in an extensive project led by Dr Brian Coghlan of Trinity College Dublin that is researching the life and work of Percy Ludgate. The paper that Ludgate published in the Scientific Record of the Royal Dublin Society in 1909 is notable for its description of his design for what we would now term a mechanical programmable digital computer, the first after Babbage's "Analytical Engine" of over sixty years earlier. In this paper Ludgate said he had made "many drawings of the machine and its parts", but neither this paper nor the one subsequent article he is known to have published (about Babbage's machine, in the 1914 Napier Tercentenary Celebration Handbook) contained any illustrations. We have recently determined that Ludgate's 1909 paper attracted notice at the time (beyond its review by C.V. Boys in the journal "Nature"), in that it led promptly to articles in two popular engineering magazines. These were "Engineering" (a London-based monthly magazine founded in 1865 that is still going strongly) and "The English Mechanic and World of Science" (which appeared weekly from 1865 to 1926). Both of these articles included a drawing - presumably by Ludgate - of his "Irish logarithm" multiplication mechanism! Neither of these articles was at all easily found.
I'd be grateful for any suggestions as to other obscure magazines and journals, or other sources, not necessarily from the UK, whose contents have so far escaped Google's attention, which might include contemporaneous commentary on Ludgate's paper and machine or his involvement with the Napier Tercentenary Celebration - and also for help in searching their contents. Brian Randell - School of Computing, Newcastle University, 1 Science Square, Newcastle upon Tyne, NE4 5TG EMAIL = Brian.Randell at ncl.ac.uk PHONE = +44 191 208 7923 URL = http://www.ncl.ac.uk/computing/people/profile/brianrandell.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From thomas.haigh at gmail.com Fri Jul 31 13:05:52 2020 From: thomas.haigh at gmail.com (thomas.haigh at gmail.com) Date: Fri, 31 Jul 2020 15:05:52 -0500 Subject: [SIGCIS-Members] eBay is selling an Apple 1 for $1.5 million Message-ID: <01f701d66775$fe5dc100$fb194300$@gmail.com> https://www.ebay.com/i/174195921349. There is at least a "Make offer" button. I have to say that this is more than a little unhinged, possibly a further sign (as if one were needed) of the approach of the end times. Though I did recently pay $250 for a working Apple IIe with disk drives and monitor. Simple mathematics suggests for a Bezos, Musk or Zuckerberg this would be a very much smaller purchase relative to net worth. Full description at http://vi.raptor.ebaydesc.com/ws/eBayISAPI.dll?ViewItemDescV4&item=174195921349&category=162075&pm=1&ds=0&t=1582079090000&ver=0 Also an entry in the Apple 1 registry (which of course): https://www.apple1registry.com/en/79.html. Best wishes, Tom -------------- next part -------------- An HTML attachment was scrubbed...
URL: From mike at willegal.net Fri Jul 31 15:24:34 2020 From: mike at willegal.net (mike at willegal.net) Date: Fri, 31 Jul 2020 18:24:34 -0400 Subject: [SIGCIS-Members] eBay is selling an Apple 1 for $1.5 million In-Reply-To: <01f701d66775$fe5dc100$fb194300$@gmail.com> References: <01f701d66775$fe5dc100$fb194300$@gmail.com> Message-ID: <3D1FB17A-8D3E-473C-A429-58A5A1278660@willegal.net> I talked to Krishna a few years ago. I don't think he is unhinged, but I can't imagine any Apple 1 fetching that price, even though it appears to be one of the nicer survivors. It's funny, when I first became interested in Apple 1s, condition mattered little, but now the market has evolved to the point where condition seems to matter. Note that the Henry Ford Museum paid around 1 million dollars for an Apple 1 several years ago, though that price hasn't been approached since. Regards, Mike Willegal > On Jul 31, 2020, at 4:05 PM, wrote: > > https://www.ebay.com/i/174195921349. There is at least a "Make offer" button. > > I have to say that this is more than a little unhinged, possibly a further sign (as if one were needed) of the approach of the end times. Though I did recently pay $250 for a working Apple IIe with disk drives and monitor. Simple mathematics suggests for a Bezos, Musk or Zuckerberg this would be a very much smaller purchase relative to net worth. > > Full description at http://vi.raptor.ebaydesc.com/ws/eBayISAPI.dll?ViewItemDescV4&item=174195921349&category=162075&pm=1&ds=0&t=1582079090000&ver=0 > > Also an entry in the Apple 1 registry (which of course): https://www.apple1registry.com/en/79.html. > > Best wishes, > > Tom > _______________________________________________ > This email is relayed from members at sigcis.org, the email discussion list of SHOT SIGCIS. Opinions expressed here are those of the member posting and are not reviewed, edited, or endorsed by SIGCIS.
The list archives are at http://lists.sigcis.org/pipermail/members-sigcis.org/ and you can change your subscription options at http://lists.sigcis.org/listinfo.cgi/members-sigcis.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From ddouglas at mit.edu Fri Jul 31 16:13:21 2020 From: ddouglas at mit.edu (Deborah Douglas) Date: Fri, 31 Jul 2020 23:13:21 +0000 Subject: [SIGCIS-Members] eBay is selling an Apple 1 for $1.5 million In-Reply-To: <3D1FB17A-8D3E-473C-A429-58A5A1278660@willegal.net> References: <01f701d66775$fe5dc100$fb194300$@gmail.com> <3D1FB17A-8D3E-473C-A429-58A5A1278660@willegal.net> Message-ID: For those who are curious, here are some of the prices paid for Apple 1 computers in the past 6 years. 2014: $910,000 (Charity auction) https://www.cultofmac.com/498888/apple-history-celebration-apple-1-auction/ 2016: $815,000 (Charity auction) https://www.cultofmac.com/498888/apple-history-celebration-apple-1-auction/ 2018: $375,000 https://www.cnet.com/news/rare-apple-1-sells-at-auction-for-over-500-times-original-price/ 2019: $470,000 https://www.cnbc.com/2019/05/28/wozniak-built-apple-1-computer-sold-for-almost-500000-at-christies.html 2020: $458,711.25 https://appleinsider.com/articles/20/03/13/rare-functional-apple-1-computer-sold-at-auction-for-458711 Debbie Douglas On Jul 31, 2020, at 6:24 PM, mike at willegal.net wrote: I talked to Krishna a few years ago. I don't think he is unhinged, but I can't imagine any Apple 1 fetching that price, even though it appears to be one of the nicer survivors. It's funny, when I first became interested in Apple 1s, condition mattered little, but now the market has evolved to the point where condition seems to matter. Note that the Henry Ford Museum paid around 1 million dollars for an Apple 1 several years ago, though that price hasn't been approached since. Regards, Mike Willegal On Jul 31, 2020, at 4:05 PM, > > wrote: https://www.ebay.com/i/174195921349.
There is at least a "Make offer" button.

I have to say that this is more than a little unhinged, possibly a further sign (as if one were needed) of the approach of the end times. Though I did recently pay $250 for a working Apple IIe with disk drives and monitor. Simple mathematics suggests that for a Bezos, Musk, or Zuckerberg this would be a very much smaller purchase relative to net worth.

Full description at http://vi.raptor.ebaydesc.com/ws/eBayISAPI.dll?ViewItemDescV4&item=174195921349&category=162075&pm=1&ds=0&t=1582079090000&ver=0

Also an entry in the Apple 1 registry (which of course it has): https://www.apple1registry.com/en/79.html.

Best wishes,

Tom
_______________________________________________
This email is relayed from members at sigcis.org, the email discussion list of SHOT SIGCIS. Opinions expressed here are those of the member posting and are not reviewed, edited, or endorsed by SIGCIS. The list archives are at http://lists.sigcis.org/pipermail/members-sigcis.org/ and you can change your subscription options at http://lists.sigcis.org/listinfo.cgi/members-sigcis.org

_______________________________________________
This email is relayed from members at sigcis.org, the email discussion list of SHOT SIGCIS. Opinions expressed here are those of the member posting and are not reviewed, edited, or endorsed by SIGCIS. The list archives are at http://lists.sigcis.org/pipermail/members-sigcis.org/ and you can change your subscription options at http://lists.sigcis.org/listinfo.cgi/members-sigcis.org

Deborah G. Douglas, PhD • Director of Collections and Curator of Science and Technology, MIT Museum; Research Associate, Program in Science, Technology, and Society • Room N51-209 • 265 Massachusetts Avenue • Cambridge, MA 02139-4307 • ddouglas at mit.edu • 617-253-1766 telephone • 617-253-8994 facsimile • http://mitmuseum.mit.edu • she/her/hers

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From lizaloop at loopcenter.org Fri Jul 31 21:33:48 2020
From: lizaloop at loopcenter.org (LO*OP CENTER, INC.)
Date: Fri, 31 Jul 2020 21:33:48 -0700
Subject: [SIGCIS-Members] eBay is selling an Apple 1 for $1.5 million
In-Reply-To:
References: <01f701d66775$fe5dc100$fb194300$@gmail.com> <3D1FB17A-8D3E-473C-A429-58A5A1278660@willegal.net>
Message-ID:

I feel like I ought to say something in response to this thread but I'm not sure what. Do you-all think the first Apple 1 should be worth more than the others? Pricing collectables is sooooo difficult.

Cheers,
Liza

On Fri, Jul 31, 2020 at 4:13 PM Deborah Douglas wrote:
>
> For those who are curious here are some of the prices paid for Apple 1 computers in the past 6 years.
>
> 2014: $910,000 (Charity auction) https://www.cultofmac.com/498888/apple-history-celebration-apple-1-auction/
> 2016: $815,000 (Charity auction) https://www.cultofmac.com/498888/apple-history-celebration-apple-1-auction/
> 2018: $375,000 https://www.cnet.com/news/rare-apple-1-sells-at-auction-for-over-500-times-original-price/
> 2019: $470,000 https://www.cnbc.com/2019/05/28/wozniak-built-apple-1-computer-sold-for-almost-500000-at-christies.html
> 2020: $458,711.25 https://appleinsider.com/articles/20/03/13/rare-functional-apple-1-computer-sold-at-auction-for-458711
>
> Debbie Douglas
>
> On Jul 31, 2020, at 6:24 PM, mike at willegal.net wrote:
>
> I talked to Krishna a few years ago. I don't think he is unhinged, but I can't imagine any Apple 1 fetching that price, even though it appears to be one of the nicer survivors. It's funny, when I first became interested in Apple 1s, condition mattered little, but now the market has evolved to the point where condition seems to matter. Note that the Henry Ford Museum paid around 1 million dollars for an Apple 1 several years ago, though that price hasn't been approached since.
>
> Regards,
> Mike Willegal
>
> On Jul 31, 2020, at 4:05 PM, <thomas.haigh at gmail.com> wrote:
>
> https://www.ebay.com/i/174195921349. There is at least a "Make offer" button.
>
> I have to say that this is more than a little unhinged, possibly a further sign (as if one were needed) of the approach of the end times. Though I did recently pay $250 for a working Apple IIe with disk drives and monitor. Simple mathematics suggests that for a Bezos, Musk, or Zuckerberg this would be a very much smaller purchase relative to net worth.
>
> Full description at http://vi.raptor.ebaydesc.com/ws/eBayISAPI.dll?ViewItemDescV4&item=174195921349&category=162075&pm=1&ds=0&t=1582079090000&ver=0
>
> Also an entry in the Apple 1 registry (which of course it has): https://www.apple1registry.com/en/79.html.
>
> Best wishes,
>
> Tom
> _______________________________________________
> This email is relayed from members at sigcis.org, the email discussion list of SHOT SIGCIS. Opinions expressed here are those of the member posting and are not reviewed, edited, or endorsed by SIGCIS. The list archives are at http://lists.sigcis.org/pipermail/members-sigcis.org/ and you can change your subscription options at http://lists.sigcis.org/listinfo.cgi/members-sigcis.org
>
> _______________________________________________
> This email is relayed from members at sigcis.org, the email discussion list of SHOT SIGCIS. Opinions expressed here are those of the member posting and are not reviewed, edited, or endorsed by SIGCIS. The list archives are at http://lists.sigcis.org/pipermail/members-sigcis.org/ and you can change your subscription options at http://lists.sigcis.org/listinfo.cgi/members-sigcis.org
>
> Deborah G. Douglas, PhD • Director of Collections and Curator of Science and Technology, MIT Museum; Research Associate, Program in Science, Technology, and Society • Room N51-209 • 265 Massachusetts Avenue • Cambridge, MA 02139-4307 • ddouglas at mit.edu • 617-253-1766 telephone • 617-253-8994 facsimile • http://mitmuseum.mit.edu • she/her/hers
>
> _______________________________________________
> This email is relayed from members at sigcis.org, the email discussion list of SHOT SIGCIS. Opinions expressed here are those of the member posting and are not reviewed, edited, or endorsed by SIGCIS. The list archives are at http://lists.sigcis.org/pipermail/members-sigcis.org/ and you can change your subscription options at http://lists.sigcis.org/listinfo.cgi/members-sigcis.org

--
Liza Loop
Executive Director, LO*OP Center, Inc.
Guerneville, CA 95446
www.loopcenter.org
650 619 1099 (between 8 am and 10 pm Pacific time only please)

-------------- next part --------------
An HTML attachment was scrubbed...
URL: