[SIGCIS-Members] Does a counterfactual history of the PC still lead us to the same place?

Thomas Haigh thomas.haigh at gmail.com
Fri Apr 28 08:14:55 PDT 2017


Replying more generally to the question of counterfactuals.

 

Thanks to Andy, Evan, and others who replied on this. I will have a look at
the papers they mention. 

 

In one of my more obscure publications, "Technology's Other Storytellers:
Science Fiction as History of Technology," I made the argument that SF
writers and historians of technology are both in the business of producing
historical narratives in which technology plays a role in changing the
course of historical development. The difference is that historians tell a
story of what "actually" happened and usually stop their narratives before
the time the book is finished, whereas SF writers trace a different chain of
events and often follow that chain after the time the book is finished
(though by the time the book is read the point of departure from our actual
history is inevitably in the past - making SF a form of historical fiction).
http://www.tomandmaria.com/Tom/Writing/TechnologysOtherStorytellersDRAFT.pdf

 

I go on to argue that the counterfactual alternative, while usually implicit
in the history of technology, is essential to making the historical writing
analytical rather than just (as those from outside the field sometimes see
it) a list of things that happened arranged end to end. Historical causes
are always overdetermined, and there are always too many details to put in a
narrative. By picking details and highlighting causes the historian is
suggesting that, had those details been different or those particular causes
lacking, events would have moved in a different direction. 

 

It would be fascinating to see more discussion of counterfactual history of
computing. At the risk of being branded a technological determinist, my
sense is that there was a lot less flexibility with respect to end points
than with respect to the paths to get to them. Particular combinations of
technology and business model make sense, and the question is more which
firms and platforms can discover the winning model first and execute it
best.

 

For example, by the early 1980s the vision of a universal network that would
allow email, home shopping, telecommuting, etc. was well established. Over
the next decade efforts like OSI, viewdata services, AOL and other online
services, smart set-top boxes, etc. were funded to try to deliver on it. The
contingent part is that it was the Internet/Web combo that blossomed to fill
that need, meaning that features such as micropayments for online publishers
were missing and email wasn't authenticated to a sender, but, on the positive
side, the network was open to different services and users. Those details
all had profound influences on how online practices and businesses
developed, e.g. the destruction of the newspaper industry.

 

With respect to the evolution of personal computing, it's hard for me to
imagine the big picture going particularly differently post-1980 if one
assumes progress in hardware continuing at roughly the same pace. In the
first half of the 1980s GUIs and advanced operating systems overtaxed
available hardware, meaning that only expensive but still sluggish
workstations could support them. Those machines could be justified only by
niche high-value applications like engineering and publishing, where the
graphical capabilities were crucial.

 

So, as some responses suggested, it was probably inevitable that the mass
market platform of the mid-1980s would be a simple and efficient text-based
OS that didn't impose too much overhead between applications and hardware.
Hansen mentioned the Apple IIGS - perhaps the real counterfactual is one in
which Apple had introduced it in 1983 instead of the Lisa, not crippled its
performance, and then evolved it through multiple generations of compatible
hardware.

 

By the late 1980s hardware was getting cheaper and faster, GUIs were
imposing less of a performance penalty, and mass market machines could hold
enough RAM to make multitasking practical. The direction of the future was
clear, but the path there over the next decade wasn't. Remember that NT ran
on MIPS, Alpha, and PowerPC as well as Intel. OS/2 ran Windows 3.x apps and
also supported multiple platforms. Apple, IBM, and others were supporting a
common hardware platform and were supposed to produce Taligent as a common
OS. Apple allowed clones, then banned them. There were efforts to
standardize Unix. Linux made a push for the desktop. MacOS faltered badly in
the late 1990s. Windows 9x was startlingly successful at bridging users from
the DOS-based 16-bit Windows 3.x to the industrial-grade Windows NT
underpinnings of Windows 2000 and XP. Intel fell way behind RISC performance
but rallied with the later Pentiums. So if you go back and read the play-by-plays
in the industry literature, it was a time of incredible uncertainty and
rapid change in which all kinds of grand plans and alliances fell apart with
amazing speed and key products shipped years late.

 

The other question, as mentioned earlier on the list, was open vs. closed
business models for PC producers. Compared with the PC makers of 1990, Apple
had its own custom floppy drives, hardware, processors, OS, keyboard
connectors, etc., and users had only one place to buy Macs. PC purchasers
could choose from thousands of suppliers, who were just buying components from
thousands of Asian producers and screwing them into a case. In an era of
rapid change the PC ecosystem drove down costs and encouraged rapid
innovation in components such as graphics cards. This almost drove Apple out
of business and, after Jobs returned, caused it to retreat to a niche of
high-priced, good-looking premium products.

 

On the other hand, let's step back to look at where things settled down.
Apple gave up on having a hardware platform of its own, bit by bit,
finishing up with the processor shift to Intel in 2006. On the PC side,
Intel integrated everything that was on half a dozen expansion cards into a
couple of chips, meaning that PCs are pretty much standard. The shift to
laptops and all-in-one designs means that PCs and Macs are both basically
closed internally but can be expanded externally. Both platforms have
stable, 64-bit operating systems with equivalent features. Both OSes are,
technologically, a complete departure from 1980s PC systems - Windows being
NT, which is based on Cutler's minicomputer projects, and OS X being (as
discussed by Hansen et al.) NeXTSTEP, which in turn is based on BSD Unix. So
the two models have converged in the same place - I liked the BMW vs. Ford
remark attributed to Jobs in an earlier post, but while BMW is more
profitable than Ford (particularly on a per-vehicle basis), either can sell
you a fast, comfortable vehicle that will functionally do everything you
need. The differences are design, branding, and social cachet.

 

Here's what I mean about the convergence of end points: if the 1990s had
gone completely differently and Linux had wiped out Windows on the desktop,
or everyone was running OS/2 on Alpha, or Taligent on PowerPC had swept all
before it, by about 2005 there probably wouldn't be very much difference.
People would still have rock-solid, modern operating systems running on cheap,
fast hardware that was shrinking into laptops or all-in-one systems.

 

So I can definitely sell a narrative where a different platform dominates
early-2000s personal computing. I just tend to feel that it would be running
on commodity hardware, be integrated into stylish compact boxes, and do
basically what Windows XP and OS X do. I suppose there are some significant
differences, for example whether Microsoft makes money on each PC, but that
matters more to Microsoft than to users. I can see plenty of paths not
taken, but they all seem to lead to the same place. Is this a failure of my
imagination? 

 

Tom

 

 
