[SIGCIS-Members] On the Merits of ChatGPT and the like as Historian's Assistants

herbert.bruderer at bluewin.ch herbert.bruderer at bluewin.ch
Thu Jul 20 15:03:46 PDT 2023


 
Might be of interest to you:

Is Bard Better than ChatGPT? | blog at CACM | Communications of the ACM

Best wishes,
Herbert
----Original Message----
From: members at lists.sigcis.org
Date: 18/07/2023 - 22:12 (MS)
To: brian.randell at newcastle.ac.uk
Cc: members at lists.sigcis.org
Subject: Re: [SIGCIS-Members] On the Merits of ChatGPT and the like as Historian's Assistants
 
 
 
I found Donald Knuth’s reflections on this topic informative:

https://www-cs-faculty.stanford.edu/~knuth/chatGPT20.txt

My two cents:

The capacity for hallucination alongside what appears to be appropriate identification of context or nuance casts a veil of fabulism over all the output, which (IMO) makes models like these unreliable in ways that even Procopius was not. Given that problem, a model which more often returns correct information might well be considered *less* trustworthy overall as an assistant, since its occasional fabrications are that much harder to spot and easier to let through unchecked.

On Tue, Jul 18, 2023 at 12:49 PM Brian Randell via Members <members at lists.sigcis.org> wrote:

Hi:
      
 
      
Brian Coghlan and I have been pleasantly surprised by the interest aroused by, and the positive comments we have received on, the little paper "ChatGPT’s Astonishing Fabrications About Percy Ludgate" that we published in the April-June 2023 issue of the IEEE Annals of the History of Computing. The study [1] that we reported on was a temporary diversion from our main interests, but we are curious to know whether the startlingly high level of fabrications (so-called "hallucinations") in the answers we obtained from ChatGPT is typical of its performance when used as a historian's assistant. It would be great if someone with appropriate skills and resources (research students!) could organise and carry out a serious, statistically valid evaluation of the trustworthiness of the answers provided by ChatGPT and the like to a wide range of questions on the history of computing, carefully checking (i) the existence of any citations listed, (ii) the accuracy of all the factual statements made, and (iii) whether the errors found were likely due to errors in the learning data or to ChatGPT's text selection and generation strategy.
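As a rough illustration of how results from checks (i)-(iii) might be recorded for later statistical analysis, here is a minimal Python sketch; the class names, fields, and example data below are illustrative assumptions, not anything taken from the Coghlan/Randell study.

# Illustrative sketch only: names, categories, and data are hypothetical,
# not drawn from the study described above.
from dataclasses import dataclass, field
from enum import Enum


class ErrorSource(Enum):
    LEARNING_DATA = "error already present in the training data"
    GENERATION = "error introduced by text selection/generation"
    UNKNOWN = "could not be attributed"


@dataclass
class Citation:
    text: str
    exists: bool              # (i) does the cited work actually exist?


@dataclass
class Claim:
    text: str
    accurate: bool            # (ii) is the factual statement correct?
    source: ErrorSource = ErrorSource.UNKNOWN   # (iii) likely origin, if wrong


@dataclass
class EvaluatedAnswer:
    question: str
    citations: list[Citation] = field(default_factory=list)
    claims: list[Claim] = field(default_factory=list)

    def fabrication_rate(self) -> float:
        """Fraction of listed citations that do not exist."""
        if not self.citations:
            return 0.0
        return sum(not c.exists for c in self.citations) / len(self.citations)

    def error_rate(self) -> float:
        """Fraction of factual claims judged inaccurate."""
        if not self.claims:
            return 0.0
        return sum(not c.accurate for c in self.claims) / len(self.claims)


# Recording one manually checked answer (hypothetical data):
answer = EvaluatedAnswer(
    question="What did Percy Ludgate design in 1909?",
    citations=[Citation("Ludgate, P. (1920), 'Memoirs of a Computing Pioneer'",
                        exists=False)],
    claims=[Claim("Ludgate designed an analytical machine", accurate=True),
            Claim("Ludgate worked alongside Charles Babbage", accurate=False,
                  source=ErrorSource.GENERATION)],
)
print(f"fabricated citations: {answer.fabrication_rate():.0%}, "
      f"inaccurate claims: {answer.error_rate():.0%}")

Aggregating such records over a large, systematically sampled set of questions would yield the per-question fabrication and error rates that a statistically valid evaluation of the kind proposed above would need.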
      
 
      
One of the motivations we had for our study was the enthusiastic and uncritical assessments of ChatGPT's merits as a historian's assistant that we found on the web. We have since been amused and embarrassed to find that the one article we quoted [2] in our paper was almost certainly generated by ChatGPT itself. It turns out that the alleged author, Martin Frackiewicz, has been posting on average close to twenty sizeable articles a day since mid-March on his company's website in praise of ChatGPT. 
      
 
      
A much more credible favourable account of the merits of ChatGPT and the like has been provided by Mark Humphries and Eric Story [3]. This is very optimistic about the ability of historians to use such systems in ways that achieve high levels of veracity, and to identify such errors as are nevertheless made. It would be good if such hopes could be validated - or debunked - before large amounts of ChatGPT-generated text on the history of computing are published and added to future sets of learning data.
      
 
      
Cheers
      
 
      
Brian Randell
      
 
      
1. ChatGPT’s Astonishing Fabrications about Percy Ludgate - https://treasures.scss.tcd.ie/miscellany/TCD-SCSS-X.20121208.002/ChatGPTs-AstonishingFabrications-aboutPercyLudgate-CoghlanRandellOBoyle-20230424-1434.pdf

2. ChatGPT-4: A Valuable Tool for Historical Research and Analysis - https://ts2.space/en/chatgpt-4-a-valuable-tool-for-historical-research-and-analysis/

3. Today’s AI, Tomorrow’s History: Doing History in the Age of ChatGPT - https://activehistory.ca/2023/03/todays-ai-tomorrows-history-doing-history-in-the-age-of-chatgpt/

—
School of Computing, Newcastle University, 1 Science Square, Newcastle upon Tyne, NE4 5TG
EMAIL = Brian.Randell at ncl.ac.uk   PHONE = +44 191 208 7923
URL = https://www.ncl.ac.uk/computing/staff/profile/brianrandell.html
     
_______________________________________________
This email is relayed from members at sigcis.org, the email discussion list of SHOT SIGCIS. Opinions expressed here are those of the member posting and are not reviewed, edited, or endorsed by SIGCIS. The list archives are at http://lists.sigcis.org/pipermail/members-sigcis.org/ and you can change your subscription options at http://lists.sigcis.org/listinfo.cgi/members-sigcis.org
    
   
  
 
--
Adam Hyland (he/him)
adampunk.com
UW HCDE PhD Student
   
  
 
 

