[SIGCIS-Members] CfP: 4S 2018 - Machine Biases and Other Algorithmic Harms in Transnational Perspective

M. Hicks mhicks1 at iit.edu
Fri Jan 19 07:37:02 PST 2018


Looks like a terrific panel, Colin--any chance that you might bootleg record your own talks and make them available? I have several grad students who would *love* to hear this panel but can't make it there in person.

And don't forget Safiya Noble's soon-to-be-released Algorithms of Oppression for the body of existing literature :)

Best,

Mar 
______________________
Marie Hicks, Ph.D.
Asst. Professor, History of Technology
Illinois Institute of Technology
Chicago, IL USA
mhicks1 at iit.edu | mariehicks.net | @histoftech
Programmed Inequality: How Britain Discarded Women Technologists and Lost Its Edge in Computing
www.programmedinequality.com


On Jan 19, 2018, at 10:27 AM, Colin K Garvey <garvec at rpi.edu> wrote:

Perhaps a bit far afield for some, but historical takes on this theme will be more than welcome. In any case, please circulate to your networks. For more information, feel free to contact Colin Garvey at garvec at rpi.edu

Machine Biases and Other Algorithmic Harms in Transnational Perspective
Colin Garvey, RPI; Gernot Rieder, IT University of Copenhagen
On Halloween, October 31st, 2017, three monsters called Facebook, Twitter, and Google were forced to testify before US senators about the services they offer. Sen. Lindsey Graham, R-S.C., chairman of the Senate Judiciary Subcommittee on Crime and Terrorism, noted that “millions of Americans use your technology to share the first step of a grandchild, to talk about good and bad things in our lives” before declaring that “the bottom line is these technologies also can be used to undermine our democracy and put our nation at risk.” Indeed, from drone-mediated assassination to racial discrimination on digital platforms and beyond, algorithms have made possible a new category of harm in both digitized and digitizing societies. A growing literature has described a range of algorithmic harms arising from information gatekeepers (Tufekci 2014), killer robots, secretive decision-making black boxes (Pasquale 2015), “weapons of math destruction” (O’Neil 2016), and a variety of other systems. These investigations have remained confined primarily to Western contexts, however, and algorithmic harms outside the Global North remain largely unexamined. This panel calls for knowledge from the margins: transnational perspectives on algorithmic harms as they have occurred, are transpiring, and may unfold in futures to come. What do computerized systems capable of wreaking socioeconomic devastation on diverse publics look like in, from, through, and across different nations?
109.  Machine Biases and Other Algorithmic Harms in Transnational Perspective
https://4s2018sydney.org/accepted-open-panels-4s/

Submission link at bottom of page.
_______________________________________________
This email is relayed from members at sigcis.org, the email discussion list of SHOT SIGCIS. Opinions expressed here are those of the member posting and are not reviewed, edited, or endorsed by SIGCIS. The list archives are at http://lists.sigcis.org/pipermail/members-sigcis.org/ and you can change your subscription options at http://lists.sigcis.org/listinfo.cgi/members-sigcis.org
