[SIGCIS-Members] "How Social Media's Giant Algorithm Shapes our Feeds."

Paul N. Edwards pedwards at stanford.edu
Sun Oct 31 10:35:11 PDT 2021


Kimon, thanks for this interesting reaction. I agree with a lot of what you say.

With regard to these two comments from your great contribution, Paul: trying to differentiate two types of algorithms perhaps allows for unintentional or even purposeful dismissal of the social situatedness of the development of any algorithm. In your first example, the reason that it is uncontroversial is that the algorithm lacks a level of complexity, and therefore its design is very unlikely to produce results encoded with any sort of cultural context we would find concerning.

Your emailer, word processor, spreadsheet, calculator, and a million other pieces of software in daily use are full of algorithms to do uncontroversial things. It’s not that they’re not complex, since many of them are in fact quite complex - it’s that their purposes and outcomes don’t have much ethical significance. Vast numbers of algorithms do things such as controlling machinery (your car), modeling physical processes, and a billion other things that are not about human relations with each other. Social media infrastructure is qualitatively different because it’s entirely about human relations.

Further, while the algorithms making up word processors and emailers are complex, they can be understood by a person examining the code. The AI and ML cases that are so concerning today are problematic because (a) they DO concern socially and ethically significant issues, and (b) in many cases, especially ML and neural networks, no one can understand the code because it’s not produced by people at all. It has step-by-step procedures, but human beings literally can’t understand them - only the outcomes they produce. Neural nets are a great case to look at because they’re in fact very simple - there’s almost nothing but addition, subtraction, and multiplication going on under the hood, yet tracing out the interactions of all that math won’t tell you zip about how it recognizes a signature or a face. They’re “trained,” not coded in the more traditional sense.
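To make this concrete, here is a toy sketch in Python (the weights are invented for illustration; in a real network they come from training, not from a programmer):

# Toy neural net: nothing under the hood but multiply, add, and max().
def relu(x):
    return max(0.0, x)

def neuron(inputs, weights, bias):
    # Weighted sum plus bias, then a simple nonlinearity.
    return relu(sum(w * x for w, x in zip(weights, inputs)) + bias)

# Hypothetical weights: two hidden neurons, one linear output.
hidden = [([0.2, -0.5, 0.1], 0.1), ([0.7, 0.3, -0.9], -0.2)]
out_weights, out_bias = [1.5, -0.8], 0.05

def forward(inputs):
    h = [neuron(inputs, w, b) for w, b in hidden]
    return sum(w * x for w, x in zip(out_weights, h)) + out_bias

print(forward([0.9, 0.1, 0.4]))  # 0.375

Every step is arithmetic anyone can check, yet nothing in those numbers tells you why a trained net built of millions of such units answers the way it does.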

I’m not interested in letting programmers off the hook, and sometimes I’m sure they’re at fault. Instead, I’m interested in people having a clearer picture of what they’re talking about, and I think a lot of discourse about “the algorithm” targets the wrong thing. Tech companies absolutely need to be held responsible for the terrible outcomes of software they create, but this won’t happen because their programmers wake up and act ethically (though of course they should do that).

Where we differ is that, given the complexity of systems with many millions of lines of code and teams of hundreds of coders working simultaneously, those coders can’t anticipate the interactions that may result. To me, that’s not “letting them off the hook”; it’s focusing attention on what the hook is trying to catch, which is not a few bad apples (individuals) spoiling the barrel. It’s the unanticipated (and unpredictable) consequences of complex system interactions. “Move fast and break things” sure did work - a lot got broken, and usually because the companies (as we have been hearing from the recent whistleblower cases) knew they were breaking things but were making too much money to stop.

Focusing on individual coders as the evildoers won’t work, because that’s not usually the level where the problems occur. A major lesson of STS, and of sociology in general, is that system effects aren’t under the direct control of lower-level actors. In your reply here, you lump programmers and tech firms together. I think tech firms especially, and government regulation, are more appropriate levels of agency than individual coders.

I’m not sure we actually disagree about most of this, except that you seem to think programmers have more agency in complex systems than I do.

Read my article, or look at Jenna Burrell’s interesting “How the machine ‘thinks’: Understanding opacity in machine learning algorithms” (2016).

Best,

Paul


But even in this example we can postulate a fictional environment where such an algorithm could have social ramifications. Imagine a society where people whose names start with letters earlier in the alphabet by default have more power than others. An alphabetizing system would then reinforce such a social stratification, and a person using that algorithm without recognizing the instances where its ramifications come through would be displaying irresponsible naivete. This is not that far a reach from the history of noble and non-aristocratic names in the recent West. You sort the names, you sort the classes.
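To make the thought experiment concrete, a few lines of Python (the names and the “earlier letter means more power” rule are of course invented):

names = ["Zimmermann", "Adler", "Mueller", "Becker"]

# A perfectly ordinary, "uncontroversial" alphabetical sort...
for rank, name in enumerate(sorted(names), start=1):
    print(rank, name)

# ...but once list position carries social weight, the neutral
# code reproduces the hierarchy: Adler always ranks first,
# Zimmermann always last.

The sort routine itself is innocent; the harm lives in the coupling between its output and a society that reads rank as status.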

I posit this speculative, but not entirely fictional, example because there is always a creator crafting an algorithm and then applying it in practice. Algorithms are culture. Sometimes their creators are programmers, sometimes mathematicians, sometimes economists, and so on. Those algorithm crafters can only ever see the world through the eyes of their personal history (the Bourdieusian habitus), and whether they do it intentionally or not, the work is imprinted by that history.

Which leads to what I see as a problem with your second quote. The algorithm isn’t a proxy for the programmers, because the two are part of an indivisible system. You can’t have one without the other. So, if we are going to critique the algorithms, we must consider the programmers; and if we are going to critique the programmers, we must consider the algorithms they create. The algorithms may not have intent, but they always have agency. A few structures in contemporary society, however, often let programmers, or the companies they work for, off the hook.

One is the positivistic nature of much computer science, which lacks the introspective, self-critical analysis needed to think through what the ramifications of an algorithm would be once it interacts with a massive online system. Just because they don’t know, as you state, doesn’t mean they shouldn’t imagine what might happen. Nor does it mean they shouldn’t be vigilant and adaptive about sociocultural impacts, which they often are not, because that does not fit their motives or the motives of the corporations they work for. This rupture is what people in the humanities, in media studies, digital humanities, etc. are often trying to bring to the table: a more systemic understanding of the ramifications of these actions.

Second is the continued American passion for techno-libertarianism, which has gotten us into this huge mess with Google, Facebook, etc. Many people have known for a long time that these companies, and specifically the algorithms they use to do business, are aimed at corporate expansion, not at the public good or the betterment of individuals. But, as the success stories of the 21st century, they have long been given the benefit of the doubt by consumers, tech critics, and government. Only now has there been the beginning of a reckoning. But for the majority of their existence, Facebook and Google have been companies driven by advertising sales, with some alternative services (search, mail, books, scholar, apps) provided for free to entice people into their ecosystem and enhance that business model. And the main focus of their work is to create algorithms that are (to quote Paul’s paraphrase of Knuth) a well-defined, finite set of steps that produce unambiguous results. And in this case, those unambiguous results of their algorithmic processing are more information with which to improve their systems and ultimately increase ad sales revenue, despite potential social harm.
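(For reference, the textbook sense of the term is genuinely modest. Euclid’s algorithm for the greatest common divisor, Knuth’s own opening example in The Art of Computer Programming, fits the definition in a few lines of Python:)

def gcd(a, b):
    # A well-defined, finite set of steps with an unambiguous result.
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(48, 18))  # 6, the same answer every time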

I go to this length because I think your comments that the algorithms are constantly changing and adapting let the corporations and programmers off the hook. They are of course completely aware of this system flux, and their algorithms are complicated enough not only to recognize that flux but to exploit it. Algorithms have input and output, and in critiquing our digital era we should keep our focus on what people intend their algorithms to do, and what types of outputs they are crafting for. Looking into that, we can better determine whether they are creating public systems that are not exploiting and harming people through the intentional crafting of those algorithms.

Cheers,
Kimon

Kimon Keramidas, Ph.D.
Clinical Associate Professor, XE: Experimental Humanities & Social Engagement<http://as.nyu.edu/xe.html>
Affiliated Faculty, Program in International Relations

Pronouns: He/Him

New York University
14 University Place
New York, NY 10003

Co-Director - ITMO University International Digital Humanities Research Center<http://dh.itmo.ru/en_about>
Co-Founder - The Journal of Interactive Technology and Pedagogy<http://jitpedagogy.org/>
Co-Founder - NYCDH<http://nycdh.org/>

E kimon.keramidas at nyu.edu
W http://kimonkeramidas.com

The Sogdians: Influencers on the Silk Roads
Exhibition<https://www.freersackler.si.edu/sogdians>

The Interface Experience: Forty Years of Personal Computing
Exhibition<https://www.bgc.bard.edu/gallery/exhibitions/10/the-interface-experience>

The Interface Experience: A User’s Guide
Winner of the 2016 Innovation in Print Design Award from the American Alliance of Museums
Buy Book<http://store.bgc.bard.edu/the-interface-experience-a-users-guide-by-kimon-keramidas/>

On Oct 29, 2021, at 4:25 PM, Paul N. Edwards <pedwards at stanford.edu> wrote:




________________________
Paul N. Edwards<https://profiles.stanford.edu/paul-edwards>

Director, Program on Science, Technology & Society<http://sts.stanford.edu>
William J. Perry Fellow in International Security and Senior Research Scholar
Center for International Security and Cooperation<http://cisac.fsi.stanford.edu/>
Co-Director, Stanford Existential Risks Initiative<https://cisac.fsi.stanford.edu/stanford-existential-risks-initiative>
Stanford University

Professor of Information<http://www.si.umich.edu/> and History<http://www.lsa.umich.edu/history/> (Emeritus)
University of Michigan


