[SIGCIS-Members] "How Social Media's Giant Algorithm Shapes our Feeds."

Kimon Keramidas kimon.keramidas at nyu.edu
Sun Oct 31 17:07:52 PDT 2021


Paul,

I am sure we probably agree on most of this, and I was hoping it would come across in my writing that I wasn’t being antagonistic. I’m glad it’s clear we are conversing and not fighting :) No all caps probably helps with that HAHAHA.

I’ve loved this conversation in all its aspects, including Tom’s and Ramesh’s great recent contributions. Really great stuff.

I now see that you are worried about targeting individual coders as evildoers, and that kind of direct stigmatization is not what I meant, so we can agree on that. But I would push back against your claim that system effects aren’t under the control of lower-level actors (direct or indirect). In a network of actors there is push and pull from all directions, and as similarly minded small groups of actors act together they can influence the shape of a system. I am not picking on individual programmers, but I am concerned about the culture of programming and the groups of individuals that tend to constitute those communities. At the firm and regulator level that you mention (and which I agree are very much the main power players here), I believe the impetus is most likely profit-driven and in some cases driven by a CEO idealism that is blind to social impact (see Zuckerberg, Musk, and Bezos).

But the communities of practice that are doing the work - developing the algorithms and creating the programs that allow those algorithms to interrelate and execute - are implicated by their constitution and worldview. That is where positivism, techno-libertarianism, and disregard for concerns of intersectionality still occur, and where they will both indirectly and directly influence the development of algorithms and programs. We know that the people developing these technologies are not representative of our society and that the demographics are skewed, so these gaps are inevitable. We also know that these communities can be antagonistic to critique and to the kinds of perspectives that STS and sociology bring to the table.

Just look at the memo that the Google engineer wrote back in 2017 and that made it onto the company’s community boards. The fact that he felt confident enough to post that material is telling.

That is the really concerning issue. A reconsideration and critique of who is culpable for inequities in algorithms and software must happen not just at the governmental and boardroom levels; it must engage these industries from top to bottom in how they situate themselves in relation to social responsibility in the development of their products, not just in being successful and profitable.

Once again really enjoying the conversation and back and forth.

Cheers,
Kimon

Kimon Keramidas, Ph.D.
Clinical Associate Professor, XE: Experimental Humanities & Social Engagement <http://as.nyu.edu/xe.html>
Affiliated Faculty, Program in International Relations

Pronouns: He/Him

New York University
14 University Place
New York, NY 10003

Co-Director - ITMO University International Digital Humanities Research Center <http://dh.itmo.ru/en_about>
Co-Founder - The Journal of Interactive Technology and Pedagogy <http://jitpedagogy.org/>
Co-Founder - NYCDH <http://nycdh.org/>

E kimon.keramidas at nyu.edu <mailto:kimon.keramidas at nyu.edu>
W http://kimonkeramidas.com <http://kimonkeramidas.com/>

The Sogdians: Influencers on the Silk Roads
Exhibition <https://www.freersackler.si.edu/sogdians>

The Interface Experience: Forty Years of Personal Computing
Exhibition <https://www.bgc.bard.edu/gallery/exhibitions/10/the-interface-experience>

The Interface Experience: A User’s Guide
Winner of the 2016 Innovation in Print Design Award from the American Alliance of Museums
Buy Book <http://store.bgc.bard.edu/the-interface-experience-a-users-guide-by-kimon-keramidas/>

> On Oct 31, 2021, at 1:35 PM, Paul N. Edwards <pedwards at stanford.edu> wrote:
> 
> Kimon, thanks for this interesting reaction. I agree with a lot of what you say.
> 
>> With regard to these two comments from your great contribution, Paul: trying to differentiate two types of algorithms perhaps allows for unintentional or even purposeful dismissal of the social situatedness of the development of any algorithm. In your first example, the reason it is uncontroversial is that the algorithm lacks a level of complexity, and therefore its design is very unlikely to end up with results that are encoded with any sort of concerning cultural context as we see it.
> 
> Your emailer, word processor, spreadsheet, calculator, and a million other pieces of software in daily use are full of algorithms that do uncontroversial things. It’s not that they’re not complex, since many of them are in fact quite complex - it’s that their purposes and outcomes don’t have much ethical significance. Vast numbers of algorithms do things such as controlling machinery (your car), modeling physical processes, and a billion other things that are not about humans’ relations with each other. Social media infrastructure is qualitatively different because it’s entirely about human relations.
> 
> Further, while the algorithms making up word processors and emailers are complex, they can be understood by a person examining the code. The AI and ML cases that are so concerning today are problematic because (a) they DO concern socially and ethically significant issues, and (b) in many cases, especially ML and neural networks, no one can understand the code because it’s not produced by people at all. It has step-by-step procedures, but human beings literally can’t understand them - only the outcomes they produce. Neural nets are a great case to look at because they’re in fact very simple - there’s almost nothing but addition, subtraction, and multiplication going on under the hood, yet tracing out the interactions of all that math won’t tell you zip about how they recognize a signature or a face. They’re “trained,” not coded in the more traditional sense.
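> 
> To make that concrete, here is a minimal toy sketch (my own invented numbers, not code from any real system) of a tiny “trained” network. Everything in it is multiplication and addition, and the weights are arbitrary stand-ins for the millions a real network would have - reading them tells you nothing about what the network recognizes:
> 
>     # A toy two-layer neural network: nothing but multiplication and addition.
>     # The weights are made-up stand-ins for values produced by training;
>     # inspecting them reveals nothing about what the network "recognizes."
>     def layer(inputs, weights, biases):
>         # each output is a weighted sum of the inputs plus a bias
>         return [sum(w * x for w, x in zip(row, inputs)) + b
>                 for row, b in zip(weights, biases)]
> 
>     def relu(values):
>         # a simple nonlinearity: negative sums become zero
>         return [max(0.0, v) for v in values]
> 
>     W1 = [[0.21, -1.30, 0.05], [0.90, 0.33, -0.70]]  # hypothetical "trained" weights
>     b1 = [0.10, -0.20]
>     W2 = [[1.50, -0.80]]
>     b2 = [0.05]
> 
>     x = [0.6, 0.1, 0.9]                    # some input features
>     score = layer(relu(layer(x, W1, b1)), W2, b2)
>     print(score)                           # only the outcome is interpretable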
> 
> I’m not interested in letting programmers off the hook, and sometimes I’m sure they’re at fault. Instead, I’m interested in people having a clearer picture of what they’re talking about, and I think a lot of discourse about “the algorithm” targets the wrong thing. Tech companies absolutely need to be held responsible for the terrible outcomes of software they create, but this won’t happen because their programmers wake up and act ethically (though of course they should do that).
> 
> Where we differ is that, given the complexity of systems with multiple millions of lines of code and teams of hundreds of coders working simultaneously, I think those coders can’t anticipate the interactions that may result. To me, that’s not “letting them off the hook”; it’s focusing attention on what the hook is trying to catch, which is not a few bad apples (individuals) spoiling the barrel. It’s the unanticipated (and unpredictable) consequences of complex system interactions. “Move fast and break things” sure did work - a lot got broken, and usually because the companies (as we have been hearing from the recent whistleblower cases) knew they were breaking things but were making too much money to stop.
> 
> Focusing on individual coders as the evildoers won’t work, because that’s not usually the level where the problems occur. A major lesson of STS, and sociology in general, is that system effects aren’t under the direct control of lower-level actors. In your reply here, you lump programmers and tech firms together. I think tech firms, especially, and government regulation are more appropriate levels of agency than coders. 
> 
> I’m not sure we actually disagree about most of this, except that you seem to think programmers have more agency in complex systems than I do. 
> 
> Read my article, or look at Jenna Burrell’s interesting "How the machine ‘thinks’: Understanding opacity in machine learning algorithms” (2016).
> 
> Best,
> 
> Paul
> 
> 
>> But even in this example we can postulate a fictional environment where such an algorithm could have social ramifications. Imagine a society where people whose names start with letters earlier in the alphabet have, by default, more power than others. An alphabetizing system would then reinforce that social stratification, and a person using the algorithm without recognizing the instances where those ramifications come through would be displaying irresponsible naivete. This is not that far a reach from histories of noble and non-aristocratic names in recent Western history. You sort the names, you sort the classes.
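>> 
>> A toy sketch (invented names, an invented social rule) makes the mechanism plain - the sort itself looks “neutral,” but if list position confers standing, the ordering does social work, and even the default collation rules are somebody’s design decision:
>> 
>>     # Hypothetical: in a society where list position confers standing,
>>     # a "neutral" alphabetical sort quietly reproduces a hierarchy.
>>     names = ["von Habsburg", "Abano", "Zavala", "d'Este", "Baker"]
>>     for rank, name in enumerate(sorted(names), start=1):
>>         print(rank, name)
>>     # Python's default sort is also case-sensitive (uppercase sorts before
>>     # lowercase), so "Zavala" outranks "d'Este" - the collation rule itself
>>     # encodes its designers' assumptions.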
> 
>> I posit this speculative/real fictional example because there is always a creator crafting an algorithm and then applying it in practice. Algorithms are culture. Sometimes their creators are programmers, sometimes they are mathematicians, sometimes they are economists, and so on and so forth. Those algorithm crafters can only ever see the world through the eyes of their personal history (the Bourdieuian habitus), and whether they do it intentionally or not, the work is imprinted by that history.
>> 
>> Which leads to what I see as a problem with your second quote. The algorithm isn’t a proxy for the programmers, because they are part of an indivisible system. You can’t have one without the other. So, if we are going to critique the algorithms, we must consider the programmers. And if we are going to critique the programmers, we must consider the algorithms they create. They may not have intent, but they always have agency. A few structures in contemporary society, however, often let programmers or the companies they work for off the hook.
>> 
>> One is the positivistic nature of much computer science, which lacks the introspective and self-critical analysis needed to think about what the ramifications of an algorithm would be once it interacts with a massive online system. Just because they don’t know - as you state - doesn’t mean they shouldn’t imagine what might happen. Nor does it mean that they shouldn’t be vigilant or adaptive about sociocultural impacts, which they often are not, because that does not fit their motives or the motives of the corporations they work for. This rupture is what people in the humanities, media studies, digital humanities, etc. are often trying to address by bringing to the table a more systemic understanding of the ramifications of these actions.
>> 
>> Second is the continued American passion for techno-libertarianism, which has gotten us into this huge mess with Google, Facebook, etc. Many people have known for a long time that these companies, and specifically the algorithms they use to do business, are aimed at corporate expansion rather than the public good or the betterment of individuals. But, as the success stories of the 21st century, they have for a long time been given the benefit of the doubt by consumers, tech critics, and government. Only now has there been the beginning of a reckoning. For the majority of their existence, Facebook and Google have been companies driven by advertising sales, with some alternative services (search, mail, books, scholar, apps) provided for free to entice people into their ecosystems and enhance that business model. And the main focus of their work is to create algorithms that are (to quote Paul’s paraphrase of Knuth) well-defined, finite sets of steps that produce unambiguous results. In this case, the unambiguous results of their algorithmic processing are more information with which to improve their systems and ultimately increase ad sales revenue, despite potential social harm.
>> 
>> I go to this length because I think your comment that the algorithms are constantly changing and adapting lets the corporations and programmers off the hook. They are of course completely aware of this system flux, and their algorithms are complicated enough not only to recognize but to exploit it. Algorithms have inputs and outputs, and in critiquing our digital era we should keep a focus on what people intend for their algorithms to do and what types of outputs they are crafting for. Looking into that, we can better determine whether they are creating public systems that are not exploiting and harming people through the intentional crafting of those algorithms.
>> 
>> Cheers,
>> Kimon
>> 
>>> On Oct 29, 2021, at 4:25 PM, Paul N. Edwards <pedwards at stanford.edu <mailto:pedwards at stanford.edu>> wrote:
>>> 
>> 
> 
> ________________________
> Paul N. Edwards <https://profiles.stanford.edu/paul-edwards>
> 
> Director, Program on Science, Technology & Society <http://sts.stanford.edu>
> William J. Perry Fellow in International Security and Senior Research Scholar
> Center for International Security and Cooperation <http://cisac.fsi.stanford.edu/>
> Co-Director, Stanford Existential Risks Initiative <https://cisac.fsi.stanford.edu/stanford-existential-risks-initiative>
> Stanford University
> 
> Professor of Information <http://www.si.umich.edu/> and History <http://www.lsa.umich.edu/history/> (Emeritus)
> University of Michigan
> 
