
Module 9: The coded gaze

Hi! Hello!

It’s so funny: I talk so much about access and being kind to yourself, and yet I feel this innate urge to apologize for being unavailable last week. So I guess I will say I’m sorry, because I do really care about your learning space and I hope that comes through. I think that it does. But this semester, like most or all semesters I would argue, is untenable, because the conditions we’re expected to perform under are unrealistic for any bodymind (bodymind is a Disability justice term), at least any bodymind that is looking to have some amount of joy for the time that we’re here on this planet earth. Yeah, it’s rough out here, people. I apologize for the rant; I can’t help myself. I also don’t understand why we’re expected to be productive constantly, but that is another byproduct of white supremacy culture. So. All the more reason to flip the script.

Please DM me if you need any support or feel lost or just want to say hello. I’m here for you and hope this asynchronous space can still feel human.

Let’s jump back into thinking critically about the fields within engineering and the sciences. This goes for everyone, but especially for Computer Science majors: have you considered the ways in which your field has bias? The ways your field has a profound impact on how society is shaped?

I’m not sure if these questions are being raised in your other courses (I hope they are! Tell me if they are!), but since we’re considering both rhetoric and composition, these questions must be taken into account.

For this week, I would like you to watch this 13-minute talk by Dr. Joy Buolamwini about facial recognition and what happens when the sample set skews white and male.

For the module comment, I would like you to consider the following:

Take note of 2-3 rhetorical issues Dr. Buolamwini raises that speak to you. For me, it was her reframing of the “under-sampled majority” as a way to think about who is represented in most technological spaces and who is erased. So often we say “minority” when speaking about the people of the global majority who are not white, and that default standard creates an intentional bias with real implications (think policing, think community funding, think incarceration rates).

Have you ever considered algorithmic bias when using your devices?

What are some ways we can shift the dominant data set?

If you have an experience of algorithmic bias that you want to share, I welcome it in this space, but it is not required.

Thanks everyone for staying engaged and enjoy the rest of your week!


4 Comments

  1. The concept of algorithmic bias is something that I was unaware of; however, I do not find it surprising. Dr. Buolamwini presented the fact that white men are the demographic recognized most readily and women of color are recognized least. This further proves the unequal treatment women face, specifically women of color. Moreover, one way we can shift the dominant data set is, as Dr. Buolamwini mentioned, to talk to the ones in charge and present them with the statistics found, showing them the numbers and explaining how the algorithm does not represent the whole population.

  2. I have noticed algorithmic bias, but I never knew the proper name for it. I have noticed that I mostly see thin or socially “attractive” white people on my devices. I noticed this and realized how unfair it is, especially to younger, impressionable people who could also see this algorithmic bias and begin to think poorly of themselves. This algorithmic bias can take a huge mental toll on the people who see it but don’t match these standards. We can shift the dominant data set by showing the statistics of how many people are falsely identified. We can also remind those in power, and in charge of this, that there are specific acts in place to prevent bias and discrimination that they are not following, as Dr. Buolamwini mentioned in regard to college acceptances. In addition, we can expand the range of features that represent either “masculine” or “feminine” faces and include darker skin tones in the different face detection software.

  3. I had never come across the term algorithmic bias, but I have noticed it on various occasions. I understood it as AI bias in a computer system whose outcomes are repeatable errors that seem “unfair” and “prejudiced,” resulting from erroneous assumptions. From my perspective, I have seen various influencers and artists on Instagram who have been shadowbanned just because their content doesn’t satisfy specific values or because their views align more with the LGBTQIA or Black community. I have also witnessed an occasion where Instagram deleted a Black plus-size model’s bikini picture even though no genitals were shown. Their reasoning was “nudity,” but in reality, her skin color and body type were seen as a problem for the public, which kept her picture from remaining on the platform. This algorithmic bias impacts the individuals affected by it and certainly discourages them from posting things that might be seen as “problematic.” We can try to shift the dominant data set, as Dr. Buolamwini mentions, by speaking to those in power/charge and providing them with a statistical analysis of such bias. However, even if we shift the dominant data set, the algorithm can still be abused. So, is it really worth it?


Course Info

Professor: Andréa Stella (she/her)

Email: astella@ccny.cuny.edu

Zoom: 4208050203

Slack: engl21003fall22.slack.com/