Reading for Monday September 11th

Algorithmic Injustice: a relational ethics approach, by Abeba Birhane (https://www.sciencedirect.com/science/article/pii/S2666389921000155?via%3Dihub) (you can either read it online or download the PDF)

Feynman’s Error: On Ethical Thinking and Drifting (https://www.danmunro.ca/blog/2018/11/29/feynmans-error-on-ethical-thinking-and-drifting-nbsp)

24 thoughts on “Reading for Monday September 11th”

  1. In Abeba Birhane’s analysis of algorithmic decision making, the author advocates for a shift from rationalist to relational ethics, where whether something is good is no longer considered in the abstract but instead through relationships. The author explains that the backbone of Western philosophy rests on the assumption that knowledge can be obtained through human rationality alone, and that sensation and experience can be ignored because of the subjective nature of human experience. Having read Descartes’s Meditations, I recognize his assumption that mind and body can be separated, so that mind and reason can be removed from an embodied context. However, as discussed in some of my philosophy classes at Grinnell, academia often shapes knowledge by affirming certain forms of knowledge and discrediting others. Those outside of academia are rejected as not credible because they lack a position at an esteemed university, do not get published in particular journals, or are not graded highly by professors who disagree with their work. An example of this in the digital age is that top-level domains in URLs, like .gov, .edu, or .org, are taken as shorthand for whether a resource is credible. The .edu top-level domain, for example, is awarded to institutions that meet a set of eligibility requirements set by the organization Educause. One requirement for receiving this mark of credibility is accreditation by an agency recognized by the U.S. Department of Education (https://net.educause.edu/eligibility.htm). Of course, being considered credible is not limited to those with a .edu, .gov, or .org top-level domain, but the top-level domain is often treated as a signal of whether a resource is trustworthy. Since obtaining some top-level domains, like .edu, requires recognition from institutions accredited by the U.S. Department of Education, credibility becomes an issue of appeasing institutions already in power.

    Reply
  2. In Feynman’s Error: On Ethical Thinking and Drifting, I thought it was interesting how the author deploys the term moral agent, for example, when “Feynman was drifting, not acting as a thinking moral agent,” having convinced himself that making the atomic bomb was inevitable. By not thinking as a moral agent, he overlooked other, less devastating options. As the philosopher Hanna Pitkin noted, “many of our moral failings are a result not of deliberate malevolence, but of simply not thinking about what we are doing.” Feynman wasn’t thinking about what he was doing as an individual and thought only of his role in the creation of the atomic bomb. This story reminds me how important it is to always be a moral agent and to help contribute to forming an understanding, peaceful society.

    The second article, Algorithmic Injustice: A Relational Ethics Approach, discusses the growing importance of algorithmic decision-making because it is becoming so prevalent in day-to-day use. These algorithms are made by people who have their own biases, and these translate into algorithmic decision-making, which encodes the biases further into everyday life. I agree with the article that it’s important to combat bias. One of many ways to do so is to know one’s potential bias and to have a diverse group of people designing algorithms as moral agents. To help explain a different way to think about the issue, I really like how the article included Afro-feminist thought. Specifically, Patricia Hill Collins contends “that concrete experiences are primary and abstract reasoning secondary.” Abstract reasoning is impersonal and done from a distance, while learning from concrete experiences forces you to understand a person and be more intimate, which I think is critical to understanding one another.

    Reply
  3. When considering information science, computer science plays an important role in algorithmic decision-making. Especially in machine learning, validating the data is a major component of problem-solving and of preventing false-positive conditions. However, given the influence of social status and privilege, the data can easily absorb bias. As Abeba Birhane notes, “Within the fields of computing and data sciences, the knower is heavily dominated by privileged groups of mainly elite, Western, cis-gendered, and able-bodied white men.” The underrepresentation of people from various cultural backgrounds, gender identities, and racial statuses leads to a condition in which only privileged people benefit, since decision-making depends heavily on who the majority is or what the majority of cases look like. A decision might be made at a certain moment with a moral purpose, but as philosopher Pitkin says, “Many of our moral failings are a result not of deliberate malevolence, but of simply not thinking about what we are doing.” It is easier to make decisions than to keep thinking about them after they have been made, and it is really important to stay flexible with decisions by considering how conditions change.

    Reply
  4. “Algorithmic Injustice” reflects some of the points that pop up as a footnote in “Dying to be Competent,” such as the temptation to “solve” complex, infinitely faceted issues with simple, crowd-pleasing products. As we introduce more and more technology into a world that is driven by so much more than logic, it becomes clear that we only introduce more issues by trying to fix things that require more than a one-step implementation. Birhane mentions Descartes’s desire to rid knowledge of emotion, to make knowledge solid and unquestionable; but from our modules on intersectionality and identity, it is obvious that knowledge is not so clear-cut, that different people experience different things in different ways, and that no one way is more or less correct. When we try to solve a problem embedded in society, we not only do so without a full understanding of its extent and influence, we try to bend reality to exist in a more interpretable way. AI is an extremely powerful tool; modeling serves broad and meaningful purposes; but their knowledge is built on cut-and-dried logic, like that of Descartes: limited, simplified, emulated. It is for this reason that we must constantly consider and reconsider the “solutions” we create.

    Reply
  5. When exploring how the two readings for today connect, a few main things overlap. Developing algorithms and gathering and studying data are usually conducted in a socially disconnected manner. With Feynman’s Error in mind, it is interesting to question how many of the solely logical and linear lines of thinking employed within Western science and philosophy, other fields of inquiry (and social and institutional practices), the life sciences, the physical sciences, the arts and humanities, and computer science are ever revisited as the processes of research and data collection progress. To clarify, I wonder how different the products and research these fields create would be if there were ever a reassessment of their overall goals for the finished product (whether they would ever factor identity into these conversations, or consider their product’s effects on the general population).

    Ultimately, Birhane discusses how we need to change our way of thinking when it comes to scientific and analytical work in STEM fields, and this applies to already existing projects as well. Munro encourages us to keep reassessing our decisions long after we make them, and while we are acting them out, because our motivations may change, or considerations we had not thought of before may come to the forefront of the conversation, when they never would have if we had not stopped to reconsider the “why” of what we are doing.

    Reply
  6. I have often fallen into the trap of rational thinking. Less so now, after I have learned to better understand the relational and emotional impacts of such reductive thinking, but as a younger person with a deep interest in the sciences, I wanted to know and understand “answers.” Birhane’s paper on “Algorithmic Injustice” gives a good summary of rationality and the issues that such ideas bring to ethics and morality. The idea of a static “truth” independent of influences is ultimately impossible for people who are deeply embedded in and interacting with outside influences. As Munro points out in his piece on Feynman, the decisions we make are ultimately dependent on our circumstances, and if we do not try to reevaluate them consistently, we are liable to overlook ethical issues that arise as we continue to work on or distribute software or ideas.

    I feel like I still often hear rationality come up in regard to ethics, and I think it will be hard for us to move away from this dominant Western way of thinking, but I appreciate that we are approaching ethics in a manner pioneered by Afro-feminists through relational thinking. It has a lot to offer in the way it can shift our focus toward a constantly working and changing approach to problems, one that stays dynamic for a changing view of our world.

    Reply
  7. Today’s reading provided a fascinating perspective on the mindset behind how AI and other algorithmic technologies are programmed. In particular, I liked how both readings go into the human psyche; Algorithmic Injustice: A Relational Ethics Approach by Abeba Birhane, especially, examines how we think as programmers and problem solvers. The contrast between thinking in the moment and rationally versus thinking forward and relationally spoke to me. The code I write often aims to cover only the task right ahead of me. Although I am not writing code for generative AI or large algorithmic systems, I can see how my experience translates to the upper levels of programming. Thinking relationally, by contrast, increases the scope at which we can problem-solve. In an ideal, utopian society, thinking neutrally would be enough. However, society as a whole is not neutral to any individual. Some experience biases that are positive and allow them to exploit the loopholes that exist in a capitalistic system. Outside the cis white man, the biases ingrained in our society will never allow the majority of people to experience utmost neutrality. Coding AI or other programs that aim to be human and neutral will create something so completely removed from reality that it can never be applicable and accurate for every human.

    The readings gave insight into something I am fascinated by in regard to AI, which is the value of lived experiences. I learn the most by experience, and in some regards, AI does too. AI learns from the responses it gets from its users and collects data, but at the end of the day, AI learns from digital experiences and not much from lived reality. Until there is a way for AI to learn directly from lived experience, I don’t think it will be able to truly act as a partner. For the time being, AI can only serve as a biased mediator for the problems we need solved.

    Reply
  8. As described by Birhane, rationality, the philosophy that undergirds most modern sciences, presupposes that there is some objective, knowable reality outside of what is perceived and experienced by us. Its commitment to what Birhane terms “separation and clear binaries” permeates (and limits) how we are able to conceptualize and understand things within computer science, as with Will’s example of bits in classical computing. As Birhane illustrates, rationality is a Western concept, and its prevalence in modern sciences is a testament to what Peruvian sociologist Anibal Quijano calls the ‘coloniality of knowledge’ within these fields. In his conception, modern rationality allows hegemonic control of knowledge production to lie with the ‘West’ (or Western ideologies) by legitimizing only knowledge created in this rational manner. I think most of us have not interrogated rationality as a method of knowledge production that perpetuates existing colonial power structures. Since this is not something we see, it stands to reason that we are likely unable to view most ethical issues clearly using rational approaches.

    Reply
  9. As a Black man living in a society still grappling with the legacy of racism and discrimination, this paper resonates deeply. I’ve experienced firsthand how algorithmic systems, despite claims of objectivity, often reinforce existing inequities. The author rightly centers the real harms to marginalized people over technological abstractions. These algorithmic injustices didn’t emerge from a vacuum but are rooted in centuries of oppression. The solutions can’t just be technical tweaks – we need a fundamentally different ethical approach based on our shared humanity and concrete lived experiences. Data science is not ethically neutral and technical fixes like debiasing datasets don’t address core issues of power and history. As the author argues, we need relational ethics, not abstract rationality. I’m heartened this paper uplifts voices typically excluded from these discussions. Centering our stories is vital. My hope is data science can advance justice, not undermine it. But we have far to go and I appreciate this thoughtful contribution in moving the conversation forward.

    Reply
  10. I was very used to thinking of knowledge in a way that heavily focuses on abstract ideas, facts, or logic, and it was definitely a change to start interpreting those ideas or numbers more contextually and flexibly. I like the idea that knowledge and truth are not static, and that things can change or be viewed differently by different people in different settings and social circumstances. The second reading, “Feynman’s Error: On Ethical Thinking and Drifting,” ties in with the concept of relational ethics very well in that it shows the importance of constantly reflecting on past decisions and systems when conditions change. Similarly, it is crucial to revisit algorithms or tools that were written or created in the past and make necessary modifications based on contemporary circumstances. AI and machine learning algorithms, however effective, are still created by error-prone programmers based on their own assumptions, past experiences, or visions of how the machines should work. It is interesting to think that the field of data science and machine learning is technically about making sense of observed patterns or available datasets, forming clusters, attempting to establish causality, and making generalizations or predictions accordingly. Therefore, it is inevitable that these systems are rooted in prejudices that might have been common in the past, or are based on observed practices that were never ethical or fair. Instead of focusing on rationality, we should consider the historical and social context of the numbers we analyze, and account for that when we devise or update algorithms that have so much impact on human lives.

    Reply
  11. I don’t think I’m going to do a good job explaining my views on this (describing what you think is already hard enough, and describing how you think about what you think seems way harder), but I’ll give it a shot and hopefully it will come out at least moderately comprehensible. To start, the idea of rationalism reminded me of my Public Opinion class, where we learned how political scientists used to try to predict voter behavior by graphing “issue positions” on basically a number line and then just seeing which candidates people were closest to. When this didn’t work (obviously), they just started adding more number lines and taking the minimum of the average of all those position differences between voter and candidate. This also (predictably) failed, and thus consigned “Rational Choice Theory” to the realm of collegiate teaching tools. So, people aren’t perfectly logical. I agree. But I also don’t think we are physically capable of acting outside the realm of logic. I’ll try to explain (it gets a bit into religion/spirituality, so I’m sorry if we don’t agree; this is just my worldview): we are all just bundles of meat and electricity. Souls (in the way most people conceive of them) can’t exist, because if they’re unmeasurable (in a theoretical way), then by definition they can’t affect people’s actions, and if they are measurable, then most moral belief systems crumble (as does the First Law of Thermodynamics) (and then it probably wouldn’t be something most people consider a soul anyway). So everyone’s actions have to come from the patterns and firings of our neurons. But why, then, don’t we act “logically”? Well, we do; the system is just too complex to understand (currently). On a much, much smaller scale, take the example of ChatGPT doing things that seemingly defy logic and go beyond mere matrix multiplication; the sheer force of complexity creates ripples of illusory ability.
    So maybe this is just me revealing my rationalist sympathies (I certainly am going at least tangentially against Geertz’s quote), but I think my view differs wildly from the rationalism presented in the article in that I make no claims about the isolation or compressibility of life. While it may be that, for political scientists, the best predictor of how people are going to vote is the party they’re registered with, the fact that life is a “fundamentally co-existent… web” is necessary to my view and a big part of why life is even possible (life can’t evolve in a static system). Our world is more complex than any of us can possibly comprehend (see Cilliers’ argument “that a proper model of a complex system would have to be as complex as the system itself”), and thus I agree that focusing on people, understanding, and the broader context of a system is vital to understanding any tiny part of the vast web (or soup, because I don’t think webs are connected enough) that we all form.

    Reply
  12. A question on my mind after these readings is this: Is it possible for a machine learning algorithm to adjust its aims continually in order to represent the best interests of society? An AI programmed to consistently decrease the carbon content of our atmosphere would be absolutely amazing for climate change right now, but what if it became so effective that we began needing more carbon in our atmosphere? Would it be possible to create some sort of program that actually tracks how beneficial the actions of the AI are and then adjusts the aim of the AI to ensure that it is always working toward the benefit of society? It seems like a necessary measure to take, but it could also be far too subjective to handle with ML. How does one have a machine consider the health of society when the health of society cannot fully be captured in a dataset? On top of that, even if we can create a dataset that we think represents the state of society, how do we ensure that it includes a diverse array of perspectives? And if this dataset models our society, how do we detect the dysfunction within it? What is our baseline for comparison?

    Reply
  13. I found the article “Algorithmic Injustice: A Relational Ethics Approach” particularly captivating because of the way Abeba Birhane explores the different approaches to ethics and how the relational approach is, in her opinion, the best approach when discussing algorithmic injustice. We had previously discussed in class the problem with algorithms that have direct consequences on society and how most of these algorithms and programs were developed by mainly elite, Western, cis-gendered, and able-bodied white men who do not have the tools to recognize social oppression and injustice. Yet, we did not talk much about how most of the time the solutions they provide when dealing with social issues such as biases are approached from a rational perspective. It’s hard to imagine that a big problem like bias has a quick solution that can be implemented efficiently. I believe Birhane does a phenomenal job when introducing the notion of relational ethics. She explains how the basic idea of it consists of emphasizing the commonality of interdependence, relationships, and connectedness. Then she proceeds to give different schools of thought that can be categorized as relational. One of the ideas mentioned in this section of the article that I found interesting was how Patricia Collins defines wisdom and knowledge from the Afro-feminist perspective, where she explains how knowledge tries to find an objective truth that transcends everything while wisdom is grounded in lived experiences.

    Later on, when Birhane is explicitly talking about ethics built on the foundation of relational thinking, she mentions, “Given that harm is distributed disproportionately and that the most marginalized hold the epistemic privilege to recognize harm and injustice, relational ethics asks that for any solution that we seek, the starting point be the individuals and groups that are impacted the most” (Birhane 5). I feel that such an approach is a huge improvement over the rational approach mentioned earlier. What is missing is actually something mentioned in the article “Feynman’s Error: On Ethical Thinking and Drifting” by Dan Munro, where he concludes that “The social, cultural, political, and economic conditions within which we think, decide, and act are constantly changing. As such, the responsibility to revisit and rethink the ethical implications of our decisions and actions is ongoing.” In other words, as we look for solutions, we constantly need to revisit and rethink the different factors, because those factors are constantly changing.

    Reply
  14. Computer science, especially artificial intelligence and machine learning, is a great power for society. Therefore, computer scientists must understand the impact of technology on society from a long-term point of view. It is important as a programmer to create systems and programs that work correctly and efficiently, but it is also important to keep considering the ethical implications of one’s own products and actions, as Munro argues in the reading.
    However, in reality, it is difficult to weigh the ethical implications of systems and programs, because we live in a world where conflict never stops. The military industry has already started using AI and ML to “kill people effectively,” like the atomic bomb in the Manhattan Project. It would be ideal not to use technologies for those purposes, but historically, many technologies, including some we use today, were produced to kill people or to support killing people. Ethics in technology development was ignored in the past, and it is likely to be ignored in the future.

    Reply
  15. I found myself particularly drawn to Birhane’s piece on relational ethics. Within the first page, she writes, “At the heart of relational ethics is the need to ground key concepts such as ethics, justice, knowledge, bias, and fairness in context, history, and an engaging epistemology. Fundamental to this is the need to shift over from prioritizing rationality as of primary importance to the supremacy of relationality.” In doing this, she posits that our focus as computer scientists on making things most efficient, most rational, and most mathematically straightforward stands in the way of building systems that serve the people they were made for. (Or, knowing that they were likely “made for” societally dominant groups, it might be better to say systems that serve the broader population best. This shift in wording comes from what Birhane says on page 5: “Since knowing is a relational affair, it matters who enters into the knower-known relations. Within the fields of computing and data sciences, the knower is heavily dominated by privileged groups of mainly elite, Western, cis-gendered, and able-bodied white men.”)
    This is something I care a lot about in my own philosophy of computing. I’ve said before in this class that computers are for humans and should work for our benefit, and this is something I stick by. Especially since computers and programs tend to reflect human biases because they were programmed by humans, it is paramount that, as system builders, we actively confront the norms we assume and how they play out in the broader world, with the aim of not just being a neutral actor in relation to systemic harm, but actually combatting it. On the fifth page, Birhane notes that “Given that harm is distributed disproportionately and that the most marginalized hold the epistemic privilege to recognize harm and injustice, relational ethics asks that for any solution that we seek, the starting point be the individuals and groups that are impacted the most.” Only in this way can we hope to undo the harm that holding bias, implicit or explicit, unfairly inserts into our increasingly tech-centered world.

    Reply
  16. The reading discusses the increasing automation of decision-making processes in various social spheres through algorithmic systems, machine learning, and artificial intelligence (AI). It highlights the potential harms and limitations of relying solely on technical solutions for complex social issues, and it emphasizes that working in the realm of algorithmic decision-making is, fundamentally, an ethical and moral endeavor. It also introduces the concept of relational ethics as a framework for rethinking ethics within the context of data science, machine learning, and AI, contrasting relational ethics with the dominant rationality-based approach and arguing that ethics should be grounded in the lived experiences of marginalized communities rather than abstract contemplations of “fair” and “good.” As a computer science major, it’s essential to recognize the ethical responsibility that comes with working on AI and machine learning projects. Understanding the potential biases and consequences of algorithms for different communities is crucial, and this perspective can influence daily decisions about project design, data collection, and model evaluation. Moreover, embracing diverse perspectives is essential in the field of AI and data science. Recognizing that the dominant Western perspective is not universal, and that different cultures and communities have unique worldviews, can lead to more inclusive and culturally sensitive AI solutions.

    Reply
  17. I think Birhane’s reading articulated one of the most common concerns among people who protest the integration of AI: the extensive dehumanization of decision making. Sure, many of these models are able to replicate “expert-level” human decision making incredibly quickly and efficiently. But, as Birhane describes, if our models are built around our current understanding, with no specific inquiry into how such decision making could be improved, they are doomed to codify the biases and discrimination rooted inside it. There is also the threat of such negative aspects becoming even more difficult to remove once they are perpetuated in AI: if we enshrine them in “rational” and “logical” systems, it is even harder for people to admit that such an approach may be flawed or even harmful.

    Many of these problems, in my opinion, stem back to American policies during the Cold War. Education became highly focused on science and mathematics, and it was seen as largely benign or even positive to be singularly interested or skilled in math and science, with little care or interest in the humanities. Many of the people from this generation went on to form and influence large tech companies, resulting in the careless and sometimes harmful attitudes that plague them. I know I only speak from a small bubble that greatly encourages the participation of all groups in tech, but I believe that this narrower focus is slowly disappearing to welcome the people who are more aware and accepting of the social responsibility that those in the tech industry have.

    Reply
  18. Abeba Birhane’s article was really exemplary of some of the issues I have with computer science as a discipline and field. Technological advancement is praised and rewarded, but I always ask what exactly is being advanced? As our society becomes more dependent on technology, I feel scared. There seems to be a cultural shift where competency and viability are prioritized over slowness and intentional decision making. The people at the table making these decisions are detached from the needs of large segments of the population. Innovation seems more focused on being the next “insert technology mogul here” instead of actually innovating technology.

    I worked at the Human Trafficking Lab at the University of Michigan this past summer and found that technology has a huge role in making people vulnerable to trafficking. The permanence of online media, the ease with which people can find information on others, and the largely informal moderation processes of most social media platforms all contribute to injustice, directly or indirectly.

    Reply
  19. In Abeba Birhane’s article, “Algorithmic injustice: a relational ethics approach,” she emphasizes a key point about ethics in data science. As machine learning and AI have increasingly come to the forefront of public attention, more scrutiny has been placed on the datasets used to train these algorithms. For instance, facial detection algorithms used by law enforcement have been found to be less accurate on Black people, since the datasets used to train them had a bias toward white faces.

    Some data scientists and engineers have approached this issue from a purely technical standpoint: they see the problem as a bug in the dataset that negatively impacts the model’s accuracy. But as Birhane writes, “bias is not a deviation from the ‘correct’ description” (6), and no “debiasing” of a dataset can occur from a purely technical standpoint. Indeed, even if a dataset could be “fixed” by, for instance, collecting more representative data, it wouldn’t address the underlying systemic problems.

    In the context of facial recognition, this means that even if we did add more Black faces to the dataset, the fix wouldn’t address the more fundamental issues of what other differences in identity the dataset misses, or whether such an algorithm should even exist at all.
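    The accuracy disparity described in this comment is often quantified as a gap in per-group accuracy. Here is a minimal sketch of that calculation; the group labels, counts, and numbers are entirely made up for illustration and come from no real facial-recognition dataset:

```python
def per_group_accuracy(records):
    """records: list of (group, predicted, actual) tuples.
    Returns {group: accuracy}, computed separately per group."""
    correct, total = {}, {}
    for group, predicted, actual in records:
        total[group] = total.get(group, 0) + 1
        correct[group] = correct.get(group, 0) + (predicted == actual)
    return {g: correct[g] / total[g] for g in total}

# A skewed toy sample: group "A" is overrepresented and classified well,
# group "B" is underrepresented and classified poorly.
toy = ([("A", 1, 1)] * 90 + [("A", 0, 1)] * 10 +
       [("B", 1, 1)] * 6 + [("B", 0, 1)] * 4)

acc = per_group_accuracy(toy)
print(acc)  # group A: 0.9, group B: 0.6
```

    Measuring the gap like this is the easy, technical part; Birhane’s argument is that closing it, say by rebalancing the data, still leaves the deeper questions of power, history, and whether the system should exist at all.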

    Reply
  20. I feel like Birhane’s paper did a very good job of explaining some of my own fears about machine learning and the future of algorithms. AI technology is implemented into every aspect of our lives extremely quickly, without much thought about how it actually works. We’ve put too much faith in technology’s ability to be “objective” without reflecting on where that objectivity comes from. Like Richard Feynman, we as programmers and creators feel too content coming to definitive conclusions about our rationality without reflecting on it. Systems of oppression are embedded in our algorithms in ways that can be easily seen and in ways that are much more subtle. I like Birhane’s use of Patricia Hill Collins’s distinction between knowledge and wisdom: our technology has access to a wealth of information (knowledge), but it treats that information as representative of a neutral, non-biased world. Algorithms lack the context or lived experience to look at that information and come to conclusions the way an actual researcher would. I also appreciated Birhane’s thought that ethics is not simply a tool or something to take into consideration, but an integral part of research and programming that requires fundamental changes in thinking.

  21. In Birhane’s article on algorithmic injustice, one quote really stuck with me: that computer scientists are rationalists who “embrace abstraction, generalization, and universal principles at the expense of concrete, particular, and contextual understanding.” It struck me how machine learning embodies exactly this tendency. During my internship over the summer, I was told to create machine-learning models: data scientists would observe patterns and send us the programming so we could help customers by generalizing people into categories. Looking back, I realize how little consideration we gave to people as individuals, trying to appeal to the entire group as a whole rather than to each person. These thoughts made me wonder: is this really the route we should take in order to help those in need? What about those who are considered part of that group but have a completely different problem? Our thinking is so focused on generalizing the population into categories that we don’t really consider those who are in need or struggling. I found it interesting that even in other STEM-related courses, we don’t really consider individuals but categories, asking, “what can we do for this group of people?” It is striking how we are taught to help people in a general way rather than to help people with specific needs. Why do you think we were taught this way? Is it because it is too much work to consider each individual? Is it because we don’t care as much?

  22. I think scientists often get so focused on their work that they become unaware of the consequences it may bring about. We have established in previous readings, however, that ignorance does not exempt one from accountability. I am familiar with formal logic, at least within a philosophical context, and if my studies have taught me anything, it is that logic is more unreliable than we may think. As technology progresses, we are entering an increasingly morally questionable landscape in which scientists presuppose their biases to be rooted in objectivity. I think that kind of mindset is incredibly dangerous because it justifies scientific output as “objective” and “inevitable” when that is not the case.

  23. Both articles did really well in demonstrating the importance of lived experience compared to abstract theory, and how conditions are fluid rather than static. Algorithms and systems are built on “objective assumptions,” but these are limited to the beliefs of the dominant group or group in power. How is it that theories which lack the context to understand human thought and history, and which predict based on past data, take precedence over lived experience, when society is not theoretical and is often unpredictable?

    I think that in STEM subjects especially, objectivity and rationality are prioritized. When explaining data, people often list possible biases affecting the data, then go on to analyze it as if it were collected in a vacuum—as if simply naming potential factors affecting the quality of the data allows one to analyze it without considering those factors. The results therefore seem objective and thus valid, since the goal is to examine the data apart from these biases. Yet if biases are so ingrained throughout the process of collection, the creation of the datasets themselves, and the material the data focuses on, how can the data ever be considered neutral?

  24. Birhane’s piece on algorithmic injustice was an extremely welcome departure from the language I am used to seeing around moral and ethical issues in computing. As outlined in the first half of the piece, solutions focus on making more inclusive/representative datasets or frame these issues as a technical problem to be solved, and critique that does not offer a neatly packaged solution is dismissed as unproductive. The idea of the productive ties closely to Cottom’s conception of competence, and to the way value judgements are leveraged under capitalism, especially in a field like computing that prioritizes efficiency and optimization (often at the cost of scope and flexibility). I was also excited to see citations of work and theorists (Hill Collins etc.) that are familiar and unsurprising when I see them cited in papers I read for GWSS. I think pieces like this, coming from someone working at the intersections of afro-feminist theory and alternative epistemological conceptions, with the knowledge and insight to stand toe to toe with other CS professionals, are invaluable.



The views and opinions expressed on individual web pages are strictly those of their authors and are not official statements of Grinnell College. Copyright Statement.