Reading for Wednesday September 20th

Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, by Cathy O’Neil. Chapter 5. (please see link to PDF in Pweb under “readings” tab.)

Algorithms, Correcting Biases, by Cass R. Sunstein (https://muse.jhu.edu/article/732187)

Predictive Policing is still racist (https://www.technologyreview.com/2021/02/05/1017560/predictive-policing-racist-algorithmic-bias-data-crime-predpol/)

21 thoughts on “Reading for Wednesday September 20th”

  1. It was interesting how the Weapons of Math Destruction reading and Algorithms, Correcting Biases presented completely contrasting viewpoints. WMD was cognizant of the potential, and often realized, pitfalls of using algorithms for predictive policing (and sentencing), whereas Sunstein argues that sentencing algorithms were better at classifying high-risk individuals than judges. Unfortunately, this point may be overstated: as with many of the models we have looked at so far, the factors constituting “high risk” are often subjective and rooted in historically biased policing and sentencing practices. They also track proxies for race, such as socioeconomic status, which increases the likelihood of proximity to crime, which in turn heightens one’s actual or perceived risk, and so on. In the Technology Review article, we also see that even methods intended to avert biased data, such as drawing on victim reports rather than police records, end up disseminating the same biases. On the surface, this may seem like a step in the right direction, but give any thought to who is making the most reports and who is most often being reported, and it is clear that the approach suffers from many of the same issues.
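
    The proxy effect described above can be made concrete with a minimal sketch (all data and feature names here are hypothetical, made up by me rather than taken from any of the readings): even when race is excluded as an input, a model trained on a correlated feature like neighborhood reproduces the disparity.

        # Minimal sketch with hypothetical data: a model that never sees
        # race can still produce racially skewed risk scores via proxies.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        n = 10_000

        # Hypothetical world: group membership is never a feature, but it
        # correlates with neighborhood, and historical arrest labels are
        # inflated in the more heavily policed neighborhood.
        group = rng.integers(0, 2, n)                                   # hidden attribute
        neighborhood = np.where(rng.random(n) < 0.8, group, 1 - group)  # 80% proxy
        arrested = rng.random(n) < (0.10 + 0.15 * neighborhood)         # biased labels

        X = neighborhood.reshape(-1, 1)                                 # race is NOT an input
        model = LogisticRegression().fit(X, arrested)
        risk = model.predict_proba(X)[:, 1]

        print("mean predicted risk, group 0:", round(risk[group == 0].mean(), 3))
        print("mean predicted risk, group 1:", round(risk[group == 1].mean(), 3))
        # The gap between groups persists even though the model is "race-blind".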

  2. In Weapons of Math Destruction, Cathy O’Neil explains predictive crime models, in which police departments across the United States use policing data to inform their patrol routes. Much like the risk-assessment algorithms that predict whether a defendant is expected to commit another crime soon, these models produce racial disparities, often flagging people of color as criminals. In this case, O’Neil explains that the algorithms discriminate between geographical regions, which in segregated cities tend to be reflections of poverty and racial disparity. She also explains how current policing data is fed back into these algorithms, entrenching police beliefs about how to control crime.

    As I read the excerpt, I felt that the greater implications of the algorithm could be discussed. Reported crime statistics affect more than just where police officers decide to patrol and report more crimes. These statistics are often accessible to the public, and people use them to decide whether to live in an area, start a business, or even simply spend money in a community. By publicizing this data, police departments, regardless of intent, market certain communities as not worth investing in. This perpetuates disparities in socioeconomic outcomes as local businesses fail to attract customers, properties remain unsold, and schools lose out on funding. At the same time, the issue of communities receiving investment from outside groups can be complicated, since an influx of new inhabitants can displace existing communities. Regardless, these algorithms have implications well beyond their direct effects on policing habits, such as perpetuating poverty.

  3. The articles highlight how predictive policing algorithms inherit and perpetuate systemic biases, leading to discriminatory over-policing of minority communities. Despite claims these tools are “colorblind”, the data they train on reflects ingrained prejudices and unequal enforcement. Relying on flawed data cements existing injustices.

    Even if victim reports are used instead of police arrests, as in Bogotá, the outputs remain skewed. Minorities are reported at higher rates, and under-reporting occurs where distrust of the police is high. Attempts to tweak the algorithms provide little improvement; there is no easy technical fix to social problems.

    What matters most are the objectives encoded in these systems. Focusing narrowly on optimizing arrests and reported crimes leads to damaging feedback loops. It pulls more police into over-policed areas, generating more questionable data. Structural racism and concentrated poverty drive higher crime rates, which algorithms take as immutable rather than something to address.

    We must be highly skeptical of claims that algorithms inherently reduce bias compared to human decisions. Both exhibit bias, but automated decisions conceal it behind a scientific veneer. Transparency and public accountability are crucial. But equally important is rethinking what success looks like for public safety – community trust, crime prevention, and restorative justice should be the goals, not just predictive accuracy and arrests. Technical tools reflect society’s values. Unless we change what we choose to optimize, even unbiased algorithms will be part of the problem.

  4. The readings ultimately discuss how algorithms can affect how we run our justice system, but they interpret that effect in contrasting ways, especially Weapons of Math Destruction and Predictive Policing is Still Racist versus Algorithms, Correcting Biases. The first two discussed how algorithms inherently encode bias due to their purely analytical nature, which means they do not take social conditions into account when analyzing data. Conversely, Correcting Biases argues that the objectivity of algorithms is good for this very reason: it creates an “unbiased” environment free of human cognitive biases, since machines cannot develop bias the way that humans can.

    Predictive Policing is Still Racist adds something quite interesting to this line of thinking. The article mentions that there is no quick fix for biased algorithms, due in part to a lack of data regarding certain social issues. So the only fix, as of now, is to cease using these tools until a much better algorithm comes along. Although these algorithms are meant to expel cognitive bias, human cognitive assessment of people under the law currently remains our best option.

  5. Today’s readings all discussed biases in algorithms used to predict crime. However, while two of them talked about the existence of bias in these types of algorithms, the other argued that the biases are not a direct result of the algorithm but rather lie in what people ask the algorithm to do. Together, the articles create an interesting academic discussion about the nature of these algorithms.

    To start, I think that in ‘Weapons of Math Destruction,’ Cathy O’Neil does a great job comparing past policies such as ‘stop and frisk’ in New York with current surveillance algorithms like PredPol, or the algorithm developed by the city of Chicago to create a list of the 400 individuals most likely to commit crimes. It was interesting how both strategies relied on deeply biased techniques at their core. The only significant difference was that while one was conducted by biased individuals, the other ignored race explicitly but involved many questions whose answers were connected to race. As O’Neil says, ‘Birds of a feather, statistically speaking, do fly together’ (O’Neil, 102). This is where the problem lies – statistics cannot tell the full story.

    O’Neil also points out that part of the problem lies in how we are addressing the issue: we are trying to predict crime instead of working to prevent it. What’s even more concerning, as Will Douglas Heaven says in his article ‘Predictive Policing is Still Racist – Whatever Data It Uses,’ is that even though studies have found issues with these technologies, authorities have done little to stop their use.

  6. The readings talked about how biases affect the algorithms used to make judgments in the justice system. Algorithms do not use “color” as a variable when they produce outcomes, but the data fed to them reflects human bias, so the algorithms produce skewed results. There is no easy way to rid these programs of racial bias, and if governments cannot solve these issues, they should not continue to use the algorithms.

  7. Sunstein’s article is a profound exploration of algorithms and their potential implications for public policy. The research cited and the perspectives presented offer a fresh lens through which to view how algorithms can reshape and optimize decision-making processes that have traditionally relied on human judgment. One of the main takeaways from the article is that algorithms have the potential to overcome harmful cognitive biases. In day-to-day life, humans, even those trained rigorously, fall prey to various cognitive biases. The availability heuristic, for instance, is ubiquitous and can manifest in any number of ways, from making purchasing decisions based on recent advertising to judging the risk of an event based on recent occurrences. As algorithms, by design, are devoid of emotional or cognitive biases, they can serve as more rational decision-makers, making them particularly useful in fields requiring prediction and risk analysis. Algorithms, due to their deterministic nature, can also be examined and audited, offering an unprecedented level of transparency. This transparency is beneficial when addressing concerns of bias or discrimination. In a future career, especially in sectors like AI ethics, ensuring that algorithms are transparent and can be inspected for biases will be crucial. Making trade-offs clear, as algorithms do, will also assist policymakers in making informed choices.

  8. Modern policing is in such an unfortunate state. Many people can see that there are obvious flaws in this system: flaws that cause serious and disproportionate harm to minority communities. We recognize that this biased system is composed of biased people, and we attempt to remove their influence by giving algorithms more of a say in the decision-making process. Every step of the way, though, we must deal with our legacy as a society that has historically tolerated injustice as long as it is focused on people of color. In a way, everything wrong with our justice system can be seen reflected in tools such as COMPAS: we feed in presumably nothing but our own decisions and receive a measurably racist, biased reflection. I’m not sure whether the data that could properly train such machine learning models will ever exist now that they are increasingly adopted into law enforcement and continue to reinforce societal biases.

    As the third article says, there is no technological fix for something that is not really a technological problem. I think cases like this show why diverse interests within computer science, and this class specifically, are incredibly important. It will require a lot of communication between fields to further reveal these issues and to approach ways of creating just systems on which machine learning models may actually be trained.

  9. It felt like we got a well-rounded look at these crime prediction models within these readings. Specifically, we got some insight into the argument for these tools, and I do feel as though the intentions behind them are good; generally, I am for the reduction of crime. I was knocked back by the part in Weapons of Math Destruction that mentioned how these predictive models focus on the poor not because they commit more crime, but because their crimes are of a nature the police can handle. That makes a lot of sense. I’ve always heard about poverty-driven crime, and that has always led me to view crime as a direct result of poverty. The crimes committed by impoverished people are simply more visible; they are neither celebrated nor pretty to look at. It makes me wonder how these models could be altered to include upper-class crimes.

  10. I’ve been wanting to read Weapons of Math Destruction for a while, so I’m glad to get an outside push to read at least part of it. I thought the book’s explanation of the positive feedback loop that arises from a geographically predictive algorithm was very well laid out and incredibly worrisome. I’m sure something akin to this is already happening in the minds of many patrolling officers, and to see it codified and made defensible as “just what the algorithm says” is frightening. I also like the point about “fairness vs efficiency,” which is similar to what a lot of people have discussed in class but was laid out very directly and nicely here. Furthermore, the fundamental assumption the book brings up, which doesn’t often get challenged, is that prisons stop or reduce crime. I think it’s a great question to ask whether locking up non-violent (or even violent) offenders in the first place is even helpful. Obviously, rehabilitation and reentry programs are massively underfunded and under-researched, and I wonder what could get done if all the resources going to prediction went instead into determining how to effectively prepare people to leave prison, or into creating prison alternatives. (I wrote this as I was reading, and the book brings it up, so I’m going to leave it in and acknowledge that the reading read my mind – although I will say it feels weird to be looking toward Amazon as a model. I guess if we can use its capitalistic tendencies for good causes, then that’s good, but it’s still disconcerting to say “the criminal justice system should be more like Amazon.”)
    As for the Sunstein reading: in my last post I wondered whether anyone had done a judge–algorithm comparison study, so I am glad to have my question answered so quickly. I’m floored by the analysis of decreasing crime rates or jail rates. When framed like that, it seems almost immoral not to replace judges with algorithms (in limited settings) immediately (not that we necessarily should, as we’ve discussed at length). I think the counterargument would be that judges are worse now but could change, while the algorithms we’ve discussed are likely to lead to self-fulfilling outcomes that entrench past and current biases. The current-offense bias is fascinating, though, and makes a lot of sense as an explanation. I also agree completely with the article’s classification of the claim “that experts can develop reliable intuitions” as “fallible,” since there’s a lot of evidence that most people we consider “experts” don’t do nearly as well as we’d think or hope. I’m glad we got a piece that provided a bit of pushback on the class’s thinking, although I’m still really skeptical about these algorithms and think that if they’re used, their use should be limited and controlled to prevent leakage (e.g., from bail decisions into sentencing).

  11. I thoroughly enjoyed the three readings because they provided contrasting perspectives on yet another algorithm developed for the legal system, so everything combined felt like an academic discussion.

    The reading in “Weapons of Math Destruction” gives a very in-depth analysis of the predictive policing algorithm used by police departments in many major cities to identify areas with potentially more crime occurrence. It did a really good job of offering readers some background information including the need for the algorithm as well as the kind of input data that was fed into the model, before showing us the statistics demonstrating its positive impacts and the associated ethical implications. I like that it brought up the trade-off between fairness and efficiency, which is something I talked about in many of my discussion essays and something that was brought up multiple times in class. In the reading, the trade-off was very well laid out, which further emphasizes the challenge of using a predictive algorithm created to be efficient and accurate in a system where fairness is so important.

    I also specifically like the comparison between these predictive or recidivism algorithms and what Amazon uses to predict consumer retention. Unlike Amazon, which continually learns and gathers enormous amounts of information to demystify consumer behavior, the lack of data recorded in private prisons and the failure to account for confounding variables in predictive policing models greatly hinder efforts to identify relationships among all the predictors of crime. As a result, the models disproportionately impact the poor and minority groups despite race not being considered when they are trained. This reminds me of redlining, which was also a racist practice that seemingly had good intentions to begin with but still presented and perpetuated racial prejudice.

    Objectively, society is so complex, and it is hard to strike a balance between optimization and equity. The sentiment in the last reading, from MIT Technology Review, perfectly sums up the situation: there is no quick fix, no good alternative, and no easy way to account for biases in algorithms. Even though these technologies are highly problematic and unethical in many ways, they are also hard to replace or get rid of completely.

  12. In the article, “Predictive policing is still racist—whatever data it uses” by Will Douglas Heaven, Rashida Richardson is quoted and hits the nail directly on the head regarding a theme that many of the articles we have read over the past two days have been getting at. She states that “‘many predictive policing vendors like PredPol fundamentally do not understand how structural and social conditions bias or skew many forms of crime data’” (2021). Companies such as PredPol or COMPAS, or any others behind these digitized “crime-fighting tools,” claim that their tools ignore elements such as race and gender and simply look at the situation. Often, however, these crimes and situations arise from years of harmful over-policing and inequality in the justice system. The first article we read for today’s class, “Algorithms, Correcting Biases” by Cass R. Sunstein, aims to defend crime-fighting technologies and does have valid evidence to support the use of automated judges. However, I think Sunstein overlooks how some of the language used in the article comes across. To me, it felt as though they were admitting the criminal justice system will never be perfect, and that it is therefore acceptable to use technology because, while it will still be biased, it will be less biased. The notes in this article remind me of the warnings about objectivity in machine learning in “Optimize What?” by Jimmy Wu. Additionally, Chapter 5 of Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy by Cathy O’Neil further rebuts many of the sentiments in Sunstein’s piece. O’Neil brings up how the data feedback loop often reports success amid rising disparity in policing. Sure, more crimes are found, but what kinds of crimes are they, and would they usually have been “arrestable” if the police presence were not as large? These are questions that I am sure we will discuss, and that I am excited to discuss, in class today.

  13. Part of my frustration with this push to involve algorithms to mitigate bias is that there is very little consideration of the fact that the people who make these algorithms also have to intentionally practice and embody anti-racism. The “Predictive Policing is still racist” article expands on this: “I think many predictive policing vendors like PredPol fundamentally do not understand how structural and social conditions bias or skew many forms of crime data.”

    In “Algorithms, Correcting Biases,” Sunstein writes, “There is no assurance, of course, that algorithms will avoid cognitive biases. They can be built so as to display them. My point is that they can also be built so as to improve on human decisions.” I think there needs to be a shift away from thinking of algorithms and new technologies as “taking over” human jobs and decisions, toward treating them as one more factor in decision making. Even then, I don’t know how big a role algorithms should play, because of the problems Sunstein highlights.

    Several times, students have admitted that the concepts we are learning in this course are the first time they’ve confronted or thought about these ideas, which is scary to me. I don’t say this to shame anyone; as I’ve said before, it’s not necessarily their fault, but it is their responsibility, and I myself have definitely learned new things in this course. I only mention this to stress how disappointing it is that this course was met with so much pushback from the CS department. Being good at algorithms is not enough, and the department should be more concerned with producing graduates who are socially aware on top of being good computer scientists. In fact, I think being aware, or at least trying to be aware, of the impact of algorithms on society is required to be a good computer scientist.

  14. The chapter from Weapons of Math Destruction provides a great example of the dangers of predictive algorithms, and it only scratches the surface of self-perpetuating cycles of bias in law enforcement. The way software like PredPol encourages police to patrol impoverished areas where crimes have been reported previously reminds me of a racist statistic I used to hear circulated a lot: “In the US, black people commit ~40% of violent crimes despite being ~13% of the population.” People would bring this up to elicit the conclusion that black people are either inherently more violent or have a more violent culture. Obviously, that’s a very shallow and prejudiced interpretation of the data. First of all, law enforcement is not omniscient – the police don’t know about every crime that happens, so crimes are only ‘caught’ in the places the police patrol or where crimes are reported. The former, as pointed out in the reading, frequently coincides with poverty, minority populations, and ‘street crimes,’ while the latter is subject to the implicit or explicit biases of the reporter. Those same biases also influence what law enforcers identify as crimes and which crimes they choose to act on. Furthermore, someone accused of a crime doesn’t factor into this statistic unless they are convicted, which means the data is also sensitive to biases on the part of judges and juries – not to mention that more money (which coincides with whiteness) buys better lawyers. There is also an issue in how the data is collected and by whom. The point of all this is that statistics and data need to be interrogated heavily for biases.

    I also have to wonder whether these predictive law enforcement software packages were developed with the best intentions, as O’Neil suggests. I must admit, I find it bizarre that a professor of anthropology at UCLA would not have considered that so-called ‘racially blind’ geographic data on reported crimes is not even remotely racially blind. When someone brings up the 40% statistic, the problem is not only that the statistic is seriously flawed, but also that they brought it up at all. What kind of conclusions are they trying to make people draw by presenting the data that way? Similarly, I think it is naive to act as though the racist and classist consequences of predictive crime software are entirely accidental and not at all by design. The history of the criminal justice system is underpinned by racist and classist assumptions and motivations, so why assume that modern attempts to digitally assist law enforcement are any different?

  15. In the reading from Weapons of Math Destruction, we see another data-driven approach that contributes to mass incarceration in the US. Predictive-policing algorithms like PredPol give officers information about crime “hotspots” where the software anticipates crimes will occur. Although designed with the intention of keeping communities safer, they have been widely criticized as racist: they lead to over-policing of poor neighborhoods and have led to a surge in arrests of people of color.

    One aspect I took from this reading and last class’s discussion on recidivism prediction algorithms is the danger of feedback loops in algorithms that impact the public. O’Neil describes the feedback loop inherent to predictive policing that contributes to a cycle of incarceration and poverty: The algorithm predicts crimes in poor neighborhoods, so officers patrol these areas and make arrests for petty crimes such as underage drinking or drug possession. This sends young people to prison, where after they are released they have trouble finding work and are more likely to commit crime, continuing the cycle.
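
    To convince myself of this dynamic, I find a toy simulation helpful (the numbers here are entirely made up, not from the book): two neighborhoods with identical true crime rates, where patrols follow recorded counts and crime is only recorded where officers are present, diverge permanently from a small initial disparity.

        # Toy model of the feedback loop described above. All numbers are
        # hypothetical, and both neighborhoods have the SAME true crime rate.
        import random

        random.seed(0)
        true_rate = 0.05             # identical underlying rate in A and B
        recorded = [12, 10]          # A starts with slightly more recorded crime

        for year in range(5):
            # The "predictor" flags whichever neighborhood has more recorded
            # crime, and most patrols are sent there.
            hotspot = 0 if recorded[0] >= recorded[1] else 1
            patrols = [80, 20] if hotspot == 0 else [20, 80]
            # Crucially, crime is only *recorded* where officers patrol.
            for i in (0, 1):
                recorded[i] += sum(random.random() < true_rate
                                   for _ in range(patrols[i]))
            print(f"year {year}: recorded = {recorded}")
        # Neighborhood A keeps "confirming" the model: more patrols lead to
        # more recorded crime, which leads to more patrols, despite equal rates.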

    This clear description of the loop shows how and why the US has become a country with 5.4 million people in prison or on parole, a number greater than many other countries’ entire populations. It also becomes clear how Big Data algorithms, when designed without fairness as an initial pillar, can become tools that empower discrimination on a society-wide level.

  16. In reading the “Weapons of Math Destruction” chapter alongside the “Algorithms, Correcting Biases” piece, I found their core themes to be in nearly diametrical opposition.

    “Algorithms, Correcting Biases,” much to my frustration, was very keen on perpetuating the false promise we discussed in Monday’s class: that humans are biased and machines are not, which leaves machines very well equipped to make consequential decisions where we may worry bias will come into play. On page 501, the author, Sunstein, wrote, “Judges show leniency to a population that is likely to commit crimes,” in forming an argument that an algorithm would reduce this leniency, with the implication that it would treat leniency appropriately in all cases. Aside from the other, deeply problematic implication – that some populations are, by nature of their population characteristics, intrinsically more criminal, a notion rooted deeply in classism, racism, ageism, and academic elitism – we’ve now done multiple readings that demonstrate this is not the case. We don’t have anything to assert that judges are inappropriately lenient toward these “criminal groups,” but we do have plenty of evidence (see COMPAS and Northpointe) that algorithms can swing hard the other way, predicting recidivism for marginalized people (with a particular focus on Black racial marginalization) despite no prior crimes or non-racialized markers of potential criminality. Despite this, Sunstein’s piece asserts, “And recall that if the algorithm is instructed to produce the same crime rate that judges currently achieve, it would jail many fewer African Americans and Hispanics, because it would detain many fewer people, focusing on the riskiest defendants; many African Americans and Hispanics would benefit from its more accurate judgments” (Sunstein 507). The ProPublica COMPAS readings we did on Monday were both from 2016, predating the 2019 publication of “Algorithms, Correcting Biases,” which means Sunstein made this claim in direct opposition to an available case study of the very judge-assist software they refer to. This information was available to them, and yet it was largely ignored. The closest “Algorithms, Correcting Biases” comes to addressing this reality is when Sunstein writes, “If the goal is accurate predictions, an algorithm might use a factor that is genuinely predictive of what matters (flight risk, educational attainment, job performance)—but that factor might have a disparate impact on African Americans or women. If disparate impact is best understood as an effort to ferret out disparate treatment, it might not be a problem, at least so long as no human being, armed with a discriminatory motive, is behind its use, or behind the factor that is being used” (Sunstein 508-509). What this boils down to, by my interpretation, is Sunstein saying that because the disparate outcome (increased incarceration of minority individuals) comes from an algorithm seeking to have the opposite effect (reducing bias in sentencing), it’s alright as long as a human wasn’t setting that bias forth, because intentions were good. That seems wildly backwards to me: lives are on the line, intention is absolutely not what counts, and neither these algorithms nor their creators deserve any kind of pat on the back for “trying their best.” Perhaps I’m being too cynical, but I found it very difficult, especially in the context of other readings we’ve done and conversations we’ve had, to read “Algorithms, Correcting Biases” as anything other than ironic. It sets out with the goal of perpetuating the myth that algorithms are somehow totally discrete from the broader systems they exist in, and it does so with maddening commitment.

    On the other hand, Chapter 5 of “Weapons of Math Destruction” took a position much more in line with the nature of the conversations we’ve had to date. It recognizes that by separating algorithm from creator and from systems of existing oppression, we breed great danger, because algorithms will reflect back, magnified, the biases they are given to train on and to use as their word-is-God reference point. There was a particular quote that stuck with me: “All too often they use the data to justify the workings of the system but not to question or improve the system” (O’Neil 95). An algorithm is not revolution. Taking human decisions and giving them to a computer to make more decisions that look a lot like human decisions does not wash our hands of the human biases that went into past decision-making; it only guarantees that more decisions reflecting those same biases will be made in perpetuity. If we want better, we need to do better, not pass the work off to a machine of our own creation that we can then scapegoat for failing to make change.

  17. I liked how the Predictive Policing is Still Racist article dove deeper into the sociological perspective on the algorithm, specifically when Richardson explained the relationship between Black and white people and reported crimes: white people are more likely to report that a Black person committed a crime, and Black people are also more likely to report another Black person committing a crime. Thus the reported crime rates for Black people are higher, because both parties are over-monitoring Black communities. Given this inconsistency, I’m glad that Chicago suspended the use of its algorithm, because it is so controversial and inaccurate. One of the most concerning things may be the fact that politicians and police officials still want to use this technology even after these issues have been raised. The other article, Algorithms, Correcting Biases by Cass R. Sunstein, explored exactly why such algorithms are biased and how algorithms can be designed to avoid racial discrimination. The article also explained the term disparate impact, which describes disproportionate adverse effects on a particular group, such as Black people. It is thus very important to consider the disparate impact of the code someone is developing on different groups.

  18. I found that Weapons of Math Destruction had some excellent ideas and points about the underlying reasons why algorithms that entrench certain practices around recidivism and police patrol can be an ethical issue for the communities governed by these pieces of software. Because we have been considering the issues with software, I had not been thinking of the alternative: that people would have to take its place. I think the “Algorithms, Correcting Biases” article made some very convincing points about the usefulness these algorithms have in creating more ethical environments when compared to judges. Whether you believe certain crimes should be policed in certain ways is an ethical standpoint in its own right, but in a functioning system you do want people to show up to their court dates, and it does appear that in some ways these algorithms can make a more “accurate” decision about whether someone will show up to a hearing. Some judges were seen to let people go too often and others too infrequently, each with different internal biases. The worry I have, though, is that it is much easier for us to see that people have biases than to see that a machine does. If we make the delegation of these kinds of decisions and regulations to algorithms too permanent, it could be difficult to change in the future.

  19. I believe that algorithms are definitely able to help avoid some of the biases, mainly cognitive biases, that normally occur. Such biases, as Cass R. Sunstein mentions, “can have a strong hold on people whose job it is to avoid them and whose training and experience might be expected to allow them to do so.” However, the inequality that occurs before a decision is ever made extends far beyond the single words “race” or “gender.” Parents or teachers might have different expectations of girls and boys, which leads them to different achievements and different pathways. All of these implicit factors lying beneath “gender” and “race” are hard to capture with a simple algorithm. This is what I think of the “discrimination on the basis of race and sex” that Sunstein mentions. Thus, I agree with Will Douglas Heaven that it is very hard to use data and algorithms in anything related to race and gender.

  20. It was interesting to read a defense of the use of algorithms in policing and the justice system, but I don’t think it validates their use in these contexts. Overall, the defense rests on a fundamental misunderstanding of the power structures and discrimination baked into the data used to build these algorithms, along with the racism built into the systems of policing and justice at the very base level. Even if the algorithm were “perfect,” I think it would be conceptually racist. Because Black people are systematically targeted by this system, an algorithm attempting to make decisions based on this system will necessarily reproduce that targeting.

  21. These readings go into the idea of algorithms and how we could make them unbiased, or whether they are already unbiased. However, one thing we have to consider is that when we feed the statistics of our crime rates and locations into an algorithm, wouldn’t it already be biased because of the people who chose whom to categorize as criminals? I thought it was very interesting that Weapons of Math Destruction includes the quote, “Innocent people surrounded by criminals get treated badly, and criminals surrounded by a law-abiding public get a pass.” This shows that even if we had an algorithm, it would only be based on people’s thoughts, opinions, and statistics; the resulting algorithm would be biased even if we were trying not to make it biased, because no matter what, we still need to give the algorithm an input to base its predictions on. So yes, I believe we would eventually make an unbiased algorithm, but first we would need to make our views of society less biased and learn more about individuals rather than the environments they live in.


