22 thoughts on “Reading for Wednesday November 29th”

  1. While most discussions of technology throughout this course have focused on corporate or individual use of technology, there has been little focus on government use of technology. Besides the discussion of social media’s influence on the distribution of information for elections, this course has not focused much on technology specifically used by governments to surveil, control, or kill. Speaking to this topic, Zolan Kanno-Youngs describes how the United States Department of Homeland Security used surveillance technology to monitor protests across the country in the wake of George Floyd’s death. While some politicians and government officials, like the first two leaders of the Department of Homeland Security, have critiqued the militarization of domestic policing, current government officials justify the use of the technology by arguing that technological limitations prevented the surveillance effort from being an “all-seeing, all-knowing, hover-outside-your-window Predator.” If the benchmark for surveillance being harmful is that it has to be all-seeing, then almost no form of surveillance would be considered harmful. In this case, David Fulcher, the deputy director for operations at the National Air Security Operations Center in Grand Forks, misses the point of why many are alarmed at this use of technology. With digital information being easily transferable, it is difficult to establish how any form of surveillance is limited. After all, information collected today could be provided to a government agency or private security corporation with greater technological capabilities, or it could be analyzed in unexpected ways thanks to future innovations in technology. Regardless, there is a lack of transparency about surveillance technology, which can be particularly damaging when used by governments. Especially in democracies, there is an expectation that governments and government officials are beholden to constituents, and surveillance technology can damage trust between constituents and government officials.

  2. I thought that the “The Problem with Blaming Robots” reading offered a drastically different perspective on technology than what we have previously read in this class, and in particular I feel that some of the fear-mongering that comes with advances in technology is overshadowed by the reality of technology. For example, when the article talks about the inability of robots to take over the education sector, I thought immediately of COVID and online learning, and the disastrous effects on young children that resulted. Even though distance learning wasn’t quote-unquote automated education, in many ways we could draw a parallel between the impersonal nature of attending class through a screen and learning from some automated system. For young children especially, the method lacks intimacy and flexibility; it is exceedingly difficult to guide and develop young minds when there is a lack of person-to-person communication. Even as a high schooler I experienced this feeling; I can confidently say I learned hardly anything my senior year, and I certainly took little interest in even trying. Part of our skill as human beings is our ability to connect with one another, and though we have read other sources that note the tendency we have to humanize machines, the results from online learning might suggest that even though we may feel some level of normalcy, technology inevitably lacks some effectiveness. Elementary students are far behind in reading aptitude, and the behaviors of students are increasingly immature and erratic; while of course these trends are also affected by other factors, the year or two in which children were absent from classrooms and learning from virtual teachers and online web assistants and YouTube and now ChatGPT, etc., cannot be discounted. Sure, robots may advance to places we cannot currently imagine; maybe someday they will be able to automate our humanity; but the optimist inside me hopes that we have some intrinsic value to our kind, and that we won’t ever truly be able to replicate that.

  3. Today we learned about how AI is being employed by the government for surveillance. We also learned about how AI is being blamed for being biased and perpetuating bias, when in reality it cannot hold bias in the traditional sense. We also read about how the people who make it will always have some inherent bias, whether they are aware of it or not. Making sure that we have an ample amount of checks and balances when going through the process of making and conceptualizing these new systems is the key here. On the topic of surveillance, it was off-putting to read about how the government justified its surveillance practices because it did not go past the line of being harmful and stalker-ish, which is such a wild justification and a euphemistic claim. People are still being surveilled without their knowledge, and that is what is harmful. AI has brought about significant changes in the way the government monitors its citizens, and while AI has the potential to improve surveillance practices, we must be careful to ensure that it does not infringe on people’s privacy and personal liberty. If this surveillance technology does eventually become capable of passing that line, I wonder what kind of new justification the government will give to continue to quell our understandably negative reactions to being surveilled without permission.

  4. The readings primarily focus on the utilization of technology by governments and society, highlighting their shortcomings. Benjamin’s perspective points out the bias ingrained in AI tools due to their creation by biased humans and data. This bias becomes problematic when these technologies are integrated into society.
    Kanno-Youngs looks at how the US Department of Homeland Security employed surveillance drones and airplanes during the George Floyd protests. Despite the inability of these technologies to individually identify protesters, the department retained video footage. Jay Stanley, a senior policy analyst at the American Civil Liberties Union, expressed concern that such aircraft could potentially deter people from participating in protests.
    I disagree with his perspective. If protests remain peaceful, there’s no justification for Homeland Security to make arrests based on video footage. The primary aim of surveillance isn’t to prevent protests but rather to deter harmful actions that might occur within “peaceful protests”. As long as there are individuals who engage in destructive behavior during protests, like damaging small stores or setting buildings on fire in many “peaceful” protests, it is important to have surveillance over protests to deter those actions.
    While I believe surveillance requires stricter regulations and guidelines, I don’t see surveillance itself as inherently harmful, contrary to what some individuals in the readings suggest.

  5. The New York Times article made some interesting comments about the economic impact of the technologies that we have been talking about. I think in a lot of discussions, we have sort of alluded to the idea that these companies are searching for profits while ignoring the privacy implications and impact of their technology. I think it is fascinating, though, that the article points out that there really has not been much increase in productivity despite these advancements. It points out that a lot of this technology replaces, or intends to replace, service jobs, which it ultimately claims do not produce economic value but rather move money around. I am not quite sure if I can agree with this; it seems a little bit of a stretch to argue. But I do agree with the article’s secondary claim that these jobs are not quite fit for automation, considering part of the money is in the human-to-human interaction these jobs provide. I won’t talk too much about the surveillance article because, while it is an issue and concerning, I find myself unsurprised considering how easy camera surveillance has become, and I do expect it will get worse without legislation. In the last reading, I felt like the author managed to summarize a lot of the thought that we have put forward in the class this semester. I think the author’s comment about “assum[ing] that self-conscious intention is what makes something racist” hits exactly the main point of a lot of our discussions. We cannot ignore the impact that these systems have on people simply because they exist within a system that encourages and shapes inequality.

  6. The line in the Not Blaming Robots reading that stated we often aim technology at our anxieties was really interesting. Here it is: “When we fantasize about robots, we correct for—or displace—our own anxieties, our sense that society, as structured, leaves too many humans unfulfilled, immiserated.”
    This was really interesting to think about. I know that a lot of automation has happened with the unsexy work that maintains our present-day society. But we don’t talk about that stuff; we talk about the robots that are going to take over our ruling class, and chatbots, stuff like that. I know social anxiety is becoming more and more of a thing, and I wonder what might happen if we continue to allow our anxieties to steer us. We probably can’t create a robot that is going to make everybody like us and eliminate all of the awkward little moments that can arise throughout the day, but we can create a chatbot that allows us to avoid those things. It also makes sense that we would attempt to automate the criminal justice system (throwback to the recidivism readings), considering that we know about its many, many flaws, flaws that often seem too large to fix. Is tech the solution to either of these anxieties? No, not really, not directly at least. But the tech is being developed anyway. The idea of tech development being fueled by our anxieties is an interesting lens through which to look at these ethical tech issues.

  7. It’s worrying – and telling about our society – that the default response to citizens protesting police violence and over-policing was to fly literal military drones and helicopters over them. I thought there was a great point near the end of that piece where an ACLU analyst points out that people don’t know what kind of technology these aircraft are carrying. So even if the government comes out afterwards and says they weren’t carrying weapons or facial-recognition equipment, and even if you’re willing to believe that, the effect of a thing literally called a “Predator” (whose entire design purpose was to be able to identify and kill individuals) flying over you is definitely disconcerting. Also, I thought the whole “the technology isn’t there” line was fascinating because: a) of course that’s what they’d go to, b) it implies that they’d do it if it was, and c) the government spends a shit ton of effort to seem like they really do have that kind of tech (and it seems like the Reaper probably does), so you’d understand why people would think that…

  8. I found Chapter 1 of Ruha Benjamin’s Race After Technology to be a really well written and important read. We’ve often discussed how technologies can exhibit racism, sexism, and really all forms of bias and prejudice, and that has often been accompanied by verbiage about the algorithms themselves being biased. However, as Benjamin points out, algorithms cannot hold bias in the traditional sense, but they can very much exhibit it, in the ways they are built on the same structures as the broader social fabric. This then leads into the perception that the programming of racist robots must be an intentional thing, discussed only in the sense that it means to do harm. However, Benjamin is quick to point out that “Still, the view that ill intent is always a feature of racism is common: ‘No one at Google giggled while intentionally programming its software to mislabel black people.'” Not all encoding of bias is intentional encoding of bias, and yet all of it is damaging and damning. The entire last paragraph really hit this point home for me (as I suppose concluding paragraphs often set out to do). It reads: “Ultimately the danger of the New Jim Code positioning is that existing social biases are reinforced – yes. But new methods of social control are produced as well. Does this mean that every form of technological prediction or personalization has racist effects? Not necessarily. It means that, whenever we hear the promises of tech being extolled, our antennae should pop up to question what all that hype of ‘better, faster, fairer’ might be hiding and making us ignore. And, when bias and inequity come to light, ‘lack of intention’ to harm is not a viable alibi. One cannot reap the reward when things go right but downplay responsibility when they go wrong.”
    This emphasizes that, intentional or not, bias is not passive; it is a zero-sum game where some lose and others win. Passing off bias as something the machines do, rather than something humans remain indubitably responsible for, is merely a way of escaping our own guilt while we develop new ways to oppress, in lieu of even attempting to undo institutional harm.

  9. Each of the readings today provides us with different insights regarding robots. First of all, the first chapter of ‘Race After Technology’ by Ruha Benjamin primarily focuses on demonstrating the biases that AI exhibits and how this phenomenon will translate to robots too. She starts the reading by mentioning the first beauty contest judged solely by an AI model, whose results showed that the winners across most age ranges were predominantly white. As we have previously discussed in this course, Benjamin associates this with the idea of using data that is already biased, and in addition, she explains how biased the industry itself is.

    The second reading focuses on another concern regarding robots: surveillance. During the George Floyd protests, the Department of Homeland Security dispatched helicopters, drones, and airplanes to surveil the protests. Although the officers claimed it was to have a better understanding of the situation and to be alert if anything happened, most people disagree with this idea. They mention that there will always be an aspect of militaristic domination in seeing these sorts of machines, and what concerns them the most is how the information recorded could be used in the future. The last reading talked a little bit about the history of automation and why different perspectives exist on whether robots will take jobs away from people.

  10. Today’s readings returned to something previously discussed: bias in the tech that we create. However, contrary to previous learnings/readings, it was really interesting to learn about how humans are truly at the root of all the ‘robot’ or AI problems we create. The surveillance of the George Floyd protests, and the subsequent years-long storage of this footage and the data collected, are sure to be used in some sort of policing algorithm to track those surveilled. The decrease in human productivity, as we read in “The Problem with Blaming Robots for Taking Our Jobs” by Jane Hu, is attributed to humans creating a market saturated with supply and decreasing demand. This decrease in demand can almost be attributed to the stagnation and fall of middle- and lower-class pay in the face of steeply increasing inflation, while the upper class has seen its incomes continue to rise with little falter.

    The issues of robots, machine learning algorithms, AI, and many other so-called independent technologies have been engineered by human hands and human minds. The root of racism in robots is racism in humans. I think our class has been critical in really hammering in this important notion. 395 has been instrumental in keeping us in the headspace that while these biases come from biases embedded in society, tech cannot yet fix them, but we can. It is up to us not to go and change the whole world, but to keep what we learned and put it somehow into practice in the professional world.

  11. Ummm… yeah. I think all three readings today highlight how technology is often used as a tool of domination that preserves entrenched systems of power while obscuring the human. No one is arguing that technology on its own is inherently bad, but, as seen in the readings today and in almost every reading in this class previously, technology and computation are often inextricable from the social contexts in which they are deployed. What is more, these technologies allow for greater surveillance and harm to people while automating the same logics of oppression used prior to their development (like the discussion of the militarized police’s surveillance of citizens in the ACLU article, and Benjamin’s discussion of social credit and its relationship to the history of white wealth hoarding in the US). Even in the case of the article by Hu, all they do is point out that while technological affordances have led to increased precarity in (middle-class) jobs that were not previously threatened, this is a continuation of the logic of capitalism (which is first and foremost concerned with total control to ensure profit, no matter how much it is couched in a quest for efficiency).

  12. Throughout this course, the focus on technology primarily centered around its corporate and individual applications, with limited attention given to its role in government. Aside from discussions about social media’s impact on election information, little consideration has been given to how governments employ technology for surveillance, control, or even lethal purposes. Zolan Kanno-Youngs highlights the United States Department of Homeland Security’s use of surveillance technology during nationwide protests following George Floyd’s death. While some criticize the militarization of domestic policing, current officials argue that technological constraints prevent omnipresent surveillance.

    The critique against surveillance often revolves around the idea that it must be all-encompassing to be harmful. However, the ease of digital information transfer makes it challenging to establish limitations on any surveillance form. David Fulcher, from the National Air Security Operations Center, misses the concern: the lack of transparency in government surveillance technology, damaging trust in democracies.

    Contrastingly, the “The Problem with Blaming Robots” reading offers a distinct perspective on technology. Despite fear surrounding technological advances, the article disputes the notion of robots taking over education. Yet, the author notes the impersonal nature of online learning during COVID, paralleling it to automated education. The absence of personal connection in virtual classrooms negatively impacts learning, as seen in the declining reading skills and increasingly immature behavior of elementary students. The speaker, drawing from personal experience, emphasizes the importance of human connection in education and questions the efficacy of technology in replicating human values. The optimistic view is that, despite technological advancements, humanity’s intrinsic value may resist full replication.

  13. Regardless of the intent of surveilling protestors with military drones (i.e. to ensure “safety”), it absolutely feels like a violation of privacy to monitor demonstrations, regardless of the “sensitivity” of the data collected. We’ve previously discussed how the criminal justice system and police force are deeply flawed systems, and incorporating algorithms into relevant decision-making processes just reinforces the biases within these systems.

    I think a lot about the sentiment carried by the phrase “they’re stealing our jobs.” Most often I’ve heard this used pejoratively towards immigrants by U.S. born citizens who are anxious about the spiraling economy and associated detriments (higher taxes, lower wages, worse contracts). The reality is the entirety of the working class shares the same “enemy,” which is racial capitalism and the sub-systems that work to encourage wealth hoarding. But it is easier to blame something, or someone, more tangible or accessible. I think having robots perform menial tasks should theoretically be productive for our society and encourage more sustainable, appreciated labor for everyone, but this proves to be difficult when so much of the working class depends on this kind of labor to acquire basic necessities.

    Additionally, on the note of surveillance, one tension is that increasing robot labor would certainly increase surveillance, either explicitly or implicitly.

  14. I feel like the Race After Technology excerpt reiterated a lot of the points we’ve been making this semester about technology being just another way to uphold systems of oppression that are already in place, but without a form of accountability, while adding more specific examples to the discussion. I especially liked where the author asked whether there is only one theory of the mind, and whose mind we are modeling, in reference to deep learning. The reading talked about how indifference and lack of concern, rather than outright hate and intention to harm, also perpetuate racism. I want to add that the VICE reading from a few weeks ago points out an additional way bias may be upheld that is not necessarily malicious intent: the exploitation of laborers. Even if someone does recognize that the machines they program or the variables they input perpetuate historical forms of racism, they may just end up inputting what their employer coerces them to anyway, since their income and chances of getting hired in the future are on the line.

  15. The first reading highlights the intersection of technology, privacy, and ethics in surveillance. The deployment of aerial surveillance tools like drones and the integration of their feeds into networks like “Big Pipe” demonstrate advanced capabilities in real-time data collection and sharing. However, it also raises questions about data privacy, the potential for misuse of surveillance data, and the balance between security and civil liberties. The technology’s neutrality is contrasted with its application, which can either support public safety or infringe on personal freedoms. The second article delves into the complexities of automation, AI, and their impact on the workforce. It highlights the difference between the technical feasibility of automation and its economic viability. This underscores a fundamental point in AI and robotics: while technological advances make it possible to automate many tasks, the decision to do so often hinges on economic factors, not just technical capability. The article invites a nuanced understanding of the role of technology in society, emphasizing that the effects of automation are more intricate and tied to broader economic and social contexts than often perceived.

  16. “The allure of objectivity without public accountability”
    “But, as she documents, these technical fixes, often promoted as benefiting society, end up hurting the most vulnerable, sometimes with deadly results.”
    These two quotes from Benjamin’s “Race After Technology” succinctly sum up a lot of the themes we have been discussing throughout the course of this class. First, there is the way algorithms and datasets are judged by the public to be somehow more objective than human decision-making. The quote really illustrates another motive behind offloading some of the messy work of decision-making and thinking critically to robots or algorithms: the ability to operate without accountability, without blame. This really brings to mind how conversations about “cancel culture” and deplatforming have been spun into a classic moral panic. If possibly “cancelable” actions are offloaded to AI or robots, hegemonic power structures remain intact. The (mostly white) rich get richer, and bias is reaffirmed.

    The second quote really reminded me of one of the articles we read earlier about the techno-futurism and belief by some technologists and computer scientists that we can “tech”/ program our way out of (for example) climate change, without addressing the underlying patterns of human behavior and consumption that are truly responsible in the first place.

  17. The readings today only further highlight how automation is being used in ways that will harm people and their livelihoods.

    As others have mentioned, even if the intention of the drones and helicopters deployed during the protests was not to threaten protesters and invade their privacy, both of those things still happened. No matter the intention, deploying expensive aircraft with sophisticated sensory technology over peaceful protests would only instill fear in protesters; it is a display of authority and a violation of the First Amendment. But I feel this deployment would be very different if the drones were actually automating the jobs of police. Unlike other implementations of robots, the drones were not replacing anybody; they were an extension of the already extensive policing during the Black Lives Matter protests. If the thousands of officers who assaulted peaceful protesters were replaced with drones that only directed first responders to where they were actually needed, perhaps people would be more open to them. Still, it would be indicative of a surveillance state, and many questions are still left unanswered regarding how the footage they record is used and how bias is embedded in their systems.

    I’m still not sure where I stand on the issue of automation in the workforce, mostly because there are so many factors to consider when discussing the topic. What I do know is that automation in its current form has strengthened our bizarre and inefficient capitalist system. It’s apparent that we can’t continue to automate working- and middle-class jobs without drastically changing its implementation and our current economic system (I find it interesting that the article uses words like “fear” and “force” to describe the thoughts of some writers regarding universal basic income). I think we should be worried about the implementation of automated systems in a capitalist economy, regardless of whether they are creating or taking jobs.

  18. I think that the use of military surveillance technology on domestic issues is an incredibly discouraging sign for the state of American democracy. Most Americans seem to simply not care about the increased militarization within our borders and by police. This, despite the fact that political events like January 6th demonstrated that some Republican voters have no interest in maintaining a fair democracy, and would instead prefer a dictatorship.

    The reading discusses how these military-grade surveillance drones were used during the George Floyd-inspired protests in 2020. The author interviewed several members of the Department of Homeland Security involved in the surveillance, who claimed their only goal was to monitor for outbreaks of violence. However, even if they claim this as their intention, I still think this type of civilian monitoring sets a dangerous precedent. We’ve already seen in countries like China, which do not have a strong democracy, that surveillance is used indiscriminately by the government to control the population. I fear that if there were a change in power in the U.S., this same technology could be used similarly, before the American public was even aware of any change.

  19. This is the first I’m hearing of the aerial surveillance of the George Floyd protests, though I’m not surprised. Every time I hear someone saying they “support peaceful protests”, I get riled up. Decades of propaganda have seemingly convinced the majority of the population that peaceful protests are always sufficient and inherently more noble than violent protests. In response to the grotesque violence perpetrated by the state, I believe advocating for peaceful protests is inappropriate; they are a half-measure. Let us not forget the transformation of LGBT rights that occurred as a result of the Stonewall riots, or the standardization of the 5-day work week, minimum wage, paid holidays, etc. that can be attributed to violent protests in the wake of the labor movement. It is counterproductive to protest state-sanctioned violence by acquiescing to (and thereby enabling) the state’s monopoly on legitimate violence.

  20. I think the article from The New Yorker was a pretty informative one that explained the relationship between automation and human work a lot more clearly than I originally understood it. My grandma and mother both worked in factories in Pennsylvania growing up, and my grandma has lamented in the past that the jobs they worked are now completely gone. This is only true up to a point. I often fail to consider the recalibration caused by people seeing their opportunities shrink due to successful cases of automation and accepting ever lower-paying jobs. I’m sure that many corporations benefit greatly from the perception that automation is going to completely replace all human workers, and use that to continuously worsen working conditions.
    The surveillance of protests using aerial technology is another terrifying addition to the suite of surveillance tools we have allowed most governments in the world to run free with. It is truly scary that we do not know exactly the capabilities of whatever may be flying over our heads. We must trust the police, the exact body that is being protested against, to not abuse technology such as live face recognition cameras and phone tracking through GPS (or other means). It’s interesting to now see protest guides not discussing the actions being taken, but discussing ways that people can sufficiently protect themselves from invasive surveillance technologies. Now, even to express oneself peacefully, measures to protect your anonymity must be taken.

  21. The reading regarding aerial surveillance during the protests following George Floyd’s death stood out to me the most because it raised critical questions about the intersection of technology and social justice. While the deployment of drones, helicopters, and airplanes by the Department of Homeland Security may showcase the “right technology” for that occasion, the timing of such extensive surveillance operations is deeply troubling. The focus on monitoring Black Lives Matter protests prompts reflection on the urgency and necessity of such measures. The surveillance, spanning 15 cities and accumulating over 270 hours, reveals a disproportionate allocation of resources that could be perceived as an infringement on privacy rights and an intimidation tactic against those protesting against racial injustice. This situation underscores the broader debate over the militarization of law enforcement and the role of federal agencies in monitoring civil demonstrations. It also emphasizes the need for a careful balance between security concerns and safeguarding the constitutional rights of citizens, particularly during a period of heightened sensitivity and outcry for social change.

  22. In the reading “U.S. Watched George Floyd Protests in 15 Cities Using Aerial Surveillance,” I found it very interesting how people were using technology. The fact that they had to surveil a peaceful protest using drones is ridiculous. What is the purpose of having aerial surveillance? One thing the author mentioned was discouraging people from protesting, and I found this a possible answer. However, I also find it ridiculous that they are abusing their “advanced” technology to discourage an important protest such as this, with the excuse that they were making sure there were no arsons or robberies. It brings up the question of how advanced technology will become and for what purposes it will be used. Speaking of advanced technology, we move to the next reading, “The Problem with Blaming Robots for Taking Our Jobs.” A lot of movies and people talk about how machines and AI will take our jobs, but we see that the real reason is overcapacity, and that the mindset of technology stealing our jobs is false. Still, part of me says that machines will be able to take certain jobs, like factory jobs where assembling things won’t require people at a certain point. I thought it was interesting to read about this, as we get to see an economics point of view and how it really isn’t technology but just our own faults. However, we do see the frequent problem of capitalism and how companies change depending on how much profit they can receive based on what they need. I find this mindset very absurd, with how companies are so fixated on income and nothing else.



The views and opinions expressed on individual web pages are strictly those of their authors and are not official statements of Grinnell College. Copyright Statement.