Reading for Friday, September 15th

Optimize What? (https://communemag.com/optimize-what/)

You are not expected to understand this, edited by Torie Bosch. Chapter 9 (https://www.degruyter.com/document/doi/10.1515/9780691230818/html#contents)

Your Computer is On Fire, edited by Thomas S. Mullaney, Benjamin Peters, Mar Hicks, and Kavita Philip. Introductions (one is by Mullaney, one is by Hicks)

Optional: Code is Law (https://www.harvardmagazine.com/2000/01/code-is-law-html)

21 thoughts on “Reading for Friday, September 15th”

  1. A few thoughts, mostly on Optimize what:
    The desire to fix things, or to find “correct” solutions, is a guiding force for me personally, one that helps in some regards but is quite detrimental in others. I often find myself devoting unnecessary brainpower and time to the pursuit of a perfect solution. This article spoke to me, if only because it reminded me that not all problems have perfect, attainable solutions. Sometimes, problems can be left unsolved.
    Additionally, some of the points made in “Optimize What?” relate to the content of “Dying to Be Competent,” in that Cottom criticizes the endless parade of tech solutions to an endless number of problems in much the same way Wu does in his piece.
    In the end, though, I felt shortchanged by Wu’s piece, since his critique of computer science and his eventual “solution” reflect some of the same issues he writes about. What makes computer science applied to the problems he deems “socially beneficial” any more effective than the optimization of markets, etc.? He never specifically says why, only that the current power of computing affords the ability to progress as a postcapitalist society—which is reminiscent of his point that CS apologetics maintain that algorithms are not inherently flawed and have the potential to do good when implemented by the right people.

    Reply
As discussed in Jimmy Wu’s piece “Optimize What?,” the computer science curriculum often does not address the social consequences of creating algorithms. When it does consider the negative consequences of these algorithms, that consideration is often relegated to a single course, while the other courses in a computer science department remain unaffected. At the same time, Wu explains that this mentality of assuming good social outcomes will follow from optimization began in the mid-20th century. I believe this faith in optimizing social or economic outcomes is partially due to the prevalence of individualist thought in Western society. Being concerned only with the individual, whether an individual person, group, or corporation, often means ignoring broader societal implications.

In my History of Early Modern Philosophy course at Grinnell College, we explored the underpinnings of Western thought, including its emphasis on the individual over society. Based on a research paper I completed, many academics argue that colonial motivations influenced the philosophy of John Locke, for example. For those drafting colonial constitutions, like Locke, property was a political matter in that it expanded the land occupied by the colony in question. Locke’s philosophy thus encouraged the acquisition of property by those moving to North America. One argument made in his Second Treatise of Government was that no individual could infringe on the property rights of another, because every individual would only ever use a small fraction of the vast expanses of the North American continent. This promotes the idea that individuals should not be limited because they are already limited in power.

As this idea of individualism permeates Western society, individuals may continue to use it as a framework for understanding their work in software. As Kimberlé Crenshaw discussed in her TED talk, frameworks influence what and how people consider themselves and others in society. Having been framed as limited in power, individuals may therefore not consider the broader implications of their work.

    Reply
I find the points Wu makes in his post very refreshing. This was written in 2019, about four years ago, but I think little has changed, at least in the way we seem to approach learning. I do think this class we are in right now represents some of the first steps toward creating that change. Still, in most classes that do not focus on the ethics of computer science, when ethics does come up I still feel we talk about the ethical representations of what we create and not the societal conditions that lead to such issues. The donor example was very compelling. The idea that we as programmers are expected to focus on protecting the anonymity of donors rather than question why donors are allowed to influence elections with their monetary privilege is a very worrying way of approaching computer science ethics. We cannot remove ourselves and our interactions from ethics. Computer scientists cannot create ethical solutions in a space that is unethical. The solution in some cases lies beyond computer science, but that does not mean that ethics stops at that line. Computer science is ultimately one piece of the ethical creation of software.

I definitely think the chapter from Bosch’s collection on the computer science community and interpretation was an interesting addition to this thought of breaking outside the computer science sphere. I found it interesting that a culture of assuming internal knowledge developed even when the creators of the comment did not understand it themselves. I think it very much reflects the outward versus inward knowledge surrounding ethics: we have an internal idea of ethics in the computer science community that does not reflect the way the outside world understands ethics, and the two are not yet intertwined.

    Reply
“Optimize What?” was a very thought-provoking and interesting reading. I definitely agree that many CS programs give their graduates an incredible level of technical skill (though sometimes not complete understanding) but rarely discuss the implications and dangers behind these technologies. This imbalance, coupled with the traditional Silicon Valley mindset of expansion and profiteering, results in a workforce that does not consider the implications of its actions and commits to them aggressively to maximize the chances of making more and more money. Google’s Dragonfly exemplifies this perfectly: a translation of an extremely profitable product, human rights violations included, to make it acceptable in a new, large market.

    Though the ML/DL technology discussed can generally be poor or, at worst, dangerous, I appreciated the philosophical idea of them being used to plan “a complex peacetime economy” in socialist countries. I can’t say I would be interested in being the first to live there, but I am surprised I haven’t heard more about simulations or research in this area specifically.

    I enjoyed Cassel’s chapter as well. I find myself getting really into the history of these foundational projects of Computer Science, as the people behind them were pushing the boundaries of technology in an extremely inventive and creative way. It’s probably best for projects to be professional and readable, but I will always get a kick out of flashes of genius framed in confused and foul language.

    Reply
As someone who never intended to be a CS major, computer science culture, and the one existing here at Grinnell in particular, drew me in. The camaraderie of a bunch of people typing away late into the night to find a bug or type up a lab is something special that is not found in other departments. In some sense, for much of my life, I have thought of Silicon Valley in a similar way: just a bunch of nerds typing away, collaborating, and having fun trying to figure out the most complex code in the world. The first reading, “The Most Famous Comment in Unix History: You Are Not Expected to Understand This” by David Cassel, taps into this understanding. The comment is a metaphor for the greater computer science community I had known and accepted to be empirically good and collectively confused.

“Optimize What?” by Jimmy Wu is a reminder that we may think we know what community we are a part of but commonly forget what society we are in. Silicon Valley really represents the people who have figured out the monetization, and thus the capitalistic export, of computing. Our computing heroes are the champions of this world and rarely the socialist underdogs aiming to better society as a whole through computing. Jimmy Wu’s words about a workers’ revolution in Silicon Valley and a socialist rise were thus appealing. Computing and machine learning have become so complex and complicated that only a few companies know their true potential. The capitalistic corruption of Silicon Valley has turned potential into a projection of revenue, which is why every big tech company on the planet is racing toward AI. Nevertheless, we all understand, to some degree, the threat of AI but employ it anyway because we have been indoctrinated into this same neoliberal world. Computer science “teaches the axioms and methods of advanced capitalism, stripped of the pesky political question” and is the newest foundation for the faults of capitalism, because the knowledge gap is so wide it is almost above questioning.

    Overall, I found Jimmy Wu’s words to be really intriguing. I think many of us were raised with the view of Silicon Valley as being a bunch of young tech rebels changing the world for the better but now have grown into the understanding that it is the face of the modern corporate machine. My perspective, at least, has been blinded by the latter whilst knowing the former. As a result, I do not code or think of myself as being a part of the capitalist machine on the third floor of Noyce. However, the objective nature of problem-solving that Wu mentioned is both carried into and out of the classroom. Thus, even now, we stand at the forefront of the newly recognized technological face of a neo-liberal society.

    Reply
  6. The three readings for class explore different aspects of the intersection between technology and society. “Optimize What?” critiques the limitations of current debates around tech ethics and argues for a new vision of “communist computer science” in service of people and the planet. “Code is Law” discusses the regulation of behavior in cyberspace and the need for democratic control over the design of code and architecture. “You are not Expected to Understand This” explores the significance of a famous programmer’s comment and what it reveals about coder culture. These readings highlight the importance of understanding the values embedded in technology and the need for democratic participation in shaping its design and implementation. They also raise the question of who benefits from technological advancements and call for a reconsideration of the role of technology in society. Overall, these readings prompt us to think critically about the relationship between technology and society and the implications of technological advancements for the future.

    Reply
“Optimize What?” was a reading that touched on many questions that have been on my mind as a computer scientist for a long time. When someone thinks of computer science, they think of a field of study intent on optimizing and automating the tasks it is given in a purely objective and scientific way. I had always wondered why there was seemingly so little room in computer science academia for the study of the social consequences of what we code. According to Wu, this is because of how we as computer scientists are taught to think: purely through an analytical and optimizing lens, with virtually no capacity to consider the political and societal effects of our work.

When reflecting on “You Are Not Expected to Understand This,” the piece perfectly describes the humanness present within the technology that we as computer scientists make. The media in recent years have consistently played up the (low) possibility of technology completely taking over our lives, but what they fail to realize is that behind every piece of technology are the humans who created it. Technology in its current state has been cast as the opposition to humanity in its imperfection. But there would be no technology without humans, and we are reminded of that when studying the code of the coders before us, who left behind small remnants of their humanity within the technicality of their work.

    Reply
  8. I found both readings intriguing and worth reflecting on. To begin, “OPTIMIZE WHAT?” provides a profound analysis of the ultimate purpose of computer science and how it has evolved over time to the point where we are optimizing everything except human well-being. Jimmy Wu addresses these issues from various perspectives, including an examination of the academic origins of the problem, a comparison with the study of economics, and an exploration of the ethical dilemmas faced by computer scientists in the industry. One particularly striking point in this reading is when he states, “The study of machine learning offers a stunning revelation that computer science in the twenty-first century is, in reality, wielding powers it barely understands.” It’s astonishing to contemplate how one of the most impactful technologies we have implemented is still not fully understood. Knowing this can make you feel uneasy and vulnerable, as it’s impossible to predict what could go wrong. In other words, you wouldn’t trust a pilot who doesn’t have the procedures required to fly a plane memorized.

    Wu also asserts, “The cold science of computation seems to declare that social progress is over—there can only be technological progress.” I believe this encapsulates why he believes we need to start thinking about “a communist computer science.” We have reached a point where the only thing that seems to matter is continuous advancement, regardless of the potential consequences. If we don’t take action soon, it might be too late.

    Regarding the reading “You are not expected to understand this,” I find the most compelling message it conveys is that no matter what technology humans build, it will always have an impact on humanity in some way or another.

    Reply
  9. Jimmy Wu’s deep dive into the prevailing techno-utilitarian ethos of modern computer science curricula struck a chord with me. As a computer science major, I can attest to the increasing dominance of the “optimization mindset” and how it molds not just our thinking but our ambitions and aspirations as well. Universities like Stanford and Berkeley have historically been the breeding grounds for innovation in technology. However, as Wu rightly pointed out, the current curriculum increasingly stresses the technical aspects, often to the detriment of the ethical or societal consequences of our work. As computer scientists, we’re trained to perceive problems as variables, constraints, and objective functions. Yet, the real world often doesn’t fit neatly into such models. Reflecting on my own journey, I recall multiple occasions where the optimization problem was the central focus. Whether it was designing an algorithm to efficiently route data packets in a network or maximizing the accuracy of a machine learning model, the end goal was often numerical supremacy, detached from broader humanistic considerations. The introduction from “Your Computer Is on Fire” delves into the ethical and social implications of technological advancements and their unintended consequences. The author explores the role of technology in modern society, drawing attention to how certain tech innovations, particularly those by big tech companies, can have unforeseen and sometimes detrimental impacts on society.

    Reply
  10. “Optimize What?” by Jimmy Wu is an interesting article. One of his main arguments is that computer scientists need to consider the political and societal effects of technology development. He criticizes how computer scientists often think their job is “to solve whatever problems we were given, not to question what problems we should be solving in the first place. And we learned to do this far too well.”
    I disagree with his argument. I believe that computer scientists’ job is to focus on research and technological development. It is the policymakers’ job to consider the consequences of technology in society by setting guidelines and regulations. Most new technologies can be used with ill intentions, but they greatly benefit society too.
    For example, generative AI such as ChatGPT was introduced to society in the last two to three years. Some people with bad intentions are already using generative AI for criminal purposes such as scamming, creating malware, and spreading misinformation. However, generative AI provides numerous benefits to society; a lot of people use it to make their businesses more efficient. It is not the fault of computer scientists that some people use generative AI for unethical activities; rather, it is the responsibility of policymakers to restrict such usage.
    If computer scientists put more weight on considering how technologies impact society, they would be less likely to develop innovative products because of hesitation. Their primary focus should be on researching and developing new technologies, not on making political and societal decisions.

    Reply
  11. I really liked the reading “Optimize What?”, which brings back the recurring idea of how we generalize most people’s problems into one big grouping. A quote that I thought resonated with the whole reading was Boyd’s “Everything is an optimization problem.” From the idea of generalization and grouping, we move on to the idea of trying to optimize every problem possible. Yes, optimizing is good, but we fail to notice the flaw that in doing so we ignore a lot of individuals. By generalizing we may be able to optimize each problem, but we fail to help those in real need. We label each problem as either solved or not, lacking any consideration of actually helping, and instead hold onto the idea of making every process fast and efficient. I notice that even in class we are taught to make every piece of code efficient, and that is the end of it. We are taught that simplifying is good. These lessons gear our minds toward efficiency and eventually lead us down the path of techno-solutionism. It may be a stretch, but I feel this connects to our other reading, “You Are Not Expected to Understand This.” We see that there is a community of computer science students who take the phrase “you are not expected to understand this” as a challenge. Although this doesn’t apply to all students, there are still quite a few who are intrigued by the challenge and want to take it on. This sort of mentality has pros and cons: the pro is that they are willing to take on a challenge no matter how hard and difficult it is; the con is that it seems to be the same mindset as in “Optimize What?”, where these students think more about how they could solve the problem in the best way possible than about who it benefits and helps.
After having read both of these pieces, I found it very interesting how our minds work after being taught in a certain way, so much so that I have questioned my own way of thinking and how I process certain things.

    Reply
It is crucial for computer scientists to understand what they are doing and whether the data, simulations, and ethics behind the code make sense, especially when it comes to data validation. It is also necessary to help others understand the code by documenting it; that way, other professionals can get involved and interpret the code from a different perspective. Moreover, documentation, which gives others a chance to learn the code, is important for society and for the later researchers who will continue the work. This summer, I faced the problem that there was no clear record of how some of the data had been processed and utilized, which took me a lot of time to understand before I could pick up the work where my predecessor left off. So when I read the story “You Are Not Expected to Understand This,” it made me feel that a lack of opportunities to learn specialized code does not contribute to the community in any way, and that many people end up spending redundant time in order to build similar algorithms.

    Reply
  13. There is clearly a lot that we do not know about what we are doing as computer scientists. I already knew this about myself; it is interesting that it is an industry-wide phenomenon. My assumption is that there is always somebody who understands what is happening, but that’s a hard assumption to justify when it comes to large language models and other AI-driven systems that are probably at least a little out of our hands at this point.
    A point that still sticks out to me is the politicization of the CS/tech world. The characterization of the CS world as in line with capitalism and capitalist realism was honestly a little crazy to think about. It makes me think of Jurassic Park, specifically the line about being too busy asking if we could to ask if we should. I know that’s a pretty classic line thrown around about innovation in general, and yet it’s never really taken all that seriously. When Wu asked in class whether we should even be trying to solve a given problem, he was simply dismissed. Nobody wants to have that conversation. Imagine if you were in a technical interview and you questioned the interviewer about why they even wanted to solve the problem in the first place. How would they respond? It probably depends on the question and the company.

    Reply
  14. In reading the introductions to Your Computer is On Fire, particularly the one by Mullaney, one quote stood out to me: “No matter the problem, it seems, a chorus of techno-utopian voices is always at the ready to offer up ‘solutions’ that, remarkably enough, typically involve the same strategies (and personnel) as those that helped give rise to the crisis in the first place. We can always code our way out, we are assured. We can make, bootstrap, and science the shit out of this” (Mullaney 4). Indeed, it is comforting to think that we, as computer scientists and technologists, the same people who created so many of the technological perpetuations of systemic injustice, can use the same techniques as always to somehow undo this harm. However, this is naive at best and purposefully ignorant at worst: it is inexcusable to label Black faces as gorillas, regardless of whether or not the next update fixed it; the harm is as much in letting that happen in the first place as in letting it stay, and that is not something that can be erased by a git commit. We can’t blame harmful performance on bad training data when we were the ones who provided that data; this is on us. As Mar Hicks points out in her intro, “Since those problems disproportionately harm those with the least power in society, there is usually a long lag between the problems being noticed or cared about by people in charge and becoming seen as important enough or disturbing enough to warrant solving” (Hicks 14). It is unfair and irresponsible to brush off the great harms we do by citing technical limitations or ignorance of the issue; someone knew, someone told us, and we didn’t listen, and this too is on us. That is not even to mention that, beyond the social harm, as Mullaney puts it, “nothing is virtual,” and the unsustainable, snowballing practices of computing as a field are doing great damage to an already suffering environment. Fire in the literal sense, fire as a crisis, and fire as propagation: all of these interpretations hold, and they tell us that the problem not only is here now but will remain, more foreboding by the day, until we decide to take a hard pivot on everything we know to stop it. As Mullaney asserts, “the time for equivocation is over”; the problems aren’t potential, they are the current reality.

I found that “Optimize What?” connected nicely with this because, while most of these issues are part of the public discourse around the tech industry, computer science academia manages to inappropriately distance itself, “belying the fact that its own intellectual tools are the source of the technology industry’s dangerous power” (Wu). The way computer science tends to be taught, which is to say the lines of thought that focus so heavily on “utility functions, symbolic manipulations, and objective maximization” with little room for seeing the broader, non-explicitly-technical context, got us here. It breeds the naivety that allows us to believe that, circling back to where I began, we can “make, bootstrap, and science the shit out of this.” It will take a cultural shift not only in industry but also in the most basic pedagogical approaches to computing for ethical computing to ever become a genuinely understood and standard practice, but that shift is deeply necessary. Until then, we will remain, even if we protest, stuck in a techno-solutionist, techno-utopian, techno-delusional landscape, doing unthinkable harm while convinced we’re heroes.

    Reply
  15. Jimmy Wu’s “Optimize What?” article communicated issues within the computer science community that arise when ethics and societal implications of technology are pushed aside or seen as another group’s problem.

Rarely, if ever, do the computer science or statistics courses I’ve taken discuss the societal implications of any of the data or technology we interact with and create ourselves. There is the occasional example of the obvious misuse of some function or dataset, and it’s usually met with a technical solution. But often, computer scientists and statisticians seem to think that even though we’re the ones creating this technology, we are far removed from its consequences. We’re given problems to solve and taught ways and methods to solve them, but not always told why we’re solving them. Furthermore, we’re shown how our methods work computationally, but not how they work in the context of society (or even given a brief note on their development).

    I think this way of learning in which we solve problems given to us in the context of basically a vacuum perpetuates the thought of computer science and statistics as being “unbiased” or leaves people thinking they’re not even remotely responsible for any of its effects on society. I’m not saying we are solely responsible for everything that has any relation to AI/ML. But when developing new technologies like machine learning, people should stop and understand it before it continues to develop further for the sake of technological advancement, as you can’t always fix or undo damage already done.

    Reply
I really liked the Harvard Magazine article (likely helped by my particular fancy for the overlap of computer science and politics) because I thought it did a great job of providing a clear and direct takedown of the libertarian internet-cowboy philosophy. While I may still put on my boots occasionally out of habit, I thought the reframing of regulation as a choice between an elected, accountable government and some random people who took a coding class once was enough to convince any logical person to hang up their saddle. In thinking about the solutions Lessig provides, I found the idea of protocols with built-in identification less horrible than I expected. It does seem entirely reasonable that you’d have to provide the basic information required to access sites that are restricted by law. Honestly, it is kind of incredible (from a lawyer’s or insurance-policy perspective for one of these companies) that a simple “Are you over 18?” fulfills that legal requirement. I know other countries are trying to make stricter laws, but now I’m wondering what Wu (the author of the Commune Mag piece “Optimize What?”) would think of Lessig (the author of the Harvard Magazine piece), since Lessig mainly discusses technical solutions. I think Wu would call this techno-solutionism, but I also didn’t think Lessig’s piece was bad or that many of us would disagree with it. I thought Wu’s piece was less clearly true, so I’m interested to see what we discuss in class.

    Reply
The obsession with optimization that Jimmy Wu writes about is something I have thought about for a while. As computer scientists, we are constantly looking for the most efficient solution possible while using the fewest resources. This is how we’re trained to think, and not without good reason. But it’s hard to escape the neoliberal, capitalistic thinking this requires. I really appreciated Wu’s connection of machine learning to the free market: instead of using our resources to plan carefully, we throw everything we have at the wall, correct and optimize only after the fact, and leave the inner workings of our algorithms understood by only a select few.

In all the new technology I see, optimization and advancement seem meant only to further the already toxic work environment of our capital-obsessed culture. The future of AI, for CEOs and Silicon Valley techies, is not to make life easier for everyone but to make the workplace more efficient by taking over jobs. In a world obsessed with optimization and efficiency, I feel that the humanity and social connection found in “You Are Not Expected to Understand This” is evaporating. The culture of computer scientists is shifting to become more individualistic and business-minded. During these readings I thought a lot about the movie BlackBerry, in which we watch the thoughtful, socially awkward nerds behind a small company become harsh, business-minded millionaires with only the bottom line on their minds. The entire culture of tech is shifting in that direction, and I worry about that a lot.

    Reply
In his essay “Optimize What?,” Jimmy Wu is interested in tracing the origin of the hubris and techno-libertarian ethos of Big Tech and Silicon Valley. He blames this attitude primarily on how computer science is taught in academia. In his view, the CS curricula at universities like Stanford and Berkeley indoctrinate students to believe every problem — even a complex social one — can be reduced to a technical problem solvable by some combination of machine learning, a few undergraduates, and a whiteboard. Wu also points out the inherent hypocrisy present on these campuses, where students often simultaneously attend lectures that declare “everything is an optimization problem” and evening workshops on coding for social good.

While I agree with these criticisms of the tunnel vision and contradictions of academic CS, I disagree that the ego of Big Tech can be pinned solely or even mostly on universities. Silicon Valley has a strong history of rejecting the need for academic study (see, e.g., venture capitalist Peter Thiel’s infamous fellowship that pays students to drop out of college and pursue a startup). Wu also mentions Andrew Ng’s machine learning lectures at Stanford as an example of academia’s missing integration of AI and ethics, but he ignores that Ng is famous for founding Coursera, a company designed to give students access to the same content while circumventing traditional academia.

    In this sense, I think blaming academic CS as the source of Silicon Valley’s AI hubris is misguided, as it’s often the academics themselves exposing the biases inherent in existing models and advocating for AI practitioners to better understand the datasets and tools they employ.

  19. In “Optimize What?” Jimmy Wu raised some really valid concerns about computer science academia and how we focus a lot on technical skills but fail to teach students the ethics behind what we do or the social impacts of our lines of code. When we think about Silicon Valley, or about the field of computer programming in general, much of the work really is constantly generalizing problems, devising solutions, and then optimizing them. We programmers do this in a very systematic and logical way while overlooking the prejudices behind each optimization or generalization we make, failing to account for what I would call real-life or social edge cases in addition to technical ones. The optimizing work that we do therefore comes at the expense of compliance with human rights and privacy policies, as well as the protection of underprivileged groups in society.

    I also found it really interesting to read about vague documentation, a very relatable part of coding as a task, in “The Most Famous Comment in Unix History.” It really emphasizes that behind every program, every machine, every tool, and every AI chatbot these days is a community of coders. Technology has become so advanced and has grown into such a big industry that something of a detachment has formed between the programmer and the user. Programmers often lose sight of the social implications of their code and fail to account for the end user as they implement or optimize their programs; on the other hand, people are so used to always having technology at their disposal that they forget about the brains behind it. Another point I found really interesting is that the documentation and comments in our code show where we are in terms of coding practices. Schools have started teaching students to comment their code, not only for other programmers but also for their future selves, because the truth is never static (from the Relational Ethics reading), social conditions constantly change, and we as responsible and ethical coders need to develop the habit of revising our programs to keep them socially appropriate.

  20. While I could not follow all of his arguments, Wu’s piece put into words what I have felt for a while now. It makes sense that a discipline that developed (for the most part) in a neoliberal economy, and for its benefit, is in and of itself neoliberal. I also think his discussion of machine learning training protocols as ad hoc really elucidates how trial and error has become the premier development technique for AI, and how no one really knows what is going on under the hood; within the current computer science culture, that is acceptable as long as the model spits out the expected answers for the test cases we identify.

    Additionally, “Code is Law” captures another idea that I find interesting. Given the size of the tech industry and the giants within it, our lack of (outside) regulation has created a situation where a handful of companies and people decide the contours of the digital world we get to interact with. Since so many of them hold monopolistic control over different parts of this arena, it is now harder for others to change the digital landscape. This means that by limiting who gets to decide the architecture of cyberspace, we limit freedom, adhering only to their ideology of what the role of these technologies should be.

  21. I get very frustrated with the priorities of my CS peers, and of the company recruiters and representatives I speak to, especially this year as graduation looms on the horizon. The idea that “purely technological” research (I don’t think that exists) is more worthy of pursuit shows the fundamental challenge of struggling against the oppressive forces of American capitalism in particular. There is a systematic devaluing of that which is human, that which strives for social good, that which understands; hearing the absolute indifference some computer science students show toward the impacts of technology is somewhat maddening. I don’t know. I think a lot of the time (in the very same vein as the Optimize What? piece), this obsession with a technological or perfect pre-planned solution to pair with any critique of the current system is so limiting. This is not a very focused comment; I have had a hard time articulating this frustration. Basically, nothing exists in a vacuum, and it is impossible to do computer science without being political and having social impact, so why don’t we take the time to consider it?



The views and opinions expressed on individual web pages are strictly those of their authors and are not official statements of Grinnell College. Copyright Statement.