24 thoughts on “Reading for Monday November 20th”

  1. In Stephen Cave’s The Problem with Intelligence, the author explains how understandings of human intelligence have been fraught with ideology, particularly Western philosophy, the interest in rationalism, and the pursuit of racial hierarchies for the sake of colonialism. As people’s humanity has been reduced to rationality and perceived rationality, it has become a metric to game, both for those trying to prove their intelligence and for institutions seeking to preserve the ideology that the white man is an ideal representation of humanity. Part of this can be seen in the creation of IQ tests and the SAT, where entrance to positions of power and academia is dependent on one’s perceived rationality. An obvious result is that disadvantaged groups are intentionally kept away from positions of power. Believing society to be a meritocracy, those justifying this ideology of white supremacism view the relegation of lower socioeconomic groups to low-skill labor as natural, a product of the invisible hand of the free market. These jobs are then seen as requiring no intelligence or humanity, even when “low-skill” labor can be complex and require many mental faculties. Moreover, even if the ideology of white supremacism is no longer actively taught as a legitimate philosophy, that will not immediately rid society of its culture and history. So the perception that “low-skill” labor does not require humanity or intelligence can persist and enable computer scientists to view the automation of these jobs as a natural progression of society. If a job is not seen as requiring human skill, then one can more easily argue that the job can and should be automated by a machine. After all, the person losing their job has been historically viewed as less human due to their perceived lack of intelligence.

  2. The reaction described in “Don’t be Fooled by Charismatic Robots” is reminiscent of the Eliza effect, namely in the infatuation that comes along with technological innovation in AI. Robotics has always felt to me like some far-off, distant sci-fi pipe dream—think C-3PO in Star Wars, or TARS in Interstellar—but this hides the immediacy of modern robots that have quickly integrated into our society. I think that this sort of feeling plays into the fascination that many people find with robots and AI, as people see something non-human that can behave beyond their expectations. In some capacity, I feel the same amazement that we have advanced to the point that robots can contribute in meaningful ways; when I walk through the grocery store and a robot is scanning the shelves to find what needs to be restocked, for example, I can understand the appeal. But I find it difficult to relate to the subset of people who can carry on a conversation with a machine; there is always some notion that an abstraction of human interaction has some intrinsic moral dilemma associated with its existence. These readings push me to think about the reasoning behind developing robots. The grocery store robot makes sense to me—its purpose is taking over a task that can be done better by a machine. The robot that emulates human intelligence, however, raises a question of interest and consequence. Humans can create life naturally, so what prompts us to create it artificially? Is it the element of control, that programmers can command their tech to do as instructed? Is it a desire to play god? Will the interest overshadow the ethics; will we consider the consequences before they are presented to us?

  3. In The Problem with Intelligence, Stephen Cave discusses the risks and opportunities of AI and the value-ladenness of how AI is used. The fetishization of intelligence leads to biases and disadvantages for women and minorities in CS fields, and the wide use of AI threatens middle-class jobs, leaving many people unable to earn a living. In The Experience Machine, the author talks about life in the machine. That is to say, we would live in an experience machine where we only insert experiences rather than go through them. However, life in the machine lacks exploration. It would only offer the chance to revisit what people have already encountered, with no way to try anything truly “new,” since the so-called experience is based on data that have already been imported. In Don’t be Fooled by Charismatic Robots, Karla Erickson also discusses human/machine relations. With greater intelligence and smaller size, machines have become more “charming” than they used to be, and they are now among the most important and widely used tools for work and entertainment in daily life. This strong dependence on machines makes humans worry about how they might harm human life. In that case, what should the correct relationship between humans and computers be?

  4. I have read Don’t Be Fooled by Charismatic Robots by Karla Erickson a total of three times. What she brings is the conversation of how robots have begun to charm us, and how we have begun to rely on the allure of their unconditionally positive company.

    In addition, the other two readings discuss how AI and technology have the potential to take over our lives by targeting our desire to achieve happiness. AI is marketed as intelligent, and the word intelligent is such a powerful buzzword that people have used it to gain control throughout the centuries of our existence. There is a conception that the most intelligent are the ones to lead us all––to make the hard decisions, and to control. If AI is ‘intelligent’, then what is stopping it from joining this group of entities that maintain power through their intellect? The reading also asks whether people would choose to give away their lives for the illusion of internal happiness, but it concludes that it is simply more complicated than that, for people want to be right, and they want to know that their happiness is real.

    Overall, these readings highlight important concerns about the role of technology in our lives. While AI and robots can bring many benefits, we must be careful not to rely on them too heavily for emotional fulfillment. It is crucial to maintain our autonomy and not allow technology to control our lives.

  5. I think Professor Erickson really hit on an interesting note in her article about the general cultural impressions we have of realistic, human-like robots. There are a few exceptions to the rule, like C-3PO and R2-D2, but general depictions of robots in film and media have almost always been negative. Even before Star Wars, there were fearful depictions of Gort from The Day the Earth Stood Still and, almost immediately after Star Wars, Ash from Alien terrorized a helpless human crew. Of course, our thoughts about these robots are always defined by much more than their intelligence. More often, we think about their loyalty and intrinsic sense of right and wrong. Robby the Robot from Forbidden Planet, who adheres to the laws of robotics and refuses to harm humans, is immediately more likable than an assassin droid from Star Wars. It is truly strange, then, that we only really consider the “intelligence” of the things coming to market now, especially in the AI space. We only measure “correctness” in services such as ChatGPT, and take the incorrect responses it often gives as evidence that LLMs are not quite ready yet, for whatever purposes people believe they will be used for. We seldom consider their originality, wisdom, or intrinsic ulterior motives, maybe because they take on a very different form from the technology we have come to instinctively regard as evil.

  6. I encountered the experience machine earlier in my Philosophy of Life class, and not only is it super fascinating, it inspired some really interesting class discussions, so I’m excited to discuss it in this new context. Whether you understand the experience machine to be something “valuable” (or, more simply, whether you should enter) depends on your moral philosophy. So while comparisons to “The Matrix” are almost inevitable, if you have a view on flourishing that includes epistemic relations but not states of affairs, then you should view the Experience Machine as amazing, and really the perfect tool to achieve your ideal life (I had an ironic typo where I didn’t hit the “f” in that last word… Freudian type?). I would imagine that this reading was intended to relate to class in the sense that interacting with machines designed to please us is equivalent to being in an Experience Machine Lite™️. I do think, however, that it’s not an entirely fair equivalency, especially given that many people use AI as an intermediary to achieve experiences and communication instead of a replacement (at least for now).
    Finally, I just wanted to comment that the dilemma Cave posits at the end of section 3.5 of his article was really fascinating, and something I’d never considered. It does make a lot of sense that people who have premised their rise and domination of others on the grounds of intelligence would have the most to fear from an objectively superior intellect. I still think he misinterpreted the use of the word “intelligence” in the Hawking quote, which to me read more as human ability/consciousness, not specifically test smarts, which is kinda the straw man he draws up by saying that creativity and work ethic are also important (I mean, I don’t think Hawking would argue that a dictionary or calculator is responsible for the sum of human achievement in writing and math, but they’re certainly more (test smarts) intelligent than us in their respective fields).

  7. Erickson’s reading posed a philosophical and challenging question. She illustrated her point using various examples, such as Shaky, Sophia, Alexa, and smartphones, to demonstrate how humans can form emotional connections with robots and technology to gain happiness.
    Humans often attribute emotions to objects that exhibit movement, and they tend to form emotional connections with objects that are capable of movement, communication, and physical interaction, which can extend to relationships with robots. However, when these emotional bonds with robots become excessively strong, they can lead to societal issues.
    It is important to emphasize that dismissing relationships with robots as unnecessary, fake, or harmful isn’t a constructive perspective. That kind of relationship can be helpful for people sometimes.
    Excessive emotional relationships between humans can indeed harm individuals or society, just as relationships with robots can. For instance, human relationships can lead to serious issues like domestic violence and stalking, which are significant problems worldwide.
    We can identify our emotions, but we often assume others feel emotions, even without concrete proof. In this context of uncertainty, humans are somewhat like robots, whose inner workings remain a mystery to most users.

  8. Out of the three readings today, I was most compelled by Cave’s discussion of intelligence. I think a large amount of the visible fear and worry about these technologies comes from the people with the resources to reach others: those with power, “intelligence,” and influence. These technologies threaten to upend a power structure built on notions of superior intelligence. I found the author’s discussion of fears that AI will assert malicious dominance interesting. Looked at from the perspective of hierarchical structure and power control, the idea that machines labeled with higher “intelligence” would intentionally pursue dominance just makes sense. I think it is fascinating to look at my own ideas about the dangers of AI and reconsider the underlying biases I have about intelligence. Do these technologies frighten me because I believe in the inherent power of “intelligence”?

    Moving on to “The Experience Machine” and the reading about charismatic robots, I think I have to push back against some of the general premise of “The Experience Machine,” but I think that both of these go back to our discussion on isolation and social media. I think in-person interaction and connection is deeply beneficial, and I do not think robots, phones, social media, or AI can provide that experience. I do not exactly care for a theoretical argument about experience and happiness, though. It is not something we can provide, nor can we have continuous happiness even if we seek it. Sure, we may be more satisfied with a life of rising happiness rather than one that decreases, but in many cases we do not choose all aspects of our trajectory in life. We are not isolated systems. In the present, though, I often find myself reliant on tech as an avoidance technique, and I certainly think that experiences with current “charismatic robots” eventually feel hollow. The thing is, I do not think it is impossible to experience a relationship with something considered a “robot,” but I don’t think we are quite there yet.

  9. The note about how we equate intelligence with domination really stands out to me. I’ve sometimes wondered if the narrative of AI taking over and turning the world into some sort of post-apocalyptic wasteland assumes that some hyper-intelligent AI wouldn’t recognize the value of the natural functions of the earth. It might decide that humans ought to go though, idk. The inclusion of the experience machine reading was very cool. I’ve had this conversation at lunch, and my answer is no, primarily because of how impossible it would be to match the complexity of experience offered by the real world. It’d be like watching episodes of your favorite comfort show over and over again. The same ingredients mixed over and over, sometimes reruns, lots of predictability. Very comforting when you have had a long day and want something familiar, safe, and predictable to lay yourself out to. It becomes very stagnant after a while though.
    I would hate to be stuck in some sort of experience machine rut.
    These readings were very Matrix-esque.

  10. In “The Problem with Intelligence,” Stephen Cave explores how historical ideologies, particularly in Western philosophy, fueled the reduction of human intelligence to rationality. This reduction, evident in metrics like IQ tests and the SAT, has perpetuated a meritocracy that disadvantages certain groups, sustaining white supremacist ideologies. The perception that “low-skill” jobs require neither humanity nor intelligence has further enabled the automation of these roles, reflecting historical dehumanization. The article highlights the lingering impact of white supremacist ideologies on societal perceptions and urges consideration of the consequences of automating jobs traditionally deemed as lacking in human skill.

    In “Don’t be Fooled by Charismatic Robots,” the text reflects on the allure of AI and robotics, contrasting the mundane integration of robots in everyday life with the more complex realm of machines emulating human intelligence. While the practicality of robots in tasks like restocking shelves is evident, the article prompts reflection on the motives behind creating machines that mimic human interaction. Questions about control, the desire to play god, and ethical considerations arise, emphasizing the need to carefully assess the consequences of developing robots that emulate human intelligence. The readings prompt contemplation on the intersection of technological advancement, ethics, and the societal impact of artificial intelligence.

  11. As is always the case, I was very glad to see an assigned Erickson article. This one, “Don’t Be Fooled by Charismatic Robots,” focuses on the shift in what humans expect from robots over time. It opens with the subheading, “The popular perception of a ‘robot’ has shifted from Terminator to Siri”, which leads into an article highlighting the way we now kind of expect robots to be somehow emotionally available; their purpose, aside from their actual job (assuming it was ever something other than being social), is to be a bit of a social creature (as if that isn’t an oxymoron for something that isn’t alive). This reminds me a bit of the Eliza effect, which we read about earlier this semester (in an article by Erickson), but it is a little different: rather than simply attributing human characteristics to machines, anthropomorphism-style, this goes further to give robots human functioning, not just those like Sophia, whose sole purpose is to be personable, but also those like Siri and Alexa, leading to trends that border on the parasocial. It brings up further questions about what outsourcing “tasks” to robots implies, because that is so often thought about in the context of replacing humans in the workforce, but this raises the possibility of replacing them in the context of our social worlds as well.

  12. The three readings provide profound insights into the significance of AI and robots while also raising questions about their roles. In “The Problem with Intelligence,” Stephen Cave compares historical definitions of intelligence, highlighting the classist, racist, and sexist aspects of the concept and addressing potential problems with AI. One intriguing aspect of his argument relates to the historical use of technology and intelligence for conquest and colonization, suggesting parallels with how AI might be utilized, especially considering the lack of diversity in the industry. The second reading focuses on a dilemma: whether one would prefer to be plugged into a machine, preprogramming life to maximize happiness without awareness of being in the machine. However, once plugged in, there is no turning back. The reading includes various responses explaining reasons for rejecting the offer. I would also decline the offer. As some of the authors in the responses pointed out, life consists of more than just happiness. Even though there’s currently no way to prove whether we are living in a simulation, I believe that experiencing the highs and lows of life is what makes it worth living. The connections formed and the simple pleasures of life are aspects I wouldn’t want to change.

  13. I very much enjoyed reading “The Problem With Intelligence”; it was refreshing to see a piece of work that begins to bring together conversations around eugenics, scientific racism, etc. and the growing prevalence and importance of AI systems in daily life. The unquestioned privileging of “intelligence” as the pinnacle of humanity and the driving force of progress (which in itself is similarly value-laden) certainly harkens back to eugenicist ideology and the use of “intelligence” as a marker of inherent worth and as justification for systems of domination. In Professor Erickson’s article, I found it interesting to think about “the politics of outsourcing our tasks to robots”. I have certainly experienced moments of “swooning over technology”, for example over the serving robots so common in Japan. The robot waiters often have little cat faces and tap into “cuteness” in a way that is difficult to resist. However, I have a hard time seeing the nefariousness of these specific robots. They do not replace human waiters; rather, they assist them, their tasks restricted to running food. As someone who has worked in the restaurant/food service industry as a waiter, I certainly see the appeal.

  14. The readings for Monday combined a little bit of the future, past, and present. I enjoyed not only reading about the history behind intelligence and robots but also the theoretical/sci-fi stories of a potential “experience machine.” It is always cool to read sci-fi, and their stories were certainly a breath of fresh air. Additionally, they tie into our current reality. Smartphones represent these “experience machines” in many facets, including their ability to match our internal feelings. Today, we are trying to go deeper in making the “experience machine” through the “Metaverse” and AI that can adeptly respond and converse as humans. It was cool to read Professor Erickson’s piece and the history of our push toward mechanized intelligence. It puts into perspective how we have changed our need for robots from physical beings to ones that conform to the ever-valued true “intelligence.” Reading about the term intelligence in “The Problem with Intelligence” by Stephen Cave shows how our desire for an intelligent non-human lifeform is systematically intertwined with a history of measuring intelligence and using it as a tool for natural ordering. The Eugenics movement gave birth to the idea of innate biological intelligence that favored the characteristics typical of a straight, white man. Now, AI, NLPs, and LLMs are being programmed based on this ideal form of “intelligence.” This includes how to respond, what to respond to, and much more. Intelligence fails to describe much of what we consider desirable characteristics. That being said, the push towards technological intelligence above all is because of a long history of intelligence being a measure of rank and a key component in growing capitalism.

  15. The Experience Machine really interested me, as the authors talked about the possibility of entering an experience machine where you would be able to live all the experiences you have ever wanted while having only happy thoughts. When we bring this reading up in a CS ethics course, are we talking about the possibility of removing yourself from this unjust society to live a happier life, or are we more focused on how this sort of technology would be biased? Either case works. We can talk about the flaws in society and how most technology has a negative effect that most people don’t notice because it is better suited to particular groups, groups that might want to avoid society and live their own happy lives without worrying about any political or sociological problems. Or we can ask whether this machine is also biased, where only the rich would be able to attain such a machine and people from poorer backgrounds could not obtain such technology for their benefit. We also have to wonder, when it comes to the machine’s credentials, whether it is truly beneficial to society and which groups we are really targeting. With something of this caliber, I feel like only the rich would be able to use this machine, again preventing certain groups from being able to use it, which leads to more questions about the technology we make and who we truly want to use these creations.

  16. I really like these readings, as they bring up important questions of what we want artificial intelligence to do vs. what it actually does. The Problem With Intelligence raises important points about how AI is being implemented with our culture’s reigning notions of intelligence, which are deeply influenced by our histories of bigotry. Western notions of intelligence such as IQ have been consistently debunked as classist, invalid, and arbitrary measurements, but we still like to think of intelligence in these terms. The leaders in the field of AI are putting too little thought into what they think the “intelligence” part should mean.

    The Experience Machine and Professor Erickson’s article made me think about what I actually want from robots and AI. After reading the Salon article, I immediately thought of Boston Dynamics’ robot dogs. I used to watch the YouTube videos of the testing of these robots, and was immediately attached to and intrigued by these weird anthropomorphic machines; I thought they were cool. But years later I watched videos of those same robot dogs being deployed in combat zones and by already outrageously overfunded police departments. I thought these implementations were excessive and gross, showing direct parallels to the fascistic governments of the sci-fi media I consumed. But I realized I didn’t ever think about what I wanted from these robot creations. Did I want them to be used as quirky C-3PO-esque companions? Used by first responders? I obviously don’t want them taking jobs and being used to oppress. I don’t know what I want from robots, and I’m not sure many of us do. Professor Erickson argues that we are embracing machines as assistants and companions, and it certainly feels like it. I’m just wondering if we should reflect more on what the “ideal” implementations of these machines are.

  17. Stephen Cave’s paper does a fantastic job of laying out the history of intelligence, a concept so fundamental to our modern culture that, as he points out, we take it for granted. Beginning with the Greek philosophers Plato and Aristotle, Cave traces how intelligence has been used as a qualification and justification for racial and social hierarchies throughout history. Later, he points to 19th-century imperialists invoking “the White man’s burden” and to 20th-century eugenicists as using intelligence to justify their colonialism and racism.

    One surprising fact from this paper: I did not know the SAT originated in a racist context, intended to separate applicants by “intelligence” as a way of preserving the racial hegemony of the Ivy League. I think that the history of the test is important to keep in mind in present-day discussions of whether it should continue to be administered, yet I’ve never heard this history brought up before.

  18. Cave’s paper prompts a reevaluation of fundamental assumptions in AI development. The historical context of intelligence as a concept used to perpetuate social hierarchies challenges the neutrality and objectivity often ascribed to AI and machine learning. This raises several points of reflection. The paper underscores the need for ethical considerations in AI development that go beyond technical accuracy and efficiency. Understanding the historical baggage of concepts like intelligence is crucial to prevent perpetuating biases and inequalities in AI systems. Nozick’s thought experiment presents intriguing parallels and contrasts with contemporary virtual reality (VR) and artificial intelligence (AI) technologies. It challenges the emerging narrative that digitally mediated experiences can wholly substitute for real-life experiences. In designing VR and AI experiences, there’s often a focus on making them as realistic and immersive as possible. Nozick’s argument introduces a critical distinction between experiencing and doing. This raises questions about the ethical implications of creating digital environments that might blur this distinction. AI and VR technologies have the potential to influence personal identity and development. Nozick’s argument that being a certain kind of person matters suggests that reliance on AI-driven experiences could impede personal growth or distort self-perception.

  19. Given how large a role rationalism and easily gamed metrics play in defining and measuring intelligence (both concepts we’ve discussed in detail previously), I think it’s pretty clear how the idea of intelligence enforces existing power structures. Often in academia, you see blame placed on oppressed individuals rather than on the systems in place that cause the lack of educational resources in communities. Take the SAT, for example. I don’t know if it’s because I just didn’t have much information about it until it actually came time to take it in high school, but at least when I was younger, fewer people around me were critical of its use to measure intelligence. Up until high school, I was unaware that the test wasn’t developed in a way that fairly examines the “intelligence” of students for college. Getting a low score on the SAT was seen either as you being a bad test taker or as you just not having enough knowledge (not due to lack of resources; rather, the blame was placed on you). I think it’s become more apparent to people (especially after Varsity Blues, because I don’t know how much more obvious it can get) that even if you don’t know the historic background of the SAT, it’s another way of helping the privileged stay in power (under the guise of a supposedly fair and quantifiable assessment of “intelligence”), as it’s a factor in granting people access to higher education, more resources, etc.

    As the reading said, the idea of innate intelligence, in combination with the fetishization of intelligence, makes its impact felt through the field’s biases. Though there is the myth of meritocracy in the field of CS, I think there’s also this perception of CS being something that’s for those already in CS or those who’ve been doing CS (idk if that makes sense), and not for people unfamiliar with coding or related concepts, rather than something that can be learned just like any other subject. This perception further contributes to intelligence upholding the status quo, as those who aren’t exposed to computing earlier in their lives are subject to this line of thought. I have seen efforts to combat this with the expansion of the internet, though. I also slightly disagree with the notion that the oppressed are less concerned about AI since they’re already being oppressed by those “purporting to be superior beings.” If anything, there’s at least an equal amount of concern, since the elite are the ones driving further development of AI and computational models. It may be (and, as we’ve seen, has been) yet another way of harming the marginalized by perpetuating bias.

  20. I really enjoyed reading Cave’s ‘The Problem with Intelligence’ and wanted to comment on some things it made me think about. Firstly, the author clearly illustrates that (perceived) intelligence has long been used as a tool or justification for domination, one that has placed white elite men at the top of the hierarchy it creates. His mention of the creation of the SAT by a member of the American Eugenics Organization to ensure the white character of the Ivy League also makes me think of the enduring legacies of such standardized testing, especially English tests that some graduate programs still require only non-native speakers (defined as those not from the Anglo-Saxon world) to take, regardless of whether they have completed their bachelor’s degrees with English as the language of instruction. It also made me think of how knowledge is somehow able to be viewed as property, intrinsically belonging to certain people and not others.
    I also agree with his assertion that those who study and are concerned by AI assume the “central importance of intelligence in the human story,” which leads to the fallacious treatment of the possession of intelligence as a marker of humanness. But this also makes me think of the flip side, wherein many professionals believe that AI cannot do their job because it does not have certain soft skills, yet it can do the part of their job it was narrowly created to do.
    Finally, while the idea of AI as the answer to global warming is laughable, it is likely that this is a sentiment that is allowed to persist as it benefits the established system of capital.

  21. I really liked Professor Erickson’s article about charismatic machines. I never thought to use that language to describe the charitable sentiment towards machines. I think the use of the term charisma is especially revealing, because charisma is such a political term. It’s important to think about the implicit ways we are meant to become more comfortable with surveillance. It is interesting how ideas of competency and intelligence are both fetishized and infantilized, as with robot dogs and service machines in restaurants. But it makes sense that framing technology to be as non-threatening as possible also increases general goodwill towards bigger things like robotic police dogs.

    This is a bit tangential, but I love sci-fi and horror, so I loved the vignettes from Robert Nozick’s “The Experience Machine.” It reminded me of another Cold War-era short story, “I Have No Mouth, and I Must Scream” by Harlan Ellison, which is about a supercomputer named AM who becomes so frustrated and infuriated with humanity that he decides to destroy all of humankind except for five people, all of whom he tortures for 109 years. It made me reflect on what role we are hoping machines can fill in our lives, and how this could potentially backfire on us.

  22. Cave’s discussion about the value-laden history of intelligence and the implications of its fetishization on the development of AI provides some insightful context to what we have been discussing in class. The centrality of intelligence and the association of it with a certain group of people that is deemed to innately possess such qualities have always been problematic. The fact that we placed so much emphasis on intelligence and rationality gave rise to the popularity of AI in modern society. The fact that AI technologies and large language models are being trained to perform “intelligent” tasks better than humans raises the concern of what the limit will be and whether they will eventually dominate given that we value intelligence so much.

    Professor Erickson’s commentary serves as a perfect supplement to this discussion by looking back at the history of different models of robots and how the purposes they serve in human society have changed over time. I like that she pointed out how “designers moved from physically powerful machines that freed up muscle and might to machines that could work alongside and with people”. The examples she cited of robots in the past illustrate well how we have gone from using robots to help us carry out labor-intensive and error-prone tasks better, and for increased productivity and efficiency, to using them for convenience and emotional fulfillment. I’ve always been concerned about the fact that we’re now able to converse with robots, more specifically our voice assistants and generative AI technologies, or come to them for emotional support, as in the case of the ones used in nurseries. I feel like we’re starting to design and invent robots that satisfy our psychological needs and temporarily cater to our wants for attention, sympathy, and companionship. The trajectory of where we’re moving with these robots is deeply concerning to me.

  23. I really enjoyed these readings. “The Problem with Intelligence” by Stephen Cave delves into the complexities and ethical considerations surrounding artificial intelligence. Cave explores the potential risks associated with developing highly intelligent machines, emphasizing the need for careful ethical guidelines. He highlights the danger of creating AI systems that surpass human intelligence but lack a comparable moral compass. The article encourages a thoughtful approach to AI development, urging society to consider the implications of creating entities that could outsmart us.

    In “Don’t be fooled by charismatic robots” by Karla Erickson, the focus shifts to the social implications of human-robot interactions. Erickson warns against anthropomorphizing robots and attributing human-like qualities to them based on superficial features. She argues that charismatic robots, despite their seemingly lifelike appearances, lack genuine emotions and intentions. Erickson emphasizes the importance of understanding the limitations of robotic capabilities and avoiding the pitfalls of ascribing human characteristics to machines. The article serves as a cautionary reminder to approach human-robot relationships with a clear understanding of the boundaries between artificial and genuine human experiences. Together, these articles shed light on the ethical and social considerations surrounding the development of and interaction with intelligent machines.

  24. In some sense I believe that experiences do make the person. A sophisticated enough experience machine could essentially guarantee that I would be satisfied by plugging in. It could erase my memories of making the decision, it could convince me that what I’m experiencing is reality, and it could make me feel content with those experiences. It could trick me into thinking I have put in effort, or made a difference, or earned the experiences, etc.

    So, knowing that I will be satisfied by my choice to plug in, the question becomes whether that is enough to make the choice. Would I willingly choose the option that is not guaranteed to make me happy, because in the present I have some philosophical qualms with using the experience machine? Nozick initially seems to assume that rational people would never choose to plug in, but I think it is a difficult choice.

    If I get to experience a lifetime of ‘mindless pleasures’ and ‘frivolous amusements’, but there is a nagging voice in the back of my head telling me that those pleasures are superficial, is the feeling of superficiality not an unhappy one? If there are any unhappy feelings associated with a life of unlimited happiness, then it can’t truly be unlimited happiness. True happiness should feel fulfilling.

