25 thoughts on “Reading for Wednesday October 4th”
In both Karla Erickson’s article on AI and Henry Kissinger’s article, the already blurred lines of consciousness, life and death, and human reasoning are blurred further, as Large Language Models are described as having both human and inhuman abilities. In her article, Erickson describes a human tendency to “read emotions, intelligence and even consciousness into machines,” a tendency often called the Eliza effect, named after an early precursor to ChatGPT created in 1966. Of course, the Large Language Models seen today are significantly distinct from past technologies in that they are able to generate coherent pieces of text using large datasets made possible by the Internet. However, I am curious how much of the excitement surrounding Large Language Models is a result of novelty and the Eliza effect.
With their novelty, people can project their hopes and desires onto this new technology due to little awareness of its limitations. Throughout history, people have tended to expect new technologies to bring humanity closer to a utopian society. For example, the Internet was expected to expand global communication and reduce unequal access to information. Another example could even be nuclear energy, which has been seen as a means to energy independence at little cost. However, both of these technologies have had their limitations and negative consequences, dampening public belief in a path to utopia through their use. Paired with the Eliza effect, people can project human qualities onto Artificial Intelligence without paying attention to its possible limitations. Many of these limitations are likely to be revealed in the future, but our ignorance of them may result in overly optimistic assertions about the abilities of Artificial Intelligence. That being said, it is entirely possible that optimistic claims about this new, unanticipated technology may come true.
Today’s readings were centered again around LLMs, in particular ChatGPT and other AI language models. The first reading, “ChatGPT Heralds an Intellectual Revolution” by Henry Kissinger, Eric Schmidt, and Daniel Huttenlocher, was a very interesting read. Never would I have thought that the same man who once advocated for Shuttle Diplomacy in the White House would now be advocating for caution with the use of ChatGPT. Slightly to my surprise, I thought Kissinger, Schmidt, and Huttenlocher crafted a fascinating discussion of ChatGPT and other AI models in comparison to the Enlightenment. They do an excellent job contrasting the growth and expansion of the human mind during the Enlightenment with its supposed contraction in the face of the growing use of AI chatbots. The group of three pointed out how ChatGPT and other forms of AI have evidenced the willingness of humans to blindly conform and accept technology. In particular, they discuss the lack of sourcing and regulation that leaves room for accepting problematic knowledge. Hearing this particular group of three echo words of caution speaks to the fear AI has ignited.
The second reading, “What a precursor to ChatGPT taught us about AI — in 1966” by Karla Erickson, offers a different perspective of comparison. Erickson adds context to the history of language models and how human fascination with them dates back centuries. However, Erickson makes a clear point about the new threats that advanced LLMs such as ChatGPT create. Scientists who “scramble” to understand the advances in tech and the potential “deskilling” of humans are just a minute peek into the perils of LLMs. The readings make it clear that, as with so much of AI, there is so much that is unknown, and more perils are inevitably going to be found.
Very cool that we got to read something from Professor Erickson. The desire to empathise with chatbots and language models is something that I hadn’t thought much about, but it is something that I have seen plenty of. I just had never made the connection as to why we gave Siri and Cortana actual names. It all makes me think of the movie Her. The reference to Replika in the Kissinger reading was also interesting; I have seen ads for it before. When I see those kinds of advertisements I always tend to think somewhat negative things about the kinds of people that use that technology. Meanwhile, I personify countless other pieces of technology: my car, my watch, my mom’s Amazon Echo, my mom’s Roomba. They all have names. I have also had a conversation or two with the Snapchat chatbot. I am not above this behavior because it is a human one. It is scary to think that this behavioral pattern might continue to drive people to interact with each other less and less because they don’t want to do the “work” necessary to have a relationship with a human.
The talk about the outsourcing of the cognitive process is also concerning. I do like that Kissinger included some talk about solutions, and he didn’t necessarily write off the benefits of having these tools either. He had a very well-rounded perspective.
You could feel the worry about the implications of generative AI models within the words of both of the articles. It feels like we are in a weird stage where the effects of these systems have not been completely realized, but also a position where it is easiest to create rules and regulations. The WSJ article on ChatGPT talked about creating parameters for responsible use, but the problem is that our legislative bodies in the U.S., where these systems seem to be most impactful and utilized, are incredibly slow. I also don’t think that there is much consensus on how to regulate the use of such technologies. The Writers Guild just won a long battle against large entertainment companies with some language in their contract preventing the forced use of generative AI in scripts, but there has yet to be much interest in nationwide regulations against generative AI usage. I found the worry within both articles about the devaluation of our own thoughts very compelling (paraphrasing Erickson). When these systems are able to aggregate so much information and often statistically create more accurate results than an average person, it is obviously quite daunting to question an outcome or response. I think the WSJ article uses an excellent example of doctors being willing to question a software answer. Would someone be able to sue if they knew an AI had suggested a different treatment and a patient died because a doctor chose their own path? Could that be considered negligent? What if a doctor did not use their better judgement? There are honestly a lot of ethical considerations, and I am not sure if we will be able to create regulations for this kind of technology before we see the impact it creates, good or not.
The article explores the historical and philosophical impact of early chatbots like Eliza and the revolutionary capabilities of modern AI, exemplified by ChatGPT. Eliza, created in the 1960s, was a simple chatbot that engaged in basic conversations, but users were surprisingly drawn to it and attributed intelligence and compassion to the program. This phenomenon, known as the Eliza effect, revealed the potential of AI to elicit powerful emotional responses and challenge our perceptions of machines.
ChatGPT, a generative AI system, represents a significant advancement in conversational AI. It leverages vast amounts of text data to generate human-like responses, enabling more sophisticated interactions. However, the article raises concerns about the potential deskilling of humans and the need to carefully navigate the social impacts of AI.
Generative AI, such as ChatGPT, poses profound philosophical and practical challenges akin to those faced during the Enlightenment. While the printing press facilitated the spread of knowledge, generative AI distills and synthesizes information in opaque ways, creating new elements of mystery, risk, and surprise. The article highlights the importance of understanding how generative AI stores, distills, and retrieves knowledge, as well as the ethical implications of its potential to fabricate false information.
The impacts of generative AI extend beyond commerce, influencing diplomacy, security strategy, and even the fabric of reality itself. The article emphasizes the need for moral leadership, responsible human-machine interaction, and comprehensive challenges to AI systems to prevent domination, anarchy, and societal alienation. Education must adapt to equip individuals with the skills to navigate the complexities of AI and preserve human judgment.
In conclusion, the rise of chatbots like Eliza and the transformative capabilities of generative AI systems like ChatGPT have reshaped our understanding of artificial intelligence. While these advancements offer exciting possibilities, they also bring about ethical, societal, and philosophical challenges. It is imperative that we foster responsible AI development, cultivate moral leadership, and prioritize human wisdom in order to harness the potential of AI while safeguarding the well-being of society.
Kissinger’s article was an interesting one that, pretty much line by line, brought a ton of foundational issues to the table. I particularly liked the focus on how ChatGPT “understands” and provides knowledge. LLMs like ChatGPT can basically take in the whole of human knowledge. Though they may not understand a topic in the way humans do, they have intrinsic access to all the information on that topic, and can synthesize unique and (sometimes) accurate answers to questions about it in a way that conveys information like humans do. It’s sort of scary to think that when you ask ChatGPT a question, you are asking an amalgamation of every expert on every topic for an answer. But, because no one can really unravel the daisy chain of complex calculations, associations, and analysis the model does, you can’t really say for sure where an answer comes from. “Interrogating” the model, as Kissinger describes it, still seems to be a challenge that is far less interesting to investors than an increasingly accurate and correct model.
I really liked the article by Erickson as well. There is something really incredible about the interactions humans have with increasingly human-like machines. I simply don’t think our brains are equipped to understand that language, written or spoken, can have an entirely nonhuman source. I even catch myself falling into these traps unconsciously while reading articles like these, especially when I encounter the concept of romantic/lover AI chatbots. I consider it abnormal to engage with them, but the feeling goes past that and borders on unethical. I realize that this is really only because I feel it would be wrong to compel an actual human being to do these things. But this is entirely different, despite the fact that it is human-like in its behavior. I don’t know exactly how to feel.
The Kissinger article brought up a notion I brought up last class. The offloading of critical thinking I have observed in some of my classes reflects the worries expressed by Kissinger in this essay. I tend to find ChatGPT most useful for explaining relatively simple technical concepts, automating formatting tasks, creating flashcards, etc.: the sort of tasks that require the understanding LLMs provide, but that do not act as a stand-in for my own understanding. I think the trust in AI and automation bias discussed in the text is something we will have to be very conscious of going forward as technologists. The mere prevalence of fake news and blind faith in sketchy internet information (seen often in the tech-illiterate older generations) shows just how much of a threat this is to the production of knowledge and understanding.
Erickson’s article was extremely interesting, and I like the approach she took to the questions presented by the increasing prevalence of AI. I was especially compelled by the questions of value and humanity she raised in regards to humanity’s historical willingness to defer to technology.
I also looked at the full transcript of the conversation the NYT writer had with the Bing chatbot, and now my brain hurts. Pattern matching and mimicry are powerful tools. I cannot tell if it’s doing that really well, which freaks me out, or tapping into something else to some extent (which also freaks me out).
Karla Erickson’s retrospective on the development and implications of chatbots, from Eliza to ChatGPT, provides a comprehensive overview of how humanity has grappled with increasingly sophisticated artificial intelligence systems. Erickson touches on the Eliza effect, where users ascribe emotions, intelligence, and even consciousness to machines. This tendency is not new; humans have long anthropomorphized inanimate objects and animals. However, with AI, this inclination is intensified, as machines can now ‘respond’ in ways that resemble human thinking. The question then arises: to what extent should developers nurture this tendency? It’s both a technological and ethical dilemma, especially when these systems are designed to offer companionship or emotional support. The progression from text-based interaction (Eliza) to voice interactions (Siri, Cortana, Alexa) and now generative, real-time responses (ChatGPT) showcases the rapid evolution of human-machine interfaces. While these advancements are fascinating from a technological standpoint, they bring forward concerns about dependency. The article touches on how individuals have altered their behaviors to accommodate these virtual assistants, potentially at the cost of human-human interaction.
By far the most concerning aspect of AI is our infatuation with it, which, as both pieces note, may hinder our ability to safely develop it, or even cease its development. Especially now that billions of dollars are being poured into the development of AI, it seems even more likely that despite all of the potential pitfalls, development will be pushed ahead without much thought, and we will be forced to deal with the consequences. I thought it was also interesting how the Kissinger article mentioned the idea that data could become monopolized as the evolution of models becomes more integral to our economy, which presents an additional challenge to responsible development. And this is what is particularly frustrating about AI: there is so much potential for good, but so many questions about its ability to be exploited, and it seems as though the driving forces behind its development are too focused on the positives. As Kissinger notes, we humanize it because it behaves like us, but that does not mean we truly understand what it is and what it might become.
Reading some of Karla’s work after being her student and former advisee is so exciting because she always has incredible insight into how we are reacting and learning to interact with the current technology of the present. With the rise of “Quasi-Beings,” Professor Erickson and I have discussed in her Sociology of Robots course how there is a higher possibility of someone developing feelings of loneliness. As we become less trusting of the effectiveness of our interactability with other human beings, I am afraid that we will isolate ourselves more as a result. We will become more reliant on the AI that is there to please us for entertainment, interaction, and even romance. We are naturally very trusting creatures, and tend to “breathe life” into things that neither have a consciousness nor a soul. This innate quality of ours is what makes chatbots like Eliza and ChatGPT dangerous, because although in our brains we know that these bots are not alive, we grow attached, and try to make ourselves believe that they are beings like us.
These AI programs are already changing so many different aspects of our lives; how we work, study, do a simple browser search, write a paper, practice our pickup lines, etc. At the CS table about ChatGPT a few weeks ago, we discussed how past generations grumbled and raved about how the typewriter would make the younger generations lazy because they did not have to use a pen. The next said the same thing about the cell phone. And now, the same thing is being said about chatbots. But, if we choose how to use them wisely, then hopefully chatbots will simply be used as a tool to encourage further human advancement, rather than make us “lazy”.
These readings were chock-full of great lines. From their (very valid) concern about a growing “gap between human knowledge and human understanding” to their explanation of how logic ‘used to’ work: “hypothesis was understanding ready to become knowledge, induction was knowledge turning into understanding.” I like these a lot, and think they’re really valid, but I also think this, and lots of writings on AI, reek of Anthro-Cognitive Mysticism, by which I mean the belief that humans are somehow different or transcendent when it comes to thinking and biological cognition. This obviously gets into religion and how you view the world, which is probably outside of the scope of this class, but even asserting that humans are “uniquely capable of rendering holistic judgments” isn’t clearly fair to me. I feel no reason to say that if some things had just been different, octopi couldn’t have evolved to a similar/higher level of intelligence as modern humans, even if it took a very different form. And if that’s the case, who’s to say that a machine can’t be sentient or make decisions. I’m not saying that LLMs are way smarter than they are, I’m just saying that we’re probably a lot dumber and less special than we’d like to think.
Ok that tangent done, I really did like Professor Erickson’s piece and thought that the idea of people rephrasing their sentences to be more comprehensible to Siri was worrisome. I also really do get that tendency to assign emotions and personality to objects and machines, but (see above) think that it’s maybe not as much of a problem as it’s portrayed. I’m much more concerned about the concentration of power that comes from such a resource being outside of the hands of everyone. The internet was great because it leveled the playing field. Printing presses had a similar effect! Let us not forget that before then, the abilities of reading and accessing knowledge were held and bestowed with the intent of keeping power in the hands of the powerful, which in Europe was the Church. An authoritative power with unknowable motives and incomprehensible knowledge that speaks truth (sometimes) and just asks that you trust it while the earthly organization that controls access to said truth becomes obscenely wealthy…
Kissinger’s article is interesting in that it proposes policies for managing these technologies. It is more engaging to read articles that not only highlight the problems associated with technologies but also provide potential solutions, unlike a lot of the readings that we did in this class.
Kissinger compares the Age of Enlightenment, which started with printing technology, and the Age of AI, which started with LLMs. He argues that the essential difference between the two ages is their “cognitive” impact. During the Age of Enlightenment, philosophy, including politics, grew alongside science, but in the Age of AI, knowledge surpasses its philosophy. This, he emphasizes, is a challenge for human beings.
Kissinger highlights ChatGPT’s strengths, particularly in creating “highly articulate” written content. However, he also raises concerns about its weaknesses, including the high cost of training, the absence of citations, the unclear process behind its outputs, and the potential to limit human abilities such as critical thinking, writing, and designing.
In his conclusion, Kissinger proposes several policy and philosophical considerations for living in the Age of AI. First, he argues for the importance of “the confidence and ability to challenge the outputs of AI systems” by developing “skepticism and interrogatory skill”. Second, he advocates thoughtful consideration of which questions AI can answer and which it cannot. Third, he emphasizes the importance of having moral and strategic leadership to regulate AI technologies and ensure they are beneficial for society. Lastly, he urges us to continually question “What happens if this technology cannot be completely controlled?” in order to assess the benefits and risks associated with AI.
I agree with the points raised by Kissinger because AI technologies have the potential to bring both benefits and harm to society. AI technologies can enhance efficiency and productivity in various fields, but there is also the risk of misuse for malicious purposes. It is important to continue developing AI technologies while implementing international regulations and ethical guidelines to prevent the misuse of AI technologies.
The emergence and development of generative artificial intelligence has fundamentally changed how humans think and consume knowledge, as well as what many industries might look like in the future. What I’m most curious about after reading the article “ChatGPT Heralds an Intellectual Revolution” by Henry Kissinger is how and from which sources these large language models and interactive chatbots were trained. From my understanding, the AI is currently trained on a finite information base including books, news articles, and human conversations. Gradually, we’ll move towards training these models in real time based on information fed by users, thereby significantly increasing the frequency of training as well as constantly updating what are considered facts or truths. My concern with this direction is how scientists and programmers plan to filter factual, unbiased, and helpful information to improve the model instead of allowing a mix of unfounded or prejudiced claims to impact its fairness and accuracy. Since information can be updated within seconds, knowledge might no longer be universal, which might make it more difficult or problematic to evaluate knowledge.
Both articles draw attention to the human relationship to communicating with computers, be they chatbots like Eliza as mentioned in the Erickson piece or LLMs like ChatGPT. The Erickson piece in particular mentioned that people often talk to these tools for a form of companionship, and in this, they become kind of like quasi-beings. The impression of their humanity can be comforting, but as Erickson mentions, they “may once again downgrade how humans value our own thoughts, our own words, and our own ability to be curious and come to conclusions.” Treating these communicative robots as if they are us contributes to a devaluation of actual human thoughts and capabilities, factoring into human deskilling. The more previously human things we entrust to automation, the less we trust to ourselves. This is not to condemn these tools and say they should not exist, but to caution against overreliance.
This was written real late last night and maybe doesn’t make that much sense, so to clarify: the overreliance would come from the quasi-being phenomenon, because if we’re treating them like friends or even experts, and putting trust in them like that, we become more inclined to trust them with previously very human tasks, counting less on ourselves and each other.
The two articles focus on human-computer interaction by stressing how AI reads emotions and communicates through LLMs and machine learning, such as ChatGPT. Erickson describes the precursor of ChatGPT, mentioning that customers tended to think the search engine was a real person; thus, developers came up with the idea to make it more “human” by understanding emotions and being able to communicate. With the ability to talk like a human, the human-computer-human structure will make industries more efficient in labor distribution. However, although it is a good tool for data management and summarization, AI is not the one that produces the data or gives the opinion, and there is a huge monetary cost for training the machine, according to Kissinger. Besides, “The lack of citations in ChatGPT’s answers makes it difficult to discern truth from misinformation.” I have asked ChatGPT about some Chinese poems, but the results it brings are not correct at all. It would combine two poems into one, which leads to misinformation.
It was pretty interesting to read about the philosophical implications of large language models such as ChatGPT. I found it really interesting when Kissinger talks about experiencing practical and philosophical challenges on a scale that was last experienced in the Enlightenment in the article “ChatGPT Heralds an Intellectual Revolution.” There is a lot of discussion in this article about the main differences between these two intellectual revolutions and the problems that are bound to occur in this one. To start, the major difference highlighted was that unlike the Enlightenment, where each answer to a philosophical question was both teachable and testable, now the answers given by large language models skip the part where humans can understand such answers. The amount of information these models can handle goes beyond our learning rate, and as said in the article, these types of technologies are evolving exponentially, even faster than our human genes are evolving. Two future problems discussed in this article that I found particularly captivating were the impacts on learning and the requirement of strong leadership that guides humanity into a new era. It is no surprise that many people have found that learning can be highly impacted by the use of large language models; however, the question is how our understanding of learning will be affected by this technology. Will we deviate as a society and reduce the importance we give to the act of learning? Another question that arises is what constitutes a good leader in this digital era. How can strong leadership work on solving issues we have identified so far with these large language models, such as biases? Finally, in her article “What a precursor to ChatGPT taught us about AI — in 1966,” Karla Erickson reminds us about the trends we have seen throughout history in the relationship between humans and AI.
Early on in Kissinger’s essay, he muses that the way ChatGPT works is unknowable and mystical:
> By what process the learning machine stores its knowledge, distills it and retrieves it remains similarly unknown. Whether that process will ever be discovered, the mystery associated with machine learning will challenge human cognition for the indefinite future.
Certainly to an uninformed user ChatGPT appears to work this way: we simply give it a prompt and it “magically” responds with a cogent and (sometimes) correct answer. But of course, as computer scientists we know that ChatGPT does not work by magic. However, I think that much of the language we use to describe these systems, “artificial intelligence” and “machine learning” in particular, gives laymen the sense that these algorithms possess a mystical or even religious quality.
And, due to the complexity of modern ML algorithms and the rapid rate at which they have developed, even many of the practitioners who implement ML systems do not have the mathematical and theoretical background to fully understand how they work. But a lack of knowledge by users does not mean the way the algorithms work is unknowable. I think that this mythology of ML algorithms, elevated by the media, companies, and software engineers alike, only works to reduce the accessibility of ML.
Professor Erickson’s article was incredibly interesting to me, as even though ChatGPT is exponentially more complex than previous chatbots, our reactions have been largely the same. As humans, we have a tendency to anthropomorphize anything with remotely human qualities, so it only makes sense that people would grow attached to machines we create to simulate speech. Our connection to these machines leads to a reliance on their capabilities; the more human a machine, the more it can seemingly be trusted. With generative AI, the argument for AI companions/helpers becomes harder to shut down, as now it is both convenient and often effective. Still, it’s worrying. One thing I thought about throughout this reading is that, in our whole history of chatbots, there seems to be a deep underlying misogyny. When given names or personalities, chatbots are usually made female-presenting. These virtual women are marketed as assistants that can bend to your whim, and can do anything you ask them to do. Chatbots are made specifically for romantic purposes, filling the role of a partner that can’t say no, and has what looks to the user to be an unconditional love for them. The anthropomorphizing of these machines seems to go hand-in-hand with the dehumanization of women we often see on the internet.
The Kissinger reading pairs very well with Professor Erickson’s reading. While AI offers an explosion of productivity and knowledge like the printing press, it comes with an opposite effect. The printing press allowed for the mass creation and distribution of knowledge, demystifying natural processes, history, and the world for readers. Generative AI, on the other hand, only leads us to ask more questions. It’s impossible to have a consistent view of the inner workings of models like ChatGPT, as they are constantly evolving, pulling from millions of sources that would be impossible to categorize. Putting our faith in machines which we don’t totally understand to take over the roles of real people is dangerous and irresponsible. This isn’t because AI isn’t capable (it often isn’t, but it evolves at such a pace that the argument will eventually become irrelevant). AI is and will be capable of many things, but our capabilities as writers, artists, researchers, and humans will be diminished. Before we throw ourselves into the future of AI, it’s important for us to understand the value of human creation.
Reading these two articles, I found ChatGPT to be a very interesting topic to bring up. We see in society how LLMs are improving at such a fast rate that there are products such as ChatGPT that are able to give specific information. However, I had some questions about LLMs, and particularly ChatGPT: with the way technology is growing, how will this affect people and their jobs? We notice that customer service is already being taken over by AI chatbots, but from my experience, I find them very frustrating to deal with, as we have to wait for the bots to finish their responses and sometimes they don’t even answer our questions properly. Although AI can contain far more information than us, it still struggles to understand feelings and to fully answer our questions. And so, will we really be able to rely on technology in the future, or even now? Although we believe that AI will be helpful in the future, doesn’t this tie into the idea of algorithms as well, where we have seen cases of AI chatbots failing, such as the one on Twitter that became racist? How will ChatGPT be different and helpful beyond gathering information? What if it gives limited information, as teachers sometimes do about history, or doesn’t go in-depth on concepts such as gender and race?
I think the point of AI furthering human knowledge but not human understanding was communicated really well. Coupled with the machine’s answers being perceived as “unbiased” and not having its own opinion, it’s worrying how little regulation there is, and how this may not even be considered by users of models like ChatGPT. It’s weird how humans can start ascribing feelings or characteristics to chatbots and language models as if they’re human, but then also view the responses of these models as factual and unbiased because they’re not human. Additionally, there’s a lack of accountability in regard to what information the models relay (which is scary, especially if used in industries like healthcare). I feel like in the media, the creativity and quality of some ChatGPT output has been emphasized, while the misinformation it provides is acknowledged as dangerous and then overlooked as a problem for later. I think this dismissiveness/automation bias, in combination with the emphasis on rational thought, will result in more people relying on these models and leave a lot of room for people to downgrade how humans value our own thoughts/conclusions (as Professor Erickson said).
Loved Professor Erickson’s article. I think a lot about these ideas of charisma and the gendering/racialization of chatbots and robots, and about how many computer-generated voices are coded as female (electric kitchen tools, Siri, airport announcements, etc.). The line “ChatGPT’s answers, statements and observations appear without an explanation of where they came from and without an identifiable author” reminded me of the “You are not expected to understand this” reading. It’s almost as if the allure of this omniscient/omnipresent entity is central to its actual efficacy. Why do we trust that the sources and authors have been vetted appropriately? Ellie mentions this idea of deskilling, and I worry a lot about our general critical thinking skills as a society. Christopher mentioned in class that our attention spans are deteriorating, and I think things like ChatGPT do not help when we essentially have access, or think we have access, to the entirety of knowledge within seconds.
One aspect both readings mentioned was the idea of deskilling in various areas of human life. Not to be a sociology major, but this is an idea that capitalist automation is deeply mired in. Within this system, we take parts of human life, such as the production of goods, and first make these processes easily reproducible and cheaper through the division of labour. Then, in a larger society where capitalism has made time a scarce resource, it becomes easier to obtain these products through consumption than to make them yourself. Finally, we lose the ability to make these things from start to finish on our own, making us completely reliant on the products of capitalism and, therefore, on the system itself. With things like chatbots, this deskilling could extend from the material realm into the arena of the emotional. In Professor Erickson’s research group on robot pets and the Google Nest, we saw that humans are likely to anthropomorphize these pieces of software (as in the case of Eliza too), to forgive the mistakes made by robots more easily than the mistakes made by humans, and to prefer them over living company (both human and pet), largely because of their increased predictability. If they are adopted by a large enough population under surveillance capitalism, we very realistically stand to lose the skills of emotional connection at a societal level, just as we were made to lose the skill of production.
Something the Erickson article got me thinking about is how humans alter their behavior to interface with artificial intelligence. I had never really considered analyzing from a linguistic perspective how people change their speech to accommodate virtual assistants’ capacities to hear and interpret utterances in useful ways, but now I’m realizing there are a lot of dimensions worth researching. Even just phonologically, people enunciate and pronounce sounds differently when talking to Alexa, Siri, etc. Over time they learn how to alter their phrasings to increase the chance of getting useful responses. I mean, it’s obvious that most people don’t interact with artificial intelligence the same way they interact with other people, even if they do ascribe person-like qualities to machines, but I would like to consider in greater detail how we can behaviorally and linguistically characterize those adaptations.
I thought today’s reading was very interesting because I wanted to know more about how machine learning works. I learned that AI responses are not just copied from memory but rather synthesized by choosing, from billions of data points, those that are most relevant. I’m most fascinated by how AI is able to generate such unique responses, and by how it knows a response is unique among the thousands of others it supplies. The ChatGPT article mentions how AI, “in its own words, it makes probabilistic judgments about future outcomes, blending information from discrete domains into an integrated answer.” This statement makes me curious about how they define “its own words” and uphold that standard. The most interesting idea I took away from the “What a precursor to ChatGPT taught us about AI — in 1966” reading was the Eliza effect: the idea that we create robots to mirror human qualities and perform human actions faster and more efficiently, and that this leads humans to perceive machines as having a higher level of consciousness.
The views and opinions expressed on individual web pages are strictly those of their authors and are not official statements of Grinnell College.
The second reading, “What a Precursor to ChatGPT” by Karla Erickson, offers a different point of comparison. Erickson adds context on the history of language models and how human fascination with them dates back centuries. However, Erickson makes a clear point about the new threats that advanced LLMs such as ChatGPT create. Scientists who “scramble” to understand the advances in the technology, and the potential “deskilling” of humans, are just a small peek into the perils of LLMs. The readings make it clear that, as with so much of AI, a great deal remains unknown, and more perils will inevitably be found.
Very cool that we got to read something from Professor Erickson. The desire to empathise with chatbots and language models is something that I hadn’t thought much about, but it is something that I have seen plenty of. I just had never made the connection as to why we gave Siri and Cortana actual names. It all makes me think of the movie Her. The reference to Replika in the Kissinger reading was also interesting; I have seen ads for it before. When I see those kinds of advertisements I always tend to think somewhat negatively about the kinds of people that use that technology. Meanwhile, I personify countless other pieces of technology: my car, my watch, my mom’s Amazon Echo, my mom’s Roomba. They all have names. I have also had a conversation or two with the Snapchat chatbot. I am not above this behavior, because it is a human one. It is scary to think that this behavioral pattern might continue to drive people to interact with each other less and less because they don’t want to do the “work” necessary to have a relationship with a human.
The talk about the outsourcing of the cognitive process is also concerning. I do like that Kissinger included some talk about solutions, and he didn’t necessarily write off the benefits of having these tools either. He had a very well-rounded perspective.
You could feel the worry about the implications of generative AI models within the words of both of the articles. It feels like we are in a weird stage where the effects of these systems have not been completely realized, but also in the position where it is easiest to create rules and regulations. The WSJ article on ChatGPT talks about creating parameters for responsible use, but the problem is that our legislative bodies in the U.S., where these systems seem to be most impactful and most utilized, are incredibly slow. I also don’t think there is much consensus on how to regulate the use of such technologies. The Writers Guild just won a long battle against large entertainment companies, with some language in their contract preventing the forced use of generative AI in scripts, but there has yet to be much interest in nationwide regulation of generative AI usage. I found the worry within both articles about the devaluation of our own thoughts very compelling (paraphrasing Erickson). When these systems are able to aggregate so much information and often statistically produce more accurate results than an average person, it is obviously quite daunting to question an outcome or response. I think the WSJ article uses an excellent example about doctors being willing to question a software answer. Would someone be able to sue if they knew an AI had suggested a different treatment and a patient died because a doctor chose their own path? Could that be considered negligent? What if a doctor did not use their better judgement? There are honestly a lot of ethical considerations, and I am not sure if we will be able to create regulations for this kind of technology before we see the impact it creates, good or not.
The article explores the historical and philosophical impact of early chatbots like Eliza and the revolutionary capabilities of modern AI, exemplified by ChatGPT. Eliza, created in the 1960s, was a simple chatbot that engaged in basic conversations, but users were surprisingly drawn to it and attributed intelligence and compassion to the program. This phenomenon, known as the Eliza effect, revealed the potential of AI to elicit powerful emotional responses and challenge our perceptions of machines.
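To ground what “simple” meant: Eliza worked by keyword matching and canned reflections, with no model of meaning at all. A minimal sketch of that idea (the rules below are invented for illustration, not Weizenbaum’s actual DOCTOR script):

```python
import re

# Hypothetical keyword-reflection rules in the spirit of Eliza's
# DOCTOR script (invented for illustration).
RULES = [
    (r"\bI feel (.+)", "Why do you feel {0}?"),
    (r"\bmy (\w+)", "Tell me more about your {0}."),
    (r"\b(yes|no)\b", "You seem certain. Can you elaborate?"),
]
DEFAULT = "Please go on."

def eliza_reply(utterance: str) -> str:
    """Return a canned reflection for the first matching keyword rule."""
    for pattern, template in RULES:
        match = re.search(pattern, utterance, re.IGNORECASE)
        if match:
            return template.format(*match.groups())
    return DEFAULT

print(eliza_reply("I feel anxious about machines"))
# → Why do you feel anxious about machines?
```

That users attributed intelligence and compassion to a few dozen rules like these is exactly what makes the Eliza effect so striking.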
ChatGPT, a generative AI system, represents a significant advancement in conversational AI. It leverages vast amounts of text data to generate human-like responses, enabling more sophisticated interactions. However, the article raises concerns about the potential deskilling of humans and the need to carefully navigate the social impacts of AI.
Generative AI, such as ChatGPT, poses profound philosophical and practical challenges akin to those faced during the Enlightenment. While the printing press facilitated the spread of knowledge, generative AI distills and synthesizes information in opaque ways, creating new elements of mystery, risk, and surprise. The article highlights the importance of understanding how generative AI stores, distills, and retrieves knowledge, as well as the ethical implications of its potential to fabricate false information.
The impacts of generative AI extend beyond commerce, influencing diplomacy, security strategy, and even the fabric of reality itself. The article emphasizes the need for moral leadership, responsible human-machine interaction, and comprehensive challenges to AI systems to prevent domination, anarchy, and societal alienation. Education must adapt to equip individuals with the skills to navigate the complexities of AI and preserve human judgment.
In conclusion, the rise of chatbots like Eliza and the transformative capabilities of generative AI systems like ChatGPT have reshaped our understanding of artificial intelligence. While these advancements offer exciting possibilities, they also bring about ethical, societal, and philosophical challenges. It is imperative that we foster responsible AI development, cultivate moral leadership, and prioritize human wisdom in order to harness the potential of AI while safeguarding the well-being of society.
Kissinger’s article was an interesting one that, pretty much line by line, brought a ton of foundational issues to the table. I particularly liked the focus on how ChatGPT “understands” and provides knowledge. LLMs like ChatGPT can basically take in the whole of human knowledge. Though such a model may not understand a topic the way humans do, it has access to essentially all the information on that topic, and it can synthesize unique and (sometimes) accurate answers to questions about it in a way that conveys information the way humans do. It’s sort of scary to think that when you ask ChatGPT a question, you are asking an amalgamation of every expert on every topic for an answer. But because no one can really unravel the daisy chain of complex calculations, associations, and analysis the model performs, you can’t really say for sure where an answer comes from. “Interrogating” the model, as Kissinger describes it, still seems to be a challenge that is far less interesting to investors than an increasingly accurate and correct model.
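To make that “synthesis” concrete, a toy sketch of the probabilistic-judgment idea: generation is repeated sampling of a next token from a context-conditioned distribution, not lookup of a stored answer. (The table of probabilities below is invented for illustration; real models condition on the whole context and cover tens of thousands of tokens.)

```python
import random

# Invented next-token probabilities, standing in for a learned model.
NEXT_TOKEN_PROBS = {
    "the": {"cat": 0.5, "dog": 0.3, "idea": 0.2},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"barked": 0.6, "sat": 0.4},
    "sat": {"quietly": 1.0},
    "ran": {"away": 1.0},
    "barked": {"loudly": 1.0},
}

def generate(start: str, steps: int, seed: int = 0) -> list[str]:
    """Sample a short continuation token by token."""
    rng = random.Random(seed)
    tokens = [start]
    for _ in range(steps):
        dist = NEXT_TOKEN_PROBS.get(tokens[-1])
        if not dist:  # no known continuations for this token
            break
        words = list(dist)
        weights = [dist[w] for w in words]
        tokens.append(rng.choices(words, weights=weights)[0])
    return tokens

print(generate("the", 3))
```

Even in this toy, the answer is assembled on the fly from probabilities rather than retrieved whole, which is part of why tracing where a given answer “came from” is so hard.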
I really liked the article by Erickson as well. There is something really incredible about the interactions humans have with increasingly human-like machines. I simply don’t think our brains are equipped to understand that language, written or spoken, can have an entirely nonhuman source. I even catch myself falling into these traps unconsciously while reading articles like these, especially when I encounter the concept of romantic/lover AI chatbots. I consider it abnormal to engage with them, but the feeling goes past that and borders on unethical. I realize that this is really only because I feel it would be wrong to compel an actual human being to do these things. But this is entirely different, despite the fact that it is human-like in its behavior. I don’t know exactly how to feel.
The Kissinger article brought up a notion I raised last class. The offloading of critical thinking I have observed in some of my classes reflects the worries expressed by Kissinger in this essay. I tend to find ChatGPT most useful for explaining relatively simple technical concepts, automating formatting tasks, creating flashcards, etc.: the sorts of tasks that draw on the understanding LLMs provide but are not acting as a stand-in for my own understanding. I think the trust in AI and the automation bias discussed in the text are something we will have to be very conscious of going forward as technologists. The sheer prevalence of fake news, and the blind faith in sketchy internet information (seen often in tech-illiterate older generations), shows just how much of a threat this is to the production of knowledge and understanding.
Erickson’s article was extremely interesting, and I like the approach she took to the questions presented by the increasing prevalence of AI. I was especially compelled by the questions of value and humanity she raised in regards to humanity’s historical willingness to defer to technology.
I also looked at the full transcript of the conversation the NYT writer had with the Bing chatbot, and now my brain hurts. Pattern matching and mimicry are powerful tools; I cannot tell if it’s doing that really well, which freaks me out, or tapping into something else to some extent (which also freaks me out).
Karla Erickson’s retrospective on the development and implications of chatbots, from Eliza to ChatGPT, provides a comprehensive overview of how humanity has grappled with increasingly sophisticated artificial intelligence systems. Erickson touches on the Eliza effect, where users ascribe emotions, intelligence, and even consciousness to machines. This tendency is not new; humans have long anthropomorphized inanimate objects and animals. However, with AI, this inclination is intensified, as machines can now ‘respond’ in ways that resemble human thinking. The question then arises: to what extent should developers nurture this tendency? It’s both a technological and ethical dilemma, especially when these systems are designed to offer companionship or emotional support. The progression from text-based interaction (Eliza) to voice interactions (Siri, Cortana, Alexa) and now generative, real-time responses (ChatGPT) showcases the rapid evolution of human-machine interfaces. While these advancements are fascinating from a technological standpoint, they bring forward concerns about dependency. The article touches on how individuals have altered their behaviors to accommodate these virtual assistants, potentially at the cost of human-human interaction.
By far the most concerning aspect of AI is our infatuation with it, which, as both pieces note, may hinder our ability to develop it safely, or even to cease its development. Especially now that billions of dollars are being poured into the development of AI, it seems even more likely that, despite all of the potential pitfalls, development will be pushed ahead without much thought, and we will be forced to deal with the consequences. I thought it was also interesting how the Kissinger article mentioned the idea that data could become monopolized as the evolution of models becomes more integral to our economy, which presents an additional challenge to responsible development. And this is what is particularly frustrating about AI: there is so much potential for good, but so many questions about its ability to be exploited, and it seems as though the driving forces behind its development are too focused on the positives. As Kissinger notes, we humanize it because it behaves like us, but that does not mean we truly understand what it is and what it might become.
Reading some of Karla’s work after being her student and former advisee is so exciting because she always has incredible insight into how we are reacting and learning to interact with the current technology of the present. With the rise of “Quasi-Beings,” Professor Erickson and I have discussed in her Sociology of Robots course how there is a higher possibility of someone developing feelings of loneliness. As we become less trusting of the effectiveness of our interactability with other human beings, I am afraid that we will isolate ourselves more as a result. We will become more reliant on the AI that is there to please us for entertainment, interaction, and even romance. We are naturally very trusting creatures, and tend to “breathe life” into things that neither have a consciousness nor a soul. This innate quality of ours is what makes chatbots like Eliza and ChatGPT dangerous, because although in our brains we know that these bots are not alive, we grow attached, and try to make ourselves believe that they are beings like us.
These AI programs are already changing so many different aspects of our lives; how we work, study, do a simple browser search, write a paper, practice our pickup lines, etc. At the CS table about ChatGPT a few weeks ago, we discussed how past generations grumbled and raved about how the typewriter would make the younger generations lazy because they did not have to use a pen. The next said the same thing about the cell phone. And now, the same thing is being said about chatbots. But, if we choose how to use them wisely, then hopefully chatbots will simply be used as a tool to encourage further human advancement, rather than make us “lazy”.
These readings were chock-full of great lines. From their (very valid) concern about a growing “gap between human knowledge and human understanding” to their explanation of how logic ‘used to’ work: “hypothesis was understanding ready to become knowledge, induction was knowledge turning into understanding.” I like these a lot, and think they’re really valid, but I also think this, and lots of writings on AI, reek of Anthro-Cognitive Mysticism, by which I mean the belief that humans are somehow different or transcendent when it comes to thinking and biological cognition. This obviously gets into religion and how you view the world, which is probably outside of the scope of this class, but even asserting that humans are “uniquely capable of rendering holistic judgments” isn’t clearly fair to me. I feel no reason to say that if some things had just been different, octopi couldn’t have evolved to a similar/higher level of intelligence as modern humans, even if it took a very different form. And if that’s the case, who’s to say that a machine can’t be sentient or make decisions. I’m not saying that LLMs are way smarter than they are, I’m just saying that we’re probably a lot dumber and less special than we’d like to think.
Ok that tangent done, I really did like Professor Erickson’s piece and thought that the idea of people rephrasing their sentences to be more comprehensible to Siri was worrisome. I also really do get that tendency to assign emotions and personality to objects and machines, but (see above) think that it’s maybe not as much of a problem as it’s portrayed. I’m much more concerned about the concentration of power that comes from such a resource being outside of the hands of everyone. The internet was great because it leveled the playing field. Printing presses had a similar effect! Let us not forget that before then, the abilities of reading and accessing knowledge were held and bestowed with the intent of keeping power in the hands of the powerful, which in Europe was the Church. An authoritative power with unknowable motives and incomprehensible knowledge that speaks truth (sometimes) and just asks that you trust it while the earthly organization that controls access to said truth becomes obscenely wealthy…
Kissinger’s article is interesting in that it proposes policies for managing these technologies. It is more engaging to read articles that not only highlight the problems associated with technologies but also offer potential solutions, unlike many of the readings we have done in this class.
Kissinger compares the Age of Enlightenment, which started with printing technology, and the Age of AI, which started with LLMs. He argues that the essential difference between the two ages is their “cognitive” impact. During the Age of Enlightenment, philosophy, including political philosophy, grew alongside science, but in the Age of AI, knowledge is outpacing the philosophy needed to make sense of it. This, he emphasizes, is a challenge for human beings.
Kissinger highlights ChatGPT’s strengths, particularly in creating “highly articulate” written content. However, he also raises concerns about its weaknesses, including the high cost of training, the absence of citations, the unclear process behind its outputs, and the potential to erode human abilities such as critical thinking, writing, and designing.
In his conclusion, Kissinger proposes several policy and philosophical considerations for living in the Age of AI. First, he argues for the importance of “the confidence and ability to challenge the outputs of AI systems” by developing “skepticism and interrogatory skill”. Second, he advocates thoughtful consideration of which questions AI can answer and which it cannot. Third, he emphasizes the importance of moral and strategic leadership to regulate AI technologies and ensure they benefit society. Finally, he urges us to continually ask “What happens if this technology cannot be completely controlled?” in order to assess the benefits and risks associated with AI.
I agree with the points raised by Kissinger, because AI technologies have the potential to bring both benefits and harm to society. They can enhance efficiency and productivity in various fields, but there is also the risk of misuse for malicious purposes. It is important to continue developing AI technologies while implementing international regulations and ethical guidelines to prevent their misuse.
The emergence and development of generative artificial intelligence has fundamentally changed how humans think and consume knowledge, as well as what many industries might look like in the future. What I’m most curious about after reading the article “ChatGPT Heralds an Intellectual Revolution” by Henry Kissinger is how and from which sources these large language models and interactive chatbots are trained. From my understanding, the AI is currently trained on a finite information base including books, news articles, and human conversations. Gradually, we’ll move towards training these models in real time based on information fed by users, thereby significantly increasing the frequency of training as well as constantly updating what are considered facts or truths. My concern with this direction is how scientists and programmers plan to filter factual, unbiased, and helpful information to improve the model instead of allowing a mix of unfounded or prejudiced claims to affect its fairness and accuracy. Since information can be updated within seconds, knowledge might no longer be universal, which might make it more difficult or problematic to evaluate knowledge.
Both articles draw attention to the human relationship to communicating with computers, be they chatbots like Eliza, as mentioned in the Erickson piece, or LLMs like ChatGPT. The Erickson piece in particular mentioned that people often talk to these tools for a form of companionship, and in this, they become kind of like quasi-beings. The impression of their humanity can be comforting, but as Erickson mentions, they “may once again downgrade how humans value our own thoughts, our own words, and our own ability to be curious and come to conclusions.” Treating these communicative robots as if they are us contributes to a devaluation of actual human thoughts and capabilities, factoring into human deskilling. The more previously human tasks we entrust to automation, the less we trust to ourselves. This is not to condemn these tools and say they should not exist, but to caution against overreliance.
This was written really late last night and maybe doesn’t make that much sense, so to clarify: the overreliance would come from the quasi-being phenomenon, because if we treat these systems like friends or even experts, and place that kind of trust in them, we become more inclined to hand them previously very human tasks while counting less on ourselves and each other.
The two articles focus on human-computer interaction by stressing how AI reads emotions and communicates through LLMs and machine learning, as with ChatGPT. Karla Erickson describes the precursor of ChatGPT, noting that users tended to treat the program as if it were a natural person, which encouraged designers to make such systems more “human” by having them understand emotions and communicate. With the ability to talk like a human, the human-computer-human structure could make industries more efficient in labor distribution. However, although it is a good tool for data management and summarization, AI is not the one that produces the data or gives the opinion, and according to Kissinger there is a huge monetary cost to training the machine. Besides, “The lack of citations in ChatGPT’s answers makes it difficult to discern truth from misinformation.” I have asked ChatGPT about some Chinese poems, but the results it gave were not correct at all. It would combine two poems into one, which leads to misinformation.
It was pretty interesting to read about the philosophical implications of large language models such as ChatGPT. I found it really interesting when Kissinger talks about experiencing practical and philosophical challenges on a scale last experienced in the Enlightenment in the article “ChatGPT Heralds an Intellectual Revolution.” However, there is a lot of discussion in this article about the main differences between these two intellectual revolutions and the problems that are bound to occur in this one. To start, the major difference highlighted was that, unlike the Enlightenment, where each answer to a philosophical question was both teachable and testable, the answers given by large language models now skip the part where humans can understand them. The amount of information these models can handle goes beyond our learning rate, and as the article says, these technologies are evolving exponentially, even faster than our human genes are evolving. Two future problems discussed in this article that I found particularly captivating were the impacts on learning and the requirement of strong leadership to guide humanity into a new era. It is no surprise that many people have found that learning can be highly impacted by the use of large language models; however, the question is how our understanding of learning will be affected by this technology. Will we deviate as a society and reduce the importance we give to the act of learning? Another question that arises is what constitutes a good leader in this digital era. How can strong leadership work on solving the issues we have identified so far with these large language models, such as biases? Finally, in her article “What a precursor to ChatGPT taught us about AI — in 1966,” Karla Erickson reminds us about the trends we have seen throughout history in the relationship between humans and AI.
Early on in Kissinger’s essay, he muses that the way ChatGPT works is unknowable and mystical:
> By what process the learning machine stores its knowledge, distills it and retrieves it remains similarly unknown. Whether that process will ever be discovered, the mystery associated with machine learning will challenge human cognition for the indefinite future.
Certainly to an uninformed user ChatGPT appears to work this way: we simply give it a prompt and it “magically” responds with a cogent and (sometimes) correct answer. But of course, as computer scientists, we know that ChatGPT does not work by magic. However, I think that much of the language we use to describe these systems, “artificial intelligence” and “machine learning” in particular, gives laymen the sense that these algorithms possess a mystical or even religious quality.
And, due to the complexity of modern ML algorithms and the rapid rate at which they have developed, even many of the practitioners who implement ML systems do not have the mathematical and theoretical background to fully understand how they work. But a lack of knowledge by users does not mean the way the algorithms work is unknowable. I think that this mythology of ML algorithms, elevated by the media, companies, and software engineers alike, only works to reduce the accessibility of ML.
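To illustrate the point that there is nothing unknowable at the bottom: a learned model’s “prediction” is ordinary, fully inspectable arithmetic. A minimal single-unit sketch (the weights are made up for illustration; real models just stack billions of units like this):

```python
import math

# Made-up weights standing in for learned parameters.
weights = [0.8, -0.4, 0.1]
bias = 0.05

def predict(features: list[float]) -> float:
    """Logistic-unit output: sigmoid(w . x + b)."""
    z = sum(w * x for w, x in zip(weights, features)) + bias
    return 1.0 / (1.0 + math.exp(-z))

p = predict([1.0, 2.0, 3.0])
# Every intermediate value here is inspectable; the "mystery" of large
# models is one of scale, not of unknowable mechanics.
```

The genuine open questions are about interpreting billions of such parameters at once, which is a very different claim than the process being undiscoverable in principle.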
Professor Erickson’s article was incredibly interesting to me: even though ChatGPT is exponentially more complex than previous chatbots, our reactions have been largely the same. As humans, we have a tendency to anthropomorphize anything with remotely human qualities, so it only makes sense that people would grow attached to machines we create to simulate speech. Our connection to these machines leads to a reliance on their capabilities; the more human a machine, the more it can seemingly be trusted. With generative AI, the argument for AI companions/helpers becomes harder to shut down, as it is now both convenient and often effective. Still, it’s worrying. One thing I thought about throughout this reading is that, in our whole history of chatbots, there seems to be a deep underlying misogyny. When given names or personalities, chatbots are usually made female-presenting. These virtual women are marketed as assistants that can bend to your whim and do anything you ask them to do. Chatbots are made specifically for romantic purposes, filling the role of a partner that can’t say no and appears, to the user, to have unconditional love for them. The anthropomorphizing of these machines seems to go hand-in-hand with the dehumanization of women we often see on the internet.
The Kissinger reading pairs very well with Professor Erickson’s reading. While AI offers an explosion of productivity and knowledge like the printing press, it comes with an opposite effect. The printing press allowed for the mass creation and distribution of knowledge, demystifying natural processes, history, and the world for readers. Generative AI, on the other hand, only leads us to ask more questions. It’s impossible to have a consistent view of the inner workings of models like ChatGPT, as they are constantly evolving, pulling from millions of sources that would be impossible to categorize. Putting our faith in machines we don’t totally understand to take over the roles of real people is dangerous and irresponsible. This isn’t because AI isn’t capable (it often isn’t, but it evolves at such a pace that the argument will eventually become irrelevant). AI is and will be capable of many things, but our capabilities as writers, artists, researchers, and humans will be diminished. Before we throw ourselves into the future of AI, it’s important for us to understand the value of human creation.
Reading these two articles, I found ChatGPT to be a very interesting topic to bring up. We see in society how LLMs are improving at such a fast rate that products like ChatGPT are able to give specific information on demand. However, I had some questions about LLMs, and ChatGPT in particular: given the direction technology is growing, how will this affect people and their jobs? Customer service is already being taken over by AI chatbots, but from my experience, they are very frustrating to deal with, as we have to wait for the bots to finish their responses, and sometimes they don’t even answer our questions properly. Although AI can contain far more information than we can, it still struggles to understand feelings and to fully answer our questions. So, will we really be able to rely on this technology, now or in the future? Although we believe AI will be helpful, doesn’t this tie back to the idea of algorithms as well, where we have seen cases of AI chatbots failing, like the one on Twitter that became racist? How will ChatGPT be different, and helpful for anything beyond gathering information? What if it gives limited information, the way teachers sometimes do about history, or fails to go in-depth on concepts such as gender and race?
I think the point of AI furthering human knowledge but not human understanding was communicated really well. Coupled with the machine’s answers being perceived as “unbiased” and not having their own opinion, it’s worrying how little regulation there is and how this may not even be considered by users of models like ChatGPT. It’s weird how humans can start ascribing feelings or characteristics to chatbots and language models as if they’re human, but then also view the responses of these models as factual and unbiased because they’re not human. Additionally, there’s a lack of accountability in regard to what information the models relay (which is scary, especially if used in industries like healthcare). I feel like in the media, the creativity and quality of some ChatGPT output has been emphasized, while the misinformation it provides is acknowledged as dangerous and then overlooked as a problem for later. I think this dismissiveness/automation bias, in combination with the emphasis on rational thought, will result in more people relying on these models and leave a lot of room for people to downgrade how much we as humans value our own thoughts/conclusions (as Professor Erickson said).
Loved Professor Erickson’s article. I think a lot about these ideas of charisma, the gendering/racialization of chatbots/robots, and how many computer-generated voices are coded as female (electric kitchen tools, Siri, airport announcements, etc.). The line “ChatGPT’s answers, statements and observations appear without an explanation of where they came from and without an identifiable author” reminded me of the “You are not expected to understand this” reading. It’s almost like the allure of this omniscient/omnipresent entity is paramount to its actual efficacy. Why do we trust that the sources and authors have been vetted appropriately? Ellie mentions this idea of deskilling, and I worry a lot about our general critical thinking skills as a society. Christopher mentioned in class that our attention spans are deteriorating, and I think things like ChatGPT do not help when we essentially have access, or think we have access, to the entirety of knowledge within seconds.
One aspect both readings mentioned was the idea of deskilling in various areas of human life. Not to be a sociology major, but this is an idea that capitalist automation is deeply mired in. Within this system, we take parts of human life activities, such as the production of goods, and first make these processes easily reproducible and cheaper through the division of labour. Then, we make it easier to get these products through consumption than to make them ourselves in larger society, where time has been made a scarce resource by capitalism. Then, we lose the ability to make these things from start to finish on our own, making us completely reliant on the products of capitalism and, therefore, on the system itself. With things like chatbots, this deskilling could extend from the material realm into the arena of the emotional. In Professor Erickson’s research group on robot pets and the Google Nest, we saw that humans are likely to anthropomorphize this software (as in the case of Eliza, too), to forgive the mistakes made by these robots more easily than mistakes made by humans, and to prefer these things over living company (both humans and pets), largely because of their increased predictability. If adopted by a large enough population under surveillance capitalism, we very realistically stand to lose the skills of emotional connection at a societal level, just as we were made to lose the skills of production.
Something the Erickson article got me thinking about is how humans alter their behavior to interface with artificial intelligence. I had never really considered analyzing, from a linguistic perspective, how people change their speech to accommodate virtual assistants’ capacities to hear and interpret utterances in useful ways, but now I’m realizing there are a lot of dimensions worth researching. Even just phonologically, people enunciate and pronounce sounds differently when talking to Alexa, Siri, etc. Over time they learn how to alter their phrasings to increase the chance of getting useful responses. I mean, it’s obvious that most people don’t interact with artificial intelligence the same way they interact with other people, even if they do ascribe person-like qualities to machines, but I would like to consider in greater detail how we can behaviorally and linguistically characterize those adaptations.
I thought today’s reading was very interesting because I wanted to know more about how machine learning works. I learned that AI responses are not just copied from memory but rather synthesized by choosing, from billions of data points, the ones that are most relevant. I’m most fascinated by how AI is able to generate such unique responses, and how it knows a response is unique from the thousands of other responses the AI supplies. The ChatGPT article mentions how AI, in its own words, “makes probabilistic judgments about future outcomes, blending information from discrete domains into an integrated answer.” This statement makes me curious how they define “its own words” and uphold that standard. The most interesting idea I took away from the “What a precursor to ChatGPT taught us about AI — in 1966” reading was the Eliza effect: the idea that we create robots to mirror human qualities and perform human actions faster and more efficiently, and that this leads humans to perceive machines as having a higher level of consciousness.