Autonomous Vehicle Case Study (Autonomous Vehicles Case Study.pdf )
Never Mind the Trolley: The Ethics of Autonomous Vehicles in Mundane Situations (Himmelreich-NeverMindTrolley-2018.pdf )
Automated vehicles and the conversation around their safety were the focus of today’s discussion. Many are wondering how these cars should be trained to ensure that they are safe on the road. There are also ongoing debates about how the United States should regulate this new mode of transportation. While automated cars have the potential to offer a safer and more cost-effective way of traveling, it is important to note that humans tend to overestimate the capabilities of artificial intelligence. In fact, there have been instances where people behind the wheel have been so confident in their vehicle’s abilities that they failed to pay attention to the road for up to thirty minutes while in assisted driving mode. This over-reliance on technology can have fatal consequences.
It is clear that until these vehicles are proven to be 99% effective, they should not be made available to the public. People should not blindly trust their lives to AI that has not been properly trained to account for humanity’s tendency to overestimate technology. While the future of automated vehicles is promising, it is crucial to prioritize public safety above all else. We must ensure that these cars are equipped with the latest technology and that they undergo rigorous testing before becoming available to consumers. By doing so, we can help prevent unnecessary accidents and loss of life on the road.
We’ve all heard of the trolley problem: would you flip the switch so the moving trolley would hit only one person instead of the five in its current path? It’s a classic hypothetical ethical conundrum: if you flip the switch, you are actively involved in that one person’s death, you chose for it to happen, but if you don’t, five people will die, and does your inaction really wipe all that blood from your hands? Even in the ways the article discussed, where a trolley problem can be abstracted out to not be specifically about a trolley and a switch, the format is still pretty specific, requiring an imminent collision, a choice over the distribution of harm, and certainty of outcome. With that specificity, the ethical conundrum of the trolley problem was, in my mind, exactly as I initially described: hypothetical. I had yet to consider that these scenarios were playing out, in real time, in the case of autonomous vehicles (kind of, the metaphor is later revealed to be not quite apt).
The article from which the trolley problem analogy comes, Never Mind the Trolley: The Ethics of Autonomous Vehicles in Mundane Situations, brings up some interesting points regarding the applicability of this metaphor. One such point is that if, as they say in the introduction, “people want cars to be moral unless they drive in them”, should that ethical decision be changeable by a passenger at any time? If this were a regular car, of course it would be, but as with “most challenges that autonomous vehicles will face on the engineering front, design decisions need to be made”, limiting the viability of the top-down design approach. Furthermore, the criterion of collision certainty is hard to meet on a real roadway because collisions are probabilistic, so even as the case study mentions that AVs are on average safer drivers, decision uncertainty is introduced, which would complicate an AV making “the right call”. Humans can also make decisions intuitively (for example: “don’t run down a crosswalk at the current speed; there’s currently no one in it, but there are people on the side who look like they might be starting to step in; it’s pretty short, so you only need to slow down a little” is a lot more complicated than the imperative “slow down”, but the degree of slowing down matters a lot to general traffic safety). I’ll also point out, not from the reading but from what one of my University of Washington DUB REU roommates researched while there, that AVs’ stopping criteria are based a lot on what they recognize to be people in roadways, and as of 2022, the dominant models for training did not regularly recognize folks using mobility aids as people. This was likely due to low representation in the data sets: they would often flag wheelchairs as chairs, people using guide dogs as just the dog, and people using canes as several miscellaneous objects, none of which were people.
If put in a rough situation where it had to make the “right choice” to hit something, in pretty much all of those cases except maybe the dog, it’d be prone to hit the mobility aid user. Even though that article as a whole attempts to take a positive view of AVs, issues like this, combined with points brought up by the case study wherein the future of AVs is riddled with economic inequity, due to heavy reliance on good road infrastructure to work in an area, alongside privacy concerns, give me pause that these vehicles are something to fawn over. I do recognize that in a future where they are done well, they could potentially offer greater autonomy for individuals who would not otherwise be able to drive—I’m thinking back to Access-A-Ride from our very first reading on disability visibility, and the futures that could exist there if AVs are done right. I’m not one to consider myself an AI optimist most of the time, and for now, that hesitancy holds up with AVs; still, this is an area that, with better testing, more thought-out design, and a stronger awareness of equity factors, could maybe be something one day.
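The misclassification failure described above can be illustrated with a toy sketch (the labels, struct, and check are hypothetical, not any real AV stack): if the downstream safety logic only fires on a literal “person” class, a wheelchair user labeled “chair” never triggers it.

```c
#include <string.h>

/* Toy detection: the label a perception model assigned to an object. */
struct detection {
    const char *label;
};

/* Returns 1 if any detected object is treated as a pedestrian.
 * This naive check only fires on the literal "person" class, so a
 * wheelchair user misclassified as "chair" (or a guide-dog user
 * detected as just "dog") never triggers the pedestrian logic. */
int pedestrian_present(const struct detection *dets, int n) {
    for (int i = 0; i < n; i++) {
        if (strcmp(dets[i].label, "person") == 0)
            return 1;
    }
    return 0;
}
```

The point is that no amount of careful downstream logic helps if the upstream model, trained on unrepresentative data, never emits the right label in the first place.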
Today’s readings centered on the safety of automated vehicles, sparking debates on their training and regulatory frameworks in the United States. While automated cars promise safer and more cost-effective travel, the inherent tendency of humans to overestimate artificial intelligence (AI) capabilities raises concerns. Instances of drivers overly relying on AI, leading to accidents due to inattention during assisted driving, underscore the potentially fatal consequences of this over-reliance.
The imperative is clear: until automated vehicles demonstrate 99% effectiveness, they should not be released to the public. Blindly trusting AI that hasn’t been adequately trained to account for human tendencies is risky. Despite the promising future of automated vehicles, prioritizing public safety is paramount. Rigorous testing and incorporation of the latest technology are essential before these vehicles are accessible to consumers, preventing unnecessary accidents and loss of life on the roads.
The readings also delve into the ethical complexities of autonomous vehicles, drawing parallels with the classic trolley problem. The article questions the applicability of this metaphor to real-time situations faced by autonomous vehicles, considering factors such as passenger decision-making, engineering challenges, collision uncertainty, and the limitations of top-down design approaches. Additionally, it highlights issues related to decision uncertainty, the recognition of individuals with mobility aids, economic inequity, and privacy concerns, urging a cautious approach toward the widespread adoption of autonomous vehicles. Despite the potential benefits, skepticism remains, emphasizing the need for thorough testing, equitable design, and a heightened awareness of societal implications in shaping the future of autonomous vehicles.
The development of automated vehicles provides an insight into some of the challenges associated with supplementing and ultimately replacing a human-driven practice with artificial intelligence. Driving has always been centered around the actions of humans, and our decisions in driving often have direct impacts on relevant results, such as on-time arrival or even our safety. Part of the appeal of automated vehicles comes from the idea that computers could standardize driving, thereby mitigating accidents, but it was strange to see that this goal was emphasized by only one manufacturer. Since automotive accidents are a leading cause of death in the US, it would seem like this would be a near universally-supported impact of implementing AVs. That there is more interest in military application or in AVs as a luxury device is indicative of other systemic issues, but ultimately not the main focus I got from the readings. What complicates even the most sincere AV reasoning is that there are a mess of obstacles, not least of which is cooperation between manufacturers and also with the government. Additionally, the questions associated with the various levels of automation and the need for safeguards make the implementation of AVs even less linear. Technology misbehaves quite frequently in far less life-threatening situations, so in high-level automation, where there may be no human operators, how do automakers guarantee that there are no failures in critical situations, like high-speed and crowded travel, or in residential areas when children run out in front? Ultimately, if we cannot guarantee improved safety, then the development of automated cars is not a worthwhile investment.
Today’s readings were interesting, especially as I am someone who, for the most part, is a believer in the future of some forms of automated driving. I love driving so I do not think that I would ever be someone who would want an entirely self-driven car. A lot of the perils that we read about today were concentrated on this idea of completely self-driving and automated vehicles. However, none of them were that new to me nor did they seriously make me reconsider the implementation of self-driving cars. Additionally, a lot of the incidents mentioned occur on a much, much larger scale with human drivers. Of course in this case we are dealing with a much smaller sample size, but I have had experience in my mom’s car (which has assisted driving tech) or my friend’s where automated driving or warning systems have prevented serious accidents or damage to the vehicle. I think for primarily this reason, I am a believer.
Never Mind the Trolley: The Ethics of Autonomous Vehicles in Mundane Situations explains how a lot of the fears of AVs are concentrated in these Trolley Cases. Therefore, I think there remains a large space of viability for AVs to succeed. My main concerns surround the hacking of car systems and how AV companies use the data that they are capable of collecting through AVs. Could AVs steer cars toward certain businesses, away from certain neighborhoods, or affect traffic in communities?
Nevertheless, I think many of my own fears are yet to be seen in the limited implementation of AVs thus far. I believe that there is a serious future for AVs, and while complete automation may not be ready yet, I am a believer that there is a way in which automated vehicles can be implemented in a way that, while not perfect, is better than the human driver.
According to Stanford’s reading, there’s a discrepancy between companies’ promises about AVs and what they actually deliver to society. Additionally, it’s challenging to foresee all the potential ripple effects that might arise from the widespread use of AVs. These vehicles require extensive regulations, addressing issues like accident responsibility, security, and ethical dilemmas such as the trolley problem.
Another reading highlights the ethical quandaries surrounding the trolley problem. It’s a philosophical scenario where a decision-maker has to choose between actions leading to different inevitable outcomes. There’s no universally correct answer to this problem, but when it comes to applying this to AVs, it raises intriguing and complex considerations.
The ‘Autonomous Vehicles Case Study’ article showed us how AVs, like a lot of (but definitely not all) technology, were developed as a means to prevent the fatalities associated with vehicle crashes. However, the article also shows us that AVs, like so many technological solutions we have talked about throughout this term, do not treat traffic as the sociotechnical system it truly is. What is more, these AVs by and large are not developed with safety as the overriding goal and therefore do not really fulfil their purported purpose. As stated by the article, there is an intrinsic mismatch between what is promised by industry leaders and what is instead delivered. Furthermore, the article also talks about how the development of truly safe Level 5 AVs would likely come at a much greater price than something comparable at Level 3. This would mean that the gold standard of safety would be inaccessible to most, and the differing levels of automation, and therefore of predictability, across all traffic on the road would result in greater randomness in the fleet of on-road cars, reducing the safety promised by Level 5 cars.
The Stanford reading was a good example of how expectations of tech should be tempered when we hear about it from its creators vs its reality. A few years ago, the words of Tesla or Google would have you believe that AV technology is nearly foolproof, safer than human drivers, and ready to be implemented nationwide. But this technology is far from perfect, and there are countless issues that range from minor to massive. As far as AI and radar/lidar systems have progressed, major problems arise when trying to fit a car with expensive, energy- and space-intensive tech, on road infrastructure designed for the immensely complex systems of human thought and sense. There are also the larger issues of having to answer and implement unanswerable ethical/political questions about the already deeply imperfect automotive system we have. Regulation by governments is far behind what it should be, and it’s mostly unclear if these systems at this point are actually safer than humans. The fact remains that humans driving cars already kind of sucks, and cars in general kind of suck, as they’ve destroyed American communities/cities, are vehicles for racism and classism, and have been responsible for countless deaths, so the questions that come up when thinking about self-driving cars should also include how our transportation systems should fundamentally change. It’s the problem that keeps coming up in this class: instead of building new, better, more equitable systems, we put the tech bandaid on the already flawed systems, and we’re not sure what really changes.
Both readings discuss how autonomous vehicles might react to car crashes and other social or ethical problems. In the reading Autonomous Vehicle Case Study, the author talks about different aspects of the Autonomous Vehicle (AV). I like the question about whether a human can sue a robot, and how the system would work when a human tries to get control back from the robot, since a lack of smooth transitions might put people in danger. The second paper mainly focuses on how an AV might react when facing car crashes and specifically talks about four concerns regarding AVs. In general, there is still a long way to go for AVs. Given the speed and complexity of humans and the roads, the algorithms and designs need to be developed to be more controllable by people.
I was unaware of the difference between Trolley Cases and the Trolley Problem and found that reading generally pretty interesting, but while I was thinking about what to write, a weird kind of idea that I’d like to explore popped into my head: a situation where companies – in order to defer liability and bad press – require consumers to answer a few moral dilemmas before buying a self-driving car. Then if the car ends up in a morally ambiguous situation, it looks back on what the driver chose in the closest situation and acts accordingly. You could also get a variation where humans have some sort of implant that allows for rapid processing, so the only hands-on part of driving is deciding situations like this, but that feels less interesting from an analytical perspective, if more promising from a literary one. I mean, it’s all well and good to consider the hypotheticals, but I think from a legal perspective, it makes sense for companies to just emulate what their consumers probably would have chosen, which seems potentially even more concerning since you lose the nuance and gut-instinct of each situation. Although, I can imagine several situations where that would be a benefit.
In most discussions of Autonomous Vehicles, a discussion of the trolley problem is likely to arise. Johannes Himmelreich, the author of Never Mind the Trolley, joins a growing number of authors that caution against viewing the trolley problem as the central ethical concern of autonomous vehicle decision making. Discussing the value of trolley cases for the development of autonomous vehicles, Himmelreich also warns that the trolley problem makes a number of assumptions, like unavoidability and control, that do not always hold for dilemmas involving autonomous vehicles. The author argues that mundane situations, such as decision making in poor weather conditions, raise important ethical questions that should not be overshadowed by trolley cases. This shows how theoretical discussions of ethics may frame problems in ways that are not applicable to applied instances of ethics. While the Trolley problem is useful for comparing philosophies of ethics in terms of their outcomes for a hypothetical situation, using it for autonomous vehicles can project its assumptions onto them. Not only does the Trolley problem assume controllability and unavoidability, but it assumes that control is in the hands of one moral agent, the person controlling the direction of the Trolley. However, for drivers of autonomous vehicles, there can be a partnership between the human driver and the vehicle that can complicate how controllable vehicles may be in certain situations. Moreover, there are issues of whether the human driver or the vehicle should take precedence whenever there is a disagreement about the operation of the vehicle.
Firstly, I would like to comment that I think a lot more progress on the issues that self-driving cars are hoping to solve could come through more thoughtful application of public transit, at least in the US. With that out of the way, I think there are a lot of interesting ethical questions considered in the two articles, especially with how dangerous human drivers can be. Many of the considerations, like the privacy concerns over internal cameras and GPS location sharing mentioned in the Stanford case study, were not something I had considered. I was also surprised by the considerations of how pedestrians might take advantage of predictable self-driving car behavior in Himmelreich’s paper. Some of the issues, like the various trolley problems, I think are often talked about and critiqued. I do think a large hurdle to making these cars more available is the question of who is liable for crashes. Honestly, I think that the real issue is the reliance on cars. Focusing so much time and money on cars, which could be used to better our public transit, seems like a waste. These efforts could be used to prevent many more crashes in a much more timely manner if larger cities found better ways to facilitate travel without the use of dangerous individual vehicles.
In the combination of both readings, is it possible that we could make the right decision using AVs and relying on their judgment? Do they know what the best decision is to make? I believe that in these situations it is very hard to make a decision; it is based more on the instinct of the driver. However, if we were to have an autonomous vehicle make the decision, would it be able to decide instinctively? I feel like this is also a hard question to bring up, because as the driver we would also struggle to decide what to do in this scenario. One thing I dislike, though, is the idea of how AVs even came to be. It all started with a challenge with a $1M prize. The fact that the incentive for creating these AVs was money is very disappointing. We see that the drive for more advanced technology comes from competition or incentives that would benefit the company. This connects with a lot of our previous readings, where the idea of capitalism occurs with no thought of helping society with daily problems. This also led me to a conclusion about these trolley cases. If companies are trying to gain consumers’ trust and reliability, they would conclude with protecting the driver rather than the civilian who would be hit. This is all wrapped around the idea of profiting and nothing about the safety of society. I am curious as to what we would do in these situations, as well as how we could focus more on the benefit of society and not on personal gains.
Despite the fact that it was covered fairly extensively in one of the readings, I still think the most concerning aspect of AVs is privacy and security. The car exists at a bizarre peak where it can provide much information about a person, such as close associates or friends, entire conversations, movement patterns, and whatever else that could conceivably be revealed or displayed in the confines of a car. Not only is there an enormous surface area for attack, as previously mentioned, this is one of the few technologies that can be harnessed to cause serious harm or even death to a target. Zero-days will likely be plentiful in these enormously complex systems, and because of the potential of exploiting weaknesses, it is only a matter of when it occurs rather than if it occurs. Powerful spyware, such as Pegasus, that can infiltrate mobile phones without any interaction from the user already exists, and most mobile phone manufacturers pay careful attention to security. Still, these exploits are powerful, and their creators have customers willing to foot the extensive bill. Pegasus has already been linked with successful assassinations. Information and ability to kill is a dangerous combination that is quickly being realized, especially as AV components and technologies trickle down into less expensive models.
The reshaping of the autonomous vehicle conversation around everyday situations was very new and interesting to me. The ethics are no longer life and death, which allows us to begin to consider values such as road manners and etiquette. If we were to allow drivers to indicate how urgently they need to get where they’re going, would the vehicle then be okay with speeding or shooting gaps in traffic? Will people be okay with using autonomous vehicles if it means that they are no longer able to drive 10 mph over the speed limit on the interstate?
These smaller use cases are more nuanced, and there are many more of them to consider on a daily basis. How much of a gap does a car need to take a left turn into oncoming traffic? An autonomous vehicle probably doesn’t need much, but it would probably scare the shit out of me if an autonomous vehicle decided to turn through a small gap in front of me. I am not comfortable taking a turn in traffic when there isn’t a cushion for me; I am not as precise as a machine. Would we consider human emotions and comfortability when considering these kinds of use cases?
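The left-turn gap question above can be put in rough numbers. This is just a back-of-the-envelope sketch with illustrative values (the function and the comfort margin are assumptions, not real AV parameters), but it shows how a machine’s minimal gap and a human’s comfortable gap can differ by tens of metres:

```c
/* Back-of-the-envelope gap acceptance for a left turn across traffic.
 * Minimum gap (metres) so that oncoming traffic at speed_mps does not
 * reach the intersection before the turn completes, plus an optional
 * comfort margin in seconds for the humans involved. */
double min_gap_m(double speed_mps, double turn_time_s, double margin_s) {
    return speed_mps * (turn_time_s + margin_s);
}
```

With oncoming traffic at roughly 50 km/h (about 13.9 m/s) and a 4-second turn, the bare-minimum gap is around 56 m; adding a 2-second human comfort margin pushes it past 83 m. A precise machine could safely accept the smaller gap, but the passengers and the oncoming driver may still experience it as reckless.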
I think that the case study article about autonomous vehicles composed by Stanford University researchers is very comprehensive, providing both historical and legal contexts to the issue as well as discussing most, if not all, of the ethical and societal concerns regarding its widespread use. A lot of these concerns tie back to some of our past discussions and make a lot more sense when put into context. For example, last Wednesday we read about Google’s use of its Captcha technology for human verification, the responses for which might have been fed into the system to improve AVs’ intuition and vision of the roads in bad weather conditions. Overall, I think this is another instance where the costs of creating and applying the technology on a wide scale might outweigh the benefits it may bring. Not only does the technology lack human intuition and have limited ability to quickly react to emergent ethical dilemmas, which we also discussed the past few weeks, but many other problems also come with deploying it to the general public. If we privatize AVs, they might only be financially accessible to the wealthiest in society, which further worsens the socioeconomic divide. If we make them public or aspire for them to be widely utilized, it has to be considered whether public safety will be improved and how commuting in AVs would be made mandatory. I think that technology in general benefits and conveniences our daily lives; however, I also think the issue is much more complex when human lives are involved and when human intuition in decision making is potentially replaced.
I thoroughly agree with Spencer’s point that the larger issue at hand is our lackluster public transit system. Of course, these same ethical issues can arise to some degree with public transportation, but in more controlled environments with potentially fewer risks.
I thought the notes in the Himmelreich article were fascinating. He writes “What looked like a purely hypothetical dilemma situation is about to become reality,” but notes that he “does not endorse this claim.” His article was published in 2018, but just a year later a woman was killed in Arizona by a self-driving Uber that “did not recognize that pedestrians jaywalk.” It seems clear that this is in fact becoming a reality.
Himmelreich also writes that one condition of a trolley problem is that “actions carry no risk so that the agent can choose between outcomes.” I think I understand what his point is specifically in regards to trolley problems, but it seems that the risks are apparent in that someone or some people have to be sacrificed and someone has to make the decision as to why that is. A line that really struck me was “A majority of individuals would be unwilling to use an autonomous vehicle that makes decisions in line with what they themselves would agree is ethically preferable. People want cars to be moral, except if they drive in them.” I think it’s actually a good thing that the majority of people are wary of autonomous vehicles, regardless of their confidence in the “morality” of their decisions. I think it reflects a greater anxiety towards having machines make split second decisions without context.
For the first article, I have several thoughts: As a computer scientist, it is crucial to balance innovation with ethical responsibility. While AV technology can revolutionize transportation, it is essential to prioritize public safety and ethical considerations in the development process. The reliance of AVs on data and connectivity raises significant cybersecurity concerns. Ensuring the security of AV systems against potential hacks is critical to safeguard passengers and public trust. The programming of AVs presents complex ethical dilemmas. Decisions on how AVs should react in critical situations must be guided by ethical frameworks, not just technical efficiency. Successful implementation of AVs requires seamless integration with current transportation infrastructure and consideration of potential disruptions to existing traffic patterns and public transport systems.

Johannes Himmelreich’s article explores the ethical implications of autonomous vehicles (AVs) beyond the well-known trolley problem. Himmelreich argues that while trolley cases (hypothetical moral dilemmas involving a choice between two harmful outcomes) are a popular topic in discussions about AV ethics, they are limited in scope and utility. Instead, he emphasizes the importance of considering ethical issues in more common, mundane driving situations. Developing algorithms that can handle mundane yet complex traffic situations requires a deep understanding of not just technical aspects, but also ethical nuances. The variability and unpredictability of mundane situations demand advanced AI capable of real-time decision-making, which goes beyond the current capabilities of rule-based systems.
In the article “Never Mind the Trolley: The Ethics of Autonomous Vehicles in Mundane Situations,” Himmelreich discusses several possible ethical considerations that arise when analyzing autonomous vehicles. First of all, Himmelreich discusses the difference between the well-known trolley problem and different trolley cases. The huge difference between these two is that in the trolley case, one of the participants will unavoidably get harmed. It is interesting to see how the author proposes different approaches and then connects them to the idea that for autonomous vehicles, the real challenge isn’t tackling those dilemmas but rather figuring out how these autonomous vehicles face mundane situations and how that will affect the way we perceive different things in society, such as who has priority on the road or how a city should be designed. He proposes taking into consideration that if autonomous vehicles are safer around pedestrians, then pedestrians should always be prioritized, which might imply abandoning the idea of crosswalks. He also states that the trolley case would not be applicable to autonomous vehicles because, instead of being an ethical question, it turns out to be a political question that would be answered by the policy established by the company that develops the vehicle.
The Stanford case study by Mark Harris gives a history and overview of the modern landscape of autonomous vehicles, also known as self-driving cars. Many modern cars already employ some form of automation, usually in the form of cruise control or lane detection and correction. Some companies like Tesla and Waymo are currently pushing to higher levels of automation where driver input is sometimes not needed but still required to be present. There are currently no systems that achieve Level 4 or Level 5 automation, where drivers need not pay attention to the system at all.
I think that one of the most troubling aspects of any self-driving system is the security concerns. As the reading mentions, many car manufacturers have already been found to have security holes in their Level 0 or Level 1 systems. Companies like Tesla and Waymo have their engineers working so fast to develop better self-driving technology that I fear security is not their top priority. This is compounded by the fact that low-level programming is still mostly done in memory-unsafe languages like C and C++, which can introduce serious security flaws even with skilled programmers.
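The memory-safety point above can be made concrete with a small sketch. This is purely illustrative (the function, buffer size, and "sensor ID" scenario are made up, not from any real vehicle codebase): an unchecked `strcpy` into the fixed buffer below would silently overflow it for any long input, whereas a bounds check plus `snprintf` forces the caller to handle the error instead.

```c
#include <stdio.h>
#include <string.h>

/* Copy a message into a fixed-size ID buffer without overflowing it.
 * Returns 0 on success, -1 if the message would not fit.
 * The unsafe alternative, strcpy(dst, msg), writes past the end of
 * dst for any msg of length >= dstlen -- a classic C security flaw. */
int copy_sensor_id(char *dst, size_t dstlen, const char *msg) {
    if (strlen(msg) >= dstlen)
        return -1;              /* reject instead of overflowing */
    snprintf(dst, dstlen, "%s", msg);
    return 0;
}
```

The fix is a single length check, but C never requires it; in a large, hastily written driving stack, the missing check is exactly the kind of bug that becomes an exploitable hole.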
Something I never thought about before is how road signs and other aspects of motoring infrastructure are specifically designed for human eyes, and as a result, if AVs dominated the road, the signalling infrastructure might look different. I’m imagining signs that are specifically designed for computers to identify and interpret, like roadside QR codes or something.
Too bad such a world with AVs dominating the roads comes with so many potential consequences and complications that it is virtually unthinkable that it could happen safely any time soon. Although I don’t think the alcohol consumption increase that will arise from a lack of need for designated drivers or the potential for pedestrians to abuse the cautiousness of AVs en masse represent the biggest concern. I found the points about how AVs are unlikely to reduce congestion and will likely contribute to increased sprawling and pollution much more persuasive.
The Trolley Problem as an overall concept is very familiar to us all, but honestly I have not taken much time to think about it in the context of autonomous vehicles, or even consider the plausibility of such situations. The use of trolley cases seems to be most problematic because it is used not as a jumping-off point, but as an end point. If the conversations surrounding ethics end there, it gives the impression that time and energy have gone into thinking about ethical concerns, though in reality this is not the case, and there are many more plausible/common scenarios that we must consider. In the other reading, I felt like I was given a fairly good crash course on the history and context of the discussion surrounding autonomous vehicles. Obviously I was familiar with the concept before reading this, but that was about it. The context surrounding this pursuit adds a lot to my understanding, and I am excited to hear opinions from classmates who perhaps have a bit more experience with the topic than I.