Slick Tom Cruise Deepfakes Signal That Near Flawless Forgeries May Be Here by Emma Bowman (https://www.npr.org/2021/03/11/975849508/slick-tom-cruise-deepfakes-signal-that-near-flawless-forgeries-may-be-here)
Using AI to Detect Seemingly Perfect Deep-Fake Videos by Edmund L. Andrews (https://hai.stanford.edu/news/using-ai-detect-seemingly-perfect-deep-fake-videos)
Watch: Fairness Accountability Transparency and Ethics in Computer Vision (https://www.youtube.com/watch?v=0sBE5OyD7fk&list=PLMfRT7Ik6baEjup3PAqbcCbp5dlP4TclS&index=31&ab_channel=RemiDenton)
Deep fakes and facial recognition technology were crazy for me to see when they first came out. Of course, I had gotten used to green screens and fake aliens projected onto the big screen in sci-fi movies that use motion-capture and effects technology to create fake images. However, the explosion of deep fakes in everyday spaces like social media, to the point that they are nearly indistinguishable from the real thing, is incredible. As we read and watched for class today, though, this technology comes with real-world implications. I have already read of instances in which a news outlet picks up a deep-fake image or video and thinks it is real. These outlets report the source as fact, and there are real-world consequences as a result. One article I read described a deep-fake image of a terrorist attack on the US Pentagon that caused the stock market to lose hundreds of millions of dollars in market cap within minutes of news outlets reporting the image. Additionally, facial recognition technologies and deep fakes are often used to reinforce existing hierarchies of power. I have done HireVue interviews with little understanding of how the tech works, and having read about biases in other AI algorithms and watched Fairness Accountability Transparency and Ethics in Computer Vision, it is apparent to me how similar biases could be replicated in hiring technology. To me, if we are reaching a point where the best way to detect deep fakes and other malicious uses of similar tech is through AI and machine learning itself, we have already passed the point of no return. Instead, we must adapt to a deep-fake world and prepare to handle the implications that world brings.
Deepfakes are the stuff of science fiction, and really scary when we think about the worst-case consequences of their use, but these articles made me breathe a little easier knowing that their application is still quite challenging and not yet developed to the point where anyone can access it. That being said, there are quite a few ethical dilemmas around their continued development, namely the ability to fabricate events that never happened and their use on the front lines of information warfare. Though there are some benefits to the technology, like making it easier to edit movie scenes, is that really a good enough reason to let the potential obfuscation of our world fall into the hands of bad actors? My answer is no, and while obviously that won’t mean anything toward its development, this is one of the more important long-term AI ethics problems that needs to be visited and revisited constantly. Television was a radical change to our world because there was suddenly visible proof of the stories people told, happening in real time. Visuals were instrumental to protests against the Vietnam War, and security footage is used to track down murder suspects or prove guilt in court. But given a reality in which we can’t trust what we see, the burden of proof becomes nonexistent. Is it really that hard to just reshoot a movie scene?
As mentioned in the article on using AI to detect deepfakes, there is no long-term solution to detecting or correcting misinformation, especially as advancements in technology make misinformation more sophisticated and harder to detect. The authors explain that cruder methods of video and photo editing have been as successful at creating misinformation as deepfake technology has been. So the core issue is not building tools that are ever more capable of detecting deepfakes, because crude video and photo edits are relatively easy to detect yet still effective at spreading misinformation. The deeper issue with deepfake technology is that it weakens the already weak trust people have in the content they see online. Even if a piece of content is authentic, the mere fact that it appears online makes it less trustworthy, given the increasing probability that it is not. As a result, whether a person trusts a piece of content becomes subjective, dependent on how they judge the content and its source. After all, most people do not have access to AI-powered deepfake detection tools, let alone a trusted one. And once tools to detect deepfakes are developed, there is the question of how their results are interpreted. As discussed in class, existing tools to detect plagiarism are rife with issues around interpreting their results, particularly for those who are not technically savvy.
In this class, we are talking about computer vision. As with any technology, there are ethical considerations associated with computer vision, particularly around privacy and bias. Issues related to the misuse of facial recognition technology, surveillance concerns, and potential biases in algorithmic decision-making need careful consideration and regulation. Although the first article shows how recent YouTubers have edited videos to make it seem like they are doing magic, it also demonstrates how hard it is for us to identify what is really happening in the cyber world, since not everything we see can be true. The second article mainly discusses deep-fake videos. Since many words fall into similar mouth shapes, it is hard to know whether what appears to happen is really happening. The last piece, a video, talks about computer vision, Black people, and social justice in cyber worlds.
The emergence of deep fakes and facial recognition technology initially astonished me, reminiscent of the fantastical illusions in sci-fi films. While I had become accustomed to green screens and fabricated extraterrestrials, the proliferation of deep fakes in everyday platforms, like social media, achieving near-perfect replication, is truly remarkable. However, as explored in our recent class materials, the real-world implications are profound.
Instances abound where news outlets mistakenly treat deep-fake content as factual, leading to substantial repercussions. For instance, a fabricated image depicting a terrorist attack on the US Pentagon prompted a stock market crash, resulting in massive financial losses within minutes of media reports. Additionally, the application of facial recognition and deep fake technologies often perpetuates existing power hierarchies. As witnessed in the biased algorithms of AI systems like HireVue, the potential replication of biases in hiring technologies is concerning.
While some argue for adapting to a world inundated with deep fakes, recognizing their potential misuse and the challenges in detection, ethical concerns persist. The technology’s ability to manipulate events, blur the lines of reality, and its weaponization in information warfare pose significant ethical dilemmas. Despite the convenience of editing movie scenes, the ethical ramifications of allowing such manipulative tools into the wrong hands raise questions about the direction of technological development. In a world where visual proof becomes unreliable, the burden of proof vanishes, challenging the very foundations of truth and authenticity. As we grapple with these ethical difficulties, it becomes imperative to continually revisit and address the profound implications of advancing AI technologies like deep fakes.
Both ‘Slick Tom Cruise Deepfakes Signal That Near Flawless Forgeries May Be Here’ and ‘Using AI to Detect Seemingly Perfect Deep-Fake Videos’ rightly position deep-fakes as disinformation. The second article talks about how we should be combatting the root disinformation instead, but makes no mention of how online algorithms (and the companies that decided their parameters of success) are a large part of this spread of disinformation.
On another note, laws on nonconsensual deep-fake pornography are largely inadequate in protecting non-celebrities. Furthermore, the legislation regulating this material has been described by legal scholars as a “messy patchwork” that is not implemented at a federal level. None of this regulation (in the USA) prohibits the production of these videos.
As Timnit Gebru says in the video, we need an FDA for these algorithms that tests their harms and decides whether they can be publicly available. Ultimately, I also think there needs to be inclusion of a diverse set of stakeholders with different training and expertise in the development of algorithms, to properly analyze the potential for harm and differential effects before release, so that these systems can be regulated before the situation becomes dire. We need to stop playing catch-up in regulating these technologies if we are not going to stop producing them indiscriminately.
Two things: first off, the guy who said it’s not a concern because it takes a week or so completely underestimates what’s at stake in deepfaking and U.S. elections. Like, anti-American state actors aren’t going to be deterred by having to make an intern or two spend some time learning how to use open-source software. If there are countries willing to invest billions in nukes and aircraft carriers, they’re probably willing to invest with similar fervor in cyber attacks that could be even more effective and difficult to counteract. Secondly, this whole AI arms race is just absurd. Training an AI to detect another AI is completely unsustainable, since you can use one to create the next version of the other. We’re just dumping all these resources into an endless loop of deception that’s only going to degrade trust in our shared institutions even if we do it well. If any town hall clip or campaign ad is misclassified either way, you start building separate foundations of fact. This fractures us and causes irreconcilable differences that I really don’t think are sustainable. I don’t have any good solution to the problem of deepfakes and separate factual spheres, but it’s just frustrating.
Deepfakes have always been a huge concern of mine because a precedent has already been set in the news that bending the truth is okay as long as it makes a better story. Applied to deepfakes, this could mean altering what people say to fit a narrative, which could get people in trouble. This could easily be used in smear campaigns and could ruin lives. We need to be vigilant in making sure that we are always able to recognize and disprove the use of deepfakes in this way. As we appreciate technological progress, it’s crucial to commit to ethical use, preventing these tools from becoming instruments of deception and harm. Additionally, policymakers should work to strengthen laws and regulations designed to prevent the misuse of deepfakes. Finally, it is important to educate the public about the dangers of deepfakes and to empower individuals to recognize and report instances of their use.
Ultimately, the rise of deepfakes is a major concern, and it is incumbent upon us to take concrete steps to address this issue. By investing in research and development, strengthening laws and regulations, and educating the public, we can work to prevent deepfakes from becoming a tool for harm and deception.
We notice that fake news has been a huge problem throughout the years, whether through the transformation of lips and faces or simply through news that comes out of social media: YouTube videos from content creators, Instagram posts, tweets, and so on. One thing we rarely question, though, is whether any of it is true. If people see that many others have watched a video, and they recognize the faces in it, they will believe it far more easily than they should. One quote that stood out from the second reading was: “The real challenge is less about fighting deep-fake videos than about fighting disinformation. Most disinformation comes from distorting the meaning of things people actually have said.” Here, we see that the problem isn’t only the constantly improving technology and our job of figuring out whether something is true, but understanding what false information is being spread to the public in the first place. This also makes us consider why people even want to spread false information, and what it reflects about our society that we want to attack other people’s reputations out of spite or hate. It is interesting to see how society and technology are weirdly connected: people who take advantage of technology can figure out ever more unique ways to hurt the people they dislike or to get more views. This idea of using technology to get more views is similar to just spreading fake news, as we see with the famous YouTuber Keemstar. He has been well known for spreading false information, yet he is able not only to gain many views from these types of videos but also to convince his audience. I find it fascinating how easy it is to convince people you don’t even know just by having influence, and it comes back to the question of why we tend to listen so easily to certain people and not question their remarks.
I have not had a lot of exposure to deepfakes, and up until now the only application I knew of was the use of photoshopped celebrities or online influencers in deepfake pornography, which is still a very disturbing subject. I was shocked to learn how advanced this technology has become and how widely it is used without my realizing it. For example, I never thought about how lip-sync technology is used to produce and air the same movie scenes in different languages with different regional or cultural cues. However, it was also surprising to learn that people online have produced and distributed videos of people spreading misinformation by distorting their mouth formations based on phonetic sounds. It is scarier to think that this has become so common and sophisticated that it takes a lot of labor and attention to spot the difference and identify the truth.
I also realized, as I read these articles and reflected on some of our last few readings, that we are entering an endless loop of technological development. Since generative AI and machine learning algorithms have become so advanced, people are just working on training better models to combat or override other AI systems. At the end of the day, we’re just feeding more training data into these algorithms in the hope of preventing hackers or ill-intentioned AI from interfering with their systems. This is not sustainable; in fact, it is a waste of time and financial resources, given how expensive training machine learning models has turned out to be.
In the article discussing the Tom Cruise deepfake, Chris Ume’s nonchalant attitude regarding the abuse of deepfake technology bothered me. Accessibility to the internet/new technology provides resources to a large population of people, but this also means resources can be used to negatively impact many others (even if this isn’t the intent). This idea of something being too high tech or advanced for people to use it maliciously gets us in situations where something bad happens and regulation is put in place after the fact (when it may not even be up to date at that point), rather than prior analysis being done and regulation preventing harmful use of technology. Furthermore, misinformation is easily spread with social media, and if coming from what people believe to be a credible news source, people often fail to fact-check the content they consume. With some media, it’s easy to tell if it’s been edited or manipulated, even if you’re not necessarily aware that this could be a possibility. But especially at the rate they’re advancing, it’ll be quite difficult for people to discern whether something is a deepfake or not if they don’t know what to look for (and even if they do).
I had never seen these Tom Cruise deep fakes, so seeing those videos was definitely pretty interesting. Though there are definitely uncanny things that stand out about the appearance of Cruise in the video, I was most shocked to see that the deep fake got so many subtle things about his likeness correct, like his iconic laugh. That definitely pushed things over the edge for how convincing it could be, and once I found out that they relied on a Tom Cruise impersonator for such accuracy, I was a little bit relieved. Still, I am not sure that many people who are unfamiliar with the technology or how good it is would necessarily pick up on such subtle flaws.
Of course, we also see the almost mandatory use of AI to counteract the negative consequences AI can bring into the world. It’s an interesting technique for picking up on fakes, but I didn’t find the overall error rate very impressive, though I suppose if humans are simply unable to detect such minute differences in lip shape and speech, it’s still an improvement. I am glad that the researchers involved recognized it as a “cat and mouse game,” and that the only long-term solution is regulation of this activity and consequences for those who use it unethically. As the readings said, around 90% of deepfake usage online is nonconsensual pornographic content, a definite problem that goes unaddressed by current laws.
The tutorial also presented some interesting cases in AI I had never considered surrounding interpretation of cultural symbols. The dominance of western cultures definitely affects how we train AI to perceive the meaning of what it is seeing, which is a dangerous practice we are already seeing with analysis of faces and people, but in terms of cultural symbols could lead to further erasure of non-dominant cultural practices or ideas.
Yet again, the use of AI to try to combat deepfake technology represents the fallacy that we can “tech” our way out of any problem, even those caused by technology itself. I appreciate the researchers behind the AI deepfake detection tool acknowledging the need for increased media literacy and an actual attempt to educate the public, not only about the prevalence of disinformation but about general best practices for interacting with information on the internet. I think about some older family members of mine and how little internet literacy they have, and I feel like the technology for creating deepfakes is almost overkill for people as unfamiliar with the internet as they are. As one of the articles mentioned, low-quality images, memes, etc. have been enough to fool people in the past. But if we consider the efficacy of these much “lower tech” methods, I think the threat of deepfakes is clear. As the creator of the Tom Cruise deepfake mentioned, these videos still take time and effort, so though they might not be made by your average member of the public, as Brian mentioned, it is not at all unlikely that companies, collectives, etc. dedicated to disruption, interference, and misinformation in some way or another will invest the time and resources to use this technology.
I remember earlier in the semester, as we talked about LLMs, we mused on the absurdity of creating AIs that could write misinfo, only to need to train other AIs to catch the misinfo AIs, which, of course, begot AIs that could trick the AI-catching AIs, which then led to new catching AIs that could catch the new misinfo AIs: very much a dog chasing its tail. It’s the type of thing we’ve come to accept from things that evolve. For instance, it’s not particularly upsetting to most that we need slightly different flu vaccines each year, with the understanding that the virus evolves to get around the vaccine, and so we need to protect ourselves from the new version to come (at least, the fact that it changes isn’t upsetting; there is of course a camp of people who are anti-vaccine in general, but that’s not the point). Still, I find it’s a little different when it’s a question of evolution and adaptation, at the expense of a lot of resources, a lot of labor, and a lot of ethical and environmental toll, over something that was created by humans. The flu is a fact of life; no one went out and engineered it, so we just work around it. With AI, there was a time when this was not an issue for us to manage; then people went out and created the issue, and we find ourselves Sisyphus, pushing the rock of make AI->detect AI->make AI->detect AI->rinse and repeat in hell until the end of days.
That’s a lot of time to spend talking about class and readings from over a month ago, but I say it all because I got the strongest sense of déjà vu to those moments and conversations while doing today’s readings. In reference to the Stanford piece in particular, while it’s certainly impressive that this deepfake recognition technique has been realized, we would be remiss to ignore the point brought up in both the head and tail of the article suggesting we’ll never really “solve” the deepfake problem; much like the textual misinfo LLMs can generate and other AIs are trained to go and catch, this cycle is, as cycles are, cyclic. With this in mind, in response to the NPR article, I’m a little skeptical of the assertion that we haven’t “arrived at an ominous point in which the technology can be readily abused.” Maybe I’m wrong for that, since Ume has much more experience in that area than I do, or maybe it’s that the piece was written in 2021 and we’ve made “progress” since. But as more of these tools go open access, maybe not everyone can do a flawless Tom Cruise impersonation, but they can do something of slightly lower quality that’s similarly convincing to everyone but the most expert of experts, and even if that’s not the same, I’d be inclined to call it similar in impact.
The fact that these tools were made to correct small errors in filming for movies, and are now wreaking havoc on democracy (yes the U.S. election a bit as discussed, but also recently elsewhere, such as in Slovakia a couple months back), and turning into an unsolvable problem serves as yet another warning to us as computer scientists to think long and hard about the systems we make. Even if an initial application looks pretty benign, what is the worst case, how could the benign be exploited? It’s an annoying truth to think about, because sometimes you just want to solve the problem and not think about all the evil people might want to do, and once it starts it’s hard to stop because realistically you’ll never catch every exploit so at that point, might as well get something done. Still, it’s an important thing to think about, and the responsibility of an ethical computer scientist to do so, because if we are stopped by what is annoying and difficult and halfway futile, we never change anything at all. Perhaps it is better to be the dog chasing our own tail, knowing we’re never done but are always responsible for this thing that’s been created, than it is to sit complacent.
I think deepfakes are really harmful and scary, especially in an era of unreliable news sources. What really struck me in the NPR article was this nefarious statistic: “A 2019 report from Sensity, a company that tracks visual threats, found that nonconsensual deepfake pornography accounted for more than 90% of all deepfake material online.” This also opens the door to manufactured child pornography.
I really cannot think of any scenario where deepfakes wouldn’t be at the very least misleading, and at worst enabling some sort of violence. Regardless of how “humorous” deepfakes of celebrities saying or doing stupid things are, the rise of deepfakes just exacerbates distrust in the media. There is already a massive decrease in critical thinking and media literacy skills among people who use social media, and younger and younger generations have access to cellphones or computers, so I can’t imagine the detrimental impact of this kind of technology on their development. There needs to be more regulation on how this technology is employed and distributed.
I think it is entirely possible that even if deepfakes are detected by AI, the detection services might just get undermined by other sources of false information. I think it would introduce enough doubt into the system to allow misinformation to continue to spread and cause damage. I wonder if eventually we might get to the point where information online ceases to be trustworthy altogether. I don’t know if I can point to any news source right now that is entirely trustworthy, and I can only see that getting worse.
Starting with the NPR article, I definitely found the deepfake’s realism very difficult to grapple with. I think the implications of an easy-to-use deepfake are recognizable to most people, but at the moment these techniques take too much effort, time, and talent to reproduce easily. As the NPR article puts it, “experts say [the infrequent usage to create disinformation] was only because less sophisticated tactics, like lies, crude video edits and memes, have been working just fine as a source of deception.” The Stanford article shows some ways we might detect deepfakes and word-splicing AI techniques, but it points out that these counter-AIs are not a long-term solution, as “there is no long-term technical solution to deep fakes.” It also makes the important point that, despite these technologies, a deepfake is more likely to be exposed by a person who “recognized that his own question had been changed” than by technology identifying it. There are a lot of videos, and it is ultimately infeasible to run everything through these techniques. Moving on to the computer vision video, I think the line that sums it up best is when the speaker says, “even if you have a task that works equally well on everyone, it’s still bad.” Even ignoring the fact that companies used predatory ways of acquiring more diverse data, that did not address the actual concerns about identification, labeling the gender of trans people, or reinforcing gendered stereotypes through identification and advertising. The issues these systems present go beyond the data being used, and there needs to be clear identification of, and action against, the systemic issues these algorithms could exacerbate.
I think the advancement of deepfakes is an incredibly worrying development in recent computing history. The potential for misinformation that deepfakes present is unique, since no other technology can simulate real human experience so well. Not only that, but deepfake technology is available not just to large corporations like Disney (which uses similar technology to “revive” dead actors in movies like Star Wars). Deepfakes can be created by anyone with a reasonable technical understanding of programming and machine learning. Moreover, it’s likely that in the future the user interface for creating deepfakes will improve to the point that even technical laypeople could create them.
As the Stanford researchers note, any technical attempts to detect deepfakes will likely fail as a long term solution. The problem is that deepfake quality will only continue to improve, and developers can add features that specifically counteract detection attempts. Instead, the only long term solution seems to be a social one: we are entering a society in which any video or audio recording can’t be trusted, and we must rely on secure communication channels or face-to-face communication if we want to be sure what we’re seeing is real.
I’m not sure what else to say except that I find the existence and use of deepfake technology deeply concerning. The Tom Cruise example, on the surface, seems harmless enough if you don’t look too deep into it; he’s the biggest actor in the world, so someone using his face in a mostly innocuous way wouldn’t be an issue. But massive implications involving ownership of your own face, the impressiveness of the tech, and the spread of misinformation come up. The article brings up very probable hypotheticals involving deepfakes of political figures spreading misinformation, causing global conflicts. Others have also brought up deepfake porn, which is a massive, objectively horrible form of exploitation. I find few positive aspects of deepfake tech, aside from the example in the second reading of dubbing over actors so fewer retakes need to be done and the lips synch in different languages, but is that really enough to warrant its widespread usage?
I see multiple people likening this tech to sci-fi, and while these connections are unavoidable, I feel it is more important to discuss it in terms of reality. Deepfake tech shouldn’t only be discussed in hypotheticals and fiction, as the consequences can easily be seen in the present. I’ve seen plenty of ads on social media with deepfakes. One was particularly concerning: a voice/video deepfake of the Rock advertising a fraudulent government subsidy check, where all you have to do is give your information to a sketchy website (I’ve seen this on YouTube multiple times, so Google clearly does not care about filtering its ads for harmful misinformation!). The consequences of this tech can be seen right now, so I feel we don’t have to think in terms of “what ifs.”
Both of the readings present the advancements and the challenges in the field of AI and deep learning, specifically regarding deepfakes. The first article highlights the increasing sophistication of deepfake technology, as demonstrated by the convincing Tom Cruise deepfakes on TikTok. The advancements in AI and machine learning algorithms have made it possible to create highly realistic deepfakes, raising concerns over their potential misuse. As a computer scientist, it’s both fascinating and alarming to see such advancements. The technology’s ability to replicate human likeness with such precision is a testament to the strides we’ve made in AI, but it also underscores the ethical and security concerns that come with it. The second article shifts the focus to the detection of deepfakes, discussing how researchers at Stanford and UC Berkeley are using AI to spot these forgeries. The approach involves analyzing inconsistencies between visemes and phonemes. This is an essential area of research, as the detection of deepfakes is a critical step in combating misinformation. However, as the technology for creating deepfakes improves, so must our methods for detecting them. This cat-and-mouse game between creation and detection highlights a key challenge in computer science: developing systems that can not only keep up with but also anticipate and counteract advancements in technology used for malicious purposes.
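The viseme-phoneme idea can be sketched at a high level: map each phoneme heard in the audio to the mouth shape (viseme) it should produce, then flag frames where the visible mouth contradicts the sound. The tiny lookup table and per-frame labels below are simplified illustrative assumptions, not the Stanford/Berkeley researchers' actual model, which learns this comparison from data:

```python
# Rough sketch of a viseme-phoneme consistency check (illustrative only).
# Key observation from the research: sounds like B, M, and P require the
# lips to close, and lip-sync deepfakes often violate this constraint.
PHONEME_TO_VISEME = {
    "B": "closed", "M": "closed", "P": "closed",   # lips must close
    "F": "lip-teeth", "V": "lip-teeth",            # lower lip on teeth
    "AA": "open", "AE": "open", "OW": "rounded",   # vowel shapes
}

def mismatch_rate(phonemes, observed_visemes):
    """Fraction of frames where the mouth shape contradicts the audio.

    phonemes: one phoneme label per frame (from the audio track)
    observed_visemes: one mouth-shape label per frame (from the video)
    """
    checked = mismatched = 0
    for ph, vis in zip(phonemes, observed_visemes):
        expected = PHONEME_TO_VISEME.get(ph)
        if expected is None:      # phoneme with no rule in our toy table
            continue
        checked += 1
        if vis != expected:
            mismatched += 1
    return mismatched / checked if checked else 0.0

# A genuine clip: the mouth closes on the "M" and "B" sounds.
real = mismatch_rate(["M", "AA", "B"], ["closed", "open", "closed"])

# A crude fake: the mouth stays open during "M" and "B".
fake = mismatch_rate(["M", "AA", "B"], ["open", "open", "open"])
```

A high mismatch rate would suggest the video's mouth movements were not produced by the speech on the audio track; the real system compares learned features rather than hand-written labels, but the underlying inconsistency it looks for is the same.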
The juxtaposition of articles on deepfakes reveals the dual nature of AI’s capabilities and concerns. Emma Bowman’s exploration of the Tom Cruise deepfakes highlights the entertainment value and reminds me of seemingly innocent Snapchat filters. For example, there is a filter of a man with locs and a Jamaican-flag-themed hat, which makes me think about how someone can nonconsensually steal your face and parts of your identity. Thus the technology’s ability to seamlessly alter appearances sparks curiosity but also prompts reflection on its potential misuse. On the other hand, Edmund L. Andrews delves into the more alarming consequences of deepfakes in his discussion of detecting seemingly perfect forgeries. The article underscores the potential threat to truth and authenticity, emphasizing the difficulty of discerning manipulated content. The fear arises from the ability to convincingly edit someone’s words, posing challenges to accountability and trust. Finally, in the video, the Faception startup’s attempt to use facial data to determine personality introduces a new layer of concern. The prospect of AI making subjective judgments about individuals based on facial features raises ethical questions and accentuates the growing reliance on technology for complex human tasks, prompting a reassessment of our trust in these systems.
I haven’t really decided how I feel about deepfakes yet. I’m not one to worry much about hypothetical future situations for which there is not overwhelming evidence (like the effects of climate change), but I do find it alarming to imagine a world in which high-quality deepfakes can be produced quickly and cheaply. I find it reassuring that Chris Ume, who actually has a lot of experience with the subject, emphasizes just how laborious the process of creating deepfakes is. If he isn’t concerned about the future of deepfakes, then I am inclined to also not stress about it too much. It is also reassuring that technology which can detect deepfakes seems to be evolving alongside them, and I think there is a decent chance that deepfake technology will plateau before it completely changes our relationship to visual media.
Today’s readings explored the ethical considerations of computer vision, with a focus on deepfakes. I remember seeing numerous deepfakes on social media earlier this year that used politicians’ and celebrities’ voices to deliver various speeches. As indicated in the article ‘Slick Tom Cruise Deepfakes Signal That Near Flawless Forgeries May Be Here,’ technology has reached a point where it can almost perfectly mimic someone, making them appear to say anything from singing songs to spreading political misinformation. Although it is fascinating to witness such videos, there are significant ethical considerations at play. As highlighted in the other article, ‘Using AI to Detect Seemingly Perfect Deep-Fake Videos,’ it requires considerable effort to find evidence proving that these videos are fake. Instead of focusing solely on building tools to detect deepfakes, there should also be an emphasis on educating people about their existence and the importance of double-checking sources. Additionally, something that raised concern was mentioned in the lecture ‘Fairness, Accountability, Transparency, and Ethics in Computer Vision’: many people working in computer vision are unaware of how the technology they are developing is being used. The presenter also mentions the lack of diversity in the field.