In Chapter 9 of More Than a Glitch, the author examines the issues of AI diagnostics from the perspective of a patient trying to replicate the AI diagnosis she received for cancer. While attempting to replicate the result, she encounters a number of obstacles, from unreadable data formats to the assumptions the AI cancer-detection algorithm makes about the images it reads. Later in the chapter, she explains that results produced by AI are shaped by human concerns about risk, liability concerns related to malpractice, and the unpredictability of AI itself. In many cases, an AI may rely on proxies simply because those proxies happen to appear in every instance of the thing it is supposed to detect. For example, if all the images of huskies fed to an AI contain snow, the presence of snow becomes a proxy for the photo containing a husky. In short, not only can the assumptions AI systems make be onerous, but their results can be arbitrary, a product of unintended patterns in the data and of human beliefs about what the results should look like, which complicates the impulse to substitute machine decision-making for human decision-making.
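The husky example describes what machine learning researchers call shortcut or proxy learning. A minimal toy sketch (all data here is invented for illustration) of how a learner that simply optimizes training accuracy can latch onto a spurious proxy:

```python
# Toy illustration of proxy (shortcut) learning: every "husky" training
# example also contains snow, so a naive learner prefers the snow
# feature over any dog-specific feature. All data is invented.

# Each example: (has_snow, has_pointed_ears, is_husky)
train = [
    (1, 1, 1), (1, 1, 1), (1, 0, 1),   # huskies, all photographed in snow
    (0, 1, 0), (0, 0, 0), (0, 0, 0),   # other dogs, no snow
]

def accuracy(feature_index):
    """Training accuracy of predicting 'husky' directly from one feature."""
    return sum(ex[feature_index] == ex[2] for ex in train) / len(train)

# The snow feature (index 0) predicts the label perfectly on this data,
# so a learner choosing by training accuracy picks it over ear shape.
best = max([0, 1], key=accuracy)

# On a husky photographed without snow, the proxy fails:
test = (0, 1, 1)   # no snow, pointed ears, actually a husky
proxy_correct = (test[best] == test[2])
```

Here `best` comes out as the snow feature with perfect training accuracy, yet `proxy_correct` is `False`: the proxy generalizes only as long as the accidental correlation holds, which is exactly the failure the chapter describes.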
Today we read Chapter 9 of More Than a Glitch: Confronting Race, Gender, and Ability Bias in Tech by Meredith Broussard. In this chapter, she discusses her breast cancer diagnosis and how an AI analyzed her mammogram scans in addition to a human professional, even though she had not consented to this. Thankfully her doctor confirmed the scan, and the mass was eventually removed in surgery. The experience raised questions for Broussard, and she decided to double-check the validity of the AI her healthcare provider had used; she found that its result matched the original diagnosis. One of her main points of contention with this use of AI, however, was its lack of instinct, which she describes as one of the most powerful tools for detecting conditions like breast cancer. AI cannot have instinct, which leaves room for false negatives, and in her case a false negative could have been fatal. All AI can do is analyze images mathematically, pixel by pixel. For standardized procedures and checks, it could serve as a useful second opinion, but it should never replace a human professional.
Broussard emphasized the AI detection system's limited 'instinct' in cancer diagnosis. She highlighted the risks of relying solely on AI for detection, but I don't believe AI is as hazardous as she suggests when it is used as an assisting tool in conjunction with doctors.
The chapter doesn't describe doctors relying entirely on AI detection systems or being replaced by them. It's her concerns that magnify the perceived 'danger' associated with AI detection systems.
Indeed, AI detection systems may have their failures, but the goal isn’t to replace all doctors. Instead, researchers aim to assist doctors. Combining doctors’ expertise with AI data can enhance the cancer detection process, making it more reliable and efficient compared to relying solely on either doctors or AI for detection.
Anyone can throw around claims that AI tools are dangerous and harmful, but the real challenge lies in leveraging them to benefit society.
One thing that really stuck with me was the concept of technochauvinism. People place so much hope in technology that they put a great deal of trust in the idea of AI detecting cancer from a scan. Although this may be cost-efficient for society, we also have to recognize that a diagnosis is far better when a doctor can physically examine the site of possible cancer rather than just have an AI detect it. One reason is that "machine learning models tend to be trained on data from a single clinical site, whereas they need to be tested on multiple clinical sites." The author also gives the example of an AI shown a set of dog photos and asked to identify the huskies. The only way the AI detected huskies was by their background: snow. This illustrates both the potential inaccuracy of an AI system and the question of what such systems actually look at when deciding whether something meets the required criteria. "FDA has approved many AI-based medical devices tested at single sites, the AI diagnostic performance is not reliable outside the home site." It is very concerning that the FDA approves medical devices that are not as general as possible but are instead tuned to one group or category. I believe the FDA shouldn't do this, as it would create several varieties of medical devices that behave differently from one another. I would rather see the focus put on gathering far more data so that the chances of an AI-based medical device being accurate are higher. I am curious what these hospitals think about having their own AI-based systems, and why they don't consider the variety of other outcomes that could occur. Why are they focused only on their own data, leaving no room for outliers?
I'm always surprised by the varying levels of technological adoption in fields such as healthcare. On one hand, providers use many different algorithms for all sorts of important processes, such as deciding a patient's risk level and whether or not they have cancer. On the other hand, a patient can't access her own inputs to all of these systems without hardware that supports a physical format pushing thirty years old.
Consent is also a major issue here. Broussard evidently was completely unaware that her results would be run through a computer program to assist in detection, despite the fact that other professionals found the cancer to be clearly and evidently present. It's interesting to think that these images now have value. Broussard's obvious cancer might not be worth much, but I'm sure a more "subtle" case of cancer would be of more consequence to learning how to detect it.
I'm not sure I agree with Broussard completely on her ideas about human survival instincts. Sure, human beings excel at pattern recognition and spotting anomalies, but this does not make someone like me capable of automatically detecting something like cancer. Both humans and computers require training to connect the dots here. Survival instinct is indeed strong, but it is not always correct. We see shapes across the room and turn them into faces, or someone's head and shoulders, seeing things that aren't really there. Maybe instinct is correct more often than not, but you can definitely be instinctually wrong.
AI is not ready to be a complete replacement for human function. Specifically, as we read in Chapter 9 of More Than a Glitch by Meredith Broussard, AI is not ready to be at the forefront of diagnosing breast cancer. As Broussard simply put it, "predicting cancer from images alone is very hard for both humans and computers" (p. 150). Broussard goes into excellent detail about this and even cites research showing that AI is worse than its human counterpart at detecting breast cancer. Currently, AI is incapable of considering all the variables human doctors consider when making their "diagnosis." Still, I could see AI being used in the medical field to help diagnose patients with conditions such as cancer. A major problem is that many see AI as the whole and entire answer, so much so that doctors would not be needed in the process at all. The issues with this are manifold.
The biggest problem with relying on AI to make a complete diagnosis is that AI and other machine algorithms in healthcare (as we read for class today and Wednesday) currently perform worse on minority patients than on white patients. The fault primarily lies in the available data: AI can only build its algorithms around data, and that data reflects a gap in healthcare access between white patients and minority patients. This is merely where the problems begin. It is therefore clear to me that AI is not ready to be the sole evaluator in a life-changing diagnosis.
The end of this reading sums up a major talking point from this module—the disparity between tech-rich and tech-poor and the increasing divide that comes from technological innovation. Technochauvinism—as the author describes it—is a mentality that prioritizes the wrong issues. Much like our metric-fixation, we lean on the newest breakthroughs in tech as though they are a better solution to long-standing problems, but as illustrated in this chapter, there are several confounding limitations to technology. In medicine, for one, the nature of risk assessment may discourage more "accurate" modeling techniques, since the priority of reducing false negatives is integral to the development process. Additionally, insufficient screening of the inputs to training data may bias a model in ways we don't fully understand, and likely won't without deep consideration of the modeling techniques. Finally, the economic and environmental costs of AI present an unrealistic barrier to widespread adoption, so even with the potential for good in situations such as screening for breast cancer, it makes far more sense to spend quality time and money on alternative methodologies.
I think in a lot of our readings discussing AI models around ideas like healthcare, housing, and recidivism, the author points out how short-sighted the approaches to mitigating these issues often are. In today's reading, Broussard calls out the idea that these AI algorithms could be helpful in rural areas by pointing out the lack of resources necessary to run such systems. Turning her focus to rural areas of the Global South, she says, "reducing cancer mortality in countries throughout the Global South starts with low-tech screenings and getting people access to medical care." Even in places where these systems can be used alongside radiologists, though, the author mentions that radiologists tend to ignore the over-predictive results of the AIs developed. It's interesting to read all of this because my dad, a neurologist, was working with a team at UAB with the intent of creating a system to detect conditions like Parkinson's from brain scans. They ultimately ran into issues with the radiology department that prevented them from creating a dataset from willing participants, but it is interesting to read about similar systems for other radiology scans. I wonder whether the tool he was interested in creating would have ended up in the pile of systems ignored by radiologists. I also find it interesting, though expected, that insurance companies would refuse to pay for images read by these technologies.
In Chapter 9 of "More Than a Glitch," Meredith Broussard discusses the use of AI tools for diagnosing diseases, focusing specifically on breast cancer. Broussard recounts her personal experience with breast cancer and, having successfully recovered, decides to delve into the workings of AI tools for breast cancer detection. One thing Broussard values from her experience is the expertise of her doctor, who accurately diagnosed and treated the disease. She contends, however, that AI tools cannot fully replicate the nuanced thought processes of doctors. Many models operate on patterns that might seem trivial, as illustrated by an example Broussard provides: an AI model that excelled at identifying huskies by relying on the presence of snow in the images. A major concern Broussard raises about AI models, particularly in the medical field, is the lack of transparency in their decision-making processes. She highlights the potential for legal ramifications if an AI diagnosis proves incorrect, emphasizing the importance of understanding the procedures these models implement. Broussard acknowledges the potential benefits of AI tools, especially in areas with limited access to advanced medical technology, but she points out existing challenges that hinder their widespread implementation. In conclusion, the author anticipates a significant impact from AI models in the future. Rather than replacing human expertise, she argues for their role as complementary tools, aiming for a harmonious integration of AI into existing practices rather than a complete substitution.
In this chapter, Broussard expresses her distrust of AI cancer detection systems. After struggling to run such a system on her own mammogram scans, she concludes that the belief that these systems could bring high-tech treatment to poor countries is a fantasy. She writes: "the huge amount of money being poured into AI diagnostics, when simpler methods could have a high impact, is an example of technochauvinism."
I agree with Broussard that AI cancer screening probably won't bring a revolution to healthcare in the developing world, since it can't solve the more pressing logistical hurdles and social issues. However, I do think there is a significant chance AI cancer screening could have a larger impact in the developed world. What Broussard overlooks is that reducing radiology costs in the U.S. system could still spare millions of people financial hardship. In that case, investment in AI screening technology would not be a wasteful instance of technochauvinism but a meaningful tool for correcting soaring healthcare costs.
I wonder if doctors are required (either legally or by their hospital) to note that AI looked at the mammogram, since it seems they could simply have decided not to mention it, and then the author would never have had a chance to learn about it, no matter how much she looked through her medical forms. It feels like something doctors might decide isn't important for the patient to know, but that could really affect your quality of care. It also relates to the author's concerns about data privacy, which, although ultimately not as bad as she feared, were definitely valid and concerning. It's understandable that researchers want patient data to improve current treatments and develop new approaches, but even anonymized data can have traits attached that allow individuals to be picked out (https://techcrunch.com/2019/07/24/researchers-spotlight-the-lie-of-anonymous-data/). It's also concerning that everyone has been dipping from the same database, since the biases of that dataset are likely to be spread and magnified indiscriminately. Also, the whole CD subplot was funny, and I have to agree with her hesitation toward the tech-immersed future. Finally, I think it's a really interesting commentary on how detached you can get from the real world when working with data. When her neighbor was looking at all the numbers for cell radius and irregularity, I doubt he was considering that they could be people he knew. Not necessarily a bad thing, but something that might be grounding to keep in mind.
For today, we read Chapter 9 of More Than a Glitch, which focused on AI cancer detection models. While there was a lot of focus on the trials of even getting the program to run, I took a particular interest in the issue of privacy and consent where AI diagnostics are concerned. As always, it's probably in the fine print somewhere that they can do whatever they want, but it seems like the sort of thing the patient should be able to consent to, particularly when sensitive photos (even if they're of the inside of a breast) may then be shown to far more people and used for far more things than actually providing the patient with medical care. It reeks of a metric-centric energy; well, not quite that, but the general idea of automating anything that can be automated. This clearly felt violating to Broussard, and the actual doctor was more than capable of quickly diagnosing on their own. Maybe her case was just an extreme one, but it makes me wonder whether this is really a place where AI is needed, or just somewhere we're forcing it to be, to the detriment, or at least the discomfort, of those it impacts.
The passage details a personal story of breast cancer diagnosis and treatment, intertwined with a critical exploration of the role of artificial intelligence in medical diagnostics. AI’s role in diagnosing diseases like breast cancer is complex and multifaceted. The author’s experience with AI reading her mammograms and her subsequent experiment with an open-source AI tool underscores the challenges in AI diagnostics. This complexity reflects the intricate nature of computer science, where algorithms must be finely tuned and extensively tested to ensure accuracy and reliability. The story also highlights the limitations of AI in interpreting medical data. The AI’s performance was dependent on the resolution and format of the images, illustrating how critical data quality is for effective AI analysis. This mirrors a fundamental principle in computer science: the quality of output depends heavily on the quality of input. Moreover, the author’s concerns about consent and privacy in the use of her medical data for AI training touch upon the ethical considerations crucial in computer science. Ensuring data privacy and informed consent are fundamental challenges in the development and deployment of AI systems.
The author mentions that she was not aware her medical file would be available to researchers. This raises the questions of informed consent and privacy that seem to be quite common in such studies. I think there is a need to be more explicit about where one's medical data will go when it is not clear what "people involved in one's care" entails. I also think this requirement is directly at odds with the problem of not having enough data.
Ultimately, it is clear that the role of AI is assistance, not the eventual replacement of humans in the labor force. However, technochauvinism and technosolutionism (along with the drive for profit) have meant that this is rarely the case. While it does not seem that AI is going to replace the jobs of doctors at present (even in the task of reading scans), since it can only do certain isolated parts of their jobs, I hope it stays this way. I do think there is some value in doctors being able to verify their readings of a scan using second opinions from other doctors as well as AI assistance, but these AI systems should definitely not be treated like oracles.
Meredith Broussard’s chapter on tech biases is mind-blowing! She talks about how AI can detect cancer, and it’s wild. Like, seriously, tech has come so far. But then she throws a curveball by digging into the fine print, exposing some sneaky ethical stuff. It hit me like, “Whoa, didn’t see that coming!”
The fine print details are like a wakeup call about the power plays in tech. Those overlooked agreement clauses aren’t just about AI’s path; they shape how it deals with race, gender, and abilities. This chapter made me realize how much we rely on tech without knowing what we’re signing up for.
Thinking back on the chapter, it’s this mix of being amazed by what AI can do for health and going, “Hold up, what did I agree to?” Broussard’s got me questioning how we’re jumping into tech without really understanding the whole deal. Tech’s cool, but it’s making me look twice at the fine print and what it means for our lives.
The article discusses AI's misdiagnosis of breast cancer through the author's personal experience. On one hand, AI still fails to consider certain things that only humans might recognize; it can fixate on unimportant features or values without weighing them against the background environment and other conditions. So we need to improve AI's ability with more medical-specific factors, while also treating it only as a tool for identification. There is a joke among medical students: "It is lucky to have a patient whose symptoms are exactly like what is in the book." This speaks to the diversity in biology, but it also reminds us that human disease remains a hard problem in medicine, one that requires experience rather than any single simple factor.
Privacy and consent are widespread issues in discussions of technology, and they appeared again in this reading. I think when health-related data is collected by a medical professional, people primarily assume it's used only for personal diagnoses and medical care. Furthermore, if one's data is shared for research or with other health-related organizations, they may be under the impression that they would be explicitly told (for example, the way you're informed when lab results get sent out if not done in the medical center). Medical data being used for AI research is probably somewhere in the form you sign to get care or to see a doctor, and you don't necessarily read the fine print when you are in urgent need of health care. I think a lot of people would be less inclined to give up their data, or would want to be more informed about how it would be used, if told the specifics. At least for now, this AI isn't being used as a replacement for a doctor's own diagnosis. If its use continues, though, there remain the issues of the algorithm's accuracy failing on datasets from different hospitals and the lack of a human-comprehensible reason behind its diagnoses. So before anything develops further, those building these models and the people working with them should address issues of bias and lack of understanding.
Our readings about AI have largely shown the same level of uncertainty about the creation and implementation of these models. Is a breast cancer detection model meant to replace radiologists or aid in diagnosis? Even if the models look promising, they're usually trained on one or a few test sites, and when tested at a different site, results are significantly worse. Radiologists and many others in the medical field are skeptical of AI, so why are these systems being used so commonly? If the experts are skeptical, who is pushing these implementations? Why is there still such a disparity between the results of Black and white test subjects? When, if ever, will these models be consistent enough to be used commonly? Broussard, through her own testing and research, shows that while the tech is impressive for what it is, no one seems to be on the same page about its implementation. Her colleague who created the model is candid about its strengths and weaknesses, but the fields of medicine and computer science are not consistent about their plans for implementation.
I also found the privacy concerns shared in this reading deeply concerning. Why is it that Broussard’s mammogram images were plugged into an AI model without her consent or knowledge? What’s equally concerning is that the AI presumably had access to high-res images while Broussard herself didn’t, and couldn’t without a disk drive. This strange barrier to information between a high-tech AI and someone who needed a now-antiquated data format is bizarre, to say the least.
In Chapter 9 of “More Than a Glitch,” the author explores the challenges of AI diagnostics through a personal experience with cancer diagnosis. Attempting to replicate an AI diagnosis reveals issues such as unreadable data formats and the algorithm’s assumptions about images. The chapter highlights how AI results are influenced by human concerns, liability issues, and the inherent unpredictability of AI. The text underscores that AI may rely on proxies, leading to arbitrary results based on unintended patterns in the data. The author shares a specific instance involving an AI examining mammogram scans without consent, raising concerns about the technology’s lack of instinct, crucial in detecting conditions like breast cancer. While AI can provide accurate analyses, its inherent limitations and potential for false negatives underscore the irreplaceable role of human intuition in critical medical decisions. The narrative emphasizes that AI, while valuable for standardized procedures, should complement rather than replace human expertise in healthcare.
One point from the reading that I struggled to comprehend was why exactly the AI failed to make accurate predictions when it was moved from one site to another. Cancer looks similar across all sites, and the reading discussed the standardization of the pictures taken for mammography. Is it the differing demographics from one place to another? The two hospitals in the reading couldn't even communicate because they weren't on the same system, and it sounded like the file formats were different. But is that really all it takes to break the algorithm?
I really enjoyed this chapter for its description and comparison of methods for identifying breast cancer, and for the ways human experts and AI can work together in the medical system. I was particularly interested in how lung technicians versus other cancer professionals interacted with AI tools. The chapter mentioned that many of the lung radiologists enjoyed using the tools, and that having their decisions validated and supported by AI let them feel a sense of pride in their work. It also seemed valuable that the systems can be configured to lean toward more false positives than false negatives. The model itself is obviously not the end of diagnostic testing, so it is better to err on the safe side and do more testing when there is uncertainty. Being able to mathematically tweak that "leaning" seems very useful. However, it is clear that these models are not sufficient on their own.
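That configurable "leaning" usually comes down to the decision threshold applied to a model's score. A minimal sketch, with invented scores and labels, of how lowering the threshold trades false negatives for false positives:

```python
# Minimal sketch of threshold tuning for a screening classifier.
# Scores and labels below are invented for illustration.

# (model score, true label) pairs; label 1 = cancer present.
cases = [(0.95, 1), (0.80, 1), (0.55, 1), (0.40, 1),
         (0.60, 0), (0.30, 0), (0.20, 0), (0.05, 0)]

def confusion(threshold):
    """Count (false negatives, false positives) at a given threshold."""
    fn = sum(1 for score, label in cases if label == 1 and score < threshold)
    fp = sum(1 for score, label in cases if label == 0 and score >= threshold)
    return fn, fp

# A "neutral" threshold misses one cancer; a lower threshold catches
# every cancer at the cost of an extra false alarm (more follow-up tests).
neutral = confusion(0.5)    # (1, 1)
cautious = confusion(0.25)  # (0, 2)
```

In a screening context, where a false negative is far costlier than an extra round of testing, the second configuration is the safer choice, which is exactly the tradeoff the chapter describes being tuned mathematically.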
As explained in Chapter 9 of More than a Glitch, the author explains the issues of AI diagnostics from the perspective of a person trying to replicate the AI diagnosis they received for cancer. During the process of attempting replicate the result, the author encounters a number of issues, from data formats not being readable to the assumptions made by the AI cancer detection algorithm about the images being read. Later in the chapter, it is explained that results produced by AI are shaped by human concerns for risk, liability concerns related to malpractice, and the unpredictability of AI. In many cases, Artificial Intelligences may be using proxies to detect something simply because these proxies exist in all instances where the AI is supposed to detect something. For example, all images of huskies fed to an AI had snow in the photo, resulting in the presence of snow being treated as a proxy for the photo containing a husky. Regardless, not only can the assumptions made by Artificial Intelligences be onerous, but their results can be arbitrary, a result of unintended patterns being present in the data, and shaped by human beliefs of what the results should look like, complicating the tendency to substitute human decision making with machine decision making.
Today we read Chapter 9 of “More Than a Glitch: confronting race, gender, and ability bias in tech” by Meredith Broussard. In this chapter, she discusses her breast cancer diagnosis and how an AI had looked at her mammogram scans in addition to a human professional, even though she had not consented to it to do so. Thankfully her doctor confirmed the scan, and the mass was eventually removed in surgery. This raised questions for Broussard, and she decided to double-check the validity of the AI that her healthcare provider had used. She found that they were as accurate as the original. One of her main points of contempt regarding the use of AI in this way however was its lack of instinct, which she states is one of the most powerful things when it comes to detecting things like breast cancer. It is impossible for AI to have instinct, which leaves room for false negatives––and in her case, if she had had a false negative, it would have been fatal. All AI can do is analyze images with math, pixel by pixel. For standardized procedures and checks, this could be good as a second opinion, but should never replace a human professional.
Broussard highlighted the AI detection system’s limited ‘instinct’ in cancer diagnosis. She highlighted the risks of solely relying on AI for detection, but I don’t believe AI is as hazardous as she suggests when it’s used as an assisting tool in conjunction with doctors.
The chapter doesn’t mention doctors relying entirely on AI detection systems or replacing them. It’s her concerns that magnify the perceived ‘danger’ associated with AI detection systems.
Indeed, AI detection systems may have their failures, but the goal isn’t to replace all doctors. Instead, researchers aim to assist doctors. Combining doctors’ expertise with AI data can enhance the cancer detection process, making it more reliable and efficient compared to relying solely on either doctors or AI for detection.
Anyone can throw around claims that AI tools are dangerous and harmful, but the real challenge lies in leveraging them to benefit society.
One thing that really stuck with me was the concept of technochauvinism. People have so much hope in technology that they are putting in a lot of trust into the idea of AI being able to detect cancer from a scan. Although this may be cost-efficient for society, we also have to realize that a diagnosis is a alot better where the doctor could feel out the place of possible cancer and not just have an AI detect it. The reason for this is because “machine learning models tend to be trained on data from a single clinical site, whereas they need to be tested on multiple clinical sites.” The author also gives us another example of being shown a list of dogs and detecting which ones are huskies. However, the only way the AI was able to detect huskies was because of their background, being snow. This gives us the possible inaccuracy of the AI system as well as what do they see in order to determine whether the following is the needed requirement or not. “FDA has approved many AI-based medical devices tested at single sites, the AI diagnostic performance is not reliable outside the home site.” This is very concerning that the FDA is approving of such medical devices when they are not as general as possible and more specific towards one group or category. I believe that the FDA shouldn’t be doing this as it would create several varieties of medical devices that could be different from each other. I would rather have the following be more focused on getting way more data so that the chances of the AI-based medical device being accurate is higher. I am curious as to what do these hospitals think about the idea of having their own AI-based system and why they don’t consider a variety of other possible results that could happen. Why are they only focused on their own data and no chance of outliers?
I’m always surprised by the varying levels of technological adoption in fields such as healthcare. On one hand, they’re using many different algorithms for all sorts of important processes, such as deciding a patient’s risk level and whether or not they have cancer. On the other hand, a patient can’t access her own input into all of these systems without hardware that supports a physical format coming up on thirty years old.
Consent is also a major issue here. Broussard evidently was completely unaware that her results would be run through a computer program to assist in detection, despite the fact that other professionals found the cancer to be clearly present. It’s interesting to think that these images now have value. Broussard’s obvious cancer might not be worth much, but I’m sure a more “subtle” case of cancer would be of more consequence to learning how to detect it.
I’m not sure I agree with Broussard completely on her ideas about human survival instincts. Sure, human beings excel at pattern recognition and spotting anomalies, but that does not automatically make someone like me able to detect something like cancer. Both humans and computers require training to connect the dots here. Survival instinct is indeed strong, but it is not always correct. We see shapes across the room and turn them into faces, or someone’s head and shoulders, seeing things that aren’t really there. Maybe instinct is correct more often than not, but you can definitely be instinctually wrong.
AI is not ready to be a complete replacement for human function. Specifically, as we read in Chapter 9 of More than a Glitch by Meredith Broussard, AI is not ready to be at the forefront of diagnosing breast cancer. As Broussard simply put it, “predicting cancer from images alone is very hard for both humans and computers” (p. 150). Broussard goes into excellent detail about this and even cites research showing that AI is worse than its human counterpart at detecting breast cancer. Currently, AI is incapable of considering all the variables human doctors consider when making their “diagnosis.” However, I could see AI being used in the medical field to help diagnose patients with conditions such as cancer. A major problem is that many see AI as the whole and entire answer, so much so that doctors would not be needed in the process at all. The issue with this is manifold.
The biggest problem with relying on AI to make a complete diagnosis is that AI and other machine algorithms in healthcare (as we read for class today and Wednesday) currently perform worse on minority patients than on white patients. The flaw primarily traces back to the available data. AI can only access and build its algorithm(s) around data that reflects a gap in healthcare access between white patients and minorities. And this is merely where the problems begin. It is therefore clear to me that AI is not ready to be the sole evaluator in a life-changing diagnosis.
The end of this reading sums up a major talking point from this module: the disparity between tech-rich and tech-poor, and the widening divide that comes with technological innovation. Technochauvinism, as the author describes it, is a mentality that prioritizes the wrong issues. Much like our metric fixation, we lean on the newest breakthroughs in tech as though they are a better solution to long-standing problems, but as this chapter illustrates, technology has several confounding limitations. In medicine, for one, the nature of risk assessment may discourage more “accurate” modeling techniques, since keeping false negatives down is integral to the development process. Additionally, insufficient screening of the inputs to training data may bias a model in ways we don’t fully understand, and likely won’t without deep consideration of the modeling techniques. Finally, the economic and environmental costs of AI present an unrealistic barrier to widespread adoption, so even with the potential for good in situations such as screening for breast cancer, it makes far more sense to spend quality time and money on alternative methodologies.
I think in a lot of our readings about AI models in areas like healthcare, housing, and recidivism, the author points out how short-sighted the approaches to mitigating these issues often are. In today’s reading, Broussard pushes back on the idea that these AI algorithms could be helpful in rural areas by pointing out the lack of resources necessary to run these systems. Turning her focus to rural areas of the Global South, she says, “reducing cancer mortality in countries throughout the Global South starts with low-tech screenings and getting people access to medical care.” Even in places where these systems can be used alongside radiologists, the author mentions that radiologists tend to ignore the over-predictive results of the AI tools developed. It’s interesting to read all of this because my dad, a neurologist, was working with a team at UAB with the intent of creating a system to detect conditions like Parkinson’s from brain scans. They ultimately ran into issues with the radiology department, which prevented them from creating a dataset from willing participants, but it is interesting to read about similar systems for other radiology scans. I wonder if the tool he was interested in creating would have ended up in the pile of systems ignored by radiologists. I also find it interesting, though expected, that insurance companies would refuse to pay for images read by these technologies.
In Chapter 9 of “More than a Glitch,” Meredith Broussard discusses the use of AI tools for diagnosing diseases, focusing specifically on breast cancer. Broussard recounts her personal experience with breast cancer and, having successfully recovered, decides to delve into the workings of AI tools for breast cancer detection. One aspect Broussard values from her experience is the expertise of her doctor, who accurately diagnosed and treated the disease. However, she contends that AI tools cannot fully replicate the nuanced thought processes of doctors. Many models operate on patterns that might seem trivial, as illustrated by an example Broussard provides: an AI model excelling at identifying huskies by relying on the presence of snow in the images. A major concern Broussard raises about AI models, particularly in the medical field, is the lack of transparency in their decision-making processes. She highlights the potential for legal ramifications if an AI diagnosis proves incorrect, emphasizing the importance of understanding the procedures these models implement. Broussard acknowledges the potential benefits of AI tools, especially in areas with limited access to advanced medical technology, but points out existing challenges that hinder their widespread implementation. In conclusion, the author anticipates a significant impact from AI models in the future. Rather than replacing human expertise, she argues for their role as complementary tools. This perspective aims to ensure a harmonious integration of AI into existing practices rather than a complete substitution.
In this chapter, Broussard expresses her distrust of AI cancer detection systems. After struggling to run such a system on her own mammogram scans, she concludes that the belief that these systems could bring high-tech treatment to poor countries is a fantasy. She writes: “the huge amount of money being poured into AI diagnostics, when simpler methods could have a high impact, is an example of technochauvinism.”
I agree with Broussard that AI cancer screening probably won’t bring a revolution in the healthcare of the developing world, since it can’t solve the more pressing logistical hurdles and social issues. However, I do think there is a significant chance that AI cancer screening could have a larger impact in the developed world. What Broussard ignores is that reducing radiology costs in the U.S. system could still spare millions of people financial hardship. In that case, investment in AI screening technology would not be a wasteful instance of technochauvinism, but a radical tool for correcting soaring healthcare costs.
I wonder if doctors are required (either legally or by their hospital) to note that AI looked at the mammogram, since it seems like they could have just decided not to mention it, and then the author would never have had a chance to learn about it, no matter how much she looked around in her medical forms. It feels like something doctors might decide isn’t important for the patient to know, but that could really affect your quality of care. It also feels related to the author’s concerns about data privacy, which, although ultimately not as bad as she feared, were definitely valid and concerning. It’s understandable that researchers would want patient data to improve current treatments and develop new approaches, but even anonymized data can have traits attached that allow individuals to be picked out (https://techcrunch.com/2019/07/24/researchers-spotlight-the-lie-of-anonymous-data/). It’s also concerning that everyone has been dipping from the same database, since the biases of that dataset are likely to be spread and magnified indiscriminately. Also, the whole CD subplot was funny, and I have to agree with her hesitation toward the tech-immersed future. Finally, I think it’s a really interesting commentary on how detached you can get from the real world when working with data. When her neighbor was looking at all the numbers for cell radius and irregularity, I doubt he was considering that they might represent people he knew. Not necessarily a bad thing, but something that might be grounding to keep in mind.
For today, we read Chapter 9 of More than a Glitch, which focused on AI cancer detection models. While there was a lot of focus on the trials of even getting the program to run, I took a particular interest in the issues of privacy and consent where AI diagnostics are concerned. Like always, it’s probably in the fine print somewhere that they can do whatever they want, but it seems like the sort of thing where the patient should be able to consent to that type of care, particularly when sensitive photos (even if they’re of the inside of a breast) may then be shown to far more people and used for far more purposes than actually providing the patient with medical care. It reeks of metric-centric energy, or, if not quite that, the general drive to automate anything that can be automated. This clearly felt violating to Broussard, and the actual doctor was more than capable of quickly diagnosing on their own. Maybe her case was just an extreme one, but it makes me wonder if this is really a place where AI is needed, or if it’s just somewhere we’re forcing it to be, to the detriment, or at least discomfort, of those it impacts.
The passage details a personal story of breast cancer diagnosis and treatment, intertwined with a critical exploration of the role of artificial intelligence in medical diagnostics. AI’s role in diagnosing diseases like breast cancer is complex and multifaceted. The author’s experience with AI reading her mammograms and her subsequent experiment with an open-source AI tool underscores the challenges in AI diagnostics. This complexity reflects the intricate nature of computer science, where algorithms must be finely tuned and extensively tested to ensure accuracy and reliability. The story also highlights the limitations of AI in interpreting medical data. The AI’s performance was dependent on the resolution and format of the images, illustrating how critical data quality is for effective AI analysis. This mirrors a fundamental principle in computer science: the quality of output depends heavily on the quality of input. Moreover, the author’s concerns about consent and privacy in the use of her medical data for AI training touch upon the ethical considerations crucial in computer science. Ensuring data privacy and informed consent are fundamental challenges in the development and deployment of AI systems.
The author mentions that she was not aware her medical file would be available to researchers. This raises questions of informed consent and privacy that seem to be quite common in such studies. I think there is a need to be more explicit about where one’s medical data will go, especially when it is not clear who counts as being “involved in one’s care.” I also think this requirement is directly at odds with the problem of not having enough data.
Ultimately, it is clear that the role of AI is assistance, not the eventual replacement of humans in the labor force. However, technochauvinism and technosolutionism (along with the drive for profit) have meant that this is rarely the case. It does not seem that AI is going to replace the jobs of doctors at present (even in the task of reading scans), since it can only do certain isolated parts of their jobs, and I hope it stays this way. I do think there is value in doctors being able to verify their readings of a scan using second opinions from other doctors as well as AI assistance, but these AI systems should definitely not be treated like oracles.
Meredith Broussard’s chapter on tech biases is mind-blowing! She talks about how AI can detect cancer, and it’s wild. Like, seriously, tech has come so far. But then she throws a curveball by digging into the fine print, exposing some sneaky ethical stuff. It hit me like, “Whoa, didn’t see that coming!”
The fine-print details are like a wake-up call about the power plays in tech. Those overlooked agreement clauses aren’t just about AI’s path; they shape how it deals with race, gender, and abilities. This chapter made me realize how much we rely on tech without knowing what we’re signing up for.
Thinking back on the chapter, it’s this mix of being amazed by what AI can do for health and going, “Hold up, what did I agree to?” Broussard’s got me questioning how we’re jumping into tech without really understanding the whole deal. Tech’s cool, but it’s making me look twice at the fine print and what it means for our lives.
The chapter talks about AI misdiagnosing breast cancer, based on the author’s personal experience. On one hand, AI still fails to consider specifics that only humans might recognize; sometimes it stresses unimportant features or values without weighing them against the background environment and other possible conditions. So we need to improve AI with more medically specific factors, while also treating it only as a tool for identification. There is a joke among medical students: “It is luck to have a patient whose symptoms are totally similar to what is in the book.” This points to the diversity in biology, and to the fact that human disease remains a hard problem in medicine, one that requires experience rather than any single simple factor.
Privacy and consent are widespread issues in discussions of technology, and they appeared again in this reading. I think when health-related data is collected by a medical professional, people primarily assume it’s used only for personal diagnoses and medical care. Furthermore, if one’s data is shared for research or with other health-related organizations, they may be under the impression that they would be explicitly told (for example, the way you’re informed when lab results get sent out if not processed in the medical center). Medical data being used for AI research is probably somewhere in the form you need to sign to get care or to see a doctor, and you don’t necessarily read the fine print when you are in urgent need of health care. I think a lot of people would be less inclined to give up their data, or would want to be better informed about how it would be used, if told the specifics. At least for now this AI isn’t being used as a replacement for a doctor’s own diagnosis. If they plan on continuing to use it, though, there was the issue of the algorithm’s accuracy failing on datasets from other hospitals, as well as the lack of a human-comprehensible reason behind its diagnoses. So before anything develops further, those developing these models and the people working with them should address the issues of bias and lack of understanding.
Our readings about AI have largely shown the same level of uncertainty about the creation and implementation of these models. Is a breast cancer detection model meant to replace radiologists or to aid in diagnosis? Even when the models look promising, they’re usually trained on one or a few test sites, and when tested at a different site the results are significantly worse. Radiologists and many others in the medical field are skeptical of AI, so why are these systems being used so commonly? If the experts are skeptical, who is pushing these implementations? Why is there still such a disparity between the results for Black and white test subjects? When, if ever, will these models be consistent enough to be used routinely? Broussard, through her own testing and research, shows that while the tech is impressive for what it is, no one seems to be on the same page about its implementation. Her colleague who created the model is candid about its strengths and weaknesses, but the fields of medicine and computer science are not consistent about their plans for implementation.
I also found the privacy concerns shared in this reading deeply concerning. Why is it that Broussard’s mammogram images were plugged into an AI model without her consent or knowledge? What’s equally concerning is that the AI presumably had access to high-res images while Broussard herself didn’t, and couldn’t without a disk drive. This strange barrier to information between a high-tech AI and someone who needed a now-antiquated data format is bizarre, to say the least.
In Chapter 9 of “More Than a Glitch,” the author explores the challenges of AI diagnostics through a personal experience with cancer diagnosis. Attempting to replicate an AI diagnosis reveals issues such as unreadable data formats and the algorithm’s assumptions about images. The chapter highlights how AI results are influenced by human concerns, liability issues, and the inherent unpredictability of AI. The text underscores that AI may rely on proxies, leading to arbitrary results based on unintended patterns in the data. The author shares a specific instance involving an AI examining mammogram scans without consent, raising concerns about the technology’s lack of instinct, crucial in detecting conditions like breast cancer. While AI can provide accurate analyses, its inherent limitations and potential for false negatives underscore the irreplaceable role of human intuition in critical medical decisions. The narrative emphasizes that AI, while valuable for standardized procedures, should complement rather than replace human expertise in healthcare.
One point from the reading that I struggled to comprehend was why exactly the AI failed to make accurate predictions when it was moved from one site to another. Cancer looks similar across all sites, and the reading discussed the standardization of the pictures taken for mammography. Is it the different demographics from one place to another? The two hospitals in the reading couldn’t even communicate because they weren’t on the same system, and it sounded like the file formats were different. But is that all it takes to corrupt the algorithm?
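One way to picture the site-transfer failure the reading describes (a hypothetical sketch with made-up numbers, not how real mammography AI works): even a trivially simple "model" calibrated on one site's scanner breaks when another site's scanner has a systematic intensity offset, and per-site normalization is one standard mitigation.

```python
# Toy sketch of single-site calibration failing across sites.
# The "model" is just a learned intensity threshold; Site B's scanner
# produces systematically brighter images of the same underlying cases.
import statistics

# (image intensity, label: 1 = suspicious finding) -- hypothetical values
site_a = [(0.30, 0), (0.35, 0), (0.40, 0), (0.70, 1), (0.75, 1), (0.80, 1)]
site_b = [(x + 0.3, y) for x, y in site_a]  # +0.3 brightness offset

def fit_threshold(data):
    """Midpoint between class means: a minimal stand-in for training."""
    neg = statistics.mean(x for x, y in data if y == 0)
    pos = statistics.mean(x for x, y in data if y == 1)
    return (neg + pos) / 2

def accuracy(threshold, data):
    return sum((x > threshold) == bool(y) for x, y in data) / len(data)

t = fit_threshold(site_a)
print("Site A accuracy:", accuracy(t, site_a))  # fits the home site
print("Site B accuracy:", accuracy(t, site_b))  # offset breaks it

# Per-site normalization (subtracting each site's mean intensity)
# removes the scanner offset and restores agreement between sites.
def normalize(data):
    mu = statistics.mean(x for x, _ in data)
    return [(x - mu, y) for x, y in data]

t_norm = fit_threshold(normalize(site_a))
print("Site B, normalized:", accuracy(t_norm, normalize(site_b)))
```

Real models fail across sites for many reasons beyond a simple brightness offset (demographics, equipment, labeling practices, file formats), but the sketch shows how even a small, systematic difference between sites can silently degrade a model that looked perfect at home.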
I really enjoyed this chapter for its description and comparison of methods for identifying breast cancer, and for the ways human experts and AI can work together in the medical system. I was particularly interested in how lung technicians versus other cancer professionals interacted with AI tools. The chapter mentions that many of the lung radiologists enjoyed using the tools, and that having their decisions validated and supported by AI allowed them to feel a sense of pride in their work. It also seemed valuable that you can configure the systems to lean in the direction of more false positives than false negatives. The model itself is obviously not the end of diagnostic testing, so it is better to be on the safe side and do more testing when there is uncertainty. Being able to mathematically tweak that “leaning” seems very useful. However, it is clear that these models are not sufficient on their own.
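The "leaning" described above can be sketched as a decision threshold on the model's output score (the scores below are made up for illustration): lowering the threshold trades missed cancers (false negatives) for extra recalls (false positives), which is the safe direction for a screening tool whose flags get checked by follow-up testing anyway.

```python
# Sketch of tuning a screening model toward false positives by lowering
# its decision threshold. Scores and labels are hypothetical.
cases = [(0.15, 0), (0.25, 0), (0.35, 0), (0.45, 1),
         (0.55, 0), (0.65, 1), (0.85, 1), (0.95, 1)]
# (model score, true label: 1 = cancer present)

def counts(threshold):
    """False negatives (missed cancers) and false positives (extra recalls)."""
    fn = sum(1 for s, y in cases if y == 1 and s < threshold)
    fp = sum(1 for s, y in cases if y == 0 and s >= threshold)
    return fn, fp

# Lowering the threshold eliminates missed cancers at the cost of
# flagging more healthy patients for follow-up.
for t in (0.5, 0.4, 0.3):
    fn, fp = counts(t)
    print(f"threshold {t}: {fn} false negatives, {fp} false positives")
```

At a threshold of 0.5 this toy model misses one cancer; dropping to 0.4 catches it while adding only recalls. In a real screening setting the threshold would be chosen from validation data, but the one-line nature of the knob is exactly what makes the "lean toward false positives" configuration the chapter mentions so practical.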