Limiting racial disparities and bias for wearable devices in health science research, by Colvonen et al. (https://academic.oup.com/sleep/article/43/10/zsaa159/5902283)
In the article on wearable technologies, the authors express concern about how wearable technology, increasingly used in healthcare-related research, may exacerbate health disparities between people of differing skin tones due to technological limitations and a lack of diversity in validation studies. Of course, wearable technologies may serve as another means for corporations to collect information on users with little transparency, confirming consent only through inaccessible legalese. At the same time, these technologies can serve as a positive force in healthcare research because of their ability to make longitudinal studies more widespread without being intrusive.
With the recent announcement of the Apple Vision Pro, it is possible that wearable technologies may become more than an extension of your phone that can track health outcomes. Although mixed and virtual reality are not new technologies, Apple has demonstrated itself to be a company capable of introducing new technologies to mass markets. After all, Apple’s emphasis on design and ease of use enables it to describe its products as magic, something that just works. If the introduction of the smartphone is used as a guide, it is imaginable that mixed and/or virtual reality may become as central to people’s lives as the smartphone has been. Through mixed and virtual reality technology, corporations are able to shape physical environments into personally tailored digital environments, complete with advertising. However, known disparities in voice and facial recognition technologies may result in disparities in how people navigate digital environments, especially if people are expected to interact with mixed and virtual reality using voice commands and their eyes.
Today we learned about wearable technologies. Devices like Fitbits and Apple Watches use green light signaling, which is supposed to monitor the bloodstream through the skin. The problem with this is that green light’s ability to penetrate the dermis varies with the pigmentation of the skin. When a user’s skin is darker, the green light does not produce measurements as accurate as the watch promises, such as heart rate, and this is a problem for many different reasons. The first problem is the data collection that it inhibits. Fitbit and the Apple Watch have each contributed data to hundreds of studies, and considering what we know about their inaccuracy, the data they collect are unreliable, which undermines the credibility of those studies. This is a form of technological bias, for wearable technologies are not equally effective for POC and white individuals.
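To make the heart-rate claim concrete, here is a minimal sketch of how a PPG-style reading turns an optical signal into beats per minute: count the pulse peaks over a window and scale by time. Everything below is illustrative and assumed (the synthetic sine-wave trace, the sampling rate, the simple peak counter); real wearables use proprietary filtering, and the point of the article is that when less green light returns from darker skin, the peaks in the raw signal become harder to detect reliably.

```python
# Hedged sketch, not any vendor's algorithm: estimate heart rate from a
# PPG-like signal by counting local maxima above the mean.
import math

def estimate_bpm(signal, sample_rate_hz):
    """Count peaks (local maxima above the mean) and convert to beats/minute."""
    mean = sum(signal) / len(signal)
    peaks = 0
    for i in range(1, len(signal) - 1):
        if signal[i] > mean and signal[i] > signal[i - 1] and signal[i] >= signal[i + 1]:
            peaks += 1
    duration_min = len(signal) / sample_rate_hz / 60.0
    return peaks / duration_min

# Synthetic 10-second trace: a clean 1.2 Hz pulse (72 bpm) sampled at 25 Hz.
fs = 25
trace = [math.sin(2 * math.pi * 1.2 * t / fs) for t in range(fs * 10)]
print(round(estimate_bpm(trace, fs)))  # prints 72
```

On a clean signal this works; degrade the signal-to-noise ratio (as weaker light return would) and spurious or missed peaks skew the estimate, which is exactly the accuracy gap the posts describe.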
One thing that struck me from this reading that was not present in a lot of our previous readings is the news that a lot of these wearable technology companies are already aware of these issues and are actively working to fix them and communicate their inaccuracies to their clients. This gives me hope that more companies can do the same in the future.
The reading is mainly about how inaccessible wearable technologies are across races, especially for Black people. For example, the wearables use a PPG green light signal that does not account for differences in how skin absorbs light. Studies have shown that green light lacks precision and accuracy, and may not read at all when measuring heart rate in darker skin types; this lack of consideration of different races and genders makes the product inaccessible in different ways. Similar things have happened for Asian groups. Skin color and structure make Asian people more prone to hyperpigmentation than fairer-skinned individuals, and much of the medical industry may not think about treatment tailored to Asian groups. Besides, in the US there are always plenty of makeup choices suitable for Black and white people but fewer suitable for Asians.
Although the article mentions that some companies have started noting that their products may not be effective on individuals with darker skin tones, something about this strikes me as wrong. The fact that a product is still marketed and available for purchase, despite a disclaimer that only a percentage of the population is likely to see any benefit from it, is unfair and raises plenty of ethical concerns. Although we have talked in class about the use of technology that only benefits certain groups, there is some question as to whether the fact that the technology does work for a large subset of the population will keep companies from trying to improve their product. After all, it works for many of their users, so why should they invest more resources and try to completely redesign their product to suit a smaller group? This practice conflicts with the notions of classification bias that we visited earlier in the semester, namely in that the effectiveness of the device varies when considering protected characteristics. It would seem to make sense that companies should not be able to market their product with an issue like this.
The disparities in the effectiveness of fitness trackers based on skin tone, as discussed in today’s reading, are particularly troubling considering existing health disparities across racial populations. Specifically, due to medical racism, the healthcare people of color, especially dark-skinned people of color, receive is of a lower quality on average than that which lighter-skinned people receive, especially white people. The article mentioned this in passing, but it’s a very real issue. Something that immediately jumps to mind is the fact that medical students, even today, are often taught that Black people have a naturally higher pain tolerance, which leads to their concerns often being taken less seriously, and also that the majority of medical diagrams for teaching diagnosis model conditions on white or light skin (even when that is not the population most prone to the condition). Things like this cause people of color to have to wait longer for treatment, which exacerbates conditions, not to mention systemic oppression designed to keep POC in poverty, thus limiting their access to things like quality nutrition and safe public spaces to be active in, which impacts their health in the kinds of metrics a fitness tracker might measure, like heart rate and activity level. Seeing all of this, and combining it with the fact that the green light mechanism used in the majority of fitness trackers is less accurate on darker skin, makes the problem far more insidious than simply being a few beats off in terms of BPM, or missing a step here and there.
The question then becomes what can and should be done about this. The article mentions some techniques, like reflectance spectrometry, that should be able to read skin tone much more accurately, but it is unclear to me whether that is a technique that could replace the green light, or whether it is limited in scope to being a superior alternative to the FST (the Fitzpatrick Skin Type scale, which is related to the green light processes but still a distinct thing). If it’s the latter case, how could FST be brought into play to inform an in-tracker mechanism like the green light and make it more accurate?
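For context on what an objective alternative to the FST looks like: reflectance measurements are commonly summarized as the Individual Typology Angle (ITA), a standard colorimetric quantity computed from the CIELAB lightness L* and yellow/blue component b* of reflected light, with higher angles corresponding to lighter skin. The formula below is from the colorimetry literature, not from the article itself, and the sample L*/b* readings are made up purely for illustration.

```python
# Sketch of the Individual Typology Angle (ITA), an objective,
# reflectance-based pigmentation measure: ITA = arctan((L* - 50) / b*) in degrees.
import math

def individual_typology_angle(L_star, b_star):
    """ITA in degrees from CIELAB L* (lightness) and b* (yellow/blue)."""
    return math.degrees(math.atan2(L_star - 50.0, b_star))

# Illustrative, made-up spectrophotometer readings:
print(round(individual_typology_angle(70, 15)))  # lighter skin sample -> 53
print(round(individual_typology_angle(40, 18)))  # darker skin sample -> -29
```

Unlike the FST’s six self-reported categories, this yields a continuous number from an instrument reading, which is the kind of measurement the article argues validation studies should use.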
This last thing is a little out of scope of the article, but I also find myself thinking: if a fix is found, and the trackers become accurate for everyone, what are the implications of that? Everyone deserves access to their health information, so making the trackers work across skin tones is still probably a good thing, but I highly doubt that you, the end user, are the only one getting benefit from your data. Maybe I’m too cynical, but I would wager that in the fine print of most of these devices’ terms of service, the company gets your data too and can profit from it in some way. The experiments already running are quite bad and ill-informed without understanding skin tone; the thought of more of them running with that understanding is worrying in a different way. The track record of respect for POC medical data and personhood in medical experimentation is pretty awful (think Henrietta Lacks, think the Tuskegee syphilis study, think poor Puerto Rican women being the bulk of the test subjects for early contraception, etc.). Obviously fitness trackers are pretty different from any of these, and the opportunity for direct harm and abuse is a little smaller, but I sincerely hope any attempts to address it are done with the intent of improving health data access, not as an avenue for further medical exploitation of people of color.
Consumer wearables like the Apple Watch and Fitbit provide health data, but a significant problem arises for individuals with darker skin tones—these devices exhibit lower accuracy due to their reliance on green light technology, which is less precise on darker skin.
As wearables become FDA-approved and integrated into healthcare, this accuracy gap could worsen existing health disparities for people of color. Urgent measures are necessary, including diverse validation studies, technological enhancements, and the elimination of biased measurement scales. These actions are crucial to guarantee equitable benefits from the ongoing healthcare wearable revolution.
Didn’t know that this issue existed. I am very spoiled to not have to worry about my watch not being able to read my HR very well. It is worth mentioning, though, that wearables like these are not the most accurate option; there are other ways to measure HR more accurately. I wonder what other racial issues exist that stem from actual real-world biological differences; this is the first issue I think I’ve come across in which the difference is not socially defined/constructed. This adds a whole new dimension to the conversation.
I thought that this article did a great job of adding a very important dimension to a conversation that many white people (myself included) wouldn’t have thought about, and even if we did, I think it could be dismissed if you hadn’t considered the full implications. When you’re talking about GPS and smart watches (either for a sport, the context in which I’m most used to them, or as a quality-of-life upgrade to your phone) it seems low-stakes, but heart rate is an important tool when it comes to detecting irregularity in one’s homeostasis. Obviously, having the ability to cheaply measure if you’re doing OK shouldn’t be a limited opportunity, and making early detection of illness or testing of treatments available only to a restricted group will only reinforce existing inequalities in our healthcare system. It wasn’t something I had considered before, so even though it was short, I appreciate that this piece was included.
I think this is a really great example demonstrating the importance of diverse sources of technology. Technology based around green light simply would not exist if it had been designed by people with darker skin tones, or at least with them in mind. This is definitely something I took for granted. We pretty much automatically assume that companies would not create or sell a technology that flat-out does not work for an enormous portion of the population, but living in societies historically built around white people makes such acts easier to conceal rather than entirely absent. It’s good to see that several different types of these technologies are being included on devices so that whatever works best can be used without much compromise.
As others have said, it’s really disheartening to see these products still being sold and marketed in a way that makes little mention of the fact that they may be less accurate for certain people. This is especially troubling when products like the newer lines of Apple Watches have a lot of features, such as notifications surrounding cardiac events and heart health, that can save lives or alert people to emergencies. It’s crucial that all people have access to this technology so that even more inequities in the healthcare system are avoided.
I’m personally very familiar with the wearable technology described in the article, as I’ve worn a Garmin watch nearly every day for the last 6 years. The watch uses the PPG data to measure heart rate, both at rest and during exercise. Some runners I know rely on the watch-generated heart rate data to measure their intensity and effort during workouts. Generally though, the running community considers the PPG data to be too inaccurate to be relied on consistently. Personally, I’ve also found that the heart rate data reported by my watch only loosely correlates with my effort level, and can vary greatly depending on how tightly it is worn and if my arm is sweaty.
Given its inaccuracy, I’m not surprised the technology has additional trouble measuring heart rate on individuals with darker skin than my own. I didn’t know that Fitbit and other companies were using this data in published research papers about health applications, since I assumed it would be too inaccurate to be seriously considered by the scientific community. However, as this article suggests, maybe some researchers aren’t aware of the device’s inaccuracies, particularly for people with darker skin.
As Fitbits, Apple Watches, and Samsung Galaxy watches increase in popularity, we wonder how accurate the watches really are compared with what the companies claim. After this reading, I have definitely grown suspicious of how consumer wearables track our sleep and health via our heartbeat. Reading that green light technology is still the industry standard in wearables is very concerning; I am quite curious how this remains the standard, and why there is no talk of changing the technology used to monitor someone’s body. It is also quite concerning that these companies show no concern about how different skin colors receive different treatment from their technology. The author suggests quite a few possible solutions to this problem; however, it is difficult to see whether companies will actually make a change and whether people will raise a legitimate concern over the problem of health disparities. Like many other readings, this one also displays how technology is deployed with no care for people, focused instead on capitalizing on its consumers.
It is clear that from the very beginning of these products’ and technologies’ life cycles, development is underscored by an unchecked assumption of whiteness as the default when imagining users. Had the technology supporting these popular wearables been designed and developed with a full range of skin tones in mind, a method such as this, which does not work well for people with darker skin tones, would never have been selected. A white (and male) “default” model consistently perpetuates structural inequalities, as mentioned by the article. This study stuck out to me as similar to the one we read earlier about the way snow clearing could have disparate effects based on gender. I think that in general, this is something that white people often take for granted. In both situations, a community-based approach has proven particularly useful. Being treated and socialized as the default leads to the internalization of this belief, so I am not surprised by my, and many other classmates’, unawareness of the ineffective nature of many tech wearables.
The issue of wearable devices inaccurately reading data on dark skin tones was something I hadn’t been aware of. Despite advertisements boasting about the effectiveness of these devices in determining health patterns, the readings often exhibit disparities based on the user’s skin tone, as mentioned in the reading. This problem echoes a recurring theme in our class discussions – the lack of diversity in study samples behind technological advancements. As seen in many other instances, the root cause of this technological flaw lies in the failure to include a diverse range of participants during studies. Additionally, it’s crucial to acknowledge that the use of cheap and generally robust green light in these devices is driven by the profit-maximizing nature of our capitalist society. Consequently, companies tend to overlook these issues to maximize their profits. One aspect I found commendable in the article was the set of solutions proposed by the author. While some may be challenging to implement, addressing the problem through brainstorming and raising awareness is essential. By doing so, a more viable solution is likely to emerge.
This article highlights a crucial aspect often overlooked in technology design: inclusivity and bias mitigation. The issue with wearable devices reflects a broader problem in tech, where algorithms and technologies are developed with a one-size-fits-all approach, ignoring the diversity of end-users. This oversight leads to perpetuating systemic biases and inequalities. In daily life, this translates to a significant portion of the population receiving inaccurate or inadequate health monitoring, potentially leading to misdiagnoses or overlooked health issues. As technology increasingly integrates into healthcare, ensuring its accuracy and inclusivity across all racial and ethnic groups becomes imperative. This is not just a technological challenge but an ethical one, demanding a shift in how we approach technology design and validation. The article also serves as a reminder of the importance of diverse datasets and testing environments in technology development. It encourages a more responsible and inclusive approach to research and development, which is crucial in a field like health science, where the stakes are high and impact personal well-being. As computer scientists, we must advocate for and contribute to technological advancements that are equitable and beneficial for all, regardless of skin tone or other biological factors.
The article today introduced ethical concerns and the lack of inclusivity in the design of another kind of technology: wearable devices. Most of the issues raised can be generalized and tied back to some of the topics we’ve already discussed in this class. For example, last week, we talked about gender disparities in healthcare and the perpetuation of biases through excluding women from medical research and clinical trials. This brings me back to our discussion of how impact assessment should include participants representative of all identities, specifically those most marginalized or likely to be affected by a certain technology, tool or algorithm. In this case, although consumer wearable companies are already aware of the lack of effectiveness and accuracy of their devices on people with darker skin tones, and some have already made an attempt to include participants with varying skin pigmentation or be transparent about the lack of representation, they still do not take action to halt the marketing and sale of their products. After all, these companies are all driven by profit, and as long as their products still work on a large population of users, they don’t have an incentive to stop selling just because a certain community does not benefit from it. The fact that companies are aware and attempt to fix the issue gives me mixed feelings. On the one hand, it gives hope that some of these companies will still eventually erase the disparity and pave the way for all their competitors. On the other hand, it confuses me how they can ignore such an issue and remain so hesitant to take action to improve their products.
I use a watch with a heart rate monitor from Garmin, and I have thought that it was really cool that I could tell when I was sick or recovering just based on looking at my heart rate graphs on the watch. I think the benefits that an accurate heart rate reading could provide for health tracking would be greatly helpful if they actually worked for everyone. Furthermore, I noticed that the article for today mentioned that Apple had been doing research on an infrared sensor. They mentioned that it might be both more accurate generally and also better for all skin tones. The fact that my Garmin still uses a green sensor goes to show that our criteria for selecting groups for testing are not stringent enough. Melanin provides protection from light, so you would think that a light sensor would require rigorous testing on people with the highest melanin levels before approval, but the article also mentioned that a lot of these tech companies are attempting to produce a minimum viable product that works for a plurality of people, not everyone. While this might be okay for a private company (although it feels ethically wrong), when combined with healthcare it compounds already existing systems of oppression. I think if these products had been tested on people with high melanin earlier, alternative technologies like infrared sensors would have shown up more quickly in the market.
This article presents another example of racial bias within tech, which again, is barely acknowledged by the tech’s creators or our culture in general. Not only is it immoral to sell tech that doesn’t work for a large portion of the population, but it’s especially sinister to not acknowledge any shortcomings. If Apple, Samsung and Fitbit don’t make a statement acknowledging the reduced accuracy of their wearables for people of darker skin tones, those people will buy those products without realizing they don’t work properly for them. While there’s the issue of deceptive advertising, this racial bias is especially concerning for wearables, as they’re often advertised as health products. If people with dark skin buy a smart watch because they need to monitor their heart rate, false or inaccurate readings can have serious implications depending on their health.
If these companies were to at least acknowledge their racial bias, they’d come under heavy scrutiny, but it would at least deter people from buying their products which would otherwise give them inaccurate health data. Instead, they continue to push their wearables without acknowledging these massive flaws. This is another example of how we refuse to acknowledge our biases, as companies and institutions focus on a white male “baseline” while ignoring the rest of the population.
I think the sentence stating how companies market a “minimally viable product to the largest population as quickly as possible” shows that the current relationship between companies producing health-related devices and their consumers doesn’t prioritize the health/well-being of the consumer; rather, the company prioritizes profit. Not only does that mean health-related devices fail to serve a large demographic of the people who use them (those with darker skin tones), but it actively harms this demographic by providing them with false health information. I feel pretty pessimistic about change in the immediate future unless companies are penalized or lose profit selling their devices in their current state. Similar to the content from last week, people across skin tones still purchase health-related devices, as it doesn’t seem to be stated on the devices, or widely known, that components like the PPG green light signal and the Fitzpatrick scale are both inaccurate and in use. If these companies were to make it known that their products don’t serve many people, and if making their results more accurate were slightly more costly, they’d be losing buyers and making less money. Hopefully, that doesn’t end up being the case. If this technology does improve in accuracy of health info across all skin tones, this would be similar to the curb-cut effect we discussed at the very beginning of this course, where improving the product for darker skin tones improves the experience for all skin tones.
The article on wearable technologies raises concerns about how they may worsen health disparities due to limitations sensing darker skin and lack of diverse validation studies. These technologies enable intrusive corporate data collection with little transparency. However, they also further healthcare research through widespread, non-invasive longitudinal studies. With Apple’s new Vision Pro, wearables may become more immersive via mixed/virtual reality. But smartphone history shows mass adoption of such intimate technologies lets companies shape personalized, ad-filled environments. Known facial/vocal recognition biases could cause disparities in how people navigate these digitized spaces. Crucially, green light blood sensing in current wearables like Fitbits and Apple Watches varies in dermal penetration by skin pigmentation. So their health metrics are less accurate for darker skin, undermining research validity. Promisingly, some companies acknowledge these deficiencies and aim to improve inclusivity. Overall, wearables illustrate how bias gets embedded in technologies claiming objectivity. But awareness and mitigation efforts give hope for more equitable tech futures.
The discussion of the disparity in efficacy of the technology used in wearable devices reminded me of an internship I had last winter at an insurance company. While we did not work with this technology exactly, the company wanted a proof-of-concept app for something called transdermal optical imaging. This is a method by which one can estimate blood flow, and therefore the heart rate of a person, through a video of their face by leveraging the differences between melanin and haemoglobin. This is done by separating the frames of the video into their constituent bit planes, isolating the ones that capture haemoglobin changes under the skin, and measuring this over time. While this contactless method is shown to have success in the research papers that explore it, the videos taken for them are recorded in very controlled environments. Furthermore, despite not having tested the technology on multiple races, the researchers asserted that someone’s skin tone would not make a difference to the results because the bit planes capturing melanin were being discarded, without acknowledging that differing levels of melanin could alter how visible the bit planes capturing haemoglobin are in the video taken. The insurance company, then, wanted to leverage this technology not just as a means to let their customers look at their heart rates through the app, but to assess whether these customers have any underlying heart conditions in order to raise their insurance premiums, and to assess whether they are lying while filing a remote claim. On both sides, it shows a commitment to profit/notoriety (in being the first) over good scholarship and responsibility.
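For readers unfamiliar with the term, here is a toy sketch of what “separating frames into bit planes” means, shown on a single 8-bit pixel value. This is not the actual transdermal-optical-imaging pipeline, just the underlying decomposition; the point above is that zeroing out the melanin-heavy planes does not by itself show that the remaining planes carry an equally clear haemoglobin signal across skin tones.

```python
# Toy bit-plane decomposition (illustration only, not the real TOI pipeline).

def bit_planes(pixel):
    """Return the 8 bit planes of an 8-bit pixel value, least significant first."""
    return [(pixel >> k) & 1 for k in range(8)]

def reconstruct(planes, keep):
    """Rebuild a pixel from only the planes whose indices are in `keep`."""
    return sum(planes[k] << k for k in range(8) if k in keep)

px = 0b10110101  # 181, one pixel of one frame
planes = bit_planes(px)
# "Discard" the four low planes, keeping only the most significant ones:
print(reconstruct(planes, keep={4, 5, 6, 7}))  # prints 176 (0b10110000)
```

In the imaging method, this per-pixel split is applied to every frame, and only the planes believed to track haemoglobin changes are kept and measured over time, which is exactly where the untested skin-tone assumption enters.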
I’ve never used any wearables, but I’ve been thinking about them more recently, as I think a device that can wake me up between sleep cycles would be really useful. The thing that surprised me the most about this article is the Fitzpatrick Skin Type Scale apparently being the industry standard for validation studies examining skin tone. I looked it up on Wikipedia and discovered that the current emoji skin tone variations are apparently also based on this strangely simplistic skin tone scale, which is only relevant for classifying skin reactions to sun exposure. Which is bizarre, if you ask me. From what I understand, one of the main arguments of the article is that using actual science to measure skin pigmentation might be a good start towards making wearables less racist. Good idea! Another thing that might help is if the big tech companies start caring about people with darker skin tones. I couldn’t agree more!
The article shows a pervasive bias in health wearables, demonstrating a critical need for ethical scrutiny. The utilization of technology that exhibits accuracy discrepancies based on skin tone raises ethical concerns surrounding fairness, justice, and inclusivity. The potential exacerbation of existing healthcare disparities, particularly for individuals with darker skin tones, underscores the ethical imperative to address these issues proactively. Moreover, the lack of diversity in validation studies and the reliance on outdated, subjective scales pose ethical challenges. A commitment to ethical research practices demands the inclusion of diverse populations in validation studies to ensure the generalizability of findings and the avoidance of unintentional harm. The ethical dimension extends to the responsibility of companies to rectify biases and enhance inclusivity in their wearable technologies. Transparency, accountability, and a commitment to mitigating biases should be paramount in the development and marketing of these devices.
First, I would like to say that the ordering of the readings for this module has been fantastic, as each reading translates well into the subsequent reading. The editorial we read for today, “Limiting Racial Disparities and Bias for Wearable Devices in Health Science,” built upon both our knowledge of inequality and data collection in the world of healthcare. The article directly mentions how there is a lack of fair and equal representation in medical research. In contrast to our previous reading, the central focus is on skin color affecting data collection by wearable devices such as the Apple Watch or FitBit. Many of these devices “use a PPG green light signal” to get accurate readings of one’s heart rate or blood pressure. As we have read, PPG green light signals are less effective on darker skin tones. With a push to use these devices in medical research, there is extreme reason to worry this could only further drive a divide in accurate and available research for minority groups, specifically those with darker skin tones. It was interesting to read about this, especially considering that many of these devices can be inaccurate and lack precision to begin with, regardless of skin tone. To me, it seems extremely hypocritical to use devices that are already plagued with issues of accuracy in a research context. Research, and medical research in particular, is built around quantitative accuracy. Including these devices in research would contradict this desire and hurt the ability to provide accurate healthcare information for groups that are already not well represented in Western medical studies. Clearly, capitalism and the desire of these big tech companies to have their products used in medical research (for marketing and branding purposes) are what lie at the origins of this push.
Wearable tech outside of the scope of disability initially feels dystopian to me, but reading this article helped me see where health tech could potentially bridge some of the gaps and biases we have discussed in the healthcare system. I see the benefit of restoring agency through wearable technology, but it seems to exacerbate the very biases it could potentially remedy if racial biases are embedded into these devices. That being said, I think it’s incredibly ridiculous that some of the largest and wealthiest distributors of wearable tech have not accounted for the variety of skin tones their products might be used on. I think within a lot of fitness/outdoor recreational spaces there are implicit ways in which Black people are discouraged from participation, and even when Black people do participate in these activities their behavior is policed (i.e. the Central Park birdwatching incident). So these biases within wearable tech do not help.