Reading for Monday October 30th

Weapons of Math Destruction by Cathy O’Neil, chapter 4 (WOMD-Chapter4.pdf)

You Are Not Expected to Understand This, edited by Torie Bosch, chapter 15 (https://www.degruyter.com/document/doi/10.1515/9780691230818/html)

Technically Wrong: Sexist Apps, Biased Algorithms, and Other Threats of Toxic Tech, by Sara Wachter-Boettcher, chapter 6 (https://grinnell.ares.atlas-sys.com/areslms/ares.dll?Action=10&Type=10&Value=40185)

Solving the Problem of Racially Discriminatory Advertising on Facebook by Jinyan Zang (https://www.brookings.edu/articles/solving-the-problem-of-racially-discriminatory-advertising-on-facebook/)

23 thoughts on “Reading for Monday October 30th”

  1. In Chapter 6 of Sara Wachter-Boettcher’s Technically Wrong, the author explains how technology companies build profiles of people on the internet using information collected on their platforms and by data brokers. These companies then target advertisements toward people using these profiles. As discussed in the text, this can entrench stereotypes and nudge people toward the interests their demographic is assumed to have. After all, by exposing people only to certain kinds of products, these companies ensure that what people become aware of is largely what the companies expect of them based on demographics.

    However, given our past readings on freedom of speech and moderation on the internet, it is possible that political organizers may use this information to target people and convince them of their ideology. On page 78 of Weapons of Math Destruction, O’Neil explains how advertisers typically target vulnerable groups with advertisements for payday loans and predatory education programs. Vulnerable people may likewise be targeted by groups seeking to convince them of harmful ideologies, furthering the reach of the echo chambers discussed in class on Friday. On some social media platforms this is already the case: some controversial content creators prey on people’s insecurities to sell them courses, and these courses can promote harmful ideologies about how to approach life more generally.

  2. These readings really play into my fears about data privacy and social media. While they offer ideas about healthier online environments and ad practices, it seems that on an individual level there really isn’t anything that can be done to stop your data from being found and catalogued, as practically every web service is collecting your information. It can be easy to shrug it off, think it’s not a huge deal, and say you’ll just ignore targeted ads, but that would be a fantasy. Sponsored content and targeted media control our internet usage, and most web services heavily rely on them; they can’t be ignored. And real damage is done by these systems. The examples in Weapons of Math Destruction of the marketing of for-profit universities were monstrous, and it’s awful that these institutions built on preying on vulnerable people continue to thrive. I’ve gotten plenty of ads for the University of Phoenix and DeVry, but I never really thought about why their marketing was so aggressive or what their business models were. These deceptive, predatory business practices can be a lot harder to identify than I’d like to admit.

    I really loved the reading from Technically Wrong. It seems like every day I lose some amount of agency in my online life, and at this point I’m sure major data aggregators know basically everything about me. The chapter returns to the concept of encoding discrimination into these services, which was examined in Algorithms of Oppression. Any advantage that targeted advertising might bring to users wouldn’t make up for Facebook allowing advertisers to reinforce racist housing practices. This chapter again shows how these large media companies follow and enforce societal biases. I found Wachter-Boettcher’s point about fake intimacy hiding these immoral practices very relatable. The analogy of a cool babysitter is kind of hilarious but incredibly accurate. I often feel pandered to by inoffensive and friendly emails from these companies, often accompanied by similarly styled cartoon images. Google, Microsoft, and Apple are among the most powerful companies on the planet but try to feel personal and fun to individuals to diminish the effects of their questionable practices. I’m not sure what the solutions to these problems are, as they seem almost unsolvable, but I think transparency in data collection and usage is a major step these companies need to take, not something buried under a hard-to-find page or hidden entirely.

  3. Today’s readings shared incredibly disgusting details about the algorithms behind online advertising and data tracking. The last article I read, “Chapter 6: Tracked, Tagged, and Targeted” in Technically Wrong by Sara Wachter-Boettcher, made a point that I believe applies to the majority of the readings on problematic and biased algorithms we have done this semester: “The only way the technology industry will set reasonable, humane standards…is if we stop allowing it to see itself as special” and above the laws we set for humans (p. 117). Algorithms should not be viewed as anything other than the result of human decisions, and as the result of human decisions, they should be subject to the same sanctions that regulate humans. Today’s readings illuminated how problematic data collection is and reiterated that problematic algorithms stem from problematic data. Using proxy variables and other loosely legal forms of data collection not only feels wrong but leads to errors in algorithms, and these errors often replicate common biases and stereotypes in society. Algorithms use this “dirty-feeling” data to make incorrectly informed decisions and calculations about who clicks on which articles, who would be most desperate for a given product or service, and so on. As these readings suggest, fixing algorithms may require fixing the way data is collected and used.

  4. In the WMD reading, O’Neil discusses how algorithms calculate personalized advertisements for different people; she especially stresses the for-profit universities that use “highly refined WMDs to target and fleece the population most in need”. This exploits the aspirations of poor people and uses them as a tool for making money. These universities also connect with other companies to build a “personalized” model of everyone in their profitable world. Likewise, the author discusses how the payday loan industry operates WMDs; these companies, and banks especially, use such information to make money and ruin people’s lives. In Solving the Problem of Racially Discriminatory Advertising on Facebook, the author researches how racial bias appears in ads. Ads are targeted in four major ways: Detailed Targeting, Custom Audiences, Lookalike Audiences, and Special Ad Audiences. From the results, they found that Facebook’s ad platform still includes avenues for discrimination by race and ethnicity, “despite the historic boycott it faced in 2020, and bad actors can exploit these vulnerabilities in the new digital economy.”

  5. The readings provide valuable insights into the complex issues surrounding technology and digital platforms. The text on “The Pop-Up Ad: The Code That Made the Internet Worse” reflects on the unintended consequences of introducing pop-up ads and the broader implications of the internet’s evolution. It highlights the responsibility of individuals, tech companies, venture capitalists, and regulators in shaping the current state of the internet. The author’s call to imagine a different future for the internet emphasizes the importance of personal accountability and the need to prioritize constructive interaction and civil online communities.

    The discussion on “Weapons of Math Destruction” sheds light on the exploitative nature of predatory advertising, particularly in the context of for-profit colleges. It exposes how data and algorithms are leveraged to target vulnerable individuals, leading to financial distress and perpetuating wealth inequality. The article emphasizes the significance of addressing these issues and designing fairer advertising systems to protect marginalized communities.

    The text on “Solving the problem of racially discriminatory advertising on Facebook” reveals the persistence of racial bias in Facebook’s advertising algorithms. It calls for greater transparency, algorithmic bias audits, and the utilization of demographic information to reduce discrimination. The article highlights the real-world consequences of discriminatory advertising, including reduced opportunities and political division, underscoring the need for collaboration among regulators, advocacy groups, and industry stakeholders to create more equitable advertising platforms.

    These readings collectively emphasize the importance of critically engaging with technology, advocating for diversity and inclusion, and holding both individuals and platforms accountable for their impact on society. They serve as a reminder that technology is not neutral and can perpetuate existing inequalities if not properly regulated and guided by ethical considerations. To create a more just and inclusive digital landscape, it is essential to prioritize privacy, fairness, and the well-being of all users.

  6. I’ve found the economics and public-perception science around colleges and universities fascinating for years, so naturally the WMD article stuck out to me. But before that, I just wanted to say that I usually forget that everything online had someone who made it (maybe less the case with AI-generated content now, but certainly for infrastructural stuff), and those people could be acting maliciously or not. I never would have considered that there was someone who “created” pop-up ads, or that they wouldn’t have even been trying to scam people or sell a shady product, just get their boss off their ass. Kinda cool. Kinda scary.

    Anyways, the whole situation in America with colleges and our obsession with name recognition and “prestige” is absurd, and it exists because it appeals to our fundamentally lazy brains. Having a heuristic that just says “Stanford = Smart” makes evaluating candidates feel easier, even if it’s not true. But the weird part is that having that heuristic (which all the Ivies (another meaningless group) and their equivalents spend lots of time and money ensuring remains in the popular consciousness) actually does make these schools more valuable and better able to serve their students (at least in the sense of providing jobs). They’re essentially like the dollar and other currencies, only valuable ’cause we’ve all agreed they’re valuable. I guess in this analogy, that makes private online colleges the gift cards: actually way less valuable, but costing the same, and really only there so you can feel like you’re doing something proactive and thoughtful.

  7. The readings today delve into the world of data-driven advertising and its impact on individuals. “Weapons of Math Destruction” argues that the practice of predatory advertising capitalizes on people’s vulnerabilities to market products and services. This trend is evident in the for-profit education sector, where individuals seeking opportunities to improve their lives are often targeted and lured into programs that might not be in their best interest. The book highlights the ethical concerns surrounding the use of personal data to manipulate and exploit consumers, emphasizing the need for increased scrutiny and regulation.
    Similarly, the text discussing Facebook highlights a significant concern regarding biases in the advertising algorithms employed by this social media giant. These algorithms hold a crucial role in deciding the content users encounter, thereby molding their online experiences.
    The combination of biased advertising and the exploitation of vulnerabilities in online marketing has triggered major debates about its impact on society. To address these issues and ensure ethical and responsible advertising, government regulations are essential.

  8. All of the readings were really insightful about the reality behind targeted advertisements and how they demonstrate biases. In Sara Wachter-Boettcher’s ‘Technically Wrong,’ she discusses how advertising companies obtain their data, how different companies use that data to profile you, and how, at the end of the day, regardless of what you are doing with your personal device, data is being shared and processed. Wachter-Boettcher uses the example of Uber, which, in an update, required all users to allow location access at all times, whether they were in the app or not; she points out that although the company claimed the change was for the benefit of users and to improve the app, it bred distrust. Wachter-Boettcher also discusses how companies profile their users based on the data they collect and use proxies to infer information about them. All this information might then be used by advertising companies for targeted advertising, as seen in the 4th chapter of ‘Weapons of Math Destruction,’ which is extremely harmful to different communities and can be used in really unethical ways that take advantage of people in unstable situations, as shown by the example of for-profit colleges. Finally, it was really interesting to read the reflection of the creator of pop-up ads.

  9. I think these readings really nailed the variety of issues surrounding modern-day advertising. As always, a tool that is theoretically helpful to people, such as the targeting of specific products to the groups most likely to purchase them, is abused in the pursuit of greed. I’m aware that all advertising comes from a place of greed, not altruism, but it’s still hard to wrap my head around the despicable nature of the targeted advertisements of for-profit universities, which basically sabotage the lives of already desperate people and saddle them with debt.

    It’s also amazing to me that the use of proxies for protected characteristics flies mostly under the radar. For some reason, most attention is paid to the fine details of what goes into these models, not to their end results. Sure, these algorithms aren’t directly taking in race, but if using proxies to guess ethnicity comes within 7% of using ethnicity directly, then there really is no functional difference.

    As the creator of the pop-up ad said, it’s difficult to blame just one person or group for the proliferation of surveillance and rampant data collection. It’s a combination of greedy people funding endeavors that make the most money off modeling humans as accurately as possible, programmers willing to do the work because of lax ethical standards, and regulators who either do not care about the issue or are profiting off of it.

  10. Today’s readings focused on targeted and predatory advertising. One extremely effective advertising tactic is to target someone’s weaknesses and insecurities and offer them a solution to their problems, often with its effectiveness exaggerated. These ads are extremely harmful to people’s self-esteem. Also, when it comes to targeted advertising, a lot of businesses decide which ads to show certain groups of people based on proxies, which are often inaccurate and make far too many assumptions. This leads to the stereotyping of internet users, which is a detrimental practice.

    What makes these practices harmful is the data these companies are collecting. The invasive methods they use are shocking and sometimes seem unavoidable. One thing that surprised me was learning that I did not have to give Uber my location to use the app; I could simply give my pick-up address. But the text was so small when I first got Uber’s message that I believed I needed to give them location access. This kind of manipulation, which large tech companies use to practically force their users into giving up so much of their information, is a malicious and terrible way for them to make more money. Money is at the root of corruption, and seeing how it drives these companies to manipulate so many into compliance is appalling.

  11. I don’t think there is a single person my age who uses social media and is not aware of the data collection that is going on. There are definitely personal fears, but I have also seen, and sometimes felt, dismissiveness born of a feeling of helplessness. Looking at regulations and precedent, it doesn’t feel like we in the U.S. have any ability to protect our own information. I think, though, this is one of the first times I have thought critically about the inequality that can arise from data and targeted advertisements. For-profit colleges can be scummy institutions, and it is crazy to think about the way they target customers and abuse government systems without regard for the well-being of the students they accept. Scarier than their general model, though, is the way they can now target individuals who are in need and push them into accepting debt far beyond their means. Then you have Facebook using proxies to identify different ethnic groups. According to Zang, Facebook only got better at identifying ethnic groups from 2020 to 2021. The idea that advertisers can even select on such protected classes for an ad is mind-boggling. They have been sued over this issue before by the ACLU, but the fact that it is still an option is alarming: it can lead to discriminatory advertisements and is generally “a violation of the existing civil rights laws that protect marginalized consumers against advertising harms and discrimination” (Zang).

  12. Data privacy is not new to me, but the idea of predatory advertising is. I knew that ads work better on their target audience, and I knew that marketing departments are very motivated to reach those audiences, but I did not know that these target audiences could be based on various levels of vulnerability and insecurity. That part is actually evil. The discussion of private colleges got me thinking about Grinnell. The admissions department does purchase leads, and it also gathers information from kids at college fairs. I wonder if the college engages in any sort of predatory advertising. It most certainly can’t profit from any student who can’t get into Grinnell, but by advertising to students who are more willing to apply, it can at least drop the admissions rate. To be clear, I don’t think the college engages in any predatory advertising directly. But I also don’t know how thoroughly the college has audited the lead-generation practices of its suppliers. The college is need-blind, but it could absolutely market to different socio-economic groups at different rates, especially considering that these groups probably profit the college in different ways.

  13. Reading the passage from You Are Not Expected to Understand This conjured up connections to previous ethical discussions we have had in class. The notion of “if I hadn’t created the pop-up ad, someone else would have” is reminiscent of the argument made by Manhattan Project scientists, but the author skillfully navigates this logic by reassessing his responsibility for the current state of the internet and renewing his efforts to improve conditions. In many ways, I am impressed by his ability to admit wrong and shortsightedness. It is encouraging to see people learn from their actions and publicly acknowledge their role without scapegoating or demanding some sort of reaction. For the most part, an issue such as this is indicative of the growth of the internet, from something new and exciting that individuals around the world contributed to in its infancy without fully assessing the consequences of their actions. But then again, how could they know what the internet would grow to be? It’s easy to look back, find the roots of the worst and most exploitative aspects of the internet, and cast blame on the individuals we link with these problems, but in cases like this, where the level of advancement in advertising and data mining was not clear at the inception of the pop-up ad, doing so seems unfair. The internet has always been a place of enormous potential, but also a tool for widespread harm. Examples like this are why we need classes like the one we are in, and they remind us that despite our best intentions, we can always be wrong.

  14. Sara Wachter-Boettcher’s ‘Technically Wrong’ and the idea of companies capturing data on a platform no matter what you do made me think of the website clickclickclick.click, a web app that displays all the data it collects from a user and compares it to other users. It gives users insight into what can be tracked on any given platform and therefore later sold to an advertiser as behavioural data. I just tried it this morning, and it knew my time zone and how many websites I had visited before it, which was kind of freaky. Wachter-Boettcher’s discussion of proxy variables could also be extended to the COMPAS algorithm we discussed earlier this semester, wherein various other variables were already being used as a proxy for race (often not very accurately), making the fact that race is a protected characteristic immaterial.

    The discussion of data collection and targeted (or predatory) advertisements in today’s reading also made me think about Gilles Deleuze’s idea of societies of control. Deleuze posits that the emergence of modern technology and the unprecedented ability to collect personal data have led to a new form of power within society, that of control, through which power is exerted by altering or restricting access to particular futures on the basis of the data institutions have collected on us. This form of power asserts itself through an illusion of profound freedom, wherein we are all encouraged to interact with websites and apps as much as possible, since this allows for the greatest accumulation of data.

  15. In the Brookings article about Facebook advertising, Zang talks a fair bit about the Special Ad Audience, which tries not to use protected (or, as Facebook calls them, “sensitive”) characteristics but ends up exhibiting bias anyway. By being poorly representative of the US racial makeup, it allows employers, housing providers, creditors, and others to pursue discrimination that isn’t legal while having a cop-out of not doing it intentionally (because they just advertised to a “non-biased” list). These advertisers control access to resources and opportunities that strongly influence quality of life, and having more or better access to these things (or less predatory versions of them) makes a huge difference in one’s opportunities. This has the potential to reinforce historical systems of oppression if marginalized groups receive different advertising in these crucial areas.
    This reminded me a lot of what we discussed with COMPAS, where proxy variables for many protected characteristics get to the same point as race, or gender, or a whole host of other things, without targeting them directly. The Brookings article mentions names and ZIP codes as examples Facebook specifically uses. Again, as came up in our COMPAS discussion, this can be quite bad, as it reinforces systems of oppression by using seemingly benign or even relevant information to group population data in a way analogous to grouping by protected class (for example, ZIP code is arguably relevant to housing or job ads, but due to redlining, historical segregation, and barred access to generational wealth, communities of color are often clustered together in poorer areas more exposed to predatory landlords and farther from high-paying jobs). Including the protected trait directly, however, is both illegal and likely to make the issue worse, as the algorithm can then reflect these legacies of oppression back even more directly (a minimal sketch of this proxy effect appears below). This all comes back to the idea of intersectionality: protected and unprotected traits alike are part of our life and community story; they are not discrete, independent variables.
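    To make the proxy mechanism concrete, here is a minimal Python sketch using entirely synthetic data and made-up correlation numbers, not Facebook’s or COMPAS’s actual systems: a classifier that never sees the protected attribute still delivers ads at very different rates to the two groups, because ZIP code stands in for group membership.

    ```python
    # Minimal sketch, synthetic data and made-up numbers throughout --
    # not any real platform's system. The classifier never sees `group`,
    # yet its ad decisions split along group lines because the "benign"
    # feature `zip_code` is correlated with group membership.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 10_000

    group = rng.integers(0, 2, n)          # protected attribute, hidden from the model
    # Residential segregation: ZIP code matches group 80% of the time
    zip_code = np.where(rng.random(n) < 0.8, group, 1 - group)
    # A historical income gap between the two groups
    income = rng.normal(50 + 15 * (1 - group), 10, n)
    # Past ad delivery tracked income, so it already skews by group
    shown_ad = (income + rng.normal(0, 5, n) > 55).astype(int)

    # "Fairness through unawareness": train on ZIP code alone
    model = LogisticRegression().fit(zip_code.reshape(-1, 1), shown_ad)
    pred = model.predict(zip_code.reshape(-1, 1))

    # Ad delivery rates still differ sharply between groups
    print("ad rate, group 0:", pred[group == 0].mean())   # roughly 0.8
    print("ad rate, group 1:", pred[group == 1].mean())   # roughly 0.2
    ```

    On this toy data the two printed rates diverge sharply even though the protected attribute was never used, which is exactly the “fairness through unawareness” failure described above.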
    The Brookings article recommends transparency, so that who these ads end up reaching, regardless of the name of the list, becomes clear. This would not make the advertising less discriminatory, but it would allow ads to be approached with the context of who they’re for. Additionally, it suggests regular audits, so that this issue doesn’t leave the conversation. Neither of these fixes the issue directly, but seeing as companies are much more likely to change when their reputation, and therefore money, is on the line, making this a broader conversation piece offers a promising pathway to real change.

  16. One of the overall themes of these readings was the idea of a “fairness through unawareness” versus “fairness through awareness” strategy for combating the discrimination advertisers can perform through targeted ads. This really reminded me of the “colorblind” approach to racism (and similar approaches to other forms of discrimination): the idea that members of a systemically privileged group can choose to “not see” something like race or gender. This also relates back to our discussion of how majority groups often do not hold their identity categories at the front of their minds the way marginalized people do. This erasure, especially of whiteness, that makes people feel they are the default is something these articles really brought to mind. I am also curious about the privacy aspects of this discussion. Americans often list privacy as one of their main concerns when it comes to companies accessing their data, so I wonder why this concern seems almost performative, since they/we often take no measures to prevent the use of our data.

  17. A lot of my thoughts have been explored in previous blog posts, but like Ellie I was also thinking about how the idea of proxy variables relates to other algorithms we’ve discussed in class. The use of proxy variables seems more dangerous than the “protected” variables themselves, given that proxies are based on association. Who decides that association, and what nuance is lost through it? I think targeted ads in general are unethical and install false needs (Marcuse), and, as evidenced in the readings, they reflect a monolithic view of different racial, gender, and socioeconomic groups. I think trying to predict which ethnic groups are using a site, as Zang describes Facebook doing, is really inappropriate. Users should voluntarily provide that information, and there should be transparency about what is being done with that data.

    Social media has the capacity to be a beautiful way to connect people and make borderless communities, but, like most things, racial capitalism permeates how developers and companies engage with audiences. Even creating separate technologies that try to circumvent these issues can be infiltrated by reactionary groups and their content.

  18. In Solving the Problem of Racially Discriminatory Advertising on Facebook, Jinyan Zang brings forth several issues surrounding algorithmic bias and discrimination in online advertising, particularly on platforms like Facebook. The research highlights the nuanced ways in which algorithmic processes, despite their mathematical foundations, can perpetuate or even exacerbate societal inequalities and biases, particularly concerning race and ethnicity. Adaptability and robustness are central to computer science, and in this context Facebook’s algorithms demonstrate that adaptability by creating discriminatory advertisement audiences even after the removal of explicit racial and ethnic identifiers. This reveals an alarming propensity for these algorithms to implicitly learn and utilize societal biases, such as racially correlated names and ZIP codes, to craft specialized audience groups, effectively bypassing mechanisms aimed at preventing discriminatory advertising.

  19. Today’s readings are all deeply disturbing in that they brought to light some severely unethical decisions tech giants have made regarding data collection and identity-based discrimination. I was not too familiar with most of these social media apps until a few years ago and was not informed about the scandals surrounding them until now. It’s hard to believe that companies used to make assumptions about the race and ethnicity of users, thereby drawing incorrect conclusions about other cultures, forming unfounded racial stereotypes, and further perpetuating historical biases. It’s a relief to know that they have come a long way since, but it’s also worrying because we don’t know whether the models trained on past data, and the assumptions encoded in their systems, are still there.

    I was also amazed at the extent to which they can misuse the data they collect from users. As I expressed in previous discussion essays, I usually consider a lot of this an inevitable side effect that has already happened on such a wide scale that it cannot do much more harm to any individual. However, it’s possible that collected data might be sold to and used by companies capable of far more evil and intrusive things in the future. Although we have signed so many data-privacy agreements every time we use a new app, these companies have histories of going against their promises and visually or mentally manipulating users into thinking they know what they’re agreeing to. There’s no guarantee that they will comply with any of it or will not find a way around the wording, which scares me because the opportunities for them are endless and there is little we can do to prevent it at this point.

  20. I feel like even if people do not know the inner workings of targeted ads, they are at least slightly aware that their data is collected and influences the ads shown to them. WMD made clearer to me the extent to which the University of Phoenix and other for-profit universities target their ads at vulnerable people; it’s just so cruel how they market themselves as a tool for “upward social mobility” under capitalism. Additionally, I find it curious that Facebook thought removing protected characteristics from targeted ads would actually get rid of discriminatory ads; to be honest, from everything I’ve seen in this class and outside of it, their goal probably wasn’t to eliminate discriminatory ads but to show the public they’re “improving” (similar to Google’s AI ethics page and its contradictory actions of firing the AI ethics researchers who call Google out).

  21. These readings gave us a lot of information about ads and how they are affecting today’s tech world. One reading I want to talk about is the article on Facebook’s racially discriminatory advertising. I am often surprised by the ads I receive, as I believe companies are tracking my search history and what I watch on YouTube; the moment I search for a product I want to buy, it starts showing up in ads on all the websites I use frequently. So when I read about how Facebook takes your race and curates ads based on it, I thought this was convenient but also weird, in the sense of wondering how they could create ads for a specific race. It also raises questions about how they create ads, since they use computer algorithms that “replicate existing patterns and behaviors that already exist in society.” This creates doubt, because society does not currently have reliable data for computer algorithms to use, yet we build assumptions on top of it. I think overall we shouldn’t make algorithms based on our current data, because I believe we are not in a state to be making such assumptions. By “current state” I mean that the knowledge we have, the mindset with which we approach problems, and our choices about what to solve are not aligned with society’s problems or the possible effects of our results. A quote that goes well with what I’m trying to say is from Bosch: “A great first step in stretching your imagination is accepting responsibility for your role in making the Web what it is today, even if your only responsibility has been not demanding something better.”

  22. In chapter 4 of WoMD, O’Neil investigates targeted advertising, primarily as practiced by for-profit universities like the University of Phoenix. These organizations target poor and underserved communities, promising a degree that will lift people out of poverty. Their business model relies heavily on students securing federal loans to pay for the degree, which recently came under scrutiny, as the Supreme Court allowed the cancellation of much of this student debt on the grounds that the loans were predatory.

    One aspect of the reading I felt was under-explored is the privacy issues involved with targeted advertising, which I believe contribute to its status as a WMD. Targeted advertisement relies on websites’ and apps’ ability to sell demographic data about their users to companies. Presently, consumers have become so conditioned to the existence of targeted ads that many people don’t question whether they should legally be allowed to exist at all. The EU’s recent law, the GDPR, attempted to fix this issue by forcing users to opt in to third-party data collection through website cookies, among other restrictions. While I think the spirit of the law is important, and some of its regulations have helped, I think the “cookie banner” requirement has proved largely ineffective at ridding the Internet of targeted advertisement.

    A more promising approach has been taken by Apple, which forces apps distributed through the App Store to explicitly ask users whether they may track their data. However, even if users tap the “ask app not to track” button, there is no airtight way for Apple to prevent an app from tracking anyway. Apple can threaten removal from the App Store if the request is ignored, but I think it would be difficult to prove that an app had done so.

  23. In chapter 4 of Cathy O’Neil’s “Weapons of Math Destruction,” the author takes us into the shady world of personalized advertising, where convenience masks the predatory nature of these practices. It’s like a digital version of phishing: advertisers create a false sense of urgency and use it to push their ads on us.
    O’Neil exposes how institutions like Corinthian Colleges used personal data to overcharge students and manipulate them into military enrollment, taking advantage of their vulnerabilities. She introduces the Bayesian approach, which ranks variables based on their impact on desired outcomes, and discusses lead generation, where data predicts our behavior, often to our detriment; I think it is a neat tool in itself. Now that our data is so readily sold and sought after, regulators are starting to see the need for new laws to protect our personal information from being exploited. The chapter raises crucial questions about the ethics of algorithmic discrimination, showing how these systems can perpetuate existing biases (a rough sketch of such lead scoring follows below).
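    The sketch below is hedged throughout: synthetic data, hypothetical feature names (clicked_payday_ad, searched_fast_degree, low_credit_flag), and a generic naive Bayes scorer, not anything from O’Neil’s chapter or a real advertiser’s system. It simply shows how ranking leads by predicted conversion probability pushes the most vulnerable people to the top of the call list.

    ```python
    # Minimal sketch with synthetic data and hypothetical features --
    # a generic naive Bayes lead scorer, not any real advertiser's model.
    import numpy as np
    from sklearn.naive_bayes import BernoulliNB

    rng = np.random.default_rng(1)
    n = 5_000

    # Hypothetical binary signals a lead generator might buy or infer
    clicked_payday_ad = rng.integers(0, 2, n)
    searched_fast_degree = rng.integers(0, 2, n)
    low_credit_flag = rng.integers(0, 2, n)
    X = np.column_stack([clicked_payday_ad, searched_fast_degree, low_credit_flag])

    # Assume historical enrollments skewed toward the most financially stressed
    enrolled = (X.sum(axis=1) + rng.normal(0, 1, n) > 2).astype(int)

    # Rank every lead by estimated probability of conversion
    model = BernoulliNB().fit(X, enrolled)
    scores = model.predict_proba(X)[:, 1]
    top_leads = np.argsort(scores)[::-1][:5]

    # The highest-scoring "best" leads are exactly the most vulnerable
    for i in top_leads:
        print(X[i], round(float(scores[i]), 2))
    ```

    Under these assumptions, the leads the scorer ranks highest are the ones showing every distress signal at once, which is the dynamic O’Neil describes.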

    Then there is chapter 15 of You Are Not Expected to Understand This, edited by Torie Bosch. That chapter stresses our role in making the internet a better place, suggesting that social media should serve the common good, not just profit: “I believe that social media should look more like public media, designed not to turn a profit but to help us live together in a democracy.” This is a very important point, because social media should be helpful to all and not harm people. The chapter also acknowledges that it’s tough to break away from the norm, concluding that “the hardest part of inventing alternative futures for the Internet is giving yourself permission to imagine something radically different.” I find this quote very interesting because it makes me curious about what kinds of radically different internets other people imagine.

