Exposure to Ideologically Diverse News and Opinion on Facebook, by Bakshy, Messing, and Adamic (https://education.biu.ac.il/sites/education/files/shared/science-2015-bakshy-1130-2.pdf)
How the Biggest Decentralized Social Network Is Dealing with Its Nazi Problem, by Adi Robertson (https://www.theverge.com/2019/7/12/20691957/mastodon-decentralized-social-network-gab-migration-fediverse-app-blocking)
During a class at Grinnell College on Liberalism, Neoliberalism, and political theory, the writings of the founders of the United States were discussed at length. In these writings, there was a belief that factions can threaten democratic processes by structuring society according to their own desires. While factions can still threaten democracies today, political factions do not exist in the same media or communication landscape as those of the 18th century. As the Science article explains, social-network-based digital platforms like Facebook can reinforce users’ political leanings through individual choices, not just content recommendation algorithms. As such, it is important to recognize how individuals socialize on the Internet and how this shapes communication between members of political factions.
In the 18th century, democratic theory may have been established to protect against factions, but those factions would have been self-regulating to some extent. Offline environments can, through social norms, discourage group members from promoting extremist ideas out of fear of social isolation or retribution. Online environments, however, lack some subtleties of communication, like facial expressions, long pauses, and body language. So, if someone were to say something harmful or extreme in an online chatroom, they would not be exposed to the subtle cues from others in the chatroom indicating that a social norm had been violated. Even if they were exposed to explicit, negative feedback from others in the chatroom, they could easily move on to another chatroom, thanks to the anonymity of the Internet. Thus, people discussing politics in online settings are not necessarily subject to the social norms that, in offline settings, can promote respectful and productive discussion.
As explained in “YouTube, the Great Radicalizer,” algorithms can certainly promote extreme behavior, but it is important to examine how the nature of social networking platforms can promote extremism even without the help of an algorithm.
I really enjoyed the reading for today. I think the idea of cross-cutting content is particularly interesting. Instagram, Facebook, Twitter, etc. are viewed as platforms for sharing “out.” Often I post something political or a news article on Twitter or Instagram because I think I am giving my followers something “cross-cutting” and “new,” when in reality, those who did see it had probably seen something similar before. The content that drives our clicks is often the stuff we want to see. I think anyone who has spent a significant amount of time on social media, particularly people in our age group, is very well aware of this trend. The readings for today, specifically “YouTube, the Great Radicalizer” by Zeynep Tufekci and “Exposure to Ideologically Diverse News and Opinion on Facebook” by Eytan Bakshy, Solomon Messing, and Lada A. Adamic, give a fascinating look into this. The diagrams of the Facebook cross-cutting analysis really put into perspective the gap between the content we see and the content we pick. Overall, our accounts become trained to organize and select only the content that we have picked and will want to continue to pick.
Freedom of speech and social media have always been intertwined with controversy. To some, in particular conservative voices, there should be no regulation at all, with social platforms free of most or all rules. Social media platforms have the ability and power to strongly influence opinions and mindsets, especially among the younger minds entering these platforms. Regulating this content is thus all the more important.
I was glad to learn more about what Mastodon actually is, since I remember hearing a lot about it as the future of social media a while ago, and I agree with the founder, who says he’s sad that this is the context in which it’s being brought up again. But I also don’t think you can have both unrestricted freedom and respect and safety, at least not where humanity is at this point. It’s the same dilemma that comes up in First Amendment trials, but the scale of the internet and the obfuscation of accountability make it both more difficult and more important.
Also, when it comes to YouTube, even the home page is already trying to steer people toward addictive, rabbit-hole content. When I got a new phone a while ago, before I signed in with all my accounts, I was curious to see what YouTube would look like. It’s kinda gross how much content there is that’s clearly meant to get kids’ attention – bright colors, sensationalism, and sexualization – with such idiotic content below it. Mixed in there were military propaganda and glorification videos (it sometimes seemed they were masquerading as news about the US or Ukrainian armies) and other “gateway” right-wing stuff. It just seems that before you even click on anything, YouTube is trying to push you to the right and down into a rabbit hole.
Finally, I’m a bit skeptical about the method they used to classify the “alignment” of sources. I know they say that “alignment is not a measure of media slant,” but that doesn’t resolve, or even address, the issue that they’re using sharing to measure the alignment of content when sharing is also one of their dependent variables. I’m not saying there definitely is an issue, but it seems dangerously circular. I am saying that they completely ignore the fact that it’s not uncommon for people to share sources they disagree with in order to say “woah! Look at how crazy this is!”, which would unquestionably confound the methodology they use for alignment. So I’m skeptical of the results and think that, if anything, there’s probably less “true” cross-cutting than they show, but I generally just find this whole study “kinda sus” now (to use academic terminology).
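To make my worry concrete, here is a rough sketch of how I understand the alignment measure to work, with made-up data (this is my own reconstruction for illustration, not the authors’ actual code): each URL’s “alignment” is just the average self-reported ideology of the users who shared it, which is why sharing ends up on both sides of the analysis.

    import pandas as pd

    # Hypothetical share log: one row per (user, URL) share event.
    # "ideology" is the sharer's self-reported affiliation on a -2..+2 scale;
    # every value here is invented purely for illustration.
    shares = pd.DataFrame({
        "url":      ["a", "a", "a", "b", "b", "c", "c", "c"],
        "ideology": [-2,  -1,  -2,  +1,  +2,  -1,  +2,  +1],
    })

    # Content "alignment" = mean ideology of the users who shared that URL.
    alignment = shares.groupby("url")["ideology"].mean().rename("alignment")
    print(alignment)

If a conservative hate-shares a liberal article just to say “look how crazy this is,” that share still pulls the article’s computed alignment toward the conservative side even though the article’s slant hasn’t changed, which is exactly the confound I’m worried about.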
The readings provide valuable insights into the dynamics of social media platforms and their impact on information exposure and content moderation. The paper analyzing exposure to diverse ideological content on Facebook reveals the influence of friend networks and the platform’s algorithmically ranked News Feed. While algorithms play a role in shaping content exposure, the study highlights that friend networks and individual choices have a larger impact. This emphasizes the need for users to critically engage with the content they consume and actively seek out diverse perspectives.
The discussion on YouTube’s algorithm underscores the concern that the platform’s recommendation system can lead to the amplification of extreme content. As users engage with specific subjects, they are directed towards increasingly hardcore videos, potentially contributing to echo chambers and the reinforcement of existing beliefs. This raises questions about the responsibility of platforms to prioritize the well-being of their users over revenue generation.
The exploration of how the decentralized network Mastodon deals with the presence of Gab, a controversial far-right social network, raises important considerations about the balance between openness and hate speech moderation. The tension between decentralized principles and the need for safety and inclusivity is evident. Mastodon’s response to Gab highlights the challenges faced by online communities in fostering open dialogue while preventing the spread of harmful content.
Overall, these readings underscore the importance of diverse and inclusive digital spaces, where content exposure is not limited to pre-existing beliefs. It is crucial for platforms to prioritize responsible content moderation and for users to actively seek out and engage with diverse perspectives to foster a more informed and inclusive online environment.
I think a few years back, I watched a video on the issue of social media radicalization on either YouTube or TikTok. I tried to find the video again, but cannot remember who published it. To some extent, I think we all know that social media does this to us, but I also think there is some inner belief that our own views cannot be that easily swayed towards radical ideas. I definitely think it is more palatable to engage with like-minded opinions that reinforce our own viewpoints. After realizing the shortcomings of social media and the “echo chambers” that platforms create, I have tried to engage with opposing views, but honestly coming from a viewpoint that is probably considered leftist, conservative ideas as of late almost feel like disinformation and misrepresentation of complex issues. I do try to understand the perspectives, but I often feel like conservative, centrist, and neoliberal messages are picked apart in the media I consume anyway to the point it can be hard to understand the rhetoric without my underlying biases.
Moving on to censorship, I think Mastodon is in a difficult position. I personally think the decision to disconnect Gab from other nodes was a good choice. I know that Mastodon is sort of personally hosted, but I am not entirely sure how that is done. If it is something like open-source software that can be implemented on any server and used to connect users, I am not certain it is reasonable to blame Mastodon. I am somewhat at a loss for how to deal with something like Gab. I think the point made about restricting access through Chrome was an interesting idea. Should entities that foster hate speech be blacklisted from traditional means of access? I think it would be very effective. I really am not sure how to approach this problem.
These readings were particularly interesting to me. I have always been interested in studying feedback loops and their impact on the masses through social media. However, seeing how their effectiveness varied from app to app is something that I had not considered before. This mainly has to do with your social circle. On Instagram, for example, your circle tends to be larger and filled with more people that you loosely know. But on Facebook, your circle tends to consist mainly of family and family friends. On Instagram, it is easier to unfollow someone if they are posting something that you do not agree with or do not enjoy, but on Facebook, a lot of people do not feel like they have that luxury. There is a larger incentive for users to be more polite, since there may be larger consequences if they choose to cut someone off on Facebook, for it could lead to a family dispute at the next Thanksgiving. As a result, cross-cutting occurs more on this app than on other social media, for it is highly likely that not everyone you are related to has the same beliefs as you.
The concept of self-regulation was very interesting to me as well, and it came up in both readings. Although it is easy to assume that the algorithms are entirely to blame for what we consume on social media, we have more power to shape that algorithm than we think. The first reading talks about how our own choices have more influence over what we see than the algorithm alone does, and the second reading discusses how Gab users are expected to self-filter what they see rather than the moderators doing it for them.
Some of the most head-scratching lines in the reading on Mastodon were the reasons given for not moderating Gab’s content, and later the developers’ explanations of their rationale for not limiting Gab’s reach in their apps. First, Gab founder Andrew Torba’s misguided implication that hate speech is allowed under the Constitution, when there are instances in which hate speech has been tied to real-world events, is a pretty low-bar justification, and it is not made stronger by then falling back on an “everyone else has the same issues” excuse. This is not necessarily surprising, and neither are the responses of Fedilab’s developer and Tateisu, both of whom give an apt characterization of the central conflict of the modern internet: the competing interests of openness and safety. It is a line that companies are constantly maneuvering as they weigh the costs of moderation against real-life harm, and it raises a familiar question: at what point does the harm outweigh the benefits? We talked earlier in the semester about a similar ethical dilemma, but in this case, the direct results suggest that better moderation is indeed necessary. There are platforms and niches all over the internet that are filled with hate, and this inflicts real harm on marginalized people. When companies are unwilling to take responsibility for the byproducts of their platforms, whether by shifting the blame onto a larger organization or by maintaining a neutral stance on hateful content, it becomes clear that there is not a strong enough infrastructure around social networking to properly police it, and thus the developers of these platforms command an immense amount of power and responsibility.
Tufekci describes YouTube as a “rabbit hole of extremism,” mediated by the longer watch time per user that it enables. I honestly recognise this with my own YouTube as well. Anyways, Tufekci’s book ‘Twitter and Tear Gas’ also talks about how the logic of censorship on social media has changed due to its relative decentralization in comparison with traditional news media. Now, instead of gatekeeping news or ensuring that it does not get published in the first place, Tufekci asserts that censorship operates through a deluge of information: social media are flooded with many alternate stories so that the original news loses its legitimacy.
Ultimately, I do think that the case of Mastodon highlights the pros and cons of decentralized design and open source in general. While the decentralized, open-source nature of the service allows for the creation of safe online pockets for those minoritized on mainstream social media, whose moderation is governed primarily by what brings in money, the same capabilities reduce moderation abilities across the platform and make it easy to host extremist content. And while I think the decision to isolate Gab from the rest of Mastodon (defederation) was reasonable, most of the users were already isolated on that server anyway, so I don’t know if it solved the root issue.
Interesting to see that personal choices still actually influence what we see online to an extent, even if the algorithms might then push those choices to the extreme. The YouTube algorithm thing scares me most in the context of my little brother, who has grown up with YouTube on just nonstop. I don’t even know where his time on YouTube has taken him.
The cross-cutting content finding was interesting too; the fact that it is more a result of our own choices than of the algorithms is not something that I had expected. We do like being right, I suppose. I wonder if algorithms see this behavior and then help us to reinforce it. I could see this being another area where AI perpetuates harmful cycles and behaviors.
I wanted to address the third reading, “YouTube, the Great Radicalizer.” I always thought about how YouTube was able to give me videos that I thought were interesting. I kept watching video after video without realizing that each video was slowly getting more and more extreme. It was shocking to hear that one of the employees who worked on YouTube’s recommender algorithm got fired just for trying to change the algorithm because of its harm to society. The fact that YouTube did not want this and fired them is just absurd. They focus on the money they gain and capitalize on users continuing to watch more and more videos because of the flow of income it brings. These thoughts are so interesting because they tie to other companies and how they care more about cash flow than about the interests and wellbeing of those around them. The recommender algorithm is disgusting to see as it continues to lead “viewers down a rabbit hole of extremism, while Google racks up the ad sales.” This distracts users from their daily work and pulls them into the trap that YouTube set up. It makes me curious what we can do to stop these corporate incentives and build an environment where employees don’t have to be fired just because they are trying to focus on the people and not on the money.
These three readings address topics that have always fascinated me about the internet: radicalization and “rabbit holes.” I’m a bit skeptical of the conclusions of the cross-cutting study, as the choices of individuals about what to click are largely determined by the algorithms of the social media sites they use. The YouTube article does a good job of explaining how these algorithms influence our choices, to the point where it often doesn’t seem like a choice at all. YouTube’s recommendation algorithm is maybe one of the least subtle versions of that type of system; a lot of the time, after watching just one video on a topic, I’m flooded with other videos that relate to it. It’s very easy to see how this type of algorithm would radicalize people and send them down a rabbit hole after watching one or two videos. Tufekci does a good job of explaining how this is a result of Google’s business model of keeping you on their sites as long as possible, which involves showing radical, emotionally charged content.
The Verge article was also very interesting. In high school I was actually a research assistant for a university lab researching Gab and far-right internet media, and I spent around two months sifting through posts on Gab, cataloguing and organizing them. After this experience, my (admittedly biased) conclusion is that the “censorship” of Gab isn’t really the morally gray issue some users make it out to be. While they claim to just be upholding free speech, Gab is almost exclusively used for hate speech. If you take a cursory glance at the site, you’ll see that the vast majority of posts are clearly extremely racist, xenophobic, homophobic, transphobic, and all around deeply hateful. Its creator, Andrew Torba, posts white-supremacist, ultra-conservative media, which should be an indicator of what the site is for. It is one of the most explicitly hateful sites I have ever been on or heard of. Hate speech is not free speech, and it should never be treated as such. I entirely understand the reasoning behind the decentralized nature of Mastodon and love open-source projects like it. I also understand that there’s a limited amount of power Mastodon’s creators have to regulate hate speech, but it would be in all users’ best interests if they put out a statement suggesting all servers block Gab and its users. If they have the power to shut out or shut down Gab, they should, as the discourse on that site has no good reason for existing.
It’s intriguing to see how social media algorithms affect the ‘echo chamber’ effect. Research on Facebook shows that while algorithms do shape content exposure, our relationships with friends and family have a bigger influence. Another study on YouTube highlights the ‘rabbit hole’ effect, which can make people, especially kids, feel addicted to the platform.
People often put too much blame on algorithms and tech companies for the toxicity of social networking services. While algorithms have an impact, they don’t force people to engage with extreme content. Ultimately, it’s a matter of personal choice.
In our capitalist society, companies aim to create addictive and convenient services to generate revenue, which isn’t negative. It’s our responsibility to seek diverse information and control our consumption, such as watching YouTube videos. If you wish to avoid hateful content, you can choose not to use social networking platforms. As long as there are no direct threats or calls for violence, freedom of speech, even when it’s extreme, should be protected, with individuals being responsible for the consequences of their posts.
I think this is an interesting mix of readings demonstrating how different social networking platforms each influence how modern users absorb information, but all in a very strategic, extreme way. All the companies mentioned are very smart about how they control the kinds of content their users view on a daily basis. For example, Facebook is known to be a platform for more tight-knit networks where users can more easily interact with and influence each other, so it utilizes the nature of its platform to expose users to more ideologically consistent content. YouTube, on the other hand, uses a very sophisticated recommendation algorithm to show users more extreme content based on their viewing history. I think Google is particularly strategic about how it codifies and plays to humans’ instinctive curiosity and thirst for knowledge to lead users down rabbit holes while financially benefiting from that.
I wonder if the side effect of leading users to much more extreme ideals than what they started with is something that is unintentional and unaccounted for in the planning and implementation process, or whether that is one of the goals that they are trying to get at. Either way, I’m curious to hear from others about what we can potentially do, especially when decisions about the goal and mission of a product are made at a very high-up administrative level, possibly without much ethical consideration and impact assessment. Is there a way to encourage engineers to raise their concerns about the ethics behind the products up to business leaders and protect them from the consequences of such confrontation?
Today’s readings were genuinely fascinating. All three of them discussed how various social media platforms contribute to the phenomenon known as echo chambers and filter bubbles, resulting in users being primarily exposed to content that aligns with their beliefs. Zeynep Tufekci’s metaphor, in which she explains the dangers of these phenomena by comparing YouTube to a restaurant that serves sugary and fatty food, always ready to serve more, vividly illustrates the perils of social media’s impact on people’s political views and ideologies.
Another intriguing aspect I found in the readings was the discussion surrounding censorship and free speech in Adi Robertson’s article, “How the Biggest Decentralized Network is Dealing with Its Nazi Problem.” The reading delves into the consequences of Gab joining Mastodon and Mastodon’s efforts to regulate right-wing extremist users from Gab. Some of the Mastodon administrators have taken actions to maintain a kind and respectful environment, while others have expressed uncertainty about whose role it is to censor and regulate social platforms that claim to promote “free speech.” An interesting point made by a developer was, ‘If Google wants to ban it, they should start with their Chrome web browser.’ Once again, this raises questions about who holds real power and how their actions can be interpreted.
The Mastodon/Gab article raised some interesting questions for me. In particular, I was drawn to the discussion about responsibility for bans, especially the quote “If Google wants to ban it, they should start from their Chrome web browser.” This came up as part of a larger discussion about Mastodon’s decision to block Gab, with some feeling it went against the mission of letting users choose what they want to see and others positing that there has to be a line somewhere, as well as the context of the Gab app already being banned from many major app stores. The quote was specifically about how bans from stores don’t equal banning the platform, especially where stores are also correlated with browsers (like Google Play -> Google Chrome) and users can simply access an alternative version that way. The proposition of banning not just app purchase but also use, through a web-access ban, pushes the boundaries of the responsibility to moderate: a ban from an app store prevents promotion and easy access, while a ban from a web browser cuts off the gateway to nearly the whole internet. While it feels true that Google, as a huge tech company, could do more than an app store ban (which, if I read correctly, was at least temporarily lifted), the precedent of banning Gab from Chrome seems like it could set an expectation for moderating the internet as a whole. Gab may be one of the largest Nazi-rhetoric-dominated social media platforms, but it’s not the only one, and certainly not the only place for internet users to interact around those talking points. To ban Gab would be noble, and to follow that logic and ban everything Gab-adjacent would be noble, but would it be feasible? Probably not, as new content goes up on the internet faster than it can be found, read, and ruled on.
If we considered this a possibility anyway, it would of course be a great thing for keeping the internet free of Nazi rhetoric like Gab’s, and probably of hate speech in general, but we must consider who decides what is hateful. I would imagine most if not all of us in this course are in favor of reducing Nazism, but this level of arbitration over what makes the internet safe could just as easily be used to stifle marginalized voices using the internet to speak up against a majority power. This breed of suppression has occurred time and time again throughout history: through barring voting, barring literacy, barring personhood, just to name a few. Would policing the speech we dislike and know is harmful be worth the cost of giving way to a new, modern form of this logic of oppression?
Ad-supported services need eyes on ads to survive. The more time you get eyes to spend scrolling through content peppered with ads, the more money you’ll make, and the more money you can invest in improving your recommendations so that even more time is spent by users seeing and engaging with ads. I think most of the time these implementations are benign or even helpful, but get very sticky when it comes to politics. I definitely disagree with the way YouTube and Facebook promote content that can be described as inflammatory or just simply “ragebait,” but even if these platforms didn’t engage in this type of behavior, should they really promote “crosscutting” of views? It’s worth noting that social media interaction has, at least to some extent, replaced some of the interactions we would be having in real life that would expose us to different viewpoints. But our agency defines how we incorporate these different viewpoints into our own lives. If I hear someone say something I disagree with, there are many, many factors that play into my reaction. I may choose to completely ignore it, clearly disagree, or weigh it evenly and let it influence my own thoughts. I don’t think that these social networks should expose people to inflammatory content for extra clicks and more revenue. But I also don’t think that social media networks, which can be largely defined by our own activity, should be held accountable to stand in for the personal intellectual mindset that encourages people to reach out to others and challenge their own thoughts.
The overall idea these articles left me reflecting on was a frustration with the arguments about freedom, free will, choice, etc. that are often employed in response to criticisms of content recommendation algorithms, or used as responses to calls for more vigorous moderation practices. Arguments like “people choose what they interact with,” or anything similar, end up essentially meaningless to me. If you are shown one thing over and over, or if one perspective or false narrative is pushed to the forefront, I don’t think it’s totally fair to say that you have come by that perspective completely by your own choice or volition. I remember a podcast I listened to about the radicalization of some older family members toward QAnon, and how things started with watching Fox News and other fairly “mainstream” conservative publications. Obviously not everyone who watches Fox News is radicalized to this extent, but I think what you are shown (especially when you have a low level of tech literacy or are not familiar with basic information-vetting techniques) has an effect that we cannot write off as completely combatted by “free will.”
I think a lot about how social media and the associated algorithms encourage echo chambers. I remember, in the mid-2010s, being on YouTube and having a lot of these “anti-social justice” videos advertised to me, clicking on them, and being recommended even more, even though I wasn’t necessarily interested in or in agreement with the content. Many social media apps are built on recommending content and learning what views resonate with people. There was this idea that pretty vile rhetoric was allowed online under the guise of “free speech.” But freedom of speech does not mean that people don’t have a right to react to that speech. And if the majority of people reacting are marginalized people saying that this speech is hurtful and violent, there should be a way to seek remedies for that and discourage similar behavior.
“If hate speech is masquerading as free speech on an app I’ve built, it’s upon myself to somehow moderate that” does feel like the bare minimum at first glance, but I also forget that digital moderation is severely undervalued. Maybe there needs to be a wider consideration of how we value and compensate moderators, who are doing various kinds of labor. I found it really sad when Rochko ended with, “It’s just unfortunate that these are the circumstances that we’re talking about Mastodon again…I would much prefer it was something specifically about Mastodon. Rather than, you know, Gab.” Especially since Mastodon was meant to circumvent the rise of reactionary spaces online and on Twitter, it’s a pressing reality that hateful speech and groups will find ways to co-opt it and exacerbate digital violence.
The study, conducted by Eytan Bakshy, Solomon Messing, and Lada A. Adamic, explores the influence of social media, particularly Facebook, on exposure to ideologically diverse news and opinions. By analyzing de-identified data from 10.1 million U.S. Facebook users, the authors investigated users’ exposure to content across ideological lines. The findings suggest that individuals’ choices play a more substantial role than algorithmic ranking in limiting exposure to diverse content. This research provides valuable insight into how personal choices and social media algorithms interplay in shaping the information landscape an individual is exposed to. It’s evident from the study that our online environment, particularly on platforms like Facebook, isn’t solely an ‘echo chamber’ curated by algorithms; it’s significantly influenced by individual choices.
The second article discusses a challenge faced by Mastodon, a decentralized social network known for promoting friendlier and more respectful interactions than the larger, centralized networks that often host hate speech and offensive content. Mastodon is facing difficulties due to the migration of Gab, a social network notorious for hosting far-right and extremist content, onto its platform. Given Mastodon’s decentralized nature, dealing with such issues is complex. Some administrators have blocked Gab’s content, and debates revolve around the extent of moderation required against Gab’s hate speech and the technical possibility of implementing such moderation given Mastodon’s decentralized architecture.
In news stories over the past few years, I’ve heard of the so-called “alt-right pipeline” that exists on YouTube. As described in the New York Times opinion piece by Zeynep Tufekci, the YouTube video recommendation algorithm often suggests videos that tend toward extremism. For instance, viewers of right-wing media like Fox News are more likely to be recommended videos from alt-right or conspiratorial sources.
I was not aware that one of the engineers of YouTube’s recommendation algorithm, Guillaume Chaslot, left the company after a dispute over the ethical nature of the algorithm. The article describes how Chaslot ran an independent analysis of the algorithm in 2016 and found that viewers of all political content tended to be given pro-Trump video recommendations. This signals that the algorithm, through its pursuit of maximizing attention, identified pro-Trump videos as the most engaging political content on the website.
It’s interesting to see how this is yet another instance of possibly well-intentioned algorithms leading to perverse side effects. Google engineers, like those at other social media companies, sought to increase the amount of time spent on their platform, and accidentally created a system that favors extremist and increasingly dangerous content. Given the popularity of YouTube, the vast influence that social media now has on U.S. elections is unsurprising.
It’s unsurprising to me that, for the most part, we are shown content and introduced to users who share views and interests similar to ours. These platforms want to keep us engaged to make money (for example, via users viewing ads), and showing us mostly like-minded or similar content increases the time we spend on social media. YouTube’s eventual exposure of users to extreme content is worrying; it’s not as if the content in recommended videos is thoroughly fact-checked. So in addition to users being unaware of the increasingly extreme content they’re being fed, the misinformation contained in these videos may be taken as somewhat valid. Paired with how difficult it is to incentivize these companies to change their algorithms, since doing so wouldn’t necessarily benefit them financially, and the fact that they seem to fire anyone who takes issue with the implications of these algorithms, there seem to be fewer and fewer ways to mitigate them.
I never had a Facebook account, so it was interesting to read about how Facebook works. When reading about Facebook interactions tracked by users’ political associations, I thought the results made sense, because even in day-to-day life I have noticed how birds of a feather flock together, and it makes sense that people with extremist views associate with other people who hold the same views. But what I thought was interesting was how reluctant conservatives were to associate only with conservatives and neutrals, which was unlike any other group.
On the other hand, I do have a YouTube account, and I love it because of its personalized recommendations. It makes it so much easier to find what I want to watch. I’m into natural hair care videos and podcasts, so my feed typically suggests other similar content. At first, I thought I didn’t get any of the extreme content recommendations the YouTube article described, but then I realized the titles of the videos I watch reflect these extreme values. For example, I’ll get hair care videos that say to watch for extreme growth and maximum results. But my podcast recommendations are not that extreme. I think it’s good to note how helpful these algorithms can be and that maximizing something isn’t always bad.
The YouTube reading talks about how autoplay leads to more extreme content. The author tests political topics and then finds that this happens outside of politics as well. I think it might be because of the feedback built into the algorithms they designed: through machine learning, the algorithm auto-chooses videos with certain tags, but at a greater intensity each time, since it believes the user enjoys watching that kind of content. As more and more autoplayed videos add up, the algorithm comes to believe that people want the most extreme content on a given topic. This becomes a compounding loop in which the content drifts toward the extreme without people intervening in what gets selected. I would not want my autoplay to go in this direction, and I am surprised by how they turn qualitative content into a quantitative scale of intensity.
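To see whether that compounding loop even needs anything sinister, here is a tiny toy simulation I sketched (entirely my own made-up illustration, not YouTube’s actual system; every name and number is invented): a recommender that just keeps a running estimate of watch time for each “extremeness level” and autoplays whatever it currently thinks holds attention longest will drift toward the extreme end on its own, as long as slightly more extreme videos hold attention slightly longer.

    import random

    # Toy items on an "extremeness" scale 0..9 (made-up).
    # Assumed viewer behavior: more extreme content holds attention a bit
    # longer, up to some tolerance, after which watch time drops off.
    def watch_time(extremeness, tolerance=7):
        if extremeness <= tolerance:
            return 1.0 + 0.5 * extremeness + random.gauss(0, 0.2)
        return 0.5 + random.gauss(0, 0.2)

    ITEMS = list(range(10))                 # extremeness levels 0..9
    estimates = {i: 0.0 for i in ITEMS}     # recommender's estimated watch time
    counts = {i: 0 for i in ITEMS}
    history = []

    for step in range(2000):
        # Epsilon-greedy autoplay: mostly pick the level with the highest
        # estimated watch time, occasionally explore something else.
        if random.random() < 0.1:
            item = random.choice(ITEMS)
        else:
            item = max(ITEMS, key=lambda i: estimates[i])
        t = watch_time(item)
        counts[item] += 1
        estimates[item] += (t - estimates[item]) / counts[item]   # running mean
        history.append(item)

    print("avg extremeness, first 200 autoplays:", sum(history[:200]) / 200)
    print("avg extremeness, last 200 autoplays: ", sum(history[-200:]) / 200)

In this sketch the recommender never “decides” to radicalize anyone; it just keeps reinforcing whatever currently scores highest on watch time, and the loop settles near the viewer’s tolerance limit. That matches my intuition that the drift is a side effect of optimizing engagement rather than an explicit judgment about how extreme the content is.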
When I was reading the article about Mastodon and Gab, I kept coming back to questions about openness. The article said the problem lay in the conflict between safety and openness. Safety is obviously important, so what is so important about openness that it could potentially warrant compromising safety? I realized I didn’t understand what they meant by openness. From what I can tell, openness refers not only to open-source philosophy, but also to “free speech” or “anti-censorship.” I do not fully understand why these two ideas are conflated, or how restricting hate speech could possibly compromise the advantages of open-source content.