Reading for Wednesday, September 13th

ACM Code of Ethics and Professional Conduct (https://www.acm.org/binaries/content/assets/about/acm-code-of-ethics-and-professional-conduct.pdf)

Google Responsibilities and Principles (https://ai.google/responsibility/principles/)

Ethical OS Risk Mitigation Checklist (https://ethicalos.org/wp-content/uploads/2018/08/EthicalOS_Check-List_080618.pdf)

24 thoughts on “Reading for Wednesday, September 13th”

  1. In the Ethical OS Risk Mitigation Checklist, I noticed a theme of treating unintended consequences as the work of bad actors, that is, of users intentionally turning the technology toward some negative end. Examples of this theme include the checklist questions beginning “How could someone use this technology to undermine trust in established social institutions…” and “How could someone use your technology to bully…”. Over a technology’s lifetime, there is a definite chance that bad actors will try to use it to commit harm. However, attributing unintended consequences to bad actors presumes that the technology itself is ethical, and it assumes that the technology cannot encourage users, even unknowingly, to engage in harmful behaviors. The Checklist appears to suggest that the mere consideration of ethics absolves a technology of being harmful, instead of establishing procedures through which ethical issues can be negotiated. As for the tendency to assume that negative consequences come from bad actors, I am curious whether this stems from how computer scientists think about unintended consequences more generally. In cybersecurity, for example, systems are designed with the expectation that users will try to commit harm by breaking in. That mentality can shape how computer scientists approach ethics, expecting bad outcomes to be the fault of users, not engineers.

    Regarding the other codes of ethics reviewed for class, what they have in common is the use of general language to describe each organization’s concern with ethics. Rather than treating ethics as something negotiable and variable, Google explains that it will “take into account a broad range of social and economic factors” and “be socially beneficial.” When discussing responsibility, the company offers vague statements that define neither its ethics nor its procedures for resolving moral issues.

  2. Ok, I tend to be skeptical of corporations, especially big ones in tech or finance. I’m glad they have ethics codes (like Google’s AI list), but they’re always the most broad and basic thing. Like, Google didn’t commit to not spying on people; they just committed to doing it within the bounds of “internationally accepted norms,” which could mean anything from wiretaps to “enhanced interrogation techniques.” Even things like ACM’s stated “obligation of computing professionals, both individually and collectively, to use their skills for the benefit of society” and Google AI’s similar #1 don’t really take into account that CS people are way more likely to think that their technology “might do some harm but the benefits will outweigh that once we usher in the new AI/Metaverse/X/Self-Driving/Neuralink/Mars-Colony utopia” (that’s actually a direct quote from Elon Musk’s id (the Freud one)). And they definitely ignore the fact that the very act of training AI, regardless of what it does, uses massive amounts of computational power and energy, and the effects of global warming are felt predominantly by the most vulnerable in the world.
    So maybe I’m cynical, but blurbs like these don’t tend to increase my trust. Maybe it’s because I’ve been in lots of places where people are like, “yeah, this is what our website says, but that’s not really how it is,” or because even if people follow the ACM’s (a body with no enforcement or oversight authority) suggestion to “follow generally accepted best practices,” those practices themselves are probably made up by CS people and likely have bias, have unintended consequences, or are just straight up stupid. Even the checklist (which I’m most sympathetic to because it doesn’t serve to boost that org’s appearance) ignores the fact that a lot of the time, stuff like exploiting “the Dopamine economy” or collecting too much data are very much features, not bugs.

  3. There were a few noteworthy points within these handbooks, the first of which was that computing technology could reduce unfair bias. Much of the time we hear about technology enhancing unfair bias, so what type of tech could have the opposite impact? How might AI decrease bias without allowing the potential for the opposite?

    Additionally, both the ACM’s and Google’s ethical principles invoke the notion of decreasing harm rather than eliminating it, both stating something along the lines of: only do harm where necessary. This is a difficult principle to uphold, since it is not a black-and-white boundary. Who decides what “undue harm” is? How does one determine whether some benefit outweighs some harm? Where is the unacceptable threshold for harm? Is answering that question unethical in and of itself?

    Finally, the risk mitigation checklist is the only reading of the three to mention addiction and dopamine monetization explicitly, something that may often be overlooked in the notion of public good. Does the design of products from companies like Google satisfy this point? Why might a company not explicitly mention this practice in their ethics handbooks? Is the practice of targeting/exploiting dopamine receptors unethical?

  4. I thought one particular section of the ACM Code of Ethics was very telling, emblematic of the challenges an organization always faces when it tries to codify what is essentially a series of intensely personal decisions. In section 1.2, Avoid Harm, the ACM writes, “When harm is an intentional part of the system, those responsible are obligated to ensure that the harm is ethically justified.” It tries, and in my opinion fails, to grapple with the fact that we rely on a non-existent “correct” perspective to make these decisions. It is certainly a lot easier to “ethically justify” harm when you are the one doing the harming. And what is meant by the minimization of harm if the purpose is explicitly to harm? Would it minimize harm to make killing machines, for example, more efficient? Is this an “ethical” goal?

    I think Google’s AI Principles really do the bare minimum, acknowledging that their work can have serious and far-reaching implications on society as a whole and that they should remain accountable for them. I appreciated the specific list of applications Google will not pursue, but it feels a little hollow. I didn’t expect Google to violate international laws, and I also didn’t expect them to create weapons systems simply because that is not their business. I fully expect others, though, to leverage the new knowledge Google is creating through their research to do exactly what Google won’t. I understand that this enters a more complicated ethical world, and I don’t believe that we should hold people accountable for the ways that the knowledge they create is used. But it is a certainty that no knowledge will remain purely at Google, so I think research there should be specifically tailored around that thought.

  5. I found all of these codes of ethics to be interesting, especially Google’s, since they have a reputation for being less-than-ethical in a lot of their dealings. For example, the most recent Chrome update was marketed as being for ad privacy, but what it actually did was make it easier for Chrome to track your activity across all aspects of the internet in detail for ad-targeting use. Unsurprisingly, their code’s section on privacy was very short, reading only, “We will incorporate our privacy principles in the development and use of our AI technologies. We will give opportunity for notice and consent, encourage architectures with privacy safeguards, and provide appropriate transparency and control over the use of data.” Though the update did come with a rainbow screen with a bolded “Got it” button and a fainter button for modifying settings, technically giving “transparency and control over use of data”, that’s more of a technicality than a practice in the spirit of their code of ethics. Moreover, they gave notice, but the update, which installs as soon as Chrome is closed and reopened after a new version launches, offered no easy way to opt out, so it could hardly be said to involve “consent”. I can’t say I’m surprised, just a little disappointed. It goes to show that these codes of ethics can be more about saving face for a company than an actual commitment to principles.
    As for the other two codes of ethics, I gravitated towards the Ethical OS one (though it is not without its problems). The ACM code felt very academic and dense, which makes sense since ACM is an academic organization, but compared to Ethical OS it did not feel as actionable or graspable for someone doing tech work who needs to hold these considerations in mind but may not have experience translating abstract language into concrete steps for ethical computing. By contrast, the checklist format of Ethical OS felt very straightforward and provided a clear list of questions to ask yourself while doing technical work, which made it feel more accessible than ACM’s version. However, the checklist format may also make its contents seem more comprehensive than they actually are. As we discussed in class on Monday (I think Sira brought it up), codes of ethics need to be continuously developing, not static, and while the checklist is clear and informative, its structure fails a bit on that point. You could check every box and still not be approaching your work in the most ethical way possible, and unlike a checklist, ethics isn’t something you can ever really complete; if you think you have, you risk drifting, as Monday’s Feynman’s Error piece suggested.

  6. The Google AI Principles and the ACM Code of Ethics and Professional Conduct are general, and I liked that. I believe that such codes should not restrict the ideas of the scientists and developers who make products. Both define only minimum boundaries, leaving computer scientists a lot of freedom to work within.
    The other part that I like from the ACM code is: “Examples of harm include unjustified physical or mental injury, unjustified destruction or disclosure of information, and unjustified damage to property, reputation, and the environment”. It does not deny that technologies can do “justified harm”. Computing technologies, especially AI, are very effective at harming people. Many countries, including the United States, have already started to employ AI technology for military use. The damage from those weapons is considered “justified harm”, since their purpose is to protect each country’s nation, people, and important values. I believe that it is unethical to harm people with technology, but if it has legitimate purposes, it is “ethical” enough.

  7. When looking at how Google is promising these ethical practices in their use of AI, it is interesting to consider their track record with selling personal data. Also, I found both the ACM Code of Ethics and Professional Conduct and the Ethical OS Risk Mitigation Checklist very useful, especially considering the context in which they show up. As undergraduate coders, it is essential for us to enter the workforce with the right code of ethics to work by. What will make technology continue to become more inclusive and have a more positive impact on society is if the next generation of young coders emerges from their educations with the proper framework from the get-go, rather than having to acquaint (or reacquaint) themselves later with the codes of ethics and coding practices they should model their work on.

    I truly appreciated how the ACM Code of Ethics included sections for each position a coder might hold, whether higher-up or entry-level. This way, new coders have a guide and proper frame of reference for how they should approach their work from any position they might take on. Whether it is the influence of a manager or an idea from an intern, there will always be coders working to maintain the standard code of ethics on a project and/or product.

  8. Google’s AI code of ethics is interesting to me as much for what it doesn’t say as for what it does. In particular, the code is missing Google’s original ethical motto, “Don’t Be Evil” [1]. While that phrase in particular has been removed, the rest of the code tends to elaborate on the same idea, albeit in a more corporate-friendly tone.

    But what’s missing without “don’t be evil” is the sense that Google as a corporation is accessible at the level of individual citizens. The original Google, with its search engine, connected the web in a way it hadn’t been connected before. Instead of individual islands of content being lost, all of the world’s information could now be centrally indexed. Yet winning over individual consumers was still at the heart of Google’s business.

    Nowadays, Google Search essentially has a monopoly on how the web is used and accessed, and with that the company has lost sight of its user-centric approach. Instead, it continues to employ invasive user tracking to serve ads that add bloat to every website and every search. Given Google’s ethical fall from grace in its original domain of search, it’s hard to trust any elaboration of its AI ethics, despite its code of conduct.

    [1]: https://en.wikipedia.org/wiki/Don't_be_evil

  9. For me, today’s readings were intriguing, as we were able to take a closer look at ethics and the understanding of responsibility in computing. I think there is extreme value in exposing Grinnell CS majors to the documents we read today, especially those of us who are about to enter the real world of computing professionals. Personally, I found that the readings left a lot of vagueness about specifics and the true “why” behind the intentions of each policy. The ACM Code of Ethics did a far better job of covering the latter, but still left room for improvement. In particular, I wish there had been further discussion of the relationships between the separate policies and topics addressed. Nevertheless, I thought the ACM Code of Ethics covered a lot of ground and gave great exposure to the power that some computing organizations and researchers have. For the first time, privacy as well as the importance of leadership were discussed in a class reading. In contrast, Google’s AI Principles were not nearly as extensive or clear as ACM’s. I found this contrast fascinating, in particular the acknowledgements of biases within AI. Google, like much of the computing world, understands the issues that develop with AI, but was extremely vague about their origins and what is being done to combat them. In addition, the acknowledgment of what their AI will not do shows awareness of the dangers AI could pose. Overall, I found it interesting that the things we code to act as superhumans are not held to nearly as extensive a code of ethics as those who created them. Obviously, I understand that these codes could never be the same, but the differences were all too glaring: the vagueness in the ACM’s Code of Ethics seemed aimed at covering a broad area of issues, whereas Google’s AI Principles felt vague in order to hide underlying issues.

  10. So, honestly, the ACM and Google ethics codes felt very corporate. I saw all the things I would expect to see in a corporate code of conduct. The focus, while sometimes directed outward, often felt aimed at bettering the appearance of the company or corporate interests. I don’t disagree that maintaining a high level of computing competence might further one’s ability to provide the best and most ethical solutions. This could especially be the case if that computing competence includes accessibility implementations or faster workflows that let companies spend time focusing on certain ethical issues. However, it feels like a strange thing to put in a code of ethics. The Google code of ethics also feels very outwardly focused, as if it is meant more as a business statement to the public than a true internal code of ethics. The privacy statement is an especially interesting inclusion, given Google’s history of data collection. Also, as a company whose product is targeted ads, it is difficult to imagine that Google, or any of the FAANG corporations, can deliver on an ethical code that includes a statement on privacy.

    I think the Ethical OS checklist does a better job of approaching ethics by giving a list of open-ended questions to think about when weighing the ethics of an application. Many of the issues it raises are discussions we have seen brought before Congress and examined in research papers on the negative effects of social media algorithms. I believe the conversations to be had around these questions are a lot more impactful than many of the points made by the ACM.

  11. Interesting reading. I’m glad that these different bodies are thinking of different ways to control the impact their technologies will have on the world. Google’s policies seemed a little too vague for my taste. They used ambiguous terms like “internationally accepted norms,” and they also said “direct injury” instead of just injury. A lot of my concerns with tech are about indirect consequences, which Google didn’t really touch on. I liked how thorough the ACM code was, and I especially liked the perspective EthicalOS took in identifying current problems ranging from the dopamine economy to surveillance, problems that affect people on different scales. It would be cool if Google were to go through the process that EthicalOS put forth, though it would probably also be alarming to see the results. EthicalOS covered less violent and more widespread issues than Google’s policy against developing weapons, which feels more practical, though I would absolutely not have Google remove its no-developing-weapons clause. I’m just more biased towards the practicality and clarity of the document that EthicalOS put out.

  12. As a computer science major, delving into the ACM Code of Ethics and Professional Conduct provided me with significant insight into the moral compass that is expected of me in the profession. This document, while exhaustive, can be summarized as a testament to the broader responsibility that we, as computing professionals, hold in shaping the present and future of our society.
    Foremost, the emphasis on contributing positively to society (1.1) stood out to me. With technology touching every facet of human life today, it’s crucial for computing professionals like me to recognize our influence. It’s not just about creating efficient algorithms or applications, but about ensuring that our work benefits humanity as a whole, while minimizing harm (1.2). This has shaped my perspective on my future career. As I venture into the tech industry, I aim to partake in projects that respect human rights, prioritize the less advantaged, and promote environmental sustainability.
    Another striking point was the need for honesty and trustworthiness (1.3). The rapid dissemination of information in our digital age means that false data or misleading claims can have large-scale ramifications. Whether I’m coding, collaborating on projects, or discussing implications of certain technologies with non-tech individuals, I must prioritize transparency and integrity. This principle reinforces the importance of credibility and will be my guiding light in all professional interactions.
    The code also emphasized the importance of fairness and non-discrimination (1.4). I found this particularly relevant in today’s age, where there’s a growing demand for diversity and inclusivity in tech. As a student, I’ve witnessed first-hand the lack of representation in tech classes and discussion forums. This serves as a reminder that in my future workspace, I need to advocate for and ensure a diverse and inclusive environment, not just because it’s ethically right, but because it enriches innovation and problem-solving.
    Privacy (1.6) is another pivotal topic, especially with increasing concerns over surveillance and data breaches. As someone who will potentially be on the front lines of designing and managing systems that handle vast amounts of data, understanding and safeguarding the privacy of users will be paramount. This isn’t just about encrypted passwords or secure servers, but understanding the holistic implications of data collection, storage, and dissemination.

  13. Based on today’s readings, particularly the “ACM Code of Ethics” and “Google Responsibilities and Principles”, it was nice to see how they address the quality of work and how they are trying to be more inclusive. However, in my opinion, their statements are still vague, as if they are trying to explain as little as possible. One thing I noticed in the Google reading is that there seemed to be a lot of empty space in their responsibilities; they are very general and not specific to any group. The quote for this was: “Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints.” This suggests a lack of interest in people as individuals and more of a focus on generalizing, since generalizing benefits the group more than accompanying each individual. Another quote that really got me thinking about whether Google was really pursuing ethics was the claim that “AI tools have the potential to unlock new realms of scientific research and knowledge in critical domains like biology, chemistry, medicine, and environmental sciences.” This is quite interesting, as they only mention STEM topics, which are known for rational decision-making, suggesting that their AI tools have not yet considered what is ethically right for individuals, only what is ethically right as a whole. I feel it is possible that tech companies endorse what is ethically correct and equitable in order to attract more people to their company, since they know what people like based on statistics about their preferences. I believe tech companies are still driven by capitalism, wanting more customers, with vague intentions of supporting them and convincing them that they can help. I am just curious: do these readings really intend to create a better and more comfortable community, or are they meant to convince people to support these companies’ capitalist interests?

  14. It’s really interesting to see the difference between the codes of ethics of different organizations and how each reflects the nature and interests of the respective org. For example, ACM’s code of ethics is very comprehensive and covers a variety of ethical concerns, but it also feels very dense, which makes sense given that ACM is an academic organization. The code gives guidelines to computing professionals at every level in both industry and research, so it will prove helpful in various scenarios. Google AI’s principles are very succinct and vague. They use very ambiguous language and do not go into detail in the sections that are most important or controversial, like concerns about data privacy. I really like that they reiterate the importance of respecting “the cultural, social, and legal norms” of the countries where Google operates, and that ethical standards, as well as what is considered appropriate, may differ across countries and cultures. On the one hand, this acknowledges the purpose their technology serves on a global level and how they strive to cater to the most crucial wants and needs of citizens all over the world. On the other hand, it feels very disingenuous; it reads like something they put down to promote their image as an ethical corporation, and it shows that they care more about their corporate interests than they do about their users. My favorite of the three is definitely the checklist, simply because of its readable format and very clear language. Written in the form of questions, the checklist helps coders take ethical factors into consideration as they plan, execute, and continually revise the implementation of their programs. It is not so much a set of rules that coders have to comply with as a set of things to keep in mind to minimize the harm caused by technology.

  15. The computer acts as a helper to society, solving problems that require large amounts of computation and connecting people through cyber-society. However, as the ACM Code of Ethics and Professional Conduct notes, even code intended to help people may turn out to harm society in some way. Just like the question about the Bayesian algorithm from the last class (Abeba Birhane, Algorithmic Injustice: a relational ethics approach), the computer and the algorithm are not wrong in themselves, but social and ethical context can lead them to make decisions in inappropriate ways that hurt minorities’ rights.
    Besides, I think it can sometimes be more important to adjust the rules for how people use a technology than to simply consider the code as a whole unethical. For example, social platforms help people communicate and learn about other parts of the world. However, those platforms also act as a convenient, low-cost tool for cyberbullying. Since social platforms are still a necessary way to communicate, more code or human moderation is needed to detect cyberbullying and illegal activity. Awareness of these downsides of code in society helps programmers detect problems and build better algorithms with a correspondingly positive effect on society.

  16. I think the risk mitigation checklist did well in providing thoughtful and straightforward questions that encourage the user to think about the specific consequences and long-term effects of their products. But it frames itself as a list to consider when creating something, and stops there, so it’s questionable how impactful the list really is when people create new pieces of technology. There’s really nothing to enforce the answers to these questions or to ensure they are acted on.

    The Google AI Principles and the ACM Code of Ethics were broad and left many terms either undefined or open to interpretation. One point that caught my eye in the ACM code was the principle discussing harm (1.2). While it provides examples of unjustified harm so the reader has an idea of what it means, it fails to further pursue the idea of “ethically justified harm,” which I think it should have done considering it seems to be excusing such harm. Google’s AI Principles are also quite vague, emphasizing how beneficial and innovative AI is and adding that they’ll try to avoid “unjust impacts.”

  17. One of Google’s responsible AI principles states that they will be accountable to people in the development of their technology. However, there is no transparency about what this accountability process looks like. In essence, while they say they will be accountable, there is no requirement that they follow through (especially in the USA) and no easily accessible, formal way for consumers or other stakeholders (which is everyone) to pursue this accountability. Furthermore, all of their principles reference only the company itself deciding when its AI development is ethical. Since Google (or Alphabet) is a for-profit conglomerate operating in a tech space inundated with competitors racing to build the best AI first, not relying on outside regulators is a conflict of interest.

  18. Like others have brought up, I felt that the Google and ACM readings were extremely broad and can be interpreted in a number of ways. Both of them say their tech should be used to “do good” or “effect positive change”. But positive change for whom? What does that entail? Neither explains how their guidelines will keep their AI technology from doing harm to others. Google says it will move forward where “the overall likely benefits substantially exceed the foreseeable risks and downsides”. That’s kind of a terrifying statement to me, as it only makes me wonder who determines what those benefits and risks are, what they perceive as benefits and risks, and what they are willing to let slide in continuing to develop their AI.
    Both also bring up that they would like to avoid unfair bias and discrimination, but again, the language they use is very broad and lets them stretch those guidelines. They say “take action” or “seek to avoid”, but that could mean doing the absolute bare minimum. The Ethical OS checklist is not completely thorough, but it’s the only one of these readings that asks the reader to reflect on the work of AI in any meaningful way. The other two boil down to “try not to let bad things happen with AI technology”, but the checklist at least asks, “how might this technology affect these specific situations?”

  19. I thought most of these codes of ethics were incredibly unspecific, which seems intentional. As several people have pointed out, there is no specified process of accountability, nor a definition of what that means in different contexts. I think these organizations/corporations believe that just acknowledging these processes need accountability and remediation is enough to satisfy people worried about abuses of power and unethical behavior, especially people who aren’t well versed in navigating the flowery language used.

    Personally, I think technology should only be used for social good, which can look really different across the board. I think corporations really need to consider hiring mitigation and prevention teams for AI tools that ensure these ethical standards are being followed and constantly developed and expanded on.

  20. I think the ACM Code of Ethics and Professional Conduct has some pretty good general outlines of ethical principles that computer scientists should follow. One of the most important principles, I think, is to contribute to society and human well-being, because there’s nothing to make money off of if everybody’s dead and broke, plus you get good karma for helping other people. Another important principle they mention is to be fair and take action not to discriminate, which is critical to the manner in which one codes. I also thought it was nice how they specifically say to perform work only in areas of competence. It’s good to know that Google follows a lot of the ethical principles I thought were important, such as trying to be beneficial to society and not discriminating or reinforcing unfair biases.

  21. The text highlights the significance of the ACM Code of Ethics and Professional Conduct as a compass for ethical conduct within the computing profession. Often regarded as the “conscience of the profession,” this code sets out a comprehensive framework encompassing core principles, professional responsibilities, leadership ideals, and compliance standards. Section 1’s emphasis on societal contribution, harm prevention, and honesty underscores the ethical foundation of computing professionals. Section 2 delves into the importance of maintaining high quality and professional competence in our work, reinforcing the commitment to excellence. Section 3’s focus on the public good and enhancing the quality of working life for all stakeholders highlights the broader societal impact of computing professionals’ decisions and actions. Section 4 emphasizes compliance and accountability, stressing the need to uphold and promote these ethical principles. Crucially, the Code’s applicability extends beyond ACM members, reflecting the universal nature of ethical considerations in the field. It serves as a living document, evolving alongside technology and requiring continuous improvement by its members.

  22. I am very curious about how these codes are worded. In one sense, I think it is important for corporations and other actors to be held to standards and scrutiny, but I also find the ACM and Google principles so vague that I doubt they could be used in such a way. These codes of conduct come off as lip service / checking boxes / etc., a way to superficially claim they are thinking about / prioritizing ethics in any way. Especially for corporations, I think the idea that ethics could ever be held above profit (by corporations themselves) is virtually impossible. The fundamental goal of capitalism necessarily conflicts with any pursuit of ethics / societal good. I thought the Ethical OS checklist was a little more useful, as people often want very specific suggestions / “hows” they can follow, rather than a broader set of ideas they must internalize and reckon with. I think this is also perhaps a fundamental problem: something like the checklist addresses the effects / symptoms, but does not necessarily push individuals to think critically.

  23. Reading about the different ethical policies and codes computing professionals should follow was quite interesting. In the ACM Code of Ethics and Professional Conduct, we can see an ideal code that covers most of the issues we had previously discussed regarding the possible problems in developing technology. For example, point 1.4, ‘Be fair and take action not to discriminate,’ talks about fostering fair participation by all individuals regardless of their identities. At the end, it states that although technology might create new inequities or enhance existing ones, it needs to be as inclusive and accessible as possible. If this were followed extremely strictly, then the problems of algorithmic bias described in the article we read earlier should not exist, which we know is not the case, as most big tech companies do not apply it.

    Another aspect I found interesting was the AI principles given by Google. In my opinion, AI is a powerful tool that should have more than seven principles, considering its capabilities. Additionally, I found these principles rather broad and simple. For example, the privacy design principle states, “We will incorporate our privacy principles in the development and use of our AI technologies. We will give opportunity for notice and consent, encourage architectures with privacy safeguards, and provide appropriate transparency and control over the use of data”. It says Google will incorporate privacy principles in development, but it never states what those principles are or what their goals are. Terminology such as “appropriate transparency” is problematic because users are not informed about what the company considers appropriate. The lack of detail makes it tricky to understand the true intentions behind this technology.

  24. I think it’s important to interrogate the philosophical underpinnings that justify our ethical principles. I found the ACM code of ethics interesting because it appeals to multiple competing moral frameworks. For example, section 1.2, ‘Avoid Harm,’ talks about minimizing harm in a way that reminds me of negative utilitarianism, whereas section 1.4, ‘Be fair and take action to not discriminate,’ immediately appeals to virtue ethics, which is classically incompatible with utilitarianism (and more broadly with consequentialism). Section 1.1, on the other hand, in advising that the needs of the disadvantaged be given increased priority, appeals to prioritarianism, a consequentialist framework that conflicts with utilitarianism. *Why* prioritize the needs of the less advantaged? Is it because they are inherently more valuable? Or is it because the less advantaged tend to have more at stake, and therefore prioritizing their needs is usually consistent with the goal of minimizing harm? As others have pointed out, the ACM’s code of ethics comes off as very corporate, and this is what I have come to expect from corporate ethics: lists of general ethical principles that tick every box, with very little attempt to put these principles in conflict with one another.


