Ethics
Code of Ethics
- Code of Ethics
- The article discusses the ACM Code of Ethics and Professional Conduct, which is a set of ethical principles and guidelines that should guide the behavior of computing professionals. The code consists of general ethical principles, such as respecting the privacy and autonomy of users, being honest and transparent, and avoiding harm to others. It also includes more specific principles, such as avoiding unfair bias in decision-making and ensuring that computing systems are secure and reliable. The code requires computing professionals to continuously improve their professional skills and to report any violations of the code. Overall, the ACM Code of Ethics provides a framework for ethical decision-making and professional conduct in the field of computing.
- Software Engineering Code of Ethics
- The article describes the Software Engineering Code of Ethics and Professional Practice, developed jointly by the ACM and the IEEE Computer Society, which organizes its guidance under eight principles concerning the public, client and employer, product, judgment, management, profession, colleagues, and self. These principles include ensuring that software is reliable and safe, respecting the privacy of users, avoiding discrimination, and being honest and transparent about software limitations. The code also requires software engineers to continuously improve their professional skills and to report violations of the code. Overall, it serves as a framework for ethical decision-making and professional conduct in the field of software engineering.
Ethics in the workplace
- The code I’m still ashamed of
- The article is a personal account by a developer reflecting on code he is still ashamed of writing early in his career: an online quiz for a pharmaceutical client that was rigged so that nearly every combination of answers recommended the client's drug. He later learned that the drug's side effects included severe depression and suicidal thoughts, and that a young woman who had taken it died by suicide. The author describes how easy it was to rationalize the work as simply building what the specification asked for, argues that developers are often the last line of defense against unethical requests, and urges software engineers to consider the consequences of the code they are asked to write and to be willing to say no.
- Project Dragonfly, Google’s censored search engine
- The article discusses Google’s plans to launch a censored search engine in China, code-named “Dragonfly.” The search engine would reportedly comply with China’s strict censorship laws, which would allow the government to control the information that Chinese citizens have access to. The article notes that Google previously pulled out of China in 2010 due to concerns over censorship and human rights violations. The proposed launch of Dragonfly has raised ethical concerns and led to protests from Google employees and human rights activists. The article also highlights the potential implications of Google’s decision for internet freedom and censorship in China, as well as for Google’s reputation as a company that values user privacy and freedom of expression.
- Amazon workers demand Jeff Bezos cancel “Rekognition” software
- The article reports on Amazon employees’ demand that the company cancel its facial recognition software, “Rekognition.” The employees argue that the technology poses a threat to civil liberties, especially for minority communities, who are more likely to be misidentified by the software. Rekognition has been used by law enforcement agencies, raising concerns about potential misuse for surveillance and racial profiling, and the employees’ demand follows a letter from a group of shareholders who also called on Amazon to stop selling the technology to government agencies. The article concludes by highlighting the growing public scrutiny of facial recognition technology and the ethical implications of its use (the audit sketch below shows how such misidentification disparities can be quantified).
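The misidentification claim is, at bottom, a measurable quantity. As a hedged illustration (not Amazon's methodology or API), here is a minimal sketch of how an audit could compare false-match rates across demographic groups at a fixed similarity threshold; the groups, scores, and threshold are all hypothetical:

```python
# Sketch of a per-group false-match audit for a face recognition system.
# All data below is hypothetical; a real audit would use a labeled
# benchmark of face pairs with known match/non-match ground truth.
from collections import defaultdict

def false_match_rates(records, threshold=0.8):
    """records: iterable of (group, similarity_score, is_true_match).
    Returns, per group, the share of non-matching pairs the system
    would nevertheless report as matches at the given threshold."""
    non_matches = defaultdict(int)
    false_matches = defaultdict(int)
    for group, similarity, is_true_match in records:
        if not is_true_match:            # only non-matches can become false matches
            non_matches[group] += 1
            if similarity >= threshold:  # system would wrongly report a match
                false_matches[group] += 1
    return {g: false_matches[g] / non_matches[g] for g in non_matches}

# Hypothetical audit records: (demographic group, similarity, ground truth).
audit = [("A", 0.85, False), ("A", 0.60, False), ("B", 0.83, False),
         ("B", 0.81, False), ("B", 0.55, False), ("A", 0.90, True)]
print(false_match_rates(audit, threshold=0.8))  # {'A': 0.5, 'B': ~0.67}
```

If one group's false-match rate is consistently higher at the deployed threshold, members of that group are more likely to be wrongly flagged, which is precisely the civil-liberties concern the employees raised.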
- Google and AI
- The article reports on Google’s announcement that it will not renew its contract with the US Department of Defense, known as Project Maven, to develop artificial intelligence (AI) technology for analyzing military drone footage. The decision follows protests from Google employees and AI experts, who expressed concern that the technology could be used to enhance the precision and lethality of military strikes, potentially leading to civilian casualties. The controversy surrounding Google’s involvement in military projects highlights the ethical implications of AI and its potential impact on society, and the article concludes by emphasizing the need for tech companies to consider the ethical implications of their work and to involve a diverse range of perspectives in the development of AI technology.
- Microsoft Employees demand end of ICE contract
- The article discusses the controversy surrounding the involvement of technology companies in the enforcement of US immigration policies. The article notes that several technology companies, including Amazon, Microsoft, and Salesforce, have been criticized for providing software services and data analysis tools to immigration agencies, including Customs and Border Protection (CBP) and Immigration and Customs Enforcement (ICE). The article highlights the ethical concerns raised by these partnerships, including the potential for the technology to be used for racial profiling and the harm caused to immigrant families who are separated at the border. The article also notes that the controversy has led to protests from tech workers and calls for greater corporate responsibility and ethical standards in the tech industry. The article concludes by highlighting the need for a more nuanced and ethical approach to the use of technology in the context of immigration policy.
- Microsoft and the DoD
- The article reports on protests by Microsoft employees against the company’s contract with the US Army to develop augmented reality technology using HoloLens. The employees wrote an open letter to Microsoft’s CEO, Satya Nadella, urging the company to cancel the contract due to concerns that the technology could be used for warfare and potentially result in loss of life. The article notes that the protest reflects a growing trend of tech workers voicing their ethical concerns about the use of technology for military applications. The article also highlights the need for tech companies to consider the potential ethical implications of their work and to engage in dialogue with their employees and other stakeholders to ensure that their work aligns with their values and principles.
Ethics in Technology
- Self Driving Car Ethics
- The article explores the ethical dilemmas posed by self-driving cars, which rely on complex algorithms to make decisions that impact human lives. It discusses scenarios where autonomous vehicles must make split-second decisions that could result in injury or death, such as choosing between hitting a pedestrian and swerving into another vehicle, and highlights the challenges of programming ethical decision-making into self-driving cars, as well as the potential for biases to be embedded in the algorithms (the sketch below makes this concrete). The article also discusses the need for transparency and accountability in the development of autonomous vehicle technology and the importance of involving a diverse range of perspectives and ethical considerations in the decision-making process, concluding that the ethical implications of self-driving cars are complex and require ongoing dialogue and reflection.
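To make “programming ethical decision-making” concrete, here is a deliberately oversimplified sketch (not any manufacturer's actual planner) of a cost-minimizing maneuver choice. The point is that the numeric harm weights are themselves ethical judgments: whoever picks them is embedding values, and potentially biases, into the algorithm. Every name and number below is a hypothetical assumption:

```python
# Toy illustration: a cost-minimizing planner cannot avoid encoding
# ethical judgments, because someone must choose the harm weights.
HARM_WEIGHTS = {
    "pedestrian": 1.0,      # choosing these numbers IS the ethical act:
    "other_vehicle": 0.8,   # is harm to a pedestrian weighted more than
    "occupants": 0.9,       # harm to the car's own occupants?
}

def choose_maneuver(options):
    """options: maneuver name -> {party: probability of harming that party}.
    Returns the maneuver with the lowest expected weighted harm."""
    def expected_harm(harms):
        return sum(HARM_WEIGHTS[party] * p for party, p in harms.items())
    return min(options, key=lambda m: expected_harm(options[m]))

# A split-second scenario like the one described above:
scenario = {
    "brake_straight": {"pedestrian": 0.7},                      # expected harm 0.70
    "swerve_left":    {"other_vehicle": 0.5, "occupants": 0.3}, # expected harm 0.67
}
print(choose_maneuver(scenario))  # -> 'swerve_left' under these weights
```

Change any weight slightly and the “right” maneuver can flip, which is why the article's call for transparency about how such trade-offs are encoded matters.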
- Ethical dilemma of self driving cars
- The article explores the ethical dilemmas posed by self-driving cars, which raise questions about responsibility, accountability, and moral decision-making. The article highlights scenarios where autonomous vehicles may have to make difficult ethical choices, such as deciding between the safety of the vehicle’s occupants and the safety of other road users. The article notes that the development of self-driving cars requires a shift in our understanding of responsibility and the role of technology in decision-making. The article also discusses the need for a robust regulatory framework to ensure that self-driving cars are developed and deployed in an ethical and responsible manner. The article concludes by noting that the ethical implications of self-driving cars are complex and require ongoing ethical reflection and dialogue.
- Cyber-Security of self driving cars
- The article explores the cybersecurity risks posed by self-driving cars, which rely on complex software and data systems that are vulnerable to hacking and other forms of cyber attack. Because attacks on self-driving cars could have serious consequences, including injury or death, the article highlights the need for robust cybersecurity measures to ensure the safety and security of autonomous vehicles (a minimal example of one such measure is sketched below). It also discusses the challenges of securing the supply chain for self-driving car components and the need for collaboration between car manufacturers, technology companies, and cybersecurity experts, concluding that these challenges are complex and require ongoing attention and innovation.
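As one concrete example of the kind of measure the article calls for, here is a minimal sketch of authenticating control messages so that an attacker who gains access to an in-vehicle network cannot inject commands. The key handling, counter scheme, and wire format are simplified assumptions for illustration, not a real automotive protocol:

```python
# Sketch: HMAC-authenticated control messages. The shared key, counter
# scheme, and message format are illustrative assumptions only.
import hashlib
import hmac

SECRET_KEY = b"key-provisioned-at-manufacture"  # hypothetical shared secret

def sign_command(command: bytes, counter: int) -> bytes:
    """Prepend a message counter (to resist replay of captured traffic)
    and append an HMAC-SHA256 tag over counter + command."""
    payload = counter.to_bytes(8, "big") + command
    tag = hmac.new(SECRET_KEY, payload, hashlib.sha256).digest()
    return payload + tag

def verify_command(message: bytes):
    """Return the command bytes if the tag verifies, else None."""
    payload, tag = message[:-32], message[-32:]
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).digest()
    if hmac.compare_digest(tag, expected):  # constant-time comparison
        return payload[8:]                  # strip the counter
    return None

msg = sign_command(b"SET_SPEED:45", counter=1017)
print(verify_command(msg))                  # b'SET_SPEED:45'
tampered = msg[:-1] + bytes([msg[-1] ^ 1])  # flip one bit of the tag
print(verify_command(tampered))             # None: forgery rejected
```

Authentication alone does not solve key distribution or compromised components, which is why the article's point about supply-chain security and cross-industry collaboration still stands.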
- Big Data is our Civil Rights issue
- The article argues that big data is a civil rights issue that is not being adequately addressed. The author suggests that the collection and use of large-scale data sets have the potential to exacerbate existing social and economic inequalities and lead to new forms of discrimination. The article highlights the challenges of regulating big data and calls for increased transparency, accountability, and ethical reflection in the development and deployment of data-driven technologies. The author also emphasizes the need for a diverse range of voices and perspectives in discussions about the use of big data to ensure that the benefits and risks of these technologies are fairly distributed. The article concludes by calling for a more proactive and inclusive approach to addressing the ethical and social implications of big data.
- Will democracy survive big data and AI?
- The article explores the potential impact of big data and artificial intelligence on democracy. The author argues that the ability of these technologies to analyze vast amounts of data could concentrate power in the hands of a few individuals or organizations, leading to a decline in democratic values and processes. The article highlights the challenges of regulating the use of big data and AI in political contexts and the need for transparency and accountability in their development and deployment. It also discusses the potential for bias and discrimination to be embedded in algorithms used for decision-making, which could have significant social and political consequences (a simple check for such disparities is sketched below). The article concludes by calling for a more proactive and collaborative approach to addressing the ethical and social implications of big data and AI, so that these technologies are used in ways that support and enhance democratic values and processes.
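One way the bias concern above is checked in practice is by comparing an automated system's decision rates across groups. Here is a minimal sketch of the common “four-fifths” disparate-impact screen (a conventional guideline, not a legal test); the outcome data is hypothetical:

```python
# Sketch of a demographic-parity / disparate-impact check on the
# outputs of an automated decision system. Data is hypothetical.
from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group, approved) pairs, approved in {0, 1}."""
    totals = defaultdict(int)
    approved = defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += ok
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact(decisions):
    """Ratio of the lowest group approval rate to the highest; values
    below 0.8 are conventionally treated as a red flag for bias."""
    rates = approval_rates(decisions)
    return min(rates.values()) / max(rates.values())

outcomes = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
print(approval_rates(outcomes))    # {'A': 0.667, 'B': 0.333}
print(disparate_impact(outcomes))  # 0.5, well below the 0.8 guideline
```

A screen like this only surfaces disparities; deciding whether a disparity is justified is exactly the kind of political and ethical question the article argues cannot be delegated to the algorithm itself.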
Tech Company Principles
- Microsoft AI Principles
- This article outlines Microsoft’s approach to AI and the principles that guide their development and deployment of AI technologies. The article highlights the need for AI systems to be designed and deployed in ways that are transparent, reliable, and secure. Microsoft emphasizes the importance of building AI systems that are inclusive and respectful of individual privacy and autonomy. The article also discusses the ethical considerations involved in the development of AI systems, including the need to ensure that AI technologies are used in ways that respect human rights and do not exacerbate existing inequalities. Microsoft also recognizes the need for collaboration across multiple stakeholders to address the social and ethical implications of AI, and the company is committed to engaging in ongoing dialogue with stakeholders from diverse backgrounds and perspectives. The article concludes by outlining Microsoft’s commitment to developing and deploying AI technologies in ways that are ethical, responsible, and beneficial to society.
- Ethical OS Toolkit
- Ethical OS is a toolkit designed to help organizations anticipate and address the ethical risks of the technology they design and deploy. It was created by the Institute for the Future together with the Tech and Society Solutions Lab at Omidyar Network, a philanthropic investment firm. Rather than abstract principles, the toolkit is organized around eight “risk zones”: truth, disinformation, and propaganda; addiction and the dopamine economy; economic and asset inequalities; machine ethics and algorithmic biases; the surveillance state; data control and monetization; implicit trust and user understanding; and hateful and criminal actors. It pairs these with near-future scenarios and concrete strategies for identifying and mitigating each risk. The goal of the toolkit is to help organizations create technology products and services that are aligned with ethical principles, social values, and human needs, while minimizing the risks and unintended consequences of technology use. It is designed to be flexible and adaptable to different organizational contexts and technology domains, and can be used by both technical and non-technical professionals to facilitate ethical reflection and decision-making throughout the technology development lifecycle.
- Google AI Principles
- This article outlines Google’s AI principles, which are designed to guide the development and use of AI technologies. The principles cover a range of topics, including fairness, privacy, accountability, and safety. Google emphasizes the importance of building AI systems that are inclusive and accessible to all, and that are developed in ways that are transparent and accountable to stakeholders. The principles also address the need for AI to be designed and used in ways that are safe and secure, with appropriate safeguards and testing procedures in place. Google is committed to working with policymakers and stakeholders to address the social and ethical implications of AI, and to help ensure that these technologies are used in ways that benefit society as a whole. The company recognizes the need for ongoing dialogue and collaboration with diverse stakeholders to address the complex challenges and opportunities presented by AI. The article concludes by reiterating Google’s commitment to developing and using AI technologies in ways that are responsible, ethical, and aligned with the company’s core values.