AI for Humanity: AI based on Human Rights (World Bank)
4 Dec 2023 11:30h - 13:00h UTC
Disclaimer: This is not an official record of the UNCTAD eWeek session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed. The official record of the session can be found on the UNCTAD website.
Session report
Moira Thompson Oliver
This analysis focuses on various topics related to AI and its impact on different sectors. It begins by highlighting the need for regulation of AI tools supplied to the US State Department, noting that Microsoft had stopped supplying AI tools pending regulation. This demonstrates the importance of having proper regulations in place to ensure the ethical and responsible use of AI technology.
The analysis also notes the challenges in defining AI within organisations. It states that creating a common definition of AI can take months, indicating the complex nature of AI and the need for clarity in understanding its role within organisations.
One of the key points emphasised in the analysis is the importance of diversity in AI development and embedding human rights awareness. It highlights the fact that most engineers in the tech industry are predominantly white and male. This brings attention to the need for greater diversity and inclusion in the development of AI systems to ensure that they are representative and fair.
The analysis also stresses the need for end-use evaluations of AI technology, considering both its usage and geographical locations. This highlights the importance of assessing how AI technology is being used and the potential impact it may have in different contexts.
Training on AI and human rights is another crucial point raised in the analysis. It emphasises the importance of implementing training at all levels of an organisation, from the board level to the engineers. This ensures that all stakeholders involved in AI development are knowledgeable about the ethical implications and human rights considerations.
Furthermore, the analysis discusses the potential use of AI as an accountability tool in security sectors. It states that AI can be used to promote accountability in the security sector, which can have significant implications for peace, justice, and strong institutions.
The impact of AI on human rights is also explored. The analysis mentions that AI risks can be enumerated by using a guide such as the UN B-Tech taxonomy, which highlights the potential ways in which human rights can be affected by AI. This underscores the importance of considering and safeguarding human rights in the development and deployment of AI systems.
Unintended biases in AI applications are also highlighted in the analysis. It provides an example of a pothole fixing programme that favoured rich neighbourhoods due to more mobile phone users. This illustrates how unintended biases can seep into AI applications and exacerbate existing inequalities.
The potential for AI to be used as a tool for misinformation is another area of concern. The analysis mentions a discussion at the UN Forum on Business and Human Rights, which highlighted the role of generative AI in creating misinformation in public discourses. This raises the need for vigilance and measures to address the potential misuse of AI technology.
The analysis also recognises the accessibility and impact of AI in daily life. It mentions an example where AI was used by an individual’s son for a simple task like finding a recipe based on available ingredients, illustrating the widespread use and relevance of AI in everyday situations.
The potential use of AI in conflict settings is discussed, highlighting the role of generative AI in such situations. It acknowledges that AI can be utilised in conflict settings, potentially impacting peace, justice, and strong institutions.
The analysis further explores the use of AI in detecting climate risks and change. It mentions AI tools that enable farmers to monitor weather conditions and soil quality, helping them determine the optimal time to harvest. It also highlights the use of AI in detecting air quality, emphasising its potential for addressing climate-related challenges.
Due diligence and risk assessment are identified as crucial aspects of AI deployment. The analysis stresses the importance of constantly evaluating the impacts of AI tools and technologies and addressing any potential risks promptly.
Lastly, the analysis supports ongoing discussions and the creation of international frameworks for AI. It acknowledges the need for frameworks at an international level to ensure the responsible and ethical development, deployment, and use of AI technology.
In conclusion, this analysis provides valuable insights into various aspects of AI and its impact on different sectors. It highlights the need for regulation, diversity, and human rights awareness in AI development. It emphasises the importance of end-use evaluations, training, and accountability in AI deployment. It also explores unintended biases, misinformation, accessibility, and AI’s potential in conflict and climate settings. Additionally, it underlines the significance of due diligence, risk assessment, and international cooperation in shaping the future of AI.
Olivier Elas
The International Telecommunication Union (ITU) is actively working on addressing the challenges posed by Artificial Intelligence (AI) while also placing a strong emphasis on human rights and the achievement of the Sustainable Development Goals (SDGs). They started working on AI challenges in 2017 and co-lead an interagency working group on AI with UNESCO. The primary objective of this group is to deliver concrete outcomes based on human rights principles.
The ITU’s AI for Good initiative is an annual summit that aims to bring tangible benefits to society. This initiative plays a vital role in delivering technical outcomes such as machine learning for 5G and health. Furthermore, it has made significant contributions to the establishment of technical standards in the field of AI.
The ITU also recognises the importance of embedding human rights into the standardisation process. They are actively working with different study groups to develop technical recommendations that focus on human rights. The UN High Commissioner for Human Rights has asked the ITU, ISO, and IEC to integrate human rights into their standards. It’s worth noting that the ITU has been working on various digital rights issues for many years, including ICT for girls, gender balance, universal access, and accessibility.
Olivier Elas, representing the ITU, strongly advocates for the application of human rights principles within the context of AI and digital technology. He highlighted the ITU’s leading initiatives to apply human rights in the area of AI, specifically mentioning the ‘AI for Good’ initiative. Elas also mentioned that the ITU is aligning its efforts with General Assembly resolutions focused on AI and human rights.
Additionally, the ITU has focus groups studying the impact of quantum computing on AI. This demonstrates their commitment to exploring emerging technologies and their potential implications.
There is growing recognition of the need for accountability and transparency in AI. While AI systems work with models and data sets, many of them lack transparency. This lack of transparency hinders the ability to audit AI systems. However, some companies, such as Hugging Face, are starting to address this issue by opening their models and data sets.
It is important to note that the ITU is a standards body whose outputs are non-binding: it can only issue recommendations to member states on embedding human rights principles. Olivier Elas acknowledges this limitation, noting that it is difficult for the ITU to go beyond making recommendations.
In conclusion, the ITU is actively engaged in addressing the challenges of AI while prioritising human rights and the achievement of the SDGs. They are working towards delivering concrete outcomes based on human rights principles through their AI for Good initiative and focusing on embedding human rights into the standardisation process. Olivier Elas, representing the ITU, supports the application of human rights principles in AI and digital technology and highlights the alignment of their efforts with global resolutions. The ITU’s study groups are also exploring the impacts of quantum computing on AI. However, it is important to recognise that the ITU’s role is limited primarily to making recommendations to member states, and it faces challenges in taking action beyond such recommendations.
Mila Romanoff
Artificial intelligence (AI) poses significant risks to human rights and the environment, according to various arguments and evidence. AI is increasingly used in public decision-making, which raises concerns about potential harm to individuals and groups. Notably, the World Bank lacks guidelines to address AI-related human rights risks, further exacerbating the issue.
Furthermore, AI’s carbon emissions present environmental challenges. While AI can optimise energy use, a single training run for an AI model can emit 25 times more carbon than a one-way flight from New York to San Francisco. Stricter standards and regulations for AI are needed, as ethical standards alone are insufficient. Countries and blocs such as China, Brazil, and the EU are taking steps to implement more stringent regulations.
The potential for discrimination and bias in AI-driven predictive policing is another cause for concern. This can lead to unfair enforcement against specific communities, violating their rights to equality and fair legal process. Land rights projects utilising AI can also result in property disputes and interfere with communities’ rights to property and secure housing.
In addition to surveillance risks, AI-powered tools used to analyse political tension can disrupt democratic processes and elections, particularly in countries with limited democratic safeguards. It is important to consider risks beyond surveillance, including predictive policing, land rights management, and the analysis of political tension.
The regulation of AI parallels the evolution of data privacy rights, highlighting the need for robust regulatory frameworks. Self-learning algorithms in AI systems escalate the risks associated with data usage, necessitating adequate regulation. While data privacy has received attention, the threats posed by AI are considered greater, with concerns raised by numerous individuals and experts.
The UK government’s Bletchley Declaration is commended for its efforts towards AI governance. However, international consensus on AI safety risks is crucial for effective regulation. Overall, AI presents significant risks that require a balanced approach to safeguard human rights while managing uncertainties.
Tim Engelhardt
The analysis explores the integration of human rights into the development and governance of artificial intelligence (AI). It emphasises the importance of conducting risk-proportionate human rights due diligence by states and businesses to effectively manage AI. This approach ensures that potential risks and human rights concerns associated with AI are adequately addressed. Including a human rights framework in AI governance helps to structure discussions and mitigate the risks involved.
Transparency and stakeholder engagement are crucial in AI governance. It is vital for states to inform the public about the use of AI systems, creating an atmosphere of openness and accountability. Human Rights Due Diligence guidance plays a pivotal role in tracking and communicating the impacts and methods used in AI systems. This enables stakeholders to effectively monitor the implications of AI technology.
However, AI can become harmful when deployed in contexts that are already problematic. The analysis warns that AI, which permeates many aspects of society, can be weaponised in environments with pre-existing problems. This underlines the importance of carefully considering the context in which AI is deployed, so as to avoid exacerbating existing issues.
Furthermore, the analysis highlights how AI can infringe upon various rights. It emphasises the impact of AI on security issues that affect life and liberty, particularly in law enforcement and due process rights. Additionally, facial recognition technologies used to monitor assemblies can encroach upon the right to freedom of assembly. Moreover, attempts to use AI to recognise emotions can interfere with freedom of thought, opinion, and individual autonomy.
The military applications of AI are often overlooked in discussions. The analysis notes that these applications are frequently neglected, indicating a potential blind spot when considering the ethical and strategic implications of AI in military operations.
In the healthcare sector, the analysis points out that AI tools can have a negative impact on people’s access to healthcare. This is exemplified by instances where health insurance claims have been denied based on AI assessments. The denial of claims can result in restricted access to necessary healthcare services.
The analysis further highlights the potential for AI to centralise power and shape environments. It asserts that AI has the capability to concentrate decision-making authority and influence the dynamics of power in society.
Community involvement and empowerment in shaping AI tools are important considerations. The analysis suggests that communities affected by the implementation of AI are often excluded from its development. Strengthening community abilities to shape AI tools can lead to more inclusive and beneficial outcomes.
The analysis suggests that ongoing discussions and evaluations are necessary for effective AI governance. It acknowledges the existence of advisory bodies convening regularly to deliberate on AI governance. However, it emphasises the need for increased dialogue and evaluations to ensure that AI governance aligns with human rights standards and addresses the concerns raised by the technology.
Overall, the analysis highlights the significance of integrating human rights considerations in the development and governance of AI. It emphasises the need for risk-proportionate human rights due diligence, transparency, stakeholder engagement, and careful consideration of the societal context in which AI is deployed. The analysis also points out potential infringements on various rights, the often-overlooked military applications of AI, and the impact of AI on healthcare access. It calls for community involvement and empowerment in shaping AI tools and underscores the necessity of ongoing discussions and evaluations for effective AI governance.
David Satola
David Satola, an influential voice in the field of artificial intelligence (AI) and human rights, emphasises the importance of understanding the complex relationship between these two domains in World Bank-funded projects. He highlights the potential implications of AI on human rights, specifically in the context of social protection programs.
Satola acknowledges that AI technology alone cannot address underlying policy flaws and can even exacerbate certain issues. He cautions against the misuse of data collected for social protection programs, which could worsen problems instead of solving them.
Furthermore, Satola expresses concerns about the concentration of power that AI can create. He stresses the need for balance in the use and control of AI to prevent power from being overly concentrated in the hands of a few. It is crucial to ensure that beneficiaries of AI tools also benefit from its impact.
Satola also highlights the interconnected nature of AI with other emerging technologies such as 5G and quantum computing. As technology advances, it is vital to establish regulations regarding data usage and system operation to address the challenges posed by faster data processing.
In terms of governance, Satola advocates for a multi-stakeholder approach based on the successful model of internet governance in the late 90s and early 2000s. He suggests collaboration among governments, private sector entities, and civil society in finding an appropriate solution for AI governance, drawing on the Internet Governance Forum as a potential model.
Although Satola presents a neutral stance, he emphasises the need for a comprehensive and collaborative approach that involves various stakeholders. This is necessary to effectively address the complex issues surrounding AI, ensuring the protection of human rights and the promotion of equitable outcomes.
By analysing Satola’s perspectives, we gain valuable insights into the challenges and considerations at the intersection of AI and human rights. This underscores the importance of careful navigation and proactive measures to harness AI’s potential while safeguarding human rights and minimising social inequalities.
Audience
During a discussion on the intersection of AI and human rights, several key points were raised. DCAF, an organisation investigating AI as an accountability tool, highlighted the potential of AI to be used in security sectors to provide oversight and promote accountability. This suggests that AI can play a crucial role in holding security sectors around the world accountable for their actions.
Privacy concerns related to AI were also a topic of discussion. There was a universal concern about the right to privacy when it comes to AI. This implies that there is widespread recognition of the need to protect individuals’ privacy in the face of advancing AI technologies.
The impact of AI on other human rights beyond privacy was also explored. The speaker expressed curiosity about the specific human rights that are affected by AI, indicating a desire for a broader understanding of the potential implications of AI on human rights.
Maria Dmitriadou, a representative from the World Bank, was particularly interested in the implementation of AI in line with human rights. She emphasised the importance of AI applications that demonstrate sensitive approaches to human rights. Additionally, she highlighted the potential of AI to support goals such as reducing poverty and addressing vulnerabilities. This suggests that AI has the potential to contribute positively to the achievement of these social and economic objectives.
An audience member, a digital and AI trade lead from the British government’s Department for Science, Innovation, and Technology, questioned the role of the international community in controlling AI that poses infringements on human rights. In particular, the audience member proposed applying sanctions or banning AI that is misused by countries to infringe on citizens’ rights. This highlights the need for international cooperation and regulation to ensure that AI is used responsibly and does not compromise human rights.
In conclusion, the discussions on AI and human rights touched upon various important aspects. The potential of AI as an accountability tool in security sectors was highlighted, as well as concerns about privacy and the broader impact of AI on human rights. The World Bank representative highlighted the potential positive contributions of AI, especially in reducing poverty and addressing vulnerabilities. The role of the international community in controlling AI that infringes on human rights was also brought into question, with suggestions for sanctions or bans. These discussions shed light on the complex relationship between AI and human rights and underscore the importance of careful application and regulation of AI technologies to ensure their alignment with human rights principles.
Speakers
A
Audience
Speech speed
186 words per minute
Speech length
549 words
Speech time
177 secs
Arguments
AI can be used as an accountability tool to provide oversight in security sectors around the world
Supporting facts:
- The speaker represents DCAF, which is investigating AI as an accountability tool
Topics: AI accountability, Security sector governance, Oversight
There is a universal concern about the right to privacy when it comes to AI
Supporting facts:
- The issue of privacy was brought up multiple times in the discussion
Topics: AI, Privacy, Human rights
Maria Dmitriadou, the special representative to the UN and WTO from the World Bank, expresses curiosity regarding AI applications that demonstrate sensitive approaches to human rights.
Supporting facts:
- Maria Dmitriadou is a representative from the World Bank.
- She is interested in the implementation of AI in line with human rights.
Topics: AI, Human rights, World Bank
The audience member questions the role of the international community on controlling AI that poses infringements on human rights.
Supporting facts:
- The audience member is a digital and AI trade lead from the British government’s Department for Science, Innovation, and Technology.
- Italy temporarily banned the use of ChatGPT, showing a precedent of government regulation on AI.
Topics: AI, Human Rights, Government, Sanctions, International Community
DS
David Satola
Speech speed
161 words per minute
Speech length
2101 words
Speech time
781 secs
Arguments
The importance of understanding the intersection of human rights and artificial intelligence in World Bank-financed projects
Supporting facts:
- David Satola mentioned the goal of understanding the AI landscape and its intersection with human rights, particularly in relation to World Bank projects
- The intention is to ensure that AI projects do not exacerbate policy failures or contribute to surveillance issues through misapplication of data collected for social protection programs
Topics: Artificial Intelligence, Human Rights, World Bank
Concentration of power due to AI needs to be balanced
Supporting facts:
- David Satola expresses concern about the concentration of power into the hands of a few due to AI
- He suggests that beneficiaries of AI tools should also benefit from them, indicating a need for better power balancing in the use and control of AI
Topics: AI, Power Concentration, Equality
AI is a technology that is empowered by other technologies such as 5G and potentially quantum computing
Topics: AI, 5G, Quantum Computing
With the advancement in technology such as quantum computing, data is going to be processed faster and this could pose a challenge
Topics: Quantum Computing, Data Processing
It is crucial for the community to establish rules about data usage and system operation to prevent issues as technology progresses
Topics: Data Usage
David Satola suggests finding a solution for AI governance should be based on a multi-stakeholder approach.
Supporting facts:
- Drawing parallel to the evolution of internet governance in late 90s and early 2000s where governments, private sector, and civil society collaborated. Satola suggests the Internet Governance Forum as a potential model.
Topics: AI Governance, Multi-stakeholder approach, Internet Governance
MR
Mila Romanoff
Speech speed
156 words per minute
Speech length
3132 words
Speech time
1208 secs
Arguments
AI poses risks of harm to individuals and groups, particularly with regard to human rights.
Supporting facts:
- AI is increasingly being used in public decision-making, posing potential human rights risks.
- There are currently no guidelines within the World Bank to help identify and address these AI-related risks.
Topics: Artificial Intelligence, Human Rights
AI technologies are improving but also becoming more unpredictable and presenting environmental challenges due to their carbon emissions.
Supporting facts:
- New reinforcement learning models show that AI can be used to optimize energy use.
- A single AI model training can emit 25 times more carbon than a one-way flight from New York to San Francisco.
Topics: Artificial Intelligence, Environment, Carbon Emissions
Discrimination and bias can be enforced via predictive policing using AI
Supporting facts:
- AI initiative aimed at public safety might inadvertently enforce racial and ethnic bias
- This can result in prejudiced enforcement action against specific communities and breach their rights to equality and fair legal process
Topics: AI, Predictive Policing, Bias
AI complications in land rights can also violate human rights
Supporting facts:
- If land management projects utilizing AI to digitize property records are not transparently executed, it can lead to property disputes
- It may displace communities, interfering with their rights to property and secure housing
Topics: AI, Land Rights
AI-powered surveillance risks might disrupt democratic processes and elections
Supporting facts:
- AI tools used in understanding political tension in countries with limited democratic values or without democratic policies and processes might impact election rights
Topics: AI, Surveillance, Democratic Processes
There is a need for a balanced approach in view of universal rights and risks
Supporting facts:
- Rights are universal, they’re not absolute
- Discussion revolves around understanding the balancing of rights and approaching the no zero risk motion
Topics: AI, Risk Balancing, Human Rights
Comparisons between the evolutions of data privacy right and AI in terms of regulation
Supporting facts:
- AI has been around for decades, similar to data privacy
- Data privacy got significant exploration and recognition in the international community after a gap
- Mila sees an analogy between data protection and AI, especially concerning the question of regulation vs innovation
- AI’s self-learning and layering algorithms are escalating the risks associated with data usage
Topics: Artificial Intelligence, Data Privacy, Regulation, Innovation
Mila Romanoff commends the UK government on the Bletchley Declaration
Supporting facts:
- The Bletchley Declaration was a major recent step by the UK government
Topics: AI Governance, Bletchley Declaration
Mila Romanoff discusses necessity of international consensus on AI safety risks
Supporting facts:
- Mila Romanoff believes that there need to be steps taken towards establishing international consensus on AI safety
Topics: AI Safety, Governance
Report
Artificial intelligence (AI) poses significant risks to human rights and the environment, according to various arguments and evidence. AI is increasingly used in public decision-making, which raises concerns about potential harm to individuals and groups. Notably, the World Bank lacks guidelines to address AI-related human rights risks, further exacerbating the issue.
Furthermore, AI’s carbon emissions present environmental challenges. While AI can optimize energy use, a single training session for an AI model emits 25 times more carbon than a one-way flight from New York to San Francisco. Stricter standards and regulations for AI are needed, as ethical standards alone are insufficient.
Countries like China, Brazil, and the EU are taking steps to implement more stringent regulations. The potential for discrimination and bias in AI-driven predictive policing is another cause for concern. This can lead to unfair enforcement against specific communities, violating their rights to equality and fair legal processes.
Land rights projects utilising AI can also result in property disputes and interfere with communities’ rights to property and secure housing. In addition to surveillance risks, AI-powered tools used to understand political tension can disrupt democratic processes and elections, particularly in countries with limited democratic values.
It is important to consider risks beyond surveillance, including predictive policing, land rights management, and tools for gauging political tension. The regulation of AI parallels the evolution of data privacy rights, highlighting the need for robust regulatory frameworks. Self-learning algorithms in AI systems escalate the risks associated with data usage, necessitating adequate regulation.
While data privacy has received attention, the threats posed by AI are greater, with concerns raised by numerous individuals and experts. The UK government’s Bletchley Declaration is commended for its efforts towards AI governance. However, international consensus on AI safety risks is crucial for effective regulation.
Overall, AI presents significant risks that require a balanced approach to safeguard human rights while managing uncertainties.
MT
Moira Thompson Oliver
Speech speed
175 words per minute
Speech length
3511 words
Speech time
1200 secs
Arguments
The law often lags quite a long way behind actual practice
Supporting facts:
- Microsoft stopped supplying AI tools to the US State Department pending regulation
Topics: AI, Law, Practice
Challenges in defining AI and understanding where in an organization it exists
Supporting facts:
- Creating a common definition of AI within an organization can take months
Topics: AI, Governance
Need for end-use evaluations of AI and technology considering usage and geographical locations
Supporting facts:
- Who the technology is sold to and where it is used are key factors to consider
Topics: AI, End-use evaluations, Geography
AI can be used as an accountability tool in security sectors
Supporting facts:
- Alex Walsh from DCAF works with AI in context of accountability in security sectors
Topics: security sector governance, AI usage
Human rights that are affected by AI can be enumerated
Supporting facts:
- The B-Tech taxonomy of AI risks is a guide to the risks, making it concrete how human rights can be affected
Topics: human rights, AI impact
Unintended biases may seep into AI applications, exacerbating inequality
Supporting facts:
- Example of the pothole fixing program that favored rich neighborhoods due to more mobile phone users
Topics: social inequality, AI bias
AI is widely accessible and impactful in daily life
Supporting facts:
- Her son used AI for a simple task like finding a recipe based on available ingredients
Topics: AI, Accessibility, Usage
AI can be utilized in conflict settings
Supporting facts:
- Last week’s reflection discussed the impact of generative AI in conflict settings
Topics: AI, Conflict management
AI has promising uses in detecting climate risks and change
Supporting facts:
- There is an AI tool that enables farmers to monitor weather conditions and soil quality, helping them to determine the optimal time to harvest
- There are tools that use AI to detect air quality
Topics: Artificial Intelligence, Climate Change, Agriculture
The importance of due diligence and risk assessment in AI deployment
Supporting facts:
- Due diligence and risk assessment can identify the impacts of a tool, whether they are good or bad
- Technology develops at a rapid pace, so due diligence and risk assessment must keep pace
Topics: Artificial Intelligence, Risk Assessment, Due Diligence
AI being used in some countries for social scoring with pernicious effects
Supporting facts:
- Some countries are already using AI for social scoring
Topics: AI, social scoring, international law, enforcement
Report
This analysis focuses on various topics related to AI and its impact on different sectors. It begins by highlighting the need for regulation in AI tools supplied to the US State Department, as Microsoft had stopped supplying AI tools pending regulations.
This demonstrates the importance of having proper regulations in place to ensure the ethical and responsible use of AI technology. The analysis also notes the challenges in defining AI within organisations. It states that creating a common definition of AI can take months, indicating the complex nature of AI and the need for clarity in understanding its role within organisations.
One of the key points emphasised in the analysis is the importance of diversity in AI development and embedding human rights awareness. It highlights the fact that most engineers in the tech industry are predominantly white and male. This brings attention to the need for greater diversity and inclusion in the development of AI systems to ensure that they are representative and fair.
The analysis also stresses the need for end-use evaluations of AI technology, considering both its usage and geographical locations. This highlights the importance of assessing how AI technology is being used and the potential impact it may have in different contexts.
Training on AI and human rights is another crucial point raised in the analysis. It emphasises the importance of implementing training at all levels of an organisation, from the board level to the engineers. This ensures that all stakeholders involved in AI development are knowledgeable about the ethical implications and human rights considerations.
Furthermore, the analysis discusses the potential use of AI as an accountability tool in security sectors. It states that AI can be used to promote accountability in the security sector, which can have significant implications for peace, justice, and strong institutions.
The impact of AI on human rights is also explored. The analysis mentions that AI risks can be enumerated by using a guide like the B-Tech taxonomy, which highlights the potential ways in which human rights can be affected by AI.
This underscores the importance of considering and safeguarding human rights in the development and deployment of AI systems. Unintended biases in AI applications are also highlighted in the analysis. It provides an example of a pothole fixing programme that favoured rich neighbourhoods due to more mobile phone users.
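The pothole example can be sketched as a toy calculation. This is a hypothetical illustration (the district names and reporting rates are invented), showing how allocating repairs by report volume rather than actual need over-serves areas with more smartphone users:

```python
# Toy illustration (not from the session) of reporting bias:
# two districts have the same number of potholes, but residents of
# the richer district are more likely to own smartphones and report them.
potholes = {"rich_district": 100, "poor_district": 100}
report_rate = {"rich_district": 0.8, "poor_district": 0.2}  # assumed rates

reports = {d: int(potholes[d] * report_rate[d]) for d in potholes}
total_reports = sum(reports.values())

# Allocating repair crews proportionally to reports, not actual need:
crew_share = {d: reports[d] / total_reports for d in reports}

print(reports)      # {'rich_district': 80, 'poor_district': 20}
print(crew_share)   # the rich district gets 80% of crews for 50% of the potholes
```

Both districts have identical need, yet the report-driven allocation sends 80% of crews to the richer district; correcting for reporting rates, or supplementing app data with surveys, would rebalance the split.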
This illustrates how unintended biases can seep into AI applications and exacerbate existing inequalities. The potential for AI to be used as a tool for misinformation is another area of concern. The analysis mentions a discussion at the UN Forum on Business and Human Rights, which highlighted the role of generative AI in creating misinformation in public discourse.
This raises the need for vigilance and measures to address the potential misuse of AI technology. The analysis also recognises the accessibility and impact of AI in daily life. It mentions an example where AI was used by an individual’s son for a simple task like finding a recipe based on available ingredients, illustrating the widespread use and relevance of AI in everyday situations.
The potential use of AI in conflict settings is discussed, highlighting the role of generative AI in such situations. It acknowledges that AI can be utilised in conflict settings, potentially impacting peace, justice, and strong institutions. The analysis further explores the use of AI in detecting climate risks and change.
It mentions AI tools that enable farmers to monitor weather conditions and soil quality, helping them determine the optimal time to harvest. It also highlights the use of AI in detecting air quality, emphasising its potential for addressing climate-related challenges.
Due diligence and risk assessment are identified as crucial aspects of AI deployment. The analysis stresses the importance of constantly evaluating the impacts of AI tools and technologies and addressing any potential risks promptly. Lastly, the analysis supports ongoing discussions and the creation of international frameworks for AI.
It acknowledges the need for frameworks at an international level to ensure the responsible and ethical development, deployment, and use of AI technology. In conclusion, this analysis provides valuable insights into various aspects of AI and its impact on different sectors.
It highlights the need for regulation, diversity, and human rights awareness in AI development. It emphasises the importance of end-use evaluations, training, and accountability in AI deployment. It also explores unintended biases, misinformation, accessibility, and AI’s potential in conflict and climate settings.
Additionally, it underlines the significance of due diligence, risk assessment, and international cooperation in shaping the future of AI.
OE
Olivier Elas
Speech speed
141 words per minute
Speech length
1326 words
Speech time
565 secs
Arguments
The ITU is working on Artificial Intelligence challenges, taking a strategic approach to human rights and the Sustainable Development Goals.
Supporting facts:
- The ITU started working on AI challenges in 2017.
- The ITU and UNESCO co-lead an interagency working group on AI.
- The ITU’s group focuses on delivering concrete outcomes on AI and showing that they are grounded in human rights.
Topics: Artificial Intelligence, Human Rights, Sustainable Development Goals
The ITU’s AI for Good initiative is attempting to bring tangible outcomes for society.
Supporting facts:
- AI for Good initiative is an annual summit organized by ITU.
- The initiative aims to deliver technical outcomes such as machine learning for 5G and health.
- The initiative has contributed to several technical standards in the AI domain.
Topics: AI for Good Initiative, Sustainable Development Goals
ITU is trying to embed digital rights or human rights into the standardization process.
Supporting facts:
- ITU is working with different study groups to develop technical recommendations that focus on human rights.
- The UN High Commissioner for Human Rights asked ITU, ISO and IEC to embed human rights into their standards.
- ICT for girls, gender balance, universal access, and accessibility are digital rights ITU has been working on for many years.
Topics: Digital Rights, Human Rights, Standardization
There are focus groups studying the impact of quantum computing on AI
Topics: Quantum Computing, Artificial Intelligence
Accountability can be linked to technical transparency
Supporting facts:
- AI works with models and data sets, but most of the AI systems we use are not transparent.
- Ability to audit AI could be beneficial.
- Some companies are starting to open their models and data sets, like Hugging Face.
Topics: AI, Accountability, Transparency
ITU can only provide recommendations to member states about embedding human rights.
Supporting facts:
- Olivier Elas explains that ITU’s role is limited to making recommendations for member states to avoid violations and embed human rights.
Topics: ITU, Member states, Human rights
Report
The International Telecommunication Union (ITU) is actively working on addressing the challenges posed by Artificial Intelligence (AI) while also placing a strong emphasis on human rights and the achievement of the Sustainable Development Goals (SDGs). They started working on AI challenges in 2017 and co-lead an interagency working group on AI with UNESCO.
The primary objective of this group is to deliver concrete outcomes based on human rights principles. The ITU’s AI for Good initiative is an annual summit that aims to bring tangible benefits to society. This initiative plays a vital role in delivering technical outcomes such as machine learning for 5G and health.
Furthermore, it has made significant contributions to the establishment of technical standards in the field of AI. The ITU also recognises the importance of embedding human rights into the standardisation process. They are actively working with different study groups to develop technical recommendations that focus on human rights.
The UN High Commissioner for Human Rights has asked the ITU, ISO, and IEC to integrate human rights into their standards. It’s worth noting that the ITU has been working on various digital rights issues for many years, including ICT for girls, gender balance, universal access, and accessibility.
Olivier Elas, representing the ITU, strongly advocates for the application of human rights principles within the context of AI and digital technology. He highlighted the ITU’s leading initiatives to apply human rights in the area of AI, specifically mentioning the ‘AI for Good’ initiative.
Elas also mentioned that the ITU is aligning its efforts with general assembly resolutions focused on AI and human rights. Additionally, the ITU has focus groups studying the impact of quantum computing on AI. This demonstrates their commitment to exploring emerging technologies and their potential implications.
There is growing recognition of the need for accountability and transparency in AI. While AI systems work with models and data sets, many of them lack transparency. This lack of transparency hinders the ability to audit AI systems. However, some companies, such as Hugging Face, are starting to address this issue by opening their models and data sets.
It is important to note that the ITU can only provide recommendations to member states regarding embedding human rights principles. Olivier Elas acknowledges the limitations of the ITU, noting that it is difficult for the organisation to go beyond making such recommendations given its mandate.
In conclusion, the ITU is actively engaged in addressing the challenges of AI while prioritising human rights and the achievement of the SDGs. They are working towards delivering concrete outcomes based on human rights principles through their AI for Good initiative and focusing on embedding human rights into the standardisation process.
Olivier Elas, representing the ITU, supports the application of human rights principles in AI and digital technology and highlights the alignment of their efforts with global resolutions. The ITU’s study groups are also exploring the impacts of quantum computing on AI.
However, it is important to recognise that the ITU’s role is limited primarily to making recommendations to member states, and it faces challenges in taking meaningful action beyond such recommendations.
TE
Tim Engelhardt
Speech speed
145 words per minute
Speech length
2778 words
Speech time
1150 secs
Arguments
Human rights should be integrated into the development and governance of AI
Supporting facts:
- The Universal Declaration of Human Rights has been upheld as a key framework for AI governance
- Human rights approach helps structure the discussions and risks related to AI
Topics: AI, Human Rights, Governance
Increasing transparency and stakeholder engagement is crucial in AI governance
Supporting facts:
- States need to inform people where AI systems are used
- Human Rights Due Diligence guidance involves tracking and communicating about the impacts and the methods used
Topics: Transparency, Stakeholder Engagement, AI Governance
Impact of AI in society takes a problematic turn when put in a problematic context.
Supporting facts:
- AI is increasingly affecting all areas of society and life.
- A well-developed AI system can become a weapon in a problematic context
Topics: AI impacts, Societal context, AI deployment
AI affects all the rights we can think of.
Supporting facts:
- AI’s effect is seen on security issues affecting life and liberty.
- Due process rights are impacted by AI in areas like law enforcement.
Topics: AI impacts, Human rights
Military applications of AI are often not considered in the discussions.
Topics: AI impacts, Military applications
AI tools in the health sector can impact people’s access to healthcare.
Supporting facts:
- Health insurance claims are sometimes denied based on AI assessments.
- Recently, one insurance company reportedly denied 94% of claims based on AI tools.
Topics: AI impacts, Healthcare
AI technology like facial recognition affects freedom of assembly.
Supporting facts:
- Monitoring assemblies with facial recognition technologies infringes on the right to freedom of assembly.
Topics: AI impacts, Facial recognition, Freedom of assembly
AI has the potential to centralize power and shape environments
Supporting facts:
- Tim Engelhardt reinforces what David Satola said about AI centralizing and shaping power.
Topics: AI, Power Centralization
Communities affected by AI’s implementations are often not included in its development
Supporting facts:
- Tim Engelhardt raises the concern that AI tools may be developed more from a top-down side, meaning communities affected by their deployment are often left out of the picture.
Topics: AI, Community Involvement
Strengthening community abilities to shape AI tools could be beneficial
Supporting facts:
- Tim Engelhardt gives the example of a New Zealand indigenous community radio that developed its own language model to bring traditional language back to life.
- The community used local skills and resources to compile hundreds of hours of recordings and build models that outperformed those of large competitors.
Topics: AI, Community Empowerment
Currently, discussions are ongoing about AI governance
Supporting facts:
- An advisory body is convening on a weekly basis to discuss AI governance
- The Security Council could, in theory, impose sanctions
Topics: AI governance, Security Council
Report
The analysis explores the integration of human rights into the development and governance of artificial intelligence (AI). It emphasises the importance of conducting risk-proportionate human rights due diligence by states and businesses to effectively manage AI. This approach ensures that potential risks and human rights concerns associated with AI are adequately addressed.
Including a human rights framework in AI governance helps to structure discussions and mitigate the risks involved. Transparency and stakeholder engagement are crucial in AI governance. It is vital for states to inform the public about the use of AI systems, creating an atmosphere of openness and accountability.
Human Rights Due Diligence guidance plays a pivotal role in tracking and communicating the impacts and methods used in AI systems. This enables stakeholders to effectively monitor the implications of AI technology. However, the impact of AI on society can turn harmful when the technology is deployed in a problematic context.
The analysis warns that AI, which permeates various aspects of society, can become a weapon in environments with existing problems. This highlights the importance of carefully considering the context in which AI is deployed to avoid exacerbating existing issues. Furthermore, the analysis highlights how AI can infringe upon various rights.
It emphasises the impact of AI on security issues that affect life and liberty, particularly in law enforcement and due process rights. Additionally, facial recognition technologies used for monitoring assemblies can encroach upon the right to freedom of assembly. Moreover, AI systems that attempt to recognise emotions can interfere with freedom of thought, opinion, and individual autonomy.
The military applications of AI are often overlooked in discussions. The analysis notes that these applications are frequently neglected, indicating a potential blind spot when considering the ethical and strategic implications of AI in military operations. In the healthcare sector, the analysis points out that AI tools can have a negative impact on people’s access to healthcare.
This is exemplified by instances where health insurance claims have been denied based on AI assessments. The denial of claims can result in restricted access to necessary healthcare services. The analysis further highlights the potential for AI to centralise power and shape environments.
It asserts that AI has the capability to concentrate decision-making authority and influence the dynamics of power in society. Community involvement and empowerment in shaping AI tools are important considerations. The analysis suggests that communities affected by the implementation of AI are often excluded from its development.
Strengthening community abilities to shape AI tools can lead to more inclusive and beneficial outcomes. The analysis suggests that ongoing discussions and evaluations are necessary for effective AI governance. It acknowledges the existence of advisory bodies convening regularly to deliberate on AI governance.
However, it emphasises the need for increased dialogue and evaluations to ensure that AI governance aligns with human rights standards and addresses the concerns raised by the technology. Overall, the analysis highlights the significance of integrating human rights considerations in the development and governance of AI.
It emphasises the need for risk-proportionate human rights due diligence, transparency, stakeholder engagement, and careful consideration of the societal context in which AI is deployed. The analysis also points out potential infringements on various rights, the often-overlooked military applications of AI, and the impact of AI on healthcare access.
It calls for community involvement and empowerment in shaping AI tools and underscores the necessity of ongoing discussions and evaluations for effective AI governance.