IGF 2023 WS #313 Generative AI systems facing UNESCO AI Ethics Recommendation
Event report
Speakers and Moderators
Speakers:
- Gabriela Ramos, Intergovernmental Organization
- Anriette Esterhuysen, Civil Society, African Group
- Stefaan G. Verhulst, Technical Community, Western European and Others Group (WEOG)
- Alexandre Barbosa, Civil Society, Latin American and Caribbean Group (GRULAC)
- Dawit Bekele, Technical Community, African Group
- Dorothy Gordon, Civil Society, African Group
- Siva Prasad Rambhatla, Civil Society, Asia-Pacific Group
- Changfeng Chen, Civil Society, Asia-Pacific Group
- Marielza Oliveira, Intergovernmental Organization
- Fabio Senne, Civil Society, Latin American and Caribbean Group (GRULAC)
Moderators:
- Yves Poullet, Intergovernmental Organization, Western European and Others Group (WEOG)
Disclaimer: This is not an official record of the IGF session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed. The official record of the session can be found on the IGF's official website.
Session report
Changfeng Chen
The concept of culture lag refers to the delayed adjustment of non-material aspects such as beliefs, values, and norms to changes in material culture, such as technology. This concept aptly describes the situation with generative AI, where technology changes faster than non-material aspects such as regulations. The rapid evolution of generative AI presents challenges in adapting legal and ethical frameworks to address its potential risks and implications.
While some argue for a moratorium on generative AI to allow time for comprehensive regulation and understanding of its implications, this approach is deemed drastic and unlikely to be effective in the long term. The field of generative AI is constantly evolving, and a blanket ban would hinder progress and innovation. Instead, flexible and adaptive regulatory frameworks are needed to keep up with technological advancements and address potential risks holistically.
China has emerged as a leader in the development and regulation of generative AI. Companies like Baidu, ByteDance, and iFlytek are at the forefront of generative AI applications, with their technology installed on mobile phones and laptops to assist users in everyday decisions, such as choosing a restaurant. China has released interim administrative measures for generative AI services, which require legitimate data sourcing, respect for rights, and risk management. This highlights China’s commitment to responsible AI development and regulation.
However, there are concerns about the fairness of the regulatory framework in China. Some argue that the heaviest responsibility is placed on generative AI providers, while other stakeholders such as data owners, computing power suppliers, and model designers also play critical roles. Allocating the majority of responsibility to providers is viewed as unfair and may hinder collaboration and innovation in the field.
Generative artificial intelligence has the potential to significantly contribute to the education of young people and foster a new perspective on rights. By harnessing the power of generative AI, educational institutions can create dynamic and personalized learning experiences for students. Additionally, young people have the right to access and use new technologies for learning and development, and it is the responsibility of adults and professionals to guide them in leveraging these technologies effectively and ethically.
Efforts have already been initiated to promote these rights for young people, such as UNESCO’s Media and Information Literacy Week, which aims to enhance young people’s skills in critically analyzing and engaging with media and information. This reflects the international community’s recognition of the importance of digital literacy and ensuring equitable access to information and technology for young people.
Promoting professionalism in the field of artificial intelligence is crucial. Professionalism entails adhering to standards of reliability, quality, ethical behavior, respect, responsibility, and teamwork. By promoting professionalism, the field of AI can operate within ethical boundaries and ensure the responsible development and use of AI technologies.
It is also important to have a professional conscience towards new technologies that respects multicultural values. While it is necessary to respect and consider regionalized values and regulations, there should also be a broader perspective in the technical field to promote global collaboration and understanding.
In conclusion, the concept of culture lag accurately describes the challenges faced in regulating generative AI amidst rapid technological advancements. A moratorium on generative AI is seen as drastic and ineffective, and instead, flexible and adaptive regulatory frameworks should be established. China is leading in the development and regulation of generative AI, but concerns about fairness in the regulatory framework exist. Generative AI has the potential to revolutionize education and empower young people, but it requires responsible guidance from adults and professionals. Efforts are underway to promote these rights, such as UNESCO’s Media and Information Literacy Week. Promoting professionalism and a professional conscience towards new technologies is crucial in ensuring ethical and responsible AI development.
Audience
The debate surrounding the responsible usage and regulation of AI, particularly generative AI, is of significant importance in today’s rapidly advancing technological landscape. The summary highlights several key arguments and perspectives on this matter.
One argument put forth emphasises the need to utilise the existing AI tools and guidelines until specific regulations for generative AI are developed. It is acknowledged that constructing an entirely new ethical framework for generative AI would be a time-consuming process. Therefore, it is deemed wise to make use of the current available resources and regulations until more comprehensive guidelines for generative AI are established.
Another argument draws attention to the potential risks associated with the use of generative models. Specifically, it highlights the risks of inaccuracy and of sources fabricated by these models. Of concern is the fact that many individuals, especially young people, are inclined to utilise generative models due to their efficiency, yet may be unaware of the risks involved. Thus, it is suggested that raising awareness among the public, especially the younger generation, about the potential risks of generative AI is crucial.
Advocacy for the importance of raising awareness regarding the use of generative AI models is another notable observation. It is argued that greater awareness can be achieved through quality education and the establishment of strong institutions. By providing individuals with a deeper understanding of generative AI and its potential risks, it is believed that they will be better equipped to make responsible and informed choices.
The responsible coding and designing of AI systems are also stressed in the summary. It is essential to approach the development of AI systems with a sense of responsibility, both in terms of coding practices and design considerations. Implementing responsible practices ensures that AI systems are developed ethically and do not pose unnecessary risks to individuals or society as a whole.
One perspective questions whether self-regulation alone is sufficient for responsible AI or if an official institution should have a role in examining AI technologies. The argument here revolves around the idea that while self-regulation may be important, there is a need for external oversight to ensure the accountability and responsible usage of AI technologies.
It is worth noting that AI systems are no longer solely the domain of big tech companies. The accessibility of AI development has increased, allowing anyone, including criminals and young individuals, to develop AI models. This accessibility raises concerns regarding the potential misuse or irresponsible development of AI technologies.
The feasibility of regulating everyone as AI development becomes more accessible is called into question. It is argued that regulating every individual may not be a practical solution. With the ease of developing AI models without extensive technical expertise, alternative approaches to regulation may need to be explored.
Regulating the data that can be used for AI, both for commercial and official usage, is seen as a possibility. However, regulating the development of AI models is deemed less feasible. This observation highlights the challenges in finding a balance between ensuring responsible AI usage while still fostering innovation and development in the field.
In conclusion, the summary provides a comprehensive overview of the arguments and perspectives surrounding responsible AI usage and regulation. It underscores the importance of utilising existing AI tools and guidelines, raising awareness about the potential risks of generative models, and promoting responsible coding and design practices. The debate surrounding self-regulation versus external oversight, the increasing accessibility of AI development, and the challenges of regulating AI models are also considered.
Moderator – Yves Poullet
UNESCO has made significant strides in regulating AI ethics. In November 2021, it adopted its Recommendation on the Ethics of Artificial Intelligence, demonstrating its commitment to addressing the challenges posed by artificial intelligence. This recommendation has already been applied to ChatGPT, indicating that UNESCO is actively implementing its ethical guidelines. Gabriela Ramos, Assistant Director-General for Social and Human Sciences at UNESCO, is leading the implementation efforts. Despite her absence from the event, she sent a video expressing support and dedication to ensuring the ethical use of AI.
Generative AI systems, which include foundation models and applications, require attention from public authorities due to their unique characteristics and potential risks. Because these models are trained on large amounts of data for tasks such as language translation and speech recognition, there is concern about biases and inaccuracies in the language they produce. The future of generative AI is seen as potentially revolutionary, but these systems also carry risks, such as the manipulation of individuals and threats to job security. Generative AI systems also pose risks to democracy, as they can spread misinformation and disinformation.
Public regulation, or at least some form of regulation, is necessary to address these risks, with discussions on the feasibility of a moratorium and on the different approaches taken by leading countries. The ethical values set by UNESCO are widely accepted worldwide, but the challenge lies in their enforcement. Standardization and quality assessment are proposed as effective mechanisms to reinforce ethical values. The idea of AI localism, where local communities propose AI regulations aligned with their cultural values, is appreciated. Concerns are raised about language discrimination and the poor performance of AI systems in languages other than dominant ones.
Efforts to address these issues, such as Finland’s development of large Finnish-language datasets, are encouraged. In conclusion, UNESCO’s efforts in regulating AI ethics and the need for public regulation and enforcement mechanisms are highlighted, along with the challenges and potential harms associated with generative AI systems.
Dawit Bekele
Generative AIs are advanced artificial intelligence systems that can generate human-like content. These models are built on large-scale neural networks such as GPT (Generative Pre-trained Transformer). By learning from extensive amounts of data, generative AIs can produce outputs that closely resemble human-created content. However, they may also perpetuate or amplify existing biases if the training data contains biases or unrepresentative samples.
Despite these concerns, generative AI technology presents significant opportunities for innovation. Researchers and public authorities are actively working to address the ethical issues inherent in generative AI, with discussions taking place at UNESCO. Regulatory frameworks are needed to ensure transparency and accountability in the development and deployment of these models.
Generative AI systems also have the potential to impact the education system negatively. They can provide answers to learners immediately, potentially replacing the need for human assistance. This raises concerns about the displacement of human workers and disruption of traditional job markets.
It is crucial to have local responses tailored to the specific needs and values of each society when implementing generative AI. Societies should have the autonomy to decide how they use the technology based on their specific contextual considerations. However, certain countries may face challenges in handling generative AI due to a lack of resources and knowledge. Organizations like UNESCO should empower and educate societies about AI, providing necessary resources and knowledge to ensure responsible use. Big tech companies also have a responsibility to financially support less-resourced countries in adopting and managing generative AI technology.
In conclusion, generative AI offers significant opportunities for innovation, but also raises ethical concerns. Regulatory frameworks, local responses, and support from organizations like UNESCO and big tech companies are necessary for responsible and equitable implementation of generative AI technology.
Gabriela Ramos
The analysis reveals potential negative implications of AI that necessitate effective governance systems and regulation. Concerns arise from gender and racial biases found in generative AI models such as ChatGPT. This emphasizes the urgent need for ethical guidelines and frameworks to govern AI development and deployment.
UNESCO has conducted an ethical analysis of generative AI models. This analysis underscores the importance of implementing proper governance and regulation measures. The impact of AI on industries and infrastructure aligns with Sustainable Development Goal 9. However, without appropriate guidelines, the risks and consequences associated with AI deployment can be detrimental.
To mitigate these risks, UNESCO recommends the implementation of ethical impact assessments. These assessments foresee the potential consequences of AI systems and ensure adherence to ethical standards. Considering the rapid advancement of AI technology, ethical reflection is crucial in addressing questions and concerns related to AI risks.
In addition to ethical considerations, the concentration of AI power among a few companies and countries is a cause for concern. The impressive capabilities of generative AI raise worries about negative social and political implications. Furthermore, legal actions have been taken regarding potential copyright breaches by OpenAI. It is important to make AI power more inclusive to reduce inequalities, as emphasized by Sustainable Development Goal 10.
Moreover, countries need to be well-prepared to handle legal and regulatory issues pertaining to AI. UNESCO is actively collaborating with 50 governments globally to establish readiness and ethical impact assessment methodologies. Additionally, UNESCO, in partnership with the renowned Alan Turing Institute, is launching an AI ethics observatory. These initiatives aim to support countries in developing robust frameworks for managing AI technologies.
In conclusion, the analysis emphasizes the need for effective governance systems and regulation to address potential negative implications of AI, such as biases and concentration of power. Implementation of UNESCO’s recommendations on ethical impact assessments and ensuring a more inclusive distribution of AI power are crucial in mitigating risks. Collaboration with governments and launching the AI ethics observatory demonstrate UNESCO’s commitment to harmonizing AI technologies with ethical considerations on a global scale.
Marielza Oliveira
UNESCO’s Information for All Programme (IFAP) has a crucial role in advocating for ethical, legal, and human rights issues in the realm of digital technologies, particularly artificial intelligence (AI). They recognize that advancements in AI, specifically generative AI, have significant implications for global societies. As a result, IFAP emphasizes the importance of examining the impacts of AI through the lens of ethics and human rights to ensure responsible and equitable use of AI.
IFAP is committed to ensuring access to information for all individuals. They endorse a new strategic plan that highlights the importance of digital technologies, including AI, for our fundamental right to access information. IFAP aims to bridge the digital divide and ensure that everyone can benefit from the opportunities presented by these technologies.
Additionally, IFAP focuses on building capacities to address the ethical concerns arising from the use of frontier technologies. They recognize the potential of inclusive, equitable, and knowledgeable societies driven by technology. To achieve this, IFAP supports and encourages research into the implications of these frontier technologies. They assist institutions in making AI technologies accessible and beneficial to everyone, while also raising awareness about the risks associated with their use. By examining and understanding these risks, IFAP aims to develop effective mechanisms and strategies to address them.
Another important aspect of IFAP’s work is the promotion of the implementation of recommendations on the ethics of AI. They actively engage in discussions and collaborations with stakeholders to design and govern AI based on evidence-based frameworks. IFAP recognizes that a multi-stakeholder approach is essential to create responsible policies and guidelines.
In addition, IFAP actively participates in global dialogues and forums to address digital divides and inequalities. They function as a platform for sharing experiences and best practices in overcoming these challenges. Through these dialogues and forums, IFAP aims to foster collaboration and partnerships to build sustainability and equality across all knowledge societies.
In conclusion, UNESCO’s Information for All Programme (IFAP) is at the forefront of promoting ethical, legal, and human rights issues in the context of digital technologies, especially AI. They emphasize the need to examine the impacts of AI through ethical and human rights lenses, while also ensuring access to information for all individuals. IFAP supports research into the inclusive and beneficial use of frontier technologies, along with raising awareness about the associated risks. They actively participate in global dialogues and forums to address digital divides and inequalities. Through their collective efforts, IFAP strives to shape a digital future that upholds shared values, sustainability, and equality across knowledge societies.
Fabio Senne
The summary is based on a discussion among speakers regarding the ethical, legal, and social implications of generative AI. They agree that a global forum is necessary to address these issues. Additionally, promoting digital literacy and critical thinking skills among young people is seen as crucial for responsible use of generative AI.
One speaker, Omar Farouk from Bangladesh, emphasizes the need for convening a global forum to discuss the ethical, legal, and social implications of generative AI. This indicates an awareness of the potential risks and challenges associated with this technology.
UNICEF also voices concerns about digital literacy and critical thinking skills. They argue that young people need to be educated about generative AI to be informed users. This highlights the importance of ensuring that individuals understand the potential implications and risks of generative AI, especially as it becomes more prevalent in society.
Another area of concern raised by UNICEF is the impact of generative AI on child protection and empowerment. They express worries about the unknown effects of AI on children and the need to protect and empower them in an AI-driven world.
The importance of more investigations and data in the field of AI is suggested by a speaker working in Brazil with CETIC.br, a UNESCO Category 2 centre. This indicates a recognized need for further research and understanding of AI, as it continues to rapidly develop.
Global digital inequality is identified as a major issue in the discussion. Inequalities in accessing the internet and digital technologies can affect the quality of training data, and languages may not be properly represented in AI models. In addition, there are inequalities within countries that impact the diversity of data used. These concerns highlight the need to address digital inequalities to ensure more inclusive and human-centred AI.
The need for improved AI literacy and education is emphasised. Data from Brazil reveals an underdevelopment of informational skills among children, with many unsure of their ability to assess online information. Therefore, raising awareness and literacy about AI in educational systems is crucial.
There is a call to monitor and evaluate AI, recognising the importance of assessing its impact and making informed decisions. Mention is made of international frameworks from OECD and UNESCO, highlighting the need for global cooperation and collaboration in understanding and regulating AI.
In conclusion, the discussions highlight the need to address the ethical, legal, and social implications of generative AI through a global forum. Promoting digital literacy and critical thinking skills, protecting children, conducting further investigations, addressing digital inequalities, improving AI literacy and education, and monitoring AI are all seen as crucial steps in fostering responsible and inclusive AI development.
Stefaan Verhulst
The discussion surrounding Artificial Intelligence (AI) has shifted towards responsible technology development rather than advocating for an outright ban or extensive government intervention. OpenAI, an AI research organisation, argues for closed development to prevent potential misuse and abuse of AI technology. On the other hand, Meta, formerly known as Facebook, supports an open approach to developing generative AI.
Maintaining openness in AI research is considered crucial for advancing the field, despite concerns about potential abuse. AI research has historically been open, leading to significant advancements. Closing off research could create power asymmetries and solidify the current power positions in the AI industry.
Another important aspect of the AI discourse is adopting a rights-based approach towards AI. This includes prioritising principles such as safety, effectiveness, notice and explainability, and considering human alternatives. The Office of Science and Technology Policy (OSTP) has taken a multi-stakeholder approach to developing a Bill of Rights that emphasises these aspects.
In the United States, while there is a self-regulatory and co-regulatory approach to AI governance at the federal level, states and cities have taken a proactive stance. Currently, around 200 bills are being discussed at the state level, and several cities have enacted legislation regarding AI.
Engaging with young people is crucial in addressing AI-related issues. Young people often provide informed solutions and in many countries, they represent the majority of the population. Their deep understanding of AI highlights the need to listen to their preferences and incorporate their solutions. It is believed that engaging with young people can lead to more legitimate and acceptable use of AI. Additionally, innovative methods of engagement aligned with their preferred platforms need to be developed.
The importance of data quality cannot be overlooked when discussing AI, particularly in the context of generative AI. The principle of “garbage in, garbage out” becomes crucial, as the quality of the output is only as good as the quality of the input data. Attention should be focused not only on the AI models themselves but also on creating high-quality data to feed into these models.
Furthermore, open data, open science, and quality statistics have become more important than ever for qualitative generative AI. Prioritising these aspects contributes to the overall improvement and reliability of AI systems.
Overall, the discussion on AI emphasises responsible technology development rather than outright bans or government intervention. Maintaining openness in AI research is seen as crucial for the advancement of the field, although caution must be exercised to address potential risks and abuses. A rights-based approach, proactive governance at the local level, meaningful engagement with young people, and attention to data quality are all key considerations in the development and deployment of AI technology.
Siva Prasad Rambhatla
The analysis explores different perspectives on the impact of Artificial Intelligence (AI) on society. One viewpoint highlights that AI has contributed to the creation and exacerbation of inequalities in society. Specifically, it has had a significant impact on marginalized communities, especially those in the global South. The introduction of AI technologies and applications has reinforced existing social, cultural, and economic barriers, widening the gap between privileged and disadvantaged groups. This sentiment is driven by the assertion that AI, particularly in its current form, creates new types of inequalities and further amplifies existing ones.
Another viewpoint revolves around the negative consequences of generative AI models. These models have the potential to replace various job roles traditionally performed by humans. This phenomenon has raised concerns regarding the social and economic implications of widespread job displacement. In addition, the advent of generative models has been associated with a growing disconnect within societies. As AI takes over certain tasks, the interaction and collaboration between humans may decrease, leading to potential societal fragmentation.
Conversely, there is a positive stance arguing for AI to adopt local or regionally specific approaches and to preserve local knowledge and traditional epistemologies. This perspective highlights the potential benefits of embracing context-specific AI applications that address unique regional challenges. Advocates argue that these approaches can contribute to building more inclusive and equitable knowledge societies. By utilizing local knowledge and traditions, AI can help identify appropriate solutions to complex human problems.
Inclusivity and multiculturalism are also emphasized as essential aspects of AI design. Advocates argue that AI systems must be designed with consideration for marginalized and indigenous communities. By incorporating inclusive practices in AI development, it is possible to mitigate the potential negative impacts and ensure that the benefits of AI are accessible to all.
Additionally, the analysis underscores the importance of documenting and utilizing local knowledge systems in model building. By incorporating local knowledge, AI models can be more effective in addressing local and regional issues. The accumulation of local knowledge can contribute to the development of robust and contextually sensitive AI solutions.
Overall, the analysis highlights the complex and multi-faceted impact of AI on society. While there are concerns about the creation of inequalities and job displacement, there are also opportunities for AI to be inclusive, region-specific, and leverage local knowledge. By considering these various perspectives and incorporating diverse viewpoints, it is possible to shape the development and implementation of AI technologies in a way that benefits all members of society.
Speakers
Speech speed
0 words per minute
Speech length
words
Speech time
0 secs
Report
The debate surrounding the responsible usage and regulation of AI, particularly generative AI, is of significant importance in today’s rapidly advancing technological landscape. The summary highlights several key arguments and perspectives on this matter.
One argument put forth emphasises the need to utilise the existing AI tools and guidelines until specific regulations for generative AI are developed.
It is acknowledged that constructing an entirely new ethical framework for generative AI would be a time-consuming process. Therefore, it is deemed wise to make use of the current available resources and regulations until more comprehensive guidelines for generative AI are established.
Another argument draws attention to the potential risks associated with the use of generative models.
Specifically, it highlights the risks of inaccuracy and unreliable sources that are made up by these models. Of concern is the fact that many individuals, especially young people, are inclined to utilise generative models due to their efficiency. However, they may be unaware of the potential risks involved.
Thus, it is suggested that raising awareness among the public, especially the younger generation, about the potential risks of generative AI is crucial.
Advocacy for the importance of raising awareness regarding the use of generative AI models is another notable observation.
It is argued that greater awareness can be achieved through quality education and the establishment of strong institutions. By providing individuals with a deeper understanding of generative AI and its potential risks, it is believed that they will be better equipped to make responsible and informed choices.
The responsible coding and designing of AI systems are also stressed in the summary.
It is essential to approach the development of AI systems with a sense of responsibility, both in terms of coding practices and design considerations. Implementing responsible practices ensures that AI systems are developed ethically and do not pose unnecessary risks to individuals or society as a whole.
One perspective questions whether self-regulation alone is sufficient for responsible AI or if an official institution should have a role in examining AI technologies.
The argument here revolves around the idea that while self-regulation may be important, there is a need for external oversight to ensure the accountability and responsible usage of AI technologies.
It is worth noting that AI systems are no longer solely the domain of big tech companies.
The accessibility of AI development has increased, allowing anyone, including criminals and young individuals, to develop AI models. This accessibility raises concerns regarding the potential misuse or irresponsible development of AI technologies.
The feasibility of regulating everyone as AI development becomes more accessible is called into question.
It is argued that regulating every individual may not be a practical solution. With the ease of developing AI models without extensive technical expertise, alternative approaches to regulation may need to be explored.
Regulating the data that can be used for AI, both for commercial and official usage, is seen as a possibility.
However, regulating the development of AI models is deemed less feasible. This observation highlights the challenges in finding a balance between ensuring responsible AI usage while still fostering innovation and development in the field.
In conclusion, the discussion offers a comprehensive overview of the arguments and perspectives surrounding responsible AI use and regulation.
It underscores the importance of utilising existing AI tools and guidelines, raising awareness about the potential risks of generative models, and promoting responsible coding and design practices. The debate surrounding self-regulation versus external oversight, the increasing accessibility of AI development, and the challenges of regulating AI models is also considered.
Report
The concept of culture lag refers to the delayed adjustment of non-material aspects such as beliefs, values, and norms to changes in material culture, such as technology. This concept aptly describes the situation with generative AI, where technology changes faster than non-material aspects such as regulations.
The rapid evolution of generative AI presents challenges in adapting legal and ethical frameworks to address its potential risks and implications.
While some argue for a moratorium on generative AI to allow time for comprehensive regulation and understanding of its implications, this approach is deemed drastic and unlikely to be effective in the long term.
The field of generative AI is constantly evolving, and a blanket ban would hinder progress and innovation. Instead, flexible and adaptive regulatory frameworks are needed to keep up with technological advancements and address potential risks holistically.
China has emerged as a leader in the development and regulation of generative AI.
Companies like Baidu, ByteDance, and iFLYTEK are at the forefront of generative AI applications, with their technology being installed on mobile phones and laptops to assist users in decision-making processes, such as choosing a restaurant. China has released interim administrative measures for generative AI services, which demand legitimate data sourcing, respect for rights, and risk management.
This highlights China’s commitment to responsible AI development and regulation.
However, there are concerns about the fairness of the regulatory framework in China. Some argue that the heaviest responsibility is placed on generative AI providers, while other stakeholders such as data owners, computing power suppliers, and model designers also play critical roles.
Allocating the majority of responsibility to providers is viewed as unfair and may hinder collaboration and innovation in the field.
Generative artificial intelligence has the potential to significantly contribute to the education of young people and foster a new perspective on rights.
By harnessing the power of generative AI, educational institutions can create dynamic and personalized learning experiences for students. Additionally, young people have the right to access and use new technologies for learning and development, and it is the responsibility of adults and professionals to guide them in leveraging these technologies effectively and ethically.
Efforts have already been initiated to promote these rights for young people, such as UNESCO’s Media and Information Literacy Week, which aims to enhance young people’s skills in critically analyzing and engaging with media and information.
This reflects the international community’s recognition of the importance of digital literacy and ensuring equitable access to information and technology for young people.
Promoting professionalism in the field of artificial intelligence is crucial. Professionalism entails adhering to a set of standards and behaviors such as reliability, high standards, ethical behavior, respect, responsibility, and teamwork.
By promoting professionalism, the field of AI can operate within ethical boundaries and ensure the responsible development and use of AI technologies.
It is also important to have a professional conscience towards new technologies that respects multicultural values.
While it is necessary to respect and consider regionalized values and regulations, there should also be a broader perspective in the technical field to promote global collaboration and understanding.
In conclusion, the concept of culture lag accurately describes the challenges faced in regulating generative AI amidst rapid technological advancements.
A moratorium on generative AI is seen as drastic and ineffective, and instead, flexible and adaptive regulatory frameworks should be established. China is leading in the development and regulation of generative AI, but concerns about fairness in the regulatory framework exist.
Generative AI has the potential to revolutionize education and empower young people, but it requires responsible guidance from adults and professionals. Efforts are underway to promote these rights, such as UNESCO’s Media and Information Literacy Week. Promoting professionalism and a professional conscience towards new technologies is crucial in ensuring ethical and responsible AI development.
Report
Generative AI systems are advanced artificial intelligence models that can generate human-like content. They are built on large-scale neural networks such as GPT (Generative Pre-trained Transformer). By learning from extensive amounts of data, generative AI systems can produce outputs that closely resemble human-created content.
However, they may also perpetuate or amplify existing biases if the training data contains biases or unrepresentative samples.
Despite these concerns, generative AI technology presents significant opportunities for innovation. Researchers and public authorities are actively working to address the ethical issues inherent in generative AI, with discussions taking place at UNESCO.
Regulatory frameworks are needed to ensure transparency and accountability in the development and deployment of these models.
Generative AI systems also have the potential to impact the education system negatively. They can provide answers to learners immediately, potentially replacing the need for human assistance.
This raises concerns about the displacement of human workers and disruption of traditional job markets.
It is crucial to have local responses tailored to the specific needs and values of each society when implementing generative AI. Societies should have the autonomy to decide how they use the technology based on their specific contextual considerations.
However, certain countries may face challenges in handling generative AI due to a lack of resources and knowledge. Organizations like UNESCO should empower and educate societies about AI, providing necessary resources and knowledge to ensure responsible use. Big tech companies also have a responsibility to financially support less-resourced countries in adopting and managing generative AI technology.
In conclusion, generative AI offers significant opportunities for innovation, but also raises ethical concerns.
Regulatory frameworks, local responses, and support from organizations like UNESCO and big tech companies are necessary for responsible and equitable implementation of generative AI technology.
Report
Speakers discussed the ethical, legal, and social implications of generative AI and agreed that a global forum is necessary to address these issues. Additionally, promoting digital literacy and critical thinking skills among young people is seen as crucial for responsible use of generative AI.
One speaker, Omar Farouk from Bangladesh, emphasizes the need for convening a global forum to discuss the ethical, legal, and social implications of generative AI. This indicates an awareness of the potential risks and challenges associated with this technology.
UNICEF also voices concerns about digital literacy and critical thinking skills. They argue that young people need to be educated about generative AI to be informed users. This highlights the importance of ensuring that individuals understand the potential implications and risks of generative AI, especially as it becomes more prevalent in society.
Another area of concern raised by UNICEF is the impact of generative AI on child protection and empowerment. They express worries about the unknown effects of AI on children and the need to protect and empower them in an AI-driven world.
The importance of more investigations and data in the field of AI is suggested by a speaker working in Brazil with CETIC.br, a UNESCO Category 2 centre. This indicates a recognized need for further research and understanding of AI, as it continues to rapidly develop.
Global digital inequality is identified as a major issue in the discussion. Inequalities in accessing the internet and digital technologies can affect the quality of training data, and languages may not be properly represented in AI models. In addition, there are inequalities within countries that impact the diversity of data used.
These concerns highlight the need to address digital inequalities to ensure more inclusive and human-centred AI.
The need for improved AI literacy and education is emphasised. Data from Brazil reveals an underdevelopment of informational skills among children, with many unsure of their ability to assess online information.
Therefore, raising awareness and literacy about AI in educational systems is crucial.
There is a call to monitor and evaluate AI, recognising the importance of assessing its impact and making informed decisions. Mention is made of international frameworks from OECD and UNESCO, highlighting the need for global cooperation and collaboration in understanding and regulating AI.
In conclusion, the discussions highlight the need to address the ethical, legal, and social implications of generative AI through a global forum. Promoting digital literacy and critical thinking skills, protecting children, conducting further investigations, addressing digital inequalities, improving AI literacy and education, and monitoring AI are all seen as crucial steps in fostering responsible and inclusive AI development.
Report
The analysis reveals potential negative implications of AI that necessitate effective governance systems and regulation. Concerns arise from gender and racial biases found in generative AI models such as ChatGPT. This emphasizes the urgent need for ethical guidelines and frameworks to govern AI development and deployment.
UNESCO has conducted an ethical analysis of generative AI models.
This analysis underscores the importance of implementing proper governance and regulation measures. The impact of AI on industries and infrastructure aligns with Sustainable Development Goal 9. However, without appropriate guidelines, the risks and consequences associated with AI deployment can be detrimental.
To mitigate these risks, UNESCO recommends the implementation of ethical impact assessments.
These assessments foresee the potential consequences of AI systems and ensure adherence to ethical standards. Considering the rapid advancement of AI technology, ethical reflection is crucial in addressing questions and concerns related to AI risks.
In addition to ethical considerations, the concentration of AI power among a few companies and countries is a cause for concern.
The impressive capabilities of generative AI raise worries about negative social and political implications. Furthermore, legal actions have been taken regarding potential copyright breaches by OpenAI. It is important to make AI power more inclusive to reduce inequalities, as emphasized by Sustainable Development Goal 10.
Moreover, countries need to be well-prepared to handle legal and regulatory issues pertaining to AI.
UNESCO is actively collaborating with 50 governments globally to establish readiness and ethical impact assessment methodologies. Additionally, UNESCO, in partnership with the renowned Alan Turing Institute, is launching an AI ethics observatory. These initiatives aim to support countries in developing robust frameworks for managing AI technologies.
In conclusion, the analysis emphasizes the need for effective governance systems and regulation to address potential negative implications of AI, such as biases and concentration of power.
Implementation of UNESCO’s recommendations on ethical impact assessments and ensuring a more inclusive distribution of AI power are crucial in mitigating risks. Collaboration with governments and launching the AI ethics observatory demonstrate UNESCO’s commitment to harmonizing AI technologies with ethical considerations on a global scale.
Report
UNESCO's Information for All Programme (IFAP) has a crucial role in advocating for ethical, legal, and human rights issues in the realm of digital technologies, particularly artificial intelligence (AI). It recognizes that advancements in AI, specifically generative AI, have significant implications for global societies.
As a result, IFAP emphasizes the importance of examining the impacts of AI through the lens of ethics and human rights to ensure responsible and equitable use of AI.
IFAP is committed to ensuring access to information for all individuals.
They endorse a new strategic plan that highlights the importance of digital technologies, including AI, for our fundamental right to access information. IFAP aims to bridge the digital divide and ensure that everyone can benefit from the opportunities presented by these technologies.
Additionally, IFAP focuses on building capacities to address the ethical concerns arising from the use of frontier technologies.
They recognize the potential of inclusive, equitable, and knowledgeable societies driven by technology. To achieve this, IFAP supports and encourages research into the implications of these frontier technologies. They assist institutions in making AI technologies accessible and beneficial to everyone, while also raising awareness about the risks associated with their use.
By examining and understanding these risks, IFAP aims to develop effective mechanisms and strategies to address them.
Another important aspect of IFAP’s work is the promotion of the implementation of recommendations on the ethics of AI. They actively engage in discussions and collaborations with stakeholders to design and govern AI based on evidence-based frameworks.
IFAP recognizes that a multi-stakeholder approach is essential to create responsible policies and guidelines.
In addition, IFAP actively participates in global dialogues and forums to address digital divides and inequalities. They function as a platform for sharing experiences and best practices in overcoming these challenges.
Through these dialogues and forums, IFAP aims to foster collaboration and partnerships to build sustainability and equality across all knowledge societies.
In conclusion, UNESCO's Information for All Programme (IFAP) is at the forefront of promoting ethical, legal, and human rights issues in the context of digital technologies, especially AI.
They emphasize the need to examine the impacts of AI through ethical and human rights lenses, while also ensuring access to information for all individuals. IFAP supports research into the inclusive and beneficial use of frontier technologies, along with raising awareness about the associated risks.
They actively participate in global dialogues and forums to address digital divides and inequalities. Through their collective efforts, IFAP strives to shape a digital future that upholds shared values, sustainability, and equality across knowledge societies.
Report
UNESCO has made significant strides in regulating AI ethics. In November 2021, it adopted the Recommendation on the Ethics of Artificial Intelligence, demonstrating its commitment to addressing the challenges posed by AI. This recommendation has already been applied to ChatGPT, indicating that UNESCO is actively implementing its ethical guidelines.
Gabriela Ramos, who heads UNESCO's Social and Human Sciences (SHS) sector, is leading the implementation efforts. Although absent from the session, she sent a video message expressing support and dedication to ensuring the ethical use of AI. Generative AI systems, which include foundation models and applications, require attention from public authorities due to their unique characteristics and potential risks.
There is concern about potential biases and inaccuracies in the language produced by generative AI models, which process vast amounts of data for tasks such as language translation and speech recognition. The future of generative AI is seen as potentially revolutionary, but there are also risks associated with these systems, such as the manipulation of individuals and job security concerns.
Generative AI systems also pose risks to democracy, as they can spread misinformation and disinformation. Public regulation or some form of regulation is necessary to address these risks, with discussions on the feasibility of a moratorium and different approaches taken by leading countries.
The ethical values set by UNESCO are widely accepted worldwide, but the challenge lies in their enforcement. Standardization and quality assessment are proposed as effective mechanisms to reinforce ethical values. The idea of AI localism, where local communities propose AI regulations aligned with their cultural values, is appreciated.
Concerns are raised about language discrimination and the poor performance of AI systems in languages other than dominant ones. Efforts to address these issues, such as Finland's initiative to build large-scale datasets in the Finnish language, are encouraged. In conclusion, UNESCO's efforts in regulating AI ethics and the need for public regulation and enforcement mechanisms are highlighted, along with the challenges and potential harms associated with generative AI systems.
Report
The analysis explores different perspectives on the impact of Artificial Intelligence (AI) on society. One viewpoint highlights that AI has contributed to the creation and exacerbation of inequalities in society. Specifically, it has had a significant impact on marginalized communities, especially those in the global South.
The introduction of AI technologies and applications has reinforced existing social, cultural, and economic barriers, widening the gap between privileged and disadvantaged groups. This sentiment is driven by the assertion that AI, particularly in its current form, creates new types of inequalities and further amplifies existing ones.
Another viewpoint revolves around the negative consequences of generative AI models.
These models have the potential to replace various job roles traditionally performed by humans. This phenomenon has raised concerns regarding the social and economic implications of widespread job displacement. In addition, the advent of generative models has been associated with a growing disconnect within societies.
As AI takes over certain tasks, the interaction and collaboration between humans may decrease, leading to potential societal fragmentation.
Conversely, there is a positive stance arguing for AI to adopt local or regionally specific approaches and to preserve local knowledge and traditional epistemologies.
This perspective highlights the potential benefits of embracing context-specific AI applications that address unique regional challenges. Advocates argue that these approaches can contribute to building more inclusive and equitable knowledge societies. By utilizing local knowledge and traditions, AI can help identify appropriate solutions to complex human problems.
Inclusivity and multiculturalism are also emphasized as essential aspects of AI design.
Advocates argue that AI systems must be designed with consideration for marginalized and indigenous communities. By incorporating inclusive practices in AI development, it is possible to mitigate the potential negative impacts and ensure that the benefits of AI are accessible to all.
Additionally, the analysis underscores the importance of documenting and utilizing local knowledge systems in model building.
By incorporating local knowledge, AI models can be more effective in addressing local and regional issues. The accumulation of local knowledge can contribute to the development of robust and contextually sensitive AI solutions.
Overall, the analysis highlights the complex and multi-faceted impact of AI on society.
While there are concerns about the creation of inequalities and job displacement, there are also opportunities for AI to be inclusive, region-specific, and leverage local knowledge. By considering these various perspectives and incorporating diverse viewpoints, it is possible to shape the development and implementation of AI technologies in a way that benefits all members of society.
Report
The discussion surrounding Artificial Intelligence (AI) has shifted towards responsible technology development rather than advocating for an outright ban or extensive government intervention. OpenAI, an AI research organisation, argues for closed development to prevent potential misuse and abuse of AI technology.
On the other hand, Meta, formerly known as Facebook, supports an open approach to developing generative AI.
Maintaining openness in AI research is considered crucial for advancing the field, despite concerns about potential abuse. AI research has historically been open, leading to significant advancements.
Closing off research could create power asymmetries and solidify the current power positions in the AI industry.
Another important aspect of the AI discourse is adopting a rights-based approach towards AI. This includes prioritising principles such as safety, effectiveness, notice and explainability, and considering human alternatives.
The Office of Science and Technology Policy (OSTP) has taken a multi-stakeholder approach to developing a Blueprint for an AI Bill of Rights that emphasises these aspects.
In the United States, while there is a self-regulatory and co-regulatory approach to AI governance at the federal level, states and cities have taken a proactive stance.
Currently, around 200 bills are being discussed at the state level, and several cities have enacted legislation regarding AI.
Engaging with young people is crucial in addressing AI-related issues. Young people often provide informed solutions and in many countries, they represent the majority of the population.
Their deep understanding of AI highlights the need to listen to their preferences and incorporate their solutions. It is believed that engaging with young people can lead to more legitimate and acceptable use of AI. Additionally, innovative methods of engagement aligned with their preferred platforms need to be developed.
The importance of data quality cannot be overlooked when discussing AI, particularly in the context of generative AI.
The principle of “garbage in, garbage out” becomes crucial, as the quality of the output is only as good as the quality of the input data. Attention should be focused not only on the AI models themselves but also on creating high-quality data to feed into these models.
Furthermore, open data, open science, and quality statistics have become more important than ever for qualitative generative AI.
Prioritising these aspects contributes to the overall improvement and reliability of AI systems.
Overall, the discussion on AI emphasises responsible technology development rather than outright bans or government intervention. Maintaining openness in AI research is seen as crucial for the advancement of the field, although caution must be exercised to address potential risks and abuses.
A rights-based approach, proactive governance at the local level, meaningful engagement with young people, and attention to data quality are all key considerations in the development and deployment of AI technology.