Launch of Fellowship for Refugees on Border Surveillance | IGF 2023
Event report
Speakers and Moderators
Speakers:
- Nery Santaella, Fellow with Migration and Technology Monitor, Venezuela
- Veronika Martinez, Fellow with Migration and Technology Monitor, Mexico
- Simon Drotti, Fellow with Migration and Technology Monitor, Uganda
- Wael Qarssifi, Fellow with Migration and Technology Monitor, Syria/Malaysia
- Rajendra Paudel, Fellow with Migration and Technology Monitor, Nepal
Moderators:
- Petra Molnar, Refugee Law Lab, York University and Berkman Klein Center for Internet and Society, Harvard University
- Florian Schmitz, Migration and Technology Monitor
Disclaimer: This is not an official record of the IGF session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed. The official record of the session can be found on the IGF's official website.
Session report
Audience
This comprehensive analysis covers a wide range of topics related to education, generative AI, risk management, information literacy, multi-stakeholder engagement, the actions of the European private sector in oppressive regimes, the impact of misinformation and disinformation, and the coexistence of privacy and safety in technology design.
One of the discussions revolves around educating people about generative AI and the need to mitigate its risks. The audience seeks advice on how to educate individuals about this technology, indicating recognition of its potential risks. However, the sentiment is neutral, suggesting a need for more information and guidance in this area.
Another argument highlights the importance of promoting critical thinking and curiosity among children in an age of disinformation and rapid technological change. The supporting facts include a quote from Jacinda Ardern, who emphasises the shift from relying on facts obtained from traditional library resources to the current digital age with its multifaceted sources. She urges individuals to seek knowledge about the process and origin of the information presented. This positive argument underscores the need to equip children with the necessary skills to navigate and critically evaluate information in the digital era.
The analysis also addresses the need for a multi-stakeholder approach to problem-solving and the challenges faced by civil society, particularly from the Global South, in effectively participating in solution-finding dialogues. These challenges include disparities in accessibility and effectiveness compared to governments and corporate organisations. This observation points towards the importance of inclusivity and equal representation in decision-making processes.
Another notable point relates to monitoring the actions of the European private sector, particularly within countries with oppressive regimes. The argument raises questions about how to effectively monitor the activities of companies operating in these contexts, such as China, Vietnam, and Myanmar. This highlights concerns about the impact of the private sector on human rights and the need for oversight and accountability.
The analysis also delves into the impact of misinformation and disinformation, noting that individuals who distrust institutions are more susceptible to these phenomena. This observation emphasises the importance of building trust in structures and institutions to combat the spread of false information.
Furthermore, the debate on designing technology that balances privacy and safety in the online world is also addressed. The argument suggests that current technology and design choices might limit the coexistence of privacy and safety, forcing the prioritisation of one over the other. This highlights the ongoing challenge of developing technology that can effectively address both concerns.
In conclusion, this analysis highlights the need to educate about generative AI, mitigate its risks, foster critical thinking and curiosity among children, ensure inclusivity in problem-solving dialogues, monitor the actions of the European private sector, build trust in institutions to combat misinformation, and address the challenge of designing technology that balances privacy and safety. These observations reflect the complexity and interdisciplinary nature of the issues discussed, as well as the importance of considering diverse perspectives to inform effective strategies and solutions.
Karoline Edtstadler
During the analysis, several key points were discussed regarding the views expressed by Karoline Edtstadler. Firstly, she emphasised the need for greater recognition and opportunities for ambitious women. Edtstadler observed that women who strive for success are often viewed negatively, being labelled as pushy or attempting to replace men. She believes that society should overcome this perception and provide more support and encouragement to women with ambitious goals.
Secondly, Edtstadler underscored the value of women’s unique perspectives in leadership roles. She argued that women perceive life from a distinct point of view – shaped by experiences such as childbirth and responsibility for nurturing and upbringing – and that these shared yet different life experiences, such as motherhood, contribute valuable insights and decision-making capabilities.
In terms of AI regulation, the European Union’s efforts were commended. The EU is taking the lead in regulating AI and prioritising the classification of risks associated with AI applications. This focus on risk evaluation aims to strike a balance between promoting beneficial AI technologies and addressing potential societal impacts.
Austria was recognised for its proactive approach to digital market regulation. Even before the implementation of the EU’s Digital Services Act (DSA) and the Digital Markets Act (DMA), Austria had already established the Communications Platform Act, effective from 1st January 2021. Under this act, social media platforms are obliged to promptly address online hate speech. Austria’s early actions demonstrate the country’s commitment to creating legal frameworks concerning digital services.
Collaboration and multi-stakeholder involvement were identified as crucial factors in addressing the challenges posed by AI, digital markets, and misinformation. Edtstadler advocated for a concerted effort involving governments, parliamentarians, civil society, and tech enterprises. She emphasised the importance of collective efforts and shared understanding in tackling these complex issues.
The analysis also highlighted the importance of education and awareness in effectively handling the impacts of social media and new technologies like AI. This includes equipping the public with knowledge and skills to navigate technology, particularly among the elderly. Additionally, it was emphasised that regulations should strike a balance between ensuring safety and privacy while still fostering innovation.
Restoring trust in institutions, governments, and democracy was identified as a crucial objective. Given the rise of misinformation and disinformation during events like the Covid-19 pandemic, Europe aims to counter these challenges through robust regulations. By addressing the issue of misinformation, trust can be rebuilt among citizens.
It was also noted that technology, including AI, should not replace human decision-making, particularly in matters like judgment in law enforcement. While AI can offer efficiency in finding judgments and organising knowledge, drawing a clear line between human judgment and AI is important.
Handling the downsides of technology was deemed necessary to ensure its benefits for society. Technologies like AI can be used for good, such as performing precise surgeries and speeding up tasks in law firms. However, challenges and risks should be addressed to make technology beneficial for all.
The analysis further underlined the importance of a multi-faceted approach in decision-making processes. Edtstadler highlighted Austria’s implementation of the Sustainable Development Goals (SDGs), wherein civil society was invited to contribute and share their actions in dialogue forums. This multi-stakeholder approach promotes inclusivity and diversity of perspectives in decision-making.
In conclusion, the analysis emphasised the need for recognition and empowerment of ambitious women, effective regulation of AI and digital markets, collaboration among stakeholders, education and awareness, addressing challenges in democracy and technology, and restoring trust in institutions and governments. These key points and insights offer valuable perspectives for policymakers and individuals seeking to promote a fair and inclusive society in the face of technological advancements.
Jacinda Ardern
The Christchurch Call to Action is a global initiative aimed at tackling extremist content online. It was established in response to a terrorist attack in New Zealand that was live-streamed on Facebook. Supported by over 150 member organizations, including governments, civil society organizations, and tech platforms, the Call sets out objectives such as creating a crisis response model and better understanding the process of radicalization.
Former New Zealand Prime Minister Jacinda Ardern believes that it is crucial to understand the role of content curation in driving radicalization. She highlights the case of the terrorist involved in the Christchurch attack, who acknowledged being radicalized by YouTube. Ardern calls for an improved understanding of how curated content can influence behavior online.
Ardern advocates for a multi-stakeholder solution to address the presence of extremist content online. She emphasizes the need for collaboration between governments, civil society, and tech platforms, recognizing that it requires a collective effort to effectively eliminate such content. The Call focuses not only on existing forms of online terror tools but also aims to adapt to future forms used by extremists. It proposes measures such as implementing a strong crisis response model and working towards a deeper understanding of radicalization pathways.
Privacy-enhancing tools play a crucial role in preventing radicalization. These tools enable researchers to access necessary data to understand the pathways towards radicalization. By studying successful off-ramps, these tools can contribute to preventing further instances of online radicalization.
One of the challenges in understanding the role of algorithms in radicalization is the issue of privacy and intellectual property. It is difficult to obtain insight into how algorithms may drive certain behaviors due to privacy concerns and proprietary rights. Despite these challenges, gaining a deeper understanding of how algorithms contribute to radicalization is essential.
Artificial intelligence (AI) presents both opportunities and risks in addressing online extremism. AI can assist in areas where there have been previous struggles, such as content moderation on social media. However, caution exists among the public due to potential harm and risks associated with AI. Ardern argues that guardrails need to be established before AI can cause harm, and the development of these guardrails should involve multiple stakeholders, including companies, governments, and civil society.
The involvement of civil society is crucial in discussions around AI in law enforcement to protect privacy and human rights. Ardern believes that civil society, alongside the government, can act as a pressure point in addressing questions regarding privacy and human rights in the context of AI deployment.
Education plays a vital role in addressing online extremism. Teaching critical thinking skills to children is essential to equip them with the ability to think critically and evaluate information. Adapting to rapid technological changes is also necessary, as the accessibility of information has significantly evolved from previous generations, leading to challenges such as disinformation and the need for digital literacy.
The inclusion of civil society and continuous improvement are important aspects of addressing challenges. The creation of a network that includes civil society may face practical obstacles, but ongoing efforts are being made to involve civil society in initiatives such as the Christchurch Call. Ardern acknowledges that learning and improvement are continuous processes, emphasizing the importance of making engagement meaningful and easy.
Overcoming the debate around privacy and safety on social media is a critical step in addressing extremist content online. Efforts to access previously private information through tools created by the Christchurch Call Initiative are underway, allowing researchers to study this information in real-time. The findings of the research will inform further action, involving social media companies in addressing the identified issues.
Disinformation is a significant challenge, and Ardern highlights factors that make individuals susceptible to it, such as distrust in institutions, disenfranchisement, lower socioeconomic status, and lower levels of education. Preventing individuals from falling for false information is crucial, and rebuilding trust in institutions is necessary to address the impact of disinformation.
Supporting regulators who focus on technological developments is crucial to managing the challenges these advancements present. Ardern acknowledges the poly-crisis resulting from these developments and emphasizes the need to support regulatory efforts.
Ardern expresses optimism in the ability of humans to adapt and design solutions for crises. She has witnessed humans successfully designing solutions and rapidly adapting to protect humanity, giving hope for addressing the challenges posed by technological developments.
Information integrity issues, such as the lack of a shared reality around climate change, undermine efforts to tackle serious problems. Ardern emphasizes the need to address these issues in order to effectively confront challenges like climate change.
In conclusion, the detailed analysis highlights the importance of the Christchurch Call to Action in addressing extremist content online. The Call emphasizes the need for a multi-stakeholder approach involving governments, civil society, and tech platforms. Privacy-enhancing tools and understanding the role of algorithms are crucial in preventing radicalization. Guardrails need to be established for AI before it can cause harm, with civil society involvement to protect privacy and human rights. Education plays a vital role in teaching critical thinking skills and adapting to technological changes. The involvement of civil society, continuous improvement, and overcoming the debate around privacy and safety on social media are essential steps in addressing extremist content. The management of disinformation, support for regulators, and human adaptability in designing solutions for crises are also key considerations.
Maria Ressa
The analysis of the given information reveals several important points made by the speakers. Firstly, it highlights the significant online harassment faced by women journalists, which hampers their ability to participate in public discourse. It is reported that women journalists covering misogynistic leaders often face considerable online harassment and are frequently told to ‘buckle up’ by their editors. This indicates a systemic problem that needs to be addressed.
The role of technology in facilitating hate speech and the dissemination of harmful content is also underscored. The Christchurch terrorist attack, for instance, was live-streamed, demonstrating the misuse of technology for spreading violent and harmful content. This highlights the need to address the role of technology in inciting hate and enabling the circulation of such harmful material.
Efforts to address these challenges require more than just asking news organisations to remove harmful content. The analysis suggests that a multi-stakeholder effort is necessary. Following the Christchurch attack, Jacinda Ardern led a successful multi-stakeholder initiative known as the Christchurch Call, which aimed to eliminate extremist content online. This approach emphasises the need for collaboration and coordination among various stakeholders to effectively combat online attacks and extremist content.
The analysis also highlights the importance of strong government action in addressing this issue. The New Zealand government, for instance, took robust measures to eliminate the influence of the Christchurch attacker by removing his name and the footage of the attack from the media. However, it is crucial that government action remains inclusive and does not suppress free speech.
Furthermore, the analysis points out that valuable lessons can be learned from the Christchurch approach in combating radicalisation. The approach was developed in response to a horrific domestic terror attack that was live-streamed on Facebook. It aims to understand how people become radicalised, with a focus on the role of curated content and algorithmic outcomes online.
The impact of social media behaviour modification systems and the current focus on content moderation is a source of concern. Data from the Philippines has been analysed, indicating that lies spread faster on social media than factual information. The analysis argues that current solutions, which mainly focus on content moderation, are not effective in addressing the problem. Instead, a shift towards addressing structural issues, such as platform design, is recommended.
Furthermore, the potential harms of generative AI should be prevented rather than merely reacted to. Concerns over the impact of generative AI are mentioned, and the need for proactive measures to address the harm caused by AI is emphasised.
The corruption of the information ecosystem is seen as a crucial problem, and the analysis suggests that civil society needs to collaborate more closely to address these challenges effectively.
The weaknesses of institutions in the Global South, as well as in countries experiencing democratic regression, contribute to the challenges. Authoritarian leaders are leveraging technology to retain and gain more power, which further exacerbates the issue.
Interestingly, the analysis highlights that even intelligent individuals can fall victim to misinformation and behaviour modification in information warfare or operations. This emphasises the need for education and awareness to combat these challenges effectively.
The integration of privacy and trust into tech design is seen as possible; however, it rarely happens without regulation and sustained pressure from civil society.
Lastly, the analysis suggests that we are in a pivotal moment for internet governance. Maria Ressa, one of the speakers, expresses a more pessimistic viewpoint on the situation, while others remain optimistic. The importance of effective internet governance is underscored, as it directly impacts various areas, including peace, justice, and strong institutions.
In conclusion, the analysis highlights the challenges faced by women journalists in public discourse, the negative impact of technology in facilitating hate speech and harmful content, the need for multi-stakeholder approaches, the importance of strong government action, and the lessons from the Christchurch approach. It also emphasises the concerns regarding social media behaviour modification systems and the current focus on content moderation. Structural issues in platform design, prevention of harm from generative AI, civil society collaboration, corruption of the information ecosystem, weaknesses of institutions, susceptibility to misinformation, and the incorporation of privacy and trust into tech design are other noteworthy points raised. Overall, the analysis underscores the significance of effective internet governance in addressing these complex issues.
These key points and insights offer valuable perspectives for policymakers and individuals seeking to promote a fair and inclusive society in the face of technological advancements.
Maria Ressa
Report
The analysis of the given information reveals several important points made by the speakers. Firstly, it highlights the significant online harassment faced by women journalists, which hampers their ability to participate in public discourse. Women journalists covering misogynistic leaders reportedly face considerable online harassment and are frequently told by their editors simply to ‘buckle up’.
This indicates a systemic problem that needs to be addressed.
The role of technology in facilitating hate speech and the dissemination of harmful content is also underscored. The Christchurch terrorist attack, for instance, was live-streamed, demonstrating the misuse of technology for spreading violent and harmful content.
This highlights the need to address the role of technology in inciting hate and enabling the circulation of such harmful material.
Efforts to address these challenges require more than just asking news organisations to remove harmful content. The analysis suggests that a multi-stakeholder effort is necessary.
Following the Christchurch attack, Jacinda Ardern led a successful multi-stakeholder initiative known as the Christchurch Call, which aimed to eliminate extremist content online. This approach emphasises the need for collaboration and coordination among various stakeholders to effectively combat online attacks and extremist content.
The analysis also highlights the importance of strong government action in addressing this issue.
The New Zealand government, for instance, took robust measures to eliminate the influence of the Christchurch attacker by removing his name and the footage of the attack from the media. However, it is crucial that government action remains inclusive and does not suppress free speech.
Furthermore, the analysis points out that valuable lessons can be learned from the Christchurch approach in combating radicalisation.
The approach was developed in response to a horrific domestic terror attack that was live-streamed on Facebook. It aims to understand how people become radicalised, with a focus on the role of curated content and algorithmic outcomes online.
The impact of social media behaviour modification systems and the current focus on content moderation is a source of concern.
Data from the Philippines has been analysed, indicating that lies spread faster on social media than factual information. The analysis argues that current solutions, which mainly focus on content moderation, are not effective in addressing the problem. Instead, a shift towards addressing structural issues, such as platform design, is recommended.
Furthermore, the potential harms of generative AI should be prevented rather than merely reacted to.
Concerns over the impact of generative AI are mentioned, and the need for proactive measures to address the harm caused by AI is emphasised.
Civil society collaboration and the corruption of the information ecosystem are seen as crucial problems.
The analysis suggests that civil society needs to come together more to address these challenges effectively.
The weaknesses of institutions in the Global South, as well as in countries experiencing democratic regression, contribute to these challenges. Authoritarian leaders are leveraging technology to retain and gain more power, which further exacerbates the issue.
Interestingly, the analysis highlights that even intelligent individuals can fall victim to misinformation and behaviour modification in information warfare or operations.
This emphasises the need for education and awareness to combat these challenges effectively.
The integration of privacy and trust into tech design is seen as possible; however, it often lacks regulation and pressure from civil society.
Lastly, the analysis suggests that we are in a pivotal moment for internet governance.
Maria Ressa, one of the speakers, expresses a more pessimistic viewpoint on the situation, while others remain optimistic. The importance of effective internet governance is underscored, as it directly impacts various areas, including peace, justice, and strong institutions.
In conclusion, the analysis highlights the challenges faced by women journalists in public discourse, the negative impact of technology in facilitating hate speech and harmful content, the need for multi-stakeholder approaches, the importance of strong government action, and the lessons from the Christchurch approach.
It also emphasises the concerns regarding social media behaviour modification systems and the current focus on content moderation. Structural issues in platform design, prevention of harm from generative AI, civil society collaboration, corruption of the information ecosystem, weaknesses of institutions, susceptibility to misinformation, and the incorporation of privacy and trust into tech design are other noteworthy points raised.
Overall, the analysis underscores the significance of effective internet governance in addressing these complex issues.