Harnessing AI for Child Protection | IGF 2023
Event report
Speakers and Moderators
Speakers:
- Jutta Croll, Civil Society, Western European and Others Group (WEOG)
- Ghimire Gopal Krishna, Civil Society, Asia-Pacific Group
- Sarim Aziz, Private Sector, Asia-Pacific Group
- Michael Ilishebo, Government, African Group
Moderators:
- Babu Ram Aryal, Civil Society, Asia-Pacific Group
Disclaimer: This is not an official record of the IGF session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed. The official record of the session can be found on the IGF's official website.
Session report
Audience
During the discussion, multiple speakers expressed concerns about the need to protect children from bullying on social media platforms such as those operated by Meta. They raised questions about Meta’s efforts in content moderation for child protection across various languages and countries, casting doubt on the effectiveness of its strategies and policies.
The discussion also focused on the importance of social media companies enhancing their user registration systems to prevent misuse. It was argued that stricter authentication systems are necessary to prevent false identities and misuse of social media platforms. Personal incidents were shared to support this stance.
Additionally, the potential of artificial intelligence (AI) in identifying local languages on social media was discussed. It was seen as a positive step in preventing misuse and promoting responsible use of these platforms.
Responsibility and accountability of social media platforms were emphasized, with participants arguing that they should be held accountable for preventing misuse and ensuring user safety.
The discussion also highlighted the adverse effects of social media on young people’s mental health. The peer pressure faced on social media can lead to anxiety, depression, body image concerns, eating disorders, and self-harm. Social media companies were urged to take proactive measures to tackle online exploitation and address the negative impact on mental health.
Lastly, concerns were raised about phishing on Facebook, noting cases where young users are tricked into revealing their contact details and passwords. Urgent action was called for to protect user data and prevent phishing attacks.
In conclusion, the discussion underscored the urgent need for social media platforms to prioritize user safety, particularly for children. Efforts in content moderation, user registration systems, authentication systems, language detection, accountability, and mental health support were identified as crucial. It is clear that significant challenges remain in creating a safer and more responsible social media environment.
Babu Ram Aryal
The analysis covers a range of topics, starting with Artificial Intelligence (AI) and its impact on different fields. It acknowledges that AI offers numerous opportunities in areas such as education and law. However, there is also a concern that AI is taking over human intelligence in various domains. This raises questions about the extent to which AI should be relied upon and whether it poses a threat to human expertise and jobs.
Another topic explored is the access that children have to technology and the internet. On one hand, it is recognised that children are growing up in a new digital era where they utilise the internet to create their own world. The analysis highlights the example of Babu’s own children, who are passionate about technology and eager to use the internet. This suggests that technology can encourage creativity and learning among young minds.
On the other hand, there are legitimate concerns about the safety of children online. The argument put forward is that allowing children unrestricted access to technology and the internet brings about potential risks. The analysis does not delve into specific risks, but it does acknowledge the existence of concerns and suggests that caution should be exercised.
An academic perspective is also presented, which recognises the potential benefits of AI for children, as well as the associated risks. This viewpoint emphasises that permitting children to engage with platforms featuring AI can provide opportunities for growth and learning. However, it also acknowledges the existence of risks inherent in such interactions.
The conversation extends to the realm of cybercrime and the importance of expertise in digital forensic analysis. The analysis highlights that Babu is keen to learn from Michael’s experiences and practices relating to cybercrime. This indicates that there is a recognition of the significance of specialised knowledge and skills in addressing and preventing cybercrime.
Furthermore, the analysis raises the issue of child rights and the need for better control measures on social media platforms. It presents examples where individuals have disguised themselves as children in order to exploit others. This calls for improved registration and content control systems on social media platforms to protect children’s rights and prevent similar occurrences in the future.
In conclusion, the analysis reflects a diverse range of perspectives on various topics. It recognises the potential opportunities provided by AI in various fields, but also points out concerns related to the dominance of AI over human intelligence. It acknowledges the positive aspects of children having access to technology, but also raises valid concerns about safety. Additionally, the importance of expertise in combating cybercrime and the need for better control measures to protect child rights on social media platforms are highlighted. Overall, the analysis showcases the complexity and multifaceted nature of these issues.
Sarim Aziz
Child safety issues are a global challenge that requires a global, multi-stakeholder approach. This means that various stakeholders from different sectors, such as governments, non-governmental organizations, and tech companies, need to come together to address this issue collectively. The importance of this approach is underlined by the fact that child safety is not limited to any particular region or country but affects children worldwide.
One of the key aspects of addressing child safety issues is the use of technology, particularly artificial intelligence (AI). AI has proven to be a valuable tool in preventing, detecting, and responding to child safety issues. For example, AI can disrupt suspicious behaviors and patterns that may indicate child exploitation. Technology companies, such as Microsoft and Meta, have developed AI-based solutions to detect and combat child sexual abuse material (CSAM). Microsoft’s PhotoDNA technology, along with Meta’s open-sourced PDQ and TMK technologies, are notable examples. These technologies have been effective in detecting CSAM and have played a significant role in safeguarding children online.
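Hash-matching technologies of this kind work by computing a compact perceptual fingerprint of an image and comparing it against a database of hashes of known abusive material, using a bit-distance threshold so that slightly altered copies still match. The sketch below illustrates only the matching step; the hashes and the threshold are hypothetical placeholders, not values taken from PhotoDNA or PDQ:

```python
# Illustrative sketch of perceptual-hash matching. The hex hashes and the
# distance threshold are hypothetical placeholders; production systems such
# as PDQ use tuned thresholds and curated, access-controlled hash databases.

def hamming_distance(hash_a: str, hash_b: str) -> int:
    """Count the differing bits between two equal-length hex-encoded hashes."""
    return bin(int(hash_a, 16) ^ int(hash_b, 16)).count("1")

def matches_known_hash(candidate: str, known_hashes: set, threshold: int = 31) -> bool:
    """Flag the candidate if it lies within `threshold` bits of any known hash."""
    return any(hamming_distance(candidate, known) <= threshold for known in known_hashes)

# Placeholder 256-bit hashes: a near-duplicate differs in only a few bits,
# so it still matches; an unrelated image is far away in Hamming distance.
known_db = {"ffeedd00" * 8}
near_copy = "ffeedd03" + "ffeedd00" * 7   # 2 bits away from the known hash
unrelated = "00112233" * 8
```

On a platform, a positive match would typically be escalated to human review and, where legally required, reported to a body such as NCMEC rather than acted on automatically.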
However, it is important to note that technology alone cannot solve child safety issues. Law enforcement and safety organizations are vital components in the response to child safety issues. Their expertise and collaboration with technology companies, such as Meta, are crucial in building case systems, investigating reports, and taking necessary actions to combat child exploitation. Meta, for instance, collaborates with the National Center for Missing & Exploited Children (NCMEC) and assists them in their efforts to protect children.
Age verification is another important aspect of child safety online. Technology companies are testing age verification tools, such as those Meta is trialling on Instagram, to prevent minors from accessing inappropriate content. These tools aim to verify users’ ages and restrict access to age-inappropriate content. However, the challenge lies in standardizing age verification measures across jurisdictions, as different countries set different minimum ages for minors using social media platforms.
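In practice, this jurisdictional fragmentation means a platform must map each user’s country to a locally applicable minimum age before applying access rules. A minimal sketch, in which the country-to-age table, the default of 13, and the plausibility cap are illustrative assumptions rather than any platform’s actual policy:

```python
# Hypothetical jurisdiction-to-minimum-age table; real limits vary by law
# (and by service) and change over time, so these values are illustrative only.
MINIMUM_AGE_BY_COUNTRY = {
    "US": 13,  # commonly tied to COPPA
    "DE": 16,
    "ZM": 13,
}
DEFAULT_MINIMUM_AGE = 13  # assumed fallback when no local rule is known

def meets_minimum_age(declared_age: int, country_code: str) -> bool:
    """Check a self-declared age against the jurisdiction's minimum.

    Declared ages alone cannot be trusted (users may claim to be 150),
    so real systems combine a check like this with verification signals
    and human review.
    """
    if not 0 < declared_age < 120:  # reject implausible declared ages
        return False
    required = MINIMUM_AGE_BY_COUNTRY.get(country_code, DEFAULT_MINIMUM_AGE)
    return declared_age >= required
```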
Platforms, like Meta, have taken proactive steps to prioritize safety by design. They have implemented changes to default settings to safeguard youth accounts, cooperate with law enforcement bodies when necessary, and enforce policies against bullying and harassment. AI tools and human reviewers are employed to moderate and evaluate content, ensuring that harmful and inappropriate content is removed from the platforms.
Collaboration with safety partners and law enforcement is crucial in strengthening child protection responses. Platforms like Meta work closely with safety partners worldwide and have established safety advisory groups composed of experts from around the world. Integration of AI tools with law enforcement can lead to rapid responses against child abuse material and other safety concerns.
It is important to note that while AI can assist in age verification and protecting minors from inappropriate content, it is not a perfect solution. Human intervention and investigation are still needed to ensure the accuracy and effectiveness of age verification measures.
Overall, the expanded summary highlights the need for a global, multi-stakeholder approach to address child safety issues, with a focus on the use of technology, collaboration with law enforcement and safety organizations, age verification measures and prioritizing safety by design. It also acknowledges the limitations of technology and the importance of human interventions in ensuring child safety.
Michael Ilishebo
Content moderation online for children presents a significant challenge, particularly in Zambia, where children are exposed to adult content due to the lack of proper controls or filters. Despite advancements in Artificial Intelligence (AI), these systems have not been successful in effectively addressing such issues, especially in accurately identifying the age or gender of users.
However, there is growing momentum in discussions around child online protection and data privacy. In Zambia, this has resulted in the enactment of the Cybersecurity and Cybercrimes Act of 2021. This legislation aims to address cyberbullying and other forms of online abuse, providing some legal measures to protect children.
Nevertheless, numerous cases of child abuse on online platforms remain unreported. The response from platform providers varies, with Facebook and Instagram being more responsive compared to newer platforms like TikTok. This highlights the need for consistent and effective response mechanisms across all platforms.
On a positive note, local providers in Zambia demonstrate effective compliance in bringing down inappropriate content. They adhere to guidelines that set age limits for certain types of content, making it easier to remove content that is not suitable for children.
Age-gating on platforms is another area of concern, as many children can easily circumvent the verification systems in place. Reports of children declaring their age as 150 years, or of profiles not accurately reflecting a user’s age, raise questions about the effectiveness of age verification mechanisms.
Meta, a platform provider, deserves commendation for its response to issues related to child exploitation. It prioritizes addressing these issues and provides requested information promptly, which is crucial for investigations and for protecting children.
The classification of inappropriate content poses a significant challenge, especially considering cultural differences and diverse definitions. What might be normal or acceptable in one country can be completely inappropriate in another. For example, an image of a child holding a gun might be considered normal in the United States but unheard of in Zambia or Africa. Therefore, the classification of inappropriate content needs to be sensitive to cultural contexts.
In response to the challenges of online child protection, Zambia has introduced two significant pieces of legislation: the Cybersecurity and Cybercrimes Act and the Data Protection Act. These measures address cybersecurity and data protection, both essential for safeguarding children online.
To ensure child internet safety, a combination of manual and technological parental oversight is crucial. Installing family-friendly accounts and using filtering technology can help monitor and control what children view online. However, it is important to note that children can still find ways to outsmart these controls or be influenced by third parties to visit harmful sites.
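Filtering technology of the kind mentioned above often reduces to checking each requested domain against a category blocklist before allowing the connection. A minimal sketch, with placeholder blocked domains (real products use large, categorised, continuously updated lists, and, as noted, children may still find ways around them):

```python
from urllib.parse import urlparse

# Placeholder blocklist; commercial parental-control filters maintain large,
# categorised domain lists and update them continuously.
BLOCKED_DOMAINS = {"adult-example.test", "gambling-example.test"}

def is_allowed(url: str) -> bool:
    """Allow a URL unless its host, or any parent domain, is blocklisted."""
    host = urlparse(url).hostname or ""
    parts = host.split(".")
    # Check the full host and every parent domain so subdomains are caught too.
    return not any(".".join(parts[i:]) in BLOCKED_DOMAINS for i in range(len(parts)))
```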
In conclusion, protecting children online requires a multifaceted approach. Legislative measures, such as the ones implemented in Zambia, combined with the use of protective technologies and active parental oversight, are essential. Additionally, close collaboration between the private sector, governments, the public sector, and technology companies is crucial in addressing the challenges of policing cyberspace. While AI plays a role, it is important to recognize that relying solely on AI is insufficient. The human factor and close collaboration remain indispensable in effectively protecting children online and addressing the complex issues associated with content moderation and classification.
Jutta Croll
The discussions revolve around protecting children in the digital environment, specifically addressing issues like online child abuse and inappropriate communication. The general sentiment is positive towards using artificial intelligence (AI) to improve the digital environment for children and detect risks. It is argued that AI tools can identify instances of child sexual abuse online, although they struggle with unclassified cases. Additionally, online platform providers could use AI to detect abnormal patterns of communication indicating grooming. However, there is concern that relying solely on technology for detection is insufficient. The responsibility for detection should not rest solely on technology, evoking a negative sentiment.
There is a debate about the role of regulators and policymakers in addressing these issues. Some argue that regulators and policymakers should not tackle these issues, asserting that the responsibility falls on platform providers, who have the resources and knowledge to implement AI-based solutions effectively. This stance is received with a neutral sentiment.
The right to privacy and protection of children in the digital era presents challenges for parents. The UNCRC emphasizes children’s right to privacy, but also stresses the need to strike a balance between digital privacy and parental protection obligations. Monitoring digital content is seen as intrusive and infringing on privacy, while not monitoring absolves platforms of accountability. This viewpoint is given a negative sentiment.
Age verification is seen as essential in addressing inappropriate communication and content concerns. A lack of age verification makes it difficult to protect children from inappropriate content and advertisers. The sentiment towards age verification is positive.
Dialogue between platform providers and regulators is considered crucial for finding constructive solutions in child protection. Such dialogue helps identify future-proof solutions. This argument receives a positive sentiment.
Newer legislation should focus more on addressing child sexual abuse in the online environment and is seen as more effective at doing so. For instance, Germany amended its Youth Protection Act to specifically address the digital environment. The sentiment towards this is positive.
The age of consent principle is under pressure in the digital environment as discerning consensual from non-consensual content becomes challenging. The sentiment towards this argument is neutral. There are differing stances on self-generated sexualized imagery shared among young people. Some argue that it should not be criminalized, while others maintain a neutral position, questioning whether AI can determine consensual sharing of images. The sentiment towards the stance that self-generated sexualized imagery should not be criminalized is positive.
Overall, the discussions emphasize the importance of child protection and making decisions that prioritize the best interests of the child. AI can play a role in child protection, but human intervention is still considered necessary. It is concluded that all decisions, including policy making, actions of platform providers, and technological innovations, should consider the best interests of the child.
Ghimire Gopal Krishna
Nepal has a robust legal and constitutional framework in place that specifically addresses the protection of child rights. Article 39 of Nepal’s constitution explicitly outlines the rights of every child, including the right to name, education, health, proper care, and protection from issues such as child labour, child marriage, kidnapping, abuse, and torture. The constitution also prohibits child engagement in any hazardous work or recruitment into the military or armed groups.
To further strengthen child protection, Nepal has implemented the Child Protection Act, which criminalises child abuse activities both online and offline. Courts in Nepal strictly enforce these laws and take a proactive stance against any form of child abuse. This indicates a positive commitment from the legal system to safeguarding children’s well-being and ensuring their safety.
In addition to legal provisions, Nepal has also developed online child safety guidelines. These guidelines provide recommendations and guidance to various stakeholders on actions that can be taken to protect children online. This highlights Nepal’s effort to address the challenges posed by the digital age and ensure the safety of children in online spaces.
However, ongoing debates and discussions surround the appropriate age for adulthood, voting rights, citizenship, and marriage in Nepal. These discussions aim to determine the age at which individuals should reach certain legal milestones. The age of consent, in particular, has been a subject of court cases and controversies, with several individuals facing legal consequences due to age-related consent issues. This reflects the complexity and importance of addressing these issues in a just and careful manner.
Notably, Ghimire Gopal Krishna, the president of the Nepal Bar Association, has shown his commitment to positive amendments related to child rights protection acts. He has signed the Child Right Protection Treaty, further demonstrating his dedication to upholding child rights. This highlights the involvement of key stakeholders in advocating for improved legal frameworks that protect the rights and well-being of children in Nepal.
Overall, Nepal’s legal and constitutional provisions for child protection are commendable, with specific provisions for education, health, and safeguarding children from various forms of abuse. The implementation of the Child Protection Act and online child safety guidelines further strengthens these protections. However, ongoing debates surrounding the appropriate age for various legal milestones highlight the need for careful consideration and resolution. The commitment of Ghimire Gopal Krishna to positive amendments underscores the importance of continuous efforts to improve child rights protection in Nepal.
Speakers
Speech speed
0 words per minute
Speech length
words
Speech time
0 secs
Report
The analysis covers a range of topics, starting with Artificial Intelligence (AI) and its impact on different fields. It acknowledges that AI offers numerous opportunities in areas such as education and law. However, there is also a concern that AI is taking over human intelligence in various domains.
This raises questions about the extent to which AI should be relied upon and whether it poses a threat to human expertise and jobs.
Another topic explored is the access that children have to technology and the internet.
On one hand, it is recognised that children are growing up in a new digital era where they utilise the internet to create their own world. The analysis highlights the example of Babu’s own children, who are passionate about technology and eager to use the internet.
This suggests that technology can encourage creativity and learning among young minds.
On the other hand, there are legitimate concerns about the safety of children online. The argument put forward is that allowing children unrestricted access to technology and the internet brings about potential risks.
The analysis does not delve into specific risks, but it does acknowledge the existence of concerns and suggests that caution should be exercised.
An academic perspective is also presented, which recognises the potential benefits of AI for children, as well as the associated risks.
This viewpoint emphasises that permitting children to engage with platforms featuring AI can provide opportunities for growth and learning. However, it also acknowledges the existence of risks inherent in such interactions.
The conversation extends to the realm of cybercrime and the importance of expertise in digital forensic analysis.
The analysis highlights that Babu is keen to learn from Michael’s experiences and practices relating to cybercrime. This indicates that there is a recognition of the significance of specialised knowledge and skills in addressing and preventing cybercrime.
Furthermore, the analysis raises the issue of child rights and the need for better control measures on social media platforms.
It presents examples where individuals have disguised themselves as children in order to exploit others. This calls for improved registration and content control systems on social media platforms to protect children’s rights and prevent similar occurrences in the future.
In conclusion, the analysis reflects a diverse range of perspectives on various topics.
It recognises the potential opportunities provided by AI in various fields, but also points out concerns related to the dominance of AI over human intelligence. It acknowledges the positive aspects of children having access to technology, but also raises valid concerns about safety.
Additionally, the importance of expertise in combating cybercrime and the need for better control measures to protect child rights on social media platforms are highlighted. Overall, the analysis showcases the complexity and multifaceted nature of these issues.
Speech speed
0 words per minute
Speech length
words
Speech time
0 secs
Report
Child safety issues are a global challenge that require a global, multi-stakeholder approach. This means that various stakeholders from different sectors, such as governments, non-governmental organizations, and tech companies, need to come together to address this issue collectively. The importance of this approach is emphasized by the fact that child safety is not limited to any particular region or country but affects children worldwide.
One of the key aspects of addressing child safety issues is the use of technology, particularly artificial intelligence (AI).
AI has proven to be a valuable tool in preventing, detecting, and responding to child safety issues. For example, AI can disrupt suspicious behaviors and patterns that may indicate child exploitation. Technology companies, such as Microsoft and Meta, have developed AI-based solutions to detect and combat child sexual abuse material (CSAM).
Microsoft’s PhotoDNA technology, along with Meta’s open-sourced PDQ and TMK technologies, are notable examples. These technologies have been effective in detecting CSAM and have played a significant role in safeguarding children online.
However, it is important to note that technology alone cannot solve child safety issues.
Law enforcement and safety organizations are vital components in the response to child safety issues. Their expertise and collaboration with technology companies, such as Meta, are crucial in building case systems, investigating reports, and taking necessary actions to combat child exploitation.
Meta, for instance, collaborates with the National Center for Missing and Exploited Children (NECMEC) and assists them in their efforts to protect children.
Age verification is another important aspect of child safety online. Technology companies are testing age verification tools, such as the ones being tested on Instagram by Meta, to prevent minors from accessing inappropriate content.
These tools aim to verify the age of users and restrict their access to age-inappropriate content. However, the challenge lies in standardizing age verification measures across different jurisdictions, as different countries have different age limits for minors using social media platforms.
Platforms, like Meta, have taken proactive steps to prioritize safety by design.
They have implemented changes to default settings to safeguard youth accounts, cooperate with law enforcement bodies when necessary, and enforce policies against bullying and harassment. AI tools and human reviewers are employed to moderate and evaluate content, ensuring that harmful and inappropriate content is removed from the platforms.
Collaboration with safety partners and law enforcement is crucial in strengthening child protection responses.
Platforms like Meta work closely with safety partners worldwide and have established safety advisory groups composed of experts from around the world. Integration of AI tools with law enforcement can lead to rapid responses against child abuse material and other safety concerns.
It is important to note that while AI can assist in age verification and protecting minors from inappropriate content, it is not a perfect solution.
Human intervention and investigation are still needed to ensure the accuracy and effectiveness of age verification measures.
Overall, the expanded summary highlights the need for a global, multi-stakeholder approach to address child safety issues, with a focus on the use of technology, collaboration with law enforcement and safety organizations, age verification measures and prioritizing safety by design.
It also acknowledges the limitations of technology and the importance of human interventions in ensuring child safety.
Speech speed
0 words per minute
Speech length
words
Speech time
0 secs
Report
During the discussion, multiple speakers expressed concerns about the need to protect children from bullying on social media platforms like Metta. They raised questions about Metta’s efforts in content moderation for child protection across various languages and countries, casting doubt on the effectiveness of its strategies and policies.
The discussion also focused on the importance of social media companies enhancing their user registration systems to prevent misuse.
It was argued that stricter authentication systems are necessary to prevent false identities and misuse of social media platforms. Personal incidents were shared to support this stance.
Additionally, the potential of artificial intelligence (AI) in identifying local languages on social media was discussed.
It was seen as a positive step in preventing misuse and promoting responsible use of these platforms.
Responsibility and accountability of social media platforms were emphasized, with participants arguing that they should be held accountable for preventing misuse and ensuring user safety.
The discussion also highlighted the adverse effects of social media on young people’s mental health.
The peer pressure faced on social media can lead to anxiety, depression, body image concerns, eating disorders, and self-harm. Social media companies were urged to take proactive measures to tackle online exploitation and address the negative impact on mental health.
Lastly, concerns were raised about phishing on Facebook, noting cases where young users are tricked into revealing their contact details and passwords.
Urgent action was called for to protect user data and prevent phishing attacks.
In conclusion, the discussion underscored the urgent need for social media platforms to prioritize user safety, particularly for children. Efforts in content moderation, user registration systems, authentication systems, language detection, accountability, and mental health support were identified as crucial.
It is clear that significant challenges remain in creating a safer and more responsible social media environment.
Speech speed
0 words per minute
Speech length
words
Speech time
0 secs
Report
Nepal has a robust legal and constitutional framework in place that specifically addresses the protection of child rights. Article 39 of Nepal’s constitution explicitly outlines the rights of every child, including the right to name, education, health, proper care, and protection from issues such as child labour, child marriage, kidnapping, abuse, and torture.
The constitution also prohibits child engagement in any hazardous work or recruitment into the military or armed groups.
To further strengthen child protection, Nepal has implemented the Child Protection Act, which criminalises child abuse activities both online and offline.
Courts in Nepal strictly enforce these laws and take a proactive stance against any form of child abuse. This indicates a positive commitment from the legal system to safeguarding children’s well-being and ensuring their safety.
In addition to legal provisions, Nepal has also developed online child safety guidelines.
These guidelines provide recommendations and guidance to various stakeholders on actions that can be taken to protect children online. This highlights Nepal’s effort to address the challenges posed by the digital age and ensure the safety of children in online spaces.
However, ongoing debates and discussions surround the appropriate age for adulthood, voting rights, citizenship, and marriage in Nepal.
These discussions aim to determine the age at which individuals should be granted certain legal landmarks. The age of consent, in particular, has been a subject of court cases and controversies, with several individuals facing legal consequences due to age-related consent issues.
This reflects the complexity and importance of addressing these issues in a just and careful manner.
Notably, Ghimire Gopal Krishna, the president of the Nepal Bar Association, has shown his commitment to positive amendments related to child rights protection acts.
He has signed the Child Right Protection Treaty, further demonstrating his dedication to upholding child rights. This highlights the involvement of key stakeholders in advocating for improved legal frameworks that protect the rights and well-being of children in Nepal.
Overall, Nepal’s legal and constitutional provisions for child protection are commendable, with specific provisions for education, health, and safeguarding children from various forms of abuse.
The implementation of the Child Protection Act and online child safety guidelines further strengthens these protections. However, ongoing debates and discussions surrounding the appropriate age for various legal landmarks highlight the need for careful consideration and resolution. The commitment of Ghimire Gopal Krishna to positive amendments underscores the importance of continuous efforts to improve child rights protection in Nepal.
Speech speed
0 words per minute
Speech length
words
Speech time
0 secs
Report
The discussions revolve around protecting children in the digital environment, specifically addressing issues like online child abuse and inappropriate communication. The general sentiment is positive towards using artificial intelligence (AI) to improve the digital environment for children and detect risks.
It is argued that AI tools can identify instances of child sexual abuse online, although they struggle with unclassified cases. Additionally, online platform providers could use AI to detect abnormal patterns of communication indicating grooming. However, there is concern that relying solely on technology for detection is insufficient.
The responsibility for detection should not rest solely on technology, evoking a negative sentiment.
There is a debate about the role of regulators and policymakers in addressing these issues. Some argue that regulators and policymakers should not tackle these issues, asserting that the responsibility falls on platform providers, who have the resources and knowledge to implement AI-based solutions effectively.
This stance is received with a neutral sentiment.
The right to privacy and protection of children in the digital era presents challenges for parents. The UNCRC emphasizes children’s right to privacy, but also stresses the need to strike a balance between digital privacy and parental protection obligations.
Monitoring digital content is seen as intrusive and an infringement of privacy, while a failure to monitor raises questions about platform accountability. This viewpoint is given a negative sentiment.
Age verification is seen as essential in addressing inappropriate communication and content concerns.
A lack of age verification makes it difficult to protect children from inappropriate content and advertisers. The sentiment towards age verification is positive.
Dialogue between platform providers and regulators is considered crucial for finding constructive solutions in child protection.
Such dialogue helps identify future-proof solutions. This argument receives a positive sentiment.
Newer legislation should focus more on addressing child sexual abuse in the online environment, and such legislation is seen as more effective in tackling these issues. For instance, Germany amended its Youth Protection Act to specifically address the digital environment.
The sentiment towards this is positive.
The age of consent principle is under pressure in the digital environment, as distinguishing consensual from non-consensual content becomes challenging. The sentiment towards this argument is neutral. There are differing stances on self-generated sexualized imagery shared among young people.
Some argue that it should not be criminalized, while others maintain a neutral position, questioning whether AI can determine consensual sharing of images. The sentiment towards the stance that self-generated sexualized imagery should not be criminalized is positive.
Overall, the discussions emphasize the importance of child protection and making decisions that prioritize the best interests of the child.
AI can play a role in child protection, but human intervention is still considered necessary. It is concluded that all decisions, including policy making, actions of platform providers, and technological innovations, should consider the best interests of the child.
Report
Content moderation online for children presents a significant challenge, particularly in Zambia where children are exposed to adult content due to the lack of proper control or filters. Despite the advancements in Artificial Intelligence (AI), it has not been successful in effectively addressing these issues, especially in accurately identifying the age or gender of users.
However, there is growing momentum in discussions around child online protection and data privacy.
In Zambia, this has resulted in the enactment of the Cybersecurity and Cybercrimes Act of 2021. This legislation aims to address cyberbullying and other forms of online abuse, providing some legal measures to protect children.
Nevertheless, numerous cases of child abuse on online platforms remain unreported.
The response from platform providers varies, with Facebook and Instagram being more responsive compared to newer platforms like TikTok. This highlights the need for consistent and effective response mechanisms across all platforms.
On a positive note, local providers in Zambia demonstrate effective compliance in bringing down inappropriate content.
They adhere to guidelines that set age limits for certain types of content, making it easier to remove content that is not suitable for children.
Age-gating on platforms is another area of concern, as many children can easily fool the verification systems put in place.
Reports of children setting their ages as 150 years or profiles not accurately reflecting their age raise questions about the effectiveness of age verification mechanisms.
Meta, a platform provider, deserves commendation for its response to issues related to child exploitation.
It prioritizes addressing these issues and provides requested information promptly, which is crucial for investigations and for protecting children.
The classification of inappropriate content poses a significant challenge, especially considering cultural differences and diverse definitions. What might be normal or acceptable in one country can be completely inappropriate in another.
For example, an image of a child holding a gun might be considered normal in the United States but unheard of in Zambia or Africa. Therefore, the classification of inappropriate content needs to be sensitive to cultural contexts.
In response to the challenges posed by online child protection, Zambia has introduced two significant pieces of legislation: the Cybersecurity and Cybercrimes Act and the Data Protection Act.
These legislative measures aim to address issues of cybersecurity and data protection, which are essential for safeguarding children online.
To ensure child internet safety, a combination of manual and technological parental oversight is crucial. Installing family-friendly accounts and using filtering technology can help monitor and control what children view online.
However, it is important to note that children can still find ways to outsmart these controls or be influenced by third parties to visit harmful sites.
In conclusion, protecting children online requires a multifaceted approach. Legislative measures, such as the ones implemented in Zambia, combined with the use of protective technologies and active parental oversight, are essential.
Additionally, close collaboration among the private sector, governments, the public sector, and technology companies is crucial in addressing policy challenges in cyberspace. While AI plays a role, it is important to recognize that relying solely on AI is insufficient. The human factor and close collaboration remain indispensable in effectively protecting children online and addressing the complex issues associated with content moderation and classification.