Trust in Tech: Navigating Emerging Technologies and Human Rights in a Connected World

28 May 2024 15:30h - 16:30h


Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.

Full session report

Navigating the Intersection of Emerging Technologies and Human Rights in a Connected World

The panel discussion titled “Trust in Tech: Navigating Emerging Technologies and Human Rights in a Connected World,” co-organised by the International Telecommunication Union (ITU), the Office of the United Nations High Commissioner for Human Rights (OHCHR), the International Organization for Standardization (ISO), and the Association for Progressive Communications (APC), delved into the intricate relationship between technological advancements and human rights.

The dialogue brought together a diverse group of experts who explored the potential of emerging technologies such as artificial intelligence (AI), the Internet of Things (IoT), and the metaverse to contribute positively to society, particularly in relation to the Sustainable Development Goals (SDGs). They discussed the importance of ensuring that these technologies do not exacerbate inequalities or infringe upon human rights.

A significant portion of the conversation centred on the integration of human rights into the lifecycle of technology development. The panelists recognised the transformative power of digital solutions like AI in areas such as climate action, hunger elimination, poverty eradication, and the accessibility of education and healthcare. They also acknowledged the challenges posed by AI, including discrimination, bias, disinformation, and hate speech.

The role of technical standards in ensuring the interoperability, security, and compatibility of technologies was highlighted as a critical element. Standards were identified as essential building blocks that must integrate human rights perspectives to strike a balance between innovation and ethical considerations. The panelists emphasised the need for a multi-stakeholder approach that includes governments, businesses, civil society, technical communities, and international organisations to effectively address the human rights-related impacts of new technologies.

Education on human rights was underscored as a fundamental component for the meaningful implementation and management of AI tools and services. The panelists called for more interdisciplinary work and experts to join the discussions to bridge the technical and human rights communities.

The discussion also touched on the need for regulation that is both flexible and adaptable, promoting technological innovation while protecting human rights. The panelists stressed the importance of creating clear laws and principles that align with each country’s reality, as well as the necessity of collaborative approaches to ensure diverse perspectives and practical, effective regulations.

Civil society organisations were recognised for their vital role in shaping technical standards to ensure they uphold human rights and serve the interests of diverse communities. The panelists encouraged civil society to help demystify technical jargon, create awareness about the importance of standard-setting, and bring specific issues and experiences into the standard-setting process.

The event concluded with a call for leadership, collaboration, and coordination among ITU, ISO, and OHCHR to ensure that views from a wide range of stakeholders are considered in the development of technical standards. This inclusive approach was deemed crucial, regardless of whether stakeholders are from developed or developing countries.

The panelists also discussed the existing human rights norms and frameworks as a solid foundation for guiding the development and use of technologies. They acknowledged the need for ongoing collaboration between human rights organisations, standard-setting bodies, and other stakeholders to ensure that technical standards align with human rights and promote trust among users.

In summary, the panel discussion provided a comprehensive examination of the intersection between emerging technologies and human rights, highlighting the need for a concerted effort to ensure that technological advancements are harnessed responsibly and inclusively for the betterment of society.

Session transcript

Moderator:
Hello everyone, thanks a lot for being here for this panel about navigating emerging technology and human rights in a connected world. It’s really an honor to welcome you, and the session is co-organized with ITU, with OHCHR, with APC, and with ISO also. Ambassador Kah is going to arrive soon; he will do the moderation, but we are starting now. First, to open the session, I’m delighted to introduce Ms. Doreen Bogdan-Martin, the Secretary-General of the ITU, who will provide the opening remarks. Secretary-General, please, the floor is yours.

Doreen Bogdan-Martin:
Thank you, thank you so much and good afternoon. I was worried I was going to be late. It’s great to be here. I hope you have found day two productive. I think we have seen and also heard so many great examples of digital technologies that are really demonstrating how we can make an impact when it comes to the SDGs. One number, I think, captures it like no other, and that is 70 percent: the percentage of the Sustainable Development Goal targets that can actually directly benefit from digital technologies, including artificial intelligence. Digital solutions like AI can accelerate progress in climate action, eliminating hunger, eradicating poverty and making education and healthcare accessible to all. The opportunities to strengthen human rights are enormous, but so are the challenges. The AI-related challenges threatening human rights are many, from discrimination and bias to disinformation and hate speech. But governance efforts emerging at the national, regional and international levels, I think, really give reasons for hope, such as the landmark AI resolution adopted by the General Assembly in March. And I’m sure you’ve all read it or followed the process, but I think what’s important in that resolution is that it does stress the importance, as Peggy can confirm, of human rights. And it also acknowledges how the UN system, consistent with our mandates, uniquely contributes to reaching global consensus on a safe, secure, trustworthy AI future. And of course, that would be in line with international law and also the UN Charter and the Universal Declaration of Human Rights. So tomorrow, as many of you know, we will hold our first ever governance day, our AI governance day. We’re quite excited about that governance day. We will be convening ministers, we’ll have experts, academics, policymakers, civil society,
and, of course, the technical community. And we’ll be looking at the latest AI governance efforts. What do they have in common? Where are the gaps? And how can we find the balance between guardrails and innovation while placing the UN core values, like human rights, at the center? Standards, of course, are a critical component. They’re critical in trying to find that right balance, as they are the building blocks for the design, the development, and deployment of emerging technologies. The human rights perspective must be incorporated into the standards-making process. We have been fortunate to be working very closely with ISO, with IEC, and, of course, with the High Commissioner for Human Rights. And we need to make sure that they are core to the process in ways that help to make it more inclusive, transparent, and also aligned with the vision set out in our Common Agenda and the Sustainable Development Goals. Though our work at the ITU is a bit more technical, I think, in scope, inclusion really is at the heart of everything that we do. Much of our ongoing standardization work deals with emerging technologies, from artificial intelligence, of course, to IoT, to quantum, and also things like the metaverse, which is a big focus of our work in the Standardization Bureau. But, of course, there’s much more work to be done to build bridges between the technical community and the human rights communities. And welcome, Ambassador. I thought I was going to be late; I’m glad you were late. And to this end, we are, as I said, working very closely with OHCHR and ISO-IEC, and this is all part of our World Standards Cooperation. I especially hope that we will see more interdisciplinary work and that we will have more interdisciplinary experts joining these discussions. I think that’s really critical.
We hope they join us also, and I’ll make a plug for our upcoming World Telecommunication Standardization Assembly that will take place in November in India. I hope you will all join us for that effort. And with that, ladies and gentlemen: if we are to achieve the SDGs, and we know that they’re off track, we really do need a whole-of-society, whole-of-community approach. We really need all hands on deck. Digital cooperation is key to mitigating potential human rights-related impacts of new and emerging technologies, and the upcoming Summit of the Future and its Global Digital Compact, as well, of course, as the WSIS Plus 20 Review next year, are important milestones. They’re important milestones to ensure that everyone, everywhere, can benefit from the potential of emerging technologies. I look forward to hearing your ideas and to working with all of you more closely to ensure that we put the UN values and human rights at the core, and at the heart, of a more inclusive, innovative, sustainable digital future for all. And with that, thank you again, and welcome Ambassador Kah.

Muhammadou M.O. Kah:
Good afternoon. It’s my distinct honor to welcome you to this high-level dialogue on Trust in Tech: Navigating Emerging Technologies and Human Rights in a Connected World. This event is co-organized by the International Telecommunication Union, the Office of the United Nations High Commissioner for Human Rights, the International Organization for Standardization, and the Association for Progressive Communications. Today we are here to explore the intersection of technology and human rights, and to discuss how we can foster trust and inclusivity in our rapidly evolving digital technologies. We’ve just listened to the Secretary-General’s insightful remarks, and we wish to take the opportunity to appreciate the brilliant leadership of our Secretary-General, Ms. Bogdan-Martin, and her very insightful remarks on a very important topic, as digitalization and AI evolve in communities all over the world. As we gather today, it is crucial to acknowledge the profound impact that emerging technologies have on our world. They offer tremendous opportunities for innovation and progress, yet they also pose significant challenges, particularly concerning human rights, trust among users, and human dignity. This event aims to address these pressing issues. The role of technical standards cannot be overstated. They serve as the backbone of interoperability, security, and compatibility in the digital realm, ensuring that diverse technologies can work together seamlessly. As we advance into the era of digitalization, AI and other groundbreaking innovations, it is imperative that these standards are developed with a strong foundation in human rights. By translating human rights principles into technical terms, we can create frameworks that not only foster innovation but also protect the rights and dignity of individuals worldwide.
Now, I know we have very distinguished panelists here. It is my honor to introduce our distinguished panelists, who will provide deeper insights into these very important issues. I have on my left here Ms. Peggy Hicks, the Director of Thematic Engagement at the Office of the High Commissioner for Human Rights. I have Mr. Sergio Mujica, the Secretary-General of ISO. I have Ms. Mercedes Aramendia Falco, the President of the Board of the Unidad Reguladora de Servicios de Comunicaciones (URSEC) of Uruguay. I have Ms. Anriette Esterhuysen, the Senior Advisor on Global and Regional Internet Governance at the Association for Progressive Communications, and Mr. Olivia Elias, the Program Coordinator for Human Rights and Technology at the ITU. Each of these distinguished experts brings a wealth of knowledge and experience to our discussion. Together, they will help us explore how we can embed human rights into technical standards, build trust in technology, and foster a more inclusive and sustainable digital future. Let us now proceed to our first round of questions, starting with Ms. Peggy Hicks. Ms. Hicks, from a human rights perspective, what are the key challenges posed by the rapid advancement of emerging technologies, and how can these challenges be effectively addressed through multi-stakeholder collaboration?

Peggy Hicks:
Thank you so much, Ambassador Kah, and to the Secretary-General of the ITU, Doreen. It’s great to have this close partnership next door to allow us to engage on these issues, and thanks to our fellow hosts and partners. It’s really, I think, for me, so encouraging to see how much human rights has been integrated into the conversations here at WSIS and as we go forward to AI for Good. I think things are evolving. I see that. I’ve been complaining to everybody, because we kept wanting human rights to be part of the conversation, and now it is, and my staff and myself are so overstretched that we can’t be in all the rooms where people want to talk about human rights. But I think that’s overall a good sign, and I look for more of that to happen in the future. But going to your important question about what are the key challenges from a human rights perspective, I’m going to start with the big picture, and I’m sure my colleagues will fill in some of the specifics around the technical standards-setting side. The first problem that we have is that often the human rights conversation is seen as a limiting conversation. It’s about putting in place guardrails. It’s about putting in place obstacles that may get in the way of innovation. We really want to move away from that framing if we can, and start to think about how human rights can actually help us achieve better results using digital technology. We’re firmly convinced that that is actually the case: that the problems that we’re raising, the issues that we’re looking at from a human rights perspective, are ones that, if we bring them in early, will allow us to achieve more in terms of development, more in terms of people’s rights going forward. The next challenge I want to move to then is one that really frames that issue quite clearly, and that is the need to close digital divides and ensure inclusion in a broader sense. We think about this in two ways: first, we need the benefits of digital technology and AI to reach everyone.
And that unfortunately is not the world that we currently live in. And secondly, we need to set those guardrails that I talked about to ensure that the harms are also mitigated for everyone as well. We can’t have an environment where it’s safe to do something in one place because you have a nice AI safety institute there, while someplace else doesn’t have the resources to invest in those frameworks and isn’t able to do it. So, in solving that challenge of closing those digital divides, we need to recognize first that one size won’t fit all. We will need to adapt the way that we approach the challenges in the digital world based on the different contexts and environments we work in, some of which have well-developed systems that can supplement and support this, others that do not. And that means that we’re going to need resources and support to make sure that we build on the learning that we have, but we don’t just try to transfer things wholesale from one place to another. We need greater human rights integration that will allow us to achieve results in development and better secure economic and social rights. So this isn’t just about the privacy, discrimination and inclusion issues; it is also about how we actually achieve better results for health and for education, across issues like digital public infrastructure and smart cities, which was a conversation happening in this room before we came in. It’s those concerns that have led us to support the proposal within the Global Digital Compact for a Digital Human Rights Advisory Service, the idea of which is that we want to be able to provide support and engagement, leveraging the expertise that’s already out there. There are all sorts of academics and others who can help us solve some of these challenges, and advise and support both governments, at the national and other levels, and businesses in facing these challenges.
The third of the five challenges I wanted to mention (I hope to hold to my five-minute timeframe) is one I think Doreen has already hit on, which is the need to break down some of the silos that exist. There are too many conversations that are technical conversations where it’s really hard for the human rights voice to be heard. That is something that actually is a problem on both sides. We recognize that, as much as I can say to all of you that I think the human rights framework offers a lot to these conversations, it’s not always so easy to pull out how it can offer those things. It’s got broad principles with all sorts of applications through our special procedures, our human rights experts, our treaty bodies, but that’s a vast array of information that’s not necessarily accessible to the standard-setting body or the regulator who wants to use it. So we have a challenge on our side to make that more accessible and more practical, so it can be applied in these conversations more easily. But on the other side, of course, we also need an open door and an interest in bringing that material in. So we need to break down those barriers, and I would just support what Doreen said about interdisciplinary experts. The more people we can find who actually have expertise in all of these areas, the better. The fourth piece I wanted to emphasize is the importance of inclusion, participation and a multi-stakeholder approach. This is essential to get the results we want. It’s not just a principled approach, but a practical approach. Including and listening to those who will be affected by digital technologies will be crucial to keep the design and development of those technologies on track. We need broad inputs from a diverse set of perspectives. It happens all the time when technology is rolled out that there are things that would have been obvious to one community that were overlooked in the risk management done in advance.
So we need those perspectives to be at the table to make sure we get good results. And I have to emphasize that the global majority is often very much underrepresented in these conversations. So they’re being done in environments that can be very dominated by the developed world, by Western experts, and also often conducted in the English language in a way that leaves out the perspectives of others. So we need to really change that as well. The fifth challenge I’d mention is really looking at the role of business. This is an area from a human rights perspective where we’ve done some excellent work, I think, but where much more needs to be done. Businesses do have a responsibility to respect human rights, and governments have an obligation to ensure that businesses do that. I think you won’t be surprised to hear me say that I think both businesses in the tech sector and governments, in terms of the regulatory obligations, have a long way to go to make sure that we’re doing everything possible to ensure that companies are engaging in the way that we’d like to see in this space. So we have a project called B-Tech where we’re working with a community of practice of big companies. We want to have it not only be the big companies; we’d like to really bring in a broader range of actors within the sector to incentivize good practices. We need to make it so that the companies that do better in this actually see that that helps them to move forward, and thereby encourage others to do the same. And that was actually the final challenge, that’s five. I have one more point, which is to link it into the following conversation around technical standards setting. To me, it’s a really good example of where all these things come into play, and I’m sure we’ll hear that teased out in the comments that follow. It’s an example where we see the growing importance of technical standards setting; you look
at the EU AI Act as an example, but we know that we have a long way to go to ensure that human rights is integrated into the technical standard setting area. And I’m looking forward to hearing the comments of others about how better to do that. Thank you.

Muhammadou M.O. Kah:
Thank you, Peggy, for your very insightful intervention. Picking up from your last point on standards, this is for Mr. Sergio Mujica. Sergio, welcome. How can international standards bodies like ISO contribute to embedding human rights considerations into technical standards, particularly in the context of evolving technologies, picking up from where Peggy left off? Thank you.

Sergio Mujica:
Thank you, Ambassador, and good afternoon to all of you. ISO is the International Organization for Standardization. We have 170 members all around the world. And what we do is to create a global language to define technical specifications or technical requirements to ensure that a product or a process is fit for purpose. Human rights also represent a universal language, for how we fight against injustice, repression, and abuse of power. So the key question for us, standards makers, is: how can we help? Because this is so important. And I want to start by thanking Doreen and the ITU, because two years ago they organized a high-level meeting with the UN High Commissioner for Human Rights with exactly the same approach: how can we help? And if I want to put it in a nutshell, it’s, number one, through the portfolio of standards which already exists. And I think this is a message to everyone here: please do not reinvent the wheel. There are very relevant international standards that already exist that can support the human rights agenda. And just to give you a couple of examples, we have very well-known standards on anti-bribery, on social responsibility, accessibility, occupational health and safety, and so on. We also have a very important and well-known standard on sustainable events. Just to give you an example: in a few weeks in Paris, the Olympic Games will be certified as a sustainable event using an ISO standard. So that’s the first thing, using the existing portfolio. The second one is about the platform, because this is not only about the what. We, of course, have 25,000 international standards that can be used today, and when you know what you have, you can also identify the gaps. But what makes standards relevant here is the how: the process we follow, the set of core values we believe in. Because I could give you a new standard in two weeks, if you want; in my office here in Geneva, I bring in a couple of top experts and I give you a new standard.
It doesn’t work like that. What we believe in is consensus building. All our standards are developed by consensus; we do not impose anything, number one. Number two, we work in a very transparent manner, with inclusivity, and this is very important, actively providing a voice to developing countries. This cannot be a conversation by a club of rich countries. We need to really ensure that this is a universal conversation taking into account all the different needs and points of view. So in that process, we have rules to ensure stakeholder engagement. It’s not only about IT experts here; it’s about consumers, it’s about academia, it’s about governments, it’s about civil society. And second, we also have rules to ensure that human rights, and in particular the sustainability agenda, will be taken into account when developing the standard. So once again, it’s not only an expert conversation about the technical stuff; those considerations also need to be considered when developing the standard. In AI in particular, we have a dedicated technical committee to deal with AI. It’s something that we do together with our sister organization, the IEC. And in that technical committee, we have created already a set of relevant standards for AI. And probably the most important one is the management system standard on AI, where we can really support all kinds of organizations to implement international best practice in their daily activities. So in a nutshell, I think international standards can be really instrumental in ensuring that all the opportunities of AI can be captured, but also that the big risks we have can be properly addressed, with a real framework that will support developing, using and regulating artificial intelligence. Thank you.

Muhammadou M.O. Kah:
Thank you so much for touching on some very important issues. Standardization is very, very crucial as AI evolves. Now I will turn to Ms. Mercedes Aramendia Falco. Welcome. Now, as a regulator, how can governments and national agencies balance the promotion of technological innovation with the protection of human rights in the telecommunication sector? Thank you, Ambassador, for the question.

Mercedes Aramendia Falco:
Hello, everybody. Thank you very much for being here with us. I’m really happy to have this opportunity, and I think that this question and this matter are really relevant and important, and we really need to work on them. Firstly, I want to thank all of you, and of course ITU, Doreen, and the Ambassador, and I want to remark that I think that this topic, this matter of balance, is one of the most critical issues we have to work on. Why? Because on one hand, of course, this technology, all the platforms, applications, and all the possibilities that technology brings to us, try to address different needs, social, cultural, and economic needs that society has. But at the same time, of course, it’s important to try to find a balance in order to consider ethical matters and also human rights, such as non-discrimination, freedom of expression, access to information, disinformation, privacy, among others. Of course, like everything in our life, achieving balance is very, very difficult. We know that, I think. But between innovation and human rights it’s also not easy. We need perspectives and also expertise to help develop effective and comprehensive solutions. I’m going to mention some aspects that I believe we should consider at the time we develop technology and also at the time we regulate. Firstly, we have to look for the balance from the very design, from when we start a project. Promoting innovation while protecting human rights is, of course, delicate, and innovation drives economic growth. But without control, it can affect ethical matters and also human rights. So we must work on maintaining that balance, and review and adjust to ensure that technology serves humanity without overstepping ethical boundaries. Secondly, I think that it’s very important to have a framework and also principles.
It’s important to create clear laws and principles that protect privacy, data security, and freedom of expression, and address disinformation, among other rights and issues, while encouraging technological advancement. This should align with the reality of each country. For example, in Uruguay our constitution recognizes all rights inherent to the human personality, and we have also ratified the international human rights treaties and various human rights conventions. Additionally, we have specific laws to protect personal data and regulations for fighting online piracy, which seek a balance between freedom of expression and intellectual property rights, for example. The third point is a collaborative approach. Working with all stakeholders, including tech companies, civil society, academia, and international organizations, and of course the public sector and regulators, ensures diverse perspectives and practical, effective regulations. This approach helps find better solutions and creates stakeholder engagement, which facilitates the implementation of those solutions. Collaboration fosters a sense of shared responsibility and mutual benefit, enhancing the effectiveness and acceptance of regulation later. The fourth point is transparency and reasonable process. Ensuring transparency in how regulations are developed and enforced helps people, academia, and the whole ecosystem understand the motivation and the why, and also helps us build trust and legitimacy, later facilitating compliance with that regulation or law, and helping to create a culture of accountability. The fifth point is monitoring compliance and also education. Monitoring solutions and systems to ensure compliance is necessary. It’s also important to seek reasonable and proportional consequences for non-compliance. Nonetheless, the key is to educate and raise awareness about risks and challenges.
Education empowers individuals and organizations to navigate the digital landscape responsibly and ethically. Finally, it is very important to constantly modernize. We have to look at modernization. Regulation should be flexible and adaptable, and should promote innovation. Also, we have to consider the speed of the changes, regularly checking whether regulation is needed and whether it is up to date, and we have to work to attract and foster innovation. We also have to work to attract and retain talent, in the regulator and also in the ecosystem, because we really need people who understand technology and at the same time understand human rights and all the risks this involves. In this sense, it’s important to remark that it is necessary to have interdisciplinary groups, because we need lawyers, we need engineers, we need people able to understand both sides of a project; that will help us to make it real and, once the project is real, to check whether what we designed is correct and we really achieved the balance that we need. Of course, for that we need to create environments, like sandboxes, that help us to design, to implement, and also to check if everything is going well or if we need to make some change or adjustment in the design and also in the implementation, always looking to protect ethics and human rights. Of course, it’s really hard to try to know all the risks that we are going to have, so it’s so important to work in sandboxes and find the kinds of solutions that let us test before we take a solution into reality. Thank you.

Muhammadou M.O. Kah:
Thank you. Thank you, Mercedes, for your intervention and for making the point about the centrality of balancing the promotion of technological innovation with the protection of human rights in the telecom sector. And, more importantly, that regulation must be flexible and adaptable, and promote innovation. Thank you so much. Now, I turn to Ms. Anriette Esterhuysen. In what ways can civil society organizations actively participate in shaping technical standards to ensure they uphold human rights and serve diverse communities’ interests?

Anriette Esterhuysen:
Thank you, Ambassador, and thanks to the ITU and your partners for convening this. I think civil society has a fundamental role. It's not that human rights people don't understand technical standards; they might just understand them a bit differently from how technical people understand them. I think that what we can do is make standard-setting organizations aware of what it means to work with a human rights-based approach, and of the differences between consumer rights, also important, and human rights. They are very different, and it's important that regulators, manufacturers, standard-setting organizations, and service providers pay attention to both sets of rights. We can also help standard-setting organizations understand the difference between multi-stakeholder collaboration and public-private partnership. There's a long history of standards organizations working with the private sector. There's not such a long history of them working with civil society, and I hope that history is starting here and getting deeper and longer. I think we can also contribute on what Mercedes talked about, the need for principles and ethics. Our perspective would be that existing human rights norms and frameworks give us the basis we need for the principles and ethics that can inform standard-setting. And because we work so closely with those norms and principles, we can assist in those processes. Then I think we can really help demystify technical jargon and create awareness, in the spaces where we work in different parts of the world and with different groups, that standard-setting matters, that it is important. I think you've outlined exactly why it is so important, and people underestimate how important it is. We can also help create awareness of where those standards are being made. Transparency is very important, but the standard-setting organizations can't do that alone.
Civil society can help with that broader outreach and bring in a broader constituency. And then I think we can collaborate. I have to commend the Office of the High Commissioner for Human Rights on the report they did last year. They had a call for input on digital technical standards and human rights, and for us it was a challenge to contribute to that report. But it made us work. It made us think about what the connections are, and I think that's another contribution. As this link between standards and human rights evolves, between emerging technologies and human rights, we can play a role. We can also put specific issues on the table, such as encryption, surveillance, privacy, and digital inclusion, issues that matter to us and which are connected with standards. And then we can bring very specific experiences into the standard-setting process. I can mention two examples. My organization, the Association for Progressive Communications, and the Internet Society were involved in establishing a community network in northern Namibia. The routers worked for about a week, and then they started melting because the temperatures were just so high. Now, those routers were fully compliant with technical standards, but they did not work in that particular context, because they were in outdoor spaces and there was no air conditioning. They simply didn't work. Another example, which one of our members has been looking at, is the new instant messaging interoperability standard being developed at the IETF. It is very important to have interoperability between different messaging systems. But in the work that we've done on cyber harassment, cyber stalking, sexual harassment of women, and gender-based violence online, security is extremely important.
So we don't want the interoperability between messaging systems to come at the expense of the security that we feel people need to be able to protect themselves and not be exposed to unwanted messages. These are specific examples. I think we have played a very big role in human rights in the digital space, and I think we can do that in standards as well. But we also need to acknowledge that there's a lot of capacity development that we need to do ourselves as civil society. We have to be more deliberate and more active in building our own knowledge of the issues and the processes that are at stake. And we cannot do it alone. This is where it becomes very important for civil society to work with the technical community and with standard-setting organizations as well. Now, they also need to change, but I'll say more about that in the next round, if we have a next round.

Muhammadou M.O. Kah:
Thank you so much, Anriette. And thank you for making the point that standard-setting matters, that transparency is very important, and that awareness and the role of civil society are key and crucial to building trust in tech. Thank you so much. Now I move to Mr. Olivier Alais. Olivier, welcome. How can technical standards be leveraged to ensure that emerging technologies align with human rights and promote trust among users?

Olivier Alais:
Thank you, Mr. Ambassador. Thanks, everyone, for being here today. As you know, ITU is the United Nations specialized agency for ICT, and that includes the development of global technical standards. We can wonder why standards are important. They are important because they help technologies, as Sergio said, to work together. They also help interoperability, and they help with something important: market access. At ITU, our members include private companies, and standards help ensure that they comply with market rules and support international cooperation. And standards can play a role in human rights. They can support privacy, freedom of expression, access to information, data protection, and non-discrimination. That's why standardization, the core function of ITU, is gaining increasing attention from the human rights community. Why can ITU play a role here? It's a time-tested platform: we have been working on communication and standards for 160 years. It's a global collaboration platform, and that is why ITU is important. Our members are governments, industry leaders, and academic institutions, and we can work with all of them to build an inclusive approach to standards development. Also, at ITU-T, the sector where we do standardization, we have 11 different study groups. They are driven by experts, and that is also important: they are driven by people who know and understand the standards. We are working closely with them now to talk about human rights and how we could embed human rights into the standardization process. And, like ISO, we are based on consensus. It's something very important in the standardization world: we cannot impose anything; we need to find common ground, a consensus. Looking at human rights considerations, and as Peggy underlined, today governments have an obligation to protect human rights offline, but also online. That comes from a resolution from 2013.
It was adopted at the UN General Assembly. So we have to, we must, protect human rights also online. And the Human Rights Council in 2021 came with a resolution asking for better cooperation between OHCHR and standards development organizations like ITU, and to consider the relationship between technical standards and human rights. That's why today we are trying to work closely with OHCHR, but also with other stakeholders: with ISO, with IEC, with civil society organizations. And we have already started to work on human rights. I can give you two examples from the ITU study groups, where they are already working on human rights-related needs. Study Group 5, for example, is working on e-waste management, and a clean, healthy, and sustainable environment is a human right. The challenge we are all facing globally is that rapid digital growth has led to a surge in e-waste that is poisoning the environment. So this group is drafting standards for sustainable e-waste management and safe recycling. Another example is Study Group 16, which is working with WHO on telehealth accessibility, because, once again, access to health is a human right. The challenge they are trying to solve is that people with disabilities often struggle to access telehealth services, especially in the Global South. So Study Group 16 and WHO have developed standards to make telehealth platforms more accessible for everyone. To conclude, standards enable products and services to be safe, reliable, and high quality, and they enable technical advancement on a global scale. We would like to work more, and it is important for us, with OHCHR, because at ITU we are technicians, and we also need to learn and collaborate. Collaboration is key, also with ISO, with other SDOs, and with other stakeholders.
And finally, something I think is important: we encourage stakeholders to establish a clear link between technical concepts and human rights. Today we still need to translate human rights into technical terms and to embed human rights into technical standards. There is still a lot to do on that path. Thank you.

Muhammadou M.O. Kah:
Thank you, Olivier, for making that very last point about the importance of those linkages, as well as the central role of meaningful collaboration and partnerships among the actors and agents in the ecosystem, particularly with OHCHR. Thank you so much for making that point. Participants, I wanted to go for a second round, but unfortunately we do not have time, and I would have loved to get you involved. So now I'm opening it to you to take the opportunity to ask that burning question on your minds. The floor is yours. The mic is going around. Any burning questions? Yes, we have someone there at the back.

Peggy Hicks:
Ambassador, can we ask them a question? Oh, now the hands are going up. Yes. Is there anyone in the room who’s been part of a standard-setting process? How long did it take? Just a quick sense of just how many years. Two years. I think it’s an important point. If we want to be involved in standard setting, we have to be in for the long haul. I think those are short periods, by the way.

Muhammadou M.O. Kah:
Very important question, and I think the point is taken. You have the floor. Thank you.

Audience:
Jason Pielemeier from the Global Network Initiative. Thank you very much for the remarks from the panel. My question, for whichever of the panelists would like to take it, is how you see standards processes interfacing with emerging regulatory regimes for technology platforms and services. We have the Digital Services Act, the Online Safety Act in the UK, and various other regulations emerging in different jurisdictions, many of which refer to risk assessments, human rights due diligence, transparency, and research or access to data, common elements which in theory could be standardized across regulatory regimes. But despite some of the language in, for example, the DSA that calls for standard setting, we haven't seen much movement towards actual standard setting through those processes. So I'm curious how you think those regulatory regimes might take advantage of standards, or potentially how standards bodies might step into that role. Thank you.

Muhammadou M.O. Kah:
We’ll take another one while the panel think about the question. Please go ahead.

Audience:
Hello. Maurice Turner with TikTok. I have a follow-up on Anriette's question to the audience: should these processes be accelerated? We heard one and a half years, two years, and that we need to be in it for the long haul. Is that a sufficient timeline for what we'd like to get out of it, or does it need to be accelerated, given the pace of technological advances? Thank you.

Muhammadou M.O. Kah:
One more question, please.

Audience:
Thank you to all the panelists, first and foremost, and to Your Excellency as well for moderating so graciously. I am Maria, a consultant for the United Nations Alliance of Civilizations, and my question is whether we have consensus, since we discussed consensus building, on whether we already have the language and the tools in terms of standards, and it's a matter of interpreting and adapting them to the human rights language, a bit of a teleological effort in legal terms, or whether, on the contrary, we need to keep building on certain aspects that appear more evident to certain communities or certain regions. So do we have to invent some extra wheels, or do we all agree that there is already a lot and it's a matter of streamlining? Because in my experience, I think we are facing a bit of a data deluge, and actually coming together, streamlining, and doing this interpretation and translation effort could be more crucial. But of course, I will greatly welcome your thoughts on that. Thank you.

Muhammadou M.O. Kah:
I'll now turn to the panel, if you can take a minute or two, because we're really out of time, to respond to the questions. We start with Peggy.

Peggy Hicks:
Thanks. No, I wish we had more time; they're all good questions. I think the point about the regulatory regimes and how we align them with standard setting is super important, and I agree it's not moving forward quickly enough. It emphasizes the points that Anriette made about the resources that are necessary on all sides, especially for smaller companies and CSOs. If we want everybody to be engaged, we have to make it easier. So more needs to be done to align what's happening, and I think you and I have even had a conversation around the auditing process and other things that haven't really made the space yet to make sure that there's enough human rights being brought into that system. But I do think if we can develop some good practices, then it makes it easier to replicate them in other places, keeping in mind, as I said earlier, that one size will not fit all. In terms of the pace, I think it's a really good question, and I'm sure my colleagues will answer it more intelligently, but I think ultimately we need for it to be participatory and inclusive, and that does sometimes take time. I'm afraid that when we cut corners on that, we end up with results that just aren't as thorough and aren't as usable, and then we have to go back and redo it, in a way that ends up taking longer. So we've got to get it right, but we should do it as quickly as we can. And finally, on your point, it's interesting; this is something that we come to all the time in terms of the human rights framework. Is it enough? Do we need new digital rights? Our answer on that is always, as you've said: we have a lot there. Let's apply what we have, let's use it to the maximum of its ability, and then if we find gaps, let's look at them. But we're still a long way from using what we have.
On the standard-setting side, from what I understand from my colleague at ISO, there is a lot there that we can build on, but there's also a lot that needs to be developed in these new areas, and we need to make sure that we follow the good practices that have been voiced here. And that is one point I wanted to make: the standard-setting arena is very diverse in terms of its practices. We heard some of the best practices from our colleagues at ISO and ITU, but part of what we're trying to do is make it more apparent that that high bar should be reached by standard-setting organizations across the board, and help them get there as well. Thank you.

Sergio Mujica:
Sayu, please take it up. Yeah, a lot of good questions. Interesting point. Try to cover all of them. First, collaboration with regulators. Absolutely, we do not compete with regulators. We do not create public policy. We help implement public policy. And some countries are very active in tasking standard setting organizations to create relevant standards. We do not always. start on time, if you will, but this is the secret. All of you, each and every one of you, has the power to initiate a standardization process through your national standard body in your country. So I do not start the standardization process here in Geneva. It is started by the members and the national level. That’s number one. Second is the speed, it’s absolutely true. I told you in my example that I can develop a new standard in two weeks in my office here in Geneva. It doesn’t work like that. We need to remain true to the set of core values we believe in, and that is not for free. That takes time. But we can improve the efficiency of our processes, and we can also work in the so-called good enough approach. COVID, we couldn’t wait three years to develop standards to support combat COVID, of course. So sometimes it’s good enough to go out with something acceptable that we can improve as we go along. So that sense of urgency, we can work with, and we have a set of rules that can allow for that sense of urgency. But full consensus, that takes time. Number three is about inclusiveness. One of the issues we have, sometimes standards are perceived like fancy technical solutions created by rich people to be used by large corporations. Most of the time, it’s not true. Sometimes, unfortunately, it is true. Because one thing is to have a door, and a very different thing is to cross that door. We need to work with civil society. We have the rules to do it, but not always we manage to engage them successfully. The same thing with developing countries. 
And I think it's everybody's responsibility to mobilize everything we have to give a voice to everyone. Otherwise, our standards are not relevant everywhere, and we end up applying an international standard in a place where it doesn't make sense at all. And then the final point: do we need to create something new? I think it's really important to ensure a voice for human rights organizations in the relevant projects we're working on. What I mean is, if we're working on 3,000 new international standards, and this is for real, out of those 3,000, which are the 10, 15, or 20 that are really relevant for the human rights community, and how do we proactively bring you into those conversations? That's number one. And number two, how do we educate the 50,000 international experts who are creating the standards we speak of in a basic understanding of human rights? This is not only about developing technical solutions, to create a pen, say. It is also about embedding some basic values, the lens of human rights, in those technical conversations.

Mercedes Aramendia Falco:
Okay, I will do it very fast. First of all, as a regulator, for us it's very important to have standards, because they simplify our work. So for us it's great to have standards and rules that help us know how to proceed and to follow them. I think something very important is for everybody to be clear that technology is a means to an end, not an end in itself. So we have to use technology for what we need, and not treat technology itself as the goal. Considering that, in relation to the timelines, of course that depends on each objective, and I think it's very important to take the time to offer transparency. In relation to the laws, it's very important to review what we have and to update whatever we need to update. But I think the big issue here is that we have laws, we have human rights, different instruments, et cetera, and the big question is how to apply them, how to make them real. We all know that in offline life we all have to comply with what the human rights conventions say and what the law says, and I always ask why, in online life, we have some doubts, or we believe that we can do things online that, in the offline world, we would never doubt we are not allowed to do. Also, I think we have to always look for balance, and we have to take care to protect human rights by design and by default. Thank you.

Muhammadou M.O. Kah:
Thank you. Annette?

Anriette Esterhuysen:
I'll go ahead. And thanks, Ambassador. Jason, it really is a challenge. As there's more regulation, content regulation is one area, platform regulation is another; we're in an era of more regulation in our sector. I think the importance of using human rights norms and standards as a common baseline across different regulatory regimes and different disciplines becomes even more important than it always has been. Now, that's not simple. We have to work to make those existing human rights norms and standards applicable and understandable in different contexts. We also have the High Commissioner's report, which makes very specific recommendations to standards organizations, and there are the NETmundial+10 guidelines on multi-stakeholder participation, transparency, and inclusion that can be used. I think we can apply those. The question from TikTok is a really interesting one, and I think Sergio has answered it really well. But I also think we have to acknowledge that standards, device and hardware standards, are sometimes enablers and sometimes bottlenecks. It can also be important to look at a particular context. I'm thinking of when mobile phones first became available in developing countries: there were such backlogs in regulators and national standard-setting organizations approving those devices. We've seen the same thing with Wi-Fi routers. Sometimes there are vested interests, sometimes there are not. Sometimes it's just, as Sergio was saying, a matter of making decisions, prioritizing, and then trying to speed up on that basis. And the issue about consensus and language, if I understood your question, I think is very important. I agree with Peggy's response, but I think we shouldn't underestimate the complexity of making these different universes intelligible to one another.

So, for example, I do a lot of work with regulators. They talk about public interest, they talk about consumer rights. They don't, and I'll be finished, they don't talk about human rights. So making those connections, I think, is very important. And then I would say, in terms of prioritization, absolutely, and I can speak to that. My organization prioritized TV white space standards a few years ago because we wanted more access in developing countries, and we wanted regulators to work with that. We're now prioritizing Study Group 5, working on the circular economy. So I think that's important as well. You can't do everything at the same time. You have to prioritize and then try to do it well and as collaboratively as possible. Thank you.

Muhammadou M.O. Kah:
Olivier, one minute.

Olivier Alais:
Yes, thanks a lot. I'm interested in the questions about the extra wheel and the interpretation. Yes, it's something that we need: we need a stronger link with the technical communities. When I'm talking to engineers and technicians working on 6G, on the metaverse, on optical fibers, for them it's not clear what we are talking about when we talk about human rights. We need to support them to build their capacity and understand a bit more what we mean by human rights. But we also maybe need to work on, let's say, a taxonomy: a document that takes the different human rights and tries to translate them into technical terms. For example, when we are talking about freedom of expression, we know that we are talking about access, accessibility, and cybersecurity. But we need to go deeper and really find the right language, working closely with the people drafting these recommendations and standards. So we still need to work on that.

Moderator:
Thank you so much. I wanted to leave you with a very few parting thoughts. I wish we had more time; this was a very fascinating and insightful panel, and we've benefited quite a bit. I'll leave you with this point: specific and pragmatic strategies are required to enhance accountability among tech companies and states in ensuring that technological advancements respect and protect human rights. Secondly, education on human rights is fundamental; education is very, very important to meaningfully implement and harness benefits from AI tools and services while managing risks. Thirdly, given the diverse global perspectives and unique and diverse contexts on technology and human rights, meaningful multi-stakeholder collaboration and partnerships are required to support international engagement on technical standards, to ensure inclusivity and respect for digital rights. That came out from the panel. Leadership, collaboration, and coordination among ITU, ISO, and OHCHR are crucial to ensure that the views of a wide range of stakeholders are considered in the development of technical standards, regardless of whether they are from developed or developing countries. I think the panel deserves a round of applause. Thank you so much. And thank you for coming. See you at the next session. Thank you.

Speech statistics

Anriette Esterhuysen (AE): 178 words per minute; 1382 words; 465 secs
Audience (A): 158 words per minute; 251 words; 95 secs
Doreen Bogdan-Martin (DB): 147 words per minute; 913 words; 372 secs
Mercedes Aramendia Falco (MA): 144 words per minute; 1277 words; 533 secs
Moderator (M): 132 words per minute; 340 words; 154 secs
Muhammadou M.O. Kah (MM): 142 words per minute; 1261 words; 533 secs
Olivier Alais (OA): 158 words per minute; 1005 words; 382 secs
Peggy Hicks (PH): 206 words per minute; 2257 words; 658 secs
Sergio Mujica (SM): 173 words per minute; 1338 words; 464 secs