Trust in Tech: Navigating Emerging Technologies and Human Rights in a Connected World
28 May 2024 15:30h - 16:30h
Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.
Full session report
Navigating the Intersection of Emerging Technologies and Human Rights in a Connected World
The panel discussion titled “Trust in Tech: Navigating Emerging Technologies and Human Rights in a Connected World,” co-organised by the International Telecommunication Union (ITU), the Office of the United Nations High Commissioner for Human Rights (OHCHR), the International Organization for Standardization (ISO), and the Association for Progressive Communications (APC), delved into the intricate relationship between technological advancements and human rights.
The dialogue brought together a diverse group of experts who explored the potential of emerging technologies such as artificial intelligence (AI), the Internet of Things (IoT), and the metaverse to contribute positively to society, particularly in relation to the Sustainable Development Goals (SDGs). They discussed the importance of ensuring that these technologies do not exacerbate inequalities or infringe upon human rights.
A significant portion of the conversation centred on the integration of human rights into the lifecycle of technology development. The panelists recognised the transformative power of digital solutions like AI in areas such as climate action, hunger elimination, poverty eradication, and the accessibility of education and healthcare. They also acknowledged the challenges posed by AI, including discrimination, bias, disinformation, and hate speech.
The role of technical standards in ensuring the interoperability, security, and compatibility of technologies was highlighted as a critical element. Standards were identified as essential building blocks that must integrate human rights perspectives to strike a balance between innovation and ethical considerations. The panelists emphasised the need for a multi-stakeholder approach that includes governments, businesses, civil society, technical communities, and international organisations to effectively address the human rights-related impacts of new technologies.
Education on human rights was underscored as a fundamental component for the meaningful implementation and management of AI tools and services. The panelists called for more interdisciplinary work and experts to join the discussions to bridge the technical and human rights communities.
The discussion also touched on the need for regulation that is both flexible and adaptable, promoting technological innovation while protecting human rights. The panelists stressed the importance of creating clear laws and principles that align with each country’s reality, as well as the necessity of collaborative approaches to ensure diverse perspectives and practical, effective regulations.
Civil society organisations were recognised for their vital role in shaping technical standards to ensure they uphold human rights and serve the interests of diverse communities. The panelists encouraged civil society to help demystify technical jargon, create awareness about the importance of standard-setting, and bring specific issues and experiences into the standard-setting process.
The event concluded with a call for leadership, collaboration, and coordination among ITU, ISO, and OHCHR to ensure that views from a wide range of stakeholders are considered in the development of technical standards. This inclusive approach was deemed crucial, regardless of whether stakeholders are from developed or developing countries.
The panelists also discussed the existing human rights norms and frameworks as a solid foundation for guiding the development and use of technologies. They acknowledged the need for ongoing collaboration between human rights organisations, standard-setting bodies, and other stakeholders to ensure that technical standards align with human rights and promote trust among users.
In summary, the panel discussion provided a comprehensive examination of the intersection between emerging technologies and human rights, highlighting the need for a concerted effort to ensure that technological advancements are harnessed responsibly and inclusively for the betterment of society.
Session transcript
Moderator:
Hello everyone, thanks a lot for being here for this panel about navigating emerging technologies and human rights in a connected world. It’s really an honor to welcome you, and it’s co-organized with ITU, with OHCHR, with APC, and with ISO. Ambassador Kah is going to arrive soon and will do the moderation, but we are starting now. First, to open the session, I’m delighted to introduce Ms. Doreen Bogdan-Martin, the Secretary-General of the ITU, who will provide the opening remarks. Secretary-General, please, the floor is yours.
Doreen Bogdan-Martin:
Thank you, thank you so much, and good afternoon. I was worried I was going to be late. It’s great to be here. I hope you have found day two productive. I think we have seen and also heard so many great examples of digital technologies that are really demonstrating how we can make an impact when it comes to the SDGs. One number, I think, captures it like no other, and that is 70 percent: the percentage of the Sustainable Development Goal targets that can actually directly benefit from digital technologies, including artificial intelligence. Digital solutions like AI can accelerate progress in climate action, eliminating hunger, eradicating poverty, and making education and healthcare accessible to all. The opportunities to strengthen human rights are enormous, but so are the challenges. The AI-related challenges threatening human rights are many, from discrimination and bias to disinformation and hate speech. But governance efforts emerging at the national, regional, and international levels, I think, really give reasons for hope, such as the landmark AI resolution recently adopted by the General Assembly in March. And I’m sure you’ve all read it or followed the process, but I think what’s important in that resolution is that it does stress the importance, as Peggy can confirm, of human rights. And it also acknowledges how the UN system, consistent with our mandates, uniquely contributes to reaching global consensus on a safe, secure, trustworthy AI future. And of course, that would be in line with international law and also the UN Charter and the Universal Declaration of Human Rights. So tomorrow, as many of you know, we will hold our first ever governance day, our AI governance day. We’re quite excited about that governance day. We will be convening ministers, we’ll have experts, academics, policymakers, civil society,
and, of course, the technical community. And we’ll be looking at the latest AI governance efforts. What do they have in common? Where are the gaps? And how can we find the balance between guardrails and innovation while placing the UN core values, like human rights, at the center? Standards, of course, are a critical component. They’re critical in trying to find that right balance, as they are the building blocks for the design, the development, and the deployment of emerging technologies. The human rights perspective must be incorporated into the standards-making process. We have been fortunate to be working very closely with ISO, with IEC, and, of course, with the High Commissioner for Human Rights. And we need to make sure that human rights are core to the process in ways that help to make it more inclusive, transparent, and also aligned with the vision set out in our Common Agenda and the Sustainable Development Goals. Though our work at the ITU is a bit more technical, I think, in scope, inclusion really is at the heart of everything that we do. Much of our ongoing standardization work deals with emerging technologies, from artificial intelligence, of course, to IoT, to quantum, and also things like the metaverse, which is a big focus of our work in the Standardization Bureau. But, of course, there’s much more work to be done to build bridges between the technical community and the human rights communities. And welcome, Ambassador. I thought I was going to be late. I was glad you were late. And to this end, we are, as I said, working very closely with OHCHR and ISO-IEC, and this is all part of our World Standards Cooperation. I especially hope that we will see more interdisciplinary work and we will have more interdisciplinary experts joining these discussions. I think that’s really critical.
We hope they join us also, and I’ll make a plug for our upcoming World Telecommunication Standardization Assembly that will take place in November in India. I hope you will all join us for that effort. And with that, ladies and gentlemen, if we are to achieve the SDGs, and we know that they’re off track, we really do need a whole-of-society, whole-of-community approach. We really need all hands on deck. Digital cooperation is key to mitigating potential human rights-related impacts of new and emerging technologies, and the upcoming Summit of the Future and its Global Digital Compact, as well, of course, as the WSIS Plus 20 Review next year, are important milestones. They’re important milestones to ensure that everyone, everywhere, can benefit from the potential of emerging technologies. I look forward to hearing your ideas and to working with all of you more closely to ensure that we put the UN values and human rights at the core and at the heart of a more inclusive, innovative, sustainable digital future for all. And with that, thank you again, and welcome, Ambassador Kah.
Muhammadou M.O. Kah:
Good afternoon. It’s my distinct honor to welcome you to this high-level dialogue on Trust in Tech: Navigating Emerging Technologies and Human Rights in a Connected World. This event is co-organized by the International Telecommunication Union, the Office of the United Nations High Commissioner for Human Rights, the International Organization for Standardization, and the Association for Progressive Communications. Today we are here to explore the intersection of technology and human rights, and to discuss how we can foster trust and inclusivity in our rapidly evolving digital technologies. We have just listened to the Secretary-General’s insightful remarks, and we wish to take the opportunity to appreciate the brilliant leadership of our Secretary-General, Ms. Bogdan-Martin, and to thank her for her very insightful remarks on a very important topic today, as digitalization and AI evolve across humanity all over the world. As we gather today, it is crucial to acknowledge the profound impact that emerging technologies have on our world. They offer tremendous opportunities for innovation and progress, yet they also pose significant challenges, particularly concerning human rights, trust among users, and human dignity. This event aims to address these pressing issues. The role of technical standards cannot be overstated. They serve as the backbone of interoperability, security, and compatibility in the digital realm, ensuring that diverse technologies can work together seamlessly. We must be aware of the challenges we face. As we advance into the era of digitalization, AI, and other groundbreaking innovations, it is imperative that these standards are developed with a strong foundation in human rights. By translating human rights principles into technical terms, we can create frameworks that not only foster innovation but also protect the rights and dignity of individuals worldwide.
Now, I know we have a very distinguished panel here. It is my honor to introduce our distinguished panelists, who will provide deeper insights into these very important issues. I have on my left here Ms. Peggy Hicks, the Director of Thematic Engagement at the Office of the UN High Commissioner for Human Rights. I have Mr. Sergio Mujica, the Secretary-General of ISO. I have Ms. Mercedes Aramendia Falco, the President of the Board of the Unidad Reguladora de Servicios de Comunicaciones of Uruguay. I have Ms. Anriette Esterhuysen, the Senior Advisor on Global and Regional Internet Governance at the Association for Progressive Communications, and Mr. Olivia Elias, the Program Coordinator for Human Rights and Technology at the ITU. Each of these distinguished experts brings a wealth of knowledge and experience to our discussion. Together, they will help us explore how we can embed human rights into technical standards, build trust in technology, and foster a more inclusive and sustainable digital future. Let us now proceed to our first round of questions, starting with Ms. Peggy Hicks. Ms. Hicks, from a human rights perspective, what are the key challenges posed by the rapid advancement of emerging technologies, and how can these challenges be effectively addressed through multi-stakeholder collaboration?
Peggy Hicks:
Thank you so much, Ambassador Kah, and to the Secretary-General of the ITU, Doreen. It’s great to have this close partnership next door to allow us to engage on these issues, and thanks to our fellow hosts and partners. It’s really, I think, for me, so encouraging to see how much human rights has been integrated into the conversations here at WSIS and as we go forward to AI for Good. I think things are evolving. I see that. I’ve been complaining to everybody because we kept wanting human rights to be part of the conversation, and now it is, and my staff and myself are so overstretched that we can’t be in all the rooms where people want to talk about human rights. But I think that’s overall a good sign, and I look for more of that to happen in the future. But going to your important question about what are the key challenges from a human rights perspective, I’m going to start with the big picture, and I’m sure my colleagues will fill in some of the specifics around the technical standard-setting side. The first problem that we have is that often the human rights conversation is seen as a limiting conversation. It’s about putting in place guardrails. It’s about putting in place obstacles that may get in the way of innovation. We really want to move away from that framing if we can and start to think about how human rights can actually help us achieve better results using digital technology. We’re firmly convinced that that is actually the case: that the problems we’re raising, the issues we’re looking at from a human rights perspective, are ones that, if we bring them in early, will allow us to achieve more in terms of development, more in terms of people’s rights going forward. The next challenge I want to move to then is one that really frames that issue quite clearly, and that is the need to close digital divides and ensure inclusion in a broader sense. We think about this in two ways: first, we need the benefits of digital technology and AI to reach everyone.
And that unfortunately is not the world that we currently live in. And secondly, we need to set those guardrails that I talked about to ensure that the harms are also mitigated for everyone as well. We can’t have an environment where it’s safe to do something in one place because you have a nice AI safety institute there, while someplace else doesn’t have the resources to invest in those frameworks and is not able to do it. So in solving that challenge of closing those digital divides, we need to recognize first that one size won’t fit all. We will need to adapt the way we approach the challenges in the digital world based on the different contexts and environments we work in, some of which have well-developed systems that can supplement and support this, others that do not. And that means that we’re going to need resources and support to make sure that we build on the learning that we have, but we don’t just try to transfer things wholesale from one place to another. We need greater human rights integration that will allow us to achieve results in development and better secure economic and social rights. So this isn’t just about the privacy and discrimination and inclusion issues; it is also about how we actually achieve better results for health and for education, across issues like digital public infrastructure and smart cities, which was a conversation happening in the room before we came in. It’s those concerns that have led us to support the proposal within the Global Digital Compact for a Digital Human Rights Advisory Service, the idea of which is that we want to be able to provide support and engagement, leveraging the expertise that’s already out there. There are all sorts of academics and others who can help us solve some of these challenges, and advise and support governments at the national and other levels, but also businesses in facing these challenges.
The third of the five challenges I wanted to mention (I’ll hope to hold to my five-minute timeframe) is one I think Doreen has already hit on, which is the need to break down some of the silos that exist. There are too many conversations that are technical conversations where it’s really hard for the human rights voice to be heard. That is something that actually is a problem on both sides. We recognize that as much as I can say to all of you that I think the human rights framework offers a lot to these conversations, it’s not always so easy to pull out how it can offer those things. It’s got broad principles with all sorts of application through our special procedures, our human rights experts, our treaty bodies, but that’s a vast array of information that’s not necessarily accessible to the standard-setting body or the regulator who wants to use it. So we have a challenge on our side to make that more accessible and more practical so it can be applied in these conversations more easily. But on the other side, of course, we also need an open door and an interest in bringing that material in. So we need to break down those barriers, and I would just support what Doreen said about interdisciplinary experts. The more people we can find who actually have expertise in all of these areas, the better. The fourth piece I wanted to emphasize is the importance of inclusion, participation, and a multi-stakeholder approach. This is essential to get the results we want. It’s not just a principled approach, but it’s a practical approach. Including and listening to those who will be affected by digital technologies will be crucial to keep the design and development of those technologies on track. We need broad inputs from a diverse set of perspectives. It happens all the time when technology is rolled out that there are things that would have been obvious to one community but were overlooked in the risk management done in advance.
So we need those perspectives to be at the table to make sure we get good results. And I have to emphasize that the global majority is often very much underrepresented in these conversations. They are being conducted in environments that can be very dominated by the developed world and by Western experts, and also often conducted in the English language in a way that leaves out the perspectives of others. So we need to really change that as well. The fifth challenge I’d mention is really looking at the role of business. This is an area from a human rights perspective where we’ve done some excellent work, I think, but where much more needs to be done. Businesses do have a responsibility to respect human rights, and governments have an obligation to ensure that businesses do that. I think you won’t be surprised to hear me say that I think both businesses in the tech sector and governments, in terms of the regulatory obligations, have a long way to go to make sure that we’re doing everything possible to ensure that companies are engaging in the way we’d like to see in this space. So we have a project called B-Tech where we’re working with a community of practice of big companies. We want it not only to be the big companies; we’d like to really bring in a broader range of actors within the sector to incentivize good practices. We need to make it so that the companies that do better in this actually see that it helps them to move forward, and thereby encourage others to do the same. And the final challenge, sorry, that was actually the final one, yeah, that’s five. I have one point, which is to link it into the following conversation around technical standard setting. To me, it’s a really good example of where all these things come into play, and I’m sure we’ll hear that teased out in the comments that follow. It’s an example where we see the growing importance of technical standard setting. You can look at the EU AI Act as an example, but we know that we have a long way to go to ensure that human rights is integrated into the technical standard-setting area. And I’m looking forward to hearing the comments of others about how better to do that. Thank you.
Muhammadou M.O. Kah:
Thank you, Peggy, for your very insightful intervention. Picking up from your last point on standards, this question is for Mr. Sergio Mujica. Sergio, welcome. How can international standards bodies like ISO contribute to embedding human rights considerations into technical standards, particularly in the context of evolving technologies, situating it from where Peggy left off? Thank you. Very well.
Sergio Mujica:
Thank you, Ambassador, and good afternoon to all of you. ISO is the International Organization for Standardization. We have 170 members all around the world, of course. And what we do is create a global language to define technical specifications or technical requirements to ensure that a product or a process is fit for purpose. Human rights also represent a universal language for how we fight against injustice, repression, and abuse of power. So the key question for us, standard makers, is: how can we help? Because this is so important. And I want to start by thanking Doreen and the ITU, because two years ago they organized a high-level meeting with the UN High Commissioner for Human Rights with exactly the same approach: how can we help? If I want to put it in a nutshell, it’s, number one, through the portfolio of standards which already exists. And I think this is a message to everyone here: please do not reinvent the wheel. There are very relevant international standards that already exist that can support the human rights agenda. And just to give you a couple of examples, we have very well-known standards on anti-bribery, on social responsibility, accessibility, occupational health and safety, and so on. We also have a very important and well-known standard on sustainable events. Just to give you an example, in a few weeks in Paris, the Olympic Games will be certified as a sustainable event using an ISO standard. So that’s the first thing, using the existing portfolio. The second one is about the platform, because this is not only about the what. We have, of course, 25,000 international standards that can be used today, and when you know what you have, you can also identify the gaps. But what makes standards relevant here is the how: the process we follow, the set of core values we believe in. Because I could give you a new standard in two weeks, if you want; in my office here in Geneva, I bring in a couple of top experts and I give you a new standard.
It doesn’t work like that. What we believe in is consensus building. All our standards are developed by consensus. We do not impose anything, number one. Number two, we work in a very transparent manner, with inclusivity, and this is very important, actively providing a voice to developing countries. This cannot be a conversation by a club of rich countries. We need to really ensure that this is a universal conversation taking into account all the different needs and points of view. So in that process, we have rules to ensure stakeholder engagement. It’s not only about IT experts here; it’s about consumers, it’s about academia, it’s about governments, it’s about civil society. And second, we also have rules to ensure that human rights, and in particular the sustainability agenda, are taken into account when developing the standard. So once again, it’s not only an expert conversation about the technical stuff; those considerations also need to be addressed when developing the standard. In AI in particular, we have a dedicated technical committee to deal with AI. It’s something that we do together with our sister organization, the IEC. And in that technical committee, we have already created a set of relevant standards for AI. Probably the most important one is the management system standard on AI, where we can really support all kinds of organizations to implement international best practice in their daily activities. So in a nutshell, I think international standards can be really instrumental in ensuring that all the opportunities of AI can be captured, but also that the big risks we have can be properly addressed, with a framework that supports developing, using, and regulating artificial intelligence. Thank you.
Muhammadou M.O. Kah:
Thank you so much. You touched on some very important issues; standardization is very, very crucial as AI evolves. Now I will turn to Ms. Mercedes Aramendia Falco. Welcome. As a regulator, how can governments and national agencies balance the promotion of technological innovation with the protection of human rights in the telecommunication sector? Thank you, Ambassador, for the question.
Mercedes Aramendia Falco:
Hello, everybody. Thank you very much for being here with us. I’m really happy to have this opportunity, and I think that this question and this matter are really relevant and important, and we really need to work on them. Firstly, I want to thank all of you, and of course the ITU, Doreen, and the Ambassador. And I want to remark that I think that this topic, this matter of balance, is one of the most critical issues we have to work on. Why? Because on one hand, of course, technology and all the platforms, applications, and possibilities it brings to us try to address different social, cultural, and economic needs that society has. But at the same time, of course, it’s important to try to find a balance in order to consider ethical matters and also human rights, such as non-discrimination, freedom of expression, access to information, disinformation, and privacy, among others. Of course, like everything in our life, achieving balance is very, very difficult. We know that, I think. And between innovation and human rights, it’s not easy either. We need perspectives and also expertise to help develop effective and comprehensive solutions. I’m going to mention some aspects that I believe we should consider when we develop technology and also when we regulate. Firstly, we have to look for the balance from the very design, from when we start a project. Promoting innovation while protecting human rights is, of course, delicate, and innovation drives economic growth. But without control, it can affect ethical matters and also human rights. So we must work on maintaining that balance, and review and adjust to ensure that technology serves humanity without overstepping ethical boundaries. Secondly, I think that it’s very important to have a framework and also principles.
It’s important to create clear laws and principles that protect privacy, data security, and freedom of expression, and that address disinformation, among other matters, while encouraging technological advancement. This should align with the reality of each country. For example, in Uruguay our constitution recognizes all rights inherent to the human personality, and we have also ratified the international human rights treaties and various human rights conventions. Additionally, we have specific laws to protect personal data and regulations for fighting online piracy, which seek a balance between freedom of expression and intellectual property rights, for example. The third point is a collaborative approach. Working with all stakeholders, including tech companies, civil society, academia, and international organizations, and of course the public sector and regulators, ensures diverse perspectives and practical, effective regulations. This approach helps find better solutions and creates stakeholder engagement, which facilitates the implementation of these solutions. Collaboration fosters a sense of shared responsibility and mutual benefit, enhancing the effectiveness and acceptance of regulation later. The fourth point is transparency and reasonable process. Ensuring transparency in how regulations are developed and enforced helps people, academia, and the whole ecosystem understand the motivation and the why, and also helps us build trust and legitimacy, later making it easier to comply with the regulation or the law, and helping to create a culture of accountability. The fifth point is monitoring compliance and also education. Monitoring solutions and systems to ensure compliance is necessary. It’s also important to seek reasonable and proportional consequences for non-compliance. Nonetheless, the key is to educate and raise awareness about risks and challenges.
Education empowers individuals and organizations to navigate the digital landscape responsibly and ethically. Finally, it is very important to constantly modernize. Regulation should be flexible and adaptable and should promote innovation. We also have to consider the speed of change, regularly checking whether regulation is needed or needs updating, and we have to work to foster innovation and to attract and retain talent in the regulator and in the ecosystem, because we really need people who understand technology and at the same time understand human rights and all the risks involved. In this sense, it’s important to remark that it is necessary to have interdisciplinary groups, because we need lawyers, we need engineers, we need people able to understand both sides of a project. That will help us make it real and, once the project is real, check whether what we designed is correct and whether we really achieved the balance that we need. Of course, for that we need to create environments, like sandboxes, that help us design, implement, and also check whether everything is going well or whether we need to make some change or adjustment in the design and in the implementation, always looking to protect ethics and all the human rights involved. Of course, it’s really hard to know all the risks that we are going to face, so it’s very important to work in sandboxes and find solutions that let us test before we take a solution into reality. Thank you.
Muhammadou M.O. Kah:
Thank you. Thank you, Mercedes, for your intervention and for making the point about the centrality of balancing the promotion of technological innovation with the protection of human rights in the telecom sector. And more importantly, regulation must be flexible and adaptable, and must promote innovation. Thank you so much. Now, I turn to Ms. Anriette Esterhuysen. In what ways can civil society organizations actively participate in shaping technical standards to ensure they uphold human rights and serve diverse communities’ interests? Thanks,
Anriette Esterhuysen:
Ambassador, and thanks to the ITU and your partners for convening this. I think civil society has a fundamental role. It’s not that human rights people don’t understand technical standards. They might just understand them a bit differently from how technical people understand them. I think that what we can do is make standard-setting organizations aware of what it means to work with a human rights-based approach, and of the differences between consumer rights, also important, and human rights. They are very different, and it’s important that regulators, manufacturers, standard-setting organizations, and service providers pay attention to both those sets of rights. We can also help standard-setting organizations understand the difference between multi-stakeholder collaboration and public-private partnership. There’s a long history of standards organizations working with the private sector. There’s not such a long history of them working with civil society, and I hope that history is starting here and getting deeper and longer. I think also what we can contribute, Mercedes talked about the need for principles, the need for ethics. I mean, our perspective would be that existing human rights norms and frameworks give us the basis that we need for those principles and ethics that can inform standard-setting. And because we work so closely with those norms and principles, we can assist in those processes. Then I think we can really help demystify technical jargon and create awareness, in the spaces where we work in different parts of the world and with different groups, that standard-setting matters, that it is important. And I think you’ve outlined exactly why it is so important, and I think people underestimate how important it is. And we can also help create awareness of where those standards are being made. Transparency is very important, but the standard-setting organizations can’t do that alone.
Civil society can help with that broader outreach and bring in a broader constituency. And then I think we can collaborate. I think I have to commend the Office of the High Commissioner for Human Rights on the report they did last year. They had a call for input on digital technical standards and human rights. And for us, it was a challenge to contribute to that report. But it made us work. It made us think about what the connections are. And I think that’s another contribution. As this link between standards and human rights evolves, and the link between emerging technologies and human rights evolves, we can play a role. I think we can also put specific issues on the table, such as encryption, surveillance, privacy, and digital inclusion, issues that matter to us and which are connected with standards. And then we can bring very specific experiences into the standard-setting process. I can mention two examples. My organization, the Association for Progressive Communications, and the Internet Society were involved in establishing a community network in northern Namibia. The routers worked for about a week. And then they started melting because the temperatures were just so high. Now, those routers were fully compliant with technical standards. But they did not work in that particular context because they were in outdoor spaces. There was no air conditioning. They simply didn’t work. Another example, which I think one of our members has been looking at, is the new instant messaging interoperability standard being developed at the IETF. Now, it is very important to have interoperability between different messaging systems. But in the work that we’ve done on cyber harassment, cyber stalking, sexual harassment of women, and gender-based violence online, security is extremely important.
So we don’t want interoperability between messaging systems to come at the expense of the security that we feel people need to be able to protect themselves and not be exposed to unwanted messages. So these are specific examples. I think, then, that we have played a role in human rights in the digital space, a very big role, and I think we can do that in standards as well. But I think we also need to acknowledge that there’s a lot of capacity development that we need to do ourselves as civil society. We have to be more deliberate and more active in building our own knowledge of the issues that are at stake and the processes that are at stake. And we cannot do it alone. And I think this is where it becomes very important for civil society to work with the technical community and with standard-setting organizations as well. Now, they also need to change, but I’ll say more about that in the next round, if we have a next round.
Muhammadou M.O. Kah:
Thank you so much, Anriette. And thank you for making the point that standard-setting matters, that transparency is very important, and that awareness and the role of civil society are key and crucial to building trust in tech. Thank you so much. Now I move to Mr. Olivier Alais. Olivier, welcome. How can technical standards be leveraged to ensure that emerging technologies align with human rights and promote trust among users?
Olivier Alais:
Thank you, Mr. Ambassador. Thanks, everyone, for being here today. As you know, ITU is the United Nations specialized agency for ICTs, and that includes the development of global technical standards. We can wonder why standards are important. They are important because they help technologies, as Sergio said, to work together. They help interoperability. They also help with something important: market access. At ITU, our members include private companies, and standards help ensure that they are complying with market rules and international cooperation. And standards can play a role in human rights. They can ensure privacy, freedom of expression, access to information, data protection, and non-discrimination. That’s why standardization, the core function of ITU, is gaining increasing attention from the human rights community. Why can ITU play a role here? It’s a time-tested platform: we have been working on communication and standards for 160 years. We are a global collaboration platform; that’s why ITU is important. Our members are governments, industry leaders, and academic institutions, and we can work with all of them to build an inclusive approach to standards development. And also, at ITU-T, the sector where we do the standardization, we have 11 different study groups. They are driven by experts, and that’s also something important: they are driven by people who know and understand the standards. And we are working closely with them now to talk about human rights and how we could embed human rights into the standardization process. And like ISO, we are based on consensus. It’s something very important in the standardization world. We cannot impose anything; we need to find common ground, a consensus. Looking at the human rights considerations, and as Peggy underlined, today governments have an obligation to protect human rights offline, but also online. That comes from a resolution from 2013.
It was adopted at the UN General Assembly. So we have to, we must, protect human rights also online. And the Human Rights Council in 2021 came with a resolution asking for better cooperation between OHCHR and standards development organizations like ITU, and to consider the relationship between technical standards and human rights. That’s why today we are trying to work closely with OHCHR, but also with other stakeholders: with ISO, with IEC, with civil society organizations. And we have already started to work on human rights. I can give you two examples from the ITU study groups, where they are already working on human rights-related needs. Study Group 5, for example, is working on e-waste management. A clean, healthy, and sustainable environment is a human right, and here is a challenge that we are all facing globally: rapid digital growth has led to a surge in e-waste, which is also poisoning the environment. So this study group is drafting standards for sustainable e-waste management and safe recycling. Another example is Study Group 16, which is working with WHO on telehealth accessibility, because, once again, access to health is a human right. And the challenge they are trying to solve is that people with disabilities often struggle to access telehealth services, especially in the Global South. So Study Group 16 and WHO are developing human rights standards; they have already developed standards to make telehealth platforms more accessible for everyone. So to conclude, standards enable products and services to be safe, reliable, and high quality, and they enable technical advancement on a global scale. We would like to work more, and it is important for us, with OHCHR, because at ITU we are technicians, and we also need to learn and collaborate. And collaboration is key, also with ISO, with other SDOs, and with other stakeholders.
And finally, something I think is important: we encourage stakeholders to establish a clear link between technical concepts and human rights. Today, we still need to translate human rights into technical terms, and to embed human rights into technical standards. So there is still a lot to do on that path. Thank you.
Muhammadou M.O. Kah:
Thank you, Olivier, for making that very last point about the importance of those linkages, as well as the central role of meaningful collaboration and partnerships among the actors and agents in the ecosystem, particularly with OHCHR. Thank you so much for making that point. Participants, I wanted to go for a second round, but unfortunately, we do not have time. And I love you so much, so I wanted to get you involved. Now I’m opening it to you to take the opportunity to ask that burning question on your minds. The floor is yours. The mic is going around. Any burning questions? Yes, we have someone there at the back.
Peggy Hicks:
Ambassador, can we ask them a question? Oh, now the hands are going up. Yes. Is there anyone in the room who’s been part of a standard-setting process? How long did it take? Just a quick sense of just how many years. Two years. I think it’s an important point. If we want to be involved in standard setting, we have to be in for the long haul. I think those are short periods, by the way.
Muhammadou M.O. Kah:
Very important question, and I think the point is taken. You have the floor. Thank you.
Audience:
Jason Pielemeier from the Global Network Initiative. Thank you very much for the remarks from the panel. My question, really, for whichever of the panelists would like to take it, is how you see standards processes interfacing with emerging regulatory regimes for technology platforms and services. We have the Digital Services Act, the Online Safety Act in the UK, and various other regulations that are emerging in different jurisdictions, many of which refer to risk assessments, human rights due diligence, transparency, research or access to data, and other common elements, which in theory could be standardized across regulatory regimes. But despite some of the language in, for example, the DSA that calls for standard setting, we haven’t seen much movement towards actual standard setting through those processes. So I’m curious how you think those regulatory regimes might take advantage of standards, or how standards bodies might step into that role. Thank you.
Muhammadou M.O. Kah:
We’ll take another one while the panel think about the question. Please go ahead.
Audience:
Hello. Maurice Turner with TikTok. I wanted to ask a follow-up to Anriette’s question to the audience, which was whether these processes should be accelerated. So we heard one and a half years, two years, and that we need to be in it for the long haul. Is that a sufficient timeline for what we’d like to get out of it, or does it need to be accelerated, given the pace of technological advances? Thank you.
Muhammadou M.O. Kah:
One more question, please.
Audience:
Thank you to all the panelists, first and foremost, and to Your Excellency as well for moderating so graciously. I am Maria, a consultant for the United Nations Alliance of Civilizations, and my question is whether we have consensus, since we discussed consensus building, on whether we already have the language and the tools in terms of standards, for instance, and it’s a matter of interpreting and adapting them to human rights language, a bit of a teleological effort in legal terms, or whether, on the contrary, we need to keep building on certain aspects that appear more evident to certain communities or certain regions. So do we have to invent some extra wheels, or do we all agree that there is already a lot and it’s a matter of streamlining? Because in my experience, I think that we are facing a bit of a data deluge, and actually coming together, streamlining, and doing this interpretation and translation effort could be more crucial. But of course, I will greatly welcome your thoughts on that. Thank you.
Muhammadou M.O. Kah:
I’ll now turn to the panel, if you can take a minute or two, because we’re really out of time, to respond to the questions. We start with Peggy.
Peggy Hicks:
Thanks. No, I wish we had more time. They’re all good questions. I think the point about the regulatory regimes and how we align them with standard-setting is super important, and I agree it’s not moving forward quickly enough, and it emphasizes the points that Anriette made about the resources that are necessary on all sides, especially for smaller companies and CSOs. If we want everybody to be engaged, we have to make it easier. So more needs to be done to align what’s happening, what we’ve heard, and I think you and I have even had a conversation around the auditing process and other things that haven’t really made the space yet to make sure that there’s enough human rights being brought into that system. But I do think if we can develop some good practices, then it does make it easier to replicate them in other places, although keeping in mind, as I said earlier, that not one size will fit all. In terms of the pace, I think it’s a really good question, and I’m sure my colleagues will answer it more intelligently, but I think ultimately we need for it to be participatory and inclusive, and that does sometimes take time. And I’m afraid that when we cut corners on that, we end up with results that just aren’t as thorough and aren’t as usable, and then we have to go back and redo it, in a way that ends up taking longer. So we’ve gotta get it right, but we should do it as quickly as we can. And then finally, on your point, it’s interesting, this is something that we come back to all the time in terms of the human rights framework. Is it enough? Do we need new digital rights? Our answer on that is always, as you’ve said, we have a lot there. Let’s apply what we have, let’s use it to the maximum of its ability, and then if we find gaps, let’s look at them. But we’re still a long way from using what we have.
I think on the standard-setting side, from what I understand from my colleague at ISO, there is a lot there that we can build on, but there’s also a lot that needs to be developed in these new areas, and we need to make sure that we follow the good practices that have been voiced here. And that is one point I wanted to make: the standard-setting arena is very diverse in terms of what the practices are. So we heard some of the best practices from our colleagues at ISO and ITU, but part of what we’re trying to do is make it more apparent that that high bar should be reached by standard-setting organizations across the board, and to help them get there as well. Thank you.
Sergio Mujica:
Sure, I’ll take it up. Yeah, a lot of good questions and interesting points; let me try to cover all of them. First, collaboration with regulators. Absolutely, we do not compete with regulators. We do not create public policy; we help implement public policy. And some countries are very active in tasking standards organizations to create relevant standards. We do not always start on time, if you will, but this is the secret: all of you, each and every one of you, has the power to initiate a standardization process through the national standards body in your country. I do not start the standardization process here in Geneva; it is started by the members at the national level. That’s number one. Second is the speed; it’s absolutely true. I told you in my example that I could develop a new standard in two weeks in my office here in Geneva, but it doesn’t work like that. We need to remain true to the set of core values we believe in, and that is not for free; that takes time. But we can improve the efficiency of our processes, and we can also work with the so-called good-enough approach. With COVID, we couldn’t wait three years to develop standards to support combating COVID, of course. So sometimes it’s good enough to go out with something acceptable that we can improve as we go along. So that sense of urgency we can work with, and we have a set of rules that can allow for that sense of urgency. But full consensus takes time. Number three is about inclusiveness. One of the issues we have is that sometimes standards are perceived as fancy technical solutions created by rich people to be used by large corporations. Most of the time it’s not true; sometimes, unfortunately, it is true. Because one thing is to have a door, and a very different thing is to cross that door. We need to work with civil society. We have the rules to do it, but we do not always manage to engage them successfully. The same thing with developing countries.
And I think it’s everybody’s responsibility to mobilize everything we have to give a voice to everyone. Otherwise, our standards are not relevant everywhere, and we end up applying an international standard in a place where it doesn’t make sense at all. And then the final point: do we need to create something new? I think it’s really important to ensure a voice for human rights organizations in the relevant projects we’re working on. What I mean is, if we’re working on 3,000 new international standards, and this is for real, out of those 3,000, which are the 10, 15, 20 that are really relevant for the human rights community, and how do we proactively bring you into those conversations? That’s number one. And number two, how do we educate our 50,000 international experts, who are creating standards as we speak, in a basic understanding of human rights? This is not just about developing technical solutions; it is also about embedding some basic values, the lens of human rights, in those technical conversations.
Mercedes Aramendia Falco:
Okay, I will do it very fast. First of all, as a regulator, for us it’s very important to have standards, because they simplify our work. So for us it’s great to have standards and rules that help us to know how to do it and to follow them. I think that something very important is to make clear to everybody that technology is a means to an end, and not an end in itself. So we have to use technology for what we need, and not assume that a given technology is the best or that technology is everything. Considering that, in relation to the timescales, of course, that depends on each objective, and I think that it’s very important to take the time to offer transparency. And in relation to the laws, I think that it’s very important to review what we have and to update whatever we need to update. But I think that the big issue here is that we have laws, we have human rights, different instruments, et cetera, and the big question is how to apply them, how to make them real. Because we all know that in offline life we all have to comply with what the human rights conventions say and what the law says. And I always ask why, in online life, we have some doubts, or we believe that we can do things online that in the offline world we would never doubt we are not allowed to do. And also, I think that we have to look always for balance, and we have to take care to protect human rights by design and by default. Thank you.
Muhammadou M.O. Kah:
Thank you. Annette?
Anriette Esterhuysen:
I’ll go ahead. And thanks, Ambassador. I think, Jason, it really is a challenge. And as there’s more regulation, content is one area, platforms are another; we’re in an era of more regulation in our sector. I think the importance of using human rights norms and standards as a common baseline across different regulatory regimes and different disciplines becomes even more important than it always has been. Now, that’s not simple. We have to work to make those existing human rights norms and standards applicable and understandable in different contexts. And we also have the recommendations from the High Commissioner’s report, which makes very specific recommendations to standards organizations. There’s also the NetMundial Plus 10 guidelines on multi-stakeholder participation, transparency, and inclusion that can be used, and I think we can apply those. The question from TikTok is a really interesting question, and I think Sergio has answered it really well. But I also think we have to acknowledge that sometimes standards, device and hardware standards, are enablers, and sometimes they are bottlenecks. And I think it can also be important to look at a particular context and whether, in fact, in that particular context, and I’m thinking of when mobile phones first became available in developing countries, there were such backlogs in regulators and national standards organizations approving those devices. We’ve seen the same thing with Wi-Fi routers. So sometimes there are vested interests, sometimes there are not. Sometimes it’s just, as Sergio was saying, a matter of making decisions, prioritizing, and then trying to speed up on that basis. And I think the issue about consensus and language, if I understood your question, is very important. I agree with Peggy’s response, but I think we shouldn’t underestimate the complexity of making these different universes intelligible to one another.
So, for example, I do a lot of work with regulators. They talk about public interest, they talk about consumer rights. They don’t, and I’ll be finished, they don’t talk about human rights. So making those connections, I think, is very important. And then I would say, in terms of prioritization, absolutely, and I can speak to that. My organization prioritized TV white space standards a few years ago because we wanted more access in developing countries, and we wanted regulators to work with that. We’re now prioritizing Study Group 5’s work on the circular economy. So I think that’s important as well. You can’t do everything at the same time. You have to prioritize and then try to do it well and as collaboratively as possible. Thank you.
Muhammadou M.O. Kah:
Olivier, one minute.
Olivier Alais:
Yes, thanks a lot. I’m interested in the questions about the extra wheel and the interpretations. Yes, it’s something that we need. We need to have a stronger link with the technical communities. When I’m talking to engineers and technicians working on 6G, working on the metaverse, working on optical fibers, for them it’s not clear what we are talking about when we talk about human rights. And we need to support them to build their capacity and to understand a bit more what we mean by human rights. But also, we need to work on what we could call a taxonomy: a document that takes the different human rights and tries to translate them into technical terms. For example, when we are talking about freedom of expression, we know that we are talking about access, accessibility, cybersecurity. But we need to go deeper and really find the right language to work closely with the people drafting these recommendations and standards. So we still need to work on that.
Moderator:
Thank you so much. I wanted to leave you with a few parting thoughts. I wish we had more time; this has been a very fascinating and insightful panel, and we’ve benefited quite a bit. I’ll leave you with this point: specific and pragmatic strategies are required to enhance accountability among tech companies and states in ensuring that technological advancements respect and protect human rights. Secondly, education on human rights is fundamental. Education is very, very important to meaningfully implement and harness benefits from AI tools and services and to manage risks. Thirdly, given the diverse global perspectives and the unique and diverse contexts on technology and human rights, meaningful multi-stakeholder collaborations and partnerships are required to support international engagement on technical standards and to ensure inclusivity and respect for digital rights. And that came out from the panel. Leadership, collaboration, and coordination among ITU, ISO, and OHCHR are crucial to ensure that the views of a wide range of stakeholders are considered in the development of technical standards, regardless of whether they are from developed or developing countries. I think the panel deserves a round of applause. Thank you so much. And thank you for coming. Thank you so much. See you at the next session. Thank you.
Speakers
AE
Anriette Esterhuysen
Speech speed
178 words per minute
Speech length
1382 words
Speech time
465 secs
Report
The talk begins by highlighting the important role of civil society in the arena of technical standardisation, offering a unique human rights perspective which stands apart from the technical angles typically considered by professionals in the field. The speaker argues that civil society can alert standard-setting bodies to the importance of adopting a human rights-based approach, distinguishing it from consumer rights; both are essential yet distinct considerations for regulators, manufacturers, and those who conform to international standards.
Civil society can also help decipher technical terminology, foster global awareness around the significance of technical standards, and promote better transparency within the standard-setting process. The speaker notes the relative novelty of engagement between civil society and standard-setting entities, despite a history of collaboration with the private sector, suggesting that this conference might signify a turning point for deeper civil society involvement.
Drawing from their organisation’s experiences, the presenter exemplifies how technical standards can sometimes fail to meet local needs. They describe how routers that meet technical specifications may still malfunction under extreme temperatures, such as those found in northern Namibia, if those conditions are not considered.
Additionally, they emphasise the critical need for safeguarding security and user privacy in the interoperability standards for instant messaging to prevent misuse such as harassment. The speaker then addresses the wider theme of integrating human rights within digital technology, positing that civil society could emulate its influential role in the human rights arena when establishing technical standards.
While leveraging established human rights frameworks could lay down an ethical foundation for these standards, the speaker acknowledges the challenge of fully integrating civil society into these technical discussions and stresses building the sector’s capacity for meaningful contributions. The speech goes on to discuss regulatory trends influencing standardisation, advocating for alignment with established human rights norms and standards to achieve consistency amidst various regimes.
It references the High Commissioner’s report, which provides recommendations for standard-setting groups, and the NetMundial guidelines promoting multi-stakeholder engagement and transparency. In response to a query from TikTok, the speaker concedes that while standards can facilitate development, regulatory bottlenecks can sometimes impede technological advances, such as the distribution of mobile phones in developing countries, with issues potentially compounded by vested interests or bureaucratic delays.
Finally, the speaker refers to the necessity of prioritising within the development of standards, citing their own organisation’s work on TV white spaces for enhancing access in developing countries and efforts towards a circular economy. This strategic prioritisation is deemed crucial for significant strides in aligning technical standards with human rights objectives.
In summation, the speaker calls for a nuanced fusion of human rights considerations within the process of technical standard-setting, urging for a collaborative, informed, and context-aware partnership between civil society, the private sector, and standard-setting organisations.
A
Audience
Speech speed
158 words per minute
Speech length
251 words
Speech time
95 secs
Arguments
Technological regulatory regimes and standards processes need interfacing
Supporting facts:
- Emerging regulations like the Digital Services Act and the Online Safety Act mention risk assessments, due diligence, and transparency, which could be standardized.
- Actual standard setting through processes mentioned in regulatory acts like the DSA is not yet seen despite the call for it.
Topics: Digital Services Act, Online Safety Act, Technology Regulation, Standard Setting
Report
Emerging regulatory frameworks, such as the Digital Services Act (DSA) and the Online Safety Act, increasingly highlight the importance of implementing standardised processes in the realm of technology regulation. These acts underscore the necessity for robust risk assessments, thorough due diligence, and enhanced transparency to safeguard online interactions and services.
Despite the clear references within these frameworks to the potential benefits of standardisation, there is a notable absence of actual standard setting. The regulatory texts call for such standards, but the execution of establishing these norms has not been observed in practice.
The argument at the core of this discussion posits that technological regulatory regimes require a closer interface with standards processes. The formal establishment of consistent standards is essential to ensure that entities adhere to regulations and engage in best practices.
However, the evidence suggests a critical stance, emphasising that, despite theoretical acknowledgment of standardisation, there is a shortfall in the integration of these processes within the regulatory systems. The practical implementation of standards would lead to more uniform compliance across the industry, reducing ambiguity and potentially enhancing the effectiveness of the regulations.
Nevertheless, the lack of movement in actual standard setting, particularly noted with the DSA, suggests that the call for standardisation may not have translated into concrete action. This disparity between the regulatory expectations and the establishment of substantive procedures presents a challenge for regulatory compliance and the overall goal of fostering a safe and accountable digital environment.
In conclusion, while there is an observed acknowledgment of the necessity for standardised practices as part of effective regulatory compliance in digital services, the disconnect between emerging regulatory regimes and the development of real-world standard processes remains a concern. This situation demands focused attention to bridge the gap, ensuring that standards do not remain merely theoretical but are actively developed and implemented.
The insights from this analysis highlight the importance of proactive engagement by stakeholders in both the regulatory and technology sectors to translate the call for standardisation into tangible actions that reinforce the integrity of the digital space.
DB
Doreen Bogdan-Martin
Speech speed
147 words per minute
Speech length
913 words
Speech time
372 secs
Report
In an impassioned speech about the amalgamation of digital technologies with sustainable development, the orator fervently underscores the transformative potential that digital innovations, most notably artificial intelligence (AI), present for advancing the Sustainable Development Goals (SDGs). A striking statistic is presented: 70% of the SDG targets could reap direct benefits from digital technologies. This underscores the prevalent belief that, despite extant challenges, the positive effects of such innovations could be momentous.
Digital technologies are depicted as a beacon of hope for issues ranging from climate change mitigation to hunger and poverty alleviation, and for expanding access to education and healthcare. Nevertheless, the speaker also candidly addresses the host of risks AI entails.
The potential for these technologies to contravene human rights through discrimination, the perpetuation of bias, and the dissemination of misinformation and hate speech is acknowledged. This portion of the speech reflects an acute awareness of the twin narratives that accompany technological advancement and of the need to balance beneficial and adverse effects.
Hope persists throughout the address, stimulated by emerging governance structures aimed at improved regulation and oversight of AI systems. The AI resolution recently passed by the General Assembly stands out as a demonstration of the international community's resolve to guide the trajectory of AI in a way that protects human rights.
This represents a critical juncture, reflecting a united effort to reconcile new technologies with established human values and legal norms. A principal highlight of the oration is the upcoming AI Governance Day, symbolising the global dedication to scrutinising and shaping AI governance.
The aim is unambiguous: to forge common ground, bridge gaps, and find an equilibrium between nurturing innovation and implementing needed safeguards. Upholding UN core values, particularly human rights, remains central to these pursuits. Turning to standardisation procedures, the speaker asserts the crucial role of embedding human rights in the design, development, and use of technology.
Standards are described as fundamental for nascent technologies. By partnering with organisations like the ISO and IEC, as well as the High Commissioner for Human Rights, the speaker visualises a collaborative, multidisciplinary approach to standardisation, promoting inclusiveness, transparency, and alignment with broader global objectives like the Common Agenda and SDGs.
Highlighting the ITU's technical mandate in standardisation across domains from AI to the Internet of Things (IoT) and the emerging metaverse, the speaker underlines the imperative of sustaining inclusivity as a cornerstone principle within this intricate web of evolving technologies. They advocate for increased interdisciplinary efforts and expertise, calling for contributions to the imminent World Telecommunications Standardisation Assembly.
The conclusion of the speech is a resounding appeal for unified, intersectoral, global cooperation—highlighting ‘whole-of-society’ collaboration—to stay aligned with the objectives of the SDGs. The orator stresses that collaboration on digital matters is crucial to counter the potential human rights impacts from the deployment of advanced technologies.
Notable upcoming events, the Summit of the Future and the WSIS Plus 20 Review, are underscored as crucial venues for ensuring the global community taps into the potential of emerging technologies for the greater good. The speaker wraps up by reiterating the imperative of centring UN values and human rights at the core of an inclusive, innovative, and sustainable digital future.
This envisioned future is portrayed not merely as an aspiration but as an essential, attainable goal, achievable through collective commitment towards a shared vision. The speech, thus, operates not only as an overview of the current state of affairs but also as an inspirational directive—a call to arms for the advancement of a socially responsible digital realm.
MA
Mercedes Aramendia Falco
Speech speed
144 words per minute
Speech length
1277 words
Speech time
533 secs
Report
The speaker ardently emphasises the vital need to strike a balance between the relentless progression of technology and the defence of fundamental human liberties, detailing six central strategies to tackle this paramount issue:

1. **Design Phase Balance**: It is critical to incorporate consideration for human rights at the technology design stage. Innovations must respect ethical norms and human rights, including non-discrimination and freedom of expression. Ensuring technology addresses diverse societal requirements, with a foundational focus on these rights, is imperative from the outset.

2. **Legal Frameworks and Principles**: The speaker cites Uruguay as a prime example of the significance of robust legal infrastructures. Uruguayan constitutional and data protection laws, alongside international human rights treaties, balance technological progress with individual rights and freedoms. Legal frameworks should be precise and tailored to a country's specific circumstances, harmonising technological innovation with the protection of privacy and free speech.

3. **Collaborative Approach**: The speaker advocates a collaborative model involving private sector entities, civil society, academia, and government bodies in shaping regulations. This approach pools diverse viewpoints, resulting in practical, effective legislation and promoting a shared sense of responsibility and better adherence to norms.

4. **Transparency and Due Process**: Transparency in the creation and implementation of regulation is deemed essential for cultivating societal trust and confirming the legitimacy of the measures. Transparency, extending to the reasoning behind actions, promotes a culture of accountability and fosters overall compliance.

5. **Monitoring Compliance and Education**: Vigilant monitoring of regulatory compliance is necessary, the speaker insists, with proportionate repercussions for non-conformance. However, greater emphasis is placed on education, which equips individuals and organisations to act responsibly and ethically in digital spaces, implying that preventive awareness is preferable to corrective measures.

6. **Continuous Modernisation**: Regulations must be agile, accommodating, and innovation-friendly. As technology rapidly evolves, regular reviews are necessary to evaluate their continued relevance, potentially leading to essential amendments. Interdisciplinary teams, able to grasp both technological advancements and human rights, enrich the quality and appropriateness of regulatory frameworks.
The speaker recognises the practicality of having set standards within the regulatory context, facilitating compliance. However, they caution that technology is a mere instrument, not an end in itself, and should be wielded with that understanding.
They highlight the importance of transparent and timely processes, the regular review of laws, and the need for online life to reflect offline values. In summary, despite the complexities and challenges of achieving a fair equilibrium, the speaker advocates a thorough, unified approach that reconciles technological advances with the ethics inherent in human rights.
They reiterate that the safeguarding of human rights must remain a priority throughout the lifespan of technological development and applications.
M
Moderator
Speech speed
132 words per minute
Speech length
340 words
Speech time
154 secs
Arguments
Specific and pragmatic strategies are required to enhance accountability among tech companies and states in ensuring technological advancements respect and protect human rights.
Supporting facts:
- Tech companies and states play a crucial role in the responsible development of technology.
- There is a need for clear strategies to ensure technology does not infringe on human rights.
Topics: Accountability of Tech Companies, Human Rights Protection, Technological Advancements
Education is fundamental to meaningfully implement and harness benefits from AI tools and services and managing risks.
Supporting facts:
- Education on human rights is crucial for effective use of AI.
- Understanding the implications of AI tools and services is necessary for managing associated risks.
Topics: Education, AI Implementation, Risk Management
Multi-stakeholder collaborations and partnerships are required to support international engagements on technical standards for inclusivity and respect for digital rights.
Supporting facts:
- Diverse perspectives are essential for inclusive technical standards.
- Partnerships help to ensure digital rights are respected.
Topics: Multi-stakeholder Collaboration, Technical Standards, Digital Rights
Leadership, collaboration, and coordination among ITU, ISO, and OHCHR are crucial to ensure views from a wide range of stakeholders are considered in the development of technical standards.
Supporting facts:
- International organizations like ITU, ISO, and OHCHR facilitate the inclusion of diverse views.
- Technical standard development requires input from various stakeholders, including those from developing countries.
Topics: Global Leadership, Standard Development, Stakeholder Engagement
Report
In an age where technological incorporation is ubiquitous, the responsibility of tech companies and states to safeguard human rights is paramount. With continual technological progress, it is essential that robust and effective strategies are developed and put into practice to prevent technological operations from infringing upon human dignity and freedoms.
This focus on accountability aligns closely with SDG 16, which stresses peace, justice, and robust institutions, and is indicative of a positive stance toward preserving fundamental rights within the digital realm. Fundamental to these protective measures is the role of education, which is vital in the understanding and deployment of artificial intelligence (AI).
As underscored by SDGs 4 and 9, quality education is necessary to equip individuals with the capacity to utilise AI tools and services meaningfully while managing the accompanying risks. SDG 4 emphasises the importance of education, while SDG 9 highlights the need for innovation and resilient infrastructure, reiterating the significance of tailored educational frameworks that can cope with AI’s complexity.
International technical standards are fundamental for creating inclusive digital practices and ensuring digital rights, reflective of human rights, are observed. The pursuit of these standards resonates with the aims of SDG 17, advocating for the formation of partnerships to fulfil these goals, and SDG 10, which calls for a reduction in inequalities.
Such standards necessitate contributions from a broad range of stakeholders to construct digital environments that are equitable and accessible. In cultivating necessary collaborations, the value of global leadership and coordinated efforts is unquestionable. Organisations like the ITU, ISO, and OHCHR play a critical role in marshalling collective action and integrating a diverse range of perspectives in the creation of technical standards.
Their involvement is particularly significant in bringing forth the voices of developing countries, ensuring that international standards are truly representative and not dominated by technologically advanced nations alone. Overall, the discussion maintained that technological advancement and the protection of human rights can and should go hand in hand.
It reflects an interplay between various Sustainable Development Goals (SDGs), showcasing the interconnectedness of human rights, education, technological progression, and global partnerships. Through multi-stakeholder collaborations and responsible governance, there is an achievable vision of a future where technology is a force for the enhancement and protection of human rights instead of a hazard to them.
MM
Muhammadou M.O. Kah
Speech speed
142 words per minute
Speech length
1261 words
Speech time
533 secs
Arguments
Emerging technologies greatly impact human rights and trust
Supporting facts:
- Technologies offer opportunities and challenges for human rights.
- There’s a need for trust and inclusivity in digital evolution.
Topics: Human Rights, Trust in Technology, Emerging Technologies
Technical standards are paramount for interoperability and human rights
Supporting facts:
- Standards ensure diverse technologies can work together seamlessly.
- Standards must be developed considering human rights.
Topics: Technical Standards, Interoperability, Human Rights
Human rights principles should guide tech standards to protect individuals
Supporting facts:
- Translating human rights into technical terms is essential.
- Frameworks should foster innovation while protecting rights.
Topics: Human Rights, Technical Standards, Individual Protection
Importance of balance between technological innovation and protection of human rights in telecom
Supporting facts:
- Regulation must be flexible, adaptable, and promote innovations
- Balance is central to promoting technological innovation alongside human rights protection
Topics: Technological Innovation, Human Rights, Telecommunications Regulation
Report
Emerging technologies considerably influence human rights, with a neutral stance underscoring the imperative for trust-building and inclusivity in their digital advancement. This perspective recognises the dual nature of technologies as harbingers of both opportunities and challenges, while stressing the importance of prioritising individual rights in the digital evolution process.
Technical standards are lauded for their crucial role in facilitating global interoperability, ensuring that myriad technological systems can function together seamlessly. Importantly, the integration of human rights into these standards is advocated, illuminating the need for technologies to operate efficiently while safeguarding fundamental human rights.
There is a clear consensus that human rights principles must be the cornerstone of technical standard development, fostering frameworks that both stimulate innovation and protect individual rights. The argument for integrating human rights into technology evolution further reinforces this, particularly with advancements in areas like artificial intelligence and digitalisation, which must adhere to human rights tenets.
In the telecommunications sector, the creation of regulatory frameworks is discussed, highlighting the necessity for them to be adaptable and flexible. Such characteristics are paramount for driving innovation and at the same time, safeguarding human rights. Striking a balance between technological innovation incentives and the imperative of safeguarding human rights emerges as a core theme.
The collective narratives underscore the essential convergence of fostering innovation with the strict adherence to human rights standards. Regulations are called upon to evolve alongside technological progress while upholding ethical standards and human rights compliance, aligning with objectives of both SDG 9: Industry, Innovation and Infrastructure, and SDG 16: Peace, Justice and Strong Institutions.
Overall, this analysis offers a comprehensive examination of the relationship between technological advancement and human rights. The shared outlook is cautiously optimistic, conditional upon the responsible deployment of technology, steered by a principled commitment to innovation and infrastructure guided by human rights.
OA
Olivier Alais
Speech speed
158 words per minute
Speech length
1005 words
Speech time
382 secs
Arguments
Technical standards are crucial for technology interoperability and market access
Supporting facts:
- Standards help technology work together and ensure compliance with market rules
Topics: Interoperability, Market Access
Standards can safeguard human rights by ensuring privacy, freedom of expression, non-discrimination, etc.
Supporting facts:
- Standardization processes are increasingly considering human rights implications
Topics: Human Rights, Privacy, Freedom of Expression, Non-Discrimination
Human rights considerations should inform the technical standards development process
Supporting facts:
- Resolutions from UN General Assembly and Human Rights Council emphasize protection of human rights online
Topics: Human Rights, Technical Standards Development
E-waste management and telehealth accessibility are current examples where ITU groups are addressing human rights
Supporting facts:
- Study Group 5 is drafting standards for sustainable e-waste management, Study Group 16 is developing standards for accessible telehealth platforms
Topics: E-waste Management, Telehealth Accessibility, Human Rights, ITU Study Groups
Report
Within the landscape of technological standards, there is a growing recognition of their pivotal role in enabling both interoperability among technologies and access to international markets. These standards are linked with essential developmental goals, such as Sustainable Development Goal 9 (SDG9), which promotes innovation and infrastructure to bolster resilient industries.
By facilitating technology interaction and ensuring market regulation compliance, these standards foster an efficient and inclusive marketplace. Furthermore, technical standards are increasingly utilised to uphold human rights, addressing critical issues related to privacy, freedom of expression, and non-discrimination—core tenets aligned with Sustainable Development Goals 10 and 16 (SDG10 and SDG16).
These goals champion reduced inequalities and advocate for peaceful and inclusive societies, respectively. Standardisation processes now consciously consider the impact of technology on society and, specifically, on individual human rights. There is positive sentiment about the growing collaboration among key institutional bodies such as the International Telecommunication Union (ITU) and the Office of the High Commissioner for Human Rights (OHCHR).
Their joint efforts to embed human rights standards into global technical frameworks underscore the commitment to Sustainable Development Goal 17 (SDG17), which promotes partnerships for sustainable development. United Nations General Assembly and Human Rights Council resolutions stress the importance of protecting human rights in digital spaces, reinforcing the imperative that human rights considerations are integral to the development of technical standards.
The ITU’s Study Groups exemplify this practical integration. Study Group 5 is formulating standards for sustainable e-waste management, responding to Sustainable Development Goal 12 (SDG12), which encourages sustainable consumption and production. Simultaneously, Study Group 16 is generating standards that aim to enhance telehealth platform accessibility, directly contributing to Sustainable Development Goal 3 (SDG3), focused on universal health and well-being.
These initiatives reflect a paradigm shift in standardisation practices, merging technical benchmarks with societal values and human rights principles. The harmonisation of detailed technical processes with an unwavering adherence to human integrity signifies a progressive transformation in global technology governance.
Evidently, the interconnection between technology, human rights, and global development is becoming increasingly synergistic, leading to a more inclusive and just digital era. The discussion covered global development goals, human rights protection, technical standards for technology interoperability, and the pivotal role of international partnerships in fostering sustainable policy-making and standard-setting.
It illustrates how technology governance is being steered towards inclusivity and sustainability, ensuring technology serves wider societal interests and ethical considerations.
PH
Peggy Hicks
Speech speed
206 words per minute
Speech length
2257 words
Speech time
658 secs
Arguments
Standard-setting processes can take years and require long-term commitment.
Supporting facts:
- Peggy Hicks mentioned that two years is a typical duration for standard-setting processes, but even that is considered short.
- Participants in the conversation have experienced standard-setting processes, possibly in different contexts.
Topics: Standard-setting, International Collaboration, Policy Development
Report
The process of setting standards is widely acknowledged as a lengthy one, with a typical duration spanning at least two years, a timeline often considered to be quite short in the context of such complex activities. Participants engaged in International Collaboration, Policy Development, and meeting objectives aligned with SDG 16 (Peace, Justice and Strong Institutions) and SDG 17 (Partnerships for the Goals) are advised to prepare for an extended commitment.
This readiness is crucial for fostering peace, establishing justice, and ensuring the development of robust global institutions, alongside building partnerships that have a significant impact. Peggy Hicks, an active participant in these discussions, has emphasised the need to anticipate a substantial time frame when engaging in standard-setting activities.
She has enquired about the duration of other participants’ involvement, establishing that a two-year effort, though notable, may merely be the outset of a potentially more prolonged endeavour. The understanding that two years is relatively brief suggests a common expectation of extended commitments in standard-setting processes.
Other participants, leveraging their own experiences, have contributed insights into the standard-setting processes they have been involved with across various sectors. Their collective experiences help illustrate the complexities and collaborative nature of establishing internationally recognised standards that necessitate sustained, long-term dedication.
The sentiment towards these extended processes is generally neutral to positive, signifying acceptance and support for the required time and effort to establish effective standards. This viewpoint likely reflects the high esteem in which the goals of global peace, justice, and the efficiency of global partnerships are held, as endorsed by the Sustainable Development Goals.
In sum, the discussions and experiences concerning the standard-setting process in policy development and international governance highlight an undertaking that, while intensive and time-consuming, is integral to achieving long-term sustainability, institutional integrity, and the enhancement of global governance standards.
SM
Sergio Mujica
Speech speed
173 words per minute
Speech length
1338 words
Speech time
464 secs
Report
The International Organisation for Standardisation (ISO), with members from 170 countries, plays a pivotal role in establishing a universal language of standardisation to ensure the quality and reliability of products and processes. It acknowledges the intrinsic link between human rights and technical standards, wielding them as tools to combat injustice and the abuse of power.
ISO encourages standard-makers to leverage existing international standards in areas such as anti-bribery, social responsibility, and safety, to avoid redundancy – with venues like the upcoming Paris Olympic Games set to exemplify ISO’s sustainable event certification. ISO cultivates its standards through a meticulous process that places emphasis on consensus-building, transparency, inclusivity, and stakeholder engagement – principles reflecting the organisation’s core values.
There is a marked effort to include voices from developing countries in the standardisation dialogue to prevent any geographical or elite exclusivity. This incorporates the broad participation of stakeholders including IT experts, consumers, academics, governments, and civil society. When it comes to Artificial Intelligence (AI), ISO is proactive in balancing the technology’s potential with risk mitigation, entrusted to a dedicated technical committee that has developed a pivotal management standard for AI in line with international best practices.
In its collaboration with regulators, ISO defines itself as a facilitator of public policy implementation, rather than as a policymaker. It also clarifies that new standard initiatives are proposed by national member bodies, not the central body in Geneva. While the process of developing standards is inherently time-consuming due to its commitment to value, ISO has demonstrated capacity to expedite this process without compromising its principles, as evidenced by the accelerated development of standards during the COVID-19 pandemic.
However, challenges in inclusivity remain, with a recognition that standards may sometimes be perceived as favouring the wealthy and large corporations. There is an ongoing need to more effectively engage with civil society and developing nations to ensure standards are globally relevant and context-sensitive.
In summary, ISO acknowledges the critical role human rights organisations can play in the standards development process, encouraging their active input on standards that significantly impact human rights. ISO underlines the importance of educating its network of experts on human rights to integrate these fundamental values into technical considerations.
This illustrates a commitment to embedding ethical, social, and human rights considerations into ISO standards, aiming to harmonise technical excellence with justice and fairness.
Related event
World Summit on the Information Society (WSIS)+20 Forum High-Level Event
27 May 2024 - 31 May 2024
Geneva, Switzerland and online