The Government’s AI dilemma: how to maximize rewards while minimizing risks?
29 May 2024 16:00h - 16:30h
Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.
Session report
Full session report
Experts discuss the impact of AI on global governance and the need for international cooperation
During a panel discussion moderated by Robert F. Trager, experts from Uruguay, Namibia, and India delved into the multifaceted impact of Artificial Intelligence (AI) on their nations, exploring its benefits and risks, as well as the importance of international governance in managing AI’s development.
Mercedes Aramendia Falco of Uruguay outlined her country’s strategic approach to AI, which began with a national strategy in 2020 and has since been updated to incorporate UNESCO’s recommendations. She highlighted Uruguay’s Ceibal Plan, which has significantly improved digital access for public school students, as a testament to technology’s positive influence. Falco emphasized the need for transparency and accountability in AI applications, particularly to mitigate risks such as privacy breaches and data leaks. She advocated for the adoption of international standards to ensure responsible AI deployment while safeguarding human rights and fostering innovation.
Emma Inamutila Theofelus from Namibia discussed the challenges her country faces due to its large landmass and small population. She noted AI’s potential to enhance the delivery of health services and the allocation of educational resources. However, Theofelus also highlighted the prevalence of cyber attacks in Namibia, stressing the need for improved digital literacy to protect citizens. She called for an international governance framework that allows for flexibility to accommodate the unique circumstances of each country, especially those in the global south.
Niraj Verma of India discussed AI’s potential to significantly boost India’s GDP by 2030. He cited the success of the UPI mobile payment system, which has democratized financial transactions even in rural areas. Verma also acknowledged the rise in cyber crimes and detailed the government’s response, including the shutdown of millions of SIM cards linked to cyber attacks. He pointed to the Global Partnership on AI (GPAI) as a crucial platform for collaborative AI governance.
The panelists also addressed the proposal of AI Safety Institutes. Falco supported the idea, seeing international standards as a way to balance innovation with ethical considerations. Theofelus, however, suggested that for countries like Namibia with limited resources, it would be more practical to incorporate AI governance functions within existing institutions. Verma was in favor of establishing an AI Safety Institute in India to set guidelines and standards for startups and to help mitigate risks.
In summary, the panelists agreed on AI’s transformative potential and the need for national strategies to harness its benefits while acknowledging the associated risks, particularly in privacy and security. They emphasized the need for robust governance and international cooperation to address these challenges. While international governance frameworks were seen as beneficial, there was a consensus that they should be adaptable to the specific needs and capacities of individual countries. The concept of AI Safety Institutes received mixed reactions, with the emphasis on the need to tailor the approach to the resources and existing infrastructure of each country. The discussion underscored the importance of balancing innovation with ethical considerations and the need for a collaborative approach to maximize the benefits and minimize the risks of AI.
Session transcript
Robert F. Trager:
We have good music here. Thank you. All right. I’d like to ask you about the good and the bad of AI and also invite you to respond to each other if you would like to as we go and make this as much of a conversation as we can. So the first thing I’d like to hear from each of you is an example of where AI is a force for good and an example of where you think your country needs to minimize the risks posed by AI. So Mrs. Falco, since you’re here on my right, maybe we can start with you.
Mercedes Aramendia Falco:
Hello. Good afternoon, everyone. Thank you very much, ITU, for the invitation. It’s a pleasure and an honor to be here. And I also think it’s great to have the opportunity to share our experience and to work together in order to form a stronger partnership. I think that the universalization of knowledge is essential, so I will switch to my native language, Spanish, in order to work on that. Now let’s move to Spanish. Good afternoon to everyone. It is a pleasure to be here with you. I want to congratulate Doreen and the whole ITU team for organizing this gathering, which helps us all work together so that artificial intelligence can be developed and universalized, cutting across everything we do. I am the president of Uruguay’s communications regulator. I am the president of the Communication Services Regulatory Unit, URSEC. And as you know, I’m involved in a lot of things and a lot of different aspects in Uruguay. There are a number of challenges that students face when they try to understand what artificial intelligence (AI) is, how they can learn to better use and implement it, and what the different risks associated with AI are. In Uruguay, we’re using AI in a number of different areas. But let me start by saying that we have a national strategy for AI, which dates from 2020. In 2021, following the recommendations of UNESCO, we understood the need to update that strategy on AI. So we are currently collaborating with the different actors of the whole ecosystem to update the national AI strategy, to ensure that the public and private sectors contribute to it and that there are representatives from the entire ecosystem. We are basing it on all the principles recommended by UNESCO, to make it easier to secure the engagement of all the different actors in this ecosystem.
This will also help us to ensure that there is transparency and accountability in the implementation of AI. Regarding use cases, I would like to highlight the Ceibal Plan. This is a plan we have been implementing since 2007, which helps us universalize access for all students in public schools and ensure that they have access to the internet and to laptops. This plan has been evolving constantly, because we cannot rely on just one concrete action; we need many parallel activities to ensure that AI has a positive impact and that students can access the internet and be connected. We are analyzing data, and to do so, we are using AI. Of course, AI has risks. What is the main risk that we have identified? Privacy, and the risk of leaks. We need to be able to identify the risks and to work on them. We need to use AI to improve security. For example, in sport, we want to ensure that there is no harassment and there are no violent acts. At the same time, profiles could be created on different people, which could pose a risk to their privacy or security, or that data could be misused. There are also use cases in the finance sector. We want to make better decisions in real time, and we want to make better use of resources. Of course, there are risks associated with this sector as well. We want to ensure that there is transparency, that data is not used in a bad way, and that there are no security breaches. That’s why we need to ensure that data is used in a just manner. To do so, we need to ensure that data is open and transparent, and we need to have a degree of control over the algorithms. AI can also be used in medicine. For example, we can predict certain types of illnesses to reach better diagnoses. Of course, this data is extremely sensitive. So what would happen to that information if there were a security breach and it fell into the wrong hands?
Well, of course, this could have a negative effect on the privacy, security or safety of those people. It could also lead to discrimination. We also see a very good use case in productivity. In agriculture, for example, we could use sensors or drones to ensure better monitoring and better use of data. This would help us to forecast and to make better use of our resources. Of course, there are risks associated with this sector too: we need to ensure that we have good-quality data. As you can see, there are many use cases; these are just some examples, and you can always see the risks associated with them. What we need to do is identify those risks and then work on overcoming them. In 2020, we set up our very first national strategy for AI, and now we are updating that strategy so that the new one is based on international standards, associated with the public sector but also incorporating the private sector. This is key. In parallel, we are also working on a data strategy, and we are setting up a national strategy on digital citizenship and cybersecurity. Thank you very much for your attention.
Robert F. Trager:
Okay, thank you so much. Thank you. Yes, and also for the many examples. So, appreciate that. I’d like to turn to our other panelists. So, Minister Theofelus, if you could, one example of the good and the bad as you see it in Namibia.
Emma Inamutila Theofelus:
Thank you so much, Robert. And I’m very happy to be on this panel with Niraj and Mercedes, especially looking at AI, where developments could be either good or bad. And that’s the dilemma with innovation and with new and emerging technologies: the use-case scenario is that anybody, with good or bad intentions, can have access to them. AI for good, a Namibian case scenario: we have such a large country and such a small population. Namibia has 3 million people and a landmass of about 825,000 square kilometers, which means that expanding services across the country can be quite expensive. This requires a lot of investment, and a lot of thought about where you actually get the best outcomes for your dollars. I can give the health sector as an example, where we try our best to localize health services. But many times you do not necessarily know where you need to put a dialysis center, depending on the need, or where a cancer ward should be the priority of a particular health facility, and in many other areas of health we face the same situation. But with data collection and the necessary data, whether during a census or through routine ways of collecting data, we are able to prioritize the best places to put the necessary services, depending on the need: saving lives, saving money in terms of investment, but also ensuring that the healthcare system actually functions as it is supposed to. Secondly, it would be around education as well. We have various sectors in which we need to build capacity in our country, but many times we do not have the right data to determine which industries will need the labor force in the next three to five years, and therefore which students we need to train right now and which academic programs to prioritize at university level.
So we find ourselves three or five years later with a shortage of XYZ specialties, because we didn’t have the data three or five years ago to plan in time for those skills. So we’re looking at AI to bridge that gap, so that we’re able to train the necessary students in the necessary skills that we’ll need in the industries of the future. And of course, on the other side of the coin, the same is true. Namibia, for example, today records around 2.7 million cyber attacks, especially in the financial sector. And this is because, for one, we don’t have the capacity, in terms of digital literacy among our citizens, to actually train them to protect themselves online. And we cannot reach every single person, because there are barriers in terms of distance and barriers in terms of rolling out facilities for them to get trained. We can find better ways to do that with the support of programming that responds to people in the vernacular languages they understand, so that they can learn the skills to be safe online. And I think AI can be that bridge builder: ensuring that government reaches people, that they have the right information, that they’re able to get the necessary capacity building and skills, but also that our sectors, whether health or education, are able to deliver the outcomes of a healthy and productive nation once we know where to focus our investment as a state. Thank you.
Robert F. Trager:
Thank you very much. Yes. Secretary Verma, the floor is yours. An example of where it’s a force for good and an example of where it’s a risk requiring governance.
Niraj Verma:
Thank you. Before that, I’ll mention that there’s a recent study which says that in India, AI will boost its GDP by 2030 by up to 360 to 460 billion dollars. So this is the positive side of AI. I’ll share one use case where AI has been very good. We have the UPI system, a mobile-based payment system, where eKYC has increasingly been done through AI. Last year, approximately 300 billion transactions took place, and the total amount involved was $2 trillion. If you go to India, you’ll find poor people in rural areas, those who are selling vegetables; there was a complete digital divide earlier, but now they have got UPI and they are making payments. You can make payments through digital UPI. So this is a great thing which is happening in India. One bad thing is the increase in cyber attacks and cyber crimes in India; they have become very, very widespread now. We are getting calls originating both inside India and outside India. We have taken steps in the Department of Telecommunications. We have started a portal called Sanchar Saathi, and we have a tool which is ASTR. I’ll share the magnitude of the problem: in the last three to four months, we have disconnected six million SIM cards because of cyber attacks. Many of those SIMs are no longer there; the attackers are using IVRs and all that. So this is the problem which is emerging here. Thank you.
Robert F. Trager:
Thank you very much. I want to follow up with one question for you on the regulation of larger models, because according to press reports in India, there has been, I think, a new approach of requiring the very largest models to receive permission from the government prior to deployment. I’m reading press reports here, so this is also a question. That seemed to represent a bit of a change from a kind of hands-off policy when it comes to regulating the largest AI models. So I wanted to ask you about that and how you see the regulation of the largest models. I couldn’t get permission for what? Oh, for deployment of the largest models, the largest AI models.
Niraj Verma:
I’m not aware of this, but yeah, I’m not aware.
Robert F. Trager:
That’s fair. I sort of apologize. I was just very interested in this; there are some really interesting articles, and I’d be happy to send them to you later, but I apologize for pursuing this interest. Actually, what I want to do now is turn to international governance. So if I could, Minister Theofelus, what approaches to international governance do you find most appealing, including the ones that we heard about earlier today and in general? Which path should we be pursuing?
Emma Inamutila Theofelus:
Okay, so from our national perspective, I think as governments we see that many times, many frameworks come up from multilateral organizations of different kinds, from either the United Nations or other state actors or non-state actors, that feel that this is the way to go: this is the governing framework you should follow as a state, these are the principles you need to adopt, follow these guidelines. And many a time, it’s quite confusing. You have some guidelines at a regional level, then there’ll be another one at the continental level, then others at an international level. And our perspective in Namibia is that we look at the African proverb that says, if you want to go fast, go alone, but if you want to go far, go together. Looking at that, we want to see some consensus around a governance framework that works for the majority, for everybody, with common principles or common values. Of course, there are questions about whose principles and whose values we’re adopting, but there must be a way in which we can all come to common ground. And once we take that approach, I think it becomes easier for national states and national governments to look at that governance framework and tailor it to their own country. Because I know that, as Minister of ICT, I do not want to reinvent the wheel or feel pressure that I need to jump onto a particular framework for fear of being left behind. If anything, I want to be reassured that this framework is something whose use case and usability in my country would actually make sense, and that we would be able to adapt at the pace at which our country is at, because the reality is that we are not all at the same level in terms of connectivity, affordability of devices, capacities in our countries, frameworks, regulations, or legislation.
So everybody is working towards adapting to AI models at their own pace, but there must be some commonality which everybody can agree to, so that we can limit the risks, the cross-border risks, but also allow governments, especially governments of the global south, to take a governance framework that fits the use case for their citizens and that also makes sense at the level where they’re at. So I think that is the approach we’ll take as an African state in the global south. Thank you.
Robert F. Trager:
Thank you so much. So I’d like to go to ask the same question to Ms. Falco. So which approaches are the most appealing?
Mercedes Aramendia Falco:
I think it depends on the specificities of each country. In Uruguay, as I previously mentioned, we started off with our own strategy on AI, which we set up in 2020. Then we followed the international discussions on this matter. We followed UNESCO’s recommendations and the resolution that was adopted in 2024. We recognize that artificial intelligence is essential. We believe that we have to respect human rights and ethics, whilst at the same time promoting innovation in a safe way. So, while following UNESCO’s recommendations, we also want to constantly assess the situation in our own country, because there are lots of aspects in which we’re very well placed. For example, the protection of people, in which we are quite well placed, and the analysis of artificial intelligence together with data. But there are also lots of other aspects in which we really need to work far more, including with the private sector. We need to develop talent and create more capacity building for our population. In this sense, following UNESCO’s recommendations, we have started a brand new process. We have approved a national law under which a national agency for everything regarding the information society has been set up, and it has created a new strategy that incorporates the international standards. The public and private sectors participate in this strategy. There are many different meetings and roundtables, in which more than 400 people have participated. These bring together academia, civil society, various representatives of governments and agencies, ministries, etc. They all made proposals and are working on this basis to update our national strategy.
Parallel to this, this agency has been asked to work together with the regulator for the protection of personal data to establish recommendations and send them to parliament, to see whether it would be necessary to approve a law in line with the recommendations of UNESCO, based on principles like equity, non-discrimination, responsibility and social responsibility, among others. We have also focused on the need to work simultaneously on the national strategy on artificial intelligence and the national strategy on data. Because, as we have previously seen in the different use cases, there are risks associated with AI, and they are often associated with data protection, particularly personal data, and the possibility that it might be used in a way that isn’t in line with the standards. For example, there may be security breaches, the data may be confidential or sensitive and need special protection, or there could be discrimination or bias built into it that wasn’t seen beforehand, and there are various ways of going about this. So we need to take those considerations into account, as well as interoperability, and ensure those international standards, which are key to helping countries build a common foundation, a common basis on which we can all work. We believe that work at the international level is important, as well as work at the domestic level, so that we can learn from each other and all work together to ensure that artificial intelligence can continue to be developed worldwide; so that there are use cases that promote development and push us further forward; so that we can be more productive in a more sustainable manner in our countries; and so that we can control the risks that are there while keeping an eye on human rights and ethics. Thank you very much.
Robert F. Trager:
Thank you. I’d like to turn now to Secretary Verma and ask what approaches you favour, and also ask you about the Global Partnership on AI, of which India is the chair in 2024. And for those of you who don’t know, GPAI, as it’s called, is a multi-stakeholder initiative with 29 member countries, and reportedly France, which is the convener of the next AI Action Summit, is interested in exploring further what could be done with the GPAI process. So I’m wondering if you could comment on how you see GPAI creating positive outcomes, as well as what is appealing to you in international governance broadly?
Niraj Verma:
Yeah, I think that GPAI is going to be very important, because this is an area where the action has to be collaborative. In India, in pursuance of that, NITI Aayog, which is our planning commission, has set up the India AI Mission, and this mission is looking at various aspects like AI in governance, AI in IP and innovation, AI compute and systems, data for AI, skilling in AI, and AI ethics and governance. We have a department which deals with IT, and AI is anchored there, but all the departments are involved in various use cases of AI. So I feel that, because ultimately what matters is what framework we have and how we can minimize the risk and maximize the benefit, this collaborative approach is going to help India and the nations together.
Robert F. Trager:
Great, thank you very much. I think we have just a little time for one last question, and there’s been some discussion of the AI Safety Institute, so I thought maybe that would be a really interesting thing to get all of your views on. Somebody just a little while ago on this stage suggested that it was very important for all countries to stand up an AI Safety Institute, and I think what I want to ask is not so much that, but the question is how you think about the interaction between your governing approach and the safety institutes, which now I think there’s some talk about creating more of a network of safety institutes, so what plans do you have for interacting with that network? How do you think about it? Maybe we could start with Ms. Falco.
Mercedes Aramendia Falco:
We base ourselves on international standards. Yesterday we had a panel. I was the ISO representative there, and we were talking about standards. We were talking about the importance of checks and balances and maintaining that balance between respecting human rights but also promoting innovation. If we relate that to standards, how do we bring standards into that fine line that is that balance? How do we maintain the promotion of innovation because, of course, that needs to help promote different aspects of our country, to help us develop ourselves, and of course in a sustainable manner, but that also needs to be accompanied by risk analysis, of which I already spoke, and those risks, of course, can be seen in human rights, so standards help us to be able to regulate. That’s the work of policy makers in general and also companies when they have to develop their own standards. They have references. They have guidelines which help them to be able to go down the right line, and that just simplifies everyone’s work. It helps us to develop knowledge, and then that when we have to go to develop standards, well, then we can bring in the participation of technicians, experts, and other stakeholders in the ecosystem so that we can best reflect the security of everyone and create clear rules.
Robert F. Trager:
Thank you so much, and we just have a couple of minutes, so I have to ask you each just to say one minute, if you wish to, about your view on the AI Safety Institutes as a network, please.
Emma Inamutila Theofelus:
Well, I think I want to provoke some thought. Even the words “AI Safety Institute” already presume that there’s some threat, as opposed to looking at the positive side of things. So maybe let’s scrap the “safety” and just say “AI Institute”, as opposed to presuming that AI, primarily and at its core, is a threat. But secondly, for a country like mine, with three million people, we do not have the luxury of having an institution for everything. We don’t have endless resources to do it, and it would require a lot of investment. We have to leverage existing infrastructure and existing institutions. So from our perspective, we would rather have something like a department, or some other form of forum within an existing institution, to take up that role. We did it with our CERT around cybersecurity, and we’re considering doing it for our data protection. So for countries like ours, it rather becomes just a function in an existing institution, whether it’s a governance framework, legislative frameworks, or standards, supporting that role within an institution already in the country. Thank you.
Robert F. Trager:
Yeah, thank you so much. Yes.
Niraj Verma:
I think that it is a good idea to have an AI Safety Institute, because my country is a big country, there are a lot of startups, and there would be some safeguards and guidelines that they would follow. And while we do that, a standard is getting set up. We will also learn what direction we should go in, and we will be able to minimize the risk and maximize the gain. Thank you.
Robert F. Trager:
Thanks so much. Please join me in thanking this panel.
Speakers
EI
Emma Inamutila Theofelus
Speech speed
169 words per minute
Speech length
1334 words
Speech time
474 secs
Report
Emma Inamutila Theofelus set out the implications of AI for Namibia, a country of around 3 million people spread across roughly 825,000 square kilometres, where delivering services nationwide is costly and investment decisions must be carefully targeted.
She described how data-driven analysis can guide the placement of health services, such as dialysis centres and cancer wards, according to need, saving lives and money while keeping the healthcare system functioning as intended. In education, she argued that AI could help forecast which industries will need labour in three to five years, so that universities can prioritise the right academic programmes and avoid future skills shortages.
On the risks, Theofelus noted that Namibia records around 2.7 million cyber attacks, particularly in the financial sector, and that limited digital literacy and barriers of distance make it hard to train every citizen to stay safe online. She suggested that AI-supported programming in vernacular languages could help bridge that gap.
On governance, she cautioned against the confusion created by overlapping regional, continental, and international frameworks, and called for consensus around common principles that national governments, especially in the global south, can adapt at their own pace and to their own circumstances.
Finally, she questioned the framing of an “AI Safety Institute”, suggesting the name presumes AI is primarily a threat, and argued that resource-constrained countries should house such functions within existing institutions rather than create new ones.
MA
Mercedes Aramendia Falco
Speech speed
147 words per minute
Speech length
1883 words
Speech time
769 secs
Arguments
International standards are central to regulating AI and maintaining balances.
Supporting facts:
- Aramendia advocates for the use of international standards by policymakers and companies.
- Standards provide guidelines that simplify work and help to navigate the balance between human rights and innovation.
Topics: AI Governance, International Standards
Standards facilitate the balancing act between human rights and innovation.
Supporting facts:
- There is an emphasis on the importance of maintaining a balance between respecting human rights and promoting innovation.
- Standards help to define clear rules that ensure security and human rights are considered.
Topics: Human Rights, Innovation, AI Regulation
Risk analysis is crucial for sustainable AI development.
Supporting facts:
- Aramendia mentions that risk analysis is important in AI governance.
- Risk analysis is linked to understanding consequences on human rights.
Topics: Risk Analysis, Sustainable Development
Standards development should be a collaborative process with multiple stakeholders.
Supporting facts:
- Standards are best reflected when there is participation from technicians, experts, and stakeholders.
- Collaborative development of standards ensures the security and interests of all parties.
Topics: Collaboration, Stakeholder Engagement
Report
Mercedes Aramendia fervently supports the implementation of international standards in the governance of artificial intelligence (AI), contending that these frameworks are vital for maintaining a balance between spurring technological innovation and upholding human rights. Representing her views on panels for the International Organization for Standardization (ISO), Aramendia highlights the importance of international standards as pivotal guidelines that assist policymakers and corporations in navigating the intricate regulatory landscape of AI development.
A key aspect of Aramendia’s argument is the role of these standards in providing clarity and consistency, streamlining the work for AI developers and mitigating associated risks. By adhering to established international guidelines, developers gain a clearer understanding of AI’s impact on human rights, fostering security and preventing potential violations.
Aramendia’s perspective on AI governance recognises the synergy between technological advancement and societal welfare, advocating that the transformative potential of AI should not come at the expense of human rights. She emphasises that standards not only define clear rules integrating safety and human rights considerations but also create an environment that stimulates innovation.
Highlighting the value of inclusive participation in setting these standards, Aramendia stresses the necessity of engaging a diverse array of individuals and groups, encompassing technicians, experts, and stakeholders. Such a participatory approach not only democratises the standard-setting process but also ensures that the diverse interests and security of all involved are taken into account.
Aramendia’s stance intersects with various United Nations Sustainable Development Goals (SDGs). Her support for international standards in AI correlates with SDG 9, which focuses on resilient infrastructure, sustainable industrialisation, and innovation, and with SDG 16, which relates to peaceful societies, access to justice, and accountable institutions—highlighting the connection between regulatory frameworks and the preservation of human rights.
Moreover, the spirit of SDG 17, which is centred on strengthening partnerships to achieve the SDGs, resonates with her advocacy for collaboration in standard development. In summary, Aramendia’s optimistic view on the incorporation of international standards in AI governance champions both innovation and human rights, proposing a path to a more sustainable and just technological future.
Her emphasis on collective responsibility for the creation and adherence to these standards suggests that ethical considerations of AI should be fundamental to the innovation process from its inception.
NV
Niraj Verma
Speech speed
161 words per minute
Speech length
543 words
Speech time
202 secs
Report
A study cited during the session anticipates that artificial intelligence (AI) could add $360 to $460 billion to India’s GDP by 2035, reflecting an optimistic outlook on the economic benefits of AI. A key example of AI’s positive impact is the Unified Payments Interface (UPI) system—a mobile-based payment platform in India that processed nearly 300 billion transactions worth approximately $2 trillion last year.
This system has notably reduced the digital divide and provided benefits to economically weaker sections, including rural vendors who have transitioned to digital payments. However, India faces a growing threat from cyber attacks and cybercrime, with a high volume of such incidents both domestically and internationally.
In response, the Department of Telecommunications has taken initiatives such as launching the Sanchar Saathi portal and introducing the Astra app, which focus on providing protection and raising awareness among citizens. Significantly, India has recently disconnected six million SIM cards linked to cyber threats, underscoring the seriousness of the problem.
NITI Aayog, India’s policy think tank, has launched the India AI Mission to address the multifaceted aspects of AI, such as its use in government, innovation, intellectual property, data management, skill development, and considerations of ethics and governance. This initiative demonstrates the Indian government’s collaborative, multi-departmental strategy for realising AI’s full potential while mitigating associated risks.
Moreover, the discussion includes the proposition to create an AI Safety Institute, aiming to ensure the responsible development and use of AI. The emphasis on guidelines and standards, particularly within India’s bustling startup ecosystem, underpins the nation’s dedication to creating AI benchmarks that promote both safety and innovation.
Establishing such an institute would offer guidance to startups, contain risks, and safeguard the benefits of AI advances. In summary, AI presents a dichotomy of effects in India: the promise of significant economic growth and of digital payment systems accessible to all levels of society, paired with the challenge of increased cyber threats.
India’s comprehensive strategy to embrace AI opportunities while concurrently addressing potential dangers showcases a balanced and discerned approach, involving strategic collaborations, targeted institutional efforts, and well-crafted regulatory frameworks.
RF
Robert F. Trager
Speech speed
162 words per minute
Speech length
810 words
Speech time
300 secs
Report
The panel discussion offered a rich exploration of AI’s multifaceted influence on society, highlighting the potential benefits and challenges it presents to governance. Each panellist contributed distinctive perspectives based on their respective nation’s experiences and strategies regarding AI implementation. Opening the debate, Mrs. Falco underscored AI’s role as a force for good within her national context, although the specifics were not detailed in the provided summary. She stressed the necessity of stringent regulation to mitigate the potential negative ramifications of AI technology. Minister Theofelus, representing Namibia, also shared her insights.
Although specific examples were not provided, it can be deduced that she advocated a well-rounded approach to capitalise on AI’s advantages while ensuring protective measures against its risks, particularly within the setting of a developing nation. Secretary Verma’s address shed light on India’s evolving perspective on the regulation of large AI models.
It was implied that India is formulating new regulatory frameworks mandating that sizeable AI architectures seek governmental authorisation before deployment. This move indicates a shift towards stricter regulation, diverging from India’s historically more lenient regulatory environment. The ensuing discussion brought to the fore crucial questions about the equilibrium between fostering innovation and exercising regulatory control, with Verma’s responses reflecting a sophisticated grasp of the potential need to escalate supervision of advanced AI systems.
When the dialogue turned to the complexities of international collaboration on AI, the Global Partnership on AI (GPAI) was recognised as a key platform for uniting diverse stakeholders dedicated to responsible AI development. With India chairing the GPAI until 2024, Secretary Verma was called upon to elucidate India’s stance on global AI governance, and to opine on how the GPAI might enable the pursuit of favourable outcomes.
The conversation then acknowledged the value of AI Safety Institutes in promoting the secure progression of AI technologies. The notion of creating an international network of such institutes was proposed, envisaging a collaborative model for global AI safety and the dissemination of best practices.
Panellists were prompted to reflect on how this network could complement or shape their domestic policies and governance approaches. Owing to time restrictions during the closing phase, the panellists offered only brief remarks on their views of the role of AI Safety Institutes.
They collectively recognised the significance of these bodies, whilst alluding to the intricate dynamic between national regulation and global cooperation. In summary, the panel shed light on the diverse pathways nations may adopt in contending with AI’s rapid development. It emphasised the imperative for decisive governance to balance the risks of AI against its capacity to benefit society.
The discussion concluded with a consensus that both international governance frameworks and safety institutes are crucial in steering the responsible and coordinated global advancement of AI for the common good.