Chief AI scientist at Meta states that AI won’t permanently displace jobs

Prof. Yann LeCun, the Chief AI Scientist at Meta, believes that AI technology will not take over the world or cause permanent job loss. In 2018, Prof. LeCun, Geoffrey Hinton, and Yoshua Bengio won the Turing Award for their breakthroughs in artificial intelligence, earning them the nickname 'godfathers of AI.' LeCun, however, disagrees with his fellow pioneers on the question of whether AI poses a threat to humanity.

Prof. LeCun believes those who worry about AI being a threat to humans cannot imagine how it can be made safe. He also believes that computers will eventually surpass human intelligence, but that this is still years away. In his view, if it is determined that this technology cannot be made safe, it should not be pursued.

Regarding AI’s impact on jobs, Prof. LeCun states that AI has the potential to change many jobs but will not permanently displace many people. Instead, he believes it will create a new renaissance for humanity. He also commented on Europe’s AI Act, stating that AI start-ups believe it is too broad and restrictive and that each AI application should have its own rules.

US citizen sues ChatGPT for defamation

OpenAI is facing a defamation lawsuit over ChatGPT's false identification of a US citizen, Mark Walters, in connection with a case involving a pro-gun foundation and the alleged embezzlement of funds. According to the filing, the false information was generated after a local journalist gave the chatbot the URL of the Second Amendment Foundation's (SAF) complaint against Washington State's Attorney General over harassment accusations and asked for a summary. Walters claims that ChatGPT's summary falsely identified him as the Chief Financial Officer of SAF and accused him of defrauding the organisation. In addition, his lawyers claim that ChatGPT erred in describing the case as one concerning financial and accounting claims. Walters became aware of the situation only after the journalist contacted him to fact-check the information. As such, OpenAI is accused of negligence, libel, and reckless disregard for the falsity of the communication.

Meta previews new AI tools on its platforms

Meta Platforms (formerly known as Facebook) recently previewed a range of new AI tools to its employees. These tools include ChatGPT-like chatbots for Messenger and WhatsApp that can adopt different personas in conversations. According to a Meta spokesperson, there will also be two Instagram features: one that modifies user photos via text prompts and another that creates emoji stickers for messaging services. The company also announced Metamate, a productivity assistant for employees that answers questions and performs tasks based on the company's internal systems. Meta has not released any generative AI products for consumers yet. However, it recently revealed a collaboration with a select group of advertisers to test AI tools for generating image backgrounds and different versions of written content for their ad campaigns. Meta CEO Mark Zuckerberg stated that recent progress in generative AI by Meta's AI divisions has enabled the company to incorporate this technology into every single one of its products.

Indian government is not considering any law to regulate AI

India has taken a firm stance by announcing that it has no intention of regulating AI. This announcement comes amidst growing concerns caused by the rise of generative AI tools like ChatGPT and Bard, which have prompted calls from influential voices to slow down.

While acknowledging the risks that AI can pose, such as privacy concerns, bias, and potential job displacement, India’s Ministry of Electronics and Information Technology appears to prioritize the positive impact that AI can have on the digital economy. The ministry believes that AI will strengthen entrepreneurship and businesses and play a significant strategic role in India’s future. While working on standardizing responsible AI guidelines to promote healthy industry growth, India diverges from the increasing alarm expressed by US and European policymakers.

The ‘What about China?’ argument has also been raised, suggesting that India’s approach to AI regulation sets it apart from China’s cautious regulatory environment. Although India has been a relatively late entrant in AI investment, it aims to create an environment conducive to AI innovation, especially if ambitious technologists feel constrained by regulatory frameworks in the US and other countries.

UK Prime Minister announces that the UK will host the first global summit on AI

UK Prime Minister Rishi Sunak has announced that the UK will host the world’s first major global summit on AI safety in autumn 2023. The summit aims to address the risks associated with the rapid development of AI and discuss international cooperation to ensure its safe and responsible use. The announcement coincides with the Prime Minister’s visit to Washington DC to discuss the UK-US approach to opportunities and challenges of emerging technologies.

The summit will focus on examining the potential risks associated with AI, including frontier systems, and will discuss ways to address these risks through collaborative efforts on a global scale. It will build on recent discussions at the G7, the OECD, and the Global Partnership on AI. Additionally, technology companies, including DeepMind, Anthropic, and Palantir, have expressed support for the summit and highlighted the importance of international collaboration in developing AI safely and ethically.

In parallel, the UK government plans to increase the number of scholarships for post-graduate STEM studies at UK and US universities to enhance expertise in these fields. The increased scholarships include a rise in Marshall scholarships and funding for five new Fulbright scholarships annually, focusing on STEM-related subjects, aiming to strengthen the mutual expertise of the UK and the US in future technologies.

Highlights from the Fourth EU-US Ministerial Meeting of the Trade and Technology Council


The fourth high-level Ministerial meeting of the Trade and Technology Council (TTC) took place in Luleå, Sweden, on 31 May 2023. Launched in June 2021, the TTC has become the primary platform for the United States and the European Union (EU) to coordinate their strategies on important global trade, economic, and technology issues. Over time, the TTC has evolved into a biannual gathering attended by senior transatlantic leaders, where they discuss common approaches, assess the progress made, and unveil crucial initiatives on various topics ranging from semiconductors and quantum computing to infrastructure investments.

This year's hot topic is AI: 'we must mitigate its risks.'

Parties have emphasised the need to address the risks associated with AI. They have reaffirmed their commitment to a risk-based approach in advancing AI technologies that are trustworthy and responsible. They recognise the significance of cooperation and aim to foster responsible AI innovation that respects rights, safety, and democratic values.

In pursuit of this objective, parties have proactively prioritised research and understanding of the implications of generative AI. Their goal is to establish guidelines and safeguards that promote this technology’s responsible and ethical use. As part of their efforts, they have announced the advancement of a Joint Roadmap for Trustworthy AI and Risk Management, which involves the establishment of three expert groups. These groups will focus on AI terminology and taxonomy, developing standards and tools for trustworthy AI and risk management, and devising methods to monitor and measure AI risks. This focused initiative will complement the ongoing G7 Hiroshima AI process, thereby strengthening the global approach to addressing challenges related to AI.

Cooperation: Parties have also reaffirmed their commitment to cooperation on various aspects of AI. Their collaboration extends to multilateral discussions within the G7 and Organisation for Economic Co-operation and Development (OECD), and they remain actively involved in the Global Partnership for Artificial Intelligence. The European Commission and the United States have also signed an administrative arrangement to support collaboration on advanced AI research in five areas: extreme weather forecasting, emergency response management, healthcare improvements, energy grid optimisation, and agriculture optimisation. The intent is to share findings and resources with international partners, including low- and middle-income countries, to foster broad societal benefits. The parties aim to implement this cooperation by establishing an internal catalogue of relevant research results and resources.


In a press conference following the meeting, Margrethe Vestager, the European Union's top tech official and vice president of the European Commission, called on the EU and the US to take the lead in creating and establishing a code of conduct for the AI industry. Immediate action is needed to foster public trust in the ongoing development of AI technologies, Vestager noted, warning that formal regulations are lagging behind.

War in Ukraine – a commitment to stand by Ukraine and combat disinformation campaigns

The parties have acknowledged the importance of identifying the critical tools and technologies used in Russia's war in Ukraine. Collaborative efforts aim to counter the evasion of sanctions and export controls. Furthermore, the parties are strongly determined to combat foreign information manipulation, interference, and disinformation campaigns that threaten human rights, democratic processes, and the welfare of societies, including those in third countries.

Content regulation and protecting human rights online

Protection of basic human rights and well-being in the digital arena, especially that of children and youth, featured prominently in the discussion. The parties agreed that online platforms should take the risks their services pose to children's mental health and well-being more seriously. To this end, the EU and US proposed a set of principles for transparent and accountable online platforms, aimed at protecting and empowering children and youth online and at facilitating data access from online platforms for independent research.

The parties also addressed the issue of foreign information manipulation and interference (FIMI) in third countries, noting increased cooperation in the following areas:

  • Harmonising threat information exchange on FIMI and developing a common methodology for identifying, analysing, and countering FIMI;
  • Preparing the multistakeholder community to address FIMI threats, including capacity development efforts in Africa, Latin America, and EU Neighbourhood countries;
  • Calling on online platforms to ensure the integrity of their services and to tackle disinformation and FIMI.

Finally, the USA and EU pledged their support for human rights defenders online, recalling the specific responsibilities of states and the private sector.

Advancing collaboration in digital identity

Through a series of technical exchanges and engagement events involving experts from various sectors, the parties aim to develop a transatlantic mapping of digital identity resources, initiatives, and use cases. This mapping will contribute to transatlantic pre-standardisation research efforts, promote interoperability, and provide implementation guidance while upholding human rights.

Semiconductors

Parties have completed a joint early warning mechanism for addressing semiconductor supply chain disruptions. They are committed to sharing information on public support given to the semiconductor sector, aiming to avoid subsidy races and foster mutual benefits from investments in this field. They acknowledged the progress in implementing their CHIPS Acts and maintained a continuous exchange of best practices. They also seek to collaborate further by incentivising research, promoting alternatives to harmful substances in semiconductor manufacturing, and building a comprehensive and resilient supply chain ecosystem.

Quantum technologies

A joint Task Force has been established to address various science and technology cooperation aspects in quantum technologies. The Task Force will focus on participation in public research and development programs, intellectual property rights framework, identification of critical components, standardisation, benchmarking of quantum computers, and export control. Discussions are also underway regarding cooperation in Post-Quantum Cryptography standardisation and potential avenues for future collaboration, contributing to the EU-US Cyber Dialogue.

Connectivity, digital infrastructure, subsea cables and investments

Overall, the parties agreed to collaborate on various fronts to promote secure, resilient, and inclusive digital infrastructure and connectivity within their regions and globally.

  • Beyond 5G/6G: Both the EU and the United States are working together to develop a common vision and industry roadmap for the research and development of 6G wireless communication systems. The aim is to ensure these technologies are designed based on shared values and principles.
  • Secure and Trusted Digital Infrastructure in Third Countries: The EU and the United States share a commitment to promoting digital inclusion and secure connectivity in emerging economies. They plan to organise a ‘Digital Ministerial Roundtable on Inclusion and Connectivity’ with the participation of digital ministers from key emerging economies. The goal is to identify common needs and challenges in digital infrastructure and explore collaboration opportunities to support the digitalisation needs of these countries. Additionally, the EU and the US intend to enhance cooperation with like-minded countries, such as the G7, to support the deployment of secure and trustworthy ICT networks globally.
  • Support for Connectivity Projects: The EU and the United States are operationalising their support for inclusive ICT projects in countries like Jamaica, Kenya, Costa Rica, and the Philippines. The objective is to expedite the implementation of secure and resilient connectivity initiatives in these nations by engaging reliable vendors and offering technical aid, financial resources, and cybersecurity assistance to facilitate digital infrastructure growth. The parties aim to achieve this within the framework of the Memorandum of Understanding signed between the European Investment Bank and the US International Development Finance Corporation. They are supporting two new connectivity projects: one in Costa Rica, aimed at expanding secure and resilient digital connectivity in the country, and one in the Philippines, focused on implementing a standalone 5G network, providing cybersecurity training, and supporting the establishment of a national Copernicus data centre.
  • International Connectivity and Subsea Cable Projects: The EU and the United States recognise the strategic importance of international connectivity for security and trade. They aim to promote the selection of trusted subsea cable providers for new cable projects, particularly those that promote trustworthy suppliers, reduce latency, and enhance route diversity. Both parties are also discussing connectivity and security measures for transatlantic subsea cables, including alternate routes connecting Europe, North America, and Asia.

Microsoft is calling for regulations on AI to mitigate risks and promote ethical development

Microsoft is advocating for the implementation of guidelines and laws that govern the use of AI technologies. For that, Microsoft has released a 40-page report named ‘Governing AI: A Blueprint for the Future,’ advocating for increased regulation of the AI industry. Microsoft President Brad Smith highlights the importance of shared responsibility in implementing the necessary guardrails for AI. By advocating for AI regulations, Microsoft aims to establish a framework that promotes transparency, accountability, and fairness in AI systems. They emphasise the importance of considering societal impact and ensuring that AI technologies align with human values.

The tech company recognises the potential risks associated with AI, such as privacy concerns, bias in algorithms, and potential job displacement. They believe that regulations can help address these issues and ensure that AI is developed and deployed in an ethical and responsible manner.

The analysis in the report delves deeply into a set of key questions, presented in an overview graphic:

[Overview graphic from the report]

Source: Microsoft’s report titled ‘Governing AI: A Blueprint for the Future.’

Italian data authority to evaluate AI platforms for privacy and legal compliance

Garante, the data protection authority of Italy, is set to undertake an assessment of diverse AI platforms and bring in specialists in the field of AI. Agostino Ghiglia, a member of Garante’s board, stated that the objective is to evaluate whether these tools are adequate and tackle issues concerning data protection and adherence to privacy laws. Should it be deemed necessary, Garante will initiate further investigations based on the outcomes of this evaluation.

Garante’s current initiative to evaluate AI aligns with their ongoing focus on scrutinising the technology. This focus was intensified following the temporary ban of ChatGPT in March due to concerns over privacy and the risk of a data breach. However, the ban was eventually lifted after OpenAI fulfilled the requirements outlined by the Italian watchdog. OpenAI’s compliance involved various measures, such as providing comprehensive information regarding data collection and usage, introducing a new mechanism for EU users to express objection to their data being utilised for training, and implementing an age verification tool for users.

G7 leaders call for trustworthy AI: Focus on standards and democratic values

G7 leaders have called for the development and adoption of technical standards to ensure the trustworthiness of AI. While acknowledging that there may be different approaches to achieving trustworthy AI, the leaders stressed the importance of regulations for digital technologies, including AI, aligning with shared democratic values. The leaders expressed concern that the governance of AI has not kept up with its rapid advancement. According to a working lunch outline, G7 leaders also agreed that ministers would discuss the technology under the 'Hiroshima AI Process' and deliver results by the end of the year.

In previous weeks, global attention was directed towards the regulation of AI. In the EU, the AI Act has received approval from the Civil Liberties and Internal Market committees of the European Parliament. This significant step means that the proposal will now progress to plenary adoption in June, marking the final stage of the legislative process, which will involve negotiations with the EU Council and Commission.

US regulators are approaching AI regulation with greater caution, and currently there is a heated debate without any definitive steps taken. Last week, the global media focus was on Sam Altman's testimony before the US Congress, where the CEO of OpenAI expressed concerns about AI and called for regulatory measures. Altman outlined his plan for regulating AI, proposing the formation of a new government agency responsible for licensing large AI models, with authority to revoke licences from companies that fail to meet government standards. He emphasised the importance of establishing safety standards for AI models, including evaluating their dangerous capabilities.

In the Far East, China has adopted a more limited approach to AI regulation, releasing draft laws aligning with its socialist ideals.

New York City public schools lift restrictions on ChatGPT use

The New York City school system recently lifted its restriction on the use of ChatGPT in public schools, allowing access to the technology. Access to ChatGPT had previously been restricted due to concerns about potential misuse; a similar restriction applies to websites like YouTube, Netflix, and Roblox, for which schools must request access on behalf of their staff and students.

Chancellor David Banks stated in an opinion piece that the decision to lift the restriction was made after consulting experts. The school system plans to provide resources and support to help educators and students learn about and explore AI technology, including successful implementation examples and a toolkit for classroom discussions. The schools will also gather information from experts to assist in using AI tools effectively.

The chancellor added that generative artificial intelligence has the potential to cause significant shifts in society, and it is essential to ensure that the benefits of this technology are distributed fairly to prevent widening socioeconomic gaps. It is crucial to educate students about AI's ethical concerns and prepare them for AI-related job opportunities.