Industry leaders unite for ethical AI data practices

Several companies that license music, images, videos, and other datasets for training AI systems have formed the first trade group in the sector, the Dataset Providers Alliance (DPA). The founding members of the DPA include Rightsify, vAIsual, Pixta, and Datarade. The group aims to advocate for ethical data sourcing, including protecting intellectual property rights and ensuring rights for individuals depicted in datasets.

The rise of generative AI technologies has led to backlash from content creators and numerous copyright lawsuits against major tech companies like Google, Meta, and OpenAI. Developers often train AI models on vast amounts of content, much of which is scraped from the internet without permission. To address these issues, the DPA will establish ethical standards for data transactions, ensuring that members do not sell data obtained without explicit consent. The alliance will also push for legislative measures such as the NO FAKES Act, which would penalise unauthorised digital replicas of voices or likenesses, and will support transparency requirements for AI training data.

The DPA plans to release a white paper in July outlining its positions and advocating for these standards and legislative changes to ensure ethical practices in AI data sourcing and usage.

Qatar’s tech hub expansion accelerates as AI boosts its growth

Qatar is rapidly advancing in technology, positioning itself as a global tech hub. The President of Qatar Science and Technology Park (QSTP), Dr Jack Lau, highlighted the role of AI in boosting the Qatari and GCC markets, emphasising the need for region-specific, tailored solutions.

AI applications such as ChatGPT are well researched in Qatar; however, optimisation for different languages, greater speed, and more accurate responses have yet to be achieved.

Dr Lau noted his satisfaction with emerging AI tools, particularly for translating and customising presentation content. He stressed the importance of cultural sensitivity and corporate-specific needs in AI applications, along with data privacy and security, and underscored that these technologies still have significant room for refinement and further development.

QSTP plays a crucial role in supporting Qatar’s national vision of talent transformation through education, innovation, and entrepreneurship. The organisation is exploring opportunities for individuals with the right educational background to contribute significantly to AI, robotics, medical sciences, and sustainable farming.

CLTR urges UK government to create formal system for managing AI misuse and malfunctions

The UK should implement a system to log misuse and malfunctions in AI to keep ministers informed of alarming incidents, according to a report by the Centre for Long-Term Resilience (CLTR). The think tank, which focuses on responses to unforeseen crises, urges the next government to establish a central hub for recording AI-related episodes across the country, similar to the Air Accidents Investigation Branch.

CLTR highlights that since 2014, news outlets have recorded 10,000 AI ‘safety incidents,’ documented in a database by the Organisation for Economic Co-operation and Development (OECD). These incidents range from physical harm to economic, reputational, and psychological damage. Examples include a deepfake of Labour leader Keir Starmer and Google’s Gemini model depicting World War II soldiers inaccurately. The report’s author, Tommy Shaffer Shane, stresses that incident reporting has been transformative in aviation and medicine but is largely missing in AI regulation.

The think tank recommends the UK government adopt a robust incident reporting regime to manage AI risks effectively. It suggests following the safety protocols of industries like aviation and medicine, as many AI incidents may go unnoticed due to the lack of a dedicated AI regulator. Labour has pledged to introduce binding regulations for advanced AI companies, and CLTR emphasises that such a setup would help the government anticipate and respond quickly to AI-related issues.

Additionally, CLTR advises creating a pilot AI incident database, which could collect episodes from existing bodies such as the Air Accidents Investigation Branch and the Information Commissioner’s Office. The think tank also calls for UK regulators to identify gaps in AI incident reporting and build on the algorithmic transparency reporting standard already in place. An effective incident reporting system would help the Department for Science, Innovation and Technology (DSIT) stay informed and address novel AI-related harms proactively.

Meta launches AI chatbot in India, rivaling Google’s Gemini

Meta has officially introduced its AI chatbot, powered by Llama 3, to all users in India following comprehensive testing during the country’s general elections. Initially trialled on WhatsApp, Instagram, Messenger, and Facebook since April, the chatbot is now fully accessible through the search bars in these apps and the Meta.AI website. Although it currently supports only English, its functionality is on par with other major AI services such as ChatGPT and Google’s Gemini, covering tasks such as suggesting recipes, planning workouts, writing emails, summarising text, recommending Instagram Reels, and answering questions about Facebook posts.

The launch aims to capitalise on India’s vast user base, notably the 500 million WhatsApp users, by embedding the chatbot deeper into the user experience. However, some limitations have been observed, such as the chatbot’s inability to fully understand the context of group conversations unless it is directly mentioned or replied to. Moreover, while the chatbot cannot be disabled, users can simply choose not to interact with it during searches.

Despite its capabilities, Meta AI has faced criticisms for biases in its image generation, often depicting Indian men with turbans and producing images of traditional Indian houses, which Meta has acknowledged and aims to address through ongoing updates. The launch coincides with Google releasing its Gemini app in India, which, unlike Meta’s chatbot, supports multiple local languages, potentially giving Google a competitive advantage in the linguistically diverse Indian market.

Why does it matter?

In summary, Meta’s rollout of its English-only AI chatbot in India is a strategic effort to leverage its extensive user base by offering robust functionalities similar to established competitors. While it faces initial limitations and biases, Meta is actively working on improvements. The concurrent release of Google’s Gemini app sets up a competitive landscape, underscoring the dynamic and evolving nature of AI services in India.

Central banks urged to embrace AI

The Bank for International Settlements (BIS) has advised central banks to harness the benefits of AI while cautioning against its use in replacing human decision-makers. In its first comprehensive report on AI, the BIS highlighted the technology’s potential to enhance real-time data monitoring and improve inflation predictions – capabilities that have become critical following the unforeseen inflation surges during the COVID-19 pandemic and the Ukraine crisis. While AI models could mitigate future risks, their unproven and sometimes inaccurate nature makes them unsuitable as autonomous rate setters, emphasised Cecilia Skingsley of the BIS. Human accountability remains crucial for decisions on borrowing costs, she noted.

The BIS, often termed the central bank for central banks, is already engaged in eight AI-focused projects to explore the technology’s potential. Hyun Song Shin, the BIS’s head of research, stressed that AI should not be seen as a ‘magical’ solution but acknowledged its value in detecting financial system vulnerabilities. However, he also warned of the risks associated with AI, such as new cyber threats and the possibility of exacerbating financial crises if mismanaged.

The widespread adoption of AI could significantly impact labour markets, productivity, and economic growth, with firms potentially adjusting prices more swiftly in response to economic changes, thereby influencing inflation. The BIS has called for the creation of a collaborative community of central banks to share experiences, best practices, and data to navigate the complexities and opportunities presented by AI. That collaboration aims to ensure AI’s integration into financial systems is both effective and secure, promoting resilient and responsive economic governance.

In conclusion, the BIS’s advisory underscores the importance of balancing AI’s promising capabilities with the necessity for human intervention in central banking operations. By fostering an environment for shared knowledge and collaboration among central banks, the BIS seeks to maximise AI benefits while mitigating inherent risks, thereby supporting more robust economic management in the face of technological advancements.

AI startup Etched raises $120M to produce specialised chip

Etched, an AI startup based in San Francisco, announced that it has secured $120 million to create a specialised chip tailored to run the transformer models that underpin services such as OpenAI’s ChatGPT and Google’s Gemini.

Unlike Nvidia, which dominates the market for server AI chips with a roughly 80% share, Etched aims to build a specialised processor optimised for inference, the task of generating content and responses, designed specifically for transformer-based AI models. The company’s CEO, Gavin Uberti, sees this as a strategic bet on the longevity of transformer models in the AI landscape.

In Etched’s funding round, key investors include former PayPal CEO Peter Thiel and Replit CEO Amjad Masad. The startup has also partnered with Taiwan Semiconductor Manufacturing Co. (TSMC) to fabricate its chips. Uberti highlighted the importance of the funding to cover the costs associated with sending chip designs to TSMC and manufacturing the chips, a process known as ‘taping out.’

While Etched did not disclose its current valuation, its $5.4-million seed-funding round in March 2023 valued the company at $34 million. The success of its specialised chip could position Etched as an important player in the AI chip market, provided transformer-based AI models continue to be prevalent in the industry.

Chinese AI companies respond to OpenAI restrictions

Chinese AI companies are swiftly responding to reports that OpenAI intends to restrict access to its technology in certain regions, including China. OpenAI, the creator of ChatGPT, is reportedly planning to block access to its API for entities in China and other countries. While ChatGPT is not directly available in mainland China, many Chinese startups have used OpenAI’s API platform to develop their applications. Users in China have received emails warning about restrictions, with measures set to take effect from 9 July.

In light of these developments, Chinese tech giants like Baidu and Alibaba Cloud are stepping in to attract users affected by OpenAI’s restrictions. Baidu announced an ‘Inclusive Program’, offering new users free migration to its Ernie platform and additional Ernie 3.5 flagship model tokens to match their previous OpenAI usage. Similarly, Alibaba Cloud is providing free tokens and migration services to OpenAI API users through its AI platform, at pricing it positions as competitive with GPT-4.

Zhipu AI, another prominent player in China’s AI sector, has also announced a ‘Special Migration Program’ for OpenAI API users. The company emphasises its GLM model as a benchmark against OpenAI’s ecosystem, highlighting its self-developed technology for security and controllability. Over the past year, numerous Chinese companies have launched chatbots powered by their proprietary AI models, indicating a growing trend towards domestic AI development and innovation.

Italian watchdog tests AI for market oversight

Italy’s financial watchdog, Consob, has begun experimenting with AI to enhance its oversight capabilities, particularly in the initial review of listing prospectuses and the detection of insider trading. According to Consob, these AI algorithms aim to swiftly identify potential instances of insider trading, which traditionally requires significantly more time when conducted manually.

The agency reported that its AI algorithms can detect errors in just three seconds, a task that typically takes a human analyst at least 20 minutes. These efforts were part of testing conducted last year using prototypes developed in collaboration with the Scuola Normale Superiore in Pisa, alongside an additional model developed independently.

Consob views the integration of AI as pivotal in enhancing the effectiveness of regulatory controls to detect financial misconduct. The next phase involves transitioning from prototype testing to fully incorporating AI into Consob’s regular operational procedures. That initiative mirrors similar efforts by financial regulators globally who are increasingly leveraging AI to bolster consumer protection and regulatory oversight.

For instance, in the United Kingdom, the Financial Conduct Authority (FCA) has utilised AI technologies to combat online scams and protect consumers. That trend underscores a broader international movement within regulatory bodies to harness AI’s potential in safeguarding market integrity and enhancing regulatory efficiency.

EvolutionaryScale secures $142 million to enhance AI applications in biology

AI startup EvolutionaryScale has secured $142 million in seed funding, led by investors including Nat Friedman, Daniel Gross, and Lux Capital. Both Amazon Web Services (AWS) and NVIDIA’s venture capital arm participated in this substantial funding round. Lux Capital’s co-founder Josh Wolfe likened EvolutionaryScale’s achievements to a ‘ChatGPT moment for biology,’ highlighting their development of a groundbreaking large language model capable of designing new proteins and biological systems.

EvolutionaryScale aims to deploy its AI across diverse applications, from accelerating drug discovery processes to engineering microbes that can degrade plastic pollution. The company’s chief scientist, Alex Rives, emphasised the growing significance of AI in creating innovative biological solutions. That aligns with broader industry trends where AI is increasingly pivotal in advancing biotech and pharmaceutical research.

However, concerns have been raised regarding the potential misuse of generative AI in bioweapons development. Despite these ethical considerations, EvolutionaryScale plans to use its newly secured funding to train its AI models further and expand its team for collaborations within the biotech sector. The company has also released the ESM3 models, with the smaller variant open-sourced for non-commercial research, while AWS and NVIDIA will offer the larger ESM3 model commercially.

Why does it matter?

One notable achievement highlighted by EvolutionaryScale involves engineering a novel fluorescent protein using its ESM3 model. The protein differs markedly from naturally occurring variants, a degree of divergence that would typically take nature millions of years to evolve. The company’s advancements underscore the transformative potential of AI in pushing the boundaries of biological innovation.

UAE government partners with Rittal for AI development

During the 2024 AI Retreat, the Artificial Intelligence, Digital Economy, and Remote Work Applications Office of the United Arab Emirates entered a strategic partnership with Rittal FZE, a division of Germany’s Rittal GmbH & Co. KG, headquartered in Herborn. Rittal, a renowned provider of IT infrastructure solutions, power distribution, climate management, and industrial enclosures, is set to collaborate with the UAE to enhance the implementation of AI technologies and the training around them. The partnership, which focuses on advancing technology and training, is expected to shape the future of AI in the UAE.

The Executive Director of the AI, Digital Economy, and Remote Work Applications Office, Saqr Binghalib, underscored that the UAE government prioritises building skills in digital advancement. He stressed that such efforts are vital for strengthening the country’s standing as a leading global AI hub and for fostering closer collaboration with the private sector. Rittal is committed to advancing technology and training through AI and other smart applications, particularly in support of robotics and Industry 4.0 programming. The 2024 AI Retreat drew more than 2,000 decision-makers, experts, and representatives from the public and private sectors.