ByteDance to invest $2.13 billion in Malaysia AI hub

China’s ByteDance, the parent company of TikTok, plans to invest around $2.13 billion to establish an AI hub in Malaysia. The plan includes an additional $320 million to expand data centre facilities in Johor state, according to Malaysia’s Trade Minister Tengku Zafrul Aziz.

The development follows significant investments by other tech giants in Malaysia. Google recently announced a $2 billion investment to create its first data centre and Google Cloud region in the country, while Microsoft is set to invest $2.2 billion to enhance cloud and AI services.

These investments are expected to boost Malaysia’s digital economy, which the government aims to grow to 22.6% of GDP by 2025, underscoring the country’s growing importance as a digital economy hub in Southeast Asia.

AI growth faces data shortage

The surge in AI, particularly with systems like ChatGPT, is facing a potential slowdown due to the impending depletion of publicly available text data, according to a study by Epoch AI. The shortage is projected to occur between 2026 and 2032, highlighting a critical challenge in maintaining the rapid advancement of AI.

AI’s growth has relied heavily on vast amounts of human-generated text data, but this finite resource is diminishing. Companies like OpenAI and Google are currently paying for access to high-quality data, such as content from Reddit and news outlets, to sustain their AI training. However, the scarcity of fresh data might soon force them to consider using sensitive private data or less reliable synthetic data.

The Epoch AI study emphasises that scaling AI models, which requires immense computing power and large data sets, may become unfeasible as data sources dwindle. While new techniques have somewhat mitigated this issue, the fundamental need for high-quality human-generated data remains. Some experts suggest focusing on specialised AI models rather than larger ones to address this bottleneck.
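To make the scaling argument concrete, the sketch below works through the arithmetic under two stated assumptions: a widely cited Chinchilla-style heuristic of roughly 20 training tokens per model parameter, and a placeholder figure for the total stock of usable public text. The numbers are illustrative only and are not taken from the Epoch AI study.

```python
# Rough illustration of the "data wall" arithmetic (assumed figures, not Epoch AI's).
TOKENS_PER_PARAMETER = 20      # Chinchilla-style compute-optimal heuristic (assumption)
PUBLIC_TEXT_STOCK = 300e12     # hypothetical stock of usable public text, in tokens

for params in (70e9, 400e9, 2e12, 10e12):       # hypothetical model sizes
    tokens_needed = params * TOKENS_PER_PARAMETER
    share = tokens_needed / PUBLIC_TEXT_STOCK
    print(f"{params / 1e9:8.0f}B parameters -> "
          f"{tokens_needed / 1e12:7.1f}T training tokens "
          f"({share:.0%} of the assumed public stock)")
```

Under these placeholder figures, each tenfold increase in model size consumes a tenfold larger share of the available text, which is the intuition behind the projected 2026–2032 crunch.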

In response to these challenges, AI developers are exploring alternative methods, including generating synthetic data. However, concerns about the quality and efficiency of such data persist, underlining the complexity of sustaining AI advancements with a finite supply of human-generated training data.

RSF urges countries adopting CoE’s AI Framework to avoid self-regulation

Reporters Without Borders (RSF) has praised the Council of Europe’s (CoE) new Framework Convention on AI for its progress but criticised its reliance on private sector self-regulation. The Convention, drawn up within the 46-member Council of Europe, aims to address the impact of AI on human rights, democracy, and the rule of law. While it acknowledges the threat of AI-fuelled disinformation, RSF argues that it fails to provide the mechanisms necessary to achieve its goals.

The CoE Convention mandates strict regulatory measures for AI use in the public sector but allows member states to choose self-regulation for the private sector. RSF believes this distinction is a critical flaw, as the private sector, particularly social media companies and other digital service providers, has historically prioritised business interests over the public good. According to RSF, this approach will not effectively combat the disinformation challenges posed by AI.

RSF urges countries that adopt the Convention to implement robust national legislation to strictly regulate AI development and use. That would ensure that AI technologies are deployed ethically and responsibly, protecting the integrity of information and democratic processes. Vincent Berthier, Head of RSF’s Tech Desk, emphasised the need for legal requirements over self-regulation to ensure AI serves the public interest and upholds the right to reliable information.

RSF’s recommendations provide a framework for AI regulation that addresses the shortcomings of both the Council of Europe’s Framework Convention and the European Union’s AI Act, advocating for stringent measures to safeguard the integrity of information and democracy.

EU banks’ increasing reliance on US tech giants for AI raises concerns

According to European banking executives, the rise of AI is increasing banks’ reliance on major US tech firms, raising new risks for the financial industry. AI, already used in detecting fraud and money laundering, has gained significant attention following the launch of OpenAI’s ChatGPT in late 2022, with banks exploring more applications of generative AI.

At a fintech conference in Amsterdam, industry leaders expressed concerns about the heavy computational power needed for AI, which forces banks to depend on a few big tech providers. Bahadir Yilmaz, ING’s chief analytics officer, noted that this dependency on companies like Microsoft, Google, IBM, and Amazon poses one of the biggest risks, as it could lead to ‘vendor lock-in’ and limit banks’ flexibility. The discussion also points to the significant impact AI could have on retail investor protection.

Britain has proposed regulations to manage financial firms’ reliance on external tech companies, reflecting concerns that issues with a single cloud provider could disrupt services across multiple financial institutions. Deutsche Bank’s technology strategy head, Joanne Hannaford, highlighted that accessing the necessary computational power for AI is feasible only through Big Tech.

The European Union’s securities watchdog recently emphasised that banks and investment firms must protect customers when using AI and maintain boardroom responsibility.

US officials clash over AI disclosure in political ads

Top officials at the US Federal Election Commission (FEC) are divided over a proposal requiring political advertisements on broadcast radio and television to disclose whether their content is generated by AI. FEC Vice Chair Ellen Weintraub backs the proposal, initiated by Federal Communications Commission (FCC) Chairwoman Jessica Rosenworcel, which aims to enhance transparency in political ads, whereas FEC Chair Sean Cooksey opposes it.

The proposal, which does not ban AI-generated content, comes amid increasing concerns in Washington that such content could mislead voters in the upcoming 2024 elections. Rosenworcel emphasised the risk of ‘deepfakes’ and other altered media misleading the public and noted that the FCC has long-standing authority to mandate disclosures. Weintraub also highlighted the importance of transparency for public benefit and called for collaborative regulatory efforts between the FEC and FCC.

However, Cooksey warned that mandatory disclosures might conflict with existing laws and regulations, creating confusion in political campaigns. Republican FCC Commissioner Brendan Carr criticised the proposal, pointing out inconsistencies in regulation, as the FCC cannot oversee internet, social media, or streaming service ads. The debate gained traction following an incident in January where a fake AI-generated robocall impersonating US President Joe Biden aimed to influence New Hampshire’s Democratic primary, leading to charges against a Democratic consultant.

Tech titans announce AI-driven PC revolution at Computex 2024

This week, key players in the chip industry, including Nvidia, Intel, AMD, Qualcomm, and Arm, gathered in Taiwan for the annual Computex conference, announcing an ‘AI PC revolution.’ They showcased AI-enabled personal computers with specialised chips for running AI applications directly on the device, promising a significant leap in user interaction with PCs.

Intel CEO Pat Gelsinger called this the most exciting development since the arrival of Wi-Fi 25 years ago, while Qualcomm’s Cristiano Amon likened it to the industry being reborn. Microsoft has driven this push by introducing AI PCs equipped with its Copilot assistant and choosing Qualcomm as its initial AI chip supplier. Even so, Intel and AMD are also gearing up to launch their own AI processors soon.

Why does it matter?

The conference was strategically timed to precede Apple’s annual Worldwide Developers Conference, hinting at the competitive landscape in AI advancements. As the PC market shows signs of recovery, analysts predict a rise in AI PC adoption, potentially transforming how PCs are used. However, as the Financial Times reports, scepticism remains about whether consumer demand will justify the higher costs associated with these advanced devices.

Salesforce launches first AI centre in London

Salesforce has chosen London for its first AI centre, where experts, developers, and customers will collaborate on innovation and skill development. The US cloud software company, which is hosting its annual London World Tour event, announced last year a $4 billion investment in the UK over five years, focusing on AI innovation.

Zahra Bahrololoumi, CEO of Salesforce UK and Ireland, highlighted customer enthusiasm for AI’s benefits while noting caution about potential risks. She emphasised the importance of trust in AI adoption, citing the ‘Trust Layer’ in Salesforce’s Einstein technology, which protects customer data.

Salesforce’s dedication to responsible AI also goes beyond data security: Bahrololoumi emphasised the company’s commitment to making AI a force for good, and its message to customers and partners is that it will collaborate closely with them to ensure the transformative technology delivers positive impacts.

Meta faces EU complaints over AI data use

Meta Platforms is facing 11 complaints over proposed changes to its privacy policy that could violate EU privacy regulations. The changes, set to take effect on 26 June, would allow Meta to use personal data, including posts and private images, to train its AI models without user consent. Advocacy group NOYB has urged privacy watchdogs to take immediate action against these changes, arguing that they breach the EU’s General Data Protection Regulation (GDPR).

Meta claims it has a legitimate interest in using users’ data to develop its AI models, which can be shared with third parties. However, NOYB founder Max Schrems contends that the European Court of Justice has previously ruled against Meta’s arguments for similar data use in advertising, suggesting that the company is ignoring these legal precedents. Schrems criticises Meta’s approach, stating that the company should obtain explicit user consent rather than complicating the opt-out process.

In response to the impending policy changes, NOYB has called on data protection authorities in multiple European countries, including Austria, Germany, and France, to initiate an urgent procedure to address the situation. If found in violation of the GDPR, Meta could face substantial fines.

Chinese AI chip firms downgrading designs to secure TSMC production

Chinese AI chip firms, including industry leaders such as MetaX and Enflame, are downgrading their chip designs in order to comply with Taiwan Semiconductor Manufacturing Company’s (TSMC) stringent supply chain security protocols and regulatory requirements. This strategic adjustment comes amidst heightened scrutiny and restrictions imposed by the US on semiconductor exports to Chinese companies, which includes limitations on accessing advanced manufacturing technologies critical for AI chip production.

The US has imposed strict export controls to obstruct China’s military advancements in AI and supercomputing. These controls restrict the sale of sophisticated processors from companies like Nvidia, as well as of the chip manufacturing equipment crucial for producing advanced semiconductors. The move has prevented TSMC and other overseas chip manufacturers that use US tools from fulfilling orders for these restricted technologies.

In response to these restrictions, top Chinese AI chip firms MetaX and Enflame reportedly submitted downgraded chip designs to TSMC in late 2023. MetaX, founded by former Advanced Micro Devices (AMD) executives and backed by state support, had to introduce the C280 chip after its more advanced C500 graphics processing unit (GPU) ran out of stock in China earlier in the year. Enflame, also Shanghai-based and backed by Tencent, faces similar challenges.

Why does it matter?

The decision to downgrade chip designs to meet production demands reflects the delicate balance between technological advancement and supply chain resilience. While simplifying designs may expedite production and mitigate supply risks in the short term, it also raises questions about long-term innovation and competitiveness. The ability to innovate and deliver cutting-edge AI technologies hinges on access to advanced chip manufacturing processes, which are increasingly concentrated among a few global players.

OpenAI insiders call for stronger oversight and whistleblower protections in open letter

On Tuesday, a group of current and former OpenAI employees published an open letter warning that leading AI companies lack the transparency and accountability needed to address potential risks. The letter highlights AI safety concerns such as deepening inequality, misinformation, and the loss of control over autonomous systems, which could potentially lead to catastrophic outcomes.

The 16 signatories, including Google DeepMind staff, emphasised that AI firms have financial incentives to avoid effective oversight and criticised their weak obligations to share critical information. They called for stronger whistleblower protections, noting that confidentiality agreements often prevent employees from raising concerns. Some current OpenAI employees signed anonymously, fearing retaliation. AI pioneers like Yoshua Bengio, Geoffrey Hinton, and Stuart Russell also endorsed the letter, criticising inadequate preparations for AI’s dangers.

The letter also calls for AI companies to commit to a set of principles in order to maintain a certain level of accountability and transparency. Those principles are:

– not to enter into or enforce any agreement that prohibits ‘disparagement’ or criticism of the company for risk-related concerns, nor to retaliate for risk-related criticism;

– to facilitate a verifiably anonymous process for current and former employees to raise risk-related concerns to the company’s board, to regulators, and to an appropriate independent organization with relevant expertise;

– to support a culture of open criticism and allow current and former employees to raise risk-related concerns about the company’s technologies to the public, to the company’s board, to regulators, or to an appropriate independent organization with relevant expertise, so long as trade secrets and other intellectual property interests are appropriately protected.

Why does it matter?

In response, OpenAI defended its record, citing its commitment to safety, rigorous debate, and engagement with various stakeholders. The company highlighted its anonymous integrity hotline and newly formed Safety and Security Committee as channels for employee concerns. The critique comes amid growing scrutiny of CEO Sam Altman’s leadership. The concerns raised by OpenAI insiders highlight the critical need for transparency and accountability in AI development. Ensuring that AI companies are effectively overseen and held accountable, and that insiders can speak out about unethical or dangerous practices without fear of retaliation, represents a pivotal safeguard for informing the public and decision makers about AI’s potential capabilities and risks.