AI ‘girlfriend’ ads raise concerns on Meta platforms

Meta’s integration of AI across Facebook, Instagram, and WhatsApp has come under scrutiny after Wired reported a proliferation of explicit ads for AI ‘girlfriends’ across its platforms. The investigation found tens of thousands of such ads violating Meta’s adult content advertising policy, which prohibits nudity, sexually suggestive content, and sexual services. Despite the policy, the ads continue to circulate, drawing criticism from communities, including sex workers, educators, and LGBTQ users, who feel unfairly targeted by Meta’s content policies.

For years, users have criticised Meta for what they perceive as discriminatory enforcement of its community guidelines. LGBTQ and sex educator accounts have reported instances of shadowbanning on Instagram, while WhatsApp has banned accounts associated with sex work. Additionally, Meta’s advertising approval process has come under scrutiny, with reports of gender-biased rejections of ads, such as those for sex toys and period care products. Despite these issues, explicit AI ‘girlfriend’ ads have evaded Meta’s enforcement mechanisms, highlighting a gap in the company’s content moderation efforts.

When approached, Meta acknowledged the presence of these ads and stated its commitment to removing them promptly. A Meta spokesperson emphasised the company’s ongoing efforts to improve its systems for detecting and removing ads that violate its policies. However, despite Meta’s assurances, Wired found that thousands of these ads remained active even days after the initial inquiry.

Google and Microsoft impress investors with AI growth

Microsoft Corp. and Google owner Alphabet Inc. impressed investors by surpassing Wall Street expectations with robust quarterly results driven by AI and cloud computing. The surge in cloud revenue, fuelled in part by growing use of AI services, pushed both companies’ shares higher in late trading, with Alphabet jumping as much as 17% and Microsoft gaining 6.3%.

The two tech giants are locked in fierce competition over AI, with Microsoft partnering with startup OpenAI to challenge Google’s longstanding dominance in internet search. The latest results, however, indicate significant growth opportunities for both companies in the AI and cloud computing landscape.

2024 is being hailed as the year of generative AI deployment, a technology that creates text, images, and videos from simple prompts. Executives from Alphabet and Microsoft highlighted how these programs drive business growth for their cloud computing units, with corporate clients increasingly committing to long-term cloud infrastructure.

Why does it matter?

Google’s cloud operation, which once lagged behind competitors, is now thriving, posting a significant profit and attracting enterprise clients. Despite setbacks in the consumer market, Google Cloud’s AI offerings have gained traction among corporate customers, driving substantial revenue growth.

Similarly, Microsoft’s Azure cloud computing platform saw a 31% sales increase, surpassing analyst expectations. Integrating AI technology across Microsoft’s product line, mainly through its partnership with OpenAI, is successfully driving customer adoption and revenue growth. With promising uptake for AI tools and services, both companies are optimistic about the future of AI-driven solutions in cloud computing.

Microsoft unveils Phi-3 Mini for AI innovation

Microsoft has introduced Phi-3 Mini, the latest addition to its lineup of lightweight AI models and the first of a planned trio of small-scale models. With 3.8 billion parameters, Phi-3 Mini offers a streamlined alternative to larger language models like GPT-4. Available on Azure, Hugging Face, and Ollama, it represents a significant step towards making AI technology more accessible.
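For developers curious about what a model of this size looks like in practice, the sketch below shows one way it might be loaded through the Hugging Face transformers library. The checkpoint name and generation settings are assumptions made for illustration, not details confirmed in Microsoft’s announcement.

```python
# Illustrative sketch only: running a small model such as Phi-3 Mini locally
# via Hugging Face transformers. The checkpoint ID below is an assumption;
# verify the exact name on the model card before relying on it.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="microsoft/Phi-3-mini-4k-instruct",  # assumed checkpoint name
    trust_remote_code=True,                    # Phi-3 repos ship custom model code
    device_map="auto",                         # place weights on GPU if available
)

prompt = "Explain in two sentences why small language models suit on-device use."
result = generator(prompt, max_new_tokens=96, do_sample=False)
print(result[0]["generated_text"])
```

Because the model has only 3.8 billion parameters, a setup like this can plausibly run on a single consumer GPU or a well-provisioned laptop, which is the main practical appeal of the small-model approach described above.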

Compared to its predecessors, Phi-3 boasts improved performance, with Microsoft touting its ability to rival larger models while maintaining a compact form factor. According to Eric Boyd, corporate vice president of Microsoft Azure AI Platform, Phi-3 Mini is on par with LLMs like GPT-3.5, offering comparable capabilities in a smaller package. This advancement underscores Microsoft’s commitment to enhancing accessibility and efficiency in AI development.

Small AI models like Phi-3 are gaining traction because they are cheaper to run and well suited to personal devices, delivering strong performance without the resource demands of larger models. Microsoft’s strategic focus on lightweight models aligns with industry trends, as evidenced by the company’s development of Orca-Math, a specialised model for solving mathematical problems. With Phi-3, Microsoft aims to give developers versatile tools for a wide range of applications.

Why does it matter?

As the AI landscape evolves, companies increasingly turn to tailored solutions like Phi-3 for their specific needs. With its refined capabilities in coding and reasoning, Phi-3 represents a significant milestone in Microsoft’s AI journey. While it may not match the breadth of larger models like GPT-4, Phi-3’s adaptability and affordability make it a compelling choice for custom applications and resource-conscious organisations.

BBC boosts educational content with £6 million AI investment

The BBC is embarking on a multimillion-pound investment in AI to revamp its educational offerings, aiming to meet the learning needs of young users while securing its relevance in the digital age. The £6 million investment will bolster BBC Bitesize, transforming it from a trusted digital textbook into a personalised learning platform that adapts to each user’s needs and preferences.

As the broadcaster marks a century since its first educational programme, it plans to further enhance its educational brand by offering special Live Lessons and interactive content on platforms like CBBC and BBC iPlayer. By leveraging AI-powered learning tools similar to Duolingo, the BBC aims to harness its extensive archive of educational content to provide personalised testing, fill learning gaps, and offer tailored suggestions for further learning, described as a ‘spinach version’ of YouTube.

Why does it matter?

The BBC’s investment in educational content serves a dual purpose: engaging younger audiences and fulfilling its founding mission to inform, educate, and entertain. Amidst concerns over declining viewership among younger demographics, the broadcaster seeks to reinforce its value proposition and attract a broader audience while reaffirming its commitment to public service. Through initiatives like Bitesize, which saw a surge in users during the pandemic, the BBC aims both to educate and to foster a lifelong relationship with audiences, irrespective of age.

Bollywood actors featured in AI fake videos for India’s election

In the midst of India’s monumental general election, AI-generated fake videos featuring Bollywood actors criticising Prime Minister Narendra Modi and endorsing the opposition Congress party have gone viral. The videos, viewed over half a million times on social media, underscore the growing role of AI-generated content in elections worldwide.

India’s election, involving almost one billion voters, pits Modi’s Bharatiya Janata Party (BJP) against an alliance of opposition parties. As campaigning shifts towards digital platforms like WhatsApp and Facebook, AI is being utilised for the first time in Indian elections, signalling a new era of political communication.

Despite efforts by platforms like Facebook to remove the fake videos, they continue to circulate, prompting police investigations and highlighting the challenges of combating misinformation in the digital age. While actors Aamir Khan and Ranveer Singh have denounced the videos as fake, their proliferation underscores the potential impact of AI-generated content on public opinion.

Why does it matter?

In this year’s election, Indian politicians are employing AI in various ways, from creating videos featuring deceased family members to using AI-generated anchors to deliver political messages. These tactics raise questions about the ethical implications of AI in politics and its potential to shape public discourse in unprecedented ways.

UK bans sex offender from AI tools after child abuse conviction

A convicted sex offender in the UK has been banned from using ‘AI-creating tools’ for five years, marking the first known case of its kind. Anthony Dover, 48, received the prohibition as part of a sexual harm prevention order, preventing him from accessing AI generation tools without prior police permission. This includes text-to-image generators and ‘nudifying’ websites used to produce explicit deepfake content.

Dover’s case highlights the increasing concern over the proliferation of AI-generated sexual abuse imagery, prompting government action. The UK recently introduced a new offence making it illegal to create sexually explicit deepfakes of adults without consent, with penalties including prosecution and unlimited fines. The move aims to address the evolving landscape of digital exploitation and safeguard individuals from the misuse of advanced technology.

Charities and law enforcement agencies emphasise the urgent need for collaboration to combat the spread of AI-generated abuse material. Recent prosecutions reveal a growing trend of offenders exploiting AI tools to create highly realistic and harmful content. The Internet Watch Foundation (IWF) and the Lucy Faithfull Foundation (LFF) stress the importance of targeting both offenders and tech companies to prevent the production and dissemination of such material.

Why does it matter?

The decision to restrict an adult sex offender’s access to AI tools sets a precedent for future monitoring and prevention measures. While the specific reasons for Dover’s ban remain unclear, it underscores the broader effort to mitigate the risks posed by digital advancements in sexual exploitation. Law enforcement agencies are increasingly adopting proactive measures to address emerging threats and protect vulnerable individuals from harm in the digital age.

OpenAI hires first employee in India for AI policy

OpenAI, the company behind ChatGPT, has appointed Pragya Misra as its first employee in India, where she will lead government relations and public policy affairs. The move comes as India prepares to elect a new administration that will shape AI regulation in one of the world’s largest and fastest-growing tech markets. Previously with Truecaller AB and Meta Platforms Inc., Misra brings a wealth of experience navigating policy issues and partnerships within the tech industry.

The hiring reflects OpenAI’s strategic efforts to advocate for favourable regulations amid the global push for AI governance. Given its vast population and expanding economy, India presents a significant growth opportunity for tech giants. However, regulatory complexities in India have posed challenges, with authorities aiming to protect local industries while embracing technological advancements.

Why does it matter?

OpenAI’s engagement in India mirrors the push by other tech giants such as Google, which is developing AI models tailored to the Indian market to address its linguistic diversity and reach users beyond English-speaking urban populations. OpenAI’s CEO, Sam Altman, has emphasised the need for AI research to enhance government services like healthcare, underscoring the importance of integrating emerging technologies into public sectors.

During Altman’s visit to India last year, he highlighted the country’s early adoption of OpenAI’s ChatGPT. Altman has advocated for responsible AI development, calling for regulations to mitigate potential harms from AI technologies. While current AI versions may not require major regulatory changes, Altman believes that evolving AI capabilities will soon necessitate comprehensive governance.

UK union proposes bill to protect workers from AI risks

The Trades Union Congress (TUC) in the UK has proposed a bill to safeguard workers from the potential risks posed by AI-powered decision-making in the workplace. The government has maintained a relatively light-touch approach to regulating AI, preferring to rely on existing laws and regulatory bodies. The TUC’s proposal seeks to prompt the government to adopt a firmer stance on regulating AI and ensuring worker protection.

According to the TUC, the bill addresses the risks associated with AI deployment and advocates for trade union rights concerning employers’ use of AI systems. Mary Towers, a TUC policy officer, emphasised the urgency of action, stating that while AI rapidly transforms society and work, there are currently no specific AI-related laws in the UK. The proposed bill aims to fill this legislative gap and ensure everyone benefits from AI opportunities while being shielded from potential harm.

Why does it matter?

While the UK government has outlined an approach to AI regulation based on harms rather than risks, the TUC argues for more comprehensive legislation akin to the EU’s AI Act, which is the world’s first legislation to address AI risks. The TUC’s efforts, including forming an AI task force and the proposed AI bill, underscore the pressing need for legislation to protect workers’ rights and ensure that AI advancements benefit all members of society, not just a few.

Meta launches Llama 3 to challenge OpenAI

Meta Platforms has launched its latest large language model, Llama 3, alongside a real-time image generator that updates pictures as users type prompts, in a bid to catch up with generative AI market leader OpenAI. The models are being integrated into Meta’s virtual assistant, Meta AI, which the company claims is the most advanced among free-to-use assistants, citing performance comparisons on reasoning, coding, and creative writing against competitors such as Google and Mistral AI.

Meta is giving prominence to its updated Meta AI assistant within its various platforms, positioning it to compete more directly with OpenAI’s ChatGPT. The assistant will feature prominently in Meta’s Facebook, Instagram, WhatsApp, and Messenger apps, along with a standalone website offering various functionalities, from creating vacation packing lists to providing homework help.

The development of Llama 3 is part of Meta’s efforts to challenge OpenAI’s leading position in generative AI. The company has openly released its Llama models for developers, aiming to disrupt rivals’ revenue plans with powerful free options. However, critics have raised safety concerns about the potential misuse of such models by unscrupulous actors.
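As a rough illustration of what that open release means for developers, the sketch below loads an instruction-tuned Llama 3 checkpoint through Hugging Face transformers. The model ID, the gated-access step, and the chat-template usage are assumptions made for the example, not details drawn from Meta’s announcement.

```python
# Illustrative sketch only: trying an openly released Llama 3 checkpoint locally.
# The repository ID below is an assumption; access to the weights is gated and
# typically requires accepting Meta's licence and logging in to Hugging Face.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Build a chat-formatted prompt and generate a reply.
messages = [{"role": "user", "content": "Draft a packing list for a weekend hike."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=200)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```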

While Llama 3 currently outputs only text, future versions will incorporate multimodal capabilities, generating text and images. Meta CEO Mark Zuckerberg emphasised the performance of Llama 3 versions against other free models, indicating a growing performance gap between free and proprietary models. The company aims to address previous issues with understanding context by leveraging high-quality data and significantly increasing the training data for Llama 3.

NSA’s AISC releases guidance on securing AI systems

The National Security Agency’s Artificial Intelligence Security Center (NSA AISC) has introduced new guidelines to bolster cybersecurity as AI is integrated into daily operations. The initiative, developed jointly with agencies including CISA and the FBI, focuses on safeguarding AI systems against potential threats.

The recently released Cybersecurity Information Sheet, ‘Deploying AI Systems Securely,’ outlines essential best practices for organisations deploying externally developed AI systems. The guidelines emphasise three primary objectives: confidentiality, integrity, and availability. Confidentiality ensures sensitive information remains protected; integrity maintains accuracy and reliability; and availability guarantees that authorised users can access systems when needed.

The guidance stresses the importance of mitigating known vulnerabilities in AI systems to preemptively address security risks. Agencies advocate for implementing methodologies and controls to detect and respond to malicious activities targeting AI systems, their data, and associated services.

The recommendations include ongoing compromise assessments, IT deployment environment hardening, and thorough validation of AI systems before deployment. Strict access controls and robust monitoring tools, such as user behaviour analytics, are advised to identify and mitigate insider threats and other malicious activities.
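As a toy illustration of the kind of user behaviour analytics the guidance points to (not an example taken from the guidance itself), the sketch below flags accounts whose access volume sits well above the daily norm; the log format and threshold are invented for the example.

```python
# Toy sketch of user-behaviour-analytics-style monitoring: flag accounts whose
# access volume is well above the daily norm. The log format and threshold are
# invented for illustration and are not taken from the NSA AISC guidance.
from collections import Counter
from statistics import mean, stdev

# Hypothetical one-day access log of (user, resource) pairs.
access_log = [
    ("alice", "model-weights"), ("bob", "training-data"),
    ("alice", "model-weights"), ("carol", "model-weights"),
    ("mallory", "model-weights"), ("mallory", "training-data"),
    ("mallory", "api-keys"), ("mallory", "model-weights"),
    ("mallory", "training-data"), ("mallory", "model-weights"),
]

counts = Counter(user for user, _ in access_log)
avg, sd = mean(counts.values()), stdev(counts.values())

# Flag users more than one standard deviation above the mean access count.
for user, n in sorted(counts.items()):
    if n > avg + sd:
        print(f"review access pattern for {user}: {n} requests (mean {avg:.1f})")
```

Real deployments would feed richer signals (time of day, resource sensitivity, privilege changes) into dedicated monitoring tooling; the point here is only to show the shape of the baseline-and-deviation logic the recommendations describe.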

Organisations deploying AI systems are urged to review and implement the prescribed practices to enhance the security posture of their AI deployments. This proactive approach ensures that AI systems remain resilient against evolving cybersecurity threats in the rapidly advancing AI landscape.