FTC bans NGL app from minors, issues $5 million fine for cyberbullying exploits

The US Federal Trade Commission (FTC) and the Los Angeles District Attorney’s Office have banned the anonymous messaging app NGL from serving children under 18 due to rampant cyberbullying and threats.

The FTC’s latest action, part of a broader crackdown on companies mishandling consumer data or making exaggerated AI claims, also requires NGL to pay $5 million and implement age restrictions to prevent minors from using the app. NGL, which marketed itself as a safe space for teens, was found to have exploited its young users by sending them fake, anonymous messages designed to prey on their social anxieties.

The app then charged users for information about the senders, often providing only vague hints. The FTC lawsuit, which names NGL’s co-founders, highlights the app’s deceptive practices and its failure to protect users. The case against NGL is a notable example of FTC Chair Lina Khan’s focus on regulating digital data and holding companies accountable for AI-related misconduct.

The FTC’s action is part of a larger effort to protect children online, with states like New York and Florida also passing laws to limit minors’ access to social media. Regulatory pushes like this one aim to address growing concerns about the impact of social media on children’s mental health.

US authorities disrupt Russian AI-powered disinformation campaign

Authorities from multiple countries have issued warnings about a sophisticated disinformation campaign backed by Russia that leverages AI-powered software to spread false information both in the US and internationally. The operation, known as Meliorator, is reportedly being carried out by affiliates of RT (formerly Russia Today), a Russian state-sponsored media outlet, to create fake online personas and disseminate misleading content. Since at least 2022, Meliorator has been employed to spread disinformation targeting the US, Poland, Germany, the Netherlands, Spain, Ukraine, and Israel, as detailed in a joint advisory released by US, Canadian, and Dutch security services.

Meliorator is designed to create fake social media profiles that appear to be real individuals, primarily from the US. These bots can generate original posts, follow users, like, comment, repost, and gain followers. They are capable of mirroring and amplifying existing Russian disinformation narratives. The identities of these bots are crafted based on specific parameters like location, political ideologies, and biographical data. Meliorator can also group bots with similar ideologies to enhance their personas.

Moreover, most bot accounts had over 100,000 followers to avoid detection and followed genuine accounts aligned with their fabricated political leanings. As of June 2024, Meliorator was only operational on X, but there are indications that its functionality might have expanded to other social media networks.

The US Justice Department (DOJ) announced the seizure of two domain names and the search of nearly a thousand social media accounts used by Russian actors to establish an AI-enhanced bot farm with Meliorator’s assistance. The bot farm operators registered fictitious social media accounts using private email servers linked to the seized domain names. The FBI took control of these domains, while social media platform X (formerly Twitter) voluntarily suspended the remaining identified bot accounts for violating terms of service.

FBI Director Christopher Wray emphasised that this marks a significant step in disrupting a Russian-sponsored AI-enhanced disinformation bot farm. The goal of the bot farm was to use AI to scale disinformation efforts, undermining partners in Ukraine and influencing geopolitical narratives favouring the Russian government. These accounts commonly posted pro-Kremlin content, including videos of President Vladimir Putin and criticism of the Ukrainian government.

US authorities have linked the development of Meliorator to a former deputy editor-in-chief at RT in early 2022. RT viewed this bot farm as an alternative means of distributing information beyond its television broadcasts, especially after going off the air in the US in early 2022. The Kremlin approved and financed the bot farm, with Russia’s Federal Security Service (FSB) having access to the software to advance its goals.

The DOJ highlighted that the use of US-based domain names by the FSB violates the International Emergency Economic Powers Act, and the associated payments breach US money laundering laws. Deputy Attorney General Lisa Monaco stated that the DOJ and its partners will not tolerate the use of AI by Russian government actors to spread disinformation and sow division among Americans.

Why does it matter?

The disruption of the Russian operation comes just four months before the US presidential election, a period during which security experts anticipate heightened hacking and covert social media influence attempts by foreign adversaries. Attorney General Merrick Garland noted that this is the first public accusation against a foreign government for using generative AI in a foreign influence operation.

AI-powered workplace innovation: Tech Mahindra partners with Microsoft

Tech Mahindra has partnered with Microsoft to enhance workplace experiences for over 1,200 customers and more than 10,000 employees across 15 locations by adopting Copilot for Microsoft 365. The collaboration aims to boost workforce efficiency and streamline processes through Microsoft’s trusted cloud platform and generative AI capabilities. Additionally, Tech Mahindra will deploy GitHub Copilot for 5,000 developers, anticipating a productivity increase of 35% to 40%.

Mohit Joshi, CEO and Managing Director of Tech Mahindra, highlighted the transformative potential of the partnership, emphasising the company’s commitment to shaping the future of work with cutting-edge AI technology. Tech Mahindra plans to extend Copilot’s capabilities with plugins to leverage multiple data sources, enhancing creativity and productivity. The focus is on increasing efficiency, reducing effort, and improving quality and compliance across the board.

As part of the initiative, Tech Mahindra has launched a dedicated Copilot practice to help customers unlock the full potential of AI tools, including workforce training for assessment and preparation. The company will offer comprehensive solutions to help customers assess, prepare, pilot, and adopt business solutions using Copilot for Microsoft 365, providing a scalable and personalised user experience.

Judson Althoff, Executive Vice President and Chief Commercial Officer at Microsoft, remarked that the collaboration would empower Tech Mahindra’s employees with new generative AI capabilities, enhancing workplace experiences and increasing developer productivity. The partnership aligns with Tech Mahindra’s ongoing efforts to enhance workforce productivity using GenAI tools, demonstrated by the recent launch of a unified workbench on Microsoft Fabric to accelerate the adoption of complex data workflows.

Basel Committee of banking regulators proposes principles to reduce risk from third-party tech firms

The Basel Committee of banking regulators, which brings together regulators from the G20 and other nations, has proposed 12 principles for banks’ management of third-party risk. The committee emphasised that the board of directors holds ultimate responsibility for overseeing third-party arrangements: banks must assume full responsibility for outsourced services and document their risk management strategies for service outages and disruptions.

Banks’ increasing reliance on third-party tech companies like Microsoft, Amazon, and Google for cloud computing services raises regulatory concerns about the potential financial sector impact if a widely used provider experiences downtime. Moreover, increased dependence on third-party services has led to heightened scrutiny due to frequent cyberattacks that threaten banks’ operational resilience and can potentially disrupt customer services. As such, banks should implement strong business continuity plans to ensure operations during disruptions.

In the consultative document, the committee also highlighted the importance of maintaining documentation for critical decisions in banks’ records, such as third-party strategies and board minutes.

Why does this matter?

As the financial sector becomes increasingly reliant on technology and tech companies to provide financial services, it grows more susceptible to cyberattacks and incidents that could ripple through the wider economy. As such, there is a growing worldwide push to improve the financial sector’s digital resilience. Europe’s Digital Operational Resilience Act (DORA), due to apply from January next year, has already recognised this issue.

ChatGPT vs Google: The battle for search dominance

OpenAI’s ChatGPT, launched in 2022, has revolutionised the way people seek answers, shifting from traditional methods to AI-driven interactions. This AI chatbot, along with competitors like Anthropic’s Claude, Google’s Gemini, and Microsoft’s Copilot, has made AI a focal point in information retrieval. Despite these advancements, traditional search engines like Google remain dominant.

Google’s profits surged by nearly 60% due to increased advertising revenue from Google Search, and its global market share reached 91.1% in June, even as ChatGPT’s web visits declined by 12%.

Google is not only holding its ground but also leveraging AI technology to enhance its services. Analysts at Bank of America credit Gemini, Google’s AI, with contributing to the growth in search queries. By integrating Gemini into products such as Google Cloud and Search, Google aims to improve their performance, blending traditional search capabilities with cutting-edge AI innovations.

However, Google’s dominance faces significant legal challenges. The U.S. Department of Justice has concluded arguments in a major antitrust case against Google, accusing the company of monopolising the digital search market, with a verdict expected by late 2024.

Additionally, Google is contending with another antitrust lawsuit filed by the U.S. government over alleged anticompetitive behaviour in the digital advertising space. These legal challenges could reshape the digital search landscape, potentially providing opportunities for AI chatbots and other emerging technologies to gain a stronger foothold in the market.

Vanuatu PM visits Huawei to view policing technology

Vanuatu Prime Minister Charlot Salwai visited Huawei’s headquarters in Shenzhen to explore surveillance technology aimed at enhancing policing and reducing criminal activity, his office announced on Tuesday. The visit is part of Salwai’s trip to China before attending a Pacific Island leaders meeting in Japan next week.

China is Vanuatu’s largest external creditor and a major provider of infrastructure. Australia, Vanuatu’s biggest aid donor and policing partner, has expressed concerns about China’s expanding security influence in the Pacific Islands, especially after a policing equipment deal with Vanuatu and a security pact with the Solomon Islands.

Huawei has supplied digital systems to cities like Port Vila, Vanuatu’s capital, to help lower crime rates. However, Vanuatu’s police do not yet use Huawei’s surveillance system, which would require a data centre to support it. Australia has banned Huawei from its 5G network on national security grounds and has funded subsea telecommunications cables in the Pacific Islands to counter Huawei’s influence, a move Beijing has criticised as discriminatory.

Microsoft employees in China to use iPhones

Microsoft has announced plans to provide Apple iOS devices to its employees in China so they can access authentication apps due to the unavailability of Google’s Android services in the country. This move, part of Microsoft’s global Secure Future Initiative, aims to mitigate security risks highlighted by recent breaches, including a high-profile hack by Russian hackers earlier this year.

Bloomberg News first reported that Microsoft, starting in September, will instruct its employees in China to use Apple devices at the workplace. The decision is driven by the absence of the Google Play Store in China, which limits employees’ access to essential security apps like Microsoft Authenticator and Identity Pass.

A Microsoft spokesperson confirmed the shift, emphasising the need for reliable access to required security apps. The company, which has operated in China since 1992 and maintains a significant research and development centre there, will provide iPhone 15 models to employees currently using Android handsets across China, including Hong Kong.

Australia accuses China-backed APT40 of cyberattacks on national networks

Australia’s government cybersecurity agency has accused a China-backed hacker group, APT40, of stealing passwords and usernames from two undisclosed Australian networks in 2022. The Australian Cyber Security Centre, in collaboration with leading cybersecurity agencies from the US, Britain, Canada, New Zealand, Japan, South Korea, and Germany, released a joint report attributing these malicious cyber operations to China’s Ministry of State Security, the primary agency overseeing foreign intelligence. China’s embassy in Australia did not immediately comment on the matter; Beijing has previously dismissed such hacking allegations as ‘political manoeuvring’.

The accusations against APT40 come in the wake of previous allegations by US and British officials in March, implicating Beijing in a large-scale cyberespionage campaign that targeted a wide range of individuals and entities, including lawmakers, academics, journalists, and defence contractors. Moreover, New Zealand also reported APT40’s targeting of its parliamentary services and parliamentary counsel office in 2021, which resulted in unauthorised access to critical information.

In response to these cyber threats, Defence Minister Richard Marles emphasised the commitment of the Australian government to safeguard its organisations and citizens in the cyber sphere. The attribution of cyber attacks marks a significant step for Australia, signalling its proactive stance in addressing cybersecurity challenges. The timing of this report is noteworthy as Australia and China are in the process of repairing strained relations following tensions that peaked in 2020 over the origins of COVID-19, leading to retaliatory tariffs imposed by Beijing on Australian exports, most of which have now been lifted.

The identification of APT40’s cyber activities stresses the persistent threat posed by state-sponsored hacker groups and the critical importance of robust cybersecurity measures to protect sensitive information and national security. The incident serves as a reminder of the importance of joint attribution networks and international cooperation in combating cyber threats.

Thousands of event tickets leaked because of Ticketmaster hack

In an ongoing extortion scheme targeting Ticketmaster, nearly 39,000 print-at-home tickets for 150 upcoming concerts and events featuring artists like Pearl Jam, Phish, Tate McRae, and Foo Fighters have been leaked by threat actors. The person responsible, known as ‘Sp1d3rHunters’, is the same individual who sold data stolen in recent data breaches targeting Snowflake, a third-party cloud database provider.

The chain of events began in April when threat actors initiated the download of Snowflake databases from over 165 organisations using stolen credentials acquired through information-stealing malware. Subsequently, in May, a prominent threat actor named ShinyHunters started to sell the data of 560 million Ticketmaster customers, allegedly extracted from Ticketmaster’s Snowflake account. Ticketmaster later verified that their data had indeed been compromised through their Snowflake account.

Initially, the threat actors demanded a ransom of $500,000 from Ticketmaster to prevent the dissemination or sale of the data to other malicious actors. However, the same threat actors recently leaked 166,000 Taylor Swift ticket barcodes and raised their demand to $2 million.

In response, Ticketmaster asserted that the leaked data was useless thanks to its anti-fraud measures, which continuously generate unique mobile barcodes. According to Ticketmaster, its SafeTix technology safeguards tickets by automatically refreshing barcodes every few seconds, making them impervious to theft or replication.
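Ticketmaster has not published SafeTix’s internals, but a barcode that refreshes every few seconds can be sketched as a TOTP-style token derived from a shared secret. The function below is a minimal illustration only; the scheme, names, and parameters (a 15-second window, a 10-digit token) are assumptions, not SafeTix’s actual design.

```python
import hashlib
import hmac
import struct
import time

def rotating_barcode(secret: bytes, period: int = 15, at=None) -> str:
    """Derive a short-lived numeric token from a shared secret.

    A TOTP-style sketch (in the spirit of RFC 6238): the token is stable
    within one time window and changes in the next, so a screenshot of a
    mobile barcode goes stale once the window rolls over.
    """
    now = time.time() if at is None else at
    window = int(now // period)                 # current time window index
    msg = struct.pack(">Q", window)             # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return f"{code % 10**10:010d}"              # fixed-width token to encode as a barcode
```

Under a scheme like this, a leaked mobile barcode verifies only within its current window, whereas a printed TicketFast barcode is static and can only be protected by invalidating and reissuing it.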

Sp1d3rHunters disputed Ticketmaster’s claims, stating that numerous print-at-home tickets with unalterable barcodes had been stolen. The threat actor emphasised that Ticketmaster’s ticket database includes both online and physical ticket types, such as TicketFast, e-ticket, and mail tickets, which are printed once and cannot be automatically refreshed. Instead, Ticketmaster would have to invalidate and reissue the tickets to affected customers.

The threat actors shared a link to a CSV file containing the barcode data for 38,745 TicketFast tickets, revealing ticket information for various events and concerts, including those featuring Aerosmith, Alanis Morissette, Billy Joel & Sting, Bruce Springsteen, Carrie Underwood, Cirque du Soleil, Dave Matthews Band, Foo Fighters, Metallica, Pearl Jam, Phish, P!NK, Red Hot Chili Peppers, Stevie Nicks, STING, Tate McRae, and $uicideboy$.

AI cybersecurity in devices deemed high-risk by European Commission

AI-based cybersecurity and emergency services components in internet-connected devices are expected to be classified as high-risk under the AI Act, according to a European Commission document seen by Euractiv. The document, which interprets the relationship between the 2014 Radio Equipment Directive (RED) and the AI Act, marks the first known instance of how AI-based safety components will be treated under the new regulations. The RED pertains to wireless devices, including those using Wi-Fi and Bluetooth, beyond traditional radios.

Under the AI Act, high-risk AI systems will be subject to extensive testing, risk management, security measures, and documentation. The Act includes a list of use cases where AI deployment is automatically considered high-risk, such as in critical infrastructure and law enforcement. It also sets criteria for categorising other high-risk products, requiring third-party conformity assessments in line with sector-specific regulations. AI cybersecurity and emergency services components meet these criteria under the RED, thus being classified as high-risk.
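The classification logic described above can be sketched as a simplified decision rule. This is a rough reading of the news report only, not the Act’s legal text; the boolean flags and their semantics are assumptions introduced for illustration.

```python
def is_high_risk(listed_use_case: bool,
                 is_safety_component: bool,
                 third_party_assessment_required: bool) -> bool:
    """Simplified sketch of the AI Act's high-risk test as described here.

    - Listed use cases (e.g. critical infrastructure, law enforcement)
      are high-risk outright.
    - Otherwise, a product is high-risk if it is an AI safety component
      of a product whose sectoral regulation requires third-party
      conformity assessment (as the RED does for AI cybersecurity and
      emergency services components).
    """
    if listed_use_case:
        return True
    return is_safety_component and third_party_assessment_required
```

Per the Commission’s interpretation, RED-covered AI cybersecurity components satisfy the second branch, and self-assessment routes do not lift the classification.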

Even in cases where the RED allows for self-assessment compliance with harmonised standards, these AI-based components are still deemed high-risk. The AI Act references numerous sectoral regulations that could classify AI products as high-risk, extending beyond electronics to medical devices, aviation, heavy machinery, and personal watercraft. The preliminary interpretation suggests that self-assessment standards are insufficient to remove the high-risk classification from AI products in these industries.

The AI Act imposes significant requirements on high-risk AI systems, while those not in this category face only minor transparency obligations. The Commission’s document is a preliminary interpretation, and the full application of the AI Act, which spans over 500 pages, remains to be seen. Despite initial estimates that 5-15% of AI systems would be classified as high-risk, a 2022 survey of EU-based startups indicated that 33-50% of these startups consider their products high-risk. Further interpretive work is needed to understand how the AI Act will impact various sectors.

Why does it matter?

This interpretation highlights the European Commission’s stringent approach to regulating AI-based cybersecurity and emergency services in internet-connected devices. By classifying these components as high-risk, the AI Act mandates rigorous testing, security measures, and documentation, ensuring robust safety standards. The move underscores the EU’s commitment to protecting critical infrastructure and sensitive data, and signals significant regulatory implications for various industries, potentially influencing global standards and practices in AI technology.