BlackBerry surpasses revenue expectations, driven by cybersecurity demand

BlackBerry surpassed expectations for Q1 revenue, reporting $144 million against analysts’ estimate of $134.1 million. The Canadian firm credits the result to strong demand for its cybersecurity services in response to rising online threats.

Looking ahead to Q2, BlackBerry forecasts revenue between $136 million and $144 million, with its cybersecurity division expected to contribute $82 million to $86 million. Furthermore, BlackBerry’s collaboration with AMD to develop robotic systems for industrial and healthcare applications indicates its diversification beyond cybersecurity.

Why does it matter?

Recent significant data breaches in sectors like automotive and healthcare have intensified the need for enhanced cybersecurity measures, benefiting companies like BlackBerry. Despite a general slowdown in tech spending, these security concerns are prompting organisations and governments to strengthen their defences, thereby boosting BlackBerry’s performance.

Evolve Bank cyberattack exposes customer data and prompts US federal response

Arkansas-based Evolve Bank & Trust confirmed a cyberattack that led to customer data being leaked on the dark web. The cybercrime group LockBit 3.0 claimed responsibility for the hack, demanding a ransom from the Federal Reserve. The bank has involved law enforcement in the investigation and is providing free credit monitoring and identity theft protection to affected customers.

The breach follows a directive from the US Federal Reserve for Evolve to improve its risk management and compliance with anti-money laundering regulations. Additionally, fintech company Mercury revealed that some of its customers’ account numbers and deposit balances were compromised; affected customers have been notified and advised on preventive measures.

Why does it matter?

The cyberattack on Evolve Bank exposed sensitive customer data to potential misuse, including identity theft and financial fraud. It highlights vulnerabilities in financial institutions’ cybersecurity defences, prompting data protection and regulatory compliance concerns.

New report unveils cyberespionage groups using ransomware for evasion and profit

A recent report from SentinelLabs and Recorded Future analysts contends that cyberespionage groups have increasingly turned to ransomware as a strategic tool to complicate attribution, divert attention from defenders, or as a secondary objective for financial gain alongside data theft.

The report specifically sheds light on the activities of ChamelGang, a suspected Chinese advanced persistent threat (APT) group that uses the CatB ransomware strain in attacks targeting prominent organisations globally. Operating under aliases such as CamoFei, ChamelGang has targeted mostly governmental bodies and critical infrastructure entities, primarily between 2021 and 2023.

Employing sophisticated tactics for initial access, reconnaissance, lateral movement, and data exfiltration, ChamelGang executed a notable attack in November 2022 on the Presidency of Brazil, compromising 192 computers. The group leveraged standard reconnaissance tools to map the network and identify critical systems before deploying CatB ransomware, leaving ransom notes with contact details and payment instructions on encrypted files. While the attack was initially attributed to TeslaCrypt, new evidence points to ChamelGang’s involvement.

In a separate incident, ChamelGang targeted the All India Institute of Medical Sciences (AIIMS), disrupting healthcare services with CatB ransomware. Other suspected attacks on a government entity in East Asia and an aviation organisation in the Indian subcontinent share similarities in tactics, techniques, and procedures (TTPs) and the use of custom malware like BeaconLoader.

These intrusions have impacted 37 organisations, primarily in North America, with additional victims in South America and Europe. Moreover, analysis of past cyber incidents reveals connections to suspected Chinese and North Korean APTs. 

Why does it matter?

The integration of ransomware into cyberespionage operations offers strategic advantages, blurring the lines between APT and cybercriminal activities to obfuscate attribution and mask data collection efforts. ChamelGang’s emergence in ransomware incidents underscores how adversaries are evolving their tactics to achieve their objectives while evading detection.

Helsing in talks for $500 million funding, poised to become Europe’s top AI defence startup

European defence technology startup Helsing is in negotiations to secure nearly $500 million from Silicon Valley investors, including Accel and Lightspeed Venture Partners, at a valuation of around $4.5 billion. The valuation would roughly triple the company’s worth in less than a year, a rise possibly driven by heightened global conflicts and the resulting surge in private investment in the military supply sector.

Established in 2021, Helsing specialises in AI-based defence software that analyses extensive data from sensors and weapons systems, providing real-time battlefield intelligence to support military decision-making. The company’s software is also contributing to the advancement of AI capabilities for drones in Ukraine.

Sources familiar with the negotiations revealed that Accel and Lightspeed would be new investors in Helsing, potentially joined by General Catalyst, a previous investor in the company. If finalised, the deal would make Helsing one of Europe’s most highly valued AI startups, on par with Paris-based Mistral, which recently secured €600 million at a valuation nearing €6 billion. Venture investors’ reluctance to back defence tech firms has eased notably in the US and Europe, driven by escalating tensions between major powers and the ongoing war in Ukraine, which have led nations to increase defence expenditure.

NATO’s recent allocation of its €1 billion ‘innovation fund’ towards European tech firms points to a notable shift, with Europe rapidly closing the investment gap in defence and dual-use technologies relative to the US. The evolving landscape of modern warfare, as seen in the conflict in Ukraine, underscores the transition towards software-defined technologies over traditional hardware, enabling military forces to enhance their strategic capabilities.

Why does it matter?

Helsing has forged partnerships with established defence contractors in Europe, such as Germany’s Rheinmetall and Sweden’s Saab, to integrate AI into existing platforms like fighter jets. Collaborating with Airbus, the startup is also developing AI technologies for application in both manned and unmanned systems.

US Department of Justice charges Russian hacker in cyberattack plot against Ukraine

The US Department of Justice has charged a Russian individual for allegedly conspiring to sabotage Ukrainian government computer systems as part of a broader hacking scheme orchestrated by Russia in anticipation of its unlawful invasion of Ukraine.

In a statement released by US prosecutors in Maryland, it was disclosed that Amin Stigal, aged 22, stands accused of aiding in the establishment of servers used by Russian state-backed hackers to carry out destructive cyber assaults on Ukrainian government ministries in January 2022, a month preceding the Kremlin’s invasion of Ukraine.

The cyber campaign, dubbed ‘WhisperGate,’ employed wiper malware posing as ransomware to intentionally and irreversibly corrupt data on infected devices. Prosecutors asserted that the cyberattacks were orchestrated to instil fear across Ukrainian civil society regarding the security of their government’s systems.

The indictment notes that the Russian hackers pilfered substantial volumes of data during the cyber intrusions, encompassing citizens’ health records, criminal histories, and motor insurance information from Ukrainian government databases. Subsequently, the hackers purportedly advertised the stolen data for sale on prominent cybercrime platforms.

Stigal is also charged with assisting hackers affiliated with Russia’s military intelligence unit, the GRU, in targeting Ukraine’s allies, including the United States. US prosecutors highlighted that the Russian hackers repeatedly targeted an unspecified US government agency in Maryland between 2021 and 2022, before the invasion, giving prosecutors in the district jurisdiction to pursue charges against Stigal.

In a subsequent development in October 2022, the same servers arranged by Stigal were reportedly employed by the Russian hackers to target the transportation sector of an undisclosed central European nation, which allegedly provided civilian and military aid to Ukraine post-invasion. The incident aligns with a cyberattack in Denmark during the same period, resulting in widespread disruptions and delays across the country’s railway network.

The US government has announced a $10 million reward for information leading to the apprehension of Stigal, who is currently evading authorities and believed to be in Russia. If convicted, Stigal could face a maximum sentence of five years in prison.

AI protections included in new Hollywood workers’ contracts

The International Alliance of Theatrical Stage Employees (IATSE) has reached a tentative three-year agreement with major Hollywood studios, including Disney and Netflix. The deal promises significant pay hikes and protections against the misuse of AI, addressing key concerns of the workforce.

Under the terms of the agreement, IATSE members, such as lighting technicians and costume designers, will receive pay raises of 7%, 4%, and 3.5% over the three-year period. These increases mark a substantial improvement in compensation for the crew members who are vital to film and television production.

A crucial element of the deal is the inclusion of language that prevents employees from being required to provide AI prompts if it could result in job displacement. The provision aims to safeguard jobs against the potential threats posed by AI technologies in the industry.

The new agreement comes on the heels of a similar labor deal reached in late 2023 between the SAG-AFTRA actors’ union and the studios. That contract, which ended a nearly six-month production halt, provided substantial pay raises, streaming bonuses, and AI protections, amounting to over $1 billion in benefits over three years.

Why does it matter?

The IATSE’s tentative agreement represents a significant step forward in securing fair wages and job protections for Hollywood’s behind-the-scenes workers, ensuring that the rapid advancements in technology do not come at the expense of human employment.

Levi Strauss & Co reports data breach affecting 72,000 customers

Levi Strauss & Co, the renowned manufacturer of Levi’s denim jeans, recently disclosed a data breach incident in a notification submitted to the Office of the Maine Attorney General. The company revealed that on June 13, it detected an unusual surge in activity on its website, prompting an immediate investigation to understand the nature and extent of the breach.

Following the investigation, Levi’s determined that the incident was a ‘credential stuffing’ attack, a tactic whereby malicious actors leverage compromised account credentials obtained from external breaches to launch automated bot attacks on another platform – in this case, www.levis.com. Importantly, Levi’s clarified that the compromised login credentials did not originate from their systems.
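
To illustrate the mechanic described above, here is a minimal, hypothetical sketch of how a site might flag credential-stuffing traffic by watching for bursts of failed logins from a single source. It is not a description of Levi’s actual defences; the thresholds, function names, and addresses used in the demo are assumptions for illustration only.

```python
# Hypothetical credential-stuffing heuristic: track failed logins per source IP
# in a sliding time window and flag sources that probe many distinct accounts.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 300        # look-back window for failed attempts (assumed)
MAX_FAILURES_PER_IP = 20    # assumed threshold for a single source
MAX_DISTINCT_ACCOUNTS = 10  # many different usernames from one IP is suspicious

failures = defaultdict(deque)  # ip -> deque of (timestamp, username)

def record_failed_login(ip: str, username: str, now: float | None = None) -> bool:
    """Record a failed login; return True if the source looks like a stuffing bot."""
    now = time.time() if now is None else now
    attempts = failures[ip]
    attempts.append((now, username))
    # Discard attempts that fall outside the sliding window.
    while attempts and now - attempts[0][0] > WINDOW_SECONDS:
        attempts.popleft()
    distinct_accounts = len({user for _, user in attempts})
    return len(attempts) > MAX_FAILURES_PER_IP or distinct_accounts > MAX_DISTINCT_ACCOUNTS

if __name__ == "__main__":
    # Simulate a bot replaying breached credentials against many accounts.
    for i in range(15):
        flagged = record_failed_login("203.0.113.7", f"user{i}@example.com", now=1000.0 + i)
    print("block or challenge this IP:", flagged)  # True once a threshold is crossed
```

In practice a flagged source would be rate-limited or challenged (e.g. with CAPTCHA or step-up authentication) rather than blocked outright, and forced password resets, as Levi’s applied, remain the main remedy once accounts are known to be compromised.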

The attackers successfully executed the credential stuffing attack, gaining unauthorised access to customer accounts and extracting sensitive personal data. The compromised information included customers’ names, email addresses, saved addresses, order histories, payment details, and partial credit card information encompassing the last four digits of card numbers, card types, and expiration dates.

In the report submitted to the Maine state regulator, Levi’s disclosed that approximately 72,231 individuals were impacted by the breach. Levi’s also said there was no evidence of fraudulent transactions using the compromised data, as its systems require additional authentication before saved payment methods can be used for purchases.

In response to the breach, Levi Strauss & Co took swift action by deactivating account credentials for all affected user accounts during the relevant timeframe. Additionally, the company enforced a mandatory password reset after detecting suspicious activities on its website, thereby prioritising the security and protection of its customers’ data.

Privacy concerns intensify as Big Tech firms announce new AI-enhanced functionalities

Apple, Microsoft, and Google are spearheading a technological revolution with their vision of AI smartphones and computers. These advanced devices aim to automate tasks like photo editing and sending birthday wishes, promising a seamless user experience. However, to achieve this level of functionality, these tech giants are seeking increased access to user data.

In this evolving landscape, users are confronted with the decision of whether to share more personal information. Windows computers may capture screenshots of user activities, iPhones could aggregate data from various apps, and Android phones might analyse calls in real time to detect potential scams. The shift towards data-intensive operations raises concerns about privacy and security, as companies require deeper insights into user behaviour to deliver tailored services.

The emergence of OpenAI’s ChatGPT has catalysed a transformation in the tech industry, prompting major players like Apple, Google, and Microsoft to revamp their strategies and invest heavily in AI-driven services. The focus is on creating a dynamic computing interface that continuously learns from user interactions to provide proactive assistance, an essential strategy for the future.

While the potential benefits of AI integration are substantial, inherent security risks are associated with the increased reliance on cloud computing and data processing. As AI algorithms demand more computational power, sensitive personal data may need to be transmitted to external servers for analysis. This transfer of data to the cloud introduces vulnerabilities, potentially exposing user information to unauthorised access by third parties.

Against this backdrop, tech companies have emphasised their commitment to safeguarding user data, implementing encryption and stringent protocols to protect privacy. As users navigate this evolving landscape of AI-driven technologies, understanding the implications of data sharing and the mechanisms employed to protect privacy is crucial.

Apple, Microsoft, and Google are at the forefront of integrating AI into their products and services, each with a unique approach to data privacy and security. Apple, for instance, unveiled Apple Intelligence, a suite of AI services integrated into its devices, promising enhanced functionalities like object removal from photos and intelligent text responses. Apple is also revamping its voice assistant, Siri, to enhance its conversational abilities and give it access to data from various applications.

The company aims to process AI data locally to minimise external exposure, with stringent measures in place to secure data transmitted to servers. Apple’s commitment to protecting user data differentiates it from other companies that retain data on their servers. However, concerns have been raised about the lack of transparency regarding Siri requests sent to Apple’s servers. Security researcher Matthew Green argued that there are inherent security risks to any data leaving a user’s device for processing in the cloud.

Microsoft has introduced AI-powered features in its new Windows computers, called Copilot+ PCs, with data privacy and security handled through a new chip and other technologies. The Recall feature lets users quickly retrieve documents and files by typing casual phrases, with the computer taking screenshots every five seconds and analysing them directly on the PC. While Recall offers enhanced functionality, security researchers caution about potential risks if the data is hacked.

Google has also unveiled a suite of AI services, including a scam detector for phone calls and an ‘Ask Photos’ feature. The scam detector operates on the phone without Google listening to calls, enhancing user security. However, concerns have been raised about the transparency of Google’s approach to AI privacy, particularly regarding the storage and potential use of personal data for improving its services.

Why does it matter?

As these tech giants continue to innovate with AI technologies, users must weigh the benefits of enhanced functionalities against potential privacy and security risks associated with data processing and storage in the cloud. Understanding how companies handle user data and ensuring transparency in data practices are essential for maintaining control over personal information in the digital age.

CLTR urges UK government to create formal system for managing AI misuse and malfunctions

The UK should implement a system to log misuse and malfunctions in AI to keep ministers informed of alarming incidents, according to a report by the Centre for Long-Term Resilience (CLTR). The think tank, which focuses on responses to unforeseen crises, urges the next government to establish a central hub for recording AI-related episodes across the country, similar to the Air Accidents Investigation Branch.

CLTR highlights that since 2014, news outlets have recorded 10,000 AI ‘safety incidents,’ documented in a database by the Organisation for Economic Co-operation and Development (OECD). These incidents range from physical harm to economic, reputational, and psychological damage. Examples include a deepfake of Labour leader Keir Starmer and Google’s Gemini model depicting World War II soldiers inaccurately. The report’s author, Tommy Shaffer Shane, stresses that incident reporting has been transformative in aviation and medicine but is largely missing in AI regulation.

The think tank recommends the UK government adopt a robust incident reporting regime to manage AI risks effectively. It suggests following the safety protocols of industries like aviation and medicine, as many AI incidents may go unnoticed due to the lack of a dedicated AI regulator. Labour has pledged to introduce binding regulations for advanced AI companies, and CLTR emphasises that such a setup would help the government anticipate and respond quickly to AI-related issues.

Additionally, CLTR advises creating a pilot AI incident database, which could collect episodes from existing bodies such as the Air Accidents Investigation Branch and the Information Commissioner’s Office. The think tank also calls for UK regulators to identify gaps in AI incident reporting and build on the algorithmic transparency reporting standard already in place. An effective incident reporting system would help the Department for Science, Innovation and Technology (DSIT) stay informed and address novel AI-related harms proactively.
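
As a purely illustrative sketch of what a pilot AI incident database might record, the structure below lists the kinds of fields such a registry could capture. The field names and example entry are assumptions for illustration, not a specification from the CLTR report or the OECD database.

```python
# Hypothetical record structure for a pilot AI incident database.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIIncident:
    incident_id: str
    reported_on: date
    reporting_body: str                  # e.g. a regulator such as the ICO
    system_description: str              # the AI system or model involved
    harm_categories: list[str] = field(default_factory=list)  # physical, economic, reputational, psychological
    severity: str = "unknown"            # scale to be agreed by the reporting regime
    summary: str = ""

# Illustrative placeholder entry, loosely modelled on the kinds of episodes the OECD database tracks.
example = AIIncident(
    incident_id="2024-0001",
    reported_on=date(2024, 6, 1),
    reporting_body="Information Commissioner's Office",
    system_description="Generative image model producing misleading historical depictions",
    harm_categories=["reputational"],
    summary="Illustrative placeholder entry.",
)
print(example.incident_id, example.harm_categories)
```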

Ransomware actors encrypted Indonesia’s national data centre

Hackers have encrypted systems at Indonesia’s national data centre with ransomware, causing disruptions in immigration checks at airports and various public services, according to the country’s communications ministry. The ministry reported that the Temporary National Data Centre (PDNS) systems were infected with Brain Cipher, a new variant of the LockBit 3.0 ransomware.

Communications Minister Budi Arie Setiadi said that the hackers demanded $8 million for decryption but emphasised that the government would not comply. The attack targeted the Surabaya branch of the national data centre, not the Jakarta location.

The breach risks exposing data from state institutions and local governments. The cyberattack, which began last Thursday, disrupted services such as visa and residence permit processing, passport services, and immigration document management, according to Hinsa Siburian, head of the national cyber agency. The ransomware also impacted online enrollment for schools and universities, prompting an extension of the registration period, as local media reported. Overall, at least 210 local services were disrupted.

Although LockBit ransomware was used, it may have been deployed by a different group, as many use the leaked LockBit 3.0 builder, noted SANS Institute instructor Will Thomas. LockBit was a prolific ransomware operation until its extortion site was shut down in February, but it resurfaced three months later. Cybersecurity analyst Dominic Alvieri also pointed out that the Indonesian government hasn’t been listed on LockBit’s leak site, likely due to typical delays during negotiations. Previously, Indonesia’s data centre has been targeted by hackers, and in 2023, ThreatSec claimed to have breached its systems, stealing sensitive data, including criminal records.