Hedge funds target South Korean chipmakers amid AI demand surge

Hedge funds are increasingly investing in South Korean chipmakers, betting on a surge in demand for high-end memory chips driven by AI advancements and government support. Notable funds, including Britain’s Man Group and Singapore’s FengHe Fund Management, target giants like SK Hynix and Samsung Electronics, which have lagged behind the broader AI sector rally.

FengHe and other investors see SK Hynix as a key player in the AI market, given its significant supply of high-bandwidth memory (HBM) chips to Nvidia. Despite Hynix’s crucial role, its stock trades at a lower multiple than Taiwan’s TSMC, presenting a perceived value opportunity. Additionally, the South Korean government’s 26 trillion won support package for the chip industry and initiatives to enhance shareholder returns add to the appeal of these stocks.

The influx of hedge fund investment has bolstered South Korea's stock market, with the KOSPI index in June recording its best monthly performance in seven months. South Korean stocks have attracted the highest inflows among Asian emerging markets this year, with Samsung and Hynix accounting for a significant portion of the KOSPI's market capitalisation. While Hynix has already posted substantial gains, Samsung is expected to catch up in the latter half of the year.

Beyond chipmakers, the AI boom is benefiting other South Korean industries. HD Hyundai Electric, for instance, has seen its share price rise sharply, driven by the increased power consumption that AI development entails. The ongoing US-China technology conflict further underpins demand for South Korea's advanced memory chips, as Chinese manufacturers struggle under US export restrictions.

South Korean company launches AI beauty lab

South Korean cosmetics giant AmorePacific has seen immense interest in its new AI beauty lab, where robots custom mix face products and advanced technology recommends the most suitable lipstick colours. Customers like Kwon You-jin appreciate the personalised service, which uses AI-generated reports to analyse skin conditions and match products precisely to individual skin tones.

AI technology is becoming increasingly prevalent in the cosmetics industry, with global brands like L’Oréal and Sephora also adopting it to tailor products to customer needs. In 2023, global beauty industry sales, including cosmetics, reached $625.6 billion, showing steady growth since a dip during the COVID-19 pandemic.

AmorePacific employs deep learning and machine learning techniques to recommend the best product choices. The use of AI speeds up product development and reduces human error and variability in consultations. Analysts believe that AI integration will continue to accelerate product launches and lower barriers across the industry.
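
To make the recommendation idea concrete, here is a deliberately simplified sketch of shade matching as a nearest-neighbour search over colour values. Everything in it (the catalogue, the RGB numbers, and the matching rule) is a hypothetical illustration, not a description of AmorePacific's actual system, which the company attributes to deep learning models.

```python
# Purely illustrative sketch of shade matching as nearest-neighbour search.
# The catalogue, RGB values, and matching rule are hypothetical; a production
# system would use a perceptual colour space (e.g. CIELAB) and learned models.
from math import dist

# Hypothetical foundation shades (name -> average RGB of the product)
SHADES = {
    "porcelain": (242, 220, 202),
    "sand": (224, 190, 160),
    "honey": (198, 158, 120),
    "espresso": (141, 95, 66),
}

def recommend_shade(skin_rgb: tuple[int, int, int]) -> str:
    """Return the catalogue shade whose colour is closest to the measured skin tone."""
    return min(SHADES, key=lambda name: dist(SHADES[name], skin_rgb))

if __name__ == "__main__":
    # Example: a skin tone measured by an in-store scanner (made-up value)
    print(recommend_shade((221, 186, 158)))  # -> "sand"
```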

The market for AI in the beauty and cosmetics sector is projected to more than double, from $3.27 billion in 2023 to $8.1 billion by 2028. According to The Business Research Company, services such as personalised beauty recommendations, skin analysis, diagnostics, and virtual makeup artists are expected to drive this growth.

Vimeo introduces AI labelling for videos

Vimeo has joined TikTok, YouTube, and Meta in requiring creators to label AI-generated content. Announced on Wednesday, the new policy mandates that creators disclose when realistic content has been produced using AI. The updated terms of service aim to prevent confusion between genuine and AI-created videos, as advanced generative tools make real and fake footage increasingly difficult to tell apart.

Not all AI usage requires labelling: animated content, videos with obvious visual effects, and minor AI production assistance are exempt. However, videos that feature altered depictions of celebrities or events must include an AI content label. Vimeo's own AI tools, such as those that edit out long pauses, will also prompt labelling.

Creators can manually indicate AI usage when uploading or editing videos, specifying whether AI was used for audio, visuals, or both. Vimeo plans to develop automated systems to detect and label AI-generated content to enhance transparency and reduce the burden on creators. CEO Philip Moyer emphasised the importance of protecting user-generated content from AI training models, aligning Vimeo with similar policies at YouTube.

AI investment risks and uncertainties highlighted by Goldman Sachs

Goldman Sachs has cast doubt on the economic viability of AI investments, despite substantial spending on AI infrastructure. The firm estimates around $1 trillion will be spent on AI-related infrastructure, including data centres, semiconductors, and grid upgrades. However, Goldman Sachs raises a crucial question: what problem will this massive AI investment actually solve?

According to Jim Covello, head of global equity research at Goldman Sachs, the current scenario contrasts sharply with past technological transitions. He argues that while the internet revolutionised commerce by offering low-cost solutions, AI today is exceedingly expensive and lacks clear applications capable of justifying its high costs. Covello highlights concerns that investor enthusiasm may wane if substantial AI use-cases fail to materialise within the next 12 to 18 months.

Despite these reservations, Kash Rangan of Goldman Sachs acknowledges that the AI cycle is still in its early stages, primarily focused on building infrastructure rather than discovering groundbreaking applications. He remains optimistic that as the AI ecosystem matures, a transformative ‘killer application’ will eventually emerge.

Looking forward, Goldman Sachs anticipates that the ongoing AI build-out will exert considerable pressure on national grids and electricity consumption. The report forecasts a 2.4% compound annual growth rate in UK electricity demand and projects that data centres will double their electricity consumption by 2030, underscoring the immediate impacts of AI infrastructure development on energy resources.
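
For context, a compound annual growth rate compounds multiplicatively year on year. The worked figures below simply illustrate what the quoted 2.4% rate implies over an assumed six-year horizon; the horizon is an assumption for the example, not a figure from the report.

$$
V_{n} = V_{0}\,(1 + r)^{n}, \qquad r = \left(\frac{V_{n}}{V_{0}}\right)^{1/n} - 1
$$

At $r = 0.024$ and $n = 6$, electricity demand would rise by a factor of $(1.024)^{6} \approx 1.15$, or roughly 15% cumulatively over the period.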

While AI holds potential for revolutionary advancements, Goldman Sachs suggests that its current trajectory raises fundamental questions about economic feasibility and the pace of transformative breakthroughs needed to justify its substantial investments.

French startup unveils AI model for disease diagnosis

French startup Bioptimus has unveiled an AI model, H-optimus-0, designed to assist in disease research and diagnosis. The AI model, trained on hundreds of millions of images, can perform complex tasks such as identifying cancerous cells and detecting genetic abnormalities in tumours. Bioptimus claims it is the largest open-source model for pathology, aiming to enhance transparency and accelerate medical advancements.

The launch of H-optimus-0 is part of a broader trend of leveraging AI for medical breakthroughs. Similar initiatives include Google DeepMind's AlphaFold system and American startup K Health, which recently raised $50 million for its patient-facing chatbot. Despite these advances, there is widespread concern about AI in healthcare: a 2023 Pew Research Center survey indicated that 60% of patients are uncomfortable with doctors relying on AI for their care.

Bioptimus CTO Rodolphe Jenatton emphasised that this release is just the beginning, with plans for further developments to extend the model’s capabilities beyond tissue analysis. The startup, founded in February with backing from French biotech firm Owkin Inc., secured $35 million in seed funding from investors including Bpifrance and telecom billionaire Xavier Niel.

Rising threat of deepfake pornography for women

As deepfake pornography becomes an increasing threat to women online, both international and domestic lawmakers face difficulties in creating effective protections for victims. The issue has gained prominence through cases like that of Amy Smith, a student in Paris who was targeted with manipulated nude images and harassed by an anonymous perpetrator. Despite reporting the crime to multiple authorities, Smith found little support due to the complexities of tracking faceless offenders across borders.

Recent data shows that deepfake technology is used overwhelmingly for malicious purposes, with an estimated 98% of deepfake videos online being pornographic. The FBI has identified a rise in "sextortion schemes," where altered images are used for blackmail. Public awareness of these crimes is often heightened by high-profile cases, but most victims are not celebrities and face immense challenges in seeking justice.

Efforts are underway to address these issues through new legislation. In the US, proposed bills aim to hold perpetrators accountable and require prompt removal of deepfake content from the internet. Additionally, President Biden’s recent executive order seeks to develop technology for detecting and tracking deepfake images. In Europe, the AI Act introduces regulations for AI systems but faces criticism for its limited scope. While these measures represent progress, experts caution that they may not fully prevent future misuse of deepfake technology.

Intuit to cut 1,800 jobs, focus on AI investments

Intuit, the parent company of TurboTax, has announced plans to reduce its workforce by 10%, affecting approximately 1,800 jobs. This move comes as Intuit shifts its focus towards enhancing its AI-powered tax preparation software and other financial tools.

The company intends to close two sites, in Edmonton, Canada, and Boise, Idaho, while aiming to rehire for new positions primarily in engineering, product development, and customer-facing roles.

CEO Sasan Goodarzi explained that around 300 roles will be eliminated to streamline operations, while another 80 technology positions will be consolidated across locations such as Atlanta, Bengaluru, and Tel Aviv.

This restructuring effort is expected to incur costs between $250 million and $260 million, with significant charges anticipated in the fourth quarter of this year.

Despite the layoffs, Intuit plans to ramp up its investments in generative AI and expand its market presence, targeting regions including Canada, the United Kingdom, and Australia. Goodarzi expressed confidence in growing the company’s headcount beyond fiscal 2025, following recent positive financial performance and increased demand for its AI-integrated products.

OpenAI and Los Alamos collaborate on AI research

OpenAI is partnering with Los Alamos National Laboratory, most famous for creating the first atomic bomb, to explore how AI can assist scientific research. The collaboration will evaluate OpenAI’s latest model, GPT-4o, in supporting lab tasks and employing its voice assistant technology to aid scientists. This new initiative is part of OpenAI’s broader efforts to showcase AI’s potential in healthcare and biotech, alongside recent partnerships with companies like Moderna and Color Health.

However, the rapid advancement of AI has sparked concerns about its potential misuse. Lawmakers and tech executives have expressed fears that AI could be used to develop bioweapons. Earlier tests by OpenAI indicated that GPT-4 posed only a slight risk of aiding in creating biological threats.

Erick LeBrun, a research scientist at Los Alamos, emphasised the importance of this partnership in understanding both the benefits and potential dangers of advanced AI. He highlighted the need for a framework to evaluate current and future AI models, particularly concerning biological threats.

Healthcare experts demand transparency in AI use

Healthcare professionals, including researchers and clinicians, are keen to incorporate AI into their daily work but demand greater transparency regarding its application. A survey by Elsevier reveals that 94% of researchers and 96% of clinicians believe AI will accelerate knowledge discovery, while a similar proportion sees it boosting research output and reducing costs. Both groups, however, stress the need for quality content, trust, and transparency before they fully embrace AI tools.

The survey, involving 3,000 participants across 123 countries, indicates that 87% of respondents think AI will enhance overall work quality, and 85% believe it will free up time for higher-value projects. Despite these positive outlooks, there are significant concerns about AI’s potential misuse. Specifically, 95% of researchers and 93% of clinicians fear that AI could be used to spread misinformation. In India, 82% of doctors worry about overreliance on AI in clinical decisions, and 79% are concerned about societal disruptions like unemployment.

To address these issues, 81% of researchers and clinicians expect to be informed if the tools they use depend on generative AI. Moreover, 71% want assurance that AI-dependent tools are based on high-quality, trusted data sources. Transparency in peer-review processes is also crucial, with 78% of researchers and 80% of clinicians expecting to know if AI influences manuscript recommendations. These insights underscore the importance of transparency and trust in the adoption of AI in healthcare.

Biden administration assembles AI expert team

President Joe Biden has assembled a team of lawyers, engineers, and national security specialists to develop standards for the training and deployment of AI across various industries. The team, tasked with creating AI guardrails, is working on ways to identify AI-generated images, determine what data is suitable for training models, and prevent China from accessing essential AI technology. It is also collaborating with foreign governments and Congress to align AI approaches globally.

Laurie E. Locascio, Director of the National Institute of Standards and Technology (NIST), oversees the institute’s AI research, biotechnology, quantum science, and cybersecurity. She has requested an additional $50 million to fulfil the institute’s responsibilities under Biden’s AI executive order.

Alan Estevez, Under Secretary of Commerce for Industry and Security, is responsible for preventing adversaries like China, Russia, Iran, and North Korea from obtaining semiconductor technology crucial for AI systems. Estevez has imposed restrictions on the sale of advanced computer chips to China and is encouraging US allies to do the same.

Elizabeth Kelly, Director of the AI Safety Institute, leads the team at NIST in developing AI tests, definitions, and voluntary standards as per Biden’s directive. She facilitated data and technology sharing agreements with over 200 companies, civil society groups, and researchers, and represented the US at an AI summit in South Korea.

Elham Tabassi, Chief Technology Officer of the AI Safety Institute, focuses on identifying AI risks and testing the most powerful AI models. She has been with NIST since 1999, working on machine learning and developing voluntary guidelines to mitigate AI risks.

Saif M. Khan, Senior Adviser to the Secretary for Critical and Emerging Technologies, coordinates AI policy activities across the Commerce Department, including export controls and copyright guidance for AI-assisted inventions. Khan acts as a key liaison between the department and Congress on AI issues.