US tech giant Microsoft is committed to offering generative AI services in Hong Kong through educational initiatives, despite OpenAI’s access restrictions in the city and mainland China. Microsoft collaborated with the Education University of Hong Kong Jockey Club Primary School to offer AI services starting last year.
About 220 students in grades 5 and 6 used Microsoft’s chatbot and text-to-image tools in science classes. Principal Elsa Cheung Kam Yan noted that AI enhances learning by broadening students’ access to information and allowing exploration beyond textbooks. Vice-Principal Philip Law Kam Yuen added that in collaboration with Microsoft Hong Kong for 12 years, the school plans to extend AI usage to more classes.
Additionally, Microsoft also has agreements with eight Hong Kong universities to promote AI services. Fred Sheu, national technology officer of Microsoft in Hong Kong, reaffirmed Microsoft’s commitment to maintaining its Azure AI services, which use OpenAI’s models, further emphasising that API restrictions by OpenAI will not affect the company. Microsoft’s investment in OpenAI reportedly allows it to receive up to 49% of the profits from OpenAI’s for-profit arm. As all government-funded universities in Hong Kong have already acquired the Azure OpenAI service, they are thus qualified users. He also emphasised that Microsoft intends to extend this service to all schools in Hong Kong over the next few years.
A recent survey by Tracklib reveals that 25% of music producers are now integrating AI into their creative processes, marking a significant adoption of technology within the industry. However, most producers exhibit resistance towards AI, citing concerns over losing creative control as a primary barrier.
Among those using AI, the survey found that most employ it for stem separation (73.9%) rather than full song creation, which is used by only a small fraction (3%). Concerns among non-users primarily revolve around artistic integrity (82.2%) and doubts about AI’s ability to maintain quality (34.5%), with additional concerns including cost and copyright issues.
Interestingly, the survey highlights a stark divide between perceptions of assistive AI, which aids in music creation, and generative AI, which directly generates elements or entire songs. While some producers hold a positive view of assistive AI, generative AI faces stronger opposition, especially among younger respondents.
Overall, the survey underscores a cautious optimism about AI’s future impact on music production, with 70% of respondents expecting it to have a significant influence going forward. Despite current reservations, Tracklib predicts continued adoption of music AI, noting it is entering the “early majority” phase of adoption according to technology adoption models.
Microsoft has warned about a new jailbreaking technique called Skeleton Key, which can prompt AI models to disclose harmful information by bypassing their behavioural guidelines. Detailed in a report published on 26 June, Microsoft explained that Skeleton Key forces AI models to respond to illicit requests by modifying their behavioural guidelines to provide a warning rather than refusing the request outright. A technique like this, called Explicit: forced instruction-following, can lead models to produce harmful content.
The report highlighted an example where a model was manipulated to provide instructions for making a Molotov cocktail under the guise of an educational context. The prompt allowed the model to deliver the information with only a prefixed warning by instructing the model to update its behaviour. Microsoft tested the Skeleton Key technique between April and May 2024 on various AI models, including Meta LLama3-70b, Google Gemini Pro, and GPT 3.5 and 4.0, finding it effective but noting that attackers need legitimate access to the models.
Microsoft has addressed the issue in its Azure AI-managed models using prompt shields and has shared its findings with other AI providers. The company has also updated its AI offerings, including its Copilot AI assistants, to prevent guardrail bypassing. Furthermore, the latest disclosure underscores the growing problem of generative AI models being exploited for malicious purposes, following similar warnings from other researchers about vulnerabilities in AI models.
Why does it matter?
In April 2024, Anthropic researchers discovered a technique that could force AI models to provide instructions for constructing explosives. Earlier this year, researchers at Brown University found that translating malicious queries into low-resource languages could induce prohibited behaviour in OpenAI’s GPT-4. These findings highlight the ongoing challenges in ensuring the safe and responsible use of advanced AI models.
Adobe is expanding its generative AI team in India, seeking researchers skilled in NLP, LLMs, computer vision, deep learning, and more. With approximately 7,000 employees already in India, Adobe aims to bolster its research capabilities across various AI domains. Candidates will innovate and prototype AI technologies, contributing to Adobe’s products, publishing research, and collaborating globally.
Successful applicants are expected to demonstrate research excellence and a robust publication history, with backgrounds in computer science, electrical engineering, or mathematics. Senior roles require a minimum of seven years’ research experience, coupled with strong problem-solving abilities and analytical skills. Adobe prioritises integrating generative AI across its Experience Cloud, Creative Cloud, and Document Cloud, aiming to enhance content workflows and customer interactions.
Adobe’s foray into generative AI began with Adobe Firefly in collaboration with NVIDIA in March 2023. The company recently integrated third-party AI tools such as OpenAI’s Sora into Premiere Pro, offering users flexibility in AI model selection.
By partnering with AI providers like OpenAI, RunwayML, and Pika, Adobe continues to innovate, enabling personalised and efficient content creation workflows for enterprise customers.
Why does it matter?
The IATSE’s tentative agreement represents a significant step forward in securing fair wages and job protections for Hollywood’s behind-the-scenes workers, ensuring that the rapid technological advancements do not come at the expense of human employment.
YouTube has introduced an updated eraser tool that allows creators to remove copyrighted music from their videos without affecting speech, sound effects, or other audio. Launched on 4 July, the tool uses an AI-powered algorithm to target only the copyrighted music, leaving the rest of the video intact.
Previously, videos flagged for copyrighted audio faced muting or removal. However, YouTube cautions that the tool might only be effective if the song is easy to isolate.
YouTube chief Neal Mohan announced the launch on X, explaining that the company had been testing the tool for some time but struggled to remove copyrighted tracks accurately. The new AI algorithm represents a significant improvement, allowing users to mute all sound or erase the music in their videos. Advancements like this are part of YouTube’s broader efforts to leverage AI technology to enhance user experience and compliance with copyright laws.
In addition to the eraser tool, YouTube is making strides in AI-driven music licensing. The company has been negotiating with major record labels to roll out AI music licensing deals, aiming to use AI to create music and potentially offer AI voice imitations of famous artists. Following the launch of YouTube’s AI tool Dream Track last year, which allowed users to create music with AI-generated voices of well-known singers, YouTube continues to engage with major labels like Sony, Warner, and Universal to expand the use of AI in music creation and licensing.
Why does it matter?
The IATSE’s tentative agreement represents a significant step forward in securing fair wages and job protections for Hollywood’s behind-the-scenes workers, ensuring that the rapid advancements in technology do not come at the expense of human employment.
Actor Morgan Freeman, renowned for his distinctive voice, recently addressed concerns over a video circulating on TikTok featuring a voice purportedly his own but created using AI. The video, depicting a day in his niece’s life, prompted Freeman to emphasise the importance of reporting unauthorised AI usage. He thanked his fans on social media for their vigilance in maintaining authenticity and integrity, underscoring the need to protect against such deceptive practices.
This isn’t the first time Freeman has encountered unauthorised use of his likeness. Previously, his production company’s EVP, Lori McCreary, encountered deepfake videos attempting to mimic Freeman, including one falsely depicting him firing her. Such incidents highlight the growing prevalence of AI-generated content, prompting discussions about its ethical implications and the need for heightened awareness.
Freeman’s case joins a broader trend of celebrities, from Taylor Swift to Tom Cruise, facing similar challenges with AI-generated deepfakes. These instances underscore ongoing concerns about digital identity theft and the blurred lines between real and fabricated content in the digital age.
The US intelligence community is fully embracing generative AI, marking a significant shift towards transparency in its adoption of cutting-edge technology. Leaders within agencies like the CIA are openly discussing how generative AI enhances intelligence operations, from aiding in content triage and search capabilities to supporting analysts in generating counter arguments and ideation.
Lakshmi Raman, the CIA’s director of Artificial Intelligence Innovation, highlighted the transformative impact of generative AI during a recent address at the Amazon Web Services Summit in Washington, D.C. She noted its critical role in processing vast amounts of data to extract actionable insights, crucial for keeping pace with global developments and informing policymakers amidst a constant influx of news.
Despite its potential benefits, the deployment of generative AI within the intelligence community is not without its challenges and risks. Concerns over accuracy and security persist, as erroneous outputs—termed ‘hallucinations’—could have severe consequences in national security contexts. Adele Merritt, Intelligence Community Chief Information Officer, stressed the need for cautious adoption, ensuring that AI technologies adhere to strict privacy and security standards.
In response to these challenges, major tech companies like Microsoft and AWS are adapting their cloud services to cater to classified government needs, offering secure environments for deploying generative AI tools. AWS, for instance, launched a significant initiative to support government agencies with training and technical support for generative AI, underscoring its commitment to enhancing national security capabilities through innovative technology solutions.
However, this concerted effort by both intelligence agencies and tech providers aims to harness the full potential of generative AI while mitigating associated risks, thus shaping the future of intelligence operations in an increasingly data-driven world.
Why does it matter?
The IATSE’s tentative agreement represents a significant step forward in securing fair wages and job protections for Hollywood’s behind-the-scenes workers, ensuring that the rapid advancements in technology do not come at the expense of human employment.
A new platform, Build-A-Brain, aims to democratise access to generative AI, offering users control over data and AI engines unlike public tools. Founder Howard Jones emphasises its tailored AI products for businesses, harnessing proprietary data to enhance decision-making processes.
The platform supports project management akin to SharePoint, featuring an Articles Wizard for content generation, including images.
Build-A-Brain stands out with its Custom AI Brain feature, enabling users to interact securely with their own data. It finds applications across diverse sectors like healthcare and finance, offering bespoke AI solutions that drive innovation. Businesses can integrate their brand identity and utilise additional tools like audio transcription and file conversion, enhancing workflow efficiency.
Jones highlights the platform’s future potential in facilitating comprehensive workflows, combining content creation with data interrogation and collaboration tools.
Build-A-Brain operates on a freemium model, allowing free access with upgrade options, supported by user-friendly tutorials on its website. Explore more at virinity.ai to unlock AI-driven insights tailored to your organisation’s needs.
Why does it matter?
The IATSE’s tentative agreement represents a significant step forward in securing fair wages and job protections for Hollywood’s behind-the-scenes workers, ensuring that the rapid advancements in technology do not come at the expense of human employment.
Realme has unveiled plans to integrate Sony’s cutting-edge LYT-701 camera sensor into its upcoming 5G smartphone, marking a significant leap into AI-enhanced imaging technology. The announcement, made at a pre-launch event in Bangkok, underscores Realme’s strategic partnership with Sony to elevate mobile photography capabilities.
Francis Wong, Head of Product Marketing at Realme, highlighted the shift from traditional hardware-centric advancements to AI-driven innovations in mobile photography. He emphasised that while past improvements focused on megapixels and sensor sizes, future progress hinges on AI to redefine the mobile imaging experience.
The Realme 13 Pro Series 5G will feature the HYPERIMAGE+ technology, integrating multiple lenses and a 50MP periscope telephoto camera powered by Sony’s LYT-600 sensor. This setup promises to deliver superior image quality and unprecedented flexibility for users capturing diverse scenes.
The collaboration aims not only to advance technological capabilities but also to democratise advanced imaging tools, enabling users worldwide to capture and share their experiences in unprecedented detail. Realme plans to announce the official launch dates for the device in India and other markets soon.
Why does it matter?
The IATSE’s tentative agreement represents a significant step forward in securing fair wages and job protections for Hollywood’s behind-the-scenes workers, ensuring that the rapid advancements in technology do not come at the expense of human employment.
Samsung Electronics reported a significant surge in its second-quarter operating profit, driven by rising semiconductor prices amid booming demand for AI. The company’s operating profit is estimated to have increased more than 15-fold to 10.4 trillion won ($7.54 billion) from 670 billion won a year earlier, surpassing analysts’ expectations. The surge marks Samsung’s most profitable quarter since Q3 2022, primarily due to higher chip prices and a reversal of previous inventory writedowns.
The company’s revenue likely increased by 23% to 74 trillion won in the second quarter compared to last year’s period. Samsung’s semiconductor division posted its second consecutive quarterly profit as prices for memory chips, particularly high-end DRAM and NAND Flash chips used in AI applications, saw significant increases. According to TrendForce, DRAM and NAND Flash chip prices jumped 13% to 20% from the previous quarter.
However, analysts expect the price increases for memory chips to slow down in the third quarter, with only a 5% to 10% rise forecasted for conventional DRAM and NAND Flash chips. Despite the solid AI-driven demand for high-end chips, Samsung needs to catch up with its rival, SK Hynix, in supplying these advanced chips to major clients like Nvidia. Investors are keenly awaiting Samsung’s outlook on legacy chips and Nvidia’s approval of its latest HBM chips after previous heat and power consumption issues.