User concerns grow as AI reshapes online interactions

As AI continues to evolve, it’s reshaping online platforms and stirring unease among longtime users. At a recent tech conference, speakers warned about AI-generated content flooding forums like Reddit and Stack Overflow, mimicking human interactions. Reddit moderator Sarah Gilbert highlighted the frustration felt by many contributors who see their genuine work overshadowed by AI-generated posts.

Stack Overflow, a hub for programming solutions, faced backlash when it initially banned AI-generated responses due to inaccuracies. However, it’s now embracing AI through partnerships to enhance user experience, sparking debates about the balance between human input and AI automation. CEO Prashanth Chandrasekar acknowledged the challenges, noting their efforts to maintain a community-driven knowledge base amidst technological shifts.

Meanwhile, social media companies such as Meta (formerly Facebook) are under scrutiny for training AI models on user-generated content without explicit consent. That has prompted regulatory action in countries like Brazil, where fines were imposed for non-compliance with data protection laws. In Europe and the US, similar concerns over privacy and transparency persist as AI integration grows.

The debate underscores broader issues of digital ethics and the future of online interaction, where authenticity and user privacy collide with technological advancements. Platforms must navigate these complexities to retain user trust while embracing AI’s potential to innovate and automate online experiences.

Chinese AI companies react to OpenAI block with SenseNova 5.5

At the recent World AI Conference in Shanghai, SenseTime introduced its latest model, SenseNova 5.5, showcasing capabilities comparable to OpenAI’s GPT-4o. This unveiling coincided with OpenAI’s decision to block its services in China, leaving developers scrambling for alternatives.

OpenAI’s move, effective from 9 July, blocks API access from regions where it does not officially support its service, impacting Chinese developers who had relied on its tools via virtual private networks. The decision, which comes amid US-China technology tensions, underscores broader concerns about global access to AI technologies.

The ban has prompted Chinese AI companies like SenseTime, Baidu, Zhipu AI, and Tencent Cloud to offer incentives, including free tokens and migration services, to lure former OpenAI users. Analysts suggest this could accelerate China’s AI development, challenging US dominance in generative AI technologies.

The development has sparked mixed reactions in China, with some viewing it as a chance to bolster domestic AI independence amidst geopolitical pressures. However, it also highlights challenges in China’s AI industry, such as a reliance on US semiconductors that constrains models like Kuaishou’s.

Surge in AI stocks prompts profit-taking advice

Strategists at Citigroup Inc. are advising investors to consider cashing in on the recent surge in AI stocks. Their analysis highlights investor sentiment towards AI-exposed equities at levels reminiscent of 2019. Drew Pettit’s team at Citi notes that while there’s no clear bubble in AI stocks overall, the rapid rise in specific names raises concerns about increased volatility ahead.

This year, the AI frenzy has driven Nvidia Corp. to briefly claim the title of the world’s most valuable company, while Taiwan Semiconductor Manufacturing Co. surpassed $1 trillion in market value. Citi suggests focusing on profit-taking, particularly among chip-makers, and diversifying investments across the broader AI sector.

Despite cautious signals from Citi, many market observers believe the AI momentum will persist through the year’s second half. Bloomberg News reports a split among investors: some favour established giants like Nvidia, while others look to secondary beneficiaries such as utilities and infrastructure providers.

Acknowledging the optimism around AI stocks, Citi’s strategists emphasise that current prices already imply high expectations.

Singapore advocates for international AI standards

Singapore’s digital development minister, Josephine Teo, has expressed concerns about the future of AI governance, emphasising the need for an internationally agreed-upon framework. Speaking at the Reuters NEXT conference in Singapore, Teo highlighted that while Singapore is more excited than worried about AI, the absence of global standards could lead to a ‘messy’ future.

Teo pointed out the need for specific legislation to address challenges posed by AI, particularly the use of deepfakes during elections. She stressed that clear and effective laws will be crucial as AI technology advances, both to manage its impact on society and to ensure responsible use.

Singapore’s proactive stance on AI reflects its commitment to balancing technological innovation with necessary regulatory measures. The country aims to harness the benefits of AI while mitigating potential risks, especially in critical areas like electoral integrity.

AI cybersecurity in devices deemed high-risk by European Commission

AI-based cybersecurity and emergency services components in internet-connected devices are expected to be classified as high-risk under the AI Act, according to a European Commission document seen by Euractiv. The document, which interprets the relationship between the 2014 Radio Equipment Directive (RED) and the AI Act, offers the first known indication of how AI-based safety components will be treated under the new regulations. The RED pertains to wireless devices, including those using Wi-Fi and Bluetooth, beyond traditional radios.

Under the AI Act, high-risk AI systems will be subject to extensive testing, risk management, security measures, and documentation. The Act includes a list of use cases where AI deployment is automatically considered high-risk, such as in critical infrastructure and law enforcement. It also sets criteria for categorising other high-risk products, requiring third-party conformity assessments in line with sector-specific regulations. AI cybersecurity and emergency services components meet these criteria under the RED and are therefore classified as high-risk.

Even where the RED allows manufacturers to self-assess compliance with harmonised standards, these AI-based components are still deemed high-risk. The AI Act references numerous sectoral regulations that could classify AI products as high-risk, extending beyond electronics to medical devices, aviation, heavy machinery, and personal watercraft. The preliminary interpretation suggests that self-assessment against such standards is insufficient to remove the high-risk classification from AI products in these industries.

The AI Act imposes significant requirements on high-risk AI systems, while those not in this category face only minor transparency obligations. The Commission’s document is a preliminary interpretation, and the full application of the AI Act, which spans over 500 pages, remains to be seen. Despite initial estimates that 5-15% of AI systems would be classified as high-risk, a 2022 survey of EU-based startups indicated that 33-50% of these startups consider their products high-risk. Further interpretive work is needed to understand how the AI Act will impact various sectors.

Why does it matter?

The Commission’s interpretation highlights the EU’s stringent approach to regulating AI-based cybersecurity and emergency services components in internet-connected devices. By classifying these components as high-risk, the AI Act mandates rigorous testing, security measures, and documentation, ensuring robust safety standards. The move underscores the EU’s commitment to protecting critical infrastructure and sensitive data and signals significant regulatory implications for various industries, potentially influencing global standards and practices in AI technology.

AI app aids pastors with sermons

A new AI platform called Pulpit AI, designed to assist pastors in delivering their sermons more effectively, is set to launch on 22 July. Created by Michael Whittle and Jake Sweetman, the app allows pastors to upload their sermons in various formats, such as audio, video, manuscript, or outline, and generates content like devotionals, discussion questions, newsletters, and social media posts. The aim is to ease the workload of church staff while enhancing communication with the congregation.

Whittle and Sweetman, who have been friends for over a decade, developed the idea from their desire to extend the impact of a sermon beyond Sunday services. They believe Pulpit AI can significantly benefit pastors who invest substantial time preparing sermons by repurposing their content for broader use without additional effort. This AI tool does not create sermons but generates supplementary materials based on the original sermon, ensuring the content remains faithful to the pastor’s message.

Despite the enthusiasm, some, like Dr Charlie Camosy from Creighton University, urge caution in adopting AI within the church. He suggests that while AI can be a valuable tool, it is crucial to consider its long-term implications on human interactions and the traditional processes within the church. Nonetheless, pastors who have tested Pulpit AI, such as Pastor Adam Mesa of Patria Church, report significant benefits in managing their communication and expanding their outreach efforts.

Researchers develop a method to improve reward models using LLMs for synthetic critiques

Researchers from Cohere and the University of Oxford have introduced a method to enhance reward models (RMs) in reinforcement learning from human feedback (RLHF) by leveraging large language models (LLMs) for synthetic critiques. The approach aims to reduce the extensive time and cost of the human annotation traditionally required to train RMs to predict scores based on human preferences.

In their paper, ‘Improving Reward Models with Synthetic Critiques’, the researchers detailed how LLMs can generate critiques that evaluate the relationship between a prompt and its generated output before the reward model predicts a scalar reward. These synthetic critiques improved the performance of reward models on various benchmarks by providing additional feedback on aspects like instruction following, correctness, and style, leading to better assessment and scoring of language models.
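
To make the idea concrete, the sketch below shows one way a critique-conditioned reward model could be wired up, trained with the standard Bradley-Terry pairwise preference loss. It is a minimal illustration under stated assumptions: the base model, the prompt template, and the way the critique is appended to the input are placeholders, not the paper’s actual setup.

```python
# Minimal sketch of critique-augmented reward modelling (illustrative only).
# Assumption: the reward model scores a (prompt, response) pair *conditioned
# on* an LLM-written critique, rather than on the pair alone.
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Placeholder base model; any encoder with a regression head would do.
tokenizer = AutoTokenizer.from_pretrained("distilroberta-base")
reward_model = AutoModelForSequenceClassification.from_pretrained(
    "distilroberta-base", num_labels=1  # single scalar reward head
)

def rm_input(prompt: str, response: str, critique: str) -> str:
    # Hypothetical template: the synthetic critique is appended to the usual
    # (prompt, response) pair, giving the reward model extra signal on
    # instruction following, correctness, and style.
    return f"Prompt: {prompt}\nResponse: {response}\nCritique: {critique}"

def reward(prompt: str, response: str, critique: str) -> torch.Tensor:
    enc = tokenizer(rm_input(prompt, response, critique),
                    return_tensors="pt", truncation=True)
    return reward_model(**enc).logits.squeeze(-1)  # scalar reward, shape (1,)

def pairwise_loss(prompt, chosen, rejected, crit_chosen, crit_rejected):
    # Standard preference-modelling objective: push the reward of the
    # human-preferred response above that of the rejected one.
    r_c = reward(prompt, chosen, crit_chosen)
    r_r = reward(prompt, rejected, crit_rejected)
    return -F.logsigmoid(r_c - r_r).mean()
```

The design choice mirrored here is that the critique serves as extra context for the reward head; the human preference labels (chosen vs rejected) still drive the loss, which is how critiques can raise data efficiency without replacing annotation entirely.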

The study highlighted that high-quality synthetic critiques significantly increased data efficiency, with one enhanced preference pair as valuable as forty non-enhanced pairs. The approach makes the training process more cost-effective and has the potential to match or surpass traditional reward models, as demonstrated by GPT-4o’s performance in certain benchmarks.

As the field continues to explore alternatives to RLHF, including reinforcement learning from AI feedback (RLAIF), this research indicates a promising shift towards AI-based critiquing, potentially transforming how major AI players such as Google, OpenAI, and Meta align their large language models.

AI’s digital twin technology revolution

The AI industry is investing heavily in digital twin technology, creating virtual replicas of humans and objects for research. Tech companies believe these digital twins can unlock AI’s full potential by mirroring our physiologies, our personalities, and the objects around us. Digital twins range from models of complex phenomena, like organisms or weather systems, to video avatars of individuals. The technology promises to revolutionise healthcare by providing personalised treatment, accelerating drug development, and deepening our understanding of environments and objects.

Gartner predicts the global market for digital twins will surge to $379 billion by 2034, mainly driven by the healthcare industry, which is expected to reach a market size of $110.1 billion by 2028. The concept of digital twins began in engineering and manufacturing but has expanded thanks to improved data storage and connectivity, making it more accessible and versatile.

One notable example is LinkedIn co-founder Reid Hoffman, who created his digital twin, REID.AI, using two decades of his content. Hoffman demonstrated the technology’s potential by releasing videos of himself conversing with the twin and even sending it for an on-stage interview. While most digital twins focus on statistical applications, their everyday utility is evident in projects like Twin Health, which uses sensors to monitor patients’ health and provide personalised advice. The technology has shown promise in helping diabetic patients reverse their condition and reduce their reliance on medication.

Like the broader AI boom, the digital twin market is starting with impressive demonstrations but aims to deliver significant practical benefits, especially in healthcare and personalised services.

Samsung wins AI chip order from Japan

Samsung Electronics has announced that it secured an order from Japanese AI company Preferred Networks to manufacture chips using its 2-nanometre foundry process and advanced chip packaging service. The deal marks Samsung’s first disclosed order for its cutting-edge 2-nanometre manufacturing process, although the order size remains undisclosed.

The chips, designed by South Korea’s Gaonchips, will employ gate-all-around (GAA) transistor architecture and integrate multiple chips into a single package to enhance connection speed and reduce size. According to Preferred Networks VP Junichiro Makino, they will support high-performance computing hardware for generative AI technologies, including large language models.

The development highlights Samsung’s advancements in semiconductor technology and its role in supporting innovative AI applications.

IBM’s GenAI centre to advance AI technology in India

IBM has launched its GenAI Innovation Center in Kochi, designed to help enterprises, startups, and partners explore and develop generative AI technology. The centre aims to accelerate AI innovation, increase productivity, and enhance generative AI expertise in India, addressing challenges organisations face when transitioning from AI experimentation to deployment.

The centre will provide access to IBM experts and technologies, assisting in building, scaling, and adopting enterprise-grade AI. It will utilise InstructLab, a technology developed by IBM and Red Hat for enhancing Large Language Models (LLMs) with client data, along with IBM’s ‘watsonx’ AI and data platform and AI assistant technologies. The centre will be part of the IBM India Software Lab in Kochi and managed by IBM’s technical experts.

IBM highlights that the centre will nurture a community that uses generative AI to tackle societal and business challenges, including sustainability, public infrastructure, healthcare, education, and inclusion. The initiative underscores IBM’s commitment to fostering AI innovation and addressing complex integration issues in the business landscape.

Why does it matter?

IBM’s new GenAI hub stems from a significant investment in advancing AI technology in India. This centre is set to play a crucial role in accelerating AI innovation, boosting productivity, and enhancing generative AI expertise, which is critical for the growth of enterprises, startups, and partners. By providing access to advanced AI technologies and expert knowledge, the centre aims to overcome the challenges of AI integration and deployment, thereby fostering a robust AI ecosystem. Furthermore, the initiative underscores the potential of generative AI to address pressing societal and business challenges, contributing to advancements in sustainability, public infrastructure, healthcare, education, and inclusion.