A recent poll by the AI Policy Institute has shed light on strong public opinion in the United States regarding the regulation of AI.
Contrary to claims from the tech industry that strict regulations could hinder competition with China, a majority of American voters prioritise safety and control over the rapid development of AI. The poll reveals that 75% of both Democrats and Republicans prefer a cautious approach to AI development to prevent its misuse by adversaries.
The debate underscores growing concerns about national security and technological competitiveness. While China leads in AI patents, with over 38,000 registered compared to the US’s 6,300, Americans seem wary of sacrificing regulatory oversight in favour of expedited innovation.
Most respondents advocate for stringent safety measures and testing requirements to mitigate potential risks associated with powerful AI technologies.
Moreover, the poll highlights widespread support for restrictions on exporting advanced AI models to countries like China, reflecting broader apprehensions about technology transfer and national security. Despite the absence of comprehensive federal AI regulation in the US, states like California have begun to implement their own measures, prompting varied responses from tech industry leaders and policymakers alike.
With scammers increasingly using AI-generated photos and videos on dating apps, Bumble has added a new feature that lets users report suspected AI-generated profiles. Users can now select ‘Fake profile’ and then choose ‘Using AI-generated photos or videos’, alongside other reporting options such as inappropriate content, underage users, and scams. By allowing users to report such profiles, Bumble aims to curb the misuse of AI in creating misleading profiles.
In February this year, Bumble introduced the ‘Deception Detector’, which combines AI and human moderators to detect and eliminate fake profiles and scammers. Since that measure was rolled out, Bumble has seen a 45% overall reduction in reported spam and scams. Another notable safety feature is Bumble’s ‘Private Detector’, an AI tool that blurs unsolicited nude photos.
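A system combining an AI classifier with human moderators, as the ‘Deception Detector’ is described, typically routes cases by model confidence. The sketch below is a minimal illustration under that assumption; the `classify` stub, the thresholds, and the routing labels are hypothetical and not Bumble’s actual system.

```python
# Hypothetical sketch of an AI-plus-human moderation flow: high-confidence
# detections are actioned automatically, uncertain cases go to a moderator.
# classify() is a stand-in for a trained model, not a real Bumble API.

def classify(profile: dict) -> float:
    """Stand-in for a trained model: returns a fake-profile probability."""
    return 0.95 if profile.get("photos_flagged_ai") else 0.2

def moderate(profile: dict) -> str:
    """Route a profile based on the model's confidence score."""
    score = classify(profile)
    if score >= 0.9:
        return "remove"        # high confidence: act automatically
    if score >= 0.5:
        return "human_review"  # uncertain: escalate to a human moderator
    return "allow"
```

The middle band is the point of the hybrid design: automation handles the clear-cut cases at scale, while ambiguous profiles still get human judgement.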
Risa Stein, Bumble’s VP of Product, emphasised the importance of creating a safe space and stated, ‘We are committed to continually improving our technology to ensure that Bumble is a safe and trusted dating environment. By introducing this new reporting option, we can better understand how bad actors and fake profiles are using AI disingenuously so our community feels confident in making connections.’
The US National Education Association (NEA) Representative Assembly (RA) delegates have approved the NEA’s first policy statement on the use of AI in education, providing educators with a roadmap for the safe, effective, and accessible use of AI in classrooms.
Since the fall of 2023, a task force of teachers, education support professionals, higher-ed faculty, and other stakeholders has been diligently working on this policy. Their efforts resulted in a 6-page policy statement, which RA delegates reviewed during an open hearing on 24 June and overwhelmingly approved on Thursday.
A central tenet of the new policy is that students and educators must remain at the heart of the educational process: AI should preserve the human connection essential for inspiring and guiding students. The policy highlights that while AI can enhance education, it must be used responsibly, with a focus on protecting data, ensuring equitable access, and providing opportunities for learning about AI.
The task force identified several opportunities AI presents, such as customising instructional methods for students with disabilities and making classrooms more inclusive. However, they also acknowledged risks, including potential biases due to the lack of diversity among AI developers and the environmental impact of AI technology. It’s crucial to involve traditionally marginalised groups in AI development and policy-making to ensure inclusivity. The policy clarifies that AI shouldn’t be used to make high-stakes decisions like class placements or graduation eligibility.
Why does this matter?
The policy underscores the importance of comprehensive professional learning for educators on AI to ensure its ethical and effective use in teaching. More than 7 in 10 K-12 teachers have never received professional learning on AI. It also raises concerns about exacerbating the digital divide, emphasising that all students should have access to cutting-edge technology and educators skilled in its use across all subjects, not just in computer science.
OpenAI and Arianna Huffington are teaming up to fund the development of an AI health coach through Thrive AI Health, aiming to personalise health guidance using scientific data and personal health metrics shared by users. The initiative, detailed in a Time magazine op-ed by OpenAI CEO Sam Altman and Huffington, seeks to leverage AI advancements to provide insights and advice across sleep, nutrition, fitness, stress management, and social connection.
DeCarlos Love, a former Google executive with experience in wearables, has been appointed CEO of Thrive AI Health. The company has also formed research partnerships with institutions like Stanford Medicine and the Rockefeller Neuroscience Institute to bolster its AI-driven health coaching capabilities.
While AI-powered health coaches are gaining popularity, concerns over data privacy and the potential for misinformation persist. Thrive AI Health aims to support users with personalised health tips, targeting individuals lacking access to immediate medical advice or specialised dietary guidance.
Why does this matter?
The development of AI in healthcare promises significant advancements, including accelerating drug development and enhancing diagnostic accuracy. However, challenges remain in ensuring the reliability and safety of AI-driven health advice, particularly in maintaining trust and navigating the limitations of AI’s capabilities in medical decision-making.
Mark Matlock, a political candidate for the right-wing Reform UK party, has affirmed that he is indeed a real person, dispelling rumours that he might be an AI bot. The suspicions arose from a highly edited campaign image and his absence from critical events, prompting a thread on social media platform X that questioned his existence.
The speculation was not entirely far-fetched: an AI company executive recently ran an AI persona for Parliament in the UK, though it garnered only 179 votes. However, Matlock clarified that he was severely ill with pneumonia during the election period, rendering him unable to attend events. He provided the original campaign photo, explaining that only minor edits had been made.
Why does this matter?
The incident highlights the broader implications of AI in politics. The 2024 elections in the US and elsewhere are already witnessing the impact of AI tools, from deepfake videos to AI-generated political ads. As the use of such technology grows, candidates must maintain transparency and authenticity to avoid similar controversies.
Investments in AI startups soared to $24 billion in the last two months, more than doubling from the previous quarter, as reported by sources familiar with the matter. The surge reflects a growing interest in AI technology, making it the largest investment sector, followed by healthcare and biotech. Overall startup funding increased 16% to $79 billion in the last quarter, driven mainly by AI.
The success of OpenAI’s ChatGPT has sparked a race to integrate the latest AI technology in various fields, including business productivity, healthcare, and manufacturing. However, investors and major tech firms caution that substantial returns from these investments are expected to materialise over the next few years.
Five of the six billion-dollar funding rounds in this period went to AI companies. Notable deals included Elon Musk’s xAI raising $6 billion and AI infrastructure provider CoreWeave securing $1.1 billion, while the automated driving company Wayve and data preparation company Scale AI also attracted substantial investments. The lone exception outside the AI sector was cybersecurity firm Wiz, which raised $1 billion in its latest funding round.
Why does this matter?
Despite the recent increase, overall startup funding remains lower than in the past three years. Global funding dropped 5% to $147 billion in the year’s first half and remained flat compared to the latter half of 2023. The tight monetary policy in the US has also slowed the revival of initial public offerings (IPOs), a significant source of returns for institutional private market investors who typically invest in startups and sell shares during IPOs.
At the recent World Artificial Intelligence Conference in Shanghai, Chinese GPU developers seized the opportunity to showcase their products in Nvidia’s absence. Prominent companies such as Iluvatar Corex, Moore Threads, Enflame Technology, Sophgo, and Huawei’s Ascend were at the forefront, highlighting their advancements despite significant challenges in manufacturing and software ecosystems.
Enflame Technology emphasised the shift from foreign-dominated computing clusters to a mix of Chinese and foreign GPUs. The company, along with AI solutions firm Infinigence, is promoting compute resources that utilise a variety of chips from both Nvidia and Chinese manufacturers. However, US export restrictions have prevented Nvidia from selling its most advanced chips in China, and several Chinese firms, including Huawei, are struggling with manufacturing hurdles due to being blacklisted by the US.
Huawei’s booth was a major attraction, showcasing its Ascend 910B chips, which are used to train many of China’s large language models. Meanwhile, Enflame presented its Cloudblazer T20 and T21 AI-training chips; not being on the US trade blacklist gives the company access to global foundries like TSMC.
Despite these efforts, Chinese GPUs still lag behind their global counterparts in performance. Nvidia remains a dominant player, and its chips tailored for the Chinese market continue to be popular: the company is expected to deliver over 1 million H20 GPUs in China this year, generating $12 billion in sales. Even so, experts note that China’s in-house technology still falls short of meeting its substantial domestic AI demand.
At YourStory’s Tech Leaders’ Conclave, Ankur Pal, Chief Data Scientist at Aplazo, discussed how AI is transforming the financial services industry. Aplazo aims to address financial inclusion, especially in developing countries with low credit card penetration, by providing fair and transparent solutions like their Buy Now Pay Later (BNPL) platform. Pal highlighted AI’s potential to revolutionise fintech by creating personalised financial products and improving operational efficiency, ultimately reducing friction for consumers and institutions.
Pal emphasised AI’s role in enhancing decision-making processes, reducing fraud, and improving customer service. AI-driven solutions enable real-time data processing, which helps financial institutions detect and prevent fraud more effectively.
Additionally, AI can automate routine tasks, allowing financial professionals to focus on strategic initiatives. Real-time decision-making is becoming increasingly important as financial institutions invest in event streaming infrastructure and machine learning operations (MLOps) stacks to manage high transaction volumes with low latency.
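One common shape for the real-time fraud checks described above is to score each incoming transaction against a short rolling window of recent activity for the same account. The sketch below is a minimal, hedged illustration of that idea; the names (`Transaction`, `score_event`), the window size, and the velocity rule are illustrative assumptions, not Aplazo’s actual stack.

```python
# Hypothetical per-account velocity check: flag a transaction whose amount
# far exceeds the account's recent average. Real systems would run this on
# an event stream (e.g. Kafka) and combine it with a trained model.
from collections import defaultdict, deque
from dataclasses import dataclass

WINDOW = 5            # recent transactions kept per account (assumption)
VELOCITY_LIMIT = 3.0  # flag if amount exceeds 3x the recent average

@dataclass
class Transaction:
    account: str
    amount: float

history = defaultdict(lambda: deque(maxlen=WINDOW))

def score_event(tx: Transaction) -> bool:
    """Return True if the transaction looks anomalous for this account."""
    recent = history[tx.account]
    suspicious = bool(recent) and tx.amount > VELOCITY_LIMIT * (sum(recent) / len(recent))
    recent.append(tx.amount)  # update the rolling window after scoring
    return suspicious
```

Because the state per account is a small fixed-size window, this kind of check can be evaluated in-memory on each event, which is what makes low-latency decisioning on high transaction volumes feasible.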
Overcoming barriers to financial inclusion was a key topic, with Pal noting that even where bank account ownership is high, many developing countries still have large unbanked or underbanked populations. AI can bridge this gap by offering tailored financial solutions for underserved communities.
Pal also discussed the importance of leadership and the skill sets required for building successful AI teams. He stressed the need for adaptability, continuous learning, and a deep understanding of both technology and business to create valuable AI solutions. While AI will transform job roles, it will also create new opportunities, making it crucial for leaders to foster a culture of innovation.
Tech Mahindra has partnered with Microsoft to enhance workplace experiences for over 1,200 customers and more than 10,000 employees across 15 locations by adopting Copilot for Microsoft 365. The collaboration aims to boost workforce efficiency and streamline processes through Microsoft’s trusted cloud platform and generative AI capabilities. Additionally, Tech Mahindra will deploy GitHub Copilot for 5,000 developers, anticipating a productivity increase of 35% to 40%.
Mohit Joshi, CEO and Managing Director of Tech Mahindra, highlighted the transformative potential of the partnership, emphasising the company’s commitment to shaping the future of work with cutting-edge AI technology. Tech Mahindra plans to extend Copilot’s capabilities with plugins to leverage multiple data sources, enhancing creativity and productivity. The focus is on increasing efficiency, reducing effort, and improving quality and compliance across the board.
As part of the initiative, Tech Mahindra has launched a dedicated Copilot practice to help customers unlock the full potential of AI tools, including workforce training for assessment and preparation. The company will offer comprehensive solutions to help customers assess, prepare, pilot, and adopt business solutions using Copilot for Microsoft 365, providing a scalable and personalised user experience.
Judson Althoff, Executive Vice President and Chief Commercial Officer at Microsoft, remarked that the collaboration would empower Tech Mahindra’s employees with new generative AI capabilities, enhancing workplace experiences and increasing developer productivity. The partnership aligns with Tech Mahindra’s ongoing efforts to enhance workforce productivity using GenAI tools, demonstrated by the recent launch of a unified workbench on Microsoft Fabric to accelerate the adoption of complex data workflows.
OpenAI’s ChatGPT, launched in 2022, has revolutionised the way people seek answers, shifting from traditional methods to AI-driven interactions. This AI chatbot, along with competitors like Anthropic’s Claude, Google’s Gemini, and Microsoft’s Copilot, has made AI a focal point in information retrieval. Despite these advancements, traditional search engines like Google remain dominant.
Google’s profits surged by nearly 60% due to increased advertising revenue from Google Search, and its global market share reached 91.1% in June, even as ChatGPT’s web visits declined by 12%.
Google is not only holding its ground but also leveraging AI technology to enhance its services. Analysts at Bank of America credit Gemini, Google’s AI, with contributing to the growth in search queries. By integrating Gemini into products such as Google Cloud and Search, Google aims to improve their performance, blending traditional search capabilities with cutting-edge AI innovations.
However, Google’s dominance faces significant legal challenges. The trial in the U.S. Department of Justice’s major antitrust case against Google, which accuses the company of monopolising the digital search market, has concluded, with a verdict expected by late 2024.
Additionally, Google is contending with another antitrust lawsuit filed by the U.S. government over alleged anticompetitive behaviour in the digital advertising space. These legal challenges could reshape the digital search landscape, potentially providing opportunities for AI chatbots and other emerging technologies to gain a stronger foothold in the market.