Town Hall: How to Trust Technology
17 Jan 2024 13:15h - 14:00h
Event report
AI and immersive technologies will fundamentally change how humanity interacts with society, government and even the environment.
How can we meet the challenge presented by the complex risks we face while building trust in our technological future?
Join this interactive town hall with leading voices to understand why it is essential to build trust in technology.
More info @ WEF 2024.
Table of contents
Disclaimer: This is not an official record of the WEF session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed. The official record of the session can be found on the WEF YouTube channel.
Knowledge Graph of Debate
Session report
Full session report
Ayanna Howard
The analysis covered various topics related to artificial intelligence (AI) and robotics, presenting different perspectives. One main point discussed was the tendency of humans to place excessive trust in technology, despite its known flaws. Dr. Howard’s 2011 research on human trust in robotics during emergency scenarios supported this observation. The study found that people overwhelmingly followed a robot’s directions, even when they conflicted with visible exit signs or when the robot exhibited poor behavior. This highlights the issue of overtrust in technology, where people sometimes disregard their own common sense.
However, an alternative viewpoint argued that the real problem lies in how we react when technology fails or makes mistakes. Instead of solely focusing on the level of trust we have in technology, Dr. Howard suggests that we should also emphasize how we respond and manage when technology fails. She cited instances such as airplane crashes, where people have demonstrated adverse overreactions. According to this perspective, improving our ability to appropriately react when technology fails is crucial.
Another key point discussed was the need to incorporate human emotional intelligence (EQ) into robots and AI tools to prevent errors. Despite the valuable and widespread use of AI tools like ChatGPT, they are not flawless and can make mistakes, potentially leading to harmful outcomes such as errors in legal briefs. To mitigate these issues, it was argued that robots and AI tools should integrate human EQ and possibly limit their usage to avoid negative consequences.
The lack of necessary rules and regulations in the AI industry was also highlighted. Currently, the AI industry lacks the standards and certifications seen in other industries, such as electricity during its early days. This can be problematic, as anyone with minimal knowledge can create AI and connect it to machines, potentially endangering consumers who trust these products. Strengthening rules, certifications, and validations in the AI industry is essential for consumer safety and trust.
The convergence between the digital and physical worlds in robotics and AI was also discussed. The ability to connect to the cloud and learn in real-time has accelerated this convergence. However, the implications of a world where digital personas transition to physical forms remain uncertain. Therefore, it is important to consider the societal and ethical implications as technology progresses.
The analysis also emphasized the need to design AI systems while assuming flaws or bad elements exist. While the cybersecurity field adheres to a zero-trust approach, assuming the presence of bad actors and hacks, the same approach is not yet prevalent in AI system design. AI systems should be designed with the understanding that flaws or bad elements may exist, similar to the cybersecurity approach.
Establishing policies and regulations governing the use of AI was deemed crucial for accountability and trust. Such policies provide the necessary standards and define the expectations and consequences for companies utilizing AI. However, for consistency and clarity, these policies should be uniform across different regions and jurisdictions.
Lastly, the analysis stressed the importance of collaboration between technologists and experts from other fields. Collaborating with professionals from diverse disciplines enables technologists to gain a comprehensive understanding of both the risks and benefits associated with technology. This interdisciplinary collaboration is crucial for a holistic appraisal of technology.
Overall, the analysis explored a wide range of considerations in the field of AI and robotics. It uncovered the tendency to overtrust technology, the importance of addressing our reactions when technology fails, the integration of human EQ into AI systems, the need for rules and regulations, the convergence of the digital and physical worlds, the significance of assuming flaws in AI system design, the establishment of policies and regulations, and the necessity of collaboration across different disciplines. These insights provide a comprehensive understanding of the challenges and key aspects in the realm of AI and robotics.
Mustafa Suleyman
The discussion revolves around the topic of artificial intelligence (AI) and large language models (LLMs). One viewpoint argues that people should be critical, skeptical, doubtful, and ask tough questions of LLM technology. This perspective is based on the probabilistic nature of LLMs, which can provide multiple responses. Furthermore, the previous mental model of default trust in technology may not apply in the case of LLM technology.
On the other hand, another viewpoint highlights two elements that can build trust in LLM technology: the factual accuracy (IQ) of the models and the emotional connection (EQ) with them. The extent to which the models are factually correct can be formally measured, and emotional connection plays a significant role in decision-making. It is argued that trust can be established by focusing on both IQ and EQ aspects.
Mustafa Suleyman, a key figure in the discussion, shares a positive outlook on the progress and capabilities of AI systems. He predicts a 99.9% accuracy rate in factual outputs from AI within the next three years. Moreover, he anticipates that AI will evolve from a one-shot question-answer engine to a provider of accurate predictions that can take actions on our behalf. Suleyman believes in the potential of AI to support and assist humans in various tasks, envisioning a future where everyone will have their own personal AI.
Transparency, accountability, and careful consideration of values embedded in the emotional side of AI models are stressed for building trust. EQ plays a significant role in decision-making, and LLMs are becoming more dynamic and interactive.
The discussion touches upon the obsession with artificial general intelligence (AGI) in Silicon Valley. While some have a negative sentiment towards this obsession, others see AGI as having the potential to address societal challenges such as food, climate, transportation, education, and health.
The integration of AI systems into various fields is highlighted, with Mustafa Suleyman expressing trust in technology. He has observed a significant improvement in the quality of AI models over the past two years and regularly uses them to access knowledge in a fluent conversational style. Additionally, the advent of large language models has lowered the barrier to accessing information, making it easier to ask AI models wide-ranging questions.
The potential risks and challenges of AI are also discussed. It is suggested that stress testing of models is necessary to identify flaws and weaknesses. Attention is drawn to the need for caution when AI deals with sensitive or conflicting information. Additionally, the inherent risk of AI systems improving without human oversight is raised, along with the potential for AI to be misused in elections.
Regulation of AI is deemed necessary due to the increasing risks associated with its deployment. However, there is a debate regarding the appropriate legislative approach, with some arguing that formal regulations may not be needed at present. Bias and fairness testing are highlighted as areas of focus, and there is growing concern regarding potential risks, such as biohazards, posed by AI.
The discussion emphasizes the importance of transparency, testing, and evaluation in the development and deployment of AI systems. The ability to estimate uncertainties, be aware of errors, and communicate confidence intervals is seen as a way to increase trustworthiness. It is acknowledged that AI systems are held to a higher standard than humans, particularly in fields like healthcare and autonomous vehicles.
Finally, the discussion explores the progress and efficiency gains in AI models. The focus is on creating models that perform better while being smaller, and the positive implications for the open-source ecosystem and startups. Mustafa Suleyman expresses awe and hesitation to bet against any technological development, indirectly supporting it.
Overall, the discussion explores the complex and multifaceted nature of AI and large language models. It raises important points about trust, regulation, transparency, testing, and the future implications of AI in various fields and society as a whole.
Audience
During a recent event, speakers engaged in discussions about technology and artificial intelligence (AI), exploring different aspects of the field. One speaker raised concerns about Silicon Valley’s preoccupation with Artificial General Intelligence (AGI). They argued that Silicon Valley’s obsession with AGI may divert attention away from more pressing issues such as climate change and the full automation of manufacturing. Emphasising the importance of addressing these challenges, the speaker suggested that Silicon Valley should shift its focus towards solving climate change and achieving full automation in manufacturing processes. This stance aligns with the Sustainable Development Goals (SDGs) of Climate Action and Decent Work and Economic Growth.
The potential misuse of AI also emerged as a prominent topic of discussion. Concerns were raised about the possibility of malicious entities exploiting AI for harmful purposes. One audience member, employed in a company that takes risks in cyber insurance, shared their apprehensions about the misuse of AI. While acknowledging that AI is trusted due to its good intentions, the speaker cautioned that it could also be misused by “bad people.” This discussion emphasised the need for vigilant monitoring and regulation to prevent the malicious use of AI.
Another debate centred around the deployment of AI and the consideration of potential risks and misuse. It was argued that while legislation and rules might be in place, they may not be sufficient to prevent misuse. The speaker raised questions about potential risks associated with AI deployment that may have been overlooked, highlighting the importance of taking into account not only the societal benefits but also the potential negative consequences. This perspective aligns with the SDG of Industry, Innovation, and Infrastructure.
The audience also expressed curiosity about how trustworthiness in AI is assessed. Drawing a parallel with human trust, where trust is based on the capability and character of individuals, the audience questioned if a similar framework of trustworthiness could be applied to AI. This discussion revealed the importance of context when evaluating trust in AI. The audience argued that trust is only valuable when it is contextual, suggesting that trust in AI should be evaluated within specific situations and applications.
However, there were doubts about the application of a human behaviour framework onto AI. One audience member questioned whether there might be dangers in trying to apply a human framework of behaviour onto AI. This uncertainty highlights the challenges in understanding and predicting the behaviour of AI systems based on human patterns.
In conclusion, the event provided a platform for speakers and audience members to engage in discussions on various topics related to technology and AI. The concerns raised about Silicon Valley’s focus on AGI, the need to prioritise solving climate change and achieving full automation of manufacturing, and the potential misuse of AI highlighted the importance of considering wider societal impacts and risks. The audience’s inquiries about trust in AI and the applicability of human behaviour frameworks underscored the complexities involved in assessing AI’s trustworthiness and behaviour. The event offered valuable insights into the current debates surrounding AI and technology, stimulating further reflection and exploration in these areas.
Ben Thompson
In an insightful discussion on trust in technology, Ben Thompson raises thought-provoking questions and offers unique insights. One of his key questions is whether it is necessary to communicate to people their over-reliance on technology or if this responsibility lies with technologists themselves.
Thompson argues that it is crucial to consider people’s actual preferences and behaviors, rather than solely relying on their stated feelings, when examining trust in technology. Many individuals express mistrust in technology but continue to use it extensively, revealing a discrepancy between their stated opinions and their actions. This idea of “revealed versus stated preferences” sheds light on the true level of trust people have in technology.
Thompson also questions the boundaries between the digital and physical worlds. He wonders if there is a clear distinction between the two realms or if they are becoming increasingly intertwined. This inquiry highlights the evolving nature of technology and its impact on our daily lives.
Additionally, Thompson raises concerns about potential job losses, particularly in the digital space, due to emerging technologies. With advancements in automation and artificial intelligence, there is genuine fear that certain roles may become obsolete. This concern emphasizes the need to consider the socio-economic implications of technological progress.
Within the realm of artificial intelligence (AI), Thompson speculates on the progress and development of artificial general intelligence (AGI). He asks whether AGI will make more significant strides in the digital space compared to its progress in the physical world. This speculation reflects ongoing efforts to unlock the full potential of AGI and its various applications.
Trust in technology also brings up concerns about the role of government regulation and policy support. Thompson expresses doubts about solely relying on government regulations and suggests exploring alternative approaches to establish trust in technology. The adequacy of current regulatory measures in ensuring technology’s trustworthiness is questioned, leading to potential considerations for additional means.
Transparency in AI usage and development is another area that Thompson examines. He challenges the idea that companies should expose their full prompts for the sake of transparency. This viewpoint raises important questions about the trade-offs between transparency and the protection of proprietary information in AI technology development.
Furthermore, Thompson suggests that excessive regulation may hinder the development of certain technological benefits and capabilities. This concern highlights the delicate balance between regulation and innovation in the technology sector.
Lastly, Thompson emphasizes the need for the tech industry to effectively communicate the importance and excitement of technology development to the public. Despite potential issues and challenges, bridging the gap between technological advancements and public perception is crucial. This observation underscores the significance of public education and engagement in ensuring the positive reception of technology.
In conclusion, Ben Thompson’s discussion provokes critical analysis of trust in technology. His questions and insights shed light on the complexities surrounding this topic. The interplay between trust, societal implications, regulation, and the role of technology in our lives calls for ongoing dialogue and examination.
Speakers
A
Ayanna Howard
Speech speed
222 words per minute
Speech length
2093 words
Speech time
565 secs
Arguments
People tend to overtrust technology
Supporting facts:
- Dr. Howard’s research in 2011 focused on human trust in robotics in emergency scenarios.
- In simulated scenarios, people overwhelmingly chose to follow a robot’s directions even when they conflicted with visible exit signs or when the robot demonstrated poor behavior.
Topics: Artificial Intelligence, Robotics
Robots and AI tools should incorporate human EQ and possibly limit their use to avoid errors.
Supporting facts:
- Despite being imperfect and sometimes wrong, AI tools like ChatGPT are valuable and commonly used.
- Mistakes made by these tools, such as in legal briefs, can have harmful effects.
Topics: AI, Robotics, Human EQ
AI lacks necessary rules and regulations
Supporting facts:
- She compared the current state of AI to the early days of electricity, when there were no standards or certifications, and dangerous incidents like exploding light bulbs were common.
- Currently, anyone with minimal knowledge can create AI and hook it up to a machine, which can potentially harm consumers who trust these products.
- Consumers may get swayed by the presence of a VC investor and think the product is safe.
Topics: Artificial Intelligence, Regulations, Consumer Safety
Ayanna Howard trusts AI more as she can query and understand it
Supporting facts:
- Ayanna can query AI
- She is positive of the content in the AI ‘black box’ because she knows exactly what’s going on
Topics: Artificial Intelligence, Trust in AI
The organization that creates a true AGI will control the world, unless we have some regulations and rules
Topics: Artificial General Intelligence, Control, Regulation, Rules
AGI represents the ability to not just create our physical, but also create our mind
Topics: Artificial General Intelligence, Creation, Mind
There is a rapid convergence between the digital and physical world in terms of robotics and AI.
Supporting facts:
- The possibility to connect to the cloud and learn almost in real time has accelerated this convergence.
Topics: Robotics, Artificial Intelligence, Convergence
AI systems should be designed assuming there will be flaws or bad elements
Supporting facts:
- There is a move towards zero trust in cybersecurity, which assumes the existence of bad actors and hacks
- There is not an equivalent move in AI to design interactions assuming the AI could be bad
Topics: Artificial Intelligence, Cybersecurity, System Design
Policies and regulations are necessary to establish expectations and consequences when it comes to the use of AI
Supporting facts:
- Policies and regulations establish a standard for the use of AI
- Violating these rules could result in ramifications, offering a level of accountability for companies using AI
Topics: AI, Policies, Regulations
Technology is always moving forward and has the potential to equalize society.
Supporting facts:
- The advent of the internet and laptops caused panic about societal imbalance but has led to more equalization.
- Innovation like direct transition from landlines to cellphones provided connectivity in remote areas of Africa
Topics: Internet, Technology Development, Rural Connectivity
The LM is going to be one piece of many pieces in AI
Topics: AI, Robotics, LM
AI is efficient but harmful to the environment due to high energy costs
Supporting facts:
- Current AI technologies are energy-intensive
Topics: AI, Environment, Energy Efficiency
AI alone won’t achieve AGI or other high-end goals
Supporting facts:
- There’s a lack of capability to achieve AGI through current AI
Topics: AI, AGI
Ayanna Howard believes that even seemingly unattainable goals can be achieved, like going to Mars.
Supporting facts:
- We couldn’t go to Mars at one point
Topics: Achieving Goals, Space Exploration
Report
The analysis covered various topics related to artificial intelligence (AI) and robotics, presenting different perspectives. One main point discussed was the tendency of humans to place excessive trust in technology, despite its known flaws. Dr. Howard’s 2011 research on human trust in robotics during emergency scenarios supported this observation.
The study found that people overwhelmingly followed a robot’s directions, even when they conflicted with visible exit signs or when the robot exhibited poor behavior. This highlights the issue of overtrust in technology, where people sometimes disregard their own common sense.
However, an alternative viewpoint argued that the real problem lies in how we react when technology fails or makes mistakes. Instead of solely focusing on the level of trust we have in technology, Dr. Howard suggests that we should also emphasize how we respond and manage when technology fails.
She cited instances such as airplane crashes, where people have demonstrated adverse overreactions. According to this perspective, improving our ability to appropriately react when technology fails is crucial. Another key point discussed was the need to incorporate human emotional intelligence (EQ) into robots and AI tools to prevent errors.
Despite the valuable and widespread use of AI tools like ChatGPT, they are not flawless and can make mistakes, potentially leading to harmful outcomes such as errors in legal briefs. To mitigate these issues, it was argued that robots and AI tools should integrate human EQ and possibly limit their usage to avoid negative consequences.
The lack of necessary rules and regulations in the AI industry was also highlighted. Currently, the AI industry lacks the standards and certifications seen in other industries, such as electricity during its early days. This can be problematic, as anyone with minimal knowledge can create AI and connect it to machines, potentially endangering consumers who trust these products.
Strengthening rules, certifications, and validations in the AI industry is essential for consumer safety and trust. The convergence between the digital and physical worlds in robotics and AI was also discussed. The ability to connect to the cloud and learn in real-time has accelerated this convergence.
However, the implications of a world where digital personas transition to physical forms remain uncertain. Therefore, it is important to consider the societal and ethical implications as technology progresses. The analysis also emphasized the need to design AI systems while assuming flaws or bad elements exist.
While the cybersecurity field adheres to a zero-trust approach, assuming the presence of bad actors and hacks, the same approach is not yet prevalent in AI system design. AI systems should be designed with the understanding that flaws or bad elements may exist, similar to the cybersecurity approach.
Establishing policies and regulations governing the use of AI was deemed crucial for accountability and trust. Such policies provide the necessary standards and define the expectations and consequences for companies utilizing AI. However, for consistency and clarity, these policies should be uniform across different regions and jurisdictions.
Lastly, the analysis stressed the importance of collaboration between technologists and experts from other fields. Collaborating with professionals from diverse disciplines enables technologists to gain a comprehensive understanding of both the risks and benefits associated with technology. This interdisciplinary collaboration is crucial for a holistic appraisal of technology.
Overall, the analysis explored a wide range of considerations in the field of AI and robotics. It uncovered the tendency to overtrust technology, the importance of addressing our reactions when technology fails, the integration of human EQ into AI systems, the need for rules and regulations, the convergence of the digital and physical worlds, the significance of assuming flaws in AI system design, the establishment of policies and regulations, and the necessity of collaboration across different disciplines.
These insights provide a comprehensive understanding of the challenges and key aspects in the realm of AI and robotics.
A
Audience
Speech speed
179 words per minute
Speech length
383 words
Speech time
128 secs
Arguments
Silicon Valley’s obsession with AGI
Supporting facts:
- Audience member is Deepjani from NASSCOM India
- Question raised at an event where Ben Thompson was a speaker
Topics: Silicon Valley, AGI
Concern about misuse of AI by malicious entities
Supporting facts:
- The audience member is from a company that takes risks in cyber insurance
- He believes AI is trusted because of good intentions, but acknowledges the potential for misuse by ‘bad people’
Topics: Artificial Intelligence, Cybercrime, Misuse
Audience is curious about how trustworthiness in AI is assessed
Supporting facts:
- Audience mentions that trust is based on capability and character in humans
- Audience is interested if similar framework is applicable to AI
Topics: Artificial Intelligence, Trust in AI
Report
During a recent event, speakers engaged in discussions about technology and artificial intelligence (AI), exploring different aspects of the field. One speaker raised concerns about Silicon Valley’s preoccupation with Artificial General Intelligence (AGI). They argued that Silicon Valley’s obsession with AGI may divert attention away from more pressing issues such as climate change and the full automation of manufacturing.
Emphasising the importance of addressing these challenges, the speaker suggested that Silicon Valley should shift its focus towards solving climate change and achieving full automation in manufacturing processes. This stance aligns with the Sustainable Development Goals (SDGs) of Climate Action and Decent Work and Economic Growth.
The potential misuse of AI also emerged as a prominent topic of discussion. Concerns were raised about the possibility of malicious entities exploiting AI for harmful purposes. One audience member, employed in a company that takes risks in cyber insurance, shared their apprehensions about the misuse of AI.
While acknowledging that AI is trusted due to its good intentions, the speaker cautioned that it could also be misused by “bad people.” This discussion emphasised the need for vigilant monitoring and regulation to prevent the malicious use of AI.
Another debate centred around the deployment of AI and the consideration of potential risks and misuse. It was argued that while legislation and rules might be in place, they may not be sufficient to prevent misuse. The speaker raised questions about potential risks associated with AI deployment that may have been overlooked, highlighting the importance of taking into account not only the societal benefits but also the potential negative consequences.
This perspective aligns with the SDG of Industry, Innovation, and Infrastructure. The audience also expressed curiosity about how trustworthiness in AI is assessed. Drawing a parallel with human trust, where trust is based on the capability and character of individuals, the audience questioned if a similar framework of trustworthiness could be applied to AI.
This discussion revealed the importance of context when evaluating trust in AI. The audience argued that trust is only valuable when it is contextual, suggesting that trust in AI should be evaluated within specific situations and applications. However, there were doubts about the application of a human behaviour framework onto AI.
One audience member questioned whether there might be dangers in trying to apply a human framework of behaviour onto AI. This uncertainty highlights the challenges in understanding and predicting the behaviour of AI systems based on human patterns. In conclusion, the event provided a platform for speakers and audience members to engage in discussions on various topics related to technology and AI.
The concerns raised about Silicon Valley’s focus on AGI, the need to prioritise solving climate change and achieving full automation of manufacturing, and the potential misuse of AI highlighted the importance of considering wider societal impacts and risks. The audience’s inquiries about trust in AI and the applicability of human behaviour frameworks underscored the complexities involved in assessing AI’s trustworthiness and behaviour.
The event offered valuable insights into the current debates surrounding AI and technology, stimulating further reflection and exploration in these areas.
BT
Ben Thompson
Speech speed
218 words per minute
Speech length
1741 words
Speech time
479 secs
Arguments
Ben Thompson questions if there is a need to communicate to people that they are over-relying on technology or if it’s a matter for technologists to address and warrant such trust.
Supporting facts:
- A poll question was mentioned comparing people’s increase or decrease in trust in technology and general institutions.
- Raised point on how people often state their distrust in technology but continue to use it implicitly, revealing true preferences.
Topics: Trust in Technology, ChatGPT, Tech Reliability
Ben Thompson wonders if there is a clear line of distinction between the digital world and the physical world.
Topics: digital, physical, robotics
Thompson questions the potential job losses, particularly in the digital space, as a result of emerging technologies.
Topics: job loss, digital space, emerging technologies
He speculates if there is more progress to be made for artificial general intelligence (AGI) that function in the digital space compared to those in the physical world.
Topics: AGI, digital space, physical world
Concerns about trusting technology without policy support
Topics: Artificial Intelligence, Policy Making, Regulation
Questioning the adequacy of government regulation for trustable technology
Topics: Government Regulation, Artificial Intelligence, Trust
Should companies be exposing their full prompts for transparency reasons
Topics: AI, corporate transparency
There is a worry that regulation might prevent certain benefits or capabilities from being developed in technology
Topics: Regulation, Technology development
Report
In an insightful discussion on trust in technology, Ben Thompson raises thought-provoking questions and offers unique insights. One of his key questions is whether it is necessary to communicate to people their over-reliance on technology or if this responsibility lies with technologists themselves.
Thompson argues that it is crucial to consider people’s actual preferences and behaviors, rather than solely relying on their stated feelings, when examining trust in technology. Many individuals express mistrust in technology but continue to use it extensively, revealing a discrepancy between their stated opinions and their actions.
This idea of “revealed versus stated preferences” sheds light on the true level of trust people have in technology. Thompson also questions the boundaries between the digital and physical worlds. He wonders if there is a clear distinction between the two realms or if they are becoming increasingly intertwined.
This inquiry highlights the evolving nature of technology and its impact on our daily lives. Additionally, Thompson raises concerns about potential job losses, particularly in the digital space, due to emerging technologies. With advancements in automation and artificial intelligence, there is genuine fear that certain roles may become obsolete.
This concern emphasizes the need to consider the socio-economic implications of technological progress. Within the realm of artificial intelligence (AI), Thompson speculates on the progress and development of artificial general intelligence (AGI). He asks whether AGI will make more significant strides in the digital space compared to its progress in the physical world.
This speculation reflects ongoing efforts to unlock the full potential of AGI and its various applications. Trust in technology also brings up concerns about the role of government regulation and policy support. Thompson expresses doubts about solely relying on government regulations and suggests exploring alternative approaches to establish trust in technology.
The adequacy of current regulatory measures in ensuring technology’s trustworthiness is questioned, leading to potential considerations for additional means. Transparency in AI usage and development is another area that Thompson examines. He challenges the idea that companies should expose their full prompts for the sake of transparency.
This viewpoint raises important questions about the trade-offs between transparency and the protection of proprietary information in AI technology development. Furthermore, Thompson suggests that excessive regulation may hinder the development of certain technological benefits and capabilities. This concern highlights the delicate balance between regulation and innovation in the technology sector.
Lastly, Thompson emphasizes the need for the tech industry to effectively communicate the importance and excitement of technology development to the public. Despite potential issues and challenges, bridging the gap between technological advancements and public perception is crucial. This observation underscores the significance of public education and engagement in ensuring the positive reception of technology.
In conclusion, Ben Thompson’s discussion provokes critical analysis of trust in technology. His questions and insights shed light on the complexities surrounding this topic. The interplay between trust, societal implications, regulation, and the role of technology in our lives calls for ongoing dialogue and examination.
MS
Mustafa Suleyman
Speech speed
210 words per minute
Speech length
4211 words
Speech time
1201 secs
Arguments
People should be critical, skeptical, doubtful, and ask tough questions of LLM technology.
Supporting facts:
- The nature of LLMs (Large Language Models) is probabilistic, hence can provide multiple responses.
- Past mental model of default trusting technology does not apply with LLM technology.
Topics: LLM technology, trust in technology
Two elements that will drive trust in LLM technology: the extent to which models are factual (IQ) and the emotional connection (EQ).
Supporting facts:
- Models’ factuality can be formally measured.
- Emotional connection with a model plays a large role in decision making.
Topics: LLM technology, trust in technology, EQ, IQ
Mustafa Suleyman believes everyone will have their own personal AI in the future, to support and assist in various tasks.
Supporting facts:
- They have created an AI named PI (Personal Intelligence) which serves as a reminder to question and be skeptic about the information that AI provides.
- They have also added a feature in PI that reminds the user to take a break and return to the real world after 30 minutes of interaction.
Topics: Artificial Intelligence, Future of AI
Mustafa Suleyman trusts technology more than he did two years ago
Supporting facts:
- Mustafa Suleyman has been working on AI and related technologies for 13 years
- He finds that the quality of AI models has significantly improved
- He regularly uses AI models to access knowledge and information in a fluent conversational style
Topics: Technology, Artificial Intelligence
Silicon Valley is overly obsessed with AGI
Supporting facts:
- Silicon Valley companies are heavily investing in AGI development
Topics: Artificial General Intelligence, Silicon Valley
In the near future, AIs will have equivalent intelligence to humans
Supporting facts:
- In three to five years, AIs will be equivalent to digital people
Topics: Artificial Intelligence, Artificial General Intelligence
Robotics is going to remain behind for quite a while due to the constraint of having to physically manufacture them
Supporting facts:
- Unlike digital information, robots cannot be endlessly duplicated
- Robotics relies on physical infrastructure for its development
Topics: Robotics, Manufacturing Constraints
The reality is that the more centralized the model, the easier it is for some number of regulators to provide oversight
Topics: AI regulation, centralized model
The fundamental debate in the community at the moment is there’s absolutely no harm, I think, today being caused by open source
Supporting facts:
- It’s been the backbone of software development for as long as software has been around
Topics: Open source, Software development
AI systems can assist in detecting fraud and are a part of the solution
Supporting facts:
- AI systems are already used for pattern matching systems in detecting insurance fraud, credit fraud
Topics: Artificial Intelligence, Fraud Detection
The deployment and integration of AI systems should not be slowed down
Topics: Artificial Intelligence, AI Deployment
There is currently no equivalent to a zero trust framework in AI
Supporting facts:
- There is a move in cybersecurity to assume bad actors and design processes accordingly, which is not the case with AI
Topics: Artificial Intelligence, Cybersecurity, Zero Trust Framework
AI systems are held to a higher standard than humans
Supporting facts:
- Models that can perform clinical diagnostics at a human level are not deployed unless they meet a higher standard
- Self-driving cars are expected to be safer and more reliable than human drivers
Topics: Artificial Intelligence, Healthcare, Autonomous Vehicles
Companies might need to expose their full prompts for transparency reasons
Supporting facts:
- The prompt isn’t the only way to control AI.
- The AI learning process also involves feedback.
Topics: AI Transparency, AI Regulation
Outputs can be tested and evaluated
Supporting facts:
- Different battery of test questions and evaluations can be asked.
- Government are adopting new evaluations for bias and fairness in AI.
- Governments are increasing concern on potential risks such as biohazards from AI.
Topics: AI Testing, AI Evaluation
Reducing barrier to entry in accessibility of models is beneficial
Supporting facts:
- Models can be stress tested with automated questions or attacks for reassurance
Topics: Accessibility, Stress testing, Models
Uncertainty estimation, awareness of errors, and communication of confidence intervals can increase the trustworthiness of AI.
Supporting facts:
- Mustafa Suleyman believes that we can increase trust in AI through its ability to estimate uncertainties, be aware of its errors, and communicate confidence intervals. This could also solve the issues with hallucinations in AI
Topics: Artificial Intelligence, Trustworthiness, Uncertainty Estimation
Technologists often discuss the benefits of technology
Supporting facts:
- All we do as technologists is talk up the benefits.
Topics: Technology, AI, Hype
Profit has driven much of civilization’s progress
Supporting facts:
- Profit is the engine of progress that has driven so much of our civilization.
Topics: Profit, Progress
Increased risk with AI interaction in the physical world
Supporting facts:
- Interacts with the physical world – higher risks
- High stakes environment like healthcare or self-driving – more risks
Topics: Artificial Intelligence, Physical World Interaction, Risk
AI improving with no human oversight implies more risk
Supporting facts:
- Autonomy – if the model improves without human oversight, it’s risky
Topics: Artificial Intelligence, Self-improvement, Absence of Human Oversight, Risk
General AI entails more risk
Supporting facts:
- If AI tries to be good at everything, it’s more powerful and risky
Topics: Artificial Intelligence, General AI
AI is safer with narrow focus and human in the loop
Supporting facts:
- Narrow AI has less risk
- Human in the loop makes AI safer
Topics: Artificial Intelligence, Human Oversight, Safety
The models are being evaluated against a fixed threshold of human performance
Supporting facts:
- The models are pushing through this curve over time
Topics: Artificial Intelligence, Model Evaluation, Human Performance
Bigger models seem to perform better
Topics: Artificial Intelligence, Model Scale
Once models achieve a certain state of capability, there is pressure to make them smaller without losing performance
Supporting facts:
- Today you can train a GPT-3 level capability model at 60 times smaller in terms of flops than the original 175 billion parameter model
Topics: Model Efficiency, Artificial Intelligence
The progress in model performance and efficiency is good for the small ecosystem, open source and startups
Topics: Artificial Intelligence, Open Source, Startups
Mustafa Suleyman believes that betting against an LLM is akin to betting against Elon Musk
Topics: LLM, Elon Musk
Report
The discussion revolves around the topic of artificial intelligence (AI) and large language models (LLMs). One viewpoint argues that people should be critical, skeptical, doubtful, and ask tough questions of LLM technology. This perspective is based on the probabilistic nature of LLMs, which can provide multiple responses.
Furthermore, the previous mental model of default trust in technology may not apply in the case of LLM technology. On the other hand, another viewpoint highlights two elements that can build trust in LLM technology: the factual accuracy (IQ) of the models and the emotional connection (EQ) with them.
The extent to which the models are factually correct can be formally measured, and emotional connection plays a significant role in decision-making. It is argued that trust can be established by focusing on both IQ and EQ aspects. Mustafa Suleyman, a key figure in the discussion, shares a positive outlook on the progress and capabilities of AI systems.
He predicts a 99.9% accuracy rate in factual outputs from AI within the next three years. Moreover, he anticipates that AI will evolve from a one-shot question-answer engine to a provider of accurate predictions that can take actions on our behalf.
Suleyman believes in the potential of AI to support and assist humans in various tasks, envisioning a future where everyone will have their own personal AI. Transparency, accountability, and careful consideration of values embedded in the emotional side of AI models are stressed for building trust.
EQ plays a significant role in decision-making, and LLMs are becoming more dynamic and interactive. The discussion touches upon the obsession with artificial general intelligence (AGI) in Silicon Valley. While some have a negative sentiment towards this obsession, others see AGI as having the potential to address societal challenges such as food, climate, transportation, education, and health.
The integration of AI systems into various fields is highlighted, with Mustafa Suleyman expressing trust in technology. He has observed a significant improvement in the quality of AI models over the past two years and regularly uses them to access knowledge in a fluent conversational style.
Additionally, the advent of large language models has lowered the barrier to accessing information, making it easier to ask AI models wide-ranging questions. The potential risks and challenges of AI are also discussed. It is suggested that stress testing of models is necessary to identify flaws and weaknesses.
Attention is drawn to the need for caution when AI deals with sensitive or conflicting information. Additionally, the inherent risk of AI systems improving without human oversight is raised, along with the potential for AI to be misused in elections.
Regulation of AI is deemed necessary due to the increasing risks associated with its deployment. However, there is a debate regarding the appropriate legislative approach, with some arguing that formal regulations may not be needed at present. Bias and fairness testing are highlighted as areas of focus, and there is growing concern regarding potential risks, such as biohazards, posed by AI.
The discussion emphasizes the importance of transparency, testing, and evaluation in the development and deployment of AI systems. The ability to estimate uncertainties, be aware of errors, and communicate confidence intervals is seen as a way to increase trustworthiness. It is acknowledged that AI systems are held to a higher standard than humans, particularly in fields like healthcare and autonomous vehicles.
Finally, the discussion explores the progress and efficiency gains in AI models. The focus is on creating models that perform better while being smaller, and the positive implications for the open-source ecosystem and startups. Mustafa Suleyman expresses awe and hesitation to bet against any technological development, indirectly supporting it.
Overall, the discussion explores the complex and multifaceted nature of AI and large language models. It raises important points about trust, regulation, transparency, testing, and the future implications of AI in various fields and society as a whole.