morning session
Event report
Disclaimer: This is not an official record of the UNCTAD eWeek session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed. The official record of the session can be found on the UNCTAD website.
Session report
Full session report
James Revill
AI systems heavily rely on data and computational power to predict, cluster, and generate results, but certain AI designs can create a false sense of causality. In the field of biological science, the reproducibility, verifiability, and certainty of data can be challenging. Although AI can help translate biological data into applications, interpreting predictive models and causal relations remains incomplete. Concerningly, AI could be misused to create new weapons or increase accessibility to biological weapons, potentially undermining the Biological Weapons Convention (BWC). To prevent harm, careful regulation and management of AI is necessary. Despite advancements in science and technology, the BWC remains adaptable, and the equal distribution of AI’s benefits and international cooperation are crucial. The European Union (EU) employs a collaborative scoping paper to guide discussions, but diplomats may need input from states parties for effective development of questions in a scientific advisory mechanism. AI’s potential role in spreading misinformation and exacerbating biological weapons issues calls for responsible states to take ownership of developing Confidence-Building Measures (CBMs). While technology is valuable, it has limitations and access to it is a politically sensitive topic. Having scientific secretariat support facilitates communication between scientists and policymakers, enhancing understanding of the BWC’s principles.
Jenifer Mackby
The analysis discusses three topics related to Goal 16: Peace, Justice and Strong Institutions and Goal 17: Partnerships for the Goals. The first topic focuses on the establishment of an S&T (Science and Technology) advisory body. It is anticipated that the decision to establish this body will be made either in a special conference in 2025 or the review conference in 2027. The analysis notes that further discussions about S&T will take place in February and August.
The second topic examines the issue of countries not adequately fulfilling their reporting obligations under Confidence Building Measures (CBMs). CBMs consist of six different categories, and countries should exchange information under these categories. However, the analysis highlights a negative sentiment as it reveals that half of the countries do not respond adequately to their reporting obligations. This suggests a lack of commitment from these countries in promoting peace, justice, and strong institutions.
The third topic suggests the need to update the CBM categories. The analysis reveals that only half of the countries are responding to CBMs, which leads some groups to deem it necessary to update these categories. By updating the categories, it is believed that international cooperation can be enhanced in achieving the goals of peace, justice, and strong institutions. The analysis presents a neutral sentiment towards this suggestion, indicating that further considerations are needed to assess the potential impact of updating CBM categories.
In conclusion, the analysis provides insights into the ongoing discussions related to Goal 16 and Goal 17. It highlights the potential establishment of an S&T advisory body, the lack of adequate response from countries in fulfilling their reporting obligations under CBMs, and the suggestion to update CBM categories. The analysis emphasizes the need for further engagement and cooperation to achieve the desired outcomes of peace, justice, and strong institutions.
Rajae El Aouad
Rajae El Aouad, an expert, has expressed concern about the lack of regulation and oversight in the rapidly growing fields of artificial intelligence (AI) and biotechnology in developing countries. El Aouad emphasizes the need for ethical considerations in the development and use of AI and biotech. In response, the Ministry of Industry and Digital Transition in Morocco has taken on the responsibility of addressing ethical issues. Additionally, El Aouad highlights the potential harm that could result from the misuse of AI and biotech, particularly in clinical trials.
El Aouad mentions a partnership between a US university and the AI-focused companies Biovia and Medidata, which specialize in drug development and virtual clinical trials. This partnership has raised concerns among the public regarding decreased fertility and the creation of harmful substances in vaccines generated through AI. El Aouad argues that without proper regulations and safeguards, the population could experience negative consequences.
Furthermore, a discrepancy is noted between the rapid progress of the private sector in AI applications and the slower advancement within universities and research communities. This observation highlights the importance of involving the private sector in AI governance, with appropriate precautions, to maximize benefits and reduce risks.
In conclusion, El Aouad emphasizes the need for regulatory frameworks and ethical considerations in AI and biotech development in developing countries. The involvement of the Ministry of Industry and Digital Transition and the Academy of Science and Technology in addressing ethical concerns demonstrates a recognition of the importance of responsible development and implementation of these technologies. The concerns raised about potential harm from the misuse of AI and biotech in clinical trials underscore the necessity for stronger regulations and safeguards. Lastly, the recommendation to involve the rapidly advancing private sector in AI governance, implemented with due diligence, can lead to positive outcomes and risk reduction.
Ljupco Gjorgjinski
The discussion focused on the importance of international cooperation and knowledge sharing in relation to the Bioweapons Convention, specifically Article 10. This article emphasizes the role of international cooperation in facilitating knowledge sharing, which is crucial for understanding and preparedness against biological threats. Participants highlighted the need for additional structures to strengthen the convention, as it currently lacks certain elements found in other agreements. For example, the absence of a science and technology advisory body hinders the effective addressing of emerging challenges. Confidence-building measures were advocated as a means to strengthen the convention and promote transparency, trust, and cooperation among member states. The discussion also explored the possibility of establishing an open-ended body and temporary working groups to examine the impact of novel technologies. It was suggested that the convention should focus on principles and general frameworks rather than specific details to ensure flexibility in response to different situations. Furthermore, specific technical expertise was deemed important for discussions on specific issues, requiring more than a general understanding of life sciences. In conclusion, the speakers emphasized the significance of international cooperation, knowledge sharing, and the need for additional structures to strengthen the Biological Weapons Convention. Implementation of confidence-building measures, exploration of new collaborative mechanisms, focus on principles, and inclusion of specific technical expertise were highlighted as critical aspects in effectively addressing biological threats.
Una Jakob
The discussions highlighted the importance of integrating ethical norms into the fields of artificial intelligence (AI) and biotechnology, with a specific focus on developing behavioural standards for ethical guidelines. It was argued that establishing these standards is crucial to ensure responsible practices and prevent any potential misuse or harm in these domains. Specifically, there were suggestions for creating specific behavioural standards for the intersection of AI and biotechnology.
Furthermore, there was an emphasis on integrating this ethical thinking into the Biological Weapons Convention (BWC). It was proposed that the BWC should consider incorporating these behavioural standards into its framework, recognizing the potential risks and benefits of AI and biotechnology in relation to biological disarmament and norms. This integration could help ensure that technological advancements in AI and biotechnology are aligned with ethical principles and international norms.
Another important point raised in the discussions was the need to clarify the term “biosecurity” in the context of the BWC. It was noted that while biosecurity in the BWC context traditionally referred to the prevention of unauthorised access to labs, pathogens, and biological materials, outside of the convention, the term has acquired a broader meaning. This broader meaning includes considerations such as disease surveillance and pandemic preparedness, which may be important but not central to the BWC. Therefore, it was argued that before evaluating the risks and benefits associated with biosecurity, a clarification of the term in the BWC context is necessary to ensure a common understanding.
The perspective of approaching technology discussions from the vantage point of the BWC was deemed beneficial. This approach would allow for a more specific and tailored focus, allowing participants to delve into the particular technologies and their implications within the context of the BWC. By narrowing down the scope of the discussions to specific elements of the BWC, a more targeted understanding of risks and benefits can be achieved.
The proponents also emphasised the benefits of AI within the BWC framework. It was highlighted that discussions on AI should consider how it can enhance the BWC’s verification process, aid in assistance under Article 7, and impact non-proliferation under Article 3. By focusing on the provisions of the BWC treaty and its specific needs, rather than just the interests or needs of states parties, a more comprehensive assessment of the benefits and risks of AI can be attained.
In conclusion, the discussions underscored the significance of integrating ethical norms into the fields of AI and biotechnology. The development of behavioural standards can ensure responsible practices and prevent any potential misuse or harm. Furthermore, the integration of ethical thinking into the BWC was highlighted as a means to align technological advancements with international norms and promote biological disarmament. Clearing the ambiguity surrounding the term “biosecurity” in the BWC context and approaching technology discussions from the perspective of the BWC were also deemed important steps. Ultimately, the proponents emphasised that discussions should be guided by the substance of the BWC treaty, prioritising its needs and provisions over the interests of states parties.
Nariyoshi Shinomiya
The use of artificial intelligence (AI) in predicting and searching for toxin protein sequences is a significant advancement in the field of biotechnology. What makes it even more impressive is that these predictions can be made using ordinary laptops, making it accessible to a wider range of researchers. AI has the ability to predict amino acid sequences that have toxin activity, even if they differ from the currently known sequences. This is a crucial development as it allows scientists to identify potentially harmful proteins more quickly and efficiently.
However, while the use of AI in biotechnology is certainly promising, there is a need for a deeper understanding of its applications and its regulation. The implications of AI in generating toxin protein sequences raise concerns about potential risks. It is important to carefully examine the ethical and safety implications of creating such sequences. To address these concerns, there is a call for international or national regulation to ensure that the use of AI in this context is carefully monitored and controlled.
Furthermore, the advancements in AI have the potential to significantly impact discussions on biosecurity measures. The ability of AI to generate different amino acid sequences with toxin activity has important implications for biosecurity. This may require a reevaluation of current biosecurity measures to effectively address the new threats that could arise from the use of AI.
In conclusion, the use of AI in predicting and searching for toxin protein sequences is a significant development in biotechnology. It has the potential to greatly enhance our ability to identify harmful proteins and improve the safety of various applications. However, it is important to have a deeper understanding of AI applications in biotechnology and to implement adequate regulation to address potential risks. Additionally, the impact of AI on biosecurity measures is an important consideration that should not be overlooked. Overall, further research and careful regulation are necessary to fully grasp the potential of AI in this field and ensure its responsible and safe use.
Xuxu Wang
Artificial intelligence (AI) plays a significant role in biosecurity research, offering numerous benefits to researchers in terms of processing, analysing, and extracting patterns from biodata. AI provides tools and infrastructure that make the cleaning and aggregation of data from multiple sources more accessible, thereby facilitating research on a larger scale. By reconciling data from various resources, AI enables researchers to have foundational data that is essential for conducting comprehensive and accurate biosecurity studies.
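As a rough illustration of the kind of multi-source data cleaning and aggregation described here, the following Python sketch harmonises and merges two hypothetical laboratory exports. The file names, column names, and pandas-based approach are assumptions made for the example, not anything presented in the session.

```python
import pandas as pd

# Hypothetical exports from two surveillance databases (illustrative only).
lab_a = pd.read_csv("lab_a_samples.csv")   # columns: sample_id, pathogen, collected
lab_b = pd.read_csv("lab_b_samples.csv")   # columns: id, organism, collection_date

# Harmonise column names and formats before merging.
lab_b = lab_b.rename(columns={"id": "sample_id",
                              "organism": "pathogen",
                              "collection_date": "collected"})
for frame in (lab_a, lab_b):
    frame["pathogen"] = frame["pathogen"].str.strip().str.lower()
    frame["collected"] = pd.to_datetime(frame["collected"], errors="coerce")

# Aggregate into one table, dropping unparseable dates and exact duplicates.
combined = (pd.concat([lab_a, lab_b], ignore_index=True)
              .dropna(subset=["collected"])
              .drop_duplicates(subset=["sample_id", "pathogen"]))
print(combined.groupby("pathogen").size())
```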
Moreover, the adoption of AI algorithms in biosecurity expands the possibilities for predictive modelling and deep learning-based analysis. These algorithms have a lower barrier to entry, allowing individuals with varying levels of domain knowledge in biosecurity to adopt and utilise such tools effectively. Machine learning and deep learning techniques enable researchers to develop predictive models that enhance their understanding of biosecurity threats and devise proactive measures to mitigate risks.
However, alongside these positive advancements, concerns arise regarding the potential misuse of AI by terrorists or criminals in the development of bioweapons. AI can be leveraged by malicious actors to identify the optimal time, place, and target audience for an attack. By analysing real-world events data, AI can predict demographics and schedules, making it a valuable tool for planning and executing bioweapon attacks. Given these risks, it becomes crucial to factor in the potential misuse of AI in bioweapon attacks when formulating policies and regulations.
Considering the implications of AI misuse in bioweapon attacks, policymakers and institutions need to include this consideration in their scoping and decision-making processes. By factoring in the possibilities that AI and data analytics present in predicting high-impact events for attacks, policymakers can develop regulations and guidelines that address these concerns. Incorporating guidelines that guard against the misuse of AI technology in the context of bioweapons aligns with the broader objective of promoting peace, justice, and strong institutions (SDG 16).
In conclusion, AI’s integration into biosecurity research brings numerous benefits, including improved data processing, analysis, and predictive modelling. However, the potential misuse of AI in the development of bioweapons raises concerns that need to be addressed. Policymakers should consider the potential risks associated with AI misuse in bioweapon attacks when creating policies and regulations to ensure the responsible use of AI technology. By doing so, they can promote a safe and secure environment in which AI can continue to contribute positively to biosecurity research and innovation.
Luis Ochoa Carrera
The analysis of the arguments on AI technology and regulation reveals several key points. Firstly, it is evident that AI technology is rapidly evolving, and with each new discovery, the associated risks of using this technology are also increasing. This highlights the need for effective risk management strategies to mitigate these risks and ensure the safe and responsible use of AI technology.
Secondly, the development of regulations specifically tailored to AI technology is crucial. The continuous advancement of AI necessitates the establishment of robust regulatory mechanisms that can adapt to the evolving landscape. It is essential to recognise that we cannot halt the progress of technology; instead, we must embrace it and ensure that regulations are in place to govern its usage. This will provide a framework for addressing emerging ethical and legal concerns and allow for necessary amendments to be made as the technology evolves.
The arguments put forward also emphasise that the responsibility for AI regulation lies with all stakeholders. This includes individuals, governments, and countries. All parties must take action to ensure peace, justice, and strong institutions in relation to AI technology. It is crucial to foster collaboration and cooperation, as the development and implementation of AI regulations require global participation. By involving all stakeholders, we can collectively work towards creating a regulatory framework that is fair, transparent, and aligned with the broader goals of society.
In conclusion, the analysis highlights the increasing risks associated with AI technology, the importance of developing tailored regulations to govern its usage, and the need for collective action to address AI regulation. It reveals the pressing need for effective risk management strategies and the establishment of regulatory mechanisms that can adapt to the fast-paced advancements in AI. Furthermore, it underscores the importance of involving all stakeholders in the process, as their participation is vital for achieving equitable and ethical AI regulation. By addressing these concerns, we can harness the potential of AI technology while mitigating its risks and ensuring a prosperous future for all.
Hong Ping Wei
The analysis explores the risks and potential consequences of artificial intelligence (AI) on global biosecurity, emphasizing the importance of comprehending these risks and implementing precautionary measures. It provides a nuanced understanding of the subject through various viewpoints.
One argument emphasizes the need for awareness of the sources of AI risks, highlighting the importance of understanding where these risks originate. Risk assessment considers the likelihood of an event occurring and the severity of its consequences. Understanding these risks and their likelihood is essential for effective risk management.
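To make the likelihood-and-consequence framing concrete, here is a minimal sketch; the ordinal scales and scenario labels are invented for illustration and do not reflect any assessment made in the session.

```python
# Ordinal scales for a simple risk matrix (illustrative values only).
likelihood = {"rare": 1, "possible": 2, "likely": 3}
consequence = {"minor": 1, "moderate": 2, "severe": 3}

def risk_score(event_likelihood: str, event_consequence: str) -> int:
    """Score = likelihood rating multiplied by consequence rating."""
    return likelihood[event_likelihood] * consequence[event_consequence]

scenarios = [
    ("misleading model output reaches a decision-maker", "possible", "moderate"),
    ("model output combined with a physical delivery capability", "rare", "severe"),
]
for name, l, c in scenarios:
    print(f"{name}: risk score {risk_score(l, c)}")
```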
Opinions vary regarding the risks AI poses to global biosecurity. One perspective argues that the risk is currently low due to AI’s stage of development. However, the rapid evolution of the field makes its future uncertain. Another viewpoint highlights a gap in AI’s ability to interpret biological data reliably, particularly in the domain of biosecurity. The “information to physical agent” gap indicates limitations in AI’s capacity to accurately understand and interpret biological information.
To address these risks, precautionary principles and measures are advocated. Given the rapid development of AI, potential future risks should be mitigated through precautionary steps. These measures may include implementing codes of conduct and data control for AI workers and parties involved. It is believed that handling AI with precautionary principles is essential to prevent potential harm and negative consequences.
Furthermore, perceived risks and real risks are distinguished. It is argued that most perceived risks of AI causing harm are mere imagination rather than reality. However, when AI is combined with other elements, such as weapons, it can become dangerous. Considering the context and potential influences that amplify or enhance AI risks is crucial.
The analysis concludes that understanding the real risks of AI, rather than focusing on imagined threats, is important. This understanding enables effective risk management strategies and responsible decision-making regarding the integration and deployment of AI technologies.
In summary, the analysis highlights the risks of AI on global biosecurity and underscores the need to understand these risks. Divergent viewpoints are presented on the current risk level and AI’s ability to interpret biological data. Precautionary measures are proposed to address potential future risks. The distinction between perceived and real risks is emphasized, along with the dangers of combining AI with other elements. Overall, a comprehensive understanding of the real risks associated with AI is crucial for effective risk management and responsible decision-making.
Kavita Berger
The use of artificial intelligence (AI) in biotechnology presents a range of challenges and opportunities. One key concern is the reliability and verifiability of data inputs for AI models in biotechnology. It has been observed that biological data used to feed AI models is often biased, not reproducible, and comes with uncertainty. For example, protein databases are well-curated and considered examples of good data, while genomic databases are less well-curated in comparison.
Another challenge is the need for specific skills beyond computer science and computational biology when translating AI computational models into physical substances in biotechnology. Moving from the design and modeling stage to creating a physical biological agent or device requires expertise in fields beyond AI, highlighting the interdisciplinary nature of this field.
The use of predictive AI in biotechnology is still in its early stages. The field currently lacks a comprehensive understanding of the cause and effect of different molecules and their traits. While there are known models for specific uses, such as digital twins, more complex systems still require further exploration.
AI also has implications in terms of biosecurity. Concerns have been raised about the potential for AI to facilitate the design of biological agents for malicious purposes, lowering the barriers for accessing harmful biological agents. However, AI can also have potential benefits in biosecurity, such as identifying malicious sequences or spotting unusual activity in publicly available information.
In the field of biotechnology, AI is being applied to address various biological problems, ranging from environmental sustainability to human health and synthetic biology. This suggests that AI has the potential to contribute to advancements in these areas.
However, there are challenges in applying AI in the biosciences. Data management and interpretation pose significant obstacles. The quality and type of data input into AI systems influence the output. Ensuring high-quality, curated, and interoperable data becomes crucial in generating accurate results. Additionally, understanding correlation and causation in the context of AI’s application in the biosciences remains a challenge.
In terms of ethics, numerous ethical guidelines and norms exist for the use of AI in healthcare and other fields. Efforts are being made to reduce bias and prevent the retrieval of unsafe or harmful information by developing algorithms and tools that promote more ethical AI practices. Despite these efforts, biases in life science data still pose a challenge as data can be biased due to the questions asked, model systems used, cohorts, and measurement parameters.
AI in the context of bioweapons raises concerns about potential risks. AI could potentially facilitate the design and development of biological agents with negative implications. However, it is also important to consider the potential benefits of AI in improving biosecurity and identifying potential threats.
Additionally, the opportunity costs of not investing in AI in life sciences should be taken into account. Neglecting AI in this field could hinder progress and advancements in various sectors that are traditionally considered areas of dual-use research of concern.
The importance of adapting existing systems to keep pace with new technologies is highlighted. Rather than reinventing the wheel, efforts should be focused on leveraging and adapting existing systems to incorporate AI and other emerging technologies effectively.
In conclusion, the use of AI in biotechnology presents a range of challenges and opportunities. Concerns related to data reliability and verifiability, the need for specific skills, limited understanding of cause and effect, biosecurity risks, biases in data, and ethical considerations require thoughtful attention. Efforts must be made to address these challenges effectively while embracing the potential benefits that AI can offer in advancing various aspects of the biosciences.
Tshilidzi Marwala
AI, or Artificial Intelligence, is classified into three types and has the ability to predict, classify, and generate data. This showcases its potential to mimic human behaviours and decision-making processes. AI also finds applications in various fields like agriculture, biotechnology, vaccine development, and wastewater treatment, thereby addressing global challenges related to food security, health, and environmental sustainability. However, there are challenges associated with AI, such as the uneven quality and availability of data across different regions, impacting its performance and effectiveness. Another challenge is the balance between interpretability and accuracy in machine learning models. Governance is crucial, especially in the context of potential weaponization of AI and biotechnology. The distinction between correlation and causality in AI algorithms also needs to be understood. AI progress is driven by advancements in computational power and data abundance. Handling unstructured data remains a challenge, but future developments in computational capabilities are expected to improve it. Incentivizing responsible behaviour in AI system design is important. A comprehensive approach is required to harness the potential of AI while mitigating its risks.
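One of the points above, the tension between interpretability and accuracy, can be illustrated with a small, self-contained sketch on synthetic data. The use of scikit-learn and the particular models are assumptions made for the example rather than anything discussed by the speaker.

```python
# A shallow decision tree is easy to inspect; a larger ensemble is usually
# more accurate but harder to explain. Synthetic data only.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

print("interpretable tree accuracy:", round(tree.score(X_test, y_test), 3))
print("ensemble accuracy:          ", round(forest.score(X_test, y_test), 3))
print(export_text(tree))  # the tree's decision rules can be read directly
```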
Peter McGrath
A meeting is currently taking place to discuss and explore the proof of concept for a scientific advisory body for the Biological and Toxin Weapons Convention (BWC). Peter McGrath welcomes the participants to the meeting, which is being recorded to assist the organizing team in writing their report. It is important to note that no parts of the recording will be shared or made public.
Peter McGrath introduces the Inter-Academy Partnership (IAP), a global network comprising 150 academies of science, medicine, and engineering. The IAP is renowned for providing independent expert advice on various scientific, technological, and health issues. Since 2005, the IAP has particularly focused on biosecurity and the promotion of responsible research practices. Peter McGrath highlights the positive role played by the IAP in this regard.
The meeting is structured as a two-step process, with the current phase being the first. It is the initial stage of a project examining the proof of concept for a scientific advisory body for the BWC. The second phase of the meeting will take place in Trieste, in person. During the meeting, discussions will revolve around the potential benefits and risks of artificial intelligence in the context of global biosecurity within the BWC framework.
Peter McGrath outlines the working procedure of the meeting and the goals for the day. He emphasizes the need for an open and informative discussion and encourages participants to introduce themselves and use the chat function for commenting. Furthermore, a Q&A function is provided, allowing all participants to respond to questions. However, strict restrictions are in place to ensure the anonymity of speakers and participants.
The meeting also prompts a comparison between artificial intelligence and the Non-Proliferation Treaty, with AI characterized as an enabling technology rather than something directly comparable to that treaty.
Notably, the rapid development of AI applications by the private sector, across various sectors, is acknowledged. The private sector possesses advanced tools and expertise in AI that may not be readily available or mastered by academia or scientific communities.
Consideration is given to involving the private sector in AI governance, as they are at the forefront of AI application development and possess unique tools and expertise.
The changing nature of risks with the advent of AI is contemplated by Peter McGrath, who questions whether enough norms and standards have been established around AI to adequately address these risks.
Ultimately, the organizers of the meeting seek to address the risks and benefits of AI from the perspective of state parties involved, showcasing an intent to formulate a question on the risks and benefits of AI from a state-focused mindset.
In conclusion, the ongoing meeting focuses on the proof of concept for a scientific advisory body for the BWC. The IAP is introduced as a valuable contributor in the field of biosecurity and responsible research practices. The meeting aims to foster an open and informative discussion, exploring the potential benefits and risks associated with the deployment of AI in global biosecurity efforts. The private sector’s rapid development of AI applications is acknowledged, leading to considerations for the involvement of the private sector in AI governance. The changing landscape of risks with the advent of AI is also brought into question. Finally, the organizers aim to address the risks and benefits of AI from the perspective of state parties engaged in the discussion.
Nisreen Al-Hmoud
The analysis emphasises the importance of evaluating the risks and benefits associated with emerging technologies, particularly in relation to AI applications. Nisreen Al-Hmoud raises a pertinent question regarding the lack of a specific methodology or criteria for conducting risk assessment and management in this domain. She asserts that there is a need for a well-defined approach to effectively assess and manage the potential risks and benefits of AI applications.
Furthermore, the complexity of AI technology presents significant challenges for governance. Due to its multidimensional nature, it is challenging to find a single subject matter expert who can serve as a key authority for defining regulations. The regulation of AI technology encompasses not only the AI code itself but also the manner in which AI is employed.
The analysis highlights the necessity for further thoughts and ideas on how to govern AI technology. This suggests that more research and discussion are required to develop effective governance frameworks that can address the complexity and potential risks associated with AI. In addition, questions have been raised regarding how to define and mitigate risks and non-compliance within the context of AI.
Overall, the analysis underscores the critical need for comprehensive risk assessment and management in the field of AI applications. It highlights the necessity of a robust and clearly defined approach to evaluate and regulate the risks and benefits associated with this technology. The complexity of AI technology and the governance challenges it poses further emphasise the importance of ongoing research and discussion in this area.
Teresa Rinaldi
During the discussion, the speakers emphasized the significance of prevention in the fields of data mining, pharmacogenomics, and toxin development. They pointed out that many drugs currently in the development stage are based on data mining, highlighting the crucial role this practice plays in the advancement of drug research and discovery. Furthermore, they discussed how in silico analysis is being used to identify and create toxins, emphasizing the potential dangers associated with these substances.
The first speaker presented a negative sentiment and argued for the importance of focusing on prevention in data mining and pharmacogenomics. They highlighted the fact that many drugs in development rely on data mining techniques, underscoring the need for careful consideration of potential risks and adverse effects.
The second speaker took a positive stance, asserting that education and awareness are key in preventing harm caused by toxic substances developed through data mining. They stressed the importance of educating students and scientists about the connections between chemistry and biology in this context.
To support their argument, the second speaker mentioned the lack of adherence to guidelines by professionals in the field, suggesting that this could contribute to the potential harm caused by toxic substances. They emphasized the need for education and awareness campaigns aimed at these professionals as a means of mitigating risks and promoting responsible practices.
In conclusion, both speakers agreed on the significance of prevention in the fields of data mining, pharmacogenomics, and toxin development. They highlighted the use of data mining in drug development and the identification of toxins through in silico analysis. Additionally, they stressed the importance of education and awareness in preventing harm caused by toxic substances created via data mining. The speakers provided compelling arguments and evidence, reinforcing the need for risk assessment, responsible practices, and adherence to guidelines in these fields.
Anna Holmstrom
Artificial Intelligence (AI) models have the potential to greatly contribute to pathogen biosurveillance systems and the development of medical countermeasures. This technological advancement can have a significant impact on achieving SDG 3 (Good Health and Well-being) and SDG 9 (Industry, Innovation, and Infrastructure).
In the field of pathogen biosurveillance, AI models can support the identification and tracking of pathogens, helping to detect outbreaks and prevent the spread of diseases. By analyzing large amounts of data, these models can identify patterns and provide insights that can aid in the development of effective medical countermeasures.
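A minimal sketch of what pattern-based outbreak detection can look like is given below, assuming weekly case counts and a simple rolling-baseline rule; the numbers and threshold are invented, and real biosurveillance systems are far more elaborate.

```python
import statistics

# Invented weekly case counts; the later weeks contain a simulated spike.
weekly_cases = [12, 9, 14, 11, 10, 13, 12, 15, 11, 38, 41, 16]
window, z_threshold = 6, 3.0

for week, count in enumerate(weekly_cases):
    if week < window:
        continue  # not enough history yet
    baseline = weekly_cases[week - window:week]
    mean, sd = statistics.mean(baseline), statistics.stdev(baseline)
    if sd > 0 and (count - mean) / sd > z_threshold:
        print(f"week {week}: {count} cases flagged (baseline {mean:.1f} ± {sd:.1f})")
```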
Moreover, the use of next-generation AI tools holds great promise for early detection and rapid response in public health emergencies. By leveraging advanced algorithms and machine learning techniques, these tools can quickly analyze vast amounts of data to identify potential threats and take swift action. This aligns with both SDG 3 and SDG 9.
AI’s capabilities also extend to the prediction of supply chain shortages during public health emergencies. By considering various factors such as demand, production capacity, and transportation logistics, AI can provide valuable insights that enable proactive measures to mitigate the impact of these shortages. This is particularly relevant in the context of SDG 3, which aims to ensure access to quality healthcare for all.
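The supply-shortage point can be illustrated with an equally simple projection: estimating days of remaining stock from a recent consumption trend. The item names, quantities, and naive averaging are all assumptions made for the example.

```python
from datetime import date, timedelta

# Illustrative stock levels and last week's daily consumption.
stock_on_hand = {"respirators": 120_000, "reagent kits": 3_400}
recent_daily_use = {
    "respirators": [9_500, 10_200, 11_000, 12_400, 13_100, 13_800, 14_500],
    "reagent kits": [90, 95, 92, 100, 104, 108, 115],
}

today = date.today()
for item, usage in recent_daily_use.items():
    burn_rate = sum(usage) / len(usage)          # naive average daily demand
    days_left = stock_on_hand[item] / burn_rate
    print(f"{item}: ~{days_left:.0f} days of supply "
          f"(runs out around {today + timedelta(days=round(days_left))})")
```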
Furthermore, AI tools can play a crucial role in detecting unusual or potentially dangerous behaviors among AI-model users or life science practitioners. By monitoring and analyzing patterns, these tools can contribute to improving the safety and security of individuals and communities, in alignment with SDG 16.
Another key application of AI is in strengthening DNA sequence screening approaches. By leveraging AI algorithms, DNA analysis and screening can be enhanced to capture novel threats and identify potential risks in a more efficient and accurate manner. This development aligns with both SDG 3 and SDG 9.
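As a toy illustration of sequence screening, the sketch below compares a query string against a small reference list by counting shared k-mers; the sequences are meaningless placeholders, and real screening pipelines use far more sophisticated methods.

```python
def kmers(seq: str, k: int = 8) -> set:
    """Return the set of length-k substrings of a sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

# Placeholder reference sequences (not real sequences of concern).
watchlist = {
    "reference_A": "ATGGCTAGCTAGGCTTACGATCGATCGGATCCA",
    "reference_B": "TTGACCGGTTAACCGGTTAACCGGATCGATCGA",
}

def screen(query: str, threshold: float = 0.5) -> list:
    """Return watchlist entries sharing a large fraction of the query's k-mers."""
    query_kmers = kmers(query)
    hits = []
    for name, reference in watchlist.items():
        overlap = len(query_kmers & kmers(reference)) / max(len(query_kmers), 1)
        if overlap >= threshold:
            hits.append(name)
    return hits

print(screen("ATGGCTAGCTAGGCTTACGATCGATCGGATCCA"))  # exact match -> ['reference_A']
```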
However, there are concerns about the time gap between the suggestion and establishment of a temporary working group. Delays in establishing these groups may slow timely decision-making and hinder the achievement of SDG 16. This gap could result from the need to organize and convene the groups, which may prolong the process. Efforts should be made to streamline and expedite this process to ensure that temporary working groups are established and function efficiently.
In conclusion, AI models and tools have the potential to revolutionize various aspects of public health and disease prevention. Their applications range from pathogen biosurveillance and medical countermeasure development to rapid response, supply chain management, and the detection of dangerous behaviors. While there are concerns about the efficiency of establishing temporary working groups, the overall impact of AI on achieving the SDGs, particularly SDG 3, SDG 9, and SDG 16, is promising.
Maximilian Brackmann
When evaluating a specific technology, it is vital for the group to consider both the current limitations and the current state of the art. Doing so allows for a more comprehensive assessment, taking into account potential challenges and advancements in the field.
Understanding the limitations of a technology is crucial as it provides insight into its drawbacks and potential issues. This knowledge helps identify areas for improvement and manage risks during implementation or operation.
In addition, considering the current state of the art provides a broader perspective on advancements and breakthroughs in the technology. It helps determine if the technology is keeping up with the latest trends and developments, and if there are opportunities for enhancement or integration with other technologies.
By thoroughly considering both the limitations and the state of the art, the group can form a well-rounded assessment of the technology. This comprehensive evaluation allows for a better understanding of its capabilities, potential impact, and feasibility.
In conclusion, when evaluating a technology, it is crucial to consider both its limitations and the current state of the art. This ensures informed decision-making and maximizes the benefits while mitigating the risks associated with the technology.
Aamer Ikram
In light of the current scenario, it has been identified that there is a pressing need for revised confidence-building measures. Aamer Ikram, along with a committee, engaged in a thorough deliberation of this matter around 2009 and 2010. With a positive sentiment, the argument put forth is that revising these measures is essential to address the challenges and concerns faced in the present circumstances. However, specific details about the challenges or reasons behind the necessity for the revised measures are not provided.
Moving on, it has been highlighted that a scientific advisory board supporting the Biological and Toxin Weapon Convention (BWC) is of utmost importance. Prior to the COVID-19 pandemic, a workshop was scheduled in Geneva to discuss this topic. Participants in the workshop expressed a positive sentiment towards the notion of a supportive advisory board for the BWC. It is stated that most participants felt that such a board is mandatory. However, the specifics of why the board is deemed necessary and its potential benefits are not provided in the summary.
The COVID-19 pandemic has further underscored the significance of establishing a supporting advisory board for the BWC. Unfortunately, no supporting facts or evidence are provided to elaborate on this point. Nevertheless, the positive sentiment suggests that the current situation, shaped by the COVID-19 pandemic, provides a compelling rationale to contemplate the idea of a supporting advisory board for the BWC.
In addition to the discussions surrounding confidence-building measures and the BWC, this expanded summary also emphasizes the importance of addressing the connection between Artificial Intelligence (AI) and biosecurity. The stance is that this connection needs to be acknowledged and understood. The topic of AI versus biosecurity is considered highly relevant and is regarded as a key aspect of cyber biosecurity. However, no specific arguments or supporting evidence are provided to support this claim.
In conclusion, the need for revised confidence-building measures in light of the current scenario is emphasized, but the specific challenges or reasons for revision are not mentioned. The establishment of a supportive advisory board for the BWC is deemed crucial, with the COVID-19 pandemic further emphasizing its importance. The connection between AI and biosecurity is also highlighted as a significant aspect, although no supporting arguments or evidence are provided.
Bert Rima
Interpreting data from biological databases is a complex task that poses significant challenges, as highlighted by a recent study conducted by Puglisi et al. Previous efforts by the biosecurity panel of the Inter-Academy Partnership (IAP) to interpret interactions of different reference systems in databases did not yield conclusive results. This suggests that there is a need to further explore and enhance methodologies for analyzing biological data that was not primarily designed to answer specific questions.
One potential benefit of artificial intelligence (AI) is its ability to differentiate natural disease outbreaks from those that are caused by interventions. This capability could prove crucial in the implementation of the Bioweapons Convention. By using AI, it becomes possible to distinguish attacks by state or non-state actors from natural events, which is important for complying with Article 7 and Article 10 of the Bioweapons Convention. AI applications in this context have the potential to assist in effectively implementing the convention and ensuring global health and well-being.
AI can play a significant role in effectively implementing the Bioweapons Convention by identifying unnatural disease outbreaks. This stance is supported by the notion that if AI can distinguish between natural epidemics and those triggered by an intentional attack, it would greatly contribute to identifying and responding to potential bioweapons threats. The ability to accurately identify such outbreaks is vital for ensuring the peace, justice, and strong institutions as outlined in SDG 16, in addition to advancing good health and well-being as highlighted in SDG 3.
The interaction between scientific panels and policy leaders is key for addressing relevant scientific topics and questions within the European Academies' Science Advisory Council (EASAC). However, it is important to note that scientific panels sometimes focus on scientific questions that have little relevance to policy development. To ensure the alignment of research priorities with policy goals, scoping papers—products of scientific panels—should consider the interests of receiving parties such as the European Commission and Parliamentarians.
In line with this, it is suggested that standing panels with longer membership should be created to provide continuity and expertise, bridging the gap between review conferences. These standing panels would help guide the direction of questions in working groups, ensuring a more focused and effective exchange of information and ideas.
In conclusion, interpreting data from biological databases is challenging, but artificial intelligence holds the potential to aid in the effective implementation of the Bioweapons Convention by differentiating natural disease outbreaks from those caused by deliberate interventions. The interaction between scientific panels and policy leaders within EASAC is crucial for addressing relevant scientific topics, but scoping papers need to consider the interests of receiving parties. The creation of standing panels with longer membership is suggested to focus the direction of questions in working groups and facilitate effective communication between the scientific and policy communities.
Johannes Rosenstand
The analysis examines different arguments concerning biosecurity within the context of the Biological Weapons Convention (BWC). One argument suggests that there is uncertainty regarding the interpretation of biosecurity in the BWC context. This confusion arises from the lack of distinction between nonproliferation of dual-use research, biological matters, and other benefits such as vaccine development and surveillance. The argument calls for a clearer focus on the specific aspects of biosecurity within the BWC framework.
An alternative viewpoint asserts that biosecurity in the BWC context should primarily address the nonproliferation of dual-use research and biological matter. According to this perspective, benefits such as surveillance and vaccine development may not be directly related to biosecurity. The emphasis is on maintaining a precise and concise approach to prevent the proliferation of research and substances that can potentially be misused.
Another aspect of the analysis explores the potential influence of drafters of a scoping paper within the Biological and Toxin Weapons Convention (BTWC) framework. Concern is raised about how the drafters’ ability to shape the questions in the scoping paper can significantly impact the outcomes. This raises questions about the individuals responsible for drafting these papers and the need to ensure an unbiased and objective process.
While the analysis lacks specific supporting evidence for the second argument regarding biosecurity in the BWC context, it does highlight the importance of considering the potential influence and implications of drafters in the BTWC context.
In conclusion, the analysis examines the challenges and uncertainties surrounding the interpretation of biosecurity within the BWC context. It presents two contrasting perspectives, one stressing the need for clarity and specificity in defining biosecurity, and the other raising concerns about potential bias in drafting crucial documents. Further research and discussions are required to establish a comprehensive and consensus-driven understanding of biosecurity within both the BWC and BTWC contexts.
Masresha Fetene
The Biological and Toxin Weapons Convention (BWC) currently lacks a scientific advisory body, which puts it behind other international instruments. While other weapons-of-mass-destruction regimes have successfully established global scientific advisory bodies, the BWC has not done so. However, many states parties to the BWC now believe it is the right time to establish such a body.
A proposed solution is a hybrid structure for the scientific advisory body. This structure involves an open-ended phase one, followed by a more limited group of experts in phase two. During the open-ended phase, various stakeholders would be involved in discussing and refining information. This would lead to the development of recommendations to be presented at the BWC review conference, which is held every five years. Experts in phase two would further discuss and refine these recommendations before presenting them at the conference.
The Inter-Academy Partnership (IAP) has a significant history of work in the field of biosecurity. Their involvement in establishing a scientific advisory body for the BWC would be valuable due to their expertise and experience. The IAP’s work in this area dates back to the publication of the IAP Statement of Biosecurity in 2005.
Balanced representation from both high-income and low-middle-income countries is crucial when forming the scientific advisory body. By ensuring fair and inclusive representation, perspectives from different regions and economic backgrounds can be considered. This balanced representation is necessary to avoid income disparities and reduce inequalities within the BWC.
In conclusion, the establishment of a scientific advisory body would greatly benefit the BWC. The proposed hybrid structure, involving an open-ended phase and a limited group of experts, offers a potential solution. Additionally, the involvement of the Inter-Academy Partnership, with its history of work in biosecurity, would bring valuable expertise to this endeavor. Furthermore, ensuring balanced representation from high-income and low-middle-income countries would contribute to a fair and inclusive advisory body. By implementing these measures, the BWC can strengthen its capabilities in addressing biological and toxin weapons.
Speakers
AI
Aamer Ikram
Speech speed
153 words per minute
Speech length
426 words
Speech time
167 secs
Arguments
There is a necessity for revised confidence building measures after deliberating on the current scenario.
Supporting facts:
- Confidence building measures were revised around 2009 and 2010.
- Aamer Ikram was a part of the committee that revised these measures.
Topics: Confidence Building Measures, Current Scenario
A scientific advisory board supporting the Biological and Toxin Weapon Convention (BWC) is crucial.
Supporting facts:
- Before the COVID-19 pandemic, there was a workshop scheduled in Geneva regarding this topic.
- Most of the participants felt that a supportive advisory board for the BWC was mandatory.
Topics: Scientific Advisory Board, Biological and Toxin Weapon Convention
The present time, after experiencing the COVID-19 scenario, could be the best time to contemplate the idea of a supporting advisory board for the BWC.
Supporting facts:
Topics: COVID-19, Scientific Advisory Board, Biological and Toxin Weapon Convention
AH
Anna Holmstrom
Speech speed
131 words per minute
Speech length
289 words
Speech time
132 secs
Arguments
AI models can aid pathogen biosurveillance systems and the development of medical countermeasures
Topics: AI models, Pathogen Biosurveillance, Medical Countermeasures
Next generation AI tools for early detection and rapid response can be developed
Topics: Next generation AI tools, Early detection, Rapid Response
AI can predict supply chain shortages during public health emergencies
Topics: AI, Supply chain shortages prediction, Public Health Emergencies
AI tools can detect unusual or potentially dangerous behaviors among AI-model users or life science practitioners
Topics: Detection of dangerous behaviors, AI-mode users, Life science practitioners
AI can strengthen DNA sequence screening approaches to capture novel threats
Topics: AI, DNA sequence screening, Novel threats
Concern about the time gap between the suggestion and establishment of a temporary working group
Supporting facts:
- The body can be suggested by either the open-ended or the limited size group
- The establishment of a temporary working group can be made at the next meeting of states parties
- Hypothetically, if a suggestion is made in January and the MSP is in December, there is a long gap in time
Topics: Mechanism, Temporary working groups, Decision-making
BR
Bert Rima
Speech speed
154 words per minute
Speech length
762 words
Speech time
297 secs
Arguments
Interpreting data from biological databases is quite challenging.
Supporting facts:
- Earlier efforts from IAP’s biosecurity panel to interpret interactions of different reference systems in databases didn’t bring about conclusive results.
- A recent study by Puglisi et al. demonstrates the challenges of analyzing biological data not primarily designed to answer specific questions.
Topics: AI, Biological databases, Dual-use gain-of-function research
Potential benefit of AI might be its use in distinguishing natural disease outbreaks from those caused by interventions.
Supporting facts:
- It could be advantageous if AI could help differentiate attacks by state or non-state actors from natural events.
- Such applications of AI could be essential for the implementation of Article 7 and Article 10 of the Bioweapons Convention.
Topics: AI, Bioweapons Convention, state actor, non-state actor
Scoping papers are largely products of scientific panels, but they should consider the interests of receiving parties such as the European Commission and Parliamentarians.
Supporting facts:
- Bert co-chairs the science and biosciences panel for ISAC
Topics: Scoping papers, scientific panels, policy development
Bert Rima suggests the creation of standing panels with longer membership to focus the direction of questions in working groups.
Topics: Review Conferences, Working Groups
Report
Interpreting data from biological databases is a complex task that poses significant challenges, as highlighted by a recent study conducted by Puglisi et al. Previous efforts by the biosecurity panel of the Inter-Academy Partnership (IAP) to interpret interactions of different reference systems in databases did not yield conclusive results.
This suggests that there is a need to further explore and enhance methodologies for analyzing biological data that was not primarily designed to answer specific questions. One potential benefit of artificial intelligence (AI) is its ability to differentiate natural disease outbreaks from those that are caused by interventions.
This capability could prove crucial in the implementation of the Bioweapons Convention. By using AI, it becomes possible to distinguish attacks by state or non-state actors from natural events, which is important for complying with Article 7 and Article 10 of the Bioweapons Convention.
AI applications in this context have the potential to assist in implementing the convention and ensuring global health and well-being. In particular, AI could support implementation of the Bioweapons Convention by identifying unnatural disease outbreaks: if AI can distinguish between natural epidemics and those triggered by an intentional attack, it would greatly contribute to identifying and responding to potential bioweapons threats.
The ability to accurately identify such outbreaks is vital for ensuring the peace, justice, and strong institutions as outlined in SDG 16, in addition to advancing good health and well-being as highlighted in SDG 3. The interaction between scientific panels and policy leaders is key for addressing relevant scientific topics and questions within the International Science Advisory Council (ISAC).
However, it is important to note that scientific panels sometimes focus on scientific questions that have little relevance to policy development. To ensure the alignment of research priorities with policy goals, scoping papers—products of scientific panels—should consider the interests of receiving parties such as the European Commission and Parliamentarians.
In line with this, it is suggested that standing panels with longer membership should be created to provide continuity and expertise, bridging the gap between review conferences. These standing panels would help guide the direction of questions in working groups, ensuring a more focused and effective exchange of information and ideas.
In conclusion, interpreting data from biological databases is challenging, but artificial intelligence holds the potential to aid in the effective implementation of the Bioweapons Convention by differentiating natural disease outbreaks from those caused by interventions. The interaction between scientific panels and policy leaders within ISAC is crucial for addressing relevant scientific topics, but scoping papers need to consider the interests of receiving parties.
The creation of standing panels with longer membership is suggested to focus the direction of questions in working groups and facilitate effective communication between scientific and policy communities.
HP
Hong Ping Wei
Speech speed
139 words per minute
Speech length
826 words
Speech time
358 secs
Arguments
Awareness of where the risks of AI come from is crucial.
Supporting facts:
- Risk assessment includes two factors, the likelihood of occurrence and the severity of the consequences.
- Understanding the risks and the chances of those risks materializing is important.
- At the current stage, the risk of AI to global biosecurity is seen as low but it’s rapidly developing.
Topics: Risk Assessment, Artificial Intelligence
A gap exists in AI’s ability to interpret biological data reliably.
Supporting facts:
- AI interpretation of biological data is currently not very reliable.
- The information to physical agent gap is huge in the domain of biosecurity.
Topics: Artificial Intelligence, Data Interpretation, Biosecurity
AI needs to be handled with precautionary principles.
Supporting facts:
- Given AI’s rapid development, potential future risk should be mitigated by taking precautionary steps.
- Measures such as codes of conduct and data control can be implemented for AI workers and involved parties.
Topics: Artificial Intelligence, Precautionary Principles
The risk of AI causing harm is largely imagined rather than real
Supporting facts:
- Most of the perceived risks of AI seem to be imagined rather than real
Topics: Artificial Intelligence, Risk Assessment
AI in itself cannot cause harm, it becomes dangerous when combined with other elements like weapons
Supporting facts:
- AI can be used in weapons to recognize specific people from a mass and target them
Topics: Artificial Intelligence, Weapons, Security
Report
The analysis explores the risks and potential consequences of artificial intelligence (AI) on global biosecurity, emphasizing the importance of comprehending these risks and implementing precautionary measures. It provides a nuanced understanding of the subject through various viewpoints. One argument emphasizes the need for awareness of the sources of AI risks, highlighting the importance of understanding where these risks originate.
Risk assessment considers the likelihood of an event occurring and the severity of its consequences. Understanding these risks and their likelihood is essential for effective risk management. Opinions vary regarding the risks AI poses to global biosecurity. One perspective argues that the risk is currently low due to AI’s stage of development.
However, the rapid evolution of the field makes its future uncertain. Another viewpoint highlights a gap in AI’s ability to interpret biological data reliably, particularly in the domain of biosecurity. The “information to physical agent” gap indicates limitations in AI’s capacity to accurately understand and interpret biological information.
To address these risks, precautionary principles and measures are advocated. Given the rapid development of AI, potential future risks should be mitigated through precautionary steps. These measures may include implementing codes of conduct and data control for AI workers and parties involved.
It is believed that handling AI with precautionary principles is essential to prevent potential harm and negative consequences. Furthermore, perceived risks and real risks are distinguished. It is argued that most perceived risks of AI causing harm are mere imagination rather than reality.
However, when AI is combined with other elements, such as weapons, it can become dangerous. Considering the context and potential influences that amplify or enhance AI risks is crucial. The analysis concludes that understanding the real risks of AI, rather than focusing on imagined threats, is important.
This understanding enables effective risk management strategies and responsible decision-making regarding the integration and deployment of AI technologies. In summary, the analysis highlights the risks of AI on global biosecurity and underscores the need to understand these risks. Divergent viewpoints are presented on the current risk level and AI’s ability to interpret biological data.
Precautionary measures are proposed to address potential future risks. The distinction between perceived and real risks is emphasized, along with the dangers of combining AI with other elements. Overall, a comprehensive understanding of the real risks associated with AI is crucial for effective risk management and responsible decision-making.
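As a minimal, hypothetical illustration of the two-factor framing summarised above (likelihood of occurrence combined with severity of consequences), the following sketch scores a hazard on a simple ordinal scale; the scale and the example values are assumptions for illustration only, not a methodology discussed in the session.

LEVELS = {"very low": 1, "low": 2, "medium": 3, "high": 4, "very high": 5}

def risk_score(likelihood, severity):
    """Combine the two factors from the report: likelihood of occurrence x severity of consequences."""
    return LEVELS[likelihood] * LEVELS[severity]

# Example: a hazard currently judged unlikely but with very severe consequences.
print(risk_score("low", "very high"))  # 10 on a 1-25 scale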
JR
James Revill
Speech speed
206 words per minute
Speech length
2809 words
Speech time
820 secs
Arguments
AI systems fundamentally depend on data and computational power
Supporting facts:
- AI can predict, cluster and serve as a generative tool
- Deep learning was perfected in the 1980s, but some AI designs can still give a false sense of causality
Topics: AI, Data, Computational Power
Data in biological science is often not reproducible or verifiable and carries uncertainty
Supporting facts:
- Translating biological data into physical things such as an agent requires additional skills
- Predictive models and causal relations remain incomplete in biological data interpretation
Topics: AI, Data, Biological Science
AI could do potential harm such as creation of new weapons or increased access to biological weapons
Supporting facts:
- Additional information, skills, and resources are necessary to create a viable biological weapon
- AI could potentially change our perception on biological weapons, creating a loophole in the Biological Weapons Convention
Topics: AI, Weapons, Biological Weapons
A scoping paper in the EU mechanism is a collective development
Supporting facts:
- EU uses a scoping paper to give discussions a direction
- In the EU mechanism, the scoping paper is developed collectively
Topics: BTWC, EU mechanism, scoping paper
A scoping paper can have a lot of influence.
Supporting facts:
- The BTWC context influenced the question on who would draft the scoping paper
Topics: Scoping paper, Influence
AI could play a devastating role in fueling mis- and disinformation, which could exacerbate some of the existing issues in relation to biological weapons.
Topics: AI, biological weapons, mis- and disinformation
States should take responsibility for developing Confidence-Building Measures (CBMs), instead of outsourcing to AI or chatbots because it could undermine the value of CBMs.
Topics: Confidence-Building Measures, AI, responsibility
Discussion over access to tech makes him nervous, stressing that it is politically sensitive and states parties should tread carefully.
Topics: tech access, politics, sensitivity
Report
AI systems heavily rely on data and computational power to predict, cluster, and generate results, but certain AI designs can create a false sense of causality. In the field of biological science, the reproducibility, verifiability, and certainty of data can be challenging.
Although AI can help translate biological data into applications, interpreting predictive models and causal relations remains incomplete. Concerningly, AI could be misused to create new weapons or increase accessibility to biological weapons, potentially undermining the Biological Weapons Convention (BWC). To prevent harm, careful regulation and management of AI is necessary.
Despite advancements in science and technology, the BWC remains adaptable, and the equal distribution of AI’s benefits and international cooperation are crucial. The European Union (EU) employs a collaborative scoping paper to guide discussions, but diplomats may need input from states parties for effective development of questions in a scientific advisory mechanism.
AI’s potential role in spreading misinformation and exacerbating biological weapons issues calls for responsible states to take ownership of developing Confidence-Building Measures (CBMs). While technology is valuable, it has limitations and access to it is a politically sensitive topic. Having scientific secretariat support facilitates communication between scientists and policymakers, enhancing understanding of the BWC’s principles.
JM
Jenifer Mackby
Speech speed
166 words per minute
Speech length
349 words
Speech time
126 secs
Arguments
The establishment of the S&T advisory body will likely be decided in either a 2025 special conference or the 2027 review conference
Supporting facts:
- Working group will discuss S&T more in August
- The meetings are scheduled for February and August
Topics: S&T advisory body, Timeline
Half of the countries do not respond adequately to their reporting obligations under Confidence Building Measures (CBMs)
Supporting facts:
- The CBMs consist of six different categories of which countries should exchange information
Topics: Confidence Building Measures, international cooperation, reporting mechanism
Report
The analysis discusses three topics related to Goal 16: Peace, Justice and Strong Institutions and Goal 17: Partnerships for the Goals. The first topic focuses on the establishment of an S&T (Science and Technology) advisory body. It is anticipated that the decision to establish this body will be made either in a special conference in 2025 or the review conference in 2027.
The analysis notes that further discussions about S&T will take place in February and August. The second topic examines the issue of countries not adequately fulfilling their reporting obligations under Confidence Building Measures (CBMs). CBMs consist of six different categories, and countries should exchange information under these categories.
However, the analysis highlights a negative sentiment as it reveals that half of the countries do not respond adequately to their reporting obligations. This suggests a lack of commitment from these countries in promoting peace, justice, and strong institutions. The third topic suggests the need to update the CBM categories.
The analysis reveals that only half of the countries are responding to CBMs, which leads some groups to deem it necessary to update these categories. By updating the categories, it is believed that international cooperation can be enhanced in achieving the goals of peace, justice, and strong institutions.
The analysis presents a neutral sentiment towards this suggestion, indicating that further considerations are needed to assess the potential impact of updating CBM categories. In conclusion, the analysis provides insights into the ongoing discussions related to Goal 16 and Goal 17. It highlights the potential establishment of an S&T advisory body, the lack of adequate response from countries in fulfilling their reporting obligations under CBMs, and the suggestion to update CBM categories.
The analysis emphasizes the need for further engagement and cooperation to achieve the desired outcomes of peace, justice, and strong institutions.
JR
Johannes Rosenstand
Speech speed
130 words per minute
Speech length
281 words
Speech time
130 secs
Arguments
Uncertainty over the interpretation of biosecurity in the BWC context
Supporting facts:
- There seems to be confusion between nonproliferation of dual-use research and biological matters, and benefits like vaccine development and surveillance of potential attacks
Topics: Biosecurity, AI applications, BWC
Johannes Rosenstand questions who would draft the scoping paper in the BTWC context
Supporting facts:
- James Revill suggested that the European Union uses a scoping paper for guiding discussions
Topics: Scoping Paper, BTWC, European Union
Report
The analysis examines different arguments concerning biosecurity within the context of the Biological Weapons Convention (BWC). One argument suggests that there is uncertainty regarding the interpretation of biosecurity in the BWC context. This confusion arises from the lack of distinction between nonproliferation of dual-use research, biological matters, and other benefits such as vaccine development and surveillance.
The argument calls for a clearer focus on the specific aspects of biosecurity within the BWC framework. An alternative viewpoint asserts that biosecurity in the BWC context should primarily address the nonproliferation of dual-use research and biological matter. According to this perspective, benefits such as surveillance and vaccine development may not be directly related to biosecurity.
The emphasis is on maintaining a precise and concise approach to prevent the proliferation of research and substances that can potentially be misused. Another aspect of the analysis explores the potential influence of drafters of a scoping paper within the Biological and Toxin Weapons Convention (BTWC) framework.
Concern is raised about how the drafters’ ability to shape the questions in the scoping paper can significantly impact the outcomes. This raises questions about the individuals responsible for drafting these papers and the need to ensure an unbiased and objective process.
While the analysis lacks specific supporting evidence for the second argument regarding biosecurity in the BWC context, it does highlight the importance of considering the potential influence and implications of drafters in the BTWC context. In conclusion, the analysis examines the challenges and uncertainties surrounding the interpretation of biosecurity within the BWC context.
It presents two contrasting perspectives, one stressing the need for clarity and specificity in defining biosecurity, and the other raising concerns about potential bias in drafting crucial documents. Further research and discussions are required to establish a comprehensive and consensus-driven understanding of biosecurity within both the BWC and BTWC contexts.
KB
Kavita Berger
Speech speed
143 words per minute
Speech length
3614 words
Speech time
1514 secs
Arguments
Data inputs for AI in biotechnology are not always reliable or verifiable.
Supporting facts:
- Biological data to feed AI models is often biased, not reproducible, and comes with uncertainty.
- Protein databases, which are cited as examples of good data, are extremely well-curated in comparison to others, like genomic databases.
Topics: AI, Biotechnology, Biosecurity, Data Reliability
Translating AI computational models into physical substances in biotechnology requires a specific set of skills.
Supporting facts:
- Moving from the design and modeling stage using computational power to creating a physical biological agent or device requires expertise beyond computer science and computational biology.
Topics: AI, Biotechnology, Skills, Data Science
AI in biotechnology is seen as a tool that can lower the barriers for accessing biological agents for harmful purposes.
Supporting facts:
- There are concerns that AI can facilitate the design of such agents for malicious intents.
Topics: AI, Biotechnology, Biosecurity, Biological Agents
AI could have potential benefits in biosecurity.
Supporting facts:
- AI could be used to identify malicious sequences sought by harmful users or to spot unusual activity in publicly available information.
Topics: AI, Biosecurity, Biological Agents
AI is being used for various biological problems, from environmental sustainability to human health and synthetic biology.
Supporting facts:
- Kavita stated that AI is being used to address multiple problems in the biosciences.
Topics: Artificial Intelligence, Biological Problems, Environmental Sustainability, Human Health, Synthetic Biology
There are challenges with data, interpretation, understanding correlation and causation in AI’s application in the biosciences.
Supporting facts:
- Kavita pointed out the current challenges in applying AI in the biosciences including data management and interpretation.
Topics: Artificial Intelligence, Data Challenges, Correlation and Causation
AI and biotech are becoming more intertwined, with increasing interest in understanding causal relationships in addition to statistical analyses
Supporting facts:
- The types of AI models used depend on the data available
- Different types of data include text, large language models, sentiment analysis, and image data
- Interest is growing in understanding cause and effect relationships
Topics: Artificial Intelligence, Biotechnology, Data Analysis
Lab automation is being combined with AI tools
Supporting facts:
- AI tools are being married with lab automation
- This falls into the realm of biotechnology advances
Topics: Artificial Intelligence, Laboratory Automation, Biotechnology
The need to generate high quality, curated, interoperable data for predictive models is recognized
Supporting facts:
- For AI to work effectively in predicting, quality data is critical
- The data also needs to be interoperable, that is, it can be used in various systems or organizations
Topics: Artificial Intelligence, Data Analysis, Predictive Modeling
Broader community of scientists or security experts often focus on either the risks or the benefits.
Supporting facts:
- Discussions often highlight risks or benefits, depending on what is more familiar or agreeable
Topics: Security, Data Management, AI, Risk Assessment
There is a lack of good benefit-assessment tools, and benefit assessments are not integrated into our conversations.
Supporting facts:
- Benefit assessment conversations are often neglected and decision-making is subject to assumptions, perspectives, and biases
Topics: AI, Security, Data Management
Risks and benefits may well be equally tangible, or equally intangible.
Supporting facts:
- Lack of objective approach in evaluating risks and benefits
Topics: AI, Security, Data Management
Potential to reduce risk sufficiently to ensure that society reaps the benefits.
Supporting facts:
- Benefits of AI are promised to be incredibly widespread across a number of different sectors
Topics: AI, Security, Risk Assessment
Numerous ethical guidelines and norms exist for the use of AI in healthcare and other fields
Supporting facts:
- A lot of activity is taking place in developing norms for the use of AI, with various acronyms being used, such as FAIR and CARE
- Several hundreds of ethical codes exist for the use of AI in health-related settings
Topics: AI, Healthcare, Ethics
Research is being done to identify patterns, hotspots of violence and hate and other sorts of things which suggests the possibility of something similar in relation to biological agents.
Supporting facts:
- Research is used to identify patterns and hotspots of violence and hate.
Topics: research, violence, hate, biological agents
Assessments of dual use or gain of function are subjective and might vary depending on whether you’re in a crisis situation or not.
Supporting facts:
- Subjectivity affects assessments of dual use or gain of function.
Topics: dual use, gain of function, subjectivity, crisis situations
There is a loose and sometimes imprecise use of the terms like ‘gain of function’ and ‘large scale surveillance studies’.
Supporting facts:
- Terms like ‘gain of function’ and ‘large scale surveillance studies’ are used loosely.
Topics: gain of function, large scale surveillance studies, terminology
Quality and type of data input into AI systems influences the output
Supporting facts:
- Output depends on the input information fed into the AI systems and algorithms
- high-quality, highly curated data can help in identifying alternate protein designs
Topics: Artificial Intelligence, Data Curation, Data Quality
Possibility of incorrect or outdated information in open-data models can affect output
Supporting facts:
- Much of the information is in public domain and can be accessed by these AI models
- There is a risk of pulling in incorrect or old information
Topics: Artificial Intelligence, Open Data, Data Quality
AI as an enabling technology
Supporting facts:
- AI can lower the barriers to design, to development.
- Using AI in life sciences can lead to advancements in health, agriculture, environmental science.
Topics: AI, Life sciences, Risk management
Need for detailed risk assessment and expected consequences
Supporting facts:
- Important to understand the risk that these tools might lower the barriers to design, to development, as well as potential unforeseen risks.
Topics: AI, Risk management, Security
Importance of understanding the potential benefits to BWC (Biological Weapons Convention)
Supporting facts:
- Her suggestion indicates the need to use available data in the AI to identify potential benefits to the BWC.
Topics: AI, BWC, Life sciences
Report
The use of artificial intelligence (AI) in biotechnology presents a range of challenges and opportunities. One key concern is the reliability and verifiability of data inputs for AI models in biotechnology. It has been observed that biological data used to feed AI models is often biased, not reproducible, and comes with uncertainty.
For example, protein databases are well-curated and considered examples of good data, while genomic databases are less well-curated in comparison. Another challenge is the need for specific skills beyond computer science and computational biology when translating AI computational models into physical substances in biotechnology.
Moving from the design and modeling stage to creating a physical biological agent or device requires expertise in fields beyond AI, highlighting the interdisciplinary nature of this field. The use of predictive AI in biotechnology is still in its early stages.
The field currently lacks a comprehensive understanding of the cause and effect of different molecules and their traits. While there are known models for specific uses, such as digital twins, more complex systems still require further exploration. AI also has implications in terms of biosecurity.
Concerns have been raised about the potential for AI to facilitate the design of biological agents for malicious purposes, lowering the barriers for accessing harmful biological agents. However, AI can also have potential benefits in biosecurity, such as identifying malicious sequences or spotting unusual activity in publicly available information.
In the field of biotechnology, AI is being applied to address various biological problems, ranging from environmental sustainability to human health and synthetic biology. This suggests that AI has the potential to contribute to advancements in these areas. However, there are challenges in applying AI in the biosciences.
Data management and interpretation pose significant obstacles. The quality and type of data input into AI systems influence the output. Ensuring high-quality, curated, and interoperable data becomes crucial in generating accurate results. Additionally, understanding correlation and causation in the context of AI’s application in the biosciences remains a challenge.
In terms of ethics, numerous ethical guidelines and norms exist for the use of AI in healthcare and other fields. Efforts are being made to reduce bias and prevent the retrieval of unsafe or harmful information by developing algorithms and tools that promote more ethical AI practices.
Despite these efforts, biases in life science data still pose a challenge as data can be biased due to the questions asked, model systems used, cohorts, and measurement parameters. AI in the context of bioweapons raises concerns about potential risks.
AI could potentially facilitate the design and development of biological agents with negative implications. However, it is also important to consider the potential benefits of AI in improving biosecurity and identifying potential threats. Additionally, the opportunity costs of not investing in AI in life sciences should be taken into account.
Neglecting AI in this field could hinder progress and advancements in various sectors that are traditionally considered areas of dual-use research of concern. The importance of adapting existing systems to keep pace with new technologies is highlighted. Rather than reinventing the wheel, efforts should be focused on leveraging and adapting existing systems to incorporate AI and other emerging technologies effectively.
In conclusion, the use of AI in biotechnology presents a range of challenges and opportunities. Concerns related to data reliability and verifiability, the need for specific skills, limited understanding of cause and effect, biosecurity risks, biases in data, and ethical considerations require thoughtful attention.
Efforts must be made to address these challenges effectively while embracing the potential benefits that AI can offer in advancing various aspects of the biosciences.
LG
Ljupco Gjorgjinski
Speech speed
171 words per minute
Speech length
818 words
Speech time
288 secs
Arguments
Article 10 of the Bioweapons Convention talks about international cooperation, important for knowledge sharing.
Topics: AI, Biosecurity, Bioweapons Convention
Biological Weapons Convention needs additional structures
Supporting facts:
- Biological Weapons Convention was the first to ban a whole category of weapons
- There are key blocks missing that other conventions have, such as a science and technology advisory body
Topics: Biological Weapons, International Relations
Considering possibility of open-ended body and temporary working groups to explore novel technology.
Supporting facts:
- Discussion in the summer's working group meeting about the possibility of temporary working groups and an open-ended body of unlimited size.
Topics: S&T mechanism, BWC, temporary working groups, open-ended body, novel technology
Importance of specific technical expertise for discussion of specific issues.
Supporting facts:
- The current discussion requires not just the general expertise of life scientists, but also specific technical expertise.
Topics: specific technical expertise, BWC, specific issues
Report
The discussion focused on the importance of international cooperation and knowledge sharing in relation to the Bioweapons Convention, specifically Article 10. This article emphasizes the role of international cooperation in facilitating knowledge sharing, which is crucial for understanding and preparedness against biological threats.
Participants highlighted the need for additional structures to strengthen the convention, as it currently lacks certain elements found in other agreements. For example, the absence of a science and technology advisory body hinders the effective addressing of emerging challenges. Confidence-building measures were advocated as a means to strengthen the convention and promote transparency, trust, and cooperation among member states.
The discussion also explored the possibility of establishing an open-ended body and temporary working groups to examine the impact of novel technologies. It was suggested that the convention should focus on principles and general frameworks rather than specific details to ensure flexibility in response to different situations.
Furthermore, specific technical expertise was deemed important for discussions on specific issues, requiring more than a general understanding of life sciences. In conclusion, the speakers emphasized the significance of international cooperation, knowledge sharing, and the need for additional structures to strengthen the Biological Weapons Convention.
Implementation of confidence-building measures, exploration of new collaborative mechanisms, focus on principles, and inclusion of specific technical expertise were highlighted as critical aspects in effectively addressing biological threats.
LO
Luis Ochoa Carerra
Speech speed
188 words per minute
Speech length
312 words
Speech time
100 secs
Arguments
AI technology is evolving and its risks are rising
Supporting facts:
- With each discovery, the risk associated with using that discovery rises.
Topics: AI technology, Risk management
The time to develop regulation mechanism for AI technology is now
Supporting facts:
- We cannot stop the evolution of these technologies.
- As long as we continue using this technology, we need opportunities to amend these regulations.
Topics: AI regulation, Timeline
Everyone needs to take action for AI regulation
Supporting facts:
- The actors will need to be each one of us.
- The answer will involve everyone, each one of the countries.
Topics: AI Regulation, Stakeholders
Report
The analysis of the arguments on AI technology and regulation reveals several key points. Firstly, it is evident that AI technology is rapidly evolving, and with each new discovery, the associated risks of using this technology are also increasing. This highlights the need for effective risk management strategies to mitigate these risks and ensure the safe and responsible use of AI technology.
Secondly, the development of regulations specifically tailored to AI technology is crucial. The continuous advancement of AI necessitates the establishment of robust regulatory mechanisms that can adapt to the evolving landscape. It is essential to recognise that we cannot halt the progress of technology; instead, we must embrace it and ensure that regulations are in place to govern its usage.
This will provide a framework for addressing emerging ethical and legal concerns and allow for necessary amendments to be made as the technology evolves. The arguments put forward also emphasise that the responsibility for AI regulation lies with all stakeholders.
This includes individuals, governments, and countries. All parties must take action to ensure peace, justice, and strong institutions in relation to AI technology. It is crucial to foster collaboration and cooperation, as the development and implementation of AI regulations require global participation.
By involving all stakeholders, we can collectively work towards creating a regulatory framework that is fair, transparent, and aligned with the broader goals of society. In conclusion, the analysis highlights the increasing risks associated with AI technology, the importance of developing tailored regulations to govern its usage, and the need for collective action to address AI regulation.
It reveals the pressing need for effective risk management strategies and the establishment of regulatory mechanisms that can adapt to the fast-paced advancements in AI. Furthermore, it underscores the importance of involving all stakeholders in the process, as their participation is vital for achieving equitable and ethical AI regulation.
By addressing these concerns, we can harness the potential of AI technology while mitigating its risks and ensuring a prosperous future for all.
MF
Masresha Fetene
Speech speed
120 words per minute
Speech length
713 words
Speech time
355 secs
Arguments
The Biological and Toxin Weapons Convention (BWC) needs a scientific advisory body
Supporting facts:
- The BWC lags behind international instruments with global participation governing other weapons of mass destruction, which have benefited for years from the establishment of scientific advisory bodies.
- Many BWC state parties now consider the time ripe for a decision to establish such a scientific advisory body in the BWC.
Topics: BWC, Scientific Advisory Body
The Inter-Academy Partnership (IAP) has a long history of work in the area of biosecurity
Supporting facts:
- IAP has a long history of work in the area of biosecurity, starting with the publication of the IAP Statement of Biosecurity back in 2005.
Topics: Biosecurity, IAP
There is a need to balance representation from high-income and low and middle-income countries.
Supporting facts:
- We have balanced representation of experts from high-income and low-income and middle-income countries.
Topics: Representation, Income disparity, BWC
Report
The Biological and Toxin Weapons Convention (BWC) currently lacks a scientific advisory body, which puts it behind other international instruments. While other instruments governing weapons of mass destruction have long benefited from scientific advisory bodies, the BWC has not established one. However, many state parties of the BWC now believe it is the right time to establish such a body.
A proposed solution is a hybrid structure for the scientific advisory body. This structure involves an open-ended phase one, followed by a more limited group of experts in phase two. During the open-ended phase, various stakeholders would be involved in discussing and refining information.
This would lead to the development of recommendations to be presented at the BWC review conference, which is held every five years. Experts in phase two would further discuss and refine these recommendations before presenting them at the conference. The Inter-Academy Partnership (IAP) has a significant history of work in the field of biosecurity.
Their involvement in establishing a scientific advisory body for the BWC would be valuable due to their expertise and experience. The IAP’s work in this area dates back to the publication of the IAP Statement of Biosecurity in 2005. Balanced representation from both high-income and low-middle-income countries is crucial when forming the scientific advisory body.
By ensuring fair and inclusive representation, perspectives from different regions and economic backgrounds can be considered. This balanced representation is necessary to avoid income disparities and reduce inequalities within the BWC. In conclusion, the establishment of a scientific advisory body would greatly benefit the BWC.
The proposed hybrid structure, involving an open-ended phase and a limited group of experts, offers a potential solution. Additionally, the involvement of the Inter-Academy Partnership, with its history of work in biosecurity, would bring valuable expertise to this endeavor. Furthermore, ensuring balanced representation from high-income and low-middle-income countries would contribute to a fair and inclusive advisory body.
By implementing these measures, the BWC can strengthen its capabilities in addressing biological and toxin weapons.
MB
Maximilian Brackmann
Speech speed
140 words per minute
Speech length
50 words
Speech time
21 secs
Arguments
The group should also consider current limitations and current state of the art in a given technology
Topics: Technology Assessment, State of the Art, Limitations
Report
When evaluating a specific technology, it is vital for the group to consider both the current limitations and the current state of the art. Doing so allows for a more comprehensive assessment, taking into account potential challenges and advancements in the field.
Understanding the limitations of a technology is crucial as it provides insight into its drawbacks and potential issues. This knowledge helps identify areas for improvement and manage risks during implementation or operation. In addition, considering the current state of the art provides a broader perspective on advancements and breakthroughs in the technology.
It helps determine if the technology is keeping up with the latest trends and developments, and if there are opportunities for enhancement or integration with other technologies. By thoroughly considering both the limitations and the state of the art, the group can form a well-rounded assessment of the technology.
This comprehensive evaluation allows for a better understanding of its capabilities, potential impact, and feasibility. In conclusion, when evaluating a technology, it is crucial to consider both its limitations and the current state of the art. This ensures informed decision-making and maximizes the benefits while mitigating the risks associated with the technology.
NS
Nariyoshi Shinomiya
Speech speed
138 words per minute
Speech length
182 words
Speech time
79 secs
Arguments
The use of AI to predict and search for toxin protein sequences
Supporting facts:
- Easy to search for and predict toxin protein sequences using today’s ordinary laptop computers.
- The use of AI is significant in predicting an amino acid sequence that has toxin activity even if it’s different from the currently known amino acid sequence.
Topics: Artificial Intelligence, Protein Sequences, Biotechnology
Advancements in AI can impact discussions on biosecurity measures
Supporting facts:
- The application of AI in generating different amino acid sequences with toxin activity can influence biosecurity discussions.
- The potential uses of AI in computers around us can redefine current biosecurity measures.
Topics: Biosecurity, Artificial Intelligence, Regulation
Report
The use of artificial intelligence (AI) in predicting and searching for toxin protein sequences is a significant advancement in the field of biotechnology. What makes it even more impressive is that these predictions can be made using ordinary laptops, making it accessible to a wider range of researchers.
AI has the ability to predict amino acid sequences that have toxin activity, even if they differ from the currently known sequences. This is a crucial development as it allows scientists to identify potentially harmful proteins more quickly and efficiently.
However, while the use of AI in biotechnology is certainly promising, there is a need for a deeper understanding of its applications and its regulation. The implications of AI in generating toxin protein sequences raise concerns about potential risks. It is important to carefully examine the ethical and safety implications of creating such sequences.
To address these concerns, there is a call for international or national regulation to ensure that the use of AI in this context is carefully monitored and controlled. Furthermore, the advancements in AI have the potential to significantly impact discussions on biosecurity measures.
The ability of AI to generate different amino acid sequences with toxin activity has important implications for biosecurity. This may require a reevaluation of current biosecurity measures to effectively address the new threats that could arise from the use of AI.
In conclusion, the use of AI in predicting and searching for toxin protein sequences is a significant development in biotechnology. It has the potential to greatly enhance our ability to identify harmful proteins and improve the safety of various applications.
However, it is important to have a deeper understanding of AI applications in biotechnology and to implement adequate regulation to address potential risks. Additionally, the impact of AI on biosecurity measures is an important consideration that should not be overlooked.
Overall, further research and careful regulation are necessary to fully grasp the potential of AI in this field and ensure its responsible and safe use.
NA
Nisreen Al-Hmoud
Speech speed
144 words per minute
Speech length
261 words
Speech time
109 secs
Arguments
Nisreen Al-Hmoud questions whether a specific methodology or criteria exists for carrying out risk assessment and management for AI applications.
Supporting facts:
- Nisreen Al-Hmoud highlights the importance of assessing the risk and benefits of emerging technologies.
Topics: AI applications, Risk assessment, Risk management
AI governance is a major challenge due to its complexity and multi-dimensionality
Supporting facts:
- The complexity of AI lies in the fact that it is hard to find one subject-matter expert who can serve as the key authority for a single defined regulation.
- The code of AI needs to be regulated, as well as the way AI is used.
Topics: AI Technology, AI regulation, AI governance
Report
The analysis emphasises the importance of evaluating the risks and benefits associated with emerging technologies, particularly in relation to AI applications. Nisreen Al-Hmoud raises a pertinent question regarding the lack of a specific methodology or criteria for conducting risk assessment and management in this domain.
She asserts that there is a need for a well-defined approach to effectively assess and manage the potential risks and benefits of AI applications. Furthermore, the complexity of AI technology presents significant challenges for governance. Due to its multidimensional nature, it is challenging to find a single subject matter expert who can serve as a key authority for defining regulations.
The regulation of AI technology encompasses not only the AI code itself but also the manner in which AI is employed. The analysis highlights the necessity for further thoughts and ideas on how to govern AI technology. This suggests that more research and discussion are required to develop effective governance frameworks that can address the complexity and potential risks associated with AI.
In addition, questions have been raised regarding how to define and mitigate risks and non-compliance within the context of AI. Overall, the analysis underscores the critical need for comprehensive risk assessment and management in the field of AI applications. It highlights the necessity of a robust and clearly defined approach to evaluate and regulate the risks and benefits associated with this technology.
The complexity of AI technology and the challenges it poses for governance further emphasise the importance of ongoing research and discussion in this area.
PM
Peter McGrath
Speech speed
149 words per minute
Speech length
5778 words
Speech time
2323 secs
Arguments
Peter McGrath welcomes participants to the meeting on proof of concept for a scientific advisory body for the Biological and Toxin Weapons Convention.
Supporting facts:
- The meeting is being held as a two-step process.
- This meeting is being recorded for the purpose of helping the organizing team write their report.
- No parts of the recording will be shared or made public.
Topics: Biological and Toxin Weapons Convention, Scientific advisory body
Peter McGrath introduces IAP, the Inter-Academy Partnership.
Supporting facts:
- IAP is a global network of 150 academies of science, medicine and engineering.
- IAP provides independent expert advice on scientific, technological and health issues.
- IAP has been covering biosecurity and the promotion of responsible research practices since 2005.
Topics: IAP, Inter-Academy Partnership
Peter McGrath is explaining the working procedure of the meeting and the goals for the day
Supporting facts:
- The meeting is the first phase of a project on the proof of concept for a scientific advisory body for the Biological and Toxin Weapons Convention
- Phase two will be held in Trieste in person
- The meeting will discuss the possible benefits and risks of artificial intelligence for global biosecurity in the BWC context
Topics: meeting structure, goals, Chatham House rule, expert discussion, process
AI could be considered as an enabling technology and might not fit the comparison with the non-proliferation treaty
Supporting facts:
- The non-proliferation treaty was against a class of weapons of mass destruction, whereas AI is an enabling technology
Topics: AI, non-proliferation treaty
The private sector is developing AI applications rapidly and has tools that academia may not be able to master
Supporting facts:
- The private sector is developing AI applications rapidly, including in many sectors
- The Private sector has advanced tools that might not be available or mastered by the academia
Topics: AI, Private sector, academic research
Peter McGrath ponders about the changing nature of risks with the advent of AI
Supporting facts:
- Peter asks if AI is moving the goalposts regarding risks and if enough norms and standards are established around AI
Topics: Artificial Intelligence, Risks, AI Hazards
The role of the organizers was to formulate a question on the risks and benefits of AI from the mindset of the state parties.
Topics: Organizers’ role, AI risks and benefits
Report
A meeting is currently taking place to discuss and explore the proof of concept for a scientific advisory body for the Biological and Toxin Weapons Convention (BWC). Peter McGrath welcomes the participants to the meeting, which is being recorded to assist the organizing team in writing their report.
It is important to note that no parts of the recording will be shared or made public. Peter McGrath introduces the Inter-Academy Partnership (IAP), a global network comprising 150 academies of science, medicine, and engineering. The IAP is renowned for providing independent expert advice on various scientific, technological, and health issues.
Since 2005, the IAP has particularly focused on biosecurity and the promotion of responsible research practices. Peter McGrath highlights the positive role played by the IAP in this regard. The meeting is structured as a two-step process, with the current phase being the first.
It is the initial stage of a project examining the proof of concept for a scientific advisory body for the BWC. The second phase of the meeting will take place in Trieste, in person. During the meeting, discussions will revolve around the potential benefits and risks of artificial intelligence in the context of global biosecurity within the BWC framework.
Peter McGrath outlines the working procedure of the meeting and the goals for the day. He emphasizes the need for an open and informative discussion and encourages participants to introduce themselves and use the chat function for commenting. Furthermore, a Q&A function is provided, allowing all participants to respond to questions.
However, strict restrictions are in place to ensure the anonymity of speakers and participants. The meeting also prompts a comparison between artificial intelligence and the non-proliferation treaty, highlighting that AI can be considered as an enabling technology rather than fitting into a direct comparison with the treaty.
Notably, the rapid development of AI applications by the private sector, across various sectors, is acknowledged. The private sector possesses advanced tools and expertise in AI that may not be readily available or mastered by academia or scientific communities. Consideration is given to involving the private sector in AI governance, as they are at the forefront of AI application development and possess unique tools and expertise.
The changing nature of risks with the advent of AI is contemplated by Peter McGrath, who questions whether enough norms and standards have been established around AI to adequately address these risks. Ultimately, the organizers of the meeting sought to formulate the question of the risks and benefits of AI from the mindset of the states parties involved.
In conclusion, the ongoing meeting focuses on the proof of concept for a scientific advisory body for the BWC. The IAP is introduced as a valuable contributor in the field of biosecurity and responsible research practices. The meeting aims to foster an open and informative discussion, exploring the potential benefits and risks associated with the deployment of AI in global biosecurity efforts.
The private sector’s rapid development of AI applications is acknowledged, leading to considerations for the involvement of the private sector in AI governance. The changing landscape of risks with the advent of AI is also brought into question. Finally, the organizers aim to address the risks and benefits of AI from the perspective of state parties engaged in the discussion.
RE
Rajae Ek Aouad
Speech speed
109 words per minute
Speech length
836 words
Speech time
461 secs
Arguments
Rajae Ek Aouad is concerned about the lack of regulation and control over the rapidly growing field of AI and biotech in developing countries
Supporting facts:
- Rajae Ek Aouad mentioned the practice of sharing data with international groups and startups offering products created from these technologies with minimal oversight.
- Rajae Ek Aouad mentioned that there is only one existing law in Morocco for personal data protection, which can be considered as a basis for any regulations around the use of AI.
Topics: Artificial Intelligence, Biotech, Regulation
Rajae Ek Aouad emphasized the need for an ethical consideration in the development and use of AI and biotech.
Supporting facts:
- The Ministry of Industry and Numeric Transition in Morocco are getting involved, possibly in response to these ethical concerns.
- Rajae Ek Aouad mentioned that the Academy of Science and Technology in Morocco is also being tasked to address these ethical issues.
Topics: Ethics, Artificial Intelligence, Biotech
Rajae Ek Aouad worries that without proper rules and safeguards, the misuse of AI and biotech, particularly in clinical trials could harm the population.
Supporting facts:
- Rajae Ek Aouad specifically mentioned the upcoming partnership with a US university using Biovia and Medidata, companies specialising in AI for drug development and virtual clinical trials.
- The public expressed concerns about reduced fertility and the development of harmful substances through new vaccines developed with AI.
Topics: Harm, Misuse, Artificial Intelligence, Biotech, Clinical Trials
The private sector is developing AI applications rapidly, possessing appropriate tools that universities and research communities lag behind in mastering
Topics: Artificial Intelligence, Private Sector, Universities, Research Communities
Involving the private sector in AI governance, with due diligence, can reap benefits and reduce risk
Topics: AI Governance, Private Sector, Risk reduction
Report
Rajae Ek Aouad, an expert, has expressed concern about the lack of regulation and oversight in the rapidly growing field of artificial intelligence (AI) and biotechnology in developing countries. Aouad emphasizes the need for ethical considerations in the development and use of AI and biotech.
In response, the Ministry of Industry and Numeric Transition in Morocco has taken on the responsibility of addressing ethical issues. Additionally, Aouad highlights the potential harm that could result from the misuse of AI and biotech, particularly in clinical trials.
Aouad mentions a partnership between a US university and the AI-focused companies Biovia and Medidata, which specialize in drug development and virtual clinical trials. This partnership has raised concerns among the public regarding decreased fertility and the creation of harmful substances in vaccines generated through AI.
Aouad argues that without proper regulations and safeguards, the population could experience negative consequences. Furthermore, a discrepancy is noted between the rapid progress of the private sector in AI applications and the slower advancement within universities and research communities. This observation highlights the importance of involving the private sector in AI governance, with appropriate precautions, to maximize benefits and reduce risks.
In conclusion, Aouad emphasizes the need for regulatory frameworks and ethical considerations in AI and biotech development in developing countries. The involvement of the Ministry of Industry and Numeric Transition and the Academy of Science and Technology in addressing ethical concerns demonstrates a recognition of the importance of responsible development and implementation of these technologies.
The concerns raised about potential harm from the misuse of AI and biotech in clinical trials underscore the necessity for stronger regulations and safeguards. Lastly, the recommendation to involve the rapidly advancing private sector in AI governance, implemented with due diligence, can lead to positive outcomes and risk reduction.
TR
Teresa Rinaldi
Speech speed
130 words per minute
Speech length
264 words
Speech time
122 secs
Arguments
The importance of prevention in the field of data mining and pharmacogenomics
Supporting facts:
- Many drugs in development are based on data mining
- Many toxins are being identified and created via in silico analysis
Topics: data mining, pharmacogenomics, artificial intelligence, toxins, protein folding
Report
During the discussion, the speakers emphasized the significance of prevention in the fields of data mining, pharmacogenomics, and toxin development. They pointed out that many drugs currently in the development stage are based on data mining, highlighting the crucial role this practice plays in the advancement of drug research and discovery.
Furthermore, they discussed how in silico analysis is being used to identify and create toxins, emphasizing the potential dangers associated with these substances. The first speaker presented a negative sentiment and argued for the importance of focusing on prevention in data mining and pharmacogenomics.
They highlighted the fact that many drugs in development rely on data mining techniques, underscoring the need for careful consideration of potential risks and adverse effects. The second speaker took a positive stance, asserting that education and awareness are key in preventing harm caused by toxic substances developed through data mining.
They stressed the importance of educating students and scientists about the connections between chemistry and biology in this context. To support their argument, the second speaker mentioned the lack of adherence to guidelines by professionals in the field, suggesting that this could contribute to the potential harm caused by toxic substances.
They emphasized the need for education and awareness campaigns aimed at these professionals as a means of mitigating risks and promoting responsible practices. In conclusion, both speakers agreed on the significance of prevention in the fields of data mining, pharmacogenomics, and toxin development.
They highlighted the use of data mining in drug development and the identification of toxins through in silico analysis. Additionally, they stressed the importance of education and awareness in preventing harm caused by toxic substances created via data mining. The speakers provided compelling arguments and evidence, reinforcing the need for risk assessment, responsible practices, and adherence to guidelines in these fields.
TM
Tshilidzi Marwala
Speech speed
138 words per minute
Speech length
2505 words
Speech time
1087 secs
Arguments
AI is a technique for making machines act like humans, and it is classified into three types.
Supporting facts:
- AI depends on data and computational power
- AI systems can predict, classify and generate data.
Topics: Artificial Intelligence, Machine Learning, Fuzzy based AI systems, Computational Intelligence
The quality and availability of data is uneven across the globe, which impacts the effectiveness of AI.
Supporting facts:
- Quality of data varies depending on location, more data is available in the Global North
- Data scarcity in the Global South impacts AI’s performances
Topics: Data insufficiency, Global North, Global South
AI has potential positive and negative impacts, thus requiring a proper governance model.
Supporting facts:
- The UN’s High-Level Advisory Board on Artificial Intelligence is exploring the potential benefits and harms of AI, and the proper governance model
Topics: AI Applications, AI Governance
AI has wide-ranging applications in various fields like agriculture, biotechnology, vaccine development, and wastewater treatment.
Supporting facts:
- AI applications vary widely across sectors
Topics: AI, Agriculture, Biotechnology, Wastewater Treatment, Vaccine development
There is an issue of data asymmetry, with limited data being collected from certain regions like the African continent which could have negative consequences for AI development and application.
Supporting facts:
- Not as much data has been collected in the African continent
Topics: AI, Data Asymmetry, Data Collection, African Continent
The potential use or weaponization of AI technology in biological processes necessitates governance similar to nuclear technologies.
Supporting facts:
- This technology is easily accessible; all the important algorithms can be downloaded and computational power can be bought.
Topics: AI, Governance, Weaponization, Biological Processes
AI algorithms are correlation machines, not causality machines
Supporting facts:
- The structure of AI algorithms gives a false sense of causality as they are designed with input-output relationships
- Correlation is a necessary but not sufficient condition for causality
Topics: AI algorithms, Deep Learning, Causality, Correlation
AI progress is majorly due to more powerful computers and data abundance
Supporting facts:
- AI algorithms popular today were already perfected in the 80s
- The increase in computational power and more data has enabled the building of bigger models
- Moore’s law, the observation that computational power doubles roughly every two years, anticipates more powerful machines in the future
Topics: AI progress, Data collection, Computational power, Moore’s law
Most of the risks of AI technologies are already identified
Supporting facts:
- The approaches to AI taken by 67 countries identify the risks
- European Union and African Union have their own policy around AI.
Topics: AI, Risks
The missing link is the mechanisms to mitigate the risks.
Supporting facts:
- In game theory, there is a branch known as reverse game theory that centres on the concept of mechanism design
Topics: AI, Mechanism Design, Risk mitigation
Report
AI, or Artificial Intelligence, is classified into three types and has the ability to predict, classify, and generate data. This showcases its potential to mimic human behaviours and decision-making processes. AI also finds applications in various fields like agriculture, biotechnology, vaccine development, and wastewater treatment, thereby addressing global challenges related to food security, health, and environmental sustainability.
However, there are challenges associated with AI, such as the uneven quality and availability of data across different regions, impacting its performance and effectiveness. Another challenge is the balance between interpretability and accuracy in machine learning models. Governance is crucial, especially in the context of potential weaponization of AI and biotechnology.
The distinction between correlation and causality in AI algorithms also needs to be understood. AI progress is driven by advancements in computational power and data abundance. Handling unstructured data remains a challenge, but future developments in computational capabilities are expected to improve it.
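To illustrate the correlation-versus-causality distinction concretely, the following minimal sketch (an illustration added here, not code presented in the session) shows a standard input-output model scoring highly when two variables merely share a hidden common cause, even though neither causes the other:

```python
# Minimal illustrative sketch: a learned input-output model captures
# correlation, not causation. Both x and y are driven by a hidden
# confounder z; x has no causal effect on y, yet a fitted model
# predicts y from x with high accuracy.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
z = rng.normal(size=5000)              # hidden common cause
x = z + 0.1 * rng.normal(size=5000)    # observed "input", driven by z
y = z + 0.1 * rng.normal(size=5000)    # observed "output", also driven by z

model = LinearRegression().fit(x.reshape(-1, 1), y)
print(f"R^2 = {model.score(x.reshape(-1, 1), y):.2f}")   # close to 1.0

# Intervening on x (setting it independently of z) would not change y,
# because the data-generating process links the two only through z.
```

On the computational-power point, the cited doubling every two years would imply roughly a 2^(10/2) ≈ 32-fold increase over a decade, consistent with the expectation of considerably more powerful machines ahead.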
Incentivizing responsible behaviour in AI system design is important. A comprehensive approach is required to harness the potential of AI while mitigating its risks.
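As a brief illustration of the mechanism-design concept mentioned above (a standard textbook example, not one given in the session), the second-price auction shows how rules can be written so that honest behaviour is never worse than misreporting:

```python
# Standard textbook mechanism-design example: in a second-price (Vickrey)
# auction the winner pays the second-highest bid, which makes truthful
# bidding a dominant strategy.
import random

def payoff(my_bid, my_value, other_bids):
    """Payoff for one bidder: win with the highest bid, then pay the second price."""
    highest_other = max(other_bids)
    if my_bid > highest_other:
        return my_value - highest_other   # pays the runner-up's bid, not their own
    return 0.0

random.seed(0)
for _ in range(1000):
    value = random.uniform(0, 100)
    others = [random.uniform(0, 100) for _ in range(3)]
    truthful = payoff(value, value, others)        # bid exactly what it is worth
    shaded = payoff(0.8 * value, value, others)    # under-bid
    inflated = payoff(1.2 * value, value, others)  # over-bid
    assert truthful >= shaded and truthful >= inflated
```

The same design logic, writing the rules so that the desired behaviour is also the individually rational one, is what the report identifies as the missing mechanism for mitigating AI risks.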
UJ
Una Jakob
Speech speed
192 words per minute
Speech length
967 words
Speech time
302 secs
Arguments
The question of norms is crucial within the context of AI and biotechnology, particularly in terms of behavioural standards for ethical guidelines.
Supporting facts:
- Discussed the importance of integrating this thinking into biological disarmament and norms
- Mentioned the idea of developing a specific code of ethics for the nexus of AI and biotechnology
Topics: Artificial Intelligence, Biotechnology, Ethics
Clarification of the term ‘biosecurity’ in the context of BWC is important before considering risks and benefits.
Supporting facts:
- Biosecurity in the BWC context used to mean the prevention of unauthorized access to labs, pathogens, biological materials, etc.
- Outside the BWC, the term ‘biosecurity’ has come to mean something different.
- Having a broader meaning brings in issues such as disease surveillance and pandemic preparedness, which are important but not central to the BWC.
Topics: Biosecurity, BWC context, Discussion on risks and benefits
In the next round of the smaller body, it would be beneficial to approach the discussion from the BWC and narrow down the technologies, focusing on specific parts of the technology.
Supporting facts:
- Approaching from the BWC relates the discussion to particular technologies and their benefits, which allows more specific questions to be defined.
- This approach can make the situation more specific and tailored towards the context of the BWC as opposed to general risks and benefits of technologies.
Topics: BWC, Technology
Una Jakob emphasizes taking the provisions of the BWC treaty into account when discussing AI benefits and risks
Supporting facts:
- Discussions on AI should include how it benefits BWC’s verification process, aids in assistance under Article 7, and impacts non-proliferation under Article 3
Topics: AI in verification, Assistance under Article 7, Non-proliferation under Article 3
Report
The discussions highlighted the importance of integrating ethical norms into the fields of artificial intelligence (AI) and biotechnology, with a specific focus on developing behavioural standards for ethical guidelines. It was argued that establishing these standards is crucial to ensure responsible practices and prevent any potential misuse or harm in these domains.
In particular, there were suggestions for creating specific behavioural standards for the intersection of AI and biotechnology. Furthermore, there was an emphasis on integrating this ethical thinking into the Biological Weapons Convention (BWC). It was proposed that the BWC should consider incorporating these behavioural standards into its framework, recognizing the potential risks and benefits of AI and biotechnology in relation to biological disarmament and norms.
This integration could help ensure that technological advancements in AI and biotechnology are aligned with ethical principles and international norms. Another important point raised in the discussions was the need to clarify the term “biosecurity” in the context of the BWC.
It was noted that while biosecurity in the BWC context traditionally referred to the prevention of unauthorised access to labs, pathogens, and biological materials, outside of the convention, the term has acquired a broader meaning. This broader meaning includes considerations such as disease surveillance and pandemic preparedness, which may be important but not central to the BWC.
Therefore, it was argued that before evaluating the risks and benefits associated with biosecurity, a clarification of the term in the BWC context is necessary to ensure a common understanding. The perspective of approaching technology discussions from the vantage point of the BWC was deemed beneficial.
This approach would allow for a more specific and tailored focus, allowing participants to delve into the particular technologies and their implications within the context of the BWC. By narrowing down the scope of the discussions to specific elements of the BWC, a more targeted understanding of risks and benefits can be achieved.
The proponents also emphasised the benefits of AI within the BWC framework. It was highlighted that discussions on AI should consider how it can enhance the BWC’s verification process, aid in assistance under Article 7, and impact non-proliferation under Article 3. By focusing on the provisions of the BWC treaty and its specific needs, rather than just the interests or needs of states parties, a more comprehensive assessment of the benefits and risks of AI can be attained.
In conclusion, the discussions underscored the significance of integrating ethical norms into the fields of AI and biotechnology. The development of behavioural standards can ensure responsible practices and prevent any potential misuse or harm. Furthermore, the integration of ethical thinking into the BWC was highlighted as a means to align technological advancements with international norms and promote biological disarmament.
Clearing the ambiguity surrounding the term “biosecurity” in the BWC context and approaching technology discussions from the perspective of the BWC were also deemed important steps. Ultimately, the proponents emphasised that discussions should be guided by the substance of the BWC treaty, prioritising its needs and provisions over the interests of states parties.
XW
Xuxu Wang
Speech speed
124 words per minute
Speech length
491 words
Speech time
237 secs
Arguments
AI can help biosecurity researchers process, analyze, and extract patterns from massive biodata
Supporting facts:
- AI makes processing and analysis of biodata easier and quicker
- Generative AI systems and co-pilots trained on massive datasets are used
Topics: Biosecurity, Artificial Intelligence, Data Analysis
AI offers tools and infrastructure to clean and aggregate data from various sources
Supporting facts:
- AI has the ability to reconcile data from different sources and to provide the foundational data for research on a massive scale
Topics: Artificial Intelligence, Data Aggregation, Data Cleaning
AI enables the use of machine learning and deep learning-based predictive modeling algorithms in biosecurity
Supporting facts:
- AI algorithms have a lower barrier to entry and can be adopted by people with various levels of domain knowledge in biosecurity
Topics: Artificial Intelligence, Machine Learning, Biosecurity
Concern about the misuse of AI in bioweapon attacks.
Supporting facts:
- AI could be used by terrorists or criminals to develop bioweapons.
- They can use AI to predict the best time, place and audience for an attack.
- AI can analyze real-world events data to predict demographics and schedules for events
Topics: AI, Terrorism, Bioweapons
Report
Artificial intelligence (AI) plays a significant role in biosecurity research, offering numerous benefits to researchers in terms of processing, analysing, and extracting patterns from biodata. AI provides tools and infrastructure that make the cleaning and aggregation of data from multiple sources more accessible, thereby facilitating research on a larger scale.
By reconciling data from various sources, AI enables researchers to have foundational data that is essential for conducting comprehensive and accurate biosecurity studies. Moreover, the adoption of AI algorithms in biosecurity expands the possibilities for predictive modelling and deep learning-based analysis.
These algorithms have a lower barrier to entry, allowing individuals with varying levels of domain knowledge in biosecurity to adopt and utilise such tools effectively. Machine learning and deep learning techniques enable researchers to develop predictive models that enhance their understanding of biosecurity threats and devise proactive measures to mitigate risks.
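As a purely illustrative sketch of that low barrier to entry (synthetic data and a hypothetical label, not material from the session), an off-the-shelf predictive model can be trained in a few lines:

```python
# Illustrative only: synthetic features and a hypothetical binary label,
# showing how little code an off-the-shelf predictive model requires.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 20))              # stand-in for numeric features derived from biodata
y = (X[:, :5].sum(axis=1) > 0).astype(int)   # hypothetical risk label, for the sketch only

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print(f"held-out accuracy: {accuracy_score(y_test, model.predict(X_test)):.2f}")
```

The point is not the model itself but that such tooling is broadly accessible, which is precisely why the governance and misuse questions raised next arise.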
However, alongside these positive advancements, concerns arise regarding the potential misuse of AI by terrorists or criminals in the development of bioweapons. AI can be leveraged by malicious actors to identify the optimal time, place, and target audience for an attack.
By analysing real-world events data, AI can predict demographics and schedules, making it a valuable tool for planning and executing bioweapon attacks. Given these risks, it becomes crucial to factor in the potential misuse of AI in bioweapon attacks when formulating policies and regulations.
Given the implications of AI misuse in bioweapon attacks, policymakers and institutions need to include this risk in their scoping and decision-making processes. By factoring in the possibilities that AI and data analytics present in predicting high-impact events for attacks, policymakers can develop regulations and guidelines that address these concerns.
Incorporating guidelines that guard against the misuse of AI technology in the context of bioweapons aligns with the broader objective of promoting peace, justice, and strong institutions (SDG 16). In conclusion, AI’s integration into biosecurity research brings numerous benefits, including improved data processing, analysis, and predictive modelling.
However, the potential misuse of AI in the development of bioweapons raises concerns that need to be addressed. Policymakers should consider the potential risks associated with AI misuse in bioweapon attacks when creating policies and regulations to ensure the responsible use of AI technology.
By doing so, they can promote a safe and secure environment in which AI can continue to contribute positively to biosecurity research and innovation.