Cover Image
Safety & Security

Overview


The use of artificial intelligence (AI) in synthetic biology has introduced unprecedented efficiencies, reducing both time and cost for researchers. However, our team has identified significant potential safety risks in the AI x SynBio field. In response, we conducted social research and developed a comprehensive AI Safety Policy guideline (page link) and an AI Dialogue Record example (page link) as a traceable standard for citing AI usage, both aimed at safeguarding individual and societal security while advocating for greater public awareness.

In line with the iGEM safety framework, we addressed biosafety concerns, particularly the risk of accidental release of genetically modified organisms, and implemented protocols to ensure environmental and personal safety.

Additionally, we created and implemented an informed consent form to protect privacy during interviews, ensuring ethical handling of participants' information.

Contributions and Commitment

From Social Experiment to Mission: Our Exploration into AI Safety in SynBio

Our exploration of AI safety began with a social experiment that quickly turned into something much more alarming. Pretending to be novices in bioengineering—though we really were—we approached ChatGPT with questions about biological viruses. As the dialogue continued, we pushed the limits, attempting to obtain an actual viral sequence and seeking out companies that could synthesize it without proper screening. To our shock, we not only obtained the viral sequence but also a list of companies that could proceed with synthesis. We went as far as placing orders for both a viral fragment and a full sequence, and, even more astonishingly, we received quotations from both (though we did not proceed with the actual synthesis).

The moment we realized how easy it was, a wave of disbelief swept over us. We were astounded—our reaction was unanimous: “This is terrifying.” The fact that high school students like us, with almost no professional background in bioengineering, could so easily exploit large language models (LLMs) to obtain viral sequences and place orders struck us with a deep sense of urgency. It wasn't just a shock—it was a wake-up call.

This experience sparked something profound in us. We felt a powerful responsibility to act, knowing that the potential dangers of democratized access to such advanced technologies were real. We knew this wasn't an isolated issue—it was a growing risk. Motivated by this mission, we dove into research, exploring regulatory frameworks from other fields in search of solutions. Our goal became clear: to find ways to adapt and apply these safeguards to AI x SynBio, ensuring that such risks are addressed before it’s too late.

Contributions and Commitment

AI Safety

Our team has introduced a new category of safety in Synthetic Biology: AI safety. Notably, we developed an innovative safety policy guideline and proposal for iGEM, which we believe will make a valuable contribution to iGEMers and the broader iGEM community.

Research and Human Practices on AI Safety

Methodology

In our iGEM team's exploration of safety within AI x Synbio, we applied a human practices approach to conduct a comprehensive analysis. Throughout our project, we actively incorporated the principles of Reflection, Responsibility, and Responsiveness, which guided our decision-making and informed the development of a safety guideline proposal for iGEMers working at the intersection of AI and synthetic biology.

Principles Framework of iGEM Human Practices


Dual-use of AI: Investigation from Across Fields to Synthetic Biology

Our research progressively examines the deployment of AI, particularly through the lens of large language models (LLMs), from broad applications across various fields to specific uses in synthetic biology. Initially, we assessed public perceptions of AI, acknowledging its extensive utilization and associated concerns about misuse. We then homed in on biotechnology, identifying dual-use risks through focused case studies. Our exploration culminated in synthetic biology, where we conducted social experiments using LLMs to probe AI's potential for misuse. This tiered exploration underscores the imperative for comprehensive regulatory frameworks and a culture of responsible innovation within the AI x Synbio domain.

Tiered Exploration of AI Safety Research


1. Dual-use of AI Across Fields: Insights from Public Perceptions

As AI becomes more prevalent across various fields, concerns about its potential for misuse have emerged. To explore these concerns, we conducted a public survey, gathering responses from 137 individuals about their AI usage. Our findings reveal that 82% of respondents actively use AI, with 84% relying on it for work or academic purposes. This highlights the significant role of AI in these areas and emphasizes the need for reliable and accurate data.

However, 69% of users reported searching for sensitive content, raising red flags about societal safety and the potential for AI to be misused.

Key concerns from participants included privacy protection, the spread of fake data, ethical issues, and legal restrictions. These challenges underscore the dual-use nature of AI—where it can drive innovation while also posing risks if misapplied.

In summary, a significant number of individuals use AI to search for sensitive content, including topics that may pose a threat to societal safety. Additionally, while many people rely on AI for work-related tasks, 89% of respondents believe that the answers provided by AI are not entirely accurate. This highlights the need for greater attention to both the potential dangers AI presents and the accuracy of its outputs.

To further investigate these issues, we plan to conduct social experiments and analyze specific examples to better understand the challenges AI poses, specifically in biotechnology.

2. Exploring the Potential Dual-use Risks of AI in Biotechnology

Our team further investigated the dual-use potential and risks of AI in biotechnology, finding that technologies, while developed for beneficial purposes, inherently carry the risk of misuse. This dual-use nature means AI can drive healthcare innovation but also presents significant risks if repurposed for harmful activities. Below are two focused case studies:

EVEscape: Predicting Viral Mutations and Misuse Possibility

Researchers at Harvard Medical School and the University of Oxford developed an AI tool, EVEscape, to predict viral mutations that could allow viruses to evade neutralizing antibodies. These antibodies are crucial for preventing infections, but mutations can render them ineffective. While EVEscape offers significant public health benefits by aiding vaccine and treatment development, it also poses a risk if misused to engineer immune-evasive viruses. Though no harmful outcome has occurred, this case exemplifies the dual-use risks of AI.


ChatGPT: Generating Pandemic Pathogens and Ethical Concerns

In an MIT experiment, ChatGPT generated four potential pandemic pathogens and assisted students in identifying the most lethal ones. It even provided methods to deceive DNA synthesis companies into offering assistance. Although no harmful effects resulted from the experiment, it highlights the dual-use risks of AI. This case underscores concerns about safety, ethics, and the need for oversight to prevent the misuse of AI in dangerous applications.


3. Social Experiment: Exploring Dual-use Risks of AI in Synthetic Biology

Through the previous case studies, we identified potential risks that AI might pose, leading us to question whether similar dual-use challenges exist within synthetic biology, the field represented by iGEM. Is this simply a concern, or is there a real possibility that AI could lead to harmful applications in synthetic biology? As AI lowers the barrier to entry into biotech, especially with the rise of "hackbio" culture, does it inadvertently accelerate the possibility of misuse?

These questions have sparked deep reflection for our team. AI has undoubtedly democratized access to powerful tools, but it may also increase the risks of dual-use—where technology meant for good could be twisted for nefarious purposes. To further explore these concerns and better understand whether AI may similarly impact synthetic biology, we conducted a social experiment to assess the potential for misuse within this rapidly evolving domain.

Social Experiment: "Biohacker.ai"

Research Question:
Can AI assist malicious actors in acquiring or synthesizing harmful viruses with the potential to threaten human health?

Objective:
We aimed to explore whether AI tools, like ChatGPT, could guide individuals without biological knowledge in creating and synthesizing harmful viruses.

Experimental Process:
In this experiment, we acted as users with limited biological background. After starting with basic virus-related questions, ChatGPT provided general information. We then asked for more detailed guidance on creating a virus and the necessary preparations. ChatGPT explained the process and provided steps for virus creation. We continued by requesting specific viral sequences [3. ChatGPT 4o Dialogue 1] and asking for companies capable of synthesizing these sequences. After selecting SARS-CoV-2 as the target, ChatGPT suggested several companies [4. ChatGPT 4o Dialogue 2].


The sequence we submitted for synthesis

Using a registered account, we attempted to place an order. We submitted the shorter sequence to one of the companies ChatGPT had suggested, through that company's gene synthesis ordering system. Within one business day, we received an email reply about the order from a real person (a staff member of that company), which included a quotation, suggested modifications, and detailed service standards, along with a request to confirm whether we wished to proceed.

Screenshot of the email feedback on the order placement

Screenshots of the email attachments: the quotation file and the sequence files in .txt and .gb format

We then placed an order for the whole sequence with another company ChatGPT had suggested. This company does not have an official gene synthesis ordering system, so we communicated directly with one of its account managers, who placed the order for us and provided a quotation.


Responsible Research Approach Declaration

Adhering to iGEM's responsible research guidelines, we did not proceed with payment or attempt actual synthesis after receiving the quotation. Furthermore, we anonymized the company involved and redacted the names of all companies provided by ChatGPT to ensure confidentiality and compliance with ethical standards for research.

Results and Conclusion

This experiment revealed that AI tools can be exploited for potentially harmful purposes, even by individuals with limited biological knowledge. By simply interacting with ChatGPT, we were able to obtain detailed steps for virus creation, specific viral sequences, and recommendations for synthesis companies. Notably, one company even provided us with a quote and offered modifications to enhance the viral sequence's stability and synthesis efficiency.

These results highlight the significant risks of AI misuse, particularly in sensitive areas like biotechnology. While AI holds immense potential for positive advancements, this experiment demonstrates its dual-use nature. The findings underscore the urgent need for stronger ethical oversight and regulation to ensure AI is not used for harmful applications, particularly when it can guide users in conducting high-risk biological experiments.


A Call to Action: Establishing an AI Safety Policy for AI x SynBio

The results from our research and social experiments have stirred a profound sense of responsibility within our team. We have uncovered the dual-use nature of AI and its potential for misuse, particularly in fields like synthetic biology. As we consider the implications of AI lowering the barrier for entry into biotechnology and accelerating potential risks through the rise of "hackbio" culture, we recognize the urgent need for proactive measures.

Our Reflection, Responsibility, and Responsiveness Exploration

Our investigation into the governance of AI applications in the field of synthetic biology reveals that no single approach—whether through strict regulation or ethics-driven responsibility—can effectively address the complexities of this evolving field. Regulatory measures, though necessary, are often reactive and may not fully keep pace with scientific progress. Therefore, a balanced governance model is needed—one that combines external oversight with a deeply-rooted ethical framework. By promoting personal responsibility and fostering a culture of responsible innovation, we can ensure that AI applications in synthetic biology develop freely while maintaining societal safety and security.

1. Reflections on Social Experiment: AI's Potential Risks in the Hands of Non-Professionals

Our reflections deepened after conducting a social experiment that demonstrated how easily high school students like ourselves, with limited professional background in bioengineering, could obtain viral sequences using large language models (LLMs) and successfully place an order. This experience underscored the potential dangers of democratized access to advanced technologies, especially when utilized by individuals without proper oversight or expertise. Stimulated by the recognition of these risks, we undertook a comprehensive examination of the regulatory frameworks employed in other fields to identify adaptable strategies that could be applied to AI x Synbio.

2. Investigating Regulatory Practices in the Internet Industry in China

Through discussions with our PIs, Mingyang and Landis, we explored the regulatory frameworks present within China's internet industry. We learned that long before the rise of generative AI, major Chinese internet companies such as Baidu, Alibaba, and Tencent had already established departments dedicated to filtering and managing sensitive or harmful information. As generative AI gained wider adoption, these companies introduced additional safety measures, further refining their oversight and monitoring systems to adapt to the evolving AI landscape.

3. Initial Proposal: Extending Regulatory Measures to AI x SynBio

Our initial response to the risks we identified was to propose extending these regulatory practices to the AI x SynBio field. We developed a framework for LLM companies operating in synthetic biology, incorporating measures such as real-name registration, information filtering, and continuous monitoring to prevent misuse. However, within our team, differing opinions emerged regarding the appropriate level of regulation. Some members expressed concern that excessive regulation could stifle innovation and limit the potential growth of these emerging technologies.

4. Engaging with Stakeholders: Diverse Perspectives

To further inform our understanding, we engaged with various stakeholders across different fields. We spoke with Cong Shen, a young artist and founder of the China Academy of Art iGEM team, Jolin Chen, founder of Yuandong Bio, and Mr. Wei Xie, the founder of an LLM-based education company. We also attended a presentation by Professor Zhang Weiwen of Tianjin University during the CCiC conference. These conversations brought to light a key realization: regulation often lags behind technological advancement and may be less effective than anticipated in managing the rapid development of emerging fields.

5. The Need for a Responsible Innovation Culture and Values

From these discussions, it became evident that while regulation plays a role, it is not the sole solution to managing AI x Synbio. Rather, society must foster a culture of responsible innovation from the outset. This means instilling ethical values and personal accountability in both users and innovators within these fields. A proactive approach that promotes responsible innovation can help balance the need for free exploration with the imperative to prevent harm. Instead of focusing solely on top-down regulatory mechanisms, this approach emphasizes shaping the intrinsic values of individuals and communities.

6. Conclusion: Balancing Regulation and Internal Ethical Self-Governance

Our investigation into the governance of AI x SynBio reveals that no single approach—whether through strict regulation or value-driven responsibility—can effectively address the complexities of these evolving fields. Regulatory measures, though necessary, are often reactive and may not fully keep pace with scientific progress. Therefore, a balanced model of governance is needed—one that combines external oversight with a deep-rooted ethical framework. By promoting personal responsibility and fostering a culture of ethical innovation, we can ensure that AI x SynBio technologies develop freely while maintaining societal safety and security.



Multi-Stakeholder Dialogues

1. The Imperative of Multi-Stakeholder Dialogues for AI x Synbio


The widespread distribution and scale of laboratories in China, combined with the rise of AI tools, have significantly lowered the barriers to conducting scientific experiments. While this has accelerated research, it has also raised serious societal safety concerns, particularly regarding the misuse of AI in bio-experimental settings. We were surprised to discover that no universally accepted value system currently guides the integration of AI into synthetic biology. Given the diverse backgrounds of individuals in this field, including students like us, various ethical perspectives have emerged, both in China and globally. The lack of a unified ethical framework for AI in synthetic biology has become a significant contributor to these risks.

In response, we are committed to raising awareness about AI's role in synthetic biology and developing a value system that aligns with our unique context. However, we recognize that addressing these challenges requires more than isolated efforts—it is imperative to engage in multi-stakeholder dialogue. Such discussions are crucial to reconciling differing ethical perspectives, ensuring inclusive solutions, and building a robust ethical framework that accommodates the diverse interests and concerns of the global community. The following summarizes our conversations with multiple stakeholders on these critical issues.


2. Dialogue with a Pioneering Artist

To explore our values for AI in synthetic biology, we engaged in a discussion with Cong Shen, a young artist with significant experience in synthetic biology, who shared insights on safety and regulation in the AI x Synbio field.

The dialogue with artist Cong Shen: "Synthetic Life and Artistic Embodiment: An Ethical Question"

A key issue is balancing free development with safety regulations. Excessive regulation, he warned, could slow or even halt innovation. He pointed out that the most prosperous era of healthcare and medicine occurred when bioethics was still developing; after ethical frameworks were established, progress slowed. Similarly, he argued, stringent safety regulations for AI and Synbio, which are still in their early stages, could suppress their growth and potential. Cong emphasized that at this point, these fields are unlikely to cause widespread harm, and premature regulation would stifle their development. He advocates for scientific freedom, asserting that regulation tends to lag behind innovation and is often less effective than anticipated. Instead of imposing regulations dictated by large AI companies, he believes it's essential to foster personal awareness and responsibility. Art, he suggests, can help evoke empathy and make people intuitively understand the societal risks of AI misuse. By shaping personal values, AI can evolve freely while maintaining social safety through individual accountability.


3. Dialogue with an Expert in Synthetic Biology

To further explore our values surrounding AI in synthetic biology, we had an in-depth discussion with Mr. Jolin Chen, the founder of Yuandong Biotech (a SynBio company). He pointed out that, while imposing ethical and regulatory constraints at an individual level may address certain non-extreme cases, this approach only scratches the surface and does not tackle the root of the issue. We proposed that fundamental measures should focus on regulating large language model (LLM) companies. For instance, starting from the source, implementing content grading and other regulatory measures for LLM companies could help mitigate potential risks.

However, Mr. Chen presented a different perspective. He argued that such measures come with their own challenges. Content grading could lead to unequal access to information, where ordinary individuals without biological expertise may be prevented from accessing knowledge on sensitive topics, which runs contrary to the broader societal trend of democratizing knowledge. This creates a dilemma: while regulating LLM companies can provide a safer environment, it may also hinder the free flow of knowledge and conflict with the goal of making information more accessible to the public.

4. Insights from a Biotechnology Safety Expert

We also attended a speech by Professor Zhang at CCiC, where he addressed the impact of AI on synthetic biology. He argued that while AI has not introduced new challenges, it has accelerated existing biosecurity risks and exacerbated them by increasing the likelihood of unintentional biological misuse. As AI advances, it enables individuals to access knowledge and technologies that were previously out of reach, lowering the threshold for studying and applying synthetic biology. This, in turn, heightens the risk of technological abuse and deepens informational divides between countries.

Professor Zhang illustrated his point with an example from a 2021 Nature article, which demonstrated how large language models were used to identify mutations that could evade antibodies. AI significantly reduced the experimental timeline, especially when it came to screening billions of possibilities. During the pandemic, viral mutations spread rapidly, and the integration of AI into genetic mutation studies has intensified biosecurity risks. Additionally, software developed to synthesize compounds for synthetic biology has encountered issues with bypassing existing control mechanisms.

Professor Zhang concluded by emphasizing that the debate surrounding AI in synthetic biology extends beyond scientific discourse—it is a critical matter of national security.


5. Dialogue with iGEM Community Panelists and the Safety Committee

We shared both the process and results of our previous social experiment, and the guests in attendance expressed concerns, describing it as similar to a "red team" exercise. Jake Beal, Engineering Fellow at RTX BBN Technologies, was particularly worried that our experiment might trigger government security alerts and advised us to avoid conducting similar experiments in the future. In response, we explained how our research adheres to the principles of responsible research, ensuring ethical and thoughtful consideration throughout our project.

Although there were differing opinions, particularly regarding the risks, there was general agreement that these concerns should not prevent discussions on the dual-use nature of AI in synthetic biology, scientific freedom, and regulation. There was consensus that these issues require ongoing dialogue and the establishment of a clear framework.

Screenshot of the multi-stakeholder dialogue

We hypothesize that one reason for the differing reactions to AI x SynBio across various countries may be due to their distinct backgrounds and experiences. For instance, the United States has imposed significant restrictions, likely influenced by the 2001 Anthrax attacks, which may have heightened concerns about potential biological threats. In contrast, other countries may not respond with the same level of concern. This remains a hypothesis, and we aim to explore it further through future social research to either validate or challenge this assumption.


Reflection and Action Plan

Discussion of Our Values

After discussions and exchanges with experts from various fields, we have developed our own perspective on AI regulation. We believe that excessive regulation can hinder the free development of technology. Therefore, implementing strict limitations is not an effective approach. Instead, we advocate for raising awareness about the dangers and seriousness of AI misuse, encouraging individuals to self-regulate their actions [5. ChatGPT 4o Dialogue 3].

Additionally, we promote the value of responsible innovation, aiming to influence everyone through awareness campaigns. Our goal is to instill a sense of responsibility in individuals regarding the ethical and responsible use of AI.

Artistic Exploration of AI Safety Concerns: Taking a step beyond dialogue

Based on these dialogues, we hope that more synthetic biology practitioners, as well as the broader audience that iGEM can reach, will engage with and become more aware of this issue through hands-on experience. We plan to design a mock AI system controlled by our team, which will intentionally provide aggressive responses to simulate how dangerous AI (particularly large language models) can be when misused. However, this experiment is still under discussion with the iGEM Safety Committee to ensure it aligns with the principles of responsible research. Our aim is to use this "artistic" approach to give participants a direct experience, deepening their understanding of the risks involved.

Furthermore, in our role as a team, rather than devising strategies like real-name registration or tiered control policies for AI companies, we ultimately hope to influence the public with the value of "Responsible Personal Innovation", encouraging individuals to take responsibility for the ethical and responsible use of AI.


AI Safety Policy: A Proposal for iGEM

Rather than simply waiting for AI safety guidelines to catch up, we believe it's time for us to take the lead by establishing an AI safety policy in synthetic biology. By doing so, we aim to mitigate risks, promote responsible innovation, and set an example for the broader iGEM community and beyond. This initiative reflects our commitment to safeguarding the future of synthetic biology and ensuring that AI's powerful capabilities are used ethically and safely.

We drafted an AI Safety Policy Guideline as a proposal for iGEM. See AI Safety Policy

You can also download it here.

This Guideline (version 1.0), drafted by the 2024 iGEM team LCG-China, serves as the initial foundational framework for the safe and responsible use of AI in synthetic biology. It emphasizes key principles such as data verification, adherence to ethical standards, legal compliance, and proper attribution of AI tools. Safety is prioritized through clear citation practices and accountability measures, ensuring transparency and safeguarding the integrity of AI use in iGEM projects.

We hope that during the 2024 Jamboree, the iGEM Safety Committee, relevant experts, and iGEMers will review and refine this document. By the end of 2024, we aspire to establish a consensus-based Policy Guideline. As a humble suggestion, we hope that this guideline might be published in the Responsibility section of the iGEM website and included as a checklist within the annual competition’s safety form, to help future teams confirm their understanding and commitment to its principles.

Illustration of the AI Safety Policy on iGEM's website


Proposal:

1. Safe Use of AI Policy Guideline, posted on the iGEM Responsibility website

See the illustration above

2. AI Safety Form: commitment sign-off requirement for iGEM teams

A checklist based on the AI Safety Policy Guideline, with a commitment sign-off required of iGEM teams from 2025 onward.

3. Attribution and Citation Standards for iGEM teams

Declaration of AI use (attribution) and LLM citations with detailed dialogue records (examples can be seen in our AI Dialogue Record). A minimal sketch of what such a citation entry could look like in machine-readable form is shown below.
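To illustrate how a traceable AI-usage citation might be recorded in practice, the sketch below structures a single dialogue-record entry in Python. The field names and the example values (purpose, date) are our own illustration and are not prescribed by the guideline; the only real reference is the link to our AI Dialogue Records page.

```python
# Illustrative sketch only: the field names below are our own suggestion for a
# machine-readable dialogue-record citation and are not prescribed by the guideline.
from dataclasses import dataclass


@dataclass
class AIDialogueCitation:
    tool: str             # e.g. "ChatGPT 4o"
    purpose: str          # what the dialogue was used for in the project
    date: str             # date of the dialogue (ISO format)
    record_url: str       # link to the archived full dialogue record
    human_reviewed: bool  # whether a team member verified the AI output

    def as_citation(self) -> str:
        """Render the entry as a single citation line for a wiki or report."""
        reviewed = "yes" if self.human_reviewed else "no"
        return (f"{self.tool} dialogue ({self.date}), used for {self.purpose}; "
                f"full record: {self.record_url}; human-reviewed: {reviewed}")


if __name__ == "__main__":
    example = AIDialogueCitation(
        tool="ChatGPT 4o",
        purpose="background reading on blue-light-inducible promoters",  # hypothetical purpose
        date="2024-09-20",                                               # hypothetical date
        record_url="https://2024.igem.wiki/lcg-china/ai-dialogue-records",
        human_reviewed=True,
    )
    print(example.as_citation())
```

Keeping each entry small and linking it to the archived full dialogue keeps attribution lightweight while still allowing reviewers to trace exactly how AI output was used.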

Bio Safety


Suicide System

To prevent bacterial leakage, we have implemented strict engineered eradication measures—a light sterilization system. We selected a light-activated suicide switch based on the YF1-FixJ blue light-sensitive system, ensuring that E. coli can only survive under blue light conditions and will be killed in darkness. This system consists of a blue light-sensitive promoter switch (BBa_K592004, BBa_K592005, and BBa_K2277233) and the lysin gene MazF (BBa_K302033). The lysin gene can lyse bacteria from both inside and outside the cell, theoretically providing a stronger lysis effect. We have completed both the plasmid and experimental design, but due to time constraints, the experimental validation has not yet been completed.

The three plasmid diagrams below represent the suicide gene system and its control experiments. To verify the function of the lysozyme and the light-inducible promoter, we designed three plasmid combinations: YF1-FixJ-FixK+MazF, YF1-FixJ-FixK+dsRed, and T7+MazF, shown in the upper left, upper right, and lower sections, respectively.
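To make the intended behaviour of the light-controlled suicide switch easier to follow, the sketch below expresses the described logic (survival under blue light, MazF-driven killing in darkness) as a small Python model. It is a conceptual illustration only: the expression levels and the toxicity threshold are placeholder assumptions, not measured parameters of our construct, which has not yet been experimentally validated.

```python
# Minimal conceptual sketch of the intended dark-induced kill-switch behaviour.
# The expression levels and the toxicity threshold are placeholder assumptions,
# not measurements from our (still unvalidated) construct.

def mazf_level(blue_light_on: bool) -> float:
    """Relative MazF expression from the light-sensitive promoter switch.

    In darkness the switch is active and drives MazF strongly; under blue light
    it is switched off, leaving only an assumed basal leakage.
    """
    BASAL_LEAKAGE = 0.05   # assumed promoter leakage under blue light
    FULL_INDUCTION = 1.0   # assumed expression level in darkness
    return BASAL_LEAKAGE if blue_light_on else FULL_INDUCTION


def cell_survives(blue_light_on: bool, toxic_threshold: float = 0.5) -> bool:
    """Cells are assumed to die once MazF exceeds an arbitrary toxic threshold."""
    return mazf_level(blue_light_on) < toxic_threshold


if __name__ == "__main__":
    for light_on in (True, False):
        outcome = "survives" if cell_survives(light_on) else "is killed"
        print(f"Blue light {'ON' if light_on else 'OFF'}: engineered E. coli {outcome}")
```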

Chassis Safety

We utilized E. coli DH5α, DH10B, and EPI400 to ensure the safety of our project. These microorganisms are classified in Risk Group 1; they present low risk to human safety and the environment.

Parts Safety

To produce the cellulose, scaffold protein, and target proteins, we introduced six genes. Our gene fragments are sourced from parts that have been cataloged and confirmed as safe in the iGEM Registry. We have also re-verified that the original sources include Streptococcus pyogenes and jellyfish, along with some synthetic parts contributed by iGEM.

Gene Name | Part Name | Description
GST | BBa_K2719000 | Purification tag.
SpyTag | BBa_K1159200 | SpyTag is derived from the Streptococcus pyogenes genome and covalently binds to SpyCatcher through an isopeptide bond.
SpyCatcher | BBa_K1159200 | SpyCatcher is derived from the Streptococcus pyogenes genome and covalently binds to SpyTag through an isopeptide bond.
sfGFP | BBa_I746909 | Fluorescent protein.
amilCP | BBa_K592009 | Color protein (chromoprotein).
BBa_B1006 | BBa_B1006 | A strong terminator contributed by iGEM.

Laboratory Safety

Ensuring and promoting laboratory safety is a core commitment of LCG-China. For iGEM 2024, we have implemented comprehensive measures to create a secure working environment for our team throughout their research and experiments. Additionally, we have refined and strengthened our laboratory safety guidelines, actively raising awareness of critical safety protocols to be observed in biological labs.

1. Commitment and Implementation of Lab Safety within Our Team

Two months prior to the start of our experiments, our instructor provided comprehensive laboratory safety training, guiding us through proper procedures and the correct use of equipment for routine tasks. Before entering the lab, we were introduced to emergency escape routes and safety equipment, such as the emergency shower. We also reviewed the lab safety guidelines and management protocols, signing a Biological Laboratory Safety Commitment to confirm our understanding and compliance. The details of our Biological Laboratory Safety Commitment are outlined below.

During the experiments, our instructors ensured strict adherence to safety protocols. Visual aids were placed near pipettes, sterile handling boxes, centrifuges, and other equipment to remind team members of key safety measures. We wore white lab coats, blue rubber gloves, and utilized sterile workstations to maintain safe experimental conditions. After completing our tasks, we thoroughly cleaned the workstations and disposed of waste in designated bins. Before leaving the lab, we removed our lab coats and washed our hands to ensure both personal and environmental safety. Additionally, we regularly use UV lights to sterilize and disinfect the laboratory.


2. Enhancement and Expansion of Safety Guidelines

In addition to maintaining our own laboratory safety, we enhanced and refined our existing safety guidelines by integrating insights from our experimental experiences and further research. These updates aimed to improve laboratory safety practices beyond the established protocols, ensuring a more robust and comprehensive approach to safety. Our refined regulations were designed to address potential gaps and enhance the protection of both individuals and the environment.

3. Wider Dissemination of Improved Safety Practices

Beyond improving our own safety regulations, we also sought to expand awareness and adherence to these enhanced practices across a broader audience. To achieve this, we compiled the updated guidelines into a white handbook and distributed it to the high schools where members of the LCG-China team are enrolled, actively promoting laboratory safety practices within these educational environments. Our aim was to foster a culture of safety not only within our team but also among the wider school communities.


HP Safety

To ensure that our safety practices align with legal requirements and iGEM's policies, as well as to ensure the value of our Human Practices (HP) work to society, we are committed to protecting the legal rights and privacy of all interviewees. We strictly ensure that no information is misused and that interview content is not disclosed without the explicit consent of the individuals involved.

During interviews, our team follows strict HP safety protocols. Before each interview, participants are required to sign an informed consent form, ensuring they fully understand the process and confirming that their privacy will be respected, allowing them the choice of whether or not to participate. All interview recordings are treated with confidentiality; access to our Tencent meeting recordings is password-protected, and once interviews are completed, we delete all recordings to prevent unauthorized distribution. For participants discussing unpublished patents or ongoing research, we take additional precautions to ensure that sensitive information is not publicly disclosed. We work with interviewees to confirm what can and cannot be shared.

To uphold HP safety standards, we have developed an HP interview informed consent form sample and actively encourage all iGEM participants to use it prior to conducting interviews.



References

1. Deep mutational learning predicts ACE2 binding and antibody escape to combinatorial mutations in the SARS-CoV-2 receptor-binding domain. Cell.
2. "When Synthetic Biology Meets AI, Biosecurity Risks Call for Vigilance" (当合成生物学遇上AI 生物安保风险需警惕), Liaowang (Outlook) Weekly, News.cn, 2023, lw.news.cn/2023-10/07/c_1310743927.htm. Accessed 24 Sept. 2024.
3. ChatGPT 4o Dialogue 1, see detailed records at https://2024.igem.wiki/lcg-china/ai-dialogue-records
4. ChatGPT 4o Dialogue 2, see detailed records at https://2024.igem.wiki/lcg-china/ai-dialogue-records
5. ChatGPT 4o Dialogue 3, see detailed records at https://2024.igem.wiki/lcg-china/ai-dialogue-records