Rather than simply waiting for AI safety guidelines to catch up, we believe it's time for us to take the lead by establishing an AI safety policy in synthetic biology. By doing so, we aim to mitigate risks, promote responsible innovation, and set an example for the broader iGEM community and beyond. This initiative reflects our commitment to safeguarding the future of synthetic biology and ensuring that AI's powerful capabilities are used ethically and safely.
This Guideline (version 1.0), drafted by the 2024 iGEM team LCG-China, serves as an initial framework for the safe and responsible use of AI in synthetic biology. It emphasizes key principles such as data verification, adherence to ethical standards, legal compliance, and proper attribution of AI tools. Safety is prioritized through clear citation practices and accountability measures, ensuring transparency and safeguarding the integrity of AI use in iGEM projects.
We hope that during the 2024 Jamboree, the iGEM Safety Committee, relevant experts, and iGEMers will review and refine this document. By the end of 2024, we aspire to establish a consensus-based Policy Guideline. As a humble suggestion, we hope that this guideline might be published in the Responsibility section of the iGEM website and included as a checklist within the annual competition’s safety form, to help future teams confirm their understanding and commitment to its principles.
In the iGEM project, participants frequently leverage AI for data retrieval. It is imperative to avoid the use of unauthorized or unverified data. Tracing data provenance is essential to ensuring both authenticity and legal compliance.
When iGEM participants use AI tools for searches, they may encounter ethical challenges, as AI systems may not fully grasp the complexities of such issues. iGEM teams should therefore refrain from seeking content that violates ethical standards. Additionally, if an AI-generated response conflicts with ethical norms, participants are encouraged to critically assess it and make informed decisions about whether and how to use it.
The iGEM project places strong emphasis on safety and legal compliance, advocating for the responsible development of synthetic biology within established legal frameworks. However, certain legal gaps remain concerning the use of AI, and participants must refrain from engaging in any unlawful activities involving AI.
When drafting personal papers or developing team wikis for iGEM, AI may be used as a supplementary tool. It is important to follow proper citation practices (refer to Chapter 2: Citation Standards) to ensure thorough documentation and accountability in case any issues arise.
iGEM participants are required to indicate within the wiki the names of the large language models they used, following the format (AI large language model company name), for example: (OpenAI).
At the end of the document, AI tool citations should follow these formats:
An AI chat record section must be included in the wiki.
Participants are required to clearly document the details of AI usage, including the time of use, the user, the model employed, and the complete dialogue with the AI. This information must be recorded in the AI chat record section of the wiki, as illustrated in the example below.
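For illustration only, one possible entry in the AI chat record section might look like the following. The field names and layout are our suggestion rather than an official iGEM format, and the date, user, and model shown are hypothetical placeholders:

Time of use: 2024-09-15
User: [team member name]
Model: GPT-4 (OpenAI)
Dialogue: [complete transcript of all prompts and AI responses]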