AI Safety Policy

AI Safety Policies (Guideline)

Rather than simply waiting for AI safety guidelines to catch up, we believe it's time for us to take the lead by establishing an AI safety policy in synthetic biology. By doing so, we aim to mitigate risks, promote responsible innovation, and set an example for the broader iGEM community and beyond. This initiative reflects our commitment to safeguarding the future of synthetic biology and ensuring that AI's powerful capabilities are used ethically and safely.

This Guideline (version 1.0), drafted by the 2024 iGEM team LCG-China, serves as the initial foundational framework for the safe and responsible use of AI in synthetic biology. It emphasizes key principles such as data verification, adherence to ethical standards, legal compliance, and proper attribution of AI tools. Safety is prioritized through clear citation practices and accountability measures, ensuring transparency and safeguarding the integrity of AI use in iGEM projects.

We hope that during the 2024 Jamboree, the iGEM Safety Committee, relevant experts, and iGEMers will review and refine this document. By the end of 2024, we aspire to establish a consensus-based Policy Guideline. As a humble suggestion, we hope that this guideline might be published in the Responsibility section of the iGEM website and included as a checklist within the annual competition’s safety form, to help future teams confirm their understanding and commitment to its principles.

Personal Conduct Guidelines

Data Standards

In iGEM projects, participants frequently use AI for data retrieval. They must not use unauthorized or unverified data, and they should trace data provenance to ensure both authenticity and legal compliance.

Ethical Standards

When iGEM participants use AI tools for searches, they may encounter ethical challenges, as AI systems may not fully grasp the complexities of these issues. iGEM teams should therefore refrain from requesting content that violates ethical standards. If an AI-generated response conflicts with ethical norms, participants are encouraged to assess it critically before acting on it.

Legal Compliance

The iGEM project places strong emphasis on safety and legal compliance, advocating for the responsible development of synthetic biology within established legal frameworks. However, because certain legal gaps remain concerning the use of AI, participants must refrain from engaging in any unlawful activities involving AI.

Responsibility and Accountability

When drafting personal papers or developing team wikis for iGEM, AI may be used as a supplementary tool. Proper citation practices (see the Citation Standards section below) must be followed to ensure thorough documentation and accountability should any issues arise.

Personal Conduct Policies in Detail

1. Data Standards

Restrictions:
  • Participants are not permitted to utilize AI to process or analyze biological data without explicit authorization from the data owner, as this may constitute a violation of privacy rights and data protection regulations.
  • The use of data without prior verification of its authenticity is not allowed.
Recommendations:
  • Participants are required to validate all data utilized, maintain a detailed record of the validation process, and submit this documentation to ensure compliance with data security protocols.

2. Ethical Standards

Restrictions:
  • Participants are not allowed to deliberately input false information to manipulate AI into generating unethical outcomes.
  • Participants must avoid using AI to assist in the design of biological systems that contravene ethical principles, such as creating self-replicating organisms that could cause irreversible harm to the environment.
  • The use of AI-assisted technologies for illegal gene-editing experiments in humans, including the modification of embryos to create gene-edited babies, is strictly prohibited.
  • Participants must not use AI to develop viruses for non-research purposes or release them irresponsibly.
Recommendations:
  • Adherence to ethical standards and conducting thoughtful, responsible inquiries are strongly encouraged.

3. Legal Compliance
Restrictions:
  • Participants are strictly prohibited from using AI to develop biological weapons, as such actions are illegal and violate international treaties.
  • Individuals must not independently use AI to develop pharmaceuticals or bring them to market without completing the necessary clinical trials and obtaining regulatory approval.
Recommendations:
  • Strict adherence to all applicable laws and regulations is required, and participants should avoid exploiting any legal loopholes in AI-related activities.

4. Responsibility and Accountability

Restrictions:
  • All large language models used must be properly attributed in the team wiki’s attribution section.
Recommendations:
  • Properly cite any AI tools used in the preparation of personal papers, following the citation format outlined in Citation Standards.

Citation Standards

iGEM participants are required to indicate the names of the large language models used within the wiki, following the format: (AI large language model company name). For example: (OpenAI).

At the end of the document, AI tool citations should follow these formats:

  • APA Format: Example: OpenAI. (2023). ChatGPT (Mar 14 version) [Large language model].
  • MLA Format: Example: “Describe the symbolism of the green light in the book The Great Gatsby by F. Scott Fitzgerald” prompt. ChatGPT, 13 Feb. version, OpenAI, 8 Mar. 2023.

An AI chat record section must be included in the wiki. In it, participants are required to clearly document the details of AI usage, including the time of use, the user, the model employed, and the complete dialogue with the AI.