Backend Environment

Hardware Environment

The machine hosting the backend model requires at least 16 GB of GPU memory (VRAM) and 24 GB of free system memory. The recommended minimum configuration is an NVIDIA Tesla T4 GPU.
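If you want to confirm this before deploying, a quick check with PyTorch (assuming torch is already installed) looks like:

import torch

# Print the GPU model and total video memory; the backend needs at least 16 GB.
if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(f"GPU: {props.name}, VRAM: {props.total_memory / 1024**3:.1f} GiB")
else:
    print("No CUDA-capable GPU detected.")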

Conda Installation and Environment Creation

First, install Anaconda and create a virtual environment for model deployment.

wget -c https://repo.anaconda.com/archive/Anaconda3-2024.06-1-Linux-x86_64.sh
chmod +x ./Anaconda3-2024.06-1-Linux-x86_64.sh
./Anaconda3-2024.06-1-Linux-x86_64.sh
# Restart your shell (or run `source ~/.bashrc`) so that conda is on PATH.
conda create -n igem python=3.10
conda activate igem
# Create a working directory for the deployment.
mkdir igem
cd igem

pip Installation

Next, install pip, which is used to install the required Python packages. On CentOS/RHEL systems, pip is available through the EPEL repository:

# Enable the EPEL repository, then install pip.
sudo yum install epel-release
sudo yum install python-pip    # Python 2 pip; only needed on older systems
sudo yum install python3-pip
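Before proceeding, you can verify that python resolves to the igem conda environment rather than the system interpreter:

import sys

# Should print a path inside the igem environment, e.g. .../envs/igem/bin/python
print(sys.executable)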

BioSentVec Installation

mkdir Sent2Vec && cd Sent2Vec
# Download the pre-trained BioSentVec model (a large file, roughly 21 GB).
wget -c https://ftp.ncbi.nlm.nih.gov/pub/lu/Suppl/BioSentVec/BioSentVec_PubMed_MIMICIII-bigram_d700.bin
# Build and install the sent2vec Python package.
git clone https://github.com/epfml/sent2vec.git
cd sent2vec
pip install .
make

After installing sent2vec, you can test whether the installation succeeded with the following sample code:

import sent2vec

# Load the BioSentVec model; adjust the path if you run this outside the Sent2Vec directory.
model = sent2vec.Sent2vecModel()
model.load_model('./BioSentVec_PubMed_MIMICIII-bigram_d700.bin')

# Embed a single sentence and a batch of sentences (700-dimensional vectors).
emb = model.embed_sentence("once upon a time .")
embs = model.embed_sentences(["first sentence .", "another sentence"])
print(embs)

If the installation is successful, you should see output similar to the following:

[[ 0.38158065 -0.11221295 -0.13689151 ... -0.10100605 -0.69602627
   0.2560813 ]
 [ 0.1980965  -0.74697006 -0.3691535  ...  0.11909663 -0.6209392
  -0.27145746]]

Llama3 Installation

Refer to the official GitHub repository for installing Llama3; you need to download the model weights from the Llama3 official website or Hugging Face.

mkdir llama3 && cd llama3
git clone https://github.com/meta-llama/llama3.git
cd llama3
# Install the repository in editable mode, plus transformers and accelerate.
pip install -e .
pip install transformers
pip install accelerate
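As a quick sanity check that the weights and the transformers/accelerate stack work together, you can try generating a few tokens. This is a minimal sketch; the model path meta-llama/Meta-Llama-3-8B-Instruct is an assumption and should be replaced with the location of your downloaded weights:

import torch
import transformers

# Hypothetical model path; point this at your downloaded Llama 3 weights.
model_id = "meta-llama/Meta-Llama-3-8B-Instruct"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.float16},  # float16 fits on a Tesla T4
    device_map="auto",
)
print(pipeline("Biology is", max_new_tokens=20)[0]["generated_text"])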

Model Generation and Loading

Organize the knowledge graph into a JSON file with the following structure:

[
    {
        "name": "BBa_K1585100",
        "catalog": [
            "Promoter"
        ],
        "description": "",
        "twin": [],
        "uses": [],
        "function": "Anderson Promoter with lacI binding site. Promoters of the Anderson Promoter Collection with lacI binding site derived from R_0010 to make them promoters of the Anderson Promoter Collection with lacI binding site derived from R_0010 to make them promoters of the Anderson Promoter Collection with lacI binding site derived from R_0010 to make them. Promoters of the Anderson Promoter Collection with lacI binding site derived from R_0010 to make them IPTG inducible.",
        "environment": "No description of the environment",
        "sequence": "ttgacggctagctcagtcctaggtacagtgctagctactagagtgtgtggaattgtgagcggataacaatttcacaca",
        "short_desc": "Anderson Promoter with lacI binding site"
    },
    ...
]
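Before generating embeddings, it can help to sanity-check the file. A minimal sketch, assuming the file is named part_llama3.json as in the steps below:

import json

# Load the knowledge graph and check that every part has the expected fields.
with open("part_llama3.json", "r", encoding="utf-8") as f:
    parts = json.load(f)

required = {"name", "catalog", "description", "twin", "uses",
            "function", "environment", "sequence", "short_desc"}
for part in parts:
    missing = required - part.keys()
    if missing:
        print(f"{part.get('name', '?')} is missing fields: {missing}")
print(f"Loaded {len(parts)} parts.")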

Then run ./generate_embedding.py to generate the embedding file. Before running it, configure model_path to point to the BioSentVec model file, json_file_path to the knowledge graph file part_llama3.json, and the output path passed to np.save to where the embedding file should be written.
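For reference, the script does roughly the following. This is a simplified sketch, not the actual file; the output name embeddings.npy is only an example, and which field gets embedded is an assumption:

import json
import numpy as np
import sent2vec

model_path = "BioSentVec_PubMed_MIMICIII-bigram_d700.bin"  # BioSentVec model
json_file_path = "part_llama3.json"                        # knowledge graph
output_path = "embeddings.npy"                             # example name for the np.save output

model = sent2vec.Sent2vecModel()
model.load_model(model_path)

with open(json_file_path, "r", encoding="utf-8") as f:
    parts = json.load(f)

# Assumed here: each part's functional description is embedded as a 700-dim vector.
embeddings = model.embed_sentences([part["function"] for part in parts])
np.save(output_path, embeddings)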

When running the backend, configure model_path, embedding_path, and json_path in the constructors of Matcher() and Wrapper() in cond_match_new.py to the locations of the BioSentVec model, the generated embedding file, and part_llama3.json, respectively. Also set model_id to the path of the Llama 3 model.
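Put together, the backend configuration looks roughly like this. The constructor signatures below are illustrative assumptions; check cond_match_new.py for the exact arguments:

# Illustrative sketch -- see cond_match_new.py for the actual signatures.
from cond_match_new import Matcher, Wrapper

matcher = Matcher(
    model_path="Sent2Vec/BioSentVec_PubMed_MIMICIII-bigram_d700.bin",  # BioSentVec model
    embedding_path="embeddings.npy",  # file generated by generate_embedding.py
    json_path="part_llama3.json",     # knowledge graph
)
wrapper = Wrapper(model_id="llama3/llama3")  # path to the Llama 3 model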

Frontend Environment

In script.js, change the const wsaddr on line 4 to your server's address. The frontend is served by nginx; configure the reverse proxy in the server's nginx.conf as follows:

server {
    listen 80;
    server_name [YOUR_DOMAIN_NAME] localhost;
    location / {
        root [YOUR_WEBPAGE_PATH];
        index  index.html;
    }
    location /ws {
        proxy_pass http://[YOUR_BACKEND_IP]:[YOUR_BACKEND_PORT(default 8192)];
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
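After reloading nginx, you can confirm that the WebSocket proxy is reachable. A minimal sketch using the third-party websockets package (pip install websockets); replace the address with your domain or IP:

import asyncio
import websockets

async def check():
    # Connect through the /ws reverse proxy configured above.
    async with websockets.connect("ws://[YOUR_DOMAIN_NAME]/ws") as ws:
        print("WebSocket connection established.")

asyncio.run(check())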

After completing the above deployment, you can use the Prometheus service by opening index.html in a browser or by accessing the frontend via the domain name or IP address.
