
Building an App Image by Connecting to a Bohrium Dev Node via VSCode

Original post by 用户10497140 · Last modified 2025-04-08

Publishing an App or an image always requires a "usable version", and that usable version runs on top of an image the developer has already built.

Developers who are familiar with Docker need no further explanation of how to build and package the key image.

This article offers a convenient way for developers who are not familiar with Docker to package an image.

In the upper right corner of the Bohrium node management page, choose "创建容器节点" (Create Container Node). In this example the image is ubuntu:20.04-py3.10 and the project is the one the node belongs to; the machine type, system disk, and auto-shutdown options can be left at their defaults. After creation it usually takes about a minute for the management node to start. Once the node status changes from "Starting" to "Running", you can log in and use it.

Log in to the newly created Bohrium node over SSH, import your code onto the node, and install the environment dependencies the code needs.

Note that the code and its dependencies must not be placed under /personal, /bohr, or /share: files in these directories are not packaged into the image along with the project.

Bohrium provides Web Shell, a web-based SSH tool, and also supports logging in to the node from a local terminal.

Using local VSCode to connect to the Bohrium node is also recommended; see the guide "How to connect to a Bohrium node with local VSCode".
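
For reference, a minimal connection sketch is shown below; the address and port are placeholders that you should copy from the SSH information on the node detail page, and the same values go into the VSCode Remote-SSH host configuration:

# Placeholder values: copy the real address and port from the node detail page
ssh -p <ssh-port> root@<node-address>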

In addition, ports 50001 to 50005 of the dev machine are reachable from outside and can be used to expose services.

Open local VSCode, connect to the node over SSH, and locate the source code on GitHub.

Download the source code, then create the environment and install the dependencies, as sketched below.
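
A minimal sketch of fetching the code onto the node; the repository URL below is a placeholder, so substitute the actual AptaTrans repository you find on GitHub:

# Placeholder URL: replace <owner> with the GitHub account that hosts AptaTrans
git clone https://github.com/<owner>/AptaTrans.git /root/AptaTrans
cd /root/AptaTrans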

If conda activate complains that conda init has not been run, run conda init first, then open a new terminal and activate the apt environment with conda activate apt.
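
The commands below sketch that workflow; only the environment name apt comes from the original steps, and the Python version is an assumption:

conda create -n apt python=3.10 -y   # Python version is an assumption
conda init bash                      # registers conda in the shell startup files
# Open a new terminal so the initialization takes effect, then:
conda activate apt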

Create /root/AptaTrans/requirements.txt, populate it with the contents shown in the next block, and then install the dependencies:

touch /root/AptaTrans/requirements.txt
# (edit requirements.txt to contain the packages listed below, then:)
pip install -r /root/AptaTrans/requirements.txt
requirements.txt:
torch==2.0.1
tqdm==4.62.1
numpy==1.24.3
scikit-learn==1.2.2
matplotlib
einops
# sqlite3 and pickle are part of the Python standard library and must not be listed for pip
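
As an optional sanity check (not part of the original steps), confirm that PyTorch imports and whether CUDA is visible:

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"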

Run the example notebook examples.ipynb, selecting the newly created apt environment as the kernel.
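
If the apt environment does not appear in the kernel list, registering it with ipykernel usually fixes that (an extra step, not mentioned in the original):

pip install ipykernel
python -m ipykernel install --user --name apt --display-name apt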

Create the Gradio frontend in frontend/apt_gradio.py:

import gradio as gr
import os
import sys
from pathlib import Path

# Add the project root directory to the Python path
project_root = str(Path(__file__).parent.parent)
sys.path.append(project_root)

from aptatrans_pipeline import AptaTransPipeline
import torch

def predict_api_score(aptamer, target):
    try:
        # Initialize pipeline with correct data path
        pipeline = AptaTransPipeline(
            dim=128,
            mult_ff=2,
            n_layers=6,
            n_heads=8,
            dropout=0.1,
            load_best_pt=False,
            load_best_model=True,
            save_name='default',
            device='cuda:0' if torch.cuda.is_available() else 'cpu',
            seed=1004,
            data_dir=os.path.join(project_root, 'data')
        )
        
        # Make prediction
        score = pipeline.inference(aptamer, target)
        return f"Predicted API Score: {score[0][0]:.4f}"
    except Exception as e:
        return f"Error during prediction: {str(e)}"

def recommend_aptamers(target):
    try:
        # Initialize pipeline with correct data path
        pipeline = AptaTransPipeline(
            dim=128,
            mult_ff=2,
            n_layers=6,
            n_heads=8,
            dropout=0.1,
            load_best_pt=False,
            load_best_model=True,
            save_name='default',
            device='cuda:0' if torch.cuda.is_available() else 'cpu',
            seed=1004,
            data_dir=os.path.join(project_root, 'data')
        )
        
        # Get recommendations
        results = pipeline.recommend(target, n_aptamers=5, depth=40, iteration=1000)
        
        # Format output
        output = "Recommended Aptamers:\n\n"
        for idx, result in results.items():
            output += f"Candidate {idx + 1}:\n"
            output += f"Sequence: {result['candidate']}\n"
            output += f"Score: {result['score'].item():.4f}\n\n"
        
        return output
    except Exception as e:
        return f"Error during recommendation: {str(e)}"

# Create Gradio interface
with gr.Blocks(title="AptaTrans") as demo:
    gr.Markdown("# AptaTrans - Aptamer-Protein Interaction Prediction")
    
    with gr.Tab("Predict API Score"):
        gr.Markdown("## Predict Aptamer-Protein Interaction Score")
        with gr.Row():
            with gr.Column():
                aptamer_input = gr.Textbox(label="Aptamer Sequence", placeholder="Enter aptamer sequence...")
                target_input = gr.Textbox(label="Target Protein Sequence", placeholder="Enter target protein sequence...")
                predict_btn = gr.Button("Predict Score")
            with gr.Column():
                output_score = gr.Textbox(label="Prediction Result")
        
        predict_btn.click(
            fn=predict_api_score,
            inputs=[aptamer_input, target_input],
            outputs=output_score
        )
    
    with gr.Tab("Recommend Aptamers"):
        gr.Markdown("## Recommend Candidate Aptamers")
        with gr.Row():
            with gr.Column():
                target_input_recommend = gr.Textbox(label="Target Protein Sequence", placeholder="Enter target protein sequence...")
                recommend_btn = gr.Button("Recommend Aptamers")
            with gr.Column():
                output_recommend = gr.TextArea(
                    label="Recommendation Results",
                    lines=15,  # Set number of visible lines
                    max_lines=50  # Set maximum number of lines
                )
        
        recommend_btn.click(
            fn=recommend_aptamers,
            inputs=target_input_recommend,
            outputs=output_recommend
        )

if __name__ == "__main__":
    demo.launch(share=True)
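
To try the interface, run the script from the repository root (a sketch assuming the layout above); demo.launch(share=True) prints both a local URL and a temporary public share link:

cd /root/AptaTrans
python frontend/apt_gradio.py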

Trial run: paste the example aptamer sequence and target protein sequence below into the interface.

CTGATTTTCCTTCCAGGCACCAC

MAVEGGMKCVKFLLYVLLLAFCACAVGLIAVGVGAQLVLSQTIIQGATPGSLLPVVIIAVGVFLFLVAFVGCCGACKENYCLMITFAIFLSLIMLVEVAAAIAGYVFRDKVMSEFNNNFRQQMENYPKNNHTASILDRMQADFKCCGAANYTDWEKIPSMSKNRVPDSCCINVTVGCGINFNEKAIHKEGCVEKIGGWLRKNVLVVAAAALGIAFVEVLGIVFACCLVKSIRSGYEVM

Open another terminal and run nvidia-smi to check GPU memory usage.
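
For continuous monitoring rather than a one-off snapshot, watch can refresh nvidia-smi periodically (a convenience sketch):

watch -n 2 nvidia-smi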

Wrap the Gradio interface with FastAPI in app.py:

import os
import sys
os.system("pip install -r /root/AptaTrans/requirements.txt")
from pathlib import Path
import gradio as gr
import torch
from fastapi import FastAPI
import uvicorn

# Add the project root directory to the Python path
project_root = str(Path(__file__).parent)
sys.path.append(project_root)

from aptatrans_pipeline import AptaTransPipeline

def predict_api_score(aptamer, target):
    try:
        # Initialize pipeline with correct data path
        pipeline = AptaTransPipeline(
            dim=128,
            mult_ff=2,
            n_layers=6,
            n_heads=8,
            dropout=0.1,
            load_best_pt=False,
            load_best_model=True,
            save_name='default',
            device='cuda:0' if torch.cuda.is_available() else 'cpu',
            seed=1004,
            data_dir=os.path.join(project_root, 'data')
        )
        
        # Make prediction
        score = pipeline.inference(aptamer, target)
        return f"Predicted API Score: {score[0][0]:.4f}"
    except Exception as e:
        return f"Error during prediction: {str(e)}"

def recommend_aptamers(target):
    try:
        # Initialize pipeline with correct data path
        pipeline = AptaTransPipeline(
            dim=128,
            mult_ff=2,
            n_layers=6,
            n_heads=8,
            dropout=0.1,
            load_best_pt=False,
            load_best_model=True,
            save_name='default',
            device='cuda:0' if torch.cuda.is_available() else 'cpu',
            seed=1004,
            data_dir=os.path.join(project_root, 'data')
        )
        
        # Get recommendations
        results = pipeline.recommend(target, n_aptamers=5, depth=40, iteration=1000)
        
        # Format output
        output = "Recommended Aptamers:\n\n"
        for idx, result in results.items():
            output += f"Candidate {idx + 1}:\n"
            output += f"Sequence: {result['candidate']}\n"
            output += f"Score: {result['score'].item():.4f}\n\n"
        
        return output
    except Exception as e:
        return f"Error during recommendation: {str(e)}"

# Create Gradio interface
with gr.Blocks(title="AptaTrans") as demo:
    gr.Markdown("# AptaTrans - Aptamer-Protein Interaction Prediction")
    
    with gr.Tab("Predict API Score"):
        gr.Markdown("## Predict Aptamer-Protein Interaction Score")
        with gr.Row():
            with gr.Column():
                aptamer_input = gr.Textbox(label="Aptamer Sequence", placeholder="Enter aptamer sequence...")
                target_input = gr.Textbox(label="Target Protein Sequence", placeholder="Enter target protein sequence...")
                predict_btn = gr.Button("Predict Score")
            with gr.Column():
                output_score = gr.Textbox(label="Prediction Result")
        
        predict_btn.click(
            fn=predict_api_score,
            inputs=[aptamer_input, target_input],
            outputs=output_score
        )
    
    with gr.Tab("Recommend Aptamers"):
        gr.Markdown("## Recommend Candidate Aptamers")
        with gr.Row():
            with gr.Column():
                target_input_recommend = gr.Textbox(label="Target Protein Sequence", placeholder="Enter target protein sequence...")
                recommend_btn = gr.Button("Recommend Aptamers")
            with gr.Column():
                output_recommend = gr.TextArea(
                    label="Recommendation Results",
                    lines=15,
                    max_lines=50
                )
        
        recommend_btn.click(
            fn=recommend_aptamers,
            inputs=target_input_recommend,
            outputs=output_recommend
        )

# Create FastAPI app
app = FastAPI(title="AptaTrans API")

# Mount Gradio app to FastAPI
app = gr.mount_gradio_app(app, demo, path="/")

if __name__ == "__main__":
    # Run the application on port 50001
    uvicorn.run(app, host="0.0.0.0", port=50001)
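
A sketch of launching and smoke-testing the wrapped app on the node; port 50001 is one of the externally reachable ports mentioned earlier:

cd /root/AptaTrans
python app.py                     # starts uvicorn on 0.0.0.0:50001
# In another terminal on the node:
curl -I http://127.0.0.1:50001/   # the Gradio page should respond with HTTP 200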

CPU, GPU, and memory usage on the node:

Based on the observed GPU memory usage, the recommended configuration is:

  • GPU memory: 16 GB or 32 GB, depending on the complexity of your future projects.
  • CPU cores: 4 to 8, depending on the computational needs of your future projects.

Generate an image from the node

Visit the Bohrium platform at https://bohrium.dp.tech/nodes. On the newly created node, click the icons in the order indicated in the figure below; in the Create Image dialog that pops up, fill in the information and wait for the image to be created:

/app.py

Publish and test

Ideas for future versions

You are welcome to try it out and share your suggestions!

Original statement: This article is published on the Tencent Cloud Developer Community with the author's authorization and may not be reproduced without permission.

For copyright concerns, contact cloudcommunity@tencent.com for removal.
