JetPack 4.6.3
CUDA 10.2
cuDNN 8.2
TensorRT 8.2
GCC 7.5.0
CMake 3.25.2
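Before building, you can sanity-check the toolchain on the device (a quick sketch; assumes a default JetPack install with /usr/local/cuda/bin on the PATH):
cat /etc/nv_tegra_release           # JetPack/L4T release
nvcc --version                      # CUDA
dpkg -l | grep -E 'cudnn|nvinfer'   # cuDNN and TensorRT packages
gcc --version
cmake --version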
On Jetson, FastDeploy currently supports only three inference backends: ONNX Runtime (CPU), TensorRT (GPU), and Paddle Inference.
First remove the CMake that ships with the system:
sudo apt purge cmake
Add the signing key:
wget -O - https://apt.kitware.com/keys/kitware-archive-latest.asc 2>/dev/null | sudo apt-key add -
Add the repository to your sources list and update.
Stable release:
sudo apt-add-repository 'deb https://apt.kitware.com/ubuntu/ bionic main'
sudo apt-get update
Release candidate (optional):
sudo apt-add-repository 'deb https://apt.kitware.com/ubuntu/ bionic-rc main'
sudo apt-get update
Install the new version:
sudo apt install -y cmake
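Verify that the Kitware build is now the active cmake:
cmake --version   # should now report a recent version (e.g. 3.25.x)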
The build must satisfy the environment requirements listed above.
To integrate the Paddle Inference backend, download the Jetpack C++ package that matches your development environment from the Paddle Inference prebuilt library page, and extract it.
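For example, assuming the downloaded archive is named paddle_inference_jetson.tgz (a placeholder; the real file name depends on the Jetpack package you picked), extract it so the resulting directory matches the PADDLEINFERENCE_DIRECTORY path used below:
sudo mkdir -p /Download
sudo tar -zxvf paddle_inference_jetson.tgz -C /Download   # placeholder archive name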
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy
mkdir build && cd build
# ENABLE_PADDLE_BACKEND and PADDLEINFERENCE_DIRECTORY are optional;
# omit both if the Paddle Inference backend is not needed
cmake .. -DBUILD_ON_JETSON=ON \
         -DENABLE_VISION=ON \
         -DENABLE_PADDLE_BACKEND=ON \
         -DPADDLEINFERENCE_DIRECTORY=/Download/paddle_inference_jetson \
         -DCMAKE_INSTALL_PREFIX=${PWD}/installed_fastdeploy
make -j8
make install
After the build completes, the C++ inference library is generated in the directory specified by CMAKE_INSTALL_PREFIX.
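To make the SDK visible to downstream builds and programs, export its location and library path (a minimal sketch; FASTDEPLOY_INSTALL_DIR is an illustrative variable name, and the bundled third-party libraries may need adding to LD_LIBRARY_PATH as well):
export FASTDEPLOY_INSTALL_DIR=${PWD}/installed_fastdeploy
export LD_LIBRARY_PATH=${FASTDEPLOY_INSTALL_DIR}/lib:${LD_LIBRARY_PATH}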
The Python build must likewise satisfy the environment requirements listed above. Packaging the Python wheel depends on wheel, so run pip install wheel before building.
To integrate the Paddle Inference backend, download the Jetpack C++ package that matches your development environment from the Paddle Inference prebuilt library page, and extract it (as described in the C++ section above).
All build options are passed in via environment variables:
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy/python
export BUILD_ON_JETSON=ON
export ENABLE_VISION=ON
# ENABLE_PADDLE_BACKEND and PADDLEINFERENCE_DIRECTORY are optional
export ENABLE_PADDLE_BACKEND=ON
export PADDLEINFERENCE_DIRECTORY=/Download/paddle_inference_jetson
python setup.py build
python setup.py bdist_wheel
Once the build completes, the wheel package is generated in the FastDeploy/python/dist directory; install it with pip install.
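For example (a minimal smoke test; the wheel file name varies with the version built, and the RuntimeOption method names assume the current FastDeploy Python API):
pip install dist/*.whl
# pick a backend at runtime: use_ort_backend() (CPU), use_trt_backend() (GPU),
# or use_paddle_infer_backend() if built with ENABLE_PADDLE_BACKEND=ON
python -c "import fastdeploy as fd; opt = fd.RuntimeOption(); opt.use_trt_backend()"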
If you change any build options, delete the build and .setuptools-cmake-build subdirectories under FastDeploy/python before rebuilding, to avoid stale build caches, as shown below.
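That is:
cd FastDeploy/python
rm -rf build .setuptools-cmake-build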