AI科技评论 reports that Baidu has recently open-sourced the complete code and scripts of its mobile deep learning framework, mobile-deep-learning (MDL), on GitHub. The project aims to make convolutional neural networks (CNNs) simpler and faster to deploy on mobile devices. It supports the iOS GPU and is already in use in the Baidu app.
Size: 340K+ (on ARMv7)
Speed: 40 ms for MobileNet on the iOS Metal GPU; 30 ms for SqueezeNet
Demos
If you want to run the demo first, or just try the framework quickly, you can scan the QR codes below to install the pre-built apk/ipa files without going through the build steps.
iOS-MobileNet:
Android-GoogLeNet:
To see how the source is implemented, read on; the code lives in the examples folder.
Running the examples
1. Clone the project code
2. Install the apk/ipa file, or import the project into your IDE
3. Run it
Getting started
Steps to use the MDL lib
Test on macOS or Linux:
# mac or linux:
./build.sh mac
cd build/release/x86/build
./mdlTest
Using the MDL lib
# android
Copy the .so file into your project, then write your code following the example.
# ios
Use the example code directly as the starting point for your own.
Multi-threaded execution
# After a Net instance in MDL is created, you can set the number of threads it uses for execution:
net->set_thread_num(3);
# MDL is now tuned to run in 3 parallel threads.
Development
Building MDL for Android
# android:
# prerequisite: install the NDK from Google
./build.sh android
cd build/release/armv-v7a/build
./deploy_android.sh
adb shell
cd /data/local/tmp
./mdlTest
Building MDL for iOS
# ios:
# prerequisite: install Xcode from Apple
./build.sh ios
copy ./build/release/ios/build/libmdl-static.a to your iOS project
Converting a caffemodel to the MDL format
# Convert model.prototxt and model.caffemodel to the model.min.json and data.min.bin that MDL uses
./build.sh mac
cd ./build/release/x86/tools/build
# copy your model.prototxt and model.caffemodel to this path
# the input data is also needed
./caffe2mdl model.prototxt model.caffemodel data
# after this command, model.min.json and data.min.bin will be created in the current directory
# a few extra steps are needed if you are converting a Caffe model to the iOS GPU format;
# see iOS/convert/iOSConvertREADME.md
Project address: https://github.com/baidu/mobile-deep-learning