The test procedure was as follows:
1. Downloaded the latest paddlelite, compiled and installed it.
2. Python paddlelite with the NPU model vs. the ARM model: using pp-yolo, the ARM model ran successfully, but running paddlelite with the NPU model produced the error above.
3. C++ paddlelite with the NPU model vs. Python paddlelite with the ARM model: using mobilenet_v1, the ARM model ran successfully, but running paddlelite with the NPU model produced the error above.
4. Using CANN ACL inference, loading the Yolov3 om-format model with ACL ran inference successfully; during execution, npu-smi info showed Memory-Usage (MB) rising.
1. In /usr/local/Ascend/ascend-toolkit/latest/atc/lib64 there is no libascend_protobuf.so; create it with "ln -s libascend_protobuf.so.3.13.0.0 libascend_protobuf.so".
2. Installing patchelf via apt leads to a "paddlelite/lite.so: ELF load command alignment not page-aligned" error; patchelf needs to be built and installed manually.
3. wheel and setuptools must be installed and updated beforehand, otherwise "python setup.py bdist_wheel" fails with "error: invalid command 'bdist_wheel'".
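The missing-symlink fix in point 1 can be sketched as follows. This is a minimal demonstration in a scratch directory so the steps are runnable anywhere; on a real machine LIB_DIR is /usr/local/Ascend/ascend-toolkit/latest/atc/lib64 and the versioned .so already exists there.

```shell
# Scratch directory standing in for the real atc/lib64 path.
LIB_DIR=$(mktemp -d)

# Stand-in for the versioned library that the CANN install ships.
touch "$LIB_DIR/libascend_protobuf.so.3.13.0.0"

# Create the unversioned symlink the linker looks for.
ln -s libascend_protobuf.so.3.13.0.0 "$LIB_DIR/libascend_protobuf.so"

ls -l "$LIB_DIR/libascend_protobuf.so"
```

Using a relative link target (as above) keeps the symlink valid even if the whole lib64 directory is moved or bind-mounted elsewhere.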
After compiling the latest paddlelite:
cp -rf build.lite.linux.armv8.gcc/inference_lite_lib.armlinux.armv8.nnadapter/cxx/include/ /Work/PaddleLite-generic-demo/libs/PaddleLite/linux/arm64/include/
cp -rf build.lite.linux.armv8.gcc/inference_lite_lib.armlinux.armv8.nnadapter/cxx/lib/libnnadapter.so /Work/PaddleLite-generic-demo/libs/PaddleLite/linux/arm64/lib/huawei_ascend_npu/
cp -rf build.lite.linux.armv8.gcc/inference_lite_lib.armlinux.armv8.nnadapter/cxx/lib/libhuawei_ascend_npu.so /Work/PaddleLite-generic-demo/libs/PaddleLite/linux/arm64/lib/huawei_ascend_npu/
cp -rf build.lite.linux.armv8.gcc/inference_lite_lib.armlinux.armv8.nnadapter/cxx/lib/libpaddle_full_api_shared.so /Work/PaddleLite-generic-demo/libs/PaddleLite/linux/arm64/lib/
In PaddleLite-generic-demo/image_classification_demo/shell I ran:
./build.sh linux arm64
./run.sh mobilenet_v1_fp32_224 linux arm64 huawei_ascend_npu
In ./run.sh I also echoed NNADAPTER_DEVICE_NAMES, and its value was "huawei_ascend_npu".
Also, the Python version of PaddleLite does not yet support inference on Ascend NPU; there are plans to support it in the near future.
If USE_FULL_API=TRUE is set in build.sh, the executable depends only on libpaddle_full_api_shared.so at runtime.
If you use PaddleLite-generic-demo, you can skip the opt conversion step: rename the model's inference.pdmodel and inference.pdiparams to model and params and place them in a directory xxx, then copy that directory into PaddleLite-generic-demo/image_classification_demo/assets/models (assuming it is a classification model). Finally, in the PaddleLite-generic-demo/image_classification_demo/shell directory, run:
$ ./run.sh xxx linux arm64 huawei_ascend_npu
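The rename-and-copy layout above can be sketched as follows. This uses dummy files in a scratch tree so the steps are visible; on a real setup SRC would hold your exported inference.pdmodel / inference.pdiparams and ROOT would be the unpacked PaddleLite-generic-demo.

```shell
# Scratch tree standing in for the real demo directory and exported model.
ROOT=$(mktemp -d)
SRC=$ROOT/exported_model
mkdir -p "$SRC" "$ROOT/image_classification_demo/assets/models/xxx"
touch "$SRC/inference.pdmodel" "$SRC/inference.pdiparams"

# Rename the model/params files as the demo expects.
cp "$SRC/inference.pdmodel"   "$ROOT/image_classification_demo/assets/models/xxx/model"
cp "$SRC/inference.pdiparams" "$ROOT/image_classification_demo/assets/models/xxx/params"

# Then, on the real tree:
#   cd PaddleLite-generic-demo/image_classification_demo/shell
#   ./run.sh xxx linux arm64 huawei_ascend_npu
ls "$ROOT/image_classification_demo/assets/models/xxx"
```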
error_ssd.log: the Segmentation fault at this location is caused by failing to load the Ascend dynamic libraries, which leads to a core dump. Please check the installation path of your Ascend CANN toolkit, and verify that the environment-variable paths set in the ./run.sh script match the actual installation paths in your environment.
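A hypothetical sanity-check helper (not part of the demo scripts) for that suggestion: verify that the path run.sh exports as HUAWEI_ASCEND_TOOLKIT_HOME actually contains CANN libraries before chasing the segfault. The acllib/fwkacllib subdirectory names here assume a typical CANN toolkit layout; adjust them if your version differs.

```shell
# check_cann PATH: succeed if PATH looks like a CANN toolkit install.
check_cann() {
  if [ -d "$1/acllib/lib64" ] || [ -d "$1/fwkacllib/lib64" ]; then
    echo "ok: CANN libraries found under $1"
  else
    echo "missing: $1 does not look like a CANN install; fix the path in run.sh" >&2
    return 1
  fi
}

# Typical default install prefix; replace with your container's real path.
check_cann /usr/local/Ascend/ascend-toolkit/latest || true
```

Running "ldd" on the demo binary and looking for "not found" entries is another quick way to confirm which Ascend library fails to load.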
The contents of PaddleLite-generic-demo were freshly unpacked and paddlelite was freshly compiled; the only changes were the 5 files copied in after recompiling paddlelite.
error_mobilenet_v1_fp32_224.log was produced at the time by running PaddleLite-generic-demo/image_classification_demo/shell/run.sh. I did a rough search of the other files for code containing ".nd" and found nothing then. The paddlelite model format is normally .nb.
error_1.log: for /demo/cxx/mobilnetv1_light, I will try again using the environment referenced by PaddleLite-generic-demo.
In error_ssd.log, the actual installation path inside the docker container matches the value of HUAWEI_ASCEND_TOOLKIT_HOME, and the environment setup under the "$NNADAPTER_DEVICE_NAMES" == "huawei_ascend_npu" branch was also executed.
Please look at line 320 of image_classification_demo.cc in the demo: mobile_config.set_model_from_file(model_dir + ".nb"); can you confirm the suffix there is .nb?
Before you replaced those 5 files, did PaddleLite-generic-demo run image_classification_demo and ssd_detection_demo normally?
./run.sh: line 108: 19206 Aborted (core dumped) ./mobilenet_light_api mobilenet_v1_fp32_224.nb
Using the run.sh in PaddleLite-generic-demo/image_classification_demo/shell.
Please provide the version of PaddleLite you compiled and the specific commit.
[OpenCL] Fix opt tailor error (#7778)
[OpTestPy] Add 4 sequence unittests (#7889)
Later I found that after changing to USE_FULL_API=FALSE, skipping the paddle-model-to-nb conversion and loading the paddlelite model directly, it was the same environment error as before.
[1 12/17 8:47:14.777 .../Paddle-Lite-v2/lite/core/device_info.cc:1137 Setup] 49152 KB
[1 12/17 8:47:14.777 .../Paddle-Lite-v2/lite/core/device_info.cc:1137 Setup] 49152 KB
[1 12/17 8:47:14.777 .../Paddle-Lite-v2/lite/core/device_info.cc:1137 Setup] 49152 KB
[1 12/17 8:47:14.777 .../Paddle-Lite-v2/lite/core/device_info.cc:1139 Setup] Total memory: 266486848 KB
./run.sh: line 108: 1979 Segmentation fault (core dumped) ./${BUILD_DIR}/image_classification_demo ../assets/models/$MODEL_NAME $MODEL_TYPE ../assets/labels/$LABEL_NAME ../assets/images/$IMAGE_NAME $NNADAPTER_DEVICE_NAMES $NNADAPTER_CONTEXT_PROPERTIES $NNADAPTER_MODEL_CACHE_DIR $NNADAPTER_MODEL_CACHE_TOKEN $NNADAPTER_SUBGRAPH_PARTITION_CONFIG_PATH
rm -rf PaddleLite-generic-demo/libs/PaddleLite/linux/arm64/include/
cp -rf build.lite.linux.armv8.gcc/inference_lite_lib.armlinux.armv8.nnadapter/cxx/include/ PaddleLite-generic-demo/libs/PaddleLite/linux/arm64/include/
cp -rf build.lite.linux.armv8.gcc/inference_lite_lib.armlinux.armv8.nnadapter/cxx/lib/libnnadapter.so PaddleLite-generic-demo/libs/PaddleLite/linux/arm64/lib/huawei_ascend_npu/
cp -rf build.lite.linux.armv8.gcc/inference_lite_lib.armlinux.armv8.nnadapter/cxx/lib/libhuawei_ascend_npu.so PaddleLite-generic-demo/libs/PaddleLite/linux/arm64/lib/huawei_ascend_npu/
cp -rf build.lite.linux.armv8.gcc/inference_lite_lib.armlinux.armv8.nnadapter/cxx/lib/libpaddle_full_api_shared.so PaddleLite-generic-demo/libs/PaddleLite/linux/arm64/lib/
cp -rf build.lite.linux.armv8.gcc/inference_lite_lib.armlinux.armv8.nnadapter/cxx/lib/libpaddle_light_api_shared.so PaddleLite-generic-demo/libs/PaddleLite/linux/arm64/lib/