* There is an interesting issue in do_patch. I was debugging strange
behavior with a .bbappend in which I had added another small patch,
and the recipe started failing to configure completely.
bitbake -e showed that all .patch files were in SRC_URI, and
log.do_patch showed that all of them were applied, but git diff (as
well as patches/series) showed only the last one, the one added from
the .bbappend, as applied.
This was caused by the 8 existing patches in the .bb file using
;patchdir=../ while my patch in the .bbappend used ;patchdir=..
without the slash at the end. It should be fixed in quilt (or in how
do_patch uses it), but for now just drop the trailing slash, because
99.9% of recipes use ;patchdir=.. without the slash.
It's easily reproducible by removing the slash from the last patch
(without any bbappend).
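For illustration, a minimal sketch of the mismatch (the patch file
names here are hypothetical):

    # The existing patches in the .bb use a trailing slash:
    SRC_URI += "file://0001-existing-fix.patch;patchdir=../"

    # The new patch in the .bbappend named the same directory without it:
    SRC_URI += "file://0002-new-fix.patch;patchdir=.."

Both spellings point at the same directory, yet the symptom above
suggests the patching machinery keeps separate state for each literal
patchdir value, so keeping one consistent spelling avoids the problem.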
Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
* add a PACKAGECONFIG for vpu
* add an extra package for the firmware files (see the sketch below)
* tested on rpi4 with NCS2
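A minimal sketch of what such a change could look like; the configure
flag, firmware file pattern and package split are assumptions rather
than the exact committed recipe:

    # Hypothetical sketch: gate VPU (e.g. NCS2/MYRIAD) support behind
    # a PACKAGECONFIG knob; the CMake flag name is assumed.
    PACKAGECONFIG[vpu] = "-DENABLE_VPU=ON,-DENABLE_VPU=OFF"

    # Ship the VPU firmware blobs in their own package.
    PACKAGES =+ "${PN}-vpu-firmware"
    FILES_${PN}-vpu-firmware = "${libdir}/*.mvcmd"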
Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
* otherwise, components depending on them won't be able to find them
Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Instead of letting clDNN build against the intel_ocl_icd prebuilt
binaries under clDNN/common/intel_ocl_icd, configure the CMake build
to pick up the opencl-icd-loader headers and libraries from the
staging directory.
Do not set CMAKE_INSTALL_LOCAL_ONLY as it is unused.
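A sketch of the idea; the exact CMake variable names the clDNN build
expects are assumptions here:

    # Hypothetical sketch: take the OpenCL ICD loader from the sysroot
    # instead of the binaries bundled under clDNN/common/intel_ocl_icd.
    DEPENDS += "opencl-icd-loader"
    EXTRA_OECMAKE += " \
        -DCLDNN__OCL_ICD_INCDIRS=${STAGING_INCDIR} \
        -DCLDNN__OCL_ICD_LIBDIRS=${STAGING_LIBDIR} \
    "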
Signed-off-by: Chin Huat Ang <chin.huat.ang@intel.com>
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Refresh patches so that they apply cleanly on 2019r3.
Signed-off-by: Chin Huat Ang <chin.huat.ang@intel.com>
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
The inference engine still downloads and builds its own copy of
mkl-dnn, so remove it from DEPENDS.
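The change itself is roughly a one-liner:

    # mkl-dnn is fetched and built by the inference engine's own build
    # system, so a separate build-time dependency is redundant.
    DEPENDS_remove = "mkl-dnn"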
Signed-off-by: Chin Huat Ang <chin.huat.ang@intel.com>
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Add PACKAGECONFIG[python3] for building the dldt-inference-engine-python3
package, which contains the inference engine Python API.
Also tweak the recipe to inherit python3native instead of relying on
the host python, as building the Python API requires python3-cython,
which might not be available on the host.
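A rough sketch of the shape of this change; the CMake flag and the
dependency names are assumptions:

    inherit python3native

    # Hypothetical sketch: build the Python API only on demand.
    PACKAGECONFIG[python3] = "-DENABLE_PYTHON=ON,-DENABLE_PYTHON=OFF,python3-cython-native"

    PACKAGES =+ "${PN}-python3"
    FILES_${PN}-python3 = "${PYTHON_SITEPACKAGES_DIR}"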
Signed-off-by: Chin Huat Ang <chin.huat.ang@intel.com>
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Install clDNN to /usr/lib to resolve the following inference engine
error when running with the GPU plugin:
[ ERROR ] Failed to create plugin libclDNNPlugin.so for device GPU
Please, check your environment
Cannot load library 'libclDNNPlugin.so': libclDNNPlugin.so: cannot open
shared object file: No such file or directory
/usr/src/debug/dldt-inference-engine/2019r2-r0/git/inference-engine/include/details/os/lin_shared_object_loader.h:36
/usr/src/debug/dldt-inference-engine/2019r2-r0/git/inference-engine/src/inference_engine/ie_core.cpp:277
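As a sketch, the fix amounts to making sure the clDNN runtime library
lands in the default library search path; the source path below is
hypothetical:

    # Hypothetical sketch: install libclDNN64.so into ${libdir} so the
    # dynamic loader can resolve it when ie_core loads
    # libclDNNPlugin.so for the GPU device.
    do_install_append() {
        install -d ${D}${libdir}
        install -m 0755 ${B}/thirdparty/clDNN/libclDNN64.so ${D}${libdir}
    }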
Signed-off-by: Chin Huat Ang <chin.huat.ang@intel.com>
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
* Release notes:
https://software.intel.com/en-us/articles/OpenVINO-RelNotes
* Enable unit tests to be built and tested using the ptest mechanism.
* Include patches from Clear Linux for build fixes.
* Switch to using python3, and switch threading to TBB. Switch
ENABLE_OPENCV to off so the opencv from the system is used (see the
configure sketch below).
* Remove do_install and patch the Makefiles instead so that libraries
are installed correctly.
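A sketch of the corresponding configure switches; the option names
follow the dldt CMake options of that era and should be treated as
assumptions:

    # Hypothetical sketch of the configuration described above.
    inherit ptest
    EXTRA_OECMAKE += " \
        -DTHREADING=TBB \
        -DENABLE_OPENCV=OFF \
        -DENABLE_TESTS=ON \
    "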
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
This recipe builds the inference engine from the opencv/dldt 2019 R1.1
release.
The OpenVINO™ toolkit, short for Open Visual Inference and Neural
network Optimization toolkit, provides developers with improved neural
network performance on a variety of Intel® processors and helps further
unlock cost-effective, real-time vision applications.
The toolkit enables deep learning inference and easy heterogeneous
execution across multiple Intel® platforms (CPU, Intel® Processor
Graphics), providing implementations from cloud architectures to edge
devices.
For more details, see:
https://01.org/openvinotoolkit
The recipe needs components from meta-oe, so move it to
dynamic-layers/openembedded-layer. GPU plugin support needs
intel-compute-runtime, which can be built by also including the clang
layer in the mix (see the layer.conf sketch below).
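The dynamic-layers mechanism works via BBFILES_DYNAMIC in the layer's
conf/layer.conf, roughly like this sketch:

    # Only parse these recipes when meta-oe (collection
    # "openembedded-layer") is present in BBLAYERS.
    BBFILES_DYNAMIC += " \
        openembedded-layer:${LAYERDIR}/dynamic-layers/openembedded-layer/*/*/*.bb \
        openembedded-layer:${LAYERDIR}/dynamic-layers/openembedded-layer/*/*/*.bbappend \
    "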
CPU and GPU plugins have been sanity-tested to work using
classification_sample. Further fine-tuning is still needed to improve
performance.
Original patch by Anuj Mittal.
Signed-off-by: Chin Huat Ang <chin.huat.ang@intel.com>
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>