Dropped the patch which is already upstream.
Signed-off-by: Naveen Saini <naveen.kumar.saini@intel.com>
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Fix the dependency that gets added when ptests are not enabled. Fixes:
| ERROR: Nothing PROVIDES '0' (but /meta-intel/dynamic-layers/openembedded-layer/recipes-support/opencv/dldt-inference-engine_2019r3.1.bb DEPENDS on or otherwise requires it)
| ERROR: Required build target 'dldt-inference-engine' has no buildable providers.
| Missing or unbuildable dependency chain was: ['dldt-inference-engine', '0']
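A minimal sketch of the intended kind of conditional dependency, assuming
bb.utils.contains is used (the dependency name is illustrative only):

  # Only add the extra dependency when ptest is enabled; otherwise expand
  # to an empty string rather than a bare value such as '0'.
  DEPENDS += "${@bb.utils.contains('DISTRO_FEATURES', 'ptest', 'gtest', '', d)}"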
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
LIBCPLUSPLUS is now set to use GNU libstdc++ by default. A new variable
RUNTIME can be used to change this behaviour.
See commit 6895c79e05.
Remove this bbappend as it is not required any more.
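For reference, a minimal sketch of switching the runtime back, assuming
meta-clang's RUNTIME variable is set from local.conf or a distro config:

  # Use the LLVM C++ runtime instead of the GNU libstdc++ default.
  RUNTIME = "llvm"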
Signed-off-by: Naveen Saini <naveen.kumar.saini@intel.com>
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Install the OpenCL kernels and cldnn_global_custom_kernels.xml to allow
specification of OpenCL kernels for custom layers.
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
* Make sure that USB udev rules for Intel(R) Movidius(TM) Neural Compute
Stick and Intel(R) Neural Compute Stick 2 are packaged.
* Package vpu firmware only when it is enabled (see the sketch below).
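An illustrative sketch of the packaging intent; the file paths, package and
variable names here are assumptions, not copied from the recipe:

  # Package the Myriad firmware in its own package only when vpu is enabled.
  PACKAGE_BEFORE_PN += "${@bb.utils.contains('PACKAGECONFIG', 'vpu', '${PN}-vpu-firmware', '', d)}"
  FILES_${PN}-vpu-firmware = "${libdir}/*.mvcmd"
  # Ship the NCS/NCS2 udev rules with the main package.
  FILES_${PN} += "${sysconfdir}/udev/rules.d/*.rules"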
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
* There is an interesting issue in do_patch. I was debugging strange
behavior with a .bbappend in which I had added another small patch,
and the recipe started failing to configure completely.
bitbake -e shows that all .patch files are in SRC_URI and
log.do_patch shows that all of them were applied, but git diff (as well as
patches/series) shows only the last one, added from the bbappend,
as applied.
This was caused by the 8 existing patches in the .bb file using ;patchdir=../
while my patch in the .bbappend used ;patchdir=.. without a slash at the end.
This should be fixed in quilt (or in how do_patch uses it), but for
now just drop the trailing slash, because 99.9% of recipes use ;patchdir=..
without the slash.
It's easily reproducible by removing the slash from the last patch
(without any bbappend).
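A minimal example of the working form (patch name illustrative); note there
is no trailing slash after the two dots:

  SRC_URI += "file://0001-example-fix.patch;patchdir=.."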
Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
* add PACKAGECONFIG for vpu (sketched below)
* add extra package for firmware files
* tested on rpi4 with NCS2
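A hypothetical sketch of such a PACKAGECONFIG; the cmake options and the
libusb1 dependency are assumptions rather than quotes from the recipe:

  PACKAGECONFIG[vpu] = "-DENABLE_VPU=ON -DENABLE_MYRIAD=ON,-DENABLE_VPU=OFF -DENABLE_MYRIAD=OFF,libusb1"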
Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
* otherwise components depending on them won't be able to find them
Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
This shouldn't be forced upon users who include the layer without using the
machine value from meta-intel.
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Instead of letting clDNN build against the intel_ocl_icd prebuilt binaries
under clDNN/common/intel_ocl_icd, configure the cmake build to pick up the
opencl-icd-loader headers and libraries from the staging directory.
Do not set CMAKE_INSTALL_LOCAL_ONLY as it is unused.
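A sketch of the direction of the change, with illustrative cmake variable
names (the actual option names in the clDNN build may differ):

  DEPENDS += "opencl-icd-loader"
  # Point the build at the staged ICD loader instead of the bundled binaries.
  EXTRA_OECMAKE += " \
      -DOPENCL_INCLUDE_DIRS=${STAGING_INCDIR} \
      -DOPENCL_LIBRARIES=${STAGING_LIBDIR}/libOpenCL.so \
  "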
Signed-off-by: Chin Huat Ang <chin.huat.ang@intel.com>
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Refresh patches so that they apply cleanly on 2019r3.
Signed-off-by: Chin Huat Ang <chin.huat.ang@intel.com>
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
The inference engine is still downloading and building its own copy of
mkl-dnn, so remove it from DEPENDS.
Signed-off-by: Chin Huat Ang <chin.huat.ang@intel.com>
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
The LLD linker is no longer the default for clang-native, so we can build
binaries that link against clang-native using GNU ld.
Signed-off-by: Naveen Saini <naveen.kumar.saini@intel.com>
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
The LLD linker is no longer the default for clang-native, so we can build
binaries that link against clang-native using GNU ld.
Signed-off-by: Naveen Saini <naveen.kumar.saini@intel.com>
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
The LLD linker is no longer the default for clang-native, so we can build
binaries that link against clang-native using GNU ld.
Signed-off-by: Naveen Saini <naveen.kumar.saini@intel.com>
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
It depends on ace, which is marked as incompatible with musl as well.
Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Add PACKAGECONFIG[python3] for building the dldt-inference-engine-python3
package, which contains the inference engine python API.
Also tweak the recipe to inherit python3native instead of relying on host
python, as building the python API requires python3-cython, which might
not be available on the host.
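A rough sketch of the intent; the cmake option and the exact dependency
list are assumptions:

  inherit python3native
  # Build the python API against python3 and native cython.
  PACKAGECONFIG[python3] = "-DENABLE_PYTHON=ON,-DENABLE_PYTHON=OFF,python3-cython-native"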
Signed-off-by: Chin Huat Ang <chin.huat.ang@intel.com>
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
No need to set LLVM_TARGETS_TO_BUILD here as it
is set by the meta-clang layer.
Signed-off-by: Naveen Saini <naveen.kumar.saini@intel.com>
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Install clDNN to /usr/lib to resolve the following inference engine
error when running with the GPU plugin:
[ ERROR ] Failed to create plugin libclDNNPlugin.so for device GPU
Please, check your environment
Cannot load library 'libclDNNPlugin.so': libclDNNPlugin.so: cannot open
shared object file: No such file or directory
/usr/src/debug/dldt-inference-engine/2019r2-r0/git/inference-engine/include/details/os/lin_shared_object_loader.h:36
/usr/src/debug/dldt-inference-engine/2019r2-r0/git/inference-engine/src/inference_engine/ie_core.cpp:277
Signed-off-by: Chin Huat Ang <chin.huat.ang@intel.com>
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
* Release notes:
https://software.intel.com/en-us/articles/OpenVINO-RelNotes
* Enable unit tests to be built and tested using ptest mechanism.
* Include patches from Clear Linux for build fixes.
* Switch to using python3 and switch threading to TBB. Switch ENABLE_OPENCV
to off so that opencv from the system is used (see the sketch below).
* Remove do_install and patch Makefiles instead to install libraries correctly.
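An illustrative sketch of the corresponding cmake switches (option names
assumed, not quoted from the recipe):

  EXTRA_OECMAKE += " \
      -DTHREADING=TBB \
      -DENABLE_OPENCV=OFF \
  "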
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
This recipe builds the inference engine from the opencv/dldt 2019 R1.1
release.
OpenVINO™ toolkit, short for Open Visual Inference and Neural network
Optimization toolkit, provides developers with improved neural network
performance on a variety of Intel® processors and helps further unlock
cost-effective, real-time vision applications.
The toolkit enables deep learning inference and easy heterogeneous
execution across multiple Intel® platforms (CPU, Intel® Processor Graphics),
providing implementations from cloud architectures to edge devices.
For more details, see:
https://01.org/openvinotoolkit
The recipe needs components from meta-oe, so move it to
dynamic-layers/openembedded-layer. GPU plugin support needs
intel-compute-runtime, which can be built by including the clang layer in
the mix as well.
CPU and GPU plugins have been sanity tested to work using
classification_sample. Further fine-tuning is still needed to improve
the performance.
Original patch by Anuj Mittal.
Signed-off-by: Chin Huat Ang <chin.huat.ang@intel.com>
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Model Optimizer is a cross-platform command-line tool that facilitates
the transition between the training and deployment environment,
performs static model analysis, and adjusts deep learning models for
optimal execution on end-point target devices.
For more details, see:
https://software.intel.com/en-us/openvino-toolkit/deep-learning-cv
Since the recipe requires bits from meta-python, move this to the
dynamic layers section and add meta-python to BBFILES_DYNAMIC.
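The usual layer.conf pattern for this looks roughly like the following
(paths are illustrative):

  BBFILES_DYNAMIC += " \
      meta-python:${LAYERDIR}/dynamic-layers/meta-python/recipes-*/*/*.bb \
  "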
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
We'd like to ignore the older tags that use year and work-week strings like
2018ww19-010806 and only use the ones that are plain version numbers in
x.y.z format.
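The kind of regex meant here is roughly the following (the exact expression
in the recipe may differ):

  # Only match plain x.y.z release tags, skipping e.g. 2018ww19-010806.
  UPSTREAM_CHECK_GITTAGREGEX = "(?P<pver>\d+\.\d+\.\d+)$"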
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
There is no need to invoke the cmake target explicitly now to have the cmake
files installed. Remove the append to do_install that was doing that.
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Removed patches that are not required anymore.
Updated the python version to 3.
The issues fixed and improvements in this release can be
found here:
https://github.com/intel/intel-graphics-compiler/releases/tag/igc-1.0.11
Signed-off-by: Naveen Saini <naveen.kumar.saini@intel.com>
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
With this upgrade, -DPREFERRED_LLVM_VERSION="9.0.0" is set by default.
Changes can be checked here:
https://github.com/intel/opencl-clang
Signed-off-by: Naveen Saini <naveen.kumar.saini@intel.com>
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>