neural_network_core.h

Overview

Defines the APIs of the Neural Network Core module. The AI inference framework uses the native interfaces provided by Neural Network Core to build models and perform inference and computation on acceleration hardware. Illustrative usage sketches for the main workflows follow the function summary.

NOTE
Currently, the APIs of Neural Network Core do not support concurrent calls from multiple threads.

File to include: <neural_network_runtime/neural_network_core.h>

Library: libneural_network_core.so

System capability: SystemCapability.Ai.NeuralNetworkRuntime

Since: 11

Related module: NeuralNetworkRuntime

Summary

Functions

Name Description
OH_NNCompilation *OH_NNCompilation_Construct (const OH_NNModel *model) Creates a model building instance of the OH_NNCompilation type.
OH_NNCompilation *OH_NNCompilation_ConstructWithOfflineModelFile (const char *modelPath) Creates a build instance based on an offline model file.
OH_NNCompilation *OH_NNCompilation_ConstructWithOfflineModelBuffer (const void *modelBuffer, size_t modelSize) Creates a build instance based on an offline model buffer.
OH_NNCompilation *OH_NNCompilation_ConstructForCache () Creates an empty build instance for later recovery from the model cache.
OH_NNCompilation_ExportCacheToBuffer (OH_NNCompilation *compilation, const void *buffer, size_t length, size_t *modelSize) Writes the model cache to the specified buffer.
OH_NNCompilation_ImportCacheFromBuffer (OH_NNCompilation *compilation, const void *buffer, size_t modelSize) Reads the model cache from the specified buffer.
OH_NNCompilation_AddExtensionConfig (OH_NNCompilation *compilation, const char *configName, const void *configValue, const size_t configValueSize) Adds extended configurations for custom hardware attributes.
OH_NNCompilation_SetDevice (OH_NNCompilation *compilation, size_t deviceID) Sets the device for model building and computing.
OH_NNCompilation_SetCache (OH_NNCompilation *compilation, const char *cachePath, uint32_t version) Sets the cache directory and version for model building.
OH_NNCompilation_SetPerformanceMode (OH_NNCompilation *compilation, OH_NN_PerformanceMode performanceMode) Sets the performance mode for model computing.
OH_NNCompilation_SetPriority (OH_NNCompilation *compilation, OH_NN_Priority priority) Sets the priority for model computing.
OH_NNCompilation_EnableFloat16 (OH_NNCompilation *compilation, bool enableFloat16) Enables float16 for computing.
OH_NNCompilation_Build (OH_NNCompilation *compilation) Performs model building.
OH_NNCompilation_Destroy (OH_NNCompilation **compilation) Destroys a model building instance of the OH_NNCompilation type.
NN_TensorDesc *OH_NNTensorDesc_Create () Creates an NN_TensorDesc instance.
OH_NNTensorDesc_Destroy (NN_TensorDesc **tensorDesc) Releases an NN_TensorDesc instance.
OH_NNTensorDesc_SetName (NN_TensorDesc *tensorDesc, const char *name) Sets the name of an NN_TensorDesc instance.
OH_NNTensorDesc_GetName (const NN_TensorDesc *tensorDesc, const char **name) Obtains the name of an NN_TensorDesc instance.
OH_NNTensorDesc_SetDataType (NN_TensorDesc *tensorDesc, OH_NN_DataType dataType) Sets the data type of an NN_TensorDesc instance.
OH_NNTensorDesc_GetDataType (const NN_TensorDesc *tensorDesc, OH_NN_DataType *dataType) Obtains the data type of an NN_TensorDesc instance.
OH_NNTensorDesc_SetShape (NN_TensorDesc *tensorDesc, const int32_t *shape, size_t shapeLength) Sets the data shape of an NN_TensorDesc instance.
OH_NNTensorDesc_GetShape (const NN_TensorDesc *tensorDesc, int32_t **shape, size_t *shapeLength) Obtains the shape of an NN_TensorDesc instance.
OH_NNTensorDesc_SetFormat (NN_TensorDesc *tensorDesc, OH_NN_Format format) Sets the data format of an NN_TensorDesc instance.
OH_NNTensorDesc_GetFormat (const NN_TensorDesc *tensorDesc, OH_NN_Format *format) Obtains the data format of an NN_TensorDesc instance.
OH_NNTensorDesc_GetElementCount (const NN_TensorDesc *tensorDesc, size_t *elementCount) Obtains the number of elements in an NN_TensorDesc instance.
OH_NNTensorDesc_GetByteSize (const NN_TensorDesc *tensorDesc, size_t *byteSize) Obtains the number of bytes occupied by the tensor data, calculated from the shape and data type of an NN_TensorDesc instance.
NN_Tensor *OH_NNTensor_Create (size_t deviceID, NN_TensorDesc *tensorDesc) Creates an NN_Tensor instance from an NN_TensorDesc instance.
NN_Tensor *OH_NNTensor_CreateWithSize (size_t deviceID, NN_TensorDesc *tensorDesc, size_t size) Creates an NN_Tensor instance based on the specified memory size and NN_TensorDesc instance.
NN_Tensor *OH_NNTensor_CreateWithFd (size_t deviceID, NN_TensorDesc *tensorDesc, int fd, size_t size, size_t offset) Creates an NN_Tensor instance based on the specified file descriptor of the shared memory and an NN_TensorDesc instance.
OH_NNTensor_Destroy (NN_Tensor **tensor) Destroys an NN_Tensor instance.
NN_TensorDesc *OH_NNTensor_GetTensorDesc (const NN_Tensor *tensor) Obtains the NN_TensorDesc instance of an NN_Tensor.
void *OH_NNTensor_GetDataBuffer (const NN_Tensor *tensor) Obtains the memory address of NN_Tensor data.
OH_NNTensor_GetFd (const NN_Tensor *tensor, int *fd) Obtains the file descriptor of the shared memory where NN_Tensor data is stored.
OH_NNTensor_GetSize (const NN_Tensor *tensor, size_t *size) Obtains the size of the shared memory where the NN_Tensor data is stored.
OH_NNTensor_GetOffset (const NN_Tensor *tensor, size_t *offset) Obtains the offset of NN_Tensor data in the shared memory.
OH_NNExecutor *OH_NNExecutor_Construct (OH_NNCompilation *compilation) Creates an OH_NNExecutor instance.
OH_NNExecutor_GetOutputShape (OH_NNExecutor *executor, uint32_t outputIndex, int32_t **shape, uint32_t *shapeLength) Obtains the dimension information about the output tensor.
OH_NNExecutor_Destroy (OH_NNExecutor **executor) Destroys an executor instance to release the memory occupied by it.
OH_NNExecutor_GetInputCount (const OH_NNExecutor *executor, size_t *inputCount) Obtains the number of input tensors.
OH_NNExecutor_GetOutputCount (const OH_NNExecutor *executor, size_t *outputCount) Obtains the number of output tensors.
NN_TensorDesc *OH_NNExecutor_CreateInputTensorDesc (const OH_NNExecutor *executor, size_t index) Creates the description of an input tensor based on the specified index value.
NN_TensorDesc *OH_NNExecutor_CreateOutputTensorDesc (const OH_NNExecutor *executor, size_t index) Creates the description of an output tensor based on the specified index value.
OH_NNExecutor_GetInputDimRange (const OH_NNExecutor *executor, size_t index, size_t **minInputDims, size_t **maxInputDims, size_t *shapeLength) Obtains the dimension range of the input tensor with the specified index.
OH_NNExecutor_SetOnRunDone (OH_NNExecutor *executor, NN_OnRunDone onRunDone) Sets the callback processing function invoked when the asynchronous inference ends.
OH_NNExecutor_SetOnServiceDied (OH_NNExecutor *executor, NN_OnServiceDied onServiceDied) Sets the callback processing function invoked when the device driver service terminates unexpectedly during asynchronous inference.
OH_NNExecutor_RunSync (OH_NNExecutor *executor, NN_Tensor *inputTensor[], size_t inputCount, NN_Tensor *outputTensor[], size_t outputCount) Performs synchronous inference.
OH_NNExecutor_RunAsync (OH_NNExecutor *executor, NN_Tensor *inputTensor[], size_t inputCount, NN_Tensor *outputTensor[], size_t outputCount, int32_t timeout, void *userData) Performs asynchronous inference.
OH_NNDevice_GetAllDevicesID (const size_t **allDevicesID, uint32_t *deviceCount) Obtains the IDs of all devices connected to the Neural Network Runtime.
OH_NNDevice_GetName (size_t deviceID, const char **name) Obtains the name of the specified device.
OH_NNDevice_GetType (size_t deviceID, OH_NN_DeviceType *deviceType) Obtains the type of the specified device.
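
Usage examples

The sketches below are illustrative only, not official sample code: they assume only the header and library listed above, reduce error handling to early returns, and the exact ownership and lifetime rules should be checked against each function's reference page.

Enumerating the devices connected to Neural Network Runtime with OH_NNDevice_GetAllDevicesID, OH_NNDevice_GetName, and OH_NNDevice_GetType:

```c
#include <stdio.h>
#include <neural_network_runtime/neural_network_core.h>

// Enumerate acceleration devices and print their names and types.
// The ID array returned by the runtime is managed internally; do not free it.
int PrintAllDevices(void)
{
    const size_t *allDevicesID = NULL;
    uint32_t deviceCount = 0;
    if (OH_NNDevice_GetAllDevicesID(&allDevicesID, &deviceCount) != OH_NN_SUCCESS) {
        return -1;
    }
    for (uint32_t i = 0; i < deviceCount; ++i) {
        const char *name = NULL;
        OH_NN_DeviceType type = OH_NN_OTHERS;
        if (OH_NNDevice_GetName(allDevicesID[i], &name) == OH_NN_SUCCESS &&
            OH_NNDevice_GetType(allDevicesID[i], &type) == OH_NN_SUCCESS) {
            printf("device %zu: %s (type %d)\n", allDevicesID[i], name, (int)type);
        }
    }
    return 0;
}
```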
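
Building a model for a chosen device. This sketch assumes model is an OH_NNModel that was already constructed and finalized through the model APIs of the related NeuralNetworkRuntime module, and that deviceID was obtained as shown above; the performance mode and priority values are arbitrary examples:

```c
#include <stddef.h>
#include <neural_network_runtime/neural_network_core.h>

// Build `model` for `deviceID` and return the built compilation, or NULL on failure.
OH_NNCompilation *BuildForDevice(const OH_NNModel *model, size_t deviceID)
{
    OH_NNCompilation *compilation = OH_NNCompilation_Construct(model);
    if (compilation == NULL) {
        return NULL;
    }
    // Each call returns an OH_NN_ReturnCode; OH_NN_SUCCESS indicates success.
    if (OH_NNCompilation_SetDevice(compilation, deviceID) != OH_NN_SUCCESS ||
        OH_NNCompilation_SetPerformanceMode(compilation, OH_NN_PERFORMANCE_HIGH) != OH_NN_SUCCESS ||
        OH_NNCompilation_SetPriority(compilation, OH_NN_PRIORITY_MEDIUM) != OH_NN_SUCCESS ||
        OH_NNCompilation_Build(compilation) != OH_NN_SUCCESS) {
        OH_NNCompilation_Destroy(&compilation);  // takes OH_NNCompilation ** and resets it to NULL
        return NULL;
    }
    return compilation;
}
```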
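
Recovering a build from the model cache instead of rebuilding. This assumes a cache was written by an earlier run that called OH_NNCompilation_SetCache with the same directory and version before OH_NNCompilation_Build; both parameters here are caller-supplied assumptions:

```c
#include <stddef.h>
#include <stdint.h>
#include <neural_network_runtime/neural_network_core.h>

// Restore a compilation from an on-disk model cache for `deviceID`.
// `cachePath` and `version` must match the values used when the cache was written.
OH_NNCompilation *RestoreFromCache(size_t deviceID, const char *cachePath, uint32_t version)
{
    OH_NNCompilation *compilation = OH_NNCompilation_ConstructForCache();
    if (compilation == NULL) {
        return NULL;
    }
    if (OH_NNCompilation_SetDevice(compilation, deviceID) != OH_NN_SUCCESS ||
        OH_NNCompilation_SetCache(compilation, cachePath, version) != OH_NN_SUCCESS ||
        OH_NNCompilation_Build(compilation) != OH_NN_SUCCESS) {
        OH_NNCompilation_Destroy(&compilation);
        return NULL;
    }
    return compilation;
}
```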
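
Synchronous inference with one input and one output tensor. The model is assumed to have exactly one of each; the input is zero-filled as placeholder data, and the destruction order at the end (tensors before descriptions) is an assumption that should be verified against the ownership notes of OH_NNTensor_Create and OH_NNExecutor_CreateInputTensorDesc:

```c
#include <string.h>
#include <neural_network_runtime/neural_network_core.h>

// Run one synchronous inference on a built compilation.
OH_NN_ReturnCode RunOnce(OH_NNCompilation *compilation, size_t deviceID)
{
    OH_NN_ReturnCode ret = OH_NN_FAILED;
    OH_NNExecutor *executor = OH_NNExecutor_Construct(compilation);
    if (executor == NULL) {
        return OH_NN_FAILED;
    }

    // Descriptions come from the executor itself, so shapes and types match the model.
    NN_TensorDesc *inDesc = OH_NNExecutor_CreateInputTensorDesc(executor, 0);
    NN_TensorDesc *outDesc = OH_NNExecutor_CreateOutputTensorDesc(executor, 0);
    NN_Tensor *input = OH_NNTensor_Create(deviceID, inDesc);
    NN_Tensor *output = OH_NNTensor_Create(deviceID, outDesc);

    if (input != NULL && output != NULL) {
        // Write input data through the tensor's shared-memory mapping.
        void *data = OH_NNTensor_GetDataBuffer(input);
        size_t byteSize = 0;
        if (data != NULL && OH_NNTensorDesc_GetByteSize(inDesc, &byteSize) == OH_NN_SUCCESS) {
            memset(data, 0, byteSize);  // placeholder input
        }
        NN_Tensor *inputs[] = { input };
        NN_Tensor *outputs[] = { output };
        ret = OH_NNExecutor_RunSync(executor, inputs, 1, outputs, 1);
        // On OH_NN_SUCCESS, read results via OH_NNTensor_GetDataBuffer(output).
    }

    OH_NNTensor_Destroy(&input);
    OH_NNTensor_Destroy(&output);
    OH_NNTensorDesc_Destroy(&inDesc);
    OH_NNTensorDesc_Destroy(&outDesc);
    OH_NNExecutor_Destroy(&executor);
    return ret;
}
```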