neural_network_runtime.h
Overview
Defines the Neural Network Runtime APIs. The AI inference framework uses the Native APIs provided by Neural Network Runtime to construct and compile models and perform inference and computing on acceleration hardware. Note: Currently, the Neural Network Runtime APIs do not support multi-threaded calls.
Since: 9
Related Modules:
Summary
Functions
Name | Description |
---|---|
OH_NNModel_Construct (void) | Creates a model instance of the OH_NNModel type; the instance is then built up using the other OH_NNModel APIs. |
OH_NNModel_AddTensor (OH_NNModel *model, const OH_NN_Tensor *tensor) | Adds a tensor to a model instance. |
OH_NNModel_SetTensorData (OH_NNModel *model, uint32_t index, const void *dataBuffer, size_t length) | Sets the tensor value. |
OH_NNModel_AddOperation (OH_NNModel *model, OH_NN_OperationType op, const OH_NN_UInt32Array *paramIndices, const OH_NN_UInt32Array *inputIndices, const OH_NN_UInt32Array *outputIndices) | Adds an operator to a model instance. |
OH_NNModel_SpecifyInputsAndOutputs (OH_NNModel *model, const OH_NN_UInt32Array *inputIndices, const OH_NN_UInt32Array *outputIndices) | Specifies the inputs and outputs of a model. |
OH_NNModel_Finish (OH_NNModel *model) | Completes model composition. |
OH_NNModel_Destroy (OH_NNModel **model) | Releases a model instance. |
OH_NNModel_GetAvailableOperations (OH_NNModel *model, size_t deviceID, const bool **isSupported, uint32_t *opCount) | Queries whether the specified device supports the operators in a model. The support status of each operator is indicated by a Boolean value in the isSupported array. |
OH_NNCompilation_Construct (const OH_NNModel *model) | Creates a compilation instance of the OH_NNCompilation type. |
OH_NNCompilation_SetDevice (OH_NNCompilation *compilation, size_t deviceID) | Specifies the device for model compilation and computing. |
OH_NNCompilation_SetCache (OH_NNCompilation *compilation, const char *cachePath, uint32_t version) | Sets the cache directory and version of the compiled model. |
OH_NNCompilation_SetPerformanceMode (OH_NNCompilation *compilation, OH_NN_PerformanceMode performanceMode) | Sets the performance mode for model computing. |
OH_NNCompilation_SetPriority (OH_NNCompilation *compilation, OH_NN_Priority priority) | Sets the model computing priority. |
OH_NNCompilation_EnableFloat16 (OH_NNCompilation *compilation, bool enableFloat16) | Enables float16 for computing. |
OH_NNCompilation_Build (OH_NNCompilation *compilation) | Compiles a model. |
OH_NNCompilation_Destroy (OH_NNCompilation **compilation) | Releases a compilation instance. |
OH_NNExecutor_Construct (OH_NNCompilation *compilation) | Creates an executor instance of the OH_NNExecutor type. |
OH_NNExecutor_SetInput (OH_NNExecutor *executor, uint32_t inputIndex, const OH_NN_Tensor *tensor, const void *dataBuffer, size_t length) | Sets the single input data for a model. |
OH_NNExecutor_SetOutput (OH_NNExecutor *executor, uint32_t outputIndex, void *dataBuffer, size_t length) | Sets the buffer for a single output of a model. |
OH_NNExecutor_GetOutputShape (OH_NNExecutor *executor, uint32_t outputIndex, int32_t **shape, uint32_t *shapeLength) | Obtains the dimension information about the output tensor. |
OH_NNExecutor_Run (OH_NNExecutor *executor) | Performs inference. |
OH_NNExecutor_AllocateInputMemory (OH_NNExecutor *executor, uint32_t inputIndex, size_t length) | Allocates shared memory to a single input on a device. |
OH_NNExecutor_AllocateOutputMemory (OH_NNExecutor *executor, uint32_t outputIndex, size_t length) | Allocates shared memory to a single output on a device. |
OH_NNExecutor_DestroyInputMemory (OH_NNExecutor *executor, uint32_t inputIndex, OH_NN_Memory **memory) | Releases the input memory to which the OH_NN_Memory instance points. |
OH_NNExecutor_DestroyOutputMemory (OH_NNExecutor *executor, uint32_t outputIndex, OH_NN_Memory **memory) | Releases the output memory to which the OH_NN_Memory instance points. |
OH_NNExecutor_SetInputWithMemory (OH_NNExecutor *executor, uint32_t inputIndex, const OH_NN_Tensor *tensor, const OH_NN_Memory *memory) | Specifies the hardware shared memory pointed to by the OH_NN_Memory instance as the shared memory used by a single input. |
OH_NNExecutor_SetOutputWithMemory (OH_NNExecutor *executor, uint32_t outputIndex, const OH_NN_Memory *memory) | Specifies the hardware shared memory pointed to by the OH_NN_Memory instance as the shared memory used by a single output. |
OH_NNExecutor_Destroy (OH_NNExecutor **executor) | Destroys an executor instance to release the memory occupied by the executor. |
OH_NNDevice_GetAllDevicesID (const size_t **allDevicesID, uint32_t *deviceCount) | Obtains the IDs of all devices connected to Neural Network Runtime. |
OH_NNDevice_GetName (size_t deviceID, const char **name) | Obtains the name of the specified device. |
OH_NNDevice_GetType (size_t deviceID, OH_NN_DeviceType *deviceType) | Obtains the type information of the specified device. |
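The functions above are meant to be called in a model → compilation → executor sequence. The following is a minimal sketch of that workflow, assuming a single float32 Add operator and the first available device; the tensor shape, data values, and function name `AddModelDemo` are illustrative, error handling is elided, and each call's return code should be checked against OH_NN_SUCCESS in real code:

```c
#include <string.h>
#include "neural_network_runtime/neural_network_runtime.h"

void AddModelDemo(void)
{
    /* 1. Compose the model: output = input0 + input1 */
    OH_NNModel *model = OH_NNModel_Construct();

    int32_t dims[3] = {3, 2, 2};
    OH_NN_Tensor tensor = {OH_NN_FLOAT32, 3, dims, NULL, OH_NN_TENSOR};
    OH_NNModel_AddTensor(model, &tensor);          /* index 0: first input  */
    OH_NNModel_AddTensor(model, &tensor);          /* index 1: second input */

    /* Scalar parameter tensor carrying the fused activation type */
    OH_NN_Tensor activation = {OH_NN_INT8, 0, NULL, NULL, OH_NN_ADD_ACTIVATIONTYPE};
    OH_NNModel_AddTensor(model, &activation);      /* index 2: operator parameter */
    int8_t actValue = OH_NN_FUSED_NONE;
    OH_NNModel_SetTensorData(model, 2, &actValue, sizeof(actValue));

    OH_NNModel_AddTensor(model, &tensor);          /* index 3: output */

    uint32_t inputIdx[] = {0, 1}, paramIdx[] = {2}, outputIdx[] = {3};
    OH_NN_UInt32Array inputs = {inputIdx, 2};
    OH_NN_UInt32Array params = {paramIdx, 1};
    OH_NN_UInt32Array outputs = {outputIdx, 1};
    OH_NNModel_AddOperation(model, OH_NN_OPS_ADD, &params, &inputs, &outputs);
    OH_NNModel_SpecifyInputsAndOutputs(model, &inputs, &outputs);
    OH_NNModel_Finish(model);

    /* 2. Compile the finished model for the first available device */
    const size_t *devicesID = NULL;
    uint32_t deviceCount = 0;
    OH_NNDevice_GetAllDevicesID(&devicesID, &deviceCount);

    OH_NNCompilation *compilation = OH_NNCompilation_Construct(model);
    OH_NNCompilation_SetDevice(compilation, devicesID[0]);
    OH_NNCompilation_Build(compilation);

    /* 3. Bind input/output buffers and run inference */
    OH_NNExecutor *executor = OH_NNExecutor_Construct(compilation);
    float input0[12] = {0}, input1[12] = {0}, output[12] = {0};
    OH_NNExecutor_SetInput(executor, 0, &tensor, input0, sizeof(input0));
    OH_NNExecutor_SetInput(executor, 1, &tensor, input1, sizeof(input1));
    OH_NNExecutor_SetOutput(executor, 0, output, sizeof(output));
    OH_NNExecutor_Run(executor);

    /* 4. Release instances in reverse order of creation */
    OH_NNExecutor_Destroy(&executor);
    OH_NNCompilation_Destroy(&compilation);
    OH_NNModel_Destroy(&model);
}
```

The destroy functions take a pointer to the instance pointer so that they can null it out after release, which is why `&executor`, `&compilation`, and `&model` are passed.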
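Instead of user-supplied buffers, the executor can also work on device shared memory via the OH_NN_Memory functions. A hedged sketch of that variant, replacing the SetInput/SetOutput calls in the workflow above (`executor`, `tensor`, and `input0` are assumed to be set up as in the previous sketch; return codes again elided):

```c
/* Allocate device shared memory for input 0 and output 0 */
size_t length = 12 * sizeof(float);
OH_NN_Memory *inputMemory = OH_NNExecutor_AllocateInputMemory(executor, 0, length);
OH_NN_Memory *outputMemory = OH_NNExecutor_AllocateOutputMemory(executor, 0, length);

/* Copy input data into the shared region and bind both buffers */
memcpy(inputMemory->data, input0, length);
OH_NNExecutor_SetInputWithMemory(executor, 0, &tensor, inputMemory);
OH_NNExecutor_SetOutputWithMemory(executor, 0, outputMemory);

OH_NNExecutor_Run(executor);
/* Results can now be read from outputMemory->data */

/* Release the shared memory before destroying the executor */
OH_NNExecutor_DestroyInputMemory(executor, 0, &inputMemory);
OH_NNExecutor_DestroyOutputMemory(executor, 0, &outputMemory);
```

Because the shared memory is owned by the executor, OH_NNExecutor_DestroyInputMemory and OH_NNExecutor_DestroyOutputMemory must be called with the same index that the memory was allocated for.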