Namespace Unity.Barracuda
Classes
ArrayTensorData
Internal Tensor data backed by a managed array
BarracudaTextureUtils
Deprecated. Use the Tensor.ToRenderTexture method instead
BarracudaWorkerFactory
Deprecated. Use the WorkerFactory class instead
BurstBLAS
Burst-specific BLAS implementation
BurstCPUOps
Burst-specific implementation of IOps
BurstTensorData
Burst-specific internal Tensor data storage
CompareOps
Compares the output of two different implementations of IOps. Useful for debugging purposes
CompareOpsUtils
CompareOps utilities
ComputeInfo
GPU compute info
ComputeOps
GPU compute implementation of IOps
ComputeShaderSingleton
Stores compute kernel cache for GPU compute backends
ComputeTensorData
Tensor data storage for GPU backends
D
Barracuda debug logging utility
DeprecatedTensorDataExtensions
Deprecated ITensorData extensions
DeprecatedTensorExtensions
Deprecated APIs, left here only for backwards compatibility
DeprecatedWorkerExtensions
Deprecated IWorker extensions
GenericWorker
Generic IWorker implementation
JSONTensor
JSON tensor
JSONTensorShape
JSON tensor shape
JSONTestSet
JSON test structure
Layer
Barracuda Model Layer
Model
Neural Net Model data structure
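For example, the following sketch loads a serialized model and enumerates its structure; it assumes that ModelLoader.Load accepts an NNModel asset and that the Model members inputs, outputs and layers exist as described elsewhere on this page.
using UnityEngine;
using Unity.Barracuda;

public class ModelInspectionSample : MonoBehaviour
{
    public NNModel modelAsset; // hypothetical asset reference, assigned in the Inspector

    void Start()
    {
        // deserialize the asset into a run-time Model (ModelLoader.Load(NNModel) is assumed)
        Model model = ModelLoader.Load(modelAsset);

        // enumerate the model structure via the inputs, outputs and layers members
        foreach (var input in model.inputs)
            Debug.Log($"Input: {input.name}");
        foreach (var outputName in model.outputs)
            Debug.Log($"Output: {outputName}");
        Debug.Log($"Layer count: {model.layers.Count}");
    }
}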
Model.ImporterWarning
Importer warning data structure
ModelBuilder
Class responsible for run-time model building from Neural Net primitives.
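For illustration, here is a minimal sketch of building and running a tiny model (input -> ReLU -> output) at run time. The Input/Relu/Output builder methods, the builder.model property and the Tensor constructors used below are assumptions based on this page's summaries rather than verified signatures.
using UnityEngine;
using Unity.Barracuda;

public class RuntimeModelSample : MonoBehaviour
{
    private IWorker worker;

    void Start()
    {
        // assemble a tiny model from primitives: input -> ReLU -> output
        var builder = new ModelBuilder();
        var input = builder.Input("input", new TensorShape(1, 1, 1, 4)); // assumed Input(name, TensorShape) overload
        var relu = builder.Relu("relu", input);                          // assumed Relu(name, input) layer method
        builder.Output(relu);

        // execute the freshly built model on a CPU backend
        worker = WorkerFactory.CreateWorker(WorkerFactory.Type.CSharp, builder.model);
        using (var x = new Tensor(1, 1, 1, 4, new float[] { -1f, 0f, 1f, 2f }))
        {
            var y = worker.Execute(x).PeekOutput();
            Debug.Log($"ReLU of -1 is {y[0, 0, 0, 0]}, ReLU of 2 is {y[0, 0, 0, 3]}");
        }
    }

    void OnDisable()
    {
        worker.Dispose();
    }
}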
ModelExtensions
Extensions for the Model class
ModelLoader
Barracuda Model loader
ModelMetadataExtensions
Model metadata extensions
ModelWriter
Serializes model to binary stream
NNModel
Barracuda Model asset
NNModelData
Barracuda Model data storage
NNModelExtensions
Extensions for the NNModel class
NNModelImporter
Asset Importer for Barracuda models.
ONNXModelImporter
Asset Importer for Open Neural Network Exchange (ONNX) files. For more information about the ONNX file format, see: https://github.com/onnx/onnx
PrecompiledComputeOps
Precompiled GPU compute IOps implementation
RawTestSet
Raw test structure
RecurrentState
Object that represents memory (recurrent state) between executions of a given model.
ReferenceComputeOps
Reference GPU compute IOps implementation
ReferenceCPUOps
Reference CPU implementation of IOps
SharedArrayTensorData
Internal Tensor data backed by a managed array that is shared between multiple tensors
StatsOps
Proxy IOps implementation for tracking the computational cost of a specific model
Tensor
Multidimensional array-like data storage
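For example, a minimal sketch of allocating and indexing a Tensor, assuming the (batch, height, width, channels) constructor and the four-axis indexer:
using UnityEngine;
using Unity.Barracuda;

public class TensorBasicsSample : MonoBehaviour
{
    void Start()
    {
        // allocate a rank-4 (NHWC) tensor: batch=1, height=2, width=2, channels=3
        var t = new Tensor(1, 2, 2, 3);

        // write and read individual elements through the [batch, height, width, channel] indexer
        t[0, 1, 1, 2] = 42f;
        Debug.Log($"shape = {t.shape}, value = {t[0, 1, 1, 2]}");

        // tensors own unmanaged/GPU memory and must be disposed explicitly
        t.Dispose();
    }
}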
TensorExtensions
Tensor extension methods
TestSet
Test set loading utility
TestSetLoader
Test set loader
TextureAsTensorData
Texture-based Tensor storage
UnsafeArrayCPUOps
IOps implementation based on unsafe arrays
UnsafeArrayTensorData
Tensor data storage based on an unsafe array
VerboseOps
Verbose proxy to another IOps implementation
WaitForCompletion
Suspends coroutine execution until the worker has completed execution on a device and the contents of the specified tensor have been downloaded to main CPU memory.
WaitForCompletion is not necessary and should NOT be used unless tensor contents are accessed on the CPU!
WaitForCompletion can only be used with a yield statement in coroutines.
WorkerExtensions
IWorker interface extensions
WorkerFactory
Factory to create a worker that executes the specified model on a particular device (GPU, CPU, etc.) using a particular backend.
See IWorker for usage of the worker itself.
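For example, a minimal sketch of loading a model asset and creating a worker with an explicit backend type; WorkerFactory.Type.Auto is assumed here to let Barracuda pick a suitable backend for the current platform:
using UnityEngine;
using Unity.Barracuda;

public class WorkerCreationSample : MonoBehaviour
{
    public NNModel modelAsset; // hypothetical asset reference, assigned in the Inspector
    private IWorker worker;

    void Start()
    {
        // deserialize the asset, then hand the resulting Model to the factory
        var model = ModelLoader.Load(modelAsset);
        worker = WorkerFactory.CreateWorker(WorkerFactory.Type.Auto, model);
    }

    void OnDisable()
    {
        worker.Dispose();
    }
}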
Structs
Layer.DataSet
Layer param data structure
Model.Input
Input data structure
Model.Memory
Memory data structure. Used by recurrent models to store information about recurrent inputs/outputs
TensorIterator
Helper structure to iterate over tensor shape
TensorShape
TensorShape is an immutable representation of a Tensor's dimensions and rank. At the moment a TensorShape is always of rank 4 and channels-last, i.e. NHWC. However, an axis can be of size 1. For example, a tensor without spatial information will have the shape N,1,1,C.
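For example, a minimal sketch assuming the (batch, height, width, channels) constructor and the batch, height, width, channels and length members:
using UnityEngine;
using Unity.Barracuda;

public class TensorShapeSample : MonoBehaviour
{
    void Start()
    {
        // rank-4, channels-last (NHWC) shape: batch=1, height=224, width=224, channels=3
        var imageShape = new TensorShape(1, 224, 224, 3);

        // a tensor without spatial information keeps the spatial axes at size 1: N,1,1,C
        var featureShape = new TensorShape(1, 1, 1, 128);

        Debug.Log($"{imageShape.batch}x{imageShape.height}x{imageShape.width}x{imageShape.channels}, length = {imageShape.length}");
        Debug.Log(featureShape);
    }
}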
WorkerFactory.WorkerConfiguration
Worker configuration
compareAgainstType
If different from the worker type, the model will be run on both backends and the result of every layer will be compared, checking for divergence. Great for debugging, but very slow because of the sync needed.
verbose
Will log the scheduling of layer execution to the console (default == false).
compareLogLevel
Defines how differences are reported (default == Warning).
compareEpsilon
The maximum tolerance before a difference is reported (default == 0.0001f).
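For illustration, a minimal sketch of filling in these fields; default-constructing the struct, assigning its fields directly and the CreateWorker overload that accepts a WorkerConfiguration are assumptions, not verified API:
using UnityEngine;
using Unity.Barracuda;

public class WorkerConfigurationSample : MonoBehaviour
{
    public NNModel modelAsset; // hypothetical asset reference, assigned in the Inspector
    private IWorker worker;

    void Start()
    {
        var model = ModelLoader.Load(modelAsset);

        // cross-check the chosen backend against the reference CPU backend while debugging
        var config = new WorkerFactory.WorkerConfiguration();
        config.compareAgainstType = WorkerFactory.Type.CSharpRef;
        config.verbose = true;
        config.compareLogLevel = CompareOpsUtils.LogLevel.Warning;
        config.compareEpsilon = 0.0001f;

        // a CreateWorker overload accepting a WorkerConfiguration is assumed here
        worker = WorkerFactory.CreateWorker(WorkerFactory.Type.ComputePrecompiled, model, config);
    }

    void OnDisable()
    {
        worker.Dispose();
    }
}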
Interfaces
BLASPlugin
BLAS plugin interface. Allows supplying a platform-specific implementation of matrix multiplication.
IDependableTensorData
Interface for a device-dependent representation of Tensor data that provides a read fence for scheduling a data-consumer job.
IOps
Interface for backend implementers; see ModelBuilder.cs for details on layers.
ITensorAllocator
Interface for tensor allocators
ITensorData
Interface for a device-dependent representation of Tensor data.
IVars
Interface for variables
IWorker
The main interface to execute neural networks (a.k.a. models).
IWorker abstracts the implementation details associated with the various hardware devices that can execute neural networks (CPU, GPU and, in the future, NPU) and provides a clean and simple interface to:
1) specify inputs, 2) schedule the work and 3) retrieve outputs.
Internally, IWorker translates the description of the neural network provided by a Model instance into a set of operations that are sent to the hardware device for execution in a non-blocking (asynchronous) manner.
The following is a simple example of image classification using a pretrained neural network:
using UnityEngine;
using Unity.Barracuda;

public class ImageRecognitionSample : MonoBehaviour
{
    // small ready to use image classification neural network in ONNX format can be obtained from https://github.com/onnx/models/tree/master/vision/classification/mobilenet
    public NNModel onnxAsset;
    public Texture2D imageToRecognise;

    private IWorker worker;

    void Start()
    {
        worker = onnxAsset.CreateWorker();
    }

    void Update()
    {
        // convert texture into Tensor of shape [1, imageToRecognise.height, imageToRecognise.width, 3]
        using (var input = new Tensor(imageToRecognise, channels:3))
        {
            // execute neural network with specific input and get results back
            var output = worker.Execute(input).PeekOutput();

            // the following line will access values of the output tensor causing the main thread to block until neural network execution is done
            var indexWithHighestProbability = output.ArgMax()[0];
            UnityEngine.Debug.Log($"Image was recognised as class number: {indexWithHighestProbability}");
        }
    }

    void OnDisable()
    {
        worker.Dispose();
    }
}
The following example demonstrates the use of a coroutine to continue smooth app execution while the neural network executes in the background:
using UnityEngine;
using Unity.Barracuda;
using System.Collections;

public class CoroutineImageRecognitionSample : MonoBehaviour
{
    // small ready to use image classification neural network in ONNX format can be obtained from https://github.com/onnx/models/tree/master/vision/classification/mobilenet
    public NNModel onnxAsset;
    public Texture2D imageToRecognise;

    private IWorker worker;

    void Start()
    {
        worker = onnxAsset.CreateWorker();
        StartCoroutine(ImageRecognitionCoroutine());
    }

    IEnumerator ImageRecognitionCoroutine()
    {
        while (true)
        {
            // convert texture into Tensor of shape [1, imageToRecognise.height, imageToRecognise.width, 3]
            using (var input = new Tensor(imageToRecognise, channels:3))
            {
                // execute neural network with specific input and get results back
                var output = worker.Execute(input).PeekOutput();

                // allow main thread to run until neural network execution has finished
                yield return new WaitForCompletion(output);

                var indexWithHighestProbability = output.ArgMax()[0];
                UnityEngine.Debug.Log($"Image was recognised as class number: {indexWithHighestProbability}");
            }

            // wait until a new image is provided
            var previousImage = imageToRecognise;
            while (imageToRecognise == previousImage)
                yield return null;
        }
    }

    void OnDisable()
    {
        worker.Dispose();
    }
}
Use WorkerFactory.CreateWorker or Model.CreateWorker to create a new worker instance.
Enums
BarracudaWorkerFactory.Flags
Device type enum
CompareOpsUtils.LogLevel
CompareOps log level enum
ComputeInfo.ChannelsOrder
Channel order enum
Layer.Activation
Activation enum
Layer.AutoPad
Auto padding enum
Layer.DepthToSpaceMode
Depth to space mode enum
Layer.FusedActivation
Fused activations enum
Layer.Type
Layer Type
TextureAsTensorData.Flip
Flip flag enum
TextureAsTensorData.InterpretColorAs
Interpret color enum
TextureAsTensorData.InterpretDepthAs
Interpret depth as enum
WorkerFactory.Device
Supported device type
WorkerFactory.Type
Backend type