Layer Coverage

Which Layers Run Where

NatML consists of two components: a marshaling engine and an inference engine. The inference engine delegates inference to the platform's machine learning accelerator. This page describes the inference engine's layer coverage across those accelerators.

NatML has full coverage of the ONNX specification and supports all of its operators.

If an accelerator does not support a given layer, inference falls back to the CPU. This work transfer can degrade performance, so choose your models accordingly.
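The fallback behavior can be pictured as a per-layer dispatch: each layer runs on the accelerator when supported, and transfers to the CPU otherwise. The sketch below is illustrative only, not NatML's actual implementation; the support sets and function names are hypothetical.

```python
# Hypothetical support matrix: accelerator name -> set of supported layers.
# The CPU backend covers every layer, so it always serves as the fallback.
SUPPORTED = {
    "cpu": {"Conv2d", "ReLU", "GELU", "MaxPool2d", "LSTM"},
    "nnapi": {"Conv2d", "ReLU", "MaxPool2d"},
}

def run_layer(layer: str, accelerator: str) -> str:
    """Return the device a layer actually executes on."""
    if layer in SUPPORTED.get(accelerator, set()):
        return accelerator
    # Unsupported on the accelerator: transfer the work to the CPU.
    return "cpu"

def plan(model_layers: list[str], accelerator: str) -> dict[str, str]:
    """Map each layer in a model to its execution device."""
    return {layer: run_layer(layer, accelerator) for layer in model_layers}
```

For example, `plan(["Conv2d", "GELU"], "nnapi")` keeps `Conv2d` on the accelerator but places `GELU` on the CPU; each such transfer is where the performance cost arises.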

Convolution

| Layer | CPU | Core ML (iOS, macOS) | NNAPI (Android) | DirectML (Windows) |
| --- | --- | --- | --- | --- |
| Conv1d | ✔ | | | |
| Conv2d | ✔ | | | |
| Conv3d | ✔ | | | |
| ConvTranspose1d | ✔ | | | |
| ConvTranspose2d | ✔ | | | |
| ConvTranspose3d | ✔ | | | |

Pooling

| Layer | CPU | Core ML (iOS, macOS) | NNAPI (Android) | DirectML (Windows) |
| --- | --- | --- | --- | --- |
| AvgPool1d | ✔ | | | |
| AvgPool2d | ✔ | | | |
| AvgPool3d | ✔ | | | |
| MaxPool1d | ✔ | | | |
| MaxPool2d | ✔ | | | |
| MaxPool3d | ✔ | | | |

Activations

| Layer | CPU | Core ML (iOS, macOS) | NNAPI (Android) | DirectML (Windows) |
| --- | --- | --- | --- | --- |
| Sigmoid | ✔ | | | |
| Tanh | ✔ | | | |
| ReLU | ✔ | | | |
| ELU | ✔ | | | |
| ReLU6 | ✔ | | | |
| GELU | ✔ | | | |
| LeakyReLU | ✔ | | | |
| PReLU | ✔ | | | |
| Softplus | ✔ | | | |
| Softmax | ✔ | | | |
| Softsign | ✔ | | | |

Normalization

| Layer | CPU | Core ML (iOS, macOS) | NNAPI (Android) | DirectML (Windows) |
| --- | --- | --- | --- | --- |
| BatchNorm1d | ✔ | | | |
| BatchNorm2d | ✔ | | | |
| BatchNorm3d | ✔ | | | |
| InstanceNorm1d | ✔ | | | |
| InstanceNorm2d | ✔ | | | |
| InstanceNorm3d | ✔ | | | |
| LayerNorm | ✔ | | | |
| LocalResponseNorm | ✔ | | | |

Recurrent

| Layer | CPU | Core ML (iOS, macOS) | NNAPI (Android) | DirectML (Windows) |
| --- | --- | --- | --- | --- |
| RNN | ✔ | | | |
| LSTM | ✔ | | | |
| GRU | ✔ | | | |