Authoring Predictors

Implementing Custom Models

In NatML, predictors are lightweight primitives that make predictions with one or more models. Predictors play a crucial role in working with ML models because they serve two primary purposes:

  • Predictors provide models with the exact input data they need.

  • Predictors convert model outputs to a form that is usable by developers.

This page describes how you can write custom predictors for your ML models.

Defining Predictors

All predictors must implement the IMLPredictor<TOutput> interface. The interface has a single generic type argument, TOutput, which is the developer-friendly type returned when a prediction is made. For example, the MLClassificationPredictor class uses a tuple for its output type:

// The classification predictor returns a tuple
class MLClassificationPredictor : IMLPredictor<(string label, float confidence)> { ... }

The IMLPredictor interface resides in the NatSuite.ML.Internal namespace.
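Putting these requirements together, a minimal predictor skeleton might look like this (the CustomPredictor name and tuple output type are illustrative, not part of the NatML API):

```csharp
using System;
using NatSuite.ML;
using NatSuite.ML.Internal;

/// <summary>
/// Illustrative predictor skeleton.
/// </summary>
public sealed class CustomPredictor : IMLPredictor<(string label, float confidence)> {

    // Keep a reference to the model, typed as `IMLModel`
    private readonly IMLModel model;

    /// <summary>
    /// Create a custom predictor.
    /// </summary>
    /// <param name="model">ML model used to make predictions.</param>
    public CustomPredictor (MLModel model) => this.model = model;

    /// <summary>
    /// Make a prediction with the model.
    /// </summary>
    /// <param name="inputs">Input features.</param>
    public (string label, float confidence) Predict (params MLFeature[] inputs) {
        // Check inputs, create native features, predict, then marshal the outputs
        throw new NotImplementedException();
    }

    // Hide `Dispose` because this predictor has no explicitly-managed resources
    void IDisposable.Dispose () { }
}
```

Each of these members is described in the sections below.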

Writing Constructors

All predictors must define one or more constructors that accept one or more MLModel instances, along with any other supplemental data needed to make predictions with the model(s). For example:

/// <summary>
/// Create a custom predictor.
/// </summary>
/// <param name="model">ML model used to make predictions.</param>
public CustomPredictor (MLModel model) { ... }

Within the constructor, the predictor should store a readonly reference to the model(s). The type of this reference should be IMLModel, instead of MLModel:

// Keep a reference to the model
private readonly IMLModel model;

The IMLModel interface is implemented by the MLModel class, and exposes a hidden Predict method for making predictions with the model.
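A constructor that satisfies both requirements might look like this (a sketch; the field assignment works because the MLModel class implements IMLModel):

```csharp
// Keep a reference to the model, typed as `IMLModel`
private readonly IMLModel model;

/// <summary>
/// Create a custom predictor.
/// </summary>
/// <param name="model">ML model used to make predictions.</param>
public CustomPredictor (MLModel model) => this.model = model;
```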

Making Predictions

All predictors must implement a public Predict method which accepts a params MLFeature[] and returns a TOutput, for example:

/// <summary>
/// Make a prediction with the model.
/// </summary>
/// <param name="inputs">Input features.</param>
/// <returns>Output label with unnormalized confidence value.</returns>
public (string label, float confidence) Predict (params MLFeature[] inputs);

Within the Predict method, the predictor should do three things:

Input Checking

The predictor should check that the client has provided the correct number of input features, and that the features have the model's expected types.

If these checks fail, an appropriate exception should be thrown instead of returning an uninitialized output.
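For example, a predictor that expects a single image feature might perform its checks like this (a sketch; the expected feature type depends on your model):

```csharp
// Check that the client provided exactly one input feature
if (inputs.Length != 1)
    throw new ArgumentException(@"Predictor expects a single feature", nameof(inputs));
// Check that the input feature has the expected type
if (!(inputs[0] is MLImageFeature imageFeature))
    throw new ArgumentException(@"Predictor expects an image feature", nameof(inputs));
```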


Prediction

To make predictions, the predictor must create native features from the input features. All MLFeature instances implement the IMLFeature interface, which exposes a single Create method for creating native features that can be used for prediction. A native feature must be created with a corresponding MLFeatureType that defines how it is created. You will typically use the input feature types reported by the model for this:

// Get or create the desired native feature type
MLFeatureType inputType = model.inputs[0];
// Get the input feature and cast it to `IMLFeature`
IMLFeature inputFeature = inputs[0] as IMLFeature;
// Create a native feature
IntPtr nativeFeature = inputFeature.Create(inputType);

Once you have created a native feature, you can then make predictions with a model:

// Make a prediction with one or more native input features
IntPtr[] outputFeatures = model.Predict(nativeFeature);

Make sure to release the native input features when you are done with them:

// Release the native input feature
nativeFeature.ReleaseFeature(); // NOTE: assumed release method; consult the NatML internal API

Marshaling

Once you have native output features from the model, you can marshal the feature data into a more developer-friendly type. This is where most of the heavy lifting in a predictor happens:

// Marshal the output feature data into a developer-friendly type
float* rawData = (float*)outputFeatures[0].FeatureData(); // requires an `unsafe` context
// Do stuff with this data...
// And when you're done, release the output feature
outputFeatures[0].ReleaseFeature(); // NOTE: assumed release method; consult the NatML internal API
Finally, return your predictor's output:

// Create the prediction result from the output data
TOutput result = ...;
// Return it
return result;

Disposing Predictors

All predictors must define a Dispose method, because IMLPredictor inherits from the IDisposable interface. This method should be used to dispose any explicitly-managed resources used by the predictor.

The predictor must not Dispose any models provided to it. This is the responsibility of the client.
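For example, a predictor that allocates a reusable pixel buffer might dispose it like this (the pixelBuffer field is hypothetical):

```csharp
/// <summary>
/// Dispose the predictor and release its resources.
/// </summary>
public void Dispose () => pixelBuffer.Dispose(); // e.g. a `NativeArray<byte>` field
```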

If a predictor does not have any explicitly-managed resources to dispose, then the predictor should hide the Dispose method using interface hiding:

// Hide the `Dispose` method so that clients cannot use it directly
void IDisposable.Dispose () { }