Distributing Predictors

Sharing with the World

Predictors are designed to be shared. Whether you choose to open-source your predictor or sell it commercially, here are some general guidelines:

Packaging Predictors

We highly recommend packaging a predictor with the following layout:

model-name/ // Package name should be lowercase and dasherized
├─ Runtime/
│ ├─ MLPackage.asmdef // Assembly definition for your package scripts
│ ├─ Predictor.cs // Model predictor
│ ├─ ...
├─ Sample/
│ ├─ example.unity // Example scene demonstrating model
│ ├─ ...
├─ README.md // Readme explaining how the predictor is used
├─ LICENSE.md // License if applicable

Your package assembly definition should reference NatSuite.ML for access to NatML classes and interfaces.
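As a sketch, MLPackage.asmdef from the layout above might contain the following. The assembly name is yours to choose; only the reference to NatSuite.ML is required:

```json
{
    "name": "MLPackage",
    "references": [
        "NatSuite.ML"
    ]
}
```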

You can use NatML Hub to generate a template predictor package that already has this layout, saving you time.

Publishing on Hub

All public predictors on NatML Hub must pass a review process to ensure that they meet developer experience and performance standards. Below are the criteria used in the review process:

Developer Experience

The foundational principle in designing the developer experience is to reduce cognitive load. The developer should not have to learn many new concepts (ideally, none) in order to use your predictor.

Keep the number of public methods in your predictor to a minimum. Ideally, there should be only one public method: Predict.
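A predictor with this shape might look like the following sketch. The class name and output type are illustrative, and the IMLPredictor interface usage (including its IDisposable requirement) is an assumption based on typical NatML predictors:

```csharp
using NatSuite.ML;

// Hypothetical predictor; the name and output type are illustrative
public sealed class ExamplePredictor : IMLPredictor<(string label, float confidence)> {

    private readonly MLModel model;

    // Create the predictor from a deserialized model
    public ExamplePredictor (MLModel model) => this.model = model;

    // The one public method developers need to learn
    public (string label, float confidence) Predict (params MLFeature[] inputs) {
        // Run the model on the input features and decode the result.
        // Decoding is model-specific and omitted in this sketch.
        throw new System.NotImplementedException();
    }

    // Release any predictor-owned resources
    void System.IDisposable.Dispose () { }
}
```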

The README should be the entrypoint for developers. In keeping with the considerations above, the README should quickly demonstrate how the predictor is used, with code snippets.

Many developers will not read a long README, so keeping it short and focused increases the chances that they actually read it.

Sample Code

INCOMPLETE.

API Design

NatML predictors have a typical usage pattern:

  1. Create the predictor.

  2. Call Predict with one or more features.

  3. Use the output(s) directly, or call a post-processing method on the output(s).
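Concretely, the pattern usually looks something like this sketch. The Hub tag, predictor class, and feature variable here are placeholders, and the MLModelData.FromHub call is assumed from NatML's usual model-fetching flow:

```csharp
// 1. Create the predictor (model fetched from NatML Hub)
var modelData = await MLModelData.FromHub("@author/some-model");
var predictor = new SomePredictor(modelData.Deserialize());
// 2. Call Predict with one or more features
var output = predictor.Predict(feature);
// 3. Use the output directly, or post-process it
Debug.Log(output);
```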

Predictors must not deviate from this usage pattern. Specifically, the predictor must not have any public methods for feature pre- or post-processing.

Furthermore, all predictors must be compatible with MLAsyncPredictor. This is critical because developers might need to run predictions asynchronously to preserve their app's frame rate. The implication of this requirement is that the predictor's Predict method must not use any Unity APIs that cannot be called from background threads.
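For example, a developer should be able to wrap your predictor with NatML's ToAsync extension and predict off the main thread (sketch; the feature variable is a placeholder):

```csharp
// Wrap the predictor so predictions run on a worker thread
var asyncPredictor = predictor.ToAsync();
// Await predictions without blocking the render loop
var output = await asyncPredictor.Predict(feature);
// Dispose the async predictor when done
asyncPredictor.Dispose();
```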

Due to this threading restriction, familiar Unity types like Texture2D, RenderTexture, ComputeShader, and the job system should not be used in your predictor's Predict method.

If your predictor requires pre-processing on the main thread, you should instead create a CustomFeature class which derives from MLFeature and IMLFeature.

If your predictor requires further post-processing before the outputs can be used, then your predictor should return an instance of an inner class. This inner class should expose a method to perform the required post-processing. This is a common pattern for computer vision predictors that output an image:

// Predictor outputs an inner class
Predictor.Output output = predictor.Predict(...);
// The developer then performs post-processing on the output
RenderTexture result = ...;
output.PostProcessIntoRenderTexture(result);

One advantage of this pattern is that the developer can run your post-processing code on the main thread, giving you full access to Unity APIs.

Finally, all public methods must be annotated with XML documentation. This is critical for developers to know how to use different methods in your classes.

Most code editors have intellisense which automatically display the XML docs to the developer. This significantly increases developer productivity.
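For example, a documented Predict method might look like this (the signature and return type are illustrative):

```csharp
/// <summary>
/// Make a prediction on one or more input features.
/// </summary>
/// <param name="inputs">Input features.</param>
/// <returns>Predicted label along with its confidence score.</returns>
public (string label, float confidence) Predict (params MLFeature[] inputs) { ... }
```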

Performance

Predictors should be written for maximum performance and minimal overhead. This is especially important because NatML Hub does not measure the time taken by the predictor; it only measures the time taken by the MLModel itself. If your predictor adds significant overhead on top of the model's inference time, developers may notice the discrepancy, raise issues with your predictor, and leave negative reviews.

Check out performance considerations to keep in mind when writing predictors.

Predictors, along with any pre- or post-processors, must not use any performance-degrading APIs that might have significant adverse effects on the entire app.

Predictor packages that use any of the following will be rejected immediately:

  • GPU readbacks (Texture2D.ReadPixels, ComputeBuffer.GetData)

  • Disk IO (System.IO.File methods)
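If you genuinely need pixel data from the GPU for pre- or post-processing, a non-blocking alternative on supported platforms is Unity's AsyncGPUReadback (sketch; the render texture and downstream handling are placeholders):

```csharp
using UnityEngine.Rendering;

// Request pixel data without stalling the CPU on the GPU
AsyncGPUReadback.Request(renderTexture, 0, request => {
    if (!request.hasError) {
        var pixels = request.GetData<byte>(); // NativeArray<byte>
        // Hand the data off for processing here
    }
});
```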