Compiling AI Functions
Function’s raison d’être 🗽
Function is primarily designed to compile AI inference functions to run on-device. We will walk through the general workflow required to compile these functions.
Defining an AI Function
Let’s begin with a function that classifies an image, returning the label along with a confidence score. To do so, we will use the MobileNet v2 model from torchvision:
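Below is a minimal sketch of such a function. The function name (predict), its signature, and the label lookup through the torchvision weights metadata are illustrative choices rather than requirements:

```python
from PIL import Image
import torch
from torchvision.models import mobilenet_v2, MobileNet_V2_Weights

# Load the pretrained model and its matching preprocessing transforms
weights = MobileNet_V2_Weights.DEFAULT
model = mobilenet_v2(weights=weights).eval()
preprocess = weights.transforms()

def predict(image: Image.Image) -> tuple[str, float]:
    """Classify an image, returning the top label and its confidence score."""
    # Preprocess the image into a batched tensor
    tensor = preprocess(image).unsqueeze(0)
    # Run inference without tracking gradients
    with torch.inference_mode():
        logits = model(tensor)
    # Convert logits to probabilities and take the top prediction
    probabilities = logits.softmax(dim=1)
    confidence, index = probabilities[0].max(dim=0)
    label = weights.meta["categories"][index.item()]
    return label, confidence.item()
```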
The code above has nothing to do with Function. It is plain PyTorch code.
Compiling the AI Function
There are a few steps needed to prepare an AI function for compilation:
1. Decorating the function for compilation.
2. Defining the compiler sandbox.
3. Specifying an inference backend.
In this section, the required changes to the code above are highlighted.
Decorating the Function
First, apply the @compile decorator to the function to prepare it for compilation:
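A sketch of the decorated function is shown below. The tag and description values are placeholders, and the decorator parameters shown are assumptions based on the Function Python SDK; substitute your own Function username in the tag:

```python
from fxn import compile
from PIL import Image

@compile(
    tag="@username/image-classifier",   # placeholder tag
    description="Classify an image with MobileNet v2."
)
def predict(image: Image.Image) -> tuple[str, float]:
    ...  # same body as above
```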
Defining the Compiler Sandbox
Depending on how you run AI inference, you will likely have to install libraries (e.g. PyTorch) and/or upload model weights. To do so, create a Sandbox:
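Here is a sketch, assuming the Sandbox builder exposes a pip_install method for adding Python dependencies (check the SDK reference for the exact sandbox API):

```python
from fxn import compile, Sandbox
from PIL import Image

@compile(
    tag="@username/image-classifier",
    description="Classify an image with MobileNet v2.",
    # Install the libraries the function needs inside the compiler sandbox
    sandbox=Sandbox().pip_install("torch", "torchvision", "Pillow")
)
def predict(image: Image.Image) -> tuple[str, float]:
    ...  # same body as above
```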
Specifying an Inference Backend
Let’s use the ONNXRuntime inference backend to run the AI model:
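The sketch below passes inference metadata to the decorator; the metadata class name, its import path, and its constructor arguments are assumptions for illustration, so consult the supported backends list below for the actual types:

```python
import torch
from fxn import compile, Sandbox
from PIL import Image
# NOTE: hypothetical class name and import path, used for illustration only;
# see the Function docs for the exact ONNXRuntime metadata type.
from fxn.beta import OnnxRuntimeInferenceMetadata

@compile(
    tag="@username/image-classifier",
    description="Classify an image with MobileNet v2.",
    sandbox=Sandbox().pip_install("torch", "torchvision", "Pillow"),
    metadata=[
        # Ask the compiler to lower `model` inference through ONNXRuntime,
        # tracing the model with an example input tensor
        OnnxRuntimeInferenceMetadata(
            model=model,
            model_args=[torch.randn(1, 3, 224, 224)]
        )
    ]
)
def predict(image: Image.Image) -> tuple[str, float]:
    ...  # same body as above
```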
Compiling the Function
Now, compile the function using the Function CLI:
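Assuming the code above is saved in classifier.py, the invocation would look roughly like the following (run fxn --help to confirm the exact command and flags):

```bash
# Compile the decorated function (the module path is a placeholder)
fxn compile classifier.py
```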
Inference Backends
Function supports a fixed set of backends for running AI inference. You must opt in to using an inference backend for a specific model by providing inference metadata. The provided metadata will allow the Function compiler to lower the inference operation to native code.
Supported Backends
Below are supported inference metadata types:
A single model can be lowered to use multiple inference backends. Simply provide multiple metadata instances that refer to the model, as in the sketch below.
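As a rough sketch, with hypothetical metadata class names standing in for whichever backends you target:

```python
import torch

# Both class names below are hypothetical placeholders for illustration;
# use the actual metadata types listed in the supported backends table.
example_input = torch.randn(1, 3, 224, 224)
metadata = [
    OnnxRuntimeInferenceMetadata(model=model, model_args=[example_input]),
    CoreMLInferenceMetadata(model=model, model_args=[example_input]),
]
```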
Request a Backend
We are always looking to add support for new inference backends. If there is an inference backend you would like to see supported in Function, please reach out to us.