Because Function runs all prediction functions locally, you can build realtime AI experiences that run an AI model repeatedly at interactive rates, often at 30 or 60 frames per second.

Using realtime mode on fxn.ai

Making Predictions in Realtime

Before calling fxn.predictions.create on every frame, you must first ensure that the predictor has been preloaded on the current device.

Failing to preload a predictor before using it in realtime will cause your Function client to make network requests on every frame in an attempt to load the predictor. This will lead to your app hanging or crashing.

Preloading the Predictor

To preload a predictor, make a prediction and pass in empty inputs:
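A minimal sketch of this, assuming a Function client instance `fxn` and using the object detector tag from the example below, might look like the following. The call is wrapped in `try`/`catch` because the prediction is expected to fail:

```javascript
// Preload sketch: forces the client to fetch and initialize the predictor.
// The predictor tag and `fxn` client instance here are illustrative.
async function preloadPredictor (fxn, tag) {
    try {
        // Empty inputs: the prediction will fail, but the predictor
        // gets fetched and initialized as a side effect.
        await fxn.predictions.create({ tag, inputs: { } });
    } catch (error) {
        // Expected failure due to missing inputs; safe to ignore.
    }
}
```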

This works by forcing the Function client to fetch and initialize the predictor. The prediction itself will fail because the inputs are missing, but the resulting error can be safely ignored.

Making Realtime Predictions

After preloading the predictor, you can then make predictions in realtime using your app’s update loop, or other similar mechanisms.

async function startPredicting () {
    // Preload the predictor
    await fxn.predictions.create({
        tag: "@vision-co/object-detector",
        inputs: { }
    });
    // Make predictions in realtime
    while (true)
        doPredictions();
}
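In a browser, the unbounded `while` loop above is typically replaced by scheduling one prediction per rendered frame. A sketch, assuming a hypothetical `predictFrame` function that makes a single prediction:

```javascript
// Realtime loop sketch: runs one prediction per frame. Uses
// requestAnimationFrame when available (browsers), falling back to a
// ~60 fps timer elsewhere. `predictFrame` and `shouldContinue` are
// illustrative names, not part of the Function API.
function startRealtimeLoop (predictFrame, shouldContinue = () => true) {
    const schedule = typeof requestAnimationFrame === "function"
        ? requestAnimationFrame
        : (callback) => setTimeout(callback, 1000 / 60);
    async function tick () {
        if (!shouldContinue())
            return;
        await predictFrame(); // one prediction per frame
        schedule(tick);       // schedule the next frame
    }
    schedule(tick);
}
```

Pacing the loop this way keeps the app responsive, since each prediction is awaited before the next frame is scheduled instead of saturating the main thread.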

Performance Considerations

Function automatically optimizes the runtime performance of predictors on a given device by leveraging aggregated performance data. While this leaves developers with little direct control over performance, there are several ways to ensure a smooth user experience in your application: