# Building realtime AI experiences

This guide covers using realtime mode on fxn.ai.
Before calling `fxn.predictions.create` on every frame, you must first ensure that the predictor has been preloaded on the current device. Skipping this step will result in the Function client making network requests on every frame in an attempt to initially load the predictor. This will lead to your app hanging and crashing.

To preload a predictor, create a prediction with empty inputs. This will cause the Function client to fetch and initialize the predictor. The empty inputs will cause the prediction to fail due to missing inputs, but this error can be safely ignored.
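The preload pattern described above can be sketched as follows. The client below is a stand-in for a real Function client (the stub class, the `preloadPredictor` helper, and the predictor tag are assumptions made for illustration); the point is the pattern itself: create one prediction with empty inputs at startup and ignore the resulting missing-input error.

```typescript
// Stand-in for a Function client: mirrors the `fxn.predictions.create`
// call shape so the example is self-contained and runnable.
interface Prediction { results?: unknown[]; error?: string }

class StubFunctionClient {
  private loaded = new Set<string>();
  predictions = {
    create: async (args: { tag: string; inputs: Record<string, unknown> }): Promise<Prediction> => {
      // Fetching and initializing the predictor happens on first use.
      this.loaded.add(args.tag);
      if (Object.keys(args.inputs).length === 0)
        return { error: "Prediction failed due to missing inputs" }; // expected during preload
      return { results: [args.inputs] };
    }
  };
  isLoaded = (tag: string) => this.loaded.has(tag);
}

// Preload once at startup: the empty inputs make the prediction fail,
// but the predictor gets fetched and initialized as a side effect.
async function preloadPredictor (fxn: StubFunctionClient, tag: string): Promise<void> {
  try {
    await fxn.predictions.create({ tag, inputs: {} });
  } catch {
    // Expected missing-input failure; safe to ignore.
  }
}
```

Once the preload has completed, per-frame calls to `fxn.predictions.create` run against the already-initialized predictor instead of triggering network requests.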
## Overriding the Acceleration Type
| Acceleration | Notes |
| --- | --- |
| `Acceleration.CPU` | Use the CPU to accelerate predictions. This is always enabled. |
| `Acceleration.GPU` | Use the GPU to accelerate predictions. |
| `Acceleration.NPU` | Use the neural processor to accelerate predictions. |
Use the `acceleration` option to specify the acceleration used to power predictions: `acceleration: Acceleration.GPU | Acceleration.NPU`.
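Because acceleration values can be combined with `|`, they behave like bit flags. The sketch below assumes flag-style numeric values (the actual values used by the Function client are not specified here) to show how a combined request like `Acceleration.GPU | Acceleration.NPU` can be built and inspected.

```typescript
// Assumed bit-flag values so acceleration types can be OR-ed together.
enum Acceleration {
  CPU = 1 << 0, // always enabled
  GPU = 1 << 1,
  NPU = 1 << 2,
}

// Hypothetical preload options: `acceleration` is only honored at
// predictor load time and ignored on later predictions.
const preloadOptions = {
  tag: "@user/predictor",
  inputs: {},
  acceleration: Acceleration.GPU | Acceleration.NPU,
};

// Check whether a given acceleration type was requested.
function requests (acceleration: number, type: Acceleration): boolean {
  return (acceleration & type) !== 0;
}
```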
Note that the `acceleration` only applies when preloading a predictor. Once a predictor has been loaded, the `acceleration` is ignored.

The `acceleration` is merely a hint, which the Function client will try its best to honor. Setting an `acceleration` does not guarantee that all, or any, operations in the prediction function will actually use that acceleration type.

## Specifying the Acceleration Device
| OS | Device type | Notes |
| --- | --- | --- |
| Android | - | Currently unsupported. |
| iOS | `id<MTLDevice>` | Metal device. |
| Linux | `int*` | CUDA device ID pointer. |
| macOS | `id<MTLDevice>` | Metal device. |
| visionOS | `id<MTLDevice>` | Metal device. |
| Web | `GPUDevice` | WebGPU device. |
| Windows | `ID3D12Device*` | DirectX 12 device. |
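The table above maps each OS to a different native handle type. One way to model that in TypeScript is a discriminated union; the union below is a sketch of the table (the field names are assumptions), not the client's actual API.

```typescript
// Discriminated union over the per-OS device handles from the table.
// Handle types are opaque here; on each platform they would be the
// native handle (id<MTLDevice>, CUDA device ID, GPUDevice, ID3D12Device*).
type AccelerationDevice =
  | { os: "ios" | "macos" | "visionos"; metalDevice: unknown }
  | { os: "linux"; cudaDeviceId: number }
  | { os: "web"; gpuDevice: unknown }
  | { os: "windows"; d3d12Device: unknown };

function describeDevice (device: AccelerationDevice): string {
  switch (device.os) {
    case "linux":   return `CUDA device ${device.cudaDeviceId}`;
    case "web":     return "WebGPU device";
    case "windows": return "DirectX 12 device";
    default:        return "Metal device"; // iOS, macOS, visionOS
  }
}
```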
Note that the `device` only applies when preloading a predictor. Once a predictor has been loaded, the `device` is ignored.

The `device` is merely a hint, which the Function client will try its best to honor. Setting a `device` does not guarantee that all, or any, operations in the prediction function will actually use that acceleration device.

## Concurrency with Threading
You can use the Function client across multiple threads.
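When several threads (or async tasks) share one client, a common pitfall is each of them racing to preload the same predictor. Memoizing the in-flight preload promise avoids that; the sketch below is a generic concurrency pattern, not part of the Function API, and the `load` callback stands in for the empty-inputs preload call.

```typescript
// Memoize the in-flight preload per predictor tag so that concurrent
// callers sharing one client trigger only a single load.
class PreloadGuard {
  private inflight = new Map<string, Promise<void>>();

  constructor (private load: (tag: string) => Promise<void>) {}

  preload (tag: string): Promise<void> {
    let pending = this.inflight.get(tag);
    if (!pending) {
      pending = this.load(tag); // first caller starts the load
      this.inflight.set(tag, pending);
    }
    return pending; // later callers await the same promise
  }
}
```

Every caller awaits the same promise, so the predictor is fetched and initialized exactly once no matter how many tasks request it concurrently.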