Function supports creating predictions on serverless containers backed by powerful GPUs.

All predictors on Function support remote predictions by default.

Remote predictions are an experimental feature and may be significantly changed or removed on short notice.

## Making a Remote Prediction

Use the `fxn.beta.predictions.remote.create` method to request a prediction to be created in the cloud:
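Here is a minimal sketch in Python; the predictor tag and inputs below are hypothetical placeholders for illustration:

```python
from fxn import Function

# Create a Function client. If no access key is passed explicitly,
# the client reads your Function access key from the environment.
fxn = Function()

# Request a prediction to be created in the cloud.
# The predictor tag and inputs are placeholders, not a real predictor.
prediction = fxn.beta.predictions.remote.create(
    tag="@username/predictor",
    inputs={ "prompt": "What is the capital of France?" }
)

# Inspect the prediction results
print(prediction.results)
```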

## Leveraging GPU Acceleration

One advantage of remote predictions is access to orders of magnitude more compute than is available on your local device. Function supports specifying a `RemoteAcceleration` when creating remote predictions:
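For example, a sketch of requesting an Nvidia A100 GPU; the import path for `RemoteAcceleration` is assumed here, and the predictor tag and inputs are placeholders as before:

```python
from fxn import Function
from fxn.beta import RemoteAcceleration  # import path assumed

fxn = Function()

# Request that the prediction run on an Nvidia A100 GPU.
# The predictor tag and inputs are hypothetical placeholders.
prediction = fxn.beta.predictions.remote.create(
    tag="@username/predictor",
    inputs={ "prompt": "What is the capital of France?" },
    acceleration=RemoteAcceleration.A100
)

print(prediction.results)
```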

Below are the currently supported `RemoteAcceleration` values:

| Remote Acceleration | Notes |
| ------------------- | ----- |
| `RemoteAcceleration.Auto` | Automatically use the ideal remote acceleration. |
| `RemoteAcceleration.CPU` | Predictions are run on AMD CPU servers. |
| `RemoteAcceleration.A40` | Predictions are run on an Nvidia A40 GPU. |
| `RemoteAcceleration.A100` | Predictions are run on an Nvidia A100 GPU. |

Remote predictions are priced according to the remote acceleration used, per second of prediction time (i.e. `prediction.latency`). See our pricing for more information.

If you want to self-host the remote acceleration servers in your VPC or on-prem, reach out to us at sales@fxn.ai.