Function supports creating predictions on serverless containers backed by powerful GPUs.

All predictors on Function support remote predictions by default.

Remote predictions are an experimental feature and may be significantly altered or removed on short notice.

Making a Remote Prediction

Use the fxn.beta.predictions.remote.create method to request a prediction to be created in the cloud:

import { Function } from "fxnjs";

// 💥 Create your Function client
const fxn = new Function({ accessKey: "..." });

// 🔥 Run the prediction remotely
const prediction = await fxn.beta.predictions.remote.create({
    tag: "@fxn/greeting",
    inputs: { name: "Yusuf" }
});

// 🚀 Print the result
console.log(prediction.results[0]);
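
The returned prediction exposes the prediction outputs through its results array, along with metadata such as prediction.latency. Below is a minimal sketch continuing the example above; the error field used here is an assumption, so check the Prediction type in your version of fxnjs:

// Continuing from the example above.
// NOTE: the error field is assumed to be set on failed predictions; verify against your fxnjs version.
if (prediction.error) {
    console.error(`Prediction failed: ${prediction.error}`);
} else {
    console.log(`Result: ${prediction.results[0]}`);
    console.log(`Latency: ${prediction.latency}`);
}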

Leveraging GPU Acceleration

One advantage of remote predictions is access to orders of magnitude more compute than is available on your local device. Function supports specifying a RemoteAcceleration when creating remote predictions:

// Create a remote prediction on an Nvidia A100 GPU
const prediction = await fxn.beta.predictions.remote.create({
    tag: "@meta/llama-3.1-70b",
    inputs: { ... },
    acceleration: "a100"
});

Below are the currently supported RemoteAcceleration values:

Remote Acceleration          Notes
RemoteAcceleration.Auto      Automatically use the ideal remote acceleration.
RemoteAcceleration.CPU       Predictions are run on AMD CPU servers.
RemoteAcceleration.A40       Predictions are run on an Nvidia A40 GPU.
RemoteAcceleration.A100      Predictions are run on an Nvidia A100 GPU.
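
In fxnjs, the remote acceleration is passed as a plain string in the acceleration field, as in the "a100" example above. The sketch below shows the corresponding string values; only "a100" appears in this guide, so the remaining literals are assumptions inferred from the table:

// Assumed string values corresponding to the table above; only "a100" is confirmed by the example.
type RemoteAcceleration = "auto" | "cpu" | "a40" | "a100";

// Run the prediction on CPU servers (assumed literal)
const acceleration: RemoteAcceleration = "cpu";
const prediction = await fxn.beta.predictions.remote.create({
    tag: "@fxn/greeting",
    inputs: { name: "Yusuf" },
    acceleration
});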

Remote predictions are priced by the remote acceleration, per second of prediction time (i.e. prediction.latency). See our pricing for more information.
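
For example, prediction.latency reports the prediction time you are billed for. A rough cost estimate might look like the sketch below; the per-second rate and the millisecond unit are hypothetical placeholders, so check the pricing page and your fxnjs version for real values:

// Hypothetical per-second rate for illustration only; actual rates depend on the remote acceleration.
const ratePerSecond = 0.005;
// Assumes latency is reported in milliseconds; verify the unit in your fxnjs version.
const billedSeconds = prediction.latency / 1000;
console.log(`Estimated cost: $${(billedSeconds * ratePerSecond).toFixed(4)}`);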

If you want to self-host the remote acceleration servers in your VPC or on-prem, reach out to us.