Spin Serverless AI API
The Spin Serverless AI API lets Spin components run AI inference against built-in language models (such as Llama 2 and CodeLlama) through the Spin SDK's `infer()` function. Each component must declare the models it needs in the `ai_models` field of its `spin.toml` entry; the runtime only grants access to models listed there. The API is supported on Fermyon Cloud and in SpinKube deployments.
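For illustration, here is a minimal sketch of the two pieces described above, using the Rust SDK. The component name `inference-demo` is hypothetical, and the exact `llm` module signatures should be checked against the Spin SDK version in use.

```toml
# spin.toml (fragment): grant this component access to the llama2-chat model.
# A component may only call models listed in its ai_models field.
[component.inference-demo]
ai_models = ["llama2-chat"]
```

```rust
use spin_sdk::http::{IntoResponse, Request, Response};
use spin_sdk::http_component;
use spin_sdk::llm;

/// HTTP-triggered component that forwards the request body to the model
/// as a prompt and returns the generated text.
#[http_component]
fn handle(req: Request) -> anyhow::Result<impl IntoResponse> {
    let prompt = String::from_utf8_lossy(req.body()).to_string();

    // infer() runs inference with the named model; this fails at runtime
    // if the model was not declared in the component's ai_models list.
    let result = llm::infer(llm::InferencingModel::Llama2Chat, &prompt)?;

    Ok(Response::builder()
        .status(200)
        .body(result.text)
        .build())
}
```

Built for the `wasm32-wasi` target and run with `spin up`, this component responds to each HTTP request with the model's completion of the request body.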