llama.cpp Adds gpt-oss-20b Support in May 12 Build


A llama.cpp release on May 12, 2026 added support for running OpenAI's gpt-oss-20b model locally, along with prebuilt binaries for macOS (Apple Silicon and Intel), Linux (Vulkan, ROCm, OpenVINO, SYCL backends), Android, and Windows with CUDA 12.4.
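For readers who want to try it, a minimal quick-start sketch: grab a prebuilt binary for your platform from the release page (or build from source), then point llama.cpp at a GGUF conversion of the model. The Hugging Face repo name below is an assumption based on ggml-org's usual conversion naming, not something stated in this release note; check the release page for the official model link.

```shell
# Serve the model over a local OpenAI-compatible HTTP API.
# -hf downloads and caches the GGUF from Hugging Face on first run
# (repo name is assumed; verify before use).
llama-server -hf ggml-org/gpt-oss-20b-GGUF --port 8080

# Or run a one-off prompt directly in the terminal:
llama-cli -hf ggml-org/gpt-oss-20b-GGUF -p "Hello" -n 64
```

On Apple Silicon the Metal backend is used automatically; on the Linux and Windows builds, pick the binary matching your backend (Vulkan, ROCm, CUDA, etc.) from the release assets.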

Why it matters

Enables local inference of OpenAI's recently released open-weight gpt-oss-20b model, with no cloud API access required

Importance: 1/5

Routine llama.cpp release adding gpt-oss-20b model format support across major platforms.

Sources