LLM (OpenAI)
Configure the agentgateway binary to route requests to the OpenAI chat completions API.
Before you begin
- Install the agentgateway binary. To install the latest release:

  ```shell
  curl -sL https://agentgateway.dev/install | bash
  ```

  To install a specific version, such as version 1.1.0, include the `--version` flag:

  ```shell
  curl -sL https://agentgateway.dev/install | bash -s -- --version 1.1.0
  ```
- Get an OpenAI API key.
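To confirm the binary is on your PATH after installation, you can use a generic POSIX shell check (`command -v` is a standard shell builtin, not an agentgateway command):

```shell
# Check whether the agentgateway binary is reachable on PATH
command -v agentgateway >/dev/null 2>&1 \
  && echo "agentgateway found on PATH" \
  || echo "agentgateway not found; re-run the install script"
```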
Steps
Route to an OpenAI backend through agentgateway.
Step 1: Set your API key
Store your OpenAI API key in an environment variable so agentgateway can authenticate to the API.
```shell
export OPENAI_API_KEY="${OPENAI_API_KEY:-<your-api-key>}"
```

Step 2: Create the configuration
Create a config.yaml that defines an LLM model for OpenAI. This configuration uses the simplified LLM format to route traffic to the OpenAI backend.
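For reference, the models list in this format can carry more than one entry. A sketch of an extended configuration, assuming the same schema fields used in this guide (the second model name is illustrative, not part of this guide's setup):

```yaml
llm:
  models:
    - name: gpt-3.5-turbo
      provider: openAI
      params:
        model: gpt-3.5-turbo
        apiKey: "$OPENAI_API_KEY"
    # hypothetical second entry, following the same schema
    - name: gpt-4o
      provider: openAI
      params:
        model: gpt-4o
        apiKey: "$OPENAI_API_KEY"
```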
```shell
cat > config.yaml << 'EOF'
# yaml-language-server: $schema=https://agentgateway.dev/schema/config
llm:
  models:
    - name: gpt-3.5-turbo
      provider: openAI
      params:
        model: gpt-3.5-turbo
        apiKey: "$OPENAI_API_KEY"
EOF
```

Step 3: Start agentgateway
Run agentgateway with the config file.
```shell
agentgateway -f config.yaml
```

Example output:

```
info state_manager loaded config from File("config.yaml")
info app serving UI at http://localhost:15000/ui
info proxy::gateway started bind bind="bind/4000"
```

Step 4: Send a chat completion request
From another terminal, send a request to the chat completions endpoint.
```shell
curl -s http://localhost:4000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "Say hello in one sentence."}]
  }' | jq .
```

Example output (abbreviated):
```json
{
  "choices": [
    {
      "message": {
        "role": "assistant",
        "content": "Hello! How can I help you today?"
      }
    }
  ]
}
```

Next steps
Check out more guides related to LLM consumption with agentgateway.
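Because the gateway speaks the OpenAI-compatible chat completions format, the request and response shapes from Step 4 can also be handled programmatically. A minimal Python sketch (the helper names are illustrative, not part of agentgateway):

```python
import json

def build_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat completions payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def extract_reply(response: dict) -> str:
    """Pull the assistant's message text out of a chat completions response."""
    return response["choices"][0]["message"]["content"]

# The payload matches what the curl example in Step 4 sends:
payload = build_chat_request("gpt-3.5-turbo", "Say hello in one sentence.")
print(json.dumps(payload))

# Parsing the abbreviated example response from Step 4:
sample = {
    "choices": [
        {"message": {"role": "assistant",
                     "content": "Hello! How can I help you today?"}}
    ]
}
print(extract_reply(sample))
```

Posting the payload to http://localhost:4000/v1/chat/completions (with any HTTP client) and passing the decoded JSON to `extract_reply` mirrors the curl-plus-jq flow above.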