
Multi-Provider Support

Anvil supports multiple LLM providers for tool generation. Use the provider that best fits your needs.

| Provider   | Models        | Default Model            |
| ---------- | ------------- | ------------------------ |
| Anthropic  | Claude family | claude-sonnet-4-20250514 |
| OpenAI     | GPT-4 family  | gpt-4o                   |
| Grok (xAI) | Grok family   | grok-2-latest            |
from anvil import Anvil

# Explicit
anvil = Anvil(provider="anthropic")
# Or implicit (default)
anvil = Anvil()
# Set API key via environment:
# export ANTHROPIC_API_KEY="sk-ant-..."

anvil = Anvil(provider="openai")
# Set API key via environment:
# export OPENAI_API_KEY="sk-..."

anvil = Anvil(provider="grok")
# Set API key via environment:
# export XAI_API_KEY="..."

Override the default model:

# Use a specific Claude model
anvil = Anvil(
    provider="anthropic",
    model="claude-opus-4-20250514",
)

# Use GPT-4 Turbo
anvil = Anvil(
    provider="openai",
    model="gpt-4-turbo",
)

# Use a specific Grok version
anvil = Anvil(
    provider="grok",
    model="grok-2",
)

Each provider looks for a specific environment variable:

# Anthropic
export ANTHROPIC_API_KEY="sk-ant-..."
# OpenAI
export OPENAI_API_KEY="sk-..."
# Grok
export XAI_API_KEY="..."

Pass API keys directly (not recommended for production):

anvil = Anvil(
    provider="anthropic",
    api_key="sk-ant-...",
)

Alternatively, load keys from a .env file with python-dotenv:

from dotenv import load_dotenv

load_dotenv()
anvil = Anvil(provider="openai")  # Uses OPENAI_API_KEY from .env

Anthropic (Claude)

Pros:

  • Excellent code generation
  • Strong reasoning capabilities
  • Good at following complex instructions

Cons:

  • Higher cost for Opus models

Best for:

  • Complex tool generation
  • Tools requiring nuanced understanding
  • Production use

OpenAI (GPT)

Pros:

  • Widely used, well-documented
  • Good function calling support
  • Broad knowledge base

Cons:

  • Rate limits can be restrictive

Best for:

  • Teams already using OpenAI
  • Integration with OpenAI ecosystem

Grok (xAI)

Pros:

  • Real-time information access
  • Unique personality
  • Competitive pricing

Cons:

  • Newer, less battle-tested

Best for:

  • Tools needing current information
  • Experimental use cases
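If the choice between these providers is a deployment decision rather than a code change, one option is to read it from an environment variable. `ANVIL_PROVIDER` below is a hypothetical deployment setting of your own, not an Anvil feature; the default of `"anthropic"` simply reflects the "production use" recommendation above:

```python
import os

def choose_provider(default: str = "anthropic") -> str:
    # ANVIL_PROVIDER is a made-up deployment variable, not part of Anvil;
    # fall back to the default when it is unset.
    return os.environ.get("ANVIL_PROVIDER", default).lower()

# anvil = Anvil(provider=choose_provider())
```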

Anvil accepts various provider name formats:

# All equivalent for Anthropic
Anvil(provider="anthropic")
Anvil(provider="claude")
# All equivalent for OpenAI
Anvil(provider="openai")
Anvil(provider="gpt")
# All equivalent for Grok
Anvil(provider="grok")
Anvil(provider="xai")
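Anvil resolves these aliases internally; the sketch below just illustrates the idea with a lookup table built from the equivalences listed above (the `ALIASES` dict and `normalize_provider` function are illustrative, not Anvil's actual implementation):

```python
# Alias table mirroring the equivalences documented above.
ALIASES = {
    "anthropic": "anthropic", "claude": "anthropic",
    "openai": "openai", "gpt": "openai",
    "grok": "grok", "xai": "grok",
}

def normalize_provider(name: str) -> str:
    """Map any accepted spelling to its canonical provider name."""
    try:
        return ALIASES[name.strip().lower()]
    except KeyError:
        raise ValueError(f"Unknown provider: {name!r}")
```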

See which provider is active:

anvil = Anvil(provider="openai")
print(anvil.mode) # "local"
# The provider is set on the generator
# Access via internal API if needed

For advanced use, access the provider factory:

from anvil.llm import ProviderFactory
# List available providers
providers = ProviderFactory.list_providers()
print(providers) # ["anthropic", "openai", "grok"]
# Create a provider instance
provider = ProviderFactory.get_provider(
    "anthropic",
    api_key="sk-ant-...",
    model="claude-sonnet-4-20250514",
)

# Generate directly
response = provider.generate(
    system="You are a code generator.",
    user="Write a function to add two numbers.",
)
print(response.content)

Anvil doesn’t have built-in fallback, but you can implement it:

def create_anvil_with_fallback():
    providers = ["anthropic", "openai", "grok"]
    for provider in providers:
        try:
            anvil = Anvil(provider=provider)
            # Test with a simple generation
            anvil.use_tool(name="test", intent="Test tool", use_stub=True)
            return anvil
        except Exception:
            continue
    raise RuntimeError("No providers available")

anvil = create_anvil_with_fallback()

Tool generation uses LLM tokens. Consider:

  1. Caching - Tools are cached, so generation happens once
  2. Intent changes - Changing intent triggers regeneration
  3. Self-healing - Each heal attempt uses tokens
  4. Documentation fetching - Larger docs = more tokens
# Use caching effectively
anvil = Anvil()
tool = anvil.use_tool(name="search", intent="Search the web")
# Second call uses cache - no LLM call

# Limit self-healing
anvil = Anvil(max_heal_attempts=1)

# Use stub mode for testing
anvil = Anvil(use_stub=True)  # No LLM calls