# Multi-Provider Support
Anvil supports multiple LLM providers for tool generation. Use the provider that best fits your needs.
## Supported Providers

| Provider | Models | Default Model |
|---|---|---|
| Anthropic | Claude family | claude-sonnet-4-20250514 |
| OpenAI | GPT-4 family | gpt-4o |
| Grok (xAI) | Grok family | grok-2-latest |
## Using Different Providers

### Anthropic (Default)

```python
from anvil import Anvil

# Explicit
anvil = Anvil(provider="anthropic")

# Or implicit (default)
anvil = Anvil()

# Set API key via environment
# export ANTHROPIC_API_KEY="sk-ant-..."
```

### OpenAI

```python
anvil = Anvil(provider="openai")

# Set API key via environment
# export OPENAI_API_KEY="sk-..."
```

### Grok

```python
anvil = Anvil(provider="grok")

# Set API key via environment
# export XAI_API_KEY="..."
```
## Custom Models

Override the default model:

```python
# Use a specific Claude model
anvil = Anvil(
    provider="anthropic",
    model="claude-opus-4-20250514"
)

# Use GPT-4 Turbo
anvil = Anvil(
    provider="openai",
    model="gpt-4-turbo"
)

# Use a specific Grok version
anvil = Anvil(
    provider="grok",
    model="grok-2"
)
```
## API Keys

### Environment Variables

Each provider looks for a specific environment variable:

```shell
# Anthropic
export ANTHROPIC_API_KEY="sk-ant-..."

# OpenAI
export OPENAI_API_KEY="sk-..."

# Grok
export XAI_API_KEY="..."
```
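Since a missing key usually only surfaces when the first request is made, it can help to check up front which keys are present. The following is a minimal sketch using only the standard library; the `first_configured_provider` helper and its mapping are illustrative, not part of Anvil:

```python
import os

# Environment variable each provider reads (per the table above)
ENV_VARS = {
    "anthropic": "ANTHROPIC_API_KEY",
    "openai": "OPENAI_API_KEY",
    "grok": "XAI_API_KEY",
}

def first_configured_provider():
    """Return the first provider whose API key is set, or None."""
    for provider, var in ENV_VARS.items():
        if os.environ.get(var):
            return provider
    return None
```

The result can be passed straight to `Anvil(provider=...)`.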
### Direct Configuration

Pass API keys directly (not recommended for production):

```python
anvil = Anvil(
    provider="anthropic",
    api_key="sk-ant-..."
)
```
### Using .env Files

```python
from dotenv import load_dotenv

load_dotenv()

anvil = Anvil(provider="openai")  # Uses OPENAI_API_KEY from .env
```
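The `.env` file itself is plain `KEY=value` lines; the values below are placeholders and only the key for your chosen provider is required:

```shell
# .env (keep this file out of version control)
ANTHROPIC_API_KEY="sk-ant-..."
OPENAI_API_KEY="sk-..."
XAI_API_KEY="..."
```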
## Provider Comparison

### Anthropic (Claude)

Pros:
- Excellent code generation
- Strong reasoning capabilities
- Good at following complex instructions
Cons:
- Higher cost for Opus models
Best for:
- Complex tool generation
- Tools requiring nuanced understanding
- Production use
### OpenAI (GPT-4)

Pros:
- Widely used, well-documented
- Good function calling support
- Broad knowledge base
Cons:
- Rate limits can be restrictive
Best for:
- Teams already using OpenAI
- Integration with OpenAI ecosystem
### Grok (xAI)

Pros:
- Real-time information access
- Unique personality
- Competitive pricing
Cons:
- Newer, less battle-tested
Best for:
- Tools needing current information
- Experimental use cases
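If you prefer to encode the guidance above in configuration rather than prose, a small lookup works; the `RECOMMENDED` mapping and `pick_provider` helper below are illustrative only, not part of Anvil's API:

```python
# Rough encoding of the comparison above (illustrative only)
RECOMMENDED = {
    "complex-generation": "anthropic",  # nuanced, production tool generation
    "openai-ecosystem": "openai",       # teams already using OpenAI
    "current-information": "grok",      # tools needing real-time data
}

def pick_provider(need, default="anthropic"):
    """Return a provider string suitable for Anvil(provider=...)."""
    return RECOMMENDED.get(need, default)
```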
## Provider Aliases

Anvil accepts various provider name formats:
```python
# All equivalent for Anthropic
Anvil(provider="anthropic")
Anvil(provider="claude")

# All equivalent for OpenAI
Anvil(provider="openai")
Anvil(provider="gpt")

# All equivalent for Grok
Anvil(provider="grok")
Anvil(provider="xai")
```
## Checking Provider

See which provider is active:

```python
anvil = Anvil(provider="openai")
print(anvil.mode)  # "local"

# The provider is set on the generator
# Access via internal API if needed
```
## Provider Factory

For advanced use, access the provider factory:

```python
from anvil.llm import ProviderFactory

# List available providers
providers = ProviderFactory.list_providers()
print(providers)  # ["anthropic", "openai", "grok"]

# Create a provider instance
provider = ProviderFactory.get_provider(
    "anthropic",
    api_key="sk-ant-...",
    model="claude-sonnet-4-20250514"
)

# Generate directly
response = provider.generate(
    system="You are a code generator.",
    user="Write a function to add two numbers."
)
print(response.content)
```
## Fallback Providers

Anvil doesn’t have built-in fallback, but you can implement it:

```python
def create_anvil_with_fallback():
    providers = ["anthropic", "openai", "grok"]

    for provider in providers:
        try:
            anvil = Anvil(provider=provider)
            # Test with a simple generation
            anvil.use_tool(name="test", intent="Test tool", use_stub=True)
            return anvil
        except Exception:
            continue

    raise RuntimeError("No providers available")

anvil = create_anvil_with_fallback()
```
## Cost Considerations

Tool generation uses LLM tokens. Consider:
- Caching - Tools are cached, so generation happens once
- Intent changes - Changing intent triggers regeneration
- Self-healing - Each heal attempt uses tokens
- Documentation fetching - Larger docs = more tokens
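The first two points can be illustrated with a stand-in cache; this is a sketch of the idea only, not Anvil's actual implementation:

```python
# Sketch: generation runs once per (name, intent) pair;
# changing the intent produces a new key and regenerates.
_cache = {}
llm_calls = 0

def use_tool_sketch(name, intent):
    global llm_calls
    key = (name, intent)
    if key not in _cache:
        llm_calls += 1  # tokens are only spent on a cache miss
        _cache[key] = f"<tool {name!r} for {intent!r}>"
    return _cache[key]

use_tool_sketch("search", "Search the web")   # miss: generates
use_tool_sketch("search", "Search the web")   # hit: no LLM call
use_tool_sketch("search", "Search the news")  # intent changed: regenerates
```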
### Cost Optimization

```python
# Use caching effectively
anvil = Anvil()
tool = anvil.use_tool(name="search", intent="Search the web")
# Second call uses cache - no LLM call

# Limit self-healing
anvil = Anvil(max_heal_attempts=1)

# Use stub mode for testing
anvil = Anvil(use_stub=True)  # No LLM calls
```