
Use native Anthropic API for documentation agents when provider=anthropic #53

Open
pruv wants to merge 1 commit into FSoft-AI4Code:main from pruv:main

Conversation


@pruv pruv commented Apr 24, 2026

Summary
Routes pydantic-ai documentation agents through AnthropicModel + AnthropicProvider (Anthropic Messages API) when config.provider is anthropic, instead of always using CompatibleOpenAIModel + OpenAIProvider. Declares the pydantic-ai[anthropic] extra so the Anthropic SDK is installed with the package.

Motivation
With provider: anthropic and Anthropic model IDs, the previous stack still used the OpenAI-compatible client, which led to 404s and confusion when base_url pointed at Anthropic, or when model names were valid for Anthropic but not for a chat.completions endpoint. Agents should use the native Anthropic path that matches the user's configuration.

Changes

codewiki/src/be/llm_services.py
create_main_model / create_fallback_model: if provider (trimmed, lowercased) is anthropic, build AnthropicModel with AnthropicModelSettings (temperature=0, max_tokens from config) and AnthropicProvider (api_key, optional base_url); a sketch follows this list.
_anthropic_api_model_name: strips a leading anthropic/ prefix so LiteLLM-style names still work.
_anthropic_provider_base_url: passes through a normal HTTPS base URL; if llm_base_url looks like a local OpenAI-compatible LiteLLM host (0.0.0.0:4000, localhost:4000, etc.), logs a warning and uses None (Anthropic default host). Documented expectation: use openai-compatible for LiteLLM’s OpenAI port.
Lazy import of Anthropic pydantic-ai modules with a clear ImportError if the extra is missing.
Return type widened to Any for the mixed OpenAI / Anthropic model types.
pyproject.toml
Dependency pydantic-ai>=1.0.6 → pydantic-ai[anthropic]>=1.0.6.
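
For concreteness, here is a minimal sketch of what the new branch and helpers might look like, reconstructed only from the change list above. The Config field names (main_model, llm_api_key, llm_base_url, max_tokens), the LiteLLM host list, and the exact pydantic-ai signatures are assumptions, not the repo's actual code:

```python
# Hypothetical reconstruction of the anthropic branch in
# codewiki/src/be/llm_services.py -- Config field names are assumed.
import logging
from typing import Any
from urllib.parse import urlparse

logger = logging.getLogger(__name__)

_LITELLM_HOSTS = {"0.0.0.0:4000", "localhost:4000", "127.0.0.1:4000"}


def _anthropic_api_model_name(model_name: str) -> str:
    # Strip a LiteLLM-style prefix: "anthropic/claude-sonnet-4-0" -> "claude-sonnet-4-0".
    return model_name.removeprefix("anthropic/")


def _anthropic_provider_base_url(llm_base_url: str | None) -> str | None:
    # Pass a normal HTTPS base URL through; if the URL looks like LiteLLM's
    # local OpenAI-compatible port, warn and fall back to the Anthropic default host.
    if not llm_base_url:
        return None
    if urlparse(llm_base_url).netloc in _LITELLM_HOSTS:
        logger.warning(
            "llm_base_url %r looks like LiteLLM's OpenAI-compatible port; "
            "ignoring it (use provider=openai-compatible for that endpoint).",
            llm_base_url,
        )
        return None
    return llm_base_url


def create_main_model(config: "Config") -> Any:  # widened from CompatibleOpenAIModel
    if (config.provider or "").strip().lower() == "anthropic":
        # Lazy import so the base install works without the extra.
        try:
            from pydantic_ai.models.anthropic import AnthropicModel, AnthropicModelSettings
            from pydantic_ai.providers.anthropic import AnthropicProvider
        except ImportError as exc:
            raise ImportError(
                "provider=anthropic requires: pip install 'pydantic-ai[anthropic]'"
            ) from exc
        return AnthropicModel(
            _anthropic_api_model_name(config.main_model),
            provider=AnthropicProvider(
                api_key=config.llm_api_key,
                # The PR description says AnthropicProvider takes an optional
                # base_url; on versions without it, pass a pre-configured
                # anthropic.AsyncAnthropic(base_url=...) via anthropic_client=.
                base_url=_anthropic_provider_base_url(config.llm_base_url),
            ),
            settings=AnthropicModelSettings(temperature=0, max_tokens=config.max_tokens),
        )
    ...  # existing CompatibleOpenAIModel + OpenAIProvider path, unchanged
```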

Collaborator

@anhnh2002 anhnh2002 left a comment


Overall this is a focused, correct fix for the Anthropic agent path. I'm flagging two issues inline that I think should be addressed before merging (or at least explicitly accepted): inconsistent provider normalization across the file, and a regression in the default fallback model when provider=anthropic.

@@ -106,8 +164,11 @@ def _create_litellm_openai_client(config: Config) -> OpenAI:
)


Collaborator


Provider comparison is inconsistent with the rest of this file. The new branches use a tolerant check — (config.provider or "").strip().lower() == "anthropic" — but the existing checks elsewhere in the same module use raw equality:

  • _create_litellm_openai_client line 95: if config.provider == "bedrock":
  • call_llm line 177: if provider == "azure-openai":
  • _call_llm_via_litellm lines 216/220: if config.provider == "bedrock": / elif config.provider == "anthropic":

This means a config with provider="Anthropic" (or with stray whitespace) will route the agents through the native Anthropic path here, but call_llm will silently fall through to the OpenAI-compatible default — a confusing split. Please normalize once (either pick one rule and apply it everywhere in this file, or normalize at Config construction time so all comparisons see the canonical value).
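One way to implement the "normalize once" option, assuming Config is a pydantic model (if it is a plain class, the same .strip().lower() can run in __init__); the field and default shown here are hypothetical:

```python
from pydantic import BaseModel, field_validator


class Config(BaseModel):
    provider: str = "openai-compatible"  # hypothetical default
    # ... remaining fields elided ...

    @field_validator("provider")
    @classmethod
    def _canonical_provider(cls, v: str) -> str:
        # Downstream checks (call_llm, _call_llm_via_litellm, the new agent
        # branches) can then use plain == against the canonical value.
        return (v or "").strip().lower()
```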



-def create_fallback_model(config: Config) -> CompatibleOpenAIModel:
+def create_fallback_model(config: Config) -> Any:
Collaborator


Behavior change worth calling out: with this branch, the fallback model is also routed through the native Anthropic API. The default fallback in codewiki/src/config.py is FALLBACK_MODEL_1 = os.getenv('FALLBACK_MODEL_1', 'glm-4p5') — a non-Anthropic model. Any user who sets provider=anthropic without explicitly overriding the fallback will now hit a hard Anthropic 4xx for glm-4p5 whenever the main model fails over.

Before this PR the fallback went through the OpenAI-compatible/LiteLLM client and could broker to glm-4p5, so this is a regression for that configuration. Options: (a) validate at Config construction that the fallback model is Anthropic-compatible when provider=anthropic, (b) pick an Anthropic-native default fallback when provider=anthropic, or (c) document the constraint prominently. At minimum I'd flag this in the PR description so users updating to this build know to reset FALLBACK_MODEL_1.
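A sketch of option (a), with the caveat that the claude/anthropic/ prefix heuristic is an assumption and the real check would live in codewiki/src/config.py:

```python
import os


def validate_fallback_model(provider: str, fallback_model: str) -> None:
    # Fail fast at config time instead of surfacing an Anthropic 4xx at failover time.
    if provider == "anthropic" and not fallback_model.startswith(("claude", "anthropic/")):
        raise ValueError(
            f"provider=anthropic cannot serve fallback model {fallback_model!r}; "
            "set FALLBACK_MODEL_1 to an Anthropic model ID or keep an "
            "OpenAI-compatible provider for mixed-vendor failover."
        )


# With the current default this raises, which is exactly the point:
validate_fallback_model("anthropic", os.getenv("FALLBACK_MODEL_1", "glm-4p5"))
```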

