
feat(dotAI): Dot AI LangChain4J - Google Vertex #35241

Draft
ihoffmann-dot wants to merge 8 commits into dot-ai-langchain-amazon-bedrock from dot-ai-langchain-google-vertex

Conversation

@ihoffmann-dot
Member

Summary

Adds Google Vertex AI (Gemini) as a supported chat provider.
Auth is handled via Application Default Credentials — no API key required.

  • Add langchain4j-vertex-ai-gemini dependency (with enforcer exclusions)
  • Add vertex_ai case to LangChain4jModelFactory switch
  • Implement buildVertexAiChatModel using VertexAiGeminiChatModel
  • buildVertexAiEmbeddingModel and buildVertexAiImageModel throw UnsupportedOperationException
  • Add 3 unit tests in LangChain4jModelFactoryTest

Configuration

{
  "chat": {
    "provider": "vertex_ai",
    "projectId": "my-gcp-project",
    "location": "us-central1",
    "model": "gemini-1.5-pro",
    "maxTokens": 8192,
    "temperature": 1.0
  }
}

Notes

  • Authentication uses ADC automatically. The host running dotCMS must have a valid GCP identity configured (gcloud auth application-default login or a service account attached to the instance).
  • No API key is needed or accepted for Vertex AI.
  • Embeddings and image generation are not available for this provider via LangChain4J. Both operations throw UnsupportedOperationException.
  • Two transitive dependencies banned by the project Maven enforcer (org.checkerframework:checker-qual, com.google.android:annotations) are explicitly excluded from the langchain4j-vertex-ai-gemini dependency.
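In dotCMS/pom.xml, the exclusions described above would look roughly like this (a sketch; the version is assumed to be managed elsewhere, e.g. in dependency management or a BOM):

```xml
<dependency>
    <groupId>dev.langchain4j</groupId>
    <artifactId>langchain4j-vertex-ai-gemini</artifactId>
    <exclusions>
        <!-- Transitive dependencies banned by the project Maven enforcer -->
        <exclusion>
            <groupId>org.checkerframework</groupId>
            <artifactId>checker-qual</artifactId>
        </exclusion>
        <exclusion>
            <groupId>com.google.android</groupId>
            <artifactId>annotations</artifactId>
        </exclusion>
    </exclusions>
</dependency>
```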

Related Issue

This PR fixes #35183
EPIC: dotAI Multi-Provider Support #33970

@claude
Contributor

claude bot commented Apr 7, 2026

Claude finished @ihoffmann-dot's task in 2m 4s


Review

  • Read changed files and diff
  • Review pom.xml changes
  • Review LangChain4jModelFactory.java
  • Review ProviderConfig.java
  • Review tests
  • Post findings

1. Design smell: build() positional provider parameters (medium)

LangChain4jModelFactory.java:108-113 — The private build() method now takes one positional Function<ProviderConfig, T> per provider. With 4 providers it's already hard to read, and every future provider adds another argument. The existing design (from the parent PRs) introduced this pattern, but each merge makes it worse. A Map<String, Function<ProviderConfig, T>> registry would remove the coupling between provider count and method arity. Not a blocker, but the structural debt compounds with each new provider.
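A minimal sketch of the registry idea the review proposes. The provider keys and the ProviderConfig name come from this PR; everything else (the class name, the String stand-in for the model type) is hypothetical, purely to show how a Map lookup decouples provider count from method arity:

```java
import java.util.Map;
import java.util.function.Function;

public class ProviderRegistrySketch {

    // Stand-in for the real ProviderConfig; only the provider key matters for the lookup.
    record ProviderConfig(String provider) {}

    // One map entry per provider replaces one positional build() argument per provider.
    // Strings stand in for the real ChatModel types.
    static final Map<String, Function<ProviderConfig, String>> CHAT_BUILDERS = Map.of(
            "open_ai",   cfg -> "openAiChatModel",
            "bedrock",   cfg -> "bedrockChatModel",
            "vertex_ai", cfg -> "vertexAiChatModel");

    static String build(ProviderConfig config) {
        Function<ProviderConfig, String> builder = CHAT_BUILDERS.get(config.provider());
        if (builder == null) {
            throw new IllegalArgumentException("Unknown provider: " + config.provider());
        }
        return builder.apply(config);
    }

    public static void main(String[] args) {
        System.out.println(build(new ProviderConfig("vertex_ai"))); // vertexAiChatModel
    }
}
```

Adding a fifth provider then means one new map entry rather than a fifth positional Function parameter at every call site.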


2. maxRetries missing from streaming builder (bug-level inconsistency)

LangChain4jModelFactory.java:359-368: buildVertexAiChatModel applies config.maxRetries() but buildVertexAiStreamingChatModel silently drops it. If a caller sets maxRetries, it will take effect for chat but be ignored for streaming with no error or warning.

// buildVertexAiChatModel — applies maxRetries ✓
if (config.maxRetries() != null) builder.maxRetries(config.maxRetries());

// buildVertexAiStreamingChatModel — maxRetries silently dropped ✗


3. temperature double→float precision narrowing (minor)

LangChain4jModelFactory.java:354, 365 — Both Vertex AI builders call .temperature(config.temperature().floatValue()). All other providers keep temperature as Double. This is a silent narrowing conversion — e.g., 0.7 stored as a double becomes 0.69999999... as a float. LangChain4J's Vertex AI API accepts float, so the narrowing is required, but the autoboxing round-trip (Double.floatValue()) should at minimum be documented inline if this is intentional.
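The narrowing is easy to demonstrate in isolation (a standalone sketch, not the factory code):

```java
public class TemperatureNarrowing {
    public static void main(String[] args) {
        Double temperature = 0.7;                  // as stored in the config (a Double)
        float narrowed = temperature.floatValue(); // what the Vertex AI builder receives
        double roundTripped = (double) narrowed;

        // 0.7 has no exact binary representation, and the float approximation differs
        // from the double one, so the round trip no longer compares equal to 0.7.
        System.out.println(roundTripped == 0.7d);  // false
        System.out.println(roundTripped);          // prints something like 0.699999988079071
    }
}
```

An inline comment at the two .floatValue() call sites stating that the Vertex AI API requires float would make the intent explicit.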

4. Unit test may fail in CI without GCP credentials (medium)

LangChain4jModelFactoryTest.java:87: test_buildChatModel_vertexAi_returnsModel() calls VertexAiGeminiChatModel.builder().build(). Unlike OpenAI (the builder just stores strings), the Vertex AI Gemini SDK initializes a gRPC VertexAI client at construction time, which resolves ADC credentials eagerly. In a CI environment with no GCP identity, this test is likely to throw a credential-resolution exception rather than returning a non-null model, making it a flaky or broken test.

The existing Bedrock equivalent works because the AWS SDK uses lazy credential resolution; GCP does not follow the same pattern.

5. No test for the streaming path (low)

There is no test_buildStreamingChatModel_vertexAi_* test. Given that buildVertexAiStreamingChatModel has a distinct code path (and item 2 above reveals a divergence from the chat path), at minimum a missingProjectId and missingLocation validation test should exist for the streaming builder.
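The validation those streaming tests would pin down can be sketched independently of the SDK. Everything here is hypothetical (requireConfigured, the message text); it only illustrates the kind of shared check that would make the chat and streaming builders fail identically on a missing projectId or location:

```java
public class VertexAiConfigValidationSketch {

    // Hypothetical helper that both the chat and streaming builders could share,
    // so a missing projectId/location fails the same way on both paths.
    static String requireConfigured(String value, String name) {
        if (value == null || value.isBlank()) {
            throw new IllegalArgumentException(name + " is required for the vertex_ai provider");
        }
        return value;
    }

    public static void main(String[] args) {
        try {
            requireConfigured(null, "projectId");
        } catch (IllegalArgumentException expected) {
            System.out.println(expected.getMessage());
        }
    }
}
```

A streaming-path test would then only need to assert that building with a blank projectId or location throws, without ever constructing a real Vertex AI client.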

6. pom.xml: comment on dependency is verbose (trivial)

dotCMS/pom.xml:516 — The XML comment inside <dependency> is non-standard Maven style and repeats information already in the PR description. Not a problem, just noise.


Summary: Items 2 and 4 are the most actionable — the missing maxRetries in the streaming builder is a silent behavioral divergence, and the unit test that instantiates a real Vertex AI client without credentials is likely broken in CI. Item 1 is a structural concern worth tracking as a follow-up.

@dotCMS dotCMS deleted a comment from github-actions bot Apr 20, 2026