# Vetrag

Lightweight veterinary visit reasoning helper with LLM-assisted keyword extraction and disambiguation.
## Features

- Switch seamlessly between local Ollama and OpenRouter (OpenAI-compatible) LLM backends by changing environment variables only.
- Structured JSON outputs enforced using provider-supported response formats (Ollama `format`, OpenAI/OpenRouter `response_format: { type: "json_object" }`).
- Integration tests using a mock LLM & DB (no network dependency).
- GitHub Actions CI (vet, test, build).
- Gitea Actions CI parity (same steps mirrored under `.gitea/workflows/ci.yml`).
## Quick Start

1. **Clone & build**

   ```sh
   git clone <repo-url>
   cd vetrag
   go build ./...
   ```

2. **Prepare data**

   Ensure `config.yaml` and `maindb.yaml` / `db.yaml` exist as provided. Visit data is loaded at runtime (see `models.go` / `db.go`).

3. **Run with Ollama (local)**

   Pull or have a model available (example: `ollama pull qwen2.5`):

   ```sh
   export OPENAI_BASE_URL=http://localhost:11434/api/chat
   export OPENAI_MODEL=qwen2.5:latest
   # API key not required for Ollama
   export OPENAI_API_KEY=
   go run .
   ```

4. **Run with OpenRouter**

   Sign up at https://openrouter.ai and get an API key.

   ```sh
   export OPENAI_BASE_URL=https://openrouter.ai/api/v1/chat/completions
   export OPENAI_API_KEY=sk-or-XXXXXXXXXXXXXXXX
   export OPENAI_MODEL=meta-llama/llama-3.1-70b-instruct  # or any supported model
   go run .
   ```

   Open http://localhost:8080/ in your browser.

5. **Health & Chat**

   ```sh
   curl -s http://localhost:8080/health
   curl -s -X POST http://localhost:8080/chat -H 'Content-Type: application/json' -d '{"message":"my dog has diarrhea"}' | jq
   ```
## Environment Variables

| Variable | Purpose | Default (if empty) |
|---|---|---|
| `OPENAI_BASE_URL` | LLM endpoint (Ollama chat or OpenRouter chat completions) | `http://localhost:11434/api/chat` |
| `OPENAI_API_KEY` | Bearer token for OpenRouter/OpenAI-style APIs | (unused if empty) |
| `OPENAI_MODEL` | Model identifier (Ollama model tag or OpenRouter model slug) | none (must be set for remote) |
## How Backend Selection Works

`llm.go` auto-detects the style:

- If the base URL contains `openrouter.ai` or `/v1/`, it uses an OpenAI-style request and parses `choices[0].message.content`.
- Otherwise it assumes Ollama and posts to `/api/chat` with `format` for structured JSON.
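The detection rule can be sketched as a small helper (the function name `detectBackend` is illustrative; the real logic lives in `llm.go`):

```go
package main

import "strings"

// detectBackend applies the rule described above: base URLs containing
// "openrouter.ai" or a "/v1/" path segment get OpenAI-style requests;
// everything else is treated as a local Ollama endpoint.
func detectBackend(baseURL string) string {
	if strings.Contains(baseURL, "openrouter.ai") || strings.Contains(baseURL, "/v1/") {
		return "openai"
	}
	return "ollama"
}
```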
## Structured Output

We define a JSON Schema-like map internally and:

- Ollama: send it as `format` (native structured-output extension).
- OpenRouter/OpenAI: send `response_format: { type: "json_object" }` plus a system instruction describing the expected keys.
## Prompts

Prompts in `config.yaml` have been adjusted to explicitly demand JSON only. This reduces hallucinated prose and plays well with both backends.
## Testing

Run:

```sh
go test ./...
```

All tests mock the LLM, so no network is required.
## CI

The GitHub Actions workflow at `.github/workflows/ci.yml` runs vet, tests, and build on push/PR.
## Gitea Actions Support

A mirrored workflow exists at `.gitea/workflows/ci.yml` so self-hosted Gitea instances (with Actions/act_runner) execute the same pipeline:

- Triggers: push to `main`/`master`, pull requests, manual dispatch.
- Steps: checkout, Go setup (from `go-version-file`), vet, test, build.
- Default dummy env vars (override with repository secrets if needed).

To add real secrets in Gitea (names should match those used in code):

- In the repository UI: Settings → Secrets → Actions (menu names may differ slightly by Gitea version).
- Add `OPENAI_API_KEY` (optional, for OpenRouter usage).
- (Optional) Add `OPENAI_MODEL` / `OPENAI_BASE_URL` overrides if you want CI integration tests against a live endpoint (currently tests are mocked and do not require network access).
If you later add integration tests that call a real provider, modify the workflow:

```yaml
env:
  OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
  OPENAI_BASE_URL: https://openrouter.ai/api/v1/chat/completions
  OPENAI_MODEL: meta-llama/llama-3.1-70b-instruct
```

Make sure rate limits and billing are acceptable before enabling real calls.
Runner notes:

- Ensure your Gitea Actions runner image includes Git and a Go toolchain, or relies on `actions/setup-go` (supported in newer Gitea releases with remote action fetching).
- If remote action fetching is restricted, vendor the actions by mirroring their repositories, or replace them with inline shell steps (fallback approach).
## Troubleshooting

| Symptom | Cause | Fix |
|---|---|---|
| Provider error referencing `response_format` and `json_schema` | Some providers reject `json_schema` | We now default to `json_object`; ensure you pulled the latest changes. |
| Empty response | Model returned non-JSON or empty content | Enable debug logs (see below) and inspect the raw response. |
| Non-JSON content (code fences) | Model ignored the instruction | Try a stricter system message or switch to a model with better JSON adherence. |
## Enable Debug Logging

Temporarily edit `main.go`:

```go
logrus.SetLevel(logrus.DebugLevel)
```

(You can also refactor later to read a `LOG_LEVEL` env var.)
## Sanitizing Output (Optional Future Improvement)

If some models wrap JSON in text, a post-processor could strip code fences and re-parse. Not implemented yet, to keep the logic strict.
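Such a post-processor might look like the following minimal sketch (the helpers `stripFences` and `parseLoose` are hypothetical, not part of the current code):

```go
package main

import (
	"encoding/json"
	"strings"
)

// stripFences removes a leading ``` or ```json fence line and a trailing
// ``` fence so the remainder can be re-parsed as JSON.
func stripFences(s string) string {
	s = strings.TrimSpace(s)
	if strings.HasPrefix(s, "```") {
		// Drop the opening fence line (``` or ```json).
		if i := strings.Index(s, "\n"); i >= 0 {
			s = s[i+1:]
		}
		s = strings.TrimSuffix(strings.TrimSpace(s), "```")
	}
	return strings.TrimSpace(s)
}

// parseLoose tries strict parsing first, then falls back to the
// fence-stripped text.
func parseLoose(raw string, v any) error {
	if err := json.Unmarshal([]byte(raw), v); err == nil {
		return nil
	}
	return json.Unmarshal([]byte(stripFences(raw)), v)
}
```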
## Next Ideas

- Add retry with exponential backoff for transient 5xx responses.
- Add an optional `json` fallback if a provider rejects `json_object`.
- Add streaming support.
- Add an integration test with a recorded OpenRouter fixture.
## License

(Choose and add a LICENSE file if planning to open source.)