AI provider configuration and token usage
- Bring‑your‑own LLM: Healenium Pro does not include a built‑in language model or token quota. You choose the provider (e.g. OpenAI, Anthropic) and the model that best fits your needs.
- Where configuration is stored: LLM settings (provider/base URL, model name, API key) are stored on the Healenium Backend side and managed through the integration API and UI.
- How the token is used: the AI service reads only the active LLM configuration and uses the API key exclusively to call the selected LLM provider over HTTPS. The token is never shared with GitHub, never exposed to the frontend, and never forwarded to any third party other than the chosen LLM provider.
- Minimal and efficient usage: the LLM is invoked only for the steps that truly require code understanding (such as validating ambiguous matches or generating an updated file), which keeps latency low and helps you control token consumption and costs.
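To make the token-handling guarantee above concrete, the sketch below builds (but does not send) an HTTPS request of the kind the AI service would issue to the configured provider, with the API key appearing only in the Authorization header of that call. This is an illustrative sketch only: the OpenAI-style endpoint, payload shape, and function name are assumptions for demonstration, not Healenium internals.

```python
import json
import urllib.request

def build_llm_request(base_url: str, model: str, api_key: str,
                      prompt: str) -> urllib.request.Request:
    """Build an HTTPS chat-completion request; the API key is used only
    in the Authorization header sent to the configured provider."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        url=f"{base_url}/chat/completions",   # assumed OpenAI-style path
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_llm_request("https://api.openai.com/v1", "gpt-4o",
                        "sk-example", "Validate this locator match")
assert req.full_url.startswith("https://")   # provider is reached over HTTPS only
```

The key never leaves this single outbound request; nothing in the request targets GitHub or any other third party.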
How to obtain an LLM API key (examples)

- OpenAI (e.g. GPT‑4o, GPT‑5.x)
  1. Sign in to your OpenAI account.
  2. Go to the API keys page in your OpenAI dashboard.
  3. Create a new secret key and copy it.
  4. In Healenium, configure the LLM provider as OpenAI, set the desired model name, and paste this key into the LLM settings.
- Anthropic (e.g. Claude models)
  1. Sign in to the Anthropic console.
  2. Navigate to the API keys section.
  3. Create a new API key and copy it.
  4. In Healenium, configure the LLM provider as Anthropic (using the appropriate base URL), set the model name, and paste this key into the LLM settings.
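The settings the steps above fill in can be pictured as follows. The field names, base URLs, and the validation helper here are hypothetical illustrations of the provider/base URL/model/key fields mentioned in the docs, not Healenium's real settings schema; the key values are placeholders.

```python
# Required fields per the docs: provider, base URL, model name, API key.
REQUIRED_FIELDS = {"provider", "base_url", "model", "api_key"}

def validate_llm_settings(settings: dict) -> dict:
    """Reject settings missing any required field before saving them."""
    missing = REQUIRED_FIELDS - settings.keys()
    if missing:
        raise ValueError(f"missing LLM settings: {sorted(missing)}")
    return settings

openai_settings = validate_llm_settings({
    "provider": "openai",
    "base_url": "https://api.openai.com/v1",
    "model": "gpt-4o",
    "api_key": "sk-placeholder",        # the secret key from the OpenAI dashboard
})

anthropic_settings = validate_llm_settings({
    "provider": "anthropic",
    "base_url": "https://api.anthropic.com",   # provider-specific base URL
    "model": "claude-3-5-sonnet-latest",
    "api_key": "sk-ant-placeholder",    # the key created in the Anthropic console
})
```

Keeping both configurations in the same shape is what lets you switch providers by changing only the base URL, model name, and key.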
In all cases, you remain in full control of which provider and model are used and which limits apply to your account; Healenium only uses the keys you supply to perform the configured AI operations.