Architecture
Healenium Pro combines AI-powered locator handling with GitHub integration so that healed selectors can be validated against your repository and proposed as code changes via pull requests.
What the integration does

Selector detection button
For a given Healenium report, the service:
- Searches the linked repository for code containing the failed locator and builds selector detection results (file paths and, when needed, more precise usage locations).
- Optionally applies AI-based validation to ambiguous candidates (for example, Selenium snippets) to ensure that the locator type and value really match the code fragment.
AI is applied selectively, only where it is needed to make a reliable decision, which reduces latency and avoids unnecessary token consumption.
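The detection flow described above can be sketched roughly as follows. This is an illustrative sketch, not the actual Healenium implementation: the function names, the heuristic, and the sample fragments are all hypothetical, and only the "ambiguous" candidates would be escalated to the LLM.

```python
# Hypothetical sketch of the selector-detection step: build a
# repository-scoped code search for the failed locator, then pre-filter
# candidates so only ambiguous fragments go to AI validation.

def build_code_search_query(repo: str, locator: str) -> str:
    """Compose a GitHub code-search query scoped to one repository."""
    return f'"{locator}" repo:{repo}'

def is_unambiguous_match(fragment: str, locator: str) -> bool:
    """Cheap heuristic: the locator appears verbatim inside a Selenium
    By.* call, so no AI validation is needed for this candidate."""
    return locator in fragment and "By." in fragment

locator = "//div[@id='login']"
candidates = [
    "driver.findElement(By.xpath(\"//div[@id='login']\"));",  # clear usage
    "// stale comment mentioning //div[@id='login']",         # ambiguous
]
# Only the ambiguous fragment would be sent to the LLM for validation.
needs_ai = [c for c in candidates if not is_unambiguous_match(c, locator)]
```

Keeping the cheap verbatim check first is what lets AI be invoked only for the genuinely unclear candidates.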

Create GitHub PR button
For a report with resolved selector paths, the service:
- Prepares code changes (updates locators in the target file while preserving formatting and style) and creates a branch with a pull request that describes the fixes based on the report.
- Handles all low-level repository operations (branch creation, file read, commit, pull request creation) automatically behind the scenes.
AI is applied only at the steps that require understanding of code and context, which keeps the operation fast and cost‑efficient for the customer.
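The low-level repository operations behind this button map onto three GitHub REST API calls. The sketch below is an assumption about how such a flow can be wired (it is not the product's code); the helpers only build `(method, url, payload)` tuples, and actually sending the HTTP requests with the configured PAT is omitted.

```python
# Rough sketch of the GitHub REST calls behind "Create GitHub PR":
# create a branch from the base SHA, commit the updated file, and open
# the pull request. Repo, branch, and SHA values are illustrative.
import base64

API = "https://api.github.com"

def create_branch_request(repo, new_branch, base_sha):
    # POST /repos/{owner}/{repo}/git/refs
    return ("POST", f"{API}/repos/{repo}/git/refs",
            {"ref": f"refs/heads/{new_branch}", "sha": base_sha})

def update_file_request(repo, path, new_content, message, branch, file_sha):
    # PUT /repos/{owner}/{repo}/contents/{path}; content is base64-encoded
    return ("PUT", f"{API}/repos/{repo}/contents/{path}",
            {"message": message,
             "content": base64.b64encode(new_content.encode()).decode(),
             "sha": file_sha,   # SHA of the file version being replaced
             "branch": branch})

def create_pr_request(repo, title, head, base, body):
    # POST /repos/{owner}/{repo}/pulls
    return ("POST", f"{API}/repos/{repo}/pulls",
            {"title": title, "head": head, "base": base, "body": body})
```

The file update must reference the current file SHA, which is why the flow reads the file before committing.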

Healenium Pro connects to external LLM providers via API and customer‑supplied access tokens.
Healenium does not bundle its own LLM, does not resell model access, and does not include any token quota.
Customers are responsible for obtaining and managing API keys (and associated limits/costs) for the LLMs they choose to use with Healenium.
Filling out the GitHub integration form

In Integration → GitHub Settings you will see a form with the following fields. Use the values that match the repository where your tests and locators live.

Repository - The full repo identifier in the form `owner/repository` (e.g. `my-org/my-automation-tests`). This is the repo Healenium will search and where it can create branches and pull requests.
Branch - The default branch to use as the base (e.g. `main` or `master`). New branches for locator fixes will be created from this branch, and pull requests will target it.
Access Token - Your GitHub classic PAT with the `repo` scope. Create one at GitHub → Settings → Developer settings → Personal access tokens (classic) if you have not already.

After filling in all fields, click Save. Save becomes enabled only after at least one value has changed.
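As a quick sanity check before saving, the Repository value should match the `owner/repository` shape described above. The helper below is a hypothetical sketch of that validation, not part of the product:

```python
# Hypothetical validator for the Repository field: exactly one '/',
# with GitHub-legal name characters on each side.
import re

def is_valid_repo_id(value: str) -> bool:
    """Accept 'owner/repository' (letters, digits, '-', '_', '.')."""
    return re.fullmatch(r"[A-Za-z0-9_.-]+/[A-Za-z0-9_.-]+", value) is not None
```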

GitHub OAuth/PAT setup and scopes

The GitHub integration authenticates with a classic Personal Access Token (PAT). A token can also be obtained through a GitHub OAuth app; either way, the product sends the token as a Bearer credential in its GitHub API calls.
The AI service never receives, logs, or forwards the GitHub token, and it is never sent to any LLM provider. All LLM calls are made without including your GitHub credentials, so your repository token remains fully isolated from AI traffic.

Required scopes / permissions (classic PAT)

Healenium Pro needs a classic PAT with at least the following scope:

repo – full control of private repositories, which implicitly provides:
- Permission to search code in repositories used with Healenium.
- Read access to branches/refs and file metadata/content.
- Write access to create branches, update files (commits), and open pull requests.
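You can verify that a classic PAT actually carries the `repo` scope: GitHub reports a classic token's granted scopes in the `X-OAuth-Scopes` header of any authenticated API response (for example, `GET https://api.github.com/user`). The helper below only parses that header value; making the HTTP call itself is left out of this sketch.

```python
# Sketch of a scope check for a classic PAT, based on the
# X-OAuth-Scopes response header GitHub returns for such tokens.

def has_repo_scope(x_oauth_scopes_header: str) -> bool:
    scopes = {s.strip() for s in x_oauth_scopes_header.split(",")}
    return "repo" in scopes

# The full `repo` scope is required; narrower scopes such as
# `public_repo` are not sufficient for private repositories.
```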
AI provider configuration and token

- Bring‑your‑own LLM: Healenium Pro does not include any built‑in language model or token quota. You choose the provider (e.g. OpenAI, Anthropic, etc.) and the model that best fits your needs.
- Where configuration is stored: LLM settings (provider/base URL, model name, API key) are stored on the Healenium Backend side and managed through the integration API and UI.
- How the token is used: the AI service reads only the active LLM configuration and uses the API key exclusively to call the selected LLM provider over HTTPS. This token is never shared with GitHub, never exposed to the frontend, and not forwarded to any third party other than the chosen LLM provider.
- Minimal and efficient usage: the LLM is invoked only for the steps that truly require code understanding (such as validating ambiguous matches or generating an updated file), which keeps latency low and helps you control token consumption and costs.
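The points above can be illustrated with a minimal sketch, assuming an OpenAI-compatible chat-completions endpoint (the config values and prompt are placeholders, not product defaults). Note how the API key appears only in the `Authorization` header of the request to the LLM provider and is never attached to GitHub traffic:

```python
# Hypothetical sketch: turn the stored LLM configuration into a single
# HTTPS call to the configured provider. Only builds the request;
# sending it is omitted.

def build_llm_request(cfg: dict, prompt: str):
    url = f"{cfg['base_url'].rstrip('/')}/chat/completions"
    headers = {
        "Authorization": f"Bearer {cfg['api_key']}",  # customer-supplied key
        "Content-Type": "application/json",
    }
    payload = {
        "model": cfg["model"],
        "messages": [{"role": "user", "content": prompt}],
    }
    return url, headers, payload

cfg = {  # illustrative configuration values only
    "base_url": "https://api.openai.com/v1",
    "model": "gpt-4o",
    "api_key": "sk-...",
}
url, headers, payload = build_llm_request(cfg, "Validate this locator usage")
```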

How to obtain an LLM API key (examples)

- OpenAI (e.g. GPT‑4o, GPT‑5.x)
1. Sign in to your OpenAI account.
2. Go to the API keys page in your OpenAI dashboard.
3. Create a new secret key and copy it.
4. In Healenium, configure the LLM provider as OpenAI, set the desired model name, and paste this key into the LLM settings.

- Anthropic (e.g. Claude models)
1. Sign in to the Anthropic console.
2. Navigate to the API keys section.
3. Create a new API key and copy it.
4. In Healenium, configure the LLM provider as Anthropic (using the appropriate base URL), set the model name, and paste this key into the LLM settings.

In all cases, you remain in full control of which provider and model are used and which limits apply to your account; Healenium only uses the keys you supply to perform the configured AI operations.
Video walkthrough

For a practical, end‑to‑end demonstration of AI and GitHub integration in Healenium Pro (from configuring tokens to creating a pull request with updated locators), refer to the following video:

Healenium Pro – AI & GitHub integration walkthrough

The video briefly shows:
- How to configure GitHub PAT and LLM API keys in the Settings pages.
- How to run a report and trigger AI‑based selector validation.
- How a branch and pull request with updated locators are created automatically.
Contacts
  • Dmitriy_Gumeniuk@epam.com - Project Supervisor