# FetchML API Key Process

This document describes how API keys are issued and how team members should configure the `ml` CLI to use them. The goal is to keep access easy for your homelab while still treating API keys as sensitive secrets.

## Overview

- Each user gets a **personal API key** (no shared admin keys for normal use).
- API keys are used by the `ml` CLI to authenticate to the FetchML API.
- API keys and their **SHA256 hashes** must both be treated as secrets.

There are two supported ways to receive your key:

1. **Bitwarden (recommended)** – for users who already use Bitwarden.
2. **Direct share (minimal tools)** – for users who do not use Bitwarden.

---

## 1. Bitwarden-based process (recommended)

### For the admin

- Use the helper script to create a Bitwarden item for each user:

  ```bash
  ./scripts/create_bitwarden_fetchml_item.sh
  ```

  This script:

  - Creates a Bitwarden item named `FetchML API – <username>`.
  - Stores:
    - Username: `<username>`
    - Password: `<api-key>` (the actual API key)
    - Custom field `api_key_hash`: `<sha256-hash>`

- Share that item with the user in Bitwarden (for example, via a shared collection like `FetchML`).

### For the user

1. Open Bitwarden and locate the item:
   - **Name:** `FetchML API – <username>`
2. Copy the **password** field (this is your FetchML API key).
3. Configure the CLI, e.g. in `~/.ml/config.toml`:

   ```toml
   api_key = "<your-api-key>"
   worker_host = "localhost"
   worker_port = 9100
   api_url = "ws://localhost:9100/ws"
   ```

4. Test your setup:

   ```bash
   ml status
   ```

   If the command succeeds, your key and tunnel/config are correct.

---

## 2. Direct share (no password manager required)

For users who do not use Bitwarden, a lightweight alternative is a direct one-to-one share.

### For the admin

1. Generate a **per-user** API key and hash as usual.
2. Store them securely on your side (for example, in your own Bitwarden vault or configuration files).
3. Share **only the API key** with the user via a direct channel you both trust, such as:
   - Signal / WhatsApp direct message
   - SMS
   - A short call or meeting where you read it to them
4. Ask the user to:
   - Paste the key into their local config.
   - Avoid keeping the key in plain chat history if possible.

### For the user

1. When you receive your FetchML API key from the admin, create or edit `~/.ml/config.toml`:

   ```toml
   api_key = "<your-api-key>"
   worker_host = "localhost"
   worker_port = 9100
   api_url = "ws://localhost:9100/ws"
   ```

2. Save the file and run:

   ```bash
   ml status
   ```

3. If it works, you are ready to use the CLI:

   ```bash
   ml queue my-training-job
   ml cancel my-training-job
   ```

---

## 3. Security notes

- **API key and hash are secrets**
  - The 64-character `api_key_hash` is as sensitive as the API key itself.
  - Do not commit keys or hashes to Git, and do not share them in screenshots or tickets.
- **Rotation**
  - If you suspect a key has leaked, notify the admin.
  - The admin will revoke the old key, generate a new one, and update Bitwarden or share the new key directly.
- **Transport security**
  - The `api_url` is typically `ws://localhost:9100/ws` when used through an SSH tunnel to the homelab.
  - The SSH tunnel and nginx/TLS provide encryption over the network.

Following these steps keeps API access easy for the team while maintaining a reasonable security posture for a personal homelab deployment.
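The 64-character `api_key_hash` mentioned above is consistent with a hex-encoded SHA256 digest of the raw key. Assuming the hash is an unsalted hex digest (the document does not specify the exact derivation), an admin can sanity-check a key/hash pair from the shell using coreutils; the key shown here is a placeholder, not a real key:

```bash
# Compute the SHA256 hex digest of an API key string.
# Assumption: api_key_hash is an unsalted hex SHA256 of the raw key.
# 'example-api-key' is a placeholder; substitute the real key.
# printf (not echo) avoids hashing a trailing newline.
printf '%s' 'example-api-key' | sha256sum | awk '{print $1}'
```

The printed 64-character digest can then be compared against the `api_key_hash` field stored in Bitwarden. On systems without `sha256sum` (e.g. macOS), `shasum -a 256` produces the same digest.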