initial implementation

This commit is contained in:
Ashley Mensah
2026-03-02 14:20:50 +01:00
parent 721aa41361
commit cc15f5cb03
11 changed files with 1346 additions and 31 deletions


@@ -0,0 +1,516 @@
# Migrating from an External IdP to NetBird's Embedded IdP
This guide walks you through migrating a self-hosted NetBird deployment from an external identity provider (Zitadel, Keycloak, Auth0, Okta, etc.) to NetBird's built-in embedded IdP (powered by DEX).
After this migration, NetBird manages authentication directly — no external IdP dependency required.
---
## Table of Contents
1. [What This Migration Does](#what-this-migration-does)
2. [Before You Start](#before-you-start)
3. [Step 1: Choose Your Connector ID](#step-1-choose-your-connector-id)
4. [Step 2: Stop the Management Server](#step-2-stop-the-management-server)
5. [Step 3: Run a Dry-Run](#step-3-run-a-dry-run)
6. [Step 4: Run the Migration](#step-4-run-the-migration)
7. [Step 5: Update management.json](#step-5-update-managementjson)
8. [Step 6: Start the Management Server](#step-6-start-the-management-server)
9. [Step 7: Verify Everything Works](#step-7-verify-everything-works)
10. [Rollback](#rollback)
11. [FAQ](#faq)
---
## What This Migration Does
NetBird's embedded IdP (DEX) uses a different format for user IDs than external providers do. When a user logs in through DEX, the user ID stored in the JWT `sub` claim looks like this:
```
CiQ3YWFkOGMwNS0zMjg3LTQ3M2YtYjQyYS0zNjU1MDRiZjI1ZTcSBG9pZGM
```
This is a base64url-encoded blob (a serialized protobuf message) that contains two pieces of information:
- The **original user ID** (e.g., `7aad8c05-3287-473f-b42a-365504bf25e7`)
- The **connector ID** (e.g., `oidc`)
The migration tool reads every user from your database, encodes their existing user ID into this DEX format, and updates all references across the database. After migration, when DEX issues tokens for your users, the `sub` claim will match what's in the database, and everything works seamlessly.
### What gets updated
The tool updates user ID references in **13 database columns** across two databases:
**Main database (store.db or PostgreSQL):**
| Table | Column | What it stores |
|-------|--------|----------------|
| `users` | `id` | The user's primary key |
| `personal_access_tokens` | `user_id` | Which user owns the token |
| `personal_access_tokens` | `created_by` | Who created the token |
| `peers` | `user_id` | Which user registered the peer |
| `user_invites` | `created_by` | Who sent the invitation |
| `accounts` | `created_by` | Who created the account |
| `proxy_access_tokens` | `created_by` | Who created the proxy token |
| `jobs` | `triggered_by` | Who triggered the job |
| `policy_rules` | `authorized_user` | SSH policy user authorization |
| `access_log_entries` | `user_id` | Reverse proxy access logs |
**Activity database (events.db or PostgreSQL):**
| Table | Column | What it stores |
|-------|--------|----------------|
| `events` | `initiator_id` | Who performed the action |
| `events` | `target_id` | Who was the target of the action |
| `deleted_users` | `id` | Archived deleted user records |
### What does NOT change
- Peer IDs, group IDs, network configurations, DNS settings, routes, and setup keys are **not affected**.
- Your WireGuard tunnels and peer connections continue working throughout.
- The migration only touches user identity references.
---
## Before You Start
### Requirements
- **Access to the management server machine** (SSH or direct).
- **The `migrate-idp` binary** — built from `management/cmd/migrate-idp/`.
- **Management server must be stopped** during migration. The tool works directly on the database files.
- **A backup strategy** — the tool creates automatic SQLite backups, but for PostgreSQL you should run `pg_dump` yourself.
### What you will need to know
Before starting, gather these pieces of information:
1. **Where your management.json lives** — typically `/etc/netbird/management.json`.
2. **Your connector ID** — see [Step 1](#step-1-choose-your-connector-id).
3. **Your public management URL** — the URL users and agents use to reach the management server (e.g., `https://netbird.example.com`).
4. **Your dashboard URL** — where the NetBird web dashboard is hosted (e.g., `https://app.netbird.example.com`).
5. **An admin email and password** — for the initial owner account in the embedded IdP.
### Build the migration tool
From the NetBird repository root:
```bash
cd management && go build -o migrate-idp ./cmd/migrate-idp/
```
This produces a `migrate-idp` binary. Copy it to your management server if building remotely.
---
## Step 1: Choose Your Connector ID
The connector ID is a short string that gets baked into every user's new ID. It tells DEX which authentication connector a user came from. You will use this same connector ID later when configuring the embedded IdP.
**For most migrations, use `oidc` as the connector ID.** This is the standard value for any OIDC-based external provider (Zitadel, Keycloak, Auth0, Okta, etc.).
Some specific cases:
| Previous IdP | Recommended connector ID |
|-------------|------------------------|
| Zitadel | `oidc` |
| Keycloak | `oidc` |
| Auth0 | `oidc` |
| Okta | `oidc` |
| Google Workspace | `google` |
| Microsoft Entra (Azure AD) | `microsoft` |
| Any generic OIDC provider | `oidc` |
The connector ID is arbitrary — it just needs to match between the migration and the DEX connector configuration you set up in Step 5. If you later add the old IdP as a DEX connector (to allow existing users to log in via their old provider through DEX), the connector's ID in the DEX config must match the value you use here.
---
## Step 2: Stop the Management Server
The migration modifies the database directly. The management server must not be running.
```bash
# systemd
sudo systemctl stop netbird-management
# Docker
docker compose stop management
# or
docker stop netbird-management
```
Verify it's stopped:
```bash
# systemd
sudo systemctl status netbird-management
# Docker
docker ps | grep management
```
---
## Step 3: Run a Dry-Run
A dry-run shows you exactly what the migration would do without writing any changes. Always do this first.
```bash
./migrate-idp \
--config /etc/netbird/management.json \
--connector-id oidc \
--dry-run
```
You will see output like:
```
INFO loaded config from /etc/netbird/management.json (datadir: /var/lib/netbird, engine: sqlite)
INFO [DRY RUN] mode enabled — no changes will be written
INFO found 15 users to process
INFO [DRY RUN] would migrate user 7aad8c05-3287-... -> CiQ3YWFkOGMw... (account: abc123)
INFO [DRY RUN] would migrate user auth0|abc123... -> CgxhdXRoMHxh... (account: abc123)
...
INFO [DRY RUN] migration summary: 15 users would be migrated, 0 already migrated
Migration summary:
Migrated: 15 users
Skipped: 0 users (already migrated)
[DRY RUN] No changes were written. Remove --dry-run to apply.
```
**Check the output carefully.** Every user should show their old ID transforming to a new base64-encoded ID. If anything looks wrong (unexpected user count, errors), stop and investigate before proceeding.
### Available flags
| Flag | Required | Default | Description |
|------|----------|---------|-------------|
| `--config` | Yes | `/etc/netbird/management.json` | Path to your management config file |
| `--connector-id` | Yes | — | The connector ID to encode into user IDs |
| `--dry-run` | No | `false` | Preview changes without writing |
| `--no-backup` | No | `false` | Skip automatic database backup |
| `--log-level` | No | `info` | Log verbosity: `debug`, `info`, `warn`, `error` |
---
## Step 4: Run the Migration
Once you are satisfied with the dry-run output, run the actual migration:
```bash
./migrate-idp \
--config /etc/netbird/management.json \
--connector-id oidc
```
The tool will:
1. **Back up your databases** — for SQLite, it copies `store.db` and `events.db` to timestamped backups (e.g., `store.db.backup-20260302-140000`). For PostgreSQL, it prints a warning reminding you to use `pg_dump`.
2. **Migrate each user** — encodes their ID into DEX format and updates all 13 columns in a single database transaction per user.
3. **Print a summary** of how many users were migrated and how many were skipped.
Example output:
```
INFO loaded config from /etc/netbird/management.json (datadir: /var/lib/netbird, engine: sqlite)
INFO backed up /var/lib/netbird/store.db -> /var/lib/netbird/store.db.backup-20260302-140000
INFO backed up /var/lib/netbird/events.db -> /var/lib/netbird/events.db.backup-20260302-140000
INFO found 15 users to process
INFO migration complete: 15 users migrated, 0 already migrated
Migration summary:
Migrated: 15 users
Skipped: 0 users (already migrated)
Next step: update management.json to enable EmbeddedIdP with connector ID "oidc"
```
### Idempotency
The migration is safe to run multiple times. If it's interrupted or you run it again, it detects already-migrated users (their IDs are already in DEX format) and skips them. A second run will report `0 users migrated, 15 already migrated`.
---
## Step 5: Update management.json
This is the manual configuration step. You need to add an `EmbeddedIdP` block to your `management.json` file so the management server starts with the built-in identity provider instead of your old external IdP.
### 5a: Gather the required information
You need these values:
| Value | Where to find it | Example |
|-------|------------------|---------|
| **Issuer URL** | Your public management server URL + `/oauth2`. This must be reachable by browsers and the NetBird client. | `https://netbird.example.com/oauth2` |
| **Local address** | The port the management server listens on locally. Check your current config's `HttpConfig` section. | `:443` or `:8080` or `:33073` |
| **Dashboard redirect URIs** | Your dashboard URL + `/nb-auth` and `/nb-silent-auth`. Check your current `HttpConfig.AuthAudience` or dashboard deployment for the base URL. | `https://app.netbird.example.com/nb-auth` |
| **CLI redirect URIs** | Standard localhost ports used by the NetBird CLI for OAuth callbacks. These are always the same. | `http://localhost:53000/` and `http://localhost:54000/` |
| **IdP storage path** | Where DEX should store its database. Use your existing data directory. | `/var/lib/netbird/idp.db` |
| **Owner email** | The email address of the initial admin user. This should be the email of the account owner who currently manages your NetBird deployment. | `admin@example.com` |
| **Owner password hash** | A bcrypt hash of the password for the initial admin. See section 5b below. | `$2a$10$N9qo8uLO...` |
**How to find your dashboard URL:** Look at the current `DeviceAuthorizationFlow` or `PKCEAuthorizationFlow` section in your `management.json`. The redirect URIs there point to your dashboard. You can also check what URL you use to access the NetBird web dashboard in your browser.
**How to find your local listen address:** Look at the current `HttpConfig` section in your `management.json` for the `ListenAddress` or check what port the management server binds to (default is `443` or `33073`).
### 5b: Generate a bcrypt password hash
The owner password must be stored as a bcrypt hash, not as plain text. Use any of these methods to generate one:
**Using htpasswd (most systems):**
```bash
htpasswd -nbBC 10 "" 'YourSecurePassword' | cut -d: -f2
```
**Using Python:**
```bash
python3 -c "import bcrypt; print(bcrypt.hashpw(b'YourSecurePassword', bcrypt.gensalt()).decode())"
```
If the `bcrypt` module is not installed: `pip3 install bcrypt`.
**Using Docker (no local dependencies):**
```bash
docker run --rm python:3-slim sh -c \
"pip -q install bcrypt && python3 -c \"import bcrypt; print(bcrypt.hashpw(b'YourSecurePassword', bcrypt.gensalt()).decode())\""
```
The output will look like: `$2b$12$LJ3m4ys3Gl.2B1FlKNUyde8R7sCgSEO6k.gSCiBfQKOJDMBz.bXXi`
### 5c: Edit management.json
Open your `management.json` and make these changes:
**1. Add the `EmbeddedIdP` block.** Add it as a top-level key:
```json
{
"Stuns": [...],
"TURNConfig": {...},
"Signal": {...},
"Datadir": "/var/lib/netbird",
"DataStoreEncryptionKey": "...",
"HttpConfig": {...},
"EmbeddedIdP": {
"Enabled": true,
"Issuer": "https://netbird.example.com/oauth2",
"LocalAddress": ":443",
"Storage": {
"Type": "sqlite3",
"Config": {
"File": "/var/lib/netbird/idp.db"
}
},
"DashboardRedirectURIs": [
"https://app.netbird.example.com/nb-auth",
"https://app.netbird.example.com/nb-silent-auth"
],
"CLIRedirectURIs": [
"http://localhost:53000/",
"http://localhost:54000/"
],
"Owner": {
"Email": "admin@example.com",
"Hash": "$2b$12$LJ3m4ys3Gl.2B1FlKNUyde8R7sCgSEO6k.gSCiBfQKOJDMBz.bXXi",
"Username": "Admin"
},
"SignKeyRefreshEnabled": false,
"LocalAuthDisabled": false
},
"StoreConfig": {...},
...
}
```
**2. Update `HttpConfig` to point at the embedded IdP:**
```json
"HttpConfig": {
"AuthAudience": "netbird-dashboard",
"AuthIssuer": "https://netbird.example.com/oauth2",
"AuthUserIDClaim": "sub",
"CLIAuthAudience": "netbird-cli",
...
}
```
- `AuthAudience` must be `"netbird-dashboard"` — this is the static client ID DEX uses for the dashboard.
- `CLIAuthAudience` must be `"netbird-cli"` — the static client ID DEX uses for the CLI.
- `AuthIssuer` must match the `Issuer` in your `EmbeddedIdP` block.
**3. Remove the old `IdpManagerConfig` block (optional).** When `EmbeddedIdP` is configured, the management server uses it instead of any external IdP config, so the old `IdpManagerConfig` block is ignored whether you delete it or leave it in place.
### 5d: Explanation of each field
| Field | Required | Description |
|-------|----------|-------------|
| `Enabled` | Yes | Must be `true` to activate the embedded IdP. |
| `Issuer` | Yes | The public URL where DEX serves OIDC endpoints. Must be your management server's public URL with `/oauth2` appended. Browsers and clients will call this URL to authenticate. Must be HTTPS in production. |
| `LocalAddress` | Yes | The local listen address of the management server (e.g., `:443`). Used internally for JWT validation to avoid external network calls during token verification. |
| `Storage.Type` | Yes | `"sqlite3"` or `"postgres"`. This is the storage DEX uses for its own data (connectors, tokens, keys). Separate from NetBird's main store. |
| `Storage.Config.File` | For sqlite3 | Path where DEX creates its SQLite database. Use your data directory (e.g., `/var/lib/netbird/idp.db`). |
| `Storage.Config.DSN` | For postgres | PostgreSQL connection string for DEX storage (e.g., `host=localhost dbname=netbird_idp sslmode=disable`). |
| `DashboardRedirectURIs` | Yes | OAuth2 redirect URIs for the web dashboard. Must include `/nb-auth` and `/nb-silent-auth` paths on your dashboard URL. |
| `CLIRedirectURIs` | Yes | OAuth2 redirect URIs for the NetBird CLI. Always use `http://localhost:53000/` and `http://localhost:54000/`. |
| `Owner.Email` | Recommended | Email for the initial admin user. This user can log in immediately with email/password. |
| `Owner.Hash` | Recommended | Bcrypt hash of the admin password. See [5b](#5b-generate-a-bcrypt-password-hash). |
| `Owner.Username` | No | Display name for the admin user. Defaults to the email if not set. |
| `SignKeyRefreshEnabled` | No | Enables automatic rotation of JWT signing keys. Default `false`. |
| `LocalAuthDisabled` | No | Set to `true` to disable email/password login entirely (only allow login via external connectors configured in DEX). Default `false`. |
### 5e: If using PostgreSQL for DEX storage
If your main NetBird store uses PostgreSQL, you may want DEX to use PostgreSQL too. Create a separate database for DEX:
```sql
CREATE DATABASE netbird_idp;
```
Then configure:
```json
"Storage": {
"Type": "postgres",
"Config": {
"DSN": "host=localhost port=5432 user=netbird password=secret dbname=netbird_idp sslmode=disable"
}
}
```
---
## Step 6: Start the Management Server
```bash
# systemd
sudo systemctl start netbird-management
# Docker
docker compose start management
# or
docker start netbird-management
```
Check the logs for successful startup:
```bash
# systemd
sudo journalctl -u netbird-management -f
# Docker
docker logs -f netbird-management
```
Look for:
- `"embedded IdP started"` or similar DEX initialization messages.
- No errors about missing users, foreign key violations, or IdP configuration.
- The management server accepting connections on its listen port.
---
## Step 7: Verify Everything Works
### Test the dashboard
1. Open your NetBird dashboard in a browser.
2. You should see a DEX login page (NetBird-branded) instead of your old IdP's login page.
3. Log in with the **owner email and password** you configured in Step 5.
4. Verify you can see your account, peers, and policies.
### Test the CLI
```bash
netbird login --management-url https://netbird.example.com
```
This should open a browser for DEX authentication. Log in with the owner credentials.
### Test peer connectivity
Existing peers should continue to work. Their WireGuard tunnels are not affected by the IdP change. New peers can be registered by users who authenticate through the embedded IdP.
---
## Rollback
If something goes wrong, you can restore the database backups and revert `management.json`.
### SQLite
```bash
# Stop management
sudo systemctl stop netbird-management
# Restore backups (find the timestamp from migration output)
cp /var/lib/netbird/store.db.backup-20260302-140000 /var/lib/netbird/store.db
cp /var/lib/netbird/events.db.backup-20260302-140000 /var/lib/netbird/events.db
# Revert management.json (remove EmbeddedIdP block, restore old IdpManagerConfig)
# Then start management
sudo systemctl start netbird-management
```
### PostgreSQL
Restore from the `pg_dump` you took before migration:
```bash
# Stop management
sudo systemctl stop netbird-management
# Restore
pg_restore -d netbird /path/to/backup.dump
# or
psql netbird < /path/to/backup.sql
# Revert management.json and start
sudo systemctl start netbird-management
```
---
## FAQ
### Can I run the migration multiple times?
Yes. The migration is idempotent. It detects users whose IDs are already in DEX format and skips them. Running it twice will report `0 users migrated, N already migrated`.
### What happens if the migration is interrupted?
Each user is migrated in its own database transaction. If the process is killed mid-migration, some users will have new IDs and some will still have old IDs. Simply run the migration again — it will pick up where it left off and skip already-migrated users.
### Does this affect my WireGuard tunnels?
No. WireGuard tunnels are identified by peer keys, not user IDs. All existing tunnels continue working during and after migration. No client-side changes are needed.
### What about service users?
Service users (`IsServiceUser=true`) are migrated like all other users. Their IDs are re-encoded with the connector ID. This ensures consistency — all user IDs in the database follow the same format after migration.
### Can I keep my old IdP as a connector in DEX?
Yes. After migration, you can add your old IdP as an OIDC connector in DEX. This lets existing users log in via their old provider, but through DEX. The connector ID in DEX must match the `--connector-id` you used during migration (e.g., `oidc`).
To add a connector, create a connector via the DEX API or configure it as a static connector in the DEX config. The connector must have:
- `ID`: the same value you used for `--connector-id` (e.g., `oidc`)
- `Type`: `oidc` (or the specific provider type)
- `Issuer`, `ClientID`, `ClientSecret`: your old IdP's OAuth2 credentials
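For example, a static OIDC connector in a DEX-style config might look like the following sketch. The field names follow DEX's documented connector format; the issuer, client ID, and secret are placeholders for your old IdP's values, and how the connector is supplied to NetBird's embedded DEX may differ from a standalone DEX deployment:

```yaml
connectors:
  - type: oidc
    id: oidc          # must match the --connector-id used during migration
    name: "Legacy IdP"
    config:
      issuer: https://old-idp.example.com
      clientID: your-client-id
      clientSecret: your-client-secret
      redirectURI: https://netbird.example.com/oauth2/callback
```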
### What if I used the wrong connector ID?
Restore from backup and run the migration again with the correct connector ID. Already-migrated users cannot be re-migrated to a different connector ID without restoring the original data first.
### Does this work with the combined management container?
No. The combined container (`combined/cmd/`) only supports setups that already have the embedded IdP enabled. This migration tool is for standalone management server deployments (`management/cmd/`) that are switching from an external IdP.
### What database engines are supported?
SQLite, PostgreSQL, and MySQL are all supported. The tool reads the database engine from your `management.json` `StoreConfig` and connects accordingly. For SQLite, automatic backups are created. For PostgreSQL and MySQL, you must create your own backups before running the migration.


@@ -0,0 +1,68 @@
package main
import (
"fmt"
"io"
"os"
"path/filepath"
"time"
log "github.com/sirupsen/logrus"
"github.com/netbirdio/netbird/management/server/types"
)
const (
storeDBFile = "store.db"
eventsDBFile = "events.db"
)
// backupDatabases creates backups of SQLite database files before migration.
// For PostgreSQL/MySQL, it prints instructions for the operator to run pg_dump/mysqldump.
func backupDatabases(dataDir string, engine types.Engine) error {
switch engine {
case types.SqliteStoreEngine:
for _, dbFile := range []string{storeDBFile, eventsDBFile} {
src := filepath.Join(dataDir, dbFile)
if _, err := os.Stat(src); os.IsNotExist(err) {
log.Infof("skipping backup of %s (file does not exist)", src)
continue
}
if err := backupSQLiteFile(src); err != nil {
return fmt.Errorf("backup %s: %w", dbFile, err)
}
}
case types.PostgresStoreEngine:
log.Warn("PostgreSQL detected — automatic backup is not supported. " +
"Please ensure you have a recent pg_dump backup before proceeding.")
case types.MysqlStoreEngine:
log.Warn("MySQL detected — automatic backup is not supported. " +
"Please ensure you have a recent mysqldump backup before proceeding.")
}
return nil
}
// backupSQLiteFile copies a SQLite database file to a timestamped backup.
func backupSQLiteFile(srcPath string) error {
timestamp := time.Now().Format("20060102-150405")
dstPath := fmt.Sprintf("%s.backup-%s", srcPath, timestamp)
src, err := os.Open(srcPath)
if err != nil {
return fmt.Errorf("open source: %w", err)
}
defer src.Close()
dst, err := os.Create(dstPath)
if err != nil {
return fmt.Errorf("create backup: %w", err)
}
defer dst.Close()
if _, err := io.Copy(dst, src); err != nil {
return fmt.Errorf("copy data: %w", err)
}
log.Infof("backed up %s -> %s", srcPath, dstPath)
return nil
}


@@ -0,0 +1,151 @@
// Command migrate-idp is a standalone CLI tool that migrates self-hosted NetBird
// deployments from an external IdP (Zitadel, Keycloak, Okta, etc.) to NetBird's
// embedded DEX-based IdP. It re-keys all user IDs in the database to match DEX's
// encoded format.
//
// Usage:
//
// migrate-idp --config /etc/netbird/management.json --connector-id oidc [--dry-run]
package main
import (
"context"
"flag"
"fmt"
"os"
log "github.com/sirupsen/logrus"
nbconfig "github.com/netbirdio/netbird/management/internals/server/config"
activitystore "github.com/netbirdio/netbird/management/server/activity/store"
"github.com/netbirdio/netbird/management/server/idp/migration"
"github.com/netbirdio/netbird/management/server/store"
"github.com/netbirdio/netbird/util"
"github.com/netbirdio/netbird/util/crypt"
)
func main() {
configPath := flag.String("config", "/etc/netbird/management.json", "path to management.json config file")
connectorID := flag.String("connector-id", "", "DEX connector ID to encode into user IDs (required)")
dryRun := flag.Bool("dry-run", false, "preview changes without writing to the database")
noBackup := flag.Bool("no-backup", false, "skip automatic database backup (SQLite only)")
logLevel := flag.String("log-level", "info", "log verbosity: debug, info, warn, error")
flag.Usage = func() {
fmt.Fprintf(os.Stderr, `migrate-idp - Migrate NetBird user IDs from external IdP to embedded DEX
This tool re-keys all user IDs in the management database so they match DEX's
encoded format (base64-encoded protobuf with user ID + connector ID). Run this
with management stopped, then update management.json to enable EmbeddedIdP.
Service users (IsServiceUser=true) are re-keyed like all other users. All user
types will be looked up by DEX-encoded IDs after migration.
Usage:
migrate-idp --config /etc/netbird/management.json --connector-id oidc [flags]
Flags:
`)
flag.PrintDefaults()
fmt.Fprintf(os.Stderr, `
Migration procedure:
1. Stop management: systemctl stop netbird-management
2. Dry-run: migrate-idp --config <path> --connector-id <id> --dry-run
3. Run migration: migrate-idp --config <path> --connector-id <id>
4. Update management.json: Add EmbeddedIdP config with matching connector ID
5. Start management: systemctl start netbird-management
`)
}
flag.Parse()
level, err := log.ParseLevel(*logLevel)
if err != nil {
log.Fatalf("invalid log level %q: %v", *logLevel, err)
}
log.SetLevel(level)
if *connectorID == "" {
fmt.Fprintln(os.Stderr, "error: --connector-id is required")
flag.Usage()
os.Exit(1)
}
if err := run(context.Background(), *configPath, *connectorID, *dryRun, *noBackup); err != nil {
log.Fatalf("migration failed: %v", err)
}
}
func run(ctx context.Context, configPath, connectorID string, dryRun, noBackup bool) error {
// Load management config
config := &nbconfig.Config{}
if _, err := util.ReadJsonWithEnvSub(configPath, config); err != nil {
return fmt.Errorf("read config %s: %w", configPath, err)
}
if config.Datadir == "" {
return fmt.Errorf("config has empty Datadir")
}
log.Infof("loaded config from %s (datadir: %s, engine: %s)", configPath, config.Datadir, config.StoreConfig.Engine)
if dryRun {
log.Info("[DRY RUN] mode enabled — no changes will be written")
}
// Open main store
mainStore, err := store.NewStore(ctx, config.StoreConfig.Engine, config.Datadir, nil, false)
if err != nil {
return fmt.Errorf("open main store: %w", err)
}
defer mainStore.Close(ctx) //nolint:errcheck
// Set up field encryption for user data decryption
if config.DataStoreEncryptionKey != "" {
fieldEncrypt, err := crypt.NewFieldEncrypt(config.DataStoreEncryptionKey)
if err != nil {
return fmt.Errorf("create field encryptor: %w", err)
}
mainStore.SetFieldEncrypt(fieldEncrypt)
}
// Open activity store (optional — warn and continue if unavailable)
var actStore migration.ActivityStoreUpdater
activitySqlStore, err := activitystore.NewSqlStore(ctx, config.Datadir, config.DataStoreEncryptionKey)
if err != nil {
log.Warnf("could not open activity store, activity events will not be migrated: %v", err)
} else {
defer activitySqlStore.Close(ctx) //nolint:errcheck
actStore = activitySqlStore
}
// Backup databases before migration (unless --no-backup or --dry-run)
if !noBackup && !dryRun {
if err := backupDatabases(config.Datadir, config.StoreConfig.Engine); err != nil {
return fmt.Errorf("backup: %w", err)
}
}
// Run migration
result, err := migration.Migrate(ctx, &migration.Config{
ConnectorID: connectorID,
DryRun: dryRun,
MainStore: mainStore,
ActivityStore: actStore,
})
if err != nil {
return err
}
fmt.Printf("\nMigration summary:\n")
fmt.Printf(" Migrated: %d users\n", result.Migrated)
fmt.Printf(" Skipped: %d users (already migrated)\n", result.Skipped)
if dryRun {
fmt.Printf("\n [DRY RUN] No changes were written. Remove --dry-run to apply.\n")
} else if result.Migrated > 0 {
fmt.Printf("\n Next step: update management.json to enable EmbeddedIdP with connector ID %q\n", connectorID)
}
return nil
}


@@ -13,6 +13,8 @@ type Store interface {
Get(ctx context.Context, accountID string, offset, limit int, descending bool) ([]*Event, error)
// Close the sink flushing events if necessary
Close(ctx context.Context) error
// UpdateUserID re-keys all references to oldUserID in events and deleted_users tables.
UpdateUserID(ctx context.Context, oldUserID, newUserID string) error
}
// InMemoryEventStore implements the Store interface storing data in-memory
@@ -55,3 +57,8 @@ func (store *InMemoryEventStore) Close(_ context.Context) error {
store.events = make([]*Event, 0)
return nil
}
// UpdateUserID is a no-op for the in-memory store.
func (store *InMemoryEventStore) UpdateUserID(_ context.Context, _, _ string) error {
return nil
}


@@ -227,6 +227,32 @@ func (store *Store) saveDeletedUserEmailAndNameInEncrypted(event *activity.Event
return event.Meta, nil
}
// UpdateUserID updates all references to oldUserID in events and deleted_users tables.
func (store *Store) UpdateUserID(ctx context.Context, oldUserID, newUserID string) error {
return store.db.Transaction(func(tx *gorm.DB) error {
if err := tx.Model(&activity.Event{}).
Where("initiator_id = ?", oldUserID).
Update("initiator_id", newUserID).Error; err != nil {
return fmt.Errorf("update events.initiator_id: %w", err)
}
if err := tx.Model(&activity.Event{}).
Where("target_id = ?", oldUserID).
Update("target_id", newUserID).Error; err != nil {
return fmt.Errorf("update events.target_id: %w", err)
}
// Raw exec: GORM can't update a PK via Model().Update()
if err := tx.Exec(
"UPDATE deleted_users SET id = ? WHERE id = ?", newUserID, oldUserID,
).Error; err != nil {
return fmt.Errorf("update deleted_users.id: %w", err)
}
return nil
})
}
// Close the Store
func (store *Store) Close(_ context.Context) error {
if store.db != nil {


@@ -0,0 +1,152 @@
// Package migration provides utility functions for migrating from an external IdP
// to NetBird's embedded DEX-based IdP. It re-keys user IDs in the main store and
// activity store so that they match DEX's encoded format.
package migration
import (
"context"
"fmt"
log "github.com/sirupsen/logrus"
"github.com/netbirdio/netbird/idp/dex"
"github.com/netbirdio/netbird/management/server/types"
)
// MainStoreUpdater is the subset of the main store needed for migration.
type MainStoreUpdater interface {
ListUsers(ctx context.Context) ([]*types.User, error)
UpdateUserID(ctx context.Context, accountID, oldUserID, newUserID string) error
}
// ActivityStoreUpdater is the subset of the activity store needed for migration.
type ActivityStoreUpdater interface {
UpdateUserID(ctx context.Context, oldUserID, newUserID string) error
}
// Config holds migration parameters.
type Config struct {
ConnectorID string
DryRun bool
MainStore MainStoreUpdater
ActivityStore ActivityStoreUpdater // nil if activity store is unavailable
}
// Result holds migration outcome counts.
type Result struct {
Migrated int
Skipped int
}
// progressInterval controls how often progress is logged for large user counts.
const progressInterval = 100
// Migrate re-keys every user ID in both stores so that it encodes the given
// connector ID. Already-migrated users (detectable via DecodeDexUserID) are
// skipped, making the operation idempotent.
func Migrate(ctx context.Context, cfg *Config) (*Result, error) {
if cfg.ConnectorID == "" {
return nil, fmt.Errorf("connector ID must not be empty")
}
users, err := cfg.MainStore.ListUsers(ctx)
if err != nil {
return nil, fmt.Errorf("list users: %w", err)
}
if len(users) == 0 {
log.Info("no users found, nothing to migrate")
return &Result{}, nil
}
log.Infof("found %d users to process", len(users))
// Reconciliation pass: fix activity store for users already migrated in
// the main DB but whose activity references may still use old IDs (from
// a previous partial failure).
if cfg.ActivityStore != nil && !cfg.DryRun {
if err := reconcileActivityStore(ctx, cfg.ActivityStore, users); err != nil {
return nil, err
}
}
res := &Result{}
for i, user := range users {
if user.Id == "" {
log.Warnf("skipping user with empty ID in account %s", user.AccountID)
res.Skipped++
continue
}
_, _, decErr := dex.DecodeDexUserID(user.Id)
if decErr == nil {
// Already encoded in DEX format — skip.
res.Skipped++
continue
}
newUserID := dex.EncodeDexUserID(user.Id, cfg.ConnectorID)
if cfg.DryRun {
log.Infof("[DRY RUN] would migrate user %s -> %s (account: %s)",
user.Id, newUserID, user.AccountID)
res.Migrated++
continue
}
if err := migrateUser(ctx, cfg, user.Id, user.AccountID, newUserID); err != nil {
return nil, err
}
res.Migrated++
if (i+1)%progressInterval == 0 {
log.Infof("progress: %d/%d users processed", i+1, len(users))
}
}
if cfg.DryRun {
log.Infof("[DRY RUN] migration summary: %d users would be migrated, %d already migrated",
res.Migrated, res.Skipped)
} else {
log.Infof("migration complete: %d users migrated, %d already migrated",
res.Migrated, res.Skipped)
}
return res, nil
}
// reconcileActivityStore updates activity store references for users already
// migrated in the main DB whose activity entries may still use old IDs from a
// previous partial failure.
func reconcileActivityStore(ctx context.Context, activityStore ActivityStoreUpdater, users []*types.User) error {
for _, user := range users {
originalID, _, err := dex.DecodeDexUserID(user.Id)
if err != nil {
// Not yet migrated — will be handled in the main loop.
continue
}
if err := activityStore.UpdateUserID(ctx, originalID, user.Id); err != nil {
return fmt.Errorf("reconcile activity store for user %s: %w", user.Id, err)
}
}
return nil
}
// migrateUser updates a single user's ID in both the main store and the activity store.
func migrateUser(ctx context.Context, cfg *Config, oldID, accountID, newID string) error {
if err := cfg.MainStore.UpdateUserID(ctx, accountID, oldID, newID); err != nil {
return fmt.Errorf("update user ID for user %s: %w", oldID, err)
}
if cfg.ActivityStore == nil {
return nil
}
if err := cfg.ActivityStore.UpdateUserID(ctx, oldID, newID); err != nil {
return fmt.Errorf("update activity store user ID for user %s: %w", oldID, err)
}
return nil
}
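The `sub` format that `dex.EncodeDexUserID` targets appears to be DEX's standard subject encoding: an unpadded base64url blob wrapping a protobuf message with two length-delimited string fields, the upstream user ID and the connector ID. A minimal sketch of that encoding, assuming both strings stay under 128 bytes so each protobuf length fits in one varint byte (`encodeDexUserID` is an illustrative name, not the real helper):

```go
package main

import (
	"encoding/base64"
	"fmt"
)

// encodeDexUserID sketches how a DEX-style JWT `sub` claim is built:
// base64url (no padding) over a protobuf message where field 1 is the
// upstream user ID and field 2 is the connector ID. Assumes both strings
// are shorter than 128 bytes, so single-byte lengths suffice.
func encodeDexUserID(userID, connID string) string {
	buf := []byte{0x0A, byte(len(userID))} // field 1, wire type 2 (length-delimited)
	buf = append(buf, userID...)
	buf = append(buf, 0x12, byte(len(connID))) // field 2, wire type 2
	buf = append(buf, connID...)
	return base64.RawURLEncoding.EncodeToString(buf)
}

func main() {
	sub := encodeDexUserID("7aad8c05-3287-473f-b42a-365504bf25e7", "oidc")
	fmt.Println(sub)
	// → CiQ3YWFkOGMwNS0zMjg3LTQ3M2YtYjQyYS0zNjU1MDRiZjI1ZTc_SBG9pZGM without
	// the underscore: CiQ3YWFkOGMwNS0zMjg3LTQ3M2YtYjQyYS0zNjU1MDRiZjI1ZTcSBG9pZGM
}
```

This matches the example `sub` shown in the guide above, which is why a migrated database row compares equal to the claim DEX later issues.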


@@ -0,0 +1,287 @@
package migration
import (
"context"
"errors"
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"github.com/netbirdio/netbird/idp/dex"
"github.com/netbirdio/netbird/management/server/types"
)
const testConnectorID = "oidc"
// mockMainStore implements MainStoreUpdater for testing.
type mockMainStore struct {
users []*types.User
listErr error
updateErr error
updateCalls []updateCall
}
type updateCall struct {
AccountID string
OldID string
NewID string
}
func (m *mockMainStore) ListUsers(_ context.Context) ([]*types.User, error) {
return m.users, m.listErr
}
func (m *mockMainStore) UpdateUserID(_ context.Context, accountID, oldUserID, newUserID string) error {
m.updateCalls = append(m.updateCalls, updateCall{accountID, oldUserID, newUserID})
return m.updateErr
}
// mockActivityStore implements ActivityStoreUpdater for testing.
type mockActivityStore struct {
updateErr error
updateCalls []activityUpdateCall
}
type activityUpdateCall struct {
OldID string
NewID string
}
func (m *mockActivityStore) UpdateUserID(_ context.Context, oldUserID, newUserID string) error {
m.updateCalls = append(m.updateCalls, activityUpdateCall{oldUserID, newUserID})
return m.updateErr
}
func TestMigrate_NormalMigration(t *testing.T) {
mainStore := &mockMainStore{
users: []*types.User{
{Id: "user-1", AccountID: "acc-1"},
{Id: "user-2", AccountID: "acc-1"},
},
}
actStore := &mockActivityStore{}
res, err := Migrate(context.Background(), &Config{
ConnectorID: testConnectorID,
MainStore: mainStore,
ActivityStore: actStore,
})
require.NoError(t, err)
assert.Equal(t, 2, res.Migrated)
assert.Equal(t, 0, res.Skipped)
assert.Len(t, mainStore.updateCalls, 2)
assert.Len(t, actStore.updateCalls, 2)
// Verify the new IDs are DEX-encoded
for _, call := range mainStore.updateCalls {
userID, connID, decErr := dex.DecodeDexUserID(call.NewID)
require.NoError(t, decErr)
assert.Equal(t, testConnectorID, connID)
assert.Equal(t, call.OldID, userID)
}
}
func TestMigrate_SkipAlreadyMigrated(t *testing.T) {
alreadyMigrated := dex.EncodeDexUserID("original-user", testConnectorID)
mainStore := &mockMainStore{
users: []*types.User{
{Id: alreadyMigrated, AccountID: "acc-1"},
{Id: "not-migrated", AccountID: "acc-1"},
},
}
actStore := &mockActivityStore{}
res, err := Migrate(context.Background(), &Config{
ConnectorID: testConnectorID,
MainStore: mainStore,
ActivityStore: actStore,
})
require.NoError(t, err)
assert.Equal(t, 1, res.Migrated)
assert.Equal(t, 1, res.Skipped)
assert.Len(t, mainStore.updateCalls, 1)
assert.Equal(t, "not-migrated", mainStore.updateCalls[0].OldID)
}
func TestMigrate_DryRun(t *testing.T) {
mainStore := &mockMainStore{
users: []*types.User{
{Id: "user-1", AccountID: "acc-1"},
},
}
actStore := &mockActivityStore{}
res, err := Migrate(context.Background(), &Config{
ConnectorID: testConnectorID,
DryRun: true,
MainStore: mainStore,
ActivityStore: actStore,
})
require.NoError(t, err)
assert.Equal(t, 1, res.Migrated)
// No actual updates should have been made
assert.Empty(t, mainStore.updateCalls)
assert.Empty(t, actStore.updateCalls)
}
func TestMigrate_EmptyUserList(t *testing.T) {
mainStore := &mockMainStore{users: []*types.User{}}
actStore := &mockActivityStore{}
res, err := Migrate(context.Background(), &Config{
ConnectorID: testConnectorID,
MainStore: mainStore,
ActivityStore: actStore,
})
require.NoError(t, err)
assert.Equal(t, 0, res.Migrated)
assert.Equal(t, 0, res.Skipped)
}
func TestMigrate_EmptyUserID(t *testing.T) {
mainStore := &mockMainStore{
users: []*types.User{
{Id: "", AccountID: "acc-1"},
{Id: "user-1", AccountID: "acc-1"},
},
}
actStore := &mockActivityStore{}
res, err := Migrate(context.Background(), &Config{
ConnectorID: testConnectorID,
MainStore: mainStore,
ActivityStore: actStore,
})
require.NoError(t, err)
assert.Equal(t, 1, res.Migrated)
assert.Equal(t, 1, res.Skipped)
}
func TestMigrate_NilActivityStore(t *testing.T) {
mainStore := &mockMainStore{
users: []*types.User{
{Id: "user-1", AccountID: "acc-1"},
},
}
res, err := Migrate(context.Background(), &Config{
ConnectorID: testConnectorID,
MainStore: mainStore,
// ActivityStore is nil
})
require.NoError(t, err)
assert.Equal(t, 1, res.Migrated)
assert.Len(t, mainStore.updateCalls, 1)
}
func TestMigrate_EmptyConnectorID(t *testing.T) {
mainStore := &mockMainStore{}
_, err := Migrate(context.Background(), &Config{
ConnectorID: "",
MainStore: mainStore,
})
require.Error(t, err)
assert.Contains(t, err.Error(), "connector ID must not be empty")
}
func TestMigrate_ListUsersError(t *testing.T) {
mainStore := &mockMainStore{listErr: errors.New("db error")}
_, err := Migrate(context.Background(), &Config{
ConnectorID: testConnectorID,
MainStore: mainStore,
})
require.Error(t, err)
assert.Contains(t, err.Error(), "list users")
}
func TestMigrate_UpdateError(t *testing.T) {
mainStore := &mockMainStore{
users: []*types.User{{Id: "user-1", AccountID: "acc-1"}},
updateErr: errors.New("tx error"),
}
_, err := Migrate(context.Background(), &Config{
ConnectorID: testConnectorID,
MainStore: mainStore,
})
require.Error(t, err)
assert.Contains(t, err.Error(), "update user ID")
}
func TestMigrate_Reconciliation(t *testing.T) {
// Simulate a previously migrated user whose activity store wasn't updated
alreadyMigrated := dex.EncodeDexUserID("original-user", testConnectorID)
mainStore := &mockMainStore{
users: []*types.User{
{Id: alreadyMigrated, AccountID: "acc-1"},
},
}
actStore := &mockActivityStore{}
res, err := Migrate(context.Background(), &Config{
ConnectorID: testConnectorID,
MainStore: mainStore,
ActivityStore: actStore,
})
require.NoError(t, err)
assert.Equal(t, 0, res.Migrated)
assert.Equal(t, 1, res.Skipped)
// Reconciliation should have called activity store with the original -> new mapping
require.Len(t, actStore.updateCalls, 1)
assert.Equal(t, "original-user", actStore.updateCalls[0].OldID)
assert.Equal(t, alreadyMigrated, actStore.updateCalls[0].NewID)
}
func TestMigrate_Idempotent(t *testing.T) {
mainStore := &mockMainStore{
users: []*types.User{
{Id: "user-1", AccountID: "acc-1"},
{Id: "user-2", AccountID: "acc-1"},
},
}
actStore := &mockActivityStore{}
// First run
res1, err := Migrate(context.Background(), &Config{
ConnectorID: testConnectorID,
MainStore: mainStore,
ActivityStore: actStore,
})
require.NoError(t, err)
assert.Equal(t, 2, res1.Migrated)
// Simulate that the store now has the migrated IDs
for _, call := range mainStore.updateCalls {
for i, u := range mainStore.users {
if u.Id == call.OldID {
mainStore.users[i].Id = call.NewID
}
}
}
mainStore.updateCalls = nil
actStore.updateCalls = nil
// Second run should skip all
res2, err := Migrate(context.Background(), &Config{
ConnectorID: testConnectorID,
MainStore: mainStore,
ActivityStore: actStore,
})
require.NoError(t, err)
assert.Equal(t, 0, res2.Migrated)
assert.Equal(t, 2, res2.Skipped)
assert.Empty(t, mainStore.updateCalls)
}
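The skip and reconciliation paths above hinge on `dex.DecodeDexUserID` recovering the original ID and connector from an already-encoded value. A hedged sketch of that inverse operation — parsing the two length-delimited protobuf fields back out of the base64url subject, under the same single-byte-length assumption (`decodeDexUserID` is an illustrative name, not the real helper):

```go
package main

import (
	"encoding/base64"
	"fmt"
)

// decodeDexUserID sketches the inverse of the DEX subject encoding: it
// base64url-decodes the claim and extracts two length-delimited protobuf
// fields (user ID, then connector ID). Assumes single-byte field lengths.
func decodeDexUserID(sub string) (userID, connID string, err error) {
	raw, err := base64.RawURLEncoding.DecodeString(sub)
	if err != nil {
		return "", "", err
	}
	if len(raw) < 2 || raw[0] != 0x0A {
		return "", "", fmt.Errorf("not a DEX subject")
	}
	n := int(raw[1])
	if len(raw) < 2+n+2 {
		return "", "", fmt.Errorf("truncated user ID field")
	}
	userID = string(raw[2 : 2+n])
	if raw[2+n] != 0x12 {
		return "", "", fmt.Errorf("missing connector ID field")
	}
	m := int(raw[3+n])
	if len(raw) < 4+n+m {
		return "", "", fmt.Errorf("truncated connector ID field")
	}
	return userID, string(raw[4+n : 4+n+m]), nil
}

func main() {
	u, c, err := decodeDexUserID("CiQ3YWFkOGMwNS0zMjg3LTQ3M2YtYjQyYS0zNjU1MDRiZjI1ZTcSBG9pZGM")
	fmt.Println(u, c, err)
}
```

A plain, never-migrated user ID fails the `raw[0] != 0x0A` or base64 checks, which is exactly the property the migration loop uses to tell migrated users from unmigrated ones.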


@@ -3445,6 +3445,80 @@ func (s *SqlStore) GetDB() *gorm.DB {
return s.db
}
// ListUsers returns all users across all accounts with decrypted sensitive fields.
func (s *SqlStore) ListUsers(ctx context.Context) ([]*types.User, error) {
var users []*types.User
if err := s.db.Find(&users).Error; err != nil {
log.WithContext(ctx).Errorf("failed to list users: %v", err)
return nil, status.Errorf(status.Internal, "failed to list users")
}
for _, user := range users {
if err := user.DecryptSensitiveData(s.fieldEncrypt); err != nil {
log.WithContext(ctx).Errorf("failed to decrypt user data for user %s: %v", user.Id, err)
return nil, status.Errorf(status.Internal, "failed to decrypt user data")
}
}
return users, nil
}
// txDeferFKConstraints defers foreign key constraint checks for the duration of the transaction.
// MySQL is already handled by s.transaction (SET FOREIGN_KEY_CHECKS = 0).
func (s *SqlStore) txDeferFKConstraints(tx *gorm.DB) error {
switch s.storeEngine {
case types.PostgresStoreEngine:
return tx.Exec("SET CONSTRAINTS ALL DEFERRED").Error
case types.SqliteStoreEngine:
return tx.Exec("PRAGMA defer_foreign_keys = ON").Error
default:
return nil
}
}
// UpdateUserID re-keys a user's ID from oldUserID to newUserID, updating all FK references first,
// then the users.id primary key last. All updates happen in a single transaction.
func (s *SqlStore) UpdateUserID(ctx context.Context, accountID, oldUserID, newUserID string) error {
type fkUpdate struct {
model any
column string
where string
}
updates := []fkUpdate{
{&types.PersonalAccessToken{}, "user_id", "user_id = ?"},
{&types.PersonalAccessToken{}, "created_by", "created_by = ?"},
{&nbpeer.Peer{}, "user_id", "user_id = ?"},
{&types.UserInviteRecord{}, "created_by", "created_by = ?"},
{&types.Account{}, "created_by", "created_by = ?"},
{&types.ProxyAccessToken{}, "created_by", "created_by = ?"},
{&types.Job{}, "triggered_by", "triggered_by = ?"},
{&types.PolicyRule{}, "authorized_user", "authorized_user = ?"},
{&accesslogs.AccessLogEntry{}, "user_id", "user_id = ?"},
}
err := s.transaction(func(tx *gorm.DB) error {
if err := s.txDeferFKConstraints(tx); err != nil {
return err
}
for _, u := range updates {
if err := tx.Model(u.model).Where(u.where, oldUserID).Update(u.column, newUserID).Error; err != nil {
return fmt.Errorf("update %s: %w", u.column, err)
}
}
if err := tx.Model(&types.User{}).Where(accountAndIDQueryCondition, accountID, oldUserID).Update("id", newUserID).Error; err != nil {
return fmt.Errorf("update users: %w", err)
}
return nil
})
if err != nil {
log.WithContext(ctx).Errorf("failed to update user ID in the store: %s", err)
return status.Errorf(status.Internal, "failed to update user ID in store")
}
return nil
}
// SetFieldEncrypt sets the field encryptor for encrypting sensitive user data.
func (s *SqlStore) SetFieldEncrypt(enc *crypt.FieldEncrypt) {
s.fieldEncrypt = enc


@@ -275,6 +275,11 @@ type Store interface {
// GetCustomDomainsCounts returns the total and validated custom domain counts.
GetCustomDomainsCounts(ctx context.Context) (total int64, validated int64, err error)
// ListUsers returns all users across all accounts.
ListUsers(ctx context.Context) ([]*types.User, error)
// UpdateUserID re-keys a user's ID from oldUserID to newUserID, updating all foreign key references.
UpdateUserID(ctx context.Context, accountID, oldUserID, newUserID string) error
}
const (


@@ -1109,21 +1109,6 @@ func (mr *MockStoreMockRecorder) GetAccountServices(ctx, lockStrength, accountID
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetAccountServices", reflect.TypeOf((*MockStore)(nil).GetAccountServices), ctx, lockStrength, accountID)
}
// GetServicesByAccountID mocks base method.
func (m *MockStore) GetServicesByAccountID(ctx context.Context, lockStrength LockingStrength, accountID string) ([]*reverseproxy.Service, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "GetServicesByAccountID", ctx, lockStrength, accountID)
ret0, _ := ret[0].([]*reverseproxy.Service)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// GetServicesByAccountID indicates an expected call of GetServicesByAccountID.
func (mr *MockStoreMockRecorder) GetServicesByAccountID(ctx, lockStrength, accountID interface{}) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetServicesByAccountID", reflect.TypeOf((*MockStore)(nil).GetServicesByAccountID), ctx, lockStrength, accountID)
}
// GetAccountSettings mocks base method.
func (m *MockStore) GetAccountSettings(ctx context.Context, lockStrength LockingStrength, accountID string) (*types2.Settings, error) {
m.ctrl.T.Helper()
@@ -1288,6 +1273,22 @@ func (mr *MockStoreMockRecorder) GetCustomDomain(ctx, accountID, domainID interf
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetCustomDomain", reflect.TypeOf((*MockStore)(nil).GetCustomDomain), ctx, accountID, domainID)
}
// GetCustomDomainsCounts mocks base method.
func (m *MockStore) GetCustomDomainsCounts(ctx context.Context) (int64, int64, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "GetCustomDomainsCounts", ctx)
ret0, _ := ret[0].(int64)
ret1, _ := ret[1].(int64)
ret2, _ := ret[2].(error)
return ret0, ret1, ret2
}
// GetCustomDomainsCounts indicates an expected call of GetCustomDomainsCounts.
func (mr *MockStoreMockRecorder) GetCustomDomainsCounts(ctx interface{}) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetCustomDomainsCounts", reflect.TypeOf((*MockStore)(nil).GetCustomDomainsCounts), ctx)
}
// GetDNSRecordByID mocks base method.
func (m *MockStore) GetDNSRecordByID(ctx context.Context, lockStrength LockingStrength, accountID, zoneID, recordID string) (*records.Record, error) {
m.ctrl.T.Helper()
@@ -1872,22 +1873,6 @@ func (mr *MockStoreMockRecorder) GetServiceTargetByTargetID(ctx, lockStrength, a
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetServiceTargetByTargetID", reflect.TypeOf((*MockStore)(nil).GetServiceTargetByTargetID), ctx, lockStrength, accountID, targetID)
}
// GetCustomDomainsCounts mocks base method.
func (m *MockStore) GetCustomDomainsCounts(ctx context.Context) (int64, int64, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "GetCustomDomainsCounts", ctx)
ret0, _ := ret[0].(int64)
ret1, _ := ret[1].(int64)
ret2, _ := ret[2].(error)
return ret0, ret1, ret2
}
// GetCustomDomainsCounts indicates an expected call of GetCustomDomainsCounts.
func (mr *MockStoreMockRecorder) GetCustomDomainsCounts(ctx interface{}) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetCustomDomainsCounts", reflect.TypeOf((*MockStore)(nil).GetCustomDomainsCounts), ctx)
}
// GetServices mocks base method.
func (m *MockStore) GetServices(ctx context.Context, lockStrength LockingStrength) ([]*reverseproxy.Service, error) {
m.ctrl.T.Helper()
@@ -1903,6 +1888,21 @@ func (mr *MockStoreMockRecorder) GetServices(ctx, lockStrength interface{}) *gom
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetServices", reflect.TypeOf((*MockStore)(nil).GetServices), ctx, lockStrength)
}
// GetServicesByAccountID mocks base method.
func (m *MockStore) GetServicesByAccountID(ctx context.Context, lockStrength LockingStrength, accountID string) ([]*reverseproxy.Service, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "GetServicesByAccountID", ctx, lockStrength, accountID)
ret0, _ := ret[0].([]*reverseproxy.Service)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// GetServicesByAccountID indicates an expected call of GetServicesByAccountID.
func (mr *MockStoreMockRecorder) GetServicesByAccountID(ctx, lockStrength, accountID interface{}) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetServicesByAccountID", reflect.TypeOf((*MockStore)(nil).GetServicesByAccountID), ctx, lockStrength, accountID)
}
// GetSetupKeyByID mocks base method.
func (m *MockStore) GetSetupKeyByID(ctx context.Context, lockStrength LockingStrength, accountID, setupKeyID string) (*types2.SetupKey, error) {
m.ctrl.T.Helper()
@@ -2231,6 +2231,21 @@ func (mr *MockStoreMockRecorder) ListFreeDomains(ctx, accountID interface{}) *go
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "ListFreeDomains", reflect.TypeOf((*MockStore)(nil).ListFreeDomains), ctx, accountID)
}
// ListUsers mocks base method.
func (m *MockStore) ListUsers(ctx context.Context) ([]*types2.User, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "ListUsers", ctx)
ret0, _ := ret[0].([]*types2.User)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// ListUsers indicates an expected call of ListUsers.
func (mr *MockStoreMockRecorder) ListUsers(ctx interface{}) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "ListUsers", reflect.TypeOf((*MockStore)(nil).ListUsers), ctx)
}
// MarkAccountPrimary mocks base method.
func (m *MockStore) MarkAccountPrimary(ctx context.Context, accountID string) error {
m.ctrl.T.Helper()
@@ -2776,6 +2791,20 @@ func (mr *MockStoreMockRecorder) UpdateService(ctx, service interface{}) *gomock
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "UpdateService", reflect.TypeOf((*MockStore)(nil).UpdateService), ctx, service)
}
// UpdateUserID mocks base method.
func (m *MockStore) UpdateUserID(ctx context.Context, accountID, oldUserID, newUserID string) error {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "UpdateUserID", ctx, accountID, oldUserID, newUserID)
ret0, _ := ret[0].(error)
return ret0
}
// UpdateUserID indicates an expected call of UpdateUserID.
func (mr *MockStoreMockRecorder) UpdateUserID(ctx, accountID, oldUserID, newUserID interface{}) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "UpdateUserID", reflect.TypeOf((*MockStore)(nil).UpdateUserID), ctx, accountID, oldUserID, newUserID)
}
// UpdateZone mocks base method.
func (m *MockStore) UpdateZone(ctx context.Context, zone *zones.Zone) error {
m.ctrl.T.Helper()

BIN
migrate-idp Executable file
