Mirror of https://github.com/netbirdio/netbird.git (synced 2026-04-17 15:56:39 +00:00)
**File:** `idp-migration-plan.md`

# Plan: Standalone IdP Migration Tool (External IdP → Embedded DEX)

## Context

**Target repo:** `/Users/ashleymensah/Documents/netbird-repos/netbird` (main repo, not the fork)

Self-hosted NetBird users migrating from an external IdP (Zitadel, Keycloak, Okta, etc.) to NetBird's embedded DEX-based IdP need a way to re-key all user IDs in the database. A colleague's fork at `/Users/ashleymensah/Documents/netbird-repos/nico-netbird/netbird` has a prototype that runs inside management as an AfterInit hook, but that approach has a chicken-and-egg problem: enabling EmbeddedIdP causes management to initialize DEX before the migration runs, so startup fails.

This plan creates a **standalone CLI tool** that runs while management is stopped and re-keys all user IDs; the user then manually updates their management config and restarts. The main repo already has the DEX/EmbeddedIdP infrastructure but is missing the store methods and migration logic — these need to be created (porting patterns from the fork).

**Note:** The tool does not need to work with the combined management container setup (that setup only supports embeddedIdP-enabled deployments anyway).

---
## What the migration does

For each user, the tool transforms the old ID (e.g., a Zitadel UUID) into DEX's encoded format:

```
newID = EncodeDexUserID(oldUserID, connectorID)
      → base64(protobuf{field1: userID, field2: connectorID})
```

This encoded ID is what DEX puts in the JWT `sub` claim, ensuring identity continuity after switching IdPs.
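The encoding can be reproduced in a few lines of Go. This is an illustrative sketch, not the repo's `EncodeDexUserID`: the single-byte length prefix assumes fields shorter than 128 bytes, and the unpadded URL-safe base64 alphabet is inferred from the sample IDs used elsewhere in this plan.

```go
package main

import (
	"encoding/base64"
	"fmt"
)

// protoString hand-encodes a length-delimited protobuf field (wire type 2).
// The single length byte assumes len(s) < 128 (a one-byte varint).
func protoString(fieldNum int, s string) []byte {
	out := []byte{byte(fieldNum<<3 | 2), byte(len(s))}
	return append(out, s...)
}

// encodeDexUserID mimics the subject encoding described above:
// unpadded base64 over protobuf{field1: userID, field2: connectorID}.
func encodeDexUserID(userID, connectorID string) string {
	msg := append(protoString(1, userID), protoString(2, connectorID)...)
	return base64.RawURLEncoding.EncodeToString(msg)
}

func main() {
	fmt.Println(encodeDexUserID("7aad8c05-3287-473f-b42a-365504bf25e7", "oidc"))
	// → CiQ3YWFkOGMwNS0zMjg3LTQ3M2YtYjQyYS0zNjU1MDRiZjI1ZTcSBG9pZGM
}
```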
---

## Tables requiring user ID updates

### Main store (store.db / PostgreSQL) — 10 columns

| # | Table | Column | Notes |
|---|-------|--------|-------|
| 1 | `users` | `id` (PK) | Primary key update, done last in the transaction |
| 2 | `personal_access_tokens` | `user_id` (FK) | |
| 3 | `personal_access_tokens` | `created_by` | |
| 4 | `peers` | `user_id` | |
| 5 | `user_invites` | `created_by` | GORM `TableName()` returns `user_invites` (not `user_invite_records`) |
| 6 | `accounts` | `created_by` | |
| 7 | `proxy_access_tokens` | `created_by` | |
| 8 | `jobs` | `triggered_by` | |
| 9 | `policy_rules` | `authorized_user` | SSH policy user refs — missed by the fork's implementation |
| 10 | `access_log_entries` | `user_id` | Reverse proxy access logs — missed by both the fork and the original plan |

### Activity store (events.db / PostgreSQL) — 3 columns

| # | Table | Column | Notes |
|---|-------|--------|-------|
| 11 | `events` | `initiator_id` | |
| 12 | `events` | `target_id` | |
| 13 | `deleted_users` | `id` (PK) | Raw SQL needed (GORM can't update a PK via `Model`) |

**Total: 13 columns (10 main store + 3 activity store)**

### Verified NOT needing migration

- `policy_rules.authorized_groups` — maps group IDs to local Unix usernames (e.g., "root", "admin"), NOT NetBird user IDs
- `groups` / `group_peers` — store peer IDs, not user IDs
- `routes`, `nameserver_groups`, `setup_keys`, `posture_checks`, `networks`, `dns_settings` — no user ID fields

---
## What exists in the main repo vs. what needs to be created

| Component | Main repo status | Action |
|-----------|-----------------|--------|
| `EncodeDexUserID` / `DecodeDexUserID` | EXISTS at `idp/dex/provider.go` | No changes |
| EmbeddedIdP config + manager | EXISTS at `management/server/idp/embedded.go` | No changes |
| DEX provider | EXISTS at `idp/dex/provider.go` | No changes |
| Server bootstrapping (modules.go) | EXISTS at `management/internals/server/modules.go` | No changes |
| `Store.ListUsers()` interface method | **MISSING** | Add to `management/server/store/store.go` |
| `SqlStore.ListUsers()` implementation | **MISSING** | Add to `management/server/store/sql_store.go` |
| `Store.UpdateUserID()` interface method | **MISSING** | Add to `management/server/store/store.go` |
| `SqlStore.UpdateUserID()` implementation | **MISSING** | Add to `management/server/store/sql_store.go` |
| `activity.Store.UpdateUserID()` interface | **MISSING** | Add to `management/server/activity/store.go` |
| Activity `Store.UpdateUserID()` implementation | **MISSING** | Add to `management/server/activity/store/sql_store.go` |
| `InMemoryEventStore.UpdateUserID()` no-op | **MISSING** | Add to `management/server/activity/store.go` (compile-blocking) |
| `txDeferFKConstraints` helper | **MISSING** | Port from the fork to `management/server/store/sql_store.go` |
| Store mock regeneration | **NEEDED** | Run `go generate ./management/server/store/...` after interface changes |
| Migration package | **MISSING** | Create at `management/server/idp/migration/` |
| Standalone CLI tool | **MISSING** | Create at `management/cmd/migrate-idp/` |

**Source of patterns:** the fork at `/Users/ashleymensah/Documents/netbird-repos/nico-netbird/netbird`

---
## Implementation plan
|
||||
|
||||
### Step 1: Add `ListUsers()` to store interface and implementation
|
||||
|
||||
**File:** `management/server/store/store.go` — add to Store interface:
|
||||
```go
|
||||
ListUsers(ctx context.Context) ([]*types.User, error)
|
||||
```
|
||||
|
||||
**File:** `management/server/store/sql_store.go` — add implementation:
|
||||
```go
|
||||
func (s *SqlStore) ListUsers(ctx context.Context) ([]*types.User, error) {
|
||||
var users []*types.User
|
||||
if err := s.db.Find(&users).Error; err != nil {
|
||||
return nil, status.Errorf(status.Internal, "failed to list users")
|
||||
}
|
||||
// Decrypt sensitive fields (Email, Name) so logging shows readable values.
|
||||
// No-op when fieldEncrypt is nil (no encryption key configured).
|
||||
for _, user := range users {
|
||||
if err := user.DecryptSensitiveData(s.fieldEncrypt); err != nil {
|
||||
return nil, status.Errorf(status.Internal, "failed to decrypt user data")
|
||||
}
|
||||
}
|
||||
return users, nil
|
||||
}
|
||||
```
|
||||
|
||||
### Step 2: Add `UpdateUserID()` to the store interface and implementation

**File:** `management/server/store/store.go` — add to the Store interface:

```go
UpdateUserID(ctx context.Context, accountID, oldUserID, newUserID string) error
```

**File:** `management/server/store/sql_store.go` — add the implementation (ported from the fork, with the `policy_rules` fix; the body below is a sketch of the fork's pattern — the `fkUpdate` struct and `transaction()` helper signatures must be matched to the main repo when porting):

```go
// fkUpdate describes one FK column to re-key (ported from the fork).
type fkUpdate struct {
	model  any
	column string
	cond   string
}

func (s *SqlStore) UpdateUserID(ctx context.Context, accountID, oldUserID, newUserID string) error {
	updates := []fkUpdate{
		{&types.PersonalAccessToken{}, "user_id", "user_id = ?"},
		{&types.PersonalAccessToken{}, "created_by", "created_by = ?"},
		{&nbpeer.Peer{}, "user_id", "user_id = ?"},
		{&types.UserInviteRecord{}, "created_by", "created_by = ?"},
		{&types.Account{}, "created_by", "created_by = ?"},
		{&types.ProxyAccessToken{}, "created_by", "created_by = ?"},
		{&types.Job{}, "triggered_by", "triggered_by = ?"},
		{&types.PolicyRule{}, "authorized_user", "authorized_user = ?"}, // missed by fork
		{&accesslogs.AccessLogEntry{}, "user_id", "user_id = ?"},        // missed by both fork and original plan
	}
	// One transaction with deferred FK constraints: update the FK columns
	// first, then the users.id PK last. Sketch:
	return s.transaction(ctx, func(tx *gorm.DB) error {
		if err := s.txDeferFKConstraints(tx); err != nil { // see Step 2b
			return err
		}
		for _, u := range updates {
			if err := tx.Model(u.model).Where(u.cond, oldUserID).
				Update(u.column, newUserID).Error; err != nil {
				return err
			}
		}
		// PK last, via raw SQL (GORM can't update a PK via Model).
		return tx.Exec("UPDATE users SET id = ? WHERE id = ? AND account_id = ?",
			newUserID, oldUserID, accountID).Error
	})
}
```

### Step 2b: Port the `txDeferFKConstraints` helper

**File:** `management/server/store/sql_store.go` — add the helper (ported from fork lines 842-853; engine field/constant names below follow the store package's conventions and should be checked when porting):

```go
func (s *SqlStore) txDeferFKConstraints(tx *gorm.DB) error {
	switch s.storeEngine {
	case SqliteStoreEngine:
		// SQLite: defer FK checks until transaction commit
		return tx.Exec("PRAGMA defer_foreign_keys = ON").Error
	case PostgresStoreEngine:
		// PostgreSQL: defer constraints (belt-and-suspenders; the FK-first
		// update order already handles non-deferrable constraints)
		return tx.Exec("SET CONSTRAINTS ALL DEFERRED").Error
	}
	// MySQL: already handled by the transaction() wrapper (SET FOREIGN_KEY_CHECKS = 0)
	return nil
}
```
### Step 3: Add `UpdateUserID()` to the activity store interface and implementation

**File:** `management/server/activity/store.go` — add to the Store interface:

```go
UpdateUserID(ctx context.Context, oldUserID, newUserID string) error
```

**File:** `management/server/activity/store.go` — add a no-op to `InMemoryEventStore` (compile-blocking otherwise):

```go
func (store *InMemoryEventStore) UpdateUserID(_ context.Context, _, _ string) error {
	return nil
}
```

**File:** `management/server/activity/store/sql_store.go` — add the implementation (ported from the fork):

- Update `events.initiator_id` and `events.target_id` via GORM
- Update `deleted_users.id` via raw SQL (GORM can't update a PK via `Model`)
- All in one transaction

### Step 3b: Regenerate store mocks

Run `go generate ./management/server/store/...` to regenerate `store_mock.go` with the new `ListUsers` and `UpdateUserID` methods. Without this, tests using the mock won't compile.
### Step 4: Create the migration package

**New file:** `management/server/idp/migration/migration.go`

- Define narrow interfaces:

```go
type MainStoreUpdater interface {
	ListUsers(ctx context.Context) ([]*types.User, error)
	UpdateUserID(ctx context.Context, accountID, oldUserID, newUserID string) error
}

type ActivityStoreUpdater interface {
	UpdateUserID(ctx context.Context, oldUserID, newUserID string) error
}
```

- `MigrationConfig` struct: `ConnectorID`, `DryRun`, `MainStore`, `ActivityStore`
- `MigrationResult` struct: `Migrated`, `Skipped` counts
- `Migrate(ctx, *MigrationConfig) (*MigrationResult, error)`:
  1. List all users from the main store
  2. Reconciliation pass: for already-migrated users, ensure the activity store is also updated
  3. For each non-migrated user: encode the new ID, update both stores
  4. Return counts
- Idempotency: `DecodeDexUserID(user.Id)` succeeds → user already migrated, skip
- Empty-ID guard: skip users with `Id == ""` before the decode check (`DecodeDexUserID("")` succeeds with empty strings — an edge case)
- Service users: `IsServiceUser=true` users get re-keyed like all others (they'll be looked up by the new DEX-encoded ID after migration). This is intentional — document it in the CLI help text.
- Uses `EncodeDexUserID` / `DecodeDexUserID` from `idp/dex/provider.go`

**New file:** `management/server/idp/migration/migration_test.go`

- Mock-based tests for `Migrate()` covering: normal migration, skipping already-migrated users, dry-run, reconciliation, empty user list, and error handling
### Step 5: Build the standalone CLI tool

**New file:** `management/cmd/migrate-idp/main.go` (~200 lines)

CLI flags:

| Flag | Required | Default | Description |
|------|----------|---------|-------------|
| `--config` | Yes | `/etc/netbird/management.json` | Path to the management config |
| `--connector-id` | Yes | — | DEX connector ID to encode into user IDs |
| `--dry-run` | No | `false` | Preview changes without writing |
| `--no-backup` | No | `false` | Skip the automatic database backup |
| `--log-level` | No | `info` | Verbosity |

Flow:

1. Load the management config JSON (reuse `util.ReadJsonWithEnvSub`)
2. Validate: connector-id is non-empty, the DB is accessible
3. Open the main store via `store.NewStore(ctx, engine, datadir, nil, false)` — nil metrics, run AutoMigrate
   - `skipMigration=false` ensures the schema is up to date (AutoMigrate is idempotent and non-destructive)
   - Using `true` risks a stale schema if the user upgrades management and the tool simultaneously
4. Call `store.SetFieldEncrypt(enc)` to enable field decryption (needed for `ListUsers` to return readable Email/Name for logging)
5. Open the activity store via `activity_store.NewSqlStore(ctx, datadir, encryptionKey)`
   - Gracefully handle a missing activity store (e.g., `events.db` doesn't exist): warn and skip the activity migration
6. Back up the databases (SQLite: file copy; PostgreSQL: print `pg_dump` instructions)
7. Call `migration.Migrate(ctx, cfg)`
8. Print a summary and exit

**New file:** `management/cmd/migrate-idp/backup.go` (~60 lines)

- `backupSQLiteFile(srcPath)` — copies to `{src}.backup-{timestamp}`
### Step 6: Tests

- Unit tests in `migration_test.go` with mock interfaces
- Integration test in `management/cmd/migrate-idp/main_test.go` with real SQLite:
  - Seed users, events, policy rules with `authorized_user`, and access log entries with `user_id`
  - Run the migration, verify all 13 columns are updated
  - Run again, verify idempotency (0 new migrations)
  - Test partial-failure reconciliation
  - Test a missing activity store (graceful skip)

---

## User-facing migration procedure

```
1. Stop management: systemctl stop netbird-management

2. Dry-run: netbird-migrate-idp \
     --config /etc/netbird/management.json \
     --connector-id "oidc" \
     --dry-run

3. Run migration: netbird-migrate-idp \
     --config /etc/netbird/management.json \
     --connector-id "oidc"

4. Update management.json: add an EmbeddedIdP config with a StaticConnector
   whose ID matches the --connector-id used above (see below)

5. Start management: systemctl start netbird-management
```

### Why manual config is required (step 4)

The EmbeddedIdP config block isn't just about the connector — it includes deployment-specific values that depend on your infrastructure: the OIDC issuer URL (must match your public domain), dashboard/CLI redirect URIs (which depend on your reverse proxy setup), storage paths, the initial owner account (email + bcrypt password hash), and whether local password auth is disabled. Auto-generating these would require the tool to make assumptions about DNS, port config, and proxy setup that could easily be wrong. The connector ID is the only piece the migration tool owns (it's baked into user IDs); everything else is infrastructure config that belongs in the operator's hands. Getting any of it wrong means management still won't start.
---

## Pitfalls and mitigations

| Risk | Mitigation |
|------|------------|
| Management running during migration | Warn the user; SQLite returns SQLITE_BUSY with a clear error |
| Wrong connector ID | Dry-run shows the exact ID transformations; the backup enables rollback |
| Partial failure mid-migration | Idempotent: `DecodeDexUserID` detects already-migrated users; the reconciliation pass fixes activity-store lag |
| Large user count | Each user is migrated in its own transaction; progress is logged every 100 users (not per-user, to avoid log spam) |
| Missing encryption key for the activity store | Read from the management config's `DataStoreEncryptionKey` |
| Missing activity store database | Warn and skip the activity migration; the main store migration proceeds |
| Empty user ID in the database | Explicit guard before the decode check; `DecodeDexUserID("")` succeeds with empty strings |
| Re-running with a different connector-id | Already-migrated users are correctly skipped (decode succeeds). To change the connector-id, restore from backup first |
| MySQL store engine | Supported — the existing `transaction()` helper handles `SET FOREIGN_KEY_CHECKS = 0` |
| PostgreSQL non-deferrable FK constraints | The update order (FKs first, PK last) avoids constraint violations regardless of deferrability |

---
## Verification

1. **Unit tests:** Mock-based tests for the migration logic (skip/migrate/dry-run/reconcile/empty-ID guard)
2. **Integration test:** Real SQLite databases seeded with test data; verify all 13 columns
3. **Manual test:** Run `--dry-run` on a copy of a real self-hosted deployment's databases
4. **Idempotency test:** Run the migration twice; the second run should report 0 migrations
5. **Policy rules test:** Seed `policy_rules.authorized_user` with an old user ID, verify it's updated
6. **Access log test:** Seed `access_log_entries.user_id` with an old user ID, verify it's updated
7. **Missing activity store test:** Run with `events.db` missing, verify the main store migration succeeds with a warning

---
## Key files (all paths relative to the main repo)

**New files to create:**

- `management/server/idp/migration/migration.go` — migration interfaces + `Migrate()` function
- `management/server/idp/migration/migration_test.go` — unit tests
- `management/cmd/migrate-idp/main.go` — CLI entry point
- `management/cmd/migrate-idp/backup.go` — SQLite backup logic
- `management/cmd/migrate-idp/main_test.go` — integration tests

**Existing files to modify:**

- `management/server/store/store.go` — add `ListUsers()` and `UpdateUserID()` to the Store interface
- `management/server/store/sql_store.go` — add the `ListUsers()`, `UpdateUserID()`, and `txDeferFKConstraints()` implementations
- `management/server/activity/store.go` — add `UpdateUserID()` to the Store interface + the `InMemoryEventStore.UpdateUserID()` no-op
- `management/server/activity/store/sql_store.go` — add the `UpdateUserID()` implementation

**Generated files to regenerate:**

- `management/server/store/store_mock.go` — run `go generate ./management/server/store/...` after interface changes

**Read-only references (port patterns from the fork):**

- Fork's `management/server/store/sql_store.go:855-895` — `UpdateUserID()` pattern
- Fork's `management/server/activity/store/sql_store.go:230-254` — activity `UpdateUserID()` pattern
- Fork's `management/server/idp/migration/migration.go` — orchestration logic pattern

**Existing files used as-is (no changes):**

- `idp/dex/provider.go` — `EncodeDexUserID` / `DecodeDexUserID`
- `management/server/types/policyrule.go:88` — `AuthorizedUser` field
- `management/internals/modules/reverseproxy/accesslogs/accesslogentry.go:25` — `AccessLogEntry.UserId` field
- `management/server/idp/embedded.go` — EmbeddedIdP manager
---

**File:** `management/cmd/migrate-idp/MIGRATION_GUIDE.md`

# Migrating from an External IdP to NetBird's Embedded IdP

This guide walks you through migrating a self-hosted NetBird deployment from an external identity provider (Zitadel, Keycloak, Auth0, Okta, etc.) to NetBird's built-in embedded IdP (powered by DEX).

After this migration, NetBird manages authentication directly — no external IdP dependency required.

---

## Table of Contents

1. [What This Migration Does](#what-this-migration-does)
2. [Before You Start](#before-you-start)
3. [Step 1: Choose Your Connector ID](#step-1-choose-your-connector-id)
4. [Step 2: Stop the Management Server](#step-2-stop-the-management-server)
5. [Step 3: Run a Dry-Run](#step-3-run-a-dry-run)
6. [Step 4: Run the Migration](#step-4-run-the-migration)
7. [Step 5: Update management.json](#step-5-update-managementjson)
8. [Step 5f: Configure Your Old IdP (If Keeping It as a DEX Connector)](#step-5f-configure-your-old-idp-if-keeping-it-as-a-dex-connector)
9. [Step 6: Start the Management Server](#step-6-start-the-management-server)
10. [Step 7: Verify Everything Works](#step-7-verify-everything-works)
11. [Rollback](#rollback)
12. [FAQ](#faq)

---
## What This Migration Does

NetBird's embedded IdP (DEX) uses a different format for user IDs than external providers do. When a user logs in through DEX, the user ID stored in the JWT `sub` claim looks like this:

```
CiQ3YWFkOGMwNS0zMjg3LTQ3M2YtYjQyYS0zNjU1MDRiZjI1ZTcSBG9pZGM
```

This is a base64-encoded blob that contains two pieces of information:

- The **original user ID** (e.g., `7aad8c05-3287-473f-b42a-365504bf25e7`)
- The **connector ID** (e.g., `oidc`)

The migration tool reads every user from your database, encodes their existing user ID into this DEX format, and updates all references across the database. After migration, when DEX issues tokens for your users, the `sub` claim matches what's in the database, and everything works seamlessly.
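If you're curious what's inside such a `sub` value, you can unpack it by hand: it's unpadded base64 over a tiny two-field protobuf message. Here's a minimal illustrative parser (not NetBird's `DecodeDexUserID`; it assumes field lengths under 128 bytes):

```go
package main

import (
	"encoding/base64"
	"fmt"
)

// decodeSub unpacks the sub format shown above:
// field 1 = original user ID, field 2 = connector ID.
func decodeSub(sub string) (userID, connectorID string, err error) {
	raw, err := base64.RawURLEncoding.DecodeString(sub)
	if err != nil {
		return "", "", err
	}
	for i := 0; i < len(raw); {
		if i+2 > len(raw) {
			return "", "", fmt.Errorf("truncated field at byte %d", i)
		}
		tag, n := raw[i], int(raw[i+1]) // tag byte + single-byte length
		if i+2+n > len(raw) {
			return "", "", fmt.Errorf("field overruns buffer")
		}
		val := string(raw[i+2 : i+2+n])
		switch tag {
		case 0x0A: // field 1, wire type 2 (length-delimited)
			userID = val
		case 0x12: // field 2, wire type 2
			connectorID = val
		}
		i += 2 + n
	}
	return userID, connectorID, nil
}

func main() {
	u, c, _ := decodeSub("CiQ3YWFkOGMwNS0zMjg3LTQ3M2YtYjQyYS0zNjU1MDRiZjI1ZTcSBG9pZGM")
	fmt.Println(u, c)
	// → 7aad8c05-3287-473f-b42a-365504bf25e7 oidc
}
```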
### What gets updated

The tool updates user ID references in **13 database columns** across two databases:

**Main database (store.db or PostgreSQL):**

| Table | Column | What it stores |
|-------|--------|----------------|
| `users` | `id` | The user's primary key |
| `personal_access_tokens` | `user_id` | Which user owns the token |
| `personal_access_tokens` | `created_by` | Who created the token |
| `peers` | `user_id` | Which user registered the peer |
| `user_invites` | `created_by` | Who sent the invitation |
| `accounts` | `created_by` | Who created the account |
| `proxy_access_tokens` | `created_by` | Who created the proxy token |
| `jobs` | `triggered_by` | Who triggered the job |
| `policy_rules` | `authorized_user` | SSH policy user authorization |
| `access_log_entries` | `user_id` | Reverse proxy access logs |

**Activity database (events.db or PostgreSQL):**

| Table | Column | What it stores |
|-------|--------|----------------|
| `events` | `initiator_id` | Who performed the action |
| `events` | `target_id` | Who was the target of the action |
| `deleted_users` | `id` | Archived deleted-user records |

### What does NOT change

- Peer IDs, group IDs, network configurations, DNS settings, routes, and setup keys are **not affected**.
- Your WireGuard tunnels and peer connections keep working throughout.
- The migration only touches user identity references.

---
## Before You Start

### Requirements

- **Access to the management server machine** (SSH or direct).
- **The `migrate-idp` binary** — built from `management/cmd/migrate-idp/`.
- **The management server must be stopped** during migration; the tool works directly on the database files.
- **A backup strategy** — the tool creates automatic SQLite backups, but for PostgreSQL you should run `pg_dump` yourself.

### What you will need to know

Before starting, gather these pieces of information:

1. **Where your management.json lives** — typically `/etc/netbird/management.json`.
2. **Your connector ID** — see [Step 1](#step-1-choose-your-connector-id).
3. **Your public management URL** — the URL users and agents use to reach the management server (e.g., `https://netbird.example.com`).
4. **Your dashboard URL** — where the NetBird web dashboard is hosted (e.g., `https://app.netbird.example.com`).
5. **An admin email and password** — for the initial owner account in the embedded IdP.

### Build the migration tool

From the NetBird repository root:

```bash
cd management && go build -o migrate-idp ./cmd/migrate-idp/
```

This produces a `migrate-idp` binary. Copy it to your management server if you are building remotely.

---
## Step 1: Choose Your Connector ID

The connector ID is a short string that gets baked into every user's new ID. It tells DEX which authentication connector a user came from. You will use this same connector ID later when configuring the embedded IdP.

**For most migrations, use `oidc` as the connector ID.** This is the standard value for any OIDC-based external provider (Zitadel, Keycloak, Auth0, Okta, etc.).

Some specific cases:

| Previous IdP | Recommended connector ID |
|-------------|------------------------|
| Zitadel | `oidc` |
| Keycloak | `oidc` |
| Auth0 | `oidc` |
| Okta | `oidc` |
| Google Workspace | `google` |
| Microsoft Entra (Azure AD) | `microsoft` |
| Any generic OIDC provider | `oidc` |

The connector ID is arbitrary — it just needs to match between the migration and the DEX connector configuration you set up in Step 5. If you later add the old IdP as a DEX connector (so existing users can log in via their old provider through DEX), the connector's ID in the DEX config must match the value you use here.

---
## Step 2: Stop the Management Server

The migration modifies the database directly, so the management server must not be running.

```bash
# systemd
sudo systemctl stop netbird-management

# Docker
docker compose stop management
# or
docker stop netbird-management
```

Verify it's stopped:

```bash
# systemd
sudo systemctl status netbird-management

# Docker
docker ps | grep management
```

---
## Step 3: Run a Dry-Run

A dry-run shows you exactly what the migration would do without writing any changes. Always do this first.

```bash
./migrate-idp \
  --config /etc/netbird/management.json \
  --connector-id oidc \
  --dry-run
```

You will see output like:

```
INFO loaded config from /etc/netbird/management.json (datadir: /var/lib/netbird, engine: sqlite)
INFO [DRY RUN] mode enabled — no changes will be written
INFO found 15 users to process
INFO [DRY RUN] would migrate user 7aad8c05-3287-... -> CiQ3YWFkOGMw... (account: abc123)
INFO [DRY RUN] would migrate user auth0|abc123... -> CgxhdXRoMHxh... (account: abc123)
...
INFO [DRY RUN] migration summary: 15 users would be migrated, 0 already migrated

Migration summary:
  Migrated: 15 users
  Skipped:  0 users (already migrated)

[DRY RUN] No changes were written. Remove --dry-run to apply.
```

**Check the output carefully.** Every user should show their old ID transforming into a new base64-encoded ID. If anything looks wrong (an unexpected user count, errors), stop and investigate before proceeding.

### Available flags

| Flag | Required | Default | Description |
|------|----------|---------|-------------|
| `--config` | Yes | `/etc/netbird/management.json` | Path to your management config file |
| `--connector-id` | Yes | — | The connector ID to encode into user IDs |
| `--dry-run` | No | `false` | Preview changes without writing |
| `--no-backup` | No | `false` | Skip the automatic database backup |
| `--log-level` | No | `info` | Log verbosity: `debug`, `info`, `warn`, `error` |

---
## Step 4: Run the Migration

Once you are satisfied with the dry-run output, run the actual migration:

```bash
./migrate-idp \
  --config /etc/netbird/management.json \
  --connector-id oidc
```

The tool will:

1. **Back up your databases** — for SQLite, it copies `store.db` and `events.db` to timestamped backups (e.g., `store.db.backup-20260302-140000`). For PostgreSQL, it prints a warning reminding you to use `pg_dump`.
2. **Migrate each user** — encodes each ID into DEX format and updates all 13 columns, in a single database transaction per user.
3. **Print a summary** of how many users were migrated and how many were skipped.

Example output:

```
INFO loaded config from /etc/netbird/management.json (datadir: /var/lib/netbird, engine: sqlite)
INFO backed up /var/lib/netbird/store.db -> /var/lib/netbird/store.db.backup-20260302-140000
INFO backed up /var/lib/netbird/events.db -> /var/lib/netbird/events.db.backup-20260302-140000
INFO found 15 users to process
INFO migration complete: 15 users migrated, 0 already migrated

Migration summary:
  Migrated: 15 users
  Skipped:  0 users (already migrated)

Next step: update management.json to enable EmbeddedIdP with connector ID "oidc"
```

### Idempotency

The migration is safe to run multiple times. If it's interrupted or you run it again, it detects already-migrated users (their IDs are already in DEX format) and skips them. A second run will report `0 users migrated, 15 already migrated`.

---
## Step 5: Update management.json

This is the manual configuration step. You need to add an `EmbeddedIdP` block to your `management.json` file so the management server starts with the built-in identity provider instead of your old external IdP.

### 5a: Gather the required information

You need these values:

| Value | Where to find it | Example |
|-------|------------------|---------|
| **Issuer URL** | Your public management server URL + `/oauth2`. This must be reachable by browsers and the NetBird client. | `https://netbird.example.com/oauth2` |
| **Local address** | The port the management server listens on locally. Check your current config's `HttpConfig` section. | `:443`, `:8080`, or `:33073` |
| **Dashboard redirect URIs** | Your dashboard URL + `/nb-auth` and `/nb-silent-auth`. Check your current `HttpConfig.AuthAudience` or dashboard deployment for the base URL. | `https://app.netbird.example.com/nb-auth` |
| **CLI redirect URIs** | Standard localhost ports used by the NetBird CLI for OAuth callbacks. These are always the same. | `http://localhost:53000/` and `http://localhost:54000/` |
| **IdP storage path** | Where DEX should store its database. Use your existing data directory. | `/var/lib/netbird/idp.db` |
| **Owner email** | The email address of the initial admin user — the account owner who currently manages your NetBird deployment. | `admin@example.com` |
| **Owner password hash** | A bcrypt hash of the password for the initial admin. See section 5b below. | `$2a$10$N9qo8uLO...` |

**How to find your dashboard URL:** Look at the current `DeviceAuthorizationFlow` or `PKCEAuthorizationFlow` section in your `management.json`; the redirect URIs there point to your dashboard. You can also check what URL you use to access the NetBird web dashboard in your browser.

**How to find your local listen address:** Look at the current `HttpConfig` section in your `management.json` for the `ListenAddress`, or check which port the management server binds to (the default is `443` or `33073`).
### 5b: Generate a bcrypt password hash
|
||||
|
||||
The owner password must be stored as a bcrypt hash, not as plain text. Use any of these methods to generate one:
|
||||
|
||||
**Using htpasswd (most systems):**
|
||||
|
||||
```bash
|
||||
htpasswd -nbBC 10 "" 'YourSecurePassword' | cut -d: -f2
|
||||
```
|
||||
|
||||
**Using Python:**
|
||||
|
||||
```bash
|
||||
python3 -c "import bcrypt; print(bcrypt.hashpw(b'YourSecurePassword', bcrypt.gensalt()).decode())"
|
||||
```
|
||||
|
||||
If the `bcrypt` module is not installed: `pip3 install bcrypt`.
|
||||
|
||||
**Using Docker (no local dependencies):**
|
||||
|
||||
```bash
|
||||
docker run --rm python:3-slim sh -c \
|
||||
"pip -q install bcrypt && python3 -c \"import bcrypt; print(bcrypt.hashpw(b'YourSecurePassword', bcrypt.gensalt()).decode())\""
|
||||
```
|
||||
|
||||
The output will look like: `$2b$12$LJ3m4ys3Gl.2B1FlKNUyde8R7sCgSEO6k.gSCiBfQKOJDMBz.bXXi`

### 5c: Edit management.json

Open your `management.json` and make these changes:

**1. Add the `EmbeddedIdP` block.** Add it as a top-level key:

```json
{
  "Stuns": [...],
  "TURNConfig": {...},
  "Signal": {...},
  "Datadir": "/var/lib/netbird",
  "DataStoreEncryptionKey": "...",
  "HttpConfig": {...},

  "EmbeddedIdP": {
    "Enabled": true,
    "Issuer": "https://netbird.example.com/oauth2",
    "LocalAddress": ":443",
    "Storage": {
      "Type": "sqlite3",
      "Config": {
        "File": "/var/lib/netbird/idp.db"
      }
    },
    "DashboardRedirectURIs": [
      "https://app.netbird.example.com/nb-auth",
      "https://app.netbird.example.com/nb-silent-auth"
    ],
    "CLIRedirectURIs": [
      "http://localhost:53000/",
      "http://localhost:54000/"
    ],
    "Owner": {
      "Email": "admin@example.com",
      "Hash": "$2b$12$LJ3m4ys3Gl.2B1FlKNUyde8R7sCgSEO6k.gSCiBfQKOJDMBz.bXXi",
      "Username": "Admin"
    },
    "SignKeyRefreshEnabled": false,
    "LocalAuthDisabled": false
  },

  "StoreConfig": {...},
  ...
}
```

**2. Update `HttpConfig` to point at the embedded IdP:**

```json
"HttpConfig": {
  "AuthAudience": "netbird-dashboard",
  "AuthIssuer": "https://netbird.example.com/oauth2",
  "AuthUserIDClaim": "sub",
  "CLIAuthAudience": "netbird-cli",
  ...
}
```

- `AuthAudience` must be `"netbird-dashboard"` — this is the static client ID DEX uses for the dashboard.
- `CLIAuthAudience` must be `"netbird-cli"` — the static client ID DEX uses for the CLI.
- `AuthIssuer` must match the `Issuer` in your `EmbeddedIdP` block.

**3. Remove or leave the old `IdpManagerConfig` block.** When `EmbeddedIdP` is configured, the management server uses it instead of any external IdP config. You can either delete the old `IdpManagerConfig` block or leave it — it will be ignored.

### 5d: Explanation of each field

| Field | Required | Description |
|-------|----------|-------------|
| `Enabled` | Yes | Must be `true` to activate the embedded IdP. |
| `Issuer` | Yes | The public URL where DEX serves OIDC endpoints. Must be your management server's public URL with `/oauth2` appended. Browsers and clients will call this URL to authenticate. Must be HTTPS in production. |
| `LocalAddress` | Yes | The local listen address of the management server (e.g., `:443`). Used internally for JWT validation to avoid external network calls during token verification. |
| `Storage.Type` | Yes | `"sqlite3"` or `"postgres"`. This is the storage DEX uses for its own data (connectors, tokens, keys). Separate from NetBird's main store. |
| `Storage.Config.File` | For sqlite3 | Path where DEX creates its SQLite database. Use your data directory (e.g., `/var/lib/netbird/idp.db`). |
| `Storage.Config.DSN` | For postgres | PostgreSQL connection string for DEX storage (e.g., `host=localhost dbname=netbird_idp sslmode=disable`). |
| `DashboardRedirectURIs` | Yes | OAuth2 redirect URIs for the web dashboard. Must include `/nb-auth` and `/nb-silent-auth` paths on your dashboard URL. |
| `CLIRedirectURIs` | Yes | OAuth2 redirect URIs for the NetBird CLI. Always use `http://localhost:53000/` and `http://localhost:54000/`. |
| `Owner.Email` | Recommended | Email for the initial admin user. This user can log in immediately with email/password. |
| `Owner.Hash` | Recommended | Bcrypt hash of the admin password. See [5b](#5b-generate-a-bcrypt-password-hash). |
| `Owner.Username` | No | Display name for the admin user. Defaults to the email if not set. |
| `SignKeyRefreshEnabled` | No | Enables automatic rotation of JWT signing keys. Default `false`. |
| `LocalAuthDisabled` | No | Set to `true` to disable email/password login entirely (only allow login via external connectors configured in DEX). Default `false`. |

### 5e: If using PostgreSQL for DEX storage

If your main NetBird store uses PostgreSQL, you may want DEX to use PostgreSQL too. Create a separate database for DEX:

```sql
CREATE DATABASE netbird_idp;
```

Then configure:

```json
"Storage": {
  "Type": "postgres",
  "Config": {
    "DSN": "host=localhost port=5432 user=netbird password=secret dbname=netbird_idp sslmode=disable"
  }
}
```

---

## Step 5f: Configure Your Old IdP (If Keeping It as a DEX Connector)

After migration, you have two authentication options:

- **Option A: Local passwords only** — users log in with email/password through DEX's built-in password database. No changes needed on any external IdP. The owner account you configured in Step 5 is the first user. You can create more users through the dashboard or API. **Skip this section entirely.**

- **Option B: Keep your old IdP as a login option through DEX** — existing users continue to log in via your old provider (Zitadel, Keycloak, Okta, etc.), but DEX sits in the middle as an OIDC broker. **You must complete this section.**

### Why is this needed?

Before migration, your NetBird clients and dashboard talked directly to your old IdP for authentication. After migration, they talk to DEX instead. DEX then talks to your old IdP on their behalf. This means:

1. **DEX needs to be registered as an OAuth2 client in your old IdP** (it may already be if you reuse the existing client credentials).
2. **Your old IdP needs to allow DEX's callback URL as a redirect URI** — this is different from the redirect URIs your dashboard and CLI used before.

### The DEX callback URL

DEX uses a single callback URL for all external connectors:

```
https://<your-management-server>/oauth2/callback
```

For example, if your management server is at `https://netbird.example.com`, the callback URL is:

```
https://netbird.example.com/oauth2/callback
```

### What to configure in your old IdP

Go to your old IdP's admin panel and either update the existing OAuth2/OIDC application or create a new one:

| Setting | Value |
|---------|-------|
| **Redirect URI / Callback URL** | `https://netbird.example.com/oauth2/callback` |
| **Grant type** | Authorization Code |
| **Scopes** | `openid`, `profile`, `email` (and `groups` if you use group-based policies) |
| **Client ID** | Note this down — you need it for the connector config |
| **Client Secret** | Note this down — you need it for the connector config |

Provider-specific instructions:

**Zitadel:**
1. Go to your Zitadel project > Applications.
2. Either edit the existing NetBird application or create a new Web application.
3. In Redirect URIs, add `https://netbird.example.com/oauth2/callback`.
4. Copy the Client ID and Client Secret.

**Keycloak:**
1. Go to your realm > Clients.
2. Either edit the existing NetBird client or create a new OpenID Connect client.
3. In Valid Redirect URIs, add `https://netbird.example.com/oauth2/callback`.
4. Copy the Client ID and Client Secret from the Credentials tab.

**Auth0:**
1. Go to Applications > your NetBird application (or create a new Regular Web Application).
2. In Allowed Callback URLs, add `https://netbird.example.com/oauth2/callback`.
3. Copy the Client ID and Client Secret.

**Okta:**
1. Go to Applications > your NetBird application (or create a new OIDC Web Application).
2. In Sign-in redirect URIs, add `https://netbird.example.com/oauth2/callback`.
3. Copy the Client ID and Client Secret.

**Google Workspace:**
1. Go to Google Cloud Console > APIs & Services > Credentials.
2. Edit your OAuth 2.0 Client ID (or create a new one).
3. In Authorized redirect URIs, add `https://netbird.example.com/oauth2/callback`.
4. Copy the Client ID and Client Secret.

**Microsoft Entra (Azure AD):**
1. Go to Azure Portal > App registrations > your NetBird app (or create a new one).
2. In Authentication > Web > Redirect URIs, add `https://netbird.example.com/oauth2/callback`.
3. Copy the Application (client) ID and generate a Client Secret under Certificates & secrets.

### Add the connector to the embedded IdP

Once you have the Client ID and Client Secret and have registered the callback URL, the connector needs to be registered with the embedded IdP. Note that the standalone management server does not read static connectors from `management.json`; connectors are managed through the management API after the server starts.

After starting the management server (Step 6), use the management API to create the connector:

```bash
# Replace with your actual values
curl -X POST https://netbird.example.com/api/idp/connectors \
  -H "Authorization: Bearer <your-admin-token>" \
  -H "Content-Type: application/json" \
  -d '{
    "id": "oidc",
    "name": "Previous IdP",
    "type": "oidc",
    "issuer": "https://your-old-idp.example.com",
    "client_id": "your-client-id",
    "client_secret": "your-client-secret"
  }'
```
The `id` field **must** match the `--connector-id` you used during migration (e.g., `oidc`). This is what links the migrated user IDs to this connector.

### What about existing redirect URIs in the old IdP?

Your old IdP probably has redirect URIs configured for the NetBird dashboard and CLI (e.g., `https://app.example.com/nb-auth`, `http://localhost:53000/`). These were used when clients talked to the old IdP directly.

After migration, clients talk to DEX instead — not to the old IdP. So:

- The old dashboard/CLI redirect URIs in the old IdP are **no longer used** and can be removed (but leaving them is harmless).
- The only redirect URI the old IdP needs now is **DEX's callback URL** (`https://netbird.example.com/oauth2/callback`).

### Authentication flow after migration (Option B)

```
User clicks "Login"
  → Browser goes to DEX (https://netbird.example.com/oauth2/auth)
  → DEX shows login page with your connector listed (e.g., "Previous IdP")
  → User clicks the connector
  → DEX redirects to your old IdP (https://your-old-idp.example.com/authorize)
  → User authenticates with their existing credentials
  → Old IdP redirects back to DEX (https://netbird.example.com/oauth2/callback)
  → DEX issues a new JWT with the DEX-encoded user ID
  → Browser returns to NetBird dashboard/CLI with the DEX JWT
```

---

## Step 6: Start the Management Server

```bash
# systemd
sudo systemctl start netbird-management

# Docker
docker compose start management
# or
docker start netbird-management
```

Check the logs for successful startup:

```bash
# systemd
sudo journalctl -u netbird-management -f

# Docker
docker logs -f netbird-management
```

Look for:

- `"embedded IdP started"` or similar DEX initialization messages.
- No errors about missing users, foreign key violations, or IdP configuration.
- The management server accepting connections on its listen port.

---

## Step 7: Verify Everything Works

### Test the dashboard

1. Open your NetBird dashboard in a browser.
2. You should see a DEX login page (NetBird-branded) instead of your old IdP's login page.
3. Log in with the **owner email and password** you configured in Step 5.
4. Verify you can see your account, peers, and policies.

### Test the CLI

```bash
netbird login --management-url https://netbird.example.com
```

This should open a browser for DEX authentication. Log in with the owner credentials.

### Test peer connectivity

Existing peers should continue to work. Their WireGuard tunnels are not affected by the IdP change. New peers can be registered by users who authenticate through the embedded IdP.

---

## Rollback

If something goes wrong, you can restore the database backups and revert `management.json`.

### SQLite

```bash
# Stop management
sudo systemctl stop netbird-management

# Restore backups (find the timestamp from migration output)
cp /var/lib/netbird/store.db.backup-20260302-140000 /var/lib/netbird/store.db
cp /var/lib/netbird/events.db.backup-20260302-140000 /var/lib/netbird/events.db

# Revert management.json (remove EmbeddedIdP block, restore old IdpManagerConfig)
# Then start management
sudo systemctl start netbird-management
```

### PostgreSQL

Restore from the `pg_dump` you took before migration:

```bash
# Stop management
sudo systemctl stop netbird-management

# Restore
pg_restore -d netbird /path/to/backup.dump
# or
psql netbird < /path/to/backup.sql

# Revert management.json and start
sudo systemctl start netbird-management
```

---

## FAQ

### Can I run the migration multiple times?

Yes. The migration is idempotent. It detects users whose IDs are already in DEX format and skips them. Running it twice will report `0 users migrated, N already migrated`.

### What happens if the migration is interrupted?

Each user is migrated in its own database transaction. If the process is killed mid-migration, some users will have new IDs and some will still have old IDs. Simply run the migration again — it will pick up where it left off and skip already-migrated users.

### Does this affect my WireGuard tunnels?

No. WireGuard tunnels are identified by peer keys, not user IDs. All existing tunnels continue working during and after migration. No client-side changes are needed.

### What about service users?

Service users (`IsServiceUser=true`) are migrated like all other users. Their IDs are re-encoded with the connector ID. This ensures consistency — all user IDs in the database follow the same format after migration.

### Can I keep my old IdP as a connector in DEX?

Yes. See [Step 5f](#step-5f-configure-your-old-idp-if-keeping-it-as-a-dex-connector) for full instructions. In short: register DEX's callback URL (`https://<management-server>/oauth2/callback`) as a redirect URI in your old IdP, then add the connector via the management API after startup. The connector ID must match the `--connector-id` you used during migration.

### What if I used the wrong connector ID?

Restore from backup and run the migration again with the correct connector ID. Already-migrated users cannot be re-migrated to a different connector ID without restoring the original data first.

### Does this work with the combined management container?

No. The combined container (`combined/cmd/`) only supports setups that already have the embedded IdP enabled. This migration tool is for standalone management server deployments (`management/cmd/`) that are switching from an external IdP.

### What database engines are supported?

SQLite, PostgreSQL, and MySQL are all supported. The tool reads the database engine from your `management.json` `StoreConfig` and connects accordingly. For SQLite, automatic backups are created. For PostgreSQL and MySQL, you must create your own backups before running the migration.
68
management/cmd/migrate-idp/backup.go
Normal file
@@ -0,0 +1,68 @@
package main

import (
    "fmt"
    "io"
    "os"
    "path/filepath"
    "time"

    log "github.com/sirupsen/logrus"

    "github.com/netbirdio/netbird/management/server/types"
)

const (
    storeDBFile  = "store.db"
    eventsDBFile = "events.db"
)

// backupDatabases creates backups of SQLite database files before migration.
// For PostgreSQL/MySQL, it prints instructions for the operator to run pg_dump/mysqldump.
func backupDatabases(dataDir string, engine types.Engine) error {
    switch engine {
    case types.SqliteStoreEngine:
        for _, dbFile := range []string{storeDBFile, eventsDBFile} {
            src := filepath.Join(dataDir, dbFile)
            if _, err := os.Stat(src); os.IsNotExist(err) {
                log.Infof("skipping backup of %s (file does not exist)", src)
                continue
            }
            if err := backupSQLiteFile(src); err != nil {
                return fmt.Errorf("backup %s: %w", dbFile, err)
            }
        }
    case types.PostgresStoreEngine:
        log.Warn("PostgreSQL detected — automatic backup is not supported. " +
            "Please ensure you have a recent pg_dump backup before proceeding.")
    case types.MysqlStoreEngine:
        log.Warn("MySQL detected — automatic backup is not supported. " +
            "Please ensure you have a recent mysqldump backup before proceeding.")
    }
    return nil
}

// backupSQLiteFile copies a SQLite database file to a timestamped backup.
func backupSQLiteFile(srcPath string) error {
    timestamp := time.Now().Format("20060102-150405")
    dstPath := fmt.Sprintf("%s.backup-%s", srcPath, timestamp)

    src, err := os.Open(srcPath)
    if err != nil {
        return fmt.Errorf("open source: %w", err)
    }
    defer src.Close()

    dst, err := os.Create(dstPath)
    if err != nil {
        return fmt.Errorf("create backup: %w", err)
    }
    defer dst.Close()

    if _, err := io.Copy(dst, src); err != nil {
        return fmt.Errorf("copy data: %w", err)
    }

    log.Infof("backed up %s -> %s", srcPath, dstPath)
    return nil
}
151
management/cmd/migrate-idp/main.go
Normal file
@@ -0,0 +1,151 @@
// Command migrate-idp is a standalone CLI tool that migrates self-hosted NetBird
// deployments from an external IdP (Zitadel, Keycloak, Okta, etc.) to NetBird's
// embedded DEX-based IdP. It re-keys all user IDs in the database to match DEX's
// encoded format.
//
// Usage:
//
//	migrate-idp --config /etc/netbird/management.json --connector-id oidc [--dry-run]
package main

import (
    "context"
    "flag"
    "fmt"
    "os"

    log "github.com/sirupsen/logrus"

    nbconfig "github.com/netbirdio/netbird/management/internals/server/config"
    activitystore "github.com/netbirdio/netbird/management/server/activity/store"
    "github.com/netbirdio/netbird/management/server/idp/migration"
    "github.com/netbirdio/netbird/management/server/store"
    "github.com/netbirdio/netbird/util"
    "github.com/netbirdio/netbird/util/crypt"
)

func main() {
    configPath := flag.String("config", "/etc/netbird/management.json", "path to management.json config file")
    connectorID := flag.String("connector-id", "", "DEX connector ID to encode into user IDs (required)")
    dryRun := flag.Bool("dry-run", false, "preview changes without writing to the database")
    noBackup := flag.Bool("no-backup", false, "skip automatic database backup (SQLite only)")
    logLevel := flag.String("log-level", "info", "log verbosity: debug, info, warn, error")

    flag.Usage = func() {
        fmt.Fprintf(os.Stderr, `migrate-idp - Migrate NetBird user IDs from external IdP to embedded DEX

This tool re-keys all user IDs in the management database so they match DEX's
encoded format (base64-encoded protobuf with user ID + connector ID). Run this
with management stopped, then update management.json to enable EmbeddedIdP.

Service users (IsServiceUser=true) are re-keyed like all other users. All user
types will be looked up by DEX-encoded IDs after migration.

Usage:
  migrate-idp --config /etc/netbird/management.json --connector-id oidc [flags]

Flags:
`)
        flag.PrintDefaults()

        fmt.Fprintf(os.Stderr, `
Migration procedure:
  1. Stop management:  systemctl stop netbird-management
  2. Dry-run:          migrate-idp --config <path> --connector-id <id> --dry-run
  3. Run migration:    migrate-idp --config <path> --connector-id <id>
  4. Update management.json: Add EmbeddedIdP config with matching connector ID
  5. Start management: systemctl start netbird-management
`)
    }

    flag.Parse()

    level, err := log.ParseLevel(*logLevel)
    if err != nil {
        log.Fatalf("invalid log level %q: %v", *logLevel, err)
    }
    log.SetLevel(level)

    if *connectorID == "" {
        fmt.Fprintln(os.Stderr, "error: --connector-id is required")
        flag.Usage()
        os.Exit(1)
    }

    if err := run(context.Background(), *configPath, *connectorID, *dryRun, *noBackup); err != nil {
        log.Fatalf("migration failed: %v", err)
    }
}

func run(ctx context.Context, configPath, connectorID string, dryRun, noBackup bool) error {
    // Load management config
    config := &nbconfig.Config{}
    if _, err := util.ReadJsonWithEnvSub(configPath, config); err != nil {
        return fmt.Errorf("read config %s: %w", configPath, err)
    }

    if config.Datadir == "" {
        return fmt.Errorf("config has empty Datadir")
    }

    log.Infof("loaded config from %s (datadir: %s, engine: %s)", configPath, config.Datadir, config.StoreConfig.Engine)

    if dryRun {
        log.Info("[DRY RUN] mode enabled — no changes will be written")
    }

    // Open main store
    mainStore, err := store.NewStore(ctx, config.StoreConfig.Engine, config.Datadir, nil, false)
    if err != nil {
        return fmt.Errorf("open main store: %w", err)
    }
    defer mainStore.Close(ctx) //nolint:errcheck

    // Set up field encryption for user data decryption
    if config.DataStoreEncryptionKey != "" {
        fieldEncrypt, err := crypt.NewFieldEncrypt(config.DataStoreEncryptionKey)
        if err != nil {
            return fmt.Errorf("create field encryptor: %w", err)
        }
        mainStore.SetFieldEncrypt(fieldEncrypt)
    }

    // Open activity store (optional — warn and continue if unavailable)
    var actStore migration.ActivityStoreUpdater
    activitySqlStore, err := activitystore.NewSqlStore(ctx, config.Datadir, config.DataStoreEncryptionKey)
    if err != nil {
        log.Warnf("could not open activity store, activity events will not be migrated: %v", err)
    } else {
        defer activitySqlStore.Close(ctx) //nolint:errcheck
        actStore = activitySqlStore
    }

    // Backup databases before migration (unless --no-backup or --dry-run)
    if !noBackup && !dryRun {
        if err := backupDatabases(config.Datadir, config.StoreConfig.Engine); err != nil {
            return fmt.Errorf("backup: %w", err)
        }
    }

    // Run migration
    result, err := migration.Migrate(ctx, &migration.Config{
        ConnectorID:   connectorID,
        DryRun:        dryRun,
        MainStore:     mainStore,
        ActivityStore: actStore,
    })
    if err != nil {
        return err
    }

    fmt.Printf("\nMigration summary:\n")
    fmt.Printf("  Migrated: %d users\n", result.Migrated)
    fmt.Printf("  Skipped:  %d users (already migrated)\n", result.Skipped)
    if dryRun {
        fmt.Printf("\n  [DRY RUN] No changes were written. Remove --dry-run to apply.\n")
    } else if result.Migrated > 0 {
        fmt.Printf("\n  Next step: update management.json to enable EmbeddedIdP with connector ID %q\n", connectorID)
    }

    return nil
}
@@ -13,6 +13,8 @@ type Store interface {
    Get(ctx context.Context, accountID string, offset, limit int, descending bool) ([]*Event, error)
    // Close the sink flushing events if necessary
    Close(ctx context.Context) error
    // UpdateUserID re-keys all references to oldUserID in events and deleted_users tables.
    UpdateUserID(ctx context.Context, oldUserID, newUserID string) error
}

// InMemoryEventStore implements the Store interface storing data in-memory
@@ -55,3 +57,8 @@ func (store *InMemoryEventStore) Close(_ context.Context) error {
    store.events = make([]*Event, 0)
    return nil
}

// UpdateUserID is a no-op for the in-memory store.
func (store *InMemoryEventStore) UpdateUserID(_ context.Context, _, _ string) error {
    return nil
}
@@ -227,6 +227,32 @@ func (store *Store) saveDeletedUserEmailAndNameInEncrypted(event *activity.Event
    return event.Meta, nil
}

// UpdateUserID updates all references to oldUserID in events and deleted_users tables.
func (store *Store) UpdateUserID(ctx context.Context, oldUserID, newUserID string) error {
    return store.db.Transaction(func(tx *gorm.DB) error {
        if err := tx.Model(&activity.Event{}).
            Where("initiator_id = ?", oldUserID).
            Update("initiator_id", newUserID).Error; err != nil {
            return fmt.Errorf("update events.initiator_id: %w", err)
        }

        if err := tx.Model(&activity.Event{}).
            Where("target_id = ?", oldUserID).
            Update("target_id", newUserID).Error; err != nil {
            return fmt.Errorf("update events.target_id: %w", err)
        }

        // Raw exec: GORM can't update a PK via Model().Update()
        if err := tx.Exec(
            "UPDATE deleted_users SET id = ? WHERE id = ?", newUserID, oldUserID,
        ).Error; err != nil {
            return fmt.Errorf("update deleted_users.id: %w", err)
        }

        return nil
    })
}

// Close the Store
func (store *Store) Close(_ context.Context) error {
    if store.db != nil {
152
management/server/idp/migration/migration.go
Normal file
@@ -0,0 +1,152 @@
// Package migration provides utility functions for migrating from an external IdP
// to NetBird's embedded DEX-based IdP. It re-keys user IDs in the main store and
// activity store so that they match DEX's encoded format.
package migration

import (
    "context"
    "fmt"

    log "github.com/sirupsen/logrus"

    "github.com/netbirdio/netbird/idp/dex"
    "github.com/netbirdio/netbird/management/server/types"
)

// MainStoreUpdater is the subset of the main store needed for migration.
type MainStoreUpdater interface {
    ListUsers(ctx context.Context) ([]*types.User, error)
    UpdateUserID(ctx context.Context, accountID, oldUserID, newUserID string) error
}

// ActivityStoreUpdater is the subset of the activity store needed for migration.
type ActivityStoreUpdater interface {
    UpdateUserID(ctx context.Context, oldUserID, newUserID string) error
}

// Config holds migration parameters.
type Config struct {
    ConnectorID   string
    DryRun        bool
    MainStore     MainStoreUpdater
    ActivityStore ActivityStoreUpdater // nil if activity store is unavailable
}

// Result holds migration outcome counts.
type Result struct {
    Migrated int
    Skipped  int
}

// progressInterval controls how often progress is logged for large user counts.
const progressInterval = 100

// Migrate re-keys every user ID in both stores so that it encodes the given
// connector ID. Already-migrated users (detectable via DecodeDexUserID) are
// skipped, making the operation idempotent.
func Migrate(ctx context.Context, cfg *Config) (*Result, error) {
    if cfg.ConnectorID == "" {
        return nil, fmt.Errorf("connector ID must not be empty")
    }

    users, err := cfg.MainStore.ListUsers(ctx)
    if err != nil {
        return nil, fmt.Errorf("list users: %w", err)
    }

    if len(users) == 0 {
        log.Info("no users found, nothing to migrate")
        return &Result{}, nil
    }

    log.Infof("found %d users to process", len(users))

    // Reconciliation pass: fix activity store for users already migrated in
    // the main DB but whose activity references may still use old IDs (from
    // a previous partial failure).
    if cfg.ActivityStore != nil && !cfg.DryRun {
        if err := reconcileActivityStore(ctx, cfg.ActivityStore, users); err != nil {
            return nil, err
        }
    }

    res := &Result{}

    for i, user := range users {
        if user.Id == "" {
            log.Warnf("skipping user with empty ID in account %s", user.AccountID)
            res.Skipped++
            continue
        }

        _, _, decErr := dex.DecodeDexUserID(user.Id)
        if decErr == nil {
            // Already encoded in DEX format — skip.
            res.Skipped++
            continue
        }

        newUserID := dex.EncodeDexUserID(user.Id, cfg.ConnectorID)

        if cfg.DryRun {
            log.Infof("[DRY RUN] would migrate user %s -> %s (account: %s)",
                user.Id, newUserID, user.AccountID)
            res.Migrated++
            continue
        }

        if err := migrateUser(ctx, cfg, user.Id, user.AccountID, newUserID); err != nil {
            return nil, err
        }

        res.Migrated++

        if (i+1)%progressInterval == 0 {
            log.Infof("progress: %d/%d users processed", i+1, len(users))
        }
    }

    if cfg.DryRun {
        log.Infof("[DRY RUN] migration summary: %d users would be migrated, %d already migrated",
            res.Migrated, res.Skipped)
    } else {
        log.Infof("migration complete: %d users migrated, %d already migrated",
            res.Migrated, res.Skipped)
    }

    return res, nil
}

// reconcileActivityStore updates activity store references for users already
// migrated in the main DB whose activity entries may still use old IDs from a
// previous partial failure.
func reconcileActivityStore(ctx context.Context, activityStore ActivityStoreUpdater, users []*types.User) error {
    for _, user := range users {
        originalID, _, err := dex.DecodeDexUserID(user.Id)
        if err != nil {
            // Not yet migrated — will be handled in the main loop.
            continue
        }
        if err := activityStore.UpdateUserID(ctx, originalID, user.Id); err != nil {
            return fmt.Errorf("reconcile activity store for user %s: %w", user.Id, err)
        }
    }
    return nil
}

// migrateUser updates a single user's ID in both the main store and the activity store.
func migrateUser(ctx context.Context, cfg *Config, oldID, accountID, newID string) error {
    if err := cfg.MainStore.UpdateUserID(ctx, accountID, oldID, newID); err != nil {
        return fmt.Errorf("update user ID for user %s: %w", oldID, err)
    }

    if cfg.ActivityStore == nil {
        return nil
    }

    if err := cfg.ActivityStore.UpdateUserID(ctx, oldID, newID); err != nil {
        return fmt.Errorf("update activity store user ID for user %s: %w", oldID, err)
}
|
||||
|
||||
return nil
|
||||
}
|
||||
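The skip check above relies on `DecodeDexUserID` failing for raw external-IdP IDs and succeeding for already-encoded ones. The following is a minimal Python sketch of the transform described in the plan, base64 over a two-field protobuf message; the exact wire layout, field numbers, and base64 alphabet used by the real `dex.EncodeDexUserID` helper are assumptions here, and the single-byte length prefixes only hold for IDs shorter than 128 bytes:

```python
import base64

def encode_dex_user_id(user_id: str, connector_id: str) -> str:
    # Sketch of base64(protobuf{field 1: user_id, field 2: connector_id}).
    # Hand-rolled protobuf wire format: tag 0x0A = field 1, length-delimited;
    # tag 0x12 = field 2, length-delimited. Assumes lengths < 128 bytes.
    u, c = user_id.encode(), connector_id.encode()
    raw = bytes([0x0A, len(u)]) + u + bytes([0x12, len(c)]) + c
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()

def decode_dex_user_id(encoded: str):
    # Inverse of the above; raises on anything that is not in the
    # expected two-field shape (mirroring DecodeDexUserID's error path).
    raw = base64.urlsafe_b64decode(encoded + "=" * (-len(encoded) % 4))
    assert raw[0] == 0x0A, "expected field 1 tag"
    n = raw[1]
    user_id = raw[2:2 + n].decode()
    rest = raw[2 + n:]
    assert rest[0] == 0x12, "expected field 2 tag"
    connector_id = rest[2:2 + rest[1]].decode()
    return user_id, connector_id

print(decode_dex_user_id(encode_dex_user_id("user-123", "oidc")))
```

A round trip recovers the original ID and connector, which is exactly the property the migration's idempotency check depends on.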
**management/server/idp/migration/migration_test.go** (new file, 287 lines):

```go
package migration

import (
	"context"
	"errors"
	"testing"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"

	"github.com/netbirdio/netbird/idp/dex"
	"github.com/netbirdio/netbird/management/server/types"
)

const testConnectorID = "oidc"

// mockMainStore implements MainStoreUpdater for testing.
type mockMainStore struct {
	users       []*types.User
	listErr     error
	updateErr   error
	updateCalls []updateCall
}

type updateCall struct {
	AccountID string
	OldID     string
	NewID     string
}

func (m *mockMainStore) ListUsers(_ context.Context) ([]*types.User, error) {
	return m.users, m.listErr
}

func (m *mockMainStore) UpdateUserID(_ context.Context, accountID, oldUserID, newUserID string) error {
	m.updateCalls = append(m.updateCalls, updateCall{accountID, oldUserID, newUserID})
	return m.updateErr
}

// mockActivityStore implements ActivityStoreUpdater for testing.
type mockActivityStore struct {
	updateErr   error
	updateCalls []activityUpdateCall
}

type activityUpdateCall struct {
	OldID string
	NewID string
}

func (m *mockActivityStore) UpdateUserID(_ context.Context, oldUserID, newUserID string) error {
	m.updateCalls = append(m.updateCalls, activityUpdateCall{oldUserID, newUserID})
	return m.updateErr
}

func TestMigrate_NormalMigration(t *testing.T) {
	mainStore := &mockMainStore{
		users: []*types.User{
			{Id: "user-1", AccountID: "acc-1"},
			{Id: "user-2", AccountID: "acc-1"},
		},
	}
	actStore := &mockActivityStore{}

	res, err := Migrate(context.Background(), &Config{
		ConnectorID:   testConnectorID,
		MainStore:     mainStore,
		ActivityStore: actStore,
	})

	require.NoError(t, err)
	assert.Equal(t, 2, res.Migrated)
	assert.Equal(t, 0, res.Skipped)
	assert.Len(t, mainStore.updateCalls, 2)
	assert.Len(t, actStore.updateCalls, 2)

	// Verify the new IDs are DEX-encoded
	for _, call := range mainStore.updateCalls {
		userID, connID, decErr := dex.DecodeDexUserID(call.NewID)
		require.NoError(t, decErr)
		assert.Equal(t, testConnectorID, connID)
		assert.Equal(t, call.OldID, userID)
	}
}

func TestMigrate_SkipAlreadyMigrated(t *testing.T) {
	alreadyMigrated := dex.EncodeDexUserID("original-user", testConnectorID)
	mainStore := &mockMainStore{
		users: []*types.User{
			{Id: alreadyMigrated, AccountID: "acc-1"},
			{Id: "not-migrated", AccountID: "acc-1"},
		},
	}
	actStore := &mockActivityStore{}

	res, err := Migrate(context.Background(), &Config{
		ConnectorID:   testConnectorID,
		MainStore:     mainStore,
		ActivityStore: actStore,
	})

	require.NoError(t, err)
	assert.Equal(t, 1, res.Migrated)
	assert.Equal(t, 1, res.Skipped)
	assert.Len(t, mainStore.updateCalls, 1)
	assert.Equal(t, "not-migrated", mainStore.updateCalls[0].OldID)
}

func TestMigrate_DryRun(t *testing.T) {
	mainStore := &mockMainStore{
		users: []*types.User{
			{Id: "user-1", AccountID: "acc-1"},
		},
	}
	actStore := &mockActivityStore{}

	res, err := Migrate(context.Background(), &Config{
		ConnectorID:   testConnectorID,
		DryRun:        true,
		MainStore:     mainStore,
		ActivityStore: actStore,
	})

	require.NoError(t, err)
	assert.Equal(t, 1, res.Migrated)
	// No actual updates should have been made
	assert.Empty(t, mainStore.updateCalls)
	assert.Empty(t, actStore.updateCalls)
}

func TestMigrate_EmptyUserList(t *testing.T) {
	mainStore := &mockMainStore{users: []*types.User{}}
	actStore := &mockActivityStore{}

	res, err := Migrate(context.Background(), &Config{
		ConnectorID:   testConnectorID,
		MainStore:     mainStore,
		ActivityStore: actStore,
	})

	require.NoError(t, err)
	assert.Equal(t, 0, res.Migrated)
	assert.Equal(t, 0, res.Skipped)
}

func TestMigrate_EmptyUserID(t *testing.T) {
	mainStore := &mockMainStore{
		users: []*types.User{
			{Id: "", AccountID: "acc-1"},
			{Id: "user-1", AccountID: "acc-1"},
		},
	}
	actStore := &mockActivityStore{}

	res, err := Migrate(context.Background(), &Config{
		ConnectorID:   testConnectorID,
		MainStore:     mainStore,
		ActivityStore: actStore,
	})

	require.NoError(t, err)
	assert.Equal(t, 1, res.Migrated)
	assert.Equal(t, 1, res.Skipped)
}

func TestMigrate_NilActivityStore(t *testing.T) {
	mainStore := &mockMainStore{
		users: []*types.User{
			{Id: "user-1", AccountID: "acc-1"},
		},
	}

	res, err := Migrate(context.Background(), &Config{
		ConnectorID: testConnectorID,
		MainStore:   mainStore,
		// ActivityStore is nil
	})

	require.NoError(t, err)
	assert.Equal(t, 1, res.Migrated)
	assert.Len(t, mainStore.updateCalls, 1)
}

func TestMigrate_EmptyConnectorID(t *testing.T) {
	mainStore := &mockMainStore{}

	_, err := Migrate(context.Background(), &Config{
		ConnectorID: "",
		MainStore:   mainStore,
	})

	require.Error(t, err)
	assert.Contains(t, err.Error(), "connector ID must not be empty")
}

func TestMigrate_ListUsersError(t *testing.T) {
	mainStore := &mockMainStore{listErr: errors.New("db error")}

	_, err := Migrate(context.Background(), &Config{
		ConnectorID: testConnectorID,
		MainStore:   mainStore,
	})

	require.Error(t, err)
	assert.Contains(t, err.Error(), "list users")
}

func TestMigrate_UpdateError(t *testing.T) {
	mainStore := &mockMainStore{
		users:     []*types.User{{Id: "user-1", AccountID: "acc-1"}},
		updateErr: errors.New("tx error"),
	}

	_, err := Migrate(context.Background(), &Config{
		ConnectorID: testConnectorID,
		MainStore:   mainStore,
	})

	require.Error(t, err)
	assert.Contains(t, err.Error(), "update user ID")
}

func TestMigrate_Reconciliation(t *testing.T) {
	// Simulate a previously migrated user whose activity store wasn't updated
	alreadyMigrated := dex.EncodeDexUserID("original-user", testConnectorID)
	mainStore := &mockMainStore{
		users: []*types.User{
			{Id: alreadyMigrated, AccountID: "acc-1"},
		},
	}
	actStore := &mockActivityStore{}

	res, err := Migrate(context.Background(), &Config{
		ConnectorID:   testConnectorID,
		MainStore:     mainStore,
		ActivityStore: actStore,
	})

	require.NoError(t, err)
	assert.Equal(t, 0, res.Migrated)
	assert.Equal(t, 1, res.Skipped)
	// Reconciliation should have called activity store with the original -> new mapping
	require.Len(t, actStore.updateCalls, 1)
	assert.Equal(t, "original-user", actStore.updateCalls[0].OldID)
	assert.Equal(t, alreadyMigrated, actStore.updateCalls[0].NewID)
}

func TestMigrate_Idempotent(t *testing.T) {
	mainStore := &mockMainStore{
		users: []*types.User{
			{Id: "user-1", AccountID: "acc-1"},
			{Id: "user-2", AccountID: "acc-1"},
		},
	}
	actStore := &mockActivityStore{}

	// First run
	res1, err := Migrate(context.Background(), &Config{
		ConnectorID:   testConnectorID,
		MainStore:     mainStore,
		ActivityStore: actStore,
	})
	require.NoError(t, err)
	assert.Equal(t, 2, res1.Migrated)

	// Simulate that the store now has the migrated IDs
	for _, call := range mainStore.updateCalls {
		for i, u := range mainStore.users {
			if u.Id == call.OldID {
				mainStore.users[i].Id = call.NewID
			}
		}
	}
	mainStore.updateCalls = nil
	actStore.updateCalls = nil

	// Second run should skip all
	res2, err := Migrate(context.Background(), &Config{
		ConnectorID:   testConnectorID,
		MainStore:     mainStore,
		ActivityStore: actStore,
	})
	require.NoError(t, err)
	assert.Equal(t, 0, res2.Migrated)
	assert.Equal(t, 2, res2.Skipped)
	assert.Empty(t, mainStore.updateCalls)
}
```
SqlStore additions (hunk `@@ -3445,6 +3445,80 @@`):

```go
func (s *SqlStore) GetDB() *gorm.DB {
	return s.db
}

// ListUsers returns all users across all accounts with decrypted sensitive fields.
func (s *SqlStore) ListUsers(ctx context.Context) ([]*types.User, error) {
	var users []*types.User
	if err := s.db.Find(&users).Error; err != nil {
		return nil, status.Errorf(status.Internal, "failed to list users")
	}
	for _, user := range users {
		if err := user.DecryptSensitiveData(s.fieldEncrypt); err != nil {
			log.WithContext(ctx).Errorf("failed to decrypt user data for user %s: %v", user.Id, err)
			return nil, status.Errorf(status.Internal, "failed to decrypt user data")
		}
	}
	return users, nil
}

// txDeferFKConstraints defers foreign key constraint checks for the duration of the transaction.
// MySQL is already handled by s.transaction (SET FOREIGN_KEY_CHECKS = 0).
func (s *SqlStore) txDeferFKConstraints(tx *gorm.DB) error {
	switch s.storeEngine {
	case types.PostgresStoreEngine:
		return tx.Exec("SET CONSTRAINTS ALL DEFERRED").Error
	case types.SqliteStoreEngine:
		return tx.Exec("PRAGMA defer_foreign_keys = ON").Error
	default:
		return nil
	}
}

// UpdateUserID re-keys a user's ID from oldUserID to newUserID, updating all FK references first,
// then the users.id primary key last. All updates happen in a single transaction.
func (s *SqlStore) UpdateUserID(ctx context.Context, accountID, oldUserID, newUserID string) error {
	type fkUpdate struct {
		model  any
		column string
		where  string
	}

	updates := []fkUpdate{
		{&types.PersonalAccessToken{}, "user_id", "user_id = ?"},
		{&types.PersonalAccessToken{}, "created_by", "created_by = ?"},
		{&nbpeer.Peer{}, "user_id", "user_id = ?"},
		{&types.UserInviteRecord{}, "created_by", "created_by = ?"},
		{&types.Account{}, "created_by", "created_by = ?"},
		{&types.ProxyAccessToken{}, "created_by", "created_by = ?"},
		{&types.Job{}, "triggered_by", "triggered_by = ?"},
		{&types.PolicyRule{}, "authorized_user", "authorized_user = ?"},
		{&accesslogs.AccessLogEntry{}, "user_id", "user_id = ?"},
	}

	err := s.transaction(func(tx *gorm.DB) error {
		if err := s.txDeferFKConstraints(tx); err != nil {
			return err
		}

		for _, u := range updates {
			if err := tx.Model(u.model).Where(u.where, oldUserID).Update(u.column, newUserID).Error; err != nil {
				return fmt.Errorf("update %s: %w", u.column, err)
			}
		}

		if err := tx.Model(&types.User{}).Where(accountAndIDQueryCondition, accountID, oldUserID).Update("id", newUserID).Error; err != nil {
			return fmt.Errorf("update users: %w", err)
		}

		return nil
	})
	if err != nil {
		log.WithContext(ctx).Errorf("failed to update user ID in the store: %s", err)
		return status.Errorf(status.Internal, "failed to update user ID in store")
	}

	return nil
}

// SetFieldEncrypt sets the field encryptor for encrypting sensitive user data.
func (s *SqlStore) SetFieldEncrypt(enc *crypt.FieldEncrypt) {
	s.fieldEncrypt = enc
}
```
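The reason `txDeferFKConstraints` matters: `UpdateUserID` rewrites child rows to the new ID before the `users.id` row itself carries that ID, which an immediate foreign key check would reject. A standalone sketch of the same ordering against SQLite (Python's bundled `sqlite3` is used since Go's stdlib has no SQLite driver; table and column names are illustrative, not NetBird's actual schema):

```python
import sqlite3

con = sqlite3.connect(":memory:", isolation_level=None)  # manual transaction control
con.execute("PRAGMA foreign_keys = ON")
con.execute("CREATE TABLE users (id TEXT PRIMARY KEY)")
con.execute("""CREATE TABLE peers (
    id INTEGER PRIMARY KEY,
    user_id TEXT NOT NULL REFERENCES users(id))""")
con.execute("INSERT INTO users VALUES ('old-id')")
con.execute("INSERT INTO peers (user_id) VALUES ('old-id')")

con.execute("BEGIN")
# Without this pragma, the peers update below fails immediately because
# 'new-id' does not yet exist in users.
con.execute("PRAGMA defer_foreign_keys = ON")
con.execute("UPDATE peers SET user_id = 'new-id' WHERE user_id = 'old-id'")  # FKs first
con.execute("UPDATE users SET id = 'new-id' WHERE id = 'old-id'")            # PK last
con.execute("COMMIT")  # FK constraints are checked here, and now hold

print(con.execute("SELECT user_id FROM peers").fetchone()[0])  # new-id
```

Postgres achieves the same with `SET CONSTRAINTS ALL DEFERRED` (the constraints must be declared deferrable there), and MySQL with `SET FOREIGN_KEY_CHECKS = 0`, matching the engine switch above.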
Store interface additions (hunk `@@ -275,6 +275,11 @@`):

```go
type Store interface {
	// ...

	// GetCustomDomainsCounts returns the total and validated custom domain counts.
	GetCustomDomainsCounts(ctx context.Context) (total int64, validated int64, err error)

	// ListUsers returns all users across all accounts.
	ListUsers(ctx context.Context) ([]*types.User, error)
	// UpdateUserID re-keys a user's ID from oldUserID to newUserID, updating all foreign key references.
	UpdateUserID(ctx context.Context, accountID, oldUserID, newUserID string) error
}
```
Generated MockStore updates. Four of the hunks (`@@ -1109,21 +1109,6 @@`, `@@ -1288,6 +1273,22 @@`, `@@ -1872,22 +1873,6 @@`, `@@ -1903,6 +1888,21 @@`) only relocate the unchanged `GetServicesByAccountID` and `GetCustomDomainsCounts` mock methods (generated-code reordering). The substantive additions are the `ListUsers` and `UpdateUserID` mocks:

```go
// ListUsers mocks base method.
func (m *MockStore) ListUsers(ctx context.Context) ([]*types2.User, error) {
	m.ctrl.T.Helper()
	ret := m.ctrl.Call(m, "ListUsers", ctx)
	ret0, _ := ret[0].([]*types2.User)
	ret1, _ := ret[1].(error)
	return ret0, ret1
}

// ListUsers indicates an expected call of ListUsers.
func (mr *MockStoreMockRecorder) ListUsers(ctx interface{}) *gomock.Call {
	mr.mock.ctrl.T.Helper()
	return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "ListUsers", reflect.TypeOf((*MockStore)(nil).ListUsers), ctx)
}

// UpdateUserID mocks base method.
func (m *MockStore) UpdateUserID(ctx context.Context, accountID, oldUserID, newUserID string) error {
	m.ctrl.T.Helper()
	ret := m.ctrl.Call(m, "UpdateUserID", ctx, accountID, oldUserID, newUserID)
	ret0, _ := ret[0].(error)
	return ret0
}

// UpdateUserID indicates an expected call of UpdateUserID.
func (mr *MockStoreMockRecorder) UpdateUserID(ctx, accountID, oldUserID, newUserID interface{}) *gomock.Call {
	mr.mock.ctrl.T.Helper()
	return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "UpdateUserID", reflect.TypeOf((*MockStore)(nil).UpdateUserID), ctx, accountID, oldUserID, newUserID)
}
```
**migrate-idp** (new executable file; binary, not shown)