Compare commits

50 Commits

Author SHA1 Message Date
Eduard Gert
18e3b5dd32 fix about 2026-05-11 09:37:14 +02:00
Eduard Gert
f3f9704c6f update about 2026-05-08 17:55:41 +02:00
Eduard Gert
4c3d4effbd update troubleshooting 2026-05-08 17:18:25 +02:00
Eduard Gert
3953fee5a4 update ssh and advanced settings tabs 2026-05-08 10:57:31 +02:00
Eduard Gert
adeaa49cda update switch 2026-05-07 17:27:56 +02:00
Eduard Gert
2c5d52a1bf update wording 2026-05-07 17:19:56 +02:00
Eduard Gert
70a755fbae add general settings 2026-05-07 16:47:52 +02:00
Eduard Gert
559da5d5b9 refactor 2026-05-07 15:00:36 +02:00
Eduard Gert
614ee11ac7 update CFBundleDisplayName 2026-05-07 14:19:34 +02:00
Eduard Gert
85080afa59 use new mac style icons 2026-05-07 14:14:26 +02:00
Eduard Gert
062a183e4e update settings nav 2026-05-07 12:40:04 +02:00
Eduard Gert
a2be41caf8 add about setting 2026-05-07 11:24:11 +02:00
Eduard Gert
debb558aa3 wip 2026-05-07 09:57:14 +02:00
Eduard Gert
553be144b4 add setting 2026-05-06 14:21:01 +02:00
Eduard Gert
c3f9514182 wip 2026-05-06 10:47:40 +02:00
Eduard Gert
bfe19fa542 wip 2026-05-04 10:15:29 +02:00
Eduard Gert
d07f25fc49 wip 2026-05-04 10:14:41 +02:00
Eduard Gert
670b0f66ac Merge branch 'ui-refactor' into ui-refactor-ui 2026-04-30 14:57:32 +02:00
Eduard Gert
15d73a2edd Add connect toggle 2026-04-30 13:22:43 +02:00
Zoltán Papp
88a2bf582d [client] Push-based status stream for the Wails UI
Adds a SubscribeStatus gRPC RPC that pushes a fresh FullStatus snapshot
on every peer-recorder state change, replacing the Wails UI's 2-second
Status poll. The daemon's notifier already triggers on Connected /
Disconnected / Connecting / management or signal flip / address
change / peers-list change; we now coalesce those into ticks on a
buffered chan and stream the resulting snapshots over gRPC.

- Status recorder gains SubscribeToStateChanges /
  UnsubscribeFromStateChanges + a non-blocking notifyStateChange that
  drops ticks when a subscriber's 1-slot buffer is full (next snapshot
  the consumer pulls already reflects everything).
- Server.Status handler split: the snapshot composition is shared
  with the new SubscribeStatus stream handler so unary and stream
  paths return identical bytes.
- UI peers service: pollLoop replaced by statusStreamLoop. The local
  name of the existing SubscribeEvents loop is now toastStreamLoop so
  the two streams are easy to tell apart — the underlying RPC name is
  unchanged.
- Tray applyStatus skips the icon refresh when connected/lastStatus
  hasn't changed; rapid SubscribeStatus bursts during health probes
  no longer churn Shell_NotifyIcon or the log.
2026-04-30 11:45:43 +02:00
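A minimal sketch of the drop-on-full notification described above, assuming one 1-slot buffered channel per subscriber (illustrative names, not the actual recorder API):

package status

import "sync"

// statusNotifier fans state-change ticks out to subscribers without blocking.
type statusNotifier struct {
	mu   sync.Mutex
	subs map[chan struct{}]struct{}
}

func newStatusNotifier() *statusNotifier {
	return &statusNotifier{subs: make(map[chan struct{}]struct{})}
}

// subscribe returns a 1-slot buffered channel; the single slot is what lets
// notifyStateChange coalesce bursts into one pending tick.
func (n *statusNotifier) subscribe() chan struct{} {
	ch := make(chan struct{}, 1)
	n.mu.Lock()
	n.subs[ch] = struct{}{}
	n.mu.Unlock()
	return ch
}

// notifyStateChange never blocks: if a subscriber's slot is already full the
// tick is dropped, because the next snapshot that consumer pulls already
// reflects every change that happened in between.
func (n *statusNotifier) notifyStateChange() {
	n.mu.Lock()
	defer n.mu.Unlock()
	for ch := range n.subs {
		select {
		case ch <- struct{}{}:
		default: // subscriber still composing a snapshot; coalesce
		}
	}
}

Dropping instead of blocking is safe here because subscribers pull full FullStatus snapshots rather than deltas.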
Zoltán Papp
0148d926d5 [client/ui-wails] Use original Fyne tray PNGs and drop the .ico split
The SVG-derived tray icons + multi-resolution .ico path looked correct on
disk but Wails3's Shell_NotifyIcon update never landed on the running
Windows tray — the icon stayed frozen on the .exe resource regardless of
how many times we called SetIcon. Single-PNG fed through the same path
updates correctly, so revert to the source-of-truth PNGs that ship with
the legacy Fyne UI and remove the icons_windows.go / tray_icon_*.go
split. The 6 colored tray PNGs and 6 macOS-template PNGs come from
client/ui/assets verbatim. Generation pipeline (assets/svg/) is gone.
2026-04-29 18:54:51 +02:00
Zoltán Papp
8f16a19b8f [client/ui-wails] Add windows:build:console task for log debugging
The default Windows build links the binary as a GUI subsystem app, so
stdout/stderr is detached from the launching terminal — invisible logrus
output makes tray and event-stream bugs hard to chase. Add a sibling task
that links as console subsystem and writes a separately-named binary so
the production output is preserved.

Usage:
  CGO_ENABLED=1 task windows:build:console
  bin\netbird-ui-console.exe   # logs print to the launching cmd/PowerShell
2026-04-29 16:21:45 +02:00
Zoltán Papp
504dceedf3 [client] Add Wails3 + React desktop UI scaffold
Stage 1 of the client/ui (Fyne) replacement. Adds a new client/ui-wails
module that runs on Linux/macOS/Windows from a single React + Vite +
Tailwind frontend driven by a thin gRPC services layer in Go.

- Single-module integration (no submodule): merge Wails3 into root go.mod
  with build tags !android !ios !freebsd !js so cross-compiles on those
  targets exclude the package automatically.
- Seven gRPC-bound services: Connection, Settings, Networks, Profiles,
  Debug, Update, Peers. Peers bridges Status polling and SubscribeEvents
  to the Wails event bus (netbird:status, netbird:event).
- Tray + window shell mirrors the Fyne menu 1:1 with hide-on-close,
  SIGUSR1 / Windows named-event for external "show window" triggers.
- React pages cover functional parity for Status, Settings (3 tabs),
  Networks (3 tabs), Profiles, Debug, Update, QuickActions, LoginUrl.
- SVG-sourced tray icons (12 source SVGs incl. macOS template variants)
  rasterized to PNG via task common:generate:tray:icons.
- Linux launcher sets WEBKIT_DISABLE_DMABUF_RENDERER=1 in the .desktop
  Exec= line and in task linux:run so the app renders correctly under
  RDP, VirtualBox, KVM, and bare WMs (Fluxbox/dwm) without DRM access.
2026-04-29 11:10:23 +02:00
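The single-module integration in the first bullet hinges on Go build constraints; each file in the new package would open with a line like this (a sketch of the constraint named in the commit; the package name is hypothetical):

//go:build !android && !ios && !freebsd && !js

package uiwails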
Viktor Liu
e5474e199f [client] Use WinRT COM for Windows toasts (#6013)
* Use WinRT COM for Windows toasts instead of fyne's PowerShell path

* Quote autostart path and split HKCU registry into per-user component
2026-04-28 20:54:06 +02:00
Bethuel Mmbaga
db44848e2d [management] Drop netmap calculation on peer read (#6006) 2026-04-28 18:25:56 +03:00
EL OUAZIZI Walid
9417ce3b3a fix(getting-started): Infinite healthcheck loop with existing traefik (#5871) 2026-04-28 17:22:51 +02:00
Zoltan Papp
8fc4265995 [relay] evict foreign client cache on disconnect (#6015)
* [relay] evict foreign client cache on disconnect

When a foreign relay's TCP connection drops, the manager's
onServerDisconnected handler only triggered reconnect logic for the
home server; the disconnected foreign entry stayed in the relayClients
cache. Subsequent OpenConn calls reused the closed client until the
60-second cleanup tick evicted it, breaking peer connectivity through
that relay for up to a minute.

Evict the foreign entry from the cache on disconnect so the next
OpenConn dials a fresh client.

Also:
- Make the reconnect backoff cap configurable via WithMaxBackoffInterval
  ManagerOption; the previous hard-coded 60s constant forced
  TestAutoReconnect to sleep ~61s. Test now polls Ready() and finishes
  in ~2s.
- Add NB_HOME_RELAY_SERVERS env var that overrides the relay URL list
  received from management, so a peer can be pinned to a specific home
  relay (used by the netbird-conn-lab Edge 4 reproducer).

* [client] treat empty NB_HOME_RELAY_SERVERS as unset

Returning (urls=[], ok=true) when the env var contained only separators or
whitespace caused callers to wipe the mgmt-provided relay list, leaving the
peer with no relays. Treat a parsed-empty result the same as an unset env.
2026-04-28 15:04:48 +02:00
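A sketch of the parsed-empty-means-unset rule from the follow-up commit. The exported name matches the peer.OverrideRelayURLs call visible later in this diff, but the body here is an assumption:

package peer

import (
	"os"
	"strings"
)

const EnvKeyNBHomeRelayServers = "NB_HOME_RELAY_SERVERS"

// OverrideRelayURLs reports ok only when at least one URL actually parsed,
// so an env var holding nothing but separators or whitespace is treated the
// same as an unset one and the mgmt-provided relay list is kept.
func OverrideRelayURLs() ([]string, bool) {
	raw, present := os.LookupEnv(EnvKeyNBHomeRelayServers)
	if !present {
		return nil, false
	}
	urls := strings.FieldsFunc(raw, func(r rune) bool {
		return r == ',' || r == ' ' || r == '\t' || r == '\n'
	})
	if len(urls) == 0 {
		return nil, false // parsed empty: treat as unset
	}
	return urls, true
}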
Zoltan Papp
9c50819f20 Don't mark management disconnected on transient job stream errors (#6005)
The JOB stream is a separate channel from the SYNC stream. Server-side
EOF or other transient errors on the JOB stream do not indicate that
the management connection is unhealthy — the SYNC stream remains the
authoritative state signal.

Previously, a JOB stream EOF would call notifyDisconnected and the
client would emit OnConnecting to the UI. The backoff retry would
reconnect the JOB stream, but handleJobStream never calls notifyConnected
on success, so the UI was stuck on "Connecting" until the next SYNC
event or health check.

Keep notifyDisconnected for codes.PermissionDenied since IsLoginRequired
relies on managementError to detect expired auth.
2026-04-28 15:04:41 +02:00
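A hedged sketch of the classification this describes (assumed shape, not the actual handler); gRPC status codes separate auth failures from transient stream errors:

package client

import (
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// handleJobStreamErr: only auth failures flip the management-connection
// state; everything else is retried quietly with backoff.
func handleJobStreamErr(err error, notifyDisconnected func(error)) {
	if status.Code(err) == codes.PermissionDenied {
		// IsLoginRequired inspects managementError, so expired auth must
		// still be surfaced through notifyDisconnected.
		notifyDisconnected(err)
		return
	}
	// EOF and other transient JOB-stream errors: let the backoff retry
	// reconnect the stream; the SYNC stream stays the authoritative
	// connection-state signal.
}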
Bethuel Mmbaga
6f0eff3ba0 [management] Handle single-string JWT group claim from IdPs (#6014) 2026-04-28 14:48:28 +03:00
Bethuel Mmbaga
f8745723fc [management] Add Microsoft AD FS support for embedded Dex identity providers (#6008) 2026-04-28 12:42:19 +03:00
Vlad
154b81645a [management] removed legacy network map code (#5565) 2026-04-27 16:02:54 +02:00
Maycon Santos
34167c8a16 [misc] Update release pipeline version (#5995) 2026-04-27 10:55:38 +02:00
Maycon Santos
d6f08e4840 [misc] Update sign pipeline version (#5981) 2026-04-24 13:13:27 +02:00
Zoltan Papp
f732b01a05 [management] unify peer-update test timeout via constant (#5952)
peerShouldReceiveUpdate waited 500ms for the expected update message,
and every outer wrapper across the management/server test suite paired
it with a 1s goroutine-drain timeout. Both were too tight for slower
CI runners (MySQL, FreeBSD, loaded sqlite), producing intermittent
"Timed out waiting for update message" failures in tests like
TestDNSAccountPeersUpdate, TestPeerAccountPeersUpdate, and
TestNameServerAccountPeersUpdate.

Introduce peerUpdateTimeout (5s) next to the helper and use it both in
the helper and in every outer wrapper so the two timeouts stay in sync.
The full 5s elapses only on failure; passing tests return as soon as the
channel delivers, so there is no slowdown on green runs.
2026-04-23 21:19:21 +02:00
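An illustrative reduction of the shared-constant pattern; the constant name comes from the commit, the helper body is an assumption:

package server

import (
	"testing"
	"time"
)

// peerUpdateTimeout is used by the helper and every outer wrapper, so the
// two timeouts cannot drift apart again.
const peerUpdateTimeout = 5 * time.Second

func peerShouldReceiveUpdate(t *testing.T, updates <-chan struct{}) {
	t.Helper()
	select {
	case <-updates: // green path returns immediately; no slowdown
	case <-time.After(peerUpdateTimeout):
		t.Fatal("timed out waiting for update message")
	}
}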
alsruf36
c07c726ea7 [proxy] Set session cookie path to root (#5915) 2026-04-23 18:20:54 +02:00
Pascal Fischer
fa0d58d093 [management] exclude peers for expiration job that have already been marked expired (#5970) 2026-04-23 16:01:54 +02:00
Vlad
b6038e8acd [management] refactor: changeable pat rate limiting (#5946) 2026-04-23 15:13:22 +02:00
Zoltan Papp
5da05ecca6 [client] increase gRPC health check timeout to 5s (#5961)
Bump the IsHealthy() context timeout from 1s to 5s for both the
management and signal gRPC clients to reduce false negatives on
slower or congested connections.
2026-04-22 20:54:18 +02:00
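The pattern, sketched with an illustrative probe callback standing in for the real gRPC round trip:

package client

import (
	"context"
	"time"
)

// isHealthy gives each probe its own 5s deadline (previously 1s), trading a
// slower failure verdict for fewer false negatives on congested links.
func isHealthy(parent context.Context, probe func(context.Context) error) bool {
	ctx, cancel := context.WithTimeout(parent, 5*time.Second)
	defer cancel()
	return probe(ctx) == nil
}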
Viktor Liu
801de8c68d [client] Add TTL-based refresh to mgmt DNS cache via handler chain (#5945) 2026-04-22 15:10:14 +02:00
Viktor Liu
a822a33240 [self-hosted] Use cscli lapi status for CrowdSec readiness in installer (#5949) 2026-04-22 10:35:22 +02:00
Bethuel Mmbaga
57b23c5b25 [management] Propagate context changes to upstream middleware (#5956) 2026-04-21 23:06:52 +03:00
Zoltan Papp
1165058fad [client] fix port collision in TestUpload (#5950)
* [debug] fix port collision in TestUpload

TestUpload hardcoded :8080, so it failed deterministically when anything
was already on that port and collided across concurrent test runs.
Bind a :0 listener in the test to get a kernel-assigned free port, and
add Server.Serve so tests can hand the listener in without reaching
into unexported state.

* [debug] drop test-only Server.Serve, use SERVER_ADDRESS env

The previous commit added a Server.Serve method on the upload-server,
used only by TestUpload. That left production with an unused function.
Reserve an ephemeral loopback port in the test, release it, and pass
the address through SERVER_ADDRESS (which the server already reads).
A small wait helper ensures the server is accepting connections before
the upload runs, so the close/rebind gap does not cause a false failure.
2026-04-21 19:07:20 +02:00
Zoltan Papp
703353d354 [flow] fix goroutine leak in TestReceive_ProtocolErrorStreamReconnect (#5951)
The Receive goroutine could outlive the test and call t.Logf after
teardown, panicking with "Log in goroutine after ... has completed".
Register a cleanup that waits for the goroutine to exit; ordering is
LIFO so it runs after client.Close, which is what unblocks Receive.
2026-04-21 19:06:47 +02:00
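A self-contained sketch of the LIFO cleanup ordering the fix relies on, with a fake client standing in for the flow client:

package flow

import "testing"

type fakeClient struct{ stop chan struct{} }

func (c *fakeClient) Close()   { close(c.stop) }
func (c *fakeClient) Receive() { <-c.stop } // blocks until Close

func TestReceiveSketch(t *testing.T) {
	client := &fakeClient{stop: make(chan struct{})}
	done := make(chan struct{})

	// Registered first, so under LIFO it runs last: it waits for the
	// Receive goroutine, which only exits after Close (below) has run.
	// By teardown time the goroutine is gone and can no longer call t.Logf.
	t.Cleanup(func() { <-done })
	t.Cleanup(client.Close) // registered second, runs first; unblocks Receive

	go func() {
		defer close(done)
		client.Receive()
	}()
}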
Zoltan Papp
2fb50aef6b [client] allow UDP packet loss in TestICEBind_HandlesConcurrentMixedTraffic (#5953)
The test writes 500 packets per family and asserted exact-count
delivery within a 5s window, even though its own comment says "Some
packet loss is acceptable for UDP". On FreeBSD/QEMU runners the writer
loops cannot always finish all 500 before the 5s deadline closes the
readers (we have seen 411/500 in CI).

The real assertion of this test is the routing check — IPv4 peer only
gets v4- packets, IPv6 peer only gets v6- packets — which remains
strict. Replace the exact-count assertions with a >=80% delivery
threshold so runner speed variance no longer causes false failures.
2026-04-21 19:05:58 +02:00
Vlad
eb3aa96257 [management] check policy for changes before actual db update (#5405) 2026-04-21 18:37:04 +02:00
Viktor Liu
064ec1c832 [client] Trust wg interface in firewalld to bypass owner-flagged chains (#5928) 2026-04-21 17:57:16 +02:00
Viktor Liu
75e408f51c [client] Prefer systemd-resolved stub over file mode regardless of resolv.conf header (#5935) 2026-04-21 17:56:56 +02:00
Zoltan Papp
5a89e6621b [client] Suppress ICE signaling (#5820)
* [client] Suppress ICE signaling and periodic offers in force-relay mode

When NB_FORCE_RELAY is enabled, skip WorkerICE creation entirely,
suppress ICE credentials in offer/answer messages, disable the
periodic ICE candidate monitor, and fix isConnectedOnAllWay to
only check relay status so the guard stops sending unnecessary offers.

* [client] Dynamically suppress ICE based on remote peer's offer credentials

Track whether the remote peer includes ICE credentials in its
offers/answers. When remote stops sending ICE credentials, skip
ICE listener dispatch, suppress ICE credentials in responses, and
exclude ICE from the guard connectivity check. When remote resumes
sending ICE credentials, re-enable all ICE behavior.

* [client] Fix nil SessionID panic and force ICE teardown on relay-only transition

Fix nil pointer dereference in signalOfferAnswer when SessionID is nil
(relay-only offers). Close stale ICE agent immediately when remote peer
stops sending ICE credentials to avoid traffic black-hole during the
ICE disconnect timeout.

* [client] Add relay-only fallback check when ICE is unavailable

Ensure a relay connection with the peer is supported when ICE is disabled, to prevent connectivity issues.

* [client] Add tri-state connection status to guard for smarter ICE retry (#5828)

* [client] Add tri-state connection status to guard for smarter ICE retry

Refactor isConnectedOnAllWay to return a ConnStatus enum (Connected,
Disconnected, PartiallyConnected) instead of a boolean. When relay is
up but ICE is not (PartiallyConnected), limit ICE offers to 3 retries
with exponential backoff then fall back to hourly attempts, reducing
unnecessary signaling traffic. Fully disconnected peers continue to
retry aggressively. External events (relay/ICE disconnect, signal/relay
reconnect) reset retry state to give ICE a fresh chance.

* [client] Clarify guard ICE retry state and trace log trigger

Split iceRetryState.attempt into shouldRetry (pure predicate) and
enterHourlyMode (explicit state transition) so the caller in
reconnectLoopWithRetry reads top-to-bottom. Restore the original
trace-log behavior in isConnectedOnAllWay so it only logs on full
disconnection, not on the new PartiallyConnected state.

* [client] Extract pure evalConnStatus and add unit tests

Split isConnectedOnAllWay into a thin method that snapshots state and
a pure evalConnStatus helper that takes a connStatusInputs struct, so
the tri-state decision logic can be exercised without constructing
full Worker or Handshaker objects. Add table-driven tests covering
force-relay, ICE-unavailable and fully-available code paths, plus
unit tests for iceRetryState budget/hourly transitions and reset.

* [client] Improve grammar in logs and refactor ICE credential checks
2026-04-21 15:52:08 +02:00
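A sketch of the tri-state evaluation this series describes; the identifiers follow the commit text, but the exact signatures are assumptions:

package peer

// ConnStatus is the guard's tri-state connectivity verdict.
type ConnStatus int

const (
	StatusDisconnected ConnStatus = iota
	StatusPartiallyConnected // relay up, ICE down: 3 backoff retries, then hourly
	StatusConnected
)

// connStatusInputs snapshots the state the decision needs, keeping
// evalConnStatus a pure function that table-driven tests can exercise
// without constructing full Worker or Handshaker objects.
type connStatusInputs struct {
	relayUp     bool
	iceUp       bool
	iceDisabled bool // force-relay, or remote stopped sending ICE credentials
}

func evalConnStatus(in connStatusInputs) ConnStatus {
	switch {
	case in.relayUp && (in.iceUp || in.iceDisabled):
		return StatusConnected
	case in.relayUp:
		return StatusPartiallyConnected
	default:
		return StatusDisconnected
	}
}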
Misha Bragin
06dfa9d4a5 [management] replace mailru/easyjson with netbirdio/easyjson fork (#5938) 2026-04-21 13:59:35 +02:00
Misha Bragin
45d9ee52c0 [self-hosted] add reverse proxy retention fields to combined YAML (#5930) 2026-04-21 10:21:11 +02:00
270 changed files with 18462 additions and 6072 deletions

View File

@@ -9,7 +9,7 @@ on:
pull_request:
env:
SIGN_PIPE_VER: "v0.1.2"
SIGN_PIPE_VER: "v0.1.4"
GORELEASER_VER: "v2.14.3"
PRODUCT_NAME: "NetBird"
COPYRIGHT: "NetBird GmbH"

View File

@@ -0,0 +1,11 @@
// Package firewalld integrates with the firewalld daemon so NetBird can place
// its wg interface into firewalld's "trusted" zone. This is required because
// firewalld's nftables chains are created with NFT_CHAIN_OWNER on recent
// versions, which returns EPERM to any other process that tries to insert
// rules into them. The workaround mirrors what Tailscale does: let firewalld
// itself add the accept rules to its own chains by trusting the interface.
package firewalld

// TrustedZone is the firewalld zone name used for interfaces whose traffic
// should bypass firewalld filtering.
const TrustedZone = "trusted"

View File

@@ -0,0 +1,260 @@
//go:build linux

package firewalld

import (
	"context"
	"errors"
	"fmt"
	"os/exec"
	"strings"
	"sync"
	"time"

	"github.com/godbus/dbus/v5"
	log "github.com/sirupsen/logrus"
)

const (
	dbusDest      = "org.fedoraproject.FirewallD1"
	dbusPath      = "/org/fedoraproject/FirewallD1"
	dbusRootIface = "org.fedoraproject.FirewallD1"
	dbusZoneIface = "org.fedoraproject.FirewallD1.zone"

	errZoneAlreadySet = "ZONE_ALREADY_SET"
	errAlreadyEnabled = "ALREADY_ENABLED"
	errUnknownIface   = "UNKNOWN_INTERFACE"
	errNotEnabled     = "NOT_ENABLED"

	// callTimeout bounds each individual DBus or firewall-cmd invocation.
	// A fresh context is created for each call so a slow DBus probe can't
	// exhaust the deadline before the firewall-cmd fallback gets to run.
	callTimeout = 3 * time.Second
)

var (
	errDBusUnavailable = errors.New("firewalld dbus unavailable")

	// trustLogOnce ensures the "added to trusted zone" message is logged at
	// Info level only for the first successful add per process; repeat adds
	// from other init paths are quieter.
	trustLogOnce sync.Once

	parentCtxMu sync.RWMutex
	parentCtx   context.Context = context.Background()
)

// SetParentContext installs a parent context whose cancellation aborts any
// in-flight TrustInterface call. It does not affect UntrustInterface, which
// always uses a fresh Background-rooted timeout so cleanup can still run
// during engine shutdown when the engine context is already cancelled.
func SetParentContext(ctx context.Context) {
	parentCtxMu.Lock()
	parentCtx = ctx
	parentCtxMu.Unlock()
}

func getParentContext() context.Context {
	parentCtxMu.RLock()
	defer parentCtxMu.RUnlock()
	return parentCtx
}

// TrustInterface places iface into firewalld's trusted zone if firewalld is
// running. It is idempotent and best-effort: errors are returned so callers
// can log, but a non-running firewalld is not an error. Only the first
// successful call per process logs at Info. Respects the parent context set
// via SetParentContext so startup-time cancellation unblocks it.
func TrustInterface(iface string) error {
	parent := getParentContext()
	if !isRunning(parent) {
		return nil
	}

	if err := addTrusted(parent, iface); err != nil {
		return fmt.Errorf("add %s to firewalld trusted zone: %w", iface, err)
	}

	trustLogOnce.Do(func() {
		log.Infof("added %s to firewalld trusted zone", iface)
	})
	log.Debugf("firewalld: ensured %s is in trusted zone", iface)
	return nil
}

// UntrustInterface removes iface from firewalld's trusted zone if firewalld
// is running. Idempotent. Uses a Background-rooted timeout so it still runs
// during shutdown after the engine context has been cancelled.
func UntrustInterface(iface string) error {
	if !isRunning(context.Background()) {
		return nil
	}

	if err := removeTrusted(context.Background(), iface); err != nil {
		return fmt.Errorf("remove %s from firewalld trusted zone: %w", iface, err)
	}
	return nil
}

func newCallContext(parent context.Context) (context.Context, context.CancelFunc) {
	return context.WithTimeout(parent, callTimeout)
}

func isRunning(parent context.Context) bool {
	ctx, cancel := newCallContext(parent)
	ok, err := isRunningDBus(ctx)
	cancel()
	if err == nil {
		return ok
	}

	if errors.Is(err, errDBusUnavailable) || errors.Is(err, context.DeadlineExceeded) {
		ctx, cancel = newCallContext(parent)
		defer cancel()
		return isRunningCLI(ctx)
	}
	return false
}

func addTrusted(parent context.Context, iface string) error {
	ctx, cancel := newCallContext(parent)
	err := addDBus(ctx, iface)
	cancel()
	if err == nil {
		return nil
	}

	if !errors.Is(err, errDBusUnavailable) {
		log.Debugf("firewalld: dbus add failed, falling back to firewall-cmd: %v", err)
	}

	ctx, cancel = newCallContext(parent)
	defer cancel()
	return addCLI(ctx, iface)
}

func removeTrusted(parent context.Context, iface string) error {
	ctx, cancel := newCallContext(parent)
	err := removeDBus(ctx, iface)
	cancel()
	if err == nil {
		return nil
	}

	if !errors.Is(err, errDBusUnavailable) {
		log.Debugf("firewalld: dbus remove failed, falling back to firewall-cmd: %v", err)
	}

	ctx, cancel = newCallContext(parent)
	defer cancel()
	return removeCLI(ctx, iface)
}

func isRunningDBus(ctx context.Context) (bool, error) {
	conn, err := dbus.SystemBus()
	if err != nil {
		return false, fmt.Errorf("%w: %v", errDBusUnavailable, err)
	}

	obj := conn.Object(dbusDest, dbusPath)
	var zone string
	if err := obj.CallWithContext(ctx, dbusRootIface+".getDefaultZone", 0).Store(&zone); err != nil {
		return false, fmt.Errorf("firewalld getDefaultZone: %w", err)
	}
	return true, nil
}

func isRunningCLI(ctx context.Context) bool {
	if _, err := exec.LookPath("firewall-cmd"); err != nil {
		return false
	}
	return exec.CommandContext(ctx, "firewall-cmd", "--state").Run() == nil
}

func addDBus(ctx context.Context, iface string) error {
	conn, err := dbus.SystemBus()
	if err != nil {
		return fmt.Errorf("%w: %v", errDBusUnavailable, err)
	}

	obj := conn.Object(dbusDest, dbusPath)
	call := obj.CallWithContext(ctx, dbusZoneIface+".addInterface", 0, TrustedZone, iface)
	if call.Err == nil {
		return nil
	}
	if dbusErrContains(call.Err, errAlreadyEnabled) {
		return nil
	}
	if dbusErrContains(call.Err, errZoneAlreadySet) {
		move := obj.CallWithContext(ctx, dbusZoneIface+".changeZoneOfInterface", 0, TrustedZone, iface)
		if move.Err != nil {
			return fmt.Errorf("firewalld changeZoneOfInterface: %w", move.Err)
		}
		return nil
	}
	return fmt.Errorf("firewalld addInterface: %w", call.Err)
}

func removeDBus(ctx context.Context, iface string) error {
	conn, err := dbus.SystemBus()
	if err != nil {
		return fmt.Errorf("%w: %v", errDBusUnavailable, err)
	}

	obj := conn.Object(dbusDest, dbusPath)
	call := obj.CallWithContext(ctx, dbusZoneIface+".removeInterface", 0, TrustedZone, iface)
	if call.Err == nil {
		return nil
	}
	if dbusErrContains(call.Err, errUnknownIface) || dbusErrContains(call.Err, errNotEnabled) {
		return nil
	}
	return fmt.Errorf("firewalld removeInterface: %w", call.Err)
}

func addCLI(ctx context.Context, iface string) error {
	if _, err := exec.LookPath("firewall-cmd"); err != nil {
		return fmt.Errorf("firewall-cmd not available: %w", err)
	}

	// --change-interface (no --permanent) binds the interface for the
	// current runtime only; we do not want membership to persist across
	// reboots because netbird re-asserts it on every startup.
	out, err := exec.CommandContext(ctx,
		"firewall-cmd", "--zone="+TrustedZone, "--change-interface="+iface,
	).CombinedOutput()
	if err != nil {
		return fmt.Errorf("firewall-cmd change-interface: %w: %s", err, strings.TrimSpace(string(out)))
	}
	return nil
}

func removeCLI(ctx context.Context, iface string) error {
	if _, err := exec.LookPath("firewall-cmd"); err != nil {
		return fmt.Errorf("firewall-cmd not available: %w", err)
	}

	out, err := exec.CommandContext(ctx,
		"firewall-cmd", "--zone="+TrustedZone, "--remove-interface="+iface,
	).CombinedOutput()
	if err != nil {
		msg := strings.TrimSpace(string(out))
		if strings.Contains(msg, errUnknownIface) || strings.Contains(msg, errNotEnabled) {
			return nil
		}
		return fmt.Errorf("firewall-cmd remove-interface: %w: %s", err, msg)
	}
	return nil
}

func dbusErrContains(err error, code string) bool {
	if err == nil {
		return false
	}
	var de dbus.Error
	if errors.As(err, &de) {
		for _, b := range de.Body {
			if s, ok := b.(string); ok && strings.Contains(s, code) {
				return true
			}
		}
	}
	return strings.Contains(err.Error(), code)
}
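
A minimal usage sketch of the package above, assuming it is driven from the engine's startup and shutdown paths (the interface name wt0 is illustrative):

package main

import (
	"context"

	log "github.com/sirupsen/logrus"

	"github.com/netbirdio/netbird/client/firewall/firewalld"
)

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()

	// Cancelling ctx aborts an in-flight TrustInterface during startup.
	firewalld.SetParentContext(ctx)
	if err := firewalld.TrustInterface("wt0"); err != nil {
		log.Warnf("failed to trust interface in firewalld: %v", err)
	}

	// UntrustInterface is Background-rooted, so it still runs on the
	// shutdown path even after ctx has been cancelled.
	if err := firewalld.UntrustInterface("wt0"); err != nil {
		log.Warnf("failed to untrust interface in firewalld: %v", err)
	}
}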

View File

@@ -0,0 +1,49 @@
//go:build linux

package firewalld

import (
	"errors"
	"testing"

	"github.com/godbus/dbus/v5"
)

func TestDBusErrContains(t *testing.T) {
	tests := []struct {
		name string
		err  error
		code string
		want bool
	}{
		{"nil error", nil, errZoneAlreadySet, false},
		{"plain error match", errors.New("ZONE_ALREADY_SET: wt0"), errZoneAlreadySet, true},
		{"plain error miss", errors.New("something else"), errZoneAlreadySet, false},
		{
			"dbus.Error body match",
			dbus.Error{Name: "org.fedoraproject.FirewallD1.Exception", Body: []any{"ZONE_ALREADY_SET: wt0"}},
			errZoneAlreadySet,
			true,
		},
		{
			"dbus.Error body miss",
			dbus.Error{Name: "org.fedoraproject.FirewallD1.Exception", Body: []any{"INVALID_INTERFACE"}},
			errAlreadyEnabled,
			false,
		},
		{
			"dbus.Error non-string body falls back to Error()",
			dbus.Error{Name: "x", Body: []any{123}},
			"x",
			true,
		},
	}

	for _, tc := range tests {
		t.Run(tc.name, func(t *testing.T) {
			got := dbusErrContains(tc.err, tc.code)
			if got != tc.want {
				t.Fatalf("dbusErrContains(%v, %q) = %v; want %v", tc.err, tc.code, got, tc.want)
			}
		})
	}
}

View File

@@ -0,0 +1,25 @@
//go:build !linux

package firewalld

import "context"

// SetParentContext is a no-op on non-Linux platforms because firewalld only
// runs on Linux.
func SetParentContext(context.Context) {
	// intentionally empty: firewalld is a Linux-only daemon
}

// TrustInterface is a no-op on non-Linux platforms because firewalld only
// runs on Linux.
func TrustInterface(string) error {
	// intentionally empty: firewalld is a Linux-only daemon
	return nil
}

// UntrustInterface is a no-op on non-Linux platforms because firewalld only
// runs on Linux.
func UntrustInterface(string) error {
	// intentionally empty: firewalld is a Linux-only daemon
	return nil
}

View File

@@ -12,6 +12,7 @@ import (
log "github.com/sirupsen/logrus"
nberrors "github.com/netbirdio/netbird/client/errors"
"github.com/netbirdio/netbird/client/firewall/firewalld"
firewall "github.com/netbirdio/netbird/client/firewall/manager"
"github.com/netbirdio/netbird/client/iface/wgaddr"
"github.com/netbirdio/netbird/client/internal/statemanager"
@@ -86,6 +87,12 @@ func (m *Manager) Init(stateManager *statemanager.Manager) error {
log.Warnf("raw table not available, notrack rules will be disabled: %v", err)
}
// Trust after all fatal init steps so a later failure doesn't leave the
// interface in firewalld's trusted zone without a corresponding Close.
if err := firewalld.TrustInterface(m.wgIface.Name()); err != nil {
log.Warnf("failed to trust interface in firewalld: %v", err)
}
// persist early to ensure cleanup of chains
go func() {
if err := stateManager.PersistState(context.Background()); err != nil {
@@ -191,6 +198,12 @@ func (m *Manager) Close(stateManager *statemanager.Manager) error {
merr = multierror.Append(merr, fmt.Errorf("reset router: %w", err))
}
// Appending to merr intentionally blocks DeleteState below so ShutdownState
// stays persisted and the crash-recovery path retries firewalld cleanup.
if err := firewalld.UntrustInterface(m.wgIface.Name()); err != nil {
merr = multierror.Append(merr, err)
}
// attempt to delete state only if all other operations succeeded
if merr == nil {
if err := stateManager.DeleteState(&ShutdownState{}); err != nil {
@@ -217,6 +230,11 @@ func (m *Manager) AllowNetbird() error {
if err != nil {
return fmt.Errorf("allow netbird interface traffic: %w", err)
}
if err := firewalld.TrustInterface(m.wgIface.Name()); err != nil {
log.Warnf("failed to trust interface in firewalld: %v", err)
}
return nil
}

View File

@@ -14,6 +14,7 @@ import (
log "github.com/sirupsen/logrus"
"golang.org/x/sys/unix"
"github.com/netbirdio/netbird/client/firewall/firewalld"
firewall "github.com/netbirdio/netbird/client/firewall/manager"
"github.com/netbirdio/netbird/client/iface/wgaddr"
"github.com/netbirdio/netbird/client/internal/statemanager"
@@ -217,6 +218,10 @@ func (m *Manager) AllowNetbird() error {
return fmt.Errorf("flush allow input netbird rules: %w", err)
}
if err := firewalld.TrustInterface(m.wgIface.Name()); err != nil {
log.Warnf("failed to trust interface in firewalld: %v", err)
}
return nil
}

View File

@@ -19,6 +19,7 @@ import (
"golang.org/x/sys/unix"
nberrors "github.com/netbirdio/netbird/client/errors"
"github.com/netbirdio/netbird/client/firewall/firewalld"
firewall "github.com/netbirdio/netbird/client/firewall/manager"
nbid "github.com/netbirdio/netbird/client/internal/acl/id"
"github.com/netbirdio/netbird/client/internal/routemanager/ipfwdstate"
@@ -40,6 +41,8 @@ const (
chainNameForward = "FORWARD"
chainNameMangleForward = "netbird-mangle-forward"
firewalldTableName = "firewalld"
userDataAcceptForwardRuleIif = "frwacceptiif"
userDataAcceptForwardRuleOif = "frwacceptoif"
userDataAcceptInputRule = "inputaccept"
@@ -133,6 +136,10 @@ func (r *router) Reset() error {
merr = multierror.Append(merr, fmt.Errorf("remove accept filter rules: %w", err))
}
if err := firewalld.UntrustInterface(r.wgIface.Name()); err != nil {
merr = multierror.Append(merr, err)
}
if err := r.removeNatPreroutingRules(); err != nil {
merr = multierror.Append(merr, fmt.Errorf("remove filter prerouting rules: %w", err))
}
@@ -280,6 +287,10 @@ func (r *router) createContainers() error {
log.Errorf("failed to add accept rules for the forward chain: %s", err)
}
if err := firewalld.TrustInterface(r.wgIface.Name()); err != nil {
log.Warnf("failed to trust interface in firewalld: %v", err)
}
if err := r.refreshRulesMap(); err != nil {
log.Errorf("failed to refresh rules: %s", err)
}
@@ -1319,6 +1330,13 @@ func (r *router) isExternalChain(chain *nftables.Chain) bool {
return false
}
// Skip firewalld-owned chains. Firewalld creates its chains with the
// NFT_CHAIN_OWNER flag, so inserting rules into them returns EPERM.
// We delegate acceptance to firewalld by trusting the interface instead.
if chain.Table.Name == firewalldTableName {
return false
}
// Skip all iptables-managed tables in the ip family
if chain.Table.Family == nftables.TableFamilyIPv4 && isIptablesTable(chain.Table.Name) {
return false

View File

@@ -3,6 +3,9 @@
package uspfilter
import (
log "github.com/sirupsen/logrus"
"github.com/netbirdio/netbird/client/firewall/firewalld"
"github.com/netbirdio/netbird/client/internal/statemanager"
)
@@ -16,6 +19,9 @@ func (m *Manager) Close(stateManager *statemanager.Manager) error {
if m.nativeFirewall != nil {
return m.nativeFirewall.Close(stateManager)
}
if err := firewalld.UntrustInterface(m.wgIface.Name()); err != nil {
log.Warnf("failed to untrust interface in firewalld: %v", err)
}
return nil
}
@@ -24,5 +30,8 @@ func (m *Manager) AllowNetbird() error {
if m.nativeFirewall != nil {
return m.nativeFirewall.AllowNetbird()
}
if err := firewalld.TrustInterface(m.wgIface.Name()); err != nil {
log.Warnf("failed to trust interface in firewalld: %v", err)
}
return nil
}

View File

@@ -9,6 +9,7 @@ import (
// IFaceMapper defines subset methods of interface required for manager
type IFaceMapper interface {
Name() string
SetFilter(device.PacketFilter) error
Address() wgaddr.Address
GetWGDevice() *wgdevice.Device

View File

@@ -31,12 +31,20 @@ var logger = log.NewFromLogrus(logrus.StandardLogger())
var flowLogger = netflow.NewManager(nil, []byte{}, nil).GetLogger()
type IFaceMock struct {
NameFunc func() string
SetFilterFunc func(device.PacketFilter) error
AddressFunc func() wgaddr.Address
GetWGDeviceFunc func() *wgdevice.Device
GetDeviceFunc func() *device.FilteredDevice
}
func (i *IFaceMock) Name() string {
if i.NameFunc == nil {
return "wgtest"
}
return i.NameFunc()
}
func (i *IFaceMock) GetWGDevice() *wgdevice.Device {
if i.GetWGDeviceFunc == nil {
return nil

View File

@@ -239,8 +239,12 @@ func TestICEBind_HandlesConcurrentMixedTraffic(t *testing.T) {
ipv6Count++
}
assert.Equal(t, packetsPerFamily, ipv4Count)
assert.Equal(t, packetsPerFamily, ipv6Count)
// Allow some UDP packet loss under load (e.g. FreeBSD/QEMU runners). The
// routing-correctness checks above are the real assertions; the counts
// are a sanity bound to catch a totally silent path.
minDelivered := packetsPerFamily * 80 / 100
assert.GreaterOrEqual(t, ipv4Count, minDelivered, "IPv4 delivery below threshold")
assert.GreaterOrEqual(t, ipv6Count, minDelivered, "IPv6 delivery below threshold")
}
func TestICEBind_DetectsAddressFamilyFromConnection(t *testing.T) {

View File

@@ -200,6 +200,7 @@ Pop $0
!macroend
Function .onInit
SetRegView 64
StrCpy $INSTDIR "${INSTALL_DIR}"
ReadRegStr $R0 HKLM "Software\Microsoft\Windows\CurrentVersion\Uninstall\$(^NAME)" "UninstallString"
${If} $R0 != ""
@@ -214,6 +215,10 @@ ${If} $R0 != ""
${EndIf}
FunctionEnd
Function un.onInit
SetRegView 64
FunctionEnd
######################################################################
Section -MainProgram
${INSTALL_TYPE}
@@ -228,6 +233,7 @@ Section -MainProgram
!else
File /r "..\\dist\\netbird_windows_amd64\\"
!endif
File "..\\client\\ui\\assets\\netbird.png"
SectionEnd
######################################################################
@@ -247,9 +253,11 @@ WriteRegStr ${REG_ROOT} "${UI_REG_APP_PATH}" "" "$INSTDIR\${UI_APP_EXE}"
; Create autostart registry entry based on checkbox
DetailPrint "Autostart enabled: $AutostartEnabled"
${If} $AutostartEnabled == "1"
WriteRegStr HKCU "${AUTOSTART_REG_KEY}" "${APP_NAME}" "$INSTDIR\${UI_APP_EXE}.exe"
WriteRegStr HKLM "${AUTOSTART_REG_KEY}" "${APP_NAME}" '"$INSTDIR\${UI_APP_EXE}.exe"'
DetailPrint "Added autostart registry entry: $INSTDIR\${UI_APP_EXE}.exe"
${Else}
DeleteRegValue HKLM "${AUTOSTART_REG_KEY}" "${APP_NAME}"
; Legacy: pre-HKLM installs wrote to HKCU; clean that up too.
DeleteRegValue HKCU "${AUTOSTART_REG_KEY}" "${APP_NAME}"
DetailPrint "Autostart not enabled by user"
${EndIf}
@@ -283,6 +291,8 @@ ExecWait `taskkill /im ${UI_APP_EXE}.exe /f`
; Remove autostart registry entry
DetailPrint "Removing autostart registry entry if exists..."
DeleteRegValue HKLM "${AUTOSTART_REG_KEY}" "${APP_NAME}"
; Legacy: pre-HKLM installs wrote to HKCU; clean that up too.
DeleteRegValue HKCU "${AUTOSTART_REG_KEY}" "${APP_NAME}"
; Handle data deletion based on checkbox
@@ -321,6 +331,7 @@ DetailPrint "Removing registry keys..."
DeleteRegKey ${REG_ROOT} "${REG_APP_PATH}"
DeleteRegKey ${REG_ROOT} "${UNINSTALL_PATH}"
DeleteRegKey ${REG_ROOT} "${UI_REG_APP_PATH}"
DeleteRegKey HKCU "Software\Classes\AppUserModelId\${APP_NAME}"
DetailPrint "Removing application directory from PATH..."
EnVar::SetHKLM

View File

@@ -333,6 +333,10 @@ func (c *ConnectClient) run(mobileDependency MobileDependency, runningChan chan
c.statusRecorder.MarkSignalConnected()
relayURLs, token := parseRelayInfo(loginResp)
if override, ok := peer.OverrideRelayURLs(); ok {
log.Infof("overriding relay URLs from %s: %v", peer.EnvKeyNBHomeRelayServers, override)
relayURLs = override
}
peerConfig := loginResp.GetPeerConfig()
engineConfig, err := createEngineConfig(myPrivateKey, c.config, peerConfig, logPath)

View File

@@ -3,10 +3,12 @@ package debug
import (
"context"
"errors"
"net"
"net/http"
"os"
"path/filepath"
"testing"
"time"
"github.com/stretchr/testify/require"
@@ -19,8 +21,10 @@ func TestUpload(t *testing.T) {
t.Skip("Skipping upload test on docker ci")
}
testDir := t.TempDir()
testURL := "http://localhost:8080"
addr := reserveLoopbackPort(t)
testURL := "http://" + addr
t.Setenv("SERVER_URL", testURL)
t.Setenv("SERVER_ADDRESS", addr)
t.Setenv("STORE_DIR", testDir)
srv := server.NewServer()
go func() {
@@ -33,6 +37,7 @@ func TestUpload(t *testing.T) {
t.Errorf("Failed to stop server: %v", err)
}
})
waitForServer(t, addr)
file := filepath.Join(t.TempDir(), "tmpfile")
fileContent := []byte("test file content")
@@ -47,3 +52,30 @@ func TestUpload(t *testing.T) {
require.NoError(t, err)
require.Equal(t, fileContent, createdFileContent)
}
// reserveLoopbackPort binds an ephemeral port on loopback to learn a free
// address, then releases it so the server under test can rebind. The close/
// rebind window is racy in theory; on loopback with a kernel-assigned port
// it's essentially never contended in practice.
func reserveLoopbackPort(t *testing.T) string {
	t.Helper()
	l, err := net.Listen("tcp", "127.0.0.1:0")
	require.NoError(t, err)
	addr := l.Addr().String()
	require.NoError(t, l.Close())
	return addr
}

func waitForServer(t *testing.T, addr string) {
	t.Helper()
	deadline := time.Now().Add(5 * time.Second)
	for time.Now().Before(deadline) {
		c, err := net.DialTimeout("tcp", addr, 100*time.Millisecond)
		if err == nil {
			_ = c.Close()
			return
		}
		time.Sleep(20 * time.Millisecond)
	}
	t.Fatalf("server did not start listening on %s in time", addr)
}

View File

@@ -13,6 +13,7 @@ import (
const (
defaultResolvConfPath = "/etc/resolv.conf"
nsswitchConfPath = "/etc/nsswitch.conf"
)
type resolvConf struct {

View File

@@ -1,7 +1,10 @@
package dns
import (
"context"
"fmt"
"math"
"net"
"slices"
"strconv"
"strings"
@@ -192,6 +195,12 @@ func (c *HandlerChain) logHandlers() {
}
func (c *HandlerChain) ServeDNS(w dns.ResponseWriter, r *dns.Msg) {
c.dispatch(w, r, math.MaxInt)
}
// dispatch routes a DNS request through the chain, skipping handlers with
// priority > maxPriority. Shared by ServeDNS and ResolveInternal.
func (c *HandlerChain) dispatch(w dns.ResponseWriter, r *dns.Msg, maxPriority int) {
if len(r.Question) == 0 {
return
}
@@ -216,6 +225,9 @@ func (c *HandlerChain) ServeDNS(w dns.ResponseWriter, r *dns.Msg) {
// Try handlers in priority order
for _, entry := range handlers {
if entry.Priority > maxPriority {
continue
}
if !c.isHandlerMatch(qname, entry) {
continue
}
@@ -273,6 +285,55 @@ func (c *HandlerChain) logResponse(logger *log.Entry, cw *ResponseWriterChain, q
cw.response.Len(), meta, time.Since(startTime))
}
// ResolveInternal runs an in-process DNS query against the chain, skipping any
// handler with priority > maxPriority. Used by internal callers (e.g. the mgmt
// cache refresher) that must bypass themselves to avoid loops. Honors ctx
// cancellation; on ctx.Done the dispatch goroutine is left to drain on its own
// (bounded by the invoked handler's internal timeout).
func (c *HandlerChain) ResolveInternal(ctx context.Context, r *dns.Msg, maxPriority int) (*dns.Msg, error) {
	if len(r.Question) == 0 {
		return nil, fmt.Errorf("empty question")
	}

	base := &internalResponseWriter{}
	done := make(chan struct{})
	go func() {
		c.dispatch(base, r, maxPriority)
		close(done)
	}()

	select {
	case <-done:
	case <-ctx.Done():
		// Prefer a completed response if dispatch finished concurrently with cancellation.
		select {
		case <-done:
		default:
			return nil, fmt.Errorf("resolve %s: %w", strings.ToLower(r.Question[0].Name), ctx.Err())
		}
	}

	if base.response == nil || base.response.Rcode == dns.RcodeRefused {
		return nil, fmt.Errorf("no handler resolved %s at priority ≤ %d",
			strings.ToLower(r.Question[0].Name), maxPriority)
	}
	return base.response, nil
}

// HasRootHandlerAtOrBelow reports whether any "." handler is registered at
// priority ≤ maxPriority.
func (c *HandlerChain) HasRootHandlerAtOrBelow(maxPriority int) bool {
	c.mu.RLock()
	defer c.mu.RUnlock()
	for _, h := range c.handlers {
		if h.Pattern == "." && h.Priority <= maxPriority {
			return true
		}
	}
	return false
}
func (c *HandlerChain) isHandlerMatch(qname string, entry HandlerEntry) bool {
switch {
case entry.Pattern == ".":
@@ -291,3 +352,36 @@ func (c *HandlerChain) isHandlerMatch(qname string, entry HandlerEntry) bool {
}
}
}
// internalResponseWriter captures a dns.Msg for in-process chain queries.
type internalResponseWriter struct {
	response *dns.Msg
}

func (w *internalResponseWriter) WriteMsg(m *dns.Msg) error { w.response = m; return nil }
func (w *internalResponseWriter) LocalAddr() net.Addr       { return nil }
func (w *internalResponseWriter) RemoteAddr() net.Addr      { return nil }

// Write unpacks raw DNS bytes so handlers that call Write instead of WriteMsg
// still surface their answer to ResolveInternal.
func (w *internalResponseWriter) Write(p []byte) (int, error) {
	msg := new(dns.Msg)
	if err := msg.Unpack(p); err != nil {
		return 0, err
	}
	w.response = msg
	return len(p), nil
}

func (w *internalResponseWriter) Close() error      { return nil }
func (w *internalResponseWriter) TsigStatus() error { return nil }

// TsigTimersOnly is part of dns.ResponseWriter.
func (w *internalResponseWriter) TsigTimersOnly(bool) {
	// no-op: in-process queries carry no TSIG state.
}

// Hijack is part of dns.ResponseWriter.
func (w *internalResponseWriter) Hijack() {
	// no-op: in-process queries have no underlying connection to hand off.
}

View File

@@ -1,11 +1,15 @@
package dns_test
import (
"context"
"net"
"testing"
"time"
"github.com/miekg/dns"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/mock"
"github.com/stretchr/testify/require"
nbdns "github.com/netbirdio/netbird/client/internal/dns"
"github.com/netbirdio/netbird/client/internal/dns/test"
@@ -1042,3 +1046,163 @@ func TestHandlerChain_AddRemoveRoundtrip(t *testing.T) {
})
}
}
// answeringHandler writes a fixed A record to ack the query. Used to verify
// which handler ResolveInternal dispatches to.
type answeringHandler struct {
	name string
	ip   string
}

func (h *answeringHandler) ServeDNS(w dns.ResponseWriter, r *dns.Msg) {
	resp := &dns.Msg{}
	resp.SetReply(r)
	resp.Answer = []dns.RR{&dns.A{
		Hdr: dns.RR_Header{Name: r.Question[0].Name, Rrtype: dns.TypeA, Class: dns.ClassINET, Ttl: 60},
		A:   net.ParseIP(h.ip).To4(),
	}}
	_ = w.WriteMsg(resp)
}

func (h *answeringHandler) String() string { return h.name }

func TestHandlerChain_ResolveInternal_SkipsAboveMaxPriority(t *testing.T) {
	chain := nbdns.NewHandlerChain()
	high := &answeringHandler{name: "high", ip: "10.0.0.1"}
	low := &answeringHandler{name: "low", ip: "10.0.0.2"}
	chain.AddHandler("example.com.", high, nbdns.PriorityMgmtCache)
	chain.AddHandler("example.com.", low, nbdns.PriorityUpstream)

	r := new(dns.Msg)
	r.SetQuestion("example.com.", dns.TypeA)

	resp, err := chain.ResolveInternal(context.Background(), r, nbdns.PriorityUpstream)
	assert.NoError(t, err)
	assert.NotNil(t, resp)
	assert.Equal(t, 1, len(resp.Answer))
	a, ok := resp.Answer[0].(*dns.A)
	assert.True(t, ok)
	assert.Equal(t, "10.0.0.2", a.A.String(), "should skip mgmtCache handler and resolve via upstream")
}

func TestHandlerChain_ResolveInternal_ErrorWhenNoMatch(t *testing.T) {
	chain := nbdns.NewHandlerChain()
	high := &answeringHandler{name: "high", ip: "10.0.0.1"}
	chain.AddHandler("example.com.", high, nbdns.PriorityMgmtCache)

	r := new(dns.Msg)
	r.SetQuestion("example.com.", dns.TypeA)

	_, err := chain.ResolveInternal(context.Background(), r, nbdns.PriorityUpstream)
	assert.Error(t, err, "no handler at or below maxPriority should error")
}

// rawWriteHandler packs a response and calls ResponseWriter.Write directly
// (instead of WriteMsg), exercising the internalResponseWriter.Write path.
type rawWriteHandler struct {
	ip string
}

func (h *rawWriteHandler) ServeDNS(w dns.ResponseWriter, r *dns.Msg) {
	resp := &dns.Msg{}
	resp.SetReply(r)
	resp.Answer = []dns.RR{&dns.A{
		Hdr: dns.RR_Header{Name: r.Question[0].Name, Rrtype: dns.TypeA, Class: dns.ClassINET, Ttl: 60},
		A:   net.ParseIP(h.ip).To4(),
	}}
	packed, err := resp.Pack()
	if err != nil {
		return
	}
	_, _ = w.Write(packed)
}

func TestHandlerChain_ResolveInternal_CapturesRawWrite(t *testing.T) {
	chain := nbdns.NewHandlerChain()
	chain.AddHandler("example.com.", &rawWriteHandler{ip: "10.0.0.3"}, nbdns.PriorityUpstream)

	r := new(dns.Msg)
	r.SetQuestion("example.com.", dns.TypeA)

	resp, err := chain.ResolveInternal(context.Background(), r, nbdns.PriorityUpstream)
	assert.NoError(t, err)
	require.NotNil(t, resp)
	require.Len(t, resp.Answer, 1)
	a, ok := resp.Answer[0].(*dns.A)
	require.True(t, ok)
	assert.Equal(t, "10.0.0.3", a.A.String(), "handlers calling Write(packed) must still surface their answer")
}

func TestHandlerChain_ResolveInternal_EmptyQuestion(t *testing.T) {
	chain := nbdns.NewHandlerChain()
	_, err := chain.ResolveInternal(context.Background(), new(dns.Msg), nbdns.PriorityUpstream)
	assert.Error(t, err)
}

// hangingHandler blocks indefinitely until closed, simulating a wedged upstream.
type hangingHandler struct {
	block chan struct{}
}

func (h *hangingHandler) ServeDNS(w dns.ResponseWriter, r *dns.Msg) {
	<-h.block
	resp := &dns.Msg{}
	resp.SetReply(r)
	_ = w.WriteMsg(resp)
}

func (h *hangingHandler) String() string { return "hangingHandler" }

func TestHandlerChain_ResolveInternal_HonorsContextTimeout(t *testing.T) {
	chain := nbdns.NewHandlerChain()
	h := &hangingHandler{block: make(chan struct{})}
	defer close(h.block)
	chain.AddHandler("example.com.", h, nbdns.PriorityUpstream)

	r := new(dns.Msg)
	r.SetQuestion("example.com.", dns.TypeA)

	ctx, cancel := context.WithTimeout(context.Background(), 100*time.Millisecond)
	defer cancel()

	start := time.Now()
	_, err := chain.ResolveInternal(ctx, r, nbdns.PriorityUpstream)
	elapsed := time.Since(start)

	assert.Error(t, err)
	assert.ErrorIs(t, err, context.DeadlineExceeded)
	assert.Less(t, elapsed, 500*time.Millisecond, "ResolveInternal must return shortly after ctx deadline")
}

func TestHandlerChain_HasRootHandlerAtOrBelow(t *testing.T) {
	chain := nbdns.NewHandlerChain()
	h := &answeringHandler{name: "h", ip: "10.0.0.1"}

	assert.False(t, chain.HasRootHandlerAtOrBelow(nbdns.PriorityUpstream), "empty chain")

	chain.AddHandler("example.com.", h, nbdns.PriorityUpstream)
	assert.False(t, chain.HasRootHandlerAtOrBelow(nbdns.PriorityUpstream), "non-root handler does not count")

	chain.AddHandler(".", h, nbdns.PriorityMgmtCache)
	assert.False(t, chain.HasRootHandlerAtOrBelow(nbdns.PriorityUpstream), "root handler above threshold excluded")

	chain.AddHandler(".", h, nbdns.PriorityDefault)
	assert.True(t, chain.HasRootHandlerAtOrBelow(nbdns.PriorityUpstream), "root handler at PriorityDefault included")

	chain.RemoveHandler(".", nbdns.PriorityDefault)
	assert.False(t, chain.HasRootHandlerAtOrBelow(nbdns.PriorityUpstream))

	// Primary nsgroup case: root handler lands at PriorityUpstream.
	chain.AddHandler(".", h, nbdns.PriorityUpstream)
	assert.True(t, chain.HasRootHandlerAtOrBelow(nbdns.PriorityUpstream), "root at PriorityUpstream included")
	chain.RemoveHandler(".", nbdns.PriorityUpstream)

	// Fallback case: original /etc/resolv.conf entries land at PriorityFallback.
	chain.AddHandler(".", h, nbdns.PriorityFallback)
	assert.True(t, chain.HasRootHandlerAtOrBelow(nbdns.PriorityUpstream), "root at PriorityFallback included")
	chain.RemoveHandler(".", nbdns.PriorityFallback)
	assert.False(t, chain.HasRootHandlerAtOrBelow(nbdns.PriorityUpstream))
}

View File

@@ -46,12 +46,12 @@ type restoreHostManager interface {
}
func newHostManager(wgInterface string) (hostManager, error) {
osManager, err := getOSDNSManagerType()
osManager, reason, err := getOSDNSManagerType()
if err != nil {
return nil, fmt.Errorf("get os dns manager type: %w", err)
}
log.Infof("System DNS manager discovered: %s", osManager)
log.Infof("System DNS manager discovered: %s (%s)", osManager, reason)
mgr, err := newHostManagerFromType(wgInterface, osManager)
// need to explicitly return nil mgr on error to avoid returning a non-nil interface containing a nil value
if err != nil {
@@ -74,17 +74,49 @@ func newHostManagerFromType(wgInterface string, osManager osManagerType) (restor
}
}
func getOSDNSManagerType() (osManagerType, error) {
func getOSDNSManagerType() (osManagerType, string, error) {
resolved := isSystemdResolvedRunning()
nss := isLibnssResolveUsed()
stub := checkStub()
// Prefer systemd-resolved whenever it owns libc resolution, regardless of
// who wrote /etc/resolv.conf. File-mode rewrites do not affect lookups
// that go through nss-resolve, and in foreign mode they can loop back
// through resolved as an upstream.
if resolved && (nss || stub) {
return systemdManager, fmt.Sprintf("systemd-resolved active (nss-resolve=%t, stub=%t)", nss, stub), nil
}
mgr, reason, rejected, err := scanResolvConfHeader()
if err != nil {
return 0, "", err
}
if reason != "" {
return mgr, reason, nil
}
fallback := fmt.Sprintf("no manager matched (resolved=%t, nss-resolve=%t, stub=%t)", resolved, nss, stub)
if len(rejected) > 0 {
fallback += "; rejected: " + strings.Join(rejected, ", ")
}
return fileManager, fallback, nil
}
// scanResolvConfHeader walks /etc/resolv.conf header comments and returns the
// matching manager. If reason is empty the caller should pick file mode and
// use rejected for diagnostics.
func scanResolvConfHeader() (osManagerType, string, []string, error) {
file, err := os.Open(defaultResolvConfPath)
if err != nil {
return 0, fmt.Errorf("unable to open %s for checking owner, got error: %w", defaultResolvConfPath, err)
return 0, "", nil, fmt.Errorf("unable to open %s for checking owner, got error: %w", defaultResolvConfPath, err)
}
defer func() {
if err := file.Close(); err != nil {
log.Errorf("close file %s: %s", defaultResolvConfPath, err)
if cerr := file.Close(); cerr != nil {
log.Errorf("close file %s: %s", defaultResolvConfPath, cerr)
}
}()
var rejected []string
scanner := bufio.NewScanner(file)
for scanner.Scan() {
text := scanner.Text()
@@ -92,41 +124,48 @@ func getOSDNSManagerType() (osManagerType, error) {
continue
}
if text[0] != '#' {
return fileManager, nil
break
}
if strings.Contains(text, fileGeneratedResolvConfContentHeader) {
return netbirdManager, nil
}
if strings.Contains(text, "NetworkManager") && isDbusListenerRunning(networkManagerDest, networkManagerDbusObjectNode) && isNetworkManagerSupported() {
return networkManager, nil
}
if strings.Contains(text, "systemd-resolved") && isSystemdResolvedRunning() {
if checkStub() {
return systemdManager, nil
} else {
return fileManager, nil
}
}
if strings.Contains(text, "resolvconf") {
if isSystemdResolveConfMode() {
return systemdManager, nil
}
return resolvConfManager, nil
if mgr, reason, rej := matchResolvConfHeader(text); reason != "" {
return mgr, reason, nil, nil
} else if rej != "" {
rejected = append(rejected, rej)
}
}
if err := scanner.Err(); err != nil && err != io.EOF {
return 0, fmt.Errorf("scan: %w", err)
return 0, "", nil, fmt.Errorf("scan: %w", err)
}
return fileManager, nil
return 0, "", rejected, nil
}
// checkStub checks if the stub resolver is disabled in systemd-resolved. If it is disabled, we fall back to file manager.
// matchResolvConfHeader inspects a single comment line. Returns either a
// definitive (manager, reason) or a non-empty rejected diagnostic.
func matchResolvConfHeader(text string) (osManagerType, string, string) {
if strings.Contains(text, fileGeneratedResolvConfContentHeader) {
return netbirdManager, "netbird-managed resolv.conf header detected", ""
}
if strings.Contains(text, "NetworkManager") {
if isDbusListenerRunning(networkManagerDest, networkManagerDbusObjectNode) && isNetworkManagerSupported() {
return networkManager, "NetworkManager header + supported version on dbus", ""
}
return 0, "", "NetworkManager header (no dbus or unsupported version)"
}
if strings.Contains(text, "resolvconf") {
if isSystemdResolveConfMode() {
return systemdManager, "resolvconf header in systemd-resolved compatibility mode", ""
}
return resolvConfManager, "resolvconf header detected", ""
}
return 0, "", ""
}
// checkStub reports whether systemd-resolved's stub (127.0.0.53) is listed
// in /etc/resolv.conf. On parse failure we assume it is, to avoid dropping
// into file mode while resolved is active.
func checkStub() bool {
rConf, err := parseDefaultResolvConf()
if err != nil {
log.Warnf("failed to parse resolv conf: %s", err)
log.Warnf("failed to parse resolv conf, assuming stub is active: %s", err)
return true
}
@@ -139,3 +178,36 @@ func checkStub() bool {
return false
}
// isLibnssResolveUsed reports whether nss-resolve is listed before dns on
// the hosts: line of /etc/nsswitch.conf. When it is, libc lookups are
// delegated to systemd-resolved regardless of /etc/resolv.conf.
func isLibnssResolveUsed() bool {
	bs, err := os.ReadFile(nsswitchConfPath)
	if err != nil {
		log.Debugf("read %s: %v", nsswitchConfPath, err)
		return false
	}
	return parseNsswitchResolveAhead(bs)
}

func parseNsswitchResolveAhead(data []byte) bool {
	for _, line := range strings.Split(string(data), "\n") {
		if i := strings.IndexByte(line, '#'); i >= 0 {
			line = line[:i]
		}
		fields := strings.Fields(line)
		if len(fields) < 2 || fields[0] != "hosts:" {
			continue
		}
		for _, module := range fields[1:] {
			switch module {
			case "dns":
				return false
			case "resolve":
				return true
			}
		}
	}
	return false
}

View File

@@ -0,0 +1,76 @@
//go:build (linux && !android) || freebsd

package dns

import "testing"

func TestParseNsswitchResolveAhead(t *testing.T) {
	tests := []struct {
		name string
		in   string
		want bool
	}{
		{
			name: "resolve before dns with action token",
			in:   "hosts: mymachines resolve [!UNAVAIL=return] files myhostname dns\n",
			want: true,
		},
		{
			name: "dns before resolve",
			in:   "hosts: files mdns4_minimal [NOTFOUND=return] dns resolve\n",
			want: false,
		},
		{
			name: "debian default with only dns",
			in:   "hosts: files mdns4_minimal [NOTFOUND=return] dns mymachines\n",
			want: false,
		},
		{
			name: "neither resolve nor dns",
			in:   "hosts: files myhostname\n",
			want: false,
		},
		{
			name: "no hosts line",
			in:   "passwd: files systemd\ngroup: files systemd\n",
			want: false,
		},
		{
			name: "empty",
			in:   "",
			want: false,
		},
		{
			name: "comments and blank lines ignored",
			in:   "# comment\n\n# another\nhosts: resolve dns\n",
			want: true,
		},
		{
			name: "trailing inline comment",
			in:   "hosts: resolve [!UNAVAIL=return] dns # fallback\n",
			want: true,
		},
		{
			name: "hosts token must be the first field",
			in:   " hosts: resolve dns\n",
			want: true,
		},
		{
			name: "other db line mentioning resolve is ignored",
			in:   "networks: resolve\nhosts: dns\n",
			want: false,
		},
		{
			name: "only resolve, no dns",
			in:   "hosts: files resolve\n",
			want: true,
		},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			if got := parseNsswitchResolveAhead([]byte(tt.in)); got != tt.want {
				t.Errorf("parseNsswitchResolveAhead() = %v, want %v", got, tt.want)
			}
		})
	}
}

View File

@@ -2,40 +2,83 @@ package mgmt
import (
"context"
"errors"
"fmt"
"net"
"net/netip"
"net/url"
"os"
"slices"
"strings"
"sync"
"sync/atomic"
"time"
"github.com/miekg/dns"
log "github.com/sirupsen/logrus"
"golang.org/x/sync/singleflight"
dnsconfig "github.com/netbirdio/netbird/client/internal/dns/config"
"github.com/netbirdio/netbird/client/internal/dns/resutil"
"github.com/netbirdio/netbird/shared/management/domain"
)
const dnsTimeout = 5 * time.Second
const (
dnsTimeout = 5 * time.Second
defaultTTL = 300 * time.Second
refreshBackoff = 30 * time.Second
// Resolver caches critical NetBird infrastructure domains
// envMgmtCacheTTL overrides defaultTTL for integration/dev testing.
envMgmtCacheTTL = "NB_MGMT_CACHE_TTL"
)
// ChainResolver lets the cache refresh stale entries through the DNS handler
// chain instead of net.DefaultResolver, avoiding loopback when NetBird is the
// system resolver.
type ChainResolver interface {
ResolveInternal(ctx context.Context, msg *dns.Msg, maxPriority int) (*dns.Msg, error)
HasRootHandlerAtOrBelow(maxPriority int) bool
}
// cachedRecord holds DNS records plus timestamps used for TTL refresh.
// records and cachedAt are set at construction and treated as immutable;
// lastFailedRefresh and consecFailures are mutable and must be accessed under
// Resolver.mutex.
type cachedRecord struct {
records []dns.RR
cachedAt time.Time
lastFailedRefresh time.Time
consecFailures int
}
// Resolver caches critical NetBird infrastructure domains.
// records, refreshing, mgmtDomain and serverDomains are all guarded by mutex.
type Resolver struct {
records map[dns.Question][]dns.RR
records map[dns.Question]*cachedRecord
mgmtDomain *domain.Domain
serverDomains *dnsconfig.ServerDomains
mutex sync.RWMutex
}
type ipsResponse struct {
ips []netip.Addr
err error
chain ChainResolver
chainMaxPriority int
refreshGroup singleflight.Group
// refreshing tracks questions whose refresh is running via the OS
// fallback path. A ServeDNS hit for a question in this map indicates
// the OS resolver routed the recursive query back to us (loop). Only
// the OS path arms this so chain-path refreshes don't produce false
// positives. The atomic bool is CAS-flipped once per refresh to
// throttle the warning log.
refreshing map[dns.Question]*atomic.Bool
cacheTTL time.Duration
}
// NewResolver creates a new management domains cache resolver.
func NewResolver() *Resolver {
return &Resolver{
records: make(map[dns.Question][]dns.RR),
records: make(map[dns.Question]*cachedRecord),
refreshing: make(map[dns.Question]*atomic.Bool),
cacheTTL: resolveCacheTTL(),
}
}
@@ -44,7 +87,19 @@ func (m *Resolver) String() string {
return "MgmtCacheResolver"
}
// ServeDNS implements dns.Handler interface.
// SetChainResolver wires the handler chain used to refresh stale cache entries.
// maxPriority caps which handlers may answer refresh queries (typically
// PriorityUpstream, so upstream/default/fallback handlers are consulted and
// mgmt/route/local handlers are skipped).
func (m *Resolver) SetChainResolver(chain ChainResolver, maxPriority int) {
m.mutex.Lock()
m.chain = chain
m.chainMaxPriority = maxPriority
m.mutex.Unlock()
}
// ServeDNS serves cached A/AAAA records. Stale entries are returned
// immediately and refreshed asynchronously (stale-while-revalidate).
func (m *Resolver) ServeDNS(w dns.ResponseWriter, r *dns.Msg) {
if len(r.Question) == 0 {
m.continueToNext(w, r)
@@ -60,7 +115,14 @@ func (m *Resolver) ServeDNS(w dns.ResponseWriter, r *dns.Msg) {
}
m.mutex.RLock()
records, found := m.records[question]
cached, found := m.records[question]
inflight := m.refreshing[question]
var shouldRefresh bool
if found {
stale := time.Since(cached.cachedAt) > m.cacheTTL
inBackoff := !cached.lastFailedRefresh.IsZero() && time.Since(cached.lastFailedRefresh) < refreshBackoff
shouldRefresh = stale && !inBackoff
}
m.mutex.RUnlock()
if !found {
@@ -68,12 +130,23 @@ func (m *Resolver) ServeDNS(w dns.ResponseWriter, r *dns.Msg) {
return
}
if inflight != nil && inflight.CompareAndSwap(false, true) {
log.Warnf("mgmt cache: possible resolver loop for domain=%s: served stale while an OS-fallback refresh was inflight (if NetBird is the system resolver, the OS-path predicate is wrong)",
question.Name)
}
// Skip scheduling a refresh goroutine if one is already inflight for
// this question; singleflight would dedup anyway but skipping avoids
// a parked goroutine per stale hit under bursty load.
if shouldRefresh && inflight == nil {
m.scheduleRefresh(question, cached)
}
resp := &dns.Msg{}
resp.SetReply(r)
resp.Authoritative = false
resp.RecursionAvailable = true
resp.Answer = append(resp.Answer, records...)
resp.Answer = cloneRecordsWithTTL(cached.records, m.responseTTL(cached.cachedAt))
log.Debugf("serving %d cached records for domain=%s", len(resp.Answer), question.Name)
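Pieced together across the hunks above, the cache path reduces to the following decision sketch; this is a simplified restatement with locking and the loop-flag warning elided, not the literal code:

	// Simplified restatement of the ServeDNS cache path (locking elided).
	cached, found := m.records[question]
	if !found {
		m.continueToNext(w, r) // not our domain, hand off to the next handler
		return
	}
	stale := time.Since(cached.cachedAt) > m.cacheTTL
	inBackoff := !cached.lastFailedRefresh.IsZero() &&
		time.Since(cached.lastFailedRefresh) < refreshBackoff
	if stale && !inBackoff && m.refreshing[question] == nil {
		m.scheduleRefresh(question, cached) // async; the stale answer is still served below
	}
	// Answer from cache either way: stale-while-revalidate.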
@@ -98,101 +171,260 @@ func (m *Resolver) continueToNext(w dns.ResponseWriter, r *dns.Msg) {
}
}
// AddDomain manually adds a domain to cache by resolving it.
// AddDomain resolves a domain and stores its A/AAAA records in the cache.
// A family that resolves NODATA (nil err, zero records) evicts any stale
// entry for that qtype.
func (m *Resolver) AddDomain(ctx context.Context, d domain.Domain) error {
dnsName := strings.ToLower(dns.Fqdn(d.PunycodeString()))
ctx, cancel := context.WithTimeout(ctx, dnsTimeout)
defer cancel()
ips, err := lookupIPWithExtraTimeout(ctx, d)
if err != nil {
return err
aRecords, aaaaRecords, errA, errAAAA := m.lookupBoth(ctx, d, dnsName)
if errA != nil && errAAAA != nil {
return fmt.Errorf("resolve %s: %w", d.SafeString(), errors.Join(errA, errAAAA))
}
var aRecords, aaaaRecords []dns.RR
for _, ip := range ips {
if ip.Is4() {
rr := &dns.A{
Hdr: dns.RR_Header{
Name: dnsName,
Rrtype: dns.TypeA,
Class: dns.ClassINET,
Ttl: 300,
},
A: ip.AsSlice(),
}
aRecords = append(aRecords, rr)
} else if ip.Is6() {
rr := &dns.AAAA{
Hdr: dns.RR_Header{
Name: dnsName,
Rrtype: dns.TypeAAAA,
Class: dns.ClassINET,
Ttl: 300,
},
AAAA: ip.AsSlice(),
}
aaaaRecords = append(aaaaRecords, rr)
if len(aRecords) == 0 && len(aaaaRecords) == 0 {
if err := errors.Join(errA, errAAAA); err != nil {
return fmt.Errorf("resolve %s: no A/AAAA records: %w", d.SafeString(), err)
}
return fmt.Errorf("resolve %s: no A/AAAA records", d.SafeString())
}
now := time.Now()
m.mutex.Lock()
defer m.mutex.Unlock()
if len(aRecords) > 0 {
aQuestion := dns.Question{
Name: dnsName,
Qtype: dns.TypeA,
Qclass: dns.ClassINET,
}
m.records[aQuestion] = aRecords
}
m.applyFamilyRecords(dnsName, dns.TypeA, aRecords, errA, now)
m.applyFamilyRecords(dnsName, dns.TypeAAAA, aaaaRecords, errAAAA, now)
if len(aaaaRecords) > 0 {
aaaaQuestion := dns.Question{
Name: dnsName,
Qtype: dns.TypeAAAA,
Qclass: dns.ClassINET,
}
m.records[aaaaQuestion] = aaaaRecords
}
m.mutex.Unlock()
log.Debugf("added domain=%s with %d A records and %d AAAA records",
log.Debugf("added/updated domain=%s with %d A records and %d AAAA records",
d.SafeString(), len(aRecords), len(aaaaRecords))
return nil
}
func lookupIPWithExtraTimeout(ctx context.Context, d domain.Domain) ([]netip.Addr, error) {
log.Infof("looking up IP for mgmt domain=%s", d.SafeString())
defer log.Infof("done looking up IP for mgmt domain=%s", d.SafeString())
resultChan := make(chan *ipsResponse, 1)
// applyFamilyRecords writes records, evicts on NODATA, leaves the cache
// untouched on error. Caller holds m.mutex.
func (m *Resolver) applyFamilyRecords(dnsName string, qtype uint16, records []dns.RR, err error, now time.Time) {
q := dns.Question{Name: dnsName, Qtype: qtype, Qclass: dns.ClassINET}
switch {
case len(records) > 0:
m.records[q] = &cachedRecord{records: records, cachedAt: now}
case err == nil:
delete(m.records, q)
}
}
go func() {
ips, err := net.DefaultResolver.LookupNetIP(ctx, "ip", d.PunycodeString())
resultChan <- &ipsResponse{
err: err,
ips: ips,
// scheduleRefresh kicks off an async refresh. DoChan spawns one goroutine per
// unique in-flight key; bursty stale hits share its channel. expected is the
// cachedRecord pointer observed by the caller; the refresh only mutates the
// cache if that pointer is still the one stored, so a stale in-flight refresh
// can't clobber a newer entry written by AddDomain or a competing refresh.
func (m *Resolver) scheduleRefresh(question dns.Question, expected *cachedRecord) {
key := question.Name + "|" + dns.TypeToString[question.Qtype]
_ = m.refreshGroup.DoChan(key, func() (any, error) {
return nil, m.refreshQuestion(question, expected)
})
}
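For readers new to golang.org/x/sync/singleflight, here is a minimal standalone sketch of the DoChan behavior scheduleRefresh leans on; the key and the sleep are invented for illustration, and the real code discards the returned channel:

	package main

	import (
		"fmt"
		"sync"
		"time"

		"golang.org/x/sync/singleflight"
	)

	func main() {
		var g singleflight.Group
		var wg sync.WaitGroup
		for i := 0; i < 5; i++ {
			wg.Add(1)
			go func() {
				defer wg.Done()
				// All five goroutines share one execution of fn; DoChan
				// returns immediately with a channel for the shared result.
				ch := g.DoChan("mgmt.example.com.|A", func() (any, error) {
					time.Sleep(50 * time.Millisecond) // stands in for the DNS lookup
					return "10.0.0.2", nil
				})
				res := <-ch
				fmt.Println(res.Val, res.Shared) // typically "10.0.0.2 true"
			}()
		}
		wg.Wait()
	}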
// refreshQuestion replaces the cached records on success, or marks the entry
// failed (arming the backoff) on failure. While this runs, ServeDNS can detect
// a resolver loop by spotting a query for this same question arriving on us.
// expected pins the cache entry observed at schedule time; mutations only apply
// if m.records[question] still points at it.
func (m *Resolver) refreshQuestion(question dns.Question, expected *cachedRecord) error {
ctx, cancel := context.WithTimeout(context.Background(), dnsTimeout)
defer cancel()
d, err := domain.FromString(strings.TrimSuffix(question.Name, "."))
if err != nil {
m.markRefreshFailed(question, expected)
return fmt.Errorf("parse domain: %w", err)
}
records, err := m.lookupRecords(ctx, d, question)
if err != nil {
fails := m.markRefreshFailed(question, expected)
logf := log.Warnf
if fails == 0 || fails > 1 {
logf = log.Debugf
}
}()
var resp *ipsResponse
select {
case <-time.After(dnsTimeout + time.Millisecond*500):
log.Warnf("timed out waiting for IP for mgmt domain=%s", d.SafeString())
return nil, fmt.Errorf("timed out waiting for ips to be available for domain %s", d.SafeString())
case <-ctx.Done():
return nil, ctx.Err()
case resp = <-resultChan:
logf("refresh mgmt cache domain=%s type=%s: %v (consecutive failures=%d)",
d.SafeString(), dns.TypeToString[question.Qtype], err, fails)
return err
}
if resp.err != nil {
return nil, fmt.Errorf("resolve domain %s: %w", d.SafeString(), resp.err)
// NOERROR/NODATA: family gone upstream, evict so we stop serving stale.
if len(records) == 0 {
m.mutex.Lock()
if m.records[question] == expected {
delete(m.records, question)
m.mutex.Unlock()
log.Infof("removed mgmt cache domain=%s type=%s: no records returned",
d.SafeString(), dns.TypeToString[question.Qtype])
return nil
}
m.mutex.Unlock()
log.Debugf("skipping refresh evict for domain=%s type=%s: entry changed during refresh",
d.SafeString(), dns.TypeToString[question.Qtype])
return nil
}
return resp.ips, nil
now := time.Now()
m.mutex.Lock()
if m.records[question] != expected {
m.mutex.Unlock()
log.Debugf("skipping refresh write for domain=%s type=%s: entry changed during refresh",
d.SafeString(), dns.TypeToString[question.Qtype])
return nil
}
m.records[question] = &cachedRecord{records: records, cachedAt: now}
m.mutex.Unlock()
log.Infof("refreshed mgmt cache domain=%s type=%s",
d.SafeString(), dns.TypeToString[question.Qtype])
return nil
}
func (m *Resolver) markRefreshing(question dns.Question) {
m.mutex.Lock()
m.refreshing[question] = &atomic.Bool{}
m.mutex.Unlock()
}
func (m *Resolver) clearRefreshing(question dns.Question) {
m.mutex.Lock()
delete(m.refreshing, question)
m.mutex.Unlock()
}
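The markRefreshing / ServeDNS / clearRefreshing interplay can be modeled in isolation; a runnable toy, under the assumption that one refresh is inflight while three queries for the same question arrive:

	package main

	import (
		"fmt"
		"sync/atomic"
	)

	func main() {
		// Stand-in for the value markRefreshing stores per question.
		inflight := &atomic.Bool{}
		for i := 0; i < 3; i++ {
			// Every ServeDNS hit during the refresh attempts the CAS; only
			// the first flips false -> true, so the warning is logged once.
			if inflight.CompareAndSwap(false, true) {
				fmt.Println("possible resolver loop (logged once per refresh)")
			}
		}
		// clearRefreshing would delete the flag, re-arming detection for
		// the next refresh cycle.
	}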
// markRefreshFailed arms the backoff and returns the new consecutive-failure
// count so callers can downgrade subsequent failure logs to debug.
func (m *Resolver) markRefreshFailed(question dns.Question, expected *cachedRecord) int {
m.mutex.Lock()
defer m.mutex.Unlock()
c, ok := m.records[question]
if !ok || c != expected {
return 0
}
c.lastFailedRefresh = time.Now()
c.consecFailures++
return c.consecFailures
}
// lookupBoth resolves A and AAAA via chain or OS. Per-family errors let
// callers tell records, NODATA (nil err, no records), and failure apart.
func (m *Resolver) lookupBoth(ctx context.Context, d domain.Domain, dnsName string) (aRecords, aaaaRecords []dns.RR, errA, errAAAA error) {
m.mutex.RLock()
chain := m.chain
maxPriority := m.chainMaxPriority
m.mutex.RUnlock()
if chain != nil && chain.HasRootHandlerAtOrBelow(maxPriority) {
aRecords, errA = m.lookupViaChain(ctx, chain, maxPriority, dnsName, dns.TypeA)
aaaaRecords, errAAAA = m.lookupViaChain(ctx, chain, maxPriority, dnsName, dns.TypeAAAA)
return
}
// TODO: drop once every supported OS registers a fallback resolver. Safe
// today: no root handler at priority ≤ PriorityUpstream means NetBird is
// not the system resolver, so net.DefaultResolver will not loop back.
aRecords, errA = m.osLookup(ctx, d, dnsName, dns.TypeA)
aaaaRecords, errAAAA = m.osLookup(ctx, d, dnsName, dns.TypeAAAA)
return
}
// lookupRecords resolves a single record type via chain or OS. The OS branch
// arms the loop detector for the duration of its call so that ServeDNS can
// spot the OS resolver routing the recursive query back to us.
func (m *Resolver) lookupRecords(ctx context.Context, d domain.Domain, q dns.Question) ([]dns.RR, error) {
m.mutex.RLock()
chain := m.chain
maxPriority := m.chainMaxPriority
m.mutex.RUnlock()
if chain != nil && chain.HasRootHandlerAtOrBelow(maxPriority) {
return m.lookupViaChain(ctx, chain, maxPriority, q.Name, q.Qtype)
}
// TODO: drop once every supported OS registers a fallback resolver.
m.markRefreshing(q)
defer m.clearRefreshing(q)
return m.osLookup(ctx, d, q.Name, q.Qtype)
}
// lookupViaChain resolves via the handler chain and rewrites each RR to use
// dnsName as owner and m.cacheTTL as TTL, so CNAME-backed domains don't cache
// target-owned records or upstream TTLs. NODATA returns (nil, nil).
func (m *Resolver) lookupViaChain(ctx context.Context, chain ChainResolver, maxPriority int, dnsName string, qtype uint16) ([]dns.RR, error) {
msg := &dns.Msg{}
msg.SetQuestion(dnsName, qtype)
msg.RecursionDesired = true
resp, err := chain.ResolveInternal(ctx, msg, maxPriority)
if err != nil {
return nil, fmt.Errorf("chain resolve: %w", err)
}
if resp == nil {
return nil, fmt.Errorf("chain resolve returned nil response")
}
if resp.Rcode != dns.RcodeSuccess {
return nil, fmt.Errorf("chain resolve rcode=%s", dns.RcodeToString[resp.Rcode])
}
ttl := uint32(m.cacheTTL.Seconds())
owners := cnameOwners(dnsName, resp.Answer)
var filtered []dns.RR
for _, rr := range resp.Answer {
h := rr.Header()
if h.Class != dns.ClassINET || h.Rrtype != qtype {
continue
}
if !owners[strings.ToLower(dns.Fqdn(h.Name))] {
continue
}
if cp := cloneIPRecord(rr, dnsName, ttl); cp != nil {
filtered = append(filtered, cp)
}
}
return filtered, nil
}
// osLookup resolves a single family via net.DefaultResolver using resutil,
// which disambiguates NODATA from NXDOMAIN and Unmaps v4-mapped-v6. NODATA
// returns (nil, nil).
func (m *Resolver) osLookup(ctx context.Context, d domain.Domain, dnsName string, qtype uint16) ([]dns.RR, error) {
network := resutil.NetworkForQtype(qtype)
if network == "" {
return nil, fmt.Errorf("unsupported qtype %s", dns.TypeToString[qtype])
}
log.Infof("looking up IP for mgmt domain=%s type=%s", d.SafeString(), dns.TypeToString[qtype])
defer log.Infof("done looking up IP for mgmt domain=%s type=%s", d.SafeString(), dns.TypeToString[qtype])
result := resutil.LookupIP(ctx, net.DefaultResolver, network, d.PunycodeString(), qtype)
if result.Rcode == dns.RcodeSuccess {
return resutil.IPsToRRs(dnsName, result.IPs, uint32(m.cacheTTL.Seconds())), nil
}
if result.Err != nil {
return nil, fmt.Errorf("resolve %s type=%s: %w", d.SafeString(), dns.TypeToString[qtype], result.Err)
}
return nil, fmt.Errorf("resolve %s type=%s: rcode=%s", d.SafeString(), dns.TypeToString[qtype], dns.RcodeToString[result.Rcode])
}
// responseTTL returns the remaining cache lifetime in seconds (rounded up),
// so downstream resolvers don't cache an answer for longer than we will.
func (m *Resolver) responseTTL(cachedAt time.Time) uint32 {
remaining := m.cacheTTL - time.Since(cachedAt)
if remaining <= 0 {
return 0
}
return uint32((remaining + time.Second - 1) / time.Second)
}
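As a worked example: with the default cacheTTL of 300s, an entry cached 295.2s ago has 4.8s remaining, so responseTTL returns 5 (the ceiling in whole seconds); an entry aged to or past the TTL returns 0.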
// PopulateFromConfig extracts and caches domains from the client configuration.
@@ -224,19 +456,12 @@ func (m *Resolver) RemoveDomain(d domain.Domain) error {
m.mutex.Lock()
defer m.mutex.Unlock()
aQuestion := dns.Question{
Name: dnsName,
Qtype: dns.TypeA,
Qclass: dns.ClassINET,
}
delete(m.records, aQuestion)
aaaaQuestion := dns.Question{
Name: dnsName,
Qtype: dns.TypeAAAA,
Qclass: dns.ClassINET,
}
delete(m.records, aaaaQuestion)
qA := dns.Question{Name: dnsName, Qtype: dns.TypeA, Qclass: dns.ClassINET}
qAAAA := dns.Question{Name: dnsName, Qtype: dns.TypeAAAA, Qclass: dns.ClassINET}
delete(m.records, qA)
delete(m.records, qAAAA)
delete(m.refreshing, qA)
delete(m.refreshing, qAAAA)
log.Debugf("removed domain=%s from cache", d.SafeString())
return nil
@@ -394,3 +619,73 @@ func (m *Resolver) extractDomainsFromServerDomains(serverDomains dnsconfig.Serve
return domains
}
// cloneIPRecord returns a deep copy of rr retargeted to owner with ttl.
// Non-A/AAAA records return nil.
func cloneIPRecord(rr dns.RR, owner string, ttl uint32) dns.RR {
switch r := rr.(type) {
case *dns.A:
cp := *r
cp.Hdr.Name = owner
cp.Hdr.Ttl = ttl
cp.A = slices.Clone(r.A)
return &cp
case *dns.AAAA:
cp := *r
cp.Hdr.Name = owner
cp.Hdr.Ttl = ttl
cp.AAAA = slices.Clone(r.AAAA)
return &cp
}
return nil
}
// cloneRecordsWithTTL clones A/AAAA records preserving their owner and
// stamping ttl so the response shares no memory with the cached slice.
func cloneRecordsWithTTL(records []dns.RR, ttl uint32) []dns.RR {
out := make([]dns.RR, 0, len(records))
for _, rr := range records {
if cp := cloneIPRecord(rr, rr.Header().Name, ttl); cp != nil {
out = append(out, cp)
}
}
return out
}
// cnameOwners returns dnsName plus every target reachable by following CNAMEs
// in answer, iterating until fixed point so out-of-order chains resolve.
func cnameOwners(dnsName string, answer []dns.RR) map[string]bool {
owners := map[string]bool{dnsName: true}
for {
added := false
for _, rr := range answer {
cname, ok := rr.(*dns.CNAME)
if !ok {
continue
}
name := strings.ToLower(dns.Fqdn(cname.Hdr.Name))
if !owners[name] {
continue
}
target := strings.ToLower(dns.Fqdn(cname.Target))
if !owners[target] {
owners[target] = true
added = true
}
}
if !added {
return owners
}
}
}
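To illustrate the fixed-point iteration with an out-of-order chain (hostnames invented):

	answer := []dns.RR{
		// Deliberately out of order: the second CNAME hop precedes the first.
		&dns.CNAME{Hdr: dns.RR_Header{Name: "b.example.com.", Rrtype: dns.TypeCNAME, Class: dns.ClassINET}, Target: "c.example.com."},
		&dns.CNAME{Hdr: dns.RR_Header{Name: "a.example.com.", Rrtype: dns.TypeCNAME, Class: dns.ClassINET}, Target: "b.example.com."},
	}
	owners := cnameOwners("a.example.com.", answer)
	// owners == {a.example.com., b.example.com., c.example.com.}: pass one
	// adds b. (a. is already an owner), pass two adds c., pass three adds
	// nothing and the fixed point is reached.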
// resolveCacheTTL reads the cache TTL override env var; invalid or empty
// values fall back to defaultTTL. Called once per Resolver from NewResolver.
func resolveCacheTTL() time.Duration {
if v := os.Getenv(envMgmtCacheTTL); v != "" {
if d, err := time.ParseDuration(v); err == nil && d > 0 {
return d
}
}
return defaultTTL
}

View File

@@ -0,0 +1,408 @@
package mgmt
import (
"context"
"errors"
"net"
"sync"
"sync/atomic"
"testing"
"time"
"github.com/miekg/dns"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"github.com/netbirdio/netbird/client/internal/dns/test"
"github.com/netbirdio/netbird/shared/management/domain"
)
type fakeChain struct {
mu sync.Mutex
calls map[string]int
answers map[string][]dns.RR
err error
hasRoot bool
onLookup func()
}
func newFakeChain() *fakeChain {
return &fakeChain{
calls: map[string]int{},
answers: map[string][]dns.RR{},
hasRoot: true,
}
}
func (f *fakeChain) HasRootHandlerAtOrBelow(maxPriority int) bool {
f.mu.Lock()
defer f.mu.Unlock()
return f.hasRoot
}
func (f *fakeChain) ResolveInternal(ctx context.Context, msg *dns.Msg, maxPriority int) (*dns.Msg, error) {
f.mu.Lock()
q := msg.Question[0]
key := q.Name + "|" + dns.TypeToString[q.Qtype]
f.calls[key]++
answers := f.answers[key]
err := f.err
onLookup := f.onLookup
f.mu.Unlock()
if onLookup != nil {
onLookup()
}
if err != nil {
return nil, err
}
resp := &dns.Msg{}
resp.SetReply(msg)
resp.Answer = answers
return resp, nil
}
func (f *fakeChain) setAnswer(name string, qtype uint16, ip string) {
f.mu.Lock()
defer f.mu.Unlock()
key := name + "|" + dns.TypeToString[qtype]
hdr := dns.RR_Header{Name: name, Rrtype: qtype, Class: dns.ClassINET, Ttl: 60}
switch qtype {
case dns.TypeA:
f.answers[key] = []dns.RR{&dns.A{Hdr: hdr, A: net.ParseIP(ip).To4()}}
case dns.TypeAAAA:
f.answers[key] = []dns.RR{&dns.AAAA{Hdr: hdr, AAAA: net.ParseIP(ip).To16()}}
}
}
func (f *fakeChain) callCount(name string, qtype uint16) int {
f.mu.Lock()
defer f.mu.Unlock()
return f.calls[name+"|"+dns.TypeToString[qtype]]
}
// waitFor polls the predicate until it returns true or the deadline passes.
func waitFor(t *testing.T, d time.Duration, fn func() bool) {
t.Helper()
deadline := time.Now().Add(d)
for time.Now().Before(deadline) {
if fn() {
return
}
time.Sleep(5 * time.Millisecond)
}
t.Fatalf("condition not met within %s", d)
}
func queryA(t *testing.T, r *Resolver, name string) *dns.Msg {
t.Helper()
msg := new(dns.Msg)
msg.SetQuestion(name, dns.TypeA)
w := &test.MockResponseWriter{}
r.ServeDNS(w, msg)
return w.GetLastResponse()
}
func firstA(t *testing.T, resp *dns.Msg) string {
t.Helper()
require.NotNil(t, resp)
require.Greater(t, len(resp.Answer), 0, "expected at least one answer")
a, ok := resp.Answer[0].(*dns.A)
require.True(t, ok, "expected A record")
return a.A.String()
}
func TestResolver_CacheTTLGatesRefresh(t *testing.T) {
// Same cached entry age, different cacheTTL values: the shorter TTL must
// trigger a background refresh, the longer one must not. Proves that the
// per-Resolver cacheTTL field actually drives the stale decision.
cachedAt := time.Now().Add(-100 * time.Millisecond)
newRec := func() *cachedRecord {
return &cachedRecord{
records: []dns.RR{&dns.A{
Hdr: dns.RR_Header{Name: "mgmt.example.com.", Rrtype: dns.TypeA, Class: dns.ClassINET, Ttl: 60},
A: net.ParseIP("10.0.0.1").To4(),
}},
cachedAt: cachedAt,
}
}
q := dns.Question{Name: "mgmt.example.com.", Qtype: dns.TypeA, Qclass: dns.ClassINET}
t.Run("short TTL treats entry as stale and refreshes", func(t *testing.T) {
r := NewResolver()
r.cacheTTL = 10 * time.Millisecond
chain := newFakeChain()
chain.setAnswer(q.Name, dns.TypeA, "10.0.0.2")
r.SetChainResolver(chain, 50)
r.records[q] = newRec()
resp := queryA(t, r, q.Name)
assert.Equal(t, "10.0.0.1", firstA(t, resp), "stale entry must be served while refresh runs")
waitFor(t, time.Second, func() bool {
return chain.callCount(q.Name, dns.TypeA) >= 1
})
})
t.Run("long TTL keeps entry fresh and skips refresh", func(t *testing.T) {
r := NewResolver()
r.cacheTTL = time.Hour
chain := newFakeChain()
chain.setAnswer(q.Name, dns.TypeA, "10.0.0.2")
r.SetChainResolver(chain, 50)
r.records[q] = newRec()
resp := queryA(t, r, q.Name)
assert.Equal(t, "10.0.0.1", firstA(t, resp))
time.Sleep(50 * time.Millisecond)
assert.Equal(t, 0, chain.callCount(q.Name, dns.TypeA), "fresh entry must not trigger refresh")
})
}
func TestResolver_ServeFresh_NoRefresh(t *testing.T) {
r := NewResolver()
chain := newFakeChain()
chain.setAnswer("mgmt.example.com.", dns.TypeA, "10.0.0.2")
r.SetChainResolver(chain, 50)
r.records[dns.Question{Name: "mgmt.example.com.", Qtype: dns.TypeA, Qclass: dns.ClassINET}] = &cachedRecord{
records: []dns.RR{&dns.A{
Hdr: dns.RR_Header{Name: "mgmt.example.com.", Rrtype: dns.TypeA, Class: dns.ClassINET, Ttl: 60},
A: net.ParseIP("10.0.0.1").To4(),
}},
cachedAt: time.Now(), // fresh
}
resp := queryA(t, r, "mgmt.example.com.")
assert.Equal(t, "10.0.0.1", firstA(t, resp))
time.Sleep(20 * time.Millisecond)
assert.Equal(t, 0, chain.callCount("mgmt.example.com.", dns.TypeA), "fresh entry must not trigger refresh")
}
func TestResolver_StaleTriggersAsyncRefresh(t *testing.T) {
r := NewResolver()
chain := newFakeChain()
chain.setAnswer("mgmt.example.com.", dns.TypeA, "10.0.0.2")
r.SetChainResolver(chain, 50)
q := dns.Question{Name: "mgmt.example.com.", Qtype: dns.TypeA, Qclass: dns.ClassINET}
r.records[q] = &cachedRecord{
records: []dns.RR{&dns.A{
Hdr: dns.RR_Header{Name: q.Name, Rrtype: dns.TypeA, Class: dns.ClassINET, Ttl: 60},
A: net.ParseIP("10.0.0.1").To4(),
}},
cachedAt: time.Now().Add(-2 * defaultTTL), // stale
}
// First query: serves stale immediately.
resp := queryA(t, r, "mgmt.example.com.")
assert.Equal(t, "10.0.0.1", firstA(t, resp), "stale entry must be served while refresh runs")
waitFor(t, time.Second, func() bool {
return chain.callCount("mgmt.example.com.", dns.TypeA) >= 1
})
// Next query should now return the refreshed IP.
waitFor(t, time.Second, func() bool {
resp := queryA(t, r, "mgmt.example.com.")
return resp != nil && len(resp.Answer) > 0 && firstA(t, resp) == "10.0.0.2"
})
}
func TestResolver_ConcurrentStaleHitsCollapseRefresh(t *testing.T) {
r := NewResolver()
chain := newFakeChain()
chain.setAnswer("mgmt.example.com.", dns.TypeA, "10.0.0.2")
var inflight atomic.Int32
var maxInflight atomic.Int32
chain.onLookup = func() {
cur := inflight.Add(1)
defer inflight.Add(-1)
for {
prev := maxInflight.Load()
if cur <= prev || maxInflight.CompareAndSwap(prev, cur) {
break
}
}
time.Sleep(50 * time.Millisecond) // hold inflight long enough to collide
}
r.SetChainResolver(chain, 50)
q := dns.Question{Name: "mgmt.example.com.", Qtype: dns.TypeA, Qclass: dns.ClassINET}
r.records[q] = &cachedRecord{
records: []dns.RR{&dns.A{
Hdr: dns.RR_Header{Name: q.Name, Rrtype: dns.TypeA, Class: dns.ClassINET, Ttl: 60},
A: net.ParseIP("10.0.0.1").To4(),
}},
cachedAt: time.Now().Add(-2 * defaultTTL),
}
var wg sync.WaitGroup
for i := 0; i < 50; i++ {
wg.Add(1)
go func() {
defer wg.Done()
queryA(t, r, "mgmt.example.com.")
}()
}
wg.Wait()
waitFor(t, 2*time.Second, func() bool {
return inflight.Load() == 0
})
calls := chain.callCount("mgmt.example.com.", dns.TypeA)
assert.LessOrEqual(t, calls, 2, "singleflight must collapse concurrent refreshes (got %d)", calls)
assert.Equal(t, int32(1), maxInflight.Load(), "only one refresh should run concurrently")
}
func TestResolver_RefreshFailureArmsBackoff(t *testing.T) {
r := NewResolver()
chain := newFakeChain()
chain.err = errors.New("boom")
r.SetChainResolver(chain, 50)
q := dns.Question{Name: "mgmt.example.com.", Qtype: dns.TypeA, Qclass: dns.ClassINET}
r.records[q] = &cachedRecord{
records: []dns.RR{&dns.A{
Hdr: dns.RR_Header{Name: q.Name, Rrtype: dns.TypeA, Class: dns.ClassINET, Ttl: 60},
A: net.ParseIP("10.0.0.1").To4(),
}},
cachedAt: time.Now().Add(-2 * defaultTTL),
}
// First stale hit triggers a refresh attempt that fails.
resp := queryA(t, r, "mgmt.example.com.")
assert.Equal(t, "10.0.0.1", firstA(t, resp), "stale entry served while refresh fails")
waitFor(t, time.Second, func() bool {
return chain.callCount("mgmt.example.com.", dns.TypeA) == 1
})
waitFor(t, time.Second, func() bool {
r.mutex.RLock()
defer r.mutex.RUnlock()
c, ok := r.records[q]
return ok && !c.lastFailedRefresh.IsZero()
})
// Subsequent stale hits within backoff window should not schedule more refreshes.
for i := 0; i < 10; i++ {
queryA(t, r, "mgmt.example.com.")
}
time.Sleep(50 * time.Millisecond)
assert.Equal(t, 1, chain.callCount("mgmt.example.com.", dns.TypeA), "backoff must suppress further refreshes")
}
func TestResolver_NoRootHandler_SkipsChain(t *testing.T) {
r := NewResolver()
chain := newFakeChain()
chain.hasRoot = false
chain.setAnswer("mgmt.example.com.", dns.TypeA, "10.0.0.2")
r.SetChainResolver(chain, 50)
// With hasRoot=false the chain must not be consulted. Use a short
// deadline so the OS fallback returns quickly without waiting on a
// real network call in CI.
ctx, cancel := context.WithTimeout(context.Background(), 50*time.Millisecond)
defer cancel()
_, _, _, _ = r.lookupBoth(ctx, domain.Domain("mgmt.example.com"), "mgmt.example.com.")
assert.Equal(t, 0, chain.callCount("mgmt.example.com.", dns.TypeA),
"chain must not be used when no root handler is registered at the bound priority")
}
func TestResolver_ServeDuringRefreshSetsLoopFlag(t *testing.T) {
// ServeDNS being invoked for a question while a refresh for that question
// is inflight indicates a resolver loop (the OS resolver sent the recursive
// query back to us). The per-question atomic flag in refreshing must be set.
r := NewResolver()
q := dns.Question{Name: "mgmt.example.com.", Qtype: dns.TypeA, Qclass: dns.ClassINET}
r.records[q] = &cachedRecord{
records: []dns.RR{&dns.A{
Hdr: dns.RR_Header{Name: q.Name, Rrtype: dns.TypeA, Class: dns.ClassINET, Ttl: 60},
A: net.ParseIP("10.0.0.1").To4(),
}},
cachedAt: time.Now(),
}
// Simulate an inflight refresh.
r.markRefreshing(q)
defer r.clearRefreshing(q)
resp := queryA(t, r, "mgmt.example.com.")
assert.Equal(t, "10.0.0.1", firstA(t, resp), "stale entry must still be served to avoid breaking external queries")
r.mutex.RLock()
inflight := r.refreshing[q]
r.mutex.RUnlock()
require.NotNil(t, inflight)
assert.True(t, inflight.Load(), "loop flag must be set once a ServeDNS during refresh was observed")
}
func TestResolver_LoopFlagOnlyTrippedOncePerRefresh(t *testing.T) {
r := NewResolver()
q := dns.Question{Name: "mgmt.example.com.", Qtype: dns.TypeA, Qclass: dns.ClassINET}
r.records[q] = &cachedRecord{
records: []dns.RR{&dns.A{
Hdr: dns.RR_Header{Name: q.Name, Rrtype: dns.TypeA, Class: dns.ClassINET, Ttl: 60},
A: net.ParseIP("10.0.0.1").To4(),
}},
cachedAt: time.Now(),
}
r.markRefreshing(q)
defer r.clearRefreshing(q)
// Multiple ServeDNS calls during the same refresh must not re-set the flag
// (CompareAndSwap from false -> true returns true only on the first call).
for range 5 {
queryA(t, r, "mgmt.example.com.")
}
r.mutex.RLock()
inflight := r.refreshing[q]
r.mutex.RUnlock()
assert.True(t, inflight.Load())
}
func TestResolver_NoLoopFlagWhenNotRefreshing(t *testing.T) {
r := NewResolver()
q := dns.Question{Name: "mgmt.example.com.", Qtype: dns.TypeA, Qclass: dns.ClassINET}
r.records[q] = &cachedRecord{
records: []dns.RR{&dns.A{
Hdr: dns.RR_Header{Name: q.Name, Rrtype: dns.TypeA, Class: dns.ClassINET, Ttl: 60},
A: net.ParseIP("10.0.0.1").To4(),
}},
cachedAt: time.Now(),
}
queryA(t, r, "mgmt.example.com.")
r.mutex.RLock()
_, ok := r.refreshing[q]
r.mutex.RUnlock()
assert.False(t, ok, "no refresh inflight means no loop tracking")
}
func TestResolver_AddDomain_UsesChainWhenRootRegistered(t *testing.T) {
r := NewResolver()
chain := newFakeChain()
chain.setAnswer("mgmt.example.com.", dns.TypeA, "10.0.0.2")
chain.setAnswer("mgmt.example.com.", dns.TypeAAAA, "fd00::2")
r.SetChainResolver(chain, 50)
require.NoError(t, r.AddDomain(context.Background(), domain.Domain("mgmt.example.com")))
resp := queryA(t, r, "mgmt.example.com.")
assert.Equal(t, "10.0.0.2", firstA(t, resp))
assert.Equal(t, 1, chain.callCount("mgmt.example.com.", dns.TypeA))
assert.Equal(t, 1, chain.callCount("mgmt.example.com.", dns.TypeAAAA))
}

View File

@@ -6,6 +6,7 @@ import (
"net/url"
"strings"
"testing"
"time"
"github.com/miekg/dns"
"github.com/stretchr/testify/assert"
@@ -23,6 +24,60 @@ func TestResolver_NewResolver(t *testing.T) {
assert.False(t, resolver.MatchSubdomains())
}
func TestResolveCacheTTL(t *testing.T) {
tests := []struct {
name string
value string
want time.Duration
}{
{"unset falls back to default", "", defaultTTL},
{"valid duration", "45s", 45 * time.Second},
{"valid minutes", "2m", 2 * time.Minute},
{"malformed falls back to default", "not-a-duration", defaultTTL},
{"zero falls back to default", "0s", defaultTTL},
{"negative falls back to default", "-5s", defaultTTL},
}
for _, tc := range tests {
t.Run(tc.name, func(t *testing.T) {
t.Setenv(envMgmtCacheTTL, tc.value)
got := resolveCacheTTL()
assert.Equal(t, tc.want, got, "parsed TTL should match")
})
}
}
func TestNewResolver_CacheTTLFromEnv(t *testing.T) {
t.Setenv(envMgmtCacheTTL, "7s")
r := NewResolver()
assert.Equal(t, 7*time.Second, r.cacheTTL, "NewResolver should evaluate cacheTTL once from env")
}
func TestResolver_ResponseTTL(t *testing.T) {
now := time.Now()
tests := []struct {
name string
cacheTTL time.Duration
cachedAt time.Time
wantMin uint32
wantMax uint32
}{
{"fresh entry returns full TTL", 60 * time.Second, now, 59, 60},
{"half-aged entry returns half TTL", 60 * time.Second, now.Add(-30 * time.Second), 29, 31},
{"expired entry returns zero", 60 * time.Second, now.Add(-61 * time.Second), 0, 0},
{"exactly expired returns zero", 10 * time.Second, now.Add(-10 * time.Second), 0, 0},
}
for _, tc := range tests {
t.Run(tc.name, func(t *testing.T) {
r := &Resolver{cacheTTL: tc.cacheTTL}
got := r.responseTTL(tc.cachedAt)
assert.GreaterOrEqual(t, got, tc.wantMin, "remaining TTL should be >= wantMin")
assert.LessOrEqual(t, got, tc.wantMax, "remaining TTL should be <= wantMax")
})
}
}
func TestResolver_ExtractDomainFromURL(t *testing.T) {
tests := []struct {
name string

View File

@@ -212,6 +212,7 @@ func newDefaultServer(
ctx, stop := context.WithCancel(ctx)
mgmtCacheResolver := mgmt.NewResolver()
mgmtCacheResolver.SetChainResolver(handlerChain, PriorityUpstream)
defaultServer := &DefaultServer{
ctx: ctx,

View File

@@ -26,6 +26,7 @@ import (
nberrors "github.com/netbirdio/netbird/client/errors"
"github.com/netbirdio/netbird/client/firewall"
"github.com/netbirdio/netbird/client/firewall/firewalld"
firewallManager "github.com/netbirdio/netbird/client/firewall/manager"
"github.com/netbirdio/netbird/client/iface"
"github.com/netbirdio/netbird/client/iface/device"
@@ -570,7 +571,7 @@ func (e *Engine) Start(netbirdConfig *mgmProto.NetbirdConfig, mgmtURL *url.URL)
e.connMgr.Start(e.ctx)
e.srWatcher = guard.NewSRWatcher(e.signal, e.relayManager, e.mobileDep.IFaceDiscover, iceCfg)
e.srWatcher.Start()
e.srWatcher.Start(peer.IsForceRelayed())
e.receiveSignalEvents()
e.receiveManagementEvents()
@@ -604,6 +605,8 @@ func (e *Engine) createFirewall() error {
return nil
}
firewalld.SetParentContext(e.ctx)
var err error
e.firewall, err = firewall.NewFirewall(e.wgInterface, e.stateManager, e.flowManager.GetLogger(), e.config.DisableServerRoutes, e.config.MTU)
if err != nil {
@@ -941,7 +944,12 @@ func (e *Engine) handleRelayUpdate(update *mgmProto.RelayConfig) error {
return fmt.Errorf("update relay token: %w", err)
}
e.relayManager.UpdateServerURLs(update.Urls)
urls := update.Urls
if override, ok := peer.OverrideRelayURLs(); ok {
log.Infof("overriding relay URLs from %s: %v", peer.EnvKeyNBHomeRelayServers, override)
urls = override
}
e.relayManager.UpdateServerURLs(urls)
// Just in case the agent started with an MGM server where the relay was disabled but was later enabled.
// We can ignore all errors because the guard will manage the reconnection retries.

View File

@@ -185,17 +185,20 @@ func (conn *Conn) Open(engineCtx context.Context) error {
conn.workerRelay = NewWorkerRelay(conn.ctx, conn.Log, isController(conn.config), conn.config, conn, conn.relayManager)
relayIsSupportedLocally := conn.workerRelay.RelayIsSupportedLocally()
workerICE, err := NewWorkerICE(conn.ctx, conn.Log, conn.config, conn, conn.signaler, conn.iFaceDiscover, conn.statusRecorder, relayIsSupportedLocally)
if err != nil {
return err
forceRelay := IsForceRelayed()
if !forceRelay {
relayIsSupportedLocally := conn.workerRelay.RelayIsSupportedLocally()
workerICE, err := NewWorkerICE(conn.ctx, conn.Log, conn.config, conn, conn.signaler, conn.iFaceDiscover, conn.statusRecorder, relayIsSupportedLocally)
if err != nil {
return err
}
conn.workerICE = workerICE
}
conn.workerICE = workerICE
conn.handshaker = NewHandshaker(conn.Log, conn.config, conn.signaler, conn.workerICE, conn.workerRelay, conn.metricsStages)
conn.handshaker.AddRelayListener(conn.workerRelay.OnNewOffer)
if !isForceRelayed() {
if !forceRelay {
conn.handshaker.AddICEListener(conn.workerICE.OnNewOffer)
}
@@ -251,7 +254,9 @@ func (conn *Conn) Close(signalToRemote bool) {
conn.wgWatcherCancel()
}
conn.workerRelay.CloseConn()
conn.workerICE.Close()
if conn.workerICE != nil {
conn.workerICE.Close()
}
if conn.wgProxyRelay != nil {
err := conn.wgProxyRelay.CloseConn()
@@ -294,7 +299,9 @@ func (conn *Conn) OnRemoteAnswer(answer OfferAnswer) {
// OnRemoteCandidate Handles ICE connection Candidate provided by the remote peer.
func (conn *Conn) OnRemoteCandidate(candidate ice.Candidate, haRoutes route.HAMap) {
conn.dumpState.RemoteCandidate()
conn.workerICE.OnRemoteCandidate(candidate, haRoutes)
if conn.workerICE != nil {
conn.workerICE.OnRemoteCandidate(candidate, haRoutes)
}
}
// SetOnConnected sets a handler function to be triggered by Conn when a new connection to a remote peer is established
@@ -712,33 +719,35 @@ func (conn *Conn) evalStatus() ConnStatus {
return StatusConnecting
}
func (conn *Conn) isConnectedOnAllWay() (connected bool) {
// would be better to protect this with a mutex, but it could cause deadlock with Close function
// isConnectedOnAllWay evaluates the overall connection status based on ICE and Relay transports.
//
// The result is a tri-state:
// - ConnStatusConnected: all available transports are up
// - ConnStatusPartiallyConnected: relay is up but ICE is still pending/reconnecting
// - ConnStatusDisconnected: no working transport
func (conn *Conn) isConnectedOnAllWay() (status guard.ConnStatus) {
defer func() {
if !connected {
if status == guard.ConnStatusDisconnected {
conn.logTraceConnState()
}
}()
// For JS platform: only relay connection is supported
if runtime.GOOS == "js" {
return conn.statusRelay.Get() == worker.StatusConnected
iceWorkerCreated := conn.workerICE != nil
var iceInProgress bool
if iceWorkerCreated {
iceInProgress = conn.workerICE.InProgress()
}
// For non-JS platforms: check ICE connection status
if conn.statusICE.Get() == worker.StatusDisconnected && !conn.workerICE.InProgress() {
return false
}
// If relay is supported with peer, it must also be connected
if conn.workerRelay.IsRelayConnectionSupportedWithPeer() {
if conn.statusRelay.Get() == worker.StatusDisconnected {
return false
}
}
return true
return evalConnStatus(connStatusInputs{
forceRelay: IsForceRelayed(),
peerUsesRelay: conn.workerRelay.IsRelayConnectionSupportedWithPeer(),
relayConnected: conn.statusRelay.Get() == worker.StatusConnected,
remoteSupportsICE: conn.handshaker.RemoteICESupported(),
iceWorkerCreated: iceWorkerCreated,
iceStatusConnecting: conn.statusICE.Get() != worker.StatusDisconnected,
iceInProgress: iceInProgress,
})
}
func (conn *Conn) enableWgWatcherIfNeeded(enabledTime time.Time) {
@@ -926,3 +935,43 @@ func isController(config ConnConfig) bool {
func isRosenpassEnabled(remoteRosenpassPubKey []byte) bool {
return remoteRosenpassPubKey != nil
}
func evalConnStatus(in connStatusInputs) guard.ConnStatus {
// "Relay up and needed" — the peer uses relay and the transport is connected.
relayUsedAndUp := in.peerUsesRelay && in.relayConnected
// Force-relay mode: ICE never runs. Relay is the only transport and must be up.
if in.forceRelay {
return boolToConnStatus(relayUsedAndUp)
}
// Remote peer doesn't support ICE, or we haven't created the worker yet:
// relay is the only possible transport.
if !in.remoteSupportsICE || !in.iceWorkerCreated {
return boolToConnStatus(relayUsedAndUp)
}
// ICE counts as "up" when the status is anything other than Disconnected, OR
// when a negotiation is currently in progress (so we don't spam offers while one is in flight).
iceUp := in.iceStatusConnecting || in.iceInProgress
// Relay side is acceptable if the peer doesn't rely on relay, or relay is connected.
relayOK := !in.peerUsesRelay || in.relayConnected
switch {
case iceUp && relayOK:
return guard.ConnStatusConnected
case relayUsedAndUp:
// Relay is up but ICE is down — partially connected.
return guard.ConnStatusPartiallyConnected
default:
return guard.ConnStatusDisconnected
}
}
func boolToConnStatus(connected bool) guard.ConnStatus {
if connected {
return guard.ConnStatusConnected
}
return guard.ConnStatusDisconnected
}

View File

@@ -13,6 +13,20 @@ const (
StatusConnected
)
// connStatusInputs is the primitive-valued snapshot of the state that drives the
// tri-state connection classification. Extracted so the decision logic can be unit-tested
// without constructing full Worker/Handshaker objects.
type connStatusInputs struct {
forceRelay bool // NB_FORCE_RELAY or JS/WASM
peerUsesRelay bool // remote peer advertises relay support AND local has relay
relayConnected bool // statusRelay reports Connected (independent of whether peer uses relay)
remoteSupportsICE bool // remote peer sent ICE credentials
iceWorkerCreated bool // local WorkerICE exists (false in force-relay mode)
iceStatusConnecting bool // statusICE is anything other than Disconnected
iceInProgress bool // a negotiation is currently in flight
}
// ConnStatus describes the status of a peer's connection
type ConnStatus int32

View File

@@ -0,0 +1,201 @@
package peer
import (
"testing"
"github.com/netbirdio/netbird/client/internal/peer/guard"
)
func TestEvalConnStatus_ForceRelay(t *testing.T) {
tests := []struct {
name string
in connStatusInputs
want guard.ConnStatus
}{
{
name: "force relay, peer uses relay, relay up",
in: connStatusInputs{
forceRelay: true,
peerUsesRelay: true,
relayConnected: true,
},
want: guard.ConnStatusConnected,
},
{
name: "force relay, peer uses relay, relay down",
in: connStatusInputs{
forceRelay: true,
peerUsesRelay: true,
relayConnected: false,
},
want: guard.ConnStatusDisconnected,
},
{
name: "force relay, peer does NOT use relay - disconnected forever",
in: connStatusInputs{
forceRelay: true,
peerUsesRelay: false,
relayConnected: true,
},
want: guard.ConnStatusDisconnected,
},
}
for _, tc := range tests {
t.Run(tc.name, func(t *testing.T) {
if got := evalConnStatus(tc.in); got != tc.want {
t.Fatalf("evalConnStatus = %v, want %v", got, tc.want)
}
})
}
}
func TestEvalConnStatus_ICEUnavailable(t *testing.T) {
tests := []struct {
name string
in connStatusInputs
want guard.ConnStatus
}{
{
name: "remote does not support ICE, peer uses relay, relay up",
in: connStatusInputs{
peerUsesRelay: true,
relayConnected: true,
remoteSupportsICE: false,
iceWorkerCreated: true,
},
want: guard.ConnStatusConnected,
},
{
name: "remote does not support ICE, peer uses relay, relay down",
in: connStatusInputs{
peerUsesRelay: true,
relayConnected: false,
remoteSupportsICE: false,
iceWorkerCreated: true,
},
want: guard.ConnStatusDisconnected,
},
{
name: "ICE worker not yet created, relay up",
in: connStatusInputs{
peerUsesRelay: true,
relayConnected: true,
remoteSupportsICE: true,
iceWorkerCreated: false,
},
want: guard.ConnStatusConnected,
},
{
name: "remote does not support ICE, peer does not use relay",
in: connStatusInputs{
peerUsesRelay: false,
relayConnected: false,
remoteSupportsICE: false,
iceWorkerCreated: true,
},
want: guard.ConnStatusDisconnected,
},
}
for _, tc := range tests {
t.Run(tc.name, func(t *testing.T) {
if got := evalConnStatus(tc.in); got != tc.want {
t.Fatalf("evalConnStatus = %v, want %v", got, tc.want)
}
})
}
}
func TestEvalConnStatus_FullyAvailable(t *testing.T) {
base := connStatusInputs{
remoteSupportsICE: true,
iceWorkerCreated: true,
}
tests := []struct {
name string
mutator func(*connStatusInputs)
want guard.ConnStatus
}{
{
name: "ICE connected, relay connected, peer uses relay",
mutator: func(in *connStatusInputs) {
in.peerUsesRelay = true
in.relayConnected = true
in.iceStatusConnecting = true
},
want: guard.ConnStatusConnected,
},
{
name: "ICE connected, peer does NOT use relay",
mutator: func(in *connStatusInputs) {
in.peerUsesRelay = false
in.relayConnected = false
in.iceStatusConnecting = true
},
want: guard.ConnStatusConnected,
},
{
name: "ICE InProgress only, peer does NOT use relay",
mutator: func(in *connStatusInputs) {
in.peerUsesRelay = false
in.iceStatusConnecting = false
in.iceInProgress = true
},
want: guard.ConnStatusConnected,
},
{
name: "ICE down, relay up, peer uses relay -> partial",
mutator: func(in *connStatusInputs) {
in.peerUsesRelay = true
in.relayConnected = true
in.iceStatusConnecting = false
in.iceInProgress = false
},
want: guard.ConnStatusPartiallyConnected,
},
{
name: "ICE down, peer does NOT use relay -> disconnected",
mutator: func(in *connStatusInputs) {
in.peerUsesRelay = false
in.relayConnected = false
in.iceStatusConnecting = false
in.iceInProgress = false
},
want: guard.ConnStatusDisconnected,
},
{
name: "ICE up, peer uses relay but relay down -> partial (relay required, ICE ignored)",
mutator: func(in *connStatusInputs) {
in.peerUsesRelay = true
in.relayConnected = false
in.iceStatusConnecting = true
},
// relayOK = false (peer uses relay but it's down), iceUp = true
// first switch arm fails (relayOK false), relayUsedAndUp = false (relay down),
// falls into default: Disconnected.
want: guard.ConnStatusDisconnected,
},
{
name: "ICE down, relay up but peer does not use relay -> disconnected",
mutator: func(in *connStatusInputs) {
in.peerUsesRelay = false
in.relayConnected = true // not actually used since peer doesn't rely on it
in.iceStatusConnecting = false
in.iceInProgress = false
},
want: guard.ConnStatusDisconnected,
},
}
for _, tc := range tests {
t.Run(tc.name, func(t *testing.T) {
in := base
tc.mutator(&in)
if got := evalConnStatus(in); got != tc.want {
t.Fatalf("evalConnStatus = %v, want %v (inputs: %+v)", got, tc.want, in)
}
})
}
}

View File

@@ -7,12 +7,38 @@ import (
)
const (
EnvKeyNBForceRelay = "NB_FORCE_RELAY"
EnvKeyNBForceRelay = "NB_FORCE_RELAY"
EnvKeyNBHomeRelayServers = "NB_HOME_RELAY_SERVERS"
)
func isForceRelayed() bool {
func IsForceRelayed() bool {
if runtime.GOOS == "js" {
return true
}
return strings.EqualFold(os.Getenv(EnvKeyNBForceRelay), "true")
}
// OverrideRelayURLs returns the relay server URL list set in
// NB_HOME_RELAY_SERVERS (comma-separated) and a boolean indicating whether
// the override is active. When the env var is unset, the boolean is false
// and the caller should keep the list received from the management server.
// Intended for lab/debug scenarios where a peer must pin to a specific home
// relay regardless of what management offers.
func OverrideRelayURLs() ([]string, bool) {
raw := os.Getenv(EnvKeyNBHomeRelayServers)
if raw == "" {
return nil, false
}
parts := strings.Split(raw, ",")
urls := make([]string, 0, len(parts))
for _, p := range parts {
p = strings.TrimSpace(p)
if p != "" {
urls = append(urls, p)
}
}
if len(urls) == 0 {
return nil, false
}
return urls, true
}
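A sketch of the parsing contract, with invented relay URLs:

	// In a test this would be t.Setenv; in practice, a shell export.
	os.Setenv(EnvKeyNBHomeRelayServers, " rels://r1.example.net:443 ,, rels://r2.example.net:443 ")
	urls, ok := OverrideRelayURLs()
	// ok == true; urls == ["rels://r1.example.net:443", "rels://r2.example.net:443"]
	// Whitespace is trimmed and empty segments are dropped; an all-empty
	// value such as ",," yields ok == false, so the list received from
	// management stays in effect.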

View File

@@ -8,7 +8,19 @@ import (
log "github.com/sirupsen/logrus"
)
type isConnectedFunc func() bool
// ConnStatus represents the connection state as seen by the guard.
type ConnStatus int
const (
// ConnStatusDisconnected means neither ICE nor Relay is connected.
ConnStatusDisconnected ConnStatus = iota
// ConnStatusPartiallyConnected means Relay is connected but ICE is not.
ConnStatusPartiallyConnected
// ConnStatusConnected means all required connections are established.
ConnStatusConnected
)
type connStatusFunc func() ConnStatus
// Guard is responsible for the reconnection logic.
// It triggers sending an offer to the peer when the connection has issues.
@@ -20,14 +32,14 @@ type isConnectedFunc func() bool
// - ICE candidate changes
type Guard struct {
log *log.Entry
isConnectedOnAllWay isConnectedFunc
isConnectedOnAllWay connStatusFunc
timeout time.Duration
srWatcher *SRWatcher
relayedConnDisconnected chan struct{}
iCEConnDisconnected chan struct{}
}
func NewGuard(log *log.Entry, isConnectedFn isConnectedFunc, timeout time.Duration, srWatcher *SRWatcher) *Guard {
func NewGuard(log *log.Entry, isConnectedFn connStatusFunc, timeout time.Duration, srWatcher *SRWatcher) *Guard {
return &Guard{
log: log,
isConnectedOnAllWay: isConnectedFn,
@@ -57,8 +69,17 @@ func (g *Guard) SetICEConnDisconnected() {
}
}
// reconnectLoopWithRetry periodically checks the connection status.
// Try to send an offer while P2P is not established, or while the relay is not connected if it is supported
// reconnectLoopWithRetry periodically checks the connection status and sends offers to re-establish connectivity.
//
// Behavior depends on the connection state reported by isConnectedOnAllWay:
// - Connected: no action, the peer is fully reachable.
// - Disconnected (neither ICE nor Relay): retries aggressively with exponential backoff (800ms doubling
// up to timeout), never gives up. This ensures rapid recovery when the peer has no connectivity at all.
// - PartiallyConnected (Relay up, ICE not): retries up to 3 times with exponential backoff, then switches
// to one attempt per hour. This limits signaling traffic when relay already provides connectivity.
//
// External events (relay/ICE disconnect, signal/relay reconnect, candidate changes) reset the retry
// counter and backoff ticker, giving ICE a fresh chance after network conditions change.
func (g *Guard) reconnectLoopWithRetry(ctx context.Context, callback func()) {
srReconnectedChan := g.srWatcher.NewListener()
defer g.srWatcher.RemoveListener(srReconnectedChan)
@@ -68,36 +89,47 @@ func (g *Guard) reconnectLoopWithRetry(ctx context.Context, callback func()) {
tickerChannel := ticker.C
iceState := &iceRetryState{log: g.log}
defer iceState.reset()
for {
select {
case t := <-tickerChannel:
if t.IsZero() {
g.log.Infof("retry timed out, stop periodic offer sending")
// after the backoff timeout ticker.C is closed; swap in a dummy channel to avoid a busy loop
tickerChannel = make(<-chan time.Time)
continue
case <-tickerChannel:
switch g.isConnectedOnAllWay() {
case ConnStatusConnected:
// all good, nothing to do
case ConnStatusDisconnected:
callback()
case ConnStatusPartiallyConnected:
if iceState.shouldRetry() {
callback()
} else {
iceState.enterHourlyMode()
ticker.Stop()
tickerChannel = iceState.hourlyC()
}
}
if !g.isConnectedOnAllWay() {
callback()
}
case <-g.relayedConnDisconnected:
g.log.Debugf("Relay connection changed, reset reconnection ticker")
ticker.Stop()
ticker = g.prepareExponentTicker(ctx)
ticker = g.newReconnectTicker(ctx)
tickerChannel = ticker.C
iceState.reset()
case <-g.iCEConnDisconnected:
g.log.Debugf("ICE connection changed, reset reconnection ticker")
ticker.Stop()
ticker = g.prepareExponentTicker(ctx)
ticker = g.newReconnectTicker(ctx)
tickerChannel = ticker.C
iceState.reset()
case <-srReconnectedChan:
g.log.Debugf("network change detected, reset reconnection ticker")
ticker.Stop()
ticker = g.prepareExponentTicker(ctx)
ticker = g.newReconnectTicker(ctx)
tickerChannel = ticker.C
iceState.reset()
case <-ctx.Done():
g.log.Debugf("context is done, stop reconnect loop")
@@ -120,7 +152,7 @@ func (g *Guard) initialTicker(ctx context.Context) *backoff.Ticker {
return backoff.NewTicker(bo)
}
func (g *Guard) prepareExponentTicker(ctx context.Context) *backoff.Ticker {
func (g *Guard) newReconnectTicker(ctx context.Context) *backoff.Ticker {
bo := backoff.WithContext(&backoff.ExponentialBackOff{
InitialInterval: 800 * time.Millisecond,
RandomizationFactor: 0.1,
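The hunk above truncates the ticker construction; for orientation, the cenkalti/backoff pattern it follows looks roughly like this, where MaxInterval and MaxElapsedTime are placeholders rather than the values the file actually sets:

	bo := backoff.WithContext(&backoff.ExponentialBackOff{
		InitialInterval:     800 * time.Millisecond,
		RandomizationFactor: 0.1,
		Multiplier:          2,
		MaxInterval:         10 * time.Second, // placeholder value
		MaxElapsedTime:      30 * time.Second, // placeholder value
		Clock:               backoff.SystemClock,
	}, ctx)
	ticker := backoff.NewTicker(bo)
	for range ticker.C {
		// one retry slot per tick; C closes once MaxElapsedTime is exceeded
	}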

View File

@@ -0,0 +1,61 @@
package guard
import (
"time"
log "github.com/sirupsen/logrus"
)
const (
// maxICERetries is the maximum number of ICE offer attempts when relay is connected
maxICERetries = 3
// iceRetryInterval is the periodic retry interval after ICE retries are exhausted
iceRetryInterval = 1 * time.Hour
)
// iceRetryState tracks the limited ICE retry attempts when relay is already connected.
// After maxICERetries attempts it switches to a periodic hourly retry.
type iceRetryState struct {
log *log.Entry
retries int
hourly *time.Ticker
}
func (s *iceRetryState) reset() {
s.retries = 0
if s.hourly != nil {
s.hourly.Stop()
s.hourly = nil
}
}
// shouldRetry reports whether the caller should send another ICE offer on this tick.
// Returns false when the per-cycle retry budget is exhausted and the caller must switch
// to the hourly ticker via enterHourlyMode + hourlyC.
func (s *iceRetryState) shouldRetry() bool {
if s.hourly != nil {
s.log.Debugf("hourly ICE retry attempt")
return true
}
s.retries++
if s.retries <= maxICERetries {
s.log.Debugf("ICE retry attempt %d/%d", s.retries, maxICERetries)
return true
}
return false
}
// enterHourlyMode starts the hourly retry ticker. Must be called after shouldRetry returns false.
func (s *iceRetryState) enterHourlyMode() {
s.log.Infof("ICE retries exhausted (%d/%d), switching to hourly retry", maxICERetries, maxICERetries)
s.hourly = time.NewTicker(iceRetryInterval)
}
func (s *iceRetryState) hourlyC() <-chan time.Time {
if s.hourly == nil {
return nil
}
return s.hourly.C
}

View File

@@ -0,0 +1,103 @@
package guard
import (
"testing"
log "github.com/sirupsen/logrus"
)
func newTestRetryState() *iceRetryState {
return &iceRetryState{log: log.NewEntry(log.StandardLogger())}
}
func TestICERetryState_AllowsInitialBudget(t *testing.T) {
s := newTestRetryState()
for i := 1; i <= maxICERetries; i++ {
if !s.shouldRetry() {
t.Fatalf("shouldRetry returned false on attempt %d, want true (budget = %d)", i, maxICERetries)
}
}
}
func TestICERetryState_ExhaustsAfterBudget(t *testing.T) {
s := newTestRetryState()
for i := 0; i < maxICERetries; i++ {
_ = s.shouldRetry()
}
if s.shouldRetry() {
t.Fatalf("shouldRetry returned true after budget exhausted, want false")
}
}
func TestICERetryState_HourlyCNilBeforeEnterHourlyMode(t *testing.T) {
s := newTestRetryState()
if s.hourlyC() != nil {
t.Fatalf("hourlyC returned non-nil channel before enterHourlyMode")
}
}
func TestICERetryState_EnterHourlyModeArmsTicker(t *testing.T) {
s := newTestRetryState()
for i := 0; i < maxICERetries+1; i++ {
_ = s.shouldRetry()
}
s.enterHourlyMode()
defer s.reset()
if s.hourlyC() == nil {
t.Fatalf("hourlyC returned nil after enterHourlyMode")
}
}
func TestICERetryState_ShouldRetryTrueInHourlyMode(t *testing.T) {
s := newTestRetryState()
s.enterHourlyMode()
defer s.reset()
if !s.shouldRetry() {
t.Fatalf("shouldRetry returned false in hourly mode, want true")
}
// Subsequent calls also return true — we keep retrying on each hourly tick.
if !s.shouldRetry() {
t.Fatalf("second shouldRetry returned false in hourly mode, want true")
}
}
func TestICERetryState_ResetRestoresBudget(t *testing.T) {
s := newTestRetryState()
for i := 0; i < maxICERetries+1; i++ {
_ = s.shouldRetry()
}
s.enterHourlyMode()
s.reset()
if s.hourlyC() != nil {
t.Fatalf("hourlyC returned non-nil channel after reset")
}
if s.retries != 0 {
t.Fatalf("retries = %d after reset, want 0", s.retries)
}
for i := 1; i <= maxICERetries; i++ {
if !s.shouldRetry() {
t.Fatalf("shouldRetry returned false on attempt %d after reset, want true", i)
}
}
}
func TestICERetryState_ResetIsIdempotent(t *testing.T) {
s := newTestRetryState()
s.reset()
s.reset() // second call must not panic or re-stop a nil ticker
if s.hourlyC() != nil {
t.Fatalf("hourlyC non-nil after double reset")
}
}

View File

@@ -39,7 +39,7 @@ func NewSRWatcher(signalClient chNotifier, relayManager chNotifier, iFaceDiscove
return srw
}
func (w *SRWatcher) Start() {
func (w *SRWatcher) Start(disableICEMonitor bool) {
w.mu.Lock()
defer w.mu.Unlock()
@@ -50,8 +50,10 @@ func (w *SRWatcher) Start() {
ctx, cancel := context.WithCancel(context.Background())
w.cancelIceMonitor = cancel
iceMonitor := NewICEMonitor(w.iFaceDiscover, w.iceConfig, GetICEMonitorPeriod())
go iceMonitor.Start(ctx, w.onICEChanged)
if !disableICEMonitor {
iceMonitor := NewICEMonitor(w.iFaceDiscover, w.iceConfig, GetICEMonitorPeriod())
go iceMonitor.Start(ctx, w.onICEChanged)
}
w.signalClient.SetOnReconnectedListener(w.onReconnected)
w.relayManager.SetOnReconnectedListener(w.onReconnected)

View File

@@ -4,6 +4,7 @@ import (
"context"
"errors"
"sync"
"sync/atomic"
log "github.com/sirupsen/logrus"
@@ -43,6 +44,10 @@ type OfferAnswer struct {
SessionID *ICESessionID
}
func (o *OfferAnswer) hasICECredentials() bool {
return o.IceCredentials.UFrag != "" && o.IceCredentials.Pwd != ""
}
type Handshaker struct {
mu sync.Mutex
log *log.Entry
@@ -59,6 +64,10 @@ type Handshaker struct {
relayListener *AsyncOfferListener
iceListener func(remoteOfferAnswer *OfferAnswer)
// remoteICESupported tracks whether the remote peer includes ICE credentials in its offers/answers.
// When false, the local side skips ICE listener dispatch and suppresses ICE credentials in responses.
remoteICESupported atomic.Bool
// remoteOffersCh is a channel used to wait for remote credentials to proceed with the connection
remoteOffersCh chan OfferAnswer
// remoteAnswerCh is a channel used to wait for remote credentials answer (confirmation of our offer) to proceed with the connection
@@ -66,7 +75,7 @@ type Handshaker struct {
}
func NewHandshaker(log *log.Entry, config ConnConfig, signaler *Signaler, ice *WorkerICE, relay *WorkerRelay, metricsStages *MetricsStages) *Handshaker {
return &Handshaker{
h := &Handshaker{
log: log,
config: config,
signaler: signaler,
@@ -76,6 +85,13 @@ func NewHandshaker(log *log.Entry, config ConnConfig, signaler *Signaler, ice *W
remoteOffersCh: make(chan OfferAnswer),
remoteAnswerCh: make(chan OfferAnswer),
}
// assume remote supports ICE until we learn otherwise from received offers
h.remoteICESupported.Store(ice != nil)
return h
}
func (h *Handshaker) RemoteICESupported() bool {
return h.remoteICESupported.Load()
}
func (h *Handshaker) AddRelayListener(offer func(remoteOfferAnswer *OfferAnswer)) {
@@ -90,18 +106,20 @@ func (h *Handshaker) Listen(ctx context.Context) {
for {
select {
case remoteOfferAnswer := <-h.remoteOffersCh:
h.log.Infof("received offer, running version %s, remote WireGuard listen port %d, session id: %s", remoteOfferAnswer.Version, remoteOfferAnswer.WgListenPort, remoteOfferAnswer.SessionIDString())
h.log.Infof("received offer, running version %s, remote WireGuard listen port %d, session id: %s, remote ICE supported: %t", remoteOfferAnswer.Version, remoteOfferAnswer.WgListenPort, remoteOfferAnswer.SessionIDString(), remoteOfferAnswer.hasICECredentials())
// Record signaling received for reconnection attempts
if h.metricsStages != nil {
h.metricsStages.RecordSignalingReceived()
}
h.updateRemoteICEState(&remoteOfferAnswer)
if h.relayListener != nil {
h.relayListener.Notify(&remoteOfferAnswer)
}
if h.iceListener != nil {
if h.iceListener != nil && h.RemoteICESupported() {
h.iceListener(&remoteOfferAnswer)
}
@@ -110,18 +128,20 @@ func (h *Handshaker) Listen(ctx context.Context) {
continue
}
case remoteOfferAnswer := <-h.remoteAnswerCh:
h.log.Infof("received answer, running version %s, remote WireGuard listen port %d, session id: %s", remoteOfferAnswer.Version, remoteOfferAnswer.WgListenPort, remoteOfferAnswer.SessionIDString())
h.log.Infof("received answer, running version %s, remote WireGuard listen port %d, session id: %s, remote ICE supported: %t", remoteOfferAnswer.Version, remoteOfferAnswer.WgListenPort, remoteOfferAnswer.SessionIDString(), remoteOfferAnswer.hasICECredentials())
// Record signaling received for reconnection attempts
if h.metricsStages != nil {
h.metricsStages.RecordSignalingReceived()
}
h.updateRemoteICEState(&remoteOfferAnswer)
if h.relayListener != nil {
h.relayListener.Notify(&remoteOfferAnswer)
}
if h.iceListener != nil {
if h.iceListener != nil && h.RemoteICESupported() {
h.iceListener(&remoteOfferAnswer)
}
case <-ctx.Done():
@@ -183,15 +203,18 @@ func (h *Handshaker) sendAnswer() error {
}
func (h *Handshaker) buildOfferAnswer() OfferAnswer {
uFrag, pwd := h.ice.GetLocalUserCredentials()
sid := h.ice.SessionID()
answer := OfferAnswer{
IceCredentials: IceCredentials{uFrag, pwd},
WgListenPort: h.config.LocalWgPort,
Version: version.NetbirdVersion(),
RosenpassPubKey: h.config.RosenpassConfig.PubKey,
RosenpassAddr: h.config.RosenpassConfig.Addr,
SessionID: &sid,
}
if h.ice != nil && h.RemoteICESupported() {
uFrag, pwd := h.ice.GetLocalUserCredentials()
sid := h.ice.SessionID()
answer.IceCredentials = IceCredentials{uFrag, pwd}
answer.SessionID = &sid
}
if addr, err := h.relay.RelayInstanceAddress(); err == nil {
@@ -200,3 +223,18 @@ func (h *Handshaker) buildOfferAnswer() OfferAnswer {
return answer
}
func (h *Handshaker) updateRemoteICEState(offer *OfferAnswer) {
hasICE := offer.hasICECredentials()
prev := h.remoteICESupported.Swap(hasICE)
if prev != hasICE {
if hasICE {
h.log.Infof("remote peer started sending ICE credentials")
} else {
h.log.Infof("remote peer stopped sending ICE credentials")
if h.ice != nil {
h.ice.Close()
}
}
}
}

View File

@@ -46,9 +46,13 @@ func (s *Signaler) Ready() bool {
// SignalOfferAnswer signals either an offer or an answer to remote peer
func (s *Signaler) signalOfferAnswer(offerAnswer OfferAnswer, remoteKey string, bodyType sProto.Body_Type) error {
sessionIDBytes, err := offerAnswer.SessionID.Bytes()
if err != nil {
log.Warnf("failed to get session ID bytes: %v", err)
var sessionIDBytes []byte
if offerAnswer.SessionID != nil {
var err error
sessionIDBytes, err = offerAnswer.SessionID.Bytes()
if err != nil {
log.Warnf("failed to get session ID bytes: %v", err)
}
}
msg, err := signal.MarshalCredential(
s.wgPrivateKey,

View File

@@ -215,6 +215,14 @@ type Status struct {
eventStreams map[string]chan *proto.SystemEvent
eventQueue *EventQueue
// stateChangeStreams fans out connection-state changes (connected /
// disconnected / connecting / address change / peers list change) to
// every active SubscribeStatus gRPC stream. Each subscriber gets a
// buffered chan; the notifier pings them without blocking, so a slow
// consumer can never stall the daemon.
stateChangeMux sync.Mutex
stateChangeStreams map[string]chan struct{}
ingressGwMgr *ingressgw.Manager
routeIDLookup routeIDLookup
@@ -228,6 +236,7 @@ func NewRecorder(mgmAddress string) *Status {
changeNotify: make(map[string]map[string]*StatusChangeSubscription),
eventStreams: make(map[string]chan *proto.SystemEvent),
eventQueue: NewEventQueue(eventQueueSize),
stateChangeStreams: make(map[string]chan struct{}),
offlinePeers: make([]State, 0),
notifier: newNotifier(),
mgmAddress: mgmAddress,
@@ -990,16 +999,19 @@ func (d *Status) GetFullStatus() FullStatus {
// ClientStart will notify all listeners about the new service state
func (d *Status) ClientStart() {
d.notifier.clientStart()
d.notifyStateChange()
}
// ClientStop will notify all listeners about the new service state
func (d *Status) ClientStop() {
d.notifier.clientStop()
d.notifyStateChange()
}
// ClientTeardown will notify all listeners about the service is under teardown
func (d *Status) ClientTeardown() {
d.notifier.clientTearDown()
d.notifyStateChange()
}
// SetConnectionListener set a listener to the notifier
@@ -1014,6 +1026,7 @@ func (d *Status) RemoveConnectionListener() {
func (d *Status) onConnectionChanged() {
d.notifier.updateServerStates(d.managementState, d.signalState)
d.notifyStateChange()
}
// notifyPeerStateChangeListeners notifies route manager about the change in peer state
@@ -1049,10 +1062,12 @@ func (d *Status) notifyPeerStateChangeListeners(peerID string) {
func (d *Status) notifyPeerListChanged() {
d.notifier.peerListChanged(d.numOfPeers())
d.notifyStateChange()
}
func (d *Status) notifyAddressChanged() {
d.notifier.localAddressChanged(d.localPeer.FQDN, d.localPeer.IP)
d.notifyStateChange()
}
func (d *Status) numOfPeers() int {
@@ -1128,6 +1143,50 @@ func (d *Status) GetEventHistory() []*proto.SystemEvent {
return d.eventQueue.GetAll()
}
// SubscribeToStateChanges hands back a channel that receives a tick on
// every connection-state change (connected / disconnected / connecting /
// address change / peers-list change). The channel is buffered to one
// pending tick so a coalesced burst still wakes the consumer exactly
// once. Pass the returned id to UnsubscribeFromStateChanges to detach.
func (d *Status) SubscribeToStateChanges() (string, <-chan struct{}) {
d.stateChangeMux.Lock()
defer d.stateChangeMux.Unlock()
id := uuid.New().String()
ch := make(chan struct{}, 1)
d.stateChangeStreams[id] = ch
return id, ch
}
// UnsubscribeFromStateChanges releases a SubscribeToStateChanges channel
// and closes it so any consumer goroutine selecting on the channel
// unblocks cleanly.
func (d *Status) UnsubscribeFromStateChanges(id string) {
d.stateChangeMux.Lock()
defer d.stateChangeMux.Unlock()
if ch, ok := d.stateChangeStreams[id]; ok {
close(ch)
delete(d.stateChangeStreams, id)
}
}
// notifyStateChange wakes every SubscribeToStateChanges subscriber. It
// drops the tick if a subscriber's buffer is full: a tick is already
// pending, so the consumer will pull the latest snapshot anyway and
// additional ticks would be redundant.
func (d *Status) notifyStateChange() {
d.stateChangeMux.Lock()
defer d.stateChangeMux.Unlock()
for _, ch := range d.stateChangeStreams {
select {
case ch <- struct{}{}:
default:
}
}
}
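
A hypothetical consumer of this subscription API (illustrative only, not part of the change; the `peer` import path is an assumption), showing the intended loop shape — one wakeup per coalesced burst, clean exit on close or cancellation:

```go
func consumeStateChanges(ctx context.Context, recorder *peer.Status) {
	id, ch := recorder.SubscribeToStateChanges()
	defer recorder.UnsubscribeFromStateChanges(id)
	for {
		select {
		case _, ok := <-ch:
			if !ok {
				return // channel closed by an unsubscribe elsewhere
			}
			snapshot := recorder.GetFullStatus() // always the latest state
			_ = snapshot                         // e.g. push it down a gRPC stream
		case <-ctx.Done():
			return
		}
	}
}
```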
func (d *Status) SetWgIface(wgInterface WGIfaceStatus) {
d.mux.Lock()
defer d.mux.Unlock()

View File

@@ -18,10 +18,17 @@
<Component Id="NetbirdFiles" Guid="db3165de-cc6e-4922-8396-9d892950e23e" Bitness="always64">
<File ProcessorArchitecture="$(var.ProcessorArchitecture)" Source=".\dist\netbird_windows_$(var.ArchSuffix)\netbird.exe" KeyPath="yes" />
<File ProcessorArchitecture="$(var.ProcessorArchitecture)" Source=".\dist\netbird_windows_$(var.ArchSuffix)\netbird-ui.exe">
<Shortcut Id="NetbirdDesktopShortcut" Directory="DesktopFolder" Name="NetBird" WorkingDirectory="NetbirdInstallDir" Icon="NetbirdIcon" />
<Shortcut Id="NetbirdStartMenuShortcut" Directory="StartMenuFolder" Name="NetBird" WorkingDirectory="NetbirdInstallDir" Icon="NetbirdIcon" />
<Shortcut Id="NetbirdDesktopShortcut" Directory="DesktopFolder" Name="NetBird" WorkingDirectory="NetbirdInstallDir" Icon="NetbirdIcon">
<ShortcutProperty Key="System.AppUserModel.ID" Value="NetBird" />
<ShortcutProperty Key="System.AppUserModel.ToastActivatorCLSID" Value="{0E1B4DE7-E148-432B-9814-544F941826EC}" />
</Shortcut>
<Shortcut Id="NetbirdStartMenuShortcut" Directory="StartMenuFolder" Name="NetBird" WorkingDirectory="NetbirdInstallDir" Icon="NetbirdIcon">
<ShortcutProperty Key="System.AppUserModel.ID" Value="NetBird" />
<ShortcutProperty Key="System.AppUserModel.ToastActivatorCLSID" Value="{0E1B4DE7-E148-432B-9814-544F941826EC}" />
</Shortcut>
</File>
<File ProcessorArchitecture="$(var.ProcessorArchitecture)" Source=".\dist\netbird_windows_$(var.ArchSuffix)\wintun.dll" />
<File Id="NetbirdToastIcon" Name="netbird.png" Source=".\client\ui\assets\netbird.png" />
<?if $(var.ArchSuffix) = "amd64" ?>
<File ProcessorArchitecture="$(var.ProcessorArchitecture)" Source=".\dist\netbird_windows_$(var.ArchSuffix)\opengl32.dll" />
<?endif ?>
@@ -46,8 +53,19 @@
</Directory>
</StandardDirectory>
<!-- Per-user component: HKCU keypath (auto GUID via "*"), separate from
the per-machine NetbirdFiles component to satisfy ICE57. -->
<StandardDirectory Id="ProgramMenuFolder">
<Component Id="NetbirdAumidRegistry" Guid="*">
<RegistryKey Root="HKCU" Key="Software\Classes\AppUserModelId\NetBird" ForceDeleteOnUninstall="yes">
<RegistryValue Name="InstalledByMSI" Type="integer" Value="1" KeyPath="yes" />
</RegistryKey>
</Component>
</StandardDirectory>
<ComponentGroup Id="NetbirdFilesComponent">
<ComponentRef Id="NetbirdFiles" />
<ComponentRef Id="NetbirdAumidRegistry" />
</ComponentGroup>
<util:CloseApplication Id="CloseNetBird" CloseMessage="no" Target="netbird.exe" RebootPrompt="no" />

View File

@@ -1,7 +1,7 @@
// Code generated by protoc-gen-go. DO NOT EDIT.
// versions:
// protoc-gen-go v1.36.6
// protoc v6.33.1
// protoc v7.34.1
// source: daemon.proto
package proto
@@ -6566,12 +6566,13 @@ const file_daemon_proto_rawDesc = "" +
"\n" +
"EXPOSE_UDP\x10\x03\x12\x0e\n" +
"\n" +
"EXPOSE_TLS\x10\x042\xfc\x15\n" +
"EXPOSE_TLS\x10\x042\xc2\x16\n" +
"\rDaemonService\x126\n" +
"\x05Login\x12\x14.daemon.LoginRequest\x1a\x15.daemon.LoginResponse\"\x00\x12K\n" +
"\fWaitSSOLogin\x12\x1b.daemon.WaitSSOLoginRequest\x1a\x1c.daemon.WaitSSOLoginResponse\"\x00\x12-\n" +
"\x02Up\x12\x11.daemon.UpRequest\x1a\x12.daemon.UpResponse\"\x00\x129\n" +
"\x06Status\x12\x15.daemon.StatusRequest\x1a\x16.daemon.StatusResponse\"\x00\x123\n" +
"\x06Status\x12\x15.daemon.StatusRequest\x1a\x16.daemon.StatusResponse\"\x00\x12D\n" +
"\x0fSubscribeStatus\x12\x15.daemon.StatusRequest\x1a\x16.daemon.StatusResponse\"\x000\x01\x123\n" +
"\x04Down\x12\x13.daemon.DownRequest\x1a\x14.daemon.DownResponse\"\x00\x12B\n" +
"\tGetConfig\x12\x18.daemon.GetConfigRequest\x1a\x19.daemon.GetConfigResponse\"\x00\x12K\n" +
"\fListNetworks\x12\x1b.daemon.ListNetworksRequest\x1a\x1c.daemon.ListNetworksResponse\"\x00\x12Q\n" +
@@ -6766,78 +6767,80 @@ var file_daemon_proto_depIdxs = []int32{
10, // 37: daemon.DaemonService.WaitSSOLogin:input_type -> daemon.WaitSSOLoginRequest
12, // 38: daemon.DaemonService.Up:input_type -> daemon.UpRequest
14, // 39: daemon.DaemonService.Status:input_type -> daemon.StatusRequest
16, // 40: daemon.DaemonService.Down:input_type -> daemon.DownRequest
18, // 41: daemon.DaemonService.GetConfig:input_type -> daemon.GetConfigRequest
29, // 42: daemon.DaemonService.ListNetworks:input_type -> daemon.ListNetworksRequest
31, // 43: daemon.DaemonService.SelectNetworks:input_type -> daemon.SelectNetworksRequest
31, // 44: daemon.DaemonService.DeselectNetworks:input_type -> daemon.SelectNetworksRequest
5, // 45: daemon.DaemonService.ForwardingRules:input_type -> daemon.EmptyRequest
38, // 46: daemon.DaemonService.DebugBundle:input_type -> daemon.DebugBundleRequest
40, // 47: daemon.DaemonService.GetLogLevel:input_type -> daemon.GetLogLevelRequest
42, // 48: daemon.DaemonService.SetLogLevel:input_type -> daemon.SetLogLevelRequest
45, // 49: daemon.DaemonService.ListStates:input_type -> daemon.ListStatesRequest
47, // 50: daemon.DaemonService.CleanState:input_type -> daemon.CleanStateRequest
49, // 51: daemon.DaemonService.DeleteState:input_type -> daemon.DeleteStateRequest
51, // 52: daemon.DaemonService.SetSyncResponsePersistence:input_type -> daemon.SetSyncResponsePersistenceRequest
54, // 53: daemon.DaemonService.TracePacket:input_type -> daemon.TracePacketRequest
57, // 54: daemon.DaemonService.SubscribeEvents:input_type -> daemon.SubscribeRequest
59, // 55: daemon.DaemonService.GetEvents:input_type -> daemon.GetEventsRequest
61, // 56: daemon.DaemonService.SwitchProfile:input_type -> daemon.SwitchProfileRequest
63, // 57: daemon.DaemonService.SetConfig:input_type -> daemon.SetConfigRequest
65, // 58: daemon.DaemonService.AddProfile:input_type -> daemon.AddProfileRequest
67, // 59: daemon.DaemonService.RemoveProfile:input_type -> daemon.RemoveProfileRequest
69, // 60: daemon.DaemonService.ListProfiles:input_type -> daemon.ListProfilesRequest
72, // 61: daemon.DaemonService.GetActiveProfile:input_type -> daemon.GetActiveProfileRequest
74, // 62: daemon.DaemonService.Logout:input_type -> daemon.LogoutRequest
76, // 63: daemon.DaemonService.GetFeatures:input_type -> daemon.GetFeaturesRequest
78, // 64: daemon.DaemonService.TriggerUpdate:input_type -> daemon.TriggerUpdateRequest
80, // 65: daemon.DaemonService.GetPeerSSHHostKey:input_type -> daemon.GetPeerSSHHostKeyRequest
82, // 66: daemon.DaemonService.RequestJWTAuth:input_type -> daemon.RequestJWTAuthRequest
84, // 67: daemon.DaemonService.WaitJWTToken:input_type -> daemon.WaitJWTTokenRequest
86, // 68: daemon.DaemonService.StartCPUProfile:input_type -> daemon.StartCPUProfileRequest
88, // 69: daemon.DaemonService.StopCPUProfile:input_type -> daemon.StopCPUProfileRequest
6, // 70: daemon.DaemonService.NotifyOSLifecycle:input_type -> daemon.OSLifecycleRequest
90, // 71: daemon.DaemonService.GetInstallerResult:input_type -> daemon.InstallerResultRequest
92, // 72: daemon.DaemonService.ExposeService:input_type -> daemon.ExposeServiceRequest
9, // 73: daemon.DaemonService.Login:output_type -> daemon.LoginResponse
11, // 74: daemon.DaemonService.WaitSSOLogin:output_type -> daemon.WaitSSOLoginResponse
13, // 75: daemon.DaemonService.Up:output_type -> daemon.UpResponse
15, // 76: daemon.DaemonService.Status:output_type -> daemon.StatusResponse
17, // 77: daemon.DaemonService.Down:output_type -> daemon.DownResponse
19, // 78: daemon.DaemonService.GetConfig:output_type -> daemon.GetConfigResponse
30, // 79: daemon.DaemonService.ListNetworks:output_type -> daemon.ListNetworksResponse
32, // 80: daemon.DaemonService.SelectNetworks:output_type -> daemon.SelectNetworksResponse
32, // 81: daemon.DaemonService.DeselectNetworks:output_type -> daemon.SelectNetworksResponse
37, // 82: daemon.DaemonService.ForwardingRules:output_type -> daemon.ForwardingRulesResponse
39, // 83: daemon.DaemonService.DebugBundle:output_type -> daemon.DebugBundleResponse
41, // 84: daemon.DaemonService.GetLogLevel:output_type -> daemon.GetLogLevelResponse
43, // 85: daemon.DaemonService.SetLogLevel:output_type -> daemon.SetLogLevelResponse
46, // 86: daemon.DaemonService.ListStates:output_type -> daemon.ListStatesResponse
48, // 87: daemon.DaemonService.CleanState:output_type -> daemon.CleanStateResponse
50, // 88: daemon.DaemonService.DeleteState:output_type -> daemon.DeleteStateResponse
52, // 89: daemon.DaemonService.SetSyncResponsePersistence:output_type -> daemon.SetSyncResponsePersistenceResponse
56, // 90: daemon.DaemonService.TracePacket:output_type -> daemon.TracePacketResponse
58, // 91: daemon.DaemonService.SubscribeEvents:output_type -> daemon.SystemEvent
60, // 92: daemon.DaemonService.GetEvents:output_type -> daemon.GetEventsResponse
62, // 93: daemon.DaemonService.SwitchProfile:output_type -> daemon.SwitchProfileResponse
64, // 94: daemon.DaemonService.SetConfig:output_type -> daemon.SetConfigResponse
66, // 95: daemon.DaemonService.AddProfile:output_type -> daemon.AddProfileResponse
68, // 96: daemon.DaemonService.RemoveProfile:output_type -> daemon.RemoveProfileResponse
70, // 97: daemon.DaemonService.ListProfiles:output_type -> daemon.ListProfilesResponse
73, // 98: daemon.DaemonService.GetActiveProfile:output_type -> daemon.GetActiveProfileResponse
75, // 99: daemon.DaemonService.Logout:output_type -> daemon.LogoutResponse
77, // 100: daemon.DaemonService.GetFeatures:output_type -> daemon.GetFeaturesResponse
79, // 101: daemon.DaemonService.TriggerUpdate:output_type -> daemon.TriggerUpdateResponse
81, // 102: daemon.DaemonService.GetPeerSSHHostKey:output_type -> daemon.GetPeerSSHHostKeyResponse
83, // 103: daemon.DaemonService.RequestJWTAuth:output_type -> daemon.RequestJWTAuthResponse
85, // 104: daemon.DaemonService.WaitJWTToken:output_type -> daemon.WaitJWTTokenResponse
87, // 105: daemon.DaemonService.StartCPUProfile:output_type -> daemon.StartCPUProfileResponse
89, // 106: daemon.DaemonService.StopCPUProfile:output_type -> daemon.StopCPUProfileResponse
7, // 107: daemon.DaemonService.NotifyOSLifecycle:output_type -> daemon.OSLifecycleResponse
91, // 108: daemon.DaemonService.GetInstallerResult:output_type -> daemon.InstallerResultResponse
93, // 109: daemon.DaemonService.ExposeService:output_type -> daemon.ExposeServiceEvent
73, // [73:110] is the sub-list for method output_type
36, // [36:73] is the sub-list for method input_type
14, // 40: daemon.DaemonService.SubscribeStatus:input_type -> daemon.StatusRequest
16, // 41: daemon.DaemonService.Down:input_type -> daemon.DownRequest
18, // 42: daemon.DaemonService.GetConfig:input_type -> daemon.GetConfigRequest
29, // 43: daemon.DaemonService.ListNetworks:input_type -> daemon.ListNetworksRequest
31, // 44: daemon.DaemonService.SelectNetworks:input_type -> daemon.SelectNetworksRequest
31, // 45: daemon.DaemonService.DeselectNetworks:input_type -> daemon.SelectNetworksRequest
5, // 46: daemon.DaemonService.ForwardingRules:input_type -> daemon.EmptyRequest
38, // 47: daemon.DaemonService.DebugBundle:input_type -> daemon.DebugBundleRequest
40, // 48: daemon.DaemonService.GetLogLevel:input_type -> daemon.GetLogLevelRequest
42, // 49: daemon.DaemonService.SetLogLevel:input_type -> daemon.SetLogLevelRequest
45, // 50: daemon.DaemonService.ListStates:input_type -> daemon.ListStatesRequest
47, // 51: daemon.DaemonService.CleanState:input_type -> daemon.CleanStateRequest
49, // 52: daemon.DaemonService.DeleteState:input_type -> daemon.DeleteStateRequest
51, // 53: daemon.DaemonService.SetSyncResponsePersistence:input_type -> daemon.SetSyncResponsePersistenceRequest
54, // 54: daemon.DaemonService.TracePacket:input_type -> daemon.TracePacketRequest
57, // 55: daemon.DaemonService.SubscribeEvents:input_type -> daemon.SubscribeRequest
59, // 56: daemon.DaemonService.GetEvents:input_type -> daemon.GetEventsRequest
61, // 57: daemon.DaemonService.SwitchProfile:input_type -> daemon.SwitchProfileRequest
63, // 58: daemon.DaemonService.SetConfig:input_type -> daemon.SetConfigRequest
65, // 59: daemon.DaemonService.AddProfile:input_type -> daemon.AddProfileRequest
67, // 60: daemon.DaemonService.RemoveProfile:input_type -> daemon.RemoveProfileRequest
69, // 61: daemon.DaemonService.ListProfiles:input_type -> daemon.ListProfilesRequest
72, // 62: daemon.DaemonService.GetActiveProfile:input_type -> daemon.GetActiveProfileRequest
74, // 63: daemon.DaemonService.Logout:input_type -> daemon.LogoutRequest
76, // 64: daemon.DaemonService.GetFeatures:input_type -> daemon.GetFeaturesRequest
78, // 65: daemon.DaemonService.TriggerUpdate:input_type -> daemon.TriggerUpdateRequest
80, // 66: daemon.DaemonService.GetPeerSSHHostKey:input_type -> daemon.GetPeerSSHHostKeyRequest
82, // 67: daemon.DaemonService.RequestJWTAuth:input_type -> daemon.RequestJWTAuthRequest
84, // 68: daemon.DaemonService.WaitJWTToken:input_type -> daemon.WaitJWTTokenRequest
86, // 69: daemon.DaemonService.StartCPUProfile:input_type -> daemon.StartCPUProfileRequest
88, // 70: daemon.DaemonService.StopCPUProfile:input_type -> daemon.StopCPUProfileRequest
6, // 71: daemon.DaemonService.NotifyOSLifecycle:input_type -> daemon.OSLifecycleRequest
90, // 72: daemon.DaemonService.GetInstallerResult:input_type -> daemon.InstallerResultRequest
92, // 73: daemon.DaemonService.ExposeService:input_type -> daemon.ExposeServiceRequest
9, // 74: daemon.DaemonService.Login:output_type -> daemon.LoginResponse
11, // 75: daemon.DaemonService.WaitSSOLogin:output_type -> daemon.WaitSSOLoginResponse
13, // 76: daemon.DaemonService.Up:output_type -> daemon.UpResponse
15, // 77: daemon.DaemonService.Status:output_type -> daemon.StatusResponse
15, // 78: daemon.DaemonService.SubscribeStatus:output_type -> daemon.StatusResponse
17, // 79: daemon.DaemonService.Down:output_type -> daemon.DownResponse
19, // 80: daemon.DaemonService.GetConfig:output_type -> daemon.GetConfigResponse
30, // 81: daemon.DaemonService.ListNetworks:output_type -> daemon.ListNetworksResponse
32, // 82: daemon.DaemonService.SelectNetworks:output_type -> daemon.SelectNetworksResponse
32, // 83: daemon.DaemonService.DeselectNetworks:output_type -> daemon.SelectNetworksResponse
37, // 84: daemon.DaemonService.ForwardingRules:output_type -> daemon.ForwardingRulesResponse
39, // 85: daemon.DaemonService.DebugBundle:output_type -> daemon.DebugBundleResponse
41, // 86: daemon.DaemonService.GetLogLevel:output_type -> daemon.GetLogLevelResponse
43, // 87: daemon.DaemonService.SetLogLevel:output_type -> daemon.SetLogLevelResponse
46, // 88: daemon.DaemonService.ListStates:output_type -> daemon.ListStatesResponse
48, // 89: daemon.DaemonService.CleanState:output_type -> daemon.CleanStateResponse
50, // 90: daemon.DaemonService.DeleteState:output_type -> daemon.DeleteStateResponse
52, // 91: daemon.DaemonService.SetSyncResponsePersistence:output_type -> daemon.SetSyncResponsePersistenceResponse
56, // 92: daemon.DaemonService.TracePacket:output_type -> daemon.TracePacketResponse
58, // 93: daemon.DaemonService.SubscribeEvents:output_type -> daemon.SystemEvent
60, // 94: daemon.DaemonService.GetEvents:output_type -> daemon.GetEventsResponse
62, // 95: daemon.DaemonService.SwitchProfile:output_type -> daemon.SwitchProfileResponse
64, // 96: daemon.DaemonService.SetConfig:output_type -> daemon.SetConfigResponse
66, // 97: daemon.DaemonService.AddProfile:output_type -> daemon.AddProfileResponse
68, // 98: daemon.DaemonService.RemoveProfile:output_type -> daemon.RemoveProfileResponse
70, // 99: daemon.DaemonService.ListProfiles:output_type -> daemon.ListProfilesResponse
73, // 100: daemon.DaemonService.GetActiveProfile:output_type -> daemon.GetActiveProfileResponse
75, // 101: daemon.DaemonService.Logout:output_type -> daemon.LogoutResponse
77, // 102: daemon.DaemonService.GetFeatures:output_type -> daemon.GetFeaturesResponse
79, // 103: daemon.DaemonService.TriggerUpdate:output_type -> daemon.TriggerUpdateResponse
81, // 104: daemon.DaemonService.GetPeerSSHHostKey:output_type -> daemon.GetPeerSSHHostKeyResponse
83, // 105: daemon.DaemonService.RequestJWTAuth:output_type -> daemon.RequestJWTAuthResponse
85, // 106: daemon.DaemonService.WaitJWTToken:output_type -> daemon.WaitJWTTokenResponse
87, // 107: daemon.DaemonService.StartCPUProfile:output_type -> daemon.StartCPUProfileResponse
89, // 108: daemon.DaemonService.StopCPUProfile:output_type -> daemon.StopCPUProfileResponse
7, // 109: daemon.DaemonService.NotifyOSLifecycle:output_type -> daemon.OSLifecycleResponse
91, // 110: daemon.DaemonService.GetInstallerResult:output_type -> daemon.InstallerResultResponse
93, // 111: daemon.DaemonService.ExposeService:output_type -> daemon.ExposeServiceEvent
74, // [74:112] is the sub-list for method output_type
36, // [36:74] is the sub-list for method input_type
36, // [36:36] is the sub-list for extension type_name
36, // [36:36] is the sub-list for extension extendee
0, // [0:36] is the sub-list for field type_name

View File

@@ -24,6 +24,12 @@ service DaemonService {
// Status of the service.
rpc Status(StatusRequest) returns (StatusResponse) {}
// SubscribeStatus pushes a fresh StatusResponse on connection state
// changes (Connected / Disconnected / Connecting / address change /
// peers list change). The first message on the stream is the current
// snapshot, so a freshly-subscribed UI doesn't need to also call Status.
rpc SubscribeStatus(StatusRequest) returns (stream StatusResponse) {}
// Down stops engine work in the daemon.
rpc Down(DownRequest) returns (DownResponse) {}
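
A hypothetical client-side loop against this RPC (function and handler names are illustrative; assumes `context` and the generated `client/proto` package are imported). Because the first message is the current snapshot, the loop needs no separate unary `Status` call after (re)subscribing:

```go
func watchStatus(ctx context.Context, client proto.DaemonServiceClient,
	apply func(*proto.StatusResponse)) error {
	stream, err := client.SubscribeStatus(ctx, &proto.StatusRequest{})
	if err != nil {
		return err
	}
	for {
		resp, err := stream.Recv() // blocks until the next state change
		if err != nil {
			return err // io.EOF on server close, context error on cancel
		}
		apply(resp)
	}
}
```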

View File

@@ -27,6 +27,11 @@ type DaemonServiceClient interface {
Up(ctx context.Context, in *UpRequest, opts ...grpc.CallOption) (*UpResponse, error)
// Status of the service.
Status(ctx context.Context, in *StatusRequest, opts ...grpc.CallOption) (*StatusResponse, error)
// SubscribeStatus pushes a fresh StatusResponse on connection state
// changes (Connected / Disconnected / Connecting / address change /
// peers list change). The first message on the stream is the current
// snapshot, so a freshly-subscribed UI doesn't need to also call Status.
SubscribeStatus(ctx context.Context, in *StatusRequest, opts ...grpc.CallOption) (DaemonService_SubscribeStatusClient, error)
// Down stops engine work in the daemon.
Down(ctx context.Context, in *DownRequest, opts ...grpc.CallOption) (*DownResponse, error)
// GetConfig of the daemon.
@@ -127,6 +132,38 @@ func (c *daemonServiceClient) Status(ctx context.Context, in *StatusRequest, opt
return out, nil
}
func (c *daemonServiceClient) SubscribeStatus(ctx context.Context, in *StatusRequest, opts ...grpc.CallOption) (DaemonService_SubscribeStatusClient, error) {
stream, err := c.cc.NewStream(ctx, &DaemonService_ServiceDesc.Streams[0], "/daemon.DaemonService/SubscribeStatus", opts...)
if err != nil {
return nil, err
}
x := &daemonServiceSubscribeStatusClient{stream}
if err := x.ClientStream.SendMsg(in); err != nil {
return nil, err
}
if err := x.ClientStream.CloseSend(); err != nil {
return nil, err
}
return x, nil
}
type DaemonService_SubscribeStatusClient interface {
Recv() (*StatusResponse, error)
grpc.ClientStream
}
type daemonServiceSubscribeStatusClient struct {
grpc.ClientStream
}
func (x *daemonServiceSubscribeStatusClient) Recv() (*StatusResponse, error) {
m := new(StatusResponse)
if err := x.ClientStream.RecvMsg(m); err != nil {
return nil, err
}
return m, nil
}
func (c *daemonServiceClient) Down(ctx context.Context, in *DownRequest, opts ...grpc.CallOption) (*DownResponse, error) {
out := new(DownResponse)
err := c.cc.Invoke(ctx, "/daemon.DaemonService/Down", in, out, opts...)
@@ -254,7 +291,7 @@ func (c *daemonServiceClient) TracePacket(ctx context.Context, in *TracePacketRe
}
func (c *daemonServiceClient) SubscribeEvents(ctx context.Context, in *SubscribeRequest, opts ...grpc.CallOption) (DaemonService_SubscribeEventsClient, error) {
stream, err := c.cc.NewStream(ctx, &DaemonService_ServiceDesc.Streams[0], "/daemon.DaemonService/SubscribeEvents", opts...)
stream, err := c.cc.NewStream(ctx, &DaemonService_ServiceDesc.Streams[1], "/daemon.DaemonService/SubscribeEvents", opts...)
if err != nil {
return nil, err
}
@@ -439,7 +476,7 @@ func (c *daemonServiceClient) GetInstallerResult(ctx context.Context, in *Instal
}
func (c *daemonServiceClient) ExposeService(ctx context.Context, in *ExposeServiceRequest, opts ...grpc.CallOption) (DaemonService_ExposeServiceClient, error) {
stream, err := c.cc.NewStream(ctx, &DaemonService_ServiceDesc.Streams[1], "/daemon.DaemonService/ExposeService", opts...)
stream, err := c.cc.NewStream(ctx, &DaemonService_ServiceDesc.Streams[2], "/daemon.DaemonService/ExposeService", opts...)
if err != nil {
return nil, err
}
@@ -483,6 +520,11 @@ type DaemonServiceServer interface {
Up(context.Context, *UpRequest) (*UpResponse, error)
// Status of the service.
Status(context.Context, *StatusRequest) (*StatusResponse, error)
// SubscribeStatus pushes a fresh StatusResponse on connection state
// changes (Connected / Disconnected / Connecting / address change /
// peers list change). The first message on the stream is the current
// snapshot, so a freshly-subscribed UI doesn't need to also call Status.
SubscribeStatus(*StatusRequest, DaemonService_SubscribeStatusServer) error
// Down stops engine work in the daemon.
Down(context.Context, *DownRequest) (*DownResponse, error)
// GetConfig of the daemon.
@@ -556,6 +598,9 @@ func (UnimplementedDaemonServiceServer) Up(context.Context, *UpRequest) (*UpResp
func (UnimplementedDaemonServiceServer) Status(context.Context, *StatusRequest) (*StatusResponse, error) {
return nil, status.Errorf(codes.Unimplemented, "method Status not implemented")
}
func (UnimplementedDaemonServiceServer) SubscribeStatus(*StatusRequest, DaemonService_SubscribeStatusServer) error {
return status.Errorf(codes.Unimplemented, "method SubscribeStatus not implemented")
}
func (UnimplementedDaemonServiceServer) Down(context.Context, *DownRequest) (*DownResponse, error) {
return nil, status.Errorf(codes.Unimplemented, "method Down not implemented")
}
@@ -740,6 +785,27 @@ func _DaemonService_Status_Handler(srv interface{}, ctx context.Context, dec fun
return interceptor(ctx, in, info, handler)
}
func _DaemonService_SubscribeStatus_Handler(srv interface{}, stream grpc.ServerStream) error {
m := new(StatusRequest)
if err := stream.RecvMsg(m); err != nil {
return err
}
return srv.(DaemonServiceServer).SubscribeStatus(m, &daemonServiceSubscribeStatusServer{stream})
}
type DaemonService_SubscribeStatusServer interface {
Send(*StatusResponse) error
grpc.ServerStream
}
type daemonServiceSubscribeStatusServer struct {
grpc.ServerStream
}
func (x *daemonServiceSubscribeStatusServer) Send(m *StatusResponse) error {
return x.ServerStream.SendMsg(m)
}
func _DaemonService_Down_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(DownRequest)
if err := dec(in); err != nil {
@@ -1489,6 +1555,11 @@ var DaemonService_ServiceDesc = grpc.ServiceDesc{
},
},
Streams: []grpc.StreamDesc{
{
StreamName: "SubscribeStatus",
Handler: _DaemonService_SubscribeStatus_Handler,
ServerStreams: true,
},
{
StreamName: "SubscribeEvents",
Handler: _DaemonService_SubscribeEvents_Handler,

View File

@@ -1101,6 +1101,13 @@ func (s *Server) Status(
}
}
return s.buildStatusResponse(msg)
}
// buildStatusResponse composes a StatusResponse from the current daemon
// state. Shared between the unary Status RPC and the SubscribeStatus
// stream so both paths return identical snapshots.
func (s *Server) buildStatusResponse(msg *proto.StatusRequest) (*proto.StatusResponse, error) {
status, err := internal.CtxGetState(s.rootCtx).Status()
if err != nil {
return nil, err

View File

@@ -0,0 +1,57 @@
package server
import (
log "github.com/sirupsen/logrus"
"github.com/netbirdio/netbird/client/proto"
)
// SubscribeStatus pushes a fresh StatusResponse on every connection state
// change. The first message is the current snapshot, so a re-subscribing
// client doesn't need to also call Status. Subsequent messages fire when
// the peer recorder reports any of: connected/disconnected/connecting,
// management or signal flip, address change, or peers list change.
//
// The change channel coalesces bursts into a single tick. If the consumer
// is slow the daemon drops the extra ticks rather than blocking, and the
// next snapshot the consumer pulls already reflects everything.
func (s *Server) SubscribeStatus(req *proto.StatusRequest, stream proto.DaemonService_SubscribeStatusServer) error {
subID, ch := s.statusRecorder.SubscribeToStateChanges()
defer func() {
s.statusRecorder.UnsubscribeFromStateChanges(subID)
log.Debug("client unsubscribed from status updates")
}()
log.Debug("client subscribed to status updates")
if err := s.sendStatusSnapshot(req, stream); err != nil {
return err
}
for {
select {
case _, ok := <-ch:
if !ok {
return nil
}
if err := s.sendStatusSnapshot(req, stream); err != nil {
return err
}
case <-stream.Context().Done():
return nil
}
}
}
func (s *Server) sendStatusSnapshot(req *proto.StatusRequest, stream proto.DaemonService_SubscribeStatusServer) error {
resp, err := s.buildStatusResponse(req)
if err != nil {
log.Warnf("build status snapshot for stream: %v", err)
return err
}
if err := stream.Send(resp); err != nil {
log.Warnf("send status snapshot to stream: %v", err)
return err
}
return nil
}
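
The drop-on-full contract in one small sketch (illustrative only, not part of the change; `ClientStart` is simply a convenient exported trigger for `notifyStateChange`):

```go
func demoCoalescing(recorder *peer.Status) {
	id, ch := recorder.SubscribeToStateChanges()
	defer recorder.UnsubscribeFromStateChanges(id)

	recorder.ClientStart() // each call runs notifyStateChange
	recorder.ClientStart()
	recorder.ClientStart()

	<-ch // exactly one tick is pending after the burst
	select {
	case <-ch:
		// not reached: the extra ticks were dropped, not queued
	default:
	}
}
```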

client/ui-wails/.gitignore (vendored, new file, 8 lines)
View File

@@ -0,0 +1,8 @@
.task
bin
frontend/dist
frontend/node_modules
frontend/bindings
frontend/.vite
build/linux/appimage/build
build/windows/nsis/MicrosoftEdgeWebview2Setup.exe

client/ui-wails/README.md (new file, 100 lines)
View File

@@ -0,0 +1,100 @@
# NetBird desktop UI (Wails3 + React)
Replaces `client/ui` (Fyne). One binary on Windows / macOS / Linux,
talks to the NetBird daemon over gRPC, renders a React frontend in a
WebView.
## Prerequisites
- Go ≥ 1.25, Node ≥ 20, **pnpm** (`corepack enable && corepack prepare pnpm@latest --activate`)
- `wails3` CLI: `go install github.com/wailsapp/wails/v3/cmd/wails3@latest`
- `task`: `go install github.com/go-task/task/v3/cmd/task@latest`
- A running NetBird daemon (default: `unix:///var/run/netbird.sock`,
Windows `tcp://127.0.0.1:41731`)
- Linux only: `libwebkit2gtk-4.1-dev`, `libgtk-3-dev`,
`libayatana-appindicator3-dev`
## Develop without rebuilding
```bash
cd client/ui-wails
task dev
```
`task dev` runs Vite (port 9245) + the Go binary + a `*.go` watcher.
Frontend edits hot-reload instantly. Go edits trigger a rebuild and
relaunch. Pass daemon flags after `--`:
```bash
task dev -- --daemon-addr=tcp://127.0.0.1:41731
```
For pure UI work (no native window, fastest loop):
```bash
cd frontend && pnpm dev
```
## Production build
```bash
task build
```
Output in `bin/`. Frontend assets are embedded into the binary.
### Cross-compile Windows from Linux
Install the mingw-w64 toolchain once:
```bash
sudo apt install gcc-mingw-w64-x86-64 # Debian/Ubuntu
sudo dnf install mingw64-gcc # Fedora
sudo pacman -S mingw-w64-gcc # Arch
```
Then:
```bash
CGO_ENABLED=1 task windows:build
```
Produces `bin/netbird-ui.exe`. macOS cross-compile from Linux is not
supported (signing and notarization need a real Mac).
### Windows console build (logs in the terminal)
Default `windows:build` links the binary as a Windows GUI app, which
detaches from the launching console — `logrus` output, `fmt.Println`,
and panics go nowhere visible. To debug tray/event/daemon issues:
```bash
CGO_ENABLED=1 task windows:build:console
```
Produces `bin/netbird-ui-console.exe`. Run it from `cmd.exe` /
PowerShell / Windows Terminal and stdout/stderr land in that
terminal. The same task works on a native Windows build (drop
`CGO_ENABLED=1` if your environment already sets it).
## Regenerating bindings
When a Go service signature changes:
```bash
wails3 generate bindings
```
`task dev` does this automatically on `*.go` save.
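For illustration, a hypothetical bound service (not from this repo): changing the signature of any exported method like this one is what invalidates the generated TypeScript bindings.

```go
// StatusService is an example Wails-bound service.
type StatusService struct{}

// ConnectionState becomes a typed frontend call in the generated bindings;
// renaming it or changing its signature requires regenerating bindings.
func (s *StatusService) ConnectionState() string {
	return "connected"
}
```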
## Tray icons
Source SVGs live in `assets/svg/` (state.svg + state-macos.svg). After editing
any SVG, rasterize to the PNGs the Go side embeds:
```bash
task common:generate:tray:icons
```
Requires Inkscape. Commit the resulting `assets/*.png` files alongside the
SVG change so CI doesn't need Inkscape installed.

View File

@@ -0,0 +1,58 @@
version: '3'
includes:
common: ./build/Taskfile.yml
windows: ./build/windows/Taskfile.yml
darwin: ./build/darwin/Taskfile.yml
linux: ./build/linux/Taskfile.yml
vars:
APP_NAME: "netbird-ui"
BIN_DIR: "bin"
VITE_PORT: '{{.WAILS_VITE_PORT | default 9245}}'
tasks:
build:
summary: Builds the application
cmds:
- task: "{{OS}}:build"
package:
summary: Packages a production build of the application
cmds:
- task: "{{OS}}:package"
run:
summary: Runs the application
cmds:
- task: "{{OS}}:run"
dev:
summary: Runs the application in development mode
cmds:
- wails3 dev -config ./build/config.yml -port {{.VITE_PORT}}
setup:docker:
summary: Builds Docker image for cross-compilation (~800MB download)
cmds:
- task: common:setup:docker
build:server:
summary: Builds the application in server mode (no GUI, HTTP server only)
cmds:
- task: common:build:server
run:server:
summary: Runs the application in server mode
cmds:
- task: common:run:server
build:docker:
summary: Builds a Docker image for server mode deployment
cmds:
- task: common:build:docker
run:docker:
summary: Builds and runs the Docker image
cmds:
- task: common:run:docker

Binary files not shown (17 new PNG image assets, 3.4–5.3 KiB each).

View File

@@ -0,0 +1,14 @@
<!--
NetBird base mark, centered in a 32×32 viewBox with badge-friendly margins.
Preserved across every state icon as required by the design plan; state
badges sit on top in the bottom-right 12×12 area (x=18..30, y=18..30).
The mark itself is taken verbatim from dashboard/src/assets/netbird.svg
(three orange/red paths) and translated into the 32×32 grid.
-->
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 32 32" fill="none">
<g id="netbird-mark" transform="translate(2 5) scale(0.8)">
<path d="M21.4631 0.523438C17.8173 0.857913 16.0028 2.95675 15.3171 4.01871L4.66406 22.4734H17.5163L30.1929 0.523438H21.4631Z" fill="#F68330"/>
<path d="M17.5265 22.4737L0 3.88525C0 3.88525 19.8177 -1.44128 21.7493 15.1738L17.5265 22.4737Z" fill="#F68330"/>
<path d="M14.9236 4.70563L9.54688 14.0208L17.5158 22.4747L21.7385 15.158C21.0696 9.44682 18.2851 6.32784 14.9236 4.69727" fill="#F05252"/>
</g>
</svg>


View File

@@ -0,0 +1,17 @@
<!--
App icon source. Rasterized to build/appicon.png by
`task common:generate:icons`, which then drives `wails3 generate icons`
to produce the per-platform .ico / .icns artifacts.
The mark fills ~90% of the canvas width (with vertical centering) so
Windows Explorer and macOS Finder render a recognisable bird at small
sizes. The mark's native aspect (31:23) is wider than tall, so width is
the binding dimension.
-->
<svg xmlns="http://www.w3.org/2000/svg" width="1024" height="1024" viewBox="0 0 1024 1024">
<g transform="translate(37 170) scale(29.7)">
<path d="M21.4631 0.523438C17.8173 0.857913 16.0028 2.95675 15.3171 4.01871L4.66406 22.4734H17.5163L30.1929 0.523438H21.4631Z" fill="#F68330"/>
<path d="M17.5265 22.4737L0 3.88525C0 3.88525 19.8177 -1.44128 21.7493 15.1738L17.5265 22.4737Z" fill="#F68330"/>
<path d="M14.9236 4.70563L9.54688 14.0208L17.5158 22.4747L21.7385 15.158C21.0696 9.44682 18.2851 6.32784 14.9236 4.69727" fill="#F05252"/>
</g>
</svg>


View File

@@ -0,0 +1,10 @@
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 32 32" fill="none">
<g transform="translate(0.5 4.5)" fill="black">
<path d="M21.4631 0.523438C17.8173 0.857913 16.0028 2.95675 15.3171 4.01871L4.66406 22.4734H17.5163L30.1929 0.523438H21.4631Z"/>
<path d="M17.5265 22.4737L0 3.88525C0 3.88525 19.8177 -1.44128 21.7493 15.1738L17.5265 22.4737Z"/>
<path d="M14.9236 4.70563L9.54688 14.0208L17.5158 22.4747L21.7385 15.158C21.0696 9.44682 18.2851 6.32784 14.9236 4.69727"/>
</g>
<circle cx="25" cy="25" r="7" fill="white"/>
<circle cx="25" cy="25" r="6" fill="black"/>
<path d="M22 25 L24 27 L28 23" stroke="white" stroke-width="1.8" stroke-linecap="round" stroke-linejoin="round" fill="none"/>
</svg>


View File

@@ -0,0 +1,14 @@
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 32 32" fill="none">
<!-- Mark fills the canvas. Badge overlaps the bottom-right corner so most
of the mark is still visible at 16 px tray sizes. -->
<g transform="translate(0.5 4.5) scale(1.0)">
<path d="M21.4631 0.523438C17.8173 0.857913 16.0028 2.95675 15.3171 4.01871L4.66406 22.4734H17.5163L30.1929 0.523438H21.4631Z" fill="#F68330"/>
<path d="M17.5265 22.4737L0 3.88525C0 3.88525 19.8177 -1.44128 21.7493 15.1738L17.5265 22.4737Z" fill="#F68330"/>
<path d="M14.9236 4.70563L9.54688 14.0208L17.5158 22.4747L21.7385 15.158C21.0696 9.44682 18.2851 6.32784 14.9236 4.69727" fill="#F05252"/>
</g>
<!-- connected badge: green check, ~25% canvas, with a thin white halo so
the green disc reads cleanly on top of the orange mark. -->
<circle cx="25" cy="25" r="7" fill="white"/>
<circle cx="25" cy="25" r="6" fill="#0E9F6E"/>
<path d="M22 25 L24 27 L28 23" stroke="white" stroke-width="1.8" stroke-linecap="round" stroke-linejoin="round" fill="none"/>
</svg>


View File

@@ -0,0 +1,9 @@
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 32 32" fill="none">
<g transform="translate(0.5 4.5)" fill="black">
<path d="M21.4631 0.523438C17.8173 0.857913 16.0028 2.95675 15.3171 4.01871L4.66406 22.4734H17.5163L30.1929 0.523438H21.4631Z"/>
<path d="M17.5265 22.4737L0 3.88525C0 3.88525 19.8177 -1.44128 21.7493 15.1738L17.5265 22.4737Z"/>
<path d="M14.9236 4.70563L9.54688 14.0208L17.5158 22.4747L21.7385 15.158C21.0696 9.44682 18.2851 6.32784 14.9236 4.69727"/>
</g>
<circle cx="25" cy="25" r="7" fill="white"/>
<circle cx="25" cy="25" r="6" fill="none" stroke="black" stroke-width="1.8" stroke-dasharray="2.5 2.5" stroke-linecap="round"/>
</svg>


View File

@@ -0,0 +1,9 @@
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 32 32" fill="none">
<g transform="translate(0.5 4.5) scale(1.0)">
<path d="M21.4631 0.523438C17.8173 0.857913 16.0028 2.95675 15.3171 4.01871L4.66406 22.4734H17.5163L30.1929 0.523438H21.4631Z" fill="#F68330"/>
<path d="M17.5265 22.4737L0 3.88525C0 3.88525 19.8177 -1.44128 21.7493 15.1738L17.5265 22.4737Z" fill="#F68330"/>
<path d="M14.9236 4.70563L9.54688 14.0208L17.5158 22.4747L21.7385 15.158C21.0696 9.44682 18.2851 6.32784 14.9236 4.69727" fill="#F05252"/>
</g>
<circle cx="25" cy="25" r="7" fill="white"/>
<circle cx="25" cy="25" r="6" fill="none" stroke="#F68330" stroke-width="1.8" stroke-dasharray="2.5 2.5" stroke-linecap="round"/>
</svg>


View File

@@ -0,0 +1,10 @@
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 32 32" fill="none">
<g transform="translate(0.5 4.5)" fill="black" opacity="0.5">
<path d="M21.4631 0.523438C17.8173 0.857913 16.0028 2.95675 15.3171 4.01871L4.66406 22.4734H17.5163L30.1929 0.523438H21.4631Z"/>
<path d="M17.5265 22.4737L0 3.88525C0 3.88525 19.8177 -1.44128 21.7493 15.1738L17.5265 22.4737Z"/>
<path d="M14.9236 4.70563L9.54688 14.0208L17.5158 22.4747L21.7385 15.158C21.0696 9.44682 18.2851 6.32784 14.9236 4.69727"/>
</g>
<circle cx="25" cy="25" r="7" fill="white"/>
<circle cx="25" cy="25" r="6" fill="none" stroke="black" stroke-width="1.6"/>
<line x1="21.5" y1="25" x2="28.5" y2="25" stroke="black" stroke-width="1.6" stroke-linecap="round"/>
</svg>


View File

@@ -0,0 +1,10 @@
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 32 32" fill="none">
<g transform="translate(0.5 4.5) scale(1.0)" opacity="0.45">
<path d="M21.4631 0.523438C17.8173 0.857913 16.0028 2.95675 15.3171 4.01871L4.66406 22.4734H17.5163L30.1929 0.523438H21.4631Z" fill="#F68330"/>
<path d="M17.5265 22.4737L0 3.88525C0 3.88525 19.8177 -1.44128 21.7493 15.1738L17.5265 22.4737Z" fill="#F68330"/>
<path d="M14.9236 4.70563L9.54688 14.0208L17.5158 22.4747L21.7385 15.158C21.0696 9.44682 18.2851 6.32784 14.9236 4.69727" fill="#F05252"/>
</g>
<circle cx="25" cy="25" r="7" fill="white"/>
<circle cx="25" cy="25" r="6" fill="none" stroke="#7c8994" stroke-width="1.6"/>
<line x1="21.5" y1="25" x2="28.5" y2="25" stroke="#7c8994" stroke-width="1.6" stroke-linecap="round"/>
</svg>


View File

@@ -0,0 +1,11 @@
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 32 32" fill="none">
<g transform="translate(0.5 4.5)" fill="black" opacity="0.7">
<path d="M21.4631 0.523438C17.8173 0.857913 16.0028 2.95675 15.3171 4.01871L4.66406 22.4734H17.5163L30.1929 0.523438H21.4631Z"/>
<path d="M17.5265 22.4737L0 3.88525C0 3.88525 19.8177 -1.44128 21.7493 15.1738L17.5265 22.4737Z"/>
<path d="M14.9236 4.70563L9.54688 14.0208L17.5158 22.4747L21.7385 15.158C21.0696 9.44682 18.2851 6.32784 14.9236 4.69727"/>
</g>
<circle cx="25" cy="25" r="7" fill="white"/>
<circle cx="25" cy="25" r="6" fill="black"/>
<line x1="25" y1="21.5" x2="25" y2="26" stroke="white" stroke-width="1.8" stroke-linecap="round"/>
<circle cx="25" cy="28.4" r="1.0" fill="white"/>
</svg>


View File

@@ -0,0 +1,11 @@
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 32 32" fill="none">
<g transform="translate(0.5 4.5) scale(1.0)" opacity="0.7">
<path d="M21.4631 0.523438C17.8173 0.857913 16.0028 2.95675 15.3171 4.01871L4.66406 22.4734H17.5163L30.1929 0.523438H21.4631Z" fill="#F68330"/>
<path d="M17.5265 22.4737L0 3.88525C0 3.88525 19.8177 -1.44128 21.7493 15.1738L17.5265 22.4737Z" fill="#F68330"/>
<path d="M14.9236 4.70563L9.54688 14.0208L17.5158 22.4747L21.7385 15.158C21.0696 9.44682 18.2851 6.32784 14.9236 4.69727" fill="#F05252"/>
</g>
<circle cx="25" cy="25" r="7" fill="white"/>
<circle cx="25" cy="25" r="6" fill="#E02424"/>
<line x1="25" y1="21.5" x2="25" y2="26" stroke="white" stroke-width="1.8" stroke-linecap="round"/>
<circle cx="25" cy="28.4" r="1.0" fill="white"/>
</svg>


View File

@@ -0,0 +1,10 @@
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 32 32" fill="none">
<g transform="translate(0.5 4.5)" fill="black">
<path d="M21.4631 0.523438C17.8173 0.857913 16.0028 2.95675 15.3171 4.01871L4.66406 22.4734H17.5163L30.1929 0.523438H21.4631Z"/>
<path d="M17.5265 22.4737L0 3.88525C0 3.88525 19.8177 -1.44128 21.7493 15.1738L17.5265 22.4737Z"/>
<path d="M14.9236 4.70563L9.54688 14.0208L17.5158 22.4747L21.7385 15.158C21.0696 9.44682 18.2851 6.32784 14.9236 4.69727"/>
</g>
<circle cx="25" cy="25" r="7" fill="white"/>
<circle cx="25" cy="25" r="6" fill="black"/>
<path d="M25 22 L25 28 M22.5 24.5 L25 22 L27.5 24.5" stroke="white" stroke-width="1.6" stroke-linecap="round" stroke-linejoin="round" fill="none"/>
</svg>


View File

@@ -0,0 +1,10 @@
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 32 32" fill="none">
<g transform="translate(0.5 4.5) scale(1.0)">
<path d="M21.4631 0.523438C17.8173 0.857913 16.0028 2.95675 15.3171 4.01871L4.66406 22.4734H17.5163L30.1929 0.523438H21.4631Z" fill="#F68330"/>
<path d="M17.5265 22.4737L0 3.88525C0 3.88525 19.8177 -1.44128 21.7493 15.1738L17.5265 22.4737Z" fill="#F68330"/>
<path d="M14.9236 4.70563L9.54688 14.0208L17.5158 22.4747L21.7385 15.158C21.0696 9.44682 18.2851 6.32784 14.9236 4.69727" fill="#F05252"/>
</g>
<circle cx="25" cy="25" r="7" fill="white"/>
<circle cx="25" cy="25" r="6" fill="#1C64F2"/>
<path d="M25 22 L25 28 M22.5 24.5 L25 22 L27.5 24.5" stroke="white" stroke-width="1.6" stroke-linecap="round" stroke-linejoin="round" fill="none"/>
</svg>


View File

@@ -0,0 +1,10 @@
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 32 32" fill="none">
<g transform="translate(0.5 4.5)" fill="black" opacity="0.5">
<path d="M21.4631 0.523438C17.8173 0.857913 16.0028 2.95675 15.3171 4.01871L4.66406 22.4734H17.5163L30.1929 0.523438H21.4631Z"/>
<path d="M17.5265 22.4737L0 3.88525C0 3.88525 19.8177 -1.44128 21.7493 15.1738L17.5265 22.4737Z"/>
<path d="M14.9236 4.70563L9.54688 14.0208L17.5158 22.4747L21.7385 15.158C21.0696 9.44682 18.2851 6.32784 14.9236 4.69727"/>
</g>
<circle cx="25" cy="25" r="7" fill="white"/>
<circle cx="25" cy="25" r="6" fill="black"/>
<path d="M25 22 L25 28 M22.5 24.5 L25 22 L27.5 24.5" stroke="white" stroke-width="1.6" stroke-linecap="round" stroke-linejoin="round" fill="none"/>
</svg>


View File

@@ -0,0 +1,10 @@
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 32 32" fill="none">
<g transform="translate(0.5 4.5) scale(1.0)" opacity="0.45">
<path d="M21.4631 0.523438C17.8173 0.857913 16.0028 2.95675 15.3171 4.01871L4.66406 22.4734H17.5163L30.1929 0.523438H21.4631Z" fill="#F68330"/>
<path d="M17.5265 22.4737L0 3.88525C0 3.88525 19.8177 -1.44128 21.7493 15.1738L17.5265 22.4737Z" fill="#F68330"/>
<path d="M14.9236 4.70563L9.54688 14.0208L17.5158 22.4747L21.7385 15.158C21.0696 9.44682 18.2851 6.32784 14.9236 4.69727" fill="#F05252"/>
</g>
<circle cx="25" cy="25" r="7" fill="white"/>
<circle cx="25" cy="25" r="6" fill="#1C64F2"/>
<path d="M25 22 L25 28 M22.5 24.5 L25 22 L27.5 24.5" stroke="white" stroke-width="1.6" stroke-linecap="round" stroke-linejoin="round" fill="none"/>
</svg>


View File

@@ -0,0 +1,295 @@
version: '3'
tasks:
go:mod:tidy:
summary: Runs `go mod tidy`
internal: true
cmds:
- go mod tidy
install:frontend:deps:
summary: Install frontend dependencies
dir: frontend
sources:
- package.json
- pnpm-lock.yaml
generates:
- node_modules
preconditions:
- sh: pnpm --version
msg: "Looks like pnpm isn't installed. Install with: corepack enable && corepack prepare pnpm@latest --activate"
cmds:
- pnpm install
build:frontend:
label: build:frontend (DEV={{.DEV}})
summary: Build the frontend project
dir: frontend
sources:
- "**/*"
- exclude: node_modules/**/*
generates:
- dist/**/*
deps:
- task: install:frontend:deps
- task: generate:bindings
vars:
BUILD_FLAGS:
ref: .BUILD_FLAGS
cmds:
- pnpm run {{.BUILD_COMMAND}}
env:
PRODUCTION: '{{if eq .DEV "true"}}false{{else}}true{{end}}'
vars:
BUILD_COMMAND: '{{if eq .DEV "true"}}build:dev{{else}}build{{end}}'
frontend:vendor:puppertino:
summary: Fetches Puppertino CSS into frontend/public for consistent mobile styling
sources:
- frontend/public/puppertino/puppertino.css
generates:
- frontend/public/puppertino/puppertino.css
cmds:
- |
set -euo pipefail
mkdir -p frontend/public/puppertino
# If bundled Puppertino exists, prefer it. Otherwise, try to fetch, but don't fail build on error.
if [ ! -f frontend/public/puppertino/puppertino.css ]; then
echo "No bundled Puppertino found. Attempting to fetch from GitHub..."
if curl -fsSL https://raw.githubusercontent.com/codedgar/Puppertino/main/dist/css/full.css -o frontend/public/puppertino/puppertino.css; then
curl -fsSL https://raw.githubusercontent.com/codedgar/Puppertino/main/LICENSE -o frontend/public/puppertino/LICENSE || true
echo "Puppertino CSS downloaded to frontend/public/puppertino/puppertino.css"
else
echo "Warning: Could not fetch Puppertino CSS. Proceeding without download since template may bundle it."
fi
else
echo "Using bundled Puppertino at frontend/public/puppertino/puppertino.css"
fi
# Ensure index.html includes Puppertino CSS and button classes
INDEX_HTML=frontend/index.html
if [ -f "$INDEX_HTML" ]; then
if ! grep -q 'href="/puppertino/puppertino.css"' "$INDEX_HTML"; then
# Insert Puppertino link tag after style.css link
awk '
/href="\/style.css"\/?/ && !x { print; print " <link rel=\"stylesheet\" href=\"/puppertino/puppertino.css\"/>"; x=1; next }1
' "$INDEX_HTML" > "$INDEX_HTML.tmp" && mv "$INDEX_HTML.tmp" "$INDEX_HTML"
fi
# Replace default .btn with Puppertino primary button classes if present
sed -E -i'' 's/class=\"btn\"/class=\"p-btn p-prim-col\"/g' "$INDEX_HTML" || true
fi
generate:bindings:
label: generate:bindings (BUILD_FLAGS={{.BUILD_FLAGS}})
summary: Generates bindings for the frontend
deps:
- task: go:mod:tidy
sources:
- "**/*.[jt]s"
- exclude: frontend/**/*
- frontend/bindings/**/* # Rerun when switching between dev/production mode causes changes in output
- "**/*.go"
- go.mod
- go.sum
generates:
- frontend/bindings/**/*
cmds:
- wails3 generate bindings -f '{{.BUILD_FLAGS}}' -clean=true -ts
generate:icons:
summary: Generates Windows `.ico` and Mac `.icns` from an image; on macOS, `-iconcomposerinput appicon.icon -macassetdir darwin` also produces `Assets.car` from a `.icon` file (skipped on other platforms).
dir: build
sources:
- "appicon.png"
- "appicon.icon"
generates:
- "darwin/icons.icns"
- "windows/icon.ico"
cmds:
- wails3 generate icons -input appicon.png -macfilename darwin/icons.icns -windowsfilename windows/icon.ico -iconcomposerinput appicon.icon -macassetdir darwin
generate:tray:icons:
summary: Rebuild Windows multi-res .ico files from the per-state PNGs.
desc: |
The colored tray PNGs (assets/netbird-systemtray-<state>.png) and the
macOS template variants are committed to the repo as the canonical
source. This task only regenerates the Windows multi-resolution .ico
files from those PNGs by downscaling each to 16/24/32/48 px and
packing them with icotool, so Shell_NotifyIcon picks the frame
matching the user's DPI instead of downscaling a single large PNG.
Run after replacing any of the colored PNGs (e.g. when copying a new
version of the icons from client/ui/assets). The SVG sources in
assets/svg/ are kept for reference but are not built by default.
dir: assets
sources:
- "netbird-systemtray-connected.png"
- "netbird-systemtray-disconnected.png"
- "netbird-systemtray-connecting.png"
- "netbird-systemtray-error.png"
- "netbird-systemtray-update-connected.png"
- "netbird-systemtray-update-disconnected.png"
generates:
- "netbird-systemtray-*.ico"
preconditions:
- sh: command -v magick >/dev/null 2>&1 || command -v convert >/dev/null 2>&1
msg: "ImageMagick is required to downscale PNGs (apt install imagemagick)"
- sh: command -v icotool >/dev/null 2>&1
msg: "icotool is required to pack tray .ico files (apt install icoutils)"
cmds:
- |
set -euo pipefail
tmp=$(mktemp -d)
trap 'rm -rf "$tmp"' EXIT
resize=$(command -v magick || echo convert)
for state in connected disconnected connecting error update-connected update-disconnected; do
for sz in 16 24 32 48; do
"$resize" "netbird-systemtray-$state.png" -resize ${sz}x${sz} "$tmp/$state-$sz.png"
done
icotool -c -o "netbird-systemtray-$state.ico" \
"$tmp/$state-16.png" "$tmp/$state-24.png" "$tmp/$state-32.png" "$tmp/$state-48.png"
done
dev:frontend:
summary: Runs the frontend in development mode
dir: frontend
deps:
- task: install:frontend:deps
cmds:
- pnpm exec vite --port {{.VITE_PORT}} --strictPort
update:build-assets:
summary: Updates the build assets
dir: build
cmds:
- wails3 update build-assets -name "{{.APP_NAME}}" -binaryname "{{.APP_NAME}}" -config config.yml -dir .
build:server:
summary: Builds the application in server mode (no GUI, HTTP server only)
desc: |
Builds the application with the server build tag enabled.
Server mode runs as a pure HTTP server without native GUI dependencies.
Usage: task build:server
deps:
- task: build:frontend
vars:
BUILD_FLAGS:
ref: .BUILD_FLAGS
cmds:
- go build -tags server {{.BUILD_FLAGS}} -o {{.BIN_DIR}}/{{.APP_NAME}}-server{{exeExt}}
vars:
BUILD_FLAGS: "{{.BUILD_FLAGS}}"
run:server:
summary: Builds and runs the application in server mode
deps:
- task: build:server
cmds:
- ./{{.BIN_DIR}}/{{.APP_NAME}}-server{{exeExt}}
build:docker:
summary: Builds a Docker image for server mode deployment
desc: |
Creates a minimal Docker image containing the server mode binary.
The image is based on distroless for security and small size.
Usage: task build:docker [TAG=myapp:latest]
cmds:
- docker build -t {{.TAG | default (printf "%s:latest" .APP_NAME)}} -f build/docker/Dockerfile.server .
vars:
TAG: "{{.TAG}}"
preconditions:
- sh: docker info > /dev/null 2>&1
msg: "Docker is required. Please install Docker first."
- sh: test -f build/docker/Dockerfile.server
msg: "Dockerfile.server not found. Run 'wails3 update build-assets' to generate it."
run:docker:
summary: Builds and runs the Docker image
desc: |
Builds the Docker image and runs it, exposing port 8080.
Usage: task run:docker [TAG=myapp:latest] [PORT=8080]
Note: The internal container port is always 8080. The PORT variable
only changes the host port mapping. Ensure your app uses port 8080
or modify the Dockerfile to match your ServerOptions.Port setting.
deps:
- task: build:docker
vars:
TAG:
ref: .TAG
cmds:
- docker run --rm -p {{.PORT | default "8080"}}:8080 {{.TAG | default (printf "%s:latest" .APP_NAME)}}
vars:
TAG: "{{.TAG}}"
PORT: "{{.PORT}}"
setup:docker:
summary: Builds Docker image for cross-compilation (~800MB download)
desc: |
Builds the Docker image needed for cross-compiling to any platform.
Run this once to enable cross-platform builds from any OS.
cmds:
- docker build -t wails-cross -f build/docker/Dockerfile.cross build/docker/
preconditions:
- sh: docker info > /dev/null 2>&1
msg: "Docker is required. Please install Docker first."
ios:device:list:
summary: Lists connected iOS devices (UDIDs)
cmds:
- xcrun xcdevice list
ios:run:device:
summary: Build, install, and launch on a physical iPhone using Apple tools (xcodebuild/devicectl)
vars:
PROJECT: '{{.PROJECT}}' # e.g., build/ios/xcode/<YourProject>.xcodeproj
SCHEME: '{{.SCHEME}}' # e.g., ios.dev
CONFIG: '{{.CONFIG | default "Debug"}}'
DERIVED: '{{.DERIVED | default "build/ios/DerivedData"}}'
UDID: '{{.UDID}}' # from `task ios:device:list`
BUNDLE_ID: '{{.BUNDLE_ID}}' # e.g., com.yourco.wails.ios.dev
TEAM_ID: '{{.TEAM_ID}}' # optional, if your project is not already set up for signing
preconditions:
- sh: xcrun -f xcodebuild
msg: "xcodebuild not found. Please install Xcode."
- sh: xcrun -f devicectl
msg: "devicectl not found. Please update to Xcode 15+ (which includes devicectl)."
- sh: test -n '{{.PROJECT}}'
msg: "Set PROJECT to your .xcodeproj path (e.g., PROJECT=build/ios/xcode/App.xcodeproj)."
- sh: test -n '{{.SCHEME}}'
msg: "Set SCHEME to your app scheme (e.g., SCHEME=ios.dev)."
- sh: test -n '{{.UDID}}'
msg: "Set UDID to your device UDID (see: task ios:device:list)."
- sh: test -n '{{.BUNDLE_ID}}'
msg: "Set BUNDLE_ID to your app's bundle identifier (e.g., com.yourco.wails.ios.dev)."
cmds:
- |
set -euo pipefail
echo "Building for device: UDID={{.UDID}} SCHEME={{.SCHEME}} PROJECT={{.PROJECT}}"
XCB_ARGS=(
-project "{{.PROJECT}}"
-scheme "{{.SCHEME}}"
-configuration "{{.CONFIG}}"
-destination "id={{.UDID}}"
-derivedDataPath "{{.DERIVED}}"
-allowProvisioningUpdates
-allowProvisioningDeviceRegistration
)
# Optionally inject signing identifiers if provided
if [ -n '{{.TEAM_ID}}' ]; then XCB_ARGS+=(DEVELOPMENT_TEAM={{.TEAM_ID}}); fi
if [ -n '{{.BUNDLE_ID}}' ]; then XCB_ARGS+=(PRODUCT_BUNDLE_IDENTIFIER={{.BUNDLE_ID}}); fi
xcodebuild "${XCB_ARGS[@]}" build | xcpretty || true
# If xcpretty isn't installed, run without it
if [ "${PIPESTATUS[0]}" -ne 0 ]; then
xcodebuild "${XCB_ARGS[@]}" build
fi
# Find built .app
APP_PATH=$(find "{{.DERIVED}}/Build/Products" -type d -name "*.app" -maxdepth 3 | head -n 1)
if [ -z "$APP_PATH" ]; then
echo "Could not locate built .app under {{.DERIVED}}/Build/Products" >&2
exit 1
fi
echo "Installing: $APP_PATH"
xcrun devicectl device install app --device "{{.UDID}}" "$APP_PATH"
echo "Launching: {{.BUNDLE_ID}}"
xcrun devicectl device process launch --device "{{.UDID}}" --stderr console --stdout console "{{.BUNDLE_ID}}"

View File

@@ -0,0 +1,13 @@
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!--
macOS Icon Composer source. Designed on a 1024x1024 canvas with the bird
glyph centered and sized to ~75% of canvas width, leaving padding for
the system squircle treatment.
-->
<svg xmlns="http://www.w3.org/2000/svg" width="1024" height="1024" viewBox="0 0 1024 1024">
<g transform="translate(128, 227) scale(24.77)">
<path d="M21.4631 0.523438C17.8173 0.857913 16.0028 2.95675 15.3171 4.01871L4.66406 22.4734H17.5163L30.1929 0.523438H21.4631Z" fill="#F68330"/>
<path d="M17.5265 22.4737L0 3.88525C0 3.88525 19.8177 -1.44128 21.7493 15.1738L17.5265 22.4737Z" fill="#F68330"/>
<path d="M14.9236 4.70563L9.54688 14.0208L17.5158 22.4747L21.7385 15.158C21.0696 9.44682 18.2851 6.32784 14.9236 4.69727" fill="#F05252"/>
</g>
</svg>


View File

@@ -0,0 +1,26 @@
{
"fill" : {
"solid" : "srgb:1.00000,1.00000,1.00000,1.00000"
},
"groups" : [
{
"layers" : [
{
"image-name" : "wails_icon_vector.svg",
"name" : "wails_icon_vector"
}
],
"shadow" : {
"kind" : "neutral",
"opacity" : 0.5
},
"specular" : true
}
],
"supported-platforms" : {
"circles" : [
"watchOS"
],
"squares" : "shared"
}
}

Binary file not shown.


View File

@@ -0,0 +1,78 @@
# This file contains the configuration for this project.
# When you update `info` or `fileAssociations`, run `wails3 task common:update:build-assets` to update the assets.
# Note that this will overwrite any changes you have made to the assets.
version: '3'
# This information is used to generate the build assets.
info:
companyName: "My Company" # The name of the company
productName: "My Product" # The name of the application
productIdentifier: "com.mycompany.myproduct" # The unique product identifier
description: "A program that does X" # The application description
copyright: "(c) 2025, My Company" # Copyright text
comments: "Some Product Comments" # Comments
version: "0.0.1" # The application version
# cfBundleIconName: "appicon" # The macOS icon name in Assets.car icon bundles (optional)
# # Should match the name of your .icon file without the extension
# # If not set and Assets.car exists, defaults to "appicon"
# iOS build configuration (uncomment to customise iOS project generation)
# Note: Keys under `ios` OVERRIDE values under `info` when set.
# ios:
# # The iOS bundle identifier used in the generated Xcode project (CFBundleIdentifier)
# bundleID: "com.mycompany.myproduct"
# # The display name shown under the app icon (CFBundleDisplayName/CFBundleName)
# displayName: "My Product"
# # The app version to embed in Info.plist (CFBundleShortVersionString/CFBundleVersion)
# version: "0.0.1"
# # The company/organisation name for templates and project settings
# company: "My Company"
# # Additional comments to embed in Info.plist metadata
# comments: "Some Product Comments"
# Dev mode configuration
dev_mode:
root_path: .
log_level: warn
debounce: 1000
ignore:
dir:
- .git
- node_modules
- frontend
- bin
file:
- .DS_Store
- .gitignore
- .gitkeep
watched_extension:
- "*.go"
- "*.js" # Watch for changes to JS/TS files included using the //wails:include directive.
- "*.ts" # The frontend directory will be excluded entirely by the setting above.
git_ignore: true
executes:
- cmd: wails3 build DEV=true
type: blocking
- cmd: wails3 task common:dev:frontend
type: background
- cmd: wails3 task run
type: primary
# File Associations
# More information at: https://v3.wails.io/not/done/yet
fileAssociations:
# - ext: wails
# name: Wails
# description: Wails Application File
# iconName: wailsFileIcon
# role: Editor
# - ext: jpg
# name: JPEG
# description: Image File
# iconName: jpegFileIcon
# role: Editor
# mimeType: image/jpeg # (optional)
# Other data
other:
- name: My Other Data

Binary file not shown.

View File

@@ -0,0 +1,36 @@
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>CFBundlePackageType</key>
<string>APPL</string>
<key>CFBundleName</key>
<string>NetBird</string>
<key>CFBundleDisplayName</key>
<string>NetBird</string>
<key>CFBundleExecutable</key>
<string>netbird-ui</string>
<key>CFBundleIdentifier</key>
<string>io.netbird.client</string>
<key>CFBundleVersion</key>
<string>0.0.1</string>
<key>CFBundleGetInfoString</key>
<string>This is a comment</string>
<key>CFBundleShortVersionString</key>
<string>0.0.1</string>
<key>CFBundleIconFile</key>
<string>icons</string>
<key>CFBundleIconName</key>
<string>appicon</string>
<key>LSMinimumSystemVersion</key>
<string>10.15.0</string>
<key>NSHighResolutionCapable</key>
<true/>
<key>NSHumanReadableCopyright</key>
<string>© 2026, My Company</string>
<key>NSAppTransportSecurity</key>
<dict>
<key>NSAllowsLocalNetworking</key>
<true/>
</dict>
</dict>
</plist>

View File

@@ -0,0 +1,31 @@
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>CFBundlePackageType</key>
<string>APPL</string>
<key>CFBundleName</key>
<string>NetBird</string>
<key>CFBundleDisplayName</key>
<string>NetBird</string>
<key>CFBundleExecutable</key>
<string>netbird-ui</string>
<key>CFBundleIdentifier</key>
<string>io.netbird.client</string>
<key>CFBundleVersion</key>
<string>0.0.1</string>
<key>CFBundleGetInfoString</key>
<string>This is a comment</string>
<key>CFBundleShortVersionString</key>
<string>0.0.1</string>
<key>CFBundleIconFile</key>
<string>icons</string>
<key>CFBundleIconName</key>
<string>appicon</string>
<key>LSMinimumSystemVersion</key>
<string>10.15.0</string>
<key>NSHighResolutionCapable</key>
<string>true</string>
<key>NSHumanReadableCopyright</key>
<string>© 2026, My Company</string>
</dict>
</plist>

View File

@@ -0,0 +1,208 @@
version: '3'
includes:
common: ../Taskfile.yml
vars:
# Signing configuration - edit these values for your project
# SIGN_IDENTITY: "Developer ID Application: Your Company (TEAMID)"
# KEYCHAIN_PROFILE: "my-notarize-profile"
# ENTITLEMENTS: "build/darwin/entitlements.plist"
# Docker image for cross-compilation (used when building on non-macOS)
CROSS_IMAGE: wails-cross
tasks:
build:
summary: Builds the application
cmds:
- task: '{{if eq OS "darwin"}}build:native{{else}}build:docker{{end}}'
vars:
ARCH: '{{.ARCH}}'
DEV: '{{.DEV}}'
OUTPUT: '{{.OUTPUT}}'
EXTRA_TAGS: '{{.EXTRA_TAGS}}'
vars:
DEFAULT_OUTPUT: '{{.BIN_DIR}}/{{.APP_NAME}}'
OUTPUT: '{{ .OUTPUT | default .DEFAULT_OUTPUT }}'
build:native:
summary: Builds the application natively on macOS
internal: true
deps:
- task: common:go:mod:tidy
- task: common:build:frontend
vars:
BUILD_FLAGS:
ref: .BUILD_FLAGS
DEV:
ref: .DEV
- task: common:generate:icons
cmds:
- go build {{.BUILD_FLAGS}} -o {{.OUTPUT}}
vars:
BUILD_FLAGS: '{{if eq .DEV "true"}}{{if .EXTRA_TAGS}}-tags {{.EXTRA_TAGS}} {{end}}-buildvcs=false -gcflags=all="-l"{{else}}-tags production{{if .EXTRA_TAGS}},{{.EXTRA_TAGS}}{{end}} -trimpath -buildvcs=false -ldflags="-w -s"{{end}}'
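# Decoded: DEV=true keeps symbols and disables inlining (-gcflags=all="-l") for
# debugging; otherwise the production tag set, -trimpath, and stripped symbols
# (-ldflags="-w -s") apply. EXTRA_TAGS folds into the tag list in both branches.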
DEFAULT_OUTPUT: '{{.BIN_DIR}}/{{.APP_NAME}}'
OUTPUT: '{{ .OUTPUT | default .DEFAULT_OUTPUT }}'
env:
GOOS: darwin
CGO_ENABLED: 1
GOARCH: '{{.ARCH | default ARCH}}'
CGO_CFLAGS: "-mmacosx-version-min=10.15"
CGO_LDFLAGS: "-mmacosx-version-min=10.15"
MACOSX_DEPLOYMENT_TARGET: "10.15"
build:docker:
summary: Cross-compiles for macOS using Docker (for Linux/Windows hosts)
internal: true
deps:
- task: common:build:frontend
- task: common:generate:icons
preconditions:
- sh: docker info > /dev/null 2>&1
msg: "Docker is required for cross-compilation. Please install Docker."
- sh: docker image inspect {{.CROSS_IMAGE}} > /dev/null 2>&1
msg: |
Docker image '{{.CROSS_IMAGE}}' not found.
Build it first: wails3 task setup:docker
cmds:
- docker run --rm -v "{{.ROOT_DIR}}:/app" {{.GO_CACHE_MOUNT}} {{.REPLACE_MOUNTS}} -e APP_NAME="{{.APP_NAME}}" {{if .EXTRA_TAGS}}-e EXTRA_TAGS="{{.EXTRA_TAGS}}"{{end}} {{.CROSS_IMAGE}} darwin {{.DOCKER_ARCH}}
- docker run --rm -v "{{.ROOT_DIR}}:/app" alpine chown -R $(id -u):$(id -g) /app/bin
- mkdir -p {{.BIN_DIR}}
- mv "bin/{{.APP_NAME}}-darwin-{{.DOCKER_ARCH}}" "{{.OUTPUT}}"
vars:
DOCKER_ARCH: '{{if eq .ARCH "arm64"}}arm64{{else if eq .ARCH "amd64"}}amd64{{else}}arm64{{end}}'
DEFAULT_OUTPUT: '{{.BIN_DIR}}/{{.APP_NAME}}'
OUTPUT: '{{ .OUTPUT | default .DEFAULT_OUTPUT }}'
# Mount Go module cache for faster builds
GO_CACHE_MOUNT:
sh: 'echo "-v ${GOPATH:-$HOME/go}/pkg/mod:/go/pkg/mod"'
# Extract replace directives from go.mod and create -v mounts for each
# Handles both relative (=> ../) and absolute (=> /) paths
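# Example (hypothetical module): a `replace github.com/acme/lib => ../lib` line
# resolves to an absolute host path and emits `-v /abs/lib:/abs/lib:ro`, so the
# container sees the replacement at the exact path go.mod points to.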
REPLACE_MOUNTS:
sh: |
grep -E '^replace .* => ' go.mod 2>/dev/null | while read -r line; do
path=$(echo "$line" | sed -E 's/^replace .* => //' | tr -d '\r')
# Convert relative paths to absolute
if [ "${path#/}" = "$path" ]; then
path="$(cd "$(dirname "$path")" 2>/dev/null && pwd)/$(basename "$path")"
fi
# Only mount if directory exists
if [ -d "$path" ]; then
echo "-v $path:$path:ro"
fi
done | tr '\n' ' '
build:universal:
summary: Builds darwin universal binary (arm64 + amd64)
deps:
- task: build
vars:
ARCH: amd64
OUTPUT: "{{.BIN_DIR}}/{{.APP_NAME}}-amd64"
- task: build
vars:
ARCH: arm64
OUTPUT: "{{.BIN_DIR}}/{{.APP_NAME}}-arm64"
cmds:
- task: '{{if eq OS "darwin"}}build:universal:lipo:native{{else}}build:universal:lipo:go{{end}}'
build:universal:lipo:native:
summary: Creates universal binary using native lipo (macOS)
internal: true
cmds:
- lipo -create -output "{{.BIN_DIR}}/{{.APP_NAME}}" "{{.BIN_DIR}}/{{.APP_NAME}}-amd64" "{{.BIN_DIR}}/{{.APP_NAME}}-arm64"
- rm "{{.BIN_DIR}}/{{.APP_NAME}}-amd64" "{{.BIN_DIR}}/{{.APP_NAME}}-arm64"
build:universal:lipo:go:
summary: Creates universal binary using wails3 tool lipo (Linux/Windows)
internal: true
cmds:
- wails3 tool lipo -output "{{.BIN_DIR}}/{{.APP_NAME}}" -input "{{.BIN_DIR}}/{{.APP_NAME}}-amd64" -input "{{.BIN_DIR}}/{{.APP_NAME}}-arm64"
- rm -f "{{.BIN_DIR}}/{{.APP_NAME}}-amd64" "{{.BIN_DIR}}/{{.APP_NAME}}-arm64"
package:
summary: Packages the application into a `.app` bundle
deps:
- task: build
cmds:
- task: create:app:bundle
package:universal:
summary: Packages darwin universal binary (arm64 + amd64)
deps:
- task: build:universal
cmds:
- task: create:app:bundle
create:app:bundle:
summary: Creates an `.app` bundle
cmds:
- mkdir -p "{{.BIN_DIR}}/{{.APP_NAME}}.app/Contents/MacOS"
- mkdir -p "{{.BIN_DIR}}/{{.APP_NAME}}.app/Contents/Resources"
- cp build/darwin/icons.icns "{{.BIN_DIR}}/{{.APP_NAME}}.app/Contents/Resources"
- |
if [ -f build/darwin/Assets.car ]; then
cp build/darwin/Assets.car "{{.BIN_DIR}}/{{.APP_NAME}}.app/Contents/Resources"
fi
- cp "{{.BIN_DIR}}/{{.APP_NAME}}" "{{.BIN_DIR}}/{{.APP_NAME}}.app/Contents/MacOS"
- cp build/darwin/Info.plist "{{.BIN_DIR}}/{{.APP_NAME}}.app/Contents"
- task: '{{if eq OS "darwin"}}codesign:adhoc{{else}}codesign:skip{{end}}'
codesign:adhoc:
summary: Ad-hoc signs the app bundle (macOS only)
internal: true
cmds:
- codesign --force --deep --sign - "{{.BIN_DIR}}/{{.APP_NAME}}.app"
codesign:skip:
summary: Skips codesigning when cross-compiling
internal: true
cmds:
- 'echo "Skipping codesign (not available on {{OS}}). Sign the .app on macOS before distribution."'
run:
cmds:
- mkdir -p "{{.BIN_DIR}}/{{.APP_NAME}}.dev.app/Contents/MacOS"
- mkdir -p "{{.BIN_DIR}}/{{.APP_NAME}}.dev.app/Contents/Resources"
- cp build/darwin/icons.icns "{{.BIN_DIR}}/{{.APP_NAME}}.dev.app/Contents/Resources"
- |
if [ -f build/darwin/Assets.car ]; then
cp build/darwin/Assets.car "{{.BIN_DIR}}/{{.APP_NAME}}.dev.app/Contents/Resources"
fi
- cp "{{.BIN_DIR}}/{{.APP_NAME}}" "{{.BIN_DIR}}/{{.APP_NAME}}.dev.app/Contents/MacOS"
- cp "build/darwin/Info.dev.plist" "{{.BIN_DIR}}/{{.APP_NAME}}.dev.app/Contents/Info.plist"
- codesign --force --deep --sign - "{{.BIN_DIR}}/{{.APP_NAME}}.dev.app"
- '{{.BIN_DIR}}/{{.APP_NAME}}.dev.app/Contents/MacOS/{{.APP_NAME}}'
sign:
summary: Signs the application bundle with Developer ID
desc: |
Signs the .app bundle for distribution.
Configure SIGN_IDENTITY in the vars section at the top of this file.
deps:
- task: package
cmds:
- wails3 tool sign --input "{{.BIN_DIR}}/{{.APP_NAME}}.app" --identity "{{.SIGN_IDENTITY}}" {{if .ENTITLEMENTS}}--entitlements {{.ENTITLEMENTS}}{{end}}
preconditions:
- sh: '[ -n "{{.SIGN_IDENTITY}}" ]'
msg: "SIGN_IDENTITY is required. Set it in the vars section at the top of build/darwin/Taskfile.yml"
sign:notarize:
summary: Signs and notarizes the application bundle
desc: |
Signs the .app bundle and submits it for notarization.
Configure SIGN_IDENTITY and KEYCHAIN_PROFILE in the vars section at the top of this file.
Setup (one-time):
wails3 signing credentials --apple-id "you@email.com" --team-id "TEAMID" --password "app-specific-password" --profile "my-profile"
deps:
- task: package
cmds:
- wails3 tool sign --input "{{.BIN_DIR}}/{{.APP_NAME}}.app" --identity "{{.SIGN_IDENTITY}}" {{if .ENTITLEMENTS}}--entitlements {{.ENTITLEMENTS}}{{end}} --notarize --keychain-profile {{.KEYCHAIN_PROFILE}}
preconditions:
- sh: '[ -n "{{.SIGN_IDENTITY}}" ]'
msg: "SIGN_IDENTITY is required. Set it in the vars section at the top of build/darwin/Taskfile.yml"
- sh: '[ -n "{{.KEYCHAIN_PROFILE}}" ]'
msg: "KEYCHAIN_PROFILE is required. Set it in the vars section at the top of build/darwin/Taskfile.yml"

Binary file not shown.

View File

@@ -0,0 +1,203 @@
# Cross-compile Wails v3 apps to any platform
#
# Darwin: Zig + macOS SDK
# Linux: Native GCC when host matches target, Zig for cross-arch
# Windows: Zig + bundled mingw
#
# Usage:
# docker build -t wails-cross -f Dockerfile.cross .
# docker run --rm -v $(pwd):/app wails-cross darwin arm64
# docker run --rm -v $(pwd):/app wails-cross darwin amd64
# docker run --rm -v $(pwd):/app wails-cross linux amd64
# docker run --rm -v $(pwd):/app wails-cross linux arm64
# docker run --rm -v $(pwd):/app wails-cross windows amd64
# docker run --rm -v $(pwd):/app wails-cross windows arm64
FROM golang:1.25-bookworm
ARG TARGETARCH
# Install base tools, GCC, and GTK/WebKit dev packages
RUN apt-get update && apt-get install -y --no-install-recommends \
curl xz-utils nodejs npm pkg-config gcc libc6-dev \
libgtk-3-dev libwebkit2gtk-4.1-dev \
libgtk-4-dev libwebkitgtk-6.0-dev \
&& rm -rf /var/lib/apt/lists/*
# Install Zig - automatically selects correct binary for host architecture
ARG ZIG_VERSION=0.14.0
RUN ZIG_ARCH=$(case "${TARGETARCH}" in arm64) echo "aarch64" ;; *) echo "x86_64" ;; esac) && \
curl -L "https://ziglang.org/download/${ZIG_VERSION}/zig-linux-${ZIG_ARCH}-${ZIG_VERSION}.tar.xz" \
| tar -xJ -C /opt \
&& ln -s /opt/zig-linux-${ZIG_ARCH}-${ZIG_VERSION}/zig /usr/local/bin/zig
# Download macOS SDK (required for darwin targets)
ARG MACOS_SDK_VERSION=14.5
RUN curl -L "https://github.com/joseluisq/macosx-sdks/releases/download/${MACOS_SDK_VERSION}/MacOSX${MACOS_SDK_VERSION}.sdk.tar.xz" \
| tar -xJ -C /opt \
&& mv /opt/MacOSX${MACOS_SDK_VERSION}.sdk /opt/macos-sdk
ENV MACOS_SDK_PATH=/opt/macos-sdk
# Create Zig CC wrappers for cross-compilation targets
# Darwin and Windows use Zig; Linux uses native GCC (run with --platform for cross-arch)
# Darwin arm64
COPY <<'ZIGWRAP' /usr/local/bin/zcc-darwin-arm64
#!/bin/sh
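# Filter the flags cgo passes through: skip each `-target <t>` pair and drop
# `-mmacosx-version-min=*`, since the exec line below supplies its own Zig
# target triple and SDK sysroot.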
ARGS=""
SKIP_NEXT=0
for arg in "$@"; do
if [ $SKIP_NEXT -eq 1 ]; then
SKIP_NEXT=0
continue
fi
case "$arg" in
-target) SKIP_NEXT=1 ;;
-mmacosx-version-min=*) ;;
*) ARGS="$ARGS $arg" ;;
esac
done
exec zig cc -fno-sanitize=all -target aarch64-macos-none -isysroot /opt/macos-sdk -I/opt/macos-sdk/usr/include -L/opt/macos-sdk/usr/lib -F/opt/macos-sdk/System/Library/Frameworks -w $ARGS
ZIGWRAP
RUN chmod +x /usr/local/bin/zcc-darwin-arm64
# Darwin amd64
COPY <<'ZIGWRAP' /usr/local/bin/zcc-darwin-amd64
#!/bin/sh
ARGS=""
SKIP_NEXT=0
for arg in "$@"; do
if [ $SKIP_NEXT -eq 1 ]; then
SKIP_NEXT=0
continue
fi
case "$arg" in
-target) SKIP_NEXT=1 ;;
-mmacosx-version-min=*) ;;
*) ARGS="$ARGS $arg" ;;
esac
done
exec zig cc -fno-sanitize=all -target x86_64-macos-none -isysroot /opt/macos-sdk -I/opt/macos-sdk/usr/include -L/opt/macos-sdk/usr/lib -F/opt/macos-sdk/System/Library/Frameworks -w $ARGS
ZIGWRAP
RUN chmod +x /usr/local/bin/zcc-darwin-amd64
# Windows amd64 - uses Zig's bundled mingw
COPY <<'ZIGWRAP' /usr/local/bin/zcc-windows-amd64
#!/bin/sh
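# Same filtering as the darwin wrappers for `-target <t>` pairs; `-Wl,*`
# passthroughs are also dropped here, presumably because Zig's bundled
# mingw/lld link step does not accept them.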
ARGS=""
SKIP_NEXT=0
for arg in "$@"; do
if [ $SKIP_NEXT -eq 1 ]; then
SKIP_NEXT=0
continue
fi
case "$arg" in
-target) SKIP_NEXT=1 ;;
-Wl,*) ;;
*) ARGS="$ARGS $arg" ;;
esac
done
exec zig cc -target x86_64-windows-gnu $ARGS
ZIGWRAP
RUN chmod +x /usr/local/bin/zcc-windows-amd64
# Windows arm64 - uses Zig's bundled mingw
COPY <<'ZIGWRAP' /usr/local/bin/zcc-windows-arm64
#!/bin/sh
ARGS=""
SKIP_NEXT=0
for arg in "$@"; do
if [ $SKIP_NEXT -eq 1 ]; then
SKIP_NEXT=0
continue
fi
case "$arg" in
-target) SKIP_NEXT=1 ;;
-Wl,*) ;;
*) ARGS="$ARGS $arg" ;;
esac
done
exec zig cc -target aarch64-windows-gnu $ARGS
ZIGWRAP
RUN chmod +x /usr/local/bin/zcc-windows-arm64
# Build script
COPY <<'SCRIPT' /usr/local/bin/build.sh
#!/bin/sh
set -e
OS=${1:-darwin}
ARCH=${2:-arm64}
case "${OS}-${ARCH}" in
darwin-arm64|darwin-aarch64)
export CC=zcc-darwin-arm64
export GOARCH=arm64
export GOOS=darwin
;;
darwin-amd64|darwin-x86_64)
export CC=zcc-darwin-amd64
export GOARCH=amd64
export GOOS=darwin
;;
linux-arm64|linux-aarch64)
export CC=gcc
export GOARCH=arm64
export GOOS=linux
;;
linux-amd64|linux-x86_64)
export CC=gcc
export GOARCH=amd64
export GOOS=linux
;;
windows-arm64|windows-aarch64)
export CC=zcc-windows-arm64
export GOARCH=arm64
export GOOS=windows
;;
windows-amd64|windows-x86_64)
export CC=zcc-windows-amd64
export GOARCH=amd64
export GOOS=windows
;;
*)
echo "Usage: <os> <arch>"
echo " os: darwin, linux, windows"
echo " arch: amd64, arm64"
exit 1
;;
esac
export CGO_ENABLED=1
export CGO_CFLAGS="-w"
# Build frontend if exists and not already built (host may have built it)
if [ -d "frontend" ] && [ -f "frontend/package.json" ] && [ ! -d "frontend/dist" ]; then
(cd frontend && npm install --silent && npm run build --silent)
fi
# Build
APP=${APP_NAME:-$(basename $(pwd))}
mkdir -p bin
EXT=""
LDFLAGS="-s -w"
if [ "$GOOS" = "windows" ]; then
EXT=".exe"
LDFLAGS="-s -w -H windowsgui"
fi
TAGS="production"
if [ -n "$EXTRA_TAGS" ]; then
TAGS="${TAGS},${EXTRA_TAGS}"
fi
go build -tags "$TAGS" -trimpath -buildvcs=false -ldflags="$LDFLAGS" -o bin/${APP}-${GOOS}-${GOARCH}${EXT} .
echo "Built: bin/${APP}-${GOOS}-${GOARCH}${EXT}"
SCRIPT
RUN chmod +x /usr/local/bin/build.sh
WORKDIR /app
ENTRYPOINT ["/usr/local/bin/build.sh"]
CMD ["darwin", "arm64"]

View File

@@ -0,0 +1,41 @@
# Wails Server Mode Dockerfile
# Multi-stage build for minimal image size
# Build stage
FROM golang:alpine AS builder
WORKDIR /app
# Install build dependencies
RUN apk add --no-cache git
# Copy source code
COPY . .
# Remove local replace directive if present (for production builds)
RUN sed -i '/^replace/d' go.mod || true
# Download dependencies
RUN go mod tidy
# Build the server binary
RUN go build -tags server -ldflags="-s -w" -o server .
# Runtime stage - minimal image
FROM gcr.io/distroless/static-debian12
# Copy the binary
COPY --from=builder /app/server /server
# Copy frontend assets
COPY --from=builder /app/frontend/dist /frontend/dist
# Expose the default port
EXPOSE 8080
# Bind to all interfaces (required for Docker)
# Can be overridden at runtime with -e WAILS_SERVER_HOST=...
ENV WAILS_SERVER_HOST=0.0.0.0
# Run the server
ENTRYPOINT ["/server"]

View File

@@ -0,0 +1,235 @@
version: '3'
includes:
common: ../Taskfile.yml
vars:
# Signing configuration - edit these values for your project
# PGP_KEY: "path/to/signing-key.asc"
# SIGN_ROLE: "builder" # Options: origin, maint, archive, builder
#
# Password is stored securely in system keychain. Run: wails3 setup signing
# Docker image for cross-compilation (used when building on non-Linux or no CC available)
CROSS_IMAGE: wails-cross
tasks:
build:
summary: Builds the application for Linux
cmds:
# Linux requires CGO - use Docker when:
# 1. Cross-compiling from non-Linux, OR
# 2. No C compiler is available, OR
# 3. Target architecture differs from host architecture (cross-arch compilation)
- task: '{{if and (eq OS "linux") (eq .HAS_CC "true") (eq .TARGET_ARCH ARCH)}}build:native{{else}}build:docker{{end}}'
vars:
ARCH: '{{.ARCH}}'
DEV: '{{.DEV}}'
OUTPUT: '{{.OUTPUT}}'
EXTRA_TAGS: '{{.EXTRA_TAGS}}'
vars:
DEFAULT_OUTPUT: '{{.BIN_DIR}}/{{.APP_NAME}}'
OUTPUT: '{{ .OUTPUT | default .DEFAULT_OUTPUT }}'
# Determine target architecture (defaults to host ARCH if not specified)
TARGET_ARCH: '{{.ARCH | default ARCH}}'
# Check if a C compiler is available (gcc or clang)
HAS_CC:
sh: '(command -v gcc >/dev/null 2>&1 || command -v clang >/dev/null 2>&1) && echo "true" || echo "false"'
build:native:
summary: Builds the application natively on Linux
internal: true
deps:
- task: common:go:mod:tidy
- task: common:build:frontend
vars:
BUILD_FLAGS:
ref: .BUILD_FLAGS
DEV:
ref: .DEV
- task: common:generate:icons
- task: generate:dotdesktop
cmds:
- go build {{.BUILD_FLAGS}} -o {{.OUTPUT}}
vars:
BUILD_FLAGS: '{{if eq .DEV "true"}}{{if .EXTRA_TAGS}}-tags {{.EXTRA_TAGS}} {{end}}-buildvcs=false -gcflags=all="-l"{{else}}-tags production{{if .EXTRA_TAGS}},{{.EXTRA_TAGS}}{{end}} -trimpath -buildvcs=false -ldflags="-w -s"{{end}}'
DEFAULT_OUTPUT: '{{.BIN_DIR}}/{{.APP_NAME}}'
OUTPUT: '{{ .OUTPUT | default .DEFAULT_OUTPUT }}'
env:
GOOS: linux
CGO_ENABLED: 1
GOARCH: '{{.ARCH | default ARCH}}'
build:docker:
summary: Builds for Linux using Docker (for non-Linux hosts or when no C compiler available)
internal: true
deps:
- task: common:build:frontend
- task: common:generate:icons
- task: generate:dotdesktop
preconditions:
- sh: docker info > /dev/null 2>&1
msg: "Docker is required for cross-compilation to Linux. Please install Docker."
- sh: docker image inspect {{.CROSS_IMAGE}} > /dev/null 2>&1
msg: |
Docker image '{{.CROSS_IMAGE}}' not found.
Build it first: wails3 task setup:docker
cmds:
- docker run --rm -v "{{.ROOT_DIR}}:/app" {{.GO_CACHE_MOUNT}} {{.REPLACE_MOUNTS}} -e APP_NAME="{{.APP_NAME}}" {{if .EXTRA_TAGS}}-e EXTRA_TAGS="{{.EXTRA_TAGS}}"{{end}} "{{.CROSS_IMAGE}}" linux {{.DOCKER_ARCH}}
- docker run --rm -v "{{.ROOT_DIR}}:/app" alpine chown -R $(id -u):$(id -g) /app/bin
- mkdir -p {{.BIN_DIR}}
- mv "bin/{{.APP_NAME}}-linux-{{.DOCKER_ARCH}}" "{{.OUTPUT}}"
vars:
DOCKER_ARCH: '{{.ARCH | default "amd64"}}'
DEFAULT_OUTPUT: '{{.BIN_DIR}}/{{.APP_NAME}}'
OUTPUT: '{{ .OUTPUT | default .DEFAULT_OUTPUT }}'
# Mount Go module cache for faster builds
GO_CACHE_MOUNT:
sh: 'echo "-v ${GOPATH:-$HOME/go}/pkg/mod:/go/pkg/mod"'
# Extract replace directives from go.mod and create -v mounts for each
REPLACE_MOUNTS:
sh: |
grep -E '^replace .* => ' go.mod 2>/dev/null | while read -r line; do
path=$(echo "$line" | sed -E 's/^replace .* => //' | tr -d '\r')
# Convert relative paths to absolute
if [ "${path#/}" = "$path" ]; then
path="$(cd "$(dirname "$path")" 2>/dev/null && pwd)/$(basename "$path")"
fi
# Only mount if directory exists
if [ -d "$path" ]; then
echo "-v $path:$path:ro"
fi
done | tr '\n' ' '
package:
summary: Packages the application for Linux
deps:
- task: build
cmds:
- task: create:appimage
- task: create:deb
- task: create:rpm
- task: create:aur
create:appimage:
summary: Creates an AppImage
dir: build/linux/appimage
deps:
- task: build
- task: generate:dotdesktop
cmds:
- cp "{{.APP_BINARY}}" "{{.APP_NAME}}"
- cp ../../appicon.png "{{.APP_NAME}}.png"
- wails3 generate appimage -binary "{{.APP_NAME}}" -icon {{.ICON}} -desktopfile {{.DESKTOP_FILE}} -outputdir {{.OUTPUT_DIR}} -builddir {{.ROOT_DIR}}/build/linux/appimage/build
vars:
APP_NAME: '{{.APP_NAME}}'
APP_BINARY: '../../../bin/{{.APP_NAME}}'
ICON: '{{.APP_NAME}}.png'
DESKTOP_FILE: '../{{.APP_NAME}}.desktop'
OUTPUT_DIR: '../../../bin'
create:deb:
summary: Creates a deb package
deps:
- task: build
cmds:
- task: generate:dotdesktop
- task: generate:deb
create:rpm:
summary: Creates an RPM package
deps:
- task: build
cmds:
- task: generate:dotdesktop
- task: generate:rpm
create:aur:
summary: Creates an Arch Linux package
deps:
- task: build
cmds:
- task: generate:dotdesktop
- task: generate:aur
generate:deb:
summary: Creates a deb package
cmds:
- wails3 tool package -name "{{.APP_NAME}}" -format deb -config ./build/linux/nfpm/nfpm.yaml -out {{.ROOT_DIR}}/bin
generate:rpm:
summary: Creates an RPM package
cmds:
- wails3 tool package -name "{{.APP_NAME}}" -format rpm -config ./build/linux/nfpm/nfpm.yaml -out {{.ROOT_DIR}}/bin
generate:aur:
summary: Creates an Arch Linux package
cmds:
- wails3 tool package -name "{{.APP_NAME}}" -format archlinux -config ./build/linux/nfpm/nfpm.yaml -out {{.ROOT_DIR}}/bin
generate:dotdesktop:
summary: Generates a `.desktop` file
dir: build
cmds:
- mkdir -p {{.ROOT_DIR}}/build/linux/appimage
- wails3 generate .desktop -name "{{.APP_NAME}}" -exec "{{.EXEC}}" -icon "{{.ICON}}" -outputfile "{{.ROOT_DIR}}/build/linux/{{.APP_NAME}}.desktop" -categories "{{.CATEGORIES}}"
# Wrap Exec= with `env WEBKIT_DISABLE_DMABUF_RENDERER=1 ...` so launches
# from any desktop environment use the working renderer. See the `run` task
# below for the matching dev-mode env block.
- sed -i -E 's|^Exec=([^ ]+)(.*)$|Exec=env WEBKIT_DISABLE_DMABUF_RENDERER=1 \1\2|' {{.ROOT_DIR}}/build/linux/{{.APP_NAME}}.desktop
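# For example, a generated `Exec=netbird-ui` line becomes
# `Exec=env WEBKIT_DISABLE_DMABUF_RENDERER=1 netbird-ui` (any arguments
# such as `%u` are preserved after the binary).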
vars:
APP_NAME: '{{.APP_NAME}}'
EXEC: '{{.APP_NAME}}'
ICON: '{{.APP_NAME}}'
CATEGORIES: 'Development;'
OUTPUTFILE: '{{.ROOT_DIR}}/build/linux/{{.APP_NAME}}.desktop'
run:
cmds:
- '{{.BIN_DIR}}/{{.APP_NAME}}'
env:
# WebKitGTK 2.50's default DMA-BUF renderer fails on RDP, VirtualBox/QEMU,
# and some bare WMs (Fluxbox, dwm) where DRM dumb-buffer access is
# restricted. Disabling it falls back to the GLES2/cairo path which works
# everywhere. Production launchers must set this too.
WEBKIT_DISABLE_DMABUF_RENDERER: "1"
sign:deb:
summary: Signs the DEB package
desc: |
Signs the .deb package with a PGP key.
Configure PGP_KEY in the vars section at the top of this file.
Password is retrieved from system keychain (run: wails3 setup signing)
deps:
- task: create:deb
cmds:
- wails3 tool sign --input "{{.BIN_DIR}}/{{.APP_NAME}}*.deb" --pgp-key {{.PGP_KEY}} {{if .SIGN_ROLE}}--role {{.SIGN_ROLE}}{{end}}
preconditions:
- sh: '[ -n "{{.PGP_KEY}}" ]'
msg: "PGP_KEY is required. Set it in the vars section at the top of build/linux/Taskfile.yml"
sign:rpm:
summary: Signs the RPM package
desc: |
Signs the .rpm package with a PGP key.
Configure PGP_KEY in the vars section at the top of this file.
Password is retrieved from system keychain (run: wails3 setup signing)
deps:
- task: create:rpm
cmds:
- wails3 tool sign --input "{{.BIN_DIR}}/{{.APP_NAME}}*.rpm" --pgp-key {{.PGP_KEY}}
preconditions:
- sh: '[ -n "{{.PGP_KEY}}" ]'
msg: "PGP_KEY is required. Set it in the vars section at the top of build/linux/Taskfile.yml"
sign:packages:
summary: Signs all Linux packages (DEB and RPM)
desc: |
Signs both .deb and .rpm packages with a PGP key.
Configure PGP_KEY in the vars section at the top of this file.
Password is retrieved from system keychain (run: wails3 setup signing)
cmds:
- task: sign:deb
- task: sign:rpm
preconditions:
- sh: '[ -n "{{.PGP_KEY}}" ]'
msg: "PGP_KEY is required. Set it in the vars section at the top of build/linux/Taskfile.yml"

View File

@@ -0,0 +1,35 @@
#!/usr/bin/env bash
# Copyright (c) 2018-Present Lea Anthony
# SPDX-License-Identifier: MIT
# Fail script on any error
set -euxo pipefail
# Define variables
APP_DIR="${APP_NAME}.AppDir"
# Create AppDir structure
mkdir -p "${APP_DIR}/usr/bin"
cp -r "${APP_BINARY}" "${APP_DIR}/usr/bin/"
cp "${ICON_PATH}" "${APP_DIR}/"
cp "${DESKTOP_FILE}" "${APP_DIR}/"
if [[ $(uname -m) == *x86_64* ]]; then
# Download linuxdeploy and make it executable
wget -q -4 -N https://github.com/linuxdeploy/linuxdeploy/releases/download/continuous/linuxdeploy-x86_64.AppImage
chmod +x linuxdeploy-x86_64.AppImage
# Run linuxdeploy to bundle the application
./linuxdeploy-x86_64.AppImage --appdir "${APP_DIR}" --output appimage
else
# Download linuxdeploy and make it executable (arm64)
wget -q -4 -N https://github.com/linuxdeploy/linuxdeploy/releases/download/continuous/linuxdeploy-aarch64.AppImage
chmod +x linuxdeploy-aarch64.AppImage
# Run linuxdeploy to bundle the application (arm64)
./linuxdeploy-aarch64.AppImage --appdir "${APP_DIR}" --output appimage
fi
# Rename the generated AppImage
mv "${APP_NAME}*.AppImage" "${APP_NAME}.AppImage"

View File

@@ -0,0 +1,13 @@
[Desktop Entry]
Version=1.0
Name=NetBird
Comment=NetBird desktop client
# The Exec line includes %u to pass the URL to the application
Exec=/usr/local/bin/netbird-ui %u
Terminal=false
Type=Application
Icon=netbird-ui
Categories=Utility;
StartupWMClass=netbird-ui

View File

@@ -0,0 +1,10 @@
[Desktop Entry]
Type=Application
Name=netbird-ui
Exec=netbird-ui
Icon=netbird-ui
Categories=Development;
Terminal=false
Keywords=wails
Version=1.0
StartupNotify=false

View File

@@ -0,0 +1,67 @@
# Feel free to remove fields you don't want/need to use.
# Make sure to check the documentation at https://nfpm.goreleaser.com
name: "netbird-ui"
arch: ${GOARCH}
platform: "linux"
version: "0.0.1"
section: "default"
priority: "extra"
maintainer: ${GIT_COMMITTER_NAME} <${GIT_COMMITTER_EMAIL}>
description: "NetBird desktop client"
vendor: "NetBird"
homepage: "https://wails.io"
license: "MIT"
release: "1"
contents:
- src: "./bin/netbird-ui"
dst: "/usr/local/bin/netbird-ui"
- src: "./build/appicon.png"
dst: "/usr/share/icons/hicolor/128x128/apps/netbird-ui.png"
- src: "./build/linux/netbird-ui.desktop"
dst: "/usr/share/applications/netbird-ui.desktop"
# Default dependencies for Debian 12/Ubuntu 22.04+ with WebKit 4.1
depends:
- libgtk-3-0
- libwebkit2gtk-4.1-0
# Distribution-specific overrides for different package formats and WebKit versions
overrides:
# RPM packages for RHEL/CentOS/AlmaLinux/Rocky Linux (WebKit 4.1)
rpm:
depends:
- gtk3
- webkit2gtk4.1
# Arch Linux packages (WebKit 4.1)
archlinux:
depends:
- gtk3
- webkit2gtk-4.1
# scripts section to ensure desktop database is updated after install
scripts:
postinstall: "./build/linux/nfpm/scripts/postinstall.sh"
# You can also add preremove, postremove if needed
# preremove: "./build/linux/nfpm/scripts/preremove.sh"
# postremove: "./build/linux/nfpm/scripts/postremove.sh"
# replaces:
# - foobar
# provides:
# - bar
# depends:
# - gtk3
# - libwebkit2gtk
# recommends:
# - whatever
# suggests:
# - something-else
# conflicts:
# - not-foo
# - not-bar
# changelog: "changelog.yaml"

View File

@@ -0,0 +1,21 @@
#!/bin/sh
# Update desktop database for .desktop file changes
# This makes the application appear in application menus and registers its capabilities.
if command -v update-desktop-database >/dev/null 2>&1; then
echo "Updating desktop database..."
update-desktop-database -q /usr/share/applications
else
echo "Warning: update-desktop-database command not found. Desktop file may not be immediately recognized." >&2
fi
# Update MIME database for custom URL schemes (x-scheme-handler)
# This ensures the system knows how to handle your custom protocols.
if command -v update-mime-database >/dev/null 2>&1; then
echo "Updating MIME database..."
update-mime-database -n /usr/share/mime
else
echo "Warning: update-mime-database command not found. Custom URL schemes may not be immediately recognized." >&2
fi
exit 0

View File

@@ -0,0 +1 @@
#!/bin/bash

View File

@@ -0,0 +1 @@
#!/bin/bash

View File

@@ -0,0 +1 @@
#!/bin/bash

View File

@@ -0,0 +1,236 @@
version: '3'
includes:
common: ../Taskfile.yml
vars:
# Signing configuration - edit these values for your project
# SIGN_CERTIFICATE: "path/to/certificate.pfx"
# SIGN_THUMBPRINT: "certificate-thumbprint" # Alternative to SIGN_CERTIFICATE
# TIMESTAMP_SERVER: "http://timestamp.digicert.com"
#
# Password is stored securely in system keychain. Run: wails3 setup signing
# Docker image for cross-compilation with CGO (used when CGO_ENABLED=1 on non-Windows)
CROSS_IMAGE: wails-cross
tasks:
build:
summary: Builds the application for Windows
cmds:
# CGO Windows builds from Linux use mingw-w64 (lighter than docker).
# Docker is only needed if mingw-w64 is unavailable.
- task: build:native
vars:
ARCH: '{{.ARCH}}'
DEV: '{{.DEV}}'
EXTRA_TAGS: '{{.EXTRA_TAGS}}'
vars:
CGO_ENABLED: '{{.CGO_ENABLED | default "0"}}'
build:console:
summary: Builds a console-attached Windows binary so logs go to the terminal.
desc: |
Same as `windows:build` but links against the console PE subsystem
instead of windowsgui, so stdout/stderr (logrus, panics) print to the
terminal that launched the .exe. Useful for chasing tray, event-stream,
or daemon-RPC bugs that have no other feedback channel on Windows.
Output is bin/netbird-ui-console.exe — kept distinct so the production
binary built by `windows:build` isn't shadowed.
Cross-compile from Linux works the same way:
CGO_ENABLED=1 task windows:build:console
deps:
- task: common:go:mod:tidy
- task: common:build:frontend
vars:
BUILD_FLAGS:
ref: .BUILD_FLAGS
DEV:
ref: .DEV
- task: common:generate:icons
preconditions:
- sh: '[ "{{OS}}" = "windows" ] || [ "{{.CGO_ENABLED}}" != "1" ] || command -v {{.CC}}'
msg: "{{.CC}} not found. Install with: sudo apt-get install gcc-mingw-w64-x86-64 (Debian/Ubuntu) / sudo dnf install mingw64-gcc (Fedora)"
cmds:
- task: generate:syso
- go build {{.BUILD_FLAGS}} -o "{{.BIN_DIR}}/{{.APP_NAME}}-console.exe"
- cmd: powershell Remove-item *.syso
platforms: [windows]
- cmd: rm -f *.syso
platforms: [linux, darwin]
vars:
# Identical to build:native's flags except no -H windowsgui, so the
# binary attaches to the launching console.
BUILD_FLAGS: '-tags production{{if .EXTRA_TAGS}},{{.EXTRA_TAGS}}{{end}} -trimpath -buildvcs=false -ldflags="-w -s"'
CGO_ENABLED: '{{.CGO_ENABLED | default "0"}}'
CC: '{{.CC | default "x86_64-w64-mingw32-gcc"}}'
env:
GOOS: windows
CGO_ENABLED: '{{.CGO_ENABLED}}'
GOARCH: '{{.ARCH | default ARCH}}'
CC: '{{.CC}}'
build:native:
summary: Builds for Windows natively, or cross-compiles from Linux/macOS via mingw-w64.
internal: true
deps:
- task: common:go:mod:tidy
- task: common:build:frontend
vars:
BUILD_FLAGS:
ref: .BUILD_FLAGS
DEV:
ref: .DEV
- task: common:generate:icons
preconditions:
# When cross-compiling with CGO from a non-Windows host, the mingw-w64
# cross-gcc must be present. Native Windows builds skip this check.
- sh: '[ "{{OS}}" = "windows" ] || [ "{{.CGO_ENABLED}}" != "1" ] || command -v {{.CC}}'
msg: "{{.CC}} not found. Install with: sudo apt-get install gcc-mingw-w64-x86-64 (Debian/Ubuntu) / sudo dnf install mingw64-gcc (Fedora)"
cmds:
- task: generate:syso
- go build {{.BUILD_FLAGS}} -o "{{.BIN_DIR}}/{{.APP_NAME}}.exe"
- cmd: powershell Remove-item *.syso
platforms: [windows]
- cmd: rm -f *.syso
platforms: [linux, darwin]
vars:
BUILD_FLAGS: '{{if eq .DEV "true"}}{{if .EXTRA_TAGS}}-tags {{.EXTRA_TAGS}} {{end}}-buildvcs=false -gcflags=all="-l"{{else}}-tags production{{if .EXTRA_TAGS}},{{.EXTRA_TAGS}}{{end}} -trimpath -buildvcs=false -ldflags="-w -s -H windowsgui"{{end}}'
CGO_ENABLED: '{{.CGO_ENABLED | default "0"}}'
CC: '{{.CC | default "x86_64-w64-mingw32-gcc"}}'
env:
GOOS: windows
CGO_ENABLED: '{{.CGO_ENABLED}}'
GOARCH: '{{.ARCH | default ARCH}}'
CC: '{{.CC}}'
build:docker:
summary: Cross-compiles for Windows using Docker with Zig (for CGO builds on non-Windows)
internal: true
deps:
- task: common:build:frontend
- task: common:generate:icons
preconditions:
- sh: docker info > /dev/null 2>&1
msg: "Docker is required for CGO cross-compilation. Please install Docker."
- sh: docker image inspect {{.CROSS_IMAGE}} > /dev/null 2>&1
msg: |
Docker image '{{.CROSS_IMAGE}}' not found.
Build it first: wails3 task setup:docker
cmds:
- task: generate:syso
- docker run --rm -v "{{.ROOT_DIR}}:/app" {{.GO_CACHE_MOUNT}} {{.REPLACE_MOUNTS}} -e APP_NAME="{{.APP_NAME}}" {{if .EXTRA_TAGS}}-e EXTRA_TAGS="{{.EXTRA_TAGS}}"{{end}} {{.CROSS_IMAGE}} windows {{.DOCKER_ARCH}}
- docker run --rm -v "{{.ROOT_DIR}}:/app" alpine chown -R $(id -u):$(id -g) /app/bin
- rm -f *.syso
vars:
DOCKER_ARCH: '{{.ARCH | default "amd64"}}'
# Mount Go module cache for faster builds
GO_CACHE_MOUNT:
sh: 'echo "-v ${GOPATH:-$HOME/go}/pkg/mod:/go/pkg/mod"'
# Extract replace directives from go.mod and create -v mounts for each
REPLACE_MOUNTS:
sh: |
grep -E '^replace .* => ' go.mod 2>/dev/null | while read -r line; do
path=$(echo "$line" | sed -E 's/^replace .* => //' | tr -d '\r')
# Convert relative paths to absolute
if [ "${path#/}" = "$path" ]; then
path="$(cd "$(dirname "$path")" 2>/dev/null && pwd)/$(basename "$path")"
fi
# Only mount if directory exists
if [ -d "$path" ]; then
echo "-v $path:$path:ro"
fi
done | tr '\n' ' '
package:
summary: Packages the application
cmds:
- task: '{{if eq (.FORMAT | default "nsis") "msix"}}create:msix:package{{else}}create:nsis:installer{{end}}'
vars:
FORMAT: '{{.FORMAT | default "nsis"}}'
generate:syso:
summary: Generates Windows `.syso` file
dir: build
cmds:
- wails3 generate syso -arch {{.ARCH}} -icon windows/icon.ico -manifest windows/wails.exe.manifest -info windows/info.json -out ../wails_windows_{{.ARCH}}.syso
vars:
ARCH: '{{.ARCH | default ARCH}}'
create:nsis:installer:
summary: Creates an NSIS installer
dir: build/windows/nsis
deps:
- task: build
cmds:
# Create the Microsoft WebView2 bootstrapper if it doesn't exist
- wails3 generate webview2bootstrapper -dir "{{.ROOT_DIR}}/build/windows/nsis"
- |
{{if eq OS "windows"}}
makensis -DARG_WAILS_{{.ARG_FLAG}}_BINARY="{{.ROOT_DIR}}\{{.BIN_DIR}}\{{.APP_NAME}}.exe" project.nsi
{{else}}
makensis -DARG_WAILS_{{.ARG_FLAG}}_BINARY="{{.ROOT_DIR}}/{{.BIN_DIR}}/{{.APP_NAME}}.exe" project.nsi
{{end}}
vars:
ARCH: '{{.ARCH | default ARCH}}'
ARG_FLAG: '{{if eq .ARCH "amd64"}}AMD64{{else}}ARM64{{end}}'
create:msix:package:
summary: Creates an MSIX package
deps:
- task: build
cmds:
- |-
wails3 tool msix \
--config "{{.ROOT_DIR}}/wails.json" \
--name "{{.APP_NAME}}" \
--executable "{{.ROOT_DIR}}/{{.BIN_DIR}}/{{.APP_NAME}}.exe" \
--arch "{{.ARCH}}" \
--out "{{.ROOT_DIR}}/{{.BIN_DIR}}/{{.APP_NAME}}-{{.ARCH}}.msix" \
{{if .CERT_PATH}}--cert "{{.CERT_PATH}}"{{end}} \
{{if .PUBLISHER}}--publisher "{{.PUBLISHER}}"{{end}} \
{{if .USE_MSIX_TOOL}}--use-msix-tool{{else}}--use-makeappx{{end}}
vars:
ARCH: '{{.ARCH | default ARCH}}'
CERT_PATH: '{{.CERT_PATH | default ""}}'
PUBLISHER: '{{.PUBLISHER | default ""}}'
USE_MSIX_TOOL: '{{.USE_MSIX_TOOL | default "false"}}'
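# Example (hypothetical values):
#   wails3 task windows:package FORMAT=msix ARCH=amd64 CERT_PATH=build/windows/cert.pfx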
install:msix:tools:
summary: Installs tools required for MSIX packaging
cmds:
- wails3 tool msix-install-tools
run:
cmds:
- '{{.BIN_DIR}}/{{.APP_NAME}}.exe'
sign:
summary: Signs the Windows executable
desc: |
Signs the .exe with an Authenticode certificate.
Configure SIGN_CERTIFICATE or SIGN_THUMBPRINT in the vars section at the top of this file.
Password is retrieved from system keychain (run: wails3 setup signing)
deps:
- task: build
cmds:
- wails3 tool sign --input "{{.BIN_DIR}}/{{.APP_NAME}}.exe" {{if .SIGN_CERTIFICATE}}--certificate {{.SIGN_CERTIFICATE}}{{end}} {{if .SIGN_THUMBPRINT}}--thumbprint {{.SIGN_THUMBPRINT}}{{end}} {{if .TIMESTAMP_SERVER}}--timestamp {{.TIMESTAMP_SERVER}}{{end}}
preconditions:
- sh: '[ -n "{{.SIGN_CERTIFICATE}}" ] || [ -n "{{.SIGN_THUMBPRINT}}" ]'
msg: "Either SIGN_CERTIFICATE or SIGN_THUMBPRINT is required. Set it in the vars section at the top of build/windows/Taskfile.yml"
sign:installer:
summary: Signs the NSIS installer
desc: |
Creates and signs the NSIS installer.
Configure SIGN_CERTIFICATE or SIGN_THUMBPRINT in the vars section at the top of this file.
Password is retrieved from system keychain (run: wails3 setup signing)
deps:
- task: create:nsis:installer
cmds:
- wails3 tool sign --input "build/windows/nsis/{{.APP_NAME}}-installer.exe" {{if .SIGN_CERTIFICATE}}--certificate {{.SIGN_CERTIFICATE}}{{end}} {{if .SIGN_THUMBPRINT}}--thumbprint {{.SIGN_THUMBPRINT}}{{end}} {{if .TIMESTAMP_SERVER}}--timestamp {{.TIMESTAMP_SERVER}}{{end}}
preconditions:
- sh: '[ -n "{{.SIGN_CERTIFICATE}}" ] || [ -n "{{.SIGN_THUMBPRINT}}" ]'
msg: "Either SIGN_CERTIFICATE or SIGN_THUMBPRINT is required. Set it in the vars section at the top of build/windows/Taskfile.yml"

Binary file not shown.


View File

@@ -0,0 +1,15 @@
{
"fixed": {
"file_version": "0.0.1"
},
"info": {
"0000": {
"ProductVersion": "0.0.1",
"CompanyName": "NetBird",
"FileDescription": "NetBird desktop client",
"LegalCopyright": "© 2026, My Company",
"ProductName": "NetBird",
"Comments": "This is a comment"
}
}
}

Some files were not shown because too many files have changed in this diff.