Compare commits


159 Commits
1.16.0 ... dev

Author SHA1 Message Date
Owen
86bba494fe Disable intervals in saas 2026-03-14 16:03:43 -07:00
Owen
1a43f1ef4b Handle newt online offline with websocket 2026-03-14 11:59:20 -07:00
Owen
75ab074805 Attempt to improve handling bandwidth tracking 2026-03-13 12:06:01 -07:00
Owen
dc4e0253de Add message compression for large messages 2026-03-13 11:46:03 -07:00
Owen
cccf236042 Add optional compression 2026-03-12 17:49:21 -07:00
Owen
63fd63c65c Send less data down 2026-03-12 17:27:15 -07:00
Owen
beee1d692d revert: telemetry comment 2026-03-12 17:11:13 -07:00
Owen
fde786ca84 Add todo 2026-03-12 17:10:46 -07:00
Owen
3086fdd064 Merge branch 'dev' into jit 2026-03-12 16:58:23 -07:00
Owen
6c30f6db31 Don't send site if it is missing public key 2026-03-12 16:33:33 -07:00
Owen
f021b73458 Add alert about domain error 2026-03-11 18:00:23 -07:00
Owen
74f4751bcc Don't show raw resource option unless remote node 2026-03-11 17:47:15 -07:00
Owen
e5bce4e180 Merge branch 'main' into dev 2026-03-11 15:55:59 -07:00
Owen
9b0e7b381c Fix error to gerbil 2026-03-11 15:49:03 -07:00
Owen
90afe5a7ac Log errors 2026-03-11 15:42:40 -07:00
Owen
b24de85157 Handle gerbil rejecting 0
Closes #2605
2026-03-11 15:06:26 -07:00
Owen
eda43dffe1 Fix not pulling wildcard cert updates 2026-03-11 15:06:26 -07:00
Owen
82c9a1eb70 Add demo link 2026-03-11 15:06:26 -07:00
Owen
a3d4553d14 Merge branch 'main' into dev 2026-03-11 14:53:55 -07:00
Owen
1cc5f59f66 Implement email and ip banning 2026-03-11 11:42:31 -07:00
Owen
4e2d88efdd Add some logging to debug 2026-03-11 11:42:28 -07:00
Owen
4975cabb2c Use native drizzle count 2026-03-11 11:42:28 -07:00
Owen
225591094f Clean up 2026-03-11 11:42:28 -07:00
Owen
82f88f2cd3 Reorder delete 2026-03-11 11:42:28 -07:00
Owen
99e6bd31b6 Bump dompurify 2026-03-10 16:47:03 -07:00
Owen
5c50590d7b Bump esbuild 2026-03-10 16:47:03 -07:00
Owen
072c89e704 Bump dompurify 2026-03-10 16:43:40 -07:00
Owen
dbdff6812d Bump esbuild 2026-03-10 16:31:19 -07:00
dependabot[bot]
42b9d5158d Bump the prod-minor-updates group across 1 directory with 10 updates
Bumps the prod-minor-updates group with 10 updates in the / directory:

| Package | From | To |
| --- | --- | --- |
| [@aws-sdk/client-s3](https://github.com/aws/aws-sdk-js-v3/tree/HEAD/clients/client-s3) | `3.989.0` | `3.1003.0` |
| [express-rate-limit](https://github.com/express-rate-limit/express-rate-limit) | `8.2.1` | `8.3.0` |
| [ioredis](https://github.com/luin/ioredis) | `5.9.3` | `5.10.0` |
| [lucide-react](https://github.com/lucide-icons/lucide/tree/HEAD/packages/lucide-react) | `0.563.0` | `0.577.0` |
| [pg](https://github.com/brianc/node-postgres/tree/HEAD/packages/pg) | `8.19.0` | `8.20.0` |
| [posthog-node](https://github.com/PostHog/posthog-js/tree/HEAD/packages/node) | `5.26.0` | `5.28.0` |
| [react-day-picker](https://github.com/gpbl/react-day-picker) | `9.13.2` | `9.14.0` |
| [react-icons](https://github.com/react-icons/react-icons) | `5.5.0` | `5.6.0` |
| reodotdev | `1.0.0` | `1.1.0` |
| [stripe](https://github.com/stripe/stripe-node) | `20.3.1` | `20.4.0` |



Updates `@aws-sdk/client-s3` from 3.989.0 to 3.1003.0
- [Release notes](https://github.com/aws/aws-sdk-js-v3/releases)
- [Changelog](https://github.com/aws/aws-sdk-js-v3/blob/main/clients/client-s3/CHANGELOG.md)
- [Commits](https://github.com/aws/aws-sdk-js-v3/commits/v3.1003.0/clients/client-s3)

Updates `express-rate-limit` from 8.2.1 to 8.3.0
- [Release notes](https://github.com/express-rate-limit/express-rate-limit/releases)
- [Commits](https://github.com/express-rate-limit/express-rate-limit/compare/v8.2.1...v8.3.0)

Updates `ioredis` from 5.9.3 to 5.10.0
- [Release notes](https://github.com/luin/ioredis/releases)
- [Changelog](https://github.com/redis/ioredis/blob/main/CHANGELOG.md)
- [Commits](https://github.com/luin/ioredis/compare/v5.9.3...v5.10.0)

Updates `lucide-react` from 0.563.0 to 0.577.0
- [Release notes](https://github.com/lucide-icons/lucide/releases)
- [Commits](https://github.com/lucide-icons/lucide/commits/0.577.0/packages/lucide-react)

Updates `pg` from 8.19.0 to 8.20.0
- [Changelog](https://github.com/brianc/node-postgres/blob/master/CHANGELOG.md)
- [Commits](https://github.com/brianc/node-postgres/commits/pg@8.20.0/packages/pg)

Updates `posthog-node` from 5.26.0 to 5.28.0
- [Release notes](https://github.com/PostHog/posthog-js/releases)
- [Changelog](https://github.com/PostHog/posthog-js/blob/main/packages/node/CHANGELOG.md)
- [Commits](https://github.com/PostHog/posthog-js/commits/posthog-node@5.28.0/packages/node)

Updates `react-day-picker` from 9.13.2 to 9.14.0
- [Release notes](https://github.com/gpbl/react-day-picker/releases)
- [Changelog](https://github.com/gpbl/react-day-picker/blob/main/CHANGELOG.md)
- [Commits](https://github.com/gpbl/react-day-picker/compare/v9.13.2...v9.14.0)

Updates `react-icons` from 5.5.0 to 5.6.0
- [Release notes](https://github.com/react-icons/react-icons/releases)
- [Commits](https://github.com/react-icons/react-icons/compare/v5.5.0...v5.6.0)

Updates `reodotdev` from 1.0.0 to 1.1.0

Updates `stripe` from 20.3.1 to 20.4.0
- [Release notes](https://github.com/stripe/stripe-node/releases)
- [Changelog](https://github.com/stripe/stripe-node/blob/master/CHANGELOG.md)
- [Commits](https://github.com/stripe/stripe-node/compare/v20.3.1...v20.4.0)

---
updated-dependencies:
- dependency-name: "@aws-sdk/client-s3"
  dependency-version: 3.1003.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: prod-minor-updates
- dependency-name: express-rate-limit
  dependency-version: 8.3.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: prod-minor-updates
- dependency-name: ioredis
  dependency-version: 5.10.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: prod-minor-updates
- dependency-name: lucide-react
  dependency-version: 0.577.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: prod-minor-updates
- dependency-name: pg
  dependency-version: 8.20.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: prod-minor-updates
- dependency-name: posthog-node
  dependency-version: 5.28.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: prod-minor-updates
- dependency-name: react-day-picker
  dependency-version: 9.14.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: prod-minor-updates
- dependency-name: react-icons
  dependency-version: 5.6.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: prod-minor-updates
- dependency-name: reodotdev
  dependency-version: 1.1.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: prod-minor-updates
- dependency-name: stripe
  dependency-version: 20.4.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: prod-minor-updates
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-03-10 16:30:13 -07:00
dependabot[bot]
2ba225299e Bump the dev-minor-updates group across 1 directory with 5 updates
Bumps the dev-minor-updates group with 5 updates in the / directory:

| Package | From | To |
| --- | --- | --- |
| [@dotenvx/dotenvx](https://github.com/dotenvx/dotenvx) | `1.52.0` | `1.53.0` |
| [@tailwindcss/postcss](https://github.com/tailwindlabs/tailwindcss/tree/HEAD/packages/@tailwindcss-postcss) | `4.1.18` | `4.2.1` |
| [@types/node](https://github.com/DefinitelyTyped/DefinitelyTyped/tree/HEAD/types/node) | `25.2.3` | `25.3.5` |
| [tailwindcss](https://github.com/tailwindlabs/tailwindcss/tree/HEAD/packages/tailwindcss) | `4.1.18` | `4.2.1` |
| [typescript-eslint](https://github.com/typescript-eslint/typescript-eslint/tree/HEAD/packages/typescript-eslint) | `8.55.0` | `8.56.1` |



Updates `@dotenvx/dotenvx` from 1.52.0 to 1.53.0
- [Release notes](https://github.com/dotenvx/dotenvx/releases)
- [Changelog](https://github.com/dotenvx/dotenvx/blob/main/CHANGELOG.md)
- [Commits](https://github.com/dotenvx/dotenvx/compare/v1.52.0...v1.53.0)

Updates `@tailwindcss/postcss` from 4.1.18 to 4.2.1
- [Release notes](https://github.com/tailwindlabs/tailwindcss/releases)
- [Changelog](https://github.com/tailwindlabs/tailwindcss/blob/main/CHANGELOG.md)
- [Commits](https://github.com/tailwindlabs/tailwindcss/commits/v4.2.1/packages/@tailwindcss-postcss)

Updates `@types/node` from 25.2.3 to 25.3.5
- [Release notes](https://github.com/DefinitelyTyped/DefinitelyTyped/releases)
- [Commits](https://github.com/DefinitelyTyped/DefinitelyTyped/commits/HEAD/types/node)

Updates `tailwindcss` from 4.1.18 to 4.2.1
- [Release notes](https://github.com/tailwindlabs/tailwindcss/releases)
- [Changelog](https://github.com/tailwindlabs/tailwindcss/blob/main/CHANGELOG.md)
- [Commits](https://github.com/tailwindlabs/tailwindcss/commits/v4.2.1/packages/tailwindcss)

Updates `typescript-eslint` from 8.55.0 to 8.56.1
- [Release notes](https://github.com/typescript-eslint/typescript-eslint/releases)
- [Changelog](https://github.com/typescript-eslint/typescript-eslint/blob/main/packages/typescript-eslint/CHANGELOG.md)
- [Commits](https://github.com/typescript-eslint/typescript-eslint/commits/v8.56.1/packages/typescript-eslint)

---
updated-dependencies:
- dependency-name: "@dotenvx/dotenvx"
  dependency-version: 1.53.0
  dependency-type: direct:development
  update-type: version-update:semver-minor
  dependency-group: dev-minor-updates
- dependency-name: "@tailwindcss/postcss"
  dependency-version: 4.2.1
  dependency-type: direct:development
  update-type: version-update:semver-minor
  dependency-group: dev-minor-updates
- dependency-name: "@types/node"
  dependency-version: 25.3.5
  dependency-type: direct:development
  update-type: version-update:semver-minor
  dependency-group: dev-minor-updates
- dependency-name: tailwindcss
  dependency-version: 4.2.1
  dependency-type: direct:development
  update-type: version-update:semver-minor
  dependency-group: dev-minor-updates
- dependency-name: typescript-eslint
  dependency-version: 8.56.1
  dependency-type: direct:development
  update-type: version-update:semver-minor
  dependency-group: dev-minor-updates
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-03-10 16:30:00 -07:00
Owen
cc841d5640 Add some logging to debug 2026-03-10 14:24:57 -07:00
Shreyas Papinwar
fa0818d3fa fix: ensure Credenza dialog max-height 2026-03-10 10:07:36 -07:00
Owen
dec358c4cd Use native drizzle count 2026-03-10 10:03:49 -07:00
Owen
e98f873f81 Clean up 2026-03-09 21:16:37 -07:00
Owen
e9a2a7e752 Reorder delete 2026-03-09 20:46:27 -07:00
Owen
06015d5191 Handle gerbil rejecting 0
Closes #2605
2026-03-09 17:35:25 -07:00
Owen
af688d2a23 Add demo link 2026-03-09 17:35:04 -07:00
Owen
7d0b3ec6b5 Fix not pulling wildcard cert updates 2026-03-09 17:34:48 -07:00
Owen
cf5fb8dc33 Working on jit 2026-03-09 16:36:13 -07:00
Owen Schwartz
9a0a255445 Merge pull request #2524 from shreyaspapi/fix/2294-path-based-routing
fix: path-based routing broken due to key collisions in sanitize()
2026-03-07 21:18:59 -08:00
Owen Schwartz
91b7ceb2cf Merge pull request #2603 from Fizza-Mukhtar/fix/prevent-dashboard-domain-conflict-2595
fix: prevent resource from being created with dashboard's domain to avoid redirect loop
2026-03-07 21:15:53 -08:00
Owen Schwartz
d5a37436c0 Merge pull request #2616 from LaurenceJJones/fix/issue-240-hcStatus-missing
fix(newt): missing hcStatus in hc config on reconnect
2026-03-07 21:14:27 -08:00
Laurence
be609b5000 Fix missing hcStatus field in health check config on reconnect
The buildTargetConfigurationForNewtClient function was not including the
hcStatus field when building health check targets for the newt/wg/connect
message. This caused custom expected response codes (e.g., 409) to revert
to the default 2xx range check after a Pangolin server restart.

Added hcStatus to both the database select query and the returned health
check target object, matching the behavior in targets.ts addTargets.
2026-03-07 06:28:10 +00:00
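The fix described above can be sketched as follows; the row shape, the field names other than hcStatus, and the builder function are hypothetical stand-ins for Pangolin's actual drizzle query and target object, not its real code:

```typescript
// Hypothetical shapes; the real code selects these columns from the DB.
interface HealthCheckRow {
    hcEnabled: boolean;
    hcPath: string;
    hcStatus: number | null; // custom expected response code, e.g. 409
}

// Before the fix, a builder like this dropped hcStatus, so a server
// restart reverted custom expected codes to the default 2xx check.
function buildHealthCheckTarget(row: HealthCheckRow) {
    return {
        enabled: row.hcEnabled,
        path: row.hcPath,
        // The fix: carry hcStatus through so custom codes survive reconnect.
        hcStatus: row.hcStatus ?? undefined
    };
}

const target = buildHealthCheckTarget({
    hcEnabled: true,
    hcPath: "/health",
    hcStatus: 409
});
console.log(target.hcStatus); // 409
```

The same field also has to appear in the database select list, which is the other half of the commit.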
Owen
0503c6e66e Handle JIT for ssh 2026-03-06 15:49:17 -08:00
Owen Schwartz
d4b830b9bb Merge pull request #2613 from fosrl/dependabot/npm_and_yarn/multi-43b302174d
Bump fast-xml-parser and @aws-sdk/xml-builder
2026-03-06 14:16:27 -08:00
dependabot[bot]
14d6ff25a7 Bump fast-xml-parser and @aws-sdk/xml-builder
Bumps [fast-xml-parser](https://github.com/NaturalIntelligence/fast-xml-parser) and [@aws-sdk/xml-builder](https://github.com/aws/aws-sdk-js-v3/tree/HEAD/packages-internal/xml-builder). These dependencies needed to be updated together.

Updates `fast-xml-parser` from 5.3.6 to 5.4.1
- [Release notes](https://github.com/NaturalIntelligence/fast-xml-parser/releases)
- [Changelog](https://github.com/NaturalIntelligence/fast-xml-parser/blob/master/CHANGELOG.md)
- [Commits](https://github.com/NaturalIntelligence/fast-xml-parser/compare/v5.3.6...v5.4.1)

Updates `@aws-sdk/xml-builder` from 3.972.5 to 3.972.10
- [Release notes](https://github.com/aws/aws-sdk-js-v3/releases)
- [Changelog](https://github.com/aws/aws-sdk-js-v3/blob/main/packages-internal/xml-builder/CHANGELOG.md)
- [Commits](https://github.com/aws/aws-sdk-js-v3/commits/HEAD/packages-internal/xml-builder)

---
updated-dependencies:
- dependency-name: fast-xml-parser
  dependency-version: 5.4.1
  dependency-type: indirect
- dependency-name: "@aws-sdk/xml-builder"
  dependency-version: 3.972.10
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-03-06 22:14:56 +00:00
Owen Schwartz
1f62f305ce Merge pull request #2611 from fosrl/dependabot/npm_and_yarn/express-rate-limit-8.2.2
Bump express-rate-limit from 8.2.1 to 8.2.2
2026-03-06 14:13:32 -08:00
Owen
9405b0b70a Force jit above site limit 2026-03-06 14:09:57 -08:00
Owen
a26ee4ac1a Adjust billing upgrade language 2026-03-06 12:17:26 -08:00
dependabot[bot]
cebcf3e337 Bump express-rate-limit from 8.2.1 to 8.2.2
Bumps [express-rate-limit](https://github.com/express-rate-limit/express-rate-limit) from 8.2.1 to 8.2.2.
- [Release notes](https://github.com/express-rate-limit/express-rate-limit/releases)
- [Commits](https://github.com/express-rate-limit/express-rate-limit/compare/v8.2.1...v8.2.2)

---
updated-dependencies:
- dependency-name: express-rate-limit
  dependency-version: 8.2.2
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-03-06 18:40:41 +00:00
Fizza-Mukhtar
4cfcc64481 fix: use config instead of process.env for dashboard URL check 2026-03-05 01:07:30 -08:00
Fizza-Mukhtar
1a2069a6d9 fix: prevent resource creation with dashboard domain to avoid redirect loop 2026-03-05 00:39:03 -08:00
Owen
2a5c9465e9 Add chainId field passthrough 2026-03-04 22:17:58 -08:00
Owen
f36b66e397 Merge branch 'dev' into jit 2026-03-04 17:58:50 -08:00
Owen
8c6d44677d Update lock 2026-03-04 17:48:58 -08:00
Owen
1bfff630bf Jit working for sites 2026-03-04 17:46:58 -08:00
miloschwartz
ebcef28b05 remove resend from config 2026-03-04 17:45:48 -08:00
miloschwartz
e87e12898c remove resend 2026-03-04 17:45:22 -08:00
miloschwartz
d60ab281cf remove resend from package.json 2026-03-04 17:42:25 -08:00
Owen Schwartz
483d54a9f0 Merge pull request #2598 from fosrl/marketing-consetn
add consent boolean to schema
2026-03-04 15:52:08 -08:00
miloschwartz
0ab6ff9148 add consent boolean to schema 2026-03-04 15:50:42 -08:00
Owen
c73a39f797 Allow JIT based on site or resource 2026-03-04 15:44:27 -08:00
Owen Schwartz
c87b6872e5 Merge pull request #2594 from fosrl/dev
Translations
2026-03-03 21:37:56 -08:00
Owen Schwartz
f315c8bc43 Merge pull request #2569 from fosrl/crowdin_dev
New Crowdin updates
2026-03-03 21:34:23 -08:00
Owen Schwartz
20fa1519fd New translations en-us.json (French) 2026-03-03 21:33:01 -08:00
Owen Schwartz
54430afc40 New translations en-us.json (Norwegian Bokmal) 2026-03-03 21:32:59 -08:00
Owen Schwartz
7990d08fee New translations en-us.json (Chinese Simplified) 2026-03-03 21:32:58 -08:00
Owen Schwartz
e9042d9e2e New translations en-us.json (Turkish) 2026-03-03 21:32:57 -08:00
Owen Schwartz
24a15841e4 New translations en-us.json (Russian) 2026-03-03 21:32:55 -08:00
Owen Schwartz
bb8f6e09fd New translations en-us.json (Portuguese) 2026-03-03 21:32:54 -08:00
Owen Schwartz
04bc8ab694 New translations en-us.json (Polish) 2026-03-03 21:32:52 -08:00
Owen Schwartz
6ac8335cf2 New translations en-us.json (Dutch) 2026-03-03 21:32:51 -08:00
Owen Schwartz
4c6144f8fb New translations en-us.json (Korean) 2026-03-03 21:32:50 -08:00
Owen Schwartz
255003794e New translations en-us.json (Italian) 2026-03-03 21:32:48 -08:00
Owen Schwartz
119d5c79a0 New translations en-us.json (German) 2026-03-03 21:32:47 -08:00
Owen Schwartz
8e2d7c25df New translations en-us.json (Czech) 2026-03-03 21:32:46 -08:00
Owen Schwartz
753dee3023 New translations en-us.json (Bulgarian) 2026-03-03 21:32:44 -08:00
Owen Schwartz
cac0272952 New translations en-us.json (Spanish) 2026-03-03 21:32:43 -08:00
Owen Schwartz
ee5b74f9fc Merge pull request #2593 from fosrl/dev
1.16.2-s.2
2026-03-03 21:17:10 -08:00
Owen
1362b72cd3 Restrict what can be a header 2026-03-03 21:10:52 -08:00
Owen Schwartz
35b1566962 New translations en-us.json (French) 2026-03-03 20:42:42 -08:00
Owen Schwartz
a4bcce5a0c New translations en-us.json (Norwegian Bokmal) 2026-03-03 20:42:40 -08:00
Owen Schwartz
c03f1946e8 New translations en-us.json (Chinese Simplified) 2026-03-03 20:42:39 -08:00
Owen Schwartz
c11e107758 New translations en-us.json (Turkish) 2026-03-03 20:42:37 -08:00
Owen Schwartz
3b4e49f63a New translations en-us.json (Russian) 2026-03-03 20:42:36 -08:00
Owen Schwartz
ea7253f7e8 New translations en-us.json (Portuguese) 2026-03-03 20:42:34 -08:00
Owen Schwartz
8a529f7946 New translations en-us.json (Polish) 2026-03-03 20:42:33 -08:00
Owen Schwartz
e76612e018 New translations en-us.json (Dutch) 2026-03-03 20:42:31 -08:00
Owen Schwartz
e1f99985d8 New translations en-us.json (Korean) 2026-03-03 20:42:30 -08:00
Owen Schwartz
e0c2735635 New translations en-us.json (Italian) 2026-03-03 20:42:28 -08:00
Owen Schwartz
8e6b4e243d New translations en-us.json (German) 2026-03-03 20:42:27 -08:00
Owen Schwartz
2623fa8f02 New translations en-us.json (Czech) 2026-03-03 20:42:25 -08:00
Owen Schwartz
7ff92d32cd New translations en-us.json (Bulgarian) 2026-03-03 20:42:24 -08:00
Owen Schwartz
c7f691b20a New translations en-us.json (Spanish) 2026-03-03 20:42:23 -08:00
Owen
db042e520e Adjust language 2026-03-03 20:34:56 -08:00
miloschwartz
4cab693cfc openapi and swagger ui improvements and cleanup 2026-03-03 14:54:17 -08:00
Owen
c9515ae77c Add comment about not needing exit node 2026-03-03 14:54:17 -08:00
miloschwartz
d14de86f65 fix org selector spacing on mobile 2026-03-03 14:54:17 -08:00
Laurence
f6ee9db730 enhance(sidebar): make mobile org selector sticky
Make org selector sticky on mobile sidebar

Move OrgSelector outside the scrollable container so it stays fixed
at the top while menu items scroll, matching the desktop sidebar
behavior introduced in 9b2c0d0b.
2026-03-03 14:54:17 -08:00
ChanningHe
94353aea44 feat(integration): add domain CRUD endpoints to integration API 2026-03-03 14:54:17 -08:00
miloschwartz
ed95f10fcc openapi and swagger ui improvements and cleanup 2026-03-02 21:59:41 -08:00
Owen
64bae5b142 Merge branch 'main' into dev 2026-03-02 18:52:20 -08:00
Owen
19f9dda490 Add comment about not needing exit node 2026-03-02 16:28:01 -08:00
Owen Schwartz
cdf79edb00 Merge pull request #2570 from Fizza-Mukhtar/fix/mixed-target-failover-2448
fix: local targets ignored when newt site is unhealthy (mixed target failover)
2026-03-01 15:58:25 -08:00
Owen Schwartz
df53dfc936 New translations en-us.json (French) 2026-03-01 11:17:30 -08:00
Owen Schwartz
8e2e09ab81 New translations en-us.json (Norwegian Bokmal) 2026-03-01 11:17:28 -08:00
Owen Schwartz
1eac7cbccd New translations en-us.json (Chinese Simplified) 2026-03-01 11:17:27 -08:00
Owen Schwartz
ddaaed65e4 New translations en-us.json (Turkish) 2026-03-01 11:17:26 -08:00
Owen Schwartz
8e633c21c7 New translations en-us.json (Russian) 2026-03-01 11:17:24 -08:00
Owen Schwartz
e7c4ef44d8 New translations en-us.json (Portuguese) 2026-03-01 11:17:23 -08:00
Owen Schwartz
3d71470bd2 New translations en-us.json (Polish) 2026-03-01 11:17:21 -08:00
Owen Schwartz
dd627a222e New translations en-us.json (Dutch) 2026-03-01 11:17:20 -08:00
Owen Schwartz
62cc20fa1c New translations en-us.json (Korean) 2026-03-01 11:17:19 -08:00
Owen Schwartz
0450fc9f57 New translations en-us.json (Italian) 2026-03-01 11:17:17 -08:00
Owen Schwartz
c58aaf5ba6 New translations en-us.json (German) 2026-03-01 11:17:16 -08:00
Owen Schwartz
655522d4e2 New translations en-us.json (Czech) 2026-03-01 11:17:15 -08:00
Owen Schwartz
225475dcae New translations en-us.json (Bulgarian) 2026-03-01 11:17:13 -08:00
Owen Schwartz
ccb977fdfb New translations en-us.json (Spanish) 2026-03-01 11:17:12 -08:00
Milo Schwartz
280cbb6e22 Merge pull request #2553 from LaurenceJJones/explore/static-org-dropdown
enhance(sidebar): make mobile org selector sticky
2026-03-01 11:14:16 -08:00
miloschwartz
c20babcb53 fix org selector spacing on mobile 2026-03-01 11:13:49 -08:00
Owen Schwartz
768eebe2cd Merge pull request #2432 from ChanningHe/feat-integration-api-domain-crud
feat(integration): add domain CRUD endpoints to integration API
2026-03-01 11:12:05 -08:00
Owen Schwartz
44e3eedffa Merge pull request #2567 from marcschaeferger/fix-kubernetes-install
feat(kubernetes): enable newtInstances by default and update installation instructions
2026-03-01 10:56:18 -08:00
Marc Schäfer
bb189874cb fix(newt-install): conditionally display Kubernetes installation info
Signed-off-by: Marc Schäfer <git@marcschaeferger.de>
2026-03-01 10:55:58 -08:00
Marc Schäfer
34dadd0e16 feat(kubernetes): enable newtInstances by default and update installation instructions
Signed-off-by: Marc Schäfer <git@marcschaeferger.de>
2026-03-01 10:55:58 -08:00
Owen Schwartz
87b5cd9988 Merge pull request #2573 from Fizza-Mukhtar/fix/container-search-excludes-labels-2228
fix: exclude labels from container search to prevent false positives
2026-03-01 10:52:50 -08:00
Marc Schäfer
6a537a23e8 fix(newt-install): conditionally display Kubernetes installation info
Signed-off-by: Marc Schäfer <git@marcschaeferger.de>
2026-03-01 18:17:45 +01:00
Fizza-Mukhtar
e63a6e9b77 fix: treat local and wireguard sites as online for failover 2026-03-01 07:56:47 -08:00
Fizza-Mukhtar
7ce589c4f2 fix: exclude labels from container search to prevent false positives 2026-03-01 06:50:03 -08:00
Shreyas Papinwar
75a909784a fix: simplify path encoding per review — inline utils, use single key scheme
Address PR review comments:
- Remove pathUtils.ts and move sanitize/encodePath directly into utils.ts
- Simplify dual-key approach to single key using encodePath for map keys
- Remove backward-compat logic (not needed per reviewer)
- Update tests to match simplified approach
2026-03-01 15:48:26 +05:30
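The key collision this PR chain fixes can be sketched as follows; both functions are illustrative stand-ins under assumed behavior (a sanitize that flattens path characters, an encodePath that escapes them reversibly), not the project's actual implementations:

```typescript
// A sanitize() that replaces non-alphanumeric characters with "-" is
// not injective over paths: distinct paths can yield the same key.
function sanitize(input: string): string {
    return input.replace(/[^a-zA-Z0-9]/g, "-");
}

// A collision-free encoding: escape the escape character first, then
// give each special character its own unique escape sequence.
function encodePath(input: string): string {
    return input
        .replace(/_/g, "__") // escape the escape character itself
        .replace(/\//g, "_s") // slash
        .replace(/-/g, "_d"); // dash
}

const a = "/api/v1";
const b = "/api-v1";

console.log(sanitize(a) === sanitize(b)); // true  -> key collision
console.log(encodePath(a) === encodePath(b)); // false -> distinct keys
```

Per the final review in the chain, the collision-free encoding is used as the single map-key scheme, while the earlier commits had kept sanitize for Traefik-facing names.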
Shreyas
244f497a9c test: add comprehensive backward compatibility tests for path routing fix 2026-03-01 15:48:26 +05:30
Shreyas
e58f0c9f07 fix: preserve backward-compatible router names while fixing path collisions
Use encodePath only for internal map key grouping (collision-free) and
sanitize for Traefik-facing router/service names (unchanged for existing
users). Extract pure functions into pathUtils.ts so tests can run without
DB dependencies.
2026-03-01 15:48:26 +05:30
Shreyas
5f18c06e03 fix: use collision-free path encoding for Traefik router key generation 2026-03-01 15:48:26 +05:30
Fizza-Mukhtar
f36cf06e26 fix: fallback to local targets when newt targets are unhealthy 2026-03-01 01:43:15 -08:00
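The failover fix described across these commits (treat local and wireguard sites as online, fall back to local targets when newt targets are unhealthy) can be sketched as follows; the types and selection function are illustrative assumptions, not Pangolin's actual code:

```typescript
type SiteType = "newt" | "wireguard" | "local";

interface Target {
    address: string;
    siteType: SiteType;
    siteOnline: boolean; // a live health signal only exists for newt sites
}

// Illustrative selection logic: local and wireguard sites have no newt
// health signal, so treat them as online rather than discarding them
// when a newt site in the same mixed target set is down.
function selectTargets(targets: Target[]): Target[] {
    return targets.filter(
        (t) =>
            t.siteType === "local" ||
            t.siteType === "wireguard" ||
            t.siteOnline
    );
}

const mixed: Target[] = [
    { address: "10.0.0.2:8080", siteType: "newt", siteOnline: false },
    { address: "192.168.1.5:8080", siteType: "local", siteOnline: false }
];

// Only the local target remains when the newt site is unhealthy.
console.log(selectTargets(mixed).map((t) => t.address));
```

Before the fix, filtering on siteOnline alone dropped the local targets too, leaving the resource with no upstream at all.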
Owen Schwartz
27d52646a0 New translations en-us.json (Norwegian Bokmal) 2026-02-28 20:13:31 -08:00
Owen Schwartz
4dd8080c55 New translations en-us.json (Chinese Simplified) 2026-02-28 20:13:29 -08:00
Owen Schwartz
0b35d4f2e3 New translations en-us.json (Turkish) 2026-02-28 20:13:28 -08:00
Owen Schwartz
54a9fb9e54 New translations en-us.json (Russian) 2026-02-28 20:13:27 -08:00
Owen Schwartz
60a9e68f02 New translations en-us.json (Portuguese) 2026-02-28 20:13:25 -08:00
Owen Schwartz
ad374298e3 New translations en-us.json (Polish) 2026-02-28 20:13:24 -08:00
Owen Schwartz
c5dc4e6127 New translations en-us.json (Dutch) 2026-02-28 20:13:22 -08:00
Owen Schwartz
291ad831c5 New translations en-us.json (Korean) 2026-02-28 20:13:21 -08:00
Owen Schwartz
0a018f0ca8 New translations en-us.json (Italian) 2026-02-28 20:13:20 -08:00
Owen Schwartz
6673eeb1bb New translations en-us.json (German) 2026-02-28 20:13:18 -08:00
Owen Schwartz
4641f0b9ef New translations en-us.json (Czech) 2026-02-28 20:13:17 -08:00
Owen Schwartz
a4487964e5 New translations en-us.json (Bulgarian) 2026-02-28 20:13:15 -08:00
Owen Schwartz
fe42fdd1ec New translations en-us.json (Spanish) 2026-02-28 20:13:14 -08:00
Marc Schäfer
375211f184 feat(kubernetes): enable newtInstances by default and update installation instructions
Signed-off-by: Marc Schäfer <git@marcschaeferger.de>
2026-02-28 23:56:28 +01:00
Owen
66c377a5c9 Merge branch 'main' into dev 2026-02-28 12:14:41 -08:00
Owen
50c2aa0111 Add default memory limits 2026-02-28 12:14:27 -08:00
Owen
fdeb891137 Fix pagination affecting dropdowns 2026-02-28 12:07:42 -08:00
Owen Schwartz
6a6e3a43b1 Merge pull request #2562 from LaurenceJJones/fix/zod-openapi-catch-error
fix(zod): Add openapi call after catch
2026-02-28 11:04:10 -08:00
Laurence
b0a34fa21b fix(openapi): Add openapi call after catch
fix: #2561
Without an explicit call to openapi, a runtime error occurs because the type cannot be inferred; the openapi call added here is the same as the one used across the codebase.
2026-02-28 11:27:19 +00:00
Owen
72bf6f3c41 Comma separated 2026-02-27 17:53:44 -08:00
miloschwartz
ad9289e0c1 sort by name by default 2026-02-27 15:53:27 -08:00
Owen Schwartz
b0cb0e5a99 Merge pull request #2559 from fosrl/dev
1.16.1
2026-02-27 12:40:23 -08:00
miloschwartz
8347203bbe add sort to name col 2026-02-27 12:39:26 -08:00
miloschwartz
4aa1186aed fix machine client pagination 2026-02-27 11:59:55 -08:00
Laurence
81c1a1da9c enhance(sidebar): make mobile org selector sticky
Make org selector sticky on mobile sidebar

Move OrgSelector outside the scrollable container so it stays fixed
at the top while menu items scroll, matching the desktop sidebar
behavior introduced in 9b2c0d0b.
2026-02-26 15:45:41 +00:00
ChanningHe
52f26396ac feat(integration): add domain CRUD endpoints to integration API 2026-02-26 08:44:55 +09:00
196 changed files with 4650 additions and 4305 deletions


@@ -4,6 +4,12 @@ services:
     image: fosrl/pangolin:latest
     container_name: pangolin
     restart: unless-stopped
+    deploy:
+      resources:
+        limits:
+          memory: 1g
+        reservations:
+          memory: 256m
     volumes:
       - ./config:/app/config
     healthcheck:


@@ -4,6 +4,12 @@ services:
     image: docker.io/fosrl/pangolin:{{if .IsEnterprise}}ee-{{end}}{{.PangolinVersion}}
     container_name: pangolin
     restart: unless-stopped
+    deploy:
+      resources:
+        limits:
+          memory: 1g
+        reservations:
+          memory: 256m
     volumes:
       - ./config:/app/config
     healthcheck:


@@ -175,6 +175,7 @@
     "resourceHTTPDescription": "Прокси заявки чрез HTTPS, използвайки напълно квалифицирано име на домейн.",
     "resourceRaw": "Суров TCP/UDP ресурс",
     "resourceRawDescription": "Прокси заявки чрез сурови TCP/UDP, използвайки порт номер.",
+    "resourceRawDescriptionCloud": "Прокси заявките през суров TCP/UDP, използвайки номер на порт. ИЗИСКВА ИЗПОЛЗВАНЕ НА ОТДАЛЕЧЕН УЗЕЛ.",
     "resourceCreate": "Създайте ресурс",
     "resourceCreateDescription": "Следвайте стъпките по-долу, за да създадете нов ресурс",
     "resourceSeeAll": "Вижте всички ресурси",
@@ -1101,6 +1102,12 @@
     "actionGetUser": "Получаване на потребител",
     "actionGetOrgUser": "Вземете потребител на организация",
     "actionListOrgDomains": "Изброяване на домейни на организация",
+    "actionGetDomain": "Вземи домейн",
+    "actionCreateOrgDomain": "Създай домейн",
+    "actionUpdateOrgDomain": "Актуализирай домейн",
+    "actionDeleteOrgDomain": "Изтрий домейн",
+    "actionGetDNSRecords": "Вземи DNS записи",
+    "actionRestartOrgDomain": "Рестартирай домейн",
     "actionCreateSite": "Създаване на сайт",
     "actionDeleteSite": "Изтриване на сайта",
     "actionGetSite": "Вземете сайт",
@@ -1669,10 +1676,10 @@
     "sshSudoModeCommandsDescription": "Потребителят може да изпълнява само определени команди с sudo.",
     "sshSudo": "Разреши sudo",
     "sshSudoCommands": "Sudo команди",
-    "sshSudoCommandsDescription": "Списък с команди, които потребителят е разрешено да изпълнява с sudo.",
+    "sshSudoCommandsDescription": "Списък, разделен със запетаи, с команди, които потребителят е позволено да изпълнява с sudo.",
     "sshCreateHomeDir": "Създай начална директория",
     "sshUnixGroups": "Unix групи",
-    "sshUnixGroupsDescription": "Unix групи, в които да добавите потребителя на целевия хост.",
+    "sshUnixGroupsDescription": "Списък, разделен със запетаи, с Unix групи, към които да се добави потребителят на целевия хост.",
     "retryAttempts": "Опити за повторно",
     "expectedResponseCodes": "Очаквани кодове за отговор",
     "expectedResponseCodesDescription": "HTTP статус код, указващ здравословно състояние. Ако бъде оставено празно, между 200-300 се счита за здравословно.",


@@ -175,6 +175,7 @@
     "resourceHTTPDescription": "Proxy požadavky přes HTTPS pomocí plně kvalifikovaného názvu domény.",
     "resourceRaw": "Surový TCP/UDP zdroj",
     "resourceRawDescription": "Proxy požadavky přes nezpracovaný TCP/UDP pomocí čísla portu.",
+    "resourceRawDescriptionCloud": "Požadavky na proxy přes syrové TCP/UDP pomocí portového čísla. ŽÁDOSTI POUŽÍVAT POUŽITÍ Z REMOTE NODE.",
     "resourceCreate": "Vytvořit zdroj",
     "resourceCreateDescription": "Postupujte podle níže uvedených kroků, abyste vytvořili a připojili nový zdroj",
     "resourceSeeAll": "Zobrazit všechny zdroje",
@@ -1101,6 +1102,12 @@
     "actionGetUser": "Získat uživatele",
     "actionGetOrgUser": "Získat uživatele organizace",
     "actionListOrgDomains": "Seznam domén organizace",
+    "actionGetDomain": "Získat doménu",
+    "actionCreateOrgDomain": "Vytvořit doménu",
+    "actionUpdateOrgDomain": "Aktualizovat doménu",
+    "actionDeleteOrgDomain": "Odstranit doménu",
+    "actionGetDNSRecords": "Získat záznamy DNS",
+    "actionRestartOrgDomain": "Restartovat doménu",
     "actionCreateSite": "Vytvořit lokalitu",
     "actionDeleteSite": "Odstranění lokality",
     "actionGetSite": "Získat web",
@@ -1669,10 +1676,10 @@
     "sshSudoModeCommandsDescription": "Uživatel může spustit pouze zadané příkazy s sudo.",
     "sshSudo": "Povolit sudo",
     "sshSudoCommands": "Sudo příkazy",
-    "sshSudoCommandsDescription": "Seznam příkazů, které může uživatel spouštět s sudo.",
+    "sshSudoCommandsDescription": "Čárkami oddělený seznam příkazů, které může uživatel spouštět s sudo.",
     "sshCreateHomeDir": "Vytvořit domovský adresář",
     "sshUnixGroups": "Unixové skupiny",
-    "sshUnixGroupsDescription": "Unix skupiny přidají uživatele do cílového hostitele.",
+    "sshUnixGroupsDescription": "Čárkou oddělené skupiny Unix přidají uživatele do cílového hostitele.",
     "retryAttempts": "Opakovat pokusy",
     "expectedResponseCodes": "Očekávané kódy odezvy",
     "expectedResponseCodesDescription": "HTTP kód stavu, který označuje zdravý stav. Ponecháte-li prázdné, 200-300 je považováno za zdravé.",


@@ -175,6 +175,7 @@
 "resourceHTTPDescription": "Proxy-Anfragen über HTTPS mit einem voll qualifizierten Domain-Namen.",
 "resourceRaw": "Direkte TCP/UDP Ressource (raw)",
 "resourceRawDescription": "Proxy-Anfragen über rohes TCP/UDP mit einer Portnummer.",
+"resourceRawDescriptionCloud": "Proxy-Anfragen über rohes TCP/UDP mit einer Portnummer. ERFORDERT DIE NUTZUNG EINES REMOTE-KNOTENS.",
 "resourceCreate": "Ressource erstellen",
 "resourceCreateDescription": "Folgen Sie den Schritten unten, um eine neue Ressource zu erstellen",
 "resourceSeeAll": "Alle Ressourcen anzeigen",
@@ -1101,6 +1102,12 @@
 "actionGetUser": "Benutzer abrufen",
 "actionGetOrgUser": "Organisationsbenutzer abrufen",
 "actionListOrgDomains": "Organisationsdomains auflisten",
+"actionGetDomain": "Domain abrufen",
+"actionCreateOrgDomain": "Domain erstellen",
+"actionUpdateOrgDomain": "Domain aktualisieren",
+"actionDeleteOrgDomain": "Domain löschen",
+"actionGetDNSRecords": "DNS-Einträge abrufen",
+"actionRestartOrgDomain": "Domain neu starten",
 "actionCreateSite": "Standort erstellen",
 "actionDeleteSite": "Standort löschen",
 "actionGetSite": "Standort abrufen",
@@ -1669,10 +1676,10 @@
 "sshSudoModeCommandsDescription": "Benutzer kann nur die angegebenen Befehle mit sudo ausführen.",
 "sshSudo": "sudo erlauben",
 "sshSudoCommands": "Sudo-Befehle",
-"sshSudoCommandsDescription": "Liste der Befehle, die der Benutzer mit sudo ausführen darf.",
+"sshSudoCommandsDescription": "Kommagetrennte Liste von Befehlen, die der Benutzer mit sudo ausführen darf.",
 "sshCreateHomeDir": "Home-Verzeichnis erstellen",
 "sshUnixGroups": "Unix-Gruppen",
-"sshUnixGroupsDescription": "Unix-Gruppen, zu denen der Benutzer auf dem Ziel-Host hinzugefügt wird.",
+"sshUnixGroupsDescription": "Durch Komma getrennte Unix-Gruppen, zu denen der Benutzer auf dem Zielhost hinzugefügt wird.",
 "retryAttempts": "Wiederholungsversuche",
 "expectedResponseCodes": "Erwartete Antwortcodes",
 "expectedResponseCodesDescription": "HTTP-Statuscode, der einen gesunden Zustand anzeigt. Wenn leer gelassen, wird 200-300 als gesund angesehen.",


@@ -175,6 +175,7 @@
 "resourceHTTPDescription": "Proxy requests over HTTPS using a fully qualified domain name.",
 "resourceRaw": "Raw TCP/UDP Resource",
 "resourceRawDescription": "Proxy requests over raw TCP/UDP using a port number.",
+"resourceRawDescriptionCloud": "Proxy requests over raw TCP/UDP using a port number. REQUIRES THE USE OF A REMOTE NODE.",
 "resourceCreate": "Create Resource",
 "resourceCreateDescription": "Follow the steps below to create a new resource",
 "resourceSeeAll": "See All Resources",
@@ -1102,6 +1103,12 @@
 "actionGetUser": "Get User",
 "actionGetOrgUser": "Get Organization User",
 "actionListOrgDomains": "List Organization Domains",
+"actionGetDomain": "Get Domain",
+"actionCreateOrgDomain": "Create Domain",
+"actionUpdateOrgDomain": "Update Domain",
+"actionDeleteOrgDomain": "Delete Domain",
+"actionGetDNSRecords": "Get DNS Records",
+"actionRestartOrgDomain": "Restart Domain",
 "actionCreateSite": "Create Site",
 "actionDeleteSite": "Delete Site",
 "actionGetSite": "Get Site",
@@ -1670,10 +1677,10 @@
 "sshSudoModeCommandsDescription": "User can run only the specified commands with sudo.",
 "sshSudo": "Allow sudo",
 "sshSudoCommands": "Sudo Commands",
-"sshSudoCommandsDescription": "List of commands the user is allowed to run with sudo.",
+"sshSudoCommandsDescription": "Comma separated list of commands the user is allowed to run with sudo.",
 "sshCreateHomeDir": "Create Home Directory",
 "sshUnixGroups": "Unix Groups",
-"sshUnixGroupsDescription": "Unix groups to add the user to on the target host.",
+"sshUnixGroupsDescription": "Comma separated Unix groups to add the user to on the target host.",
 "retryAttempts": "Retry Attempts",
 "expectedResponseCodes": "Expected Response Codes",
 "expectedResponseCodesDescription": "HTTP status code that indicates healthy status. If left blank, 200-300 is considered healthy.",
@@ -2336,8 +2343,8 @@
 "logRetentionEndOfFollowingYear": "End of following year",
 "actionLogsDescription": "View a history of actions performed in this organization",
 "accessLogsDescription": "View access auth requests for resources in this organization",
-"licenseRequiredToUse": "An <enterpriseLicenseLink>Enterprise Edition</enterpriseLicenseLink> license or <pangolinCloudLink>Pangolin Cloud</pangolinCloudLink> is required to use this feature.",
-"ossEnterpriseEditionRequired": "The <enterpriseEditionLink>Enterprise Edition</enterpriseEditionLink> is required to use this feature. This feature is also available in <pangolinCloudLink>Pangolin Cloud</pangolinCloudLink>.",
+"licenseRequiredToUse": "An <enterpriseLicenseLink>Enterprise Edition</enterpriseLicenseLink> license or <pangolinCloudLink>Pangolin Cloud</pangolinCloudLink> is required to use this feature. <bookADemoLink>Book a demo or POC trial</bookADemoLink>.",
+"ossEnterpriseEditionRequired": "The <enterpriseEditionLink>Enterprise Edition</enterpriseEditionLink> is required to use this feature. This feature is also available in <pangolinCloudLink>Pangolin Cloud</pangolinCloudLink>. <bookADemoLink>Book a demo or POC trial</bookADemoLink>.",
 "certResolver": "Certificate Resolver",
 "certResolverDescription": "Select the certificate resolver to use for this resource.",
 "selectCertResolver": "Select Certificate Resolver",
@@ -2674,5 +2681,6 @@
 "approvalsEmptyStateStep2Title": "Enable Device Approvals",
 "approvalsEmptyStateStep2Description": "Edit a role and enable the 'Require Device Approvals' option. Users with this role will need admin approval for new devices.",
 "approvalsEmptyStatePreviewDescription": "Preview: When enabled, pending device requests will appear here for review",
-"approvalsEmptyStateButtonText": "Manage Roles"
+"approvalsEmptyStateButtonText": "Manage Roles",
+"domainErrorTitle": "We are having trouble verifying your domain"
 }


@@ -175,6 +175,7 @@
 "resourceHTTPDescription": "Proxy proporciona solicitudes sobre HTTPS usando un nombre de dominio completamente calificado.",
 "resourceRaw": "Recurso TCP/UDP sin procesar",
 "resourceRawDescription": "Proxy proporciona solicitudes sobre TCP/UDP usando un número de puerto.",
+"resourceRawDescriptionCloud": "Peticiones de proxy sobre TCP/UDP crudo usando un número de puerto. REQUIERE EL USO DE UN NODO REMOTO.",
 "resourceCreate": "Crear Recurso",
 "resourceCreateDescription": "Siga los siguientes pasos para crear un nuevo recurso",
 "resourceSeeAll": "Ver todos los recursos",
@@ -1101,6 +1102,12 @@
 "actionGetUser": "Obtener usuario",
 "actionGetOrgUser": "Obtener usuario de la organización",
 "actionListOrgDomains": "Listar dominios de la organización",
+"actionGetDomain": "Obtener dominio",
+"actionCreateOrgDomain": "Crear dominio",
+"actionUpdateOrgDomain": "Actualizar dominio",
+"actionDeleteOrgDomain": "Eliminar dominio",
+"actionGetDNSRecords": "Obtener registros DNS",
+"actionRestartOrgDomain": "Reiniciar dominio",
 "actionCreateSite": "Crear sitio",
 "actionDeleteSite": "Eliminar sitio",
 "actionGetSite": "Obtener sitio",
@@ -1669,10 +1676,10 @@
 "sshSudoModeCommandsDescription": "El usuario sólo puede ejecutar los comandos especificados con sudo.",
 "sshSudo": "Permitir sudo",
 "sshSudoCommands": "Comandos Sudo",
-"sshSudoCommandsDescription": "Lista de comandos que el usuario puede ejecutar con sudo.",
+"sshSudoCommandsDescription": "Lista separada por comas de comandos que el usuario puede ejecutar con sudo.",
 "sshCreateHomeDir": "Crear directorio principal",
 "sshUnixGroups": "Grupos Unix",
-"sshUnixGroupsDescription": "Grupos Unix para agregar el usuario en el host de destino.",
+"sshUnixGroupsDescription": "Grupos Unix separados por comas para agregar el usuario en el host de destino.",
 "retryAttempts": "Intentos de Reintento",
 "expectedResponseCodes": "Códigos de respuesta esperados",
 "expectedResponseCodesDescription": "Código de estado HTTP que indica un estado saludable. Si se deja en blanco, se considera saludable de 200 a 300.",


@@ -175,6 +175,7 @@
 "resourceHTTPDescription": "Proxy les demandes sur HTTPS en utilisant un nom de domaine entièrement qualifié.",
 "resourceRaw": "Ressource TCP/UDP brute",
 "resourceRawDescription": "Proxy les demandes sur TCP/UDP brut en utilisant un numéro de port.",
+"resourceRawDescriptionCloud": "Proxy les demandes sur TCP/UDP brut en utilisant un numéro de port. NÉCESSITE L'UTILISATION D'UN NŒUD DISTANT.",
 "resourceCreate": "Créer une ressource",
 "resourceCreateDescription": "Suivez les étapes ci-dessous pour créer une nouvelle ressource",
 "resourceSeeAll": "Voir toutes les ressources",
@@ -1101,6 +1102,12 @@
 "actionGetUser": "Obtenir l'utilisateur",
 "actionGetOrgUser": "Obtenir l'utilisateur de l'organisation",
 "actionListOrgDomains": "Lister les domaines de l'organisation",
+"actionGetDomain": "Obtenir un domaine",
+"actionCreateOrgDomain": "Créer un domaine",
+"actionUpdateOrgDomain": "Mettre à jour le domaine",
+"actionDeleteOrgDomain": "Supprimer le domaine",
+"actionGetDNSRecords": "Récupérer les enregistrements DNS",
+"actionRestartOrgDomain": "Redémarrer le domaine",
 "actionCreateSite": "Créer un site",
 "actionDeleteSite": "Supprimer un site",
 "actionGetSite": "Obtenir un site",
@@ -1669,10 +1676,10 @@
 "sshSudoModeCommandsDescription": "L'utilisateur ne peut exécuter que les commandes spécifiées avec sudo.",
 "sshSudo": "Autoriser sudo",
 "sshSudoCommands": "Commandes Sudo",
-"sshSudoCommandsDescription": "Liste des commandes que l'utilisateur est autorisé à exécuter avec sudo.",
+"sshSudoCommandsDescription": "Liste des commandes séparées par des virgules que l'utilisateur est autorisé à exécuter avec sudo.",
 "sshCreateHomeDir": "Créer un répertoire personnel",
 "sshUnixGroups": "Groupes Unix",
-"sshUnixGroupsDescription": "Groupes Unix à ajouter à l'utilisateur sur l'hôte cible.",
+"sshUnixGroupsDescription": "Groupes Unix séparés par des virgules pour ajouter l'utilisateur sur l'hôte cible.",
 "retryAttempts": "Tentatives de réessai",
 "expectedResponseCodes": "Codes de réponse attendus",
 "expectedResponseCodesDescription": "Code de statut HTTP indiquant un état de santé satisfaisant. Si non renseigné, 200-300 est considéré comme satisfaisant.",


@@ -175,6 +175,7 @@
 "resourceHTTPDescription": "Richieste proxy su HTTPS usando un nome di dominio completo.",
 "resourceRaw": "Risorsa Raw TCP/UDP",
 "resourceRawDescription": "Richieste proxy su TCP/UDP grezzo utilizzando un numero di porta.",
+"resourceRawDescriptionCloud": "Richieste proxy su TCP/UDP grezzo utilizzando un numero di porta. RICHIEDE L'USO DI UN NODO REMOTO.",
 "resourceCreate": "Crea Risorsa",
 "resourceCreateDescription": "Segui i passaggi seguenti per creare una nuova risorsa",
 "resourceSeeAll": "Vedi Tutte Le Risorse",
@@ -1101,6 +1102,12 @@
 "actionGetUser": "Ottieni Utente",
 "actionGetOrgUser": "Ottieni Utente Organizzazione",
 "actionListOrgDomains": "Elenca Domini Organizzazione",
+"actionGetDomain": "Ottieni Dominio",
+"actionCreateOrgDomain": "Crea Dominio",
+"actionUpdateOrgDomain": "Aggiorna Dominio",
+"actionDeleteOrgDomain": "Elimina Dominio",
+"actionGetDNSRecords": "Ottieni Record DNS",
+"actionRestartOrgDomain": "Riavvia Dominio",
 "actionCreateSite": "Crea Sito",
 "actionDeleteSite": "Elimina Sito",
 "actionGetSite": "Ottieni Sito",
@@ -1669,10 +1676,10 @@
 "sshSudoModeCommandsDescription": "L'utente può eseguire solo i comandi specificati con sudo.",
 "sshSudo": "Consenti sudo",
 "sshSudoCommands": "Comandi Sudo",
-"sshSudoCommandsDescription": "Elenco di comandi che l'utente può eseguire con sudo.",
+"sshSudoCommandsDescription": "Elenco di comandi separati da virgole che l'utente può eseguire con sudo.",
 "sshCreateHomeDir": "Crea Cartella Home",
 "sshUnixGroups": "Gruppi Unix",
-"sshUnixGroupsDescription": "Gruppi Unix su cui aggiungere l'utente sull'host di destinazione.",
+"sshUnixGroupsDescription": "Gruppi Unix separati da virgole per aggiungere l'utente sull'host di destinazione.",
 "retryAttempts": "Tentativi di Riprova",
 "expectedResponseCodes": "Codici di Risposta Attesi",
 "expectedResponseCodesDescription": "Codice di stato HTTP che indica lo stato di salute. Se lasciato vuoto, considerato sano è compreso tra 200-300.",


@@ -175,6 +175,7 @@
 "resourceHTTPDescription": "완전한 도메인 이름을 사용해 RAW 또는 HTTPS로 프록시 요청을 수행합니다.",
 "resourceRaw": "원시 TCP/UDP 리소스",
 "resourceRawDescription": "포트 번호를 사용하여 RAW TCP/UDP로 요청을 프록시합니다.",
+"resourceRawDescriptionCloud": "원시 TCP/UDP를 포트 번호를 사용하여 프록시 요청합니다. 원격 노드 사용이 필요합니다.",
 "resourceCreate": "리소스 생성",
 "resourceCreateDescription": "아래 단계를 따라 새 리소스를 생성하세요.",
 "resourceSeeAll": "모든 리소스 보기",
@@ -1101,6 +1102,12 @@
 "actionGetUser": "사용자 조회",
 "actionGetOrgUser": "조직 사용자 가져오기",
 "actionListOrgDomains": "조직 도메인 목록",
+"actionGetDomain": "도메인 가져오기",
+"actionCreateOrgDomain": "도메인 생성",
+"actionUpdateOrgDomain": "도메인 업데이트",
+"actionDeleteOrgDomain": "도메인 삭제",
+"actionGetDNSRecords": "DNS 레코드 가져오기",
+"actionRestartOrgDomain": "도메인 재시작",
 "actionCreateSite": "사이트 생성",
 "actionDeleteSite": "사이트 삭제",
 "actionGetSite": "사이트 가져오기",
@@ -1669,10 +1676,10 @@
 "sshSudoModeCommandsDescription": "사용자는 sudo로 지정된 명령만 실행할 수 있습니다.",
 "sshSudo": "Sudo 허용",
 "sshSudoCommands": "Sudo 명령",
-"sshSudoCommandsDescription": "사용자가 sudo로 실행할 수 있도록 허용된 명령 목록입니다.",
+"sshSudoCommandsDescription": "사용자가 sudo로 실행할 수 있는 명령어의 쉼표로 구분된 목록입니다.",
 "sshCreateHomeDir": "홈 디렉터리 생성",
 "sshUnixGroups": "유닉스 그룹",
-"sshUnixGroupsDescription": "대상 호스트에서 사용자 추가할 유닉스 그룹입니다.",
+"sshUnixGroupsDescription": "대상 호스트에서 사용자에게 추가할 유닉스 그룹의 쉼표로 구분된 목록입니다.",
 "retryAttempts": "재시도 횟수",
 "expectedResponseCodes": "예상 응답 코드",
 "expectedResponseCodesDescription": "정상 상태를 나타내는 HTTP 상태 코드입니다. 비워 두면 200-300이 정상으로 간주됩니다.",


@@ -175,6 +175,7 @@
 "resourceHTTPDescription": "Proxy forespørsler over HTTPS ved å bruke et fullstendig kvalifisert domenenavn.",
 "resourceRaw": "Rå TCP/UDP-ressurs",
 "resourceRawDescription": "Proxy forespørsler over rå TCP/UDP ved å bruke et portnummer.",
+"resourceRawDescriptionCloud": "Proxy forespørsler over rå TCP/UDP ved å bruke et portnummer. KREVER BRUK AV EN EKSTERN NODE.",
 "resourceCreate": "Opprett ressurs",
 "resourceCreateDescription": "Følg trinnene nedenfor for å opprette en ny ressurs",
 "resourceSeeAll": "Se alle ressurser",
@@ -1101,6 +1102,12 @@
 "actionGetUser": "Hent bruker",
 "actionGetOrgUser": "Hent organisasjonsbruker",
 "actionListOrgDomains": "List opp organisasjonsdomener",
+"actionGetDomain": "Hent domene",
+"actionCreateOrgDomain": "Opprett domene",
+"actionUpdateOrgDomain": "Oppdater domene",
+"actionDeleteOrgDomain": "Slett domene",
+"actionGetDNSRecords": "Hent DNS-oppføringer",
+"actionRestartOrgDomain": "Omstart domene",
 "actionCreateSite": "Opprett område",
 "actionDeleteSite": "Slett område",
 "actionGetSite": "Hent område",
@@ -1669,10 +1676,10 @@
 "sshSudoModeCommandsDescription": "Brukeren kan bare kjøre de angitte kommandoene med sudo.",
 "sshSudo": "Tillat sudo",
 "sshSudoCommands": "Sudo kommandoer",
-"sshSudoCommandsDescription": "Liste av kommandoer brukeren har lov til å kjøre med sudo.",
+"sshSudoCommandsDescription": "Kommaseparert liste med kommandoer brukeren kan kjøre med sudo.",
 "sshCreateHomeDir": "Opprett hjemmappe",
 "sshUnixGroups": "Unix grupper",
-"sshUnixGroupsDescription": "Unix grupper for å legge til brukeren til målverten.",
+"sshUnixGroupsDescription": "Kommaseparerte Unix grupper for å legge brukeren til mål-verten.",
 "retryAttempts": "Forsøk på nytt",
 "expectedResponseCodes": "Forventede svarkoder",
 "expectedResponseCodesDescription": "HTTP-statuskode som indikerer sunn status. Hvis den blir stående tom, regnes 200-300 som sunn.",


@@ -175,6 +175,7 @@
 "resourceHTTPDescription": "Proxyverzoeken via HTTPS met een volledig gekwalificeerde domeinnaam.",
 "resourceRaw": "TCP/UDP bron",
 "resourceRawDescription": "Proxyverzoeken via ruwe TCP/UDP met een poortnummer.",
+"resourceRawDescriptionCloud": "Proxyverzoeken via ruwe TCP/UDP met een poortnummer. VEREIST HET GEBRUIK VAN EEN EXTERNE NODE.",
 "resourceCreate": "Bron maken",
 "resourceCreateDescription": "Volg de onderstaande stappen om een nieuwe bron te maken",
 "resourceSeeAll": "Alle bronnen bekijken",
@@ -1101,6 +1102,12 @@
 "actionGetUser": "Gebruiker ophalen",
 "actionGetOrgUser": "Krijg organisatie-gebruiker",
 "actionListOrgDomains": "Lijst organisatie domeinen",
+"actionGetDomain": "Domein verkrijgen",
+"actionCreateOrgDomain": "Domein aanmaken",
+"actionUpdateOrgDomain": "Domein bijwerken",
+"actionDeleteOrgDomain": "Domein verwijderen",
+"actionGetDNSRecords": "DNS-records ophalen",
+"actionRestartOrgDomain": "Domein opnieuw starten",
 "actionCreateSite": "Site aanmaken",
 "actionDeleteSite": "Site verwijderen",
 "actionGetSite": "Site ophalen",
@@ -1669,10 +1676,10 @@
 "sshSudoModeCommandsDescription": "Gebruiker kan alleen de opgegeven commando's uitvoeren met de sudo.",
 "sshSudo": "sudo toestaan",
 "sshSudoCommands": "Sudo Commando's",
-"sshSudoCommandsDescription": "Lijst van commando's die de gebruiker mag uitvoeren met een sudo.",
+"sshSudoCommandsDescription": "Kommagescheiden lijst van commando's die de gebruiker met sudo mag uitvoeren.",
 "sshCreateHomeDir": "Maak Home Directory",
 "sshUnixGroups": "Unix groepen",
-"sshUnixGroupsDescription": "Unix groepen om de gebruiker toe te voegen aan de doel host.",
+"sshUnixGroupsDescription": "Door komma's gescheiden Unix-groepen om de gebruiker toe te voegen aan de doelhost.",
 "retryAttempts": "Herhaal Pogingen",
 "expectedResponseCodes": "Verwachte Reactiecodes",
 "expectedResponseCodesDescription": "HTTP-statuscode die gezonde status aangeeft. Indien leeg wordt 200-300 als gezond beschouwd.",


@@ -175,6 +175,7 @@
 "resourceHTTPDescription": "Proxy zapytań przez HTTPS przy użyciu w pełni kwalifikowanej nazwy domeny.",
 "resourceRaw": "Surowy zasób TCP/UDP",
 "resourceRawDescription": "Proxy zapytań przez surowe TCP/UDP przy użyciu numeru portu.",
+"resourceRawDescriptionCloud": "Proxy zapytań przez surowe TCP/UDP przy użyciu numeru portu. WYMAGA UŻYCIA ZDALNEGO WĘZŁA.",
 "resourceCreate": "Utwórz zasób",
 "resourceCreateDescription": "Wykonaj poniższe kroki, aby utworzyć nowy zasób",
 "resourceSeeAll": "Zobacz wszystkie zasoby",
@@ -1101,6 +1102,12 @@
 "actionGetUser": "Pobierz użytkownika",
 "actionGetOrgUser": "Pobierz użytkownika organizacji",
 "actionListOrgDomains": "Lista domen organizacji",
+"actionGetDomain": "Pobierz domenę",
+"actionCreateOrgDomain": "Utwórz domenę",
+"actionUpdateOrgDomain": "Aktualizuj domenę",
+"actionDeleteOrgDomain": "Usuń domenę",
+"actionGetDNSRecords": "Pobierz rekordy DNS",
+"actionRestartOrgDomain": "Zrestartuj domenę",
 "actionCreateSite": "Utwórz witrynę",
 "actionDeleteSite": "Usuń witrynę",
 "actionGetSite": "Pobierz witrynę",
@@ -1669,10 +1676,10 @@
 "sshSudoModeCommandsDescription": "Użytkownik może uruchamiać tylko określone polecenia z sudo.",
 "sshSudo": "Zezwól na sudo",
 "sshSudoCommands": "Komendy Sudo",
-"sshSudoCommandsDescription": "Lista poleceń, które użytkownik może uruchamiać z sudo.",
+"sshSudoCommandsDescription": "Lista poleceń oddzielonych przecinkami, które użytkownik może uruchamiać z sudo.",
 "sshCreateHomeDir": "Utwórz katalog domowy",
 "sshUnixGroups": "Grupy Unix",
-"sshUnixGroupsDescription": "Grupy Unix do dodania użytkownika do docelowego hosta.",
+"sshUnixGroupsDescription": "Oddzielone przecinkami grupy Unix, do których użytkownik zostanie dodany na docelowym hoście.",
 "retryAttempts": "Próby Ponowienia",
 "expectedResponseCodes": "Oczekiwane Kody Odpowiedzi",
 "expectedResponseCodesDescription": "Kod statusu HTTP, który wskazuje zdrowy status. Jeśli pozostanie pusty, uznaje się 200-300 za zdrowy.",

View File

@@ -175,6 +175,7 @@
"resourceHTTPDescription": "Proxies requests sobre HTTPS usando um nome de domínio totalmente qualificado.", "resourceHTTPDescription": "Proxies requests sobre HTTPS usando um nome de domínio totalmente qualificado.",
"resourceRaw": "Recurso TCP/UDP bruto", "resourceRaw": "Recurso TCP/UDP bruto",
"resourceRawDescription": "Proxies solicitações sobre TCP/UDP bruto usando um número de porta.", "resourceRawDescription": "Proxies solicitações sobre TCP/UDP bruto usando um número de porta.",
"resourceRawDescriptionCloud": "Proxy solicita sobre TCP/UDP bruto usando um número de porta. OBRIGATÓRIO O USO DE UMA NOTA REMOTA.",
"resourceCreate": "Criar Recurso", "resourceCreate": "Criar Recurso",
"resourceCreateDescription": "Siga os passos abaixo para criar um novo recurso", "resourceCreateDescription": "Siga os passos abaixo para criar um novo recurso",
"resourceSeeAll": "Ver todos os recursos", "resourceSeeAll": "Ver todos os recursos",
@@ -1101,6 +1102,12 @@
"actionGetUser": "Obter Usuário", "actionGetUser": "Obter Usuário",
"actionGetOrgUser": "Obter Utilizador da Organização", "actionGetOrgUser": "Obter Utilizador da Organização",
"actionListOrgDomains": "Listar Domínios da Organização", "actionListOrgDomains": "Listar Domínios da Organização",
"actionGetDomain": "Obter domínio",
"actionCreateOrgDomain": "Criar domínio",
"actionUpdateOrgDomain": "Atualizar domínio",
"actionDeleteOrgDomain": "Excluir domínio",
"actionGetDNSRecords": "Obter registros de DNS",
"actionRestartOrgDomain": "Reiniciar domínio",
"actionCreateSite": "Criar Site", "actionCreateSite": "Criar Site",
"actionDeleteSite": "Eliminar Site", "actionDeleteSite": "Eliminar Site",
"actionGetSite": "Obter Site", "actionGetSite": "Obter Site",
@@ -1669,10 +1676,10 @@
"sshSudoModeCommandsDescription": "Usuário só pode executar os comandos especificados com sudo.", "sshSudoModeCommandsDescription": "Usuário só pode executar os comandos especificados com sudo.",
"sshSudo": "Permitir sudo", "sshSudo": "Permitir sudo",
"sshSudoCommands": "Comandos Sudo", "sshSudoCommands": "Comandos Sudo",
"sshSudoCommandsDescription": "Lista de comandos com permissão de executar com o sudo.", "sshSudoCommandsDescription": "Lista separada por vírgulas de comandos que o usuário pode executar com sudo.",
"sshCreateHomeDir": "Criar Diretório Inicial", "sshCreateHomeDir": "Criar Diretório Inicial",
"sshUnixGroups": "Grupos Unix", "sshUnixGroups": "Grupos Unix",
"sshUnixGroupsDescription": "Grupos Unix para adicionar o usuário no host de destino.", "sshUnixGroupsDescription": "Grupos Unix separados por vírgulas para adicionar o usuário no host alvo.",
"retryAttempts": "Tentativas de Repetição", "retryAttempts": "Tentativas de Repetição",
"expectedResponseCodes": "Códigos de Resposta Esperados", "expectedResponseCodes": "Códigos de Resposta Esperados",
"expectedResponseCodesDescription": "Código de status HTTP que indica estado saudável. Se deixado em branco, 200-300 é considerado saudável.", "expectedResponseCodesDescription": "Código de status HTTP que indica estado saudável. Se deixado em branco, 200-300 é considerado saudável.",

View File

@@ -175,6 +175,7 @@
"resourceHTTPDescription": "Проксировать запросы через HTTPS с использованием полного доменного имени.", "resourceHTTPDescription": "Проксировать запросы через HTTPS с использованием полного доменного имени.",
"resourceRaw": "Сырой TCP/UDP-ресурс", "resourceRaw": "Сырой TCP/UDP-ресурс",
"resourceRawDescription": "Проксировать запросы по сырому TCP/UDP с использованием номера порта.", "resourceRawDescription": "Проксировать запросы по сырому TCP/UDP с использованием номера порта.",
"resourceRawDescriptionCloud": "Прокси-запросы через необработанный TCP/UDP с использованием номера порта. ТРЕБУЕТЕСЬ ИСПОЛЬЗОВАТЬ НЕОБХОДИМЫ.",
"resourceCreate": "Создание ресурса", "resourceCreate": "Создание ресурса",
"resourceCreateDescription": "Следуйте инструкциям ниже для создания нового ресурса", "resourceCreateDescription": "Следуйте инструкциям ниже для создания нового ресурса",
"resourceSeeAll": "Посмотреть все ресурсы", "resourceSeeAll": "Посмотреть все ресурсы",
@@ -1101,6 +1102,12 @@
"actionGetUser": "Получить пользователя", "actionGetUser": "Получить пользователя",
"actionGetOrgUser": "Получить пользователя организации", "actionGetOrgUser": "Получить пользователя организации",
"actionListOrgDomains": "Список доменов организации", "actionListOrgDomains": "Список доменов организации",
"actionGetDomain": "Получить домен",
"actionCreateOrgDomain": "Создать домен",
"actionUpdateOrgDomain": "Обновить домен",
"actionDeleteOrgDomain": "Удалить домен",
"actionGetDNSRecords": "Получить записи DNS",
"actionRestartOrgDomain": "Перезапустить домен",
"actionCreateSite": "Создать сайт", "actionCreateSite": "Создать сайт",
"actionDeleteSite": "Удалить сайт", "actionDeleteSite": "Удалить сайт",
"actionGetSite": "Получить сайт", "actionGetSite": "Получить сайт",
@@ -1669,10 +1676,10 @@
"sshSudoModeCommandsDescription": "Пользователь может запускать только указанные команды с помощью sudo.", "sshSudoModeCommandsDescription": "Пользователь может запускать только указанные команды с помощью sudo.",
"sshSudo": "Разрешить sudo", "sshSudo": "Разрешить sudo",
"sshSudoCommands": "Sudo Команды", "sshSudoCommands": "Sudo Команды",
"sshSudoCommandsDescription": "Список команд, которые пользователю разрешено запускать с помощью sudo.", "sshSudoCommandsDescription": "Список команд, разделенных запятыми, которые пользователю разрешено запускать с помощью sudo.",
"sshCreateHomeDir": "Создать домашний каталог", "sshCreateHomeDir": "Создать домашний каталог",
"sshUnixGroups": "Unix группы", "sshUnixGroups": "Unix группы",
"sshUnixGroupsDescription": "Unix группы для добавления пользователя на целевой хост.", "sshUnixGroupsDescription": "Группы Unix через запятую, чтобы добавить пользователя на целевой хост.",
"retryAttempts": "Количество попыток повторного запроса", "retryAttempts": "Количество попыток повторного запроса",
"expectedResponseCodes": "Ожидаемые коды ответов", "expectedResponseCodes": "Ожидаемые коды ответов",
"expectedResponseCodesDescription": "HTTP-код состояния, указывающий на здоровое состояние. Если оставить пустым, 200-300 считается здоровым.", "expectedResponseCodesDescription": "HTTP-код состояния, указывающий на здоровое состояние. Если оставить пустым, 200-300 считается здоровым.",

View File

@@ -175,6 +175,7 @@
"resourceHTTPDescription": "Tam nitelikli bir etki alanı adı kullanarak HTTPS üzerinden proxy isteklerini yönlendirin.", "resourceHTTPDescription": "Tam nitelikli bir etki alanı adı kullanarak HTTPS üzerinden proxy isteklerini yönlendirin.",
"resourceRaw": "Ham TCP/UDP Kaynağı", "resourceRaw": "Ham TCP/UDP Kaynağı",
"resourceRawDescription": "Port numarası kullanarak ham TCP/UDP üzerinden proxy isteklerini yönlendirin.", "resourceRawDescription": "Port numarası kullanarak ham TCP/UDP üzerinden proxy isteklerini yönlendirin.",
"resourceRawDescriptionCloud": "Bir port numarası kullanarak ham TCP/UDP üzerinden istekleri proxy ile yönlendirin. UZAKTAN BİR DÜĞÜM KULLANIMINI GEREKTİRİR.",
"resourceCreate": "Kaynak Oluştur", "resourceCreate": "Kaynak Oluştur",
"resourceCreateDescription": "Yeni bir kaynak oluşturmak için aşağıdaki adımları izleyin", "resourceCreateDescription": "Yeni bir kaynak oluşturmak için aşağıdaki adımları izleyin",
"resourceSeeAll": "Tüm Kaynakları Gör", "resourceSeeAll": "Tüm Kaynakları Gör",
@@ -1101,6 +1102,12 @@
"actionGetUser": "Kullanıcıyı Getir", "actionGetUser": "Kullanıcıyı Getir",
"actionGetOrgUser": "Kuruluş Kullanıcısını Al", "actionGetOrgUser": "Kuruluş Kullanıcısını Al",
"actionListOrgDomains": "Kuruluş Alan Adlarını Listele", "actionListOrgDomains": "Kuruluş Alan Adlarını Listele",
"actionGetDomain": "Alan Adını Al",
"actionCreateOrgDomain": "Alan Adı Oluştur",
"actionUpdateOrgDomain": "Alan Adını Güncelle",
"actionDeleteOrgDomain": "Alan Adını Sil",
"actionGetDNSRecords": "DNS Kayıtlarını Al",
"actionRestartOrgDomain": "Alanı Yeniden Başlat",
"actionCreateSite": "Site Oluştur", "actionCreateSite": "Site Oluştur",
"actionDeleteSite": "Siteyi Sil", "actionDeleteSite": "Siteyi Sil",
"actionGetSite": "Siteyi Al", "actionGetSite": "Siteyi Al",
@@ -1669,10 +1676,10 @@
"sshSudoModeCommandsDescription": "Kullanıcı sadece belirtilen komutları sudo ile çalıştırabilir.", "sshSudoModeCommandsDescription": "Kullanıcı sadece belirtilen komutları sudo ile çalıştırabilir.",
"sshSudo": "Sudo'ya izin ver", "sshSudo": "Sudo'ya izin ver",
"sshSudoCommands": "Sudo Komutları", "sshSudoCommands": "Sudo Komutları",
"sshSudoCommandsDescription": "Kullanıcının sudo ile çalıştırmasına izin verilen komutların listesi.", "sshSudoCommandsDescription": "Kullanıcının sudo ile çalıştırmasına izin verilen komutların virgülle ayrılmış listesi.",
"sshCreateHomeDir": "Ev Dizini Oluştur", "sshCreateHomeDir": "Ev Dizini Oluştur",
"sshUnixGroups": "Unix Grupları", "sshUnixGroups": "Unix Grupları",
"sshUnixGroupsDescription": "Hedef ana bilgisayarda kullanıcıya eklemek için Unix grupları.", "sshUnixGroupsDescription": "Hedef konakta kullanıcıya eklenecek Unix gruplarının virgülle ayrılmış listesi.",
"retryAttempts": "Tekrar Deneme Girişimleri", "retryAttempts": "Tekrar Deneme Girişimleri",
"expectedResponseCodes": "Beklenen Yanıt Kodları", "expectedResponseCodes": "Beklenen Yanıt Kodları",
"expectedResponseCodesDescription": "Sağlıklı durumu gösteren HTTP durum kodu. Boş bırakılırsa, 200-300 arası sağlıklı kabul edilir.", "expectedResponseCodesDescription": "Sağlıklı durumu gösteren HTTP durum kodu. Boş bırakılırsa, 200-300 arası sağlıklı kabul edilir.",

View File

@@ -175,6 +175,7 @@
"resourceHTTPDescription": "通过使用完全限定的域名的HTTPS代理请求。", "resourceHTTPDescription": "通过使用完全限定的域名的HTTPS代理请求。",
"resourceRaw": "TCP/UDP 资源", "resourceRaw": "TCP/UDP 资源",
"resourceRawDescription": "通过使用端口号的原始TCP/UDP代理请求。", "resourceRawDescription": "通过使用端口号的原始TCP/UDP代理请求。",
"resourceRawDescriptionCloud": "正在使用端口号的 TCP/UDP 代理请求。请使用一个REMOTE",
"resourceCreate": "创建资源", "resourceCreate": "创建资源",
"resourceCreateDescription": "按照下面的步骤创建新资源", "resourceCreateDescription": "按照下面的步骤创建新资源",
"resourceSeeAll": "查看所有资源", "resourceSeeAll": "查看所有资源",
@@ -1101,6 +1102,12 @@
"actionGetUser": "获取用户", "actionGetUser": "获取用户",
"actionGetOrgUser": "获取组织用户", "actionGetOrgUser": "获取组织用户",
"actionListOrgDomains": "列出组织域", "actionListOrgDomains": "列出组织域",
"actionGetDomain": "获取域",
"actionCreateOrgDomain": "创建域",
"actionUpdateOrgDomain": "更新域",
"actionDeleteOrgDomain": "删除域",
"actionGetDNSRecords": "获取 DNS 记录",
"actionRestartOrgDomain": "重新启动域",
"actionCreateSite": "创建站点", "actionCreateSite": "创建站点",
"actionDeleteSite": "删除站点", "actionDeleteSite": "删除站点",
"actionGetSite": "获取站点", "actionGetSite": "获取站点",
@@ -1669,10 +1676,10 @@
"sshSudoModeCommandsDescription": "用户只能用 sudo 运行指定的命令。", "sshSudoModeCommandsDescription": "用户只能用 sudo 运行指定的命令。",
"sshSudo": "允许Sudo", "sshSudo": "允许Sudo",
"sshSudoCommands": "Sudo 命令", "sshSudoCommands": "Sudo 命令",
"sshSudoCommandsDescription": "允许用户使用 sudo 运行的命令列表。", "sshSudoCommandsDescription": "逗号分隔的用户允许使用 sudo 运行的命令列表。",
"sshCreateHomeDir": "创建主目录", "sshCreateHomeDir": "创建主目录",
"sshUnixGroups": "Unix 组", "sshUnixGroups": "Unix 组",
"sshUnixGroupsDescription": "将用户添加到目标主机的Unix组。", "sshUnixGroupsDescription": "用逗号分隔了Unix组将用户添加到目标主机。",
"retryAttempts": "重试次数", "retryAttempts": "重试次数",
"expectedResponseCodes": "期望响应代码", "expectedResponseCodes": "期望响应代码",
"expectedResponseCodesDescription": "HTTP 状态码表示健康状态。如留空200-300 被视为健康。", "expectedResponseCodesDescription": "HTTP 状态码表示健康状态。如留空200-300 被视为健康。",

package-lock.json (generated, 3938 lines)

File diff suppressed because it is too large

View File

@@ -33,7 +33,7 @@
    },
    "dependencies": {
        "@asteasolutions/zod-to-openapi": "8.4.1",
        "@aws-sdk/client-s3": "3.1004.0",
        "@faker-js/faker": "10.3.0",
        "@headlessui/react": "2.2.9",
        "@hookform/resolvers": "5.2.2",
@@ -80,16 +80,16 @@
        "d3": "7.9.0",
        "drizzle-orm": "0.45.1",
        "express": "5.2.1",
        "express-rate-limit": "8.3.0",
        "glob": "13.0.6",
        "helmet": "8.1.0",
        "http-errors": "2.0.1",
        "input-otp": "1.4.2",
        "ioredis": "5.10.0",
        "jmespath": "0.16.0",
        "js-yaml": "4.1.1",
        "jsonwebtoken": "9.0.3",
        "lucide-react": "0.577.0",
        "maxmind": "5.0.5",
        "moment": "2.30.1",
        "next": "15.5.12",
@@ -99,21 +99,21 @@
        "node-cache": "5.1.2",
        "nodemailer": "8.0.1",
        "oslo": "1.2.1",
        "pg": "8.20.0",
        "posthog-node": "5.28.0",
        "qrcode.react": "4.2.0",
        "react": "19.2.4",
        "react-day-picker": "9.14.0",
        "react-dom": "19.2.4",
        "react-easy-sort": "1.8.0",
        "react-hook-form": "7.71.2",
        "react-icons": "5.6.0",
        "recharts": "2.15.4",
        "reodotdev": "1.1.0",
        "resend": "6.9.2",
        "semver": "7.7.4",
        "sshpk": "^1.18.0",
        "stripe": "20.4.1",
        "swagger-ui-express": "5.0.1",
        "tailwind-merge": "3.5.0",
        "topojson-client": "3.1.0",
@@ -131,10 +131,10 @@
        "zod-validation-error": "5.0.0"
    },
    "devDependencies": {
        "@dotenvx/dotenvx": "1.54.1",
        "@esbuild-plugins/tsconfig-paths": "0.1.2",
        "@react-email/preview-server": "5.2.8",
        "@tailwindcss/postcss": "4.2.1",
        "@tanstack/react-query-devtools": "5.91.3",
        "@types/better-sqlite3": "7.6.13",
        "@types/cookie-parser": "1.4.10",
@@ -146,10 +146,10 @@
        "@types/jmespath": "0.15.2",
        "@types/js-yaml": "4.0.9",
        "@types/jsonwebtoken": "9.0.10",
        "@types/node": "25.3.5",
        "@types/nodemailer": "7.0.11",
        "@types/nprogress": "0.2.3",
        "@types/pg": "8.18.0",
        "@types/react": "19.2.14",
        "@types/react-dom": "19.2.3",
        "@types/semver": "7.7.1",
@@ -167,10 +167,14 @@
        "postcss": "8.5.6",
        "prettier": "3.8.1",
        "react-email": "5.2.8",
        "tailwindcss": "4.2.1",
        "tsc-alias": "1.8.16",
        "tsx": "4.21.0",
        "typescript": "5.9.3",
        "typescript-eslint": "8.56.1"
    },
    "overrides": {
        "esbuild": "0.27.3",
        "dompurify": "3.3.2"
    }
}

View File

@@ -1,6 +1,10 @@
import { flushBandwidthToDb } from "@server/routers/newt/handleReceiveBandwidthMessage";
import { flushSiteBandwidthToDb } from "@server/routers/gerbil/receiveBandwidth";
import { cleanup as wsCleanup } from "#dynamic/routers/ws";

async function cleanup() {
    await flushBandwidthToDb();
    await flushSiteBandwidthToDb();
    await wsCleanup();
    process.exit(0);
@@ -10,4 +14,4 @@ export async function initCleanup() {
    // Handle process termination
    process.on("SIGTERM", () => cleanup());
    process.on("SIGINT", () => cleanup());
}
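The hunk above flushes both bandwidth caches before closing the websocket layer and exiting. A dependency-free sketch of that shutdown ordering (function names mirror the diff but the bodies here are illustrative stand-ins, and the exit callback is injected so the sequence can be exercised without killing the process):

```typescript
// Sketch of the shutdown sequence: flush in-memory bandwidth counters to
// the database *before* tearing down websocket state, so tracked bytes
// are not lost on SIGTERM/SIGINT. Bodies are illustrative stand-ins.
type Step = () => Promise<void>;

export const order: string[] = [];

const flushBandwidthToDb: Step = async () => { order.push("newt-bandwidth"); };
const flushSiteBandwidthToDb: Step = async () => { order.push("site-bandwidth"); };
const wsCleanup: Step = async () => { order.push("ws"); };

export async function runCleanup(exit: (code: number) => void): Promise<void> {
    await flushBandwidthToDb();     // newt-side counters first
    await flushSiteBandwidthToDb(); // then gerbil/site counters
    await wsCleanup();              // finally close websocket state
    exit(0);                        // injected instead of process.exit(0)
}
```

In the real file the same function is wired to both `SIGTERM` and `SIGINT`, so either signal produces an orderly flush-then-exit.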

View File

@@ -328,6 +328,14 @@ export const approvals = pgTable("approvals", {
        .notNull()
});

export const bannedEmails = pgTable("bannedEmails", {
    email: varchar("email", { length: 255 }).primaryKey(),
});

export const bannedIps = pgTable("bannedIps", {
    ip: varchar("ip", { length: 255 }).primaryKey(),
});

export type Approval = InferSelectModel<typeof approvals>;
export type Limit = InferSelectModel<typeof limits>;
export type Account = InferSelectModel<typeof account>;

View File

@@ -22,7 +22,8 @@ export const domains = pgTable("domains", {
    tries: integer("tries").notNull().default(0),
    certResolver: varchar("certResolver"),
    customCertResolver: varchar("customCertResolver"),
    preferWildcardCert: boolean("preferWildcardCert"),
    errorMessage: text("errorMessage")
});

export const dnsRecords = pgTable("dnsRecords", {
@@ -88,6 +89,7 @@ export const sites = pgTable("sites", {
    lastBandwidthUpdate: varchar("lastBandwidthUpdate"),
    type: varchar("type").notNull(), // "newt" or "wireguard"
    online: boolean("online").notNull().default(false),
    lastPing: integer("lastPing"),
    address: varchar("address"),
    endpoint: varchar("endpoint"),
    publicKey: varchar("publicKey"),
@@ -283,6 +285,7 @@ export const users = pgTable("user", {
    dateCreated: varchar("dateCreated").notNull(),
    termsAcceptedTimestamp: varchar("termsAcceptedTimestamp"),
    termsVersion: varchar("termsVersion"),
    marketingEmailConsent: boolean("marketingEmailConsent").default(false),
    serverAdmin: boolean("serverAdmin").notNull().default(false),
    lastPasswordChange: bigint("lastPasswordChange", { mode: "number" })
});
@@ -719,6 +722,7 @@ export const clientSitesAssociationsCache = pgTable(
        .notNull(),
    siteId: integer("siteId").notNull(),
    isRelayed: boolean("isRelayed").notNull().default(false),
    isJitMode: boolean("isJitMode").notNull().default(false),
    endpoint: varchar("endpoint"),
    publicKey: varchar("publicKey") // this will act as the session's public key for hole punching so we can track when it changes
    }

View File

@@ -318,6 +318,15 @@ export const approvals = sqliteTable("approvals", {
        .notNull()
});

export const bannedEmails = sqliteTable("bannedEmails", {
    email: text("email").primaryKey()
});

export const bannedIps = sqliteTable("bannedIps", {
    ip: text("ip").primaryKey()
});

export type Approval = InferSelectModel<typeof approvals>;
export type Limit = InferSelectModel<typeof limits>;
export type Account = InferSelectModel<typeof account>;

View File

@@ -13,7 +13,8 @@ export const domains = sqliteTable("domains", {
    failed: integer("failed", { mode: "boolean" }).notNull().default(false),
    tries: integer("tries").notNull().default(0),
    certResolver: text("certResolver"),
    preferWildcardCert: integer("preferWildcardCert", { mode: "boolean" }),
    errorMessage: text("errorMessage")
});

export const dnsRecords = sqliteTable("dnsRecords", {
@@ -89,6 +90,7 @@ export const sites = sqliteTable("sites", {
    lastBandwidthUpdate: text("lastBandwidthUpdate"),
    type: text("type").notNull(), // "newt" or "wireguard"
    online: integer("online", { mode: "boolean" }).notNull().default(false),
    lastPing: integer("lastPing"),

    // exit node stuff that is how to connect to the site when it has a wg server
    address: text("address"), // this is the address of the wireguard interface in newt
@@ -314,6 +316,9 @@ export const users = sqliteTable("user", {
    dateCreated: text("dateCreated").notNull(),
    termsAcceptedTimestamp: text("termsAcceptedTimestamp"),
    termsVersion: text("termsVersion"),
    marketingEmailConsent: integer("marketingEmailConsent", {
        mode: "boolean"
    }).default(false),
    serverAdmin: integer("serverAdmin", { mode: "boolean" })
        .notNull()
        .default(false),
@@ -406,6 +411,9 @@ export const clientSitesAssociationsCache = sqliteTable(
    isRelayed: integer("isRelayed", { mode: "boolean" })
        .notNull()
        .default(false),
    isJitMode: integer("isJitMode", { mode: "boolean" })
        .notNull()
        .default(false),
    endpoint: text("endpoint"),
    publicKey: text("publicKey") // this will act as the session's public key for hole punching so we can track when it changes
    }

View File

@@ -17,6 +17,7 @@ import fs from "fs";
import path from "path";
import { APP_PATH } from "./lib/consts";
import yaml from "js-yaml";
import { z } from "zod";

const dev = process.env.ENVIRONMENT !== "prod";
const externalPort = config.getRawConfig().server.integration_port;
@@ -38,12 +39,24 @@ export function createIntegrationApiServer() {
    apiServer.use(cookieParser());
    apiServer.use(express.json());

    const openApiDocumentation = getOpenApiDocumentation();

    apiServer.use(
        "/v1/docs",
        swaggerUi.serve,
        swaggerUi.setup(openApiDocumentation)
    );

    // Unauthenticated OpenAPI spec endpoints
    apiServer.get("/v1/openapi.json", (_req, res) => {
        res.json(openApiDocumentation);
    });
    apiServer.get("/v1/openapi.yaml", (_req, res) => {
        const yamlOutput = yaml.dump(openApiDocumentation);
        res.type("application/yaml").send(yamlOutput);
    });

    // API routes
    const prefix = `/v1`;
    apiServer.use(logIncomingMiddleware);
@@ -75,16 +88,6 @@ function getOpenApiDocumentation() {
        }
    );

    registry.registerPath({
        method: "get",
        path: "/",
@@ -94,6 +97,74 @@
        responses: {}
    });

    registry.registerPath({
        method: "get",
        path: "/openapi.json",
        description: "Get OpenAPI specification as JSON",
        tags: [],
        request: {},
        responses: {
            "200": {
                description: "OpenAPI specification as JSON",
                content: {
                    "application/json": {
                        schema: {
                            type: "object"
                        }
                    }
                }
            }
        }
    });

    registry.registerPath({
        method: "get",
        path: "/openapi.yaml",
        description: "Get OpenAPI specification as YAML",
        tags: [],
        request: {},
        responses: {
            "200": {
                description: "OpenAPI specification as YAML",
                content: {
                    "application/yaml": {
                        schema: {
                            type: "string"
                        }
                    }
                }
            }
        }
    });

    for (const def of registry.definitions) {
        if (def.type === "route") {
            def.route.security = [
                {
                    [bearerAuth.name]: []
                }
            ];

            // Ensure every route has a generic JSON response schema so Swagger UI can render responses
            const existingResponses = def.route.responses;
            const hasExistingResponses =
                existingResponses && Object.keys(existingResponses).length > 0;
            if (!hasExistingResponses) {
                def.route.responses = {
                    "*": {
                        description: "",
                        content: {
                            "application/json": {
                                schema: z.object({})
                            }
                        }
                    }
                };
            }
        }
    }

    const generator = new OpenApiGeneratorV3(registry.definitions);
    const generated = generator.generateDocument({

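The loop above backfills a generic JSON response on any route definition that has none, so Swagger UI always has something to render. A simplified, dependency-free sketch of that pass (the `Route` shape here is a stand-in for the zod-to-openapi route definitions, and a plain `{}` replaces the `z.object({})` schema):

```typescript
// Sketch of the fallback-responses pass: routes registered without an
// explicit responses object receive a wildcard JSON response. Types are
// simplified stand-ins for the zod-to-openapi definitions.
type Responses = Record<string, unknown>;
interface Route { path: string; responses?: Responses; }

export function ensureDefaultResponses(routes: Route[]): Route[] {
    for (const route of routes) {
        const hasExisting =
            route.responses && Object.keys(route.responses).length > 0;
        if (!hasExisting) {
            route.responses = {
                "*": {
                    description: "",
                    content: { "application/json": { schema: {} } }
                }
            };
        }
    }
    return routes;
}
```

Routes that already declare responses are left untouched, so hand-written response schemas still win over the fallback.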
View File

@@ -107,7 +107,7 @@ export async function applyBlueprint({
                [target],
                matchingHealthcheck ? [matchingHealthcheck] : [],
                result.proxyResource.protocol,
                site.newt.version
            );
        }
    }

View File

@@ -4,8 +4,12 @@ import { cleanUpOldLogs as cleanUpOldActionLogs } from "#dynamic/middlewares/log
import { cleanUpOldLogs as cleanUpOldRequestLogs } from "@server/routers/badger/logRequestAudit";
import { gt, or } from "drizzle-orm";
import { cleanUpOldFingerprintSnapshots } from "@server/routers/olm/fingerprintingUtils";
import { build } from "@server/build";

export function initLogCleanupInterval() {
    if (build == "saas") {
        // skip log cleanup for saas builds
        return null;
    }
    return setInterval(
        async () => {
            const orgsToClean = await db

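The guard above (part of the "Disable intervals in saas" commit) skips the periodic log cleanup entirely for saas builds and returns `null` instead of a timer handle. A minimal sketch of the same pattern, with the build flag passed in as a parameter and a placeholder cleanup body:

```typescript
// Sketch of the saas gate: interval-based cleanup is only scheduled for
// non-saas builds; saas builds get null so callers can tell nothing was
// scheduled. The `build` parameter stands in for the imported build flag.
export function initCleanupInterval(build: string): NodeJS.Timeout | null {
    if (build === "saas") {
        // saas handles log retention elsewhere, so no local interval
        return null;
    }
    return setInterval(() => {
        /* clean up old logs here */
    }, 24 * 60 * 60 * 1000); // illustrative 24h period
}
```

Returning the timer (or `null`) lets the caller `clearInterval` it during shutdown without special-casing the build type.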
View File

@@ -0,0 +1,20 @@
import semver from "semver";
export function canCompress(
clientVersion: string | null | undefined,
type: "newt" | "olm"
): boolean {
try {
if (!clientVersion) return false;
// check if it is a valid semver
if (!semver.valid(clientVersion)) return false;
if (type === "newt") {
return semver.gte(clientVersion, "1.10.3");
} else if (type === "olm") {
return semver.gte(clientVersion, "1.4.3");
}
return false;
} catch {
return false;
}
}
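The gate above only enables message compression when the client reports a valid semver at or above the first compression-capable release (1.10.3 for newt, 1.4.3 for olm). A dependency-free sketch of the same behavior, replacing the `semver` package with a tiny three-part parser (so non-semver strings like dev builds fail closed, just as in the real function):

```typescript
// Dependency-free sketch of the compression version gate. parse/gte are
// simplified stand-ins for semver.valid/semver.gte and only accept plain
// MAJOR.MINOR.PATCH strings; anything else fails closed (no compression).
function parse(v: string): [number, number, number] | null {
    const m = /^(\d+)\.(\d+)\.(\d+)$/.exec(v);
    return m ? [Number(m[1]), Number(m[2]), Number(m[3])] : null;
}

function gte(a: [number, number, number], b: [number, number, number]): boolean {
    for (let i = 0; i < 3; i++) {
        if (a[i] !== b[i]) return a[i] > b[i];
    }
    return true; // equal versions count as "at least"
}

export function canCompress(
    clientVersion: string | null | undefined,
    type: "newt" | "olm"
): boolean {
    if (!clientVersion) return false;
    const v = parse(clientVersion);
    if (!v) return false; // unknown or non-semver version strings never compress
    const floor: [number, number, number] =
        type === "newt" ? [1, 10, 3] : [1, 4, 3];
    return gte(v, floor);
}
```

Failing closed on missing or malformed versions is the important property: an old client that predates version reporting is never sent compressed payloads it cannot decode.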

View File

@@ -85,9 +85,7 @@ export async function deleteOrgById(
                deletedNewtIds.push(deletedNewt.newtId);

                await trx
                    .delete(newtSessions)
                    .where(eq(newtSessions.newtId, deletedNewt.newtId));
            }
        }
    }
@@ -121,33 +119,38 @@ export async function deleteOrgById(
                eq(clientSitesAssociationsCache.clientId, client.clientId)
            );
        }

        const allOrgDomains = await trx
            .select()
            .from(orgDomains)
            .innerJoin(domains, eq(orgDomains.domainId, domains.domainId))
            .where(
                and(
                    eq(orgDomains.orgId, orgId),
                    eq(domains.configManaged, false)
                )
            );

        logger.info(`Found ${allOrgDomains.length} domains to delete`);

        const domainIdsToDelete: string[] = [];
        for (const orgDomain of allOrgDomains) {
            const domainId = orgDomain.domains.domainId;
            const [orgCount] = await trx
                .select({ count: count() })
                .from(orgDomains)
                .where(eq(orgDomains.domainId, domainId));

            logger.info(`Found ${orgCount.count} orgs using domain ${domainId}`);
            if (orgCount.count === 1) {
                domainIdsToDelete.push(domainId);
            }
        }

        logger.info(`Found ${domainIdsToDelete.length} domains to delete`);
        if (domainIdsToDelete.length > 0) {
            await trx
                .delete(domains)
                .where(inArray(domains.domainId, domainIdsToDelete));
        }

        await trx.delete(resources).where(eq(resources.orgId, orgId));

        await usageService.add(orgId, FeatureId.ORGINIZATIONS, -1, trx); // here we are decreasing the org count BEFORE deleting the org because we need to still be able to get the org to get the billing org inside of here
@@ -231,15 +234,13 @@ export function sendTerminationMessages(result: DeleteOrgByIdResult): void {
        );
    }

    for (const olmId of result.olmsToTerminate) {
        sendTerminateClient(0, OlmErrorCodes.TERMINATED_REKEYED, olmId).catch(
            (error) => {
                logger.error(
                    "Failed to send termination message to olm:",
                    error
                );
            }
        );
    }
}
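The revised loop above (using drizzle's native `count()`) deletes a non-config-managed domain along with the org only when exactly one org still references it. A pure sketch of that decision, with a per-domain count map standing in for the `count()` query:

```typescript
// Sketch of the domain-cleanup rule: a domain is scheduled for deletion
// only when exactly one org references it (the org being deleted). The
// Map stands in for the per-domain drizzle count() query.
export function domainsToDelete(
    orgDomainIds: string[],
    orgCountByDomain: Map<string, number>
): string[] {
    const out: string[] = [];
    for (const domainId of orgDomainIds) {
        // a count of 1 means only the org being deleted still uses it
        if ((orgCountByDomain.get(domainId) ?? 0) === 1) {
            out.push(domainId);
        }
    }
    return out;
}
```

Destructuring `const [orgCount]` off `select({ count: count() })` in the real code avoids the earlier `orgCount[0].count` indexing and gives the count a proper numeric type.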

View File

@@ -477,6 +477,7 @@ async function handleMessagesForSiteClients(
    }

    if (isAdd) {
        // TODO: if we are in jit mode here should we really be sending this?
        await initPeerAddHandshake(
            // this will kick off the add peer process for the client
            client.clientId,
@@ -571,7 +572,7 @@ export async function updateClientSiteDestinations(
            destinations: [
                {
                    destinationIP: site.sites.subnet.split("/")[0],
                    destinationPort: site.sites.listenPort || 1 // this satisfies gerbil for now but should be reevaluated
                }
            ]
        };
@@ -579,7 +580,7 @@
        // add to the existing destinations
        destinations.destinations.push({
            destinationIP: site.sites.subnet.split("/")[0],
            destinationPort: site.sites.listenPort || 1 // this satisfies gerbil for now but should be reevaluated
        });
    }
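The `listenPort || 1` change above exists because gerbil rejects a destination port of 0 (see the "Handle gerbil rejecting 0" commit), so an unset or zero port is coerced to 1 as a stopgap. A one-function sketch of the fallback:

```typescript
// Sketch of the destination-port fallback: gerbil rejects port 0, so an
// unset/zero listenPort is coerced to 1 as a placeholder until the
// destination handling is reevaluated (per the inline diff comment).
export function destinationPort(listenPort: number | null | undefined): number {
    // `0 || 1`, `null || 1`, and `undefined || 1` all yield 1,
    // matching the `site.sites.listenPort || 1` expression in the diff
    return listenPort || 1;
}
```

Note that relying on `||` means a genuinely configured port of 0 can never be expressed, which is exactly why the diff flags it for reevaluation.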
@@ -669,7 +670,11 @@ async function handleSubnetProxyTargetUpdates(
        `Adding ${targetsToAdd.length} subnet proxy targets for siteResource ${siteResource.siteResourceId}`
    );
    proxyJobs.push(
        addSubnetProxyTargets(
            newt.newtId,
            targetsToAdd,
            newt.version
        )
    );
}
@@ -705,7 +710,11 @@
        `Removing ${targetsToRemove.length} subnet proxy targets for siteResource ${siteResource.siteResourceId}`
    );
    proxyJobs.push(
        removeSubnetProxyTargets(
            newt.newtId,
            targetsToRemove,
            newt.version
        )
    );
}
@@ -1080,6 +1089,7 @@ async function handleMessagesForClientSites(
        continue;
    }

    // TODO: if we are in jit mode here should we really be sending this?
    await initPeerAddHandshake(
        // this will kick off the add peer process for the client
        client.clientId,
@@ -1146,7 +1156,7 @@ async function handleMessagesForClientResources(
    // Add subnet proxy targets for each site
    for (const [siteId, resources] of addedBySite.entries()) {
        const [newt] = await trx
            .select({ newtId: newts.newtId, version: newts.version })
            .from(newts)
            .where(eq(newts.siteId, siteId))
            .limit(1);
@@ -1168,7 +1178,13 @@
        ]);

        if (targets.length > 0) {
            proxyJobs.push(
                addSubnetProxyTargets(
                    newt.newtId,
                    targets,
                    newt.version
                )
            );
        }

        try {
@@ -1217,7 +1233,7 @@
    // Remove subnet proxy targets for each site
    for (const [siteId, resources] of removedBySite.entries()) {
const [newt] = await trx const [newt] = await trx
.select({ newtId: newts.newtId }) .select({ newtId: newts.newtId, version: newts.version })
.from(newts) .from(newts)
.where(eq(newts.siteId, siteId)) .where(eq(newts.siteId, siteId))
.limit(1); .limit(1);
@@ -1240,7 +1256,11 @@ async function handleMessagesForClientResources(
if (targets.length > 0) { if (targets.length > 0) {
proxyJobs.push( proxyJobs.push(
removeSubnetProxyTargets(newt.newtId, targets) removeSubnetProxyTargets(
newt.newtId,
targets,
newt.version
)
); );
} }


@@ -1,16 +0,0 @@
export enum AudienceIds {
SignUps = "",
Subscribed = "",
Churned = "",
Newsletter = ""
}
let resend;
export default resend;
export async function moveEmailToAudience(
email: string,
audienceId: AudienceIds
) {
return;
}


@@ -218,10 +218,11 @@ export class TraefikConfigManager {
             return true;
         }
-        // Fetch if it's been more than 24 hours (for renewals)
         const dayInMs = 24 * 60 * 60 * 1000;
         const timeSinceLastFetch =
             Date.now() - this.lastCertificateFetch.getTime();
+        // Fetch if it's been more than 24 hours (daily routine check)
         if (timeSinceLastFetch > dayInMs) {
             logger.info("Fetching certificates due to 24-hour renewal check");
             return true;
@@ -265,7 +266,7 @@ export class TraefikConfigManager {
             return true;
         }
-        // Check if any local certificates are missing or appear to be outdated
+        // Check if any local certificates are missing (needs immediate fetch)
         for (const domain of domainsNeedingCerts) {
             const localState = this.lastLocalCertificateState.get(domain);
             if (!localState || !localState.exists) {
@@ -274,17 +275,55 @@ export class TraefikConfigManager {
                 );
                 return true;
             }
+        }
-            // Check if certificate is expiring soon (within 30 days)
-            if (localState.expiresAt) {
-                const nowInSeconds = Math.floor(Date.now() / 1000);
-                const secondsUntilExpiry = localState.expiresAt - nowInSeconds;
-                const daysUntilExpiry = secondsUntilExpiry / (60 * 60 * 24);
-                if (daysUntilExpiry < 30) {
-                    logger.info(
-                        `Fetching certificates due to upcoming expiry for ${domain} (${Math.round(daysUntilExpiry)} days remaining)`
-                    );
-                    return true;
+        // For expiry checks, throttle to every 6 hours to avoid querying the
+        // API/DB on every monitor loop. The certificate-service renews certs
+        // 45 days before expiry, so checking every 6 hours is plenty frequent
+        // to pick up renewed certs promptly.
+        const renewalCheckIntervalMs = 6 * 60 * 60 * 1000; // 6 hours
+        if (timeSinceLastFetch > renewalCheckIntervalMs) {
+            // Check non-wildcard certs for expiry (within 45 days to match
+            // the server-side renewal window in certificate-service)
+            for (const domain of domainsNeedingCerts) {
+                const localState =
+                    this.lastLocalCertificateState.get(domain);
+                if (localState?.expiresAt) {
+                    const nowInSeconds = Math.floor(Date.now() / 1000);
+                    const secondsUntilExpiry =
+                        localState.expiresAt - nowInSeconds;
+                    const daysUntilExpiry =
+                        secondsUntilExpiry / (60 * 60 * 24);
+                    if (daysUntilExpiry < 45) {
+                        logger.info(
+                            `Fetching certificates due to upcoming expiry for ${domain} (${Math.round(daysUntilExpiry)} days remaining)`
+                        );
+                        return true;
+                    }
+                }
+            }
+            // Also check wildcard certificates for expiry. These are not
+            // included in domainsNeedingCerts since their subdomains are
+            // filtered out, so we must check them separately.
+            for (const [certDomain, state] of this
+                .lastLocalCertificateState) {
+                if (
+                    state.exists &&
+                    state.wildcard &&
+                    state.expiresAt
+                ) {
+                    const nowInSeconds = Math.floor(Date.now() / 1000);
+                    const secondsUntilExpiry =
+                        state.expiresAt - nowInSeconds;
+                    const daysUntilExpiry =
+                        secondsUntilExpiry / (60 * 60 * 24);
+                    if (daysUntilExpiry < 45) {
+                        logger.info(
+                            `Fetching certificates due to upcoming expiry for wildcard cert ${certDomain} (${Math.round(daysUntilExpiry)} days remaining)`
+                        );
+                        return true;
+                    }
                 }
             }
         }
@@ -361,6 +400,32 @@ export class TraefikConfigManager {
                 }
             }
+            // Also include wildcard cert base domains that are
+            // expiring or expired so they get re-fetched even though
+            // their subdomains were filtered out above.
+            for (const [certDomain, state] of this
+                .lastLocalCertificateState) {
+                if (
+                    state.exists &&
+                    state.wildcard &&
+                    state.expiresAt
+                ) {
+                    const nowInSeconds = Math.floor(
+                        Date.now() / 1000
+                    );
+                    const secondsUntilExpiry =
+                        state.expiresAt - nowInSeconds;
+                    const daysUntilExpiry =
+                        secondsUntilExpiry / (60 * 60 * 24);
+                    if (daysUntilExpiry < 45) {
+                        domainsToFetch.add(certDomain);
+                        logger.info(
+                            `Including expiring wildcard cert domain ${certDomain} in fetch (${Math.round(daysUntilExpiry)} days remaining)`
+                        );
+                    }
+                }
+            }
             if (domainsToFetch.size > 0) {
                 // Get valid certificates for domains not covered by wildcards
                 validCertificates =
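Both of the hunks above reduce a Unix-seconds `expiresAt` to days remaining and compare against the 45-day renewal window. A standalone sketch of that arithmetic (assuming, as in the certificate state map above, that `expiresAt` is an epoch timestamp in seconds):

```typescript
// Days until a certificate expires, given expiresAt in Unix seconds.
function daysUntilExpiry(expiresAt: number, nowMs: number = Date.now()): number {
    const nowInSeconds = Math.floor(nowMs / 1000);
    const secondsUntilExpiry = expiresAt - nowInSeconds;
    return secondsUntilExpiry / (60 * 60 * 24);
}

const now = Date.now();
const in30Days = Math.floor(now / 1000) + 30 * 24 * 60 * 60;
const in60Days = Math.floor(now / 1000) + 60 * 24 * 60 * 60;
console.log(daysUntilExpiry(in30Days, now) < 45); // true — inside the renewal window
console.log(daysUntilExpiry(in60Days, now) < 45); // false — not yet due
```

A cert 30 days from expiry triggers a fetch; one 60 days out does not, which matches the server-side certificate-service renewing 45 days before expiry.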


@@ -14,7 +14,7 @@ import logger from "@server/logger";
 import config from "@server/lib/config";
 import { resources, sites, Target, targets } from "@server/db";
 import createPathRewriteMiddleware from "./middleware";
-import { sanitize, validatePathRewriteConfig } from "./utils";
+import { sanitize, encodePath, validatePathRewriteConfig } from "./utils";
 const redirectHttpsMiddlewareName = "redirect-to-https";
 const badgerMiddlewareName = "badger";
@@ -44,7 +44,7 @@ export async function getTraefikConfig(
     filterOutNamespaceDomains = false, // UNUSED BUT USED IN PRIVATE
     generateLoginPageRouters = false, // UNUSED BUT USED IN PRIVATE
     allowRawResources = true,
-    allowMaintenancePage = true, // UNUSED BUT USED IN PRIVATE
+    allowMaintenancePage = true // UNUSED BUT USED IN PRIVATE
 ): Promise<any> {
     // Get resources with their targets and sites in a single optimized query
     // Start from sites on this exit node, then join to targets and resources
@@ -127,7 +127,7 @@ export async function getTraefikConfig(
     resourcesWithTargetsAndSites.forEach((row) => {
         const resourceId = row.resourceId;
         const resourceName = sanitize(row.resourceName) || "";
-        const targetPath = sanitize(row.path) || ""; // Handle null/undefined paths
+        const targetPath = encodePath(row.path); // Use encodePath to avoid collisions (e.g. "/a/b" vs "/a-b")
         const pathMatchType = row.pathMatchType || "";
         const rewritePath = row.rewritePath || "";
         const rewritePathType = row.rewritePathType || "";
@@ -145,7 +145,7 @@ export async function getTraefikConfig(
         const mapKey = [resourceId, pathKey].filter(Boolean).join("-");
         const key = sanitize(mapKey);
-        if (!resourcesMap.has(key)) {
+        if (!resourcesMap.has(mapKey)) {
             const validation = validatePathRewriteConfig(
                 row.path,
                 row.pathMatchType,
@@ -160,9 +160,10 @@ export async function getTraefikConfig(
             return;
         }
-        resourcesMap.set(key, {
+        resourcesMap.set(mapKey, {
             resourceId: row.resourceId,
             name: resourceName,
+            key: key,
             fullDomain: row.fullDomain,
             ssl: row.ssl,
             http: row.http,
@@ -190,7 +191,7 @@ export async function getTraefikConfig(
         });
     }
-        resourcesMap.get(key).targets.push({
+        resourcesMap.get(mapKey).targets.push({
             resourceId: row.resourceId,
             targetId: row.targetId,
             ip: row.ip,
@@ -227,8 +228,9 @@ export async function getTraefikConfig(
     };
     // get the key and the resource
-    for (const [key, resource] of resourcesMap.entries()) {
+    for (const [, resource] of resourcesMap.entries()) {
         const targets = resource.targets as TargetWithSite[];
+        const key = resource.key;
         const routerName = `${key}-${resource.name}-router`;
         const serviceName = `${key}-${resource.name}-service`;
@@ -477,7 +479,10 @@ export async function getTraefikConfig(
         // TODO: HOW TO HANDLE ^^^^^^ BETTER
         const anySitesOnline = targets.some(
-            (target) => target.site.online
+            (target) =>
+                target.site.online ||
+                target.site.type === "local" ||
+                target.site.type === "wireguard"
         );
         return (
@@ -490,7 +495,7 @@ export async function getTraefikConfig(
                     if (target.health == "unhealthy") {
                         return false;
                     }
+                    // If any sites are online, exclude offline sites
                     if (anySitesOnline && !target.site.online) {
                         return false;
@@ -605,7 +610,10 @@ export async function getTraefikConfig(
             servers: (() => {
                 // Check if any sites are online
                 const anySitesOnline = targets.some(
-                    (target) => target.site.online
+                    (target) =>
+                        target.site.online ||
+                        target.site.type === "local" ||
+                        target.site.type === "wireguard"
                 );
                 return targets
@@ -613,7 +621,7 @@ export async function getTraefikConfig(
                     if (!target.enabled) {
                         return false;
                     }
+                    // If any sites are online, exclude offline sites
                     if (anySitesOnline && !target.site.online) {
                         return false;

@@ -0,0 +1,323 @@
import { assertEquals } from "../../../test/assert";
// ── Pure function copies (inlined to avoid pulling in server dependencies) ──
function sanitize(input: string | null | undefined): string | undefined {
if (!input) return undefined;
if (input.length > 50) {
input = input.substring(0, 50);
}
return input
.replace(/[^a-zA-Z0-9-]/g, "-")
.replace(/-+/g, "-")
.replace(/^-|-$/g, "");
}
function encodePath(path: string | null | undefined): string {
if (!path) return "";
return path.replace(/[^a-zA-Z0-9]/g, (ch) => {
return ch.charCodeAt(0).toString(16);
});
}
// ── Helpers ──────────────────────────────────────────────────────────
/**
* Exact replica of the OLD key computation from upstream main.
* Uses sanitize() for paths — this is what had the collision bug.
*/
function oldKeyComputation(
resourceId: number,
path: string | null,
pathMatchType: string | null,
rewritePath: string | null,
rewritePathType: string | null
): string {
const targetPath = sanitize(path) || "";
const pmt = pathMatchType || "";
const rp = rewritePath || "";
const rpt = rewritePathType || "";
const pathKey = [targetPath, pmt, rp, rpt].filter(Boolean).join("-");
const mapKey = [resourceId, pathKey].filter(Boolean).join("-");
return sanitize(mapKey) || "";
}
/**
* Replica of the NEW key computation from our fix.
* Uses encodePath() for paths — collision-free.
*/
function newKeyComputation(
resourceId: number,
path: string | null,
pathMatchType: string | null,
rewritePath: string | null,
rewritePathType: string | null
): string {
const targetPath = encodePath(path);
const pmt = pathMatchType || "";
const rp = rewritePath || "";
const rpt = rewritePathType || "";
const pathKey = [targetPath, pmt, rp, rpt].filter(Boolean).join("-");
const mapKey = [resourceId, pathKey].filter(Boolean).join("-");
return sanitize(mapKey) || "";
}
// ── Tests ────────────────────────────────────────────────────────────
function runTests() {
console.log("Running path encoding tests...\n");
let passed = 0;
// ── encodePath unit tests ────────────────────────────────────────
// Test 1: null/undefined/empty
{
assertEquals(encodePath(null), "", "null should return empty");
assertEquals(
encodePath(undefined),
"",
"undefined should return empty"
);
assertEquals(encodePath(""), "", "empty string should return empty");
console.log(" PASS: encodePath handles null/undefined/empty");
passed++;
}
// Test 2: root path
{
assertEquals(encodePath("/"), "2f", "/ should encode to 2f");
console.log(" PASS: encodePath encodes root path");
passed++;
}
// Test 3: alphanumeric passthrough
{
assertEquals(encodePath("/api"), "2fapi", "/api encodes slash only");
assertEquals(encodePath("/v1"), "2fv1", "/v1 encodes slash only");
assertEquals(encodePath("abc"), "abc", "plain alpha passes through");
console.log(" PASS: encodePath preserves alphanumeric chars");
passed++;
}
// Test 4: all special chars produce unique hex
{
const paths = ["/a/b", "/a-b", "/a.b", "/a_b", "/a b"];
const results = paths.map((p) => encodePath(p));
const unique = new Set(results);
assertEquals(
unique.size,
paths.length,
"all special-char paths must produce unique encodings"
);
console.log(
" PASS: encodePath produces unique output for different special chars"
);
passed++;
}
// Test 5: output is always alphanumeric (safe for Traefik names)
{
const paths = [
"/",
"/api",
"/a/b",
"/a-b",
"/a.b",
"/complex/path/here"
];
for (const p of paths) {
const e = encodePath(p);
assertEquals(
/^[a-zA-Z0-9]+$/.test(e),
true,
`encodePath("${p}") = "${e}" must be alphanumeric`
);
}
console.log(" PASS: encodePath output is always alphanumeric");
passed++;
}
// Test 6: deterministic
{
assertEquals(
encodePath("/api"),
encodePath("/api"),
"same input same output"
);
assertEquals(
encodePath("/a/b/c"),
encodePath("/a/b/c"),
"same input same output"
);
console.log(" PASS: encodePath is deterministic");
passed++;
}
// Test 7: many distinct paths never collide
{
const paths = [
"/",
"/api",
"/api/v1",
"/api/v2",
"/a/b",
"/a-b",
"/a.b",
"/a_b",
"/health",
"/health/check",
"/admin",
"/admin/users",
"/api/v1/users",
"/api/v1/posts",
"/app",
"/app/dashboard"
];
const encoded = new Set(paths.map((p) => encodePath(p)));
assertEquals(
encoded.size,
paths.length,
`expected ${paths.length} unique encodings, got ${encoded.size}`
);
console.log(" PASS: 16 realistic paths all produce unique encodings");
passed++;
}
// ── Collision fix: the actual bug we're fixing ───────────────────
// Test 8: /a/b and /a-b now have different keys (THE BUG FIX)
{
const keyAB = newKeyComputation(1, "/a/b", "prefix", null, null);
const keyDash = newKeyComputation(1, "/a-b", "prefix", null, null);
assertEquals(
keyAB !== keyDash,
true,
"/a/b and /a-b MUST have different keys"
);
console.log(" PASS: collision fix — /a/b vs /a-b have different keys");
passed++;
}
// Test 9: demonstrate the old bug — old code maps /a/b and /a-b to same key
{
const oldKeyAB = oldKeyComputation(1, "/a/b", "prefix", null, null);
const oldKeyDash = oldKeyComputation(1, "/a-b", "prefix", null, null);
assertEquals(
oldKeyAB,
oldKeyDash,
"old code MUST have this collision (confirms the bug exists)"
);
console.log(" PASS: confirmed old code bug — /a/b and /a-b collided");
passed++;
}
// Test 10: /api/v1 and /api-v1 — old code collision, new code fixes it
{
const oldKey1 = oldKeyComputation(1, "/api/v1", "prefix", null, null);
const oldKey2 = oldKeyComputation(1, "/api-v1", "prefix", null, null);
assertEquals(
oldKey1,
oldKey2,
"old code collision for /api/v1 vs /api-v1"
);
const newKey1 = newKeyComputation(1, "/api/v1", "prefix", null, null);
const newKey2 = newKeyComputation(1, "/api-v1", "prefix", null, null);
assertEquals(
newKey1 !== newKey2,
true,
"new code must separate /api/v1 and /api-v1"
);
console.log(" PASS: collision fix — /api/v1 vs /api-v1");
passed++;
}
// Test 11: /app.v2 and /app/v2 and /app-v2 — three-way collision fixed
{
const a = newKeyComputation(1, "/app.v2", "prefix", null, null);
const b = newKeyComputation(1, "/app/v2", "prefix", null, null);
const c = newKeyComputation(1, "/app-v2", "prefix", null, null);
const keys = new Set([a, b, c]);
assertEquals(
keys.size,
3,
"three paths must produce three unique keys"
);
console.log(
" PASS: collision fix — three-way /app.v2, /app/v2, /app-v2"
);
passed++;
}
// ── Edge cases ───────────────────────────────────────────────────
// Test 12: same path in different resources — always separate
{
const key1 = newKeyComputation(1, "/api", "prefix", null, null);
const key2 = newKeyComputation(2, "/api", "prefix", null, null);
assertEquals(
key1 !== key2,
true,
"different resources with same path must have different keys"
);
console.log(" PASS: edge case — same path, different resources");
passed++;
}
// Test 13: same resource, different pathMatchType — separate keys
{
const exact = newKeyComputation(1, "/api", "exact", null, null);
const prefix = newKeyComputation(1, "/api", "prefix", null, null);
assertEquals(
exact !== prefix,
true,
"exact vs prefix must have different keys"
);
console.log(" PASS: edge case — same path, different match types");
passed++;
}
// Test 14: same resource and path, different rewrite config — separate keys
{
const noRewrite = newKeyComputation(1, "/api", "prefix", null, null);
const withRewrite = newKeyComputation(
1,
"/api",
"prefix",
"/backend",
"prefix"
);
assertEquals(
noRewrite !== withRewrite,
true,
"with vs without rewrite must have different keys"
);
console.log(" PASS: edge case — same path, different rewrite config");
passed++;
}
// Test 15: paths with special URL characters
{
const paths = ["/api?foo", "/api#bar", "/api%20baz", "/api+qux"];
const keys = new Set(
paths.map((p) => newKeyComputation(1, p, "prefix", null, null))
);
assertEquals(
keys.size,
paths.length,
"special URL chars must produce unique keys"
);
console.log(" PASS: edge case — special URL characters in paths");
passed++;
}
console.log(`\nAll ${passed} tests passed!`);
}
try {
runTests();
} catch (error) {
console.error("Test failed:", error);
process.exit(1);
}


@@ -13,6 +13,26 @@ export function sanitize(input: string | null | undefined): string | undefined {
         .replace(/^-|-$/g, "");
 }
+/**
+ * Encode a URL path into a collision-free alphanumeric string suitable for use
+ * in Traefik map keys.
+ *
+ * Unlike sanitize(), this preserves uniqueness by encoding each non-alphanumeric
+ * character as its hex code. Different paths always produce different outputs.
+ *
+ * encodePath("/api") => "2fapi"
+ * encodePath("/a/b") => "2fa2fb"
+ * encodePath("/a-b") => "2fa2db" (different from /a/b)
+ * encodePath("/")    => "2f"
+ * encodePath(null)   => ""
+ */
+export function encodePath(path: string | null | undefined): string {
+    if (!path) return "";
+    return path.replace(/[^a-zA-Z0-9]/g, (ch) => {
+        return ch.charCodeAt(0).toString(16);
+    });
+}
 export function validatePathRewriteConfig(
     path: string | null,
     pathMatchType: string | null,


@@ -14,3 +14,4 @@ export * from "./verifyApiKeyApiKeyAccess";
 export * from "./verifyApiKeyClientAccess";
 export * from "./verifyApiKeySiteResourceAccess";
 export * from "./verifyApiKeyIdpAccess";
+export * from "./verifyApiKeyDomainAccess";


@@ -0,0 +1,90 @@
import { Request, Response, NextFunction } from "express";
import { db, domains, orgDomains, apiKeyOrg } from "@server/db";
import { and, eq } from "drizzle-orm";
import createHttpError from "http-errors";
import HttpCode from "@server/types/HttpCode";
export async function verifyApiKeyDomainAccess(
req: Request,
res: Response,
next: NextFunction
) {
try {
const apiKey = req.apiKey;
const domainId =
req.params.domainId || req.body.domainId || req.query.domainId;
const orgId = req.params.orgId;
if (!apiKey) {
return next(
createHttpError(HttpCode.UNAUTHORIZED, "Key not authenticated")
);
}
if (!domainId) {
return next(
createHttpError(HttpCode.BAD_REQUEST, "Invalid domain ID")
);
}
if (apiKey.isRoot) {
// Root keys can access any domain in any org
return next();
}
// Verify domain exists and belongs to the organization
const [domain] = await db
.select()
.from(domains)
.innerJoin(orgDomains, eq(orgDomains.domainId, domains.domainId))
.where(
and(
eq(orgDomains.domainId, domainId),
eq(orgDomains.orgId, orgId)
)
)
.limit(1);
if (!domain) {
return next(
createHttpError(
HttpCode.NOT_FOUND,
`Domain with ID ${domainId} not found in organization ${orgId}`
)
);
}
// Verify the API key has access to this organization
if (!req.apiKeyOrg) {
const apiKeyOrgRes = await db
.select()
.from(apiKeyOrg)
.where(
and(
eq(apiKeyOrg.apiKeyId, apiKey.apiKeyId),
eq(apiKeyOrg.orgId, orgId)
)
)
.limit(1);
req.apiKeyOrg = apiKeyOrgRes[0];
}
if (!req.apiKeyOrg) {
return next(
createHttpError(
HttpCode.FORBIDDEN,
"Key does not have access to this organization"
)
);
}
return next();
} catch (error) {
return next(
createHttpError(
HttpCode.INTERNAL_SERVER_ERROR,
"Error verifying domain access"
)
);
}
}


@@ -5,17 +5,20 @@ export const registry = new OpenAPIRegistry();
 export enum OpenAPITags {
     Site = "Site",
     Org = "Organization",
-    Resource = "Resource",
+    PublicResource = "Public Resource",
+    PrivateResource = "Private Resource",
     Role = "Role",
     User = "User",
-    Invitation = "Invitation",
-    Target = "Target",
+    Invitation = "User Invitation",
+    Target = "Resource Target",
     Rule = "Rule",
     AccessToken = "Access Token",
-    Idp = "Identity Provider",
+    GlobalIdp = "Identity Provider (Global)",
+    OrgIdp = "Identity Provider (Organization Only)",
     Client = "Client",
     ApiKey = "API Key",
     Domain = "Domain",
     Blueprint = "Blueprint",
-    Ssh = "SSH"
+    Ssh = "SSH",
+    Logs = "Logs"
 }


@@ -13,8 +13,12 @@
 import { rateLimitService } from "#private/lib/rateLimit";
 import { cleanup as wsCleanup } from "#private/routers/ws";
+import { flushBandwidthToDb } from "@server/routers/newt/handleReceiveBandwidthMessage";
+import { flushSiteBandwidthToDb } from "@server/routers/gerbil/receiveBandwidth";
 async function cleanup() {
+    await flushBandwidthToDb();
+    await flushSiteBandwidthToDb();
     await rateLimitService.cleanup();
     await wsCleanup();
@@ -25,4 +29,4 @@ export async function initCleanup() {
     // Handle process termination
     process.on("SIGTERM", () => cleanup());
     process.on("SIGINT", () => cleanup());
 }


@@ -38,10 +38,6 @@ export const privateConfigSchema = z.object({
         .string()
         .optional()
         .transform(getEnvOrYaml("SERVER_ENCRYPTION_KEY")),
-    resend_api_key: z
-        .string()
-        .optional()
-        .transform(getEnvOrYaml("RESEND_API_KEY")),
     reo_client_id: z
         .string()
         .optional()


@@ -1,127 +0,0 @@
/*
* This file is part of a proprietary work.
*
* Copyright (c) 2025 Fossorial, Inc.
* All rights reserved.
*
* This file is licensed under the Fossorial Commercial License.
* You may not use this file except in compliance with the License.
* Unauthorized use, copying, modification, or distribution is strictly prohibited.
*
* This file is not licensed under the AGPLv3.
*/
import { Resend } from "resend";
import privateConfig from "#private/lib/config";
import logger from "@server/logger";
export enum AudienceIds {
SignUps = "6c4e77b2-0851-4bd6-bac8-f51f91360f1a",
Subscribed = "870b43fd-387f-44de-8fc1-707335f30b20",
Churned = "f3ae92bd-2fdb-4d77-8746-2118afd62549",
Newsletter = "5500c431-191c-42f0-a5d4-8b6d445b4ea0"
}
const resend = new Resend(
privateConfig.getRawPrivateConfig().server.resend_api_key || "missing"
);
export default resend;
export async function moveEmailToAudience(
email: string,
audienceId: AudienceIds
) {
if (process.env.ENVIRONMENT !== "prod") {
logger.debug(
`Skipping moving email ${email} to audience ${audienceId} in non-prod environment`
);
return;
}
const { error, data } = await retryWithBackoff(async () => {
const { data, error } = await resend.contacts.create({
email,
unsubscribed: false,
audienceId
});
if (error) {
throw new Error(
`Error adding email ${email} to audience ${audienceId}: ${error}`
);
}
return { error, data };
});
if (error) {
logger.error(
`Error adding email ${email} to audience ${audienceId}: ${error}`
);
return;
}
if (data) {
logger.debug(
`Added email ${email} to audience ${audienceId} with contact ID ${data.id}`
);
}
const otherAudiences = Object.values(AudienceIds).filter(
(id) => id !== audienceId
);
for (const otherAudienceId of otherAudiences) {
const { error, data } = await retryWithBackoff(async () => {
const { data, error } = await resend.contacts.remove({
email,
audienceId: otherAudienceId
});
if (error) {
throw new Error(
`Error removing email ${email} from audience ${otherAudienceId}: ${error}`
);
}
return { error, data };
});
if (error) {
logger.error(
`Error removing email ${email} from audience ${otherAudienceId}: ${error}`
);
}
if (data) {
logger.info(
`Removed email ${email} from audience ${otherAudienceId}`
);
}
}
}
type RetryOptions = {
retries?: number;
initialDelayMs?: number;
factor?: number;
};
export async function retryWithBackoff<T>(
fn: () => Promise<T>,
options: RetryOptions = {}
): Promise<T> {
const { retries = 5, initialDelayMs = 500, factor = 2 } = options;
let attempt = 0;
let delay = initialDelayMs;
while (true) {
try {
return await fn();
} catch (err) {
attempt++;
if (attempt > retries) throw err;
await new Promise((resolve) => setTimeout(resolve, delay));
delay *= factor;
}
}
}
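For reference, the `retryWithBackoff` helper deleted above can still be exercised standalone. This sketch copies it verbatim and drives it with a function that fails twice before succeeding (delays shortened for the demo):

```typescript
// Copy of the retryWithBackoff helper from the deleted resend module.
type RetryOptions = {
    retries?: number;
    initialDelayMs?: number;
    factor?: number;
};

async function retryWithBackoff<T>(
    fn: () => Promise<T>,
    options: RetryOptions = {}
): Promise<T> {
    const { retries = 5, initialDelayMs = 500, factor = 2 } = options;
    let attempt = 0;
    let delay = initialDelayMs;
    while (true) {
        try {
            return await fn();
        } catch (err) {
            attempt++;
            if (attempt > retries) throw err; // out of retries: rethrow
            await new Promise((resolve) => setTimeout(resolve, delay));
            delay *= factor; // exponential backoff: 500ms, 1s, 2s, ...
        }
    }
}

// Demo: fails twice, then succeeds on the third call.
let calls = 0;
retryWithBackoff(
    async () => {
        calls++;
        if (calls < 3) throw new Error("transient");
        return "ok";
    },
    { retries: 5, initialDelayMs: 1 }
).then((result) => console.log(result, calls)); // prints "ok 3"
```

Note the function declaration is hoisted, which is why the original file could call `retryWithBackoff` above its definition.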


@@ -34,7 +34,11 @@ import {
import logger from "@server/logger"; import logger from "@server/logger";
import config from "@server/lib/config"; import config from "@server/lib/config";
import { orgs, resources, sites, Target, targets } from "@server/db"; import { orgs, resources, sites, Target, targets } from "@server/db";
import { sanitize, validatePathRewriteConfig } from "@server/lib/traefik/utils"; import {
sanitize,
encodePath,
validatePathRewriteConfig
} from "@server/lib/traefik/utils";
import privateConfig from "#private/lib/config"; import privateConfig from "#private/lib/config";
import createPathRewriteMiddleware from "@server/lib/traefik/middleware"; import createPathRewriteMiddleware from "@server/lib/traefik/middleware";
import { import {
@@ -170,7 +174,7 @@ export async function getTraefikConfig(
resourcesWithTargetsAndSites.forEach((row) => { resourcesWithTargetsAndSites.forEach((row) => {
const resourceId = row.resourceId; const resourceId = row.resourceId;
const resourceName = sanitize(row.resourceName) || ""; const resourceName = sanitize(row.resourceName) || "";
const targetPath = sanitize(row.path) || ""; // Handle null/undefined paths const targetPath = encodePath(row.path); // Use encodePath to avoid collisions (e.g. "/a/b" vs "/a-b")
                const pathMatchType = row.pathMatchType || "";
                const rewritePath = row.rewritePath || "";
                const rewritePathType = row.rewritePathType || "";
@@ -192,7 +196,7 @@ export async function getTraefikConfig(
                const mapKey = [resourceId, pathKey].filter(Boolean).join("-");
                const key = sanitize(mapKey);
-               if (!resourcesMap.has(key)) {
+               if (!resourcesMap.has(mapKey)) {
                    const validation = validatePathRewriteConfig(
                        row.path,
                        row.pathMatchType,
@@ -207,9 +211,10 @@ export async function getTraefikConfig(
                        return;
                    }
-                   resourcesMap.set(key, {
+                   resourcesMap.set(mapKey, {
                        resourceId: row.resourceId,
                        name: resourceName,
+                       key: key,
                        fullDomain: row.fullDomain,
                        ssl: row.ssl,
                        http: row.http,
@@ -243,7 +248,7 @@ export async function getTraefikConfig(
                }
                // Add target with its associated site data
-               resourcesMap.get(key).targets.push({
+               resourcesMap.get(mapKey).targets.push({
                    resourceId: row.resourceId,
                    targetId: row.targetId,
                    ip: row.ip,
@@ -296,8 +301,9 @@ export async function getTraefikConfig(
        };
        // get the key and the resource
-       for (const [key, resource] of resourcesMap.entries()) {
+       for (const [, resource] of resourcesMap.entries()) {
            const targets = resource.targets as TargetWithSite[];
+           const key = resource.key;
            const routerName = `${key}-${resource.name}-router`;
            const serviceName = `${key}-${resource.name}-service`;
@@ -665,7 +671,10 @@ export async function getTraefikConfig(
            // TODO: HOW TO HANDLE ^^^^^^ BETTER
            const anySitesOnline = targets.some(
-               (target) => target.site.online
+               (target) =>
+                   target.site.online ||
+                   target.site.type === "local" ||
+                   target.site.type === "wireguard"
            );
            return (
@@ -793,7 +802,10 @@ export async function getTraefikConfig(
            servers: (() => {
                // Check if any sites are online
                const anySitesOnline = targets.some(
-                   (target) => target.site.online
+                   (target) =>
+                       target.site.online ||
+                       target.site.type === "local" ||
+                       target.site.type === "wireguard"
                );
                return targets
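The hunks above stop sanitizing the map key itself: rows are grouped under the raw `mapKey`, while the sanitized `key` is stored on each entry for later router/service naming. A minimal sketch of that pattern (the `sanitize` rule and row shape here are illustrative, not the actual implementation):

```typescript
// Stand-in for the real sanitizer: collapse anything outside [a-zA-Z0-9-].
function sanitize(input: string): string {
    return input.replace(/[^a-zA-Z0-9-]/g, "-");
}

type Row = { resourceId: number; path?: string };

// Group rows under the unsanitized mapKey; keep the sanitized key on the
// entry so later passes can build names without re-deriving it.
function groupRows(rows: Row[]): Map<string, { key: string; targets: Row[] }> {
    const resourcesMap = new Map<string, { key: string; targets: Row[] }>();
    for (const row of rows) {
        const mapKey = [row.resourceId, row.path].filter(Boolean).join("-");
        const key = sanitize(mapKey);
        if (!resourcesMap.has(mapKey)) {
            resourcesMap.set(mapKey, { key, targets: [] });
        }
        resourcesMap.get(mapKey)!.targets.push(row);
    }
    return resourcesMap;
}
```

Keying the map on the raw `mapKey` avoids collisions where two distinct paths sanitize to the same string, while the sanitized form is still available wherever Traefik-safe identifiers are needed.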

View File

@@ -32,7 +32,7 @@ registry.registerPath({
     method: "get",
     path: "/org/{orgId}/logs/access/export",
     description: "Export the access audit log for an organization as CSV",
-    tags: [OpenAPITags.Org],
+    tags: [OpenAPITags.Logs],
     request: {
         query: queryAccessAuditLogsQuery,
         params: queryAccessAuditLogsParams

View File

@@ -32,7 +32,7 @@ registry.registerPath({
     method: "get",
     path: "/org/{orgId}/logs/action/export",
     description: "Export the action audit log for an organization as CSV",
-    tags: [OpenAPITags.Org],
+    tags: [OpenAPITags.Logs],
     request: {
         query: queryActionAuditLogsQuery,
         params: queryActionAuditLogsParams

View File

@@ -249,7 +249,7 @@ registry.registerPath({
     method: "get",
     path: "/org/{orgId}/logs/access",
     description: "Query the access audit log for an organization",
-    tags: [OpenAPITags.Org],
+    tags: [OpenAPITags.Logs],
     request: {
         query: queryAccessAuditLogsQuery,
         params: queryAccessAuditLogsParams

View File

@@ -160,7 +160,7 @@ registry.registerPath({
     method: "get",
     path: "/org/{orgId}/logs/action",
     description: "Query the action audit log for an organization",
-    tags: [OpenAPITags.Org],
+    tags: [OpenAPITags.Logs],
     request: {
         query: queryActionAuditLogsQuery,
         params: queryActionAuditLogsParams

View File

@@ -31,16 +31,16 @@ const getOrgSchema = z.strictObject({
     orgId: z.string()
 });
-registry.registerPath({
-    method: "get",
-    path: "/org/{orgId}/billing/usage",
-    description: "Get an organization's billing usage",
-    tags: [OpenAPITags.Org],
-    request: {
-        params: getOrgSchema
-    },
-    responses: {}
-});
+// registry.registerPath({
+//     method: "get",
+//     path: "/org/{orgId}/billing/usage",
+//     description: "Get an organization's billing usage",
+//     tags: [OpenAPITags.Org],
+//     request: {
+//         params: getOrgSchema
+//     },
+//     responses: {}
+// });
 export async function getOrgUsage(
     req: Request,

View File

@@ -24,7 +24,6 @@ import { eq, and } from "drizzle-orm";
 import logger from "@server/logger";
 import stripe from "#private/lib/stripe";
 import { handleSubscriptionLifesycle } from "../subscriptionLifecycle";
-import { AudienceIds, moveEmailToAudience } from "#private/lib/resend";
 import { getSubType } from "./getSubType";
 import privateConfig from "#private/lib/config";
 import { getLicensePriceSet, LicenseId } from "@server/lib/billing/licenses";
@@ -172,7 +171,7 @@ export async function handleSubscriptionCreated(
                 const email = orgUserRes.user.email;
                 if (email) {
-                    moveEmailToAudience(email, AudienceIds.Subscribed);
+                    // TODO: update user in Sendy
                 }
             }
         } else if (type === "license") {

View File

@@ -23,7 +23,6 @@ import {
 import { eq, and } from "drizzle-orm";
 import logger from "@server/logger";
 import { handleSubscriptionLifesycle } from "../subscriptionLifecycle";
-import { AudienceIds, moveEmailToAudience } from "#private/lib/resend";
 import { getSubType } from "./getSubType";
 import stripe from "#private/lib/stripe";
 import privateConfig from "#private/lib/config";
@@ -109,7 +108,7 @@ export async function handleSubscriptionDeleted(
                 const email = orgUserRes.user.email;
                 if (email) {
-                    moveEmailToAudience(email, AudienceIds.Churned);
+                    // TODO: update user in Sendy
                 }
             }
         } else if (type === "license") {

View File

@@ -52,7 +52,7 @@ registry.registerPath({
     method: "put",
     path: "/org/{orgId}/idp/oidc",
     description: "Create an OIDC IdP for a specific organization.",
-    tags: [OpenAPITags.Idp, OpenAPITags.Org],
+    tags: [OpenAPITags.OrgIdp],
     request: {
         params: paramsSchema,
         body: {

View File

@@ -35,7 +35,7 @@ registry.registerPath({
     method: "delete",
     path: "/org/{orgId}/idp/{idpId}",
     description: "Delete IDP for a specific organization.",
-    tags: [OpenAPITags.Idp, OpenAPITags.Org],
+    tags: [OpenAPITags.OrgIdp],
     request: {
         params: paramsSchema
     },

View File

@@ -50,9 +50,9 @@ async function query(idpId: number, orgId: string) {
 registry.registerPath({
     method: "get",
-    path: "/org/:orgId/idp/:idpId",
+    path: "/org/{orgId}/idp/{idpId}",
     description: "Get an IDP by its IDP ID for a specific organization.",
-    tags: [OpenAPITags.Idp, OpenAPITags.Org],
+    tags: [OpenAPITags.OrgIdp],
     request: {
         params: paramsSchema
     },

View File

@@ -67,7 +67,7 @@ registry.registerPath({
     method: "get",
     path: "/org/{orgId}/idp",
     description: "List all IDP for a specific organization.",
-    tags: [OpenAPITags.Idp, OpenAPITags.Org],
+    tags: [OpenAPITags.OrgIdp],
     request: {
         query: querySchema,
         params: paramsSchema

View File

@@ -59,7 +59,7 @@ registry.registerPath({
     method: "post",
     path: "/org/{orgId}/idp/{idpId}/oidc",
     description: "Update an OIDC IdP for a specific organization.",
-    tags: [OpenAPITags.Idp, OpenAPITags.Org],
+    tags: [OpenAPITags.OrgIdp],
     request: {
         params: paramsSchema,
         body: {

View File

@@ -52,7 +52,7 @@ registry.registerPath({
     method: "get",
     path: "/maintenance/info",
     description: "Get maintenance information for a resource by domain.",
-    tags: [OpenAPITags.Resource],
+    tags: [OpenAPITags.PublicResource],
     request: {
         query: z.object({
             fullDomain: z.string()

View File

@@ -29,7 +29,6 @@ import HttpCode from "@server/types/HttpCode";
 import createHttpError from "http-errors";
 import logger from "@server/logger";
 import { fromError } from "zod-validation-error";
-import { OpenAPITags, registry } from "@server/openApi";
 import { eq, or, and } from "drizzle-orm";
 import { canUserAccessSiteResource } from "@server/auth/canUserAccessSiteResource";
 import { signPublicKey, getOrgCAKeys } from "@server/lib/sshCA";
@@ -64,6 +63,7 @@ export type SignSshKeyResponse = {
     sshUsername: string;
     sshHost: string;
     resourceId: number;
+    siteId: number;
     keyId: string;
     validPrincipals: string[];
     validAfter: string;
@@ -453,6 +453,7 @@ export async function signSshKey(
         sshUsername: usernameToUse,
         sshHost: sshHost,
         resourceId: resource.siteResourceId,
+        siteId: resource.siteId,
         keyId: cert.keyId,
         validPrincipals: cert.validPrincipals,
         validAfter: cert.validAfter.toISOString(),

View File

@@ -17,10 +17,13 @@ import {
     startRemoteExitNodeOfflineChecker
 } from "#private/routers/remoteExitNode";
 import { MessageHandler } from "@server/routers/ws";
+import { build } from "@server/build";
 export const messageHandlers: Record<string, MessageHandler> = {
     "remoteExitNode/register": handleRemoteExitNodeRegisterMessage,
     "remoteExitNode/ping": handleRemoteExitNodePingMessage
 };
-startRemoteExitNodeOfflineChecker(); // this is to handle the offline check for remote exit nodes
+if (build != "saas") {
+    startRemoteExitNodeOfflineChecker(); // this is to handle the offline check for remote exit nodes
+}

View File

@@ -12,6 +12,7 @@
  */
 import { Router, Request, Response } from "express";
+import zlib from "zlib";
 import { Server as HttpServer } from "http";
 import { WebSocket, WebSocketServer } from "ws";
 import { Socket } from "net";
@@ -24,7 +25,8 @@ import {
     OlmSession,
     RemoteExitNode,
     RemoteExitNodeSession,
-    remoteExitNodes
+    remoteExitNodes,
+    sites
 } from "@server/db";
 import { eq } from "drizzle-orm";
 import { db } from "@server/db";
@@ -57,11 +59,13 @@ const MAX_PENDING_MESSAGES = 50; // Maximum messages to queue during connection
 const processMessage = async (
     ws: AuthenticatedWebSocket,
     data: Buffer,
+    isBinary: boolean,
     clientId: string,
     clientType: ClientType
 ): Promise<void> => {
     try {
-        const message: WSMessage = JSON.parse(data.toString());
+        const messageBuffer = isBinary ? zlib.gunzipSync(data) : data;
+        const message: WSMessage = JSON.parse(messageBuffer.toString());
         // logger.debug(
         //     `Processing message from ${clientType.toUpperCase()} ID: ${clientId}, type: ${message.type}`
@@ -76,7 +80,7 @@ const processMessage = async (
             clientId,
             message.type, // Pass message type for granular limiting
             100, // max requests per window
-            20, // max requests per message type per window
+            100, // max requests per message type per window
             60 * 1000 // window in milliseconds
         );
         if (rateLimitResult.isLimited) {
@@ -163,8 +167,16 @@ const processPendingMessages = async (
     );
     const jobs = [];
-    for (const messageData of ws.pendingMessages) {
-        jobs.push(processMessage(ws, messageData, clientId, clientType));
+    for (const pending of ws.pendingMessages) {
+        jobs.push(
+            processMessage(
+                ws,
+                pending.data,
+                pending.isBinary,
+                clientId,
+                clientType
+            )
+        );
     }
     await Promise.all(jobs);
@@ -325,7 +337,9 @@ const addClient = async (
     // Check Redis first if enabled
     if (redisManager.isRedisEnabled()) {
         try {
-            const redisVersion = await redisManager.get(getConfigVersionKey(clientId));
+            const redisVersion = await redisManager.get(
+                getConfigVersionKey(clientId)
+            );
             if (redisVersion !== null) {
                 configVersion = parseInt(redisVersion, 10);
                 // Sync to local cache
@@ -337,7 +351,10 @@ const addClient = async (
             } else {
                 // Use local cache version and sync to Redis
                 configVersion = clientConfigVersions.get(clientId) || 0;
-                await redisManager.set(getConfigVersionKey(clientId), configVersion.toString());
+                await redisManager.set(
+                    getConfigVersionKey(clientId),
+                    configVersion.toString()
+                );
             }
         } catch (error) {
             logger.error("Failed to get/set config version in Redis:", error);
@@ -432,7 +449,9 @@ const removeClient = async (
 };
 // Helper to get the current config version for a client
-const getClientConfigVersion = async (clientId: string): Promise<number | undefined> => {
+const getClientConfigVersion = async (
+    clientId: string
+): Promise<number | undefined> => {
     // Try Redis first if available
     if (redisManager.isRedisEnabled()) {
         try {
@@ -502,11 +521,26 @@ const sendToClientLocal = async (
     };
     const messageString = JSON.stringify(messageWithVersion);
-    clients.forEach((client) => {
-        if (client.readyState === WebSocket.OPEN) {
-            client.send(messageString);
-        }
-    });
+    if (options.compress) {
+        logger.debug(
+            `Message size before compression: ${messageString.length} bytes`
+        );
+        const compressed = zlib.gzipSync(Buffer.from(messageString, "utf8"));
+        logger.debug(
+            `Message size after compression: ${compressed.length} bytes`
+        );
+        clients.forEach((client) => {
+            if (client.readyState === WebSocket.OPEN) {
+                client.send(compressed);
+            }
+        });
+    } else {
+        clients.forEach((client) => {
+            if (client.readyState === WebSocket.OPEN) {
+                client.send(messageString);
+            }
+        });
+    }
     return true;
 };
@@ -532,11 +566,22 @@ const broadcastToAllExceptLocal = async (
             configVersion
         };
-        clients.forEach((client) => {
-            if (client.readyState === WebSocket.OPEN) {
-                client.send(JSON.stringify(messageWithVersion));
-            }
-        });
+        if (options.compress) {
+            const compressed = zlib.gzipSync(
+                Buffer.from(JSON.stringify(messageWithVersion), "utf8")
+            );
+            clients.forEach((client) => {
+                if (client.readyState === WebSocket.OPEN) {
+                    client.send(compressed);
+                }
+            });
+        } else {
+            clients.forEach((client) => {
+                if (client.readyState === WebSocket.OPEN) {
+                    client.send(JSON.stringify(messageWithVersion));
+                }
+            });
+        }
     }
 }
};
@@ -762,7 +807,7 @@ const setupConnection = async (
     }
     // Set up message handler FIRST to prevent race condition
-    ws.on("message", async (data) => {
+    ws.on("message", async (data, isBinary) => {
         if (!ws.isFullyConnected) {
             // Queue message for later processing with limits
             ws.pendingMessages = ws.pendingMessages || [];
@@ -777,11 +822,17 @@ const setupConnection = async (
             logger.debug(
                 `Queueing message from ${clientType.toUpperCase()} ID: ${clientId} (connection not fully established)`
             );
-            ws.pendingMessages.push(data as Buffer);
+            ws.pendingMessages.push({ data: data as Buffer, isBinary });
             return;
         }
-        await processMessage(ws, data as Buffer, clientId, clientType);
+        await processMessage(
+            ws,
+            data as Buffer,
+            isBinary,
+            clientId,
+            clientType
+        );
     });
     // Set up other event handlers before async operations
@@ -796,6 +847,31 @@ const setupConnection = async (
         );
     });
+    // Handle WebSocket protocol-level pings from older newt clients that do
+    // not send application-level "newt/ping" messages. Update the site's
+    // online state and lastPing timestamp so the offline checker treats them
+    // the same as modern newt clients.
+    if (clientType === "newt") {
+        const newtClient = client as Newt;
+        ws.on("ping", async () => {
+            if (!newtClient.siteId) return;
+            try {
+                await db
+                    .update(sites)
+                    .set({
+                        online: true,
+                        lastPing: Math.floor(Date.now() / 1000)
+                    })
+                    .where(eq(sites.siteId, newtClient.siteId));
+            } catch (error) {
+                logger.error(
+                    "Error updating newt site online state on WS ping",
+                    { error }
+                );
+            }
+        });
+    }
     ws.on("error", (error: Error) => {
         logger.error(
             `WebSocket error for ${clientType.toUpperCase()} ID ${clientId}:`,
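The hunks above gzip outbound JSON when `options.compress` is set and gunzip inbound frames that the `ws` library flags as binary. A minimal round-trip sketch using Node's built-in `zlib` (the function names are illustrative, not the server's API):

```typescript
import zlib from "zlib";

// Sender side: serialize to JSON, then gzip when compression is negotiated.
// Uncompressed messages stay as text frames; compressed ones become binary.
function encodeMessage(message: object, compress: boolean): Buffer | string {
    const json = JSON.stringify(message);
    return compress ? zlib.gzipSync(Buffer.from(json, "utf8")) : json;
}

// Receiver side: `ws` reports binary frames via the isBinary flag on the
// "message" event, so gunzip those before parsing.
function decodeMessage(data: Buffer, isBinary: boolean): object {
    const buffer = isBinary ? zlib.gunzipSync(data) : data;
    return JSON.parse(buffer.toString());
}
```

Keying decompression off the frame's binary flag is what lets old (uncompressed, text-frame) and new (gzipped, binary-frame) clients share one handler.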

View File

@@ -43,7 +43,7 @@ registry.registerPath({
     method: "post",
     path: "/resource/{resourceId}/access-token",
     description: "Generate a new access token for a resource.",
-    tags: [OpenAPITags.Resource, OpenAPITags.AccessToken],
+    tags: [OpenAPITags.PublicResource, OpenAPITags.AccessToken],
     request: {
         params: generateAccssTokenParamsSchema,
         body: {

View File

@@ -122,7 +122,7 @@ registry.registerPath({
     method: "get",
     path: "/org/{orgId}/access-tokens",
     description: "List all access tokens in an organization.",
-    tags: [OpenAPITags.Org, OpenAPITags.AccessToken],
+    tags: [OpenAPITags.AccessToken],
     request: {
         params: z.object({
             orgId: z.string()
@@ -135,8 +135,8 @@ registry.registerPath({
 registry.registerPath({
     method: "get",
     path: "/resource/{resourceId}/access-tokens",
-    description: "List all access tokens in an organization.",
-    tags: [OpenAPITags.Resource, OpenAPITags.AccessToken],
+    description: "List all access tokens for a resource.",
+    tags: [OpenAPITags.PublicResource, OpenAPITags.AccessToken],
     request: {
         params: z.object({
             resourceId: z.number()

View File

@@ -37,7 +37,7 @@ registry.registerPath({
     method: "put",
     path: "/org/{orgId}/api-key",
     description: "Create a new API key scoped to the organization.",
-    tags: [OpenAPITags.Org, OpenAPITags.ApiKey],
+    tags: [OpenAPITags.ApiKey],
     request: {
         params: paramsSchema,
         body: {

View File

@@ -18,7 +18,7 @@ registry.registerPath({
     method: "delete",
     path: "/org/{orgId}/api-key/{apiKeyId}",
     description: "Delete an API key.",
-    tags: [OpenAPITags.Org, OpenAPITags.ApiKey],
+    tags: [OpenAPITags.ApiKey],
     request: {
         params: paramsSchema
     },

View File

@@ -48,7 +48,7 @@ registry.registerPath({
     method: "get",
     path: "/org/{orgId}/api-key/{apiKeyId}/actions",
     description: "List all actions set for an API key.",
-    tags: [OpenAPITags.Org, OpenAPITags.ApiKey],
+    tags: [OpenAPITags.ApiKey],
     request: {
         params: paramsSchema,
         query: querySchema

View File

@@ -52,7 +52,7 @@ registry.registerPath({
     method: "get",
     path: "/org/{orgId}/api-keys",
     description: "List all API keys for an organization",
-    tags: [OpenAPITags.Org, OpenAPITags.ApiKey],
+    tags: [OpenAPITags.ApiKey],
     request: {
         params: paramsSchema,
         query: querySchema

View File

@@ -25,7 +25,7 @@ registry.registerPath({
     path: "/org/{orgId}/api-key/{apiKeyId}/actions",
     description:
         "Set actions for an API key. This will replace any existing actions.",
-    tags: [OpenAPITags.Org, OpenAPITags.ApiKey],
+    tags: [OpenAPITags.ApiKey],
     request: {
         params: paramsSchema,
         body: {

View File

@@ -20,7 +20,7 @@ registry.registerPath({
     method: "get",
     path: "/org/{orgId}/logs/request",
     description: "Query the request audit log for an organization",
-    tags: [OpenAPITags.Org],
+    tags: [OpenAPITags.Logs],
     request: {
         query: queryAccessAuditLogsQuery.omit({
             limit: true,

View File

@@ -151,7 +151,7 @@ registry.registerPath({
     method: "get",
     path: "/org/{orgId}/logs/analytics",
     description: "Query the request audit analytics for an organization",
-    tags: [OpenAPITags.Org],
+    tags: [OpenAPITags.Logs],
     request: {
         query: queryAccessAuditLogsQuery,
         params: queryRequestAuditLogsParams

View File

@@ -182,7 +182,7 @@ registry.registerPath({
     method: "get",
     path: "/org/{orgId}/logs/request",
     description: "Query the request audit log for an organization",
-    tags: [OpenAPITags.Org],
+    tags: [OpenAPITags.Logs],
     request: {
         query: queryAccessAuditLogsQuery,
         params: queryRequestAuditLogsParams

View File

@@ -1,5 +1,5 @@
 import { NextFunction, Request, Response } from "express";
-import { db, users } from "@server/db";
+import { bannedEmails, bannedIps, db, users } from "@server/db";
 import HttpCode from "@server/types/HttpCode";
 import { email, z } from "zod";
 import { fromError } from "zod-validation-error";
@@ -22,7 +22,6 @@ import { checkValidInvite } from "@server/auth/checkValidInvite";
 import { passwordSchema } from "@server/auth/passwordSchema";
 import { UserType } from "@server/types/UserTypes";
 import { build } from "@server/build";
-import resend, { AudienceIds, moveEmailToAudience } from "#dynamic/lib/resend";
 export const signupBodySchema = z.object({
     email: z.email().toLowerCase(),
@@ -66,6 +65,30 @@ export async function signup(
         skipVerificationEmail
     } = parsedBody.data;
+    const [bannedEmail] = await db
+        .select()
+        .from(bannedEmails)
+        .where(eq(bannedEmails.email, email))
+        .limit(1);
+    if (bannedEmail) {
+        return next(
+            createHttpError(HttpCode.FORBIDDEN, "Signup blocked. Do not attempt to continue to use this service.")
+        );
+    }
+    if (req.ip) {
+        const [bannedIp] = await db
+            .select()
+            .from(bannedIps)
+            .where(eq(bannedIps.ip, req.ip))
+            .limit(1);
+        if (bannedIp) {
+            return next(
+                createHttpError(HttpCode.FORBIDDEN, "Signup blocked. Do not attempt to continue to use this service.")
+            );
+        }
+    }
     const passwordHash = await hashPassword(password);
     const userId = generateId(15);
@@ -189,6 +212,7 @@ export async function signup(
         dateCreated: moment().toISOString(),
         termsAcceptedTimestamp: termsAcceptedTimestamp || null,
         termsVersion: "1",
+        marketingEmailConsent: marketingEmailConsent ?? false,
         lastPasswordChange: new Date().getTime()
     });
@@ -212,7 +236,7 @@ export async function signup(
         logger.debug(
             `User ${email} opted in to marketing emails during signup.`
         );
-        moveEmailToAudience(email, AudienceIds.SignUps);
+        // TODO: update user in Sendy
     }
     if (config.getRawConfig().flags?.require_email_verification) {
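The added block checks both denylists before any credentials are hashed or rows written. The gate reduces to a simple predicate; in this sketch, in-memory sets stand in for the `bannedEmails` and `bannedIps` tables (the example entries are hypothetical):

```typescript
// Minimal sketch of the signup denylist gate: reject before doing any
// expensive work (password hashing, user creation).
const bannedEmailSet = new Set<string>(["blocked@example.com"]);
const bannedIpSet = new Set<string>(["203.0.113.7"]);

function isSignupBlocked(email: string, ip?: string): boolean {
    // Emails are normalized to lowercase at the schema boundary upstream;
    // mirror that here so lookups are case-insensitive.
    if (bannedEmailSet.has(email.toLowerCase())) return true;
    // req.ip can be undefined (e.g. trust proxy misconfiguration), so the
    // IP check is conditional, matching the route's `if (req.ip)` guard.
    if (ip && bannedIpSet.has(ip)) return true;
    return false;
}
```

Running the check first also keeps the response indistinguishable in timing from a plain rejection, since no hashing happens for banned identities.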

View File

@@ -20,7 +20,7 @@ registry.registerPath({
     method: "put",
     path: "/org/{orgId}/blueprint",
     description: "Apply a base64 encoded JSON blueprint to an organization",
-    tags: [OpenAPITags.Org, OpenAPITags.Blueprint],
+    tags: [OpenAPITags.Blueprint],
     request: {
         params: applyBlueprintParamsSchema,
         body: {

View File

@@ -43,7 +43,7 @@ registry.registerPath({
     method: "put",
     path: "/org/{orgId}/blueprint",
     description: "Create and apply a YAML blueprint to an organization",
-    tags: [OpenAPITags.Org, OpenAPITags.Blueprint],
+    tags: [OpenAPITags.Blueprint],
     request: {
         params: applyBlueprintParamsSchema,
         body: {

View File

@@ -53,7 +53,7 @@ registry.registerPath({
     method: "get",
     path: "/org/{orgId}/blueprint/{blueprintId}",
     description: "Get a blueprint by its blueprint ID.",
-    tags: [OpenAPITags.Org, OpenAPITags.Blueprint],
+    tags: [OpenAPITags.Blueprint],
     request: {
         params: getBlueprintSchema
     },

View File

@@ -67,7 +67,7 @@ registry.registerPath({
     method: "get",
     path: "/org/{orgId}/blueprints",
     description: "List all blueprints for a organization.",
-    tags: [OpenAPITags.Org, OpenAPITags.Blueprint],
+    tags: [OpenAPITags.Blueprint],
     request: {
         params: z.object({
             orgId: z.string()

View File

@@ -48,7 +48,7 @@ registry.registerPath({
     method: "put",
     path: "/org/{orgId}/client",
     description: "Create a new client for an organization.",
-    tags: [OpenAPITags.Client, OpenAPITags.Org],
+    tags: [OpenAPITags.Client],
     request: {
         params: createClientParamsSchema,
         body: {

View File

@@ -49,7 +49,7 @@ registry.registerPath({
     path: "/org/{orgId}/user/{userId}/client",
     description:
         "Create a new client for a user and associate it with an existing olm.",
-    tags: [OpenAPITags.Client, OpenAPITags.Org, OpenAPITags.User],
+    tags: [OpenAPITags.Client],
     request: {
         params: paramsSchema,
         body: {

View File

@@ -243,7 +243,7 @@ registry.registerPath({
     path: "/org/{orgId}/client/{niceId}",
     description:
         "Get a client by orgId and niceId. NiceId is a readable ID for the site and unique on a per org basis.",
-    tags: [OpenAPITags.Org, OpenAPITags.Site],
+    tags: [OpenAPITags.Site],
     request: {
         params: z.object({
             orgId: z.string(),

View File

@@ -119,12 +119,12 @@ const listClientsSchema = z.object({
     }),
     query: z.string().optional(),
     sort_by: z
-        .enum(["megabytesIn", "megabytesOut"])
+        .enum(["name", "megabytesIn", "megabytesOut"])
         .optional()
         .catch(undefined)
         .openapi({
             type: "string",
-            enum: ["megabytesIn", "megabytesOut"],
+            enum: ["name", "megabytesIn", "megabytesOut"],
             description: "Field to sort by"
         }),
     order: z
@@ -237,7 +237,7 @@ registry.registerPath({
     method: "get",
     path: "/org/{orgId}/clients",
     description: "List all clients for an organization.",
-    tags: [OpenAPITags.Client, OpenAPITags.Org],
+    tags: [OpenAPITags.Client],
     request: {
         query: listClientsSchema,
         params: listClientsParamsSchema
@@ -363,14 +363,14 @@ export async function listClients(
     const countQuery = db.$count(baseQuery.as("filtered_clients"));
     const listMachinesQuery = baseQuery
-        .limit(page)
+        .limit(pageSize)
         .offset(pageSize * (page - 1))
         .orderBy(
             sort_by
                 ? order === "asc"
                     ? asc(clients[sort_by])
                     : desc(clients[sort_by])
-                : asc(clients.clientId)
+                : asc(clients.name)
         );
     const [clientsList, totalCount] = await Promise.all([
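The last hunk fixes a pagination bug: `.limit(page)` capped the result set at the page *number* rather than the page *size*, so page 1 always returned a single row. The intended slice, sketched over a plain array:

```typescript
// Pagination sketch: page is 1-based, pageSize rows per page.
// Mirrors .limit(pageSize).offset(pageSize * (page - 1)) in the query.
function paginate<T>(rows: T[], page: number, pageSize: number): T[] {
    const offset = pageSize * (page - 1);
    return rows.slice(offset, offset + pageSize);
}
```

With the old `.limit(page)`, requesting page 1 with any page size returned at most one row; the fix makes the window width independent of the page index.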

View File

@@ -256,7 +256,7 @@ registry.registerPath({
     method: "get",
     path: "/org/{orgId}/user-devices",
     description: "List all user devices for an organization.",
-    tags: [OpenAPITags.Client, OpenAPITags.Org],
+    tags: [OpenAPITags.Client],
     request: {
         query: listUserDevicesSchema,
         params: listUserDevicesParamsSchema

View File

@@ -23,7 +23,7 @@ registry.registerPath({
     method: "get",
     path: "/org/{orgId}/pick-client-defaults",
     description: "Return pre-requisite data for creating a client.",
-    tags: [OpenAPITags.Client, OpenAPITags.Site],
+    tags: [OpenAPITags.Client],
     request: {
         params: pickClientDefaultsSchema
     },

View File

@@ -1,51 +1,38 @@
 import { sendToClient } from "#dynamic/routers/ws";
 import { db, olms, Transaction } from "@server/db";
+import { canCompress } from "@server/lib/clientVersionChecks";
 import { Alias, SubnetProxyTarget } from "@server/lib/ip";
 import logger from "@server/logger";
 import { eq } from "drizzle-orm";

-const BATCH_SIZE = 50;
-const BATCH_DELAY_MS = 50;
-
-function sleep(ms: number): Promise<void> {
-    return new Promise((resolve) => setTimeout(resolve, ms));
-}
-
-function chunkArray<T>(array: T[], size: number): T[][] {
-    const chunks: T[][] = [];
-    for (let i = 0; i < array.length; i += size) {
-        chunks.push(array.slice(i, i + size));
-    }
-    return chunks;
-}
-
-export async function addTargets(newtId: string, targets: SubnetProxyTarget[]) {
-    const batches = chunkArray(targets, BATCH_SIZE);
-    for (let i = 0; i < batches.length; i++) {
-        if (i > 0) {
-            await sleep(BATCH_DELAY_MS);
-        }
-        await sendToClient(newtId, {
-            type: `newt/wg/targets/add`,
-            data: batches[i]
-        }, { incrementConfigVersion: true });
-    }
+export async function addTargets(
+    newtId: string,
+    targets: SubnetProxyTarget[],
+    version?: string | null
+) {
+    await sendToClient(
+        newtId,
+        {
+            type: `newt/wg/targets/add`,
+            data: targets
+        },
+        { incrementConfigVersion: true, compress: canCompress(version, "newt") }
+    );
 }

 export async function removeTargets(
     newtId: string,
-    targets: SubnetProxyTarget[]
+    targets: SubnetProxyTarget[],
+    version?: string | null
 ) {
-    const batches = chunkArray(targets, BATCH_SIZE);
-    for (let i = 0; i < batches.length; i++) {
-        if (i > 0) {
-            await sleep(BATCH_DELAY_MS);
-        }
-        await sendToClient(newtId, {
-            type: `newt/wg/targets/remove`,
-            data: batches[i]
-        }, { incrementConfigVersion: true });
-    }
+    await sendToClient(
+        newtId,
+        {
+            type: `newt/wg/targets/remove`,
+            data: targets
+        },
+        { incrementConfigVersion: true, compress: canCompress(version, "newt") }
+    );
 }

 export async function updateTargets(
@@ -53,26 +40,22 @@ export async function updateTargets(
     targets: {
         oldTargets: SubnetProxyTarget[];
         newTargets: SubnetProxyTarget[];
-    }
+    },
+    version?: string | null
 ) {
-    const oldBatches = chunkArray(targets.oldTargets, BATCH_SIZE);
-    const newBatches = chunkArray(targets.newTargets, BATCH_SIZE);
-    const maxBatches = Math.max(oldBatches.length, newBatches.length);
-    for (let i = 0; i < maxBatches; i++) {
-        if (i > 0) {
-            await sleep(BATCH_DELAY_MS);
-        }
-        await sendToClient(newtId, {
-            type: `newt/wg/targets/update`,
-            data: {
-                oldTargets: oldBatches[i] || [],
-                newTargets: newBatches[i] || []
-            }
-        }, { incrementConfigVersion: true }).catch((error) => {
-            logger.warn(`Error sending message:`, error);
-        });
-    }
+    await sendToClient(
+        newtId,
+        {
+            type: `newt/wg/targets/update`,
+            data: {
+                oldTargets: targets.oldTargets,
+                newTargets: targets.newTargets
+            }
+        },
+        { incrementConfigVersion: true, compress: canCompress(version, "newt") }
+    ).catch((error) => {
+        logger.warn(`Error sending message:`, error);
+    });
 }

 export async function addPeerData(
@@ -80,7 +63,8 @@ export async function addPeerData(
     siteId: number,
     remoteSubnets: string[],
     aliases: Alias[],
-    olmId?: string
+    olmId?: string,
+    version?: string | null
 ) {
     if (!olmId) {
         const [olm] = await db
@@ -92,16 +76,21 @@ export async function addPeerData(
             return; // ignore this because an olm might not be associated with the client anymore
         }
         olmId = olm.olmId;
+        version = olm.version;
     }
-    await sendToClient(olmId, {
-        type: `olm/wg/peer/data/add`,
-        data: {
-            siteId: siteId,
-            remoteSubnets: remoteSubnets,
-            aliases: aliases
-        }
-    }, { incrementConfigVersion: true }).catch((error) => {
+    await sendToClient(
+        olmId,
+        {
+            type: `olm/wg/peer/data/add`,
+            data: {
+                siteId: siteId,
+                remoteSubnets: remoteSubnets,
+                aliases: aliases
+            }
+        },
+        { incrementConfigVersion: true, compress: canCompress(version, "olm") }
+    ).catch((error) => {
         logger.warn(`Error sending message:`, error);
     });
 }
@@ -111,7 +100,8 @@ export async function removePeerData(
     siteId: number,
     remoteSubnets: string[],
     aliases: Alias[],
-    olmId?: string
+    olmId?: string,
+    version?: string | null
 ) {
     if (!olmId) {
         const [olm] = await db
@@ -123,16 +113,21 @@ export async function removePeerData(
             return;
         }
         olmId = olm.olmId;
+        version = olm.version;
     }
-    await sendToClient(olmId, {
-        type: `olm/wg/peer/data/remove`,
-        data: {
-            siteId: siteId,
-            remoteSubnets: remoteSubnets,
-            aliases: aliases
-        }
-    }, { incrementConfigVersion: true }).catch((error) => {
+    await sendToClient(
+        olmId,
+        {
+            type: `olm/wg/peer/data/remove`,
+            data: {
+                siteId: siteId,
+                remoteSubnets: remoteSubnets,
+                aliases: aliases
+            }
+        },
+        { incrementConfigVersion: true, compress: canCompress(version, "olm") }
+    ).catch((error) => {
         logger.warn(`Error sending message:`, error);
     });
 }
@@ -152,7 +147,8 @@ export async function updatePeerData(
         newAliases: Alias[];
     }
     | undefined,
-    olmId?: string
+    olmId?: string,
+    version?: string | null
 ) {
     if (!olmId) {
         const [olm] = await db
@@ -164,16 +160,21 @@ export async function updatePeerData(
             return;
         }
         olmId = olm.olmId;
+        version = olm.version;
     }
-    await sendToClient(olmId, {
-        type: `olm/wg/peer/data/update`,
-        data: {
-            siteId: siteId,
-            ...remoteSubnets,
-            ...aliases
-        }
-    }, { incrementConfigVersion: true }).catch((error) => {
+    await sendToClient(
+        olmId,
+        {
+            type: `olm/wg/peer/data/update`,
+            data: {
+                siteId: siteId,
+                ...remoteSubnets,
+                ...aliases
+            }
+        },
+        { incrementConfigVersion: true, compress: canCompress(version, "olm") }
+    ).catch((error) => {
         logger.warn(`Error sending message:`, error);
     });
 }

View File
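The file above gates compression on the connected client's reported version via `canCompress(version, "newt" | "olm")`. The real implementation in `@server/lib/clientVersionChecks` is not part of this diff; the sketch below shows one plausible shape, a per-client-type semver floor. The minimum versions here are invented for illustration only.

```typescript
// Hypothetical sketch of a version gate like canCompress. The cutoff
// versions are made up; the real module may use different logic entirely.
const MIN_COMPRESSION_VERSION: Record<string, [number, number, number]> = {
    newt: [1, 5, 0],
    olm: [1, 2, 0]
};

function canCompress(
    version: string | null | undefined,
    client: string
): boolean {
    if (!version) return false; // unknown version: assume no support
    const min = MIN_COMPRESSION_VERSION[client];
    if (!min) return false;
    // Tolerate a leading "v" and require a plain x.y.z triple.
    const parts = version.replace(/^v/, "").split(".").map(Number);
    if (parts.length < 3 || parts.some(Number.isNaN)) return false;
    for (let i = 0; i < 3; i++) {
        if (parts[i] > min[i]) return true;
        if (parts[i] < min[i]) return false;
    }
    return true; // exactly the minimum version
}
```

Defaulting to `false` for a missing or unparseable version matches the safe direction: an old client that cannot decompress never receives a compressed payload.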

@@ -40,7 +40,8 @@ async function queryDomains(orgId: string, limit: number, offset: number) {
             tries: domains.tries,
             configManaged: domains.configManaged,
             certResolver: domains.certResolver,
-            preferWildcardCert: domains.preferWildcardCert
+            preferWildcardCert: domains.preferWildcardCert,
+            errorMessage: domains.errorMessage
         })
         .from(orgDomains)
         .where(eq(orgDomains.orgId, orgId))
@@ -59,7 +60,7 @@ registry.registerPath({
     method: "get",
     path: "/org/{orgId}/domains",
     description: "List all domains for a organization.",
-    tags: [OpenAPITags.Org],
+    tags: [OpenAPITags.Domain],
     request: {
         params: z.object({
             orgId: z.string()

View File

@@ -125,7 +125,7 @@ export async function generateRelayMappings(exitNode: ExitNode) {
         // Add site as a destination for this client
         const destination: PeerDestination = {
             destinationIP: site.subnet.split("/")[0],
-            destinationPort: site.listenPort
+            destinationPort: site.listenPort || 1 // this satisfies gerbil for now but should be reevaluated
         };

         // Check if this destination is already in the array to avoid duplicates
@@ -165,7 +165,7 @@ export async function generateRelayMappings(exitNode: ExitNode) {
         const destination: PeerDestination = {
             destinationIP: peer.subnet.split("/")[0],
-            destinationPort: peer.listenPort
+            destinationPort: peer.listenPort || 1 // this satisfies gerbil for now but should be reevaluated
         };

         // Check for duplicates

View File

@@ -1,5 +1,5 @@
 import { Request, Response, NextFunction } from "express";
-import { eq, and, lt, inArray, sql } from "drizzle-orm";
+import { eq, sql } from "drizzle-orm";
 import { sites } from "@server/db";
 import { db } from "@server/db";
 import logger from "@server/logger";
@@ -11,19 +11,31 @@ import { FeatureId } from "@server/lib/billing/features";
 import { checkExitNodeOrg } from "#dynamic/lib/exitNodes";
 import { build } from "@server/build";

-// Track sites that are already offline to avoid unnecessary queries
-const offlineSites = new Set<string>();
-
-// Retry configuration for deadlock handling
-const MAX_RETRIES = 3;
-const BASE_DELAY_MS = 50;
-
 interface PeerBandwidth {
     publicKey: string;
     bytesIn: number;
     bytesOut: number;
 }

+interface AccumulatorEntry {
+    bytesIn: number;
+    bytesOut: number;
+    /** Present when the update came through a remote exit node. */
+    exitNodeId?: number;
+    /** Whether to record egress usage for billing purposes. */
+    calcUsage: boolean;
+}
+
+// Retry configuration for deadlock handling
+const MAX_RETRIES = 3;
+const BASE_DELAY_MS = 50;
+
+// How often to flush accumulated bandwidth data to the database
+const FLUSH_INTERVAL_MS = 30_000; // 30 seconds
+
+// In-memory accumulator: publicKey -> AccumulatorEntry
+let accumulator = new Map<string, AccumulatorEntry>();
+
 /**
  * Check if an error is a deadlock error
  */
@@ -63,6 +75,220 @@ async function withDeadlockRetry<T>(
     }
 }

+/**
+ * Flush all accumulated site bandwidth data to the database.
+ *
+ * Swaps out the accumulator before writing so that any bandwidth messages
+ * received during the flush are captured in the new accumulator rather than
+ * being lost or causing contention. Entries that fail to write are re-queued
+ * back into the accumulator so they will be retried on the next flush.
+ *
+ * This function is exported so that the application's graceful-shutdown
+ * cleanup handler can call it before the process exits.
+ */
+export async function flushSiteBandwidthToDb(): Promise<void> {
+    if (accumulator.size === 0) {
+        return;
+    }
+
+    // Atomically swap out the accumulator so new data keeps flowing in
+    // while we write the snapshot to the database.
+    const snapshot = accumulator;
+    accumulator = new Map<string, AccumulatorEntry>();
+
+    const currentTime = new Date().toISOString();
+
+    // Sort by publicKey for consistent lock ordering across concurrent
+    // writers — deadlock-prevention strategy.
+    const sortedEntries = [...snapshot.entries()].sort(([a], [b]) =>
+        a.localeCompare(b)
+    );
+
+    logger.debug(
+        `Flushing accumulated bandwidth data for ${sortedEntries.length} site(s) to the database`
+    );
+
+    // Aggregate billing usage by org, collected during the DB update loop.
+    const orgUsageMap = new Map<string, number>();
+
+    for (const [publicKey, { bytesIn, bytesOut, exitNodeId, calcUsage }] of sortedEntries) {
+        try {
+            const updatedSite = await withDeadlockRetry(async () => {
+                const [result] = await db
+                    .update(sites)
+                    .set({
+                        megabytesOut: sql`COALESCE(${sites.megabytesOut}, 0) + ${bytesIn}`,
+                        megabytesIn: sql`COALESCE(${sites.megabytesIn}, 0) + ${bytesOut}`,
+                        lastBandwidthUpdate: currentTime
+                    })
+                    .where(eq(sites.pubKey, publicKey))
+                    .returning({
+                        orgId: sites.orgId,
+                        siteId: sites.siteId
+                    });
+                return result;
+            }, `flush bandwidth for site ${publicKey}`);
+
+            if (updatedSite) {
+                if (exitNodeId) {
+                    const notAllowed = await checkExitNodeOrg(
+                        exitNodeId,
+                        updatedSite.orgId
+                    );
+                    if (notAllowed) {
+                        logger.warn(
+                            `Exit node ${exitNodeId} is not allowed for org ${updatedSite.orgId}`
+                        );
+                        // Skip usage tracking for this site but continue
+                        // processing the rest.
+                        continue;
+                    }
+                }
+
+                if (calcUsage) {
+                    const totalBandwidth = bytesIn + bytesOut;
+                    const current = orgUsageMap.get(updatedSite.orgId) ?? 0;
+                    orgUsageMap.set(updatedSite.orgId, current + totalBandwidth);
+                }
+            }
+        } catch (error) {
+            logger.error(
+                `Failed to flush bandwidth for site ${publicKey}:`,
+                error
+            );
+            // Re-queue the failed entry so it is retried on the next flush
+            // rather than silently dropped.
+            const existing = accumulator.get(publicKey);
+            if (existing) {
+                existing.bytesIn += bytesIn;
+                existing.bytesOut += bytesOut;
+            } else {
+                accumulator.set(publicKey, {
+                    bytesIn,
+                    bytesOut,
+                    exitNodeId,
+                    calcUsage
+                });
+            }
+        }
+    }
+
+    // Process billing usage updates outside the site-update loop to keep
+    // lock scope small and concerns separated.
+    if (orgUsageMap.size > 0) {
+        // Sort org IDs for consistent lock ordering.
+        const sortedOrgIds = [...orgUsageMap.keys()].sort();
+        for (const orgId of sortedOrgIds) {
+            try {
+                const totalBandwidth = orgUsageMap.get(orgId)!;
+                const bandwidthUsage = await usageService.add(
+                    orgId,
+                    FeatureId.EGRESS_DATA_MB,
+                    totalBandwidth
+                );
+
+                if (bandwidthUsage) {
+                    // Fire-and-forget — don't block the flush on limit checking.
+                    usageService
+                        .checkLimitSet(
+                            orgId,
+                            FeatureId.EGRESS_DATA_MB,
+                            bandwidthUsage
+                        )
+                        .catch((error: any) => {
+                            logger.error(
+                                `Error checking bandwidth limits for org ${orgId}:`,
+                                error
+                            );
+                        });
+                }
+            } catch (error) {
+                logger.error(
+                    `Error processing usage for org ${orgId}:`,
+                    error
+                );
+                // Continue with other orgs.
+            }
+        }
+    }
+}
+
+// ---------------------------------------------------------------------------
+// Periodic flush timer
+// ---------------------------------------------------------------------------
+
+const flushTimer = setInterval(async () => {
+    try {
+        await flushSiteBandwidthToDb();
+    } catch (error) {
+        logger.error(
+            "Unexpected error during periodic site bandwidth flush:",
+            error
+        );
+    }
+}, FLUSH_INTERVAL_MS);
+
+// Allow the process to exit normally even while the timer is pending.
+// The graceful-shutdown path (see server/cleanup.ts) will call
+// flushSiteBandwidthToDb() explicitly before process.exit(), so no data
+// is lost.
+flushTimer.unref();
+
+// ---------------------------------------------------------------------------
+// Public API
+// ---------------------------------------------------------------------------
+
+/**
+ * Accumulate bandwidth data reported by a gerbil or remote exit node.
+ *
+ * Only peers that actually transferred data (bytesIn > 0) are added to the
+ * accumulator; peers with no activity are silently ignored, which means the
+ * flush will only write rows that have genuinely changed.
+ *
+ * The function is intentionally synchronous in its fast path so that the
+ * HTTP handler can respond immediately without waiting for any I/O.
+ */
+export async function updateSiteBandwidth(
+    bandwidthData: PeerBandwidth[],
+    calcUsageAndLimits: boolean,
+    exitNodeId?: number
+): Promise<void> {
+    for (const { publicKey, bytesIn, bytesOut } of bandwidthData) {
+        // Skip peers that haven't transferred any data — writing zeros to the
+        // database would be a no-op anyway.
+        if (bytesIn <= 0 && bytesOut <= 0) {
+            continue;
+        }
+
+        const existing = accumulator.get(publicKey);
+        if (existing) {
+            existing.bytesIn += bytesIn;
+            existing.bytesOut += bytesOut;
+            // Retain the most-recent exitNodeId for this peer.
+            if (exitNodeId !== undefined) {
+                existing.exitNodeId = exitNodeId;
+            }
+            // Once calcUsage has been requested for a peer, keep it set for
+            // the lifetime of this flush window.
+            if (calcUsageAndLimits) {
+                existing.calcUsage = true;
+            }
+        } else {
+            accumulator.set(publicKey, {
+                bytesIn,
+                bytesOut,
+                exitNodeId,
+                calcUsage: calcUsageAndLimits
+            });
+        }
+    }
+}
+
+// ---------------------------------------------------------------------------
+// HTTP handler
+// ---------------------------------------------------------------------------
+
 export const receiveBandwidth = async (
     req: Request,
     res: Response,
@@ -75,7 +301,9 @@ export const receiveBandwidth = async (
             throw new Error("Invalid bandwidth data");
         }

-        await updateSiteBandwidth(bandwidthData, build == "saas"); // we are checking the usage on saas only
+        // Accumulate in memory; the periodic timer (and the shutdown hook)
+        // will write to the database.
+        await updateSiteBandwidth(bandwidthData, build == "saas");

         return response(res, {
             data: {},
@@ -93,202 +321,4 @@ export const receiveBandwidth = async (
             )
         );
     }
 };
-
-export async function updateSiteBandwidth(
-    bandwidthData: PeerBandwidth[],
-    calcUsageAndLimits: boolean,
-    exitNodeId?: number
-) {
-    const currentTime = new Date();
-    const oneMinuteAgo = new Date(currentTime.getTime() - 60000); // 1 minute ago
-
-    // Sort bandwidth data by publicKey to ensure consistent lock ordering across all instances
-    // This is critical for preventing deadlocks when multiple instances update the same sites
-    const sortedBandwidthData = [...bandwidthData].sort((a, b) =>
-        a.publicKey.localeCompare(b.publicKey)
-    );
-
-    // First, handle sites that are actively reporting bandwidth
-    const activePeers = sortedBandwidthData.filter((peer) => peer.bytesIn > 0);
-
-    // Aggregate usage data by organization (collected outside transaction)
-    const orgUsageMap = new Map<string, number>();
-
-    if (activePeers.length > 0) {
-        // Remove any active peers from offline tracking since they're sending data
-        activePeers.forEach((peer) => offlineSites.delete(peer.publicKey));
-
-        // Update each active site individually with retry logic
-        // This reduces transaction scope and allows retries per-site
-        for (const peer of activePeers) {
-            try {
-                const updatedSite = await withDeadlockRetry(async () => {
-                    const [result] = await db
-                        .update(sites)
-                        .set({
-                            megabytesOut: sql`${sites.megabytesOut} + ${peer.bytesIn}`,
-                            megabytesIn: sql`${sites.megabytesIn} + ${peer.bytesOut}`,
-                            lastBandwidthUpdate: currentTime.toISOString(),
-                            online: true
-                        })
-                        .where(eq(sites.pubKey, peer.publicKey))
-                        .returning({
-                            online: sites.online,
-                            orgId: sites.orgId,
-                            siteId: sites.siteId,
-                            lastBandwidthUpdate: sites.lastBandwidthUpdate
-                        });
-                    return result;
-                }, `update active site ${peer.publicKey}`);
-
-                if (updatedSite) {
-                    if (exitNodeId) {
-                        const notAllowed = await checkExitNodeOrg(
-                            exitNodeId,
-                            updatedSite.orgId
-                        );
-                        if (notAllowed) {
-                            logger.warn(
-                                `Exit node ${exitNodeId} is not allowed for org ${updatedSite.orgId}`
-                            );
-                            // Skip this site but continue processing others
-                            continue;
-                        }
-                    }
-
-                    // Aggregate bandwidth usage for the org
-                    const totalBandwidth = peer.bytesIn + peer.bytesOut;
-                    const currentOrgUsage =
-                        orgUsageMap.get(updatedSite.orgId) || 0;
-                    orgUsageMap.set(
-                        updatedSite.orgId,
-                        currentOrgUsage + totalBandwidth
-                    );
-                }
-            } catch (error) {
-                logger.error(
-                    `Failed to update bandwidth for site ${peer.publicKey}:`,
-                    error
-                );
-                // Continue with other sites
-            }
-        }
-    }
-
-    // Process usage updates outside of site update transactions
-    // This separates the concerns and reduces lock contention
-    if (calcUsageAndLimits && orgUsageMap.size > 0) {
-        // Sort org IDs to ensure consistent lock ordering
-        const allOrgIds = [...new Set([...orgUsageMap.keys()])].sort();
-
-        for (const orgId of allOrgIds) {
-            try {
-                // Process bandwidth usage for this org
-                const totalBandwidth = orgUsageMap.get(orgId);
-                if (totalBandwidth) {
-                    const bandwidthUsage = await usageService.add(
-                        orgId,
-                        FeatureId.EGRESS_DATA_MB,
-                        totalBandwidth
-                    );
-
-                    if (bandwidthUsage) {
-                        // Fire and forget - don't block on limit checking
-                        usageService
-                            .checkLimitSet(
-                                orgId,
-                                FeatureId.EGRESS_DATA_MB,
-                                bandwidthUsage
-                            )
-                            .catch((error: any) => {
-                                logger.error(
-                                    `Error checking bandwidth limits for org ${orgId}:`,
-                                    error
-                                );
-                            });
-                    }
-                }
-            } catch (error) {
-                logger.error(`Error processing usage for org ${orgId}:`, error);
-                // Continue with other orgs
-            }
-        }
-    }
-
-    // Handle sites that reported zero bandwidth but need online status updated
-    const zeroBandwidthPeers = sortedBandwidthData.filter(
-        (peer) => peer.bytesIn === 0 && !offlineSites.has(peer.publicKey)
-    );
-
-    if (zeroBandwidthPeers.length > 0) {
-        // Fetch all zero bandwidth sites in one query
-        const zeroBandwidthSites = await db
-            .select()
-            .from(sites)
-            .where(
-                inArray(
-                    sites.pubKey,
-                    zeroBandwidthPeers.map((p) => p.publicKey)
-                )
-            );
-
-        // Sort by siteId to ensure consistent lock ordering
-        const sortedZeroBandwidthSites = zeroBandwidthSites.sort(
-            (a, b) => a.siteId - b.siteId
-        );
-
-        for (const site of sortedZeroBandwidthSites) {
-            let newOnlineStatus = site.online;
-
-            // Check if site should go offline based on last bandwidth update WITH DATA
-            if (site.lastBandwidthUpdate) {
-                const lastUpdateWithData = new Date(site.lastBandwidthUpdate);
-                if (lastUpdateWithData < oneMinuteAgo) {
-                    newOnlineStatus = false;
-                }
-            } else {
-                // No previous data update recorded, set to offline
-                newOnlineStatus = false;
-            }
-
-            // Only update online status if it changed
-            if (site.online !== newOnlineStatus) {
-                try {
-                    const updatedSite = await withDeadlockRetry(async () => {
-                        const [result] = await db
-                            .update(sites)
-                            .set({
-                                online: newOnlineStatus
-                            })
-                            .where(eq(sites.siteId, site.siteId))
-                            .returning();
-                        return result;
-                    }, `update offline status for site ${site.siteId}`);
-
-                    if (updatedSite && exitNodeId) {
-                        const notAllowed = await checkExitNodeOrg(
-                            exitNodeId,
-                            updatedSite.orgId
-                        );
-                        if (notAllowed) {
-                            logger.warn(
-                                `Exit node ${exitNodeId} is not allowed for org ${updatedSite.orgId}`
-                            );
-                        }
-                    }
-
-                    // If site went offline, add it to our tracking set
-                    if (!newOnlineStatus && site.pubKey) {
-                        offlineSites.add(site.pubKey);
-                    }
-                } catch (error) {
-                    logger.error(
-                        `Failed to update offline status for site ${site.siteId}:`,
-                        error
-                    );
-                    // Continue with other sites
-                }
-            }
-        }
-    }
-}

View File
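The bandwidth rewrite above replaces per-request database writes with an in-memory accumulator that a 30-second timer flushes. The core trick is swapping the map out before writing, so entries added mid-flush land in the fresh map, and re-queueing failed entries so they retry on the next flush. A stripped-down model of that pattern (no database, class name and writer callback invented for illustration):

```typescript
interface Entry { bytesIn: number; bytesOut: number; }

// Stripped-down model of the accumulate/swap/flush pattern above.
// The `write` callback stands in for the drizzle update and may fail
// per entry, exercising the re-queue path.
class BandwidthAccumulator {
    private map = new Map<string, Entry>();

    add(key: string, bytesIn: number, bytesOut: number): void {
        if (bytesIn <= 0 && bytesOut <= 0) return; // skip idle peers
        const e = this.map.get(key);
        if (e) {
            e.bytesIn += bytesIn;
            e.bytesOut += bytesOut;
        } else {
            this.map.set(key, { bytesIn, bytesOut });
        }
    }

    async flush(
        write: (key: string, e: Entry) => Promise<void>
    ): Promise<void> {
        if (this.map.size === 0) return;
        // Swap first: entries added while writing land in the fresh map.
        const snapshot = this.map;
        this.map = new Map<string, Entry>();
        // Sort keys for consistent lock ordering, as the real flush does.
        const sorted = [...snapshot.entries()].sort(([a], [b]) =>
            a.localeCompare(b)
        );
        for (const [key, e] of sorted) {
            try {
                await write(key, e);
            } catch {
                // Re-queue so the next flush retries instead of dropping data.
                this.add(key, e.bytesIn, e.bytesOut);
            }
        }
    }

    size(): number { return this.map.size; }
}
```

Because the swap happens before any awaited write, a report arriving during the flush can never be both written and lost; the worst case is that it waits one extra flush interval.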

@@ -112,7 +112,7 @@ export async function updateHolePunch(
             destinations: destinations
         });
     } catch (error) {
-        // logger.error(error); // FIX THIS
+        logger.error(error);
         return next(
             createHttpError(
                 HttpCode.INTERNAL_SERVER_ERROR,
@@ -262,7 +262,7 @@ export async function updateAndGenerateEndpointDestinations(
         if (site.subnet && site.listenPort) {
             destinations.push({
                 destinationIP: site.subnet.split("/")[0],
-                destinationPort: site.listenPort
+                destinationPort: site.listenPort || 1 // this satisfies gerbil for now but should be reevaluated
             });
         }
     }
@@ -339,10 +339,10 @@ export async function updateAndGenerateEndpointDestinations(
         handleSiteEndpointChange(newt.siteId, updatedSite.endpoint!);
     }

-    if (!updatedSite || !updatedSite.subnet) {
-        logger.warn(`Site not found: ${newt.siteId}`);
-        throw new Error("Site not found");
-    }
+    // if (!updatedSite || !updatedSite.subnet) {
+    //     logger.warn(`Site not found: ${newt.siteId}`);
+    //     throw new Error("Site not found");
+    // }

     // Find all clients that connect to this site
     // const sitesClientPairs = await db
View File

@@ -27,7 +27,7 @@ registry.registerPath({
     method: "put",
     path: "/idp/{idpId}/org/{orgId}",
     description: "Create an IDP policy for an existing IDP on an organization.",
-    tags: [OpenAPITags.Idp],
+    tags: [OpenAPITags.GlobalIdp],
     request: {
         params: paramsSchema,
         body: {

View File

@@ -37,7 +37,7 @@ registry.registerPath({
     method: "put",
     path: "/idp/oidc",
     description: "Create an OIDC IdP.",
-    tags: [OpenAPITags.Idp],
+    tags: [OpenAPITags.GlobalIdp],
     request: {
         body: {
             content: {

View File

@@ -21,7 +21,7 @@ registry.registerPath({
     method: "delete",
     path: "/idp/{idpId}",
     description: "Delete IDP.",
-    tags: [OpenAPITags.Idp],
+    tags: [OpenAPITags.GlobalIdp],
     request: {
         params: paramsSchema
     },

View File

@@ -19,7 +19,7 @@ registry.registerPath({
     method: "delete",
     path: "/idp/{idpId}/org/{orgId}",
     description: "Create an OIDC IdP for an organization.",
-    tags: [OpenAPITags.Idp],
+    tags: [OpenAPITags.GlobalIdp],
     request: {
         params: paramsSchema
     },

View File

@@ -34,7 +34,7 @@ registry.registerPath({
     method: "get",
     path: "/idp/{idpId}",
     description: "Get an IDP by its IDP ID.",
-    tags: [OpenAPITags.Idp],
+    tags: [OpenAPITags.GlobalIdp],
     request: {
         params: paramsSchema
     },

View File

@@ -48,7 +48,7 @@ registry.registerPath({
     method: "get",
     path: "/idp/{idpId}/org",
     description: "List all org policies on an IDP.",
-    tags: [OpenAPITags.Idp],
+    tags: [OpenAPITags.GlobalIdp],
     request: {
         params: paramsSchema,
         query: querySchema

View File

@@ -58,7 +58,7 @@ registry.registerPath({
     method: "get",
     path: "/idp",
     description: "List all IDP in the system.",
-    tags: [OpenAPITags.Idp],
+    tags: [OpenAPITags.GlobalIdp],
     request: {
         query: querySchema
     },

View File

@@ -26,7 +26,7 @@ registry.registerPath({
     method: "post",
     path: "/idp/{idpId}/org/{orgId}",
     description: "Update an IDP org policy.",
-    tags: [OpenAPITags.Idp],
+    tags: [OpenAPITags.GlobalIdp],
     request: {
         params: paramsSchema,
         body: {

View File

@@ -42,7 +42,7 @@ registry.registerPath({
     method: "post",
     path: "/idp/{idpId}/oidc",
     description: "Update an OIDC IdP.",
-    tags: [OpenAPITags.Idp],
+    tags: [OpenAPITags.GlobalIdp],
     request: {
         params: paramsSchema,
         body: {

View File

@@ -27,7 +27,8 @@ import {
     verifyApiKeyClientAccess,
     verifyApiKeySiteResourceAccess,
     verifyApiKeySetResourceClients,
-    verifyLimits
+    verifyLimits,
+    verifyApiKeyDomainAccess
 } from "@server/middlewares";
 import HttpCode from "@server/types/HttpCode";
 import { Router } from "express";
@@ -347,6 +348,56 @@ authenticated.get(
     domain.listDomains
 );

+authenticated.get(
+    "/org/:orgId/domain/:domainId",
+    verifyApiKeyOrgAccess,
+    verifyApiKeyDomainAccess,
+    verifyApiKeyHasAction(ActionsEnum.getDomain),
+    domain.getDomain
+);
+
+authenticated.put(
+    "/org/:orgId/domain",
+    verifyApiKeyOrgAccess,
+    verifyApiKeyHasAction(ActionsEnum.createOrgDomain),
+    logActionAudit(ActionsEnum.createOrgDomain),
+    domain.createOrgDomain
+);
+
+authenticated.patch(
+    "/org/:orgId/domain/:domainId",
+    verifyApiKeyOrgAccess,
+    verifyApiKeyDomainAccess,
+    verifyApiKeyHasAction(ActionsEnum.updateOrgDomain),
+    domain.updateOrgDomain
+);
+
+authenticated.delete(
+    "/org/:orgId/domain/:domainId",
+    verifyApiKeyOrgAccess,
+    verifyApiKeyDomainAccess,
+    verifyApiKeyHasAction(ActionsEnum.deleteOrgDomain),
+    logActionAudit(ActionsEnum.deleteOrgDomain),
+    domain.deleteAccountDomain
+);
+
+authenticated.get(
+    "/org/:orgId/domain/:domainId/dns-records",
+    verifyApiKeyOrgAccess,
+    verifyApiKeyDomainAccess,
+    verifyApiKeyHasAction(ActionsEnum.getDNSRecords),
+    domain.getDNSRecords
+);
+
+authenticated.post(
+    "/org/:orgId/domain/:domainId/restart",
+    verifyApiKeyOrgAccess,
+    verifyApiKeyDomainAccess,
+    verifyApiKeyHasAction(ActionsEnum.restartOrgDomain),
+    logActionAudit(ActionsEnum.restartOrgDomain),
+    domain.restartOrgDomain
+);
+
 authenticated.get(
     "/org/:orgId/invitations",
     verifyApiKeyOrgAccess,

View File

@@ -1,4 +1,15 @@
-import { clients, clientSiteResourcesAssociationsCache, clientSitesAssociationsCache, db, ExitNode, resources, Site, siteResources, targetHealthCheck, targets } from "@server/db";
+import {
+    clients,
+    clientSiteResourcesAssociationsCache,
+    clientSitesAssociationsCache,
+    db,
+    ExitNode,
+    resources,
+    Site,
+    siteResources,
+    targetHealthCheck,
+    targets
+} from "@server/db";
 import logger from "@server/logger";
 import { initPeerAddHandshake, updatePeer } from "../olm/peers";
 import { eq, and } from "drizzle-orm";
@@ -69,40 +80,42 @@ export async function buildClientConfigurationForNewtClient(
         // )
         // );

-        // update the peer info on the olm
-        // if the peer has not been added yet this will be a no-op
-        await updatePeer(client.clients.clientId, {
-            siteId: site.siteId,
-            endpoint: site.endpoint!,
-            relayEndpoint: `${exitNode.endpoint}:${config.getRawConfig().gerbil.clients_start_port}`,
-            publicKey: site.publicKey!,
-            serverIP: site.address,
-            serverPort: site.listenPort
-            // remoteSubnets: generateRemoteSubnets(
-            //     allSiteResources.map(
-            //         ({ siteResources }) => siteResources
-            //     )
-            // ),
-            // aliases: generateAliasConfig(
-            //     allSiteResources.map(
-            //         ({ siteResources }) => siteResources
-            //     )
-            // )
-        });
-
-        // also trigger the peer add handshake in case the peer was not already added to the olm and we need to hole punch
-        // if it has already been added this will be a no-op
-        await initPeerAddHandshake(
-            // this will kick off the add peer process for the client
-            client.clients.clientId,
-            {
-                siteId,
-                exitNode: {
-                    publicKey: exitNode.publicKey,
-                    endpoint: exitNode.endpoint
-                }
-            }
-        );
+        if (!client.clientSitesAssociationsCache.isJitMode) { // if we are adding sites through jit then dont add the site to the olm
+            // update the peer info on the olm
+            // if the peer has not been added yet this will be a no-op
+            await updatePeer(client.clients.clientId, {
+                siteId: site.siteId,
+                endpoint: site.endpoint!,
+                relayEndpoint: `${exitNode.endpoint}:${config.getRawConfig().gerbil.clients_start_port}`,
+                publicKey: site.publicKey!,
+                serverIP: site.address,
+                serverPort: site.listenPort
+                // remoteSubnets: generateRemoteSubnets(
+                //     allSiteResources.map(
+                //         ({ siteResources }) => siteResources
+                //     )
+                // ),
+                // aliases: generateAliasConfig(
+                //     allSiteResources.map(
+                //         ({ siteResources }) => siteResources
+                //     )
+                // )
+            });
+
+            // also trigger the peer add handshake in case the peer was not already added to the olm and we need to hole punch
+            // if it has already been added this will be a no-op
+            await initPeerAddHandshake(
+                // this will kick off the add peer process for the client
+                client.clients.clientId,
+                {
+                    siteId,
+                    exitNode: {
+                        publicKey: exitNode.publicKey,
+                        endpoint: exitNode.endpoint
+                    }
+                }
+            );
+        }

         return {
             publicKey: client.clients.pubKey!,
@@ -188,7 +201,8 @@ export async function buildTargetConfigurationForNewtClient(siteId: number) {
             hcTimeout: targetHealthCheck.hcTimeout,
             hcHeaders: targetHealthCheck.hcHeaders,
             hcMethod: targetHealthCheck.hcMethod,
-            hcTlsServerName: targetHealthCheck.hcTlsServerName
+            hcTlsServerName: targetHealthCheck.hcTlsServerName,
+            hcStatus: targetHealthCheck.hcStatus
         })
         .from(targets)
         .innerJoin(resources, eq(targets.resourceId, resources.resourceId))
@@ -229,9 +243,9 @@ export async function buildTargetConfigurationForNewtClient(siteId: number) {
             !target.hcInterval ||
             !target.hcMethod
         ) {
-            logger.debug(
-                `Skipping target ${target.targetId} due to missing health check fields`
-            );
+            // logger.debug(
+            //     `Skipping adding target health check ${target.targetId} due to missing health check fields`
+            // );
             return null; // Skip targets with missing health check fields
         }
@@ -261,7 +275,8 @@ export async function buildTargetConfigurationForNewtClient(siteId: number) {
             hcTimeout: target.hcTimeout, // in seconds
             hcHeaders: hcHeadersSend,
             hcMethod: target.hcMethod,
-            hcTlsServerName: target.hcTlsServerName
+            hcTlsServerName: target.hcTlsServerName,
+            hcStatus: target.hcStatus
         };
     });

View File

@@ -6,6 +6,7 @@ import { db, ExitNode, exitNodes, Newt, sites } from "@server/db";
import { eq } from "drizzle-orm"; import { eq } from "drizzle-orm";
import { sendToExitNode } from "#dynamic/lib/exitNodes"; import { sendToExitNode } from "#dynamic/lib/exitNodes";
import { buildClientConfigurationForNewtClient } from "./buildConfiguration"; import { buildClientConfigurationForNewtClient } from "./buildConfiguration";
import { canCompress } from "@server/lib/clientVersionChecks";
const inputSchema = z.object({ const inputSchema = z.object({
publicKey: z.string(), publicKey: z.string(),
@@ -104,11 +105,11 @@ export const handleGetConfigMessage: MessageHandler = async (context) => {
const payload = { const payload = {
oldDestination: { oldDestination: {
destinationIP: existingSite.subnet?.split("/")[0], destinationIP: existingSite.subnet?.split("/")[0],
destinationPort: existingSite.listenPort destinationPort: existingSite.listenPort || 1 // this satisfies gerbil for now but should be reevaluated
}, },
newDestination: { newDestination: {
destinationIP: site.subnet?.split("/")[0], destinationIP: site.subnet?.split("/")[0],
destinationPort: site.listenPort destinationPort: site.listenPort || 1 // this satisfies gerbil for now but should be reevaluated
} }
}; };
@@ -135,6 +136,9 @@ export const handleGetConfigMessage: MessageHandler = async (context) => {
targets targets
} }
}, },
options: {
compress: canCompress(newt.version, "newt")
},
broadcast: false, broadcast: false,
excludeSender: false excludeSender: false
}; };

View File

@@ -0,0 +1,34 @@
import { MessageHandler } from "@server/routers/ws";
import { db, Newt, sites } from "@server/db";
import { eq } from "drizzle-orm";
import logger from "@server/logger";
/**
* Handles disconnect notifications from sites so the UI can mark them as disconnected
*/
export const handleNewtDisconnectingMessage: MessageHandler = async (context) => {
const { client: c } = context;
const newt = c as Newt;
if (!newt) {
logger.warn("Newt not found");
return;
}
if (!newt.siteId) {
logger.warn("Newt has no site ID!");
return;
}
try {
// Mark the site as offline so the UI shows it as disconnected
await db
.update(sites)
.set({
online: false
})
.where(eq(sites.siteId, newt.siteId));
} catch (error) {
logger.error("Error handling disconnecting message", { error });
}
};

View File

@@ -1,105 +1,107 @@
import { db, sites } from "@server/db"; import { db, newts, sites } from "@server/db";
import { disconnectClient, getClientConfigVersion } from "#dynamic/routers/ws"; import { hasActiveConnections, getClientConfigVersion } from "#dynamic/routers/ws";
import { MessageHandler } from "@server/routers/ws"; import { MessageHandler } from "@server/routers/ws";
import { clients, Newt } from "@server/db"; import { Newt } from "@server/db";
import { eq, lt, isNull, and, or } from "drizzle-orm"; import { eq, lt, isNull, and, or } from "drizzle-orm";
import logger from "@server/logger"; import logger from "@server/logger";
import { validateSessionToken } from "@server/auth/sessions/app";
import { checkOrgAccessPolicy } from "#dynamic/lib/checkOrgAccessPolicy";
import { sendTerminateClient } from "../client/terminate";
import { encodeHexLowerCase } from "@oslojs/encoding";
import { sha256 } from "@oslojs/crypto/sha2";
import { sendNewtSyncMessage } from "./sync"; import { sendNewtSyncMessage } from "./sync";
// Track if the offline checker interval is running // Track if the offline checker interval is running
// let offlineCheckerInterval: NodeJS.Timeout | null = null; let offlineCheckerInterval: NodeJS.Timeout | null = null;
// const OFFLINE_CHECK_INTERVAL = 30 * 1000; // Check every 30 seconds const OFFLINE_CHECK_INTERVAL = 30 * 1000; // Check every 30 seconds
// const OFFLINE_THRESHOLD_MS = 2 * 60 * 1000; // 2 minutes const OFFLINE_THRESHOLD_MS = 2 * 60 * 1000; // 2 minutes
/** /**
* Starts the background interval that checks for clients that haven't pinged recently * Starts the background interval that checks for newt sites that haven't
* and marks them as offline * pinged recently and marks them as offline. For backward compatibility,
* a site is only marked offline when there is no active WebSocket connection
* either — so older newt versions that don't send pings but remain connected
* continue to be treated as online.
*/ */
// export const startNewtOfflineChecker = (): void => { export const startNewtOfflineChecker = (): void => {
// if (offlineCheckerInterval) { if (offlineCheckerInterval) {
// return; // Already running return; // Already running
// } }
// offlineCheckerInterval = setInterval(async () => { offlineCheckerInterval = setInterval(async () => {
// try { try {
// const twoMinutesAgo = Math.floor( const twoMinutesAgo = Math.floor(
// (Date.now() - OFFLINE_THRESHOLD_MS) / 1000 (Date.now() - OFFLINE_THRESHOLD_MS) / 1000
// ); );
// // TODO: WE NEED TO MAKE SURE THIS WORKS WITH DISTRIBUTED NODES ALL DOING THE SAME THING // Find all online newt-type sites that haven't pinged recently
// (or have never pinged at all). Join newts to obtain the newtId
// needed for the WebSocket connection check.
const staleSites = await db
.select({
siteId: sites.siteId,
newtId: newts.newtId,
lastPing: sites.lastPing
})
.from(sites)
.innerJoin(newts, eq(newts.siteId, sites.siteId))
.where(
and(
eq(sites.online, true),
eq(sites.type, "newt"),
or(
lt(sites.lastPing, twoMinutesAgo),
isNull(sites.lastPing)
)
)
);
// // Find clients that haven't pinged in the last 2 minutes and mark them as offline for (const staleSite of staleSites) {
// const offlineClients = await db // Backward-compatibility check: if the newt still has an
// .update(clients) // active WebSocket connection (older clients that don't send
// .set({ online: false }) // pings), keep the site online.
// .where( const isConnected = await hasActiveConnections(staleSite.newtId);
// and( if (isConnected) {
// eq(clients.online, true), logger.debug(
// or( `Newt ${staleSite.newtId} has not pinged recently but is still connected via WebSocket — keeping site ${staleSite.siteId} online`
// lt(clients.lastPing, twoMinutesAgo), );
// isNull(clients.lastPing) continue;
// ) }
// )
// )
// .returning();
// for (const offlineClient of offlineClients) { logger.info(
// logger.info( `Marking site ${staleSite.siteId} offline: newt ${staleSite.newtId} has no recent ping and no active WebSocket connection`
// `Kicking offline newt client ${offlineClient.clientId} due to inactivity` );
// );
// if (!offlineClient.newtId) { await db
// logger.warn( .update(sites)
// `Offline client ${offlineClient.clientId} has no newtId, cannot disconnect` .set({ online: false })
// ); .where(eq(sites.siteId, staleSite.siteId));
// continue; }
// } } catch (error) {
logger.error("Error in newt offline checker interval", { error });
}
}, OFFLINE_CHECK_INTERVAL);
// // Send a disconnect message to the client if connected logger.debug("Started newt offline checker interval");
// try { };
// await sendTerminateClient(
// offlineClient.clientId,
// offlineClient.newtId
// ); // terminate first
// // wait a moment to ensure the message is sent
// await new Promise((resolve) => setTimeout(resolve, 1000));
// await disconnectClient(offlineClient.newtId);
// } catch (error) {
// logger.error(
// `Error sending disconnect to offline newt ${offlineClient.clientId}`,
// { error }
// );
// }
// }
// } catch (error) {
// logger.error("Error in offline checker interval", { error });
// }
// }, OFFLINE_CHECK_INTERVAL);
// logger.debug("Started offline checker interval");
// };
/** /**
* Stops the background interval that checks for offline clients * Stops the background interval that checks for offline newt sites.
*/ */
// export const stopNewtOfflineChecker = (): void => { export const stopNewtOfflineChecker = (): void => {
// if (offlineCheckerInterval) { if (offlineCheckerInterval) {
// clearInterval(offlineCheckerInterval); clearInterval(offlineCheckerInterval);
// offlineCheckerInterval = null; offlineCheckerInterval = null;
// logger.info("Stopped offline checker interval"); logger.info("Stopped newt offline checker interval");
// } }
// }; };
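The staleness rule the checker applies (`lastPing` older than the threshold, or never set) can be isolated into a small pure function. This is an illustrative sketch, not code from the repo; only the threshold constant is copied from the diff:

```typescript
// Staleness rule from the offline checker, in isolation.
// Timestamps are stored in seconds; "now" arrives in milliseconds.
const OFFLINE_THRESHOLD_MS = 2 * 60 * 1000; // 2 minutes

function isStale(lastPing: number | null, nowMs: number): boolean {
    const cutoff = Math.floor((nowMs - OFFLINE_THRESHOLD_MS) / 1000);
    // A site that has never pinged is treated as stale; the WebSocket
    // connection check above is what keeps older clients online.
    return lastPing === null || lastPing < cutoff;
}
```

Note that staleness alone does not mark a site offline: the checker additionally requires `hasActiveConnections` to return false, which is the backward-compatibility escape hatch for newts that never ping.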
/** /**
* Handles ping messages from clients and responds with pong * Handles ping messages from newt clients.
*
* On each ping:
* - Marks the associated site as online.
* - Records the current timestamp as the newt's last-ping time.
* - Triggers a config sync if the newt is running an outdated config version.
* - Responds with a pong message.
*/ */
export const handleNewtPingMessage: MessageHandler = async (context) => { export const handleNewtPingMessage: MessageHandler = async (context) => {
const { message, client: c, sendToClient } = context; const { message, client: c } = context;
const newt = c as Newt; const newt = c as Newt;
if (!newt) { if (!newt) {
@@ -112,15 +114,31 @@ export const handleNewtPingMessage: MessageHandler = async (context) => {
return; return;
} }
// get the version try {
// Mark the site as online and record the ping timestamp.
await db
.update(sites)
.set({
online: true,
lastPing: Math.floor(Date.now() / 1000)
})
.where(eq(sites.siteId, newt.siteId));
} catch (error) {
logger.error("Error updating online state on newt ping", { error });
}
// Check config version and sync if stale.
const configVersion = await getClientConfigVersion(newt.newtId); const configVersion = await getClientConfigVersion(newt.newtId);
if (message.configVersion && configVersion != null && configVersion != message.configVersion) { if (
message.configVersion != null &&
configVersion != null &&
configVersion !== message.configVersion
) {
logger.warn( logger.warn(
`Newt ping with outdated config version: ${message.configVersion} (current: ${configVersion})` `Newt ping with outdated config version: ${message.configVersion} (current: ${configVersion})`
); );
// get the site
const [site] = await db const [site] = await db
.select() .select()
.from(sites) .from(sites)
@@ -137,19 +155,6 @@ export const handleNewtPingMessage: MessageHandler = async (context) => {
await sendNewtSyncMessage(newt, site); await sendNewtSyncMessage(newt, site);
} }
// try {
// // Update the client's last ping timestamp
// await db
// .update(clients)
// .set({
// lastPing: Math.floor(Date.now() / 1000),
// online: true
// })
// .where(eq(clients.clientId, newt.clientId));
// } catch (error) {
// logger.error("Error handling ping message", { error });
// }
return { return {
message: { message: {
type: "pong", type: "pong",

View File

@@ -5,9 +5,7 @@ import { eq } from "drizzle-orm";
import { addPeer, deletePeer } from "../gerbil/peers"; import { addPeer, deletePeer } from "../gerbil/peers";
import logger from "@server/logger"; import logger from "@server/logger";
import config from "@server/lib/config"; import config from "@server/lib/config";
import { import { findNextAvailableCidr } from "@server/lib/ip";
findNextAvailableCidr,
} from "@server/lib/ip";
import { import {
selectBestExitNode, selectBestExitNode,
verifyExitNodeOrgAccess verifyExitNodeOrgAccess
@@ -15,6 +13,7 @@ import {
import { fetchContainers } from "./dockerSocket"; import { fetchContainers } from "./dockerSocket";
import { lockManager } from "#dynamic/lib/lock"; import { lockManager } from "#dynamic/lib/lock";
import { buildTargetConfigurationForNewtClient } from "./buildConfiguration"; import { buildTargetConfigurationForNewtClient } from "./buildConfiguration";
import { canCompress } from "@server/lib/clientVersionChecks";
export type ExitNodePingResult = { export type ExitNodePingResult = {
exitNodeId: number; exitNodeId: number;
@@ -215,6 +214,9 @@ export const handleNewtRegisterMessage: MessageHandler = async (context) => {
healthCheckTargets: validHealthCheckTargets healthCheckTargets: validHealthCheckTargets
} }
}, },
options: {
compress: canCompress(newt.version, "newt")
},
broadcast: false, // Send to all clients broadcast: false, // Send to all clients
excludeSender: false // Include sender in broadcast excludeSender: false // Include sender in broadcast
}; };
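The `canCompress(newt.version, "newt")` gate added here comes from `@server/lib/clientVersionChecks`, whose body is not shown in this diff. A plausible sketch of such a version gate follows; the minimum version, the function name, and the parsing rules are all assumptions for illustration, not the repo's actual implementation:

```typescript
// Hypothetical version gate; the real canCompress may differ.
// Minimum versions per client type are assumed, not taken from the repo.
const MIN_COMPRESS_VERSION: Record<string, [number, number, number]> = {
    newt: [1, 17, 0] // illustrative minimum only
};

function canCompressSketch(version: string | null, clientType: string): boolean {
    if (!version) return false; // unknown version: never compress
    const min = MIN_COMPRESS_VERSION[clientType];
    if (!min) return false; // unknown client type: never compress
    const parts = version.replace(/^v/, "").split(".").map(Number);
    if (parts.length < 3 || parts.some(Number.isNaN)) return false;
    const [maj, minr, pat] = parts;
    const [M, m, p] = min;
    if (maj !== M) return maj > M;
    if (minr !== m) return minr > m;
    return pat >= p;
}
```

Defaulting to `false` on any unparseable input is the safe direction here: an old client that cannot decompress must never receive a compressed payload.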

View File

@@ -10,10 +10,21 @@ interface PeerBandwidth {
bytesOut: number; bytesOut: number;
} }
interface BandwidthAccumulator {
bytesIn: number;
bytesOut: number;
}
// Retry configuration for deadlock handling // Retry configuration for deadlock handling
const MAX_RETRIES = 3; const MAX_RETRIES = 3;
const BASE_DELAY_MS = 50; const BASE_DELAY_MS = 50;
// How often to flush accumulated bandwidth data to the database
const FLUSH_INTERVAL_MS = 120_000; // 120 seconds
// In-memory accumulator: publicKey -> { bytesIn, bytesOut }
let accumulator = new Map<string, BandwidthAccumulator>();
/** /**
* Check if an error is a deadlock error * Check if an error is a deadlock error
*/ */
@@ -53,6 +64,90 @@ async function withDeadlockRetry<T>(
} }
} }
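The retry constants above feed `withDeadlockRetry`, whose body is elided from this hunk. A hedged sketch of the usual shape of such a helper (exponential backoff, immediate rethrow for non-deadlock errors); the deadlock-detection logic and naming are assumptions, not the repo's code:

```typescript
const MAX_RETRIES = 3;
const BASE_DELAY_MS = 50;

// Illustrative check; real code would inspect driver-specific error codes.
function isDeadlock(err: unknown): boolean {
    return err instanceof Error && /deadlock/i.test(err.message);
}

async function withDeadlockRetrySketch<T>(
    fn: () => Promise<T>,
    label: string // used for logging in the real helper
): Promise<T> {
    for (let attempt = 1; ; attempt++) {
        try {
            return await fn();
        } catch (err) {
            // Only deadlocks are retried, and only up to MAX_RETRIES.
            if (!isDeadlock(err) || attempt >= MAX_RETRIES) throw err;
            const delay = BASE_DELAY_MS * 2 ** (attempt - 1); // 50, 100, ...
            await new Promise((resolve) => setTimeout(resolve, delay));
        }
    }
}
```

Combined with the sorted-key iteration used by the callers, this keeps transient lock conflicts between concurrent writers from surfacing as errors.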
/**
* Flush all accumulated bandwidth data to the database.
*
* Swaps out the accumulator before writing so that any bandwidth messages
* received during the flush are captured in the new accumulator rather than
* being lost or causing contention. Entries that fail to write are re-queued
* back into the accumulator so they will be retried on the next flush.
*
* This function is exported so that the application's graceful-shutdown
* cleanup handler can call it before the process exits.
*/
export async function flushBandwidthToDb(): Promise<void> {
if (accumulator.size === 0) {
return;
}
// Atomically swap out the accumulator so new data keeps flowing in
// while we write the snapshot to the database.
const snapshot = accumulator;
accumulator = new Map<string, BandwidthAccumulator>();
const currentTime = new Date().toISOString();
// Sort by publicKey for consistent lock ordering across concurrent
// writers — this is the same deadlock-prevention strategy used in the
// original per-message implementation.
const sortedEntries = [...snapshot.entries()].sort(([a], [b]) =>
a.localeCompare(b)
);
logger.debug(
`Flushing accumulated bandwidth data for ${sortedEntries.length} client(s) to the database`
);
for (const [publicKey, { bytesIn, bytesOut }] of sortedEntries) {
try {
await withDeadlockRetry(async () => {
// Use atomic SQL increment to avoid the SELECT-then-UPDATE
// anti-pattern and the races it would introduce.
await db
.update(clients)
.set({
// Note: bytesIn from peer goes to megabytesOut (data
// sent to client) and bytesOut from peer goes to
// megabytesIn (data received from client).
megabytesOut: sql`COALESCE(${clients.megabytesOut}, 0) + ${bytesIn}`,
megabytesIn: sql`COALESCE(${clients.megabytesIn}, 0) + ${bytesOut}`,
lastBandwidthUpdate: currentTime
})
.where(eq(clients.pubKey, publicKey));
}, `flush bandwidth for client ${publicKey}`);
} catch (error) {
logger.error(
`Failed to flush bandwidth for client ${publicKey}:`,
error
);
// Re-queue the failed entry so it is retried on the next flush
// rather than silently dropped.
const existing = accumulator.get(publicKey);
if (existing) {
existing.bytesIn += bytesIn;
existing.bytesOut += bytesOut;
} else {
accumulator.set(publicKey, { bytesIn, bytesOut });
}
}
}
}
const flushTimer = setInterval(async () => {
try {
await flushBandwidthToDb();
} catch (error) {
logger.error("Unexpected error during periodic bandwidth flush:", error);
}
}, FLUSH_INTERVAL_MS);
// Calling unref() means this timer will not keep the Node.js event loop alive
// on its own — the process can still exit normally when there is no other work
// left. The graceful-shutdown path (see server/cleanup.ts) will call
// flushBandwidthToDb() explicitly before process.exit(), so no data is lost.
flushTimer.unref();
export const handleReceiveBandwidthMessage: MessageHandler = async ( export const handleReceiveBandwidthMessage: MessageHandler = async (
context context
) => { ) => {
@@ -69,40 +164,21 @@ export const handleReceiveBandwidthMessage: MessageHandler = async (
throw new Error("Invalid bandwidth data"); throw new Error("Invalid bandwidth data");
} }
// Sort bandwidth data by publicKey to ensure consistent lock ordering across all instances // Accumulate the incoming data in memory; the periodic timer (and the
// This is critical for preventing deadlocks when multiple instances update the same clients // shutdown hook) will take care of writing it to the database.
const sortedBandwidthData = [...bandwidthData].sort((a, b) => for (const { publicKey, bytesIn, bytesOut } of bandwidthData) {
a.publicKey.localeCompare(b.publicKey) // Skip peers that haven't transferred any data — writing zeros to the
); // database would be a no-op anyway.
if (bytesIn <= 0 && bytesOut <= 0) {
continue;
}
const currentTime = new Date().toISOString(); const existing = accumulator.get(publicKey);
if (existing) {
// Update each client individually with retry logic existing.bytesIn += bytesIn;
// This reduces transaction scope and allows retries per-client existing.bytesOut += bytesOut;
for (const peer of sortedBandwidthData) { } else {
const { publicKey, bytesIn, bytesOut } = peer; accumulator.set(publicKey, { bytesIn, bytesOut });
try {
await withDeadlockRetry(async () => {
// Use atomic SQL increment to avoid SELECT then UPDATE pattern
// This eliminates the need to read the current value first
await db
.update(clients)
.set({
// Note: bytesIn from peer goes to megabytesOut (data sent to client)
// and bytesOut from peer goes to megabytesIn (data received from client)
megabytesOut: sql`COALESCE(${clients.megabytesOut}, 0) + ${bytesIn}`,
megabytesIn: sql`COALESCE(${clients.megabytesIn}, 0) + ${bytesOut}`,
lastBandwidthUpdate: currentTime
})
.where(eq(clients.pubKey, publicKey));
}, `update client bandwidth ${publicKey}`);
} catch (error) {
logger.error(
`Failed to update bandwidth for client ${publicKey}:`,
error
);
// Continue with other clients even if one fails
} }
} }
}; };
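Taken together, the accumulate-then-flush pattern introduced in this file can be sketched in isolation. The `Totals` type and the `write` callback below are illustrative stand-ins for the drizzle update; they are not names from the repo:

```typescript
type Totals = { bytesIn: number; bytesOut: number };

let acc = new Map<string, Totals>();

function record(key: string, bytesIn: number, bytesOut: number): void {
    if (bytesIn <= 0 && bytesOut <= 0) return; // skip idle peers
    const cur = acc.get(key);
    if (cur) {
        cur.bytesIn += bytesIn;
        cur.bytesOut += bytesOut;
    } else {
        acc.set(key, { bytesIn, bytesOut });
    }
}

async function flush(
    write: (key: string, t: Totals) => Promise<void>
): Promise<void> {
    // Swap the map first so new data keeps accumulating during the write.
    const snapshot = acc;
    acc = new Map();
    // Sorted iteration gives consistent lock ordering across writers.
    for (const [key, t] of [...snapshot].sort(([a], [b]) => a.localeCompare(b))) {
        try {
            await write(key, t);
        } catch {
            record(key, t.bytesIn, t.bytesOut); // re-queue on failure
        }
    }
}
```

Swapping the map before writing means a failed flush only delays data until the next interval; it is never dropped, which is the property the graceful-shutdown hook relies on.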

Some files were not shown because too many files have changed in this diff.