-Pangolin is a self-hosted tunneled reverse proxy server with identity and context aware access control, designed to easily expose and protect applications running anywhere. Pangolin acts as a central hub and connects isolated networks — even those behind restrictive firewalls — through encrypted tunnels, enabling easy access to remote services without opening ports or requiring a VPN.
+Pangolin is an open-source, identity-based remote access platform built on WireGuard that enables secure, seamless connectivity to private and public resources. Pangolin combines reverse proxy and VPN capabilities into one platform, providing browser-based access to web applications and client-based access to any private resources, all with zero-trust security and granular access control.
## Installation
@@ -60,14 +60,20 @@ Pangolin is a self-hosted tunneled reverse proxy server with identity and contex
## Key Features
-Pangolin packages everything you need for seamless application access and exposure into one cohesive platform.
-
 | | |
 |---|---|
-| **Manage applications in one place** <br /> Pangolin provides a unified dashboard where you can monitor, configure, and secure all of your services regardless of where they are hosted. | |
-| **Reverse proxy across networks anywhere** <br /> Route traffic via tunnels to any private network. Pangolin works like a reverse proxy that spans multiple networks and handles routing, load balancing, health checking, and more to the right services on the other end. | |
-| **Enforce identity and context aware rules** <br /> Protect your applications with identity and context aware rules such as SSO, OIDC, PIN, password, temporary share links, geolocation, IP, and more. | |
-| **Quickly connect Pangolin sites** <br /> Pangolin's lightweight [Newt](https://github.com/fosrl/newt) client runs in userspace and can run anywhere. Use it as a site connector to route traffic to backends across all of your environments. | |
+| **Connect remote networks with sites** <br /> Pangolin's lightweight site connectors create secure tunnels from remote networks without requiring public IP addresses or open ports. Sites make any network anywhere available for authorized access. | |
+| **Browser-based reverse proxy access** <br /> Expose web applications through identity and context-aware tunneled reverse proxies. Pangolin handles routing, load balancing, health checking, and automatic SSL certificates without exposing your network directly to the internet. Users access applications through any web browser with authentication and granular access control. | |
+| **Client-based private resource access** <br /> Access private resources like SSH servers, databases, RDP, and entire network ranges through Pangolin clients. Intelligent NAT traversal enables connections even through restrictive firewalls, while DNS aliases provide friendly names and fast connections to resources across all your sites. | |
+| **Zero-trust granular access** <br /> Grant users access to specific resources, not entire networks. Unlike traditional VPNs that expose full network access, Pangolin's zero-trust model ensures users can only reach the applications and services you explicitly define, reducing security risk and attack surface. | |
+
+## Download Clients
+
+Download the Pangolin client for your platform:
+
+- [Mac](https://pangolin.net/downloads/mac)
+- [Windows](https://pangolin.net/downloads/windows)
+- [Linux](https://pangolin.net/downloads/linux)
## Get Started
diff --git a/components.json b/components.json
index 97d8c8c0..13f7efef 100644
--- a/components.json
+++ b/components.json
@@ -17,4 +17,4 @@
"lib": "@/lib",
"hooks": "@/hooks"
}
-}
\ No newline at end of file
+}
diff --git a/drizzle.pg.config.ts b/drizzle.pg.config.ts
index febd5f45..ba4ca8fe 100644
--- a/drizzle.pg.config.ts
+++ b/drizzle.pg.config.ts
@@ -1,9 +1,7 @@
import { defineConfig } from "drizzle-kit";
import path from "path";
-const schema = [
- path.join("server", "db", "pg", "schema"),
-];
+const schema = [path.join("server", "db", "pg", "schema")];
export default defineConfig({
dialect: "postgresql",
diff --git a/drizzle.sqlite.config.ts b/drizzle.sqlite.config.ts
index 4912c256..d8344f94 100644
--- a/drizzle.sqlite.config.ts
+++ b/drizzle.sqlite.config.ts
@@ -2,9 +2,7 @@ import { APP_PATH } from "@server/lib/consts";
import { defineConfig } from "drizzle-kit";
import path from "path";
-const schema = [
- path.join("server", "db", "sqlite", "schema"),
-];
+const schema = [path.join("server", "db", "sqlite", "schema")];
export default defineConfig({
dialect: "sqlite",
diff --git a/esbuild.mjs b/esbuild.mjs
index 7f67fe81..0157c34a 100644
--- a/esbuild.mjs
+++ b/esbuild.mjs
@@ -24,20 +24,20 @@ const argv = yargs(hideBin(process.argv))
alias: "e",
describe: "Entry point file",
type: "string",
- demandOption: true,
+ demandOption: true
})
.option("out", {
alias: "o",
describe: "Output file path",
type: "string",
- demandOption: true,
+ demandOption: true
})
.option("build", {
alias: "b",
describe: "Build type (oss, saas, enterprise)",
type: "string",
choices: ["oss", "saas", "enterprise"],
- default: "oss",
+ default: "oss"
})
.help()
.alias("help", "h").argv;
@@ -66,7 +66,9 @@ function privateImportGuardPlugin() {
// Check if the importing file is NOT in server/private
const normalizedImporter = path.normalize(importingFile);
- const isInServerPrivate = normalizedImporter.includes(path.normalize("server/private"));
+ const isInServerPrivate = normalizedImporter.includes(
+ path.normalize("server/private")
+ );
if (!isInServerPrivate) {
const violation = {
@@ -79,8 +81,8 @@ function privateImportGuardPlugin() {
console.log(`PRIVATE IMPORT VIOLATION:`);
console.log(` File: ${importingFile}`);
console.log(` Import: ${args.path}`);
- console.log(` Resolve dir: ${args.resolveDir || 'N/A'}`);
- console.log('');
+ console.log(` Resolve dir: ${args.resolveDir || "N/A"}`);
+ console.log("");
}
// Return null to let the default resolver handle it
@@ -89,16 +91,20 @@ function privateImportGuardPlugin() {
build.onEnd((result) => {
if (violations.length > 0) {
- console.log(`\nSUMMARY: Found ${violations.length} private import violation(s):`);
+ console.log(
+ `\nSUMMARY: Found ${violations.length} private import violation(s):`
+ );
violations.forEach((v, i) => {
- console.log(` ${i + 1}. ${path.relative(process.cwd(), v.file)} imports ${v.importPath}`);
+ console.log(
+ ` ${i + 1}. ${path.relative(process.cwd(), v.file)} imports ${v.importPath}`
+ );
});
- console.log('');
+ console.log("");
result.errors.push({
text: `Private import violations detected: ${violations.length} violation(s) found`,
location: null,
- notes: violations.map(v => ({
+ notes: violations.map((v) => ({
text: `${path.relative(process.cwd(), v.file)} imports ${v.importPath}`,
location: null
}))
@@ -121,7 +127,9 @@ function dynamicImportGuardPlugin() {
// Check if the importing file is NOT in server/private
const normalizedImporter = path.normalize(importingFile);
- const isInServerPrivate = normalizedImporter.includes(path.normalize("server/private"));
+ const isInServerPrivate = normalizedImporter.includes(
+ path.normalize("server/private")
+ );
if (isInServerPrivate) {
const violation = {
@@ -134,8 +142,8 @@ function dynamicImportGuardPlugin() {
console.log(`DYNAMIC IMPORT VIOLATION:`);
console.log(` File: ${importingFile}`);
console.log(` Import: ${args.path}`);
- console.log(` Resolve dir: ${args.resolveDir || 'N/A'}`);
- console.log('');
+ console.log(` Resolve dir: ${args.resolveDir || "N/A"}`);
+ console.log("");
}
// Return null to let the default resolver handle it
@@ -144,16 +152,20 @@ function dynamicImportGuardPlugin() {
build.onEnd((result) => {
if (violations.length > 0) {
- console.log(`\nSUMMARY: Found ${violations.length} dynamic import violation(s):`);
+ console.log(
+ `\nSUMMARY: Found ${violations.length} dynamic import violation(s):`
+ );
violations.forEach((v, i) => {
- console.log(` ${i + 1}. ${path.relative(process.cwd(), v.file)} imports ${v.importPath}`);
+ console.log(
+ ` ${i + 1}. ${path.relative(process.cwd(), v.file)} imports ${v.importPath}`
+ );
});
- console.log('');
+ console.log("");
result.errors.push({
text: `Dynamic import violations detected: ${violations.length} violation(s) found`,
location: null,
- notes: violations.map(v => ({
+ notes: violations.map((v) => ({
text: `${path.relative(process.cwd(), v.file)} imports ${v.importPath}`,
location: null
}))
@@ -172,21 +184,28 @@ function dynamicImportSwitcherPlugin(buildValue) {
const switches = [];
build.onStart(() => {
- console.log(`Dynamic import switcher using build type: ${buildValue}`);
+ console.log(
+ `Dynamic import switcher using build type: ${buildValue}`
+ );
});
build.onResolve({ filter: /^#dynamic\// }, (args) => {
// Extract the path after #dynamic/
- const dynamicPath = args.path.replace(/^#dynamic\//, '');
+ const dynamicPath = args.path.replace(/^#dynamic\//, "");
// Determine the replacement based on build type
let replacement;
if (buildValue === "oss") {
replacement = `#open/${dynamicPath}`;
- } else if (buildValue === "saas" || buildValue === "enterprise") {
+ } else if (
+ buildValue === "saas" ||
+ buildValue === "enterprise"
+ ) {
                replacement = `#closed/${dynamicPath}`; // We use #closed here so that the route guards don't complain after it's been changed, but this is the same as #private
} else {
- console.warn(`Unknown build type '${buildValue}', defaulting to #open/`);
+ console.warn(
+ `Unknown build type '${buildValue}', defaulting to #open/`
+ );
replacement = `#open/${dynamicPath}`;
}
@@ -201,8 +220,10 @@ function dynamicImportSwitcherPlugin(buildValue) {
console.log(`DYNAMIC IMPORT SWITCH:`);
console.log(` File: ${args.importer}`);
console.log(` Original: ${args.path}`);
- console.log(` Switched to: ${replacement} (build: ${buildValue})`);
- console.log('');
+ console.log(
+ ` Switched to: ${replacement} (build: ${buildValue})`
+ );
+ console.log("");
// Rewrite the import path and let the normal resolution continue
return build.resolve(replacement, {
@@ -215,12 +236,18 @@ function dynamicImportSwitcherPlugin(buildValue) {
build.onEnd((result) => {
if (switches.length > 0) {
- console.log(`\nDYNAMIC IMPORT SUMMARY: Switched ${switches.length} import(s) for build type '${buildValue}':`);
+ console.log(
+ `\nDYNAMIC IMPORT SUMMARY: Switched ${switches.length} import(s) for build type '${buildValue}':`
+ );
switches.forEach((s, i) => {
- console.log(` ${i + 1}. ${path.relative(process.cwd(), s.file)}`);
- console.log(` ${s.originalPath} → ${s.replacementPath}`);
+ console.log(
+ ` ${i + 1}. ${path.relative(process.cwd(), s.file)}`
+ );
+ console.log(
+ ` ${s.originalPath} → ${s.replacementPath}`
+ );
});
- console.log('');
+ console.log("");
}
});
}
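
`dynamicImportSwitcherPlugin`, whose logging is reflowed above, rewrites `#dynamic/` specifiers per build type. Its decision table can be sketched as a pure function (the `license` module in the example is hypothetical):

```javascript
// Sketch of the path rewrite in dynamicImportSwitcherPlugin:
// "#dynamic/..." resolves to "#closed/..." for saas/enterprise builds
// and to "#open/..." otherwise (oss, or an unknown build type).
function switchDynamicImport(importPath, buildValue) {
  const dynamicPath = importPath.replace(/^#dynamic\//, "");
  if (buildValue === "saas" || buildValue === "enterprise") {
    return `#closed/${dynamicPath}`;
  }
  return `#open/${dynamicPath}`; // oss and unknown build types fall back to #open
}

console.log(switchDynamicImport("#dynamic/license", "enterprise")); // #closed/license
console.log(switchDynamicImport("#dynamic/license", "oss")); // #open/license
```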
@@ -235,7 +262,7 @@ esbuild
format: "esm",
minify: false,
banner: {
- js: banner,
+ js: banner
},
platform: "node",
external: ["body-parser"],
@@ -244,20 +271,22 @@ esbuild
dynamicImportGuardPlugin(),
dynamicImportSwitcherPlugin(argv.build),
nodeExternalsPlugin({
- packagePath: getPackagePaths(),
- }),
+ packagePath: getPackagePaths()
+ })
],
sourcemap: "inline",
- target: "node22",
+ target: "node22"
})
.then((result) => {
// Check if there were any errors in the build result
if (result.errors && result.errors.length > 0) {
- console.error(`Build failed with ${result.errors.length} error(s):`);
+ console.error(
+ `Build failed with ${result.errors.length} error(s):`
+ );
result.errors.forEach((error, i) => {
console.error(`${i + 1}. ${error.text}`);
if (error.notes) {
- error.notes.forEach(note => {
+ error.notes.forEach((note) => {
console.error(` - ${note.text}`);
});
}
diff --git a/eslint.config.js b/eslint.config.js
index dfc194bc..ae921d45 100644
--- a/eslint.config.js
+++ b/eslint.config.js
@@ -1,19 +1,19 @@
-import tseslint from 'typescript-eslint';
+import tseslint from "typescript-eslint";
export default tseslint.config({
- files: ["**/*.{ts,tsx,js,jsx}"],
- languageOptions: {
- parser: tseslint.parser,
- parserOptions: {
- ecmaVersion: "latest",
- sourceType: "module",
- ecmaFeatures: {
- jsx: true
- }
+ files: ["**/*.{ts,tsx,js,jsx}"],
+ languageOptions: {
+ parser: tseslint.parser,
+ parserOptions: {
+ ecmaVersion: "latest",
+ sourceType: "module",
+ ecmaFeatures: {
+ jsx: true
+ }
+ }
+ },
+ rules: {
+ semi: "error",
+ "prefer-const": "warn"
}
- },
- rules: {
- "semi": "error",
- "prefer-const": "warn"
- }
-});
\ No newline at end of file
+});
diff --git a/install/containers.go b/install/containers.go
index 9993e117..464186c2 100644
--- a/install/containers.go
+++ b/install/containers.go
@@ -73,7 +73,7 @@ func installDocker() error {
case strings.Contains(osRelease, "ID=ubuntu"):
installCmd = exec.Command("bash", "-c", fmt.Sprintf(`
apt-get update &&
- apt-get install -y apt-transport-https ca-certificates curl &&
+ apt-get install -y apt-transport-https ca-certificates curl gpg &&
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg &&
echo "deb [arch=%s signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" > /etc/apt/sources.list.d/docker.list &&
apt-get update &&
@@ -82,7 +82,7 @@ func installDocker() error {
case strings.Contains(osRelease, "ID=debian"):
installCmd = exec.Command("bash", "-c", fmt.Sprintf(`
apt-get update &&
- apt-get install -y apt-transport-https ca-certificates curl &&
+ apt-get install -y apt-transport-https ca-certificates curl gpg &&
curl -fsSL https://download.docker.com/linux/debian/gpg | gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg &&
echo "deb [arch=%s signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/debian $(lsb_release -cs) stable" > /etc/apt/sources.list.d/docker.list &&
apt-get update &&
diff --git a/messages/de-DE.json b/messages/de-DE.json
index 333c7052..13ab3d11 100644
--- a/messages/de-DE.json
+++ b/messages/de-DE.json
@@ -1043,7 +1043,7 @@
"actionDeleteSite": "Standort löschen",
"actionGetSite": "Standort abrufen",
"actionListSites": "Standorte auflisten",
- "actionApplyBlueprint": "Blaupause anwenden",
+ "actionApplyBlueprint": "Blueprint anwenden",
"setupToken": "Setup-Token",
"setupTokenDescription": "Geben Sie das Setup-Token von der Serverkonsole ein.",
"setupTokenRequired": "Setup-Token ist erforderlich",
@@ -1102,7 +1102,7 @@
"actionDeleteIdpOrg": "IDP-Organisationsrichtlinie löschen",
"actionListIdpOrgs": "IDP-Organisationen auflisten",
"actionUpdateIdpOrg": "IDP-Organisation aktualisieren",
- "actionCreateClient": "Endgerät anlegen",
+ "actionCreateClient": "Client erstellen",
"actionDeleteClient": "Client löschen",
"actionUpdateClient": "Client aktualisieren",
"actionListClients": "Clients auflisten",
@@ -1201,24 +1201,24 @@
"sidebarLogsAnalytics": "Analytik",
"blueprints": "Baupläne",
"blueprintsDescription": "Deklarative Konfigurationen anwenden und vorherige Abläufe anzeigen",
- "blueprintAdd": "Blaupause hinzufügen",
- "blueprintGoBack": "Alle Blaupausen ansehen",
- "blueprintCreate": "Blaupause erstellen",
- "blueprintCreateDescription2": "Folge den Schritten unten, um eine neue Blaupause zu erstellen und anzuwenden",
- "blueprintDetails": "Blaupausendetails",
- "blueprintDetailsDescription": "Siehe das Ergebnis der angewendeten Blaupause und alle aufgetretenen Fehler",
- "blueprintInfo": "Blaupauseninformation",
+ "blueprintAdd": "Blueprint hinzufügen",
+ "blueprintGoBack": "Alle Blueprints ansehen",
+ "blueprintCreate": "Blueprint erstellen",
+ "blueprintCreateDescription2": "Folge den unten aufgeführten Schritten, um einen neuen Blueprint zu erstellen und anzuwenden",
+    "blueprintDetails": "Blueprint-Detailinformationen",
+    "blueprintDetailsDescription": "Siehe das Ergebnis des angewendeten Blueprints und alle aufgetretenen Fehler",
+    "blueprintInfo": "Blueprint-Informationen",
"message": "Nachricht",
"blueprintContentsDescription": "Den YAML-Inhalt definieren, der die Infrastruktur beschreibt",
- "blueprintErrorCreateDescription": "Fehler beim Anwenden der Blaupause",
- "blueprintErrorCreate": "Fehler beim Erstellen der Blaupause",
- "searchBlueprintProgress": "Blaupausen suchen...",
+ "blueprintErrorCreateDescription": "Fehler beim Anwenden des Blueprints",
+ "blueprintErrorCreate": "Fehler beim Erstellen des Blueprints",
+ "searchBlueprintProgress": "Blueprints suchen...",
"appliedAt": "Angewandt am",
"source": "Quelle",
"contents": "Inhalt",
"parsedContents": "Analysierte Inhalte (Nur lesen)",
- "enableDockerSocket": "Docker Blaupause aktivieren",
- "enableDockerSocketDescription": "Aktiviere Docker-Socket-Label-Scraping für Blaupausenbeschriftungen. Der Socket-Pfad muss neu angegeben werden.",
+    "enableDockerSocket": "Docker-Blueprint aktivieren",
+    "enableDockerSocketDescription": "Aktiviere Docker-Socket-Label-Scraping für Blueprint-Beschriftungen. Der Socket-Pfad muss neu angegeben werden.",
"enableDockerSocketLink": "Mehr erfahren",
"viewDockerContainers": "Docker Container anzeigen",
"containersIn": "Container in {siteName}",
@@ -1543,7 +1543,7 @@
"healthCheckPathRequired": "Gesundheits-Check-Pfad ist erforderlich",
"healthCheckMethodRequired": "HTTP-Methode ist erforderlich",
"healthCheckIntervalMin": "Prüfintervall muss mindestens 5 Sekunden betragen",
- "healthCheckTimeoutMin": "Timeout muss mindestens 1 Sekunde betragen",
+ "healthCheckTimeoutMin": "Zeitüberschreitung muss mindestens 1 Sekunde betragen",
"healthCheckRetryMin": "Wiederholungsversuche müssen mindestens 1 betragen",
"httpMethod": "HTTP-Methode",
"selectHttpMethod": "HTTP-Methode auswählen",
diff --git a/messages/en-US.json b/messages/en-US.json
index 3dd1c94e..148db379 100644
--- a/messages/en-US.json
+++ b/messages/en-US.json
@@ -419,7 +419,7 @@
"userErrorExistsDescription": "This user is already a member of the organization.",
"inviteError": "Failed to invite user",
"inviteErrorDescription": "An error occurred while inviting the user",
- "userInvited": "User invited",
+ "userInvited": "User Invited",
"userInvitedDescription": "The user has been successfully invited.",
"userErrorCreate": "Failed to create user",
"userErrorCreateDescription": "An error occurred while creating the user",
@@ -1035,6 +1035,7 @@
"updateOrgUser": "Update Org User",
"createOrgUser": "Create Org User",
"actionUpdateOrg": "Update Organization",
+ "actionRemoveInvitation": "Remove Invitation",
"actionUpdateUser": "Update User",
"actionGetUser": "Get User",
"actionGetOrgUser": "Get Organization User",
@@ -2067,6 +2068,8 @@
"timestamp": "Timestamp",
"accessLogs": "Access Logs",
"exportCsv": "Export CSV",
+ "exportError": "Unknown error when exporting CSV",
+ "exportCsvTooltip": "Within Time Range",
"actorId": "Actor ID",
"allowedByRule": "Allowed by Rule",
"allowedNoAuth": "Allowed No Auth",
@@ -2270,5 +2273,15 @@
"remoteExitNodeRegenerateAndDisconnectWarning": "This will regenerate the credentials and immediately disconnect the remote exit node. The remote exit node will need to be restarted with the new credentials.",
"remoteExitNodeRegenerateCredentialsConfirmation": "Are you sure you want to regenerate the credentials for this remote exit node?",
"remoteExitNodeRegenerateCredentialsWarning": "This will regenerate the credentials. The remote exit node will stay connected until you manually restart it and use the new credentials.",
- "agent": "Agent"
+ "agent": "Agent",
+ "personalUseOnly": "Personal Use Only",
+ "loginPageLicenseWatermark": "This instance is licensed for personal use only.",
+ "instanceIsUnlicensed": "This instance is unlicensed.",
+ "portRestrictions": "Port Restrictions",
+ "allPorts": "All",
+ "custom": "Custom",
+ "allPortsAllowed": "All Ports Allowed",
+ "allPortsBlocked": "All Ports Blocked",
+ "tcpPortsDescription": "Specify which TCP ports are allowed for this resource. Use '*' for all ports, leave empty to block all, or enter a comma-separated list of ports and ranges (e.g., 80,443,8000-9000).",
+ "udpPortsDescription": "Specify which UDP ports are allowed for this resource. Use '*' for all ports, leave empty to block all, or enter a comma-separated list of ports and ranges (e.g., 53,123,500-600)."
}
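
The new `tcpPortsDescription` and `udpPortsDescription` strings define a small port-spec grammar: `*` allows all ports, an empty value blocks all, and a comma-separated list mixes single ports with `lo-hi` ranges. A minimal sketch of a matcher for that grammar — the function is illustrative, not Pangolin's actual implementation:

```javascript
// Hedged sketch of the port-restriction format described by the new locale
// strings: "*" allows everything, "" blocks everything, and entries like
// "80,443,8000-9000" combine single ports and inclusive ranges.
function isPortAllowed(spec, port) {
  const trimmed = spec.trim();
  if (trimmed === "") return false; // empty blocks all ports
  if (trimmed === "*") return true; // wildcard allows all ports
  return trimmed.split(",").some((entry) => {
    const [lo, hi] = entry.trim().split("-").map(Number);
    return hi === undefined ? port === lo : port >= lo && port <= hi;
  });
}

console.log(isPortAllowed("80,443,8000-9000", 8443)); // true (inside 8000-9000)
console.log(isPortAllowed("53,123,500-600", 601)); // false (just past the range)
```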
diff --git a/package-lock.json b/package-lock.json
index cb63d280..b3a18c31 100644
--- a/package-lock.json
+++ b/package-lock.json
@@ -10,7 +10,7 @@
"license": "SEE LICENSE IN LICENSE AND README.md",
"dependencies": {
"@asteasolutions/zod-to-openapi": "8.2.0",
- "@aws-sdk/client-s3": "3.947.0",
+ "@aws-sdk/client-s3": "3.948.0",
"@faker-js/faker": "10.1.0",
"@headlessui/react": "2.2.9",
"@hookform/resolvers": "5.2.2",
@@ -44,7 +44,6 @@
"@tailwindcss/forms": "0.5.10",
"@tanstack/react-query": "5.90.12",
"@tanstack/react-table": "8.21.3",
- "@types/js-yaml": "4.0.9",
"arctic": "3.7.0",
"axios": "1.13.2",
"better-sqlite3": "11.9.1",
@@ -73,32 +72,32 @@
"jmespath": "0.16.0",
"js-yaml": "4.1.1",
"jsonwebtoken": "9.0.3",
- "lucide-react": "0.556.0",
+ "lucide-react": "0.559.0",
"maxmind": "5.0.1",
"moment": "2.30.1",
- "next": "15.5.7",
+ "next": "15.5.9",
"next-intl": "4.5.8",
"next-themes": "0.4.6",
"nextjs-toploader": "3.9.17",
"node-cache": "5.1.2",
"node-fetch": "3.3.2",
"nodemailer": "7.0.11",
- "npm": "11.6.4",
+ "npm": "11.7.0",
"nprogress": "0.2.0",
"oslo": "1.2.1",
"pg": "8.16.3",
"posthog-node": "5.17.2",
"qrcode.react": "4.2.0",
- "react": "19.2.1",
+ "react": "19.2.3",
"react-day-picker": "9.12.0",
- "react-dom": "19.2.1",
+ "react-dom": "19.2.3",
"react-easy-sort": "1.8.0",
"react-hook-form": "7.68.0",
"react-icons": "5.5.0",
"rebuild": "0.1.2",
"recharts": "2.15.4",
"reodotdev": "1.0.0",
- "resend": "6.5.2",
+ "resend": "6.6.0",
"semver": "7.7.3",
"stripe": "20.0.0",
"swagger-ui-express": "5.0.1",
@@ -129,11 +128,12 @@
"@types/express": "5.0.6",
"@types/express-session": "1.18.2",
"@types/jmespath": "0.15.2",
+ "@types/js-yaml": "4.0.9",
"@types/jsonwebtoken": "9.0.10",
"@types/node": "24.10.2",
"@types/nodemailer": "7.0.4",
"@types/nprogress": "0.2.3",
- "@types/pg": "8.15.6",
+ "@types/pg": "8.16.0",
"@types/react": "19.2.7",
"@types/react-dom": "19.2.3",
"@types/semver": "7.7.1",
@@ -147,7 +147,7 @@
"esbuild-node-externals": "1.20.1",
"postcss": "8.5.6",
"prettier": "3.7.4",
- "react-email": "5.0.6",
+ "react-email": "5.0.7",
"tailwindcss": "4.1.17",
"tsc-alias": "1.8.16",
"tsx": "4.21.0",
@@ -396,23 +396,23 @@
}
},
"node_modules/@aws-sdk/client-s3": {
- "version": "3.947.0",
- "resolved": "https://registry.npmjs.org/@aws-sdk/client-s3/-/client-s3-3.947.0.tgz",
- "integrity": "sha512-ICgnI8D3ccIX9alsLksPFY2bX5CAIbyB+q19sXJgPhzCJ5kWeQ6LQ5xBmRVT5kccmsVGbbJdhnLXHyiN5LZsWg==",
+ "version": "3.948.0",
+ "resolved": "https://registry.npmjs.org/@aws-sdk/client-s3/-/client-s3-3.948.0.tgz",
+ "integrity": "sha512-uvEjds8aYA9SzhBS8RKDtsDUhNV9VhqKiHTcmvhM7gJO92q0WTn8/QeFTdNyLc6RxpiDyz+uBxS7PcdNiZzqfA==",
"license": "Apache-2.0",
"dependencies": {
"@aws-crypto/sha1-browser": "5.2.0",
"@aws-crypto/sha256-browser": "5.2.0",
"@aws-crypto/sha256-js": "5.2.0",
"@aws-sdk/core": "3.947.0",
- "@aws-sdk/credential-provider-node": "3.947.0",
+ "@aws-sdk/credential-provider-node": "3.948.0",
"@aws-sdk/middleware-bucket-endpoint": "3.936.0",
"@aws-sdk/middleware-expect-continue": "3.936.0",
"@aws-sdk/middleware-flexible-checksums": "3.947.0",
"@aws-sdk/middleware-host-header": "3.936.0",
"@aws-sdk/middleware-location-constraint": "3.936.0",
"@aws-sdk/middleware-logger": "3.936.0",
- "@aws-sdk/middleware-recursion-detection": "3.936.0",
+ "@aws-sdk/middleware-recursion-detection": "3.948.0",
"@aws-sdk/middleware-sdk-s3": "3.947.0",
"@aws-sdk/middleware-ssec": "3.936.0",
"@aws-sdk/middleware-user-agent": "3.947.0",
@@ -462,9 +462,9 @@
}
},
"node_modules/@aws-sdk/client-s3/node_modules/@aws-sdk/client-sso": {
- "version": "3.947.0",
- "resolved": "https://registry.npmjs.org/@aws-sdk/client-sso/-/client-sso-3.947.0.tgz",
- "integrity": "sha512-sDwcO8SP290WSErY1S8pz8hTafeghKmmWjNVks86jDK30wx62CfazOTeU70IpWgrUBEygyXk/zPogHsUMbW2Rg==",
+ "version": "3.948.0",
+ "resolved": "https://registry.npmjs.org/@aws-sdk/client-sso/-/client-sso-3.948.0.tgz",
+ "integrity": "sha512-iWjchXy8bIAVBUsKnbfKYXRwhLgRg3EqCQ5FTr3JbR+QR75rZm4ZOYXlvHGztVTmtAZ+PQVA1Y4zO7v7N87C0A==",
"license": "Apache-2.0",
"dependencies": {
"@aws-crypto/sha256-browser": "5.2.0",
@@ -472,7 +472,7 @@
"@aws-sdk/core": "3.947.0",
"@aws-sdk/middleware-host-header": "3.936.0",
"@aws-sdk/middleware-logger": "3.936.0",
- "@aws-sdk/middleware-recursion-detection": "3.936.0",
+ "@aws-sdk/middleware-recursion-detection": "3.948.0",
"@aws-sdk/middleware-user-agent": "3.947.0",
"@aws-sdk/region-config-resolver": "3.936.0",
"@aws-sdk/types": "3.936.0",
@@ -572,19 +572,19 @@
}
},
"node_modules/@aws-sdk/client-s3/node_modules/@aws-sdk/credential-provider-ini": {
- "version": "3.947.0",
- "resolved": "https://registry.npmjs.org/@aws-sdk/credential-provider-ini/-/credential-provider-ini-3.947.0.tgz",
- "integrity": "sha512-A2ZUgJUJZERjSzvCi2NR/hBVbVkTXPD0SdKcR/aITb30XwF+n3T963b+pJl90qhOspoy7h0IVYNR7u5Nr9tJdQ==",
+ "version": "3.948.0",
+ "resolved": "https://registry.npmjs.org/@aws-sdk/credential-provider-ini/-/credential-provider-ini-3.948.0.tgz",
+ "integrity": "sha512-Cl//Qh88e8HBL7yYkJNpF5eq76IO6rq8GsatKcfVBm7RFVxCqYEPSSBtkHdbtNwQdRQqAMXc6E/lEB/CZUDxnA==",
"license": "Apache-2.0",
"dependencies": {
"@aws-sdk/core": "3.947.0",
"@aws-sdk/credential-provider-env": "3.947.0",
"@aws-sdk/credential-provider-http": "3.947.0",
- "@aws-sdk/credential-provider-login": "3.947.0",
+ "@aws-sdk/credential-provider-login": "3.948.0",
"@aws-sdk/credential-provider-process": "3.947.0",
- "@aws-sdk/credential-provider-sso": "3.947.0",
- "@aws-sdk/credential-provider-web-identity": "3.947.0",
- "@aws-sdk/nested-clients": "3.947.0",
+ "@aws-sdk/credential-provider-sso": "3.948.0",
+ "@aws-sdk/credential-provider-web-identity": "3.948.0",
+ "@aws-sdk/nested-clients": "3.948.0",
"@aws-sdk/types": "3.936.0",
"@smithy/credential-provider-imds": "^4.2.5",
"@smithy/property-provider": "^4.2.5",
@@ -597,13 +597,13 @@
}
},
"node_modules/@aws-sdk/client-s3/node_modules/@aws-sdk/credential-provider-login": {
- "version": "3.947.0",
- "resolved": "https://registry.npmjs.org/@aws-sdk/credential-provider-login/-/credential-provider-login-3.947.0.tgz",
- "integrity": "sha512-u7M3hazcB7aJiVwosNdJRbIJDzbwQ861NTtl6S0HmvWpixaVb7iyhJZWg8/plyUznboZGBm7JVEdxtxv3u0bTA==",
+ "version": "3.948.0",
+ "resolved": "https://registry.npmjs.org/@aws-sdk/credential-provider-login/-/credential-provider-login-3.948.0.tgz",
+ "integrity": "sha512-gcKO2b6eeTuZGp3Vvgr/9OxajMrD3W+FZ2FCyJox363ZgMoYJsyNid1vuZrEuAGkx0jvveLXfwiVS0UXyPkgtw==",
"license": "Apache-2.0",
"dependencies": {
"@aws-sdk/core": "3.947.0",
- "@aws-sdk/nested-clients": "3.947.0",
+ "@aws-sdk/nested-clients": "3.948.0",
"@aws-sdk/types": "3.936.0",
"@smithy/property-provider": "^4.2.5",
"@smithy/protocol-http": "^5.3.5",
@@ -616,17 +616,17 @@
}
},
"node_modules/@aws-sdk/client-s3/node_modules/@aws-sdk/credential-provider-node": {
- "version": "3.947.0",
- "resolved": "https://registry.npmjs.org/@aws-sdk/credential-provider-node/-/credential-provider-node-3.947.0.tgz",
- "integrity": "sha512-S0Zqebr71KyrT6J4uYPhwV65g4V5uDPHnd7dt2W34FcyPu+hVC7Hx4MFmsPyVLeT5cMCkkZvmY3kAoEzgUPJJg==",
+ "version": "3.948.0",
+ "resolved": "https://registry.npmjs.org/@aws-sdk/credential-provider-node/-/credential-provider-node-3.948.0.tgz",
+ "integrity": "sha512-ep5vRLnrRdcsP17Ef31sNN4g8Nqk/4JBydcUJuFRbGuyQtrZZrVT81UeH2xhz6d0BK6ejafDB9+ZpBjXuWT5/Q==",
"license": "Apache-2.0",
"dependencies": {
"@aws-sdk/credential-provider-env": "3.947.0",
"@aws-sdk/credential-provider-http": "3.947.0",
- "@aws-sdk/credential-provider-ini": "3.947.0",
+ "@aws-sdk/credential-provider-ini": "3.948.0",
"@aws-sdk/credential-provider-process": "3.947.0",
- "@aws-sdk/credential-provider-sso": "3.947.0",
- "@aws-sdk/credential-provider-web-identity": "3.947.0",
+ "@aws-sdk/credential-provider-sso": "3.948.0",
+ "@aws-sdk/credential-provider-web-identity": "3.948.0",
"@aws-sdk/types": "3.936.0",
"@smithy/credential-provider-imds": "^4.2.5",
"@smithy/property-provider": "^4.2.5",
@@ -656,14 +656,14 @@
}
},
"node_modules/@aws-sdk/client-s3/node_modules/@aws-sdk/credential-provider-sso": {
- "version": "3.947.0",
- "resolved": "https://registry.npmjs.org/@aws-sdk/credential-provider-sso/-/credential-provider-sso-3.947.0.tgz",
- "integrity": "sha512-NktnVHTGaUMaozxycYrepvb3yfFquHTQ53lt6hBEVjYBzK3C4tVz0siUpr+5RMGLSiZ5bLBp2UjJPgwx4i4waQ==",
+ "version": "3.948.0",
+ "resolved": "https://registry.npmjs.org/@aws-sdk/credential-provider-sso/-/credential-provider-sso-3.948.0.tgz",
+ "integrity": "sha512-gqLhX1L+zb/ZDnnYbILQqJ46j735StfWV5PbDjxRzBKS7GzsiYoaf6MyHseEopmWrez5zl5l6aWzig7UpzSeQQ==",
"license": "Apache-2.0",
"dependencies": {
- "@aws-sdk/client-sso": "3.947.0",
+ "@aws-sdk/client-sso": "3.948.0",
"@aws-sdk/core": "3.947.0",
- "@aws-sdk/token-providers": "3.947.0",
+ "@aws-sdk/token-providers": "3.948.0",
"@aws-sdk/types": "3.936.0",
"@smithy/property-provider": "^4.2.5",
"@smithy/shared-ini-file-loader": "^4.4.0",
@@ -675,13 +675,13 @@
}
},
"node_modules/@aws-sdk/client-s3/node_modules/@aws-sdk/credential-provider-web-identity": {
- "version": "3.947.0",
- "resolved": "https://registry.npmjs.org/@aws-sdk/credential-provider-web-identity/-/credential-provider-web-identity-3.947.0.tgz",
- "integrity": "sha512-gokm/e/YHiHLrZgLq4j8tNAn8RJDPbIcglFRKgy08q8DmAqHQ8MXAKW3eS0QjAuRXU9mcMmUo1NrX6FRNBCCPw==",
+ "version": "3.948.0",
+ "resolved": "https://registry.npmjs.org/@aws-sdk/credential-provider-web-identity/-/credential-provider-web-identity-3.948.0.tgz",
+ "integrity": "sha512-MvYQlXVoJyfF3/SmnNzOVEtANRAiJIObEUYYyjTqKZTmcRIVVky0tPuG26XnB8LmTYgtESwJIZJj/Eyyc9WURQ==",
"license": "Apache-2.0",
"dependencies": {
"@aws-sdk/core": "3.947.0",
- "@aws-sdk/nested-clients": "3.947.0",
+ "@aws-sdk/nested-clients": "3.948.0",
"@aws-sdk/types": "3.936.0",
"@smithy/property-provider": "^4.2.5",
"@smithy/shared-ini-file-loader": "^4.4.0",
@@ -692,6 +692,22 @@
"node": ">=18.0.0"
}
},
+ "node_modules/@aws-sdk/client-s3/node_modules/@aws-sdk/middleware-recursion-detection": {
+ "version": "3.948.0",
+ "resolved": "https://registry.npmjs.org/@aws-sdk/middleware-recursion-detection/-/middleware-recursion-detection-3.948.0.tgz",
+ "integrity": "sha512-Qa8Zj+EAqA0VlAVvxpRnpBpIWJI9KUwaioY1vkeNVwXPlNaz9y9zCKVM9iU9OZ5HXpoUg6TnhATAHXHAE8+QsQ==",
+ "license": "Apache-2.0",
+ "dependencies": {
+ "@aws-sdk/types": "3.936.0",
+ "@aws/lambda-invoke-store": "^0.2.2",
+ "@smithy/protocol-http": "^5.3.5",
+ "@smithy/types": "^4.9.0",
+ "tslib": "^2.6.2"
+ },
+ "engines": {
+ "node": ">=18.0.0"
+ }
+ },
"node_modules/@aws-sdk/client-s3/node_modules/@aws-sdk/middleware-sdk-s3": {
"version": "3.947.0",
"resolved": "https://registry.npmjs.org/@aws-sdk/middleware-sdk-s3/-/middleware-sdk-s3-3.947.0.tgz",
@@ -736,9 +752,9 @@
}
},
"node_modules/@aws-sdk/client-s3/node_modules/@aws-sdk/nested-clients": {
- "version": "3.947.0",
- "resolved": "https://registry.npmjs.org/@aws-sdk/nested-clients/-/nested-clients-3.947.0.tgz",
- "integrity": "sha512-DjRJEYNnHUTu9kGPPQDTSXquwSEd6myKR4ssI4FaYLFhdT3ldWpj73yYt807H3tdmhS7vPmdVqchSJnjurUQAw==",
+ "version": "3.948.0",
+ "resolved": "https://registry.npmjs.org/@aws-sdk/nested-clients/-/nested-clients-3.948.0.tgz",
+ "integrity": "sha512-zcbJfBsB6h254o3NuoEkf0+UY1GpE9ioiQdENWv7odo69s8iaGBEQ4BDpsIMqcuiiUXw1uKIVNxCB1gUGYz8lw==",
"license": "Apache-2.0",
"dependencies": {
"@aws-crypto/sha256-browser": "5.2.0",
@@ -746,7 +762,7 @@
"@aws-sdk/core": "3.947.0",
"@aws-sdk/middleware-host-header": "3.936.0",
"@aws-sdk/middleware-logger": "3.936.0",
- "@aws-sdk/middleware-recursion-detection": "3.936.0",
+ "@aws-sdk/middleware-recursion-detection": "3.948.0",
"@aws-sdk/middleware-user-agent": "3.947.0",
"@aws-sdk/region-config-resolver": "3.936.0",
"@aws-sdk/types": "3.936.0",
@@ -802,13 +818,13 @@
}
},
"node_modules/@aws-sdk/client-s3/node_modules/@aws-sdk/token-providers": {
- "version": "3.947.0",
- "resolved": "https://registry.npmjs.org/@aws-sdk/token-providers/-/token-providers-3.947.0.tgz",
- "integrity": "sha512-X/DyB8GuK44rsE89Tn5+s542B3PhGbXQSgV8lvqHDzvicwCt0tWny6790st6CPETrVVV2K3oJMfG5U3/jAmaZA==",
+ "version": "3.948.0",
+ "resolved": "https://registry.npmjs.org/@aws-sdk/token-providers/-/token-providers-3.948.0.tgz",
+ "integrity": "sha512-V487/kM4Teq5dcr1t5K6eoUKuqlGr9FRWL3MIMukMERJXHZvio6kox60FZ/YtciRHRI75u14YUqm2Dzddcu3+A==",
"license": "Apache-2.0",
"dependencies": {
"@aws-sdk/core": "3.947.0",
- "@aws-sdk/nested-clients": "3.947.0",
+ "@aws-sdk/nested-clients": "3.948.0",
"@aws-sdk/types": "3.936.0",
"@smithy/property-provider": "^4.2.5",
"@smithy/shared-ini-file-loader": "^4.4.0",
@@ -1264,6 +1280,7 @@
"version": "3.936.0",
"resolved": "https://registry.npmjs.org/@aws-sdk/middleware-recursion-detection/-/middleware-recursion-detection-3.936.0.tgz",
"integrity": "sha512-l4aGbHpXM45YNgXggIux1HgsCVAvvBoqHPkqLnqMl9QVapfuSTjJHfDYDsx1Xxct6/m7qSMUzanBALhiaGO2fA==",
+ "dev": true,
"license": "Apache-2.0",
"dependencies": {
"@aws-sdk/types": "3.936.0",
@@ -3818,9 +3835,9 @@
}
},
"node_modules/@next/env": {
- "version": "15.5.7",
- "resolved": "https://registry.npmjs.org/@next/env/-/env-15.5.7.tgz",
- "integrity": "sha512-4h6Y2NyEkIEN7Z8YxkA27pq6zTkS09bUSYC0xjd0NpwFxjnIKeZEeH591o5WECSmjpUhLn3H2QLJcDye3Uzcvg==",
+ "version": "15.5.9",
+ "resolved": "https://registry.npmjs.org/@next/env/-/env-15.5.9.tgz",
+ "integrity": "sha512-4GlTZ+EJM7WaW2HEZcyU317tIQDjkQIyENDLxYJfSWlfqguN+dHkZgyQTV/7ykvobU7yEH5gKvreNrH4B6QgIg==",
"license": "MIT"
},
"node_modules/@next/eslint-plugin-next": {
@@ -9297,6 +9314,7 @@
"version": "4.0.9",
"resolved": "https://registry.npmjs.org/@types/js-yaml/-/js-yaml-4.0.9.tgz",
"integrity": "sha512-k4MGaQl5TGo/iipqb2UDG2UwjXziSWkh0uysQelTlJpX1qGlpUZYm8PnO4DxG1qBomtJUdYJ6qR6xdIah10JLg==",
+ "dev": true,
"license": "MIT"
},
"node_modules/@types/json-schema": {
@@ -9359,9 +9377,9 @@
"license": "MIT"
},
"node_modules/@types/pg": {
- "version": "8.15.6",
- "resolved": "https://registry.npmjs.org/@types/pg/-/pg-8.15.6.tgz",
- "integrity": "sha512-NoaMtzhxOrubeL/7UZuNTrejB4MPAJ0RpxZqXQf2qXuVlTPuG6Y8p4u9dKRaue4yjmC7ZhzVO2/Yyyn25znrPQ==",
+ "version": "8.16.0",
+ "resolved": "https://registry.npmjs.org/@types/pg/-/pg-8.16.0.tgz",
+ "integrity": "sha512-RmhMd/wD+CF8Dfo+cVIy3RR5cl8CyfXQ0tGgW6XBL8L4LM/UTEbNXYRbLwU6w+CgrKBNbrQWt4FUtTfaU5jSYQ==",
"devOptional": true,
"license": "MIT",
"peer": true,
@@ -15914,9 +15932,9 @@
}
},
"node_modules/lucide-react": {
- "version": "0.556.0",
- "resolved": "https://registry.npmjs.org/lucide-react/-/lucide-react-0.556.0.tgz",
- "integrity": "sha512-iOb8dRk7kLaYBZhR2VlV1CeJGxChBgUthpSP8wom9jfj79qovgG6qcSdiy6vkoREKPnbUYzJsCn4o4PtG3Iy+A==",
+ "version": "0.559.0",
+ "resolved": "https://registry.npmjs.org/lucide-react/-/lucide-react-0.559.0.tgz",
+ "integrity": "sha512-3ymrkBPXWk3U2bwUDg6TdA6hP5iGDMgPEAMLhchEgTQmA+g0Zk24tOtKtXMx35w1PizTmsBC3RhP88QYm+7mHQ==",
"license": "ISC",
"peerDependencies": {
"react": "^16.5.1 || ^17.0.0 || ^18.0.0 || ^19.0.0"
@@ -16273,13 +16291,13 @@
}
},
"node_modules/next": {
- "version": "15.5.7",
- "resolved": "https://registry.npmjs.org/next/-/next-15.5.7.tgz",
- "integrity": "sha512-+t2/0jIJ48kUpGKkdlhgkv+zPTEOoXyr60qXe68eB/pl3CMJaLeIGjzp5D6Oqt25hCBiBTt8wEeeAzfJvUKnPQ==",
+ "version": "15.5.9",
+ "resolved": "https://registry.npmjs.org/next/-/next-15.5.9.tgz",
+ "integrity": "sha512-agNLK89seZEtC5zUHwtut0+tNrc0Xw4FT/Dg+B/VLEo9pAcS9rtTKpek3V6kVcVwsB2YlqMaHdfZL4eLEVYuCg==",
"license": "MIT",
"peer": true,
"dependencies": {
- "@next/env": "15.5.7",
+ "@next/env": "15.5.9",
"@swc/helpers": "0.5.15",
"caniuse-lite": "^1.0.30001579",
"postcss": "8.4.31",
@@ -16514,9 +16532,9 @@
}
},
"node_modules/npm": {
- "version": "11.6.4",
- "resolved": "https://registry.npmjs.org/npm/-/npm-11.6.4.tgz",
- "integrity": "sha512-ERjKtGoFpQrua/9bG0+h3xiv/4nVdGViCjUYA1AmlV24fFvfnSB7B7dIfZnySQ1FDLd0ZVrWPsLLp78dCtJdRQ==",
+ "version": "11.7.0",
+ "resolved": "https://registry.npmjs.org/npm/-/npm-11.7.0.tgz",
+ "integrity": "sha512-wiCZpv/41bIobCoJ31NStIWKfAxxYyD1iYnWCtiyns8s5v3+l8y0HCP/sScuH6B5+GhIfda4HQKiqeGZwJWhFw==",
"bundleDependencies": [
"@isaacs/string-locale-compare",
"@npmcli/arborist",
@@ -16595,8 +16613,8 @@
],
"dependencies": {
"@isaacs/string-locale-compare": "^1.1.0",
- "@npmcli/arborist": "^9.1.8",
- "@npmcli/config": "^10.4.4",
+ "@npmcli/arborist": "^9.1.9",
+ "@npmcli/config": "^10.4.5",
"@npmcli/fs": "^5.0.0",
"@npmcli/map-workspaces": "^5.0.3",
"@npmcli/metavuln-calculator": "^9.0.3",
@@ -16621,11 +16639,11 @@
"is-cidr": "^6.0.1",
"json-parse-even-better-errors": "^5.0.0",
"libnpmaccess": "^10.0.3",
- "libnpmdiff": "^8.0.11",
- "libnpmexec": "^10.1.10",
- "libnpmfund": "^7.0.11",
+ "libnpmdiff": "^8.0.12",
+ "libnpmexec": "^10.1.11",
+ "libnpmfund": "^7.0.12",
"libnpmorg": "^8.0.1",
- "libnpmpack": "^9.0.11",
+ "libnpmpack": "^9.0.12",
"libnpmpublish": "^11.1.3",
"libnpmsearch": "^9.0.1",
"libnpmteam": "^8.0.2",
@@ -16733,7 +16751,7 @@
}
},
"node_modules/npm/node_modules/@npmcli/arborist": {
- "version": "9.1.8",
+ "version": "9.1.9",
"inBundle": true,
"license": "ISC",
"dependencies": {
@@ -16779,7 +16797,7 @@
}
},
"node_modules/npm/node_modules/@npmcli/config": {
- "version": "10.4.4",
+ "version": "10.4.5",
"inBundle": true,
"license": "ISC",
"dependencies": {
@@ -17517,11 +17535,11 @@
}
},
"node_modules/npm/node_modules/libnpmdiff": {
- "version": "8.0.11",
+ "version": "8.0.12",
"inBundle": true,
"license": "ISC",
"dependencies": {
- "@npmcli/arborist": "^9.1.8",
+ "@npmcli/arborist": "^9.1.9",
"@npmcli/installed-package-contents": "^4.0.0",
"binary-extensions": "^3.0.0",
"diff": "^8.0.2",
@@ -17535,11 +17553,11 @@
}
},
"node_modules/npm/node_modules/libnpmexec": {
- "version": "10.1.10",
+ "version": "10.1.11",
"inBundle": true,
"license": "ISC",
"dependencies": {
- "@npmcli/arborist": "^9.1.8",
+ "@npmcli/arborist": "^9.1.9",
"@npmcli/package-json": "^7.0.0",
"@npmcli/run-script": "^10.0.0",
"ci-info": "^4.0.0",
@@ -17557,11 +17575,11 @@
}
},
"node_modules/npm/node_modules/libnpmfund": {
- "version": "7.0.11",
+ "version": "7.0.12",
"inBundle": true,
"license": "ISC",
"dependencies": {
- "@npmcli/arborist": "^9.1.8"
+ "@npmcli/arborist": "^9.1.9"
},
"engines": {
"node": "^20.17.0 || >=22.9.0"
@@ -17580,11 +17598,11 @@
}
},
"node_modules/npm/node_modules/libnpmpack": {
- "version": "9.0.11",
+ "version": "9.0.12",
"inBundle": true,
"license": "ISC",
"dependencies": {
- "@npmcli/arborist": "^9.1.8",
+ "@npmcli/arborist": "^9.1.9",
"@npmcli/run-script": "^10.0.0",
"npm-package-arg": "^13.0.0",
"pacote": "^21.0.2"
@@ -19719,9 +19737,9 @@
}
},
"node_modules/react": {
- "version": "19.2.1",
- "resolved": "https://registry.npmjs.org/react/-/react-19.2.1.tgz",
- "integrity": "sha512-DGrYcCWK7tvYMnWh79yrPHt+vdx9tY+1gPZa7nJQtO/p8bLTDaHp4dzwEhQB7pZ4Xe3ok4XKuEPrVuc+wlpkmw==",
+ "version": "19.2.3",
+ "resolved": "https://registry.npmjs.org/react/-/react-19.2.3.tgz",
+ "integrity": "sha512-Ku/hhYbVjOQnXDZFv2+RibmLFGwFdeeKHFcOTlrt7xplBnya5OGn/hIRDsqDiSUcfORsDC7MPxwork8jBwsIWA==",
"license": "MIT",
"peer": true,
"engines": {
@@ -19750,16 +19768,16 @@
}
},
"node_modules/react-dom": {
- "version": "19.2.1",
- "resolved": "https://registry.npmjs.org/react-dom/-/react-dom-19.2.1.tgz",
- "integrity": "sha512-ibrK8llX2a4eOskq1mXKu/TGZj9qzomO+sNfO98M6d9zIPOEhlBkMkBUBLd1vgS0gQsLDBzA+8jJBVXDnfHmJg==",
+ "version": "19.2.3",
+ "resolved": "https://registry.npmjs.org/react-dom/-/react-dom-19.2.3.tgz",
+ "integrity": "sha512-yELu4WmLPw5Mr/lmeEpox5rw3RETacE++JgHqQzd2dg+YbJuat3jH4ingc+WPZhxaoFzdv9y33G+F7Nl5O0GBg==",
"license": "MIT",
"peer": true,
"dependencies": {
"scheduler": "^0.27.0"
},
"peerDependencies": {
- "react": "^19.2.1"
+ "react": "^19.2.3"
}
},
"node_modules/react-easy-sort": {
@@ -19779,9 +19797,9 @@
}
},
"node_modules/react-email": {
- "version": "5.0.6",
- "resolved": "https://registry.npmjs.org/react-email/-/react-email-5.0.6.tgz",
- "integrity": "sha512-DEGzWpEiC3CquPEaaEJuipNT3WZ9mK58rbkpOe4Slbgyf60PLa1wONnt5a3afbBBRbNdW2aYhIvVI41yS6UIRA==",
+ "version": "5.0.7",
+ "resolved": "https://registry.npmjs.org/react-email/-/react-email-5.0.7.tgz",
+ "integrity": "sha512-JsWzxl3O82Gw9HRRNJm8VjQLB8c7R5TGbP89Ffj+/Qdb2H2N4J0XRXkhqiucMvmucuqNqe9mNndZkh3jh638xA==",
"dev": true,
"license": "MIT",
"dependencies": {
@@ -20870,9 +20888,9 @@
"license": "MIT"
},
"node_modules/resend": {
- "version": "6.5.2",
- "resolved": "https://registry.npmjs.org/resend/-/resend-6.5.2.tgz",
- "integrity": "sha512-Yl83UvS8sYsjgmF8dVbNPzlfpmb3DkLUk3VwsAbkaEFo9UMswpNuPGryHBXGk+Ta4uYMv5HmjVk3j9jmNkcEDg==",
+ "version": "6.6.0",
+ "resolved": "https://registry.npmjs.org/resend/-/resend-6.6.0.tgz",
+ "integrity": "sha512-d1WoOqSxj5x76JtQMrieNAG1kZkh4NU4f+Je1yq4++JsDpLddhEwnJlNfvkCzvUuZy9ZquWmMMAm2mENd2JvRw==",
"license": "MIT",
"dependencies": {
"svix": "1.76.1"
diff --git a/package.json b/package.json
index b41f1778..2aebc439 100644
--- a/package.json
+++ b/package.json
@@ -34,7 +34,7 @@
},
"dependencies": {
"@asteasolutions/zod-to-openapi": "8.2.0",
- "@aws-sdk/client-s3": "3.947.0",
+ "@aws-sdk/client-s3": "3.948.0",
"@faker-js/faker": "10.1.0",
"@headlessui/react": "2.2.9",
"@hookform/resolvers": "5.2.2",
@@ -68,7 +68,6 @@
"@tailwindcss/forms": "0.5.10",
"@tanstack/react-query": "5.90.12",
"@tanstack/react-table": "8.21.3",
- "@types/js-yaml": "4.0.9",
"arctic": "3.7.0",
"axios": "1.13.2",
"better-sqlite3": "11.9.1",
@@ -97,32 +96,32 @@
"jmespath": "0.16.0",
"js-yaml": "4.1.1",
"jsonwebtoken": "9.0.3",
- "lucide-react": "0.556.0",
+ "lucide-react": "0.559.0",
"maxmind": "5.0.1",
"moment": "2.30.1",
- "next": "15.5.7",
+ "next": "15.5.9",
"next-intl": "4.5.8",
"next-themes": "0.4.6",
"nextjs-toploader": "3.9.17",
"node-cache": "5.1.2",
"node-fetch": "3.3.2",
"nodemailer": "7.0.11",
- "npm": "11.6.4",
+ "npm": "11.7.0",
"nprogress": "0.2.0",
"oslo": "1.2.1",
"pg": "8.16.3",
"posthog-node": "5.17.2",
"qrcode.react": "4.2.0",
- "react": "19.2.1",
+ "react": "19.2.3",
"react-day-picker": "9.12.0",
- "react-dom": "19.2.1",
+ "react-dom": "19.2.3",
"react-easy-sort": "1.8.0",
"react-hook-form": "7.68.0",
"react-icons": "5.5.0",
"rebuild": "0.1.2",
"recharts": "2.15.4",
"reodotdev": "1.0.0",
- "resend": "6.5.2",
+ "resend": "6.6.0",
"semver": "7.7.3",
"stripe": "20.0.0",
"swagger-ui-express": "5.0.1",
@@ -157,7 +156,7 @@
"@types/node": "24.10.2",
"@types/nodemailer": "7.0.4",
"@types/nprogress": "0.2.3",
- "@types/pg": "8.15.6",
+ "@types/pg": "8.16.0",
"@types/react": "19.2.7",
"@types/react-dom": "19.2.3",
"@types/semver": "7.7.1",
@@ -165,13 +164,14 @@
"@types/topojson-client": "3.1.5",
"@types/ws": "8.18.1",
"@types/yargs": "17.0.35",
+ "@types/js-yaml": "4.0.9",
"babel-plugin-react-compiler": "1.0.0",
"drizzle-kit": "0.31.8",
"esbuild": "0.27.1",
"esbuild-node-externals": "1.20.1",
"postcss": "8.5.6",
"prettier": "3.7.4",
- "react-email": "5.0.6",
+ "react-email": "5.0.7",
"tailwindcss": "4.1.17",
"tsc-alias": "1.8.16",
"tsx": "4.21.0",
diff --git a/postcss.config.mjs b/postcss.config.mjs
index 9d3299ad..19b5e42f 100644
--- a/postcss.config.mjs
+++ b/postcss.config.mjs
@@ -1,8 +1,8 @@
/** @type {import('postcss-load-config').Config} */
const config = {
plugins: {
- "@tailwindcss/postcss": {},
- },
+ "@tailwindcss/postcss": {}
+ }
};
export default config;
diff --git a/public/screenshots/create-resource.png b/public/screenshots/create-resource.png
deleted file mode 100644
index 3b21f22b..00000000
Binary files a/public/screenshots/create-resource.png and /dev/null differ
diff --git a/public/screenshots/create-site.png b/public/screenshots/create-site.png
index b5ff8048..8d12a962 100644
Binary files a/public/screenshots/create-site.png and b/public/screenshots/create-site.png differ
diff --git a/public/screenshots/edit-resource.png b/public/screenshots/edit-resource.png
deleted file mode 100644
index 2d21afa6..00000000
Binary files a/public/screenshots/edit-resource.png and /dev/null differ
diff --git a/public/screenshots/hero.png b/public/screenshots/hero.png
index 86216cf6..f42a830e 100644
Binary files a/public/screenshots/hero.png and b/public/screenshots/hero.png differ
diff --git a/public/screenshots/private-resources.png b/public/screenshots/private-resources.png
new file mode 100644
index 00000000..f48d9279
Binary files /dev/null and b/public/screenshots/private-resources.png differ
diff --git a/public/screenshots/public-resources.png b/public/screenshots/public-resources.png
new file mode 100644
index 00000000..f42a830e
Binary files /dev/null and b/public/screenshots/public-resources.png differ
diff --git a/public/screenshots/resources.png b/public/screenshots/resources.png
deleted file mode 100644
index 86216cf6..00000000
Binary files a/public/screenshots/resources.png and /dev/null differ
diff --git a/public/screenshots/sites-fade.png b/public/screenshots/sites-fade.png
deleted file mode 100644
index 7e21c2cd..00000000
Binary files a/public/screenshots/sites-fade.png and /dev/null differ
diff --git a/public/screenshots/sites.png b/public/screenshots/sites.png
index 0aaa79d0..86b32b81 100644
Binary files a/public/screenshots/sites.png and b/public/screenshots/sites.png differ
diff --git a/public/screenshots/user-devices.png b/public/screenshots/user-devices.png
new file mode 100644
index 00000000..7b407cd6
Binary files /dev/null and b/public/screenshots/user-devices.png differ
diff --git a/server/auth/password.ts b/server/auth/password.ts
index dd1a3d1b..a25af4c9 100644
--- a/server/auth/password.ts
+++ b/server/auth/password.ts
@@ -2,13 +2,13 @@ import { hash, verify } from "@node-rs/argon2";
export async function verifyPassword(
password: string,
- hash: string,
+ hash: string
): Promise {
const validPassword = await verify(hash, password, {
memoryCost: 19456,
timeCost: 2,
outputLen: 32,
- parallelism: 1,
+ parallelism: 1
});
return validPassword;
}
@@ -18,7 +18,7 @@ export async function hashPassword(password: string): Promise {
memoryCost: 19456,
timeCost: 2,
outputLen: 32,
- parallelism: 1,
+ parallelism: 1
});
return passwordHash;
diff --git a/server/auth/passwordSchema.ts b/server/auth/passwordSchema.ts
index 9c399092..740f9a5d 100644
--- a/server/auth/passwordSchema.ts
+++ b/server/auth/passwordSchema.ts
@@ -4,10 +4,13 @@ export const passwordSchema = z
.string()
.min(8, { message: "Password must be at least 8 characters long" })
.max(128, { message: "Password must be at most 128 characters long" })
- .regex(/^(?=.*?[A-Z])(?=.*?[a-z])(?=.*?[0-9])(?=.*?[~!`@#$%^&*()_\-+={}[\]|\\:;"'<>,.\/?]).*$/, {
- message: `Your password must meet the following conditions:
+ .regex(
+ /^(?=.*?[A-Z])(?=.*?[a-z])(?=.*?[0-9])(?=.*?[~!`@#$%^&*()_\-+={}[\]|\\:;"'<>,.\/?]).*$/,
+ {
+ message: `Your password must meet the following conditions:
at least one uppercase English letter,
at least one lowercase English letter,
at least one digit,
at least one special character.`
- });
+ }
+ );
diff --git a/server/auth/sessions/newt.ts b/server/auth/sessions/newt.ts
index 5e55c491..96c37894 100644
--- a/server/auth/sessions/newt.ts
+++ b/server/auth/sessions/newt.ts
@@ -1,6 +1,4 @@
-import {
- encodeHexLowerCase,
-} from "@oslojs/encoding";
+import { encodeHexLowerCase } from "@oslojs/encoding";
import { sha256 } from "@oslojs/crypto/sha2";
import { Newt, newts, newtSessions, NewtSession } from "@server/db";
import { db } from "@server/db";
@@ -10,25 +8,25 @@ export const EXPIRES = 1000 * 60 * 60 * 24 * 30;
export async function createNewtSession(
token: string,
- newtId: string,
+ newtId: string
): Promise {
const sessionId = encodeHexLowerCase(
- sha256(new TextEncoder().encode(token)),
+ sha256(new TextEncoder().encode(token))
);
const session: NewtSession = {
sessionId: sessionId,
newtId,
- expiresAt: new Date(Date.now() + EXPIRES).getTime(),
+ expiresAt: new Date(Date.now() + EXPIRES).getTime()
};
await db.insert(newtSessions).values(session);
return session;
}
export async function validateNewtSessionToken(
- token: string,
+ token: string
): Promise {
const sessionId = encodeHexLowerCase(
- sha256(new TextEncoder().encode(token)),
+ sha256(new TextEncoder().encode(token))
);
const result = await db
.select({ newt: newts, session: newtSessions })
@@ -45,14 +43,12 @@ export async function validateNewtSessionToken(
.where(eq(newtSessions.sessionId, session.sessionId));
return { session: null, newt: null };
}
- if (Date.now() >= session.expiresAt - (EXPIRES / 2)) {
- session.expiresAt = new Date(
- Date.now() + EXPIRES,
- ).getTime();
+ if (Date.now() >= session.expiresAt - EXPIRES / 2) {
+ session.expiresAt = new Date(Date.now() + EXPIRES).getTime();
await db
.update(newtSessions)
.set({
- expiresAt: session.expiresAt,
+ expiresAt: session.expiresAt
})
.where(eq(newtSessions.sessionId, session.sessionId));
}
diff --git a/server/auth/sessions/olm.ts b/server/auth/sessions/olm.ts
index 89a0e81e..a51ec79a 100644
--- a/server/auth/sessions/olm.ts
+++ b/server/auth/sessions/olm.ts
@@ -1,6 +1,4 @@
-import {
- encodeHexLowerCase,
-} from "@oslojs/encoding";
+import { encodeHexLowerCase } from "@oslojs/encoding";
import { sha256 } from "@oslojs/crypto/sha2";
import { Olm, olms, olmSessions, OlmSession } from "@server/db";
import { db } from "@server/db";
@@ -10,25 +8,25 @@ export const EXPIRES = 1000 * 60 * 60 * 24 * 30;
export async function createOlmSession(
token: string,
- olmId: string,
+ olmId: string
): Promise {
const sessionId = encodeHexLowerCase(
- sha256(new TextEncoder().encode(token)),
+ sha256(new TextEncoder().encode(token))
);
const session: OlmSession = {
sessionId: sessionId,
olmId,
- expiresAt: new Date(Date.now() + EXPIRES).getTime(),
+ expiresAt: new Date(Date.now() + EXPIRES).getTime()
};
await db.insert(olmSessions).values(session);
return session;
}
export async function validateOlmSessionToken(
- token: string,
+ token: string
): Promise {
const sessionId = encodeHexLowerCase(
- sha256(new TextEncoder().encode(token)),
+ sha256(new TextEncoder().encode(token))
);
const result = await db
.select({ olm: olms, session: olmSessions })
@@ -45,14 +43,12 @@ export async function validateOlmSessionToken(
.where(eq(olmSessions.sessionId, session.sessionId));
return { session: null, olm: null };
}
- if (Date.now() >= session.expiresAt - (EXPIRES / 2)) {
- session.expiresAt = new Date(
- Date.now() + EXPIRES,
- ).getTime();
+ if (Date.now() >= session.expiresAt - EXPIRES / 2) {
+ session.expiresAt = new Date(Date.now() + EXPIRES).getTime();
await db
.update(olmSessions)
.set({
- expiresAt: session.expiresAt,
+ expiresAt: session.expiresAt
})
.where(eq(olmSessions.sessionId, session.sessionId));
}
diff --git a/server/cleanup.ts b/server/cleanup.ts
index a8985439..e494fcdc 100644
--- a/server/cleanup.ts
+++ b/server/cleanup.ts
@@ -10,4 +10,4 @@ export async function initCleanup() {
// Handle process termination
process.on("SIGTERM", () => cleanup());
process.on("SIGINT", () => cleanup());
-}
\ No newline at end of file
+}
diff --git a/server/db/countries.ts b/server/db/countries.ts
index 2907fd69..749f1183 100644
--- a/server/db/countries.ts
+++ b/server/db/countries.ts
@@ -1,1014 +1,1014 @@
export const COUNTRIES = [
{
- "name": "ALL COUNTRIES",
- "code": "ALL" // THIS IS AN INVALID CC SO IT WILL NEVER MATCH
+ name: "ALL COUNTRIES",
+ code: "ALL" // THIS IS AN INVALID CC SO IT WILL NEVER MATCH
},
{
- "name": "Afghanistan",
- "code": "AF"
+ name: "Afghanistan",
+ code: "AF"
},
{
- "name": "Albania",
- "code": "AL"
+ name: "Albania",
+ code: "AL"
},
{
- "name": "Algeria",
- "code": "DZ"
+ name: "Algeria",
+ code: "DZ"
},
{
- "name": "American Samoa",
- "code": "AS"
+ name: "American Samoa",
+ code: "AS"
},
{
- "name": "Andorra",
- "code": "AD"
+ name: "Andorra",
+ code: "AD"
},
{
- "name": "Angola",
- "code": "AO"
+ name: "Angola",
+ code: "AO"
},
{
- "name": "Anguilla",
- "code": "AI"
+ name: "Anguilla",
+ code: "AI"
},
{
- "name": "Antarctica",
- "code": "AQ"
+ name: "Antarctica",
+ code: "AQ"
},
{
- "name": "Antigua and Barbuda",
- "code": "AG"
+ name: "Antigua and Barbuda",
+ code: "AG"
},
{
- "name": "Argentina",
- "code": "AR"
+ name: "Argentina",
+ code: "AR"
},
{
- "name": "Armenia",
- "code": "AM"
+ name: "Armenia",
+ code: "AM"
},
{
- "name": "Aruba",
- "code": "AW"
+ name: "Aruba",
+ code: "AW"
},
{
- "name": "Asia/Pacific Region",
- "code": "AP"
+ name: "Asia/Pacific Region",
+ code: "AP"
},
{
- "name": "Australia",
- "code": "AU"
+ name: "Australia",
+ code: "AU"
},
{
- "name": "Austria",
- "code": "AT"
+ name: "Austria",
+ code: "AT"
},
{
- "name": "Azerbaijan",
- "code": "AZ"
+ name: "Azerbaijan",
+ code: "AZ"
},
{
- "name": "Bahamas",
- "code": "BS"
+ name: "Bahamas",
+ code: "BS"
},
{
- "name": "Bahrain",
- "code": "BH"
+ name: "Bahrain",
+ code: "BH"
},
{
- "name": "Bangladesh",
- "code": "BD"
+ name: "Bangladesh",
+ code: "BD"
},
{
- "name": "Barbados",
- "code": "BB"
+ name: "Barbados",
+ code: "BB"
},
{
- "name": "Belarus",
- "code": "BY"
+ name: "Belarus",
+ code: "BY"
},
{
- "name": "Belgium",
- "code": "BE"
+ name: "Belgium",
+ code: "BE"
},
{
- "name": "Belize",
- "code": "BZ"
+ name: "Belize",
+ code: "BZ"
},
{
- "name": "Benin",
- "code": "BJ"
+ name: "Benin",
+ code: "BJ"
},
{
- "name": "Bermuda",
- "code": "BM"
+ name: "Bermuda",
+ code: "BM"
},
{
- "name": "Bhutan",
- "code": "BT"
+ name: "Bhutan",
+ code: "BT"
},
{
- "name": "Bolivia",
- "code": "BO"
+ name: "Bolivia",
+ code: "BO"
},
{
- "name": "Bonaire, Sint Eustatius and Saba",
- "code": "BQ"
+ name: "Bonaire, Sint Eustatius and Saba",
+ code: "BQ"
},
{
- "name": "Bosnia and Herzegovina",
- "code": "BA"
+ name: "Bosnia and Herzegovina",
+ code: "BA"
},
{
- "name": "Botswana",
- "code": "BW"
+ name: "Botswana",
+ code: "BW"
},
{
- "name": "Bouvet Island",
- "code": "BV"
+ name: "Bouvet Island",
+ code: "BV"
},
{
- "name": "Brazil",
- "code": "BR"
+ name: "Brazil",
+ code: "BR"
},
{
- "name": "British Indian Ocean Territory",
- "code": "IO"
+ name: "British Indian Ocean Territory",
+ code: "IO"
},
{
- "name": "Brunei Darussalam",
- "code": "BN"
+ name: "Brunei Darussalam",
+ code: "BN"
},
{
- "name": "Bulgaria",
- "code": "BG"
+ name: "Bulgaria",
+ code: "BG"
},
{
- "name": "Burkina Faso",
- "code": "BF"
+ name: "Burkina Faso",
+ code: "BF"
},
{
- "name": "Burundi",
- "code": "BI"
+ name: "Burundi",
+ code: "BI"
},
{
- "name": "Cambodia",
- "code": "KH"
+ name: "Cambodia",
+ code: "KH"
},
{
- "name": "Cameroon",
- "code": "CM"
+ name: "Cameroon",
+ code: "CM"
},
{
- "name": "Canada",
- "code": "CA"
+ name: "Canada",
+ code: "CA"
},
{
- "name": "Cape Verde",
- "code": "CV"
+ name: "Cape Verde",
+ code: "CV"
},
{
- "name": "Cayman Islands",
- "code": "KY"
+ name: "Cayman Islands",
+ code: "KY"
},
{
- "name": "Central African Republic",
- "code": "CF"
+ name: "Central African Republic",
+ code: "CF"
},
{
- "name": "Chad",
- "code": "TD"
+ name: "Chad",
+ code: "TD"
},
{
- "name": "Chile",
- "code": "CL"
+ name: "Chile",
+ code: "CL"
},
{
- "name": "China",
- "code": "CN"
+ name: "China",
+ code: "CN"
},
{
- "name": "Christmas Island",
- "code": "CX"
+ name: "Christmas Island",
+ code: "CX"
},
{
- "name": "Cocos (Keeling) Islands",
- "code": "CC"
+ name: "Cocos (Keeling) Islands",
+ code: "CC"
},
{
- "name": "Colombia",
- "code": "CO"
+ name: "Colombia",
+ code: "CO"
},
{
- "name": "Comoros",
- "code": "KM"
+ name: "Comoros",
+ code: "KM"
},
{
- "name": "Congo",
- "code": "CG"
+ name: "Congo",
+ code: "CG"
},
{
- "name": "Congo, The Democratic Republic of the",
- "code": "CD"
+ name: "Congo, The Democratic Republic of the",
+ code: "CD"
},
{
- "name": "Cook Islands",
- "code": "CK"
+ name: "Cook Islands",
+ code: "CK"
},
{
- "name": "Costa Rica",
- "code": "CR"
+ name: "Costa Rica",
+ code: "CR"
},
{
- "name": "Croatia",
- "code": "HR"
+ name: "Croatia",
+ code: "HR"
},
{
- "name": "Cuba",
- "code": "CU"
+ name: "Cuba",
+ code: "CU"
},
{
- "name": "Curaçao",
- "code": "CW"
+ name: "Curaçao",
+ code: "CW"
},
{
- "name": "Cyprus",
- "code": "CY"
+ name: "Cyprus",
+ code: "CY"
},
{
- "name": "Czech Republic",
- "code": "CZ"
+ name: "Czech Republic",
+ code: "CZ"
},
{
- "name": "Côte d'Ivoire",
- "code": "CI"
+ name: "Côte d'Ivoire",
+ code: "CI"
},
{
- "name": "Denmark",
- "code": "DK"
+ name: "Denmark",
+ code: "DK"
},
{
- "name": "Djibouti",
- "code": "DJ"
+ name: "Djibouti",
+ code: "DJ"
},
{
- "name": "Dominica",
- "code": "DM"
+ name: "Dominica",
+ code: "DM"
},
{
- "name": "Dominican Republic",
- "code": "DO"
+ name: "Dominican Republic",
+ code: "DO"
},
{
- "name": "Ecuador",
- "code": "EC"
+ name: "Ecuador",
+ code: "EC"
},
{
- "name": "Egypt",
- "code": "EG"
+ name: "Egypt",
+ code: "EG"
},
{
- "name": "El Salvador",
- "code": "SV"
+ name: "El Salvador",
+ code: "SV"
},
{
- "name": "Equatorial Guinea",
- "code": "GQ"
+ name: "Equatorial Guinea",
+ code: "GQ"
},
{
- "name": "Eritrea",
- "code": "ER"
+ name: "Eritrea",
+ code: "ER"
},
{
- "name": "Estonia",
- "code": "EE"
+ name: "Estonia",
+ code: "EE"
},
{
- "name": "Ethiopia",
- "code": "ET"
+ name: "Ethiopia",
+ code: "ET"
},
{
- "name": "Falkland Islands (Malvinas)",
- "code": "FK"
+ name: "Falkland Islands (Malvinas)",
+ code: "FK"
},
{
- "name": "Faroe Islands",
- "code": "FO"
+ name: "Faroe Islands",
+ code: "FO"
},
{
- "name": "Fiji",
- "code": "FJ"
+ name: "Fiji",
+ code: "FJ"
},
{
- "name": "Finland",
- "code": "FI"
+ name: "Finland",
+ code: "FI"
},
{
- "name": "France",
- "code": "FR"
+ name: "France",
+ code: "FR"
},
{
- "name": "French Guiana",
- "code": "GF"
+ name: "French Guiana",
+ code: "GF"
},
{
- "name": "French Polynesia",
- "code": "PF"
+ name: "French Polynesia",
+ code: "PF"
},
{
- "name": "French Southern Territories",
- "code": "TF"
+ name: "French Southern Territories",
+ code: "TF"
},
{
- "name": "Gabon",
- "code": "GA"
+ name: "Gabon",
+ code: "GA"
},
{
- "name": "Gambia",
- "code": "GM"
+ name: "Gambia",
+ code: "GM"
},
{
- "name": "Georgia",
- "code": "GE"
+ name: "Georgia",
+ code: "GE"
},
{
- "name": "Germany",
- "code": "DE"
+ name: "Germany",
+ code: "DE"
},
{
- "name": "Ghana",
- "code": "GH"
+ name: "Ghana",
+ code: "GH"
},
{
- "name": "Gibraltar",
- "code": "GI"
+ name: "Gibraltar",
+ code: "GI"
},
{
- "name": "Greece",
- "code": "GR"
+ name: "Greece",
+ code: "GR"
},
{
- "name": "Greenland",
- "code": "GL"
+ name: "Greenland",
+ code: "GL"
},
{
- "name": "Grenada",
- "code": "GD"
+ name: "Grenada",
+ code: "GD"
},
{
- "name": "Guadeloupe",
- "code": "GP"
+ name: "Guadeloupe",
+ code: "GP"
},
{
- "name": "Guam",
- "code": "GU"
+ name: "Guam",
+ code: "GU"
},
{
- "name": "Guatemala",
- "code": "GT"
+ name: "Guatemala",
+ code: "GT"
},
{
- "name": "Guernsey",
- "code": "GG"
+ name: "Guernsey",
+ code: "GG"
},
{
- "name": "Guinea",
- "code": "GN"
+ name: "Guinea",
+ code: "GN"
},
{
- "name": "Guinea-Bissau",
- "code": "GW"
+ name: "Guinea-Bissau",
+ code: "GW"
},
{
- "name": "Guyana",
- "code": "GY"
+ name: "Guyana",
+ code: "GY"
},
{
- "name": "Haiti",
- "code": "HT"
+ name: "Haiti",
+ code: "HT"
},
{
- "name": "Heard Island and Mcdonald Islands",
- "code": "HM"
+ name: "Heard Island and Mcdonald Islands",
+ code: "HM"
},
{
- "name": "Holy See (Vatican City State)",
- "code": "VA"
+ name: "Holy See (Vatican City State)",
+ code: "VA"
},
{
- "name": "Honduras",
- "code": "HN"
+ name: "Honduras",
+ code: "HN"
},
{
- "name": "Hong Kong",
- "code": "HK"
+ name: "Hong Kong",
+ code: "HK"
},
{
- "name": "Hungary",
- "code": "HU"
+ name: "Hungary",
+ code: "HU"
},
{
- "name": "Iceland",
- "code": "IS"
+ name: "Iceland",
+ code: "IS"
},
{
- "name": "India",
- "code": "IN"
+ name: "India",
+ code: "IN"
},
{
- "name": "Indonesia",
- "code": "ID"
+ name: "Indonesia",
+ code: "ID"
},
{
- "name": "Iran, Islamic Republic Of",
- "code": "IR"
+ name: "Iran, Islamic Republic Of",
+ code: "IR"
},
{
- "name": "Iraq",
- "code": "IQ"
+ name: "Iraq",
+ code: "IQ"
},
{
- "name": "Ireland",
- "code": "IE"
+ name: "Ireland",
+ code: "IE"
},
{
- "name": "Isle of Man",
- "code": "IM"
+ name: "Isle of Man",
+ code: "IM"
},
{
- "name": "Israel",
- "code": "IL"
+ name: "Israel",
+ code: "IL"
},
{
- "name": "Italy",
- "code": "IT"
+ name: "Italy",
+ code: "IT"
},
{
- "name": "Jamaica",
- "code": "JM"
+ name: "Jamaica",
+ code: "JM"
},
{
- "name": "Japan",
- "code": "JP"
+ name: "Japan",
+ code: "JP"
},
{
- "name": "Jersey",
- "code": "JE"
+ name: "Jersey",
+ code: "JE"
},
{
- "name": "Jordan",
- "code": "JO"
+ name: "Jordan",
+ code: "JO"
},
{
- "name": "Kazakhstan",
- "code": "KZ"
+ name: "Kazakhstan",
+ code: "KZ"
},
{
- "name": "Kenya",
- "code": "KE"
+ name: "Kenya",
+ code: "KE"
},
{
- "name": "Kiribati",
- "code": "KI"
+ name: "Kiribati",
+ code: "KI"
},
{
- "name": "Korea, Republic of",
- "code": "KR"
+ name: "Korea, Republic of",
+ code: "KR"
},
{
- "name": "Kuwait",
- "code": "KW"
+ name: "Kuwait",
+ code: "KW"
},
{
- "name": "Kyrgyzstan",
- "code": "KG"
+ name: "Kyrgyzstan",
+ code: "KG"
},
{
- "name": "Laos",
- "code": "LA"
+ name: "Laos",
+ code: "LA"
},
{
- "name": "Latvia",
- "code": "LV"
+ name: "Latvia",
+ code: "LV"
},
{
- "name": "Lebanon",
- "code": "LB"
+ name: "Lebanon",
+ code: "LB"
},
{
- "name": "Lesotho",
- "code": "LS"
+ name: "Lesotho",
+ code: "LS"
},
{
- "name": "Liberia",
- "code": "LR"
+ name: "Liberia",
+ code: "LR"
},
{
- "name": "Libyan Arab Jamahiriya",
- "code": "LY"
+ name: "Libyan Arab Jamahiriya",
+ code: "LY"
},
{
- "name": "Liechtenstein",
- "code": "LI"
+ name: "Liechtenstein",
+ code: "LI"
},
{
- "name": "Lithuania",
- "code": "LT"
+ name: "Lithuania",
+ code: "LT"
},
{
- "name": "Luxembourg",
- "code": "LU"
+ name: "Luxembourg",
+ code: "LU"
},
{
- "name": "Macao",
- "code": "MO"
+ name: "Macao",
+ code: "MO"
},
{
- "name": "Madagascar",
- "code": "MG"
+ name: "Madagascar",
+ code: "MG"
},
{
- "name": "Malawi",
- "code": "MW"
+ name: "Malawi",
+ code: "MW"
},
{
- "name": "Malaysia",
- "code": "MY"
+ name: "Malaysia",
+ code: "MY"
},
{
- "name": "Maldives",
- "code": "MV"
+ name: "Maldives",
+ code: "MV"
},
{
- "name": "Mali",
- "code": "ML"
+ name: "Mali",
+ code: "ML"
},
{
- "name": "Malta",
- "code": "MT"
+ name: "Malta",
+ code: "MT"
},
{
- "name": "Marshall Islands",
- "code": "MH"
+ name: "Marshall Islands",
+ code: "MH"
},
{
- "name": "Martinique",
- "code": "MQ"
+ name: "Martinique",
+ code: "MQ"
},
{
- "name": "Mauritania",
- "code": "MR"
+ name: "Mauritania",
+ code: "MR"
},
{
- "name": "Mauritius",
- "code": "MU"
+ name: "Mauritius",
+ code: "MU"
},
{
- "name": "Mayotte",
- "code": "YT"
+ name: "Mayotte",
+ code: "YT"
},
{
- "name": "Mexico",
- "code": "MX"
+ name: "Mexico",
+ code: "MX"
},
{
- "name": "Micronesia, Federated States of",
- "code": "FM"
+ name: "Micronesia, Federated States of",
+ code: "FM"
},
{
- "name": "Moldova, Republic of",
- "code": "MD"
+ name: "Moldova, Republic of",
+ code: "MD"
},
{
- "name": "Monaco",
- "code": "MC"
+ name: "Monaco",
+ code: "MC"
},
{
- "name": "Mongolia",
- "code": "MN"
+ name: "Mongolia",
+ code: "MN"
},
{
- "name": "Montenegro",
- "code": "ME"
+ name: "Montenegro",
+ code: "ME"
},
{
- "name": "Montserrat",
- "code": "MS"
+ name: "Montserrat",
+ code: "MS"
},
{
- "name": "Morocco",
- "code": "MA"
+ name: "Morocco",
+ code: "MA"
},
{
- "name": "Mozambique",
- "code": "MZ"
+ name: "Mozambique",
+ code: "MZ"
},
{
- "name": "Myanmar",
- "code": "MM"
+ name: "Myanmar",
+ code: "MM"
},
{
- "name": "Namibia",
- "code": "NA"
+ name: "Namibia",
+ code: "NA"
},
{
- "name": "Nauru",
- "code": "NR"
+ name: "Nauru",
+ code: "NR"
},
{
- "name": "Nepal",
- "code": "NP"
+ name: "Nepal",
+ code: "NP"
},
{
- "name": "Netherlands",
- "code": "NL"
+ name: "Netherlands",
+ code: "NL"
},
{
- "name": "Netherlands Antilles",
- "code": "AN"
+ name: "Netherlands Antilles",
+ code: "AN"
},
{
- "name": "New Caledonia",
- "code": "NC"
+ name: "New Caledonia",
+ code: "NC"
},
{
- "name": "New Zealand",
- "code": "NZ"
+ name: "New Zealand",
+ code: "NZ"
},
{
- "name": "Nicaragua",
- "code": "NI"
+ name: "Nicaragua",
+ code: "NI"
},
{
- "name": "Niger",
- "code": "NE"
+ name: "Niger",
+ code: "NE"
},
{
- "name": "Nigeria",
- "code": "NG"
+ name: "Nigeria",
+ code: "NG"
},
{
- "name": "Niue",
- "code": "NU"
+ name: "Niue",
+ code: "NU"
},
{
- "name": "Norfolk Island",
- "code": "NF"
+ name: "Norfolk Island",
+ code: "NF"
},
{
- "name": "North Korea",
- "code": "KP"
+ name: "North Korea",
+ code: "KP"
},
{
- "name": "North Macedonia",
- "code": "MK"
+ name: "North Macedonia",
+ code: "MK"
},
{
- "name": "Northern Mariana Islands",
- "code": "MP"
+ name: "Northern Mariana Islands",
+ code: "MP"
},
{
- "name": "Norway",
- "code": "NO"
+ name: "Norway",
+ code: "NO"
},
{
- "name": "Oman",
- "code": "OM"
+ name: "Oman",
+ code: "OM"
},
{
- "name": "Pakistan",
- "code": "PK"
+ name: "Pakistan",
+ code: "PK"
},
{
- "name": "Palau",
- "code": "PW"
+ name: "Palau",
+ code: "PW"
},
{
- "name": "Palestinian Territory, Occupied",
- "code": "PS"
+ name: "Palestinian Territory, Occupied",
+ code: "PS"
},
{
- "name": "Panama",
- "code": "PA"
+ name: "Panama",
+ code: "PA"
},
{
- "name": "Papua New Guinea",
- "code": "PG"
+ name: "Papua New Guinea",
+ code: "PG"
},
{
- "name": "Paraguay",
- "code": "PY"
+ name: "Paraguay",
+ code: "PY"
},
{
- "name": "Peru",
- "code": "PE"
+ name: "Peru",
+ code: "PE"
},
{
- "name": "Philippines",
- "code": "PH"
+ name: "Philippines",
+ code: "PH"
},
{
- "name": "Pitcairn Islands",
- "code": "PN"
+ name: "Pitcairn Islands",
+ code: "PN"
},
{
- "name": "Poland",
- "code": "PL"
+ name: "Poland",
+ code: "PL"
},
{
- "name": "Portugal",
- "code": "PT"
+ name: "Portugal",
+ code: "PT"
},
{
- "name": "Puerto Rico",
- "code": "PR"
+ name: "Puerto Rico",
+ code: "PR"
},
{
- "name": "Qatar",
- "code": "QA"
+ name: "Qatar",
+ code: "QA"
},
{
- "name": "Reunion",
- "code": "RE"
+ name: "Reunion",
+ code: "RE"
},
{
- "name": "Romania",
- "code": "RO"
+ name: "Romania",
+ code: "RO"
},
{
- "name": "Russian Federation",
- "code": "RU"
+ name: "Russian Federation",
+ code: "RU"
},
{
- "name": "Rwanda",
- "code": "RW"
+ name: "Rwanda",
+ code: "RW"
},
{
- "name": "Saint Barthélemy",
- "code": "BL"
+ name: "Saint Barthélemy",
+ code: "BL"
},
{
- "name": "Saint Helena",
- "code": "SH"
+ name: "Saint Helena",
+ code: "SH"
},
{
- "name": "Saint Kitts and Nevis",
- "code": "KN"
+ name: "Saint Kitts and Nevis",
+ code: "KN"
},
{
- "name": "Saint Lucia",
- "code": "LC"
+ name: "Saint Lucia",
+ code: "LC"
},
{
- "name": "Saint Martin",
- "code": "MF"
+ name: "Saint Martin",
+ code: "MF"
},
{
- "name": "Saint Pierre and Miquelon",
- "code": "PM"
+ name: "Saint Pierre and Miquelon",
+ code: "PM"
},
{
- "name": "Saint Vincent and the Grenadines",
- "code": "VC"
+ name: "Saint Vincent and the Grenadines",
+ code: "VC"
},
{
- "name": "Samoa",
- "code": "WS"
+ name: "Samoa",
+ code: "WS"
},
{
- "name": "San Marino",
- "code": "SM"
+ name: "San Marino",
+ code: "SM"
},
{
- "name": "Sao Tome and Principe",
- "code": "ST"
+ name: "Sao Tome and Principe",
+ code: "ST"
},
{
- "name": "Saudi Arabia",
- "code": "SA"
+ name: "Saudi Arabia",
+ code: "SA"
},
{
- "name": "Senegal",
- "code": "SN"
+ name: "Senegal",
+ code: "SN"
},
{
- "name": "Serbia",
- "code": "RS"
+ name: "Serbia",
+ code: "RS"
},
{
- "name": "Serbia and Montenegro",
- "code": "CS"
+ name: "Serbia and Montenegro",
+ code: "CS"
},
{
- "name": "Seychelles",
- "code": "SC"
+ name: "Seychelles",
+ code: "SC"
},
{
- "name": "Sierra Leone",
- "code": "SL"
+ name: "Sierra Leone",
+ code: "SL"
},
{
- "name": "Singapore",
- "code": "SG"
+ name: "Singapore",
+ code: "SG"
},
{
- "name": "Sint Maarten",
- "code": "SX"
+ name: "Sint Maarten",
+ code: "SX"
},
{
- "name": "Slovakia",
- "code": "SK"
+ name: "Slovakia",
+ code: "SK"
},
{
- "name": "Slovenia",
- "code": "SI"
+ name: "Slovenia",
+ code: "SI"
},
{
- "name": "Solomon Islands",
- "code": "SB"
+ name: "Solomon Islands",
+ code: "SB"
},
{
- "name": "Somalia",
- "code": "SO"
+ name: "Somalia",
+ code: "SO"
},
{
- "name": "South Africa",
- "code": "ZA"
+ name: "South Africa",
+ code: "ZA"
},
{
- "name": "South Georgia and the South Sandwich Islands",
- "code": "GS"
+ name: "South Georgia and the South Sandwich Islands",
+ code: "GS"
},
{
- "name": "South Sudan",
- "code": "SS"
+ name: "South Sudan",
+ code: "SS"
},
{
- "name": "Spain",
- "code": "ES"
+ name: "Spain",
+ code: "ES"
},
{
- "name": "Sri Lanka",
- "code": "LK"
+ name: "Sri Lanka",
+ code: "LK"
},
{
- "name": "Sudan",
- "code": "SD"
+ name: "Sudan",
+ code: "SD"
},
{
- "name": "Suriname",
- "code": "SR"
+ name: "Suriname",
+ code: "SR"
},
{
- "name": "Svalbard and Jan Mayen",
- "code": "SJ"
+ name: "Svalbard and Jan Mayen",
+ code: "SJ"
},
{
- "name": "Swaziland",
- "code": "SZ"
+ name: "Swaziland",
+ code: "SZ"
},
{
- "name": "Sweden",
- "code": "SE"
+ name: "Sweden",
+ code: "SE"
},
{
- "name": "Switzerland",
- "code": "CH"
+ name: "Switzerland",
+ code: "CH"
},
{
- "name": "Syrian Arab Republic",
- "code": "SY"
+ name: "Syrian Arab Republic",
+ code: "SY"
},
{
- "name": "Taiwan",
- "code": "TW"
+ name: "Taiwan",
+ code: "TW"
},
{
- "name": "Tajikistan",
- "code": "TJ"
+ name: "Tajikistan",
+ code: "TJ"
},
{
- "name": "Tanzania, United Republic of",
- "code": "TZ"
+ name: "Tanzania, United Republic of",
+ code: "TZ"
},
{
- "name": "Thailand",
- "code": "TH"
+ name: "Thailand",
+ code: "TH"
},
{
- "name": "Timor-Leste",
- "code": "TL"
+ name: "Timor-Leste",
+ code: "TL"
},
{
- "name": "Togo",
- "code": "TG"
+ name: "Togo",
+ code: "TG"
},
{
- "name": "Tokelau",
- "code": "TK"
+ name: "Tokelau",
+ code: "TK"
},
{
- "name": "Tonga",
- "code": "TO"
+ name: "Tonga",
+ code: "TO"
},
{
- "name": "Trinidad and Tobago",
- "code": "TT"
+ name: "Trinidad and Tobago",
+ code: "TT"
},
{
- "name": "Tunisia",
- "code": "TN"
+ name: "Tunisia",
+ code: "TN"
},
{
- "name": "Turkey",
- "code": "TR"
+ name: "Turkey",
+ code: "TR"
},
{
- "name": "Turkmenistan",
- "code": "TM"
+ name: "Turkmenistan",
+ code: "TM"
},
{
- "name": "Turks and Caicos Islands",
- "code": "TC"
+ name: "Turks and Caicos Islands",
+ code: "TC"
},
{
- "name": "Tuvalu",
- "code": "TV"
+ name: "Tuvalu",
+ code: "TV"
},
{
- "name": "Uganda",
- "code": "UG"
+ name: "Uganda",
+ code: "UG"
},
{
- "name": "Ukraine",
- "code": "UA"
+ name: "Ukraine",
+ code: "UA"
},
{
- "name": "United Arab Emirates",
- "code": "AE"
+ name: "United Arab Emirates",
+ code: "AE"
},
{
- "name": "United Kingdom",
- "code": "GB"
+ name: "United Kingdom",
+ code: "GB"
},
{
- "name": "United States",
- "code": "US"
+ name: "United States",
+ code: "US"
},
{
- "name": "United States Minor Outlying Islands",
- "code": "UM"
+ name: "United States Minor Outlying Islands",
+ code: "UM"
},
{
- "name": "Uruguay",
- "code": "UY"
+ name: "Uruguay",
+ code: "UY"
},
{
- "name": "Uzbekistan",
- "code": "UZ"
+ name: "Uzbekistan",
+ code: "UZ"
},
{
- "name": "Vanuatu",
- "code": "VU"
+ name: "Vanuatu",
+ code: "VU"
},
{
- "name": "Venezuela",
- "code": "VE"
+ name: "Venezuela",
+ code: "VE"
},
{
- "name": "Vietnam",
- "code": "VN"
+ name: "Vietnam",
+ code: "VN"
},
{
- "name": "Virgin Islands, British",
- "code": "VG"
+ name: "Virgin Islands, British",
+ code: "VG"
},
{
- "name": "Virgin Islands, U.S.",
- "code": "VI"
+ name: "Virgin Islands, U.S.",
+ code: "VI"
},
{
- "name": "Wallis and Futuna",
- "code": "WF"
+ name: "Wallis and Futuna",
+ code: "WF"
},
{
- "name": "Western Sahara",
- "code": "EH"
+ name: "Western Sahara",
+ code: "EH"
},
{
- "name": "Yemen",
- "code": "YE"
+ name: "Yemen",
+ code: "YE"
},
{
- "name": "Zambia",
- "code": "ZM"
+ name: "Zambia",
+ code: "ZM"
},
{
- "name": "Zimbabwe",
- "code": "ZW"
+ name: "Zimbabwe",
+ code: "ZW"
},
{
- "name": "Åland Islands",
- "code": "AX"
+ name: "Åland Islands",
+ code: "AX"
}
-];
\ No newline at end of file
+];
diff --git a/server/db/names.json b/server/db/names.json
index fdf545fb..eb104691 100644
--- a/server/db/names.json
+++ b/server/db/names.json
@@ -1708,4 +1708,4 @@
"Desert Box Turtle",
"African Striped Weasel"
]
-}
\ No newline at end of file
+}
diff --git a/server/db/pg/driver.ts b/server/db/pg/driver.ts
index 9456effb..2ee34da6 100644
--- a/server/db/pg/driver.ts
+++ b/server/db/pg/driver.ts
@@ -6,28 +6,28 @@ import { withReplicas } from "drizzle-orm/pg-core";
function createDb() {
const config = readConfigFile();
- if (!config.postgres) {
- // check the environment variables for postgres config
- if (process.env.POSTGRES_CONNECTION_STRING) {
- config.postgres = {
- connection_string: process.env.POSTGRES_CONNECTION_STRING
- };
- if (process.env.POSTGRES_REPLICA_CONNECTION_STRINGS) {
- const replicas =
- process.env.POSTGRES_REPLICA_CONNECTION_STRINGS.split(
- ","
- ).map((conn) => ({
+ // check the environment variables for postgres config first before the config file
+ if (process.env.POSTGRES_CONNECTION_STRING) {
+ config.postgres = {
+ connection_string: process.env.POSTGRES_CONNECTION_STRING
+ };
+ if (process.env.POSTGRES_REPLICA_CONNECTION_STRINGS) {
+ const replicas =
+ process.env.POSTGRES_REPLICA_CONNECTION_STRINGS.split(",").map(
+ (conn) => ({
connection_string: conn.trim()
- }));
- config.postgres.replicas = replicas;
- }
- } else {
- throw new Error(
- "Postgres configuration is missing in the configuration file."
- );
+ })
+ );
+ config.postgres.replicas = replicas;
}
}
+ if (!config.postgres) {
+ throw new Error(
+ "Postgres configuration is missing in the configuration file."
+ );
+ }
+
const connectionString = config.postgres?.connection_string;
const replicaConnections = config.postgres?.replicas || [];
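The `driver.ts` change above inverts the precedence so environment variables are checked before the config file, and only errors out when neither source provides a Postgres config. A minimal sketch of that resolution order, with illustrative names (not the actual Pangolin API):

```typescript
// Sketch of env-first Postgres config resolution, mirroring the new driver.ts
// control flow. Types and the helper name are illustrative.
type PostgresConfig = {
    connection_string: string;
    replicas?: { connection_string: string }[];
};

function resolvePostgresConfig(
    fileConfig: { postgres?: PostgresConfig },
    env: Record<string, string | undefined>
): PostgresConfig {
    // Environment variables take precedence over the config file.
    if (env.POSTGRES_CONNECTION_STRING) {
        const config: PostgresConfig = {
            connection_string: env.POSTGRES_CONNECTION_STRING
        };
        if (env.POSTGRES_REPLICA_CONNECTION_STRINGS) {
            // Comma-separated replica connection strings, whitespace-trimmed.
            config.replicas = env.POSTGRES_REPLICA_CONNECTION_STRINGS.split(
                ","
            ).map((conn) => ({ connection_string: conn.trim() }));
        }
        return config;
    }
    // Fall back to the config file; fail only if both sources are empty.
    if (!fileConfig.postgres) {
        throw new Error(
            "Postgres configuration is missing in the configuration file."
        );
    }
    return fileConfig.postgres;
}
```

Note the behavioral difference from the old code: previously the env vars were consulted only when the file had no `postgres` block; now they win even when the file defines one.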
diff --git a/server/db/pg/schema/privateSchema.ts b/server/db/pg/schema/privateSchema.ts
index 17d262c6..cb809b71 100644
--- a/server/db/pg/schema/privateSchema.ts
+++ b/server/db/pg/schema/privateSchema.ts
@@ -215,42 +215,56 @@ export const sessionTransferToken = pgTable("sessionTransferToken", {
expiresAt: bigint("expiresAt", { mode: "number" }).notNull()
});
-export const actionAuditLog = pgTable("actionAuditLog", {
- id: serial("id").primaryKey(),
- timestamp: bigint("timestamp", { mode: "number" }).notNull(), // this is EPOCH time in seconds
- orgId: varchar("orgId")
- .notNull()
- .references(() => orgs.orgId, { onDelete: "cascade" }),
- actorType: varchar("actorType", { length: 50 }).notNull(),
- actor: varchar("actor", { length: 255 }).notNull(),
- actorId: varchar("actorId", { length: 255 }).notNull(),
- action: varchar("action", { length: 100 }).notNull(),
- metadata: text("metadata")
-}, (table) => ([
- index("idx_actionAuditLog_timestamp").on(table.timestamp),
- index("idx_actionAuditLog_org_timestamp").on(table.orgId, table.timestamp)
-]));
+export const actionAuditLog = pgTable(
+ "actionAuditLog",
+ {
+ id: serial("id").primaryKey(),
+ timestamp: bigint("timestamp", { mode: "number" }).notNull(), // this is EPOCH time in seconds
+ orgId: varchar("orgId")
+ .notNull()
+ .references(() => orgs.orgId, { onDelete: "cascade" }),
+ actorType: varchar("actorType", { length: 50 }).notNull(),
+ actor: varchar("actor", { length: 255 }).notNull(),
+ actorId: varchar("actorId", { length: 255 }).notNull(),
+ action: varchar("action", { length: 100 }).notNull(),
+ metadata: text("metadata")
+ },
+ (table) => [
+ index("idx_actionAuditLog_timestamp").on(table.timestamp),
+ index("idx_actionAuditLog_org_timestamp").on(
+ table.orgId,
+ table.timestamp
+ )
+ ]
+);
-export const accessAuditLog = pgTable("accessAuditLog", {
- id: serial("id").primaryKey(),
- timestamp: bigint("timestamp", { mode: "number" }).notNull(), // this is EPOCH time in seconds
- orgId: varchar("orgId")
- .notNull()
- .references(() => orgs.orgId, { onDelete: "cascade" }),
- actorType: varchar("actorType", { length: 50 }),
- actor: varchar("actor", { length: 255 }),
- actorId: varchar("actorId", { length: 255 }),
- resourceId: integer("resourceId"),
- ip: varchar("ip", { length: 45 }),
- type: varchar("type", { length: 100 }).notNull(),
- action: boolean("action").notNull(),
- location: text("location"),
- userAgent: text("userAgent"),
- metadata: text("metadata")
-}, (table) => ([
- index("idx_identityAuditLog_timestamp").on(table.timestamp),
- index("idx_identityAuditLog_org_timestamp").on(table.orgId, table.timestamp)
-]));
+export const accessAuditLog = pgTable(
+ "accessAuditLog",
+ {
+ id: serial("id").primaryKey(),
+ timestamp: bigint("timestamp", { mode: "number" }).notNull(), // this is EPOCH time in seconds
+ orgId: varchar("orgId")
+ .notNull()
+ .references(() => orgs.orgId, { onDelete: "cascade" }),
+ actorType: varchar("actorType", { length: 50 }),
+ actor: varchar("actor", { length: 255 }),
+ actorId: varchar("actorId", { length: 255 }),
+ resourceId: integer("resourceId"),
+ ip: varchar("ip", { length: 45 }),
+ type: varchar("type", { length: 100 }).notNull(),
+ action: boolean("action").notNull(),
+ location: text("location"),
+ userAgent: text("userAgent"),
+ metadata: text("metadata")
+ },
+ (table) => [
+ index("idx_identityAuditLog_timestamp").on(table.timestamp),
+ index("idx_identityAuditLog_org_timestamp").on(
+ table.orgId,
+ table.timestamp
+ )
+ ]
+);
export type Limit = InferSelectModel;
export type Account = InferSelectModel;
@@ -270,4 +284,4 @@ export type RemoteExitNodeSession = InferSelectModel<
export type ExitNodeOrg = InferSelectModel;
export type LoginPage = InferSelectModel;
export type ActionAuditLog = InferSelectModel<typeof actionAuditLog>;
-export type AccessAuditLog = InferSelectModel<typeof accessAuditLog>;
\ No newline at end of file
+export type AccessAuditLog = InferSelectModel<typeof accessAuditLog>;
diff --git a/server/db/pg/schema/schema.ts b/server/db/pg/schema/schema.ts
index a0020a0e..e8077754 100644
--- a/server/db/pg/schema/schema.ts
+++ b/server/db/pg/schema/schema.ts
@@ -177,7 +177,7 @@ export const targetHealthCheck = pgTable("targetHealthCheck", {
hcMethod: varchar("hcMethod").default("GET"),
hcStatus: integer("hcStatus"), // http code
hcHealth: text("hcHealth").default("unknown"), // "unknown", "healthy", "unhealthy"
- hcTlsServerName: text("hcTlsServerName"),
+ hcTlsServerName: text("hcTlsServerName")
});
export const exitNodes = pgTable("exitNodes", {
@@ -213,7 +213,10 @@ export const siteResources = pgTable("siteResources", {
destination: varchar("destination").notNull(), // ip, cidr, hostname; validate against the mode
enabled: boolean("enabled").notNull().default(true),
alias: varchar("alias"),
- aliasAddress: varchar("aliasAddress")
+ aliasAddress: varchar("aliasAddress"),
+ tcpPortRangeString: varchar("tcpPortRangeString"),
+ udpPortRangeString: varchar("udpPortRangeString"),
+ disableIcmp: boolean("disableIcmp").notNull().default(false)
});
export const clientSiteResources = pgTable("clientSiteResources", {
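The new `tcpPortRangeString` / `udpPortRangeString` columns on `siteResources` suggest port filters stored as strings. The diff does not show the parsing side, so the following is a purely hypothetical illustration of one plausible encoding (comma-separated single ports and `lo-hi` ranges); it is not part of the Pangolin codebase:

```typescript
// Hypothetical parser for a "80, 443, 8000-9000"-style port range string.
// The format is an assumption inferred from the column names, not confirmed
// by this diff.
function parsePortRanges(spec: string): Array<{ lo: number; hi: number }> {
    return spec
        .split(",")
        .map((part) => part.trim())
        .filter((part) => part.length > 0)
        .map((part) => {
            // A bare port like "80" becomes the degenerate range 80-80.
            const [lo, hi] = part.split("-").map((n) => parseInt(n, 10));
            return { lo, hi: hi ?? lo };
        });
}
```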
diff --git a/server/db/queries/verifySessionQueries.ts b/server/db/queries/verifySessionQueries.ts
index 85bd7cc7..774c4e53 100644
--- a/server/db/queries/verifySessionQueries.ts
+++ b/server/db/queries/verifySessionQueries.ts
@@ -52,10 +52,7 @@ export async function getResourceByDomain(
resourceHeaderAuth,
eq(resourceHeaderAuth.resourceId, resources.resourceId)
)
- .innerJoin(
- orgs,
- eq(orgs.orgId, resources.orgId)
- )
+ .innerJoin(orgs, eq(orgs.orgId, resources.orgId))
.where(eq(resources.fullDomain, domain))
.limit(1);
diff --git a/server/db/sqlite/migrate.ts b/server/db/sqlite/migrate.ts
index e4a730d0..7c337ae2 100644
--- a/server/db/sqlite/migrate.ts
+++ b/server/db/sqlite/migrate.ts
@@ -8,7 +8,7 @@ const runMigrations = async () => {
console.log("Running migrations...");
try {
migrate(db as any, {
- migrationsFolder: migrationsFolder,
+ migrationsFolder: migrationsFolder
});
console.log("Migrations completed successfully.");
} catch (error) {
diff --git a/server/db/sqlite/schema/privateSchema.ts b/server/db/sqlite/schema/privateSchema.ts
index 65396770..975a949b 100644
--- a/server/db/sqlite/schema/privateSchema.ts
+++ b/server/db/sqlite/schema/privateSchema.ts
@@ -29,7 +29,9 @@ export const certificates = sqliteTable("certificates", {
});
export const dnsChallenge = sqliteTable("dnsChallenges", {
- dnsChallengeId: integer("dnsChallengeId").primaryKey({ autoIncrement: true }),
+ dnsChallengeId: integer("dnsChallengeId").primaryKey({
+ autoIncrement: true
+ }),
domain: text("domain").notNull(),
token: text("token").notNull(),
keyAuthorization: text("keyAuthorization").notNull(),
@@ -61,9 +63,7 @@ export const customers = sqliteTable("customers", {
});
export const subscriptions = sqliteTable("subscriptions", {
- subscriptionId: text("subscriptionId")
- .primaryKey()
- .notNull(),
+ subscriptionId: text("subscriptionId").primaryKey().notNull(),
customerId: text("customerId")
.notNull()
.references(() => customers.customerId, { onDelete: "cascade" }),
@@ -75,7 +75,9 @@ export const subscriptions = sqliteTable("subscriptions", {
});
export const subscriptionItems = sqliteTable("subscriptionItems", {
- subscriptionItemId: integer("subscriptionItemId").primaryKey({ autoIncrement: true }),
+ subscriptionItemId: integer("subscriptionItemId").primaryKey({
+ autoIncrement: true
+ }),
subscriptionId: text("subscriptionId")
.notNull()
.references(() => subscriptions.subscriptionId, {
@@ -129,7 +131,9 @@ export const limits = sqliteTable("limits", {
});
export const usageNotifications = sqliteTable("usageNotifications", {
- notificationId: integer("notificationId").primaryKey({ autoIncrement: true }),
+ notificationId: integer("notificationId").primaryKey({
+ autoIncrement: true
+ }),
orgId: text("orgId")
.notNull()
.references(() => orgs.orgId, { onDelete: "cascade" }),
@@ -210,42 +214,56 @@ export const sessionTransferToken = sqliteTable("sessionTransferToken", {
expiresAt: integer("expiresAt").notNull()
});
-export const actionAuditLog = sqliteTable("actionAuditLog", {
- id: integer("id").primaryKey({ autoIncrement: true }),
- timestamp: integer("timestamp").notNull(), // this is EPOCH time in seconds
- orgId: text("orgId")
- .notNull()
- .references(() => orgs.orgId, { onDelete: "cascade" }),
- actorType: text("actorType").notNull(),
- actor: text("actor").notNull(),
- actorId: text("actorId").notNull(),
- action: text("action").notNull(),
- metadata: text("metadata")
-}, (table) => ([
- index("idx_actionAuditLog_timestamp").on(table.timestamp),
- index("idx_actionAuditLog_org_timestamp").on(table.orgId, table.timestamp)
-]));
+export const actionAuditLog = sqliteTable(
+ "actionAuditLog",
+ {
+ id: integer("id").primaryKey({ autoIncrement: true }),
+ timestamp: integer("timestamp").notNull(), // this is EPOCH time in seconds
+ orgId: text("orgId")
+ .notNull()
+ .references(() => orgs.orgId, { onDelete: "cascade" }),
+ actorType: text("actorType").notNull(),
+ actor: text("actor").notNull(),
+ actorId: text("actorId").notNull(),
+ action: text("action").notNull(),
+ metadata: text("metadata")
+ },
+ (table) => [
+ index("idx_actionAuditLog_timestamp").on(table.timestamp),
+ index("idx_actionAuditLog_org_timestamp").on(
+ table.orgId,
+ table.timestamp
+ )
+ ]
+);
-export const accessAuditLog = sqliteTable("accessAuditLog", {
- id: integer("id").primaryKey({ autoIncrement: true }),
- timestamp: integer("timestamp").notNull(), // this is EPOCH time in seconds
- orgId: text("orgId")
- .notNull()
- .references(() => orgs.orgId, { onDelete: "cascade" }),
- actorType: text("actorType"),
- actor: text("actor"),
- actorId: text("actorId"),
- resourceId: integer("resourceId"),
- ip: text("ip"),
- location: text("location"),
- type: text("type").notNull(),
- action: integer("action", { mode: "boolean" }).notNull(),
- userAgent: text("userAgent"),
- metadata: text("metadata")
-}, (table) => ([
- index("idx_identityAuditLog_timestamp").on(table.timestamp),
- index("idx_identityAuditLog_org_timestamp").on(table.orgId, table.timestamp)
-]));
+export const accessAuditLog = sqliteTable(
+ "accessAuditLog",
+ {
+ id: integer("id").primaryKey({ autoIncrement: true }),
+ timestamp: integer("timestamp").notNull(), // this is EPOCH time in seconds
+ orgId: text("orgId")
+ .notNull()
+ .references(() => orgs.orgId, { onDelete: "cascade" }),
+ actorType: text("actorType"),
+ actor: text("actor"),
+ actorId: text("actorId"),
+ resourceId: integer("resourceId"),
+ ip: text("ip"),
+ location: text("location"),
+ type: text("type").notNull(),
+ action: integer("action", { mode: "boolean" }).notNull(),
+ userAgent: text("userAgent"),
+ metadata: text("metadata")
+ },
+ (table) => [
+ index("idx_identityAuditLog_timestamp").on(table.timestamp),
+ index("idx_identityAuditLog_org_timestamp").on(
+ table.orgId,
+ table.timestamp
+ )
+ ]
+);
export type Limit = InferSelectModel;
export type Account = InferSelectModel;
@@ -265,4 +283,4 @@ export type RemoteExitNodeSession = InferSelectModel<
export type ExitNodeOrg = InferSelectModel;
export type LoginPage = InferSelectModel;
export type ActionAuditLog = InferSelectModel<typeof actionAuditLog>;
-export type AccessAuditLog = InferSelectModel<typeof accessAuditLog>;
\ No newline at end of file
+export type AccessAuditLog = InferSelectModel<typeof accessAuditLog>;
diff --git a/server/db/sqlite/schema/schema.ts b/server/db/sqlite/schema/schema.ts
index 6e17cac4..de8ad8d0 100644
--- a/server/db/sqlite/schema/schema.ts
+++ b/server/db/sqlite/schema/schema.ts
@@ -234,7 +234,10 @@ export const siteResources = sqliteTable("siteResources", {
destination: text("destination").notNull(), // ip, cidr, hostname
enabled: integer("enabled", { mode: "boolean" }).notNull().default(true),
alias: text("alias"),
- aliasAddress: text("aliasAddress")
+ aliasAddress: text("aliasAddress"),
+ tcpPortRangeString: text("tcpPortRangeString"),
+ udpPortRangeString: text("udpPortRangeString"),
+ disableIcmp: integer("disableIcmp", { mode: "boolean" })
});
export const clientSiteResources = sqliteTable("clientSiteResources", {
diff --git a/server/emails/index.ts b/server/emails/index.ts
index 42cfa39c..01cc6610 100644
--- a/server/emails/index.ts
+++ b/server/emails/index.ts
@@ -18,10 +18,13 @@ function createEmailClient() {
host: emailConfig.smtp_host,
port: emailConfig.smtp_port,
secure: emailConfig.smtp_secure || false,
- auth: (emailConfig.smtp_user && emailConfig.smtp_pass) ? {
- user: emailConfig.smtp_user,
- pass: emailConfig.smtp_pass
- } : null
+ auth:
+ emailConfig.smtp_user && emailConfig.smtp_pass
+ ? {
+ user: emailConfig.smtp_user,
+ pass: emailConfig.smtp_pass
+ }
+ : null
} as SMTPTransport.Options;
if (emailConfig.smtp_tls_reject_unauthorized !== undefined) {
diff --git a/server/emails/sendEmail.ts b/server/emails/sendEmail.ts
index c8a0b077..32a5fb47 100644
--- a/server/emails/sendEmail.ts
+++ b/server/emails/sendEmail.ts
@@ -10,6 +10,7 @@ export async function sendEmail(
from: string | undefined;
to: string | undefined;
subject: string;
+ replyTo?: string;
}
) {
if (!emailClient) {
@@ -32,6 +33,7 @@ export async function sendEmail(
address: opts.from
},
to: opts.to,
+ replyTo: opts.replyTo,
subject: opts.subject,
html: emailHtml
});
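The `sendEmail.ts` change threads a new optional `replyTo` through to the message options. A self-contained sketch of how that option flows into the object handed to the mail transport (the helper itself is illustrative; only the field names come from the diff):

```typescript
// Illustrative builder for the message options, mirroring the new optional
// replyTo. When the caller omits it, the field is simply left undefined.
interface SendOpts {
    from: string | undefined;
    to: string | undefined;
    subject: string;
    replyTo?: string;
}

function buildMessage(opts: SendOpts, emailHtml: string) {
    return {
        from: { name: "Pangolin", address: opts.from },
        to: opts.to,
        replyTo: opts.replyTo,
        subject: opts.subject,
        html: emailHtml
    };
}
```

Because `replyTo` is optional on both the signature and the message object, existing call sites compile unchanged.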
diff --git a/server/emails/templates/NotifyUsageLimitApproaching.tsx b/server/emails/templates/NotifyUsageLimitApproaching.tsx
index beab0300..161b3676 100644
--- a/server/emails/templates/NotifyUsageLimitApproaching.tsx
+++ b/server/emails/templates/NotifyUsageLimitApproaching.tsx
@@ -19,7 +19,13 @@ interface Props {
billingLink: string; // Link to billing page
}
-export const NotifyUsageLimitApproaching = ({ email, limitName, currentUsage, usageLimit, billingLink }: Props) => {
+export const NotifyUsageLimitApproaching = ({
+ email,
+ limitName,
+ currentUsage,
+ usageLimit,
+ billingLink
+}: Props) => {
const previewText = `Your usage for ${limitName} is approaching the limit.`;
const usagePercentage = Math.round((currentUsage / usageLimit) * 100);
@@ -37,23 +43,32 @@ export const NotifyUsageLimitApproaching = ({ email, limitName, currentUsage, us
Hi there,
- We wanted to let you know that your usage for {limitName} is approaching your plan limit.
+ We wanted to let you know that your usage for{" "}
+ {limitName} is approaching your
+ plan limit.
- Current Usage: {currentUsage} of {usageLimit} ({usagePercentage}%)
+ Current Usage: {currentUsage} of{" "}
+ {usageLimit} ({usagePercentage}%)
- Once you reach your limit, some functionality may be restricted or your sites may disconnect until you upgrade your plan or your usage resets.
+ Once you reach your limit, some functionality may be
+ restricted or your sites may disconnect until you
+ upgrade your plan or your usage resets.
- To avoid any interruption to your service, we recommend upgrading your plan or monitoring your usage closely. You can upgrade your plan here.
+ To avoid any interruption to your service, we
+ recommend upgrading your plan or monitoring your
+ usage closely. You can{" "}
+ upgrade your plan here.
- If you have any questions or need assistance, please don't hesitate to reach out to our support team.
+ If you have any questions or need assistance, please
+ don't hesitate to reach out to our support team.
diff --git a/server/emails/templates/NotifyUsageLimitReached.tsx b/server/emails/templates/NotifyUsageLimitReached.tsx
index 783d1b0e..59841670 100644
--- a/server/emails/templates/NotifyUsageLimitReached.tsx
+++ b/server/emails/templates/NotifyUsageLimitReached.tsx
@@ -19,7 +19,13 @@ interface Props {
billingLink: string; // Link to billing page
}
-export const NotifyUsageLimitReached = ({ email, limitName, currentUsage, usageLimit, billingLink }: Props) => {
+export const NotifyUsageLimitReached = ({
+ email,
+ limitName,
+ currentUsage,
+ usageLimit,
+ billingLink
+}: Props) => {
const previewText = `You've reached your ${limitName} usage limit - Action required`;
const usagePercentage = Math.round((currentUsage / usageLimit) * 100);
@@ -32,30 +38,48 @@ export const NotifyUsageLimitReached = ({ email, limitName, currentUsage, usageL
- Usage Limit Reached - Action Required
+
+ Usage Limit Reached - Action Required
+ Hi there,
- You have reached your usage limit for {limitName}.
+ You have reached your usage limit for{" "}
+ {limitName}.
- Current Usage: {currentUsage} of {usageLimit} ({usagePercentage}%)
+ Current Usage: {currentUsage} of{" "}
+ {usageLimit} ({usagePercentage}%)
- Important: Your functionality may now be restricted and your sites may disconnect until you either upgrade your plan or your usage resets. To prevent any service interruption, immediate action is recommended.
+ Important: Your functionality may
+ now be restricted and your sites may disconnect
+ until you either upgrade your plan or your usage
+ resets. To prevent any service interruption,
+ immediate action is recommended.
What you can do:
- • Upgrade your plan immediately to restore full functionality
- • Monitor your usage to stay within limits in the future
+ •{" "}
+
+ Upgrade your plan immediately
+ {" "}
+ to restore full functionality
+ • Monitor your usage to stay within limits in
+ the future
- If you have any questions or need immediate assistance, please contact our support team right away.
+ If you have any questions or need immediate
+ assistance, please contact our support team right
+ away.
diff --git a/server/integrationApiServer.ts b/server/integrationApiServer.ts
index 3416004c..0ef0c0af 100644
--- a/server/integrationApiServer.ts
+++ b/server/integrationApiServer.ts
@@ -5,7 +5,7 @@ import config from "@server/lib/config";
import logger from "@server/logger";
import {
errorHandlerMiddleware,
- notFoundMiddleware,
+ notFoundMiddleware
} from "@server/middlewares";
import { authenticated, unauthenticated } from "#dynamic/routers/integration";
import { logIncomingMiddleware } from "./middlewares/logIncoming";
diff --git a/server/lib/billing/features.ts b/server/lib/billing/features.ts
index b72543cc..d074894a 100644
--- a/server/lib/billing/features.ts
+++ b/server/lib/billing/features.ts
@@ -25,16 +25,22 @@ export const FeatureMeterIdsSandbox: Record<FeatureId, string> = {
};
export function getFeatureMeterId(featureId: FeatureId): string {
- if (process.env.ENVIRONMENT == "prod" && process.env.SANDBOX_MODE !== "true") {
+ if (
+ process.env.ENVIRONMENT == "prod" &&
+ process.env.SANDBOX_MODE !== "true"
+ ) {
return FeatureMeterIds[featureId];
} else {
return FeatureMeterIdsSandbox[featureId];
}
}
-export function getFeatureIdByMetricId(metricId: string): FeatureId | undefined {
- return (Object.entries(FeatureMeterIds) as [FeatureId, string][])
- .find(([_, v]) => v === metricId)?.[0];
+export function getFeatureIdByMetricId(
+ metricId: string
+): FeatureId | undefined {
+ return (Object.entries(FeatureMeterIds) as [FeatureId, string][]).find(
+ ([_, v]) => v === metricId
+ )?.[0];
}
export type FeaturePriceSet = {
@@ -43,7 +49,8 @@ export type FeaturePriceSet = {
[FeatureId.DOMAINS]?: string; // Optional since domains are not billed
};
-export const standardFeaturePriceSet: FeaturePriceSet = { // Free tier matches the freeLimitSet
+export const standardFeaturePriceSet: FeaturePriceSet = {
+ // Free tier matches the freeLimitSet
[FeatureId.SITE_UPTIME]: "price_1RrQc4D3Ee2Ir7WmaJGZ3MtF",
[FeatureId.USERS]: "price_1RrQeJD3Ee2Ir7WmgveP3xea",
[FeatureId.EGRESS_DATA_MB]: "price_1RrQXFD3Ee2Ir7WmvGDlgxQk",
@@ -51,7 +58,8 @@ export const standardFeaturePriceSet: FeaturePriceSet = { // Free tier matches t
[FeatureId.REMOTE_EXIT_NODES]: "price_1S46weD3Ee2Ir7Wm94KEHI4h"
};
-export const standardFeaturePriceSetSandbox: FeaturePriceSet = { // Free tier matches the freeLimitSet
+export const standardFeaturePriceSetSandbox: FeaturePriceSet = {
+ // Free tier matches the freeLimitSet
[FeatureId.SITE_UPTIME]: "price_1RefFBDCpkOb237BPrKZ8IEU",
[FeatureId.USERS]: "price_1ReNa4DCpkOb237Bc67G5muF",
[FeatureId.EGRESS_DATA_MB]: "price_1Rfp9LDCpkOb237BwuN5Oiu0",
@@ -60,15 +68,20 @@ export const standardFeaturePriceSetSandbox: FeaturePriceSet = { // Free tier ma
};
export function getStandardFeaturePriceSet(): FeaturePriceSet {
- if (process.env.ENVIRONMENT == "prod" && process.env.SANDBOX_MODE !== "true") {
+ if (
+ process.env.ENVIRONMENT == "prod" &&
+ process.env.SANDBOX_MODE !== "true"
+ ) {
return standardFeaturePriceSet;
} else {
return standardFeaturePriceSetSandbox;
}
}
-export function getLineItems(featurePriceSet: FeaturePriceSet): Stripe.Checkout.SessionCreateParams.LineItem[] {
+export function getLineItems(
+ featurePriceSet: FeaturePriceSet
+): Stripe.Checkout.SessionCreateParams.LineItem[] {
return Object.entries(featurePriceSet).map(([featureId, priceId]) => ({
- price: priceId,
+ price: priceId
}));
-}
\ No newline at end of file
+}
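The reformatted `getFeatureIdByMetricId` above is a reverse lookup over the `FeatureMeterIds` record. A runnable sketch of the same pattern with placeholder enum values and meter IDs (the real IDs are Stripe meter identifiers not shown here):

```typescript
// Reverse lookup from a meter/metric ID back to its FeatureId key, matching
// the entries-then-find pattern in features.ts. Enum members and meter IDs
// are placeholders.
enum FeatureId {
    USERS = "users",
    DOMAINS = "domains"
}

const FeatureMeterIds: Record<FeatureId, string> = {
    [FeatureId.USERS]: "mtr_users",
    [FeatureId.DOMAINS]: "mtr_domains"
};

function getFeatureIdByMetricId(metricId: string): FeatureId | undefined {
    // Object.entries loses the key type, hence the cast before find().
    return (Object.entries(FeatureMeterIds) as [FeatureId, string][]).find(
        ([, v]) => v === metricId
    )?.[0];
}
```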
diff --git a/server/lib/billing/index.ts b/server/lib/billing/index.ts
index 6c3ef792..54c9ee2e 100644
--- a/server/lib/billing/index.ts
+++ b/server/lib/billing/index.ts
@@ -2,4 +2,4 @@ export * from "./limitSet";
export * from "./features";
export * from "./limitsService";
export * from "./getOrgTierData";
-export * from "./createCustomer";
\ No newline at end of file
+export * from "./createCustomer";
diff --git a/server/lib/billing/limitSet.ts b/server/lib/billing/limitSet.ts
index 153d8ae8..820b121a 100644
--- a/server/lib/billing/limitSet.ts
+++ b/server/lib/billing/limitSet.ts
@@ -12,7 +12,7 @@ export const sandboxLimitSet: LimitSet = {
[FeatureId.USERS]: { value: 1, description: "Sandbox limit" },
[FeatureId.EGRESS_DATA_MB]: { value: 1000, description: "Sandbox limit" }, // 1 GB
[FeatureId.DOMAINS]: { value: 0, description: "Sandbox limit" },
- [FeatureId.REMOTE_EXIT_NODES]: { value: 0, description: "Sandbox limit" },
+ [FeatureId.REMOTE_EXIT_NODES]: { value: 0, description: "Sandbox limit" }
};
export const freeLimitSet: LimitSet = {
@@ -29,7 +29,7 @@ export const freeLimitSet: LimitSet = {
export const subscribedLimitSet: LimitSet = {
[FeatureId.SITE_UPTIME]: {
value: 2232000,
- description: "Contact us to increase soft limit.",
+ description: "Contact us to increase soft limit."
}, // 50 sites up for 31 days
[FeatureId.USERS]: {
value: 150,
@@ -38,7 +38,7 @@ export const subscribedLimitSet: LimitSet = {
[FeatureId.EGRESS_DATA_MB]: {
value: 12000000,
description: "Contact us to increase soft limit."
- }, // 12000 GB
+ }, // 12000 GB
[FeatureId.DOMAINS]: {
value: 25,
description: "Contact us to increase soft limit."
diff --git a/server/lib/billing/tiers.ts b/server/lib/billing/tiers.ts
index 6ccf8898..ae49a48f 100644
--- a/server/lib/billing/tiers.ts
+++ b/server/lib/billing/tiers.ts
@@ -1,22 +1,32 @@
export enum TierId {
- STANDARD = "standard",
+ STANDARD = "standard"
}
export type TierPriceSet = {
[key in TierId]: string;
};
-export const tierPriceSet: TierPriceSet = { // Free tier matches the freeLimitSet
- [TierId.STANDARD]: "price_1RrQ9cD3Ee2Ir7Wmqdy3KBa0",
+export const tierPriceSet: TierPriceSet = {
+ // Free tier matches the freeLimitSet
+ [TierId.STANDARD]: "price_1RrQ9cD3Ee2Ir7Wmqdy3KBa0"
};
-export const tierPriceSetSandbox: TierPriceSet = { // Free tier matches the freeLimitSet
+export const tierPriceSetSandbox: TierPriceSet = {
+ // Free tier matches the freeLimitSet
// when matching tier the keys closer to 0 index are matched first so list the tiers in descending order of value
- [TierId.STANDARD]: "price_1RrAYJDCpkOb237By2s1P32m",
+ [TierId.STANDARD]: "price_1RrAYJDCpkOb237By2s1P32m"
};
-export function getTierPriceSet(environment?: string, sandbox_mode?: boolean): TierPriceSet {
- if ((process.env.ENVIRONMENT == "prod" && process.env.SANDBOX_MODE !== "true") || (environment === "prod" && sandbox_mode !== true)) { // THIS GETS LOADED CLIENT SIDE AND SERVER SIDE
+export function getTierPriceSet(
+ environment?: string,
+ sandbox_mode?: boolean
+): TierPriceSet {
+ if (
+ (process.env.ENVIRONMENT == "prod" &&
+ process.env.SANDBOX_MODE !== "true") ||
+ (environment === "prod" && sandbox_mode !== true)
+ ) {
+ // THIS GETS LOADED CLIENT SIDE AND SERVER SIDE
return tierPriceSet;
} else {
return tierPriceSetSandbox;
diff --git a/server/lib/billing/usageService.ts b/server/lib/billing/usageService.ts
index 8e6f5e9c..0fde8eba 100644
--- a/server/lib/billing/usageService.ts
+++ b/server/lib/billing/usageService.ts
@@ -19,7 +19,7 @@ import logger from "@server/logger";
import { sendToClient } from "#dynamic/routers/ws";
import { build } from "@server/build";
import { s3Client } from "@server/lib/s3";
-import cache from "@server/lib/cache";
+import cache from "@server/lib/cache";
interface StripeEvent {
identifier?: string;
diff --git a/server/lib/blueprints/applyNewtDockerBlueprint.ts b/server/lib/blueprints/applyNewtDockerBlueprint.ts
index 0fe7c3fe..f27cc05b 100644
--- a/server/lib/blueprints/applyNewtDockerBlueprint.ts
+++ b/server/lib/blueprints/applyNewtDockerBlueprint.ts
@@ -34,7 +34,10 @@ export async function applyNewtDockerBlueprint(
return;
}
- if (isEmptyObject(blueprint["proxy-resources"]) && isEmptyObject(blueprint["client-resources"])) {
+ if (
+ isEmptyObject(blueprint["proxy-resources"]) &&
+ isEmptyObject(blueprint["client-resources"])
+ ) {
return;
}
diff --git a/server/lib/blueprints/parseDockerContainers.ts b/server/lib/blueprints/parseDockerContainers.ts
index 1510e6e1..f2cdcfa2 100644
--- a/server/lib/blueprints/parseDockerContainers.ts
+++ b/server/lib/blueprints/parseDockerContainers.ts
@@ -84,12 +84,20 @@ export function processContainerLabels(containers: Container[]): {
// Process proxy resources
if (Object.keys(proxyResourceLabels).length > 0) {
- processResourceLabels(proxyResourceLabels, container, result["proxy-resources"]);
+ processResourceLabels(
+ proxyResourceLabels,
+ container,
+ result["proxy-resources"]
+ );
}
// Process client resources
if (Object.keys(clientResourceLabels).length > 0) {
- processResourceLabels(clientResourceLabels, container, result["client-resources"]);
+ processResourceLabels(
+ clientResourceLabels,
+ container,
+ result["client-resources"]
+ );
}
});
@@ -161,8 +169,7 @@ function processResourceLabels(
const finalTarget = { ...target };
if (!finalTarget.hostname) {
finalTarget.hostname =
- container.name ||
- container.hostname;
+ container.name || container.hostname;
}
if (!finalTarget.port) {
const containerPort =
diff --git a/server/lib/blueprints/proxyResources.ts b/server/lib/blueprints/proxyResources.ts
index 738a833f..706fab12 100644
--- a/server/lib/blueprints/proxyResources.ts
+++ b/server/lib/blueprints/proxyResources.ts
@@ -1086,10 +1086,8 @@ async function getDomainId(
// remove the base domain of the domain
let subdomain = null;
- if (domainSelection.type == "ns" || domainSelection.type == "wildcard") {
- if (fullDomain != baseDomain) {
- subdomain = fullDomain.replace(`.${baseDomain}`, "");
- }
+ if (fullDomain != baseDomain) {
+ subdomain = fullDomain.replace(`.${baseDomain}`, "");
}
// Return the first valid domain
diff --git a/server/lib/blueprints/types.ts b/server/lib/blueprints/types.ts
index 9a184a1f..23e2176f 100644
--- a/server/lib/blueprints/types.ts
+++ b/server/lib/blueprints/types.ts
@@ -312,7 +312,7 @@ export const ConfigSchema = z
};
delete (data as any)["public-resources"];
}
-
+
// Merge private-resources into client-resources
if (data["private-resources"]) {
data["client-resources"] = {
@@ -321,10 +321,13 @@ export const ConfigSchema = z
};
delete (data as any)["private-resources"];
}
-
+
return data as {
"proxy-resources": Record>;
- "client-resources": Record>;
+ "client-resources": Record<
+ string,
+ z.infer
+ >;
sites: Record>;
};
})
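The `ConfigSchema` transform above folds the legacy `private-resources` key into `client-resources` (and, per the earlier hunk, `public-resources` into `proxy-resources`). A minimal standalone sketch of that merge step, with the key names taken from the diff and the per-resource schema details omitted:

```typescript
// Sketch of the alias merge performed in the ConfigSchema transform:
// legacy "private-resources" entries are spread into "client-resources"
// (later keys win on collision) and the legacy key is then removed.
type Config = Record<string, Record<string, unknown> | undefined>;

function mergePrivateResources(data: Config): Config {
    if (data["private-resources"]) {
        data["client-resources"] = {
            ...data["client-resources"],
            ...data["private-resources"]
        };
        delete data["private-resources"];
    }
    return data;
}

console.log(mergePrivateResources({ "private-resources": { a: { port: 80 } } }));
// → { 'client-resources': { a: { port: 80 } } }
```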
diff --git a/server/lib/cache.ts b/server/lib/cache.ts
index efa7d201..82c80280 100644
--- a/server/lib/cache.ts
+++ b/server/lib/cache.ts
@@ -2,4 +2,4 @@ import NodeCache from "node-cache";
export const cache = new NodeCache({ stdTTL: 3600, checkperiod: 120 });
-export default cache;
\ No newline at end of file
+export default cache;
diff --git a/server/lib/calculateUserClientsForOrgs.ts b/server/lib/calculateUserClientsForOrgs.ts
index f7666a36..ac3d719f 100644
--- a/server/lib/calculateUserClientsForOrgs.ts
+++ b/server/lib/calculateUserClientsForOrgs.ts
@@ -166,7 +166,10 @@ export async function calculateUserClientsForOrgs(
];
// Get next available subnet
- const newSubnet = await getNextAvailableClientSubnet(orgId);
+ const newSubnet = await getNextAvailableClientSubnet(
+ orgId,
+ transaction
+ );
if (!newSubnet) {
logger.warn(
`Skipping org ${orgId} for OLM ${olm.olmId} (user ${userId}): no available subnet found`
diff --git a/server/lib/certificates.ts b/server/lib/certificates.ts
index a6c51c96..f5860ff3 100644
--- a/server/lib/certificates.ts
+++ b/server/lib/certificates.ts
@@ -1,4 +1,6 @@
-export async function getValidCertificatesForDomains(domains: Set<string>): Promise<
+export async function getValidCertificatesForDomains(
+    domains: Set<string>
+): Promise<
Array<{
id: number;
domain: string;
@@ -10,4 +10,4 @@ export async function getValidCertificatesForDomains(domains: Set<string>): Prom
}>
> {
return []; // stub
-}
\ No newline at end of file
+}
diff --git a/server/lib/cleanupLogs.test.ts b/server/lib/cleanupLogs.test.ts
index a65e7b01..dc9326e1 100644
--- a/server/lib/cleanupLogs.test.ts
+++ b/server/lib/cleanupLogs.test.ts
@@ -7,7 +7,10 @@ function dateToTimestamp(dateStr: string): number {
// Testable version of calculateCutoffTimestamp that accepts a "now" timestamp
// This matches the logic in cleanupLogs.ts but allows injecting the current time
-function calculateCutoffTimestampWithNow(retentionDays: number, nowTimestamp: number): number {
+function calculateCutoffTimestampWithNow(
+ retentionDays: number,
+ nowTimestamp: number
+): number {
if (retentionDays === 9001) {
// Special case: data is erased at the end of the year following the year it was generated
// This means we delete logs from 2 years ago or older (logs from year Y are deleted after Dec 31 of year Y+1)
@@ -28,7 +31,7 @@ function testCalculateCutoffTimestamp() {
{
const now = dateToTimestamp("2025-12-06T12:00:00Z");
const result = calculateCutoffTimestampWithNow(30, now);
- const expected = now - (30 * 24 * 60 * 60);
+ const expected = now - 30 * 24 * 60 * 60;
assertEquals(result, expected, "30 days retention calculation failed");
}
@@ -36,7 +39,7 @@ function testCalculateCutoffTimestamp() {
{
const now = dateToTimestamp("2025-06-15T00:00:00Z");
const result = calculateCutoffTimestampWithNow(90, now);
- const expected = now - (90 * 24 * 60 * 60);
+ const expected = now - 90 * 24 * 60 * 60;
assertEquals(result, expected, "90 days retention calculation failed");
}
@@ -48,7 +51,11 @@ function testCalculateCutoffTimestamp() {
const now = dateToTimestamp("2025-12-06T12:00:00Z");
const result = calculateCutoffTimestampWithNow(9001, now);
const expected = dateToTimestamp("2024-01-01T00:00:00Z");
- assertEquals(result, expected, "9001 retention (Dec 2025) - should cutoff at Jan 1, 2024");
+ assertEquals(
+ result,
+ expected,
+ "9001 retention (Dec 2025) - should cutoff at Jan 1, 2024"
+ );
}
// Test 4: Special case 9001 - January 2026
@@ -58,7 +65,11 @@ function testCalculateCutoffTimestamp() {
const now = dateToTimestamp("2026-01-15T12:00:00Z");
const result = calculateCutoffTimestampWithNow(9001, now);
const expected = dateToTimestamp("2025-01-01T00:00:00Z");
- assertEquals(result, expected, "9001 retention (Jan 2026) - should cutoff at Jan 1, 2025");
+ assertEquals(
+ result,
+ expected,
+ "9001 retention (Jan 2026) - should cutoff at Jan 1, 2025"
+ );
}
// Test 5: Special case 9001 - December 31, 2025 at 23:59:59 UTC
@@ -68,7 +79,11 @@ function testCalculateCutoffTimestamp() {
const now = dateToTimestamp("2025-12-31T23:59:59Z");
const result = calculateCutoffTimestampWithNow(9001, now);
const expected = dateToTimestamp("2024-01-01T00:00:00Z");
- assertEquals(result, expected, "9001 retention (Dec 31, 2025 23:59:59) - should cutoff at Jan 1, 2024");
+ assertEquals(
+ result,
+ expected,
+ "9001 retention (Dec 31, 2025 23:59:59) - should cutoff at Jan 1, 2024"
+ );
}
// Test 6: Special case 9001 - January 1, 2026 at 00:00:01 UTC
@@ -78,7 +93,11 @@ function testCalculateCutoffTimestamp() {
const now = dateToTimestamp("2026-01-01T00:00:01Z");
const result = calculateCutoffTimestampWithNow(9001, now);
const expected = dateToTimestamp("2025-01-01T00:00:00Z");
- assertEquals(result, expected, "9001 retention (Jan 1, 2026 00:00:01) - should cutoff at Jan 1, 2025");
+ assertEquals(
+ result,
+ expected,
+ "9001 retention (Jan 1, 2026 00:00:01) - should cutoff at Jan 1, 2025"
+ );
}
// Test 7: Special case 9001 - Mid year 2025
@@ -87,7 +106,11 @@ function testCalculateCutoffTimestamp() {
const now = dateToTimestamp("2025-06-15T12:00:00Z");
const result = calculateCutoffTimestampWithNow(9001, now);
const expected = dateToTimestamp("2024-01-01T00:00:00Z");
- assertEquals(result, expected, "9001 retention (mid 2025) - should cutoff at Jan 1, 2024");
+ assertEquals(
+ result,
+ expected,
+ "9001 retention (mid 2025) - should cutoff at Jan 1, 2024"
+ );
}
// Test 8: Special case 9001 - Early 2024
@@ -96,14 +119,18 @@ function testCalculateCutoffTimestamp() {
const now = dateToTimestamp("2024-02-01T12:00:00Z");
const result = calculateCutoffTimestampWithNow(9001, now);
const expected = dateToTimestamp("2023-01-01T00:00:00Z");
- assertEquals(result, expected, "9001 retention (early 2024) - should cutoff at Jan 1, 2023");
+ assertEquals(
+ result,
+ expected,
+ "9001 retention (early 2024) - should cutoff at Jan 1, 2023"
+ );
}
// Test 9: 1 day retention
{
const now = dateToTimestamp("2025-12-06T12:00:00Z");
const result = calculateCutoffTimestampWithNow(1, now);
- const expected = now - (1 * 24 * 60 * 60);
+ const expected = now - 1 * 24 * 60 * 60;
assertEquals(result, expected, "1 day retention calculation failed");
}
@@ -111,7 +138,7 @@ function testCalculateCutoffTimestamp() {
{
const now = dateToTimestamp("2025-12-06T12:00:00Z");
const result = calculateCutoffTimestampWithNow(365, now);
- const expected = now - (365 * 24 * 60 * 60);
+ const expected = now - 365 * 24 * 60 * 60;
assertEquals(result, expected, "365 days retention calculation failed");
}
@@ -123,11 +150,19 @@ function testCalculateCutoffTimestamp() {
const cutoff = calculateCutoffTimestampWithNow(9001, now);
const logFromDec2023 = dateToTimestamp("2023-12-31T23:59:59Z");
const logFromJan2024 = dateToTimestamp("2024-01-01T00:00:00Z");
-
+
// Log from Dec 2023 should be before cutoff (deleted)
- assertEquals(logFromDec2023 < cutoff, true, "Log from Dec 2023 should be deleted");
+ assertEquals(
+ logFromDec2023 < cutoff,
+ true,
+ "Log from Dec 2023 should be deleted"
+ );
// Log from Jan 2024 should be at or after cutoff (kept)
- assertEquals(logFromJan2024 >= cutoff, true, "Log from Jan 2024 should be kept");
+ assertEquals(
+ logFromJan2024 >= cutoff,
+ true,
+ "Log from Jan 2024 should be kept"
+ );
}
// Test 12: Verify 9001 in 2026 - logs from 2024 should now be deleted
@@ -136,11 +171,19 @@ function testCalculateCutoffTimestamp() {
const cutoff = calculateCutoffTimestampWithNow(9001, now);
const logFromDec2024 = dateToTimestamp("2024-12-31T23:59:59Z");
const logFromJan2025 = dateToTimestamp("2025-01-01T00:00:00Z");
-
+
// Log from Dec 2024 should be before cutoff (deleted)
- assertEquals(logFromDec2024 < cutoff, true, "Log from Dec 2024 should be deleted in 2026");
+ assertEquals(
+ logFromDec2024 < cutoff,
+ true,
+ "Log from Dec 2024 should be deleted in 2026"
+ );
// Log from Jan 2025 should be at or after cutoff (kept)
- assertEquals(logFromJan2025 >= cutoff, true, "Log from Jan 2025 should be kept in 2026");
+ assertEquals(
+ logFromJan2025 >= cutoff,
+ true,
+ "Log from Jan 2025 should be kept in 2026"
+ );
}
// Test 13: Edge case - exactly at year boundary for 9001
@@ -149,7 +192,11 @@ function testCalculateCutoffTimestamp() {
const now = dateToTimestamp("2025-01-01T00:00:00Z");
const result = calculateCutoffTimestampWithNow(9001, now);
const expected = dateToTimestamp("2024-01-01T00:00:00Z");
- assertEquals(result, expected, "9001 retention (Jan 1, 2025 00:00:00) - should cutoff at Jan 1, 2024");
+ assertEquals(
+ result,
+ expected,
+ "9001 retention (Jan 1, 2025 00:00:00) - should cutoff at Jan 1, 2024"
+ );
}
// Test 14: Verify data from 2024 is kept throughout 2025 when using 9001
@@ -157,18 +204,29 @@ function testCalculateCutoffTimestamp() {
{
// Running in June 2025
const nowJune2025 = dateToTimestamp("2025-06-15T12:00:00Z");
- const cutoffJune2025 = calculateCutoffTimestampWithNow(9001, nowJune2025);
+ const cutoffJune2025 = calculateCutoffTimestampWithNow(
+ 9001,
+ nowJune2025
+ );
const logFromJuly2024 = dateToTimestamp("2024-07-15T12:00:00Z");
-
+
// Log from July 2024 should be KEPT in June 2025
- assertEquals(logFromJuly2024 >= cutoffJune2025, true, "Log from July 2024 should be kept in June 2025");
-
+ assertEquals(
+ logFromJuly2024 >= cutoffJune2025,
+ true,
+ "Log from July 2024 should be kept in June 2025"
+ );
+
// Running in January 2026
const nowJan2026 = dateToTimestamp("2026-01-15T12:00:00Z");
const cutoffJan2026 = calculateCutoffTimestampWithNow(9001, nowJan2026);
-
+
// Log from July 2024 should be DELETED in January 2026
- assertEquals(logFromJuly2024 < cutoffJan2026, true, "Log from July 2024 should be deleted in Jan 2026");
+ assertEquals(
+ logFromJuly2024 < cutoffJan2026,
+ true,
+ "Log from July 2024 should be deleted in Jan 2026"
+ );
}
// Test 15: Verify the exact requirement - data from 2024 must be purged on December 31, 2025
@@ -176,16 +234,27 @@ function testCalculateCutoffTimestamp() {
// On Jan 1, 2026 (now 2026), data from 2024 can be deleted
{
const logFromMid2024 = dateToTimestamp("2024-06-15T12:00:00Z");
-
+
// Dec 31, 2025 23:59:59 - still 2025, log should be kept
const nowDec31_2025 = dateToTimestamp("2025-12-31T23:59:59Z");
- const cutoffDec31 = calculateCutoffTimestampWithNow(9001, nowDec31_2025);
- assertEquals(logFromMid2024 >= cutoffDec31, true, "Log from mid-2024 should be kept on Dec 31, 2025");
-
+ const cutoffDec31 = calculateCutoffTimestampWithNow(
+ 9001,
+ nowDec31_2025
+ );
+ assertEquals(
+ logFromMid2024 >= cutoffDec31,
+ true,
+ "Log from mid-2024 should be kept on Dec 31, 2025"
+ );
+
// Jan 1, 2026 00:00:00 - now 2026, log can be deleted
const nowJan1_2026 = dateToTimestamp("2026-01-01T00:00:00Z");
const cutoffJan1 = calculateCutoffTimestampWithNow(9001, nowJan1_2026);
- assertEquals(logFromMid2024 < cutoffJan1, true, "Log from mid-2024 should be deleted on Jan 1, 2026");
+ assertEquals(
+ logFromMid2024 < cutoffJan1,
+ true,
+ "Log from mid-2024 should be deleted on Jan 1, 2026"
+ );
}
console.log("All calculateCutoffTimestamp tests passed!");
diff --git a/server/lib/consts.ts b/server/lib/consts.ts
index b380023e..d1f66a9e 100644
--- a/server/lib/consts.ts
+++ b/server/lib/consts.ts
@@ -2,7 +2,7 @@ import path from "path";
import { fileURLToPath } from "url";
// This is a placeholder value replaced by the build process
-export const APP_VERSION = "1.13.0-rc.0";
+export const APP_VERSION = "1.13.1";
export const __FILENAME = fileURLToPath(import.meta.url);
export const __DIRNAME = path.dirname(__FILENAME);
diff --git a/server/lib/domainUtils.ts b/server/lib/domainUtils.ts
index d043ca51..3562df68 100644
--- a/server/lib/domainUtils.ts
+++ b/server/lib/domainUtils.ts
@@ -4,18 +4,20 @@ import { eq, and } from "drizzle-orm";
import { subdomainSchema } from "@server/lib/schemas";
import { fromError } from "zod-validation-error";
-export type DomainValidationResult = {
- success: true;
- fullDomain: string;
- subdomain: string | null;
-} | {
- success: false;
- error: string;
-};
+export type DomainValidationResult =
+ | {
+ success: true;
+ fullDomain: string;
+ subdomain: string | null;
+ }
+ | {
+ success: false;
+ error: string;
+ };
/**
* Validates a domain and constructs the full domain based on domain type and subdomain.
- *
+ *
* @param domainId - The ID of the domain to validate
* @param orgId - The organization ID to check domain access
* @param subdomain - Optional subdomain to append (for ns and wildcard domains)
@@ -34,7 +36,10 @@ export async function validateAndConstructDomain(
.where(eq(domains.domainId, domainId))
.leftJoin(
orgDomains,
- and(eq(orgDomains.orgId, orgId), eq(orgDomains.domainId, domainId))
+ and(
+ eq(orgDomains.orgId, orgId),
+ eq(orgDomains.domainId, domainId)
+ )
);
// Check if domain exists
@@ -106,7 +111,7 @@ export async function validateAndConstructDomain(
} catch (error) {
return {
success: false,
- error: `An error occurred while validating domain: ${error instanceof Error ? error.message : 'Unknown error'}`
+ error: `An error occurred while validating domain: ${error instanceof Error ? error.message : "Unknown error"}`
};
}
}
diff --git a/server/lib/encryption.ts b/server/lib/encryption.ts
index 7959fa4b..79caecd1 100644
--- a/server/lib/encryption.ts
+++ b/server/lib/encryption.ts
@@ -1,39 +1,39 @@
-import crypto from 'crypto';
+import crypto from "crypto";
export function encryptData(data: string, key: Buffer): string {
- const algorithm = 'aes-256-gcm';
- const iv = crypto.randomBytes(16);
- const cipher = crypto.createCipheriv(algorithm, key, iv);
-
- let encrypted = cipher.update(data, 'utf8', 'hex');
- encrypted += cipher.final('hex');
-
- const authTag = cipher.getAuthTag();
-
- // Combine IV, auth tag, and encrypted data
- return iv.toString('hex') + ':' + authTag.toString('hex') + ':' + encrypted;
+ const algorithm = "aes-256-gcm";
+ const iv = crypto.randomBytes(16);
+ const cipher = crypto.createCipheriv(algorithm, key, iv);
+
+ let encrypted = cipher.update(data, "utf8", "hex");
+ encrypted += cipher.final("hex");
+
+ const authTag = cipher.getAuthTag();
+
+ // Combine IV, auth tag, and encrypted data
+ return iv.toString("hex") + ":" + authTag.toString("hex") + ":" + encrypted;
}
// Helper function to decrypt data (you'll need this to read certificates)
export function decryptData(encryptedData: string, key: Buffer): string {
- const algorithm = 'aes-256-gcm';
- const parts = encryptedData.split(':');
-
- if (parts.length !== 3) {
- throw new Error('Invalid encrypted data format');
- }
-
- const iv = Buffer.from(parts[0], 'hex');
- const authTag = Buffer.from(parts[1], 'hex');
- const encrypted = parts[2];
-
- const decipher = crypto.createDecipheriv(algorithm, key, iv);
- decipher.setAuthTag(authTag);
-
- let decrypted = decipher.update(encrypted, 'hex', 'utf8');
- decrypted += decipher.final('utf8');
-
- return decrypted;
+ const algorithm = "aes-256-gcm";
+ const parts = encryptedData.split(":");
+
+ if (parts.length !== 3) {
+ throw new Error("Invalid encrypted data format");
+ }
+
+ const iv = Buffer.from(parts[0], "hex");
+ const authTag = Buffer.from(parts[1], "hex");
+ const encrypted = parts[2];
+
+ const decipher = crypto.createDecipheriv(algorithm, key, iv);
+ decipher.setAuthTag(authTag);
+
+ let decrypted = decipher.update(encrypted, "hex", "utf8");
+ decrypted += decipher.final("utf8");
+
+ return decrypted;
}
-// openssl rand -hex 32 > config/encryption.key
\ No newline at end of file
+// openssl rand -hex 32 > config/encryption.key
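The reformatted helpers above pack the IV, GCM auth tag, and ciphertext into one colon-delimited hex string. A self-contained round-trip check of that scheme, using an ephemeral in-process key instead of the `config/encryption.key` file:

```typescript
import crypto from "crypto";

// Same scheme as server/lib/encryption.ts: AES-256-GCM with a random
// 16-byte IV, output packed as "ivHex:authTagHex:cipherHex".
function encryptData(data: string, key: Buffer): string {
    const iv = crypto.randomBytes(16);
    const cipher = crypto.createCipheriv("aes-256-gcm", key, iv);
    let encrypted = cipher.update(data, "utf8", "hex");
    encrypted += cipher.final("hex");
    const authTag = cipher.getAuthTag();
    return iv.toString("hex") + ":" + authTag.toString("hex") + ":" + encrypted;
}

function decryptData(encryptedData: string, key: Buffer): string {
    const parts = encryptedData.split(":");
    if (parts.length !== 3) {
        throw new Error("Invalid encrypted data format");
    }
    const decipher = crypto.createDecipheriv(
        "aes-256-gcm",
        key,
        Buffer.from(parts[0], "hex")
    );
    decipher.setAuthTag(Buffer.from(parts[1], "hex"));
    let decrypted = decipher.update(parts[2], "hex", "utf8");
    decrypted += decipher.final("utf8");
    return decrypted;
}

// Round trip with a throwaway 32-byte key (prod uses `openssl rand -hex 32`).
const key = crypto.randomBytes(32);
console.log(decryptData(encryptData("hello", key), key)); // prints "hello"
```

Because GCM is authenticated, tampering with any of the three segments makes `decipher.final()` throw rather than return garbage.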
diff --git a/server/lib/exitNodes/getCurrentExitNodeId.ts b/server/lib/exitNodes/getCurrentExitNodeId.ts
index d895ce42..1e5c10e3 100644
--- a/server/lib/exitNodes/getCurrentExitNodeId.ts
+++ b/server/lib/exitNodes/getCurrentExitNodeId.ts
@@ -30,4 +30,4 @@ export async function getCurrentExitNodeId(): Promise {
}
}
return currentExitNodeId;
-}
\ No newline at end of file
+}
diff --git a/server/lib/exitNodes/index.ts b/server/lib/exitNodes/index.ts
index ba30ccc2..d1477a68 100644
--- a/server/lib/exitNodes/index.ts
+++ b/server/lib/exitNodes/index.ts
@@ -1,4 +1,4 @@
export * from "./exitNodes";
export * from "./exitNodeComms";
export * from "./subnet";
-export * from "./getCurrentExitNodeId";
\ No newline at end of file
+export * from "./getCurrentExitNodeId";
diff --git a/server/lib/exitNodes/subnet.ts b/server/lib/exitNodes/subnet.ts
index c06f1d05..49e28bd5 100644
--- a/server/lib/exitNodes/subnet.ts
+++ b/server/lib/exitNodes/subnet.ts
@@ -27,4 +27,4 @@ export async function getNextAvailableSubnet(): Promise<string> {
"/" +
subnet.split("/")[1];
return subnet;
-}
\ No newline at end of file
+}
diff --git a/server/lib/geoip.ts b/server/lib/geoip.ts
index 5bc29ef9..8eea4d6f 100644
--- a/server/lib/geoip.ts
+++ b/server/lib/geoip.ts
@@ -30,4 +30,4 @@ export async function getCountryCodeForIp(
}
return;
-}
\ No newline at end of file
+}
diff --git a/server/lib/idp/generateRedirectUrl.ts b/server/lib/idp/generateRedirectUrl.ts
index 077ac6f6..cf55e161 100644
--- a/server/lib/idp/generateRedirectUrl.ts
+++ b/server/lib/idp/generateRedirectUrl.ts
@@ -33,7 +33,11 @@ export async function generateOidcRedirectUrl(
)
.limit(1);
- if (res?.loginPage && res.loginPage.domainId && res.loginPage.fullDomain) {
+ if (
+ res?.loginPage &&
+ res.loginPage.domainId &&
+ res.loginPage.fullDomain
+ ) {
baseUrl = `${method}://${res.loginPage.fullDomain}`;
}
}
diff --git a/server/lib/ip.test.ts b/server/lib/ip.test.ts
index 67a2faaa..70436e05 100644
--- a/server/lib/ip.test.ts
+++ b/server/lib/ip.test.ts
@@ -4,7 +4,7 @@ import { assertEquals } from "@test/assert";
// Test cases
function testFindNextAvailableCidr() {
console.log("Running findNextAvailableCidr tests...");
-
+
// Test 0: Basic IPv4 allocation with a subnet in the wrong range
{
const existing = ["100.90.130.1/30", "100.90.128.4/30"];
@@ -23,7 +23,11 @@ function testFindNextAvailableCidr() {
{
const existing = ["10.0.0.0/16", "10.2.0.0/16"];
const result = findNextAvailableCidr(existing, 16, "10.0.0.0/8");
- assertEquals(result, "10.1.0.0/16", "Finding gap between allocations failed");
+ assertEquals(
+ result,
+ "10.1.0.0/16",
+ "Finding gap between allocations failed"
+ );
}
// Test 3: No available space
@@ -33,7 +37,7 @@ function testFindNextAvailableCidr() {
assertEquals(result, null, "No available space test failed");
}
- // Test 4: Empty existing
+ // Test 4: Empty existing
{
const existing: string[] = [];
const result = findNextAvailableCidr(existing, 30, "10.0.0.0/8");
@@ -137,4 +141,4 @@ try {
} catch (error) {
console.error("Test failed:", error);
process.exit(1);
-}
\ No newline at end of file
+}
diff --git a/server/lib/ip.ts b/server/lib/ip.ts
index b2ff58d6..21c148ac 100644
--- a/server/lib/ip.ts
+++ b/server/lib/ip.ts
@@ -1,10 +1,4 @@
-import {
- clientSitesAssociationsCache,
- db,
- SiteResource,
- siteResources,
- Transaction
-} from "@server/db";
+import { db, SiteResource, siteResources, Transaction } from "@server/db";
import { clients, orgs, sites } from "@server/db";
import { and, eq, isNotNull } from "drizzle-orm";
import config from "@server/lib/config";
@@ -120,11 +114,13 @@ function bigIntToIp(num: bigint, version: IPVersion): string {
* Parses an endpoint string (ip:port) handling both IPv4 and IPv6 addresses.
* IPv6 addresses may be bracketed like [::1]:8080 or unbracketed like ::1:8080.
* For unbracketed IPv6, the last colon-separated segment is treated as the port.
- *
+ *
* @param endpoint The endpoint string to parse (e.g., "192.168.1.1:8080" or "[::1]:8080" or "2607:fea8::1:8080")
* @returns An object with ip and port, or null if parsing fails
*/
-export function parseEndpoint(endpoint: string): { ip: string; port: number } | null {
+export function parseEndpoint(
+ endpoint: string
+): { ip: string; port: number } | null {
if (!endpoint) return null;
// Check for bracketed IPv6 format: [ip]:port
@@ -138,7 +134,7 @@ export function parseEndpoint(endpoint: string): { ip: string; port: number } |
// Check if this looks like IPv6 (contains multiple colons)
const colonCount = (endpoint.match(/:/g) || []).length;
-
+
if (colonCount > 1) {
// This is IPv6 - the port is after the last colon
const lastColonIndex = endpoint.lastIndexOf(":");
@@ -163,7 +159,7 @@ export function parseEndpoint(endpoint: string): { ip: string; port: number } |
/**
* Formats an IP and port into a consistent endpoint string.
* IPv6 addresses are wrapped in brackets for proper parsing.
- *
+ *
* @param ip The IP address (IPv4 or IPv6)
* @param port The port number
* @returns Formatted endpoint string
@@ -306,9 +302,13 @@ export function isIpInCidr(ip: string, cidr: string): boolean {
}
export async function getNextAvailableClientSubnet(
- orgId: string
+ orgId: string,
+ transaction: Transaction | typeof db = db
 ): Promise<string> {
- const [org] = await db.select().from(orgs).where(eq(orgs.orgId, orgId));
+ const [org] = await transaction
+ .select()
+ .from(orgs)
+ .where(eq(orgs.orgId, orgId));
if (!org) {
throw new Error(`Organization with ID ${orgId} not found`);
@@ -318,14 +318,14 @@ export async function getNextAvailableClientSubnet(
throw new Error(`Organization with ID ${orgId} has no subnet defined`);
}
- const existingAddressesSites = await db
+ const existingAddressesSites = await transaction
.select({
address: sites.address
})
.from(sites)
.where(and(isNotNull(sites.address), eq(sites.orgId, orgId)));
- const existingAddressesClients = await db
+ const existingAddressesClients = await transaction
.select({
address: clients.subnet
})
@@ -421,10 +421,17 @@ export async function getNextAvailableOrgSubnet(): Promise<string> {
return subnet;
}
-export function generateRemoteSubnets(allSiteResources: SiteResource[]): string[] {
+export function generateRemoteSubnets(
+ allSiteResources: SiteResource[]
+): string[] {
const remoteSubnets = allSiteResources
.filter((sr) => {
- if (sr.mode === "cidr") return true;
+ if (sr.mode === "cidr") {
+ // check if its a valid CIDR using zod
+ const cidrSchema = z.union([z.cidrv4(), z.cidrv6()]);
+ const parseResult = cidrSchema.safeParse(sr.destination);
+ return parseResult.success;
+ }
if (sr.mode === "host") {
// check if its a valid IP using zod
const ipSchema = z.union([z.ipv4(), z.ipv6()]);
@@ -448,22 +455,23 @@ export function generateRemoteSubnets(allSiteResources: SiteResource[]): string[
export type Alias = { alias: string | null; aliasAddress: string | null };
export function generateAliasConfig(allSiteResources: SiteResource[]): Alias[] {
- let aliasConfigs = allSiteResources
+ return allSiteResources
.filter((sr) => sr.alias && sr.aliasAddress && sr.mode == "host")
.map((sr) => ({
alias: sr.alias,
aliasAddress: sr.aliasAddress
}));
- return aliasConfigs;
}
export type SubnetProxyTarget = {
sourcePrefix: string; // must be a cidr
destPrefix: string; // must be a cidr
+ disableIcmp?: boolean;
rewriteTo?: string; // must be a cidr
portRange?: {
min: number;
max: number;
+ protocol: "tcp" | "udp";
}[];
};
@@ -493,6 +501,11 @@ export function generateSubnetProxyTargets(
}
const clientPrefix = `${clientSite.subnet.split("/")[0]}/32`;
+ const portRange = [
+ ...parsePortRangeString(siteResource.tcpPortRangeString, "tcp"),
+ ...parsePortRangeString(siteResource.udpPortRangeString, "udp")
+ ];
+ const disableIcmp = siteResource.disableIcmp ?? false;
if (siteResource.mode == "host") {
let destination = siteResource.destination;
@@ -503,7 +516,9 @@ export function generateSubnetProxyTargets(
targets.push({
sourcePrefix: clientPrefix,
- destPrefix: destination
+ destPrefix: destination,
+ portRange,
+ disableIcmp
});
}
@@ -512,13 +527,17 @@ export function generateSubnetProxyTargets(
targets.push({
sourcePrefix: clientPrefix,
destPrefix: `${siteResource.aliasAddress}/32`,
- rewriteTo: destination
+ rewriteTo: destination,
+ portRange,
+ disableIcmp
});
}
} else if (siteResource.mode == "cidr") {
targets.push({
sourcePrefix: clientPrefix,
- destPrefix: siteResource.destination
+ destPrefix: siteResource.destination,
+ portRange,
+ disableIcmp
});
}
}
@@ -530,3 +549,117 @@ export function generateSubnetProxyTargets(
return targets;
}
+
+// Custom schema for validating port range strings
+// Format: "80,443,8000-9000" or "*" for all ports, or empty string
+export const portRangeStringSchema = z
+ .string()
+ .optional()
+ .refine(
+ (val) => {
+ if (!val || val.trim() === "" || val.trim() === "*") {
+ return true;
+ }
+
+ // Split by comma and validate each part
+ const parts = val.split(",").map((p) => p.trim());
+
+ for (const part of parts) {
+ if (part === "") {
+ return false; // empty parts not allowed
+ }
+
+ // Check if it's a range (contains dash)
+ if (part.includes("-")) {
+ const [start, end] = part.split("-").map((p) => p.trim());
+
+ // Both parts must be present
+ if (!start || !end) {
+ return false;
+ }
+
+ const startPort = parseInt(start, 10);
+ const endPort = parseInt(end, 10);
+
+ // Must be valid numbers
+ if (isNaN(startPort) || isNaN(endPort)) {
+ return false;
+ }
+
+ // Must be valid port range (1-65535)
+ if (
+ startPort < 1 ||
+ startPort > 65535 ||
+ endPort < 1 ||
+ endPort > 65535
+ ) {
+ return false;
+ }
+
+ // Start must be <= end
+ if (startPort > endPort) {
+ return false;
+ }
+ } else {
+ // Single port
+ const port = parseInt(part, 10);
+
+ // Must be a valid number
+ if (isNaN(port)) {
+ return false;
+ }
+
+ // Must be valid port range (1-65535)
+ if (port < 1 || port > 65535) {
+ return false;
+ }
+ }
+ }
+
+ return true;
+ },
+ {
+ message:
+ 'Port range must be "*" for all ports, or a comma-separated list of ports and ranges (e.g., "80,443,8000-9000"). Ports must be between 1 and 65535, and ranges must have start <= end.'
+ }
+ );
+
+/**
+ * Parses a port range string into an array of port range objects
+ * @param portRangeStr - Port range string (e.g., "80,443,8000-9000", "*", or "")
+ * @param protocol - Protocol to use for all ranges (default: "tcp")
+ * @returns Array of port range objects with min, max, and protocol fields
+ */
+export function parsePortRangeString(
+ portRangeStr: string | undefined | null,
+ protocol: "tcp" | "udp" = "tcp"
+): { min: number; max: number; protocol: "tcp" | "udp" }[] {
+ // Handle undefined or empty string - insert dummy value with port 0
+ if (!portRangeStr || portRangeStr.trim() === "") {
+ return [{ min: 0, max: 0, protocol }];
+ }
+
+ // Handle wildcard - return empty array (all ports allowed)
+ if (portRangeStr.trim() === "*") {
+ return [];
+ }
+
+ const result: { min: number; max: number; protocol: "tcp" | "udp" }[] = [];
+ const parts = portRangeStr.split(",").map((p) => p.trim());
+
+ for (const part of parts) {
+ if (part.includes("-")) {
+ // Range
+ const [start, end] = part.split("-").map((p) => p.trim());
+ const startPort = parseInt(start, 10);
+ const endPort = parseInt(end, 10);
+ result.push({ min: startPort, max: endPort, protocol });
+ } else {
+ // Single port
+ const port = parseInt(part, 10);
+ result.push({ min: port, max: port, protocol });
+ }
+ }
+
+ return result;
+}
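To make the three input shapes concrete, here is a standalone copy of `parsePortRangeString` from the hunk above with a short usage sketch (the `PortRange` type alias is introduced here only for brevity; validation is assumed to have happened already via the schema refinement):

```typescript
// Standalone copy of parsePortRangeString from the diff above.
type PortRange = { min: number; max: number; protocol: "tcp" | "udp" };

function parsePortRangeString(
    portRangeStr: string | undefined | null,
    protocol: "tcp" | "udp" = "tcp"
): PortRange[] {
    // Empty/undefined: dummy entry with port 0 (meaning "no ports")
    if (!portRangeStr || portRangeStr.trim() === "") {
        return [{ min: 0, max: 0, protocol }];
    }
    // Wildcard: empty array (meaning "all ports allowed")
    if (portRangeStr.trim() === "*") {
        return [];
    }
    const result: PortRange[] = [];
    for (const part of portRangeStr.split(",").map((p) => p.trim())) {
        if (part.includes("-")) {
            // Range, e.g. "8000-9000"
            const [start, end] = part.split("-").map((p) => p.trim());
            result.push({
                min: parseInt(start, 10),
                max: parseInt(end, 10),
                protocol
            });
        } else {
            // Single port, e.g. "443"
            const port = parseInt(part, 10);
            result.push({ min: port, max: port, protocol });
        }
    }
    return result;
}

parsePortRangeString("80,443,8000-9000");
// → [{min:80,max:80,...}, {min:443,max:443,...}, {min:8000,max:9000,...}]
parsePortRangeString("*"); // → []
parsePortRangeString(""); // → [{min:0,max:0,protocol:"tcp"}]
```

Note the asymmetry callers must handle: an empty *array* means "all ports," while an empty *string* produces a single `0-0` placeholder entry.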
diff --git a/server/lib/logAccessAudit.ts b/server/lib/logAccessAudit.ts
index 82ddda67..5f3601da 100644
--- a/server/lib/logAccessAudit.ts
+++ b/server/lib/logAccessAudit.ts
@@ -14,4 +14,4 @@ export async function logAccessAudit(data: {
requestIp?: string;
}) {
return;
-}
\ No newline at end of file
+}
diff --git a/server/lib/readConfigFile.ts b/server/lib/readConfigFile.ts
index ac819619..fe610663 100644
--- a/server/lib/readConfigFile.ts
+++ b/server/lib/readConfigFile.ts
@@ -14,7 +14,8 @@ export const configSchema = z
.object({
app: z
.object({
- dashboard_url: z.url()
+ dashboard_url: z
+ .url()
.pipe(z.url())
.transform((url) => url.toLowerCase())
.optional(),
@@ -255,7 +256,10 @@ export const configSchema = z
.object({
block_size: z.number().positive().gt(0).optional().default(24),
subnet_group: z.string().optional().default("100.90.128.0/24"),
- utility_subnet_group: z.string().optional().default("100.96.128.0/24") //just hardcode this for now as well
+ utility_subnet_group: z
+ .string()
+ .optional()
+ .default("100.96.128.0/24") //just hardcode this for now as well
})
.optional()
.default({
diff --git a/server/lib/rebuildClientAssociations.ts b/server/lib/rebuildClientAssociations.ts
index 60384fcf..625e5793 100644
--- a/server/lib/rebuildClientAssociations.ts
+++ b/server/lib/rebuildClientAssociations.ts
@@ -24,7 +24,7 @@ import {
deletePeer as newtDeletePeer
} from "@server/routers/newt/peers";
import {
- initPeerAddHandshake as holepunchSiteAdd,
+ initPeerAddHandshake,
deletePeer as olmDeletePeer
} from "@server/routers/olm/peers";
import { sendToExitNode } from "#dynamic/lib/exitNodes";
@@ -111,21 +111,22 @@ export async function getClientSiteResourceAccess(
const directClientIds = allClientSiteResources.map((row) => row.clientId);
// Get full client details for directly associated clients
- const directClients = directClientIds.length > 0
- ? await trx
- .select({
- clientId: clients.clientId,
- pubKey: clients.pubKey,
- subnet: clients.subnet
- })
- .from(clients)
- .where(
- and(
- inArray(clients.clientId, directClientIds),
- eq(clients.orgId, siteResource.orgId) // filter by org to prevent cross-org associations
+ const directClients =
+ directClientIds.length > 0
+ ? await trx
+ .select({
+ clientId: clients.clientId,
+ pubKey: clients.pubKey,
+ subnet: clients.subnet
+ })
+ .from(clients)
+ .where(
+ and(
+ inArray(clients.clientId, directClientIds),
+ eq(clients.orgId, siteResource.orgId) // filter by org to prevent cross-org associations
+ )
)
- )
- : [];
+ : [];
// Merge user-based clients with directly associated clients
const allClientsMap = new Map(
@@ -476,7 +477,7 @@ async function handleMessagesForSiteClients(
}
if (isAdd) {
- await holepunchSiteAdd(
+ await initPeerAddHandshake(
// this will kick off the add peer process for the client
client.clientId,
{
@@ -544,9 +545,13 @@ export async function updateClientSiteDestinations(
}
// Parse the endpoint properly for both IPv4 and IPv6
- const parsedEndpoint = parseEndpoint(site.clientSitesAssociationsCache.endpoint);
+ const parsedEndpoint = parseEndpoint(
+ site.clientSitesAssociationsCache.endpoint
+ );
if (!parsedEndpoint) {
- logger.warn(`Failed to parse endpoint ${site.clientSitesAssociationsCache.endpoint}, skipping`);
+ logger.warn(
+ `Failed to parse endpoint ${site.clientSitesAssociationsCache.endpoint}, skipping`
+ );
continue;
}
@@ -705,11 +710,46 @@ async function handleSubnetProxyTargetUpdates(
}
for (const client of removedClients) {
+ // Check if this client still has access to another resource on this site with the same destination
+ const destinationStillInUse = await trx
+ .select()
+ .from(siteResources)
+ .innerJoin(
+ clientSiteResourcesAssociationsCache,
+ eq(
+ clientSiteResourcesAssociationsCache.siteResourceId,
+ siteResources.siteResourceId
+ )
+ )
+ .where(
+ and(
+ eq(
+ clientSiteResourcesAssociationsCache.clientId,
+ client.clientId
+ ),
+ eq(siteResources.siteId, siteResource.siteId),
+ eq(
+ siteResources.destination,
+ siteResource.destination
+ ),
+ ne(
+ siteResources.siteResourceId,
+ siteResource.siteResourceId
+ )
+ )
+ );
+
+ // Only remove remote subnet if no other resource uses the same destination
+ const remoteSubnetsToRemove =
+ destinationStillInUse.length > 0
+ ? []
+ : generateRemoteSubnets([siteResource]);
+
olmJobs.push(
removePeerData(
client.clientId,
siteResource.siteId,
- generateRemoteSubnets([siteResource]),
+ remoteSubnetsToRemove,
generateAliasConfig([siteResource])
)
);
@@ -787,7 +827,10 @@ export async function rebuildClientAssociationsFromClient(
.from(roleSiteResources)
.innerJoin(
siteResources,
- eq(siteResources.siteResourceId, roleSiteResources.siteResourceId)
+ eq(
+ siteResources.siteResourceId,
+ roleSiteResources.siteResourceId
+ )
)
.where(
and(
@@ -912,28 +955,8 @@ export async function rebuildClientAssociationsFromClient(
/////////// Send messages ///////////
- // Get the olm for this client
- const [olm] = await trx
- .select({ olmId: olms.olmId })
- .from(olms)
- .where(eq(olms.clientId, client.clientId))
- .limit(1);
-
- if (!olm) {
- logger.warn(
- `Olm not found for client ${client.clientId}, skipping peer updates`
- );
- return;
- }
-
// Handle messages for sites being added
- await handleMessagesForClientSites(
- client,
- olm.olmId,
- sitesToAdd,
- sitesToRemove,
- trx
- );
+ await handleMessagesForClientSites(client, sitesToAdd, sitesToRemove, trx);
// Handle subnet proxy target updates for resources
await handleMessagesForClientResources(
@@ -953,11 +976,26 @@ async function handleMessagesForClientSites(
userId: string | null;
orgId: string;
},
- olmId: string,
sitesToAdd: number[],
sitesToRemove: number[],
trx: Transaction | typeof db = db
 ): Promise<void> {
+ // Get the olm for this client
+ const [olm] = await trx
+ .select({ olmId: olms.olmId })
+ .from(olms)
+ .where(eq(olms.clientId, client.clientId))
+ .limit(1);
+
+ if (!olm) {
+ logger.warn(
+ `Olm not found for client ${client.clientId}, skipping peer updates`
+ );
+ return;
+ }
+
+ const olmId = olm.olmId;
+
if (!client.subnet || !client.pubKey) {
logger.warn(
`Client ${client.clientId} missing subnet or pubKey, skipping peer updates`
@@ -978,9 +1016,9 @@ async function handleMessagesForClientSites(
.leftJoin(newts, eq(sites.siteId, newts.siteId))
.where(inArray(sites.siteId, allSiteIds));
-    let newtJobs: Promise<any>[] = [];
-    let olmJobs: Promise<any>[] = [];
-    let exitNodeJobs: Promise<any>[] = [];
+    const newtJobs: Promise<any>[] = [];
+    const olmJobs: Promise<any>[] = [];
+    const exitNodeJobs: Promise<any>[] = [];
for (const siteData of sitesData) {
const site = siteData.sites;
@@ -1042,7 +1080,7 @@ async function handleMessagesForClientSites(
continue;
}
- await holepunchSiteAdd(
+ await initPeerAddHandshake(
// this will kick off the add peer process for the client
client.clientId,
{
@@ -1087,18 +1125,8 @@ async function handleMessagesForClientResources(
resourcesToRemove: number[],
trx: Transaction | typeof db = db
 ): Promise<void> {
- // Group resources by site
- const resourcesBySite = new Map();
-
- for (const resource of allNewResources) {
- if (!resourcesBySite.has(resource.siteId)) {
- resourcesBySite.set(resource.siteId, []);
- }
- resourcesBySite.get(resource.siteId)!.push(resource);
- }
-
-    let proxyJobs: Promise<any>[] = [];
-    let olmJobs: Promise<any>[] = [];
+    const proxyJobs: Promise<any>[] = [];
+    const olmJobs: Promise<any>[] = [];
// Handle additions
if (resourcesToAdd.length > 0) {
@@ -1217,12 +1245,47 @@ async function handleMessagesForClientResources(
}
try {
+ // Check if this client still has access to another resource on this site with the same destination
+ const destinationStillInUse = await trx
+ .select()
+ .from(siteResources)
+ .innerJoin(
+ clientSiteResourcesAssociationsCache,
+ eq(
+ clientSiteResourcesAssociationsCache.siteResourceId,
+ siteResources.siteResourceId
+ )
+ )
+ .where(
+ and(
+ eq(
+ clientSiteResourcesAssociationsCache.clientId,
+ client.clientId
+ ),
+ eq(siteResources.siteId, resource.siteId),
+ eq(
+ siteResources.destination,
+ resource.destination
+ ),
+ ne(
+ siteResources.siteResourceId,
+ resource.siteResourceId
+ )
+ )
+ );
+
+ // Only remove remote subnet if no other resource uses the same destination
+ const remoteSubnetsToRemove =
+ destinationStillInUse.length > 0
+ ? []
+ : generateRemoteSubnets([resource]);
+
// Remove peer data from olm
olmJobs.push(
removePeerData(
client.clientId,
resource.siteId,
- generateRemoteSubnets([resource]),
+ remoteSubnetsToRemove,
generateAliasConfig([resource])
)
);
diff --git a/server/lib/resend.ts b/server/lib/resend.ts
index 0af039bb..0c21b1be 100644
--- a/server/lib/resend.ts
+++ b/server/lib/resend.ts
@@ -1,8 +1,8 @@
export enum AudienceIds {
- SignUps = "",
- Subscribed = "",
- Churned = "",
- Newsletter = ""
+ SignUps = "",
+ Subscribed = "",
+ Churned = "",
+ Newsletter = ""
}
let resend;
diff --git a/server/lib/response.ts b/server/lib/response.ts
index ae8461ba..fd8fa89f 100644
--- a/server/lib/response.ts
+++ b/server/lib/response.ts
@@ -3,14 +3,14 @@ import { Response } from "express";
export const response = (
res: Response,
- { data, success, error, message, status }: ResponseT,
+ { data, success, error, message, status }: ResponseT
) => {
return res.status(status).send({
data,
success,
error,
message,
- status,
+ status
});
};
diff --git a/server/lib/s3.ts b/server/lib/s3.ts
index 5fc3318f..17314ed7 100644
--- a/server/lib/s3.ts
+++ b/server/lib/s3.ts
@@ -1,5 +1,5 @@
import { S3Client } from "@aws-sdk/client-s3";
export const s3Client = new S3Client({
- region: process.env.S3_REGION || "us-east-1",
+ region: process.env.S3_REGION || "us-east-1"
});
diff --git a/server/lib/serverIpService.ts b/server/lib/serverIpService.ts
index 8c16fd43..7f423f9b 100644
--- a/server/lib/serverIpService.ts
+++ b/server/lib/serverIpService.ts
@@ -6,7 +6,7 @@ let serverIp: string | null = null;
const services = [
"https://checkip.amazonaws.com",
"https://ifconfig.io/ip",
- "https://api.ipify.org",
+ "https://api.ipify.org"
];
export async function fetchServerIp() {
@@ -17,7 +17,9 @@ export async function fetchServerIp() {
logger.debug("Detected public IP: " + serverIp);
return;
} catch (err: any) {
- console.warn(`Failed to fetch server IP from ${url}: ${err.message || err.code}`);
+ console.warn(
+ `Failed to fetch server IP from ${url}: ${err.message || err.code}`
+ );
}
}
diff --git a/server/lib/stoi.ts b/server/lib/stoi.ts
index ebc789e6..3c869858 100644
--- a/server/lib/stoi.ts
+++ b/server/lib/stoi.ts
@@ -1,8 +1,7 @@
export default function stoi(val: any) {
if (typeof val === "string") {
- return parseInt(val);
+ return parseInt(val);
+ } else {
+ return val;
}
- else {
- return val;
- }
-}
\ No newline at end of file
+}
diff --git a/server/lib/telemetry.ts b/server/lib/telemetry.ts
index 13ba1c95..fda59f39 100644
--- a/server/lib/telemetry.ts
+++ b/server/lib/telemetry.ts
@@ -2,9 +2,9 @@ import { PostHog } from "posthog-node";
import config from "./config";
import { getHostMeta } from "./hostMeta";
import logger from "@server/logger";
-import { apiKeys, db, roles } from "@server/db";
+import { apiKeys, db, roles, siteResources } from "@server/db";
import { sites, users, orgs, resources, clients, idp } from "@server/db";
-import { eq, count, notInArray, and } from "drizzle-orm";
+import { eq, count, notInArray, and, isNotNull, isNull } from "drizzle-orm";
import { APP_VERSION } from "./consts";
import crypto from "crypto";
import { UserType } from "@server/types/UserTypes";
@@ -25,7 +25,7 @@ class TelemetryClient {
return;
}
- if (build !== "oss") {
+ if (build === "saas") {
return;
}
@@ -41,14 +41,18 @@ class TelemetryClient {
this.client?.shutdown();
});
- this.sendStartupEvents().catch((err) => {
- logger.error("Failed to send startup telemetry:", err);
- });
+        this.sendStartupEvents()
+            .then(() => {
+                logger.debug("Successfully sent startup telemetry data");
+            })
+            .catch((err) => {
+                logger.error("Failed to send startup telemetry:", err);
+            });
this.startAnalyticsInterval();
logger.info(
- "Pangolin now gathers anonymous usage data to help us better understand how the software is used and guide future improvements and feature development. You can find more details, including instructions for opting out of this anonymous data collection, at: https://docs.pangolin.net/telemetry"
+ "Pangolin gathers anonymous usage data to help us better understand how the software is used and guide future improvements and feature development. You can find more details, including instructions for opting out of this anonymous data collection, at: https://docs.pangolin.net/telemetry"
);
} else if (!this.enabled) {
logger.info(
@@ -60,9 +64,13 @@ class TelemetryClient {
private startAnalyticsInterval() {
this.intervalId = setInterval(
() => {
- this.collectAndSendAnalytics().catch((err) => {
- logger.error("Failed to collect analytics:", err);
- });
+                this.collectAndSendAnalytics()
+                    .then(() => {
+                        logger.debug("Successfully sent analytics data");
+                    })
+                    .catch((err) => {
+                        logger.error("Failed to collect analytics:", err);
+                    });
},
48 * 60 * 60 * 1000
);
@@ -99,9 +107,14 @@ class TelemetryClient {
const [resourcesCount] = await db
.select({ count: count() })
.from(resources);
- const [clientsCount] = await db
+ const [userDevicesCount] = await db
.select({ count: count() })
- .from(clients);
+ .from(clients)
+ .where(isNotNull(clients.userId));
+ const [machineClients] = await db
+ .select({ count: count() })
+ .from(clients)
+ .where(isNull(clients.userId));
const [idpCount] = await db.select({ count: count() }).from(idp);
const [onlineSitesCount] = await db
.select({ count: count() })
@@ -146,6 +159,24 @@ class TelemetryClient {
const supporterKey = config.getSupporterData();
+ const allPrivateResources = await db.select().from(siteResources);
+
+ const numPrivResources = allPrivateResources.length;
+ let numPrivResourceAliases = 0;
+ let numPrivResourceHosts = 0;
+ let numPrivResourceCidr = 0;
+ for (const res of allPrivateResources) {
+ if (res.mode === "host") {
+ numPrivResourceHosts += 1;
+ } else if (res.mode === "cidr") {
+ numPrivResourceCidr += 1;
+ }
+
+ if (res.alias) {
+ numPrivResourceAliases += 1;
+ }
+ }
+
return {
numSites: sitesCount.count,
numUsers: usersCount.count,
@@ -153,7 +184,11 @@ class TelemetryClient {
numUsersOidc: usersOidcCount.count,
numOrganizations: orgsCount.count,
numResources: resourcesCount.count,
- numClients: clientsCount.count,
+ numPrivateResources: numPrivResources,
+ numPrivateResourceAliases: numPrivResourceAliases,
+ numPrivateResourceHosts: numPrivResourceHosts,
+ numUserDevices: userDevicesCount.count,
+ numMachineClients: machineClients.count,
numIdentityProviders: idpCount.count,
numSitesOnline: onlineSitesCount.count,
resources: resourceDetails,
@@ -196,7 +231,7 @@ class TelemetryClient {
logger.debug("Sending enterprise startup telemetry payload:", {
payload
});
- // this.client.capture(payload);
+ this.client.capture(payload);
}
if (build === "oss") {
@@ -246,7 +281,12 @@ class TelemetryClient {
num_users_oidc: stats.numUsersOidc,
num_organizations: stats.numOrganizations,
num_resources: stats.numResources,
- num_clients: stats.numClients,
+ num_private_resources: stats.numPrivateResources,
+ num_private_resource_aliases:
+ stats.numPrivateResourceAliases,
+ num_private_resource_hosts: stats.numPrivateResourceHosts,
+ num_user_devices: stats.numUserDevices,
+ num_machine_clients: stats.numMachineClients,
num_identity_providers: stats.numIdentityProviders,
num_sites_online: stats.numSitesOnline,
num_resources_sso_enabled: stats.resources.filter(
diff --git a/server/lib/traefik/TraefikConfigManager.ts b/server/lib/traefik/TraefikConfigManager.ts
index 151e6517..46d5ccc8 100644
--- a/server/lib/traefik/TraefikConfigManager.ts
+++ b/server/lib/traefik/TraefikConfigManager.ts
@@ -195,7 +195,9 @@ export class TraefikConfigManager {
state.set(domain, {
exists: certExists && keyExists,
- lastModified: lastModified ? Math.floor(lastModified.getTime() / 1000) : null,
+ lastModified: lastModified
+ ? Math.floor(lastModified.getTime() / 1000)
+ : null,
expiresAt,
wildcard
});
@@ -464,7 +466,9 @@ export class TraefikConfigManager {
config.getRawConfig().traefik.site_types,
build == "oss", // filter out the namespace domains in open source
build != "oss", // generate the login pages on the cloud and hybrid,
- build == "saas" ? false : config.getRawConfig().traefik.allow_raw_resources // dont allow raw resources on saas otherwise use config
+ build == "saas"
+ ? false
+                    : config.getRawConfig().traefik.allow_raw_resources // don't allow raw resources on saas; otherwise use config
);
const domains = new Set();
@@ -786,29 +790,30 @@ export class TraefikConfigManager {
"utf8"
);
- // Store the certificate expiry time
- if (cert.expiresAt) {
- const expiresAtPath = path.join(domainDir, ".expires_at");
- fs.writeFileSync(
- expiresAtPath,
- cert.expiresAt.toString(),
- "utf8"
- );
- }
-
logger.info(
`Certificate updated for domain: ${cert.domain}${cert.wildcard ? " (wildcard)" : ""}`
);
-
- // Update local state tracking
- this.lastLocalCertificateState.set(cert.domain, {
- exists: true,
- lastModified: Math.floor(Date.now() / 1000),
- expiresAt: cert.expiresAt,
- wildcard: cert.wildcard
- });
}
+ // Always update expiry tracking when we fetch a certificate,
+ // even if the cert content didn't change
+ if (cert.expiresAt) {
+ const expiresAtPath = path.join(domainDir, ".expires_at");
+ fs.writeFileSync(
+ expiresAtPath,
+ cert.expiresAt.toString(),
+ "utf8"
+ );
+ }
+
+ // Update local state tracking
+ this.lastLocalCertificateState.set(cert.domain, {
+ exists: true,
+ lastModified: Math.floor(Date.now() / 1000),
+ expiresAt: cert.expiresAt,
+ wildcard: cert.wildcard
+ });
+
// Always ensure the config entry exists and is up to date
const certEntry = {
certFile: certPath,
diff --git a/server/lib/traefik/index.ts b/server/lib/traefik/index.ts
index 5630028c..0fc483fa 100644
--- a/server/lib/traefik/index.ts
+++ b/server/lib/traefik/index.ts
@@ -1 +1 @@
-export * from "./getTraefikConfig";
\ No newline at end of file
+export * from "./getTraefikConfig";
diff --git a/server/lib/traefik/traefikConfig.test.ts b/server/lib/traefik/traefikConfig.test.ts
index 88e5da49..36ad4e68 100644
--- a/server/lib/traefik/traefikConfig.test.ts
+++ b/server/lib/traefik/traefikConfig.test.ts
@@ -2,234 +2,249 @@ import { assertEquals } from "@test/assert";
import { isDomainCoveredByWildcard } from "./TraefikConfigManager";
function runTests() {
- console.log('Running wildcard domain coverage tests...');
-
+ console.log("Running wildcard domain coverage tests...");
+
// Test case 1: Basic wildcard certificate at example.com
const basicWildcardCerts = new Map([
- ['example.com', { exists: true, wildcard: true }]
+ ["example.com", { exists: true, wildcard: true }]
]);
-
+
// Should match first-level subdomains
assertEquals(
- isDomainCoveredByWildcard('level1.example.com', basicWildcardCerts),
+ isDomainCoveredByWildcard("level1.example.com", basicWildcardCerts),
true,
- 'Wildcard cert at example.com should match level1.example.com'
+ "Wildcard cert at example.com should match level1.example.com"
);
-
+
assertEquals(
- isDomainCoveredByWildcard('api.example.com', basicWildcardCerts),
+ isDomainCoveredByWildcard("api.example.com", basicWildcardCerts),
true,
- 'Wildcard cert at example.com should match api.example.com'
+ "Wildcard cert at example.com should match api.example.com"
);
-
+
assertEquals(
- isDomainCoveredByWildcard('www.example.com', basicWildcardCerts),
+ isDomainCoveredByWildcard("www.example.com", basicWildcardCerts),
true,
- 'Wildcard cert at example.com should match www.example.com'
+ "Wildcard cert at example.com should match www.example.com"
);
-
+
// Should match the root domain (exact match)
assertEquals(
- isDomainCoveredByWildcard('example.com', basicWildcardCerts),
+ isDomainCoveredByWildcard("example.com", basicWildcardCerts),
true,
- 'Wildcard cert at example.com should match example.com itself'
+ "Wildcard cert at example.com should match example.com itself"
);
-
+
// Should NOT match second-level subdomains
assertEquals(
- isDomainCoveredByWildcard('level2.level1.example.com', basicWildcardCerts),
+ isDomainCoveredByWildcard(
+ "level2.level1.example.com",
+ basicWildcardCerts
+ ),
false,
- 'Wildcard cert at example.com should NOT match level2.level1.example.com'
+ "Wildcard cert at example.com should NOT match level2.level1.example.com"
);
-
+
assertEquals(
- isDomainCoveredByWildcard('deep.nested.subdomain.example.com', basicWildcardCerts),
+ isDomainCoveredByWildcard(
+ "deep.nested.subdomain.example.com",
+ basicWildcardCerts
+ ),
false,
- 'Wildcard cert at example.com should NOT match deep.nested.subdomain.example.com'
+ "Wildcard cert at example.com should NOT match deep.nested.subdomain.example.com"
);
-
+
// Should NOT match different domains
assertEquals(
- isDomainCoveredByWildcard('test.otherdomain.com', basicWildcardCerts),
+ isDomainCoveredByWildcard("test.otherdomain.com", basicWildcardCerts),
false,
- 'Wildcard cert at example.com should NOT match test.otherdomain.com'
+ "Wildcard cert at example.com should NOT match test.otherdomain.com"
);
-
+
assertEquals(
- isDomainCoveredByWildcard('notexample.com', basicWildcardCerts),
+ isDomainCoveredByWildcard("notexample.com", basicWildcardCerts),
false,
- 'Wildcard cert at example.com should NOT match notexample.com'
+ "Wildcard cert at example.com should NOT match notexample.com"
);
-
+
// Test case 2: Multiple wildcard certificates
const multipleWildcardCerts = new Map([
- ['example.com', { exists: true, wildcard: true }],
- ['test.org', { exists: true, wildcard: true }],
- ['api.service.net', { exists: true, wildcard: true }]
+ ["example.com", { exists: true, wildcard: true }],
+ ["test.org", { exists: true, wildcard: true }],
+ ["api.service.net", { exists: true, wildcard: true }]
]);
-
+
assertEquals(
- isDomainCoveredByWildcard('app.example.com', multipleWildcardCerts),
+ isDomainCoveredByWildcard("app.example.com", multipleWildcardCerts),
true,
- 'Should match subdomain of first wildcard cert'
+ "Should match subdomain of first wildcard cert"
);
-
+
assertEquals(
- isDomainCoveredByWildcard('staging.test.org', multipleWildcardCerts),
+ isDomainCoveredByWildcard("staging.test.org", multipleWildcardCerts),
true,
- 'Should match subdomain of second wildcard cert'
+ "Should match subdomain of second wildcard cert"
);
-
+
assertEquals(
- isDomainCoveredByWildcard('v1.api.service.net', multipleWildcardCerts),
+ isDomainCoveredByWildcard("v1.api.service.net", multipleWildcardCerts),
true,
- 'Should match subdomain of third wildcard cert'
+ "Should match subdomain of third wildcard cert"
);
-
+
assertEquals(
- isDomainCoveredByWildcard('deep.nested.api.service.net', multipleWildcardCerts),
+ isDomainCoveredByWildcard(
+ "deep.nested.api.service.net",
+ multipleWildcardCerts
+ ),
false,
- 'Should NOT match multi-level subdomain of third wildcard cert'
+ "Should NOT match multi-level subdomain of third wildcard cert"
);
-
+
// Test exact domain matches for multiple certs
assertEquals(
- isDomainCoveredByWildcard('example.com', multipleWildcardCerts),
+ isDomainCoveredByWildcard("example.com", multipleWildcardCerts),
true,
- 'Should match exact domain of first wildcard cert'
+ "Should match exact domain of first wildcard cert"
);
-
+
assertEquals(
- isDomainCoveredByWildcard('test.org', multipleWildcardCerts),
+ isDomainCoveredByWildcard("test.org", multipleWildcardCerts),
true,
- 'Should match exact domain of second wildcard cert'
+ "Should match exact domain of second wildcard cert"
);
-
+
assertEquals(
- isDomainCoveredByWildcard('api.service.net', multipleWildcardCerts),
+ isDomainCoveredByWildcard("api.service.net", multipleWildcardCerts),
true,
- 'Should match exact domain of third wildcard cert'
+ "Should match exact domain of third wildcard cert"
);
-
+
// Test case 3: Non-wildcard certificates (should not match anything)
const nonWildcardCerts = new Map([
- ['example.com', { exists: true, wildcard: false }],
- ['specific.domain.com', { exists: true, wildcard: false }]
+ ["example.com", { exists: true, wildcard: false }],
+ ["specific.domain.com", { exists: true, wildcard: false }]
]);
-
+
assertEquals(
- isDomainCoveredByWildcard('sub.example.com', nonWildcardCerts),
+ isDomainCoveredByWildcard("sub.example.com", nonWildcardCerts),
false,
- 'Non-wildcard cert should not match subdomains'
+ "Non-wildcard cert should not match subdomains"
);
-
+
assertEquals(
- isDomainCoveredByWildcard('example.com', nonWildcardCerts),
+ isDomainCoveredByWildcard("example.com", nonWildcardCerts),
false,
- 'Non-wildcard cert should not match even exact domain via this function'
+ "Non-wildcard cert should not match even exact domain via this function"
);
-
+
// Test case 4: Non-existent certificates (should not match)
const nonExistentCerts = new Map([
- ['example.com', { exists: false, wildcard: true }],
- ['missing.com', { exists: false, wildcard: true }]
+ ["example.com", { exists: false, wildcard: true }],
+ ["missing.com", { exists: false, wildcard: true }]
]);
-
+
assertEquals(
- isDomainCoveredByWildcard('sub.example.com', nonExistentCerts),
+ isDomainCoveredByWildcard("sub.example.com", nonExistentCerts),
false,
- 'Non-existent wildcard cert should not match'
+ "Non-existent wildcard cert should not match"
);
-
+
// Test case 5: Edge cases with special domain names
const specialDomainCerts = new Map([
- ['localhost', { exists: true, wildcard: true }],
- ['127-0-0-1.nip.io', { exists: true, wildcard: true }],
- ['xn--e1afmkfd.xn--p1ai', { exists: true, wildcard: true }] // IDN domain
+ ["localhost", { exists: true, wildcard: true }],
+ ["127-0-0-1.nip.io", { exists: true, wildcard: true }],
+ ["xn--e1afmkfd.xn--p1ai", { exists: true, wildcard: true }] // IDN domain
]);
-
+
assertEquals(
- isDomainCoveredByWildcard('app.localhost', specialDomainCerts),
+ isDomainCoveredByWildcard("app.localhost", specialDomainCerts),
true,
- 'Should match subdomain of localhost wildcard'
+ "Should match subdomain of localhost wildcard"
);
-
+
assertEquals(
- isDomainCoveredByWildcard('test.127-0-0-1.nip.io', specialDomainCerts),
+ isDomainCoveredByWildcard("test.127-0-0-1.nip.io", specialDomainCerts),
true,
- 'Should match subdomain of nip.io wildcard'
+ "Should match subdomain of nip.io wildcard"
);
-
+
assertEquals(
- isDomainCoveredByWildcard('sub.xn--e1afmkfd.xn--p1ai', specialDomainCerts),
+ isDomainCoveredByWildcard(
+ "sub.xn--e1afmkfd.xn--p1ai",
+ specialDomainCerts
+ ),
true,
- 'Should match subdomain of IDN wildcard'
+ "Should match subdomain of IDN wildcard"
);
-
+
// Test case 6: Empty input and edge cases
const emptyCerts = new Map();
-
+
assertEquals(
- isDomainCoveredByWildcard('any.domain.com', emptyCerts),
+ isDomainCoveredByWildcard("any.domain.com", emptyCerts),
false,
- 'Empty certificate map should not match any domain'
+ "Empty certificate map should not match any domain"
);
-
+
// Test case 7: Domains with single character components
const singleCharCerts = new Map([
- ['a.com', { exists: true, wildcard: true }],
- ['x.y.z', { exists: true, wildcard: true }]
+ ["a.com", { exists: true, wildcard: true }],
+ ["x.y.z", { exists: true, wildcard: true }]
]);
-
+
assertEquals(
- isDomainCoveredByWildcard('b.a.com', singleCharCerts),
+ isDomainCoveredByWildcard("b.a.com", singleCharCerts),
true,
- 'Should match single character subdomain'
+ "Should match single character subdomain"
);
-
+
assertEquals(
- isDomainCoveredByWildcard('w.x.y.z', singleCharCerts),
+ isDomainCoveredByWildcard("w.x.y.z", singleCharCerts),
true,
- 'Should match single character subdomain of multi-part domain'
+ "Should match single character subdomain of multi-part domain"
);
-
+
assertEquals(
- isDomainCoveredByWildcard('v.w.x.y.z', singleCharCerts),
+ isDomainCoveredByWildcard("v.w.x.y.z", singleCharCerts),
false,
- 'Should NOT match multi-level subdomain of single char domain'
+ "Should NOT match multi-level subdomain of single char domain"
);
-
+
// Test case 8: Domains with numbers and hyphens
const numericCerts = new Map([
- ['api-v2.service-1.com', { exists: true, wildcard: true }],
- ['123.456.net', { exists: true, wildcard: true }]
+ ["api-v2.service-1.com", { exists: true, wildcard: true }],
+ ["123.456.net", { exists: true, wildcard: true }]
]);
-
+
assertEquals(
- isDomainCoveredByWildcard('staging.api-v2.service-1.com', numericCerts),
+ isDomainCoveredByWildcard("staging.api-v2.service-1.com", numericCerts),
true,
- 'Should match subdomain with hyphens and numbers'
+ "Should match subdomain with hyphens and numbers"
);
-
+
assertEquals(
- isDomainCoveredByWildcard('test.123.456.net', numericCerts),
+ isDomainCoveredByWildcard("test.123.456.net", numericCerts),
true,
- 'Should match subdomain with numeric components'
+ "Should match subdomain with numeric components"
);
-
+
assertEquals(
- isDomainCoveredByWildcard('deep.staging.api-v2.service-1.com', numericCerts),
+ isDomainCoveredByWildcard(
+ "deep.staging.api-v2.service-1.com",
+ numericCerts
+ ),
false,
- 'Should NOT match multi-level subdomain with hyphens and numbers'
+ "Should NOT match multi-level subdomain with hyphens and numbers"
);
-
- console.log('All wildcard domain coverage tests passed!');
+
+ console.log("All wildcard domain coverage tests passed!");
}
// Run all tests
try {
runTests();
} catch (error) {
- console.error('Test failed:', error);
+ console.error("Test failed:", error);
process.exit(1);
}
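The behavior these tests pin down — a wildcard certificate covers its base domain plus exactly one additional label — can be sketched as follows (illustrative reimplementation; the real `isDomainCoveredByWildcard` is exported from `TraefikConfigManager`, and the names here are for the sketch only):

```typescript
// One-label wildcard coverage rule encoded by the tests above.
type CertState = { exists: boolean; wildcard: boolean };

function coveredByWildcard(
    domain: string,
    certs: Map<string, CertState>
): boolean {
    for (const [base, cert] of Array.from(certs.entries())) {
        // Only existing wildcard certificates participate
        if (!cert.exists || !cert.wildcard) continue;
        // Exact match: the base domain itself is covered
        if (domain === base) return true;
        // One-label match: "a.example.com" yes, "b.a.example.com" no
        if (
            domain.endsWith("." + base) &&
            !domain.slice(0, -(base.length + 1)).includes(".")
        ) {
            return true;
        }
    }
    return false;
}
```

The `endsWith("." + base)` check (with the leading dot) is what keeps a suffix like `notexample.com` from matching a cert for `example.com`.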
diff --git a/server/lib/traefik/utils.ts b/server/lib/traefik/utils.ts
index 37ebfa0b..ec0eae5b 100644
--- a/server/lib/traefik/utils.ts
+++ b/server/lib/traefik/utils.ts
@@ -31,12 +31,17 @@ export function validatePathRewriteConfig(
}
if (rewritePathType !== "stripPrefix") {
- if ((rewritePath && !rewritePathType) || (!rewritePath && rewritePathType)) {
- return { isValid: false, error: "Both rewritePath and rewritePathType must be specified together" };
+ if (
+ (rewritePath && !rewritePathType) ||
+ (!rewritePath && rewritePathType)
+ ) {
+ return {
+ isValid: false,
+ error: "Both rewritePath and rewritePathType must be specified together"
+ };
}
}
-
if (!rewritePath || !rewritePathType) {
return { isValid: true };
}
@@ -68,14 +73,14 @@ export function validatePathRewriteConfig(
}
}
-
// Additional validation for stripPrefix
if (rewritePathType === "stripPrefix") {
if (pathMatchType !== "prefix") {
- logger.warn(`stripPrefix rewrite type is most effective with prefix path matching. Current match type: ${pathMatchType}`);
+ logger.warn(
+ `stripPrefix rewrite type is most effective with prefix path matching. Current match type: ${pathMatchType}`
+ );
}
}
return { isValid: true };
}
-
diff --git a/server/lib/validators.test.ts b/server/lib/validators.test.ts
index e2043c74..c4c564cf 100644
--- a/server/lib/validators.test.ts
+++ b/server/lib/validators.test.ts
@@ -1,71 +1,247 @@
-import { isValidUrlGlobPattern } from "./validators";
+import { isValidUrlGlobPattern } from "./validators";
import { assertEquals } from "@test/assert";
function runTests() {
- console.log('Running URL pattern validation tests...');
-
+ console.log("Running URL pattern validation tests...");
+
// Test valid patterns
- assertEquals(isValidUrlGlobPattern('simple'), true, 'Simple path segment should be valid');
- assertEquals(isValidUrlGlobPattern('simple/path'), true, 'Simple path with slash should be valid');
- assertEquals(isValidUrlGlobPattern('/leading/slash'), true, 'Path with leading slash should be valid');
- assertEquals(isValidUrlGlobPattern('path/'), true, 'Path with trailing slash should be valid');
- assertEquals(isValidUrlGlobPattern('path/*'), true, 'Path with wildcard segment should be valid');
- assertEquals(isValidUrlGlobPattern('*'), true, 'Single wildcard should be valid');
- assertEquals(isValidUrlGlobPattern('*/subpath'), true, 'Wildcard with subpath should be valid');
- assertEquals(isValidUrlGlobPattern('path/*/more'), true, 'Path with wildcard in the middle should be valid');
-
+ assertEquals(
+ isValidUrlGlobPattern("simple"),
+ true,
+ "Simple path segment should be valid"
+ );
+ assertEquals(
+ isValidUrlGlobPattern("simple/path"),
+ true,
+ "Simple path with slash should be valid"
+ );
+ assertEquals(
+ isValidUrlGlobPattern("/leading/slash"),
+ true,
+ "Path with leading slash should be valid"
+ );
+ assertEquals(
+ isValidUrlGlobPattern("path/"),
+ true,
+ "Path with trailing slash should be valid"
+ );
+ assertEquals(
+ isValidUrlGlobPattern("path/*"),
+ true,
+ "Path with wildcard segment should be valid"
+ );
+ assertEquals(
+ isValidUrlGlobPattern("*"),
+ true,
+ "Single wildcard should be valid"
+ );
+ assertEquals(
+ isValidUrlGlobPattern("*/subpath"),
+ true,
+ "Wildcard with subpath should be valid"
+ );
+ assertEquals(
+ isValidUrlGlobPattern("path/*/more"),
+ true,
+ "Path with wildcard in the middle should be valid"
+ );
+
// Test with special characters
- assertEquals(isValidUrlGlobPattern('path-with-dash'), true, 'Path with dash should be valid');
- assertEquals(isValidUrlGlobPattern('path_with_underscore'), true, 'Path with underscore should be valid');
- assertEquals(isValidUrlGlobPattern('path.with.dots'), true, 'Path with dots should be valid');
- assertEquals(isValidUrlGlobPattern('path~with~tilde'), true, 'Path with tilde should be valid');
- assertEquals(isValidUrlGlobPattern('path!with!exclamation'), true, 'Path with exclamation should be valid');
- assertEquals(isValidUrlGlobPattern('path$with$dollar'), true, 'Path with dollar should be valid');
- assertEquals(isValidUrlGlobPattern('path&with&ampersand'), true, 'Path with ampersand should be valid');
- assertEquals(isValidUrlGlobPattern("path'with'quote"), true, "Path with quote should be valid");
- assertEquals(isValidUrlGlobPattern('path(with)parentheses'), true, 'Path with parentheses should be valid');
- assertEquals(isValidUrlGlobPattern('path+with+plus'), true, 'Path with plus should be valid');
- assertEquals(isValidUrlGlobPattern('path,with,comma'), true, 'Path with comma should be valid');
- assertEquals(isValidUrlGlobPattern('path;with;semicolon'), true, 'Path with semicolon should be valid');
- assertEquals(isValidUrlGlobPattern('path=with=equals'), true, 'Path with equals should be valid');
- assertEquals(isValidUrlGlobPattern('path:with:colon'), true, 'Path with colon should be valid');
- assertEquals(isValidUrlGlobPattern('path@with@at'), true, 'Path with at should be valid');
-
+ assertEquals(
+ isValidUrlGlobPattern("path-with-dash"),
+ true,
+ "Path with dash should be valid"
+ );
+ assertEquals(
+ isValidUrlGlobPattern("path_with_underscore"),
+ true,
+ "Path with underscore should be valid"
+ );
+ assertEquals(
+ isValidUrlGlobPattern("path.with.dots"),
+ true,
+ "Path with dots should be valid"
+ );
+ assertEquals(
+ isValidUrlGlobPattern("path~with~tilde"),
+ true,
+ "Path with tilde should be valid"
+ );
+ assertEquals(
+ isValidUrlGlobPattern("path!with!exclamation"),
+ true,
+ "Path with exclamation should be valid"
+ );
+ assertEquals(
+ isValidUrlGlobPattern("path$with$dollar"),
+ true,
+ "Path with dollar should be valid"
+ );
+ assertEquals(
+ isValidUrlGlobPattern("path&with&ampersand"),
+ true,
+ "Path with ampersand should be valid"
+ );
+ assertEquals(
+ isValidUrlGlobPattern("path'with'quote"),
+ true,
+ "Path with quote should be valid"
+ );
+ assertEquals(
+ isValidUrlGlobPattern("path(with)parentheses"),
+ true,
+ "Path with parentheses should be valid"
+ );
+ assertEquals(
+ isValidUrlGlobPattern("path+with+plus"),
+ true,
+ "Path with plus should be valid"
+ );
+ assertEquals(
+ isValidUrlGlobPattern("path,with,comma"),
+ true,
+ "Path with comma should be valid"
+ );
+ assertEquals(
+ isValidUrlGlobPattern("path;with;semicolon"),
+ true,
+ "Path with semicolon should be valid"
+ );
+ assertEquals(
+ isValidUrlGlobPattern("path=with=equals"),
+ true,
+ "Path with equals should be valid"
+ );
+ assertEquals(
+ isValidUrlGlobPattern("path:with:colon"),
+ true,
+ "Path with colon should be valid"
+ );
+ assertEquals(
+ isValidUrlGlobPattern("path@with@at"),
+ true,
+ "Path with at should be valid"
+ );
+
// Test with percent encoding
- assertEquals(isValidUrlGlobPattern('path%20with%20spaces'), true, 'Path with percent-encoded spaces should be valid');
- assertEquals(isValidUrlGlobPattern('path%2Fwith%2Fencoded%2Fslashes'), true, 'Path with percent-encoded slashes should be valid');
-
+ assertEquals(
+ isValidUrlGlobPattern("path%20with%20spaces"),
+ true,
+ "Path with percent-encoded spaces should be valid"
+ );
+ assertEquals(
+ isValidUrlGlobPattern("path%2Fwith%2Fencoded%2Fslashes"),
+ true,
+ "Path with percent-encoded slashes should be valid"
+ );
+
// Test with wildcards in segments (the fixed functionality)
- assertEquals(isValidUrlGlobPattern('padbootstrap*'), true, 'Path with wildcard at the end of segment should be valid');
- assertEquals(isValidUrlGlobPattern('pad*bootstrap'), true, 'Path with wildcard in the middle of segment should be valid');
- assertEquals(isValidUrlGlobPattern('*bootstrap'), true, 'Path with wildcard at the start of segment should be valid');
- assertEquals(isValidUrlGlobPattern('multiple*wildcards*in*segment'), true, 'Path with multiple wildcards in segment should be valid');
- assertEquals(isValidUrlGlobPattern('wild*/cards/in*/different/seg*ments'), true, 'Path with wildcards in different segments should be valid');
-
+ assertEquals(
+ isValidUrlGlobPattern("padbootstrap*"),
+ true,
+ "Path with wildcard at the end of segment should be valid"
+ );
+ assertEquals(
+ isValidUrlGlobPattern("pad*bootstrap"),
+ true,
+ "Path with wildcard in the middle of segment should be valid"
+ );
+ assertEquals(
+ isValidUrlGlobPattern("*bootstrap"),
+ true,
+ "Path with wildcard at the start of segment should be valid"
+ );
+ assertEquals(
+ isValidUrlGlobPattern("multiple*wildcards*in*segment"),
+ true,
+ "Path with multiple wildcards in segment should be valid"
+ );
+ assertEquals(
+ isValidUrlGlobPattern("wild*/cards/in*/different/seg*ments"),
+ true,
+ "Path with wildcards in different segments should be valid"
+ );
+
// Test invalid patterns
- assertEquals(isValidUrlGlobPattern(''), false, 'Empty string should be invalid');
- assertEquals(isValidUrlGlobPattern('//double/slash'), false, 'Path with double slash should be invalid');
- assertEquals(isValidUrlGlobPattern('path//end'), false, 'Path with double slash in the middle should be invalid');
- assertEquals(isValidUrlGlobPattern('invalid<char>'), false, 'Path with invalid characters should be invalid');
- assertEquals(isValidUrlGlobPattern('invalid|char'), false, 'Path with invalid pipe character should be invalid');
- assertEquals(isValidUrlGlobPattern('invalid"char'), false, 'Path with invalid quote character should be invalid');
- assertEquals(isValidUrlGlobPattern('invalid`char'), false, 'Path with invalid backtick character should be invalid');
- assertEquals(isValidUrlGlobPattern('invalid^char'), false, 'Path with invalid caret character should be invalid');
- assertEquals(isValidUrlGlobPattern('invalid\\char'), false, 'Path with invalid backslash character should be invalid');
- assertEquals(isValidUrlGlobPattern('invalid[char]'), false, 'Path with invalid square brackets should be invalid');
- assertEquals(isValidUrlGlobPattern('invalid{char}'), false, 'Path with invalid curly braces should be invalid');
-
+ assertEquals(
+ isValidUrlGlobPattern(""),
+ false,
+ "Empty string should be invalid"
+ );
+ assertEquals(
+ isValidUrlGlobPattern("//double/slash"),
+ false,
+ "Path with double slash should be invalid"
+ );
+ assertEquals(
+ isValidUrlGlobPattern("path//end"),
+ false,
+ "Path with double slash in the middle should be invalid"
+ );
+ assertEquals(
+ isValidUrlGlobPattern("invalid<char>"),
+ false,
+ "Path with invalid characters should be invalid"
+ );
+ assertEquals(
+ isValidUrlGlobPattern("invalid|char"),
+ false,
+ "Path with invalid pipe character should be invalid"
+ );
+ assertEquals(
+ isValidUrlGlobPattern('invalid"char'),
+ false,
+ "Path with invalid quote character should be invalid"
+ );
+ assertEquals(
+ isValidUrlGlobPattern("invalid`char"),
+ false,
+ "Path with invalid backtick character should be invalid"
+ );
+ assertEquals(
+ isValidUrlGlobPattern("invalid^char"),
+ false,
+ "Path with invalid caret character should be invalid"
+ );
+ assertEquals(
+ isValidUrlGlobPattern("invalid\\char"),
+ false,
+ "Path with invalid backslash character should be invalid"
+ );
+ assertEquals(
+ isValidUrlGlobPattern("invalid[char]"),
+ false,
+ "Path with invalid square brackets should be invalid"
+ );
+ assertEquals(
+ isValidUrlGlobPattern("invalid{char}"),
+ false,
+ "Path with invalid curly braces should be invalid"
+ );
+
// Test invalid percent encoding
- assertEquals(isValidUrlGlobPattern('invalid%2'), false, 'Path with incomplete percent encoding should be invalid');
- assertEquals(isValidUrlGlobPattern('invalid%GZ'), false, 'Path with invalid hex in percent encoding should be invalid');
- assertEquals(isValidUrlGlobPattern('invalid%'), false, 'Path with isolated percent sign should be invalid');
-
- console.log('All tests passed!');
+ assertEquals(
+ isValidUrlGlobPattern("invalid%2"),
+ false,
+ "Path with incomplete percent encoding should be invalid"
+ );
+ assertEquals(
+ isValidUrlGlobPattern("invalid%GZ"),
+ false,
+ "Path with invalid hex in percent encoding should be invalid"
+ );
+ assertEquals(
+ isValidUrlGlobPattern("invalid%"),
+ false,
+ "Path with isolated percent sign should be invalid"
+ );
+
+ console.log("All tests passed!");
}
// Run all tests
try {
runTests();
} catch (error) {
- console.error('Test failed:', error);
-}
\ No newline at end of file
+ console.error("Test failed:", error);
+}
diff --git a/server/lib/validators.ts b/server/lib/validators.ts
index 5bdd7a14..b1efe8b3 100644
--- a/server/lib/validators.ts
+++ b/server/lib/validators.ts
@@ -2,7 +2,9 @@ import z from "zod";
import ipaddr from "ipaddr.js";
export function isValidCIDR(cidr: string): boolean {
- return z.cidrv4().safeParse(cidr).success || z.cidrv6().safeParse(cidr).success;
+ return (
+ z.cidrv4().safeParse(cidr).success || z.cidrv6().safeParse(cidr).success
+ );
}
export function isValidIP(ip: string): boolean {
@@ -69,11 +71,11 @@ export function isUrlValid(url: string | undefined) {
if (!url) return true; // the link is optional in the schema so if it's empty it's valid
var pattern = new RegExp(
"^(https?:\\/\\/)?" + // protocol
- "((([a-z\\d]([a-z\\d-]*[a-z\\d])*)\\.)+[a-z]{2,}|" + // domain name
- "((\\d{1,3}\\.){3}\\d{1,3}))" + // OR ip (v4) address
- "(\\:\\d+)?(\\/[-a-z\\d%_.~+]*)*" + // port and path
- "(\\?[;&a-z\\d%_.~+=-]*)?" + // query string
- "(\\#[-a-z\\d_]*)?$",
+ "((([a-z\\d]([a-z\\d-]*[a-z\\d])*)\\.)+[a-z]{2,}|" + // domain name
+ "((\\d{1,3}\\.){3}\\d{1,3}))" + // OR ip (v4) address
+ "(\\:\\d+)?(\\/[-a-z\\d%_.~+]*)*" + // port and path
+ "(\\?[;&a-z\\d%_.~+=-]*)?" + // query string
+ "(\\#[-a-z\\d_]*)?$",
"i"
);
return !!pattern.test(url);
@@ -168,14 +170,14 @@ export function validateHeaders(headers: string): boolean {
}
export function isSecondLevelDomain(domain: string): boolean {
- if (!domain || typeof domain !== 'string') {
+ if (!domain || typeof domain !== "string") {
return false;
}
const trimmedDomain = domain.trim().toLowerCase();
// Split into parts
- const parts = trimmedDomain.split('.');
+ const parts = trimmedDomain.split(".");
// Should have exactly 2 parts for a second-level domain (e.g., "example.com")
if (parts.length !== 2) {
diff --git a/server/middlewares/formatError.ts b/server/middlewares/formatError.ts
index e96ff296..1e94c1f5 100644
--- a/server/middlewares/formatError.ts
+++ b/server/middlewares/formatError.ts
@@ -20,6 +20,6 @@ export const errorHandlerMiddleware: ErrorRequestHandler = (
error: true,
message: error.message || "Internal Server Error",
status: statusCode,
- stack: process.env.ENVIRONMENT === "prod" ? null : error.stack,
+ stack: process.env.ENVIRONMENT === "prod" ? null : error.stack
});
};
diff --git a/server/middlewares/getUserOrgs.ts b/server/middlewares/getUserOrgs.ts
index 4d042307..d7905700 100644
--- a/server/middlewares/getUserOrgs.ts
+++ b/server/middlewares/getUserOrgs.ts
@@ -8,13 +8,13 @@ import HttpCode from "@server/types/HttpCode";
export async function getUserOrgs(
req: Request,
res: Response,
- next: NextFunction,
+ next: NextFunction
) {
const userId = req.user?.userId; // Assuming you have user information in the request
if (!userId) {
return next(
- createHttpError(HttpCode.UNAUTHORIZED, "User not authenticated"),
+ createHttpError(HttpCode.UNAUTHORIZED, "User not authenticated")
);
}
@@ -22,7 +22,7 @@ export async function getUserOrgs(
const userOrganizations = await db
.select({
orgId: userOrgs.orgId,
- roleId: userOrgs.roleId,
+ roleId: userOrgs.roleId
})
.from(userOrgs)
.where(eq(userOrgs.userId, userId));
@@ -38,8 +38,8 @@ export async function getUserOrgs(
next(
createHttpError(
HttpCode.INTERNAL_SERVER_ERROR,
- "Error retrieving user organizations",
- ),
+ "Error retrieving user organizations"
+ )
);
}
}
diff --git a/server/middlewares/integration/index.ts b/server/middlewares/integration/index.ts
index d44eb5a3..2e2e8ff0 100644
--- a/server/middlewares/integration/index.ts
+++ b/server/middlewares/integration/index.ts
@@ -12,4 +12,4 @@ export * from "./verifyAccessTokenAccess";
export * from "./verifyApiKeyIsRoot";
export * from "./verifyApiKeyApiKeyAccess";
export * from "./verifyApiKeyClientAccess";
-export * from "./verifyApiKeySiteResourceAccess";
\ No newline at end of file
+export * from "./verifyApiKeySiteResourceAccess";
diff --git a/server/middlewares/integration/verifyAccessTokenAccess.ts b/server/middlewares/integration/verifyAccessTokenAccess.ts
index f5ae8746..c9a84f18 100644
--- a/server/middlewares/integration/verifyAccessTokenAccess.ts
+++ b/server/middlewares/integration/verifyAccessTokenAccess.ts
@@ -97,7 +97,6 @@ export async function verifyApiKeyAccessTokenAccess(
);
}
-
return next();
} catch (e) {
return next(
diff --git a/server/middlewares/integration/verifyApiKeyApiKeyAccess.ts b/server/middlewares/integration/verifyApiKeyApiKeyAccess.ts
index ad5b7fc4..48fbbf87 100644
--- a/server/middlewares/integration/verifyApiKeyApiKeyAccess.ts
+++ b/server/middlewares/integration/verifyApiKeyApiKeyAccess.ts
@@ -11,7 +11,7 @@ export async function verifyApiKeyApiKeyAccess(
next: NextFunction
) {
try {
- const {apiKey: callerApiKey } = req;
+ const { apiKey: callerApiKey } = req;
const apiKeyId =
req.params.apiKeyId || req.body.apiKeyId || req.query.apiKeyId;
@@ -44,7 +44,10 @@ export async function verifyApiKeyApiKeyAccess(
.select()
.from(apiKeyOrg)
.where(
- and(eq(apiKeys.apiKeyId, callerApiKey.apiKeyId), eq(apiKeyOrg.orgId, orgId))
+ and(
+ eq(apiKeys.apiKeyId, callerApiKey.apiKeyId),
+ eq(apiKeyOrg.orgId, orgId)
+ )
)
.limit(1);
diff --git a/server/middlewares/integration/verifyApiKeySetResourceClients.ts b/server/middlewares/integration/verifyApiKeySetResourceClients.ts
index cbcb33ae..704f3ef5 100644
--- a/server/middlewares/integration/verifyApiKeySetResourceClients.ts
+++ b/server/middlewares/integration/verifyApiKeySetResourceClients.ts
@@ -11,9 +11,12 @@ export async function verifyApiKeySetResourceClients(
next: NextFunction
) {
const apiKey = req.apiKey;
- const singleClientId = req.params.clientId || req.body.clientId || req.query.clientId;
+ const singleClientId =
+ req.params.clientId || req.body.clientId || req.query.clientId;
const { clientIds } = req.body;
- const allClientIds = clientIds || (singleClientId ? [parseInt(singleClientId as string)] : []);
+ const allClientIds =
+ clientIds ||
+ (singleClientId ? [parseInt(singleClientId as string)] : []);
if (!apiKey) {
return next(
@@ -70,4 +73,3 @@ export async function verifyApiKeySetResourceClients(
);
}
}
-
diff --git a/server/middlewares/integration/verifyApiKeySetResourceUsers.ts b/server/middlewares/integration/verifyApiKeySetResourceUsers.ts
index db73d134..0d44aa09 100644
--- a/server/middlewares/integration/verifyApiKeySetResourceUsers.ts
+++ b/server/middlewares/integration/verifyApiKeySetResourceUsers.ts
@@ -11,7 +11,8 @@ export async function verifyApiKeySetResourceUsers(
next: NextFunction
) {
const apiKey = req.apiKey;
- const singleUserId = req.params.userId || req.body.userId || req.query.userId;
+ const singleUserId =
+ req.params.userId || req.body.userId || req.query.userId;
const { userIds } = req.body;
const allUserIds = userIds || (singleUserId ? [singleUserId] : []);
diff --git a/server/middlewares/integration/verifyApiKeySiteResourceAccess.ts b/server/middlewares/integration/verifyApiKeySiteResourceAccess.ts
index fb3d8287..1fc11c31 100644
--- a/server/middlewares/integration/verifyApiKeySiteResourceAccess.ts
+++ b/server/middlewares/integration/verifyApiKeySiteResourceAccess.ts
@@ -38,17 +38,12 @@ export async function verifyApiKeySiteResourceAccess(
const [siteResource] = await db
.select()
.from(siteResources)
- .where(and(
- eq(siteResources.siteResourceId, siteResourceId)
- ))
+ .where(and(eq(siteResources.siteResourceId, siteResourceId)))
.limit(1);
if (!siteResource) {
return next(
- createHttpError(
- HttpCode.NOT_FOUND,
- "Site resource not found"
- )
+ createHttpError(HttpCode.NOT_FOUND, "Site resource not found")
);
}
diff --git a/server/middlewares/notFound.ts b/server/middlewares/notFound.ts
index 706796c9..8e0ab332 100644
--- a/server/middlewares/notFound.ts
+++ b/server/middlewares/notFound.ts
@@ -5,7 +5,7 @@ import HttpCode from "@server/types/HttpCode";
export function notFoundMiddleware(
req: Request,
res: Response,
- next: NextFunction,
+ next: NextFunction
) {
if (req.path.startsWith("/api")) {
const message = `The requests url is not found - ${req.originalUrl}`;
diff --git a/server/middlewares/requestTimeout.ts b/server/middlewares/requestTimeout.ts
index 8b5852b7..b0f95a08 100644
--- a/server/middlewares/requestTimeout.ts
+++ b/server/middlewares/requestTimeout.ts
@@ -1,30 +1,32 @@
-import { Request, Response, NextFunction } from 'express';
-import logger from '@server/logger';
-import createHttpError from 'http-errors';
-import HttpCode from '@server/types/HttpCode';
+import { Request, Response, NextFunction } from "express";
+import logger from "@server/logger";
+import createHttpError from "http-errors";
+import HttpCode from "@server/types/HttpCode";
export function requestTimeoutMiddleware(timeoutMs: number = 30000) {
return (req: Request, res: Response, next: NextFunction) => {
// Set a timeout for the request
const timeout = setTimeout(() => {
if (!res.headersSent) {
- logger.error(`Request timeout: ${req.method} ${req.url} from ${req.ip}`);
+ logger.error(
+ `Request timeout: ${req.method} ${req.url} from ${req.ip}`
+ );
return next(
createHttpError(
HttpCode.REQUEST_TIMEOUT,
- 'Request timeout - operation took too long to complete'
+ "Request timeout - operation took too long to complete"
)
);
}
}, timeoutMs);
// Clear timeout when response finishes
- res.on('finish', () => {
+ res.on("finish", () => {
clearTimeout(timeout);
});
// Clear timeout when response closes
- res.on('close', () => {
+ res.on("close", () => {
clearTimeout(timeout);
});
diff --git a/server/middlewares/verifySiteAccess.ts b/server/middlewares/verifySiteAccess.ts
index 05fc6d27..98858cfb 100644
--- a/server/middlewares/verifySiteAccess.ts
+++ b/server/middlewares/verifySiteAccess.ts
@@ -76,7 +76,10 @@ export async function verifySiteAccess(
.select()
.from(userOrgs)
.where(
- and(eq(userOrgs.userId, userId), eq(userOrgs.orgId, site.orgId))
+ and(
+ eq(userOrgs.userId, userId),
+ eq(userOrgs.orgId, site.orgId)
+ )
)
.limit(1);
req.userOrg = userOrgRole[0];
diff --git a/server/nextServer.ts b/server/nextServer.ts
index 5302b9c8..b862a699 100644
--- a/server/nextServer.ts
+++ b/server/nextServer.ts
@@ -9,7 +9,10 @@ const nextPort = config.getRawConfig().server.next_port;
export async function createNextServer() {
// const app = next({ dev });
- const app = next({ dev: process.env.ENVIRONMENT !== "prod", turbopack: true });
+ const app = next({
+ dev: process.env.ENVIRONMENT !== "prod",
+ turbopack: true
+ });
const handle = app.getRequestHandler();
await app.prepare();
diff --git a/server/private/auth/sessions/remoteExitNode.ts b/server/private/auth/sessions/remoteExitNode.ts
index fbb2ae1f..da1fb1aa 100644
--- a/server/private/auth/sessions/remoteExitNode.ts
+++ b/server/private/auth/sessions/remoteExitNode.ts
@@ -11,11 +11,14 @@
* This file is not licensed under the AGPLv3.
*/
-import {
- encodeHexLowerCase,
-} from "@oslojs/encoding";
+import { encodeHexLowerCase } from "@oslojs/encoding";
import { sha256 } from "@oslojs/crypto/sha2";
-import { RemoteExitNode, remoteExitNodes, remoteExitNodeSessions, RemoteExitNodeSession } from "@server/db";
+import {
+ RemoteExitNode,
+ remoteExitNodes,
+ remoteExitNodeSessions,
+ RemoteExitNodeSession
+} from "@server/db";
import { db } from "@server/db";
import { eq } from "drizzle-orm";
@@ -23,30 +26,39 @@ export const EXPIRES = 1000 * 60 * 60 * 24 * 30;
export async function createRemoteExitNodeSession(
token: string,
- remoteExitNodeId: string,
+ remoteExitNodeId: string
): Promise<RemoteExitNodeSession> {
const sessionId = encodeHexLowerCase(
- sha256(new TextEncoder().encode(token)),
+ sha256(new TextEncoder().encode(token))
);
const session: RemoteExitNodeSession = {
sessionId: sessionId,
remoteExitNodeId,
- expiresAt: new Date(Date.now() + EXPIRES).getTime(),
+ expiresAt: new Date(Date.now() + EXPIRES).getTime()
};
await db.insert(remoteExitNodeSessions).values(session);
return session;
}
export async function validateRemoteExitNodeSessionToken(
- token: string,
+ token: string
): Promise<SessionValidationResult> {
const sessionId = encodeHexLowerCase(
- sha256(new TextEncoder().encode(token)),
+ sha256(new TextEncoder().encode(token))
);
const result = await db
- .select({ remoteExitNode: remoteExitNodes, session: remoteExitNodeSessions })
+ .select({
+ remoteExitNode: remoteExitNodes,
+ session: remoteExitNodeSessions
+ })
.from(remoteExitNodeSessions)
- .innerJoin(remoteExitNodes, eq(remoteExitNodeSessions.remoteExitNodeId, remoteExitNodes.remoteExitNodeId))
+ .innerJoin(
+ remoteExitNodes,
+ eq(
+ remoteExitNodeSessions.remoteExitNodeId,
+ remoteExitNodes.remoteExitNodeId
+ )
+ )
.where(eq(remoteExitNodeSessions.sessionId, sessionId));
if (result.length < 1) {
return { session: null, remoteExitNode: null };
@@ -58,26 +70,32 @@ export async function validateRemoteExitNodeSessionToken(
.where(eq(remoteExitNodeSessions.sessionId, session.sessionId));
return { session: null, remoteExitNode: null };
}
- if (Date.now() >= session.expiresAt - (EXPIRES / 2)) {
- session.expiresAt = new Date(
- Date.now() + EXPIRES,
- ).getTime();
+ if (Date.now() >= session.expiresAt - EXPIRES / 2) {
+ session.expiresAt = new Date(Date.now() + EXPIRES).getTime();
await db
.update(remoteExitNodeSessions)
.set({
- expiresAt: session.expiresAt,
+ expiresAt: session.expiresAt
})
.where(eq(remoteExitNodeSessions.sessionId, session.sessionId));
}
return { session, remoteExitNode };
}
-export async function invalidateRemoteExitNodeSession(sessionId: string): Promise<void> {
- await db.delete(remoteExitNodeSessions).where(eq(remoteExitNodeSessions.sessionId, sessionId));
+export async function invalidateRemoteExitNodeSession(
+ sessionId: string
+): Promise<void> {
+ await db
+ .delete(remoteExitNodeSessions)
+ .where(eq(remoteExitNodeSessions.sessionId, sessionId));
}
-export async function invalidateAllRemoteExitNodeSessions(remoteExitNodeId: string): Promise<void> {
- await db.delete(remoteExitNodeSessions).where(eq(remoteExitNodeSessions.remoteExitNodeId, remoteExitNodeId));
+export async function invalidateAllRemoteExitNodeSessions(
+ remoteExitNodeId: string
+): Promise<void> {
+ await db
+ .delete(remoteExitNodeSessions)
+ .where(eq(remoteExitNodeSessions.remoteExitNodeId, remoteExitNodeId));
}
export type SessionValidationResult =
diff --git a/server/private/cleanup.ts b/server/private/cleanup.ts
index 8bf5ea3d..e9b30527 100644
--- a/server/private/cleanup.ts
+++ b/server/private/cleanup.ts
@@ -25,4 +25,4 @@ export async function initCleanup() {
// Handle process termination
process.on("SIGTERM", () => cleanup());
process.on("SIGINT", () => cleanup());
-}
\ No newline at end of file
+}
diff --git a/server/private/lib/billing/index.ts b/server/private/lib/billing/index.ts
index 13ca3761..c2b77d5f 100644
--- a/server/private/lib/billing/index.ts
+++ b/server/private/lib/billing/index.ts
@@ -12,4 +12,4 @@
*/
export * from "./getOrgTierData";
-export * from "./createCustomer";
\ No newline at end of file
+export * from "./createCustomer";
diff --git a/server/private/lib/certificates.ts b/server/private/lib/certificates.ts
index ec4b73ee..06571cac 100644
--- a/server/private/lib/certificates.ts
+++ b/server/private/lib/certificates.ts
@@ -55,7 +55,6 @@ export async function getValidCertificatesForDomains(
domains: Set<string>,
useCache: boolean = true
): Promise<CertificateResult[]> {
-
loadEncryptData(); // Ensure encryption key is loaded
const finalResults: CertificateResult[] = [];
diff --git a/server/private/lib/checkOrgAccessPolicy.ts b/server/private/lib/checkOrgAccessPolicy.ts
index 2137cd72..7a78803d 100644
--- a/server/private/lib/checkOrgAccessPolicy.ts
+++ b/server/private/lib/checkOrgAccessPolicy.ts
@@ -12,14 +12,7 @@
*/
import { build } from "@server/build";
-import {
- db,
- Org,
- orgs,
- ResourceSession,
- sessions,
- users
-} from "@server/db";
+import { db, Org, orgs, ResourceSession, sessions, users } from "@server/db";
import { getOrgTierData } from "#private/lib/billing";
import { TierId } from "@server/lib/billing/tiers";
import license from "#private/license/license";
diff --git a/server/private/lib/exitNodes/exitNodeComms.ts b/server/private/lib/exitNodes/exitNodeComms.ts
index 20c850a1..faf1153f 100644
--- a/server/private/lib/exitNodes/exitNodeComms.ts
+++ b/server/private/lib/exitNodes/exitNodeComms.ts
@@ -66,7 +66,9 @@ export async function sendToExitNode(
// logger.debug(`Configured local exit node name: ${config.getRawConfig().gerbil.exit_node_name}`);
if (exitNode.name == config.getRawConfig().gerbil.exit_node_name) {
- hostname = privateConfig.getRawPrivateConfig().gerbil.local_exit_node_reachable_at;
+ hostname =
+ privateConfig.getRawPrivateConfig().gerbil
+ .local_exit_node_reachable_at;
}
if (!hostname) {
diff --git a/server/private/lib/exitNodes/exitNodes.ts b/server/private/lib/exitNodes/exitNodes.ts
index 77149bb0..556fdcf7 100644
--- a/server/private/lib/exitNodes/exitNodes.ts
+++ b/server/private/lib/exitNodes/exitNodes.ts
@@ -44,43 +44,53 @@ async function checkExitNodeOnlineStatus(
const delayBetweenAttempts = 100; // 100ms delay between starting each attempt
// Create promises for all attempts with staggered delays
- const attemptPromises = Array.from({ length: maxAttempts }, async (_, index) => {
- const attemptNumber = index + 1;
-
- // Add delay before each attempt (except the first)
- if (index > 0) {
- await new Promise((resolve) => setTimeout(resolve, delayBetweenAttempts * index));
- }
+ const attemptPromises = Array.from(
+ { length: maxAttempts },
+ async (_, index) => {
+ const attemptNumber = index + 1;
- try {
- const response = await axios.get(`http://${endpoint}/ping`, {
- timeout: timeoutMs,
- validateStatus: (status) => status === 200
- });
-
- if (response.status === 200) {
- logger.debug(
- `Exit node ${endpoint} is online (attempt ${attemptNumber}/${maxAttempts})`
+ // Add delay before each attempt (except the first)
+ if (index > 0) {
+ await new Promise((resolve) =>
+ setTimeout(resolve, delayBetweenAttempts * index)
);
- return { success: true, attemptNumber };
}
- return { success: false, attemptNumber, error: 'Non-200 status' };
- } catch (error) {
- const errorMessage = error instanceof Error ? error.message : "Unknown error";
- logger.debug(
- `Exit node ${endpoint} ping failed (attempt ${attemptNumber}/${maxAttempts}): ${errorMessage}`
- );
- return { success: false, attemptNumber, error: errorMessage };
+
+ try {
+ const response = await axios.get(`http://${endpoint}/ping`, {
+ timeout: timeoutMs,
+ validateStatus: (status) => status === 200
+ });
+
+ if (response.status === 200) {
+ logger.debug(
+ `Exit node ${endpoint} is online (attempt ${attemptNumber}/${maxAttempts})`
+ );
+ return { success: true, attemptNumber };
+ }
+ return {
+ success: false,
+ attemptNumber,
+ error: "Non-200 status"
+ };
+ } catch (error) {
+ const errorMessage =
+ error instanceof Error ? error.message : "Unknown error";
+ logger.debug(
+ `Exit node ${endpoint} ping failed (attempt ${attemptNumber}/${maxAttempts}): ${errorMessage}`
+ );
+ return { success: false, attemptNumber, error: errorMessage };
+ }
}
- });
+ );
try {
// Wait for the first successful response or all to fail
const results = await Promise.allSettled(attemptPromises);
-
+
// Check if any attempt succeeded
for (const result of results) {
- if (result.status === 'fulfilled' && result.value.success) {
+ if (result.status === "fulfilled" && result.value.success) {
return true;
}
}
@@ -137,7 +147,11 @@ export async function verifyExitNodeOrgAccess(
return { hasAccess: false, exitNode };
}
-export async function listExitNodes(orgId: string, filterOnline = false, noCloud = false) {
+export async function listExitNodes(
+ orgId: string,
+ filterOnline = false,
+ noCloud = false
+) {
const allExitNodes = await db
.select({
exitNodeId: exitNodes.exitNodeId,
@@ -166,7 +180,10 @@ export async function listExitNodes(orgId: string, filterOnline = false, noCloud
eq(exitNodes.type, "gerbil"),
or(
// only choose nodes that are in the same region
- eq(exitNodes.region, config.getRawPrivateConfig().app.region),
+ eq(
+ exitNodes.region,
+ config.getRawPrivateConfig().app.region
+ ),
isNull(exitNodes.region) // or for enterprise where region is not set
)
),
@@ -191,7 +208,7 @@ export async function listExitNodes(orgId: string, filterOnline = false, noCloud
// let online: boolean;
// if (filterOnline && node.type == "remoteExitNode") {
// try {
- // const isActuallyOnline = await checkExitNodeOnlineStatus(
+ // const isActuallyOnline = await checkExitNodeOnlineStatus(
// node.endpoint
// );
@@ -225,7 +242,8 @@ export async function listExitNodes(orgId: string, filterOnline = false, noCloud
node.type === "remoteExitNode" && (!filterOnline || node.online)
);
const gerbilExitNodes = allExitNodes.filter(
- (node) => node.type === "gerbil" && (!filterOnline || node.online) && !noCloud
+ (node) =>
+ node.type === "gerbil" && (!filterOnline || node.online) && !noCloud
);
// THIS PROVIDES THE FALLBACK
@@ -334,7 +352,11 @@ export function selectBestExitNode(
return fallbackNode;
}
-export async function checkExitNodeOrg(exitNodeId: number, orgId: string, trx: Transaction | typeof db = db) {
+export async function checkExitNodeOrg(
+ exitNodeId: number,
+ orgId: string,
+ trx: Transaction | typeof db = db
+) {
const [exitNodeOrg] = await trx
.select()
.from(exitNodeOrgs)
diff --git a/server/private/lib/exitNodes/index.ts b/server/private/lib/exitNodes/index.ts
index 098a0580..00113b64 100644
--- a/server/private/lib/exitNodes/index.ts
+++ b/server/private/lib/exitNodes/index.ts
@@ -12,4 +12,4 @@
*/
export * from "./exitNodeComms";
-export * from "./exitNodes";
\ No newline at end of file
+export * from "./exitNodes";
diff --git a/server/private/lib/lock.ts b/server/private/lib/lock.ts
index 4a12063b..08496f65 100644
--- a/server/private/lib/lock.ts
+++ b/server/private/lib/lock.ts
@@ -177,7 +177,9 @@ export class LockManager {
const exists = value !== null;
const ownedByMe =
exists &&
- value!.startsWith(`${config.getRawConfig().gerbil.exit_node_name}:`);
+ value!.startsWith(
+ `${config.getRawConfig().gerbil.exit_node_name}:`
+ );
const owner = exists ? value!.split(":")[0] : undefined;
return {
diff --git a/server/private/lib/rateLimit.test.ts b/server/private/lib/rateLimit.test.ts
index 59952c8c..96adf082 100644
--- a/server/private/lib/rateLimit.test.ts
+++ b/server/private/lib/rateLimit.test.ts
@@ -14,15 +14,15 @@
// Simple test file for the rate limit service with Redis
// Run with: npx ts-node rateLimitService.test.ts
-import { RateLimitService } from './rateLimit';
+import { RateLimitService } from "./rateLimit";
function generateClientId() {
- return 'client-' + Math.random().toString(36).substring(2, 15);
+ return "client-" + Math.random().toString(36).substring(2, 15);
}
async function runTests() {
- console.log('Starting Rate Limit Service Tests...\n');
-
+ console.log("Starting Rate Limit Service Tests...\n");
+
const rateLimitService = new RateLimitService();
let testsPassed = 0;
let testsTotal = 0;
@@ -47,36 +47,54 @@ async function runTests() {
}
// Test 1: Basic rate limiting
- await test('Should allow requests under the limit', async () => {
+ await test("Should allow requests under the limit", async () => {
const clientId = generateClientId();
const maxRequests = 5;
for (let i = 0; i < maxRequests - 1; i++) {
- const result = await rateLimitService.checkRateLimit(clientId, undefined, maxRequests);
+ const result = await rateLimitService.checkRateLimit(
+ clientId,
+ undefined,
+ maxRequests
+ );
assert(!result.isLimited, `Request ${i + 1} should be allowed`);
- assert(result.totalHits === i + 1, `Expected ${i + 1} hits, got ${result.totalHits}`);
+ assert(
+ result.totalHits === i + 1,
+ `Expected ${i + 1} hits, got ${result.totalHits}`
+ );
}
});
// Test 2: Rate limit blocking
- await test('Should block requests over the limit', async () => {
+ await test("Should block requests over the limit", async () => {
const clientId = generateClientId();
const maxRequests = 30;
// Use up all allowed requests
for (let i = 0; i < maxRequests - 1; i++) {
- const result = await rateLimitService.checkRateLimit(clientId, undefined, maxRequests);
+ const result = await rateLimitService.checkRateLimit(
+ clientId,
+ undefined,
+ maxRequests
+ );
assert(!result.isLimited, `Request ${i + 1} should be allowed`);
}
// Next request should be blocked
- const blockedResult = await rateLimitService.checkRateLimit(clientId, undefined, maxRequests);
- assert(blockedResult.isLimited, 'Request should be blocked');
- assert(blockedResult.reason === 'global', 'Should be blocked for global reason');
+ const blockedResult = await rateLimitService.checkRateLimit(
+ clientId,
+ undefined,
+ maxRequests
+ );
+ assert(blockedResult.isLimited, "Request should be blocked");
+ assert(
+ blockedResult.reason === "global",
+ "Should be blocked for global reason"
+ );
});
// Test 3: Message type limits
- await test('Should handle message type limits', async () => {
+ await test("Should handle message type limits", async () => {
const clientId = generateClientId();
const globalMax = 10;
const messageTypeMax = 2;
@@ -84,54 +102,64 @@ async function runTests() {
// Send messages of type 'ping' up to the limit
for (let i = 0; i < messageTypeMax - 1; i++) {
const result = await rateLimitService.checkRateLimit(
- clientId,
- 'ping',
- globalMax,
+ clientId,
+ "ping",
+ globalMax,
messageTypeMax
);
- assert(!result.isLimited, `Ping message ${i + 1} should be allowed`);
+ assert(
+ !result.isLimited,
+ `Ping message ${i + 1} should be allowed`
+ );
}
// Next 'ping' should be blocked
const blockedResult = await rateLimitService.checkRateLimit(
- clientId,
- 'ping',
- globalMax,
+ clientId,
+ "ping",
+ globalMax,
messageTypeMax
);
- assert(blockedResult.isLimited, 'Ping message should be blocked');
- assert(blockedResult.reason === 'message_type:ping', 'Should be blocked for message type');
+ assert(blockedResult.isLimited, "Ping message should be blocked");
+ assert(
+ blockedResult.reason === "message_type:ping",
+ "Should be blocked for message type"
+ );
// Other message types should still work
const otherResult = await rateLimitService.checkRateLimit(
- clientId,
- 'pong',
- globalMax,
+ clientId,
+ "pong",
+ globalMax,
messageTypeMax
);
- assert(!otherResult.isLimited, 'Pong message should be allowed');
+ assert(!otherResult.isLimited, "Pong message should be allowed");
});
// Test 4: Reset functionality
- await test('Should reset client correctly', async () => {
+ await test("Should reset client correctly", async () => {
const clientId = generateClientId();
const maxRequests = 3;
// Use up some requests
await rateLimitService.checkRateLimit(clientId, undefined, maxRequests);
- await rateLimitService.checkRateLimit(clientId, 'test', maxRequests);
+ await rateLimitService.checkRateLimit(clientId, "test", maxRequests);
// Reset the client
await rateLimitService.resetKey(clientId);
// Should be able to make fresh requests
- const result = await rateLimitService.checkRateLimit(clientId, undefined, maxRequests);
- assert(!result.isLimited, 'Request after reset should be allowed');
- assert(result.totalHits === 1, 'Should have 1 hit after reset');
+ const result = await rateLimitService.checkRateLimit(
+ clientId,
+ undefined,
+ maxRequests
+ );
+ assert(!result.isLimited, "Request after reset should be allowed");
+ assert(result.totalHits === 1, "Should have 1 hit after reset");
});
// Test 5: Different clients are independent
- await test('Should handle different clients independently', async () => {
+ await test("Should handle different clients independently", async () => {
const client1 = generateClientId();
const client2 = generateClientId();
const maxRequests = 2;
@@ -139,43 +167,62 @@ async function runTests() {
// Client 1 uses up their limit
await rateLimitService.checkRateLimit(client1, undefined, maxRequests);
await rateLimitService.checkRateLimit(client1, undefined, maxRequests);
- const client1Blocked = await rateLimitService.checkRateLimit(client1, undefined, maxRequests);
- assert(client1Blocked.isLimited, 'Client 1 should be blocked');
+ const client1Blocked = await rateLimitService.checkRateLimit(
+ client1,
+ undefined,
+ maxRequests
+ );
+ assert(client1Blocked.isLimited, "Client 1 should be blocked");
// Client 2 should still be able to make requests
- const client2Result = await rateLimitService.checkRateLimit(client2, undefined, maxRequests);
- assert(!client2Result.isLimited, 'Client 2 should not be blocked');
- assert(client2Result.totalHits === 1, 'Client 2 should have 1 hit');
+ const client2Result = await rateLimitService.checkRateLimit(
+ client2,
+ undefined,
+ maxRequests
+ );
+ assert(!client2Result.isLimited, "Client 2 should not be blocked");
+ assert(client2Result.totalHits === 1, "Client 2 should have 1 hit");
});
// Test 6: Decrement functionality
- await test('Should decrement correctly', async () => {
+ await test("Should decrement correctly", async () => {
const clientId = generateClientId();
const maxRequests = 5;
// Make some requests
await rateLimitService.checkRateLimit(clientId, undefined, maxRequests);
await rateLimitService.checkRateLimit(clientId, undefined, maxRequests);
- let result = await rateLimitService.checkRateLimit(clientId, undefined, maxRequests);
- assert(result.totalHits === 3, 'Should have 3 hits before decrement');
+ let result = await rateLimitService.checkRateLimit(
+ clientId,
+ undefined,
+ maxRequests
+ );
+ assert(result.totalHits === 3, "Should have 3 hits before decrement");
// Decrement
await rateLimitService.decrementRateLimit(clientId);
// Next request should reflect the decrement
- result = await rateLimitService.checkRateLimit(clientId, undefined, maxRequests);
- assert(result.totalHits === 3, 'Should have 3 hits after decrement + increment');
+ result = await rateLimitService.checkRateLimit(
+ clientId,
+ undefined,
+ maxRequests
+ );
+ assert(
+ result.totalHits === 3,
+ "Should have 3 hits after decrement + increment"
+ );
});
// Wait a moment for any pending Redis operations
- console.log('\nWaiting for Redis sync...');
- await new Promise(resolve => setTimeout(resolve, 1000));
+ console.log("\nWaiting for Redis sync...");
+ await new Promise((resolve) => setTimeout(resolve, 1000));
// Force sync to test Redis integration
- await test('Should sync to Redis', async () => {
+ await test("Should sync to Redis", async () => {
await rateLimitService.forceSyncAllPendingData();
// If this doesn't throw, Redis sync is working
- assert(true, 'Redis sync completed');
+ assert(true, "Redis sync completed");
});
// Cleanup
@@ -185,18 +232,18 @@ async function runTests() {
console.log(`\n--- Test Results ---`);
console.log(`✅ Passed: ${testsPassed}/${testsTotal}`);
console.log(`❌ Failed: ${testsTotal - testsPassed}/${testsTotal}`);
-
+
if (testsPassed === testsTotal) {
- console.log('\n🎉 All tests passed!');
+ console.log("\n🎉 All tests passed!");
process.exit(0);
} else {
- console.log('\n💥 Some tests failed!');
+ console.log("\n💥 Some tests failed!");
process.exit(1);
}
}
// Run the tests
-runTests().catch(error => {
- console.error('Test runner error:', error);
+runTests().catch((error) => {
+ console.error("Test runner error:", error);
process.exit(1);
-});
\ No newline at end of file
+});
diff --git a/server/private/lib/rateLimit.ts b/server/private/lib/rateLimit.ts
index 6d4ab44d..984d95c6 100644
--- a/server/private/lib/rateLimit.ts
+++ b/server/private/lib/rateLimit.ts
@@ -40,7 +40,8 @@ interface RateLimitResult {
export class RateLimitService {
private localRateLimitTracker: Map<string, RateLimitTracker> = new Map();
- private localMessageTypeRateLimitTracker: Map<string, RateLimitTracker> = new Map();
+ private localMessageTypeRateLimitTracker: Map<string, RateLimitTracker> =
+ new Map();
private cleanupInterval: NodeJS.Timeout | null = null;
private forceSyncInterval: NodeJS.Timeout | null = null;
@@ -68,12 +69,18 @@ export class RateLimitService {
return `ratelimit:${clientId}`;
}
- private getMessageTypeRateLimitKey(clientId: string, messageType: string): string {
+ private getMessageTypeRateLimitKey(
+ clientId: string,
+ messageType: string
+ ): string {
return `ratelimit:${clientId}:${messageType}`;
}
// Helper function to clean up old timestamp fields from a Redis hash
- private async cleanupOldTimestamps(key: string, windowStart: number): Promise<void> {
+ private async cleanupOldTimestamps(
+ key: string,
+ windowStart: number
+ ): Promise<void> {
if (!redisManager.isRedisEnabled()) return;
try {
@@ -101,10 +108,15 @@ export class RateLimitService {
const batch = fieldsToDelete.slice(i, i + batchSize);
await client.hdel(key, ...batch);
}
- logger.debug(`Cleaned up ${fieldsToDelete.length} old timestamp fields from ${key}`);
+ logger.debug(
+ `Cleaned up ${fieldsToDelete.length} old timestamp fields from ${key}`
+ );
}
} catch (error) {
- logger.error(`Failed to cleanup old timestamps for key ${key}:`, error);
+ logger.error(
+ `Failed to cleanup old timestamps for key ${key}:`,
+ error
+ );
// Don't throw - cleanup failures shouldn't block rate limiting
}
}
@@ -114,7 +126,8 @@ export class RateLimitService {
clientId: string,
tracker: RateLimitTracker
): Promise<void> {
- if (!redisManager.isRedisEnabled() || tracker.pendingCount === 0) return;
+ if (!redisManager.isRedisEnabled() || tracker.pendingCount === 0)
+ return;
try {
const currentTime = Math.floor(Date.now() / 1000);
@@ -132,7 +145,11 @@ export class RateLimitService {
const newValue = (
parseInt(currentValue || "0") + tracker.pendingCount
).toString();
- await redisManager.hset(globalKey, currentTime.toString(), newValue);
+ await redisManager.hset(
+ globalKey,
+ currentTime.toString(),
+ newValue
+ );
// Set TTL using the client directly - this prevents the key from persisting forever
if (redisManager.getClient()) {
@@ -145,7 +162,9 @@ export class RateLimitService {
tracker.lastSyncedCount = tracker.count;
tracker.pendingCount = 0;
- logger.debug(`Synced global rate limit to Redis for client ${clientId}`);
+ logger.debug(
+ `Synced global rate limit to Redis for client ${clientId}`
+ );
} catch (error) {
logger.error("Failed to sync global rate limit to Redis:", error);
}
@@ -156,12 +175,16 @@ export class RateLimitService {
messageType: string,
tracker: RateLimitTracker
): Promise<void> {
- if (!redisManager.isRedisEnabled() || tracker.pendingCount === 0) return;
+ if (!redisManager.isRedisEnabled() || tracker.pendingCount === 0)
+ return;
try {
const currentTime = Math.floor(Date.now() / 1000);
const windowStart = currentTime - RATE_LIMIT_WINDOW;
- const messageTypeKey = this.getMessageTypeRateLimitKey(clientId, messageType);
+ const messageTypeKey = this.getMessageTypeRateLimitKey(
+ clientId,
+ messageType
+ );
// Clean up old timestamp fields before writing
await this.cleanupOldTimestamps(messageTypeKey, windowStart);
@@ -195,12 +218,17 @@ export class RateLimitService {
`Synced message type rate limit to Redis for client ${clientId}, type ${messageType}`
);
} catch (error) {
- logger.error("Failed to sync message type rate limit to Redis:", error);
+ logger.error(
+ "Failed to sync message type rate limit to Redis:",
+ error
+ );
}
}
// Initialize local tracker from Redis data
- private async initializeLocalTracker(clientId: string): Promise<RateLimitTracker> {
+ private async initializeLocalTracker(
+ clientId: string
+ ): Promise<RateLimitTracker> {
const currentTime = Math.floor(Date.now() / 1000);
const windowStart = currentTime - RATE_LIMIT_WINDOW;
@@ -215,14 +243,16 @@ export class RateLimitService {
try {
const globalKey = this.getRateLimitKey(clientId);
-
+
// Clean up old timestamp fields before reading
await this.cleanupOldTimestamps(globalKey, windowStart);
-
+
const globalRateLimitData = await redisManager.hgetall(globalKey);
let count = 0;
- for (const [timestamp, countStr] of Object.entries(globalRateLimitData)) {
+ for (const [timestamp, countStr] of Object.entries(
+ globalRateLimitData
+ )) {
const time = parseInt(timestamp);
if (time >= windowStart) {
count += parseInt(countStr);
@@ -236,7 +266,10 @@ export class RateLimitService {
lastSyncedCount: count
};
} catch (error) {
- logger.error("Failed to initialize global tracker from Redis:", error);
+ logger.error(
+ "Failed to initialize global tracker from Redis:",
+ error
+ );
return {
count: 0,
windowStart: currentTime,
@@ -263,15 +296,21 @@ export class RateLimitService {
}
try {
- const messageTypeKey = this.getMessageTypeRateLimitKey(clientId, messageType);
-
+ const messageTypeKey = this.getMessageTypeRateLimitKey(
+ clientId,
+ messageType
+ );
+
// Clean up old timestamp fields before reading
await this.cleanupOldTimestamps(messageTypeKey, windowStart);
-
- const messageTypeRateLimitData = await redisManager.hgetall(messageTypeKey);
+
+ const messageTypeRateLimitData =
+ await redisManager.hgetall(messageTypeKey);
let count = 0;
- for (const [timestamp, countStr] of Object.entries(messageTypeRateLimitData)) {
+ for (const [timestamp, countStr] of Object.entries(
+ messageTypeRateLimitData
+ )) {
const time = parseInt(timestamp);
if (time >= windowStart) {
count += parseInt(countStr);
@@ -285,7 +324,10 @@ export class RateLimitService {
lastSyncedCount: count
};
} catch (error) {
- logger.error("Failed to initialize message type tracker from Redis:", error);
+ logger.error(
+ "Failed to initialize message type tracker from Redis:",
+ error
+ );
return {
count: 0,
windowStart: currentTime,
@@ -327,7 +369,10 @@ export class RateLimitService {
isLimited: true,
reason: "global",
totalHits: globalTracker.count,
- resetTime: new Date((globalTracker.windowStart + Math.floor(windowMs / 1000)) * 1000)
+ resetTime: new Date(
+ (globalTracker.windowStart + Math.floor(windowMs / 1000)) *
+ 1000
+ )
};
}
@@ -339,19 +384,32 @@ export class RateLimitService {
// Check message type specific rate limit if messageType is provided
if (messageType) {
const messageTypeKey = `${clientId}:${messageType}`;
- let messageTypeTracker = this.localMessageTypeRateLimitTracker.get(messageTypeKey);
+ let messageTypeTracker =
+ this.localMessageTypeRateLimitTracker.get(messageTypeKey);
- if (!messageTypeTracker || messageTypeTracker.windowStart < windowStart) {
+ if (
+ !messageTypeTracker ||
+ messageTypeTracker.windowStart < windowStart
+ ) {
// New window or first request for this message type - initialize from Redis if available
- messageTypeTracker = await this.initializeMessageTypeTracker(clientId, messageType);
+ messageTypeTracker = await this.initializeMessageTypeTracker(
+ clientId,
+ messageType
+ );
messageTypeTracker.windowStart = currentTime;
- this.localMessageTypeRateLimitTracker.set(messageTypeKey, messageTypeTracker);
+ this.localMessageTypeRateLimitTracker.set(
+ messageTypeKey,
+ messageTypeTracker
+ );
}
// Increment message type counters
messageTypeTracker.count++;
messageTypeTracker.pendingCount++;
- this.localMessageTypeRateLimitTracker.set(messageTypeKey, messageTypeTracker);
+ this.localMessageTypeRateLimitTracker.set(
+ messageTypeKey,
+ messageTypeTracker
+ );
// Check if message type limit would be exceeded
if (messageTypeTracker.count >= messageTypeLimit) {
@@ -359,25 +417,38 @@ export class RateLimitService {
isLimited: true,
reason: `message_type:${messageType}`,
totalHits: messageTypeTracker.count,
- resetTime: new Date((messageTypeTracker.windowStart + Math.floor(windowMs / 1000)) * 1000)
+ resetTime: new Date(
+ (messageTypeTracker.windowStart +
+ Math.floor(windowMs / 1000)) *
+ 1000
+ )
};
}
// Sync to Redis if threshold reached
if (messageTypeTracker.pendingCount >= REDIS_SYNC_THRESHOLD) {
- this.syncMessageTypeRateLimitToRedis(clientId, messageType, messageTypeTracker);
+ this.syncMessageTypeRateLimitToRedis(
+ clientId,
+ messageType,
+ messageTypeTracker
+ );
}
}
return {
isLimited: false,
totalHits: globalTracker.count,
- resetTime: new Date((globalTracker.windowStart + Math.floor(windowMs / 1000)) * 1000)
+ resetTime: new Date(
+ (globalTracker.windowStart + Math.floor(windowMs / 1000)) * 1000
+ )
};
}
// Decrement function for skipSuccessfulRequests/skipFailedRequests functionality
- async decrementRateLimit(clientId: string, messageType?: string): Promise<void> {
+ async decrementRateLimit(
+ clientId: string,
+ messageType?: string
+ ): Promise<void> {
// Decrement global counter
const globalTracker = this.localRateLimitTracker.get(clientId);
if (globalTracker && globalTracker.count > 0) {
@@ -389,7 +460,8 @@ export class RateLimitService {
// Decrement message type counter if provided
if (messageType) {
const messageTypeKey = `${clientId}:${messageType}`;
- const messageTypeTracker = this.localMessageTypeRateLimitTracker.get(messageTypeKey);
+ const messageTypeTracker =
+ this.localMessageTypeRateLimitTracker.get(messageTypeKey);
if (messageTypeTracker && messageTypeTracker.count > 0) {
messageTypeTracker.count--;
messageTypeTracker.pendingCount--;
@@ -401,7 +473,7 @@ export class RateLimitService {
async resetKey(clientId: string): Promise<void> {
// Remove from local tracking
this.localRateLimitTracker.delete(clientId);
-
+
// Remove all message type entries for this client
for (const [key] of this.localMessageTypeRateLimitTracker) {
if (key.startsWith(`${clientId}:`)) {
@@ -417,9 +489,13 @@ export class RateLimitService {
// Get all message type keys for this client and delete them
const client = redisManager.getClient();
if (client) {
- const messageTypeKeys = await client.keys(`ratelimit:${clientId}:*`);
+ const messageTypeKeys = await client.keys(
+ `ratelimit:${clientId}:*`
+ );
if (messageTypeKeys.length > 0) {
- await Promise.all(messageTypeKeys.map(key => redisManager.del(key)));
+ await Promise.all(
+ messageTypeKeys.map((key) => redisManager.del(key))
+ );
}
}
}
@@ -431,7 +507,10 @@ export class RateLimitService {
const windowStart = currentTime - RATE_LIMIT_WINDOW;
// Clean up global rate limit tracking and sync pending data
- for (const [clientId, tracker] of this.localRateLimitTracker.entries()) {
+ for (const [
+ clientId,
+ tracker
+ ] of this.localRateLimitTracker.entries()) {
if (tracker.windowStart < windowStart) {
// Sync any pending data before cleanup
if (tracker.pendingCount > 0) {
@@ -442,12 +521,19 @@ export class RateLimitService {
}
// Clean up message type rate limit tracking and sync pending data
- for (const [key, tracker] of this.localMessageTypeRateLimitTracker.entries()) {
+ for (const [
+ key,
+ tracker
+ ] of this.localMessageTypeRateLimitTracker.entries()) {
if (tracker.windowStart < windowStart) {
// Sync any pending data before cleanup
if (tracker.pendingCount > 0) {
const [clientId, messageType] = key.split(":", 2);
- await this.syncMessageTypeRateLimitToRedis(clientId, messageType, tracker);
+ await this.syncMessageTypeRateLimitToRedis(
+ clientId,
+ messageType,
+ tracker
+ );
}
this.localMessageTypeRateLimitTracker.delete(key);
}
@@ -461,17 +547,27 @@ export class RateLimitService {
logger.debug("Force syncing all pending rate limit data to Redis...");
// Sync all pending global rate limits
- for (const [clientId, tracker] of this.localRateLimitTracker.entries()) {
+ for (const [
+ clientId,
+ tracker
+ ] of this.localRateLimitTracker.entries()) {
if (tracker.pendingCount > 0) {
await this.syncRateLimitToRedis(clientId, tracker);
}
}
// Sync all pending message type rate limits
- for (const [key, tracker] of this.localMessageTypeRateLimitTracker.entries()) {
+ for (const [
+ key,
+ tracker
+ ] of this.localMessageTypeRateLimitTracker.entries()) {
if (tracker.pendingCount > 0) {
const [clientId, messageType] = key.split(":", 2);
- await this.syncMessageTypeRateLimitToRedis(clientId, messageType, tracker);
+ await this.syncMessageTypeRateLimitToRedis(
+ clientId,
+ messageType,
+ tracker
+ );
}
}
@@ -504,4 +600,4 @@ export class RateLimitService {
}
// Export singleton instance
-export const rateLimitService = new RateLimitService();
\ No newline at end of file
+export const rateLimitService = new RateLimitService();
diff --git a/server/private/lib/rateLimitStore.ts b/server/private/lib/rateLimitStore.ts
index 20355125..32495cd2 100644
--- a/server/private/lib/rateLimitStore.ts
+++ b/server/private/lib/rateLimitStore.ts
@@ -17,7 +17,10 @@ import { MemoryStore, Store } from "express-rate-limit";
import RedisStore from "#private/lib/redisStore";
export function createStore(): Store {
- if (build != "oss" && privateConfig.getRawPrivateConfig().flags.enable_redis) {
+ if (
+ build != "oss" &&
+ privateConfig.getRawPrivateConfig().flags.enable_redis
+ ) {
const rateLimitStore: Store = new RedisStore({
prefix: "api-rate-limit", // Optional: customize Redis key prefix
skipFailedRequests: true, // Don't count failed requests
diff --git a/server/private/lib/redis.ts b/server/private/lib/redis.ts
index 324a6a74..6b7826ea 100644
--- a/server/private/lib/redis.ts
+++ b/server/private/lib/redis.ts
@@ -19,7 +19,7 @@ import { build } from "@server/build";
class RedisManager {
public client: Redis | null = null;
private writeClient: Redis | null = null; // Master for writes
- private readClient: Redis | null = null; // Replica for reads
+ private readClient: Redis | null = null; // Replica for reads
private subscriber: Redis | null = null;
private publisher: Redis | null = null;
private isEnabled: boolean = false;
@@ -46,7 +46,8 @@ class RedisManager {
this.isEnabled = false;
return;
}
- this.isEnabled = privateConfig.getRawPrivateConfig().flags.enable_redis || false;
+ this.isEnabled =
+ privateConfig.getRawPrivateConfig().flags.enable_redis || false;
if (this.isEnabled) {
this.initializeClients();
}
@@ -63,15 +64,19 @@ class RedisManager {
}
private async triggerReconnectionCallbacks(): Promise<void> {
- logger.info(`Triggering ${this.reconnectionCallbacks.size} reconnection callbacks`);
-
- const promises = Array.from(this.reconnectionCallbacks).map(async (callback) => {
- try {
- await callback();
- } catch (error) {
- logger.error("Error in reconnection callback:", error);
+ logger.info(
+ `Triggering ${this.reconnectionCallbacks.size} reconnection callbacks`
+ );
+
+ const promises = Array.from(this.reconnectionCallbacks).map(
+ async (callback) => {
+ try {
+ await callback();
+ } catch (error) {
+ logger.error("Error in reconnection callback:", error);
+ }
}
- });
+ );
await Promise.allSettled(promises);
}
@@ -79,13 +84,17 @@ class RedisManager {
private async resubscribeToChannels(): Promise<void> {
if (!this.subscriber || this.subscribers.size === 0) return;
- logger.info(`Re-subscribing to ${this.subscribers.size} channels after Redis reconnection`);
-
+ logger.info(
+ `Re-subscribing to ${this.subscribers.size} channels after Redis reconnection`
+ );
+
try {
const channels = Array.from(this.subscribers.keys());
if (channels.length > 0) {
await this.subscriber.subscribe(...channels);
- logger.info(`Successfully re-subscribed to channels: ${channels.join(', ')}`);
+ logger.info(
+ `Successfully re-subscribed to channels: ${channels.join(", ")}`
+ );
}
} catch (error) {
logger.error("Failed to re-subscribe to channels:", error);
@@ -98,7 +107,7 @@ class RedisManager {
host: redisConfig.host!,
port: redisConfig.port!,
password: redisConfig.password,
- db: redisConfig.db,
+ db: redisConfig.db
// tls: {
// rejectUnauthorized:
// redisConfig.tls?.reject_unauthorized || false
@@ -112,7 +121,7 @@ class RedisManager {
if (!redisConfig.replicas || redisConfig.replicas.length === 0) {
return null;
}
-
+
// Use the first replica for simplicity
// In production, you might want to implement load balancing across replicas
const replica = redisConfig.replicas[0];
@@ -120,7 +129,7 @@ class RedisManager {
host: replica.host!,
port: replica.port!,
password: replica.password,
- db: replica.db || redisConfig.db,
+ db: replica.db || redisConfig.db
// tls: {
// rejectUnauthorized:
// replica.tls?.reject_unauthorized || false
@@ -133,7 +142,7 @@ class RedisManager {
private initializeClients(): void {
const masterConfig = this.getRedisConfig();
const replicaConfig = this.getReplicaRedisConfig();
-
+
this.hasReplicas = replicaConfig !== null;
try {
@@ -144,7 +153,7 @@ class RedisManager {
maxRetriesPerRequest: 3,
keepAlive: 30000,
connectTimeout: this.connectionTimeout,
- commandTimeout: this.commandTimeout,
+ commandTimeout: this.commandTimeout
});
// Initialize replica connection for reads (if available)
@@ -155,7 +164,7 @@ class RedisManager {
maxRetriesPerRequest: 3,
keepAlive: 30000,
connectTimeout: this.connectionTimeout,
- commandTimeout: this.commandTimeout,
+ commandTimeout: this.commandTimeout
});
} else {
// Fallback to master for reads if no replicas
@@ -172,7 +181,7 @@ class RedisManager {
maxRetriesPerRequest: 3,
keepAlive: 30000,
connectTimeout: this.connectionTimeout,
- commandTimeout: this.commandTimeout,
+ commandTimeout: this.commandTimeout
});
// Subscriber uses replica if available (reads)
@@ -182,7 +191,7 @@ class RedisManager {
maxRetriesPerRequest: 3,
keepAlive: 30000,
connectTimeout: this.connectionTimeout,
- commandTimeout: this.commandTimeout,
+ commandTimeout: this.commandTimeout
});
// Add reconnection handlers for write client
@@ -202,11 +211,14 @@ class RedisManager {
logger.info("Redis write client ready");
this.isWriteHealthy = true;
this.updateOverallHealth();
-
+
// Trigger reconnection callbacks when Redis comes back online
if (this.isHealthy) {
- this.triggerReconnectionCallbacks().catch(error => {
- logger.error("Error triggering reconnection callbacks:", error);
+ this.triggerReconnectionCallbacks().catch((error) => {
+ logger.error(
+ "Error triggering reconnection callbacks:",
+ error
+ );
});
}
});
@@ -233,11 +245,14 @@ class RedisManager {
logger.info("Redis read client ready");
this.isReadHealthy = true;
this.updateOverallHealth();
-
+
// Trigger reconnection callbacks when Redis comes back online
if (this.isHealthy) {
- this.triggerReconnectionCallbacks().catch(error => {
- logger.error("Error triggering reconnection callbacks:", error);
+ this.triggerReconnectionCallbacks().catch((error) => {
+ logger.error(
+ "Error triggering reconnection callbacks:",
+ error
+ );
});
}
});
@@ -298,8 +313,8 @@ class RedisManager {
}
);
- const setupMessage = this.hasReplicas
- ? "Redis clients initialized successfully with replica support"
+ const setupMessage = this.hasReplicas
+ ? "Redis clients initialized successfully with replica support"
: "Redis clients initialized successfully (single instance)";
logger.info(setupMessage);
@@ -313,7 +328,8 @@ class RedisManager {
private updateOverallHealth(): void {
// Overall health is true if write is healthy and (read is healthy OR we don't have replicas)
- this.isHealthy = this.isWriteHealthy && (this.isReadHealthy || !this.hasReplicas);
+ this.isHealthy =
+ this.isWriteHealthy && (this.isReadHealthy || !this.hasReplicas);
}
private async executeWithRetry<T>(
@@ -322,49 +338,61 @@ class RedisManager {
fallbackOperation?: () => Promise<T>
): Promise<T> {
let lastError: Error | null = null;
-
+
for (let attempt = 0; attempt <= this.maxRetries; attempt++) {
try {
return await operation();
} catch (error) {
lastError = error as Error;
-
+
// If this is the last attempt, try fallback if available
if (attempt === this.maxRetries && fallbackOperation) {
try {
- logger.warn(`${operationName} primary operation failed, trying fallback`);
+ logger.warn(
+ `${operationName} primary operation failed, trying fallback`
+ );
return await fallbackOperation();
} catch (fallbackError) {
- logger.error(`${operationName} fallback also failed:`, fallbackError);
+ logger.error(
+ `${operationName} fallback also failed:`,
+ fallbackError
+ );
throw lastError;
}
}
-
+
// Don't retry on the last attempt
if (attempt === this.maxRetries) {
break;
}
-
+
// Calculate delay with exponential backoff
const delay = Math.min(
- this.baseRetryDelay * Math.pow(this.backoffMultiplier, attempt),
+ this.baseRetryDelay *
+ Math.pow(this.backoffMultiplier, attempt),
this.maxRetryDelay
);
-
- logger.warn(`${operationName} failed (attempt ${attempt + 1}/${this.maxRetries + 1}), retrying in ${delay}ms:`, error);
-
+
+ logger.warn(
+ `${operationName} failed (attempt ${attempt + 1}/${this.maxRetries + 1}), retrying in ${delay}ms:`,
+ error
+ );
+
// Wait before retrying
- await new Promise(resolve => setTimeout(resolve, delay));
+ await new Promise((resolve) => setTimeout(resolve, delay));
}
}
-
- logger.error(`${operationName} failed after ${this.maxRetries + 1} attempts:`, lastError);
+
+ logger.error(
+ `${operationName} failed after ${this.maxRetries + 1} attempts:`,
+ lastError
+ );
throw lastError;
}
private startHealthMonitoring(): void {
if (!this.isEnabled) return;
-
+
// Check health every 30 seconds
setInterval(async () => {
try {
@@ -381,7 +409,7 @@ class RedisManager {
private async checkRedisHealth(): Promise<boolean> {
const now = Date.now();
-
+
// Only check health every 30 seconds
if (now - this.lastHealthCheck < this.healthCheckInterval) {
return this.isHealthy;
@@ -400,24 +428,45 @@ class RedisManager {
// Check write client (master) health
await Promise.race([
this.writeClient.ping(),
- new Promise((_, reject) =>
- setTimeout(() => reject(new Error('Write client health check timeout')), 2000)
+ new Promise((_, reject) =>
+ setTimeout(
+ () =>
+ reject(
+ new Error("Write client health check timeout")
+ ),
+ 2000
+ )
)
]);
this.isWriteHealthy = true;
// Check read client health if it's different from write client
- if (this.hasReplicas && this.readClient && this.readClient !== this.writeClient) {
+ if (
+ this.hasReplicas &&
+ this.readClient &&
+ this.readClient !== this.writeClient
+ ) {
try {
await Promise.race([
this.readClient.ping(),
- new Promise((_, reject) =>
- setTimeout(() => reject(new Error('Read client health check timeout')), 2000)
+ new Promise((_, reject) =>
+ setTimeout(
+ () =>
+ reject(
+ new Error(
+ "Read client health check timeout"
+ )
+ ),
+ 2000
+ )
)
]);
this.isReadHealthy = true;
} catch (error) {
- logger.error("Redis read client health check failed:", error);
+ logger.error(
+ "Redis read client health check failed:",
+ error
+ );
this.isReadHealthy = false;
}
} else {
@@ -475,16 +524,13 @@ class RedisManager {
if (!this.isRedisEnabled() || !this.writeClient) return false;
try {
- await this.executeWithRetry(
- async () => {
- if (ttl) {
- await this.writeClient!.setex(key, ttl, value);
- } else {
- await this.writeClient!.set(key, value);
- }
- },
- "Redis SET"
- );
+ await this.executeWithRetry(async () => {
+ if (ttl) {
+ await this.writeClient!.setex(key, ttl, value);
+ } else {
+ await this.writeClient!.set(key, value);
+ }
+ }, "Redis SET");
return true;
} catch (error) {
logger.error("Redis SET error:", error);
@@ -496,9 +542,10 @@ class RedisManager {
if (!this.isRedisEnabled() || !this.readClient) return null;
try {
- const fallbackOperation = (this.hasReplicas && this.writeClient && this.isWriteHealthy)
- ? () => this.writeClient!.get(key)
- : undefined;
+ const fallbackOperation =
+ this.hasReplicas && this.writeClient && this.isWriteHealthy
+ ? () => this.writeClient!.get(key)
+ : undefined;
return await this.executeWithRetry(
() => this.readClient!.get(key),
@@ -560,9 +607,10 @@ class RedisManager {
if (!this.isRedisEnabled() || !this.readClient) return [];
try {
- const fallbackOperation = (this.hasReplicas && this.writeClient && this.isWriteHealthy)
- ? () => this.writeClient!.smembers(key)
- : undefined;
+ const fallbackOperation =
+ this.hasReplicas && this.writeClient && this.isWriteHealthy
+ ? () => this.writeClient!.smembers(key)
+ : undefined;
return await this.executeWithRetry(
() => this.readClient!.smembers(key),
@@ -598,9 +646,10 @@ class RedisManager {
if (!this.isRedisEnabled() || !this.readClient) return null;
try {
- const fallbackOperation = (this.hasReplicas && this.writeClient && this.isWriteHealthy)
- ? () => this.writeClient!.hget(key, field)
- : undefined;
+ const fallbackOperation =
+ this.hasReplicas && this.writeClient && this.isWriteHealthy
+ ? () => this.writeClient!.hget(key, field)
+ : undefined;
return await this.executeWithRetry(
() => this.readClient!.hget(key, field),
@@ -632,9 +681,10 @@ class RedisManager {
if (!this.isRedisEnabled() || !this.readClient) return {};
try {
- const fallbackOperation = (this.hasReplicas && this.writeClient && this.isWriteHealthy)
- ? () => this.writeClient!.hgetall(key)
- : undefined;
+ const fallbackOperation =
+ this.hasReplicas && this.writeClient && this.isWriteHealthy
+ ? () => this.writeClient!.hgetall(key)
+ : undefined;
return await this.executeWithRetry(
() => this.readClient!.hgetall(key),
@@ -658,18 +708,18 @@ class RedisManager {
}
try {
- await this.executeWithRetry(
- async () => {
- // Add timeout to prevent hanging
- return Promise.race([
- this.publisher!.publish(channel, message),
- new Promise((_, reject) =>
- setTimeout(() => reject(new Error('Redis publish timeout')), 3000)
+ await this.executeWithRetry(async () => {
+ // Add timeout to prevent hanging
+ return Promise.race([
+ this.publisher!.publish(channel, message),
+ new Promise((_, reject) =>
+ setTimeout(
+ () => reject(new Error("Redis publish timeout")),
+ 3000
)
- ]);
- },
- "Redis PUBLISH"
- );
+ )
+ ]);
+ }, "Redis PUBLISH");
return true;
} catch (error) {
logger.error("Redis PUBLISH error:", error);
@@ -689,17 +739,20 @@ class RedisManager {
if (!this.subscribers.has(channel)) {
this.subscribers.set(channel, new Set());
// Only subscribe to the channel if it's the first subscriber
- await this.executeWithRetry(
- async () => {
- return Promise.race([
- this.subscriber!.subscribe(channel),
- new Promise((_, reject) =>
- setTimeout(() => reject(new Error('Redis subscribe timeout')), 5000)
+ await this.executeWithRetry(async () => {
+ return Promise.race([
+ this.subscriber!.subscribe(channel),
+ new Promise((_, reject) =>
+ setTimeout(
+ () =>
+ reject(
+ new Error("Redis subscribe timeout")
+ ),
+ 5000
)
- ]);
- },
- "Redis SUBSCRIBE"
- );
+ )
+ ]);
+ }, "Redis SUBSCRIBE");
}
this.subscribers.get(channel)!.add(callback);
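The same `Promise.race` timeout guard recurs in the health-check, publish, and subscribe paths above. As an illustrative sketch (not part of the diff — the `withTimeout` helper name is hypothetical), the pattern factors out like this:

```typescript
// Hypothetical helper showing the Promise.race timeout guard used above:
// whichever promise settles first wins, so a hung operation is converted
// into a rejection after `ms` milliseconds. Like the diff's inline version,
// this sketch does not clear the timer when the operation wins the race.
function withTimeout<T>(
    operation: Promise<T>,
    ms: number,
    label: string
): Promise<T> {
    return Promise.race([
        operation,
        new Promise<T>((_, reject) =>
            setTimeout(() => reject(new Error(`${label} timeout`)), ms)
        )
    ]);
}
```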
diff --git a/server/private/lib/redisStore.ts b/server/private/lib/redisStore.ts
index 235f8f8f..2360e309 100644
--- a/server/private/lib/redisStore.ts
+++ b/server/private/lib/redisStore.ts
@@ -11,9 +11,9 @@
* This file is not licensed under the AGPLv3.
*/
-import { Store, Options, IncrementResponse } from 'express-rate-limit';
-import { rateLimitService } from './rateLimit';
-import logger from '@server/logger';
+import { Store, Options, IncrementResponse } from "express-rate-limit";
+import { rateLimitService } from "./rateLimit";
+import logger from "@server/logger";
/**
* A Redis-backed rate limiting store for express-rate-limit that optimizes
@@ -57,12 +57,14 @@ export default class RedisStore implements Store {
*
* @param options - Configuration options for the store.
*/
- constructor(options: {
- prefix?: string;
- skipFailedRequests?: boolean;
- skipSuccessfulRequests?: boolean;
- } = {}) {
- this.prefix = options.prefix || 'express-rate-limit';
+ constructor(
+ options: {
+ prefix?: string;
+ skipFailedRequests?: boolean;
+ skipSuccessfulRequests?: boolean;
+ } = {}
+ ) {
+ this.prefix = options.prefix || "express-rate-limit";
this.skipFailedRequests = options.skipFailedRequests || false;
this.skipSuccessfulRequests = options.skipSuccessfulRequests || false;
}
@@ -101,7 +103,8 @@ export default class RedisStore implements Store {
return {
totalHits: result.totalHits || 1,
- resetTime: result.resetTime || new Date(Date.now() + this.windowMs)
+ resetTime:
+ result.resetTime || new Date(Date.now() + this.windowMs)
};
} catch (error) {
logger.error(`RedisStore increment error for key ${key}:`, error);
@@ -158,7 +161,9 @@ export default class RedisStore implements Store {
*/
async resetAll(): Promise<void> {
try {
- logger.warn('RedisStore resetAll called - this operation can be expensive');
+ logger.warn(
+ "RedisStore resetAll called - this operation can be expensive"
+ );
// Force sync all pending data first
await rateLimitService.forceSyncAllPendingData();
@@ -167,9 +172,9 @@ export default class RedisStore implements Store {
// scanning all Redis keys with our prefix, which could be expensive.
// In production, it's better to let entries expire naturally.
- logger.info('RedisStore resetAll completed (pending data synced)');
+ logger.info("RedisStore resetAll completed (pending data synced)");
} catch (error) {
- logger.error('RedisStore resetAll error:', error);
+ logger.error("RedisStore resetAll error:", error);
// Don't throw - this is an optional method
}
}
@@ -181,7 +186,9 @@ export default class RedisStore implements Store {
* @param key - The identifier for a client.
* @returns Current hit count and reset time, or null if no data exists.
*/
- async getHits(key: string): Promise<{ totalHits: number; resetTime: Date } | null> {
+ async getHits(
+ key: string
+ ): Promise<{ totalHits: number; resetTime: Date } | null> {
try {
const clientId = `${this.prefix}:${key}`;
@@ -200,7 +207,8 @@ export default class RedisStore implements Store {
return {
totalHits: Math.max(0, (result.totalHits || 0) - 1), // Adjust for the decrement
- resetTime: result.resetTime || new Date(Date.now() + this.windowMs)
+ resetTime:
+ result.resetTime || new Date(Date.now() + this.windowMs)
};
} catch (error) {
logger.error(`RedisStore getHits error for key ${key}:`, error);
@@ -215,9 +223,9 @@ export default class RedisStore implements Store {
async shutdown(): Promise<void> {
try {
// The rateLimitService handles its own cleanup
- logger.info('RedisStore shutdown completed');
+ logger.info("RedisStore shutdown completed");
} catch (error) {
- logger.error('RedisStore shutdown error:', error);
+ logger.error("RedisStore shutdown error:", error);
}
}
}
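For contrast with the Redis-backed store above, here is a minimal in-memory store following the same express-rate-limit `Store` shape (`init`/`increment`/`decrement`/`resetKey`). This is an illustrative sketch under that assumed contract, not code from the diff:

```typescript
// Minimal in-memory sketch of the express-rate-limit Store contract that
// RedisStore implements. `windowMs` is injected via init() by the
// middleware; increment() returns the hit count and window reset time.
interface HitRecord {
    totalHits: number;
    resetTime: Date;
}

class MemoryStore {
    private windowMs = 60_000;
    private hits = new Map<string, HitRecord>();

    init(options: { windowMs: number }): void {
        this.windowMs = options.windowMs;
    }

    async increment(key: string): Promise<HitRecord> {
        const now = Date.now();
        const existing = this.hits.get(key);
        // Reuse the window while it is still open; otherwise start fresh
        if (existing && existing.resetTime.getTime() > now) {
            existing.totalHits++;
            return existing;
        }
        const fresh = {
            totalHits: 1,
            resetTime: new Date(now + this.windowMs)
        };
        this.hits.set(key, fresh);
        return fresh;
    }

    async decrement(key: string): Promise<void> {
        const record = this.hits.get(key);
        if (record && record.totalHits > 0) record.totalHits--;
    }

    async resetKey(key: string): Promise<void> {
        this.hits.delete(key);
    }
}
```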
diff --git a/server/private/lib/resend.ts b/server/private/lib/resend.ts
index 17384ea3..42a11c15 100644
--- a/server/private/lib/resend.ts
+++ b/server/private/lib/resend.ts
@@ -16,10 +16,10 @@ import privateConfig from "#private/lib/config";
import logger from "@server/logger";
export enum AudienceIds {
- SignUps = "6c4e77b2-0851-4bd6-bac8-f51f91360f1a",
- Subscribed = "870b43fd-387f-44de-8fc1-707335f30b20",
- Churned = "f3ae92bd-2fdb-4d77-8746-2118afd62549",
- Newsletter = "5500c431-191c-42f0-a5d4-8b6d445b4ea0"
+ SignUps = "6c4e77b2-0851-4bd6-bac8-f51f91360f1a",
+ Subscribed = "870b43fd-387f-44de-8fc1-707335f30b20",
+ Churned = "f3ae92bd-2fdb-4d77-8746-2118afd62549",
+ Newsletter = "5500c431-191c-42f0-a5d4-8b6d445b4ea0"
}
const resend = new Resend(
@@ -33,7 +33,9 @@ export async function moveEmailToAudience(
audienceId: AudienceIds
) {
if (process.env.ENVIRONMENT !== "prod") {
- logger.debug(`Skipping moving email ${email} to audience ${audienceId} in non-prod environment`);
+ logger.debug(
+ `Skipping moving email ${email} to audience ${audienceId} in non-prod environment`
+ );
return;
}
const { error, data } = await retryWithBackoff(async () => {
diff --git a/server/private/lib/traefik/getTraefikConfig.ts b/server/private/lib/traefik/getTraefikConfig.ts
index 8060ccad..82568216 100644
--- a/server/private/lib/traefik/getTraefikConfig.ts
+++ b/server/private/lib/traefik/getTraefikConfig.ts
@@ -823,7 +823,7 @@ export async function getTraefikConfig(
(cert) => cert.queriedDomain === lp.fullDomain
);
if (!matchingCert) {
- logger.warn(
+ logger.debug(
`No matching certificate found for login page domain: ${lp.fullDomain}`
);
continue;
diff --git a/server/private/lib/traefik/index.ts b/server/private/lib/traefik/index.ts
index 30d83181..5f2c2635 100644
--- a/server/private/lib/traefik/index.ts
+++ b/server/private/lib/traefik/index.ts
@@ -11,4 +11,4 @@
* This file is not licensed under the AGPLv3.
*/
-export * from "./getTraefikConfig";
\ No newline at end of file
+export * from "./getTraefikConfig";
diff --git a/server/private/license/license.ts b/server/private/license/license.ts
index 809f5ca9..f8f774c6 100644
--- a/server/private/license/license.ts
+++ b/server/private/license/license.ts
@@ -64,11 +64,14 @@ export class License {
private validationServerUrl = `${this.serverBaseUrl}/api/v1/license/enterprise/validate`;
private activationServerUrl = `${this.serverBaseUrl}/api/v1/license/enterprise/activate`;
- private statusCache = new NodeCache({ stdTTL: this.phoneHomeInterval });
+ private statusCache = new NodeCache();
private licenseKeyCache = new NodeCache();
private statusKey = "status";
private serverSecret!: string;
+ private phoneHomeFailureCount = 0;
+ private checkInProgress = false;
+ private doRecheck = false;
private publicKey = `-----BEGIN PUBLIC KEY-----
MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAx9RKc8cw+G8r7h/xeozF
@@ -81,12 +84,11 @@ LQIDAQAB
-----END PUBLIC KEY-----`;
constructor(private hostMeta: HostMeta) {
- setInterval(
- async () => {
- await this.check();
- },
- 1000 * 60 * 60
- );
+ setInterval(async () => {
+ this.doRecheck = true;
+ await this.check();
+ this.doRecheck = false;
+ }, 1000 * this.phoneHomeInterval);
}
public listKeys(): LicenseKeyCache[] {
@@ -103,6 +105,7 @@ LQIDAQAB
public async forceRecheck() {
this.statusCache.flushAll();
this.licenseKeyCache.flushAll();
+ this.phoneHomeFailureCount = 0;
return await this.check();
}
@@ -118,24 +121,49 @@ LQIDAQAB
}
public async check(): Promise<LicenseStatus> {
+ // If a check is already in progress, return the last known status
+ if (this.checkInProgress) {
+ logger.debug(
+ "License check already in progress, returning last known status"
+ );
+ const lastStatus = this.statusCache.get(this.statusKey) as
+ | LicenseStatus
+ | undefined;
+ if (lastStatus) {
+ return lastStatus;
+ }
+ // If no cached status exists, return default status
+ return {
+ hostId: this.hostMeta.hostMetaId,
+ isHostLicensed: true,
+ isLicenseValid: false
+ };
+ }
+
const status: LicenseStatus = {
hostId: this.hostMeta.hostMetaId,
isHostLicensed: true,
isLicenseValid: false
};
+ this.checkInProgress = true;
+
try {
- if (this.statusCache.has(this.statusKey)) {
+ if (!this.doRecheck && this.statusCache.has(this.statusKey)) {
const res = this.statusCache.get("status") as LicenseStatus;
return res;
}
- // Invalidate all
- this.licenseKeyCache.flushAll();
+ logger.debug("Checking license status...");
+ // Build new cache in temporary Map before invalidating old cache
+ const newCache = new Map<string, LicenseKeyCache>();
const allKeysRes = await db.select().from(licenseKey);
if (allKeysRes.length === 0) {
status.isHostLicensed = false;
+ // Invalidate all and set new cache (empty)
+ this.licenseKeyCache.flushAll();
+ this.statusCache.set(this.statusKey, status);
return status;
}
@@ -158,7 +186,7 @@ LQIDAQAB
this.publicKey
);
- this.licenseKeyCache.set(decryptedKey, {
+ newCache.set(decryptedKey, {
licenseKey: decryptedKey,
licenseKeyEncrypted: key.licenseKeyId,
valid: payload.valid,
@@ -177,14 +205,11 @@ LQIDAQAB
);
logger.error(e);
- this.licenseKeyCache.set(
- key.licenseKeyId,
- {
- licenseKey: key.licenseKeyId,
- licenseKeyEncrypted: key.licenseKeyId,
- valid: false
- }
- );
+ newCache.set(key.licenseKeyId, {
+ licenseKey: key.licenseKeyId,
+ licenseKeyEncrypted: key.licenseKeyId,
+ valid: false
+ });
}
}
@@ -206,17 +231,31 @@ LQIDAQAB
if (!apiResponse?.success) {
throw new Error(apiResponse?.error);
}
+ // Reset failure count on success
+ this.phoneHomeFailureCount = 0;
} catch (e) {
- logger.error("Error communicating with license server:");
- logger.error(e);
+ this.phoneHomeFailureCount++;
+ if (this.phoneHomeFailureCount === 1) {
+ // First failure: fail soft; log and return last known status
+ logger.error("Error communicating with license server:");
+ logger.error(e);
+ logger.error(
+ `Allowing failure. Will retry one more time at next run interval.`
+ );
+ // return last known good status
+ return this.statusCache.get(
+ this.statusKey
+ ) as LicenseStatus;
+ } else {
+ // Subsequent failures: fail abruptly
+ throw e;
+ }
}
// Check and update all license keys with server response
for (const key of keys) {
try {
- const cached = this.licenseKeyCache.get(
- key.licenseKey
- )!;
+ const cached = newCache.get(key.licenseKey)!;
const licenseKeyRes =
apiResponse?.data?.licenseKeys[key.licenseKey];
@@ -240,10 +279,7 @@ LQIDAQAB
`Can't trust license key: ${key.licenseKey}`
);
cached.valid = false;
- this.licenseKeyCache.set(
- key.licenseKey,
- cached
- );
+ newCache.set(key.licenseKey, cached);
continue;
}
@@ -274,10 +310,7 @@ LQIDAQAB
})
.where(eq(licenseKey.licenseKeyId, encryptedKey));
- this.licenseKeyCache.set(
- key.licenseKey,
- cached
- );
+ newCache.set(key.licenseKey, cached);
} catch (e) {
logger.error(`Error validating license key: ${key}`);
logger.error(e);
@@ -286,9 +319,7 @@ LQIDAQAB
// Compute host status
for (const key of keys) {
- const cached = this.licenseKeyCache.get(
- key.licenseKey
- )!;
+ const cached = newCache.get(key.licenseKey)!;
if (cached.type === "host") {
status.isLicenseValid = cached.valid;
@@ -299,9 +330,17 @@ LQIDAQAB
continue;
}
}
+
+ // Invalidate old cache and set new cache
+ this.licenseKeyCache.flushAll();
+ for (const [key, value] of newCache.entries()) {
+ this.licenseKeyCache.set(key, value);
+ }
} catch (error) {
logger.error("Error checking license status:");
logger.error(error);
+ } finally {
+ this.checkInProgress = false;
}
this.statusCache.set(this.statusKey, status);
@@ -430,20 +469,58 @@ LQIDAQAB
: key.instanceId
}));
- const response = await fetch(this.validationServerUrl, {
- method: "POST",
- headers: {
- "Content-Type": "application/json"
- },
- body: JSON.stringify({
- licenseKeys: decryptedKeys,
- instanceName: this.hostMeta.hostMetaId
- })
- });
+ const maxAttempts = 10;
+ const initialRetryDelay = 1 * 1000; // 1 second
+ const exponentialFactor = 1.2;
- const data = await response.json();
+ let lastError: Error | undefined;
- return data as ValidateLicenseAPIResponse;
+ for (let attempt = 1; attempt <= maxAttempts; attempt++) {
+ try {
+ const response = await fetch(this.validationServerUrl, {
+ method: "POST",
+ headers: {
+ "Content-Type": "application/json"
+ },
+ body: JSON.stringify({
+ licenseKeys: decryptedKeys,
+ instanceName: this.hostMeta.hostMetaId
+ })
+ });
+
+ if (!response.ok) {
+ throw new Error(`HTTP error! status: ${response.status}`);
+ }
+
+ const data = await response.json();
+ return data as ValidateLicenseAPIResponse;
+ } catch (error) {
+ lastError =
+ error instanceof Error ? error : new Error(String(error));
+
+ if (attempt < maxAttempts) {
+ // Calculate exponential backoff delay
+ const retryDelay = Math.floor(
+ initialRetryDelay *
+ Math.pow(exponentialFactor, attempt - 1)
+ );
+
+ logger.debug(
+ `License validation request failed (attempt ${attempt}/${maxAttempts}), retrying in ${retryDelay} ms...`
+ );
+ await new Promise((resolve) =>
+ setTimeout(resolve, retryDelay)
+ );
+ } else {
+ logger.error(
+ `License validation request failed after ${maxAttempts} attempts`
+ );
+ throw lastError;
+ }
+ }
+ }
+
+ throw lastError || new Error("License validation request failed");
}
}
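The retry loop added to the validation request above is a standard exponential-backoff pattern: the delay before attempt *n* is `initialDelay * factor^(n-1)`, and the last error is rethrown once attempts are exhausted. A standalone sketch (the `retryWithExponentialBackoff` helper name is hypothetical and distinct from the repo's own `retryWithBackoff` used in resend.ts):

```typescript
// Illustrative generic form of the inline retry loop above. Not from the
// diff; parameter defaults mirror the constants used there (10 attempts,
// 1 s initial delay, growth factor 1.2).
async function retryWithExponentialBackoff<T>(
    operation: () => Promise<T>,
    maxAttempts = 10,
    initialDelayMs = 1000,
    factor = 1.2
): Promise<T> {
    let lastError: Error | undefined;
    for (let attempt = 1; attempt <= maxAttempts; attempt++) {
        try {
            return await operation();
        } catch (error) {
            lastError =
                error instanceof Error ? error : new Error(String(error));
            if (attempt < maxAttempts) {
                // Delay grows geometrically with each failed attempt
                const delayMs = Math.floor(
                    initialDelayMs * Math.pow(factor, attempt - 1)
                );
                await new Promise((resolve) => setTimeout(resolve, delayMs));
            }
        }
    }
    throw lastError ?? new Error("operation failed");
}
```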
diff --git a/server/private/license/licenseJwt.ts b/server/private/license/licenseJwt.ts
index f137db30..eb27b78f 100644
--- a/server/private/license/licenseJwt.ts
+++ b/server/private/license/licenseJwt.ts
@@ -19,10 +19,7 @@ import * as crypto from "crypto";
* @param publicKey - The public key used for verification (PEM format)
* @returns The decoded payload if validation succeeds, throws an error otherwise
*/
-function validateJWT(
- token: string,
- publicKey: string
-): Payload {
+function validateJWT(token: string, publicKey: string): Payload {
// Split the JWT into its three parts
const parts = token.split(".");
if (parts.length !== 3) {
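The manual verification that `validateJWT` performs can be sketched end to end with Node's `crypto` module. This is an illustrative sketch, not the diff's implementation; the `verifyRs256Jwt` name is hypothetical and claim checks are omitted:

```typescript
import * as crypto from "crypto";

// Sketch of manual RS256 JWT verification in the spirit of validateJWT:
// split the token into its three parts, verify the signature over
// "header.payload" with the PEM public key, then decode the payload.
function verifyRs256Jwt(token: string, publicKeyPem: string): unknown {
    const parts = token.split(".");
    if (parts.length !== 3) {
        throw new Error("Invalid JWT format");
    }
    const [header, payload, signature] = parts;
    const ok = crypto.verify(
        "sha256",
        Buffer.from(`${header}.${payload}`),
        publicKeyPem,
        Buffer.from(signature, "base64url")
    );
    if (!ok) {
        throw new Error("Invalid JWT signature");
    }
    return JSON.parse(Buffer.from(payload, "base64url").toString("utf8"));
}
```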
diff --git a/server/private/middlewares/logActionAudit.ts b/server/private/middlewares/logActionAudit.ts
index c89a8896..17cc67c0 100644
--- a/server/private/middlewares/logActionAudit.ts
+++ b/server/private/middlewares/logActionAudit.ts
@@ -41,7 +41,11 @@ async function getActionDays(orgId: string): Promise<number> {
}
// store the result in cache
- cache.set(`org_${orgId}_actionDays`, org.settingsLogRetentionDaysAction, 300);
+ cache.set(
+ `org_${orgId}_actionDays`,
+ org.settingsLogRetentionDaysAction,
+ 300
+ );
return org.settingsLogRetentionDaysAction;
}
@@ -141,4 +145,3 @@ export function logActionAudit(action: ActionsEnum) {
}
};
}
-
diff --git a/server/private/middlewares/verifyCertificateAccess.ts b/server/private/middlewares/verifyCertificateAccess.ts
index 1708215e..dcc57dca 100644
--- a/server/private/middlewares/verifyCertificateAccess.ts
+++ b/server/private/middlewares/verifyCertificateAccess.ts
@@ -28,7 +28,8 @@ export async function verifyCertificateAccess(
try {
// Assume user/org access is already verified
const orgId = req.params.orgId;
- const certId = req.params.certId || req.body?.certId || req.query?.certId;
+ const certId =
+ req.params.certId || req.body?.certId || req.query?.certId;
let domainId =
req.params.domainId || req.body?.domainId || req.query?.domainId;
@@ -39,10 +40,12 @@ export async function verifyCertificateAccess(
}
if (!domainId) {
-
if (!certId) {
return next(
- createHttpError(HttpCode.BAD_REQUEST, "Must provide either certId or domainId")
+ createHttpError(
+ HttpCode.BAD_REQUEST,
+ "Must provide either certId or domainId"
+ )
);
}
@@ -75,7 +78,10 @@ export async function verifyCertificateAccess(
if (!domainId) {
return next(
- createHttpError(HttpCode.BAD_REQUEST, "Must provide either certId or domainId")
+ createHttpError(
+ HttpCode.BAD_REQUEST,
+ "Must provide either certId or domainId"
+ )
);
}
diff --git a/server/private/middlewares/verifyIdpAccess.ts b/server/private/middlewares/verifyIdpAccess.ts
index 87397a3d..41095684 100644
--- a/server/private/middlewares/verifyIdpAccess.ts
+++ b/server/private/middlewares/verifyIdpAccess.ts
@@ -24,8 +24,7 @@ export async function verifyIdpAccess(
) {
try {
const userId = req.user!.userId;
- const idpId =
- req.params.idpId || req.body.idpId || req.query.idpId;
+ const idpId = req.params.idpId || req.body.idpId || req.query.idpId;
const orgId = req.params.orgId;
if (!userId) {
@@ -50,9 +49,7 @@ export async function verifyIdpAccess(
.select()
.from(idp)
.innerJoin(idpOrg, eq(idp.idpId, idpOrg.idpId))
- .where(
- and(eq(idp.idpId, idpId), eq(idpOrg.orgId, orgId))
- )
+ .where(and(eq(idp.idpId, idpId), eq(idpOrg.orgId, orgId)))
.limit(1);
if (!idpRes || !idpRes.idp || !idpRes.idpOrg) {
diff --git a/server/private/middlewares/verifyRemoteExitNode.ts b/server/private/middlewares/verifyRemoteExitNode.ts
index 2f6d99d2..8abdc47e 100644
--- a/server/private/middlewares/verifyRemoteExitNode.ts
+++ b/server/private/middlewares/verifyRemoteExitNode.ts
@@ -26,7 +26,8 @@ export const verifySessionRemoteExitNodeMiddleware = async (
// get the token from the auth header
const token = req.headers["authorization"]?.split(" ")[1] || "";
- const { session, remoteExitNode } = await validateRemoteExitNodeSessionToken(token);
+ const { session, remoteExitNode } =
+ await validateRemoteExitNodeSessionToken(token);
if (!session || !remoteExitNode) {
if (config.getRawConfig().app.log_failed_attempts) {
diff --git a/server/private/routers/auditLogs/exportAccessAuditLog.ts b/server/private/routers/auditLogs/exportAccessAuditLog.ts
index 89aef6cb..7e912f8c 100644
--- a/server/private/routers/auditLogs/exportAccessAuditLog.ts
+++ b/server/private/routers/auditLogs/exportAccessAuditLog.ts
@@ -19,8 +19,14 @@ import createHttpError from "http-errors";
import HttpCode from "@server/types/HttpCode";
import { fromError } from "zod-validation-error";
import logger from "@server/logger";
-import { queryAccessAuditLogsParams, queryAccessAuditLogsQuery, queryAccess } from "./queryAccessAuditLog";
+import {
+ queryAccessAuditLogsParams,
+ queryAccessAuditLogsQuery,
+ queryAccess,
+ countAccessQuery
+} from "./queryAccessAuditLog";
import { generateCSV } from "@server/routers/auditLogs/generateCSV";
+import { MAX_EXPORT_LIMIT } from "@server/routers/auditLogs";
registry.registerPath({
method: "get",
@@ -61,16 +67,28 @@ export async function exportAccessAuditLogs(
}
const data = { ...parsedQuery.data, ...parsedParams.data };
+ const [{ count }] = await countAccessQuery(data);
+ if (count > MAX_EXPORT_LIMIT) {
+ return next(
+ createHttpError(
+ HttpCode.BAD_REQUEST,
+ `Export limit exceeded. Your selection contains ${count} rows, but the maximum is ${MAX_EXPORT_LIMIT} rows. Please select a shorter time range to reduce the data.`
+ )
+ );
+ }
const baseQuery = queryAccess(data);
const log = await baseQuery.limit(data.limit).offset(data.offset);
const csvData = generateCSV(log);
-
- res.setHeader('Content-Type', 'text/csv');
- res.setHeader('Content-Disposition', `attachment; filename="access-audit-logs-${data.orgId}-${Date.now()}.csv"`);
-
+
+ res.setHeader("Content-Type", "text/csv");
+ res.setHeader(
+ "Content-Disposition",
+ `attachment; filename="access-audit-logs-${data.orgId}-${Date.now()}.csv"`
+ );
+
return res.send(csvData);
} catch (error) {
logger.error(error);
@@ -78,4 +96,4 @@ export async function exportAccessAuditLogs(
createHttpError(HttpCode.INTERNAL_SERVER_ERROR, "An error occurred")
);
}
-}
\ No newline at end of file
+}
diff --git a/server/private/routers/auditLogs/exportActionAuditLog.ts b/server/private/routers/auditLogs/exportActionAuditLog.ts
index 12c9ff8b..d8987916 100644
--- a/server/private/routers/auditLogs/exportActionAuditLog.ts
+++ b/server/private/routers/auditLogs/exportActionAuditLog.ts
@@ -19,8 +19,14 @@ import createHttpError from "http-errors";
import HttpCode from "@server/types/HttpCode";
import { fromError } from "zod-validation-error";
import logger from "@server/logger";
-import { queryActionAuditLogsParams, queryActionAuditLogsQuery, queryAction } from "./queryActionAuditLog";
+import {
+ queryActionAuditLogsParams,
+ queryActionAuditLogsQuery,
+ queryAction,
+ countActionQuery
+} from "./queryActionAuditLog";
import { generateCSV } from "@server/routers/auditLogs/generateCSV";
+import { MAX_EXPORT_LIMIT } from "@server/routers/auditLogs";
registry.registerPath({
method: "get",
@@ -60,17 +66,29 @@ export async function exportActionAuditLogs(
);
}
- const data = { ...parsedQuery.data, ...parsedParams.data };
+ const data = { ...parsedQuery.data, ...parsedParams.data };
+ const [{ count }] = await countActionQuery(data);
+ if (count > MAX_EXPORT_LIMIT) {
+ return next(
+ createHttpError(
+ HttpCode.BAD_REQUEST,
+ `Export limit exceeded. Your selection contains ${count} rows, but the maximum is ${MAX_EXPORT_LIMIT} rows. Please select a shorter time range to reduce the data.`
+ )
+ );
+ }
const baseQuery = queryAction(data);
const log = await baseQuery.limit(data.limit).offset(data.offset);
const csvData = generateCSV(log);
-
- res.setHeader('Content-Type', 'text/csv');
- res.setHeader('Content-Disposition', `attachment; filename="action-audit-logs-${data.orgId}-${Date.now()}.csv"`);
-
+
+ res.setHeader("Content-Type", "text/csv");
+ res.setHeader(
+ "Content-Disposition",
+ `attachment; filename="action-audit-logs-${data.orgId}-${Date.now()}.csv"`
+ );
+
return res.send(csvData);
} catch (error) {
logger.error(error);
@@ -78,4 +96,4 @@ export async function exportActionAuditLogs(
createHttpError(HttpCode.INTERNAL_SERVER_ERROR, "An error occurred")
);
}
-}
\ No newline at end of file
+}
diff --git a/server/private/routers/auditLogs/index.ts b/server/private/routers/auditLogs/index.ts
index ac623c4c..e1849a61 100644
--- a/server/private/routers/auditLogs/index.ts
+++ b/server/private/routers/auditLogs/index.ts
@@ -14,4 +14,4 @@
export * from "./queryActionAuditLog";
export * from "./exportActionAuditLog";
export * from "./queryAccessAuditLog";
-export * from "./exportAccessAuditLog";
\ No newline at end of file
+export * from "./exportAccessAuditLog";
diff --git a/server/private/routers/auditLogs/queryAccessAuditLog.ts b/server/private/routers/auditLogs/queryAccessAuditLog.ts
index 769dcf55..eb0cae5d 100644
--- a/server/private/routers/auditLogs/queryAccessAuditLog.ts
+++ b/server/private/routers/auditLogs/queryAccessAuditLog.ts
@@ -24,6 +24,7 @@ import { fromError } from "zod-validation-error";
import { QueryAccessAuditLogResponse } from "@server/routers/auditLogs/types";
import response from "@server/lib/response";
import logger from "@server/logger";
+import { getSevenDaysAgo } from "@app/lib/getSevenDaysAgo";
export const queryAccessAuditLogsQuery = z.object({
// iso string just validate its a parseable date
@@ -32,7 +33,14 @@ export const queryAccessAuditLogsQuery = z.object({
.refine((val) => !isNaN(Date.parse(val)), {
error: "timeStart must be a valid ISO date string"
})
- .transform((val) => Math.floor(new Date(val).getTime() / 1000)),
+ .transform((val) => Math.floor(new Date(val).getTime() / 1000))
+ .prefault(() => getSevenDaysAgo().toISOString())
+ .openapi({
+ type: "string",
+ format: "date-time",
+ description:
+ "Start time as ISO date string (defaults to 7 days ago)"
+ }),
timeEnd: z
.string()
.refine((val) => !isNaN(Date.parse(val)), {
@@ -44,7 +52,8 @@ export const queryAccessAuditLogsQuery = z.object({
.openapi({
type: "string",
format: "date-time",
- description: "End time as ISO date string (defaults to current time)"
+ description:
+ "End time as ISO date string (defaults to current time)"
}),
action: z
.union([z.boolean(), z.string()])
@@ -181,9 +190,15 @@ async function queryUniqueFilterAttributes(
.where(baseConditions);
return {
- actors: uniqueActors.map(row => row.actor).filter((actor): actor is string => actor !== null),
- resources: uniqueResources.filter((row): row is { id: number; name: string | null } => row.id !== null),
- locations: uniqueLocations.map(row => row.locations).filter((location): location is string => location !== null)
+ actors: uniqueActors
+ .map((row) => row.actor)
+ .filter((actor): actor is string => actor !== null),
+ resources: uniqueResources.filter(
+ (row): row is { id: number; name: string | null } => row.id !== null
+ ),
+ locations: uniqueLocations
+ .map((row) => row.locations)
+ .filter((location): location is string => location !== null)
};
}
diff --git a/server/private/routers/auditLogs/queryActionAuditLog.ts b/server/private/routers/auditLogs/queryActionAuditLog.ts
index d4a43879..518eb982 100644
--- a/server/private/routers/auditLogs/queryActionAuditLog.ts
+++ b/server/private/routers/auditLogs/queryActionAuditLog.ts
@@ -24,6 +24,7 @@ import { fromError } from "zod-validation-error";
import { QueryActionAuditLogResponse } from "@server/routers/auditLogs/types";
import response from "@server/lib/response";
import logger from "@server/logger";
+import { getSevenDaysAgo } from "@app/lib/getSevenDaysAgo";
export const queryActionAuditLogsQuery = z.object({
// iso string just validate its a parseable date
@@ -32,7 +33,14 @@ export const queryActionAuditLogsQuery = z.object({
.refine((val) => !isNaN(Date.parse(val)), {
error: "timeStart must be a valid ISO date string"
})
- .transform((val) => Math.floor(new Date(val).getTime() / 1000)),
+ .transform((val) => Math.floor(new Date(val).getTime() / 1000))
+ .prefault(() => getSevenDaysAgo().toISOString())
+ .openapi({
+ type: "string",
+ format: "date-time",
+ description:
+ "Start time as ISO date string (defaults to 7 days ago)"
+ }),
timeEnd: z
.string()
.refine((val) => !isNaN(Date.parse(val)), {
@@ -44,7 +52,8 @@ export const queryActionAuditLogsQuery = z.object({
.openapi({
type: "string",
format: "date-time",
- description: "End time as ISO date string (defaults to current time)"
+ description:
+ "End time as ISO date string (defaults to current time)"
}),
action: z.string().optional(),
actorType: z.string().optional(),
@@ -68,8 +77,9 @@ export const queryActionAuditLogsParams = z.object({
orgId: z.string()
});
-export const queryActionAuditLogsCombined =
- queryActionAuditLogsQuery.merge(queryActionAuditLogsParams);
+export const queryActionAuditLogsCombined = queryActionAuditLogsQuery.merge(
+ queryActionAuditLogsParams
+);
type Q = z.infer<typeof queryActionAuditLogsCombined>;
function getWhere(data: Q) {
@@ -78,7 +88,9 @@ function getWhere(data: Q) {
lt(actionAuditLog.timestamp, data.timeEnd),
eq(actionAuditLog.orgId, data.orgId),
data.actor ? eq(actionAuditLog.actor, data.actor) : undefined,
- data.actorType ? eq(actionAuditLog.actorType, data.actorType) : undefined,
+ data.actorType
+ ? eq(actionAuditLog.actorType, data.actorType)
+ : undefined,
data.actorId ? eq(actionAuditLog.actorId, data.actorId) : undefined,
data.action ? eq(actionAuditLog.action, data.action) : undefined
);
@@ -135,8 +147,12 @@ async function queryUniqueFilterAttributes(
.where(baseConditions);
return {
- actors: uniqueActors.map(row => row.actor).filter((actor): actor is string => actor !== null),
- actions: uniqueActions.map(row => row.action).filter((action): action is string => action !== null),
+ actors: uniqueActors
+ .map((row) => row.actor)
+ .filter((actor): actor is string => actor !== null),
+ actions: uniqueActions
+ .map((row) => row.action)
+ .filter((action): action is string => action !== null)
};
}
diff --git a/server/private/routers/auth/index.ts b/server/private/routers/auth/index.ts
index 39a60031..535d5887 100644
--- a/server/private/routers/auth/index.ts
+++ b/server/private/routers/auth/index.ts
@@ -13,4 +13,4 @@
export * from "./transferSession";
export * from "./getSessionTransferToken";
-export * from "./quickStart";
\ No newline at end of file
+export * from "./quickStart";
diff --git a/server/private/routers/auth/quickStart.ts b/server/private/routers/auth/quickStart.ts
index 02023a0b..612a3951 100644
--- a/server/private/routers/auth/quickStart.ts
+++ b/server/private/routers/auth/quickStart.ts
@@ -395,7 +395,8 @@ export async function quickStart(
.values({
targetId: newTarget[0].targetId,
hcEnabled: false
- }).returning();
+ })
+ .returning();
// add the new target to the targetIps array
targetIps.push(`${ip}/32`);
@@ -406,7 +407,12 @@ export async function quickStart(
.where(eq(newts.siteId, siteId!))
.limit(1);
- await addTargets(newt.newtId, newTarget, newHealthcheck, resource.protocol);
+ await addTargets(
+ newt.newtId,
+ newTarget,
+ newHealthcheck,
+ resource.protocol
+ );
// Set resource pincode if provided
if (pincode) {
diff --git a/server/private/routers/billing/createCheckoutSession.ts b/server/private/routers/billing/createCheckoutSession.ts
index e0e08a20..a2d8080f 100644
--- a/server/private/routers/billing/createCheckoutSession.ts
+++ b/server/private/routers/billing/createCheckoutSession.ts
@@ -26,8 +26,8 @@ import { getLineItems, getStandardFeaturePriceSet } from "@server/lib/billing";
import { getTierPriceSet, TierId } from "@server/lib/billing/tiers";
const createCheckoutSessionSchema = z.strictObject({
- orgId: z.string()
- });
+ orgId: z.string()
+});
export async function createCheckoutSession(
req: Request,
@@ -72,7 +72,7 @@ export async function createCheckoutSession(
billing_address_collection: "required",
line_items: [
{
- price: standardTierPrice, // Use the standard tier
+ price: standardTierPrice, // Use the standard tier
quantity: 1
},
...getLineItems(getStandardFeaturePriceSet())
diff --git a/server/private/routers/billing/createPortalSession.ts b/server/private/routers/billing/createPortalSession.ts
index a3a2f04f..9ebe84e0 100644
--- a/server/private/routers/billing/createPortalSession.ts
+++ b/server/private/routers/billing/createPortalSession.ts
@@ -24,8 +24,8 @@ import { fromError } from "zod-validation-error";
import stripe from "#private/lib/stripe";
const createPortalSessionSchema = z.strictObject({
- orgId: z.string()
- });
+ orgId: z.string()
+});
export async function createPortalSession(
req: Request,
diff --git a/server/private/routers/billing/getOrgSubscription.ts b/server/private/routers/billing/getOrgSubscription.ts
index adc4ee04..e1f8316e 100644
--- a/server/private/routers/billing/getOrgSubscription.ts
+++ b/server/private/routers/billing/getOrgSubscription.ts
@@ -34,8 +34,8 @@ import {
} from "@server/db";
const getOrgSchema = z.strictObject({
- orgId: z.string()
- });
+ orgId: z.string()
+});
registry.registerPath({
method: "get",
diff --git a/server/private/routers/billing/getOrgUsage.ts b/server/private/routers/billing/getOrgUsage.ts
index 9e605cca..1a343730 100644
--- a/server/private/routers/billing/getOrgUsage.ts
+++ b/server/private/routers/billing/getOrgUsage.ts
@@ -28,8 +28,8 @@ import { FeatureId } from "@server/lib/billing";
import { GetOrgUsageResponse } from "@server/routers/billing/types";
const getOrgSchema = z.strictObject({
- orgId: z.string()
- });
+ orgId: z.string()
+});
registry.registerPath({
method: "get",
@@ -78,11 +78,23 @@ export async function getOrgUsage(
// Get usage for org
const usageData = [];
- const siteUptime = await usageService.getUsage(orgId, FeatureId.SITE_UPTIME);
+ const siteUptime = await usageService.getUsage(
+ orgId,
+ FeatureId.SITE_UPTIME
+ );
const users = await usageService.getUsageDaily(orgId, FeatureId.USERS);
- const domains = await usageService.getUsageDaily(orgId, FeatureId.DOMAINS);
- const remoteExitNodes = await usageService.getUsageDaily(orgId, FeatureId.REMOTE_EXIT_NODES);
- const egressData = await usageService.getUsage(orgId, FeatureId.EGRESS_DATA_MB);
+ const domains = await usageService.getUsageDaily(
+ orgId,
+ FeatureId.DOMAINS
+ );
+ const remoteExitNodes = await usageService.getUsageDaily(
+ orgId,
+ FeatureId.REMOTE_EXIT_NODES
+ );
+ const egressData = await usageService.getUsage(
+ orgId,
+ FeatureId.EGRESS_DATA_MB
+ );
if (siteUptime) {
usageData.push(siteUptime);
@@ -100,7 +112,8 @@ export async function getOrgUsage(
usageData.push(remoteExitNodes);
}
- const orgLimits = await db.select()
+ const orgLimits = await db
+ .select()
.from(limits)
.where(eq(limits.orgId, orgId));
diff --git a/server/private/routers/billing/hooks/handleCustomerDeleted.ts b/server/private/routers/billing/hooks/handleCustomerDeleted.ts
index aa2e6964..e4140353 100644
--- a/server/private/routers/billing/hooks/handleCustomerDeleted.ts
+++ b/server/private/routers/billing/hooks/handleCustomerDeleted.ts
@@ -31,9 +31,7 @@ export async function handleCustomerDeleted(
return;
}
- await db
- .delete(customers)
- .where(eq(customers.customerId, customer.id));
+ await db.delete(customers).where(eq(customers.customerId, customer.id));
} catch (error) {
logger.error(
`Error handling customer created event for ID ${customer.id}:`,
diff --git a/server/private/routers/billing/hooks/handleSubscriptionDeleted.ts b/server/private/routers/billing/hooks/handleSubscriptionDeleted.ts
index 114a4b30..7a7d9149 100644
--- a/server/private/routers/billing/hooks/handleSubscriptionDeleted.ts
+++ b/server/private/routers/billing/hooks/handleSubscriptionDeleted.ts
@@ -12,7 +12,14 @@
*/
import Stripe from "stripe";
-import { subscriptions, db, subscriptionItems, customers, userOrgs, users } from "@server/db";
+import {
+ subscriptions,
+ db,
+ subscriptionItems,
+ customers,
+ userOrgs,
+ users
+} from "@server/db";
import { eq, and } from "drizzle-orm";
import logger from "@server/logger";
import { handleSubscriptionLifesycle } from "../subscriptionLifecycle";
@@ -43,7 +50,6 @@ export async function handleSubscriptionDeleted(
.delete(subscriptionItems)
.where(eq(subscriptionItems.subscriptionId, subscription.id));
-
// Lookup customer to get orgId
const [customer] = await db
.select()
@@ -58,10 +64,7 @@ export async function handleSubscriptionDeleted(
return;
}
- await handleSubscriptionLifesycle(
- customer.orgId,
- subscription.status
- );
+ await handleSubscriptionLifesycle(customer.orgId, subscription.status);
const [orgUserRes] = await db
.select()
diff --git a/server/private/routers/billing/index.ts b/server/private/routers/billing/index.ts
index 913ae865..59fce8d6 100644
--- a/server/private/routers/billing/index.ts
+++ b/server/private/routers/billing/index.ts
@@ -15,4 +15,4 @@ export * from "./createCheckoutSession";
export * from "./createPortalSession";
export * from "./getOrgSubscription";
export * from "./getOrgUsage";
-export * from "./internalGetOrgTier";
\ No newline at end of file
+export * from "./internalGetOrgTier";
diff --git a/server/private/routers/billing/internalGetOrgTier.ts b/server/private/routers/billing/internalGetOrgTier.ts
index ec114cca..92bbc2ba 100644
--- a/server/private/routers/billing/internalGetOrgTier.ts
+++ b/server/private/routers/billing/internalGetOrgTier.ts
@@ -22,8 +22,8 @@ import { getOrgTierData } from "#private/lib/billing";
import { GetOrgTierResponse } from "@server/routers/billing/types";
const getOrgSchema = z.strictObject({
- orgId: z.string()
- });
+ orgId: z.string()
+});
export async function getOrgTier(
req: Request,
diff --git a/server/private/routers/billing/subscriptionLifecycle.ts b/server/private/routers/billing/subscriptionLifecycle.ts
index 06b2a2a8..0fc75835 100644
--- a/server/private/routers/billing/subscriptionLifecycle.ts
+++ b/server/private/routers/billing/subscriptionLifecycle.ts
@@ -11,11 +11,18 @@
* This file is not licensed under the AGPLv3.
*/
-import { freeLimitSet, limitsService, subscribedLimitSet } from "@server/lib/billing";
+import {
+ freeLimitSet,
+ limitsService,
+ subscribedLimitSet
+} from "@server/lib/billing";
import { usageService } from "@server/lib/billing/usageService";
import logger from "@server/logger";
-export async function handleSubscriptionLifesycle(orgId: string, status: string) {
+export async function handleSubscriptionLifesycle(
+ orgId: string,
+ status: string
+) {
switch (status) {
case "active":
await limitsService.applyLimitSetToOrg(orgId, subscribedLimitSet);
@@ -42,4 +49,4 @@ export async function handleSubscriptionLifesycle(orgId: string, status: string)
default:
break;
}
-}
\ No newline at end of file
+}
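The reformatted `handleSubscriptionLifesycle` above switches on the Stripe subscription status and applies a limit set per case (only the `"active"` branch is visible in this hunk). A standalone sketch of that shape — the non-`active` statuses and the `"free"` fallback here are assumptions for illustration, not taken from the truncated hunk:

```typescript
// Illustrative mapping from a Stripe subscription status to a named
// limit set. Returning null means "transient state, leave limits alone".
type LimitSet = "subscribed" | "free";

function limitSetForStatus(status: string): LimitSet | null {
    switch (status) {
        case "active":
            return "subscribed"; // matches the visible "active" branch
        case "canceled":
        case "unpaid":
            return "free"; // assumed fallback to free-tier limits
        default:
            return null; // matches the visible default: break
    }
}
```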
diff --git a/server/private/routers/billing/webhooks.ts b/server/private/routers/billing/webhooks.ts
index 24ad1074..9c64350c 100644
--- a/server/private/routers/billing/webhooks.ts
+++ b/server/private/routers/billing/webhooks.ts
@@ -32,12 +32,13 @@ export async function billingWebhookHandler(
next: NextFunction
    ): Promise<any> {
let event: Stripe.Event = req.body;
- const endpointSecret = privateConfig.getRawPrivateConfig().stripe?.webhook_secret;
+ const endpointSecret =
+ privateConfig.getRawPrivateConfig().stripe?.webhook_secret;
if (!endpointSecret) {
- logger.warn("Stripe webhook secret is not configured. Webhook events will not be priocessed.");
- return next(
- createHttpError(HttpCode.INTERNAL_SERVER_ERROR, "")
+ logger.warn(
+                "Stripe webhook secret is not configured. Webhook events will not be processed."
);
+ return next(createHttpError(HttpCode.INTERNAL_SERVER_ERROR, ""));
}
// Only verify the event if you have an endpoint secret defined.
@@ -49,7 +50,10 @@ export async function billingWebhookHandler(
if (!signature) {
logger.info("No stripe signature found in headers.");
return next(
- createHttpError(HttpCode.BAD_REQUEST, "No stripe signature found in headers")
+ createHttpError(
+ HttpCode.BAD_REQUEST,
+ "No stripe signature found in headers"
+ )
);
}
@@ -62,7 +66,10 @@ export async function billingWebhookHandler(
} catch (err) {
logger.error(`Webhook signature verification failed.`, err);
return next(
- createHttpError(HttpCode.UNAUTHORIZED, "Webhook signature verification failed")
+ createHttpError(
+ HttpCode.UNAUTHORIZED,
+ "Webhook signature verification failed"
+ )
);
}
}
diff --git a/server/private/routers/certificates/getCertificate.ts b/server/private/routers/certificates/getCertificate.ts
index 4ff8184e..d06a1bad 100644
--- a/server/private/routers/certificates/getCertificate.ts
+++ b/server/private/routers/certificates/getCertificate.ts
@@ -24,10 +24,10 @@ import { registry } from "@server/openApi";
import { GetCertificateResponse } from "@server/routers/certificates/types";
const getCertificateSchema = z.strictObject({
- domainId: z.string(),
- domain: z.string().min(1).max(255),
- orgId: z.string()
- });
+ domainId: z.string(),
+ domain: z.string().min(1).max(255),
+ orgId: z.string()
+});
async function query(domainId: string, domain: string) {
const [domainRecord] = await db
@@ -42,8 +42,8 @@ async function query(domainId: string, domain: string) {
let existing: any[] = [];
if (domainRecord.type == "ns") {
- const domainLevelDown = domain.split('.').slice(1).join('.');
-
+ const domainLevelDown = domain.split(".").slice(1).join(".");
+
existing = await db
.select({
certId: certificates.certId,
@@ -64,7 +64,7 @@ async function query(domainId: string, domain: string) {
eq(certificates.wildcard, true), // only NS domains can have wildcard certs
or(
eq(certificates.domain, domain),
- eq(certificates.domain, domainLevelDown),
+ eq(certificates.domain, domainLevelDown)
)
)
);
@@ -102,8 +102,7 @@ registry.registerPath({
tags: ["Certificate"],
request: {
params: z.object({
- domainId: z
- .string(),
+ domainId: z.string(),
domain: z.string().min(1).max(255),
orgId: z.string()
})
@@ -133,7 +132,9 @@ export async function getCertificate(
if (!cert) {
logger.warn(`Certificate not found for domain: ${domainId}`);
- return next(createHttpError(HttpCode.NOT_FOUND, "Certificate not found"));
+ return next(
+ createHttpError(HttpCode.NOT_FOUND, "Certificate not found")
+ );
}
return response(res, {
diff --git a/server/private/routers/certificates/index.ts b/server/private/routers/certificates/index.ts
index e1b81ae1..b1543e5d 100644
--- a/server/private/routers/certificates/index.ts
+++ b/server/private/routers/certificates/index.ts
@@ -12,4 +12,4 @@
*/
export * from "./getCertificate";
-export * from "./restartCertificate";
\ No newline at end of file
+export * from "./restartCertificate";
diff --git a/server/private/routers/certificates/restartCertificate.ts b/server/private/routers/certificates/restartCertificate.ts
index a6ee5460..0e4b1910 100644
--- a/server/private/routers/certificates/restartCertificate.ts
+++ b/server/private/routers/certificates/restartCertificate.ts
@@ -25,9 +25,9 @@ import { fromError } from "zod-validation-error";
import { OpenAPITags, registry } from "@server/openApi";
const restartCertificateParamsSchema = z.strictObject({
- certId: z.string().transform(stoi).pipe(z.int().positive()),
- orgId: z.string()
- });
+ certId: z.string().transform(stoi).pipe(z.int().positive()),
+ orgId: z.string()
+});
registry.registerPath({
method: "post",
@@ -36,10 +36,7 @@ registry.registerPath({
tags: ["Certificate"],
request: {
params: z.object({
- certId: z
- .string()
- .transform(stoi)
- .pipe(z.int().positive()),
+ certId: z.string().transform(stoi).pipe(z.int().positive()),
orgId: z.string()
})
},
@@ -94,7 +91,7 @@ export async function restartCertificate(
.set({
status: "pending",
errorMessage: null,
- lastRenewalAttempt: Math.floor(Date.now() / 1000)
+ lastRenewalAttempt: Math.floor(Date.now() / 1000)
})
.where(eq(certificates.certId, certId));
diff --git a/server/private/routers/domain/checkDomainNamespaceAvailability.ts b/server/private/routers/domain/checkDomainNamespaceAvailability.ts
index 6c9cb23c..db9a4b46 100644
--- a/server/private/routers/domain/checkDomainNamespaceAvailability.ts
+++ b/server/private/routers/domain/checkDomainNamespaceAvailability.ts
@@ -26,8 +26,8 @@ import { CheckDomainAvailabilityResponse } from "@server/routers/domain/types";
const paramsSchema = z.strictObject({});
const querySchema = z.strictObject({
- subdomain: z.string()
- });
+ subdomain: z.string()
+});
registry.registerPath({
method: "get",
diff --git a/server/private/routers/domain/index.ts b/server/private/routers/domain/index.ts
index da9cec3f..3f4bbbf2 100644
--- a/server/private/routers/domain/index.ts
+++ b/server/private/routers/domain/index.ts
@@ -12,4 +12,4 @@
*/
export * from "./checkDomainNamespaceAvailability";
-export * from "./listDomainNamespaces";
\ No newline at end of file
+export * from "./listDomainNamespaces";
diff --git a/server/private/routers/domain/listDomainNamespaces.ts b/server/private/routers/domain/listDomainNamespaces.ts
index 29d5d201..180613a8 100644
--- a/server/private/routers/domain/listDomainNamespaces.ts
+++ b/server/private/routers/domain/listDomainNamespaces.ts
@@ -26,19 +26,19 @@ import { OpenAPITags, registry } from "@server/openApi";
const paramsSchema = z.strictObject({});
const querySchema = z.strictObject({
- limit: z
- .string()
- .optional()
- .default("1000")
- .transform(Number)
- .pipe(z.int().nonnegative()),
- offset: z
- .string()
- .optional()
- .default("0")
- .transform(Number)
- .pipe(z.int().nonnegative())
- });
+ limit: z
+ .string()
+ .optional()
+ .default("1000")
+ .transform(Number)
+ .pipe(z.int().nonnegative()),
+ offset: z
+ .string()
+ .optional()
+ .default("0")
+ .transform(Number)
+ .pipe(z.int().nonnegative())
+});
async function query(limit: number, offset: number) {
const res = await db
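The `querySchema` being re-indented above encodes a recurring pattern in this patch: an optional query-string parameter with a default, coerced via `.transform(Number)` and validated with `.pipe(z.int().nonnegative())`. A dependency-free sketch of the equivalent semantics for one parameter (function name is hypothetical):

```typescript
// Mirror of the zod chain: optional string -> default -> Number coercion
// -> reject anything that is not a non-negative integer.
function parsePageParam(raw: string | undefined, fallback: string): number {
    const n = Number(raw ?? fallback);
    if (!Number.isInteger(n) || n < 0) {
        throw new Error(`expected a non-negative integer, got "${raw}"`);
    }
    return n;
}

const limit = parsePageParam(undefined, "1000"); // → 1000 (default applied)
const offset = parsePageParam("25", "0"); // → 25
```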
diff --git a/server/private/routers/gerbil/receiveBandwidth.ts b/server/private/routers/gerbil/receiveBandwidth.ts
deleted file mode 100644
index de0b2d2b..00000000
--- a/server/private/routers/gerbil/receiveBandwidth.ts
+++ /dev/null
@@ -1,13 +0,0 @@
-/*
- * This file is part of a proprietary work.
- *
- * Copyright (c) 2025 Fossorial, Inc.
- * All rights reserved.
- *
- * This file is licensed under the Fossorial Commercial License.
- * You may not use this file except in compliance with the License.
- * Unauthorized use, copying, modification, or distribution is strictly prohibited.
- *
- * This file is not licensed under the AGPLv3.
- */
-
diff --git a/server/private/routers/hybrid.ts b/server/private/routers/hybrid.ts
index a61f37b2..3accc500 100644
--- a/server/private/routers/hybrid.ts
+++ b/server/private/routers/hybrid.ts
@@ -79,86 +79,72 @@ import semver from "semver";
// Zod schemas for request validation
const getResourceByDomainParamsSchema = z.strictObject({
- domain: z.string().min(1, "Domain is required")
- });
+ domain: z.string().min(1, "Domain is required")
+});
const getUserSessionParamsSchema = z.strictObject({
- userSessionId: z.string().min(1, "User session ID is required")
- });
+ userSessionId: z.string().min(1, "User session ID is required")
+});
const getUserOrgRoleParamsSchema = z.strictObject({
- userId: z.string().min(1, "User ID is required"),
- orgId: z.string().min(1, "Organization ID is required")
- });
+ userId: z.string().min(1, "User ID is required"),
+ orgId: z.string().min(1, "Organization ID is required")
+});
const getRoleResourceAccessParamsSchema = z.strictObject({
- roleId: z
- .string()
- .transform(Number)
- .pipe(
- z.int().positive("Role ID must be a positive integer")
- ),
- resourceId: z
- .string()
- .transform(Number)
- .pipe(
- z.int()
- .positive("Resource ID must be a positive integer")
- )
- });
+ roleId: z
+ .string()
+ .transform(Number)
+ .pipe(z.int().positive("Role ID must be a positive integer")),
+ resourceId: z
+ .string()
+ .transform(Number)
+ .pipe(z.int().positive("Resource ID must be a positive integer"))
+});
const getUserResourceAccessParamsSchema = z.strictObject({
- userId: z.string().min(1, "User ID is required"),
- resourceId: z
- .string()
- .transform(Number)
- .pipe(
- z.int()
- .positive("Resource ID must be a positive integer")
- )
- });
+ userId: z.string().min(1, "User ID is required"),
+ resourceId: z
+ .string()
+ .transform(Number)
+ .pipe(z.int().positive("Resource ID must be a positive integer"))
+});
const getResourceRulesParamsSchema = z.strictObject({
- resourceId: z
- .string()
- .transform(Number)
- .pipe(
- z.int()
- .positive("Resource ID must be a positive integer")
- )
- });
+ resourceId: z
+ .string()
+ .transform(Number)
+ .pipe(z.int().positive("Resource ID must be a positive integer"))
+});
const validateResourceSessionTokenParamsSchema = z.strictObject({
- resourceId: z
- .string()
- .transform(Number)
- .pipe(
- z.int()
- .positive("Resource ID must be a positive integer")
- )
- });
+ resourceId: z
+ .string()
+ .transform(Number)
+ .pipe(z.int().positive("Resource ID must be a positive integer"))
+});
const validateResourceSessionTokenBodySchema = z.strictObject({
- token: z.string().min(1, "Token is required")
- });
+ token: z.string().min(1, "Token is required")
+});
const validateResourceAccessTokenBodySchema = z.strictObject({
- accessTokenId: z.string().optional(),
- resourceId: z.number().optional(),
- accessToken: z.string()
- });
+ accessTokenId: z.string().optional(),
+ resourceId: z.number().optional(),
+ accessToken: z.string()
+});
// Certificates by domains query validation
const getCertificatesByDomainsQuerySchema = z.strictObject({
- // Accept domains as string or array (domains or domains[])
- domains: z
- .union([z.array(z.string().min(1)), z.string().min(1)])
- .optional(),
- // Handle array format from query parameters (domains[])
- "domains[]": z
- .union([z.array(z.string().min(1)), z.string().min(1)])
- .optional()
- });
+ // Accept domains as string or array (domains or domains[])
+ domains: z
+ .union([z.array(z.string().min(1)), z.string().min(1)])
+ .optional(),
+ // Handle array format from query parameters (domains[])
+ "domains[]": z
+ .union([z.array(z.string().min(1)), z.string().min(1)])
+ .optional()
+});
// Type exports for request schemas
export type GetResourceByDomainParams = z.infer<
@@ -566,8 +552,8 @@ hybridRouter.get(
);
const getOrgLoginPageParamsSchema = z.strictObject({
- orgId: z.string().min(1)
- });
+ orgId: z.string().min(1)
+});
hybridRouter.get(
"/org/:orgId/login-page",
@@ -1408,8 +1394,16 @@ hybridRouter.post(
);
}
- const { olmId, newtId, ip, port, timestamp, token, publicKey, reachableAt } =
- parsedParams.data;
+ const {
+ olmId,
+ newtId,
+ ip,
+ port,
+ timestamp,
+ token,
+ publicKey,
+ reachableAt
+ } = parsedParams.data;
const destinations = await updateAndGenerateEndpointDestinations(
olmId,
diff --git a/server/private/routers/integration.ts b/server/private/routers/integration.ts
index 7ce378d1..9eefff8f 100644
--- a/server/private/routers/integration.ts
+++ b/server/private/routers/integration.ts
@@ -18,7 +18,7 @@ import * as logs from "#private/routers/auditLogs";
import {
verifyApiKeyHasAction,
verifyApiKeyIsRoot,
- verifyApiKeyOrgAccess,
+ verifyApiKeyOrgAccess
} from "@server/middlewares";
import {
verifyValidSubscription,
@@ -26,7 +26,10 @@ import {
} from "#private/middlewares";
import { ActionsEnum } from "@server/auth/actions";
-import { unauthenticated as ua, authenticated as a } from "@server/routers/integration";
+import {
+ unauthenticated as ua,
+ authenticated as a
+} from "@server/routers/integration";
import { logActionAudit } from "#private/middlewares";
export const unauthenticated = ua;
@@ -37,7 +40,7 @@ authenticated.post(
verifyApiKeyIsRoot, // We are the only ones who can use root key so its fine
verifyApiKeyHasAction(ActionsEnum.sendUsageNotification),
logActionAudit(ActionsEnum.sendUsageNotification),
- org.sendUsageNotification,
+ org.sendUsageNotification
);
authenticated.delete(
@@ -45,7 +48,7 @@ authenticated.delete(
verifyApiKeyIsRoot,
verifyApiKeyHasAction(ActionsEnum.deleteIdp),
logActionAudit(ActionsEnum.deleteIdp),
- orgIdp.deleteOrgIdp,
+ orgIdp.deleteOrgIdp
);
authenticated.get(
diff --git a/server/private/routers/license/activateLicense.ts b/server/private/routers/license/activateLicense.ts
index 55b7827e..f6c8d266 100644
--- a/server/private/routers/license/activateLicense.ts
+++ b/server/private/routers/license/activateLicense.ts
@@ -21,8 +21,8 @@ import { z } from "zod";
import { fromError } from "zod-validation-error";
const bodySchema = z.strictObject({
- licenseKey: z.string().min(1).max(255)
- });
+ licenseKey: z.string().min(1).max(255)
+});
export async function activateLicense(
req: Request,
diff --git a/server/private/routers/license/deleteLicenseKey.ts b/server/private/routers/license/deleteLicenseKey.ts
index 6f5469fc..80212e6a 100644
--- a/server/private/routers/license/deleteLicenseKey.ts
+++ b/server/private/routers/license/deleteLicenseKey.ts
@@ -24,8 +24,8 @@ import { licenseKey } from "@server/db";
import license from "#private/license/license";
const paramsSchema = z.strictObject({
- licenseKey: z.string().min(1).max(255)
- });
+ licenseKey: z.string().min(1).max(255)
+});
export async function deleteLicenseKey(
req: Request,
diff --git a/server/private/routers/loginPage/createLoginPage.ts b/server/private/routers/loginPage/createLoginPage.ts
index 75744026..b5e8ccff 100644
--- a/server/private/routers/loginPage/createLoginPage.ts
+++ b/server/private/routers/loginPage/createLoginPage.ts
@@ -36,13 +36,13 @@ import { build } from "@server/build";
import { CreateLoginPageResponse } from "@server/routers/loginPage/types";
const paramsSchema = z.strictObject({
- orgId: z.string()
- });
+ orgId: z.string()
+});
const bodySchema = z.strictObject({
- subdomain: z.string().nullable().optional(),
- domainId: z.string()
- });
+ subdomain: z.string().nullable().optional(),
+ domainId: z.string()
+});
export type CreateLoginPageBody = z.infer<typeof bodySchema>;
@@ -149,12 +149,20 @@ export async function createLoginPage(
let returned: LoginPage | undefined;
await db.transaction(async (trx) => {
-
const orgSites = await trx
.select()
.from(sites)
- .innerJoin(exitNodes, eq(exitNodes.exitNodeId, sites.exitNodeId))
- .where(and(eq(sites.orgId, orgId), eq(exitNodes.type, "gerbil"), eq(exitNodes.online, true)))
+ .innerJoin(
+ exitNodes,
+ eq(exitNodes.exitNodeId, sites.exitNodeId)
+ )
+ .where(
+ and(
+ eq(sites.orgId, orgId),
+ eq(exitNodes.type, "gerbil"),
+ eq(exitNodes.online, true)
+ )
+ )
.limit(10);
let exitNodesList = orgSites.map((s) => s.exitNodes);
@@ -163,7 +171,12 @@ export async function createLoginPage(
exitNodesList = await trx
.select()
.from(exitNodes)
- .where(and(eq(exitNodes.type, "gerbil"), eq(exitNodes.online, true)))
+ .where(
+ and(
+ eq(exitNodes.type, "gerbil"),
+ eq(exitNodes.online, true)
+ )
+ )
.limit(10);
}
diff --git a/server/private/routers/loginPage/deleteLoginPage.ts b/server/private/routers/loginPage/deleteLoginPage.ts
index 5271ebd8..0d17a731 100644
--- a/server/private/routers/loginPage/deleteLoginPage.ts
+++ b/server/private/routers/loginPage/deleteLoginPage.ts
@@ -78,15 +78,11 @@ export async function deleteLoginPage(
// if (!leftoverLinks.length) {
await db
.delete(loginPage)
- .where(
- eq(loginPage.loginPageId, parsedParams.data.loginPageId)
- );
+ .where(eq(loginPage.loginPageId, parsedParams.data.loginPageId));
await db
.delete(loginPageOrg)
- .where(
- eq(loginPageOrg.loginPageId, parsedParams.data.loginPageId)
- );
+ .where(eq(loginPageOrg.loginPageId, parsedParams.data.loginPageId));
// }
return response(res, {
diff --git a/server/private/routers/loginPage/getLoginPage.ts b/server/private/routers/loginPage/getLoginPage.ts
index b3bde203..73f6a357 100644
--- a/server/private/routers/loginPage/getLoginPage.ts
+++ b/server/private/routers/loginPage/getLoginPage.ts
@@ -23,8 +23,8 @@ import { fromError } from "zod-validation-error";
import { GetLoginPageResponse } from "@server/routers/loginPage/types";
const paramsSchema = z.strictObject({
- orgId: z.string()
- });
+ orgId: z.string()
+});
async function query(orgId: string) {
const [res] = await db
diff --git a/server/private/routers/loginPage/updateLoginPage.ts b/server/private/routers/loginPage/updateLoginPage.ts
index 0d02b124..bda614d3 100644
--- a/server/private/routers/loginPage/updateLoginPage.ts
+++ b/server/private/routers/loginPage/updateLoginPage.ts
@@ -35,7 +35,8 @@ const paramsSchema = z
})
.strict();
-const bodySchema = z.strictObject({
+const bodySchema = z
+ .strictObject({
subdomain: subdomainSchema.nullable().optional(),
domainId: z.string().optional()
})
@@ -86,7 +87,7 @@ export async function updateLoginPage(
const { loginPageId, orgId } = parsedParams.data;
- if (build === "saas"){
+ if (build === "saas") {
const { tier } = await getOrgTierData(orgId);
const subscribed = tier === TierId.STANDARD;
if (!subscribed) {
@@ -182,7 +183,10 @@ export async function updateLoginPage(
}
// update the full domain if it has changed
- if (fullDomain && fullDomain !== existingLoginPage?.fullDomain) {
+ if (
+ fullDomain &&
+ fullDomain !== existingLoginPage?.fullDomain
+ ) {
await db
.update(loginPage)
.set({ fullDomain })
diff --git a/server/private/routers/misc/sendSupportEmail.ts b/server/private/routers/misc/sendSupportEmail.ts
index f1f7a919..cd37560d 100644
--- a/server/private/routers/misc/sendSupportEmail.ts
+++ b/server/private/routers/misc/sendSupportEmail.ts
@@ -23,9 +23,9 @@ import SupportEmail from "@server/emails/templates/SupportEmail";
import config from "@server/lib/config";
const bodySchema = z.strictObject({
- body: z.string().min(1),
- subject: z.string().min(1).max(255)
- });
+ body: z.string().min(1),
+ subject: z.string().min(1).max(255)
+});
export async function sendSupportEmail(
req: Request,
@@ -66,6 +66,7 @@ export async function sendSupportEmail(
{
name: req.user?.email || "Support User",
to: "support@pangolin.net",
+ replyTo: req.user?.email || undefined,
from: config.getNoReplyEmail(),
subject: `Support Request: ${subject}`
}
diff --git a/server/private/routers/org/index.ts b/server/private/routers/org/index.ts
index 189c5323..8d11c42d 100644
--- a/server/private/routers/org/index.ts
+++ b/server/private/routers/org/index.ts
@@ -11,4 +11,4 @@
* This file is not licensed under the AGPLv3.
*/
-export * from "./sendUsageNotifications";
\ No newline at end of file
+export * from "./sendUsageNotifications";
diff --git a/server/private/routers/org/sendUsageNotifications.ts b/server/private/routers/org/sendUsageNotifications.ts
index 3ef27f91..4aa42152 100644
--- a/server/private/routers/org/sendUsageNotifications.ts
+++ b/server/private/routers/org/sendUsageNotifications.ts
@@ -35,10 +35,12 @@ const sendUsageNotificationBodySchema = z.object({
notificationType: z.enum(["approaching_70", "approaching_90", "reached"]),
limitName: z.string(),
currentUsage: z.number(),
- usageLimit: z.number(),
+ usageLimit: z.number()
});
-type SendUsageNotificationRequest = z.infer<typeof sendUsageNotificationBodySchema>;
+type SendUsageNotificationRequest = z.infer<
+ typeof sendUsageNotificationBodySchema
+>;
export type SendUsageNotificationResponse = {
success: boolean;
@@ -97,17 +99,13 @@ async function getOrgAdmins(orgId: string) {
.where(
and(
eq(userOrgs.orgId, orgId),
- or(
- eq(userOrgs.isOwner, true),
- eq(roles.isAdmin, true)
- )
+ or(eq(userOrgs.isOwner, true), eq(roles.isAdmin, true))
)
);
// Filter to only include users with verified emails
- const orgAdmins = admins.filter(admin =>
- admin.email &&
- admin.email.length > 0
+ const orgAdmins = admins.filter(
+ (admin) => admin.email && admin.email.length > 0
);
return orgAdmins;
@@ -119,7 +117,9 @@ export async function sendUsageNotification(
next: NextFunction
): Promise<any> {
try {
- const parsedParams = sendUsageNotificationParamsSchema.safeParse(req.params);
+ const parsedParams = sendUsageNotificationParamsSchema.safeParse(
+ req.params
+ );
if (!parsedParams.success) {
return next(
createHttpError(
@@ -140,12 +140,8 @@ export async function sendUsageNotification(
}
const { orgId } = parsedParams.data;
- const {
- notificationType,
- limitName,
- currentUsage,
- usageLimit,
- } = parsedBody.data;
+ const { notificationType, limitName, currentUsage, usageLimit } =
+ parsedBody.data;
// Verify organization exists
const org = await db
@@ -192,7 +188,10 @@ export async function sendUsageNotification(
let template;
let subject;
- if (notificationType === "approaching_70" || notificationType === "approaching_90") {
+ if (
+ notificationType === "approaching_70" ||
+ notificationType === "approaching_90"
+ ) {
template = NotifyUsageLimitApproaching({
email: admin.email,
limitName,
@@ -220,10 +219,15 @@ export async function sendUsageNotification(
emailsSent++;
adminEmails.push(admin.email);
-
- logger.info(`Usage notification sent to admin ${admin.email} for org ${orgId}`);
+
+ logger.info(
+ `Usage notification sent to admin ${admin.email} for org ${orgId}`
+ );
} catch (emailError) {
- logger.error(`Failed to send usage notification to ${admin.email}:`, emailError);
+ logger.error(
+ `Failed to send usage notification to ${admin.email}:`,
+ emailError
+ );
// Continue with other admins even if one fails
}
}
@@ -239,11 +243,13 @@ export async function sendUsageNotification(
message: `Usage notifications sent to ${emailsSent} administrators`,
status: HttpCode.OK
});
-
} catch (error) {
logger.error("Error sending usage notifications:", error);
return next(
- createHttpError(HttpCode.INTERNAL_SERVER_ERROR, "Failed to send usage notifications")
+ createHttpError(
+ HttpCode.INTERNAL_SERVER_ERROR,
+ "Failed to send usage notifications"
+ )
);
}
}
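The `sendUsageNotification` handler above branches on a `notificationType` drawn from the enum `["approaching_70", "approaching_90", "reached"]`. A hypothetical helper showing how a caller might derive that type from usage figures — the 70%/90% cutoffs are implied by the enum names, but the exact comparison logic here is an assumption:

```typescript
// Classify current usage against a limit into one of the notification
// types from the schema enum, or null when no notification is due.
type NotificationType = "approaching_70" | "approaching_90" | "reached" | null;

function classifyUsage(currentUsage: number, usageLimit: number): NotificationType {
    if (usageLimit <= 0) return null; // no meaningful limit to compare against
    const ratio = currentUsage / usageLimit;
    if (ratio >= 1) return "reached";
    if (ratio >= 0.9) return "approaching_90";
    if (ratio >= 0.7) return "approaching_70";
    return null; // below 70%: nothing to send
}
```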
diff --git a/server/private/routers/orgIdp/createOrgOidcIdp.ts b/server/private/routers/orgIdp/createOrgOidcIdp.ts
index c3ce774e..709f6167 100644
--- a/server/private/routers/orgIdp/createOrgOidcIdp.ts
+++ b/server/private/routers/orgIdp/createOrgOidcIdp.ts
@@ -32,19 +32,19 @@ import { CreateOrgIdpResponse } from "@server/routers/orgIdp/types";
const paramsSchema = z.strictObject({ orgId: z.string().nonempty() });
const bodySchema = z.strictObject({
- name: z.string().nonempty(),
- clientId: z.string().nonempty(),
- clientSecret: z.string().nonempty(),
- authUrl: z.url(),
- tokenUrl: z.url(),
- identifierPath: z.string().nonempty(),
- emailPath: z.string().optional(),
- namePath: z.string().optional(),
- scopes: z.string().nonempty(),
- autoProvision: z.boolean().optional(),
- variant: z.enum(["oidc", "google", "azure"]).optional().default("oidc"),
- roleMapping: z.string().optional()
- });
+ name: z.string().nonempty(),
+ clientId: z.string().nonempty(),
+ clientSecret: z.string().nonempty(),
+ authUrl: z.url(),
+ tokenUrl: z.url(),
+ identifierPath: z.string().nonempty(),
+ emailPath: z.string().optional(),
+ namePath: z.string().optional(),
+ scopes: z.string().nonempty(),
+ autoProvision: z.boolean().optional(),
+ variant: z.enum(["oidc", "google", "azure"]).optional().default("oidc"),
+ roleMapping: z.string().optional()
+});
// registry.registerPath({
// method: "put",
@@ -158,7 +158,10 @@ export async function createOrgOidcIdp(
});
});
- const redirectUrl = await generateOidcRedirectUrl(idpId as number, orgId);
+ const redirectUrl = await generateOidcRedirectUrl(
+ idpId as number,
+ orgId
+ );
return response(res, {
data: {
diff --git a/server/private/routers/orgIdp/deleteOrgIdp.ts b/server/private/routers/orgIdp/deleteOrgIdp.ts
index ca0112b2..721b91cb 100644
--- a/server/private/routers/orgIdp/deleteOrgIdp.ts
+++ b/server/private/routers/orgIdp/deleteOrgIdp.ts
@@ -66,12 +66,7 @@ export async function deleteOrgIdp(
.where(eq(idp.idpId, idpId));
if (!existingIdp) {
- return next(
- createHttpError(
- HttpCode.NOT_FOUND,
- "IdP not found"
- )
- );
+ return next(createHttpError(HttpCode.NOT_FOUND, "IdP not found"));
}
// Delete the IDP and its related records in a transaction
@@ -82,14 +77,10 @@ export async function deleteOrgIdp(
.where(eq(idpOidcConfig.idpId, idpId));
// Delete IDP-org mappings
- await trx
- .delete(idpOrg)
- .where(eq(idpOrg.idpId, idpId));
+ await trx.delete(idpOrg).where(eq(idpOrg.idpId, idpId));
// Delete the IDP itself
- await trx
- .delete(idp)
- .where(eq(idp.idpId, idpId));
+ await trx.delete(idp).where(eq(idp.idpId, idpId));
});
return response(res, {
diff --git a/server/private/routers/orgIdp/getOrgIdp.ts b/server/private/routers/orgIdp/getOrgIdp.ts
index 3ba85412..01ddc0f7 100644
--- a/server/private/routers/orgIdp/getOrgIdp.ts
+++ b/server/private/routers/orgIdp/getOrgIdp.ts
@@ -93,7 +93,10 @@ export async function getOrgIdp(
idpRes.idpOidcConfig!.clientId = decrypt(clientId, key);
}
- const redirectUrl = await generateOidcRedirectUrl(idpRes.idp.idpId, orgId);
+ const redirectUrl = await generateOidcRedirectUrl(
+ idpRes.idp.idpId,
+ orgId
+ );
return response(res, {
data: {
diff --git a/server/private/routers/orgIdp/index.ts b/server/private/routers/orgIdp/index.ts
index 562582c6..9cf937a4 100644
--- a/server/private/routers/orgIdp/index.ts
+++ b/server/private/routers/orgIdp/index.ts
@@ -15,4 +15,4 @@ export * from "./createOrgOidcIdp";
export * from "./getOrgIdp";
export * from "./listOrgIdps";
export * from "./updateOrgOidcIdp";
-export * from "./deleteOrgIdp";
\ No newline at end of file
+export * from "./deleteOrgIdp";
diff --git a/server/private/routers/orgIdp/listOrgIdps.ts b/server/private/routers/orgIdp/listOrgIdps.ts
index 646d808c..36cbc627 100644
--- a/server/private/routers/orgIdp/listOrgIdps.ts
+++ b/server/private/routers/orgIdp/listOrgIdps.ts
@@ -25,23 +25,23 @@ import { OpenAPITags, registry } from "@server/openApi";
import { ListOrgIdpsResponse } from "@server/routers/orgIdp/types";
const querySchema = z.strictObject({
- limit: z
- .string()
- .optional()
- .default("1000")
- .transform(Number)
- .pipe(z.int().nonnegative()),
- offset: z
- .string()
- .optional()
- .default("0")
- .transform(Number)
- .pipe(z.int().nonnegative())
- });
+ limit: z
+ .string()
+ .optional()
+ .default("1000")
+ .transform(Number)
+ .pipe(z.int().nonnegative()),
+ offset: z
+ .string()
+ .optional()
+ .default("0")
+ .transform(Number)
+ .pipe(z.int().nonnegative())
+});
const paramsSchema = z.strictObject({
- orgId: z.string().nonempty()
- });
+ orgId: z.string().nonempty()
+});
async function query(orgId: string, limit: number, offset: number) {
const res = await db
diff --git a/server/private/routers/orgIdp/updateOrgOidcIdp.ts b/server/private/routers/orgIdp/updateOrgOidcIdp.ts
index 3826f6b3..f29e4fc2 100644
--- a/server/private/routers/orgIdp/updateOrgOidcIdp.ts
+++ b/server/private/routers/orgIdp/updateOrgOidcIdp.ts
@@ -36,18 +36,18 @@ const paramsSchema = z
.strict();
const bodySchema = z.strictObject({
- name: z.string().optional(),
- clientId: z.string().optional(),
- clientSecret: z.string().optional(),
- authUrl: z.string().optional(),
- tokenUrl: z.string().optional(),
- identifierPath: z.string().optional(),
- emailPath: z.string().optional(),
- namePath: z.string().optional(),
- scopes: z.string().optional(),
- autoProvision: z.boolean().optional(),
- roleMapping: z.string().optional()
- });
+ name: z.string().optional(),
+ clientId: z.string().optional(),
+ clientSecret: z.string().optional(),
+ authUrl: z.string().optional(),
+ tokenUrl: z.string().optional(),
+ identifierPath: z.string().optional(),
+ emailPath: z.string().optional(),
+ namePath: z.string().optional(),
+ scopes: z.string().optional(),
+ autoProvision: z.boolean().optional(),
+ roleMapping: z.string().optional()
+});
export type UpdateOrgIdpResponse = {
idpId: number;
diff --git a/server/private/routers/re-key/index.ts b/server/private/routers/re-key/index.ts
index 41a1c967..9c1bccf8 100644
--- a/server/private/routers/re-key/index.ts
+++ b/server/private/routers/re-key/index.ts
@@ -13,4 +13,4 @@
export * from "./reGenerateClientSecret";
export * from "./reGenerateSiteSecret";
-export * from "./reGenerateExitNodeSecret";
\ No newline at end of file
+export * from "./reGenerateExitNodeSecret";
diff --git a/server/private/routers/re-key/reGenerateClientSecret.ts b/server/private/routers/re-key/reGenerateClientSecret.ts
index 310f2602..5478c690 100644
--- a/server/private/routers/re-key/reGenerateClientSecret.ts
+++ b/server/private/routers/re-key/reGenerateClientSecret.ts
@@ -123,7 +123,10 @@ export async function reGenerateClientSecret(
};
// Don't await this to prevent blocking the response
sendToClient(existingOlms[0].olmId, payload).catch((error) => {
- logger.error("Failed to send termination message to olm:", error);
+ logger.error(
+ "Failed to send termination message to olm:",
+ error
+ );
});
disconnectClient(existingOlms[0].olmId).catch((error) => {
@@ -133,7 +136,7 @@ export async function reGenerateClientSecret(
return response(res, {
data: {
- olmId: existingOlms[0].olmId,
+ olmId: existingOlms[0].olmId
},
success: true,
error: false,
diff --git a/server/private/routers/re-key/reGenerateExitNodeSecret.ts b/server/private/routers/re-key/reGenerateExitNodeSecret.ts
index b642f102..021d2ce9 100644
--- a/server/private/routers/re-key/reGenerateExitNodeSecret.ts
+++ b/server/private/routers/re-key/reGenerateExitNodeSecret.ts
@@ -12,7 +12,14 @@
*/
import { NextFunction, Request, Response } from "express";
-import { db, exitNodes, exitNodeOrgs, ExitNode, ExitNodeOrg, RemoteExitNode } from "@server/db";
+import {
+ db,
+ exitNodes,
+ exitNodeOrgs,
+ ExitNode,
+ ExitNodeOrg,
+ RemoteExitNode
+} from "@server/db";
import HttpCode from "@server/types/HttpCode";
import { z } from "zod";
import { remoteExitNodes } from "@server/db";
@@ -91,14 +98,15 @@ export async function reGenerateExitNodeSecret(
data: {}
};
// Don't await this to prevent blocking the response
- sendToClient(existingRemoteExitNode.remoteExitNodeId, payload).catch(
- (error) => {
- logger.error(
- "Failed to send termination message to remote exit node:",
- error
- );
- }
- );
+ sendToClient(
+ existingRemoteExitNode.remoteExitNodeId,
+ payload
+ ).catch((error) => {
+ logger.error(
+ "Failed to send termination message to remote exit node:",
+ error
+ );
+ });
disconnectClient(existingRemoteExitNode.remoteExitNodeId).catch(
(error) => {
diff --git a/server/private/routers/re-key/reGenerateSiteSecret.ts b/server/private/routers/re-key/reGenerateSiteSecret.ts
index b427dcc2..09cf7599 100644
--- a/server/private/routers/re-key/reGenerateSiteSecret.ts
+++ b/server/private/routers/re-key/reGenerateSiteSecret.ts
@@ -80,7 +80,7 @@ export async function reGenerateSiteSecret(
const secretHash = await hashPassword(secret);
// get the newt to verify it exists
- const existingNewts = await db
+ const existingNewts = await db
.select()
.from(newts)
.where(eq(newts.siteId, siteId));
@@ -120,15 +120,20 @@ export async function reGenerateSiteSecret(
data: {}
};
// Don't await this to prevent blocking the response
- sendToClient(existingNewts[0].newtId, payload).catch((error) => {
- logger.error(
- "Failed to send termination message to newt:",
- error
- );
- });
+ sendToClient(existingNewts[0].newtId, payload).catch(
+ (error) => {
+ logger.error(
+ "Failed to send termination message to newt:",
+ error
+ );
+ }
+ );
disconnectClient(existingNewts[0].newtId).catch((error) => {
- logger.error("Failed to disconnect newt after re-key:", error);
+ logger.error(
+ "Failed to disconnect newt after re-key:",
+ error
+ );
});
}
diff --git a/server/private/routers/remoteExitNode/createRemoteExitNode.ts b/server/private/routers/remoteExitNode/createRemoteExitNode.ts
index 5afa82ef..f734813e 100644
--- a/server/private/routers/remoteExitNode/createRemoteExitNode.ts
+++ b/server/private/routers/remoteExitNode/createRemoteExitNode.ts
@@ -36,9 +36,9 @@ export const paramsSchema = z.object({
});
const bodySchema = z.strictObject({
- remoteExitNodeId: z.string().length(15),
- secret: z.string().length(48)
- });
+ remoteExitNodeId: z.string().length(15),
+ secret: z.string().length(48)
+});
export type CreateRemoteExitNodeBody = z.infer<typeof bodySchema>;
diff --git a/server/private/routers/remoteExitNode/deleteRemoteExitNode.ts b/server/private/routers/remoteExitNode/deleteRemoteExitNode.ts
index e293f421..a23363fc 100644
--- a/server/private/routers/remoteExitNode/deleteRemoteExitNode.ts
+++ b/server/private/routers/remoteExitNode/deleteRemoteExitNode.ts
@@ -25,9 +25,9 @@ import { usageService } from "@server/lib/billing/usageService";
import { FeatureId } from "@server/lib/billing";
const paramsSchema = z.strictObject({
- orgId: z.string().min(1),
- remoteExitNodeId: z.string().min(1)
- });
+ orgId: z.string().min(1),
+ remoteExitNodeId: z.string().min(1)
+});
export async function deleteRemoteExitNode(
req: Request,
diff --git a/server/private/routers/remoteExitNode/getRemoteExitNode.ts b/server/private/routers/remoteExitNode/getRemoteExitNode.ts
index c7b98297..01ea080c 100644
--- a/server/private/routers/remoteExitNode/getRemoteExitNode.ts
+++ b/server/private/routers/remoteExitNode/getRemoteExitNode.ts
@@ -24,9 +24,9 @@ import { fromError } from "zod-validation-error";
import { GetRemoteExitNodeResponse } from "@server/routers/remoteExitNode/types";
const getRemoteExitNodeSchema = z.strictObject({
- orgId: z.string().min(1),
- remoteExitNodeId: z.string().min(1)
- });
+ orgId: z.string().min(1),
+ remoteExitNodeId: z.string().min(1)
+});
async function query(remoteExitNodeId: string) {
const [remoteExitNode] = await db
diff --git a/server/private/routers/remoteExitNode/getRemoteExitNodeToken.ts b/server/private/routers/remoteExitNode/getRemoteExitNodeToken.ts
index 16ec4d5d..24f0de15 100644
--- a/server/private/routers/remoteExitNode/getRemoteExitNodeToken.ts
+++ b/server/private/routers/remoteExitNode/getRemoteExitNodeToken.ts
@@ -55,7 +55,8 @@ export async function getRemoteExitNodeToken(
try {
if (token) {
- const { session, remoteExitNode } = await validateRemoteExitNodeSessionToken(token);
+ const { session, remoteExitNode } =
+ await validateRemoteExitNodeSessionToken(token);
if (session) {
if (config.getRawConfig().app.log_failed_attempts) {
logger.info(
@@ -103,7 +104,10 @@ export async function getRemoteExitNodeToken(
}
const resToken = generateSessionToken();
- await createRemoteExitNodeSession(resToken, existingRemoteExitNode.remoteExitNodeId);
+ await createRemoteExitNodeSession(
+ resToken,
+ existingRemoteExitNode.remoteExitNodeId
+ );
// logger.debug(`Created RemoteExitNode token response: ${JSON.stringify(resToken)}`);
diff --git a/server/private/routers/remoteExitNode/handleRemoteExitNodePingMessage.ts b/server/private/routers/remoteExitNode/handleRemoteExitNodePingMessage.ts
index 78492714..dafc1412 100644
--- a/server/private/routers/remoteExitNode/handleRemoteExitNodePingMessage.ts
+++ b/server/private/routers/remoteExitNode/handleRemoteExitNodePingMessage.ts
@@ -33,7 +33,9 @@ export const startRemoteExitNodeOfflineChecker = (): void => {
offlineCheckerInterval = setInterval(async () => {
try {
- const twoMinutesAgo = Math.floor((Date.now() - OFFLINE_THRESHOLD_MS) / 1000);
+ const twoMinutesAgo = Math.floor(
+ (Date.now() - OFFLINE_THRESHOLD_MS) / 1000
+ );
// Find clients that haven't pinged in the last 2 minutes and mark them as offline
const newlyOfflineNodes = await db
@@ -48,11 +50,13 @@ export const startRemoteExitNodeOfflineChecker = (): void => {
isNull(exitNodes.lastPing)
)
)
- ).returning();
-
+ )
+ .returning();
// Update the sites to offline if they have not pinged either
- const exitNodeIds = newlyOfflineNodes.map(node => node.exitNodeId);
+ const exitNodeIds = newlyOfflineNodes.map(
+ (node) => node.exitNodeId
+ );
const sitesOnNode = await db
.select()
@@ -77,7 +81,6 @@ export const startRemoteExitNodeOfflineChecker = (): void => {
.where(eq(sites.siteId, site.siteId));
}
}
-
} catch (error) {
logger.error("Error in offline checker interval", { error });
}
@@ -100,7 +103,9 @@ export const stopRemoteExitNodeOfflineChecker = (): void => {
/**
* Handles ping messages from clients and responds with pong
*/
-export const handleRemoteExitNodePingMessage: MessageHandler = async (context) => {
+export const handleRemoteExitNodePingMessage: MessageHandler = async (
+ context
+) => {
const { message, client: c, sendToClient } = context;
const remoteExitNode = c as RemoteExitNode;
@@ -120,7 +125,7 @@ export const handleRemoteExitNodePingMessage: MessageHandler = async (context) =
.update(exitNodes)
.set({
lastPing: Math.floor(Date.now() / 1000),
- online: true,
+ online: true
})
.where(eq(exitNodes.exitNodeId, remoteExitNode.exitNodeId));
} catch (error) {
@@ -131,7 +136,7 @@ export const handleRemoteExitNodePingMessage: MessageHandler = async (context) =
message: {
type: "pong",
data: {
- timestamp: new Date().toISOString(),
+ timestamp: new Date().toISOString()
}
},
broadcast: false,
diff --git a/server/private/routers/remoteExitNode/handleRemoteExitNodeRegisterMessage.ts b/server/private/routers/remoteExitNode/handleRemoteExitNodeRegisterMessage.ts
index a733db7d..5ad37edc 100644
--- a/server/private/routers/remoteExitNode/handleRemoteExitNodeRegisterMessage.ts
+++ b/server/private/routers/remoteExitNode/handleRemoteExitNodeRegisterMessage.ts
@@ -29,7 +29,8 @@ export const handleRemoteExitNodeRegisterMessage: MessageHandler = async (
return;
}
- const { remoteExitNodeVersion, remoteExitNodeSecondaryVersion } = message.data;
+ const { remoteExitNodeVersion, remoteExitNodeSecondaryVersion } =
+ message.data;
if (!remoteExitNodeVersion) {
logger.warn("Remote exit node version not found");
@@ -39,7 +40,10 @@ export const handleRemoteExitNodeRegisterMessage: MessageHandler = async (
// update the version
await db
.update(remoteExitNodes)
- .set({ version: remoteExitNodeVersion, secondaryVersion: remoteExitNodeSecondaryVersion })
+ .set({
+ version: remoteExitNodeVersion,
+ secondaryVersion: remoteExitNodeSecondaryVersion
+ })
.where(
eq(
remoteExitNodes.remoteExitNodeId,
diff --git a/server/private/routers/remoteExitNode/listRemoteExitNodes.ts b/server/private/routers/remoteExitNode/listRemoteExitNodes.ts
index a13a05cd..e6548600 100644
--- a/server/private/routers/remoteExitNode/listRemoteExitNodes.ts
+++ b/server/private/routers/remoteExitNode/listRemoteExitNodes.ts
@@ -24,8 +24,8 @@ import { fromError } from "zod-validation-error";
import { ListRemoteExitNodesResponse } from "@server/routers/remoteExitNode/types";
const listRemoteExitNodesParamsSchema = z.strictObject({
- orgId: z.string()
- });
+ orgId: z.string()
+});
const listRemoteExitNodesSchema = z.object({
limit: z
diff --git a/server/private/routers/remoteExitNode/pickRemoteExitNodeDefaults.ts b/server/private/routers/remoteExitNode/pickRemoteExitNodeDefaults.ts
index bb7c89d5..5dcd545e 100644
--- a/server/private/routers/remoteExitNode/pickRemoteExitNodeDefaults.ts
+++ b/server/private/routers/remoteExitNode/pickRemoteExitNodeDefaults.ts
@@ -22,8 +22,8 @@ import { z } from "zod";
import { PickRemoteExitNodeDefaultsResponse } from "@server/routers/remoteExitNode/types";
const paramsSchema = z.strictObject({
- orgId: z.string()
- });
+ orgId: z.string()
+});
export async function pickRemoteExitNodeDefaults(
req: Request,
diff --git a/server/private/routers/remoteExitNode/quickStartRemoteExitNode.ts b/server/private/routers/remoteExitNode/quickStartRemoteExitNode.ts
index 4d368152..ebe365d1 100644
--- a/server/private/routers/remoteExitNode/quickStartRemoteExitNode.ts
+++ b/server/private/routers/remoteExitNode/quickStartRemoteExitNode.ts
@@ -38,7 +38,9 @@ export async function quickStartRemoteExitNode(
next: NextFunction
): Promise<any> {
try {
- const parsedBody = quickStartRemoteExitNodeBodySchema.safeParse(req.body);
+ const parsedBody = quickStartRemoteExitNodeBodySchema.safeParse(
+ req.body
+ );
if (!parsedBody.success) {
return next(
createHttpError(
diff --git a/server/private/routers/ws/index.ts b/server/private/routers/ws/index.ts
index 4d803a3a..3a8db537 100644
--- a/server/private/routers/ws/index.ts
+++ b/server/private/routers/ws/index.ts
@@ -11,4 +11,4 @@
* This file is not licensed under the AGPLv3.
*/
-export * from "./ws";
\ No newline at end of file
+export * from "./ws";
diff --git a/server/private/routers/ws/messageHandlers.ts b/server/private/routers/ws/messageHandlers.ts
index 71c2b253..5a6c85cf 100644
--- a/server/private/routers/ws/messageHandlers.ts
+++ b/server/private/routers/ws/messageHandlers.ts
@@ -23,4 +23,4 @@ export const messageHandlers: Record<string, MessageHandler> = {
"remoteExitNode/ping": handleRemoteExitNodePingMessage
};
-startRemoteExitNodeOfflineChecker(); // this is to handle the offline check for remote exit nodes
\ No newline at end of file
+startRemoteExitNodeOfflineChecker(); // this is to handle the offline check for remote exit nodes
diff --git a/server/private/routers/ws/ws.ts b/server/private/routers/ws/ws.ts
index 41c400cd..784c3d51 100644
--- a/server/private/routers/ws/ws.ts
+++ b/server/private/routers/ws/ws.ts
@@ -37,7 +37,14 @@ import { validateRemoteExitNodeSessionToken } from "#private/auth/sessions/remot
import { rateLimitService } from "#private/lib/rateLimit";
import { messageHandlers } from "@server/routers/ws/messageHandlers";
import { messageHandlers as privateMessageHandlers } from "#private/routers/ws/messageHandlers";
-import { AuthenticatedWebSocket, ClientType, WSMessage, TokenPayload, WebSocketRequest, RedisMessage } from "@server/routers/ws";
+import {
+ AuthenticatedWebSocket,
+ ClientType,
+ WSMessage,
+ TokenPayload,
+ WebSocketRequest,
+ RedisMessage
+} from "@server/routers/ws";
import { validateSessionToken } from "@server/auth/sessions/app";
// Merge public and private message handlers
@@ -55,9 +62,9 @@ const processMessage = async (
try {
const message: WSMessage = JSON.parse(data.toString());
- logger.debug(
- `Processing message from ${clientType.toUpperCase()} ID: ${clientId}, type: ${message.type}`
- );
+ // logger.debug(
+ // `Processing message from ${clientType.toUpperCase()} ID: ${clientId}, type: ${message.type}`
+ // );
if (!message.type || typeof message.type !== "string") {
throw new Error("Invalid message format: missing or invalid type");
@@ -216,7 +223,7 @@ const initializeRedisSubscription = async (): Promise<void> => {
// Each node is responsible for restoring its own connection state to Redis
// This approach is more efficient than cross-node coordination because:
// 1. Each node knows its own connections (source of truth)
-// 2. No network overhead from broadcasting state between nodes
+// 2. No network overhead from broadcasting state between nodes
// 3. No race conditions from simultaneous updates
// 4. Redis becomes eventually consistent as each node restores independently
// 5. Simpler logic with better fault tolerance
@@ -233,8 +240,10 @@ const recoverConnectionState = async (): Promise<void> => {
// Each node simply restores its own local connections to Redis
// This is the source of truth - no need for cross-node coordination
await restoreLocalConnectionsToRedis();
-
- logger.info("Redis connection state recovery completed - restored local state");
+
+ logger.info(
+ "Redis connection state recovery completed - restored local state"
+ );
} catch (error) {
logger.error("Error during Redis recovery:", error);
} finally {
@@ -251,8 +260,10 @@ const restoreLocalConnectionsToRedis = async (): Promise<void> => {
try {
// Restore all current local connections to Redis
for (const [clientId, clients] of connectedClients.entries()) {
- const validClients = clients.filter(client => client.readyState === WebSocket.OPEN);
-
+ const validClients = clients.filter(
+ (client) => client.readyState === WebSocket.OPEN
+ );
+
if (validClients.length > 0) {
// Add this node to the client's connection list
await redisManager.sadd(getConnectionsKey(clientId), NODE_ID);
@@ -303,7 +314,10 @@ const addClient = async (
Date.now().toString()
);
} catch (error) {
- logger.error("Failed to add client to Redis tracking (connection still functional locally):", error);
+ logger.error(
+ "Failed to add client to Redis tracking (connection still functional locally):",
+ error
+ );
}
}
@@ -326,9 +340,14 @@ const removeClient = async (
if (redisManager.isRedisEnabled()) {
try {
await redisManager.srem(getConnectionsKey(clientId), NODE_ID);
- await redisManager.del(getNodeConnectionsKey(NODE_ID, clientId));
+ await redisManager.del(
+ getNodeConnectionsKey(NODE_ID, clientId)
+ );
} catch (error) {
- logger.error("Failed to remove client from Redis tracking (cleanup will occur on recovery):", error);
+ logger.error(
+ "Failed to remove client from Redis tracking (cleanup will occur on recovery):",
+ error
+ );
}
}
@@ -345,7 +364,10 @@ const removeClient = async (
ws.connectionId
);
} catch (error) {
- logger.error("Failed to remove specific connection from Redis tracking:", error);
+ logger.error(
+ "Failed to remove specific connection from Redis tracking:",
+ error
+ );
}
}
@@ -372,7 +394,9 @@ const sendToClientLocal = async (
}
});
- logger.debug(`sendToClient: Message type ${message.type} sent to clientId ${clientId}`);
+ logger.debug(
+ `sendToClient: Message type ${message.type} sent to clientId ${clientId}`
+ );
return true;
};
@@ -411,14 +435,22 @@ const sendToClient = async (
fromNodeId: NODE_ID
};
- await redisManager.publish(REDIS_CHANNEL, JSON.stringify(redisMessage));
+ await redisManager.publish(
+ REDIS_CHANNEL,
+ JSON.stringify(redisMessage)
+ );
} catch (error) {
- logger.error("Failed to send message via Redis, message may be lost:", error);
+ logger.error(
+ "Failed to send message via Redis, message may be lost:",
+ error
+ );
// Continue execution - local delivery already attempted
}
} else if (!localSent && !redisManager.isRedisEnabled()) {
// Redis is disabled or unavailable - log that we couldn't deliver to remote nodes
- logger.debug(`Could not deliver message to ${clientId} - not connected locally and Redis unavailable`);
+ logger.debug(
+ `Could not deliver message to ${clientId} - not connected locally and Redis unavailable`
+ );
}
return localSent;
@@ -441,13 +473,21 @@ const broadcastToAllExcept = async (
fromNodeId: NODE_ID
};
- await redisManager.publish(REDIS_CHANNEL, JSON.stringify(redisMessage));
+ await redisManager.publish(
+ REDIS_CHANNEL,
+ JSON.stringify(redisMessage)
+ );
} catch (error) {
- logger.error("Failed to broadcast message via Redis, remote nodes may not receive it:", error);
+ logger.error(
+ "Failed to broadcast message via Redis, remote nodes may not receive it:",
+ error
+ );
// Continue execution - local broadcast already completed
}
} else {
- logger.debug("Redis unavailable - broadcast limited to local node only");
+ logger.debug(
+ "Redis unavailable - broadcast limited to local node only"
+ );
}
};
@@ -512,8 +552,10 @@ const verifyToken = async (
return null;
}
- if (olm.userId) { // this is a user device and we need to check the user token
- const { session: userSession, user } = await validateSessionToken(userToken);
+ if (olm.userId) {
+ // this is a user device and we need to check the user token
+ const { session: userSession, user } =
+ await validateSessionToken(userToken);
if (!userSession || !user) {
return null;
}
@@ -668,7 +710,7 @@ const handleWSUpgrade = (server: HttpServer): void => {
url.searchParams.get("token") ||
request.headers["sec-websocket-protocol"] ||
"";
- const userToken = url.searchParams.get('userToken') || '';
+ const userToken = url.searchParams.get("userToken") || "";
let clientType = url.searchParams.get(
"clientType"
) as ClientType;
@@ -690,7 +732,11 @@ const handleWSUpgrade = (server: HttpServer): void => {
return;
}
- const tokenPayload = await verifyToken(token, clientType, userToken);
+ const tokenPayload = await verifyToken(
+ token,
+ clientType,
+ userToken
+ );
if (!tokenPayload) {
logger.debug(
"Unauthorized connection attempt: invalid token..."
@@ -724,50 +770,68 @@ const handleWSUpgrade = (server: HttpServer): void => {
// Add periodic connection state sync to handle Redis disconnections/reconnections
const startPeriodicStateSync = (): void => {
// Lightweight sync every 5 minutes - just restore our own state
- setInterval(async () => {
- if (redisManager.isRedisEnabled() && !isRedisRecoveryInProgress) {
- try {
- await restoreLocalConnectionsToRedis();
- logger.debug("Periodic connection state sync completed");
- } catch (error) {
- logger.error("Error during periodic connection state sync:", error);
+ setInterval(
+ async () => {
+ if (redisManager.isRedisEnabled() && !isRedisRecoveryInProgress) {
+ try {
+ await restoreLocalConnectionsToRedis();
+ logger.debug("Periodic connection state sync completed");
+ } catch (error) {
+ logger.error(
+ "Error during periodic connection state sync:",
+ error
+ );
+ }
}
- }
- }, 5 * 60 * 1000); // 5 minutes
+ },
+ 5 * 60 * 1000
+ ); // 5 minutes
// Cleanup stale connections every 15 minutes
- setInterval(async () => {
- if (redisManager.isRedisEnabled()) {
- try {
- await cleanupStaleConnections();
- logger.debug("Periodic connection cleanup completed");
- } catch (error) {
- logger.error("Error during periodic connection cleanup:", error);
+ setInterval(
+ async () => {
+ if (redisManager.isRedisEnabled()) {
+ try {
+ await cleanupStaleConnections();
+ logger.debug("Periodic connection cleanup completed");
+ } catch (error) {
+ logger.error(
+ "Error during periodic connection cleanup:",
+ error
+ );
+ }
}
- }
- }, 15 * 60 * 1000); // 15 minutes
+ },
+ 15 * 60 * 1000
+ ); // 15 minutes
};
const cleanupStaleConnections = async (): Promise<void> => {
if (!redisManager.isRedisEnabled()) return;
try {
- const nodeKeys = await redisManager.getClient()?.keys(`ws:node:${NODE_ID}:*`) || [];
-
+ const nodeKeys =
+ (await redisManager.getClient()?.keys(`ws:node:${NODE_ID}:*`)) ||
+ [];
+
for (const nodeKey of nodeKeys) {
const connections = await redisManager.hgetall(nodeKey);
- const clientId = nodeKey.replace(`ws:node:${NODE_ID}:`, '');
+ const clientId = nodeKey.replace(`ws:node:${NODE_ID}:`, "");
const localClients = connectedClients.get(clientId) || [];
const localConnectionIds = localClients
- .filter(client => client.readyState === WebSocket.OPEN)
- .map(client => client.connectionId)
+ .filter((client) => client.readyState === WebSocket.OPEN)
+ .map((client) => client.connectionId)
.filter(Boolean);
// Remove Redis entries for connections that no longer exist locally
- for (const [connectionId, timestamp] of Object.entries(connections)) {
+ for (const [connectionId, timestamp] of Object.entries(
+ connections
+ )) {
if (!localConnectionIds.includes(connectionId)) {
await redisManager.hdel(nodeKey, connectionId);
- logger.debug(`Cleaned up stale connection: ${connectionId} for client: ${clientId}`);
+ logger.debug(
+ `Cleaned up stale connection: ${connectionId} for client: ${clientId}`
+ );
}
}
@@ -776,7 +840,9 @@ const cleanupStaleConnections = async (): Promise<void> => {
if (Object.keys(remainingConnections).length === 0) {
await redisManager.srem(getConnectionsKey(clientId), NODE_ID);
await redisManager.del(nodeKey);
- logger.debug(`Cleaned up empty connection tracking for client: ${clientId}`);
+ logger.debug(
+ `Cleaned up empty connection tracking for client: ${clientId}`
+ );
}
}
} catch (error) {
@@ -789,38 +855,38 @@ if (redisManager.isRedisEnabled()) {
initializeRedisSubscription().catch((error) => {
logger.error("Failed to initialize Redis subscription:", error);
});
-
+
// Register recovery callback with Redis manager
// When Redis reconnects, each node simply restores its own local state
redisManager.onReconnection(async () => {
logger.info("Redis reconnected, starting WebSocket state recovery...");
await recoverConnectionState();
});
-
+
// Start periodic state synchronization
startPeriodicStateSync();
-
+
logger.info(
`WebSocket handler initialized with Redis support - Node ID: ${NODE_ID}`
);
} else {
- logger.debug(
- "WebSocket handler initialized in local mode"
- );
+ logger.debug("WebSocket handler initialized in local mode");
}
// Disconnect a specific client and force them to reconnect
const disconnectClient = async (clientId: string): Promise<boolean> => {
const mapKey = getClientMapKey(clientId);
const clients = connectedClients.get(mapKey);
-
+
if (!clients || clients.length === 0) {
logger.debug(`No connections found for client ID: ${clientId}`);
return false;
}
- logger.info(`Disconnecting client ID: ${clientId} (${clients.length} connection(s))`);
-
+ logger.info(
+ `Disconnecting client ID: ${clientId} (${clients.length} connection(s))`
+ );
+
// Close all connections for this client
clients.forEach((client) => {
if (client.readyState === WebSocket.OPEN) {
diff --git a/server/routers/accessToken/deleteAccessToken.ts b/server/routers/accessToken/deleteAccessToken.ts
index 5de4df9b..4e18ddeb 100644
--- a/server/routers/accessToken/deleteAccessToken.ts
+++ b/server/routers/accessToken/deleteAccessToken.ts
@@ -11,8 +11,8 @@ import { db } from "@server/db";
import { OpenAPITags, registry } from "@server/openApi";
const deleteAccessTokenParamsSchema = z.strictObject({
- accessTokenId: z.string()
- });
+ accessTokenId: z.string()
+});
registry.registerPath({
method: "delete",
diff --git a/server/routers/accessToken/generateAccessToken.ts b/server/routers/accessToken/generateAccessToken.ts
index 36a20268..35da6add 100644
--- a/server/routers/accessToken/generateAccessToken.ts
+++ b/server/routers/accessToken/generateAccessToken.ts
@@ -25,17 +25,14 @@ import { sha256 } from "@oslojs/crypto/sha2";
import { OpenAPITags, registry } from "@server/openApi";
export const generateAccessTokenBodySchema = z.strictObject({
- validForSeconds: z.int().positive().optional(), // seconds
- title: z.string().optional(),
- description: z.string().optional()
- });
+ validForSeconds: z.int().positive().optional(), // seconds
+ title: z.string().optional(),
+ description: z.string().optional()
+});
export const generateAccssTokenParamsSchema = z.strictObject({
- resourceId: z
- .string()
- .transform(Number)
- .pipe(z.int().positive())
- });
+ resourceId: z.string().transform(Number).pipe(z.int().positive())
+});
export type GenerateAccessTokenResponse = Omit<
ResourceAccessToken,
diff --git a/server/routers/accessToken/listAccessTokens.ts b/server/routers/accessToken/listAccessTokens.ts
index 476c858b..2f929fc6 100644
--- a/server/routers/accessToken/listAccessTokens.ts
+++ b/server/routers/accessToken/listAccessTokens.ts
@@ -17,7 +17,8 @@ import stoi from "@server/lib/stoi";
import { fromZodError } from "zod-validation-error";
import { OpenAPITags, registry } from "@server/openApi";
-const listAccessTokensParamsSchema = z.strictObject({
+const listAccessTokensParamsSchema = z
+ .strictObject({
resourceId: z
.string()
.optional()
diff --git a/server/routers/apiKeys/createRootApiKey.ts b/server/routers/apiKeys/createRootApiKey.ts
index 8e9e571d..fc076623 100644
--- a/server/routers/apiKeys/createRootApiKey.ts
+++ b/server/routers/apiKeys/createRootApiKey.ts
@@ -15,8 +15,8 @@ import logger from "@server/logger";
import { hashPassword } from "@server/auth/password";
const bodySchema = z.strictObject({
- name: z.string().min(1).max(255)
- });
+ name: z.string().min(1).max(255)
+});
export type CreateRootApiKeyBody = z.infer<typeof bodySchema>;
diff --git a/server/routers/apiKeys/listApiKeyActions.ts b/server/routers/apiKeys/listApiKeyActions.ts
index 7432d175..073a7583 100644
--- a/server/routers/apiKeys/listApiKeyActions.ts
+++ b/server/routers/apiKeys/listApiKeyActions.ts
@@ -47,8 +47,7 @@ export type ListApiKeyActionsResponse = {
registry.registerPath({
method: "get",
path: "/org/{orgId}/api-key/{apiKeyId}/actions",
- description:
- "List all actions set for an API key.",
+ description: "List all actions set for an API key.",
tags: [OpenAPITags.Org, OpenAPITags.ApiKey],
request: {
params: paramsSchema,
diff --git a/server/routers/apiKeys/setApiKeyActions.ts b/server/routers/apiKeys/setApiKeyActions.ts
index fe8cc4f1..62967388 100644
--- a/server/routers/apiKeys/setApiKeyActions.ts
+++ b/server/routers/apiKeys/setApiKeyActions.ts
@@ -11,9 +11,10 @@ import { eq, and, inArray } from "drizzle-orm";
import { OpenAPITags, registry } from "@server/openApi";
const bodySchema = z.strictObject({
- actionIds: z.tuple([z.string()], z.string())
- .transform((v) => Array.from(new Set(v)))
- });
+ actionIds: z
+ .tuple([z.string()], z.string())
+ .transform((v) => Array.from(new Set(v)))
+});
const paramsSchema = z.object({
apiKeyId: z.string().nonempty()
diff --git a/server/routers/apiKeys/setApiKeyOrgs.ts b/server/routers/apiKeys/setApiKeyOrgs.ts
index d60aad73..51d0f043 100644
--- a/server/routers/apiKeys/setApiKeyOrgs.ts
+++ b/server/routers/apiKeys/setApiKeyOrgs.ts
@@ -10,9 +10,10 @@ import { fromError } from "zod-validation-error";
import { eq, and, inArray } from "drizzle-orm";
const bodySchema = z.strictObject({
- orgIds: z.tuple([z.string()], z.string())
- .transform((v) => Array.from(new Set(v)))
- });
+ orgIds: z
+ .tuple([z.string()], z.string())
+ .transform((v) => Array.from(new Set(v)))
+});
const paramsSchema = z.object({
apiKeyId: z.string().nonempty()
diff --git a/server/routers/auditLogs/exportRequestAuditLog.ts b/server/routers/auditLogs/exportRequestAuditLog.ts
index 9e55cfc4..8b70ec5e 100644
--- a/server/routers/auditLogs/exportRequestAuditLog.ts
+++ b/server/routers/auditLogs/exportRequestAuditLog.ts
@@ -9,17 +9,23 @@ import logger from "@server/logger";
import {
queryAccessAuditLogsQuery,
queryRequestAuditLogsParams,
- queryRequest
+ queryRequest,
+ countRequestQuery
} from "./queryRequestAuditLog";
import { generateCSV } from "./generateCSV";
+export const MAX_EXPORT_LIMIT = 50_000;
+
registry.registerPath({
method: "get",
path: "/org/{orgId}/logs/request",
description: "Query the request audit log for an organization",
tags: [OpenAPITags.Org],
request: {
- query: queryAccessAuditLogsQuery,
+ query: queryAccessAuditLogsQuery.omit({
+ limit: true,
+ offset: true
+ }),
params: queryRequestAuditLogsParams
},
responses: {}
@@ -53,9 +59,19 @@ export async function exportRequestAuditLogs(
const data = { ...parsedQuery.data, ...parsedParams.data };
+ const [{ count }] = await countRequestQuery(data);
+ if (count > MAX_EXPORT_LIMIT) {
+ return next(
+ createHttpError(
+ HttpCode.BAD_REQUEST,
+ `Export limit exceeded. Your selection contains ${count} rows, but the maximum is ${MAX_EXPORT_LIMIT} rows. Please select a shorter time range to reduce the data.`
+ )
+ );
+ }
+
const baseQuery = queryRequest(data);
- const log = await baseQuery.limit(data.limit).offset(data.offset);
+ const log = await baseQuery.limit(MAX_EXPORT_LIMIT);
const csvData = generateCSV(log);
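The export path above now counts rows before building the CSV and refuses oversized selections instead of silently truncating. A minimal sketch of that count-then-cap guard — `countRows` and `fetchRows` are hypothetical stand-ins for the drizzle queries (`countRequestQuery` / `queryRequest`), not names from this codebase:

```typescript
// Sketch of the count-then-cap export guard, assuming stand-in query
// functions; mirrors the BAD_REQUEST refusal in exportRequestAuditLogs.
const MAX_EXPORT_LIMIT = 50_000;

async function exportGuarded(
    countRows: () => Promise<number>,
    fetchRows: (limit: number) => Promise<unknown[]>
): Promise<unknown[]> {
    const count = await countRows();
    if (count > MAX_EXPORT_LIMIT) {
        // Refuse up front rather than returning a silently clipped export
        throw new Error(
            `Export limit exceeded: ${count} rows > ${MAX_EXPORT_LIMIT}`
        );
    }
    return fetchRows(MAX_EXPORT_LIMIT);
}
```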
diff --git a/server/routers/auditLogs/generateCSV.ts b/server/routers/auditLogs/generateCSV.ts
index 8a067069..ea0da29f 100644
--- a/server/routers/auditLogs/generateCSV.ts
+++ b/server/routers/auditLogs/generateCSV.ts
@@ -2,15 +2,17 @@ export function generateCSV(data: any[]): string {
if (data.length === 0) {
return "orgId,action,actorType,timestamp,actor\n";
}
-
+
const headers = Object.keys(data[0]).join(",");
- const rows = data.map(row =>
- Object.values(row).map(value =>
- typeof value === 'string' && value.includes(',')
- ? `"${value.replace(/"/g, '""')}"`
- : value
- ).join(",")
+ const rows = data.map((row) =>
+ Object.values(row)
+ .map((value) =>
+ typeof value === "string" && value.includes(",")
+ ? `"${value.replace(/"/g, '""')}"`
+ : value
+ )
+ .join(",")
);
-
+
return [headers, ...rows].join("\n");
-}
\ No newline at end of file
+}
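The quoting rule in `generateCSV` can be isolated into a standalone sketch: values containing a comma are wrapped in quotes, with embedded quotes doubled per the usual CSV convention.

```typescript
// Standalone sketch of the quoting rule used by generateCSV above.
function csvEscape(value: unknown): unknown {
    return typeof value === "string" && value.includes(",")
        ? `"${value.replace(/"/g, '""')}"`
        : value;
}

function toCsvRow(row: Record<string, unknown>): string {
    return Object.values(row).map(csvEscape).join(",");
}
```

Note that this rule only quotes on commas; strings containing newlines or quotes but no comma pass through unquoted, which stricter CSV consumers may reject.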
diff --git a/server/routers/auditLogs/queryRequestAnalytics.ts b/server/routers/auditLogs/queryRequestAnalytics.ts
index 9e4ea17e..a765f176 100644
--- a/server/routers/auditLogs/queryRequestAnalytics.ts
+++ b/server/routers/auditLogs/queryRequestAnalytics.ts
@@ -2,7 +2,7 @@ import { db, requestAuditLog, driver } from "@server/db";
import { registry } from "@server/openApi";
import { NextFunction } from "express";
import { Request, Response } from "express";
-import { eq, gt, lt, and, count, sql, desc, not, isNull } from "drizzle-orm";
+import { eq, gte, lte, and, count, sql, desc, not, isNull } from "drizzle-orm";
import { OpenAPITags } from "@server/openApi";
import { z } from "zod";
import createHttpError from "http-errors";
@@ -10,6 +10,7 @@ import HttpCode from "@server/types/HttpCode";
import { fromError } from "zod-validation-error";
import response from "@server/lib/response";
import logger from "@server/logger";
+import { getSevenDaysAgo } from "@app/lib/getSevenDaysAgo";
const queryAccessAuditLogsQuery = z.object({
    // ISO string; just validate it's a parseable date
@@ -19,7 +20,14 @@ const queryAccessAuditLogsQuery = z.object({
error: "timeStart must be a valid ISO date string"
})
.transform((val) => Math.floor(new Date(val).getTime() / 1000))
- .optional(),
+ .optional()
+ .prefault(() => getSevenDaysAgo().toISOString())
+ .openapi({
+ type: "string",
+ format: "date-time",
+ description:
+ "Start time as ISO date string (defaults to 7 days ago)"
+ }),
timeEnd: z
.string()
.refine((val) => !isNaN(Date.parse(val)), {
@@ -55,15 +63,10 @@ type Q = z.infer;
async function query(query: Q) {
let baseConditions = and(
eq(requestAuditLog.orgId, query.orgId),
- lt(requestAuditLog.timestamp, query.timeEnd)
+ gte(requestAuditLog.timestamp, query.timeStart),
+ lte(requestAuditLog.timestamp, query.timeEnd)
);
- if (query.timeStart) {
- baseConditions = and(
- baseConditions,
- gt(requestAuditLog.timestamp, query.timeStart)
- );
- }
if (query.resourceId) {
baseConditions = and(
baseConditions,
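The default `timeStart` above comes from a `getSevenDaysAgo` helper that is imported but not shown in this diff; a plausible minimal implementation (an assumption — the real helper may differ):

```typescript
// Assumed shape of the getSevenDaysAgo helper imported above.
// Takes an optional reference time to keep it testable.
function getSevenDaysAgo(now: Date = new Date()): Date {
    return new Date(now.getTime() - 7 * 24 * 60 * 60 * 1000);
}
```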
diff --git a/server/routers/auditLogs/queryRequestAuditLog.ts b/server/routers/auditLogs/queryRequestAuditLog.ts
index 663ad787..9cedec63 100644
--- a/server/routers/auditLogs/queryRequestAuditLog.ts
+++ b/server/routers/auditLogs/queryRequestAuditLog.ts
@@ -11,6 +11,7 @@ import { fromError } from "zod-validation-error";
import { QueryRequestAuditLogResponse } from "@server/routers/auditLogs/types";
import response from "@server/lib/response";
import logger from "@server/logger";
+import { getSevenDaysAgo } from "@app/lib/getSevenDaysAgo";
export const queryAccessAuditLogsQuery = z.object({
    // ISO string; just validate it's a parseable date
@@ -19,7 +20,14 @@ export const queryAccessAuditLogsQuery = z.object({
.refine((val) => !isNaN(Date.parse(val)), {
error: "timeStart must be a valid ISO date string"
})
- .transform((val) => Math.floor(new Date(val).getTime() / 1000)),
+ .transform((val) => Math.floor(new Date(val).getTime() / 1000))
+ .prefault(() => getSevenDaysAgo().toISOString())
+ .openapi({
+ type: "string",
+ format: "date-time",
+ description:
+ "Start time as ISO date string (defaults to 7 days ago)"
+ }),
timeEnd: z
.string()
.refine((val) => !isNaN(Date.parse(val)), {
diff --git a/server/routers/auditLogs/types.ts b/server/routers/auditLogs/types.ts
index 81cef733..474aa926 100644
--- a/server/routers/auditLogs/types.ts
+++ b/server/routers/auditLogs/types.ts
@@ -90,4 +90,4 @@ export type QueryAccessAuditLogResponse = {
}[];
locations: string[];
};
-};
\ No newline at end of file
+};
diff --git a/server/routers/auth/changePassword.ts b/server/routers/auth/changePassword.ts
index fa007d37..1a26b911 100644
--- a/server/routers/auth/changePassword.ts
+++ b/server/routers/auth/changePassword.ts
@@ -6,10 +6,7 @@ import { z } from "zod";
import { db } from "@server/db";
import { User, users } from "@server/db";
import { response } from "@server/lib/response";
-import {
- hashPassword,
- verifyPassword
-} from "@server/auth/password";
+import { hashPassword, verifyPassword } from "@server/auth/password";
import { verifyTotpCode } from "@server/auth/totp";
import logger from "@server/logger";
import { unauthorized } from "@server/auth/unauthorizedResponse";
@@ -23,10 +20,10 @@ import ConfirmPasswordReset from "@server/emails/templates/NotifyResetPassword";
import config from "@server/lib/config";
export const changePasswordBody = z.strictObject({
- oldPassword: z.string(),
- newPassword: passwordSchema,
- code: z.string().optional()
- });
+ oldPassword: z.string(),
+ newPassword: passwordSchema,
+ code: z.string().optional()
+});
export type ChangePasswordBody = z.infer<typeof changePasswordBody>;
@@ -62,12 +59,14 @@ async function invalidateAllSessionsExceptCurrent(
}
// Delete the user sessions (except current)
- await trx.delete(sessions).where(
- and(
- eq(sessions.userId, userId),
- ne(sessions.sessionId, currentSessionId)
- )
- );
+ await trx
+ .delete(sessions)
+ .where(
+ and(
+ eq(sessions.userId, userId),
+ ne(sessions.sessionId, currentSessionId)
+ )
+ );
});
} catch (e) {
logger.error("Failed to invalidate user sessions except current", e);
@@ -157,7 +156,10 @@ export async function changePassword(
.where(eq(users.userId, user.userId));
// Invalidate all sessions except the current one
- await invalidateAllSessionsExceptCurrent(user.userId, req.session.sessionId);
+ await invalidateAllSessionsExceptCurrent(
+ user.userId,
+ req.session.sessionId
+ );
try {
const email = user.email!;
diff --git a/server/routers/auth/checkResourceSession.ts b/server/routers/auth/checkResourceSession.ts
index 39466400..74a94a84 100644
--- a/server/routers/auth/checkResourceSession.ts
+++ b/server/routers/auth/checkResourceSession.ts
@@ -9,7 +9,7 @@ import logger from "@server/logger";
export const params = z.strictObject({
token: z.string(),
- resourceId: z.string().transform(Number).pipe(z.int().positive()),
+ resourceId: z.string().transform(Number).pipe(z.int().positive())
});
export type CheckResourceSessionParams = z.infer<typeof params>;
@@ -21,7 +21,7 @@ export type CheckResourceSessionResponse = {
export async function checkResourceSession(
req: Request,
res: Response,
- next: NextFunction,
+ next: NextFunction
): Promise<any> {
const parsedParams = params.safeParse(req.params);
@@ -29,8 +29,8 @@ export async function checkResourceSession(
return next(
createHttpError(
HttpCode.BAD_REQUEST,
- fromError(parsedParams.error).toString(),
- ),
+ fromError(parsedParams.error).toString()
+ )
);
}
@@ -39,7 +39,7 @@ export async function checkResourceSession(
try {
const { resourceSession } = await validateResourceSessionToken(
token,
- resourceId,
+ resourceId
);
let valid = false;
@@ -52,15 +52,15 @@ export async function checkResourceSession(
success: true,
error: false,
message: "Checked validity",
- status: HttpCode.OK,
+ status: HttpCode.OK
});
} catch (e) {
logger.error(e);
return next(
createHttpError(
HttpCode.INTERNAL_SERVER_ERROR,
- "Failed to reset password",
- ),
+ "Failed to reset password"
+ )
);
}
}
diff --git a/server/routers/auth/disable2fa.ts b/server/routers/auth/disable2fa.ts
index ebf6ab52..254d6ccd 100644
--- a/server/routers/auth/disable2fa.ts
+++ b/server/routers/auth/disable2fa.ts
@@ -17,9 +17,9 @@ import { unauthorized } from "@server/auth/unauthorizedResponse";
import { UserType } from "@server/types/UserTypes";
export const disable2faBody = z.strictObject({
- password: z.string(),
- code: z.string().optional()
- });
+ password: z.string(),
+ code: z.string().optional()
+});
export type Disable2faBody = z.infer<typeof disable2faBody>;
@@ -56,7 +56,10 @@ export async function disable2fa(
}
try {
- const validPassword = await verifyPassword(password, user.passwordHash!);
+ const validPassword = await verifyPassword(
+ password,
+ user.passwordHash!
+ );
if (!validPassword) {
return next(unauthorized());
}
diff --git a/server/routers/auth/index.ts b/server/routers/auth/index.ts
index 4600a4cc..22040614 100644
--- a/server/routers/auth/index.ts
+++ b/server/routers/auth/index.ts
@@ -16,4 +16,4 @@ export * from "./checkResourceSession";
export * from "./securityKey";
export * from "./startDeviceWebAuth";
export * from "./verifyDeviceWebAuth";
-export * from "./pollDeviceWebAuth";
\ No newline at end of file
+export * from "./pollDeviceWebAuth";
diff --git a/server/routers/auth/pollDeviceWebAuth.ts b/server/routers/auth/pollDeviceWebAuth.ts
index 9949ab42..a5c71362 100644
--- a/server/routers/auth/pollDeviceWebAuth.ts
+++ b/server/routers/auth/pollDeviceWebAuth.ts
@@ -7,10 +7,7 @@ import logger from "@server/logger";
import { response } from "@server/lib/response";
import { db, deviceWebAuthCodes } from "@server/db";
import { eq, and, gt } from "drizzle-orm";
-import {
- createSession,
- generateSessionToken
-} from "@server/auth/sessions/app";
+import { createSession, generateSessionToken } from "@server/auth/sessions/app";
import { encodeHexLowerCase } from "@oslojs/encoding";
import { sha256 } from "@oslojs/crypto/sha2";
@@ -22,9 +19,7 @@ export type PollDeviceWebAuthParams = z.infer;
// Helper function to hash device code before querying database
function hashDeviceCode(code: string): string {
- return encodeHexLowerCase(
- sha256(new TextEncoder().encode(code))
- );
+ return encodeHexLowerCase(sha256(new TextEncoder().encode(code)));
}
export type PollDeviceWebAuthResponse = {
@@ -127,7 +122,9 @@ export async function pollDeviceWebAuth(
// Check if userId is set (should be set when verified)
if (!deviceCode.userId) {
- logger.error("Device code is verified but userId is missing", { codeId: deviceCode.codeId });
+ logger.error("Device code is verified but userId is missing", {
+ codeId: deviceCode.codeId
+ });
return next(
createHttpError(
HttpCode.INTERNAL_SERVER_ERROR,
@@ -165,4 +162,3 @@ export async function pollDeviceWebAuth(
);
}
}
-
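Both device-auth routes hash the one-time code before it touches the database, so a leaked table never reveals usable codes. The same transform expressed with Node's built-in crypto (the diff uses the `@oslojs` equivalents, `encodeHexLowerCase(sha256(...))`):

```typescript
import { createHash } from "node:crypto";

// Equivalent of hashDeviceCode: lowercase hex SHA-256 of the code.
function hashDeviceCode(code: string): string {
    return createHash("sha256").update(code, "utf8").digest("hex");
}
```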
diff --git a/server/routers/auth/requestPasswordReset.ts b/server/routers/auth/requestPasswordReset.ts
index 0f9953e8..42b53d24 100644
--- a/server/routers/auth/requestPasswordReset.ts
+++ b/server/routers/auth/requestPasswordReset.ts
@@ -18,8 +18,8 @@ import { hashPassword } from "@server/auth/password";
import { UserType } from "@server/types/UserTypes";
export const requestPasswordResetBody = z.strictObject({
- email: z.email().toLowerCase()
- });
+ email: z.email().toLowerCase()
+});
export type RequestPasswordResetBody = z.infer<typeof requestPasswordResetBody>;
diff --git a/server/routers/auth/requestTotpSecret.ts b/server/routers/auth/requestTotpSecret.ts
index 53d80147..bc032ecd 100644
--- a/server/routers/auth/requestTotpSecret.ts
+++ b/server/routers/auth/requestTotpSecret.ts
@@ -17,9 +17,9 @@ import { verifySession } from "@server/auth/sessions/verifySession";
import config from "@server/lib/config";
export const requestTotpSecretBody = z.strictObject({
- password: z.string(),
- email: z.email().optional()
- });
+ password: z.string(),
+ email: z.email().optional()
+});
export type RequestTotpSecretBody = z.infer<typeof requestTotpSecretBody>;
@@ -46,7 +46,8 @@ export async function requestTotpSecret(
const { password, email } = parsedBody.data;
- const { user: sessionUser, session: existingSession } = await verifySession(req);
+ const { user: sessionUser, session: existingSession } =
+ await verifySession(req);
let user: User | null = sessionUser;
if (!existingSession) {
@@ -112,11 +113,7 @@ export async function requestTotpSecret(
const hex = crypto.getRandomValues(new Uint8Array(20));
const secret = encodeHex(hex);
- const uri = createTOTPKeyURI(
- appName,
- user.email!,
- hex
- );
+ const uri = createTOTPKeyURI(appName, user.email!, hex);
await db
.update(users)
diff --git a/server/routers/auth/resetPassword.ts b/server/routers/auth/resetPassword.ts
index aeb85558..6e616346 100644
--- a/server/routers/auth/resetPassword.ts
+++ b/server/routers/auth/resetPassword.ts
@@ -18,11 +18,11 @@ import { sendEmail } from "@server/emails";
import { passwordSchema } from "@server/auth/passwordSchema";
export const resetPasswordBody = z.strictObject({
- email: z.email().toLowerCase(),
- token: z.string(), // reset secret code
- newPassword: passwordSchema,
- code: z.string().optional() // 2fa code
- });
+ email: z.email().toLowerCase(),
+ token: z.string(), // reset secret code
+ newPassword: passwordSchema,
+ code: z.string().optional() // 2fa code
+});
export type ResetPasswordBody = z.infer<typeof resetPasswordBody>;
diff --git a/server/routers/auth/securityKey.ts b/server/routers/auth/securityKey.ts
index eed2328d..9a1ee2cd 100644
--- a/server/routers/auth/securityKey.ts
+++ b/server/routers/auth/securityKey.ts
@@ -19,9 +19,7 @@ import type {
GenerateAuthenticationOptionsOpts,
AuthenticatorTransportFuture
} from "@simplewebauthn/server";
-import {
- isoBase64URL
-} from '@simplewebauthn/server/helpers';
+import { isoBase64URL } from "@simplewebauthn/server/helpers";
import config from "@server/lib/config";
import { UserType } from "@server/types/UserTypes";
import { verifyPassword } from "@server/auth/password";
@@ -30,10 +28,12 @@ import { verifyTotpCode } from "@server/auth/totp";
// The RP ID is the domain name of your application
const rpID = (() => {
- const url = config.getRawConfig().app.dashboard_url ? new URL(config.getRawConfig().app.dashboard_url!) : undefined;
+ const url = config.getRawConfig().app.dashboard_url
+ ? new URL(config.getRawConfig().app.dashboard_url!)
+ : undefined;
// For localhost, we must use 'localhost' without port
- if (url?.hostname === 'localhost' || !url) {
- return 'localhost';
+ if (url?.hostname === "localhost" || !url) {
+ return "localhost";
}
return url.hostname;
})();
@@ -46,25 +46,38 @@ const origin = config.getRawConfig().app.dashboard_url || "localhost";
// This supports clustered deployments and persists across server restarts
// Clean up expired challenges every 5 minutes
-setInterval(async () => {
- try {
- const now = Date.now();
- await db
- .delete(webauthnChallenge)
- .where(lt(webauthnChallenge.expiresAt, now));
- // logger.debug("Cleaned up expired security key challenges");
- } catch (error) {
- logger.error("Failed to clean up expired security key challenges", error);
- }
-}, 5 * 60 * 1000);
+setInterval(
+ async () => {
+ try {
+ const now = Date.now();
+ await db
+ .delete(webauthnChallenge)
+ .where(lt(webauthnChallenge.expiresAt, now));
+ // logger.debug("Cleaned up expired security key challenges");
+ } catch (error) {
+ logger.error(
+ "Failed to clean up expired security key challenges",
+ error
+ );
+ }
+ },
+ 5 * 60 * 1000
+);
// Helper functions for challenge management
-async function storeChallenge(sessionId: string, challenge: string, securityKeyName?: string, userId?: string) {
- const expiresAt = Date.now() + (5 * 60 * 1000); // 5 minutes
-
+async function storeChallenge(
+ sessionId: string,
+ challenge: string,
+ securityKeyName?: string,
+ userId?: string
+) {
+ const expiresAt = Date.now() + 5 * 60 * 1000; // 5 minutes
+
// Delete any existing challenge for this session
- await db.delete(webauthnChallenge).where(eq(webauthnChallenge.sessionId, sessionId));
-
+ await db
+ .delete(webauthnChallenge)
+ .where(eq(webauthnChallenge.sessionId, sessionId));
+
// Insert new challenge
await db.insert(webauthnChallenge).values({
sessionId,
@@ -88,7 +101,9 @@ async function getChallenge(sessionId: string) {
// Check if expired
if (challengeData.expiresAt < Date.now()) {
- await db.delete(webauthnChallenge).where(eq(webauthnChallenge.sessionId, sessionId));
+ await db
+ .delete(webauthnChallenge)
+ .where(eq(webauthnChallenge.sessionId, sessionId));
return null;
}
@@ -96,7 +111,9 @@ async function getChallenge(sessionId: string) {
}
async function clearChallenge(sessionId: string) {
- await db.delete(webauthnChallenge).where(eq(webauthnChallenge.sessionId, sessionId));
+ await db
+ .delete(webauthnChallenge)
+ .where(eq(webauthnChallenge.sessionId, sessionId));
}
export const registerSecurityKeyBody = z.strictObject({
@@ -153,7 +170,10 @@ export async function startRegistration(
try {
// Verify password
- const validPassword = await verifyPassword(password, user.passwordHash!);
+ const validPassword = await verifyPassword(
+ password,
+ user.passwordHash!
+ );
if (!validPassword) {
return next(unauthorized());
}
@@ -197,9 +217,11 @@ export async function startRegistration(
.from(securityKeys)
.where(eq(securityKeys.userId, user.userId));
- const excludeCredentials = existingSecurityKeys.map(key => ({
+ const excludeCredentials = existingSecurityKeys.map((key) => ({
id: key.credentialId,
- transports: key.transports ? JSON.parse(key.transports) as AuthenticatorTransportFuture[] : undefined
+ transports: key.transports
+ ? (JSON.parse(key.transports) as AuthenticatorTransportFuture[])
+ : undefined
}));
const options: GenerateRegistrationOptionsOpts = {
@@ -207,18 +229,23 @@ export async function startRegistration(
rpID,
userID: isoBase64URL.toBuffer(user.userId),
userName: user.email || user.username,
- attestationType: 'none',
+ attestationType: "none",
excludeCredentials,
authenticatorSelection: {
- residentKey: 'preferred',
- userVerification: 'preferred',
+ residentKey: "preferred",
+ userVerification: "preferred"
}
};
const registrationOptions = await generateRegistrationOptions(options);
// Store challenge in database
- await storeChallenge(req.session.sessionId, registrationOptions.challenge, name, user.userId);
+ await storeChallenge(
+ req.session.sessionId,
+ registrationOptions.challenge,
+ name,
+ user.userId
+ );
return response(res, {
data: registrationOptions,
@@ -270,7 +297,7 @@ export async function verifyRegistration(
try {
// Get challenge from database
const challengeData = await getChallenge(req.session.sessionId);
-
+
if (!challengeData) {
return next(
createHttpError(
@@ -292,10 +319,7 @@ export async function verifyRegistration(
if (!verified || !registrationInfo) {
return next(
- createHttpError(
- HttpCode.BAD_REQUEST,
- "Verification failed"
- )
+ createHttpError(HttpCode.BAD_REQUEST, "Verification failed")
);
}
@@ -303,9 +327,13 @@ export async function verifyRegistration(
await db.insert(securityKeys).values({
credentialId: registrationInfo.credential.id,
userId: user.userId,
- publicKey: isoBase64URL.fromBuffer(registrationInfo.credential.publicKey),
+ publicKey: isoBase64URL.fromBuffer(
+ registrationInfo.credential.publicKey
+ ),
signCount: registrationInfo.credential.counter || 0,
- transports: registrationInfo.credential.transports ? JSON.stringify(registrationInfo.credential.transports) : null,
+ transports: registrationInfo.credential.transports
+ ? JSON.stringify(registrationInfo.credential.transports)
+ : null,
name: challengeData.securityKeyName,
lastUsed: new Date().toISOString(),
dateCreated: new Date().toISOString()
@@ -407,7 +435,10 @@ export async function deleteSecurityKey(
try {
// Verify password
- const validPassword = await verifyPassword(password, user.passwordHash!);
+ const validPassword = await verifyPassword(
+ password,
+ user.passwordHash!
+ );
if (!validPassword) {
return next(unauthorized());
}
@@ -447,10 +478,12 @@ export async function deleteSecurityKey(
await db
.delete(securityKeys)
- .where(and(
- eq(securityKeys.credentialId, credentialId),
- eq(securityKeys.userId, user.userId)
- ));
+ .where(
+ and(
+ eq(securityKeys.credentialId, credentialId),
+ eq(securityKeys.userId, user.userId)
+ )
+ );
return response(res, {
data: null,
@@ -502,10 +535,7 @@ export async function startAuthentication(
if (!user || user.type !== UserType.Internal) {
return next(
- createHttpError(
- HttpCode.BAD_REQUEST,
- "Invalid credentials"
- )
+ createHttpError(HttpCode.BAD_REQUEST, "Invalid credentials")
);
}
@@ -525,25 +555,37 @@ export async function startAuthentication(
);
}
- allowCredentials = userSecurityKeys.map(key => ({
+ allowCredentials = userSecurityKeys.map((key) => ({
id: key.credentialId,
- transports: key.transports ? JSON.parse(key.transports) as AuthenticatorTransportFuture[] : undefined
+ transports: key.transports
+ ? (JSON.parse(
+ key.transports
+ ) as AuthenticatorTransportFuture[])
+ : undefined
}));
}
const options: GenerateAuthenticationOptionsOpts = {
rpID,
allowCredentials,
- userVerification: 'preferred',
+ userVerification: "preferred"
};
- const authenticationOptions = await generateAuthenticationOptions(options);
+ const authenticationOptions =
+ await generateAuthenticationOptions(options);
// Generate a temporary session ID for unauthenticated users
- const tempSessionId = email ? `temp_${email}_${Date.now()}` : `temp_${Date.now()}`;
+ const tempSessionId = email
+ ? `temp_${email}_${Date.now()}`
+ : `temp_${Date.now()}`;
// Store challenge in database
- await storeChallenge(tempSessionId, authenticationOptions.challenge, undefined, userId);
+ await storeChallenge(
+ tempSessionId,
+ authenticationOptions.challenge,
+ undefined,
+ userId
+ );
return response(res, {
data: { ...authenticationOptions, tempSessionId },
@@ -580,7 +622,7 @@ export async function verifyAuthentication(
}
const { credential } = parsedBody.data;
- const tempSessionId = req.headers['x-temp-session-id'] as string;
+ const tempSessionId = req.headers["x-temp-session-id"] as string;
if (!tempSessionId) {
return next(
@@ -594,7 +636,7 @@ export async function verifyAuthentication(
try {
// Get challenge from database
const challengeData = await getChallenge(tempSessionId);
-
+
if (!challengeData) {
return next(
createHttpError(
@@ -646,7 +688,11 @@ export async function verifyAuthentication(
id: securityKey.credentialId,
publicKey: isoBase64URL.toBuffer(securityKey.publicKey),
counter: securityKey.signCount,
- transports: securityKey.transports ? JSON.parse(securityKey.transports) as AuthenticatorTransportFuture[] : undefined
+ transports: securityKey.transports
+ ? (JSON.parse(
+ securityKey.transports
+ ) as AuthenticatorTransportFuture[])
+ : undefined
},
requireUserVerification: false
});
@@ -672,7 +718,8 @@ export async function verifyAuthentication(
.where(eq(securityKeys.credentialId, credentialId));
// Create session for the user
- const { createSession, generateSessionToken, serializeSessionCookie } = await import("@server/auth/sessions/app");
+ const { createSession, generateSessionToken, serializeSessionCookie } =
+ await import("@server/auth/sessions/app");
const token = generateSessionToken();
const session = await createSession(token, user.userId);
const isSecure = req.protocol === "https";
@@ -703,4 +750,4 @@ export async function verifyAuthentication(
)
);
}
-}
\ No newline at end of file
+}
diff --git a/server/routers/auth/signup.ts b/server/routers/auth/signup.ts
index 842214cf..2605a026 100644
--- a/server/routers/auth/signup.ts
+++ b/server/routers/auth/signup.ts
@@ -56,8 +56,14 @@ export async function signup(
);
}
- const { email, password, inviteToken, inviteId, termsAcceptedTimestamp, marketingEmailConsent } =
- parsedBody.data;
+ const {
+ email,
+ password,
+ inviteToken,
+ inviteId,
+ termsAcceptedTimestamp,
+ marketingEmailConsent
+ } = parsedBody.data;
const passwordHash = await hashPassword(password);
const userId = generateId(15);
@@ -222,7 +228,9 @@ export async function signup(
);
res.appendHeader("Set-Cookie", cookie);
if (build == "saas" && marketingEmailConsent) {
- logger.debug(`User ${email} opted in to marketing emails during signup.`);
+ logger.debug(
+ `User ${email} opted in to marketing emails during signup.`
+ );
moveEmailToAudience(email, AudienceIds.SignUps);
}
diff --git a/server/routers/auth/startDeviceWebAuth.ts b/server/routers/auth/startDeviceWebAuth.ts
index 925df67f..85fb5262 100644
--- a/server/routers/auth/startDeviceWebAuth.ts
+++ b/server/routers/auth/startDeviceWebAuth.ts
@@ -13,10 +13,12 @@ import { maxmindLookup } from "@server/db/maxmind";
import { encodeHexLowerCase } from "@oslojs/encoding";
import { sha256 } from "@oslojs/crypto/sha2";
-const bodySchema = z.object({
- deviceName: z.string().optional(),
- applicationName: z.string().min(1, "Application name is required")
-}).strict();
+const bodySchema = z
+ .object({
+ deviceName: z.string().optional(),
+ applicationName: z.string().min(1, "Application name is required")
+ })
+ .strict();
export type StartDeviceWebAuthBody = z.infer<typeof bodySchema>;
@@ -34,14 +36,12 @@ function generateDeviceCode(): string {
// Helper function to hash device code before storing in database
function hashDeviceCode(code: string): string {
- return encodeHexLowerCase(
- sha256(new TextEncoder().encode(code))
- );
+ return encodeHexLowerCase(sha256(new TextEncoder().encode(code)));
}
// Helper function to extract IP from request
function extractIpFromRequest(req: Request): string | undefined {
- const ip = req.ip || req.socket.remoteAddress;
+ const ip = req.ip;
if (!ip) {
return undefined;
}
@@ -75,10 +75,10 @@ async function getCityFromIp(ip: string): Promise<string | undefined> {
return undefined;
}
- // MaxMind CountryResponse doesn't include city by default
- // If city data is available, it would be in result.city?.names?.en
- // But since we're using CountryResponse type, we'll just return undefined
- // The user said "don't do this if not easy", so we'll skip city for now
+ if (result.country) {
+ return result.country.names?.en || result.country.iso_code;
+ }
+
return undefined;
} catch (error) {
logger.debug("Failed to get city from IP", error);
diff --git a/server/routers/auth/types.ts b/server/routers/auth/types.ts
index bb5a1b4e..023b2d8e 100644
--- a/server/routers/auth/types.ts
+++ b/server/routers/auth/types.ts
@@ -5,4 +5,4 @@ export type TransferSessionResponse = {
export type GetSessionTransferTokenRenponse = {
token: string;
-};
\ No newline at end of file
+};
diff --git a/server/routers/auth/validateSetupToken.ts b/server/routers/auth/validateSetupToken.ts
index 1a4725b6..26043f2d 100644
--- a/server/routers/auth/validateSetupToken.ts
+++ b/server/routers/auth/validateSetupToken.ts
@@ -9,8 +9,8 @@ import logger from "@server/logger";
import { fromError } from "zod-validation-error";
const validateSetupTokenSchema = z.strictObject({
- token: z.string().min(1, "Token is required")
- });
+ token: z.string().min(1, "Token is required")
+});
export type ValidateSetupTokenResponse = {
valid: boolean;
@@ -41,10 +41,7 @@ export async function validateSetupToken(
.select()
.from(setupTokens)
.where(
- and(
- eq(setupTokens.token, token),
- eq(setupTokens.used, false)
- )
+ and(eq(setupTokens.token, token), eq(setupTokens.used, false))
);
if (!setupToken) {
@@ -79,4 +76,4 @@ export async function validateSetupToken(
)
);
}
-}
\ No newline at end of file
+}
diff --git a/server/routers/auth/verifyEmail.ts b/server/routers/auth/verifyEmail.ts
index 8d31eb45..31c5166d 100644
--- a/server/routers/auth/verifyEmail.ts
+++ b/server/routers/auth/verifyEmail.ts
@@ -14,8 +14,8 @@ import { freeLimitSet, limitsService } from "@server/lib/billing";
import { build } from "@server/build";
export const verifyEmailBody = z.strictObject({
- code: z.string()
- });
+ code: z.string()
+});
export type VerifyEmailBody = z.infer<typeof verifyEmailBody>;
diff --git a/server/routers/auth/verifyTotp.ts b/server/routers/auth/verifyTotp.ts
index 9243c9f9..207287ea 100644
--- a/server/routers/auth/verifyTotp.ts
+++ b/server/routers/auth/verifyTotp.ts
@@ -19,10 +19,10 @@ import { verifySession } from "@server/auth/sessions/verifySession";
import { unauthorized } from "@server/auth/unauthorizedResponse";
export const verifyTotpBody = z.strictObject({
- email: z.email().optional(),
- password: z.string().optional(),
- code: z.string()
- });
+ email: z.email().optional(),
+ password: z.string().optional(),
+ code: z.string()
+});
export type VerifyTotpBody = z.infer<typeof verifyTotpBody>;
diff --git a/server/routers/badger/exchangeSession.ts b/server/routers/badger/exchangeSession.ts
index b4b2deea..b8d01c11 100644
--- a/server/routers/badger/exchangeSession.ts
+++ b/server/routers/badger/exchangeSession.ts
@@ -12,7 +12,10 @@ import {
serializeResourceSessionCookie,
validateResourceSessionToken
} from "@server/auth/sessions/resource";
-import { generateSessionToken, SESSION_COOKIE_EXPIRES } from "@server/auth/sessions/app";
+import {
+ generateSessionToken,
+ SESSION_COOKIE_EXPIRES
+} from "@server/auth/sessions/app";
import { SESSION_COOKIE_EXPIRES as RESOURCE_SESSION_COOKIE_EXPIRES } from "@server/auth/sessions/resource";
import config from "@server/lib/config";
import { response } from "@server/lib/response";
@@ -55,8 +58,8 @@ export async function exchangeSession(
let cleanHost = host;
// if the host ends with :port
if (cleanHost.match(/:[0-9]{1,5}$/)) {
- const matched = ''+cleanHost.match(/:[0-9]{1,5}$/);
- cleanHost = cleanHost.slice(0, -1*matched.length);
+ const matched = "" + cleanHost.match(/:[0-9]{1,5}$/);
+ cleanHost = cleanHost.slice(0, -1 * matched.length);
}
const clientIp = requestIp?.split(":")[0];
@@ -153,8 +156,8 @@ export async function exchangeSession(
}
} else {
const expires = new Date(
- Date.now() + SESSION_COOKIE_EXPIRES
- ).getTime();
+ Date.now() + SESSION_COOKIE_EXPIRES
+ ).getTime();
await createResourceSession({
token,
resourceId: resource.resourceId,
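The host cleanup in `exchangeSession` strips a trailing `:port` suffix; the reformatted code still string-coerces the match result (`"" + cleanHost.match(...)`). The same rule can be written against the match array directly — a sketch, not the codebase's function:

```typescript
// Strip a trailing :port suffix from a host header value, if present.
function stripPort(host: string): string {
    const m = host.match(/:[0-9]{1,5}$/);
    return m ? host.slice(0, -m[0].length) : host;
}
```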
diff --git a/server/routers/badger/logRequestAudit.ts b/server/routers/badger/logRequestAudit.ts
index 1cf97f98..1343bdaa 100644
--- a/server/routers/badger/logRequestAudit.ts
+++ b/server/routers/badger/logRequestAudit.ts
@@ -148,7 +148,7 @@ export async function cleanUpOldLogs(orgId: string, retentionDays: number) {
}
}
-export function logRequestAudit(
+export async function logRequestAudit(
data: {
action: boolean;
reason: number;
@@ -174,14 +174,13 @@ export function logRequestAudit(
}
) {
try {
- // Quick synchronous check - if org has 0 retention, skip immediately
+ // Check retention before buffering any logs
if (data.orgId) {
- const cached = cache.get(`org_${data.orgId}_retentionDays`);
- if (cached === 0) {
+ const retentionDays = await getRetentionDays(data.orgId);
+ if (retentionDays === 0) {
// do not log
return;
}
- // If not cached or > 0, we'll log it (async retention check happens in background)
}
let actorType: string | undefined;
@@ -261,16 +260,6 @@ export function logRequestAudit(
} else {
scheduleFlush();
}
-
- // Async retention check in background (don't await)
- if (
- data.orgId &&
- cache.get(`org_${data.orgId}_retentionDays`) === undefined
- ) {
- getRetentionDays(data.orgId).catch((err) =>
- logger.error("Error checking retention days:", err)
- );
- }
} catch (error) {
logger.error(error);
}
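The `logRequestAudit` change above replaces a fire-and-forget background retention lookup with an awaited one. The reasoning can be sketched with a hypothetical memoized async getter (names and the stand-in fetch are illustrative, not the real implementation): the first call per org costs one lookup, every later call is a cache hit, so awaiting before buffering is cheap and closes the race where a zero-retention org could be logged before the background check completed.

```typescript
// Illustrative memoized retention lookup; fetchRetentionDays stands in
// for the real DB-backed query.
const retentionCache = new Map<string, number>();

async function fetchRetentionDays(orgId: string): Promise<number> {
    return orgId === "org-no-logs" ? 0 : 30;
}

async function getRetentionDays(orgId: string): Promise<number> {
    const cached = retentionCache.get(orgId);
    if (cached !== undefined) return cached;
    const days = await fetchRetentionDays(orgId);
    retentionCache.set(orgId, days);
    return days;
}

async function logIfRetained(orgId: string, buffer: string[], entry: string) {
    // Awaited check: a zero-retention org is never buffered, even on the
    // very first request, unlike the old background-check approach.
    if ((await getRetentionDays(orgId)) === 0) return;
    buffer.push(entry);
}
```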
diff --git a/server/routers/badger/verifySession.test.ts b/server/routers/badger/verifySession.test.ts
index b0ad9873..7c967ace 100644
--- a/server/routers/badger/verifySession.test.ts
+++ b/server/routers/badger/verifySession.test.ts
@@ -1,13 +1,11 @@
-import { assertEquals } from '@test/assert';
+import { assertEquals } from "@test/assert";
function isPathAllowed(pattern: string, path: string): boolean {
-
// Normalize and split paths into segments
const normalize = (p: string) => p.split("/").filter(Boolean);
const patternParts = normalize(pattern);
const pathParts = normalize(path);
-
// Recursive function to try different wildcard matches
function matchSegments(patternIndex: number, pathIndex: number): boolean {
const indent = " ".repeat(pathIndex); // Indent based on recursion depth
@@ -30,7 +28,6 @@ function isPathAllowed(pattern: string, path: string): boolean {
// For full segment wildcards, try consuming different numbers of path segments
if (currentPatternPart === "*") {
-
// Try consuming 0 segments (skip the wildcard)
if (matchSegments(patternIndex + 1, pathIndex)) {
return true;
@@ -74,69 +71,213 @@ function isPathAllowed(pattern: string, path: string): boolean {
}
function runTests() {
- console.log('Running path matching tests...');
+ console.log("Running path matching tests...");
// Test exact matching
- assertEquals(isPathAllowed('foo', 'foo'), true, 'Exact match should be allowed');
- assertEquals(isPathAllowed('foo', 'bar'), false, 'Different segments should not match');
- assertEquals(isPathAllowed('foo/bar', 'foo/bar'), true, 'Exact multi-segment match should be allowed');
- assertEquals(isPathAllowed('foo/bar', 'foo/baz'), false, 'Partial multi-segment match should not be allowed');
+ assertEquals(
+ isPathAllowed("foo", "foo"),
+ true,
+ "Exact match should be allowed"
+ );
+ assertEquals(
+ isPathAllowed("foo", "bar"),
+ false,
+ "Different segments should not match"
+ );
+ assertEquals(
+ isPathAllowed("foo/bar", "foo/bar"),
+ true,
+ "Exact multi-segment match should be allowed"
+ );
+ assertEquals(
+ isPathAllowed("foo/bar", "foo/baz"),
+ false,
+ "Partial multi-segment match should not be allowed"
+ );
// Test with leading and trailing slashes
- assertEquals(isPathAllowed('/foo', 'foo'), true, 'Pattern with leading slash should match');
- assertEquals(isPathAllowed('foo/', 'foo'), true, 'Pattern with trailing slash should match');
- assertEquals(isPathAllowed('/foo/', 'foo'), true, 'Pattern with both leading and trailing slashes should match');
- assertEquals(isPathAllowed('foo', '/foo/'), true, 'Path with leading and trailing slashes should match');
+ assertEquals(
+ isPathAllowed("/foo", "foo"),
+ true,
+ "Pattern with leading slash should match"
+ );
+ assertEquals(
+ isPathAllowed("foo/", "foo"),
+ true,
+ "Pattern with trailing slash should match"
+ );
+ assertEquals(
+ isPathAllowed("/foo/", "foo"),
+ true,
+ "Pattern with both leading and trailing slashes should match"
+ );
+ assertEquals(
+ isPathAllowed("foo", "/foo/"),
+ true,
+ "Path with leading and trailing slashes should match"
+ );
// Test simple wildcard matching
- assertEquals(isPathAllowed('*', 'foo'), true, 'Single wildcard should match any single segment');
- assertEquals(isPathAllowed('*', 'foo/bar'), true, 'Single wildcard should match multiple segments');
- assertEquals(isPathAllowed('*/bar', 'foo/bar'), true, 'Wildcard prefix should match');
- assertEquals(isPathAllowed('foo/*', 'foo/bar'), true, 'Wildcard suffix should match');
- assertEquals(isPathAllowed('foo/*/baz', 'foo/bar/baz'), true, 'Wildcard in middle should match');
+ assertEquals(
+ isPathAllowed("*", "foo"),
+ true,
+ "Single wildcard should match any single segment"
+ );
+ assertEquals(
+ isPathAllowed("*", "foo/bar"),
+ true,
+ "Single wildcard should match multiple segments"
+ );
+ assertEquals(
+ isPathAllowed("*/bar", "foo/bar"),
+ true,
+ "Wildcard prefix should match"
+ );
+ assertEquals(
+ isPathAllowed("foo/*", "foo/bar"),
+ true,
+ "Wildcard suffix should match"
+ );
+ assertEquals(
+ isPathAllowed("foo/*/baz", "foo/bar/baz"),
+ true,
+ "Wildcard in middle should match"
+ );
// Test multiple wildcards
- assertEquals(isPathAllowed('*/*', 'foo/bar'), true, 'Multiple wildcards should match corresponding segments');
- assertEquals(isPathAllowed('*/*/*', 'foo/bar/baz'), true, 'Three wildcards should match three segments');
- assertEquals(isPathAllowed('foo/*/*', 'foo/bar/baz'), true, 'Specific prefix with wildcards should match');
- assertEquals(isPathAllowed('*/*/baz', 'foo/bar/baz'), true, 'Wildcards with specific suffix should match');
+ assertEquals(
+ isPathAllowed("*/*", "foo/bar"),
+ true,
+ "Multiple wildcards should match corresponding segments"
+ );
+ assertEquals(
+ isPathAllowed("*/*/*", "foo/bar/baz"),
+ true,
+ "Three wildcards should match three segments"
+ );
+ assertEquals(
+ isPathAllowed("foo/*/*", "foo/bar/baz"),
+ true,
+ "Specific prefix with wildcards should match"
+ );
+ assertEquals(
+ isPathAllowed("*/*/baz", "foo/bar/baz"),
+ true,
+ "Wildcards with specific suffix should match"
+ );
// Test wildcard consumption behavior
- assertEquals(isPathAllowed('*', ''), true, 'Wildcard should optionally consume segments');
- assertEquals(isPathAllowed('foo/*', 'foo'), true, 'Trailing wildcard should be optional');
- assertEquals(isPathAllowed('*/*', 'foo'), true, 'Multiple wildcards can match fewer segments');
- assertEquals(isPathAllowed('*/*/*', 'foo/bar'), true, 'Extra wildcards can be skipped');
+ assertEquals(
+ isPathAllowed("*", ""),
+ true,
+ "Wildcard should optionally consume segments"
+ );
+ assertEquals(
+ isPathAllowed("foo/*", "foo"),
+ true,
+ "Trailing wildcard should be optional"
+ );
+ assertEquals(
+ isPathAllowed("*/*", "foo"),
+ true,
+ "Multiple wildcards can match fewer segments"
+ );
+ assertEquals(
+ isPathAllowed("*/*/*", "foo/bar"),
+ true,
+ "Extra wildcards can be skipped"
+ );
// Test complex nested paths
- assertEquals(isPathAllowed('api/*/users', 'api/v1/users'), true, 'API versioning pattern should match');
- assertEquals(isPathAllowed('api/*/users/*', 'api/v1/users/123'), true, 'API resource pattern should match');
- assertEquals(isPathAllowed('api/*/users/*/profile', 'api/v1/users/123/profile'), true, 'Nested API pattern should match');
+ assertEquals(
+ isPathAllowed("api/*/users", "api/v1/users"),
+ true,
+ "API versioning pattern should match"
+ );
+ assertEquals(
+ isPathAllowed("api/*/users/*", "api/v1/users/123"),
+ true,
+ "API resource pattern should match"
+ );
+ assertEquals(
+ isPathAllowed("api/*/users/*/profile", "api/v1/users/123/profile"),
+ true,
+ "Nested API pattern should match"
+ );
// Test for the requested padbootstrap* pattern
- assertEquals(isPathAllowed('padbootstrap*', 'padbootstrap'), true, 'padbootstrap* should match padbootstrap');
- assertEquals(isPathAllowed('padbootstrap*', 'padbootstrapv1'), true, 'padbootstrap* should match padbootstrapv1');
- assertEquals(isPathAllowed('padbootstrap*', 'padbootstrap/files'), false, 'padbootstrap* should not match padbootstrap/files');
- assertEquals(isPathAllowed('padbootstrap*/*', 'padbootstrap/files'), true, 'padbootstrap*/* should match padbootstrap/files');
- assertEquals(isPathAllowed('padbootstrap*/files', 'padbootstrapv1/files'), true, 'padbootstrap*/files should not match padbootstrapv1/files (wildcard is segment-based, not partial)');
+ assertEquals(
+ isPathAllowed("padbootstrap*", "padbootstrap"),
+ true,
+ "padbootstrap* should match padbootstrap"
+ );
+ assertEquals(
+ isPathAllowed("padbootstrap*", "padbootstrapv1"),
+ true,
+ "padbootstrap* should match padbootstrapv1"
+ );
+ assertEquals(
+ isPathAllowed("padbootstrap*", "padbootstrap/files"),
+ false,
+ "padbootstrap* should not match padbootstrap/files"
+ );
+ assertEquals(
+ isPathAllowed("padbootstrap*/*", "padbootstrap/files"),
+ true,
+ "padbootstrap*/* should match padbootstrap/files"
+ );
+ assertEquals(
+ isPathAllowed("padbootstrap*/files", "padbootstrapv1/files"),
+ true,
+ "padbootstrap*/files should match padbootstrapv1/files (partial segment wildcards are supported)"
+ );
// Test wildcard edge cases
- assertEquals(isPathAllowed('*/*/*/*/*/*', 'a/b'), true, 'Many wildcards can match few segments');
- assertEquals(isPathAllowed('a/*/b/*/c', 'a/anything/b/something/c'), true, 'Multiple wildcards in pattern should match corresponding segments');
+ assertEquals(
+ isPathAllowed("*/*/*/*/*/*", "a/b"),
+ true,
+ "Many wildcards can match few segments"
+ );
+ assertEquals(
+ isPathAllowed("a/*/b/*/c", "a/anything/b/something/c"),
+ true,
+ "Multiple wildcards in pattern should match corresponding segments"
+ );
// Test patterns with partial segment matches
- assertEquals(isPathAllowed('padbootstrap*', 'padbootstrap-123'), true, 'Wildcards in isPathAllowed should be segment-based, not character-based');
- assertEquals(isPathAllowed('test*', 'testuser'), true, 'Asterisk as part of segment name is treated as a literal, not a wildcard');
- assertEquals(isPathAllowed('my*app', 'myapp'), true, 'Asterisk in middle of segment name is treated as a literal, not a wildcard');
+ assertEquals(
+ isPathAllowed("padbootstrap*", "padbootstrap-123"),
+ true,
+ "padbootstrap* should match padbootstrap-123 (trailing wildcard also matches a partial segment)"
+ );
+ assertEquals(
+ isPathAllowed("test*", "testuser"),
+ true,
+ "test* should match testuser (asterisk matches any suffix within a segment)"
+ );
+ assertEquals(
+ isPathAllowed("my*app", "myapp"),
+ true,
+ "my*app should match myapp (asterisk in the middle of a segment matches any characters)"
+ );
- assertEquals(isPathAllowed('/', '/'), true, 'Root path should match root path');
- assertEquals(isPathAllowed('/', '/test'), false, 'Root path should not match non-root path');
+ assertEquals(
+ isPathAllowed("/", "/"),
+ true,
+ "Root path should match root path"
+ );
+ assertEquals(
+ isPathAllowed("/", "/test"),
+ false,
+ "Root path should not match non-root path"
+ );
- console.log('All tests passed!');
+ console.log("All tests passed!");
}
// Run all tests
try {
runTests();
} catch (error) {
- console.error('Test failed:', error);
+ console.error("Test failed:", error);
}
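The reformatted tests above exercise segment-based wildcard matching with optional `*` consumption. A minimal standalone sketch of that strategy (simplified; it omits the partial-segment wildcard handling like `padbootstrap*` that the real `isPathAllowed` also supports) is:

```typescript
// Segment-based wildcard path matching: "*" may consume zero or more
// whole path segments; other pattern segments must match exactly.
function matchPath(pattern: string, path: string): boolean {
    const normalize = (p: string) => p.split("/").filter(Boolean);
    const patternParts = normalize(pattern);
    const pathParts = normalize(path);

    function match(pi: number, si: number): boolean {
        // Pattern exhausted: match only if the path is exhausted too.
        if (pi === patternParts.length) return si === pathParts.length;
        if (patternParts[pi] === "*") {
            // "*" consumes zero segments...
            if (match(pi + 1, si)) return true;
            // ...or one more segment, then retries.
            return si < pathParts.length && match(pi, si + 1);
        }
        return (
            si < pathParts.length &&
            patternParts[pi] === pathParts[si] &&
            match(pi + 1, si + 1)
        );
    }
    return match(0, 0);
}

console.log(matchPath("api/*/users", "api/v1/users")); // true
console.log(matchPath("foo/*", "foo"));                // true (wildcard optional)
console.log(matchPath("/", "/test"));                  // false
```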
diff --git a/server/routers/billing/types.ts b/server/routers/billing/types.ts
index 2ec5a1b1..4e0aab52 100644
--- a/server/routers/billing/types.ts
+++ b/server/routers/billing/types.ts
@@ -14,4 +14,3 @@ export type GetOrgTierResponse = {
tier: string | null;
active: boolean;
};
-
diff --git a/server/routers/billing/webhooks.ts b/server/routers/billing/webhooks.ts
index 0ca38a8a..53eda78c 100644
--- a/server/routers/billing/webhooks.ts
+++ b/server/routers/billing/webhooks.ts
@@ -11,4 +11,4 @@ export async function billingWebhookHandler(
return next(
createHttpError(HttpCode.NOT_FOUND, "This endpoint is not in use")
);
-}
\ No newline at end of file
+}
diff --git a/server/routers/blueprints/applyJSONBlueprint.ts b/server/routers/blueprints/applyJSONBlueprint.ts
index f8c9caec..7eee15bf 100644
--- a/server/routers/blueprints/applyJSONBlueprint.ts
+++ b/server/routers/blueprints/applyJSONBlueprint.ts
@@ -9,12 +9,12 @@ import { OpenAPITags, registry } from "@server/openApi";
import { applyBlueprint } from "@server/lib/blueprints/applyBlueprint";
const applyBlueprintSchema = z.strictObject({
- blueprint: z.string()
- });
+ blueprint: z.string()
+});
const applyBlueprintParamsSchema = z.strictObject({
- orgId: z.string()
- });
+ orgId: z.string()
+});
registry.registerPath({
method: "put",
diff --git a/server/routers/blueprints/getBlueprint.ts b/server/routers/blueprints/getBlueprint.ts
index 45c36af7..915e0481 100644
--- a/server/routers/blueprints/getBlueprint.ts
+++ b/server/routers/blueprints/getBlueprint.ts
@@ -13,12 +13,9 @@ import { OpenAPITags, registry } from "@server/openApi";
import { BlueprintData } from "./types";
const getBlueprintSchema = z.strictObject({
- blueprintId: z
- .string()
- .transform(stoi)
- .pipe(z.int().positive()),
- orgId: z.string()
- });
+ blueprintId: z.string().transform(stoi).pipe(z.int().positive()),
+ orgId: z.string()
+});
async function query(blueprintId: number, orgId: string) {
// Get the client
diff --git a/server/routers/blueprints/listBlueprints.ts b/server/routers/blueprints/listBlueprints.ts
index 315abfed..2ece9e53 100644
--- a/server/routers/blueprints/listBlueprints.ts
+++ b/server/routers/blueprints/listBlueprints.ts
@@ -11,23 +11,23 @@ import { OpenAPITags, registry } from "@server/openApi";
import { BlueprintData } from "./types";
const listBluePrintsParamsSchema = z.strictObject({
- orgId: z.string()
- });
+ orgId: z.string()
+});
const listBluePrintsSchema = z.strictObject({
- limit: z
- .string()
- .optional()
- .default("1000")
- .transform(Number)
- .pipe(z.int().nonnegative()),
- offset: z
- .string()
- .optional()
- .default("0")
- .transform(Number)
- .pipe(z.int().nonnegative())
- });
+ limit: z
+ .string()
+ .optional()
+ .default("1000")
+ .transform(Number)
+ .pipe(z.int().nonnegative()),
+ offset: z
+ .string()
+ .optional()
+ .default("0")
+ .transform(Number)
+ .pipe(z.int().nonnegative())
+});
async function queryBlueprints(orgId: string, limit: number, offset: number) {
const res = await db
diff --git a/server/routers/certificates/createCertificate.ts b/server/routers/certificates/createCertificate.ts
index e160e644..e858e5cd 100644
--- a/server/routers/certificates/createCertificate.ts
+++ b/server/routers/certificates/createCertificate.ts
@@ -1,5 +1,9 @@
import { db, Transaction } from "@server/db";
-export async function createCertificate(domainId: string, domain: string, trx: Transaction | typeof db) {
+export async function createCertificate(
+ domainId: string,
+ domain: string,
+ trx: Transaction | typeof db
+) {
return;
-}
\ No newline at end of file
+}
diff --git a/server/routers/certificates/types.ts b/server/routers/certificates/types.ts
index 80136de8..3ec90857 100644
--- a/server/routers/certificates/types.ts
+++ b/server/routers/certificates/types.ts
@@ -10,4 +10,4 @@ export type GetCertificateResponse = {
updatedAt: string;
errorMessage?: string | null;
renewalCount: number;
-}
\ No newline at end of file
+};
diff --git a/server/routers/client/listClients.ts b/server/routers/client/listClients.ts
index 68cd9aa0..42e47efe 100644
--- a/server/routers/client/listClients.ts
+++ b/server/routers/client/listClients.ts
@@ -10,7 +10,16 @@ import {
import logger from "@server/logger";
import HttpCode from "@server/types/HttpCode";
import response from "@server/lib/response";
-import { and, count, eq, inArray, isNotNull, isNull, or, sql } from "drizzle-orm";
+import {
+ and,
+ count,
+ eq,
+ inArray,
+ isNotNull,
+ isNull,
+ or,
+ sql
+} from "drizzle-orm";
import { NextFunction, Request, Response } from "express";
import createHttpError from "http-errors";
import { z } from "zod";
@@ -60,13 +69,9 @@ async function getLatestOlmVersion(): Promise<string | null> {
return latestVersion;
} catch (error: any) {
if (error.name === "AbortError") {
- logger.warn(
- "Request to fetch latest Olm version timed out (1.5s)"
- );
+ logger.warn("Request to fetch latest Olm version timed out (1.5s)");
} else if (error.cause?.code === "UND_ERR_CONNECT_TIMEOUT") {
- logger.warn(
- "Connection timeout while fetching latest Olm version"
- );
+ logger.warn("Connection timeout while fetching latest Olm version");
} else {
logger.warn(
"Error fetching latest Olm version:",
@@ -77,10 +82,9 @@ async function getLatestOlmVersion(): Promise {
}
}
-
const listClientsParamsSchema = z.strictObject({
- orgId: z.string()
- });
+ orgId: z.string()
+});
const listClientsSchema = z.object({
limit: z
@@ -95,12 +99,14 @@ const listClientsSchema = z.object({
.default("0")
.transform(Number)
.pipe(z.int().nonnegative()),
- filter: z
- .enum(["user", "machine"])
- .optional()
+ filter: z.enum(["user", "machine"]).optional()
});
-function queryClients(orgId: string, accessibleClientIds: number[], filter?: "user" | "machine") {
+function queryClients(
+ orgId: string,
+ accessibleClientIds: number[],
+ filter?: "user" | "machine"
+) {
const conditions = [
inArray(clients.clientId, accessibleClientIds),
eq(clients.orgId, orgId)
@@ -158,16 +164,17 @@ type OlmWithUpdateAvailable = Awaited<ReturnType<typeof queryClients>>[0] & {
olmUpdateAvailable?: boolean;
};
-
export type ListClientsResponse = {
- clients: Array<Awaited<ReturnType<typeof queryClients>>[0] & {
- sites: Array<{
- siteId: number;
- siteName: string | null;
- siteNiceId: string | null;
- }>
- olmUpdateAvailable?: boolean;
- }>;
+ clients: Array<
+ Awaited<ReturnType<typeof queryClients>>[0] & {
+ sites: Array<{
+ siteId: number;
+ siteName: string | null;
+ siteNiceId: string | null;
+ }>;
+ olmUpdateAvailable?: boolean;
+ }
+ >;
pagination: { total: number; limit: number; offset: number };
};
@@ -271,28 +278,34 @@ export async function listClients(
const totalCount = totalCountResult[0].count;
// Get associated sites for all clients
- const clientIds = clientsList.map(client => client.clientId);
+ const clientIds = clientsList.map((client) => client.clientId);
const siteAssociations = await getSiteAssociations(clientIds);
// Group site associations by client ID
- const sitesByClient = siteAssociations.reduce((acc, association) => {
- if (!acc[association.clientId]) {
- acc[association.clientId] = [];
- }
- acc[association.clientId].push({
- siteId: association.siteId,
- siteName: association.siteName,
- siteNiceId: association.siteNiceId
- });
- return acc;
- }, {} as Record<number, Array<{ siteId: number; siteName: string | null; siteNiceId: string | null }>>);
+ const sitesByClient = siteAssociations.reduce(
+ (acc, association) => {
+ if (!acc[association.clientId]) {
+ acc[association.clientId] = [];
+ }
+ acc[association.clientId].push({
+ siteId: association.siteId,
+ siteName: association.siteName,
+ siteNiceId: association.siteNiceId
+ });
+ return acc;
+ },
+ {} as Record<
+ number,
+ Array<{
+ siteId: number;
+ siteName: string | null;
+ siteNiceId: string | null;
+ }>
+ >
+ );
// Merge clients with their site associations
- const clientsWithSites = clientsList.map(client => ({
+ const clientsWithSites = clientsList.map((client) => ({
...client,
sites: sitesByClient[client.clientId] || []
}));
@@ -322,7 +335,6 @@ export async function listClients(
} catch (error) {
client.olmUpdateAvailable = false;
}
-
});
}
} catch (error) {
@@ -333,7 +345,6 @@ export async function listClients(
);
}
-
return response(res, {
data: {
clients: clientsWithSites,
diff --git a/server/routers/client/pickClientDefaults.ts b/server/routers/client/pickClientDefaults.ts
index 3d447ecd..fd31da12 100644
--- a/server/routers/client/pickClientDefaults.ts
+++ b/server/routers/client/pickClientDefaults.ts
@@ -16,8 +16,8 @@ export type PickClientDefaultsResponse = {
};
const pickClientDefaultsSchema = z.strictObject({
- orgId: z.string()
- });
+ orgId: z.string()
+});
registry.registerPath({
method: "get",
diff --git a/server/routers/client/targets.ts b/server/routers/client/targets.ts
index c9bb910b..653a2578 100644
--- a/server/routers/client/targets.ts
+++ b/server/routers/client/targets.ts
@@ -1,24 +1,51 @@
import { sendToClient } from "#dynamic/routers/ws";
-import { db, olms } from "@server/db";
+import { db, olms, Transaction } from "@server/db";
import { Alias, SubnetProxyTarget } from "@server/lib/ip";
import logger from "@server/logger";
import { eq } from "drizzle-orm";
+const BATCH_SIZE = 50;
+const BATCH_DELAY_MS = 50;
+
+function sleep(ms: number): Promise<void> {
+ return new Promise((resolve) => setTimeout(resolve, ms));
+}
+
+function chunkArray<T>(array: T[], size: number): T[][] {
+ const chunks: T[][] = [];
+ for (let i = 0; i < array.length; i += size) {
+ chunks.push(array.slice(i, i + size));
+ }
+ return chunks;
+}
+
export async function addTargets(newtId: string, targets: SubnetProxyTarget[]) {
- await sendToClient(newtId, {
- type: `newt/wg/targets/add`,
- data: targets
- });
+ const batches = chunkArray(targets, BATCH_SIZE);
+ for (let i = 0; i < batches.length; i++) {
+ if (i > 0) {
+ await sleep(BATCH_DELAY_MS);
+ }
+ await sendToClient(newtId, {
+ type: `newt/wg/targets/add`,
+ data: batches[i]
+ });
+ }
}
export async function removeTargets(
newtId: string,
targets: SubnetProxyTarget[]
) {
- await sendToClient(newtId, {
- type: `newt/wg/targets/remove`,
- data: targets
- });
+ const batches = chunkArray(targets, BATCH_SIZE);
+ for (let i = 0; i < batches.length; i++) {
+ if (i > 0) {
+ await sleep(BATCH_DELAY_MS);
+ }
+ await sendToClient(newtId, {
+ type: `newt/wg/targets/remove`,
+ data: batches[i]
+ });
+ }
}
export async function updateTargets(
@@ -28,12 +55,24 @@ export async function updateTargets(
newTargets: SubnetProxyTarget[];
}
) {
- await sendToClient(newtId, {
- type: `newt/wg/targets/update`,
- data: targets
- }).catch((error) => {
- logger.warn(`Error sending message:`, error);
- });
+ const oldBatches = chunkArray(targets.oldTargets, BATCH_SIZE);
+ const newBatches = chunkArray(targets.newTargets, BATCH_SIZE);
+ const maxBatches = Math.max(oldBatches.length, newBatches.length);
+
+ for (let i = 0; i < maxBatches; i++) {
+ if (i > 0) {
+ await sleep(BATCH_DELAY_MS);
+ }
+ await sendToClient(newtId, {
+ type: `newt/wg/targets/update`,
+ data: {
+ oldTargets: oldBatches[i] || [],
+ newTargets: newBatches[i] || []
+ }
+ }).catch((error) => {
+ logger.warn(`Error sending message:`, error);
+ });
+ }
}
export async function addPeerData(
@@ -101,14 +140,18 @@ export async function removePeerData(
export async function updatePeerData(
clientId: number,
siteId: number,
- remoteSubnets: {
- oldRemoteSubnets: string[];
- newRemoteSubnets: string[];
- } | undefined,
- aliases: {
- oldAliases: Alias[];
- newAliases: Alias[];
- } | undefined,
+ remoteSubnets:
+ | {
+ oldRemoteSubnets: string[];
+ newRemoteSubnets: string[];
+ }
+ | undefined,
+ aliases:
+ | {
+ oldAliases: Alias[];
+ newAliases: Alias[];
+ }
+ | undefined,
olmId?: string
) {
if (!olmId) {
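The `targets.ts` changes above all apply the same chunk-and-throttle pattern: split a large target list into fixed-size batches and pause between sends so one update cannot flood the websocket. A self-contained sketch (with `send` standing in for the real `sendToClient`, and constants mirroring the diff) looks like:

```typescript
const BATCH_SIZE = 50;
const BATCH_DELAY_MS = 50;

const sleep = (ms: number) => new Promise<void>((r) => setTimeout(r, ms));

// Split an array into consecutive chunks of at most `size` elements.
function chunk<T>(items: T[], size: number): T[][] {
    const out: T[][] = [];
    for (let i = 0; i < items.length; i += size) {
        out.push(items.slice(i, i + size));
    }
    return out;
}

// Send batches sequentially, sleeping between them to throttle the
// connection; `send` is a placeholder for any async transport call.
async function sendBatched<T>(
    items: T[],
    send: (batch: T[]) => Promise<void>
) {
    const batches = chunk(items, BATCH_SIZE);
    for (let i = 0; i < batches.length; i++) {
        if (i > 0) await sleep(BATCH_DELAY_MS);
        await send(batches[i]);
    }
}
```

Note the `updateTargets` variant batches old and new target lists in lockstep, padding the shorter side with `[]` so each message still pairs removals with additions.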
diff --git a/server/routers/client/terminate.ts b/server/routers/client/terminate.ts
index dc49ef05..1cfdc709 100644
--- a/server/routers/client/terminate.ts
+++ b/server/routers/client/terminate.ts
@@ -2,7 +2,10 @@ import { sendToClient } from "#dynamic/routers/ws";
import { db, olms } from "@server/db";
import { eq } from "drizzle-orm";
-export async function sendTerminateClient(clientId: number, olmId?: string | null) {
+export async function sendTerminateClient(
+ clientId: number,
+ olmId?: string | null
+) {
if (!olmId) {
const [olm] = await db
.select()
diff --git a/server/routers/domain/createOrgDomain.ts b/server/routers/domain/createOrgDomain.ts
index 3f223bce..6558d748 100644
--- a/server/routers/domain/createOrgDomain.ts
+++ b/server/routers/domain/createOrgDomain.ts
@@ -1,6 +1,13 @@
import { Request, Response, NextFunction } from "express";
import { z } from "zod";
-import { db, Domain, domains, OrgDomains, orgDomains, dnsRecords } from "@server/db";
+import {
+ db,
+ Domain,
+ domains,
+ OrgDomains,
+ orgDomains,
+ dnsRecords
+} from "@server/db";
import response from "@server/lib/response";
import HttpCode from "@server/types/HttpCode";
import createHttpError from "http-errors";
@@ -16,16 +23,15 @@ import { build } from "@server/build";
import config from "@server/lib/config";
const paramsSchema = z.strictObject({
- orgId: z.string()
- });
+ orgId: z.string()
+});
const bodySchema = z.strictObject({
- type: z.enum(["ns", "cname", "wildcard"]),
- baseDomain: subdomainSchema,
- certResolver: z.string().optional().nullable(),
- preferWildcardCert: z.boolean().optional().nullable() // optional, only for wildcard
- });
-
+ type: z.enum(["ns", "cname", "wildcard"]),
+ baseDomain: subdomainSchema,
+ certResolver: z.string().optional().nullable(),
+ preferWildcardCert: z.boolean().optional().nullable() // optional, only for wildcard
+});
export type CreateDomainResponse = {
domainId: string;
@@ -72,7 +78,8 @@ export async function createOrgDomain(
}
const { orgId } = parsedParams.data;
- const { type, baseDomain, certResolver, preferWildcardCert } = parsedBody.data;
+ const { type, baseDomain, certResolver, preferWildcardCert } =
+ parsedBody.data;
if (build == "oss") {
if (type !== "wildcard") {
@@ -278,7 +285,7 @@ export async function createOrgDomain(
// TODO: This needs to be cross region and not hardcoded
if (type === "ns") {
nsRecords = config.getRawConfig().dns.nameservers as string[];
-
+
// Save NS records to database
for (const nsValue of nsRecords) {
recordsToInsert.push({
@@ -300,7 +307,7 @@ export async function createOrgDomain(
baseDomain: `_acme-challenge.${baseDomain}`
}
];
-
+
// Save CNAME records to database
for (const cnameRecord of cnameRecords) {
recordsToInsert.push({
@@ -322,7 +329,7 @@ export async function createOrgDomain(
baseDomain: `${baseDomain}`
}
];
-
+
// Save A records to database
for (const aRecord of aRecords) {
recordsToInsert.push({
diff --git a/server/routers/domain/deleteOrgDomain.ts b/server/routers/domain/deleteOrgDomain.ts
index fe4a4805..fa916beb 100644
--- a/server/routers/domain/deleteOrgDomain.ts
+++ b/server/routers/domain/deleteOrgDomain.ts
@@ -11,9 +11,9 @@ import { usageService } from "@server/lib/billing/usageService";
import { FeatureId } from "@server/lib/billing";
const paramsSchema = z.strictObject({
- domainId: z.string(),
- orgId: z.string()
- });
+ domainId: z.string(),
+ orgId: z.string()
+});
export type DeleteAccountDomainResponse = {
success: boolean;
@@ -48,10 +48,7 @@ export async function deleteAccountDomain(
eq(orgDomains.domainId, domainId)
)
)
- .innerJoin(
- domains,
- eq(orgDomains.domainId, domains.domainId)
- );
+ .innerJoin(domains, eq(orgDomains.domainId, domains.domainId));
if (!existing) {
return next(
diff --git a/server/routers/domain/getDNSRecords.ts b/server/routers/domain/getDNSRecords.ts
index 239cc455..5a373a11 100644
--- a/server/routers/domain/getDNSRecords.ts
+++ b/server/routers/domain/getDNSRecords.ts
@@ -11,16 +11,16 @@ import { OpenAPITags, registry } from "@server/openApi";
import { getServerIp } from "@server/lib/serverIpService"; // your in-memory IP module
const getDNSRecordsSchema = z.strictObject({
- domainId: z.string(),
- orgId: z.string()
- });
+ domainId: z.string(),
+ orgId: z.string()
+});
async function query(domainId: string) {
const records = await db
.select()
.from(dnsRecords)
.where(eq(dnsRecords.domainId, domainId));
-
+
return records;
}
@@ -72,8 +72,11 @@ export async function getDNSRecords(
const serverIp = getServerIp();
// Override value for type A or wildcard records
- const updatedRecords = records.map(record => {
- if ((record.recordType === "A" || record.baseDomain === "*") && serverIp) {
+ const updatedRecords = records.map((record) => {
+ if (
+ (record.recordType === "A" || record.baseDomain === "*") &&
+ serverIp
+ ) {
return { ...record, value: serverIp };
}
return record;
@@ -92,4 +95,4 @@ export async function getDNSRecords(
createHttpError(HttpCode.INTERNAL_SERVER_ERROR, "An error occurred")
);
}
-}
\ No newline at end of file
+}
diff --git a/server/routers/domain/getDomain.ts b/server/routers/domain/getDomain.ts
index 408cf37d..3e5565f9 100644
--- a/server/routers/domain/getDomain.ts
+++ b/server/routers/domain/getDomain.ts
@@ -11,11 +11,9 @@ import { OpenAPITags, registry } from "@server/openApi";
import { domain } from "zod/v4/core/regexes";
const getDomainSchema = z.strictObject({
- domainId: z
- .string()
- .optional(),
- orgId: z.string().optional()
- });
+ domainId: z.string().optional(),
+ orgId: z.string().optional()
+});
async function query(domainId?: string, orgId?: string) {
if (domainId) {
@@ -65,7 +63,9 @@ export async function getDomain(
const domain = await query(domainId, orgId);
if (!domain) {
- return next(createHttpError(HttpCode.NOT_FOUND, "Domain not found"));
+ return next(
+ createHttpError(HttpCode.NOT_FOUND, "Domain not found")
+ );
}
return response(res, {
diff --git a/server/routers/domain/index.ts b/server/routers/domain/index.ts
index e7e0b555..73b28fea 100644
--- a/server/routers/domain/index.ts
+++ b/server/routers/domain/index.ts
@@ -4,4 +4,4 @@ export * from "./deleteOrgDomain";
export * from "./restartOrgDomain";
export * from "./getDomain";
export * from "./getDNSRecords";
-export * from "./updateDomain";
\ No newline at end of file
+export * from "./updateDomain";
diff --git a/server/routers/domain/listDomains.ts b/server/routers/domain/listDomains.ts
index 48f22c6c..20b23634 100644
--- a/server/routers/domain/listDomains.ts
+++ b/server/routers/domain/listDomains.ts
@@ -11,23 +11,23 @@ import { fromError } from "zod-validation-error";
import { OpenAPITags, registry } from "@server/openApi";
const listDomainsParamsSchema = z.strictObject({
- orgId: z.string()
- });
+ orgId: z.string()
+});
const listDomainsSchema = z.strictObject({
- limit: z
- .string()
- .optional()
- .default("1000")
- .transform(Number)
- .pipe(z.int().nonnegative()),
- offset: z
- .string()
- .optional()
- .default("0")
- .transform(Number)
- .pipe(z.int().nonnegative())
- });
+ limit: z
+ .string()
+ .optional()
+ .default("1000")
+ .transform(Number)
+ .pipe(z.int().nonnegative()),
+ offset: z
+ .string()
+ .optional()
+ .default("0")
+ .transform(Number)
+ .pipe(z.int().nonnegative())
+});
async function queryDomains(orgId: string, limit: number, offset: number) {
const res = await db
diff --git a/server/routers/domain/restartOrgDomain.ts b/server/routers/domain/restartOrgDomain.ts
index f2bf7c39..1039d2fb 100644
--- a/server/routers/domain/restartOrgDomain.ts
+++ b/server/routers/domain/restartOrgDomain.ts
@@ -9,9 +9,9 @@ import { fromError } from "zod-validation-error";
import { and, eq } from "drizzle-orm";
const paramsSchema = z.strictObject({
- domainId: z.string(),
- orgId: z.string()
- });
+ domainId: z.string(),
+ orgId: z.string()
+});
export type RestartOrgDomainResponse = {
success: boolean;
diff --git a/server/routers/domain/types.ts b/server/routers/domain/types.ts
index 4ae48fb1..ececc2db 100644
--- a/server/routers/domain/types.ts
+++ b/server/routers/domain/types.ts
@@ -5,4 +5,4 @@ export type CheckDomainAvailabilityResponse = {
domainId: string;
fullDomain: string;
}[];
-};
\ No newline at end of file
+};
diff --git a/server/routers/domain/updateDomain.ts b/server/routers/domain/updateDomain.ts
index 08301189..64e78641 100644
--- a/server/routers/domain/updateDomain.ts
+++ b/server/routers/domain/updateDomain.ts
@@ -10,14 +10,14 @@ import { eq, and } from "drizzle-orm";
import { OpenAPITags, registry } from "@server/openApi";
const paramsSchema = z.strictObject({
- orgId: z.string(),
- domainId: z.string()
- });
+ orgId: z.string(),
+ domainId: z.string()
+});
const bodySchema = z.strictObject({
- certResolver: z.string().optional().nullable(),
- preferWildcardCert: z.boolean().optional().nullable()
- });
+ certResolver: z.string().optional().nullable(),
+ preferWildcardCert: z.boolean().optional().nullable()
+});
export type UpdateDomainResponse = {
domainId: string;
@@ -25,7 +25,6 @@ export type UpdateDomainResponse = {
preferWildcardCert: boolean | null;
};
-
registry.registerPath({
method: "patch",
path: "/org/{orgId}/domain/{domainId}",
@@ -88,7 +87,6 @@ export async function updateOrgDomain(
);
}
-
const [existingDomain] = await db
.select()
.from(domains)
@@ -154,4 +152,4 @@ export async function updateOrgDomain(
createHttpError(HttpCode.INTERNAL_SERVER_ERROR, "An error occurred")
);
}
-}
\ No newline at end of file
+}
diff --git a/server/routers/external.ts b/server/routers/external.ts
index 54e84e2e..54b48c6e 100644
--- a/server/routers/external.ts
+++ b/server/routers/external.ts
@@ -318,7 +318,7 @@ authenticated.post(
verifyRoleAccess,
verifyUserHasAction(ActionsEnum.setResourceRoles),
logActionAudit(ActionsEnum.setResourceRoles),
- siteResource.setSiteResourceRoles,
+ siteResource.setSiteResourceRoles
);
authenticated.post(
@@ -327,7 +327,7 @@ authenticated.post(
verifySetResourceUsers,
verifyUserHasAction(ActionsEnum.setResourceUsers),
logActionAudit(ActionsEnum.setResourceUsers),
- siteResource.setSiteResourceUsers,
+ siteResource.setSiteResourceUsers
);
authenticated.post(
@@ -336,7 +336,7 @@ authenticated.post(
verifySetResourceClients,
verifyUserHasAction(ActionsEnum.setResourceUsers),
logActionAudit(ActionsEnum.setResourceUsers),
- siteResource.setSiteResourceClients,
+ siteResource.setSiteResourceClients
);
authenticated.post(
@@ -345,7 +345,7 @@ authenticated.post(
verifySetResourceClients,
verifyUserHasAction(ActionsEnum.setResourceUsers),
logActionAudit(ActionsEnum.setResourceUsers),
- siteResource.addClientToSiteResource,
+ siteResource.addClientToSiteResource
);
authenticated.post(
@@ -354,7 +354,7 @@ authenticated.post(
verifySetResourceClients,
verifyUserHasAction(ActionsEnum.setResourceUsers),
logActionAudit(ActionsEnum.setResourceUsers),
- siteResource.removeClientFromSiteResource,
+ siteResource.removeClientFromSiteResource
);
authenticated.put(
@@ -812,17 +812,9 @@ authenticated.delete(
// createNewt
// );
-authenticated.put(
- "/user/:userId/olm",
- verifyIsLoggedInUser,
- olm.createUserOlm
-);
+authenticated.put("/user/:userId/olm", verifyIsLoggedInUser, olm.createUserOlm);
-authenticated.get(
- "/user/:userId/olms",
- verifyIsLoggedInUser,
- olm.listUserOlms
-);
+authenticated.get("/user/:userId/olms", verifyIsLoggedInUser, olm.listUserOlms);
authenticated.delete(
"/user/:userId/olm/:olmId",
diff --git a/server/routers/generatedLicense/types.ts b/server/routers/generatedLicense/types.ts
index 4c5efed7..76e86265 100644
--- a/server/routers/generatedLicense/types.ts
+++ b/server/routers/generatedLicense/types.ts
@@ -27,4 +27,4 @@ export type NewLicenseKey = {
};
};
-export type GenerateNewLicenseResponse = NewLicenseKey;
\ No newline at end of file
+export type GenerateNewLicenseResponse = NewLicenseKey;
diff --git a/server/routers/gerbil/createExitNode.ts b/server/routers/gerbil/createExitNode.ts
index 8148ed75..bc965036 100644
--- a/server/routers/gerbil/createExitNode.ts
+++ b/server/routers/gerbil/createExitNode.ts
@@ -5,7 +5,10 @@ import { getNextAvailableSubnet } from "@server/lib/exitNodes";
import logger from "@server/logger";
import { eq } from "drizzle-orm";
-export async function createExitNode(publicKey: string, reachableAt: string | undefined) {
+export async function createExitNode(
+ publicKey: string,
+ reachableAt: string | undefined
+) {
// Fetch exit node
const [exitNodeQuery] = await db.select().from(exitNodes).limit(1);
let exitNode: ExitNode;
diff --git a/server/routers/gerbil/getConfig.ts b/server/routers/gerbil/getConfig.ts
index 56ebd744..488ef75b 100644
--- a/server/routers/gerbil/getConfig.ts
+++ b/server/routers/gerbil/getConfig.ts
@@ -51,7 +51,10 @@ export async function getConfig(
);
}
- const exitNode = await createExitNode(publicKey, reachableAt);
+ // clean up the public key - keep only valid base64 characters (A-Z, a-z, 0-9, +, /, =)
+ const cleanedPublicKey = publicKey.replace(/[^A-Za-z0-9+/=]/g, '');
+
+ const exitNode = await createExitNode(cleanedPublicKey, reachableAt);
if (!exitNode) {
return next(
@@ -117,4 +120,4 @@ export async function generateGerbilConfig(exitNode: ExitNode) {
};
return configResponse;
-}
\ No newline at end of file
+}
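The `getConfig` hunk above sanitizes an incoming public key by keeping only valid base64 characters before creating the exit node. A standalone sketch of that cleanup, plus a stricter shape check (a WireGuard public key is 32 bytes, i.e. 44 base64 characters ending in `=`); `isLikelyWireGuardKey` is an illustrative helper and not part of the diff:

```typescript
// Strip anything that is not a valid base64 character (A-Z, a-z, 0-9, +, /, =),
// mirroring the cleanup added in getConfig.ts.
function cleanPublicKey(raw: string): string {
    return raw.replace(/[^A-Za-z0-9+/=]/g, "");
}

// Hypothetical stricter check: 32 bytes base64-encode to 43 data characters
// plus one "=" pad. This rejects truncated or non-key input outright.
function isLikelyWireGuardKey(key: string): boolean {
    return /^[A-Za-z0-9+/]{43}=$/.test(key);
}
```

Cleanup alone is forgiving (it silently drops whitespace or control characters picked up in transit); the shape check could be layered on top if rejecting malformed keys is preferable to repairing them.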
diff --git a/server/routers/gerbil/index.ts b/server/routers/gerbil/index.ts
index bff57d05..aa957d3a 100644
--- a/server/routers/gerbil/index.ts
+++ b/server/routers/gerbil/index.ts
@@ -2,4 +2,4 @@ export * from "./getConfig";
export * from "./receiveBandwidth";
export * from "./updateHolePunch";
export * from "./getAllRelays";
-export * from "./getResolvedHostname";
\ No newline at end of file
+export * from "./getResolvedHostname";
diff --git a/server/routers/gerbil/receiveBandwidth.ts b/server/routers/gerbil/receiveBandwidth.ts
index ffbd05c1..5c9cacb2 100644
--- a/server/routers/gerbil/receiveBandwidth.ts
+++ b/server/routers/gerbil/receiveBandwidth.ts
@@ -14,12 +14,55 @@ import { build } from "@server/build";
// Track sites that are already offline to avoid unnecessary queries
const offlineSites = new Set<string>();
+// Retry configuration for deadlock handling
+const MAX_RETRIES = 3;
+const BASE_DELAY_MS = 50;
+
interface PeerBandwidth {
publicKey: string;
bytesIn: number;
bytesOut: number;
}
+/**
+ * Check if an error is a deadlock error
+ */
+function isDeadlockError(error: any): boolean {
+ return (
+ error?.code === "40P01" ||
+ error?.cause?.code === "40P01" ||
+ (error?.message && error.message.includes("deadlock"))
+ );
+}
+
+/**
+ * Execute a function with retry logic for deadlock handling
+ */
+async function withDeadlockRetry<T>(
+ operation: () => Promise<T>,
+ context: string
+): Promise<T> {
+ let attempt = 0;
+ while (true) {
+ try {
+ return await operation();
+ } catch (error: any) {
+ if (isDeadlockError(error) && attempt < MAX_RETRIES) {
+ attempt++;
+ const baseDelay = Math.pow(2, attempt - 1) * BASE_DELAY_MS;
+ const jitter = Math.random() * baseDelay;
+ const delay = baseDelay + jitter;
+ logger.warn(
+ `Deadlock detected in ${context}, retrying attempt ${attempt}/${MAX_RETRIES} after ${delay.toFixed(0)}ms`
+ );
+ await new Promise((resolve) => setTimeout(resolve, delay));
+ continue;
+ }
+ throw error;
+ }
+ }
+}
+
export const receiveBandwidth = async (
req: Request,
res: Response,
@@ -60,201 +103,215 @@ export async function updateSiteBandwidth(
const currentTime = new Date();
const oneMinuteAgo = new Date(currentTime.getTime() - 60000); // 1 minute ago
- // logger.debug(`Received data: ${JSON.stringify(bandwidthData)}`);
+ // Sort bandwidth data by publicKey to ensure consistent lock ordering across all instances
+ // This is critical for preventing deadlocks when multiple instances update the same sites
+ const sortedBandwidthData = [...bandwidthData].sort((a, b) =>
+ a.publicKey.localeCompare(b.publicKey)
+ );
- await db.transaction(async (trx) => {
- // First, handle sites that are actively reporting bandwidth
- const activePeers = bandwidthData.filter((peer) => peer.bytesIn > 0); // Bytesout will have data as it tries to send keep alive messages
+ // First, handle sites that are actively reporting bandwidth
+ const activePeers = sortedBandwidthData.filter((peer) => peer.bytesIn > 0);
- if (activePeers.length > 0) {
- // Remove any active peers from offline tracking since they're sending data
- activePeers.forEach((peer) => offlineSites.delete(peer.publicKey));
+ // Aggregate usage data by organization (collected outside transaction)
+ const orgUsageMap = new Map();
+ const orgUptimeMap = new Map();
- // Aggregate usage data by organization
- const orgUsageMap = new Map<string, number>();
- const orgUptimeMap = new Map<string, number>();
+ if (activePeers.length > 0) {
+ // Remove any active peers from offline tracking since they're sending data
+ activePeers.forEach((peer) => offlineSites.delete(peer.publicKey));
- // Update all active sites with bandwidth data and get the site data in one operation
- const updatedSites = [];
- for (const peer of activePeers) {
- const [updatedSite] = await trx
- .update(sites)
- .set({
- megabytesOut: sql`${sites.megabytesOut} + ${peer.bytesIn}`,
- megabytesIn: sql`${sites.megabytesIn} + ${peer.bytesOut}`,
- lastBandwidthUpdate: currentTime.toISOString(),
- online: true
- })
- .where(eq(sites.pubKey, peer.publicKey))
- .returning({
- online: sites.online,
- orgId: sites.orgId,
- siteId: sites.siteId,
- lastBandwidthUpdate: sites.lastBandwidthUpdate
- });
+ // Update each active site individually with retry logic
+ // This reduces transaction scope and allows retries per-site
+ for (const peer of activePeers) {
+ try {
+ const updatedSite = await withDeadlockRetry(async () => {
+ const [result] = await db
+ .update(sites)
+ .set({
+ megabytesOut: sql`${sites.megabytesOut} + ${peer.bytesIn}`,
+ megabytesIn: sql`${sites.megabytesIn} + ${peer.bytesOut}`,
+ lastBandwidthUpdate: currentTime.toISOString(),
+ online: true
+ })
+ .where(eq(sites.pubKey, peer.publicKey))
+ .returning({
+ online: sites.online,
+ orgId: sites.orgId,
+ siteId: sites.siteId,
+ lastBandwidthUpdate: sites.lastBandwidthUpdate
+ });
+ return result;
+ }, `update active site ${peer.publicKey}`);
if (updatedSite) {
if (exitNodeId) {
- if (
- await checkExitNodeOrg(
- exitNodeId,
- updatedSite.orgId,
- trx
- )
- ) {
- // not allowed
+ const notAllowed = await checkExitNodeOrg(
+ exitNodeId,
+ updatedSite.orgId
+ );
+ if (notAllowed) {
logger.warn(
`Exit node ${exitNodeId} is not allowed for org ${updatedSite.orgId}`
);
- // THIS SHOULD TRIGGER THE TRANSACTION TO FAIL?
- throw new Error("Exit node not allowed");
+ // Skip this site but continue processing others
+ continue;
}
}
- updatedSites.push({ ...updatedSite, peer });
- }
- }
-
- // Calculate org usage aggregations using the updated site data
- for (const { peer, ...site } of updatedSites) {
- // Aggregate bandwidth usage for the org
- const totalBandwidth = peer.bytesIn + peer.bytesOut;
- const currentOrgUsage = orgUsageMap.get(site.orgId) || 0;
- orgUsageMap.set(site.orgId, currentOrgUsage + totalBandwidth);
-
- // Add 10 seconds of uptime for each active site
- const currentOrgUptime = orgUptimeMap.get(site.orgId) || 0;
- orgUptimeMap.set(site.orgId, currentOrgUptime + 10 / 60); // Store in minutes and jut add 10 seconds
- }
-
- if (calcUsageAndLimits) {
- // REMOTE EXIT NODES DO NOT COUNT TOWARDS USAGE
- // Process all usage updates sequentially by organization to reduce deadlock risk
- const allOrgIds = new Set([...orgUsageMap.keys(), ...orgUptimeMap.keys()]);
-
- for (const orgId of allOrgIds) {
- try {
- // Process bandwidth usage for this org
- const totalBandwidth = orgUsageMap.get(orgId);
- if (totalBandwidth) {
- const bandwidthUsage = await usageService.add(
- orgId,
- FeatureId.EGRESS_DATA_MB,
- totalBandwidth,
- trx
- );
- if (bandwidthUsage) {
- usageService
- .checkLimitSet(
- orgId,
- true,
- FeatureId.EGRESS_DATA_MB,
- bandwidthUsage,
- trx
- )
- .catch((error: any) => {
- logger.error(
- `Error checking bandwidth limits for org ${orgId}:`,
- error
- );
- });
- }
- }
-
- // Process uptime usage for this org
- const totalUptime = orgUptimeMap.get(orgId);
- if (totalUptime) {
- const uptimeUsage = await usageService.add(
- orgId,
- FeatureId.SITE_UPTIME,
- totalUptime,
- trx
- );
- if (uptimeUsage) {
- usageService
- .checkLimitSet(
- orgId,
- true,
- FeatureId.SITE_UPTIME,
- uptimeUsage,
- trx
- )
- .catch((error: any) => {
- logger.error(
- `Error checking uptime limits for org ${orgId}:`,
- error
- );
- });
- }
- }
- } catch (error) {
- logger.error(
- `Error processing usage for org ${orgId}:`,
- error
- );
- // Don't break the loop, continue with other orgs
- }
+ // Aggregate bandwidth usage for the org
+ const totalBandwidth = peer.bytesIn + peer.bytesOut;
+ const currentOrgUsage =
+ orgUsageMap.get(updatedSite.orgId) || 0;
+ orgUsageMap.set(
+ updatedSite.orgId,
+ currentOrgUsage + totalBandwidth
+ );
+
+ // Add 10 seconds of uptime for each active site
+ const currentOrgUptime =
+ orgUptimeMap.get(updatedSite.orgId) || 0;
+ orgUptimeMap.set(
+ updatedSite.orgId,
+ currentOrgUptime + 10 / 60
+ );
}
+ } catch (error) {
+ logger.error(
+ `Failed to update bandwidth for site ${peer.publicKey}:`,
+ error
+ );
+ // Continue with other sites
}
}
+ }
- // Handle sites that reported zero bandwidth but need online status updated
- const zeroBandwidthPeers = bandwidthData.filter(
- (peer) => peer.bytesIn === 0 && !offlineSites.has(peer.publicKey) // Bytesout will have data as it tries to send keep alive messages
- );
+ // Process usage updates outside of site update transactions
+ // This separates the concerns and reduces lock contention
+ if (calcUsageAndLimits && (orgUsageMap.size > 0 || orgUptimeMap.size > 0)) {
+ // Sort org IDs to ensure consistent lock ordering
+ const allOrgIds = [
+ ...new Set([...orgUsageMap.keys(), ...orgUptimeMap.keys()])
+ ].sort();
- if (zeroBandwidthPeers.length > 0) {
- const zeroBandwidthSites = await trx
- .select()
- .from(sites)
- .where(
- inArray(
- sites.pubKey,
- zeroBandwidthPeers.map((p) => p.publicKey)
- )
- );
-
- for (const site of zeroBandwidthSites) {
- let newOnlineStatus = site.online;
-
- // Check if site should go offline based on last bandwidth update WITH DATA
- if (site.lastBandwidthUpdate) {
- const lastUpdateWithData = new Date(
- site.lastBandwidthUpdate
+ for (const orgId of allOrgIds) {
+ try {
+ // Process bandwidth usage for this org
+ const totalBandwidth = orgUsageMap.get(orgId);
+ if (totalBandwidth) {
+ const bandwidthUsage = await usageService.add(
+ orgId,
+ FeatureId.EGRESS_DATA_MB,
+ totalBandwidth
);
- if (lastUpdateWithData < oneMinuteAgo) {
- newOnlineStatus = false;
+ if (bandwidthUsage) {
+ // Fire and forget - don't block on limit checking
+ usageService
+ .checkLimitSet(
+ orgId,
+ true,
+ FeatureId.EGRESS_DATA_MB,
+ bandwidthUsage
+ )
+ .catch((error: any) => {
+ logger.error(
+ `Error checking bandwidth limits for org ${orgId}:`,
+ error
+ );
+ });
}
- } else {
- // No previous data update recorded, set to offline
- newOnlineStatus = false;
}
- // Always update lastBandwidthUpdate to show this instance is receiving reports
- // Only update online status if it changed
- if (site.online !== newOnlineStatus) {
- const [updatedSite] = await trx
- .update(sites)
- .set({
- online: newOnlineStatus
- })
- .where(eq(sites.siteId, site.siteId))
- .returning();
+ // Process uptime usage for this org
+ const totalUptime = orgUptimeMap.get(orgId);
+ if (totalUptime) {
+ const uptimeUsage = await usageService.add(
+ orgId,
+ FeatureId.SITE_UPTIME,
+ totalUptime
+ );
+ if (uptimeUsage) {
+ // Fire and forget - don't block on limit checking
+ usageService
+ .checkLimitSet(
+ orgId,
+ true,
+ FeatureId.SITE_UPTIME,
+ uptimeUsage
+ )
+ .catch((error: any) => {
+ logger.error(
+ `Error checking uptime limits for org ${orgId}:`,
+ error
+ );
+ });
+ }
+ }
+ } catch (error) {
+ logger.error(`Error processing usage for org ${orgId}:`, error);
+ // Continue with other orgs
+ }
+ }
+ }
+
+ // Handle sites that reported zero bandwidth but need online status updated
+ const zeroBandwidthPeers = sortedBandwidthData.filter(
+ (peer) => peer.bytesIn === 0 && !offlineSites.has(peer.publicKey)
+ );
+
+ if (zeroBandwidthPeers.length > 0) {
+ // Fetch all zero bandwidth sites in one query
+ const zeroBandwidthSites = await db
+ .select()
+ .from(sites)
+ .where(
+ inArray(
+ sites.pubKey,
+ zeroBandwidthPeers.map((p) => p.publicKey)
+ )
+ );
+
+ // Sort by siteId to ensure consistent lock ordering
+ const sortedZeroBandwidthSites = zeroBandwidthSites.sort(
+ (a, b) => a.siteId - b.siteId
+ );
+
+ for (const site of sortedZeroBandwidthSites) {
+ let newOnlineStatus = site.online;
+
+ // Check if site should go offline based on last bandwidth update WITH DATA
+ if (site.lastBandwidthUpdate) {
+ const lastUpdateWithData = new Date(site.lastBandwidthUpdate);
+ if (lastUpdateWithData < oneMinuteAgo) {
+ newOnlineStatus = false;
+ }
+ } else {
+ // No previous data update recorded, set to offline
+ newOnlineStatus = false;
+ }
+
+ // Only update online status if it changed
+ if (site.online !== newOnlineStatus) {
+ try {
+ const updatedSite = await withDeadlockRetry(async () => {
+ const [result] = await db
+ .update(sites)
+ .set({
+ online: newOnlineStatus
+ })
+ .where(eq(sites.siteId, site.siteId))
+ .returning();
+ return result;
+ }, `update offline status for site ${site.siteId}`);
if (updatedSite && exitNodeId) {
- if (
- await checkExitNodeOrg(
- exitNodeId,
- updatedSite.orgId,
- trx
- )
- ) {
- // not allowed
+ const notAllowed = await checkExitNodeOrg(
+ exitNodeId,
+ updatedSite.orgId
+ );
+ if (notAllowed) {
logger.warn(
`Exit node ${exitNodeId} is not allowed for org ${updatedSite.orgId}`
);
- // THIS SHOULD TRIGGER THE TRANSACTION TO FAIL?
- throw new Error("Exit node not allowed");
}
}
@@ -262,8 +319,14 @@ export async function updateSiteBandwidth(
if (!newOnlineStatus && site.pubKey) {
offlineSites.add(site.pubKey);
}
+ } catch (error) {
+ logger.error(
+ `Failed to update offline status for site ${site.siteId}:`,
+ error
+ );
+ // Continue with other sites
}
}
}
- });
+ }
}
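The receiveBandwidth.ts rewrite above replaces one large transaction with per-site updates wrapped in deadlock retries. The helper pair it introduces can be exercised in isolation; this is a self-contained restatement of the same pattern (Postgres deadlock SQLSTATE `40P01`, exponential backoff with full jitter), with constants matching the diff:

```typescript
const MAX_RETRIES = 3;
const BASE_DELAY_MS = 50;

// A deadlock surfaces either as a driver error code or wrapped in a cause.
function isDeadlockError(error: any): boolean {
    return (
        error?.code === "40P01" ||
        error?.cause?.code === "40P01" ||
        (typeof error?.message === "string" &&
            error.message.includes("deadlock"))
    );
}

async function withDeadlockRetry<T>(
    operation: () => Promise<T>,
    context: string
): Promise<T> {
    for (let attempt = 0; ; attempt++) {
        try {
            return await operation();
        } catch (error: any) {
            // Only deadlocks are retried; anything else propagates immediately.
            if (!isDeadlockError(error) || attempt >= MAX_RETRIES) throw error;
            // Backoff base doubles each attempt (50, 100, 200 ms), plus up to
            // 100% random jitter so competing instances desynchronize.
            const baseDelay = Math.pow(2, attempt) * BASE_DELAY_MS;
            const delay = baseDelay + Math.random() * baseDelay;
            console.warn(
                `deadlock in ${context}, retry ${attempt + 1} in ${delay.toFixed(0)}ms`
            );
            await new Promise((resolve) => setTimeout(resolve, delay));
        }
    }
}
```

The retry is only half of the fix: the diff also sorts bandwidth data by `publicKey` and org IDs lexicographically before updating, so every instance acquires row locks in the same order, which removes the most common cause of the deadlocks being retried.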
diff --git a/server/routers/hybrid.ts b/server/routers/hybrid.ts
index 235961f1..398abdb8 100644
--- a/server/routers/hybrid.ts
+++ b/server/routers/hybrid.ts
@@ -1,4 +1,4 @@
import { Router } from "express";
// Root routes
-export const hybridRouter = Router();
\ No newline at end of file
+export const hybridRouter = Router();
diff --git a/server/routers/idp/createIdpOrgPolicy.ts b/server/routers/idp/createIdpOrgPolicy.ts
index b8c947b0..b9a0098b 100644
--- a/server/routers/idp/createIdpOrgPolicy.ts
+++ b/server/routers/idp/createIdpOrgPolicy.ts
@@ -12,14 +12,14 @@ import { eq, and } from "drizzle-orm";
import { idp, idpOrg } from "@server/db";
const paramsSchema = z.strictObject({
- idpId: z.coerce.number(),
- orgId: z.string()
- });
+ idpId: z.coerce.number(),
+ orgId: z.string()
+});
const bodySchema = z.strictObject({
- roleMapping: z.string().optional(),
- orgMapping: z.string().optional()
- });
+ roleMapping: z.string().optional(),
+ orgMapping: z.string().optional()
+});
export type CreateIdpOrgPolicyResponse = {};
diff --git a/server/routers/idp/createOidcIdp.ts b/server/routers/idp/createOidcIdp.ts
index 2548cb04..c7eeaf30 100644
--- a/server/routers/idp/createOidcIdp.ts
+++ b/server/routers/idp/createOidcIdp.ts
@@ -15,17 +15,17 @@ import config from "@server/lib/config";
const paramsSchema = z.strictObject({});
const bodySchema = z.strictObject({
- name: z.string().nonempty(),
- clientId: z.string().nonempty(),
- clientSecret: z.string().nonempty(),
- authUrl: z.url(),
- tokenUrl: z.url(),
- identifierPath: z.string().nonempty(),
- emailPath: z.string().optional(),
- namePath: z.string().optional(),
- scopes: z.string().nonempty(),
- autoProvision: z.boolean().optional()
- });
+ name: z.string().nonempty(),
+ clientId: z.string().nonempty(),
+ clientSecret: z.string().nonempty(),
+ authUrl: z.url(),
+ tokenUrl: z.url(),
+ identifierPath: z.string().nonempty(),
+ emailPath: z.string().optional(),
+ namePath: z.string().optional(),
+ scopes: z.string().nonempty(),
+ autoProvision: z.boolean().optional()
+});
export type CreateIdpResponse = {
idpId: number;
diff --git a/server/routers/idp/deleteIdp.ts b/server/routers/idp/deleteIdp.ts
index 56c0ca98..f2b55099 100644
--- a/server/routers/idp/deleteIdp.ts
+++ b/server/routers/idp/deleteIdp.ts
@@ -53,12 +53,7 @@ export async function deleteIdp(
.where(eq(idp.idpId, idpId));
if (!existingIdp) {
- return next(
- createHttpError(
- HttpCode.NOT_FOUND,
- "IdP not found"
- )
- );
+ return next(createHttpError(HttpCode.NOT_FOUND, "IdP not found"));
}
// Delete the IDP and its related records in a transaction
@@ -69,14 +64,10 @@ export async function deleteIdp(
.where(eq(idpOidcConfig.idpId, idpId));
// Delete IDP-org mappings
- await trx
- .delete(idpOrg)
- .where(eq(idpOrg.idpId, idpId));
+ await trx.delete(idpOrg).where(eq(idpOrg.idpId, idpId));
// Delete the IDP itself
- await trx
- .delete(idp)
- .where(eq(idp.idpId, idpId));
+ await trx.delete(idp).where(eq(idp.idpId, idpId));
});
return response(res, {
diff --git a/server/routers/idp/deleteIdpOrgPolicy.ts b/server/routers/idp/deleteIdpOrgPolicy.ts
index c5f18282..b52a37df 100644
--- a/server/routers/idp/deleteIdpOrgPolicy.ts
+++ b/server/routers/idp/deleteIdpOrgPolicy.ts
@@ -11,9 +11,9 @@ import { eq, and } from "drizzle-orm";
import { OpenAPITags, registry } from "@server/openApi";
const paramsSchema = z.strictObject({
- idpId: z.coerce.number(),
- orgId: z.string()
- });
+ idpId: z.coerce.number(),
+ orgId: z.string()
+});
registry.registerPath({
method: "delete",
diff --git a/server/routers/idp/generateOidcUrl.ts b/server/routers/idp/generateOidcUrl.ts
index 2db8783f..50b63ee5 100644
--- a/server/routers/idp/generateOidcUrl.ts
+++ b/server/routers/idp/generateOidcUrl.ts
@@ -24,8 +24,8 @@ const paramsSchema = z
.strict();
const bodySchema = z.strictObject({
- redirectUrl: z.string()
- });
+ redirectUrl: z.string()
+});
const querySchema = z.object({
orgId: z.string().optional() // check what actually calls it
diff --git a/server/routers/idp/getIdp.ts b/server/routers/idp/getIdp.ts
index e8651c84..07253751 100644
--- a/server/routers/idp/getIdp.ts
+++ b/server/routers/idp/getIdp.ts
@@ -71,14 +71,8 @@ export async function getIdp(
const clientSecret = idpRes.idpOidcConfig!.clientSecret;
const clientId = idpRes.idpOidcConfig!.clientId;
- idpRes.idpOidcConfig!.clientSecret = decrypt(
- clientSecret,
- key
- );
- idpRes.idpOidcConfig!.clientId = decrypt(
- clientId,
- key
- );
+ idpRes.idpOidcConfig!.clientSecret = decrypt(clientSecret, key);
+ idpRes.idpOidcConfig!.clientId = decrypt(clientId, key);
}
return response(res, {
diff --git a/server/routers/idp/index.ts b/server/routers/idp/index.ts
index 81cec8d1..f0dcf02e 100644
--- a/server/routers/idp/index.ts
+++ b/server/routers/idp/index.ts
@@ -8,4 +8,4 @@ export * from "./getIdp";
export * from "./createIdpOrgPolicy";
export * from "./deleteIdpOrgPolicy";
export * from "./listIdpOrgPolicies";
-export * from "./updateIdpOrgPolicy";
\ No newline at end of file
+export * from "./updateIdpOrgPolicy";
diff --git a/server/routers/idp/listIdpOrgPolicies.ts b/server/routers/idp/listIdpOrgPolicies.ts
index 087b52f8..9f7cdb42 100644
--- a/server/routers/idp/listIdpOrgPolicies.ts
+++ b/server/routers/idp/listIdpOrgPolicies.ts
@@ -15,19 +15,19 @@ const paramsSchema = z.object({
});
const querySchema = z.strictObject({
- limit: z
- .string()
- .optional()
- .default("1000")
- .transform(Number)
- .pipe(z.int().nonnegative()),
- offset: z
- .string()
- .optional()
- .default("0")
- .transform(Number)
- .pipe(z.int().nonnegative())
- });
+ limit: z
+ .string()
+ .optional()
+ .default("1000")
+ .transform(Number)
+ .pipe(z.int().nonnegative()),
+ offset: z
+ .string()
+ .optional()
+ .default("0")
+ .transform(Number)
+ .pipe(z.int().nonnegative())
+});
async function query(idpId: number, limit: number, offset: number) {
const res = await db
diff --git a/server/routers/idp/listIdps.ts b/server/routers/idp/listIdps.ts
index 8ce2ab78..20d1899e 100644
--- a/server/routers/idp/listIdps.ts
+++ b/server/routers/idp/listIdps.ts
@@ -11,19 +11,19 @@ import { fromError } from "zod-validation-error";
import { OpenAPITags, registry } from "@server/openApi";
const querySchema = z.strictObject({
- limit: z
- .string()
- .optional()
- .default("1000")
- .transform(Number)
- .pipe(z.int().nonnegative()),
- offset: z
- .string()
- .optional()
- .default("0")
- .transform(Number)
- .pipe(z.int().nonnegative())
- });
+ limit: z
+ .string()
+ .optional()
+ .default("1000")
+ .transform(Number)
+ .pipe(z.int().nonnegative()),
+ offset: z
+ .string()
+ .optional()
+ .default("0")
+ .transform(Number)
+ .pipe(z.int().nonnegative())
+});
async function query(limit: number, offset: number) {
const res = await db
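The `limit`/`offset` schema reindented in several hunks above encodes one idea: accept an optional string query parameter, default it, coerce to a number, and reject anything that is not a nonnegative integer. Sketched without zod, as a hypothetical `parsePageParam` helper (not part of the diff), the same contract looks like this:

```typescript
// Equivalent of: z.string().optional().default(fallback)
//                 .transform(Number).pipe(z.int().nonnegative())
function parsePageParam(raw: string | undefined, fallback: string): number {
    const n = Number(raw ?? fallback);
    // Number() yields NaN for junk and keeps fractions, so one integer
    // check covers both "abc" and "1.5"; the sign check rejects "-1".
    if (!Number.isInteger(n) || n < 0) {
        throw new Error(`expected a nonnegative integer, got "${raw}"`);
    }
    return n;
}
```

The zod chain additionally produces a structured validation error instead of a thrown `Error`, which is why the routes feed failures through `fromError` rather than a bare catch.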
diff --git a/server/routers/idp/updateIdpOrgPolicy.ts b/server/routers/idp/updateIdpOrgPolicy.ts
index 82d3b5f2..6432faf6 100644
--- a/server/routers/idp/updateIdpOrgPolicy.ts
+++ b/server/routers/idp/updateIdpOrgPolicy.ts
@@ -11,14 +11,14 @@ import { eq, and } from "drizzle-orm";
import { idp, idpOrg } from "@server/db";
const paramsSchema = z.strictObject({
- idpId: z.coerce.number(),
- orgId: z.string()
- });
+ idpId: z.coerce.number(),
+ orgId: z.string()
+});
const bodySchema = z.strictObject({
- roleMapping: z.string().optional(),
- orgMapping: z.string().optional()
- });
+ roleMapping: z.string().optional(),
+ orgMapping: z.string().optional()
+});
export type UpdateIdpOrgPolicyResponse = {};
diff --git a/server/routers/idp/updateOidcIdp.ts b/server/routers/idp/updateOidcIdp.ts
index 1dbdd00a..a4d55187 100644
--- a/server/routers/idp/updateOidcIdp.ts
+++ b/server/routers/idp/updateOidcIdp.ts
@@ -19,19 +19,19 @@ const paramsSchema = z
.strict();
const bodySchema = z.strictObject({
- name: z.string().optional(),
- clientId: z.string().optional(),
- clientSecret: z.string().optional(),
- authUrl: z.string().optional(),
- tokenUrl: z.string().optional(),
- identifierPath: z.string().optional(),
- emailPath: z.string().optional(),
- namePath: z.string().optional(),
- scopes: z.string().optional(),
- autoProvision: z.boolean().optional(),
- defaultRoleMapping: z.string().optional(),
- defaultOrgMapping: z.string().optional()
- });
+ name: z.string().optional(),
+ clientId: z.string().optional(),
+ clientSecret: z.string().optional(),
+ authUrl: z.string().optional(),
+ tokenUrl: z.string().optional(),
+ identifierPath: z.string().optional(),
+ emailPath: z.string().optional(),
+ namePath: z.string().optional(),
+ scopes: z.string().optional(),
+ autoProvision: z.boolean().optional(),
+ defaultRoleMapping: z.string().optional(),
+ defaultOrgMapping: z.string().optional()
+});
export type UpdateIdpResponse = {
idpId: number;
diff --git a/server/routers/integration.ts b/server/routers/integration.ts
index 878d61fa..6301bb6d 100644
--- a/server/routers/integration.ts
+++ b/server/routers/integration.ts
@@ -352,6 +352,14 @@ authenticated.post(
user.inviteUser
);
+authenticated.delete(
+ "/org/:orgId/invitations/:inviteId",
+ verifyApiKeyOrgAccess,
+ verifyApiKeyHasAction(ActionsEnum.removeInvitation),
+ logActionAudit(ActionsEnum.removeInvitation),
+ user.removeInvitation
+);
+
authenticated.get(
"/resource/:resourceId/roles",
verifyApiKeyResourceAccess,
diff --git a/server/routers/license/types.ts b/server/routers/license/types.ts
index 945bd368..a78a287f 100644
--- a/server/routers/license/types.ts
+++ b/server/routers/license/types.ts
@@ -8,4 +8,4 @@ export type GetLicenseStatusResponse = LicenseStatus;
export type ListLicenseKeysResponse = LicenseKeyCache[];
-export type RecheckStatusResponse = LicenseStatus;
\ No newline at end of file
+export type RecheckStatusResponse = LicenseStatus;
diff --git a/server/routers/loginPage/types.ts b/server/routers/loginPage/types.ts
index 26f59cab..a68dd7d4 100644
--- a/server/routers/loginPage/types.ts
+++ b/server/routers/loginPage/types.ts
@@ -8,4 +8,4 @@ export type GetLoginPageResponse = LoginPage;
export type UpdateLoginPageResponse = LoginPage;
-export type LoadLoginPageResponse = LoginPage & { orgId: string };
\ No newline at end of file
+export type LoadLoginPageResponse = LoginPage & { orgId: string };
diff --git a/server/routers/newt/createNewt.ts b/server/routers/newt/createNewt.ts
index 930c04be..b5da405e 100644
--- a/server/routers/newt/createNewt.ts
+++ b/server/routers/newt/createNewt.ts
@@ -24,9 +24,9 @@ export type CreateNewtResponse = {
};
const createNewtSchema = z.strictObject({
- newtId: z.string(),
- secret: z.string()
- });
+ newtId: z.string(),
+ secret: z.string()
+});
export async function createNewt(
req: Request,
@@ -34,7 +34,6 @@ export async function createNewt(
next: NextFunction
): Promise<any> {
try {
-
const parsedBody = createNewtSchema.safeParse(req.body);
if (!parsedBody.success) {
return next(
@@ -58,7 +57,7 @@ export async function createNewt(
await db.insert(newts).values({
newtId: newtId,
secretHash,
- dateCreated: moment().toISOString(),
+ dateCreated: moment().toISOString()
});
// give the newt their default permissions:
@@ -75,12 +74,12 @@ export async function createNewt(
data: {
newtId,
secret,
- token,
+ token
},
success: true,
error: false,
message: "Newt created successfully",
- status: HttpCode.OK,
+ status: HttpCode.OK
});
} catch (e) {
if (e instanceof SqliteError && e.code === "SQLITE_CONSTRAINT_UNIQUE") {
diff --git a/server/routers/newt/handleGetConfigMessage.ts b/server/routers/newt/handleGetConfigMessage.ts
index 5f42cd82..bfe14ec5 100644
--- a/server/routers/newt/handleGetConfigMessage.ts
+++ b/server/routers/newt/handleGetConfigMessage.ts
@@ -11,7 +11,7 @@ import {
} from "@server/db";
import { clients, clientSitesAssociationsCache, Newt, sites } from "@server/db";
import { eq } from "drizzle-orm";
-import { updatePeer } from "../olm/peers";
+import { initPeerAddHandshake, updatePeer } from "../olm/peers";
import { sendToExitNode } from "#dynamic/lib/exitNodes";
import { generateSubnetProxyTargets, SubnetProxyTarget } from "@server/lib/ip";
import config from "@server/lib/config";
@@ -140,92 +140,101 @@ export const handleGetConfigMessage: MessageHandler = async (context) => {
)
.where(eq(clientSitesAssociationsCache.siteId, siteId));
- // Prepare peers data for the response
- const peers = await Promise.all(
- clientsRes
- .filter((client) => {
- if (!client.clients.pubKey) {
- logger.warn(
- `Client ${client.clients.clientId} has no public key, skipping`
- );
- return false;
- }
- if (!client.clients.subnet) {
- logger.warn(
- `Client ${client.clients.clientId} has no subnet, skipping`
- );
- return false;
- }
- return true;
- })
- .map(async (client) => {
- // Add or update this peer on the olm if it is connected
- if (!site.publicKey) {
- logger.warn(
- `Site ${site.siteId} has no public key, skipping`
- );
- return null;
- }
+ let peers: Array<{
+ publicKey: string;
+ allowedIps: string[];
+ endpoint?: string;
+ }> = [];
- if (!exitNode) {
- logger.warn(`Exit node not found for site ${site.siteId}`);
- return null;
- }
+ if (site.publicKey && site.endpoint && exitNode) {
+ // Prepare peers data for the response
+ peers = await Promise.all(
+ clientsRes
+ .filter((client) => {
+ if (!client.clients.pubKey) {
+ logger.warn(
+ `Client ${client.clients.clientId} has no public key, skipping`
+ );
+ return false;
+ }
+ if (!client.clients.subnet) {
+ logger.warn(
+ `Client ${client.clients.clientId} has no subnet, skipping`
+ );
+ return false;
+ }
+ return true;
+ })
+ .map(async (client) => {
+ // Add or update this peer on the olm if it is connected
- if (!site.endpoint) {
- logger.warn(
- `Site ${site.siteId} has no endpoint, skipping`
- );
- return null;
- }
-
- // const allSiteResources = await db // only get the site resources that this client has access to
- // .select()
- // .from(siteResources)
- // .innerJoin(
- // clientSiteResourcesAssociationsCache,
- // eq(
- // siteResources.siteResourceId,
- // clientSiteResourcesAssociationsCache.siteResourceId
- // )
- // )
- // .where(
- // and(
- // eq(siteResources.siteId, site.siteId),
- // eq(
- // clientSiteResourcesAssociationsCache.clientId,
- // client.clients.clientId
- // )
- // )
- // );
- await updatePeer(client.clients.clientId, {
- siteId: site.siteId,
- endpoint: site.endpoint,
- relayEndpoint: `${exitNode.endpoint}:${config.getRawConfig().gerbil.clients_start_port}`,
- publicKey: site.publicKey,
- serverIP: site.address,
- serverPort: site.listenPort
- // remoteSubnets: generateRemoteSubnets(
- // allSiteResources.map(
- // ({ siteResources }) => siteResources
+ // const allSiteResources = await db // only get the site resources that this client has access to
+ // .select()
+ // .from(siteResources)
+ // .innerJoin(
+ // clientSiteResourcesAssociationsCache,
+ // eq(
+ // siteResources.siteResourceId,
+ // clientSiteResourcesAssociationsCache.siteResourceId
+ // )
// )
- // ),
- // aliases: generateAliasConfig(
- // allSiteResources.map(
- // ({ siteResources }) => siteResources
- // )
- // )
- });
+ // .where(
+ // and(
+ // eq(siteResources.siteId, site.siteId),
+ // eq(
+ // clientSiteResourcesAssociationsCache.clientId,
+ // client.clients.clientId
+ // )
+ // )
+ // );
- return {
- publicKey: client.clients.pubKey!,
- allowedIps: [`${client.clients.subnet.split("/")[0]}/32`], // we want to only allow from that client
- endpoint: client.clientSitesAssociationsCache.isRelayed
- ? ""
- : client.clientSitesAssociationsCache.endpoint! // if its relayed it should be localhost
- };
- })
- );
+ // update the peer info on the olm
+ // if the peer has not been added yet this will be a no-op
+ await updatePeer(client.clients.clientId, {
+ siteId: site.siteId,
+ endpoint: site.endpoint!,
+ relayEndpoint: `${exitNode.endpoint}:${config.getRawConfig().gerbil.clients_start_port}`,
+ publicKey: site.publicKey!,
+ serverIP: site.address,
+ serverPort: site.listenPort
+ // remoteSubnets: generateRemoteSubnets(
+ // allSiteResources.map(
+ // ({ siteResources }) => siteResources
+ // )
+ // ),
+ // aliases: generateAliasConfig(
+ // allSiteResources.map(
+ // ({ siteResources }) => siteResources
+ // )
+ // )
+ });
+
+ // also trigger the peer add handshake in case the peer was not already added to the olm and we need to hole punch
+ // if it has already been added this will be a no-op
+ await initPeerAddHandshake(
+ // this will kick off the add peer process for the client
+ client.clients.clientId,
+ {
+ siteId,
+ exitNode: {
+ publicKey: exitNode.publicKey,
+ endpoint: exitNode.endpoint
+ }
+ }
+ );
+
+ return {
+ publicKey: client.clients.pubKey!,
+ allowedIps: [
+ `${client.clients.subnet.split("/")[0]}/32`
                            ], // only allow traffic from that client's address
+ endpoint: client.clientSitesAssociationsCache.isRelayed
+ ? ""
                                : client.clientSitesAssociationsCache.endpoint! // if it's relayed it should be localhost
+ };
+ })
+ );
+ }
// Filter out any null values from peers that didn't have an olm
const validPeers = peers.filter((peer) => peer !== null);
diff --git a/server/routers/newt/handleNewtPingRequestMessage.ts b/server/routers/newt/handleNewtPingRequestMessage.ts
index fea157fd..b75ddd5e 100644
--- a/server/routers/newt/handleNewtPingRequestMessage.ts
+++ b/server/routers/newt/handleNewtPingRequestMessage.ts
@@ -35,7 +35,11 @@ export const handleNewtPingRequestMessage: MessageHandler = async (context) => {
const { noCloud } = message.data;
- const exitNodesList = await listExitNodes(site.orgId, true, noCloud || false); // filter for only the online ones
+ const exitNodesList = await listExitNodes(
+ site.orgId,
+ true,
+ noCloud || false
+ ); // filter for only the online ones
let lastExitNodeId = null;
if (newt.siteId) {
diff --git a/server/routers/newt/handleNewtRegisterMessage.ts b/server/routers/newt/handleNewtRegisterMessage.ts
index f4d963a1..c7f2131e 100644
--- a/server/routers/newt/handleNewtRegisterMessage.ts
+++ b/server/routers/newt/handleNewtRegisterMessage.ts
@@ -255,7 +255,7 @@ export const handleNewtRegisterMessage: MessageHandler = async (context) => {
hcTimeout: targetHealthCheck.hcTimeout,
hcHeaders: targetHealthCheck.hcHeaders,
hcMethod: targetHealthCheck.hcMethod,
- hcTlsServerName: targetHealthCheck.hcTlsServerName,
+ hcTlsServerName: targetHealthCheck.hcTlsServerName
})
.from(targets)
.innerJoin(resources, eq(targets.resourceId, resources.resourceId))
@@ -328,7 +328,7 @@ export const handleNewtRegisterMessage: MessageHandler = async (context) => {
hcTimeout: target.hcTimeout, // in seconds
hcHeaders: hcHeadersSend,
hcMethod: target.hcMethod,
- hcTlsServerName: target.hcTlsServerName,
+ hcTlsServerName: target.hcTlsServerName
};
});
@@ -346,6 +346,7 @@ export const handleNewtRegisterMessage: MessageHandler = async (context) => {
type: "newt/wg/connect",
data: {
endpoint: `${exitNode.endpoint}:${exitNode.listenPort}`,
+ relayPort: config.getRawConfig().gerbil.clients_start_port,
publicKey: exitNode.publicKey,
serverIP: exitNode.address.split("/")[0],
tunnelIP: siteSubnet.split("/")[0],
@@ -366,7 +367,7 @@ async function getUniqueSubnetForSite(
trx: Transaction | typeof db = db
 ): Promise<string> {
const lockKey = `subnet-allocation:${exitNode.exitNodeId}`;
-
+
return await lockManager.withLock(
lockKey,
async () => {
@@ -382,7 +383,8 @@ async function getUniqueSubnetForSite(
.map((site) => site.subnet)
.filter(
(subnet) =>
- subnet && /^(\d{1,3}\.){3}\d{1,3}\/\d{1,2}$/.test(subnet)
+ subnet &&
+ /^(\d{1,3}\.){3}\d{1,3}\/\d{1,2}$/.test(subnet)
)
.filter((subnet) => subnet !== null);
subnets.push(exitNode.address.replace(/\/\d+$/, `/${blockSize}`));
diff --git a/server/routers/newt/handleReceiveBandwidthMessage.ts b/server/routers/newt/handleReceiveBandwidthMessage.ts
index f5170feb..3d060a0c 100644
--- a/server/routers/newt/handleReceiveBandwidthMessage.ts
+++ b/server/routers/newt/handleReceiveBandwidthMessage.ts
@@ -10,7 +10,9 @@ interface PeerBandwidth {
bytesOut: number;
}
-export const handleReceiveBandwidthMessage: MessageHandler = async (context) => {
+export const handleReceiveBandwidthMessage: MessageHandler = async (
+ context
+) => {
const { message, client, sendToClient } = context;
if (!message.data.bandwidthData) {
@@ -44,7 +46,7 @@ export const handleReceiveBandwidthMessage: MessageHandler = async (context) =>
.set({
megabytesOut: (client.megabytesIn || 0) + bytesIn,
megabytesIn: (client.megabytesOut || 0) + bytesOut,
- lastBandwidthUpdate: new Date().toISOString(),
+ lastBandwidthUpdate: new Date().toISOString()
})
.where(eq(clients.clientId, client.clientId));
}
diff --git a/server/routers/newt/handleSocketMessages.ts b/server/routers/newt/handleSocketMessages.ts
index 09a473b9..f26f69c9 100644
--- a/server/routers/newt/handleSocketMessages.ts
+++ b/server/routers/newt/handleSocketMessages.ts
@@ -64,9 +64,5 @@ export const handleDockerContainersMessage: MessageHandler = async (
return;
}
- await applyNewtDockerBlueprint(
- newt.siteId,
- newt.newtId,
- containers
- );
+ await applyNewtDockerBlueprint(newt.siteId, newt.newtId, containers);
};
diff --git a/server/routers/newt/index.ts b/server/routers/newt/index.ts
index 9642a637..6b17f324 100644
--- a/server/routers/newt/index.ts
+++ b/server/routers/newt/index.ts
@@ -5,4 +5,4 @@ export * from "./handleReceiveBandwidthMessage";
export * from "./handleGetConfigMessage";
export * from "./handleSocketMessages";
export * from "./handleNewtPingRequestMessage";
-export * from "./handleApplyBlueprintMessage";
\ No newline at end of file
+export * from "./handleApplyBlueprintMessage";
diff --git a/server/routers/newt/peers.ts b/server/routers/newt/peers.ts
index 694f0c0f..c7546ff0 100644
--- a/server/routers/newt/peers.ts
+++ b/server/routers/newt/peers.ts
@@ -48,7 +48,11 @@ export async function addPeer(
return site;
}
-export async function deletePeer(siteId: number, publicKey: string, newtId?: string) {
+export async function deletePeer(
+ siteId: number,
+ publicKey: string,
+ newtId?: string
+) {
let site: Site | null = null;
if (!newtId) {
[site] = await db
diff --git a/server/routers/newt/targets.ts b/server/routers/newt/targets.ts
index a5883f30..e97aed35 100644
--- a/server/routers/newt/targets.ts
+++ b/server/routers/newt/targets.ts
@@ -26,22 +26,32 @@ export async function addTargets(
// Create a map for quick lookup
const healthCheckMap = new Map();
- healthCheckData.forEach(hc => {
+ healthCheckData.forEach((hc) => {
healthCheckMap.set(hc.targetId, hc);
});
const healthCheckTargets = targets.map((target) => {
const hc = healthCheckMap.get(target.targetId);
-
+
// If no health check data found, skip this target
if (!hc) {
- logger.warn(`No health check configuration found for target ${target.targetId}`);
+ logger.warn(
+ `No health check configuration found for target ${target.targetId}`
+ );
return null;
}
// Ensure all necessary fields are present
- if (!hc.hcPath || !hc.hcHostname || !hc.hcPort || !hc.hcInterval || !hc.hcMethod) {
- logger.debug(`Skipping target ${target.targetId} due to missing health check fields`);
+ if (
+ !hc.hcPath ||
+ !hc.hcHostname ||
+ !hc.hcPort ||
+ !hc.hcInterval ||
+ !hc.hcMethod
+ ) {
+ logger.debug(
+ `Skipping target ${target.targetId} due to missing health check fields`
+ );
return null; // Skip targets with missing health check fields
}
@@ -49,9 +59,11 @@ export async function addTargets(
const hcHeadersSend: { [key: string]: string } = {};
if (hcHeadersParse) {
// transform
- hcHeadersParse.forEach((header: { name: string; value: string }) => {
- hcHeadersSend[header.name] = header.value;
- });
+ hcHeadersParse.forEach(
+ (header: { name: string; value: string }) => {
+ hcHeadersSend[header.name] = header.value;
+ }
+ );
}
// try to parse the hcStatus into a int and if not possible set to undefined
@@ -77,12 +89,14 @@ export async function addTargets(
hcHeaders: hcHeadersSend,
hcMethod: hc.hcMethod,
hcStatus: hcStatus,
- hcTlsServerName: hc.hcTlsServerName,
+ hcTlsServerName: hc.hcTlsServerName
};
});
// Filter out any null values from health check targets
- const validHealthCheckTargets = healthCheckTargets.filter((target) => target !== null);
+ const validHealthCheckTargets = healthCheckTargets.filter(
+ (target) => target !== null
+ );
await sendToClient(newtId, {
type: `newt/healthcheck/add`,
diff --git a/server/routers/olm/createOlm.ts b/server/routers/olm/createOlm.ts
index 930c04be..b5da405e 100644
--- a/server/routers/olm/createOlm.ts
+++ b/server/routers/olm/createOlm.ts
@@ -24,9 +24,9 @@ export type CreateNewtResponse = {
};
const createNewtSchema = z.strictObject({
- newtId: z.string(),
- secret: z.string()
- });
+ newtId: z.string(),
+ secret: z.string()
+});
export async function createNewt(
req: Request,
@@ -34,7 +34,6 @@ export async function createNewt(
next: NextFunction
 ): Promise<any> {
try {
-
const parsedBody = createNewtSchema.safeParse(req.body);
if (!parsedBody.success) {
return next(
@@ -58,7 +57,7 @@ export async function createNewt(
await db.insert(newts).values({
newtId: newtId,
secretHash,
- dateCreated: moment().toISOString(),
+ dateCreated: moment().toISOString()
});
// give the newt their default permissions:
@@ -75,12 +74,12 @@ export async function createNewt(
data: {
newtId,
secret,
- token,
+ token
},
success: true,
error: false,
message: "Newt created successfully",
- status: HttpCode.OK,
+ status: HttpCode.OK
});
} catch (e) {
if (e instanceof SqliteError && e.code === "SQLITE_CONSTRAINT_UNIQUE") {
diff --git a/server/routers/olm/getOlmToken.ts b/server/routers/olm/getOlmToken.ts
index 3852b00e..b6dc8148 100644
--- a/server/routers/olm/getOlmToken.ts
+++ b/server/routers/olm/getOlmToken.ts
@@ -197,6 +197,7 @@ export async function getOlmToken(
const exitNodesHpData = allExitNodes.map((exitNode: ExitNode) => {
return {
publicKey: exitNode.publicKey,
+ relayPort: config.getRawConfig().gerbil.clients_start_port,
endpoint: exitNode.endpoint
};
});
diff --git a/server/routers/olm/handleOlmPingMessage.ts b/server/routers/olm/handleOlmPingMessage.ts
index 35d704c7..0fa490c8 100644
--- a/server/routers/olm/handleOlmPingMessage.ts
+++ b/server/routers/olm/handleOlmPingMessage.ts
@@ -61,9 +61,12 @@ export const startOlmOfflineChecker = (): void => {
// Send a disconnect message to the client if connected
try {
- await sendTerminateClient(offlineClient.clientId, offlineClient.olmId); // terminate first
+ await sendTerminateClient(
+ offlineClient.clientId,
+ offlineClient.olmId
+ ); // terminate first
// wait a moment to ensure the message is sent
- await new Promise(resolve => setTimeout(resolve, 1000));
+ await new Promise((resolve) => setTimeout(resolve, 1000));
await disconnectClient(offlineClient.olmId);
} catch (error) {
logger.error(
diff --git a/server/routers/olm/handleOlmRelayMessage.ts b/server/routers/olm/handleOlmRelayMessage.ts
index 595b35ba..88886cd1 100644
--- a/server/routers/olm/handleOlmRelayMessage.ts
+++ b/server/routers/olm/handleOlmRelayMessage.ts
@@ -4,6 +4,7 @@ import { clients, clientSitesAssociationsCache, Olm } from "@server/db";
import { and, eq } from "drizzle-orm";
import { updatePeer as newtUpdatePeer } from "../newt/peers";
import logger from "@server/logger";
+import config from "@server/lib/config";
export const handleOlmRelayMessage: MessageHandler = async (context) => {
const { message, client: c, sendToClient } = context;
@@ -88,7 +89,8 @@ export const handleOlmRelayMessage: MessageHandler = async (context) => {
type: "olm/wg/peer/relay",
data: {
siteId: siteId,
- relayEndpoint: exitNode.endpoint
+ relayEndpoint: exitNode.endpoint,
+ relayPort: config.getRawConfig().gerbil.clients_start_port
}
},
broadcast: false,
diff --git a/server/routers/olm/handleOlmServerPeerAddMessage.ts b/server/routers/olm/handleOlmServerPeerAddMessage.ts
index c0556b0e..53f3474c 100644
--- a/server/routers/olm/handleOlmServerPeerAddMessage.ts
+++ b/server/routers/olm/handleOlmServerPeerAddMessage.ts
@@ -113,14 +113,14 @@ export const handleOlmServerPeerAddMessage: MessageHandler = async (
.select()
.from(clientSitesAssociationsCache)
.where(
- and(
+ and(
eq(clientSitesAssociationsCache.clientId, client.clientId),
isNotNull(clientSitesAssociationsCache.endpoint),
eq(clientSitesAssociationsCache.publicKey, client.pubKey) // limit it to the current session its connected with otherwise the endpoint could be stale
)
);
- // pick an endpoint
+ // pick an endpoint
for (const assoc of currentSessionSiteAssociationCaches) {
if (assoc.endpoint) {
endpoint = assoc.endpoint;
diff --git a/server/routers/olm/index.ts b/server/routers/olm/index.ts
index e671dd42..594ef9cb 100644
--- a/server/routers/olm/index.ts
+++ b/server/routers/olm/index.ts
@@ -8,4 +8,4 @@ export * from "./listUserOlms";
export * from "./deleteUserOlm";
export * from "./getUserOlm";
export * from "./handleOlmServerPeerAddMessage";
-export * from "./handleOlmUnRelayMessage";
\ No newline at end of file
+export * from "./handleOlmUnRelayMessage";
diff --git a/server/routers/olm/peers.ts b/server/routers/olm/peers.ts
index 4aa8edd7..e164b257 100644
--- a/server/routers/olm/peers.ts
+++ b/server/routers/olm/peers.ts
@@ -1,5 +1,6 @@
import { sendToClient } from "#dynamic/routers/ws";
import { db, olms } from "@server/db";
+import config from "@server/lib/config";
import logger from "@server/logger";
import { eq } from "drizzle-orm";
import { Alias } from "yaml";
@@ -156,6 +157,7 @@ export async function initPeerAddHandshake(
siteId: peer.siteId,
exitNode: {
publicKey: peer.exitNode.publicKey,
+ relayPort: config.getRawConfig().gerbil.clients_start_port,
endpoint: peer.exitNode.endpoint
}
}
diff --git a/server/routers/org/checkId.ts b/server/routers/org/checkId.ts
index 2a898c30..f11809d2 100644
--- a/server/routers/org/checkId.ts
+++ b/server/routers/org/checkId.ts
@@ -10,8 +10,8 @@ import logger from "@server/logger";
import { fromError } from "zod-validation-error";
const getOrgSchema = z.strictObject({
- orgId: z.string()
- });
+ orgId: z.string()
+});
export async function checkId(
req: Request,
diff --git a/server/routers/org/createOrg.ts b/server/routers/org/createOrg.ts
index e0e42754..f1d06566 100644
--- a/server/routers/org/createOrg.ts
+++ b/server/routers/org/createOrg.ts
@@ -31,7 +31,12 @@ import { calculateUserClientsForOrgs } from "@server/lib/calculateUserClientsFor
const createOrgSchema = z.strictObject({
orgId: z.string(),
name: z.string().min(1).max(255),
- subnet: z.string()
+ subnet: z
+ // .union([z.cidrv4(), z.cidrv6()])
+        .union([z.cidrv4()]) // for now let's just do IPv4 until we verify IPv6 works everywhere
+ .refine((val) => isValidCIDR(val), {
+ message: "Invalid subnet CIDR"
+ })
});
registry.registerPath({
@@ -81,15 +86,6 @@ export async function createOrg(
const { orgId, name, subnet } = parsedBody.data;
- if (!isValidCIDR(subnet)) {
- return next(
- createHttpError(
- HttpCode.BAD_REQUEST,
- "Invalid subnet format. Please provide a valid CIDR notation."
- )
- );
- }
-
// TODO: for now we are making all of the orgs the same subnet
// make sure the subnet is unique
// const subnetExists = await db
diff --git a/server/routers/org/getOrg.ts b/server/routers/org/getOrg.ts
index 38a1c6ba..a30dcc1c 100644
--- a/server/routers/org/getOrg.ts
+++ b/server/routers/org/getOrg.ts
@@ -11,8 +11,8 @@ import { fromZodError } from "zod-validation-error";
import { OpenAPITags, registry } from "@server/openApi";
const getOrgSchema = z.strictObject({
- orgId: z.string()
- });
+ orgId: z.string()
+});
export type GetOrgResponse = {
org: Org;
diff --git a/server/routers/org/getOrgOverview.ts b/server/routers/org/getOrgOverview.ts
index dc704d6a..d368d1b3 100644
--- a/server/routers/org/getOrgOverview.ts
+++ b/server/routers/org/getOrgOverview.ts
@@ -19,8 +19,8 @@ import logger from "@server/logger";
import { fromZodError } from "zod-validation-error";
const getOrgParamsSchema = z.strictObject({
- orgId: z.string()
- });
+ orgId: z.string()
+});
export type GetOrgOverviewResponse = {
orgName: string;
diff --git a/server/routers/org/updateOrg.ts b/server/routers/org/updateOrg.ts
index 6e7a9b35..aa9e2151 100644
--- a/server/routers/org/updateOrg.ts
+++ b/server/routers/org/updateOrg.ts
@@ -16,10 +16,11 @@ import { TierId } from "@server/lib/billing/tiers";
import { cache } from "@server/lib/cache";
const updateOrgParamsSchema = z.strictObject({
- orgId: z.string()
- });
+ orgId: z.string()
+});
-const updateOrgBodySchema = z.strictObject({
+const updateOrgBodySchema = z
+ .strictObject({
name: z.string().min(1).max(255).optional(),
requireTwoFactor: z.boolean().optional(),
maxSessionLengthHours: z.number().nullable().optional(),
diff --git a/server/routers/orgIdp/types.ts b/server/routers/orgIdp/types.ts
index a8e205cc..f6f581ee 100644
--- a/server/routers/orgIdp/types.ts
+++ b/server/routers/orgIdp/types.ts
@@ -6,10 +6,10 @@ export type CreateOrgIdpResponse = {
};
export type GetOrgIdpResponse = {
- idp: Idp,
- idpOidcConfig: IdpOidcConfig | null,
- redirectUrl: string
-}
+ idp: Idp;
+ idpOidcConfig: IdpOidcConfig | null;
+ redirectUrl: string;
+};
export type ListOrgIdpsResponse = {
idps: {
@@ -18,7 +18,7 @@ export type ListOrgIdpsResponse = {
name: string;
type: string;
variant: string;
- }[],
+ }[];
pagination: {
total: number;
limit: number;
diff --git a/server/routers/remoteExitNode/types.ts b/server/routers/remoteExitNode/types.ts
index 55d0a286..25a7d6c5 100644
--- a/server/routers/remoteExitNode/types.ts
+++ b/server/routers/remoteExitNode/types.ts
@@ -31,4 +31,14 @@ export type ListRemoteExitNodesResponse = {
pagination: { total: number; limit: number; offset: number };
};
-export type GetRemoteExitNodeResponse = { remoteExitNodeId: string; dateCreated: string; version: string | null; exitNodeId: number | null; name: string; address: string; endpoint: string; online: boolean; type: string | null; }
\ No newline at end of file
+export type GetRemoteExitNodeResponse = {
+ remoteExitNodeId: string;
+ dateCreated: string;
+ version: string | null;
+ exitNodeId: number | null;
+ name: string;
+ address: string;
+ endpoint: string;
+ online: boolean;
+ type: string | null;
+};
diff --git a/server/routers/resource/addEmailToResourceWhitelist.ts b/server/routers/resource/addEmailToResourceWhitelist.ts
index f9cee838..53828b44 100644
--- a/server/routers/resource/addEmailToResourceWhitelist.ts
+++ b/server/routers/resource/addEmailToResourceWhitelist.ts
@@ -11,21 +11,19 @@ import { and, eq } from "drizzle-orm";
import { OpenAPITags, registry } from "@server/openApi";
const addEmailToResourceWhitelistBodySchema = z.strictObject({
- email: z.email()
- .or(
- z.string().regex(/^\*@[\w.-]+\.[a-zA-Z]{2,}$/, {
- error: "Invalid email address. Wildcard (*) must be the entire local part."
- })
- )
- .transform((v) => v.toLowerCase())
- });
+ email: z
+ .email()
+ .or(
+ z.string().regex(/^\*@[\w.-]+\.[a-zA-Z]{2,}$/, {
+ error: "Invalid email address. Wildcard (*) must be the entire local part."
+ })
+ )
+ .transform((v) => v.toLowerCase())
+});
const addEmailToResourceWhitelistParamsSchema = z.strictObject({
- resourceId: z
- .string()
- .transform(Number)
- .pipe(z.int().positive())
- });
+ resourceId: z.string().transform(Number).pipe(z.int().positive())
+});
registry.registerPath({
method: "post",
diff --git a/server/routers/resource/addRoleToResource.ts b/server/routers/resource/addRoleToResource.ts
index c29f2757..ba344c6c 100644
--- a/server/routers/resource/addRoleToResource.ts
+++ b/server/routers/resource/addRoleToResource.ts
@@ -93,10 +93,7 @@ export async function addRoleToResource(
.select()
.from(roles)
.where(
- and(
- eq(roles.roleId, roleId),
- eq(roles.orgId, resource.orgId)
- )
+ and(eq(roles.roleId, roleId), eq(roles.orgId, resource.orgId))
)
.limit(1);
@@ -158,4 +155,3 @@ export async function addRoleToResource(
);
}
}
-
diff --git a/server/routers/resource/addUserToResource.ts b/server/routers/resource/addUserToResource.ts
index 6dbfe086..ee6081ff 100644
--- a/server/routers/resource/addUserToResource.ts
+++ b/server/routers/resource/addUserToResource.ts
@@ -127,4 +127,3 @@ export async function addUserToResource(
);
}
}
-
diff --git a/server/routers/resource/authWithAccessToken.ts b/server/routers/resource/authWithAccessToken.ts
index 81ca7fbc..53f72cb2 100644
--- a/server/routers/resource/authWithAccessToken.ts
+++ b/server/routers/resource/authWithAccessToken.ts
@@ -16,17 +16,17 @@ import stoi from "@server/lib/stoi";
import { logAccessAudit } from "#dynamic/lib/logAccessAudit";
const authWithAccessTokenBodySchema = z.strictObject({
- accessToken: z.string(),
- accessTokenId: z.string().optional()
- });
+ accessToken: z.string(),
+ accessTokenId: z.string().optional()
+});
const authWithAccessTokenParamsSchema = z.strictObject({
- resourceId: z
- .string()
- .optional()
- .transform(stoi)
- .pipe(z.int().positive().optional())
- });
+ resourceId: z
+ .string()
+ .optional()
+ .transform(stoi)
+ .pipe(z.int().positive().optional())
+});
export type AuthWithAccessTokenResponse = {
session?: string;
diff --git a/server/routers/resource/authWithPassword.ts b/server/routers/resource/authWithPassword.ts
index 4c1f2058..ecf61896 100644
--- a/server/routers/resource/authWithPassword.ts
+++ b/server/routers/resource/authWithPassword.ts
@@ -16,15 +16,12 @@ import config from "@server/lib/config";
import { logAccessAudit } from "#dynamic/lib/logAccessAudit";
export const authWithPasswordBodySchema = z.strictObject({
- password: z.string()
- });
+ password: z.string()
+});
export const authWithPasswordParamsSchema = z.strictObject({
- resourceId: z
- .string()
- .transform(Number)
- .pipe(z.int().positive())
- });
+ resourceId: z.string().transform(Number).pipe(z.int().positive())
+});
export type AuthWithPasswordResponse = {
session?: string;
diff --git a/server/routers/resource/authWithPincode.ts b/server/routers/resource/authWithPincode.ts
index 59f80ee0..78e132d2 100644
--- a/server/routers/resource/authWithPincode.ts
+++ b/server/routers/resource/authWithPincode.ts
@@ -15,15 +15,12 @@ import config from "@server/lib/config";
import { logAccessAudit } from "#dynamic/lib/logAccessAudit";
export const authWithPincodeBodySchema = z.strictObject({
- pincode: z.string()
- });
+ pincode: z.string()
+});
export const authWithPincodeParamsSchema = z.strictObject({
- resourceId: z
- .string()
- .transform(Number)
- .pipe(z.int().positive())
- });
+ resourceId: z.string().transform(Number).pipe(z.int().positive())
+});
export type AuthWithPincodeResponse = {
session?: string;
diff --git a/server/routers/resource/authWithWhitelist.ts b/server/routers/resource/authWithWhitelist.ts
index 11f84043..6a2b7ee7 100644
--- a/server/routers/resource/authWithWhitelist.ts
+++ b/server/routers/resource/authWithWhitelist.ts
@@ -15,16 +15,13 @@ import config from "@server/lib/config";
import { logAccessAudit } from "#dynamic/lib/logAccessAudit";
const authWithWhitelistBodySchema = z.strictObject({
- email: z.email().toLowerCase(),
- otp: z.string().optional()
- });
+ email: z.email().toLowerCase(),
+ otp: z.string().optional()
+});
const authWithWhitelistParamsSchema = z.strictObject({
- resourceId: z
- .string()
- .transform(Number)
- .pipe(z.int().positive())
- });
+ resourceId: z.string().transform(Number).pipe(z.int().positive())
+});
export type AuthWithWhitelistResponse = {
otpSent?: boolean;
diff --git a/server/routers/resource/createResource.ts b/server/routers/resource/createResource.ts
index b9ab3ce5..ba1fdba2 100644
--- a/server/routers/resource/createResource.ts
+++ b/server/routers/resource/createResource.ts
@@ -26,16 +26,17 @@ import { getUniqueResourceName } from "@server/db/names";
import { validateAndConstructDomain } from "@server/lib/domainUtils";
const createResourceParamsSchema = z.strictObject({
- orgId: z.string()
- });
+ orgId: z.string()
+});
-const createHttpResourceSchema = z.strictObject({
+const createHttpResourceSchema = z
+ .strictObject({
name: z.string().min(1).max(255),
subdomain: z.string().nullable().optional(),
http: z.boolean(),
protocol: z.enum(["tcp", "udp"]),
domainId: z.string(),
- stickySession: z.boolean().optional(),
+ stickySession: z.boolean().optional()
})
.refine(
(data) => {
@@ -49,7 +50,8 @@ const createHttpResourceSchema = z.strictObject({
}
);
-const createRawResourceSchema = z.strictObject({
+const createRawResourceSchema = z
+ .strictObject({
name: z.string().min(1).max(255),
http: z.boolean(),
protocol: z.enum(["tcp", "udp"]),
@@ -188,7 +190,7 @@ async function createHttpResource(
const { name, domainId } = parsedBody.data;
const subdomain = parsedBody.data.subdomain;
- const stickySession=parsedBody.data.stickySession;
+ const stickySession = parsedBody.data.stickySession;
// Validate domain and construct full domain
const domainResult = await validateAndConstructDomain(
diff --git a/server/routers/resource/createResourceRule.ts b/server/routers/resource/createResourceRule.ts
index c3e086b0..3f86665b 100644
--- a/server/routers/resource/createResourceRule.ts
+++ b/server/routers/resource/createResourceRule.ts
@@ -16,19 +16,16 @@ import {
import { OpenAPITags, registry } from "@server/openApi";
const createResourceRuleSchema = z.strictObject({
- action: z.enum(["ACCEPT", "DROP", "PASS"]),
- match: z.enum(["CIDR", "IP", "PATH", "COUNTRY"]),
- value: z.string().min(1),
- priority: z.int(),
- enabled: z.boolean().optional()
- });
+ action: z.enum(["ACCEPT", "DROP", "PASS"]),
+ match: z.enum(["CIDR", "IP", "PATH", "COUNTRY"]),
+ value: z.string().min(1),
+ priority: z.int(),
+ enabled: z.boolean().optional()
+});
const createResourceRuleParamsSchema = z.strictObject({
- resourceId: z
- .string()
- .transform(Number)
- .pipe(z.int().positive())
- });
+ resourceId: z.string().transform(Number).pipe(z.int().positive())
+});
registry.registerPath({
method: "put",
diff --git a/server/routers/resource/deleteResource.ts b/server/routers/resource/deleteResource.ts
index a81208a5..d8891d75 100644
--- a/server/routers/resource/deleteResource.ts
+++ b/server/routers/resource/deleteResource.ts
@@ -15,11 +15,8 @@ import { OpenAPITags, registry } from "@server/openApi";
// Define Zod schema for request parameters validation
const deleteResourceSchema = z.strictObject({
- resourceId: z
- .string()
- .transform(Number)
- .pipe(z.int().positive())
- });
+ resourceId: z.string().transform(Number).pipe(z.int().positive())
+});
registry.registerPath({
method: "delete",
diff --git a/server/routers/resource/deleteResourceRule.ts b/server/routers/resource/deleteResourceRule.ts
index 58cb7b48..638f2e1d 100644
--- a/server/routers/resource/deleteResourceRule.ts
+++ b/server/routers/resource/deleteResourceRule.ts
@@ -11,12 +11,9 @@ import { fromError } from "zod-validation-error";
import { OpenAPITags, registry } from "@server/openApi";
const deleteResourceRuleSchema = z.strictObject({
- ruleId: z.string().transform(Number).pipe(z.int().positive()),
- resourceId: z
- .string()
- .transform(Number)
- .pipe(z.int().positive())
- });
+ ruleId: z.string().transform(Number).pipe(z.int().positive()),
+ resourceId: z.string().transform(Number).pipe(z.int().positive())
+});
registry.registerPath({
method: "delete",
diff --git a/server/routers/resource/getExchangeToken.ts b/server/routers/resource/getExchangeToken.ts
index 8a0276a0..b0af4b7f 100644
--- a/server/routers/resource/getExchangeToken.ts
+++ b/server/routers/resource/getExchangeToken.ts
@@ -17,11 +17,8 @@ import { checkOrgAccessPolicy } from "#dynamic/lib/checkOrgAccessPolicy";
import { logAccessAudit } from "#dynamic/lib/logAccessAudit";
const getExchangeTokenParams = z.strictObject({
- resourceId: z
- .string()
- .transform(Number)
- .pipe(z.int().positive())
- });
+ resourceId: z.string().transform(Number).pipe(z.int().positive())
+});
export type GetExchangeTokenResponse = {
requestToken: string;
diff --git a/server/routers/resource/getResource.ts b/server/routers/resource/getResource.ts
index f2ce559e..7f3e8a0e 100644
--- a/server/routers/resource/getResource.ts
+++ b/server/routers/resource/getResource.ts
@@ -12,15 +12,15 @@ import stoi from "@server/lib/stoi";
import { OpenAPITags, registry } from "@server/openApi";
const getResourceSchema = z.strictObject({
- resourceId: z
- .string()
- .optional()
- .transform(stoi)
- .pipe(z.int().positive().optional())
- .optional(),
- niceId: z.string().optional(),
- orgId: z.string().optional()
- });
+ resourceId: z
+ .string()
+ .optional()
+ .transform(stoi)
+ .pipe(z.int().positive().optional())
+ .optional(),
+ niceId: z.string().optional(),
+ orgId: z.string().optional()
+});
async function query(resourceId?: number, niceId?: string, orgId?: string) {
if (resourceId) {
@@ -34,13 +34,18 @@ async function query(resourceId?: number, niceId?: string, orgId?: string) {
const [res] = await db
.select()
.from(resources)
- .where(and(eq(resources.niceId, niceId), eq(resources.orgId, orgId)))
+ .where(
+ and(eq(resources.niceId, niceId), eq(resources.orgId, orgId))
+ )
.limit(1);
return res;
}
}
-export type GetResourceResponse = Omit<NonNullable<Awaited<ReturnType<typeof query>>>, 'headers'> & {
+export type GetResourceResponse = Omit<
+    NonNullable<Awaited<ReturnType<typeof query>>>,
+ "headers"
+> & {
headers: { name: string; value: string }[] | null;
};
@@ -101,7 +106,9 @@ export async function getResource(
return response(res, {
data: {
...resource,
- headers: resource.headers ? JSON.parse(resource.headers) : resource.headers
+ headers: resource.headers
+ ? JSON.parse(resource.headers)
+ : resource.headers
},
success: true,
error: false,
diff --git a/server/routers/resource/getResourceAuthInfo.ts b/server/routers/resource/getResourceAuthInfo.ts
index 60f8e586..fe0a38c8 100644
--- a/server/routers/resource/getResourceAuthInfo.ts
+++ b/server/routers/resource/getResourceAuthInfo.ts
@@ -16,8 +16,8 @@ import logger from "@server/logger";
import { build } from "@server/build";
const getResourceAuthInfoSchema = z.strictObject({
- resourceGuid: z.string()
- });
+ resourceGuid: z.string()
+});
export type GetResourceAuthInfoResponse = {
resourceId: number;
diff --git a/server/routers/resource/getResourceWhitelist.ts b/server/routers/resource/getResourceWhitelist.ts
index 3171352a..52cff0c7 100644
--- a/server/routers/resource/getResourceWhitelist.ts
+++ b/server/routers/resource/getResourceWhitelist.ts
@@ -11,11 +11,8 @@ import { fromError } from "zod-validation-error";
import { OpenAPITags, registry } from "@server/openApi";
const getResourceWhitelistSchema = z.strictObject({
- resourceId: z
- .string()
- .transform(Number)
- .pipe(z.int().positive())
- });
+ resourceId: z.string().transform(Number).pipe(z.int().positive())
+});
async function queryWhitelist(resourceId: number) {
return await db
diff --git a/server/routers/resource/listResourceRoles.ts b/server/routers/resource/listResourceRoles.ts
index 3dbb8c0d..68dc58a2 100644
--- a/server/routers/resource/listResourceRoles.ts
+++ b/server/routers/resource/listResourceRoles.ts
@@ -11,11 +11,8 @@ import { fromError } from "zod-validation-error";
import { OpenAPITags, registry } from "@server/openApi";
const listResourceRolesSchema = z.strictObject({
- resourceId: z
- .string()
- .transform(Number)
- .pipe(z.int().positive())
- });
+ resourceId: z.string().transform(Number).pipe(z.int().positive())
+});
async function query(resourceId: number) {
return await db
diff --git a/server/routers/resource/listResourceRules.ts b/server/routers/resource/listResourceRules.ts
index bc2516a0..dae7922d 100644
--- a/server/routers/resource/listResourceRules.ts
+++ b/server/routers/resource/listResourceRules.ts
@@ -11,11 +11,8 @@ import logger from "@server/logger";
import { OpenAPITags, registry } from "@server/openApi";
const listResourceRulesParamsSchema = z.strictObject({
- resourceId: z
- .string()
- .transform(Number)
- .pipe(z.int().positive())
- });
+ resourceId: z.string().transform(Number).pipe(z.int().positive())
+});
const listResourceRulesSchema = z.object({
limit: z
diff --git a/server/routers/resource/listResourceUsers.ts b/server/routers/resource/listResourceUsers.ts
index b07bcf0a..e7f73287 100644
--- a/server/routers/resource/listResourceUsers.ts
+++ b/server/routers/resource/listResourceUsers.ts
@@ -11,11 +11,8 @@ import { fromError } from "zod-validation-error";
import { OpenAPITags, registry } from "@server/openApi";
const listResourceUsersSchema = z.strictObject({
- resourceId: z
- .string()
- .transform(Number)
- .pipe(z.int().positive())
- });
+ resourceId: z.string().transform(Number).pipe(z.int().positive())
+});
async function queryUsers(resourceId: number) {
return await db
diff --git a/server/routers/resource/removeEmailFromResourceWhitelist.ts b/server/routers/resource/removeEmailFromResourceWhitelist.ts
index c2cac2de..d60133b8 100644
--- a/server/routers/resource/removeEmailFromResourceWhitelist.ts
+++ b/server/routers/resource/removeEmailFromResourceWhitelist.ts
@@ -11,21 +11,19 @@ import { and, eq } from "drizzle-orm";
import { OpenAPITags, registry } from "@server/openApi";
const removeEmailFromResourceWhitelistBodySchema = z.strictObject({
- email: z.email()
- .or(
- z.string().regex(/^\*@[\w.-]+\.[a-zA-Z]{2,}$/, {
- error: "Invalid email address. Wildcard (*) must be the entire local part."
- })
- )
- .transform((v) => v.toLowerCase())
- });
+ email: z
+ .email()
+ .or(
+ z.string().regex(/^\*@[\w.-]+\.[a-zA-Z]{2,}$/, {
+ error: "Invalid email address. Wildcard (*) must be the entire local part."
+ })
+ )
+ .transform((v) => v.toLowerCase())
+});
const removeEmailFromResourceWhitelistParamsSchema = z.strictObject({
- resourceId: z
- .string()
- .transform(Number)
- .pipe(z.int().positive())
- });
+ resourceId: z.string().transform(Number).pipe(z.int().positive())
+});
registry.registerPath({
method: "post",
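The wildcard branch in the whitelist schema above accepts addresses whose entire local part is `*`. The regex's behavior can be checked in isolation:

```typescript
// The wildcard-whitelist pattern from the schema: "*" must be the whole local part.
const WILDCARD_EMAIL = /^\*@[\w.-]+\.[a-zA-Z]{2,}$/;

// An entire-domain wildcard entry matches.
console.log(WILDCARD_EMAIL.test("*@example.com"));     // true
// Partial wildcards in the local part are rejected, per the error message.
console.log(WILDCARD_EMAIL.test("user*@example.com")); // false
```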
diff --git a/server/routers/resource/removeRoleFromResource.ts b/server/routers/resource/removeRoleFromResource.ts
index cb44ac4a..eab7660c 100644
--- a/server/routers/resource/removeRoleFromResource.ts
+++ b/server/routers/resource/removeRoleFromResource.ts
@@ -49,9 +49,7 @@ export async function removeRoleFromResource(
next: NextFunction
): Promise {
try {
- const parsedBody = removeRoleFromResourceBodySchema.safeParse(
- req.body
- );
+ const parsedBody = removeRoleFromResourceBodySchema.safeParse(req.body);
if (!parsedBody.success) {
return next(
createHttpError(
@@ -95,10 +93,7 @@ export async function removeRoleFromResource(
.select()
.from(roles)
.where(
- and(
- eq(roles.roleId, roleId),
- eq(roles.orgId, resource.orgId)
- )
+ and(eq(roles.roleId, roleId), eq(roles.orgId, resource.orgId))
)
.limit(1);
@@ -163,4 +158,3 @@ export async function removeRoleFromResource(
);
}
}
-
diff --git a/server/routers/resource/removeUserFromResource.ts b/server/routers/resource/removeUserFromResource.ts
index 8dce7e48..9da96d3c 100644
--- a/server/routers/resource/removeUserFromResource.ts
+++ b/server/routers/resource/removeUserFromResource.ts
@@ -49,9 +49,7 @@ export async function removeUserFromResource(
next: NextFunction
): Promise {
try {
- const parsedBody = removeUserFromResourceBodySchema.safeParse(
- req.body
- );
+ const parsedBody = removeUserFromResourceBodySchema.safeParse(req.body);
if (!parsedBody.success) {
return next(
createHttpError(
@@ -133,4 +131,3 @@ export async function removeUserFromResource(
);
}
}
-
diff --git a/server/routers/resource/setResourceHeaderAuth.ts b/server/routers/resource/setResourceHeaderAuth.ts
index 87ffbacd..b89179ae 100644
--- a/server/routers/resource/setResourceHeaderAuth.ts
+++ b/server/routers/resource/setResourceHeaderAuth.ts
@@ -15,9 +15,9 @@ const setResourceAuthMethodsParamsSchema = z.object({
});
const setResourceAuthMethodsBodySchema = z.strictObject({
- user: z.string().min(4).max(100).nullable(),
- password: z.string().min(4).max(100).nullable()
- });
+ user: z.string().min(4).max(100).nullable(),
+ password: z.string().min(4).max(100).nullable()
+});
registry.registerPath({
method: "post",
@@ -75,7 +75,9 @@ export async function setResourceHeaderAuth(
.where(eq(resourceHeaderAuth.resourceId, resourceId));
if (user && password) {
- const headerAuthHash = await hashPassword(Buffer.from(`${user}:${password}`).toString("base64"));
+ const headerAuthHash = await hashPassword(
+ Buffer.from(`${user}:${password}`).toString("base64")
+ );
await trx
.insert(resourceHeaderAuth)
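The reformatted `hashPassword` call above encodes `user:password` as base64 (the HTTP Basic credential shape) before hashing. The encoding step alone looks like this (the function name is hypothetical; the real code then passes the result through `hashPassword`):

```typescript
// Hypothetical helper isolating the base64 step of the header-auth flow.
// The diff hashes this value afterwards; hashing is omitted here.
function encodeHeaderAuth(user: string, password: string): string {
    // Same shape as an HTTP Basic credential: base64("user:password").
    return Buffer.from(`${user}:${password}`).toString("base64");
}

console.log(encodeHeaderAuth("admin", "secret")); // YWRtaW46c2VjcmV0
```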
diff --git a/server/routers/resource/setResourcePassword.ts b/server/routers/resource/setResourcePassword.ts
index 3f9ce9f1..9bd845a4 100644
--- a/server/routers/resource/setResourcePassword.ts
+++ b/server/routers/resource/setResourcePassword.ts
@@ -17,8 +17,8 @@ const setResourceAuthMethodsParamsSchema = z.object({
});
const setResourceAuthMethodsBodySchema = z.strictObject({
- password: z.string().min(4).max(100).nullable()
- });
+ password: z.string().min(4).max(100).nullable()
+});
registry.registerPath({
method: "post",
diff --git a/server/routers/resource/setResourcePincode.ts b/server/routers/resource/setResourcePincode.ts
index 6a88a279..0d527273 100644
--- a/server/routers/resource/setResourcePincode.ts
+++ b/server/routers/resource/setResourcePincode.ts
@@ -18,11 +18,11 @@ const setResourceAuthMethodsParamsSchema = z.object({
});
const setResourceAuthMethodsBodySchema = z.strictObject({
- pincode: z
- .string()
- .regex(/^\d{6}$/)
- .or(z.null())
- });
+ pincode: z
+ .string()
+ .regex(/^\d{6}$/)
+ .or(z.null())
+});
registry.registerPath({
method: "post",
diff --git a/server/routers/resource/setResourceRoles.ts b/server/routers/resource/setResourceRoles.ts
index 5064c7e0..751fe4f9 100644
--- a/server/routers/resource/setResourceRoles.ts
+++ b/server/routers/resource/setResourceRoles.ts
@@ -11,15 +11,12 @@ import { eq, and, ne, inArray } from "drizzle-orm";
import { OpenAPITags, registry } from "@server/openApi";
const setResourceRolesBodySchema = z.strictObject({
- roleIds: z.array(z.int().positive())
- });
+ roleIds: z.array(z.int().positive())
+});
const setResourceRolesParamsSchema = z.strictObject({
- resourceId: z
- .string()
- .transform(Number)
- .pipe(z.int().positive())
- });
+ resourceId: z.string().transform(Number).pipe(z.int().positive())
+});
registry.registerPath({
method: "post",
@@ -113,10 +110,7 @@ export async function setResourceRoles(
.select()
.from(roles)
.where(
- and(
- eq(roles.isAdmin, true),
- eq(roles.orgId, resource.orgId)
- )
+ and(eq(roles.isAdmin, true), eq(roles.orgId, resource.orgId))
);
const adminRoleIds = adminRoles.map((role) => role.roleId);
@@ -129,9 +123,9 @@ export async function setResourceRoles(
)
);
} else {
- await trx.delete(roleResources).where(
- eq(roleResources.resourceId, resourceId)
- );
+ await trx
+ .delete(roleResources)
+ .where(eq(roleResources.resourceId, resourceId));
}
const newRoleResources = await Promise.all(
@@ -158,4 +152,3 @@ export async function setResourceRoles(
);
}
}
-
diff --git a/server/routers/resource/setResourceUsers.ts b/server/routers/resource/setResourceUsers.ts
index b5eca17c..5ddceb8f 100644
--- a/server/routers/resource/setResourceUsers.ts
+++ b/server/routers/resource/setResourceUsers.ts
@@ -11,15 +11,12 @@ import { eq } from "drizzle-orm";
import { OpenAPITags, registry } from "@server/openApi";
const setUserResourcesBodySchema = z.strictObject({
- userIds: z.array(z.string())
- });
+ userIds: z.array(z.string())
+});
const setUserResourcesParamsSchema = z.strictObject({
- resourceId: z
- .string()
- .transform(Number)
- .pipe(z.int().positive())
- });
+ resourceId: z.string().transform(Number).pipe(z.int().positive())
+});
registry.registerPath({
method: "post",
diff --git a/server/routers/resource/setResourceWhitelist.ts b/server/routers/resource/setResourceWhitelist.ts
index 417ef6d9..18f612f2 100644
--- a/server/routers/resource/setResourceWhitelist.ts
+++ b/server/routers/resource/setResourceWhitelist.ts
@@ -11,25 +11,21 @@ import { and, eq } from "drizzle-orm";
import { OpenAPITags, registry } from "@server/openApi";
const setResourceWhitelistBodySchema = z.strictObject({
- emails: z
- .array(
- z.email()
- .or(
- z.string().regex(/^\*@[\w.-]+\.[a-zA-Z]{2,}$/, {
- error: "Invalid email address. Wildcard (*) must be the entire local part."
- })
- )
+ emails: z
+ .array(
+ z.email().or(
+ z.string().regex(/^\*@[\w.-]+\.[a-zA-Z]{2,}$/, {
+ error: "Invalid email address. Wildcard (*) must be the entire local part."
+ })
)
- .max(50)
- .transform((v) => v.map((e) => e.toLowerCase()))
- });
+ )
+ .max(50)
+ .transform((v) => v.map((e) => e.toLowerCase()))
+});
const setResourceWhitelistParamsSchema = z.strictObject({
- resourceId: z
- .string()
- .transform(Number)
- .pipe(z.int().positive())
- });
+ resourceId: z.string().transform(Number).pipe(z.int().positive())
+});
registry.registerPath({
method: "post",
diff --git a/server/routers/resource/updateResource.ts b/server/routers/resource/updateResource.ts
index f3792e28..1dff9757 100644
--- a/server/routers/resource/updateResource.ts
+++ b/server/routers/resource/updateResource.ts
@@ -26,13 +26,11 @@ import { validateHeaders } from "@server/lib/validators";
import { build } from "@server/build";
const updateResourceParamsSchema = z.strictObject({
- resourceId: z
- .string()
- .transform(Number)
- .pipe(z.int().positive())
- });
+ resourceId: z.string().transform(Number).pipe(z.int().positive())
+});
-const updateHttpResourceBodySchema = z.strictObject({
+const updateHttpResourceBodySchema = z
+ .strictObject({
name: z.string().min(1).max(255).optional(),
niceId: z.string().min(1).max(255).optional(),
subdomain: subdomainSchema.nullable().optional(),
@@ -91,7 +89,8 @@ const updateHttpResourceBodySchema = z.strictObject({
export type UpdateResourceResponse = Resource;
-const updateRawResourceBodySchema = z.strictObject({
+const updateRawResourceBodySchema = z
+ .strictObject({
name: z.string().min(1).max(255).optional(),
niceId: z.string().min(1).max(255).optional(),
proxyPort: z.int().min(1).max(65535).optional(),
@@ -239,11 +238,11 @@ async function updateHttpResource(
.select()
.from(resources)
.where(
- and(
- eq(resources.niceId, updateData.niceId),
- eq(resources.orgId, resource.orgId)
- )
- );
+ and(
+ eq(resources.niceId, updateData.niceId),
+ eq(resources.orgId, resource.orgId)
+ )
+ );
if (
existingResource &&
@@ -391,11 +390,11 @@ async function updateRawResource(
.select()
.from(resources)
.where(
- and(
- eq(resources.niceId, updateData.niceId),
- eq(resources.orgId, resource.orgId)
- )
- );
+ and(
+ eq(resources.niceId, updateData.niceId),
+ eq(resources.orgId, resource.orgId)
+ )
+ );
if (
existingResource &&
diff --git a/server/routers/resource/updateResourceRule.ts b/server/routers/resource/updateResourceRule.ts
index b92c3d07..cae3f16e 100644
--- a/server/routers/resource/updateResourceRule.ts
+++ b/server/routers/resource/updateResourceRule.ts
@@ -17,15 +17,13 @@ import { OpenAPITags, registry } from "@server/openApi";
// Define Zod schema for request parameters validation
const updateResourceRuleParamsSchema = z.strictObject({
- ruleId: z.string().transform(Number).pipe(z.int().positive()),
- resourceId: z
- .string()
- .transform(Number)
- .pipe(z.int().positive())
- });
+ ruleId: z.string().transform(Number).pipe(z.int().positive()),
+ resourceId: z.string().transform(Number).pipe(z.int().positive())
+});
// Define Zod schema for request body validation
-const updateResourceRuleSchema = z.strictObject({
+const updateResourceRuleSchema = z
+ .strictObject({
action: z.enum(["ACCEPT", "DROP", "PASS"]).optional(),
match: z.enum(["CIDR", "IP", "PATH", "COUNTRY"]).optional(),
value: z.string().min(1).optional(),
diff --git a/server/routers/role/addRoleAction.ts b/server/routers/role/addRoleAction.ts
index 74540b78..5c258de7 100644
--- a/server/routers/role/addRoleAction.ts
+++ b/server/routers/role/addRoleAction.ts
@@ -10,12 +10,12 @@ import { eq } from "drizzle-orm";
import { fromError } from "zod-validation-error";
const addRoleActionParamSchema = z.strictObject({
- roleId: z.string().transform(Number).pipe(z.int().positive())
- });
+ roleId: z.string().transform(Number).pipe(z.int().positive())
+});
const addRoleActionSchema = z.strictObject({
- actionId: z.string()
- });
+ actionId: z.string()
+});
export async function addRoleAction(
req: Request,
diff --git a/server/routers/role/addRoleSite.ts b/server/routers/role/addRoleSite.ts
index d33c733d..ddd1f07e 100644
--- a/server/routers/role/addRoleSite.ts
+++ b/server/routers/role/addRoleSite.ts
@@ -10,12 +10,12 @@ import { eq } from "drizzle-orm";
import { fromError } from "zod-validation-error";
const addRoleSiteParamsSchema = z.strictObject({
- roleId: z.string().transform(Number).pipe(z.int().positive())
- });
+ roleId: z.string().transform(Number).pipe(z.int().positive())
+});
const addRoleSiteSchema = z.strictObject({
- siteId: z.string().transform(Number).pipe(z.int().positive())
- });
+ siteId: z.string().transform(Number).pipe(z.int().positive())
+});
export async function addRoleSite(
req: Request,
diff --git a/server/routers/role/createRole.ts b/server/routers/role/createRole.ts
index 26573c6c..16696af4 100644
--- a/server/routers/role/createRole.ts
+++ b/server/routers/role/createRole.ts
@@ -12,13 +12,13 @@ import { eq, and } from "drizzle-orm";
import { OpenAPITags, registry } from "@server/openApi";
const createRoleParamsSchema = z.strictObject({
- orgId: z.string()
- });
+ orgId: z.string()
+});
const createRoleSchema = z.strictObject({
- name: z.string().min(1).max(255),
- description: z.string().optional()
- });
+ name: z.string().min(1).max(255),
+ description: z.string().optional()
+});
export const defaultRoleAllowedActions: ActionsEnum[] = [
ActionsEnum.getOrg,
diff --git a/server/routers/role/deleteRole.ts b/server/routers/role/deleteRole.ts
index e4d89b2f..490fe91c 100644
--- a/server/routers/role/deleteRole.ts
+++ b/server/routers/role/deleteRole.ts
@@ -11,12 +11,12 @@ import { fromError } from "zod-validation-error";
import { OpenAPITags, registry } from "@server/openApi";
const deleteRoleSchema = z.strictObject({
- roleId: z.string().transform(Number).pipe(z.int().positive())
- });
+ roleId: z.string().transform(Number).pipe(z.int().positive())
+});
const deelteRoleBodySchema = z.strictObject({
- roleId: z.string().transform(Number).pipe(z.int().positive())
- });
+ roleId: z.string().transform(Number).pipe(z.int().positive())
+});
registry.registerPath({
method: "delete",
diff --git a/server/routers/role/getRole.ts b/server/routers/role/getRole.ts
index afd6e83a..a5c45996 100644
--- a/server/routers/role/getRole.ts
+++ b/server/routers/role/getRole.ts
@@ -11,8 +11,8 @@ import { fromError } from "zod-validation-error";
import { OpenAPITags, registry } from "@server/openApi";
const getRoleSchema = z.strictObject({
- roleId: z.string().transform(Number).pipe(z.int().positive())
- });
+ roleId: z.string().transform(Number).pipe(z.int().positive())
+});
registry.registerPath({
method: "get",
diff --git a/server/routers/role/listRoleActions.ts b/server/routers/role/listRoleActions.ts
index 8392c296..31ef6604 100644
--- a/server/routers/role/listRoleActions.ts
+++ b/server/routers/role/listRoleActions.ts
@@ -10,8 +10,8 @@ import logger from "@server/logger";
import { fromError } from "zod-validation-error";
const listRoleActionsSchema = z.strictObject({
- roleId: z.string().transform(Number).pipe(z.int().positive())
- });
+ roleId: z.string().transform(Number).pipe(z.int().positive())
+});
export async function listRoleActions(
req: Request,
diff --git a/server/routers/role/listRoleResources.ts b/server/routers/role/listRoleResources.ts
index 57a84c5c..7ba1fdab 100644
--- a/server/routers/role/listRoleResources.ts
+++ b/server/routers/role/listRoleResources.ts
@@ -10,8 +10,8 @@ import logger from "@server/logger";
import { fromError } from "zod-validation-error";
const listRoleResourcesSchema = z.strictObject({
- roleId: z.string().transform(Number).pipe(z.int().positive())
- });
+ roleId: z.string().transform(Number).pipe(z.int().positive())
+});
export async function listRoleResources(
req: Request,
diff --git a/server/routers/role/listRoleSites.ts b/server/routers/role/listRoleSites.ts
index f35e367c..1c9dcdbe 100644
--- a/server/routers/role/listRoleSites.ts
+++ b/server/routers/role/listRoleSites.ts
@@ -10,8 +10,8 @@ import logger from "@server/logger";
import { fromError } from "zod-validation-error";
const listRoleSitesSchema = z.strictObject({
- roleId: z.string().transform(Number).pipe(z.int().positive())
- });
+ roleId: z.string().transform(Number).pipe(z.int().positive())
+});
export async function listRoleSites(
req: Request,
diff --git a/server/routers/role/listRoles.ts b/server/routers/role/listRoles.ts
index 14a5c2d1..288a540d 100644
--- a/server/routers/role/listRoles.ts
+++ b/server/routers/role/listRoles.ts
@@ -12,8 +12,8 @@ import stoi from "@server/lib/stoi";
import { OpenAPITags, registry } from "@server/openApi";
const listRolesParamsSchema = z.strictObject({
- orgId: z.string()
- });
+ orgId: z.string()
+});
const listRolesSchema = z.object({
limit: z
diff --git a/server/routers/role/removeRoleAction.ts b/server/routers/role/removeRoleAction.ts
index 25fbaa29..3c2ee788 100644
--- a/server/routers/role/removeRoleAction.ts
+++ b/server/routers/role/removeRoleAction.ts
@@ -10,12 +10,12 @@ import logger from "@server/logger";
import { fromError } from "zod-validation-error";
const removeRoleActionParamsSchema = z.strictObject({
- roleId: z.string().transform(Number).pipe(z.int().positive())
- });
+ roleId: z.string().transform(Number).pipe(z.int().positive())
+});
const removeRoleActionSchema = z.strictObject({
- actionId: z.string()
- });
+ actionId: z.string()
+});
export async function removeRoleAction(
req: Request,
diff --git a/server/routers/role/removeRoleResource.ts b/server/routers/role/removeRoleResource.ts
index d2c7cae9..fac1c941 100644
--- a/server/routers/role/removeRoleResource.ts
+++ b/server/routers/role/removeRoleResource.ts
@@ -10,15 +10,12 @@ import logger from "@server/logger";
import { fromError } from "zod-validation-error";
const removeRoleResourceParamsSchema = z.strictObject({
- roleId: z.string().transform(Number).pipe(z.int().positive())
- });
+ roleId: z.string().transform(Number).pipe(z.int().positive())
+});
const removeRoleResourceSchema = z.strictObject({
- resourceId: z
- .string()
- .transform(Number)
- .pipe(z.int().positive())
- });
+ resourceId: z.string().transform(Number).pipe(z.int().positive())
+});
export async function removeRoleResource(
req: Request,
diff --git a/server/routers/role/removeRoleSite.ts b/server/routers/role/removeRoleSite.ts
index 8092eed1..6c64820e 100644
--- a/server/routers/role/removeRoleSite.ts
+++ b/server/routers/role/removeRoleSite.ts
@@ -10,12 +10,12 @@ import logger from "@server/logger";
import { fromError } from "zod-validation-error";
const removeRoleSiteParamsSchema = z.strictObject({
- roleId: z.string().transform(Number).pipe(z.int().positive())
- });
+ roleId: z.string().transform(Number).pipe(z.int().positive())
+});
const removeRoleSiteSchema = z.strictObject({
- siteId: z.string().transform(Number).pipe(z.int().positive())
- });
+ siteId: z.string().transform(Number).pipe(z.int().positive())
+});
export async function removeRoleSite(
req: Request,
diff --git a/server/routers/role/updateRole.ts b/server/routers/role/updateRole.ts
index 136ca389..c9f63a7b 100644
--- a/server/routers/role/updateRole.ts
+++ b/server/routers/role/updateRole.ts
@@ -10,10 +10,11 @@ import logger from "@server/logger";
import { fromError } from "zod-validation-error";
const updateRoleParamsSchema = z.strictObject({
- roleId: z.string().transform(Number).pipe(z.int().positive())
- });
+ roleId: z.string().transform(Number).pipe(z.int().positive())
+});
-const updateRoleBodySchema = z.strictObject({
+const updateRoleBodySchema = z
+ .strictObject({
name: z.string().min(1).max(255).optional(),
description: z.string().optional()
})
diff --git a/server/routers/site/createSite.ts b/server/routers/site/createSite.ts
index 2ec8d3dc..c798ea30 100644
--- a/server/routers/site/createSite.ts
+++ b/server/routers/site/createSite.ts
@@ -20,25 +20,25 @@ import { verifyExitNodeOrgAccess } from "#dynamic/lib/exitNodes";
import { build } from "@server/build";
const createSiteParamsSchema = z.strictObject({
- orgId: z.string()
- });
+ orgId: z.string()
+});
const createSiteSchema = z.strictObject({
- name: z.string().min(1).max(255),
- exitNodeId: z.int().positive().optional(),
- // subdomain: z
- // .string()
- // .min(1)
- // .max(255)
- // .transform((val) => val.toLowerCase())
- // .optional(),
- pubKey: z.string().optional(),
- subnet: z.string().optional(),
- newtId: z.string().optional(),
- secret: z.string().optional(),
- address: z.string().optional(),
- type: z.enum(["newt", "wireguard", "local"])
- });
+ name: z.string().min(1).max(255),
+ exitNodeId: z.int().positive().optional(),
+ // subdomain: z
+ // .string()
+ // .min(1)
+ // .max(255)
+ // .transform((val) => val.toLowerCase())
+ // .optional(),
+ pubKey: z.string().optional(),
+ subnet: z.string().optional(),
+ newtId: z.string().optional(),
+ secret: z.string().optional(),
+ address: z.string().optional(),
+ type: z.enum(["newt", "wireguard", "local"])
+});
// .refine((data) => {
// if (data.type === "local") {
// return !config.getRawConfig().flags?.disable_local_sites;
diff --git a/server/routers/site/deleteSite.ts b/server/routers/site/deleteSite.ts
index a086e143..09750c31 100644
--- a/server/routers/site/deleteSite.ts
+++ b/server/routers/site/deleteSite.ts
@@ -13,8 +13,8 @@ import { sendToClient } from "#dynamic/routers/ws";
import { OpenAPITags, registry } from "@server/openApi";
const deleteSiteSchema = z.strictObject({
- siteId: z.string().transform(Number).pipe(z.int().positive())
- });
+ siteId: z.string().transform(Number).pipe(z.int().positive())
+});
registry.registerPath({
method: "delete",
@@ -93,8 +93,11 @@ export async function deleteSite(
data: {}
};
// Don't await this to prevent blocking the response
- sendToClient(deletedNewtId, payload).catch(error => {
- logger.error("Failed to send termination message to newt:", error);
+ sendToClient(deletedNewtId, payload).catch((error) => {
+ logger.error(
+ "Failed to send termination message to newt:",
+ error
+ );
});
}
diff --git a/server/routers/site/index.ts b/server/routers/site/index.ts
index b97557a8..3edf67c1 100644
--- a/server/routers/site/index.ts
+++ b/server/routers/site/index.ts
@@ -5,4 +5,4 @@ export * from "./updateSite";
export * from "./listSites";
export * from "./listSiteRoles";
export * from "./pickSiteDefaults";
-export * from "./socketIntegration";
\ No newline at end of file
+export * from "./socketIntegration";
diff --git a/server/routers/site/listSiteRoles.ts b/server/routers/site/listSiteRoles.ts
index ec66d3c5..a2cacf1d 100644
--- a/server/routers/site/listSiteRoles.ts
+++ b/server/routers/site/listSiteRoles.ts
@@ -10,8 +10,8 @@ import logger from "@server/logger";
import { fromError } from "zod-validation-error";
const listSiteRolesSchema = z.strictObject({
- siteId: z.string().transform(Number).pipe(z.int().positive())
- });
+ siteId: z.string().transform(Number).pipe(z.int().positive())
+});
export async function listSiteRoles(
req: Request,
diff --git a/server/routers/site/listSites.ts b/server/routers/site/listSites.ts
index f0854764..37ca8fe4 100644
--- a/server/routers/site/listSites.ts
+++ b/server/routers/site/listSites.ts
@@ -69,8 +69,8 @@ async function getLatestNewtVersion(): Promise {
}
const listSitesParamsSchema = z.strictObject({
- orgId: z.string()
- });
+ orgId: z.string()
+});
const listSitesSchema = z.object({
limit: z
diff --git a/server/routers/site/pickSiteDefaults.ts b/server/routers/site/pickSiteDefaults.ts
index 029ae322..69ed7688 100644
--- a/server/routers/site/pickSiteDefaults.ts
+++ b/server/routers/site/pickSiteDefaults.ts
@@ -45,8 +45,8 @@ registry.registerPath({
});
const pickSiteDefaultsSchema = z.strictObject({
- orgId: z.string()
- });
+ orgId: z.string()
+});
export async function pickSiteDefaults(
req: Request,
@@ -74,7 +74,10 @@ export async function pickSiteDefaults(
if (!randomExitNode) {
return next(
- createHttpError(HttpCode.INTERNAL_SERVER_ERROR, "No available exit node")
+ createHttpError(
+ HttpCode.INTERNAL_SERVER_ERROR,
+ "No available exit node"
+ )
);
}
@@ -90,7 +93,10 @@ export async function pickSiteDefaults(
// TODO: we need to lock this subnet for some time so someone else does not take it
const subnets = sitesQuery
.map((site) => site.subnet)
- .filter((subnet) => subnet && /^(\d{1,3}\.){3}\d{1,3}\/\d{1,2}$/.test(subnet))
+ .filter(
+ (subnet) =>
+ subnet && /^(\d{1,3}\.){3}\d{1,3}\/\d{1,2}$/.test(subnet)
+ )
.filter((subnet) => subnet !== null);
// exclude the exit node address by replacing after the / with a site block size
subnets.push(
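The reflowed filter in `pickSiteDefaults` keeps only strings shaped like IPv4 CIDR blocks. The regex is purely syntactic, as a standalone sketch shows:

```typescript
// The syntactic IPv4-CIDR check from the subnet filter. It validates shape only:
// octets up to 999 and prefixes up to 99 still pass, so it is a pre-filter, not a validator.
const CIDR_SHAPE = /^(\d{1,3}\.){3}\d{1,3}\/\d{1,2}$/;

console.log(CIDR_SHAPE.test("10.0.0.0/24"));  // true
console.log(CIDR_SHAPE.test("10.0.0.0"));     // false (no prefix length)
console.log(CIDR_SHAPE.test("999.0.0.0/99")); // true — shape-valid, semantically bogus
```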
diff --git a/server/routers/site/socketIntegration.ts b/server/routers/site/socketIntegration.ts
index 33893000..e0ad09d1 100644
--- a/server/routers/site/socketIntegration.ts
+++ b/server/routers/site/socketIntegration.ts
@@ -10,10 +10,7 @@ import { z } from "zod";
import { fromError } from "zod-validation-error";
import stoi from "@server/lib/stoi";
import { sendToClient } from "#dynamic/routers/ws";
-import {
- fetchContainers,
- dockerSocket
-} from "../newt/dockerSocket";
+import { fetchContainers, dockerSocket } from "../newt/dockerSocket";
import cache from "@server/lib/cache";
export interface ContainerNetwork {
@@ -47,13 +44,13 @@ export interface Container {
}
const siteIdParamsSchema = z.strictObject({
- siteId: z.string().transform(stoi).pipe(z.int().positive())
- });
+ siteId: z.string().transform(stoi).pipe(z.int().positive())
+});
const DockerStatusSchema = z.strictObject({
- isAvailable: z.boolean(),
- socketPath: z.string().optional()
- });
+ isAvailable: z.boolean(),
+ socketPath: z.string().optional()
+});
function validateSiteIdParams(params: any) {
const parsedParams = siteIdParamsSchema.safeParse(params);
@@ -161,9 +158,7 @@ async function triggerFetch(siteId: number) {
async function queryContainers(siteId: number) {
const { newt } = await getSiteAndNewt(siteId);
- const result = cache.get(
- `${newt.newtId}:dockerContainers`
- ) as Container[];
+ const result = cache.get(`${newt.newtId}:dockerContainers`) as Container[];
if (!result) {
throw createHttpError(
HttpCode.TOO_EARLY,
diff --git a/server/routers/site/updateSite.ts b/server/routers/site/updateSite.ts
index 4c25d4c5..44764362 100644
--- a/server/routers/site/updateSite.ts
+++ b/server/routers/site/updateSite.ts
@@ -12,16 +12,15 @@ import { OpenAPITags, registry } from "@server/openApi";
import { isValidCIDR } from "@server/lib/validators";
const updateSiteParamsSchema = z.strictObject({
- siteId: z.string().transform(Number).pipe(z.int().positive())
- });
+ siteId: z.string().transform(Number).pipe(z.int().positive())
+});
-const updateSiteBodySchema = z.strictObject({
+const updateSiteBodySchema = z
+ .strictObject({
name: z.string().min(1).max(255).optional(),
niceId: z.string().min(1).max(255).optional(),
dockerSocketEnabled: z.boolean().optional(),
- remoteSubnets: z
- .string()
- .optional()
+ remoteSubnets: z.string().optional()
// subdomain: z
// .string()
// .min(1)
@@ -41,8 +40,7 @@ const updateSiteBodySchema = z.strictObject({
registry.registerPath({
method: "post",
path: "/site/{siteId}",
- description:
- "Update a site.",
+ description: "Update a site.",
tags: [OpenAPITags.Site],
request: {
params: updateSiteParamsSchema,
@@ -111,7 +109,9 @@ export async function updateSite(
// if remoteSubnets is provided, ensure it's a valid comma-separated list of cidrs
if (updateData.remoteSubnets) {
- const subnets = updateData.remoteSubnets.split(",").map((s) => s.trim());
+ const subnets = updateData.remoteSubnets
+ .split(",")
+ .map((s) => s.trim());
for (const subnet of subnets) {
if (!isValidCIDR(subnet)) {
return next(
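The reflowed block in `updateSite` splits `remoteSubnets` on commas, trims each entry, and rejects the request on the first invalid CIDR. A standalone sketch of that loop (the real `isValidCIDR` lives in `@server/lib/validators`; a syntactic stand-in is used here):

```typescript
// Hypothetical stand-in for @server/lib/validators' isValidCIDR (shape check only).
const isValidCIDRShape = (s: string) => /^(\d{1,3}\.){3}\d{1,3}\/\d{1,2}$/.test(s);

// Mirrors the updateSite loop: split, trim, fail fast on the first bad entry.
function firstInvalidSubnet(remoteSubnets: string): string | null {
    const subnets = remoteSubnets.split(",").map((s) => s.trim());
    for (const subnet of subnets) {
        if (!isValidCIDRShape(subnet)) return subnet;
    }
    return null;
}
```

Trimming before validation is what lets callers send `"10.0.0.0/24, 192.168.1.0/24"` with spaces after the commas.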
diff --git a/server/routers/siteResource/addClientToSiteResource.ts b/server/routers/siteResource/addClientToSiteResource.ts
index 587294e5..27d7f057 100644
--- a/server/routers/siteResource/addClientToSiteResource.ts
+++ b/server/routers/siteResource/addClientToSiteResource.ts
@@ -28,7 +28,8 @@ const addClientToSiteResourceParamsSchema = z
registry.registerPath({
method: "post",
path: "/site-resource/{siteResourceId}/clients/add",
- description: "Add a single client to a site resource. Clients with a userId cannot be added.",
+ description:
+ "Add a single client to a site resource. Clients with a userId cannot be added.",
tags: [OpenAPITags.Resource, OpenAPITags.Client],
request: {
params: addClientToSiteResourceParamsSchema,
@@ -49,7 +50,9 @@ export async function addClientToSiteResource(
next: NextFunction
): Promise {
try {
- const parsedBody = addClientToSiteResourceBodySchema.safeParse(req.body);
+ const parsedBody = addClientToSiteResourceBodySchema.safeParse(
+ req.body
+ );
if (!parsedBody.success) {
return next(
createHttpError(
@@ -153,4 +156,3 @@ export async function addClientToSiteResource(
);
}
}
-
diff --git a/server/routers/siteResource/addRoleToSiteResource.ts b/server/routers/siteResource/addRoleToSiteResource.ts
index 542ca535..abc2d221 100644
--- a/server/routers/siteResource/addRoleToSiteResource.ts
+++ b/server/routers/siteResource/addRoleToSiteResource.ts
@@ -163,4 +163,3 @@ export async function addRoleToSiteResource(
);
}
}
-
diff --git a/server/routers/siteResource/addUserToSiteResource.ts b/server/routers/siteResource/addUserToSiteResource.ts
index c9d1f30a..4edf741c 100644
--- a/server/routers/siteResource/addUserToSiteResource.ts
+++ b/server/routers/siteResource/addUserToSiteResource.ts
@@ -132,4 +132,3 @@ export async function addUserToSiteResource(
);
}
}
-
diff --git a/server/routers/siteResource/createSiteResource.ts b/server/routers/siteResource/createSiteResource.ts
index e5719e7f..c103b09e 100644
--- a/server/routers/siteResource/createSiteResource.ts
+++ b/server/routers/siteResource/createSiteResource.ts
@@ -10,7 +10,7 @@ import {
userSiteResources
} from "@server/db";
import { getUniqueSiteResourceName } from "@server/db/names";
-import { getNextAvailableAliasAddress } from "@server/lib/ip";
+import { getNextAvailableAliasAddress, portRangeStringSchema } from "@server/lib/ip";
import { rebuildClientAssociationsFromSiteResource } from "@server/lib/rebuildClientAssociations";
import response from "@server/lib/response";
import logger from "@server/logger";
@@ -39,13 +39,16 @@ const createSiteResourceSchema = z
alias: z
.string()
.regex(
- /^(?:[a-zA-Z0-9](?:[a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?\.)+[a-zA-Z0-9](?:[a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?$/,
- "Alias must be a fully qualified domain name (e.g., example.com)"
+ /^(?:[a-zA-Z0-9*?](?:[a-zA-Z0-9*?-]{0,61}[a-zA-Z0-9*?])?\.)+[a-zA-Z0-9](?:[a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?$/,
+ "Alias must be a fully qualified domain name with optional wildcards (e.g., example.com, *.example.com, host-0?.example.internal)"
)
.optional(),
userIds: z.array(z.string()),
roleIds: z.array(z.int()),
- clientIds: z.array(z.int())
+ clientIds: z.array(z.int()),
+ tcpPortRangeString: portRangeStringSchema,
+ udpPortRangeString: portRangeStringSchema,
+ disableIcmp: z.boolean().optional()
})
.strict()
.refine(
@@ -53,7 +56,8 @@ const createSiteResourceSchema = z
if (data.mode === "host") {
// Check if it's a valid IP address using zod (v4 or v6)
const isValidIP = z
- .union([z.ipv4(), z.ipv6()])
+ // .union([z.ipv4(), z.ipv6()])
+ .union([z.ipv4()]) // for now, let's just do IPv4 until we verify IPv6 works everywhere
.safeParse(data.destination).success;
if (isValidIP) {
@@ -64,7 +68,7 @@ const createSiteResourceSchema = z
const domainRegex =
/^(?:[a-zA-Z0-9](?:[a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?\.)*[a-zA-Z0-9](?:[a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?$/;
const isValidDomain = domainRegex.test(data.destination);
- const isValidAlias = data.alias && domainRegex.test(data.alias);
+ const isValidAlias = data.alias !== undefined && data.alias !== null && data.alias.trim() !== "";
return isValidDomain && isValidAlias; // require the alias to be set in the case of domain
}
@@ -80,7 +84,8 @@ const createSiteResourceSchema = z
if (data.mode === "cidr") {
// Check if it's a valid CIDR (v4 or v6)
const isValidCIDR = z
- .union([z.cidrv4(), z.cidrv6()])
+ // .union([z.cidrv4(), z.cidrv6()])
+ .union([z.cidrv4()]) // for now, let's just do IPv4 until we verify IPv6 works everywhere
.safeParse(data.destination).success;
return isValidCIDR;
}
@@ -152,7 +157,10 @@ export async function createSiteResource(
alias,
userIds,
roleIds,
- clientIds
+ clientIds,
+ tcpPortRangeString,
+ udpPortRangeString,
+ disableIcmp
} = parsedBody.data;
// Verify the site exists and belongs to the org
@@ -237,7 +245,10 @@ export async function createSiteResource(
destination,
enabled,
alias,
- aliasAddress
+ aliasAddress,
+ tcpPortRangeString,
+ udpPortRangeString,
+ disableIcmp
})
.returning();
diff --git a/server/routers/siteResource/deleteSiteResource.ts b/server/routers/siteResource/deleteSiteResource.ts
index a7175608..3d1e70cc 100644
--- a/server/routers/siteResource/deleteSiteResource.ts
+++ b/server/routers/siteResource/deleteSiteResource.ts
@@ -106,7 +106,10 @@ export async function deleteSiteResource(
);
}
- await rebuildClientAssociationsFromSiteResource(removedSiteResource, trx);
+ await rebuildClientAssociationsFromSiteResource(
+ removedSiteResource,
+ trx
+ );
});
logger.info(
diff --git a/server/routers/siteResource/getSiteResource.ts b/server/routers/siteResource/getSiteResource.ts
index 48f10b8b..7cb9e620 100644
--- a/server/routers/siteResource/getSiteResource.ts
+++ b/server/routers/siteResource/getSiteResource.ts
@@ -11,44 +11,55 @@ import logger from "@server/logger";
import { OpenAPITags, registry } from "@server/openApi";
const getSiteResourceParamsSchema = z.strictObject({
- siteResourceId: z
- .string()
- .optional()
- .transform((val) => val ? Number(val) : undefined)
- .pipe(z.int().positive().optional())
- .optional(),
- siteId: z.string().transform(Number).pipe(z.int().positive()),
- niceId: z.string().optional(),
- orgId: z.string()
- });
+ siteResourceId: z
+ .string()
+ .optional()
+ .transform((val) => (val ? Number(val) : undefined))
+ .pipe(z.int().positive().optional())
+ .optional(),
+ siteId: z.string().transform(Number).pipe(z.int().positive()),
+ niceId: z.string().optional(),
+ orgId: z.string()
+});
-async function query(siteResourceId?: number, siteId?: number, niceId?: string, orgId?: string) {
+async function query(
+ siteResourceId?: number,
+ siteId?: number,
+ niceId?: string,
+ orgId?: string
+) {
if (siteResourceId && siteId && orgId) {
const [siteResource] = await db
.select()
.from(siteResources)
- .where(and(
- eq(siteResources.siteResourceId, siteResourceId),
- eq(siteResources.siteId, siteId),
- eq(siteResources.orgId, orgId)
- ))
+ .where(
+ and(
+ eq(siteResources.siteResourceId, siteResourceId),
+ eq(siteResources.siteId, siteId),
+ eq(siteResources.orgId, orgId)
+ )
+ )
.limit(1);
return siteResource;
} else if (niceId && siteId && orgId) {
const [siteResource] = await db
.select()
.from(siteResources)
- .where(and(
- eq(siteResources.niceId, niceId),
- eq(siteResources.siteId, siteId),
- eq(siteResources.orgId, orgId)
- ))
+ .where(
+ and(
+ eq(siteResources.niceId, niceId),
+ eq(siteResources.siteId, siteId),
+ eq(siteResources.orgId, orgId)
+ )
+ )
.limit(1);
return siteResource;
}
}
-export type GetSiteResourceResponse = NonNullable<Awaited<ReturnType<typeof query>>>;
+export type GetSiteResourceResponse = NonNullable<
+ Awaited<ReturnType<typeof query>>
+>;
registry.registerPath({
method: "get",
@@ -103,10 +114,7 @@ export async function getSiteResource(
if (!siteResource) {
return next(
- createHttpError(
- HttpCode.NOT_FOUND,
- "Site resource not found"
- )
+ createHttpError(HttpCode.NOT_FOUND, "Site resource not found")
);
}
@@ -119,6 +127,11 @@ export async function getSiteResource(
});
} catch (error) {
logger.error("Error getting site resource:", error);
- return next(createHttpError(HttpCode.INTERNAL_SERVER_ERROR, "Failed to get site resource"));
+ return next(
+ createHttpError(
+ HttpCode.INTERNAL_SERVER_ERROR,
+ "Failed to get site resource"
+ )
+ );
}
}
diff --git a/server/routers/siteResource/listAllSiteResourcesByOrg.ts b/server/routers/siteResource/listAllSiteResourcesByOrg.ts
index 5de66505..7b2e0233 100644
--- a/server/routers/siteResource/listAllSiteResourcesByOrg.ts
+++ b/server/routers/siteResource/listAllSiteResourcesByOrg.ts
@@ -11,8 +11,8 @@ import logger from "@server/logger";
import { OpenAPITags, registry } from "@server/openApi";
const listAllSiteResourcesByOrgParamsSchema = z.strictObject({
- orgId: z.string()
- });
+ orgId: z.string()
+});
const listAllSiteResourcesByOrgQuerySchema = z.object({
limit: z
@@ -30,7 +30,11 @@ const listAllSiteResourcesByOrgQuerySchema = z.object({
});
export type ListAllSiteResourcesByOrgResponse = {
- siteResources: (SiteResource & { siteName: string, siteNiceId: string, siteAddress: string | null })[];
+ siteResources: (SiteResource & {
+ siteName: string;
+ siteNiceId: string;
+ siteAddress: string | null;
+ })[];
};
registry.registerPath({
@@ -51,7 +55,9 @@ export async function listAllSiteResourcesByOrg(
next: NextFunction
): Promise {
try {
- const parsedParams = listAllSiteResourcesByOrgParamsSchema.safeParse(req.params);
+ const parsedParams = listAllSiteResourcesByOrgParamsSchema.safeParse(
+ req.params
+ );
if (!parsedParams.success) {
return next(
createHttpError(
@@ -61,7 +67,9 @@ export async function listAllSiteResourcesByOrg(
);
}
- const parsedQuery = listAllSiteResourcesByOrgQuerySchema.safeParse(req.query);
+ const parsedQuery = listAllSiteResourcesByOrgQuerySchema.safeParse(
+ req.query
+ );
if (!parsedQuery.success) {
return next(
createHttpError(
@@ -89,6 +97,9 @@ export async function listAllSiteResourcesByOrg(
destination: siteResources.destination,
enabled: siteResources.enabled,
alias: siteResources.alias,
+ tcpPortRangeString: siteResources.tcpPortRangeString,
+ udpPortRangeString: siteResources.udpPortRangeString,
+ disableIcmp: siteResources.disableIcmp,
siteName: sites.name,
siteNiceId: sites.niceId,
siteAddress: sites.address
@@ -108,6 +119,11 @@ export async function listAllSiteResourcesByOrg(
});
} catch (error) {
logger.error("Error listing all site resources by org:", error);
- return next(createHttpError(HttpCode.INTERNAL_SERVER_ERROR, "Failed to list site resources"));
+ return next(
+ createHttpError(
+ HttpCode.INTERNAL_SERVER_ERROR,
+ "Failed to list site resources"
+ )
+ );
}
}
diff --git a/server/routers/siteResource/listSiteResourceClients.ts b/server/routers/siteResource/listSiteResourceClients.ts
index 9b04ac32..772750d1 100644
--- a/server/routers/siteResource/listSiteResourceClients.ts
+++ b/server/routers/siteResource/listSiteResourceClients.ts
@@ -52,7 +52,9 @@ export async function listSiteResourceClients(
next: NextFunction
): Promise {
try {
- const parsedParams = listSiteResourceClientsSchema.safeParse(req.params);
+ const parsedParams = listSiteResourceClientsSchema.safeParse(
+ req.params
+ );
if (!parsedParams.success) {
return next(
createHttpError(
@@ -82,4 +84,3 @@ export async function listSiteResourceClients(
);
}
}
-
diff --git a/server/routers/siteResource/listSiteResourceRoles.ts b/server/routers/siteResource/listSiteResourceRoles.ts
index 5504c003..0dc5913b 100644
--- a/server/routers/siteResource/listSiteResourceRoles.ts
+++ b/server/routers/siteResource/listSiteResourceRoles.ts
@@ -83,4 +83,3 @@ export async function listSiteResourceRoles(
);
}
}
-
diff --git a/server/routers/siteResource/listSiteResourceUsers.ts b/server/routers/siteResource/listSiteResourceUsers.ts
index 6cc19557..daf75480 100644
--- a/server/routers/siteResource/listSiteResourceUsers.ts
+++ b/server/routers/siteResource/listSiteResourceUsers.ts
@@ -86,4 +86,3 @@ export async function listSiteResourceUsers(
);
}
}
-
diff --git a/server/routers/siteResource/listSiteResources.ts b/server/routers/siteResource/listSiteResources.ts
index e530952d..6ecda7c4 100644
--- a/server/routers/siteResource/listSiteResources.ts
+++ b/server/routers/siteResource/listSiteResources.ts
@@ -11,9 +11,9 @@ import logger from "@server/logger";
import { OpenAPITags, registry } from "@server/openApi";
const listSiteResourcesParamsSchema = z.strictObject({
- siteId: z.string().transform(Number).pipe(z.int().positive()),
- orgId: z.string()
- });
+ siteId: z.string().transform(Number).pipe(z.int().positive()),
+ orgId: z.string()
+});
const listSiteResourcesQuerySchema = z.object({
limit: z
@@ -52,7 +52,9 @@ export async function listSiteResources(
next: NextFunction
): Promise {
try {
- const parsedParams = listSiteResourcesParamsSchema.safeParse(req.params);
+ const parsedParams = listSiteResourcesParamsSchema.safeParse(
+ req.params
+ );
if (!parsedParams.success) {
return next(
createHttpError(
@@ -83,22 +85,19 @@ export async function listSiteResources(
.limit(1);
if (site.length === 0) {
- return next(
- createHttpError(
- HttpCode.NOT_FOUND,
- "Site not found"
- )
- );
+ return next(createHttpError(HttpCode.NOT_FOUND, "Site not found"));
}
// Get site resources
const siteResourcesList = await db
.select()
.from(siteResources)
- .where(and(
- eq(siteResources.siteId, siteId),
- eq(siteResources.orgId, orgId)
- ))
+ .where(
+ and(
+ eq(siteResources.siteId, siteId),
+ eq(siteResources.orgId, orgId)
+ )
+ )
.limit(limit)
.offset(offset);
@@ -111,6 +110,11 @@ export async function listSiteResources(
});
} catch (error) {
logger.error("Error listing site resources:", error);
- return next(createHttpError(HttpCode.INTERNAL_SERVER_ERROR, "Failed to list site resources"));
+ return next(
+ createHttpError(
+ HttpCode.INTERNAL_SERVER_ERROR,
+ "Failed to list site resources"
+ )
+ );
}
}
diff --git a/server/routers/siteResource/removeClientFromSiteResource.ts b/server/routers/siteResource/removeClientFromSiteResource.ts
index c6a5dfe8..351128d1 100644
--- a/server/routers/siteResource/removeClientFromSiteResource.ts
+++ b/server/routers/siteResource/removeClientFromSiteResource.ts
@@ -28,7 +28,8 @@ const removeClientFromSiteResourceParamsSchema = z
registry.registerPath({
method: "post",
path: "/site-resource/{siteResourceId}/clients/remove",
- description: "Remove a single client from a site resource. Clients with a userId cannot be removed.",
+ description:
+ "Remove a single client from a site resource. Clients with a userId cannot be removed.",
tags: [OpenAPITags.Resource, OpenAPITags.Client],
request: {
params: removeClientFromSiteResourceParamsSchema,
@@ -159,4 +160,3 @@ export async function removeClientFromSiteResource(
);
}
}
-
diff --git a/server/routers/siteResource/removeRoleFromSiteResource.ts b/server/routers/siteResource/removeRoleFromSiteResource.ts
index 0041ed83..c9857e84 100644
--- a/server/routers/siteResource/removeRoleFromSiteResource.ts
+++ b/server/routers/siteResource/removeRoleFromSiteResource.ts
@@ -151,7 +151,7 @@ export async function removeRoleFromSiteResource(
)
);
- await rebuildClientAssociationsFromSiteResource(siteResource, trx);
+ await rebuildClientAssociationsFromSiteResource(siteResource, trx);
});
return response(res, {
@@ -168,4 +168,3 @@ export async function removeRoleFromSiteResource(
);
}
}
-
diff --git a/server/routers/siteResource/removeUserFromSiteResource.ts b/server/routers/siteResource/removeUserFromSiteResource.ts
index 280a01f2..84347b2f 100644
--- a/server/routers/siteResource/removeUserFromSiteResource.ts
+++ b/server/routers/siteResource/removeUserFromSiteResource.ts
@@ -138,4 +138,3 @@ export async function removeUserFromSiteResource(
);
}
}
-
diff --git a/server/routers/siteResource/setSiteResourceClients.ts b/server/routers/siteResource/setSiteResourceClients.ts
index 0a25b7e9..5a8acbcf 100644
--- a/server/routers/siteResource/setSiteResourceClients.ts
+++ b/server/routers/siteResource/setSiteResourceClients.ts
@@ -62,7 +62,9 @@ export async function setSiteResourceClients(
const { clientIds } = parsedBody.data;
- const parsedParams = setSiteResourceClientsParamsSchema.safeParse(req.params);
+ const parsedParams = setSiteResourceClientsParamsSchema.safeParse(
+ req.params
+ );
if (!parsedParams.success) {
return next(
createHttpError(
@@ -95,9 +97,7 @@ export async function setSiteResourceClients(
const clientsWithUsers = await db
.select()
.from(clients)
- .where(
- inArray(clients.clientId, clientIds)
- );
+ .where(inArray(clients.clientId, clientIds));
const clientsWithUserId = clientsWithUsers.filter(
(client) => client.userId !== null
@@ -119,9 +119,12 @@ export async function setSiteResourceClients(
.where(eq(clientSiteResources.siteResourceId, siteResourceId));
if (clientIds.length > 0) {
- await trx
- .insert(clientSiteResources)
- .values(clientIds.map((clientId) => ({ clientId, siteResourceId })));
+ await trx.insert(clientSiteResources).values(
+ clientIds.map((clientId) => ({
+ clientId,
+ siteResourceId
+ }))
+ );
}
await rebuildClientAssociationsFromSiteResource(siteResource, trx);
@@ -141,4 +144,3 @@ export async function setSiteResourceClients(
);
}
}
-
diff --git a/server/routers/siteResource/setSiteResourceRoles.ts b/server/routers/siteResource/setSiteResourceRoles.ts
index 7aa07de1..bb71a16b 100644
--- a/server/routers/siteResource/setSiteResourceRoles.ts
+++ b/server/routers/siteResource/setSiteResourceRoles.ts
@@ -136,15 +136,19 @@ export async function setSiteResourceRoles(
)
);
} else {
- await trx.delete(roleSiteResources).where(
- eq(roleSiteResources.siteResourceId, siteResourceId)
- );
+ await trx
+ .delete(roleSiteResources)
+ .where(
+ eq(roleSiteResources.siteResourceId, siteResourceId)
+ );
}
if (roleIds.length > 0) {
await trx
.insert(roleSiteResources)
- .values(roleIds.map((roleId) => ({ roleId, siteResourceId })));
+ .values(
+ roleIds.map((roleId) => ({ roleId, siteResourceId }))
+ );
}
await rebuildClientAssociationsFromSiteResource(siteResource, trx);
diff --git a/server/routers/siteResource/setSiteResourceUsers.ts b/server/routers/siteResource/setSiteResourceUsers.ts
index 4dae0ada..eacd826c 100644
--- a/server/routers/siteResource/setSiteResourceUsers.ts
+++ b/server/routers/siteResource/setSiteResourceUsers.ts
@@ -63,7 +63,9 @@ export async function setSiteResourceUsers(
const { userIds } = parsedBody.data;
- const parsedParams = setSiteResourceUsersParamsSchema.safeParse(req.params);
+ const parsedParams = setSiteResourceUsersParamsSchema.safeParse(
+ req.params
+ );
if (!parsedParams.success) {
return next(
createHttpError(
@@ -99,7 +101,9 @@ export async function setSiteResourceUsers(
if (userIds.length > 0) {
await trx
.insert(userSiteResources)
- .values(userIds.map((userId) => ({ userId, siteResourceId })));
+ .values(
+ userIds.map((userId) => ({ userId, siteResourceId }))
+ );
}
await rebuildClientAssociationsFromSiteResource(siteResource, trx);
@@ -119,4 +123,3 @@ export async function setSiteResourceUsers(
);
}
}
-
diff --git a/server/routers/siteResource/updateSiteResource.ts b/server/routers/siteResource/updateSiteResource.ts
index efc4939b..c3360e6f 100644
--- a/server/routers/siteResource/updateSiteResource.ts
+++ b/server/routers/siteResource/updateSiteResource.ts
@@ -23,7 +23,8 @@ import { updatePeerData, updateTargets } from "@server/routers/client/targets";
import {
generateAliasConfig,
generateRemoteSubnets,
- generateSubnetProxyTargets
+ generateSubnetProxyTargets,
+ portRangeStringSchema
} from "@server/lib/ip";
import {
getClientSiteResourceAccess,
@@ -49,20 +50,24 @@ const updateSiteResourceSchema = z
alias: z
.string()
.regex(
- /^(?:[a-zA-Z0-9](?:[a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?\.)+[a-zA-Z0-9](?:[a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?$/,
- "Alias must be a fully qualified domain name (e.g., example.internal)"
+ /^(?:[a-zA-Z0-9*?](?:[a-zA-Z0-9*?-]{0,61}[a-zA-Z0-9*?])?\.)+[a-zA-Z0-9](?:[a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?$/,
+ "Alias must be a fully qualified domain name with optional wildcards (e.g., example.internal, *.example.internal, host-0?.example.internal)"
)
.nullish(),
userIds: z.array(z.string()),
roleIds: z.array(z.int()),
- clientIds: z.array(z.int())
+ clientIds: z.array(z.int()),
+ tcpPortRangeString: portRangeStringSchema,
+ udpPortRangeString: portRangeStringSchema,
+ disableIcmp: z.boolean().optional()
})
.strict()
.refine(
(data) => {
if (data.mode === "host" && data.destination) {
const isValidIP = z
- .union([z.ipv4(), z.ipv6()])
+ // .union([z.ipv4(), z.ipv6()])
+ .union([z.ipv4()]) // for now, let's just do IPv4 until we verify IPv6 works everywhere
.safeParse(data.destination).success;
if (isValidIP) {
@@ -73,7 +78,7 @@ const updateSiteResourceSchema = z
const domainRegex =
/^(?:[a-zA-Z0-9](?:[a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?\.)*[a-zA-Z0-9](?:[a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?$/;
const isValidDomain = domainRegex.test(data.destination);
- const isValidAlias = data.alias && domainRegex.test(data.alias);
+ const isValidAlias = data.alias !== undefined && data.alias !== null && data.alias.trim() !== "";
return isValidDomain && isValidAlias; // require the alias to be set in the case of domain
}
@@ -89,7 +94,8 @@ const updateSiteResourceSchema = z
if (data.mode === "cidr" && data.destination) {
// Check if it's a valid CIDR (v4 or v6)
const isValidCIDR = z
- .union([z.cidrv4(), z.cidrv6()])
+ // .union([z.cidrv4(), z.cidrv6()])
+ .union([z.cidrv4()]) // for now, let's just do IPv4 until we verify IPv6 works everywhere
.safeParse(data.destination).success;
return isValidCIDR;
}
@@ -158,7 +164,10 @@ export async function updateSiteResource(
enabled,
userIds,
roleIds,
- clientIds
+ clientIds,
+ tcpPortRangeString,
+ udpPortRangeString,
+ disableIcmp
} = parsedBody.data;
const [site] = await db
@@ -224,7 +233,10 @@ export async function updateSiteResource(
mode: mode,
destination: destination,
enabled: enabled,
- alias: alias && alias.trim() ? alias : null
+ alias: alias && alias.trim() ? alias : null,
+ tcpPortRangeString: tcpPortRangeString,
+ udpPortRangeString: udpPortRangeString,
+ disableIcmp: disableIcmp
})
.where(
and(
@@ -346,10 +358,18 @@ export async function handleMessagingForUpdatedSiteResource(
const aliasChanged =
existingSiteResource &&
existingSiteResource.alias !== updatedSiteResource.alias;
+ const portRangesChanged =
+ existingSiteResource &&
+ (existingSiteResource.tcpPortRangeString !==
+ updatedSiteResource.tcpPortRangeString ||
+ existingSiteResource.udpPortRangeString !==
+ updatedSiteResource.udpPortRangeString ||
+ existingSiteResource.disableIcmp !==
+ updatedSiteResource.disableIcmp);
// if the existingSiteResource is undefined (new resource) we don't need to do anything here, the rebuild above handled it all
- if (destinationChanged || aliasChanged) {
+ if (destinationChanged || aliasChanged || portRangesChanged) {
const [newt] = await trx
.select()
.from(newts)
@@ -363,7 +383,7 @@ export async function handleMessagingForUpdatedSiteResource(
}
// Only update targets on newt if destination changed
- if (destinationChanged) {
+ if (destinationChanged || portRangesChanged) {
const oldTargets = generateSubnetProxyTargets(
existingSiteResource,
mergedAllClients
diff --git a/server/routers/supporterKey/validateSupporterKey.ts b/server/routers/supporterKey/validateSupporterKey.ts
index d8b16421..9ac3c473 100644
--- a/server/routers/supporterKey/validateSupporterKey.ts
+++ b/server/routers/supporterKey/validateSupporterKey.ts
@@ -10,9 +10,9 @@ import { db } from "@server/db";
import config from "@server/lib/config";
const validateSupporterKeySchema = z.strictObject({
- githubUsername: z.string().nonempty(),
- key: z.string().nonempty()
- });
+ githubUsername: z.string().nonempty(),
+ key: z.string().nonempty()
+});
export type ValidateSupporterKeyResponse = {
valid: boolean;
diff --git a/server/routers/target/createTarget.ts b/server/routers/target/createTarget.ts
index 2c09b5a6..5d37f617 100644
--- a/server/routers/target/createTarget.ts
+++ b/server/routers/target/createTarget.ts
@@ -16,51 +16,41 @@ import { isTargetValid } from "@server/lib/validators";
import { OpenAPITags, registry } from "@server/openApi";
const createTargetParamsSchema = z.strictObject({
- resourceId: z
- .string()
- .transform(Number)
- .pipe(z.int().positive())
- });
+ resourceId: z.string().transform(Number).pipe(z.int().positive())
+});
const createTargetSchema = z.strictObject({
- siteId: z.int().positive(),
- ip: z.string().refine(isTargetValid),
- method: z.string().optional().nullable(),
- port: z.int().min(1).max(65535),
- enabled: z.boolean().default(true),
- hcEnabled: z.boolean().optional(),
- hcPath: z.string().min(1).optional().nullable(),
- hcScheme: z.string().optional().nullable(),
- hcMode: z.string().optional().nullable(),
- hcHostname: z.string().optional().nullable(),
- hcPort: z.int().positive().optional().nullable(),
- hcInterval: z.int().positive().min(5).optional().nullable(),
- hcUnhealthyInterval: z.int()
- .positive()
- .min(5)
- .optional()
- .nullable(),
- hcTimeout: z.int().positive().min(1).optional().nullable(),
- hcHeaders: z
- .array(z.strictObject({ name: z.string(), value: z.string() }))
- .nullable()
- .optional(),
- hcFollowRedirects: z.boolean().optional().nullable(),
- hcMethod: z.string().min(1).optional().nullable(),
- hcStatus: z.int().optional().nullable(),
- hcTlsServerName: z.string().optional().nullable(),
- path: z.string().optional().nullable(),
- pathMatchType: z
- .enum(["exact", "prefix", "regex"])
- .optional()
- .nullable(),
- rewritePath: z.string().optional().nullable(),
- rewritePathType: z
- .enum(["exact", "prefix", "regex", "stripPrefix"])
- .optional()
- .nullable(),
- priority: z.int().min(1).max(1000).optional().nullable()
- });
+ siteId: z.int().positive(),
+ ip: z.string().refine(isTargetValid),
+ method: z.string().optional().nullable(),
+ port: z.int().min(1).max(65535),
+ enabled: z.boolean().default(true),
+ hcEnabled: z.boolean().optional(),
+ hcPath: z.string().min(1).optional().nullable(),
+ hcScheme: z.string().optional().nullable(),
+ hcMode: z.string().optional().nullable(),
+ hcHostname: z.string().optional().nullable(),
+ hcPort: z.int().positive().optional().nullable(),
+ hcInterval: z.int().positive().min(5).optional().nullable(),
+ hcUnhealthyInterval: z.int().positive().min(5).optional().nullable(),
+ hcTimeout: z.int().positive().min(1).optional().nullable(),
+ hcHeaders: z
+ .array(z.strictObject({ name: z.string(), value: z.string() }))
+ .nullable()
+ .optional(),
+ hcFollowRedirects: z.boolean().optional().nullable(),
+ hcMethod: z.string().min(1).optional().nullable(),
+ hcStatus: z.int().optional().nullable(),
+ hcTlsServerName: z.string().optional().nullable(),
+ path: z.string().optional().nullable(),
+ pathMatchType: z.enum(["exact", "prefix", "regex"]).optional().nullable(),
+ rewritePath: z.string().optional().nullable(),
+ rewritePathType: z
+ .enum(["exact", "prefix", "regex", "stripPrefix"])
+ .optional()
+ .nullable(),
+ priority: z.int().min(1).max(1000).optional().nullable()
+});
export type CreateTargetResponse = Target & TargetHealthCheck;
@@ -159,7 +149,9 @@ export async function createTarget(
if (existingTarget) {
// log a warning
- logger.warn(`Target with IP ${targetData.ip}, port ${targetData.port}, method ${targetData.method} already exists for resource ID ${resourceId}`);
+ logger.warn(
+ `Target with IP ${targetData.ip}, port ${targetData.port}, method ${targetData.method} already exists for resource ID ${resourceId}`
+ );
}
let newTarget: Target[] = [];
diff --git a/server/routers/target/deleteTarget.ts b/server/routers/target/deleteTarget.ts
index a70b2a1e..606d8635 100644
--- a/server/routers/target/deleteTarget.ts
+++ b/server/routers/target/deleteTarget.ts
@@ -14,8 +14,8 @@ import { getAllowedIps } from "./helpers";
import { OpenAPITags, registry } from "@server/openApi";
const deleteTargetSchema = z.strictObject({
- targetId: z.string().transform(Number).pipe(z.int().positive())
- });
+ targetId: z.string().transform(Number).pipe(z.int().positive())
+});
registry.registerPath({
method: "delete",
diff --git a/server/routers/target/getTarget.ts b/server/routers/target/getTarget.ts
index 7fe2e062..749e1399 100644
--- a/server/routers/target/getTarget.ts
+++ b/server/routers/target/getTarget.ts
@@ -11,12 +11,13 @@ import { fromError } from "zod-validation-error";
import { OpenAPITags, registry } from "@server/openApi";
const getTargetSchema = z.strictObject({
- targetId: z.string().transform(Number).pipe(z.int().positive())
- });
+ targetId: z.string().transform(Number).pipe(z.int().positive())
+});
-type GetTargetResponse = Target & Omit<TargetHealthCheck, "hcHeaders"> & {
- hcHeaders: { name: string; value: string; }[] | null;
-};
+type GetTargetResponse = Target &
+ Omit<TargetHealthCheck, "hcHeaders"> & {
+ hcHeaders: { name: string; value: string }[] | null;
+ };
registry.registerPath({
method: "get",
diff --git a/server/routers/target/handleHealthcheckStatusMessage.ts b/server/routers/target/handleHealthcheckStatusMessage.ts
index ee4e7950..2bfcff19 100644
--- a/server/routers/target/handleHealthcheckStatusMessage.ts
+++ b/server/routers/target/handleHealthcheckStatusMessage.ts
@@ -30,7 +30,9 @@ interface HealthcheckStatusMessage {
targets: Record;
}
-export const handleHealthcheckStatusMessage: MessageHandler = async (context) => {
+export const handleHealthcheckStatusMessage: MessageHandler = async (
+ context
+) => {
const { message, client: c } = context;
const newt = c as Newt;
@@ -59,7 +61,9 @@ export const handleHealthcheckStatusMessage: MessageHandler = async (context) =>
// Process each target status update
for (const [targetId, healthStatus] of Object.entries(data.targets)) {
- logger.debug(`Processing health status for target ${targetId}: ${healthStatus.status}${healthStatus.lastError ? ` (${healthStatus.lastError})` : ''}`);
+ logger.debug(
+ `Processing health status for target ${targetId}: ${healthStatus.status}${healthStatus.lastError ? ` (${healthStatus.lastError})` : ""}`
+ );
// Verify the target belongs to this newt's site before updating
// This prevents unauthorized updates to targets from other sites
@@ -76,7 +80,10 @@ export const handleHealthcheckStatusMessage: MessageHandler = async (context) =>
siteId: targets.siteId
})
.from(targets)
- .innerJoin(resources, eq(targets.resourceId, resources.resourceId))
+ .innerJoin(
+ resources,
+ eq(targets.resourceId, resources.resourceId)
+ )
.innerJoin(sites, eq(targets.siteId, sites.siteId))
.where(
and(
@@ -87,7 +94,9 @@ export const handleHealthcheckStatusMessage: MessageHandler = async (context) =>
.limit(1);
if (!targetCheck) {
- logger.warn(`Target ${targetId} not found or does not belong to site ${newt.siteId}`);
+ logger.warn(
+ `Target ${targetId} not found or does not belong to site ${newt.siteId}`
+ );
errorCount++;
continue;
}
@@ -101,11 +110,15 @@ export const handleHealthcheckStatusMessage: MessageHandler = async (context) =>
.where(eq(targetHealthCheck.targetId, targetIdNum))
.execute();
- logger.debug(`Updated health status for target ${targetId} to ${healthStatus.status}`);
+ logger.debug(
+ `Updated health status for target ${targetId} to ${healthStatus.status}`
+ );
successCount++;
}
- logger.debug(`Health status update complete: ${successCount} successful, ${errorCount} errors out of ${Object.keys(data.targets).length} targets`);
+ logger.debug(
+ `Health status update complete: ${successCount} successful, ${errorCount} errors out of ${Object.keys(data.targets).length} targets`
+ );
} catch (error) {
logger.error("Error processing healthcheck status message:", error);
}
diff --git a/server/routers/target/helpers.ts b/server/routers/target/helpers.ts
index 13b2ee46..fe76bd13 100644
--- a/server/routers/target/helpers.ts
+++ b/server/routers/target/helpers.ts
@@ -4,7 +4,10 @@ import { eq } from "drizzle-orm";
const currentBannedPorts: number[] = [];
-export async function pickPort(siteId: number, trx: Transaction | typeof db): Promise<{
+export async function pickPort(
+ siteId: number,
+ trx: Transaction | typeof db
+): Promise<{
internalPort: number;
targetIps: string[];
}> {
diff --git a/server/routers/target/listTargets.ts b/server/routers/target/listTargets.ts
index 356276cb..11a23f02 100644
--- a/server/routers/target/listTargets.ts
+++ b/server/routers/target/listTargets.ts
@@ -11,11 +11,8 @@ import logger from "@server/logger";
import { OpenAPITags, registry } from "@server/openApi";
const listTargetsParamsSchema = z.strictObject({
- resourceId: z
- .string()
- .transform(Number)
- .pipe(z.int().positive())
- });
+ resourceId: z.string().transform(Number).pipe(z.int().positive())
+});
const listTargetsSchema = z.object({
limit: z
@@ -62,7 +59,7 @@ function queryTargets(resourceId: number) {
pathMatchType: targets.pathMatchType,
rewritePath: targets.rewritePath,
rewritePathType: targets.rewritePathType,
- priority: targets.priority,
+ priority: targets.priority
})
.from(targets)
.leftJoin(sites, eq(sites.siteId, targets.siteId))
@@ -75,8 +72,11 @@ function queryTargets(resourceId: number) {
return baseQuery;
}
-type TargetWithParsedHeaders = Omit<Awaited<ReturnType<typeof queryTargets>>[0], 'hcHeaders'> & {
- hcHeaders: { name: string; value: string; }[] | null;
+type TargetWithParsedHeaders = Omit<
+ Awaited<ReturnType<typeof queryTargets>>[0],
+ "hcHeaders"
+> & {
+ hcHeaders: { name: string; value: string }[] | null;
};
export type ListTargetsResponse = {
@@ -136,7 +136,7 @@ export async function listTargets(
const totalCount = totalCountResult[0].count;
// Parse hcHeaders from JSON string back to array for each target
- const parsedTargetsList = targetsList.map(target => {
+ const parsedTargetsList = targetsList.map((target) => {
let parsedHcHeaders = null;
if (target.hcHeaders) {
try {
diff --git a/server/routers/target/updateTarget.ts b/server/routers/target/updateTarget.ts
index f4a59858..b00340ee 100644
--- a/server/routers/target/updateTarget.ts
+++ b/server/routers/target/updateTarget.ts
@@ -16,10 +16,11 @@ import { OpenAPITags, registry } from "@server/openApi";
import { vs } from "@react-email/components";
const updateTargetParamsSchema = z.strictObject({
- targetId: z.string().transform(Number).pipe(z.int().positive())
- });
+ targetId: z.string().transform(Number).pipe(z.int().positive())
+});
-const updateTargetBodySchema = z.strictObject({
+const updateTargetBodySchema = z
+ .strictObject({
siteId: z.int().positive(),
ip: z.string().refine(isTargetValid),
method: z.string().min(1).max(10).optional().nullable(),
@@ -32,22 +33,27 @@ const updateTargetBodySchema = z.strictObject({
hcHostname: z.string().optional().nullable(),
hcPort: z.int().positive().optional().nullable(),
hcInterval: z.int().positive().min(5).optional().nullable(),
- hcUnhealthyInterval: z.int()
- .positive()
- .min(5)
- .optional()
- .nullable(),
+ hcUnhealthyInterval: z.int().positive().min(5).optional().nullable(),
hcTimeout: z.int().positive().min(1).optional().nullable(),
- hcHeaders: z.array(z.strictObject({ name: z.string(), value: z.string() })).nullable().optional(),
+ hcHeaders: z
+ .array(z.strictObject({ name: z.string(), value: z.string() }))
+ .nullable()
+ .optional(),
hcFollowRedirects: z.boolean().optional().nullable(),
hcMethod: z.string().min(1).optional().nullable(),
hcStatus: z.int().optional().nullable(),
hcTlsServerName: z.string().optional().nullable(),
path: z.string().optional().nullable(),
- pathMatchType: z.enum(["exact", "prefix", "regex"]).optional().nullable(),
+ pathMatchType: z
+ .enum(["exact", "prefix", "regex"])
+ .optional()
+ .nullable(),
rewritePath: z.string().optional().nullable(),
- rewritePathType: z.enum(["exact", "prefix", "regex", "stripPrefix"]).optional().nullable(),
- priority: z.int().min(1).max(1000).optional(),
+ rewritePathType: z
+ .enum(["exact", "prefix", "regex", "stripPrefix"])
+ .optional()
+ .nullable(),
+ priority: z.int().min(1).max(1000).optional()
})
.refine((data) => Object.keys(data).length > 0, {
error: "At least one field must be provided for update"
@@ -166,7 +172,9 @@ export async function updateTarget(
if (foundTarget) {
// log a warning
- logger.warn(`Target with IP ${targetData.ip}, port ${targetData.port}, method ${targetData.method} already exists for resource ID ${target.resourceId}`);
+ logger.warn(
+ `Target with IP ${targetData.ip}, port ${targetData.port}, method ${targetData.method} already exists for resource ID ${target.resourceId}`
+ );
}
const { internalPort, targetIps } = await pickPort(site.siteId!, db);
@@ -205,9 +213,11 @@ export async function updateTarget(
// When health check is disabled, reset hcHealth to "unknown"
// to prevent previously unhealthy targets from being excluded
- const hcHealthValue = (parsedBody.data.hcEnabled === false || parsedBody.data.hcEnabled === null)
- ? "unknown"
- : undefined;
+ const hcHealthValue =
+ parsedBody.data.hcEnabled === false ||
+ parsedBody.data.hcEnabled === null
+ ? "unknown"
+ : undefined;
const [updatedHc] = await db
.update(targetHealthCheck)
diff --git a/server/routers/traefik/index.ts b/server/routers/traefik/index.ts
index 6f5bd4f0..195f0087 100644
--- a/server/routers/traefik/index.ts
+++ b/server/routers/traefik/index.ts
@@ -1 +1 @@
-export * from "./traefikConfigProvider";
\ No newline at end of file
+export * from "./traefikConfigProvider";
diff --git a/server/routers/traefik/traefikConfigProvider.ts b/server/routers/traefik/traefikConfigProvider.ts
index 9b12ed8a..e8ac1621 100644
--- a/server/routers/traefik/traefikConfigProvider.ts
+++ b/server/routers/traefik/traefikConfigProvider.ts
@@ -59,4 +59,4 @@ export async function traefikConfigProvider(
error: "Failed to build Traefik config"
});
}
-}
\ No newline at end of file
+}
diff --git a/server/routers/user/acceptInvite.ts b/server/routers/user/acceptInvite.ts
index 3e94d96c..d64ccfb5 100644
--- a/server/routers/user/acceptInvite.ts
+++ b/server/routers/user/acceptInvite.ts
@@ -15,9 +15,9 @@ import { FeatureId } from "@server/lib/billing";
import { calculateUserClientsForOrgs } from "@server/lib/calculateUserClientsForOrgs";
const acceptInviteBodySchema = z.strictObject({
- token: z.string(),
- inviteId: z.string()
- });
+ token: z.string(),
+ inviteId: z.string()
+});
export type AcceptInviteResponse = {
accepted: boolean;
diff --git a/server/routers/user/addUserAction.ts b/server/routers/user/addUserAction.ts
index f75d5005..ddbae6b0 100644
--- a/server/routers/user/addUserAction.ts
+++ b/server/routers/user/addUserAction.ts
@@ -10,10 +10,10 @@ import { eq } from "drizzle-orm";
import { fromError } from "zod-validation-error";
const addUserActionSchema = z.strictObject({
- userId: z.string(),
- actionId: z.string(),
- orgId: z.string()
- });
+ userId: z.string(),
+ actionId: z.string(),
+ orgId: z.string()
+});
export async function addUserAction(
req: Request,
diff --git a/server/routers/user/addUserSite.ts b/server/routers/user/addUserSite.ts
index 38ef264c..ffb9f1ba 100644
--- a/server/routers/user/addUserSite.ts
+++ b/server/routers/user/addUserSite.ts
@@ -10,9 +10,9 @@ import { eq } from "drizzle-orm";
import { fromError } from "zod-validation-error";
const addUserSiteSchema = z.strictObject({
- userId: z.string(),
- siteId: z.string().transform(Number).pipe(z.int().positive())
- });
+ userId: z.string(),
+ siteId: z.string().transform(Number).pipe(z.int().positive())
+});
export async function addUserSite(
req: Request,
@@ -61,7 +61,6 @@ export async function addUserSite(
status: HttpCode.CREATED
});
});
-
} catch (error) {
logger.error(error);
return next(
diff --git a/server/routers/user/adminGeneratePasswordResetCode.ts b/server/routers/user/adminGeneratePasswordResetCode.ts
index 5d283c5c..562a459e 100644
--- a/server/routers/user/adminGeneratePasswordResetCode.ts
+++ b/server/routers/user/adminGeneratePasswordResetCode.ts
@@ -19,7 +19,9 @@ const adminGeneratePasswordResetCodeSchema = z.strictObject({
userId: z.string().min(1)
});
-export type AdminGeneratePasswordResetCodeBody = z.infer<typeof adminGeneratePasswordResetCodeSchema>;
+export type AdminGeneratePasswordResetCodeBody = z.infer<
+ typeof adminGeneratePasswordResetCodeSchema
+>;
export type AdminGeneratePasswordResetCodeResponse = {
token: string;
@@ -32,7 +34,9 @@ export async function adminGeneratePasswordResetCode(
res: Response,
next: NextFunction
): Promise<any> {
- const parsedParams = adminGeneratePasswordResetCodeSchema.safeParse(req.params);
+ const parsedParams = adminGeneratePasswordResetCodeSchema.safeParse(
+ req.params
+ );
if (!parsedParams.success) {
return next(
@@ -52,12 +56,7 @@ export async function adminGeneratePasswordResetCode(
.where(eq(users.userId, userId));
if (!existingUser || !existingUser.length) {
- return next(
- createHttpError(
- HttpCode.NOT_FOUND,
- "User not found"
- )
- );
+ return next(createHttpError(HttpCode.NOT_FOUND, "User not found"));
}
if (existingUser[0].type !== UserType.Internal) {
@@ -122,4 +121,3 @@ export async function adminGeneratePasswordResetCode(
);
}
}
-
diff --git a/server/routers/user/adminGetUser.ts b/server/routers/user/adminGetUser.ts
index bda14476..06045c77 100644
--- a/server/routers/user/adminGetUser.ts
+++ b/server/routers/user/adminGetUser.ts
@@ -10,8 +10,8 @@ import logger from "@server/logger";
import { OpenAPITags, registry } from "@server/openApi";
const adminGetUserSchema = z.strictObject({
- userId: z.string().min(1)
- });
+ userId: z.string().min(1)
+});
registry.registerPath({
method: "get",
diff --git a/server/routers/user/adminListUsers.ts b/server/routers/user/adminListUsers.ts
index a3ad9cdd..3a965259 100644
--- a/server/routers/user/adminListUsers.ts
+++ b/server/routers/user/adminListUsers.ts
@@ -10,19 +10,19 @@ import { idp, users } from "@server/db";
import { fromZodError } from "zod-validation-error";
const listUsersSchema = z.strictObject({
- limit: z
- .string()
- .optional()
- .default("1000")
- .transform(Number)
- .pipe(z.int().nonnegative()),
- offset: z
- .string()
- .optional()
- .default("0")
- .transform(Number)
- .pipe(z.int().nonnegative())
- });
+ limit: z
+ .string()
+ .optional()
+ .default("1000")
+ .transform(Number)
+ .pipe(z.int().nonnegative()),
+ offset: z
+ .string()
+ .optional()
+ .default("0")
+ .transform(Number)
+ .pipe(z.int().nonnegative())
+});
async function queryUsers(limit: number, offset: number) {
return await db
diff --git a/server/routers/user/adminUpdateUser2FA.ts b/server/routers/user/adminUpdateUser2FA.ts
index 4bb2486a..7fb37d01 100644
--- a/server/routers/user/adminUpdateUser2FA.ts
+++ b/server/routers/user/adminUpdateUser2FA.ts
@@ -11,12 +11,12 @@ import { fromError } from "zod-validation-error";
import { OpenAPITags, registry } from "@server/openApi";
const updateUser2FAParamsSchema = z.strictObject({
- userId: z.string()
- });
+ userId: z.string()
+});
const updateUser2FABodySchema = z.strictObject({
- twoFactorSetupRequested: z.boolean()
- });
+ twoFactorSetupRequested: z.boolean()
+});
export type UpdateUser2FAResponse = {
userId: string;
@@ -90,13 +90,15 @@ export async function updateUser2FA(
);
}
- logger.debug(`Updating 2FA for user ${userId} to ${twoFactorSetupRequested}`);
+ logger.debug(
+ `Updating 2FA for user ${userId} to ${twoFactorSetupRequested}`
+ );
if (twoFactorSetupRequested) {
await db
.update(users)
.set({
- twoFactorSetupRequested: true,
+ twoFactorSetupRequested: true
})
.where(eq(users.userId, userId));
} else {
diff --git a/server/routers/user/createOrgUser.ts b/server/routers/user/createOrgUser.ts
index 99a2258c..e1902477 100644
--- a/server/routers/user/createOrgUser.ts
+++ b/server/routers/user/createOrgUser.ts
@@ -18,25 +18,26 @@ import { TierId } from "@server/lib/billing/tiers";
import { calculateUserClientsForOrgs } from "@server/lib/calculateUserClientsForOrgs";
const paramsSchema = z.strictObject({
- orgId: z.string().nonempty()
- });
+ orgId: z.string().nonempty()
+});
const bodySchema = z.strictObject({
- email: z.email()
- .toLowerCase()
- .optional()
- .refine((data) => {
- if (data) {
- return z.email().safeParse(data).success;
- }
- return true;
- }),
- username: z.string().nonempty().toLowerCase(),
- name: z.string().optional(),
- type: z.enum(["internal", "oidc"]).optional(),
- idpId: z.number().optional(),
- roleId: z.number()
- });
+ email: z
+ .email()
+ .toLowerCase()
+ .optional()
+ .refine((data) => {
+ if (data) {
+ return z.email().safeParse(data).success;
+ }
+ return true;
+ }),
+ username: z.string().nonempty().toLowerCase(),
+ name: z.string().optional(),
+ type: z.enum(["internal", "oidc"]).optional(),
+ idpId: z.number().optional(),
+ roleId: z.number()
+});
export type CreateOrgUserResponse = {};
diff --git a/server/routers/user/getOrgUser.ts b/server/routers/user/getOrgUser.ts
index 4e09afd6..f22a29d3 100644
--- a/server/routers/user/getOrgUser.ts
+++ b/server/routers/user/getOrgUser.ts
@@ -47,9 +47,9 @@ export type GetOrgUserResponse = NonNullable<
>;
const getOrgUserParamsSchema = z.strictObject({
- userId: z.string(),
- orgId: z.string()
- });
+ userId: z.string(),
+ orgId: z.string()
+});
registry.registerPath({
method: "get",
diff --git a/server/routers/user/inviteUser.ts b/server/routers/user/inviteUser.ts
index f43ebeb8..6a778868 100644
--- a/server/routers/user/inviteUser.ts
+++ b/server/routers/user/inviteUser.ts
@@ -22,16 +22,16 @@ import { build } from "@server/build";
import cache from "@server/lib/cache";
const inviteUserParamsSchema = z.strictObject({
- orgId: z.string()
- });
+ orgId: z.string()
+});
const inviteUserBodySchema = z.strictObject({
- email: z.email().toLowerCase(),
- roleId: z.number(),
- validHours: z.number().gt(0).lte(168),
- sendEmail: z.boolean().optional(),
- regenerate: z.boolean().optional()
- });
+ email: z.email().toLowerCase(),
+ roleId: z.number(),
+ validHours: z.number().gt(0).lte(168),
+ sendEmail: z.boolean().optional(),
+ regenerate: z.boolean().optional()
+});
export type InviteUserBody = z.infer<typeof inviteUserBodySchema>;
@@ -109,12 +109,7 @@ export async function inviteUser(
const [role] = await db
.select()
.from(roles)
- .where(
- and(
- eq(roles.roleId, roleId),
- eq(roles.orgId, orgId)
- )
- )
+ .where(and(eq(roles.roleId, roleId), eq(roles.orgId, orgId)))
.limit(1);
if (!role) {
diff --git a/server/routers/user/listInvitations.ts b/server/routers/user/listInvitations.ts
index a61e2372..4289b877 100644
--- a/server/routers/user/listInvitations.ts
+++ b/server/routers/user/listInvitations.ts
@@ -11,23 +11,23 @@ import { fromZodError } from "zod-validation-error";
import { OpenAPITags, registry } from "@server/openApi";
const listInvitationsParamsSchema = z.strictObject({
- orgId: z.string()
- });
+ orgId: z.string()
+});
const listInvitationsQuerySchema = z.strictObject({
- limit: z
- .string()
- .optional()
- .default("1000")
- .transform(Number)
- .pipe(z.int().nonnegative()),
- offset: z
- .string()
- .optional()
- .default("0")
- .transform(Number)
- .pipe(z.int().nonnegative())
- });
+ limit: z
+ .string()
+ .optional()
+ .default("1000")
+ .transform(Number)
+ .pipe(z.int().nonnegative()),
+ offset: z
+ .string()
+ .optional()
+ .default("0")
+ .transform(Number)
+ .pipe(z.int().nonnegative())
+});
async function queryInvitations(orgId: string, limit: number, offset: number) {
return await db
diff --git a/server/routers/user/listUsers.ts b/server/routers/user/listUsers.ts
index aa70874e..401dcf58 100644
--- a/server/routers/user/listUsers.ts
+++ b/server/routers/user/listUsers.ts
@@ -12,23 +12,23 @@ import { OpenAPITags, registry } from "@server/openApi";
import { eq } from "drizzle-orm";
const listUsersParamsSchema = z.strictObject({
- orgId: z.string()
- });
+ orgId: z.string()
+});
const listUsersSchema = z.strictObject({
- limit: z
- .string()
- .optional()
- .default("1000")
- .transform(Number)
- .pipe(z.int().nonnegative()),
- offset: z
- .string()
- .optional()
- .default("0")
- .transform(Number)
- .pipe(z.int().nonnegative())
- });
+ limit: z
+ .string()
+ .optional()
+ .default("1000")
+ .transform(Number)
+ .pipe(z.int().nonnegative()),
+ offset: z
+ .string()
+ .optional()
+ .default("0")
+ .transform(Number)
+ .pipe(z.int().nonnegative())
+});
async function queryUsers(orgId: string, limit: number, offset: number) {
return await db
@@ -48,7 +48,7 @@ async function queryUsers(orgId: string, limit: number, offset: number) {
idpId: users.idpId,
idpType: idp.type,
idpVariant: idpOidcConfig.variant,
- twoFactorEnabled: users.twoFactorEnabled,
+ twoFactorEnabled: users.twoFactorEnabled
})
.from(users)
.leftJoin(userOrgs, eq(users.userId, userOrgs.userId))
diff --git a/server/routers/user/removeInvitation.ts b/server/routers/user/removeInvitation.ts
index 44ec8c23..ab6a96d2 100644
--- a/server/routers/user/removeInvitation.ts
+++ b/server/routers/user/removeInvitation.ts
@@ -8,11 +8,23 @@ import HttpCode from "@server/types/HttpCode";
import createHttpError from "http-errors";
import logger from "@server/logger";
import { fromError } from "zod-validation-error";
+import { OpenAPITags, registry } from "@server/openApi";
const removeInvitationParamsSchema = z.strictObject({
- orgId: z.string(),
- inviteId: z.string()
- });
+ orgId: z.string(),
+ inviteId: z.string()
+});
+
+registry.registerPath({
+ method: "delete",
+ path: "/org/{orgId}/invitations/{inviteId}",
+ description: "Remove an open invitation from an organization",
+ tags: [OpenAPITags.Org],
+ request: {
+ params: removeInvitationParamsSchema
+ },
+ responses: {}
+});
export async function removeInvitation(
req: Request,
diff --git a/server/routers/user/removeUserAction.ts b/server/routers/user/removeUserAction.ts
index 6e4c1a66..b9dc8cc0 100644
--- a/server/routers/user/removeUserAction.ts
+++ b/server/routers/user/removeUserAction.ts
@@ -10,13 +10,13 @@ import logger from "@server/logger";
import { fromError } from "zod-validation-error";
const removeUserActionParamsSchema = z.strictObject({
- userId: z.string()
- });
+ userId: z.string()
+});
const removeUserActionSchema = z.strictObject({
- actionId: z.string(),
- orgId: z.string()
- });
+ actionId: z.string(),
+ orgId: z.string()
+});
export async function removeUserAction(
req: Request,
diff --git a/server/routers/user/removeUserOrg.ts b/server/routers/user/removeUserOrg.ts
index cbbb4495..97045e92 100644
--- a/server/routers/user/removeUserOrg.ts
+++ b/server/routers/user/removeUserOrg.ts
@@ -16,9 +16,9 @@ import { UserType } from "@server/types/UserTypes";
import { calculateUserClientsForOrgs } from "@server/lib/calculateUserClientsForOrgs";
const removeUserSchema = z.strictObject({
- userId: z.string(),
- orgId: z.string()
- });
+ userId: z.string(),
+ orgId: z.string()
+});
registry.registerPath({
method: "delete",
diff --git a/server/routers/user/removeUserResource.ts b/server/routers/user/removeUserResource.ts
index 14dbb540..bdb0cda3 100644
--- a/server/routers/user/removeUserResource.ts
+++ b/server/routers/user/removeUserResource.ts
@@ -10,12 +10,9 @@ import logger from "@server/logger";
import { fromError } from "zod-validation-error";
const removeUserResourceSchema = z.strictObject({
- userId: z.string(),
- resourceId: z
- .string()
- .transform(Number)
- .pipe(z.int().positive())
- });
+ userId: z.string(),
+ resourceId: z.string().transform(Number).pipe(z.int().positive())
+});
export async function removeUserResource(
req: Request,
diff --git a/server/routers/user/removeUserSite.ts b/server/routers/user/removeUserSite.ts
index 6ed2288a..a531f02c 100644
--- a/server/routers/user/removeUserSite.ts
+++ b/server/routers/user/removeUserSite.ts
@@ -10,12 +10,12 @@ import logger from "@server/logger";
import { fromError } from "zod-validation-error";
const removeUserSiteParamsSchema = z.strictObject({
- userId: z.string()
- });
+ userId: z.string()
+});
const removeUserSiteSchema = z.strictObject({
- siteId: z.int().positive()
- });
+ siteId: z.int().positive()
+});
export async function removeUserSite(
req: Request,
diff --git a/server/routers/user/updateOrgUser.ts b/server/routers/user/updateOrgUser.ts
index e1000063..97bedb5f 100644
--- a/server/routers/user/updateOrgUser.ts
+++ b/server/routers/user/updateOrgUser.ts
@@ -10,11 +10,12 @@ import { fromError } from "zod-validation-error";
import { OpenAPITags, registry } from "@server/openApi";
const paramsSchema = z.strictObject({
- userId: z.string(),
- orgId: z.string()
- });
+ userId: z.string(),
+ orgId: z.string()
+});
-const bodySchema = z.strictObject({
+const bodySchema = z
+ .strictObject({
autoProvisioned: z.boolean().optional()
})
.refine((data) => Object.keys(data).length > 0, {
diff --git a/server/routers/ws/index.ts b/server/routers/ws/index.ts
index 16440ec9..b580b369 100644
--- a/server/routers/ws/index.ts
+++ b/server/routers/ws/index.ts
@@ -1,2 +1,2 @@
export * from "./ws";
-export * from "./types";
\ No newline at end of file
+export * from "./types";
diff --git a/server/routers/ws/types.ts b/server/routers/ws/types.ts
index 7063bc87..b4ec690b 100644
--- a/server/routers/ws/types.ts
+++ b/server/routers/ws/types.ts
@@ -58,7 +58,9 @@ export interface HandlerContext {
    connectedClients: Map<string, AuthenticatedWebSocket[]>;
}
-export type MessageHandler = (context: HandlerContext) => Promise;
+export type MessageHandler = (
+ context: HandlerContext
+) => Promise;
// Redis message type for cross-node communication
export interface RedisMessage {
@@ -67,4 +69,4 @@ export interface RedisMessage {
excludeClientId?: string;
message: WSMessage;
fromNodeId: string;
-}
\ No newline at end of file
+}
diff --git a/server/routers/ws/ws.ts b/server/routers/ws/ws.ts
index abbec880..0544af9d 100644
--- a/server/routers/ws/ws.ts
+++ b/server/routers/ws/ws.ts
@@ -10,7 +10,13 @@ import { validateOlmSessionToken } from "@server/auth/sessions/olm";
import { messageHandlers } from "./messageHandlers";
import logger from "@server/logger";
import { v4 as uuidv4 } from "uuid";
-import { ClientType, TokenPayload, WebSocketRequest, WSMessage, AuthenticatedWebSocket } from "./types";
+import {
+ ClientType,
+ TokenPayload,
+ WebSocketRequest,
+ WSMessage,
+ AuthenticatedWebSocket
+} from "./types";
import { validateSessionToken } from "@server/auth/sessions/app";
// Subset of TokenPayload for public ws.ts (newt and olm only)
@@ -32,7 +38,11 @@ const connectedClients: Map<string, AuthenticatedWebSocket[]> = new Map();
const getClientMapKey = (clientId: string) => clientId;
// Helper functions for client management
-const addClient = async (clientType: ClientType, clientId: string, ws: AuthenticatedWebSocket): Promise<void> => {
+const addClient = async (
+ clientType: ClientType,
+ clientId: string,
+ ws: AuthenticatedWebSocket
+): Promise<void> => {
// Generate unique connection ID
const connectionId = uuidv4();
ws.connectionId = connectionId;
@@ -43,33 +53,46 @@ const addClient = async (clientType: ClientType, clientId: string, ws: Authentic
existingClients.push(ws);
connectedClients.set(mapKey, existingClients);
- logger.info(`Client added to tracking - ${clientType.toUpperCase()} ID: ${clientId}, Connection ID: ${connectionId}, Total connections: ${existingClients.length}`);
+ logger.info(
+ `Client added to tracking - ${clientType.toUpperCase()} ID: ${clientId}, Connection ID: ${connectionId}, Total connections: ${existingClients.length}`
+ );
};
-const removeClient = async (clientType: ClientType, clientId: string, ws: AuthenticatedWebSocket): Promise<void> => {
+const removeClient = async (
+ clientType: ClientType,
+ clientId: string,
+ ws: AuthenticatedWebSocket
+): Promise<void> => {
const mapKey = getClientMapKey(clientId);
const existingClients = connectedClients.get(mapKey) || [];
- const updatedClients = existingClients.filter(client => client !== ws);
+ const updatedClients = existingClients.filter((client) => client !== ws);
if (updatedClients.length === 0) {
connectedClients.delete(mapKey);
- logger.info(`All connections removed for ${clientType.toUpperCase()} ID: ${clientId}`);
+ logger.info(
+ `All connections removed for ${clientType.toUpperCase()} ID: ${clientId}`
+ );
} else {
connectedClients.set(mapKey, updatedClients);
- logger.info(`Connection removed - ${clientType.toUpperCase()} ID: ${clientId}, Remaining connections: ${updatedClients.length}`);
+ logger.info(
+ `Connection removed - ${clientType.toUpperCase()} ID: ${clientId}, Remaining connections: ${updatedClients.length}`
+ );
}
};
// Local message sending (within this node)
-const sendToClientLocal = async (clientId: string, message: WSMessage): Promise<boolean> => {
+const sendToClientLocal = async (
+ clientId: string,
+ message: WSMessage
+): Promise<boolean> => {
const mapKey = getClientMapKey(clientId);
const clients = connectedClients.get(mapKey);
if (!clients || clients.length === 0) {
return false;
}
const messageString = JSON.stringify(message);
- clients.forEach(client => {
+ clients.forEach((client) => {
if (client.readyState === WebSocket.OPEN) {
client.send(messageString);
}
@@ -77,11 +100,14 @@ const sendToClientLocal = async (clientId: string, message: WSMessage): Promise<boolean> => {
return true;
};
-const broadcastToAllExceptLocal = async (message: WSMessage, excludeClientId?: string): Promise<void> => {
+const broadcastToAllExceptLocal = async (
+ message: WSMessage,
+ excludeClientId?: string
+): Promise<void> => {
connectedClients.forEach((clients, mapKey) => {
const [type, id] = mapKey.split(":");
if (!(excludeClientId && id === excludeClientId)) {
- clients.forEach(client => {
+ clients.forEach((client) => {
if (client.readyState === WebSocket.OPEN) {
client.send(JSON.stringify(message));
}
@@ -91,39 +117,53 @@ const broadcastToAllExceptLocal = async (message: WSMessage, excludeClientId?: s
};
// Cross-node message sending
-const sendToClient = async (clientId: string, message: WSMessage): Promise<boolean> => {
+const sendToClient = async (
+ clientId: string,
+ message: WSMessage
+): Promise<boolean> => {
// Try to send locally first
const localSent = await sendToClientLocal(clientId, message);
- logger.debug(`sendToClient: Message type ${message.type} sent to clientId ${clientId}`);
+ logger.debug(
+ `sendToClient: Message type ${message.type} sent to clientId ${clientId}`
+ );
return localSent;
};
-const broadcastToAllExcept = async (message: WSMessage, excludeClientId?: string): Promise<void> => {
+const broadcastToAllExcept = async (
+ message: WSMessage,
+ excludeClientId?: string
+): Promise<void> => {
// Broadcast locally
await broadcastToAllExceptLocal(message, excludeClientId);
};
// Check if a client has active connections across all nodes
const hasActiveConnections = async (clientId: string): Promise<boolean> => {
- const mapKey = getClientMapKey(clientId);
- const clients = connectedClients.get(mapKey);
- return !!(clients && clients.length > 0);
+ const mapKey = getClientMapKey(clientId);
+ const clients = connectedClients.get(mapKey);
+ return !!(clients && clients.length > 0);
};
// Get all active nodes for a client
-const getActiveNodes = async (clientType: ClientType, clientId: string): Promise<string[]> => {
- const mapKey = getClientMapKey(clientId);
- const clients = connectedClients.get(mapKey);
- return (clients && clients.length > 0) ? [NODE_ID] : [];
+const getActiveNodes = async (
+ clientType: ClientType,
+ clientId: string
+): Promise<string[]> => {
+ const mapKey = getClientMapKey(clientId);
+ const clients = connectedClients.get(mapKey);
+ return clients && clients.length > 0 ? [NODE_ID] : [];
};
// Token verification middleware
-const verifyToken = async (token: string, clientType: ClientType, userToken: string): Promise<TokenPayload | null> => {
-
-try {
- if (clientType === 'newt') {
+const verifyToken = async (
+ token: string,
+ clientType: ClientType,
+ userToken: string
+): Promise<TokenPayload | null> => {
+ try {
+ if (clientType === "newt") {
const { session, newt } = await validateNewtSessionToken(token);
if (!session || !newt) {
return null;
@@ -136,7 +176,7 @@ try {
return null;
}
return { client: existingNewt[0], session, clientType };
- } else if (clientType === 'olm') {
+ } else if (clientType === "olm") {
const { session, olm } = await validateOlmSessionToken(token);
if (!session || !olm) {
return null;
@@ -149,8 +189,10 @@ try {
return null;
}
- if (olm.userId) { // this is a user device and we need to check the user token
- const { session: userSession, user } = await validateSessionToken(userToken);
+ if (olm.userId) {
+ // this is a user device and we need to check the user token
+ const { session: userSession, user } =
+ await validateSessionToken(userToken);
if (!userSession || !user) {
return null;
}
@@ -161,7 +203,7 @@ try {
return { client: existingOlm[0], session, clientType };
}
-
+
return null;
} catch (error) {
logger.error("Token verification failed:", error);
@@ -169,7 +211,11 @@ try {
}
};
-const setupConnection = async (ws: AuthenticatedWebSocket, client: Newt | Olm, clientType: "newt" | "olm"): Promise<void> => {
+const setupConnection = async (
+ ws: AuthenticatedWebSocket,
+ client: Newt | Olm,
+ clientType: "newt" | "olm"
+): Promise<void> => {
logger.info("Establishing websocket connection");
if (!client) {
logger.error("Connection attempt without client");
@@ -180,7 +226,8 @@ const setupConnection = async (ws: AuthenticatedWebSocket, client: Newt | Olm, c
ws.clientType = clientType;
// Add client to tracking
- const clientId = clientType === 'newt' ? (client as Newt).newtId : (client as Olm).olmId;
+ const clientId =
+ clientType === "newt" ? (client as Newt).newtId : (client as Olm).olmId;
await addClient(clientType, clientId, ws);
ws.on("message", async (data) => {
@@ -188,7 +235,9 @@ const setupConnection = async (ws: AuthenticatedWebSocket, client: Newt | Olm, c
const message: WSMessage = JSON.parse(data.toString());
if (!message.type || typeof message.type !== "string") {
- throw new Error("Invalid message format: missing or invalid type");
+ throw new Error(
+ "Invalid message format: missing or invalid type"
+ );
}
const handler = messageHandlers[message.type];
@@ -213,33 +262,48 @@ const setupConnection = async (ws: AuthenticatedWebSocket, client: Newt | Olm, c
response.excludeSender ? clientId : undefined
);
} else if (response.targetClientId) {
- await sendToClient(response.targetClientId, response.message);
+ await sendToClient(
+ response.targetClientId,
+ response.message
+ );
} else {
ws.send(JSON.stringify(response.message));
}
}
} catch (error) {
logger.error("Message handling error:", error);
- ws.send(JSON.stringify({
- type: "error",
- data: {
- message: error instanceof Error ? error.message : "Unknown error occurred",
- originalMessage: data.toString()
- }
- }));
+ ws.send(
+ JSON.stringify({
+ type: "error",
+ data: {
+ message:
+ error instanceof Error
+ ? error.message
+ : "Unknown error occurred",
+ originalMessage: data.toString()
+ }
+ })
+ );
}
});
ws.on("close", () => {
removeClient(clientType, clientId, ws);
- logger.info(`Client disconnected - ${clientType.toUpperCase()} ID: ${clientId}`);
+ logger.info(
+ `Client disconnected - ${clientType.toUpperCase()} ID: ${clientId}`
+ );
});
ws.on("error", (error: Error) => {
- logger.error(`WebSocket error for ${clientType.toUpperCase()} ID ${clientId}:`, error);
+ logger.error(
+ `WebSocket error for ${clientType.toUpperCase()} ID ${clientId}:`,
+ error
+ );
});
- logger.info(`WebSocket connection established - ${clientType.toUpperCase()} ID: ${clientId}`);
+ logger.info(
+ `WebSocket connection established - ${clientType.toUpperCase()} ID: ${clientId}`
+ );
};
// Router endpoint
@@ -249,55 +313,89 @@ router.get("/ws", (req: Request, res: Response) => {
// WebSocket upgrade handler
const handleWSUpgrade = (server: HttpServer): void => {
- server.on("upgrade", async (request: WebSocketRequest, socket: Socket, head: Buffer) => {
- try {
- const url = new URL(request.url || '', `http://${request.headers.host}`);
- const token = url.searchParams.get('token') || request.headers["sec-websocket-protocol"] || '';
- const userToken = url.searchParams.get('userToken') || '';
- let clientType = url.searchParams.get('clientType') as ClientType;
+ server.on(
+ "upgrade",
+ async (request: WebSocketRequest, socket: Socket, head: Buffer) => {
+ try {
+ const url = new URL(
+ request.url || "",
+ `http://${request.headers.host}`
+ );
+ const token =
+ url.searchParams.get("token") ||
+ request.headers["sec-websocket-protocol"] ||
+ "";
+ const userToken = url.searchParams.get("userToken") || "";
+ let clientType = url.searchParams.get(
+ "clientType"
+ ) as ClientType;
- if (!clientType) {
- clientType = "newt";
- }
+ if (!clientType) {
+ clientType = "newt";
+ }
- if (!token || !clientType || !['newt', 'olm'].includes(clientType)) {
- logger.warn("Unauthorized connection attempt: invalid token or client type...");
- socket.write("HTTP/1.1 401 Unauthorized\r\n\r\n");
+ if (
+ !token ||
+ !clientType ||
+ !["newt", "olm"].includes(clientType)
+ ) {
+ logger.warn(
+ "Unauthorized connection attempt: invalid token or client type..."
+ );
+ socket.write("HTTP/1.1 401 Unauthorized\r\n\r\n");
+ socket.destroy();
+ return;
+ }
+
+ const tokenPayload = await verifyToken(
+ token,
+ clientType,
+ userToken
+ );
+ if (!tokenPayload) {
+ logger.warn(
+ "Unauthorized connection attempt: invalid token..."
+ );
+ socket.write("HTTP/1.1 401 Unauthorized\r\n\r\n");
+ socket.destroy();
+ return;
+ }
+
+ wss.handleUpgrade(
+ request,
+ socket,
+ head,
+ (ws: AuthenticatedWebSocket) => {
+ setupConnection(
+ ws,
+ tokenPayload.client,
+ tokenPayload.clientType
+ );
+ }
+ );
+ } catch (error) {
+ logger.error("WebSocket upgrade error:", error);
+ socket.write("HTTP/1.1 500 Internal Server Error\r\n\r\n");
socket.destroy();
- return;
}
-
- const tokenPayload = await verifyToken(token, clientType, userToken);
- if (!tokenPayload) {
- logger.warn("Unauthorized connection attempt: invalid token...");
- socket.write("HTTP/1.1 401 Unauthorized\r\n\r\n");
- socket.destroy();
- return;
- }
-
- wss.handleUpgrade(request, socket, head, (ws: AuthenticatedWebSocket) => {
- setupConnection(ws, tokenPayload.client, tokenPayload.clientType);
- });
- } catch (error) {
- logger.error("WebSocket upgrade error:", error);
- socket.write("HTTP/1.1 500 Internal Server Error\r\n\r\n");
- socket.destroy();
}
- });
+ );
};
// Disconnect a specific client and force them to reconnect
const disconnectClient = async (clientId: string): Promise<boolean> => {
const mapKey = getClientMapKey(clientId);
const clients = connectedClients.get(mapKey);
-
+
if (!clients || clients.length === 0) {
logger.debug(`No connections found for client ID: ${clientId}`);
return false;
}
- logger.info(`Disconnecting client ID: ${clientId} (${clients.length} connection(s))`);
-
+ logger.info(
+ `Disconnecting client ID: ${clientId} (${clients.length} connection(s))`
+ );
+
// Close all connections for this client
clients.forEach((client) => {
if (client.readyState === WebSocket.OPEN) {
@@ -313,16 +411,16 @@ const cleanup = async (): Promise<void> => {
try {
// Close all WebSocket connections
connectedClients.forEach((clients) => {
- clients.forEach(client => {
+ clients.forEach((client) => {
if (client.readyState === WebSocket.OPEN) {
client.terminate();
}
});
});
- logger.info('WebSocket cleanup completed');
+ logger.info("WebSocket cleanup completed");
} catch (error) {
- logger.error('Error during WebSocket cleanup:', error);
+ logger.error("Error during WebSocket cleanup:", error);
}
};
diff --git a/server/setup/clearStaleData.ts b/server/setup/clearStaleData.ts
index 2e54656c..8c7e85f0 100644
--- a/server/setup/clearStaleData.ts
+++ b/server/setup/clearStaleData.ts
@@ -1,5 +1,5 @@
import { build } from "@server/build";
-import { db, sessionTransferToken } from "@server/db";
+import { db, deviceWebAuthCodes, sessionTransferToken } from "@server/db";
import {
emailVerificationCodes,
newtSessions,
@@ -89,4 +89,12 @@ export async function clearStaleData() {
logger.warn("Error clearing expired sessionTransferToken:", e);
}
}
+
+ try {
+ await db
+ .delete(deviceWebAuthCodes)
+ .where(lt(deviceWebAuthCodes.expiresAt, new Date().getTime()));
+ } catch (e) {
+ logger.warn("Error clearing expired deviceWebAuthCodes:", e);
+ }
}
diff --git a/server/setup/ensureSetupToken.ts b/server/setup/ensureSetupToken.ts
index 46a62ca5..87b86321 100644
--- a/server/setup/ensureSetupToken.ts
+++ b/server/setup/ensureSetupToken.ts
@@ -16,11 +16,23 @@ function generateToken(): string {
return generateRandomString(random, alphabet, 32);
}
+function validateToken(token: string): boolean {
+ const tokenRegex = /^[a-z0-9]{32}$/;
+ return tokenRegex.test(token);
+}
+
function generateId(length: number): string {
const alphabet = "abcdefghijklmnopqrstuvwxyz0123456789";
return generateRandomString(random, alphabet, length);
}
+function showSetupToken(token: string, source: string): void {
+ console.log(`=== SETUP TOKEN ${source} ===`);
+ console.log("Token:", token);
+ console.log("Use this token on the initial setup page");
+ console.log("================================");
+}
+
export async function ensureSetupToken() {
try {
// Check if a server admin already exists
@@ -31,22 +43,55 @@ export async function ensureSetupToken() {
// If admin exists, no need for setup token
if (existingAdmin) {
- logger.debug("Server admin exists. Setup token generation skipped.");
+ logger.debug(
+ "Server admin exists. Setup token generation skipped."
+ );
return;
}
// Check if a setup token already exists
- const existingTokens = await db
+ const [existingToken] = await db
.select()
.from(setupTokens)
.where(eq(setupTokens.used, false));
+ const envSetupToken = process.env.PANGOLIN_SETUP_TOKEN;
+ console.debug("PANGOLIN_SETUP_TOKEN:", envSetupToken);
+ if (envSetupToken) {
+ if (!validateToken(envSetupToken)) {
+ throw new Error(
+ "invalid token format for PANGOLIN_SETUP_TOKEN"
+ );
+ }
+
+ if (!existingToken) {
+ const tokenId = generateId(15);
+
+ await db.insert(setupTokens).values({
+ tokenId: tokenId,
+ token: envSetupToken,
+ used: false,
+ dateCreated: moment().toISOString(),
+ dateUsed: null
+ });
+ } else if (existingToken.token !== envSetupToken) {
+ console.warn(
+ "Overwriting existing token in DB since PANGOLIN_SETUP_TOKEN is set"
+ );
+
+ await db
+ .update(setupTokens)
+ .set({ token: envSetupToken })
+ .where(eq(setupTokens.tokenId, existingToken.tokenId));
+ }
+
+ showSetupToken(envSetupToken, "FROM ENVIRONMENT");
+ return;
+ }
+
// If unused token exists, display it instead of creating a new one
- if (existingTokens.length > 0) {
- console.log("=== SETUP TOKEN EXISTS ===");
- console.log("Token:", existingTokens[0].token);
- console.log("Use this token on the initial setup page");
- console.log("================================");
+ if (existingToken) {
+ showSetupToken(existingToken.token, "EXISTS");
return;
}
@@ -62,10 +107,7 @@ export async function ensureSetupToken() {
dateUsed: null
});
- console.log("=== SETUP TOKEN GENERATED ===");
- console.log("Token:", token);
- console.log("Use this token on the initial setup page");
- console.log("================================");
+ showSetupToken(token, "GENERATED");
} catch (error) {
console.error("Failed to ensure setup token:", error);
throw error;
diff --git a/server/setup/migrationsPg.ts b/server/setup/migrationsPg.ts
index c778cca3..0fc42f9d 100644
--- a/server/setup/migrationsPg.ts
+++ b/server/setup/migrationsPg.ts
@@ -30,7 +30,7 @@ const migrations = [
{ version: "1.11.0", run: m7 },
{ version: "1.11.1", run: m8 },
{ version: "1.12.0", run: m9 },
- { version: "1.13.0", run: m10 },
+ { version: "1.13.0", run: m10 }
// Add new migrations here as they are created
] as {
version: string;
diff --git a/server/setup/scriptsPg/1.12.0.ts b/server/setup/scriptsPg/1.12.0.ts
index 38cdaf43..d3c257e3 100644
--- a/server/setup/scriptsPg/1.12.0.ts
+++ b/server/setup/scriptsPg/1.12.0.ts
@@ -9,7 +9,9 @@ export default async function migration() {
try {
await db.execute(sql`BEGIN`);
- await db.execute(sql`UPDATE "resourceRules" SET "match" = 'COUNTRY' WHERE "match" = 'GEOIP'`);
+ await db.execute(
+ sql`UPDATE "resourceRules" SET "match" = 'COUNTRY' WHERE "match" = 'GEOIP'`
+ );
await db.execute(sql`
CREATE TABLE "accessAuditLog" (
@@ -92,40 +94,97 @@ export default async function migration() {
);
`);
- await db.execute(sql`ALTER TABLE "blueprints" ADD CONSTRAINT "blueprints_orgId_orgs_orgId_fk" FOREIGN KEY ("orgId") REFERENCES "public"."orgs"("orgId") ON DELETE cascade ON UPDATE no action;`);
+ await db.execute(
+ sql`ALTER TABLE "blueprints" ADD CONSTRAINT "blueprints_orgId_orgs_orgId_fk" FOREIGN KEY ("orgId") REFERENCES "public"."orgs"("orgId") ON DELETE cascade ON UPDATE no action;`
+ );
- await db.execute(sql`ALTER TABLE "remoteExitNode" ADD COLUMN "secondaryVersion" varchar;`);
- await db.execute(sql`ALTER TABLE "resources" DROP CONSTRAINT "resources_skipToIdpId_idp_idpId_fk";`);
- await db.execute(sql`ALTER TABLE "domains" ADD COLUMN "certResolver" varchar;`);
- await db.execute(sql`ALTER TABLE "domains" ADD COLUMN "customCertResolver" varchar;`);
- await db.execute(sql`ALTER TABLE "domains" ADD COLUMN "preferWildcardCert" boolean;`);
- await db.execute(sql`ALTER TABLE "orgs" ADD COLUMN "requireTwoFactor" boolean;`);
- await db.execute(sql`ALTER TABLE "orgs" ADD COLUMN "maxSessionLengthHours" integer;`);
- await db.execute(sql`ALTER TABLE "orgs" ADD COLUMN "passwordExpiryDays" integer;`);
- await db.execute(sql`ALTER TABLE "orgs" ADD COLUMN "settingsLogRetentionDaysRequest" integer DEFAULT 7 NOT NULL;`);
- await db.execute(sql`ALTER TABLE "orgs" ADD COLUMN "settingsLogRetentionDaysAccess" integer DEFAULT 0 NOT NULL;`);
- await db.execute(sql`ALTER TABLE "orgs" ADD COLUMN "settingsLogRetentionDaysAction" integer DEFAULT 0 NOT NULL;`);
- await db.execute(sql`ALTER TABLE "resourceSessions" ADD COLUMN "issuedAt" bigint;`);
- await db.execute(sql`ALTER TABLE "resources" ADD COLUMN "proxyProtocol" boolean DEFAULT false NOT NULL;`);
- await db.execute(sql`ALTER TABLE "resources" ADD COLUMN "proxyProtocolVersion" integer DEFAULT 1;`);
- await db.execute(sql`ALTER TABLE "session" ADD COLUMN "issuedAt" bigint;`);
- await db.execute(sql`ALTER TABLE "user" ADD COLUMN "lastPasswordChange" bigint;`);
- await db.execute(sql`ALTER TABLE "accessAuditLog" ADD CONSTRAINT "accessAuditLog_orgId_orgs_orgId_fk" FOREIGN KEY ("orgId") REFERENCES "public"."orgs"("orgId") ON DELETE cascade ON UPDATE no action;`);
- await db.execute(sql`ALTER TABLE "actionAuditLog" ADD CONSTRAINT "actionAuditLog_orgId_orgs_orgId_fk" FOREIGN KEY ("orgId") REFERENCES "public"."orgs"("orgId") ON DELETE cascade ON UPDATE no action;`);
- await db.execute(sql`ALTER TABLE "dnsRecords" ADD CONSTRAINT "dnsRecords_domainId_domains_domainId_fk" FOREIGN KEY ("domainId") REFERENCES "public"."domains"("domainId") ON DELETE cascade ON UPDATE no action;`);
- await db.execute(sql`ALTER TABLE "requestAuditLog" ADD CONSTRAINT "requestAuditLog_orgId_orgs_orgId_fk" FOREIGN KEY ("orgId") REFERENCES "public"."orgs"("orgId") ON DELETE cascade ON UPDATE no action;`);
- await db.execute(sql`CREATE INDEX "idx_identityAuditLog_timestamp" ON "accessAuditLog" USING btree ("timestamp");`);
- await db.execute(sql`CREATE INDEX "idx_identityAuditLog_org_timestamp" ON "accessAuditLog" USING btree ("orgId","timestamp");`);
- await db.execute(sql`CREATE INDEX "idx_actionAuditLog_timestamp" ON "actionAuditLog" USING btree ("timestamp");`);
- await db.execute(sql`CREATE INDEX "idx_actionAuditLog_org_timestamp" ON "actionAuditLog" USING btree ("orgId","timestamp");`);
- await db.execute(sql`CREATE INDEX "idx_requestAuditLog_timestamp" ON "requestAuditLog" USING btree ("timestamp");`);
- await db.execute(sql`CREATE INDEX "idx_requestAuditLog_org_timestamp" ON "requestAuditLog" USING btree ("orgId","timestamp");`);
- await db.execute(sql`ALTER TABLE "resources" ADD CONSTRAINT "resources_skipToIdpId_idp_idpId_fk" FOREIGN KEY ("skipToIdpId") REFERENCES "public"."idp"("idpId") ON DELETE set null ON UPDATE no action;`);
+ await db.execute(
+ sql`ALTER TABLE "remoteExitNode" ADD COLUMN "secondaryVersion" varchar;`
+ );
+ await db.execute(
+ sql`ALTER TABLE "resources" DROP CONSTRAINT "resources_skipToIdpId_idp_idpId_fk";`
+ );
+ await db.execute(
+ sql`ALTER TABLE "domains" ADD COLUMN "certResolver" varchar;`
+ );
+ await db.execute(
+ sql`ALTER TABLE "domains" ADD COLUMN "customCertResolver" varchar;`
+ );
+ await db.execute(
+ sql`ALTER TABLE "domains" ADD COLUMN "preferWildcardCert" boolean;`
+ );
+ await db.execute(
+ sql`ALTER TABLE "orgs" ADD COLUMN "requireTwoFactor" boolean;`
+ );
+ await db.execute(
+ sql`ALTER TABLE "orgs" ADD COLUMN "maxSessionLengthHours" integer;`
+ );
+ await db.execute(
+ sql`ALTER TABLE "orgs" ADD COLUMN "passwordExpiryDays" integer;`
+ );
+ await db.execute(
+ sql`ALTER TABLE "orgs" ADD COLUMN "settingsLogRetentionDaysRequest" integer DEFAULT 7 NOT NULL;`
+ );
+ await db.execute(
+ sql`ALTER TABLE "orgs" ADD COLUMN "settingsLogRetentionDaysAccess" integer DEFAULT 0 NOT NULL;`
+ );
+ await db.execute(
+ sql`ALTER TABLE "orgs" ADD COLUMN "settingsLogRetentionDaysAction" integer DEFAULT 0 NOT NULL;`
+ );
+ await db.execute(
+ sql`ALTER TABLE "resourceSessions" ADD COLUMN "issuedAt" bigint;`
+ );
+ await db.execute(
+ sql`ALTER TABLE "resources" ADD COLUMN "proxyProtocol" boolean DEFAULT false NOT NULL;`
+ );
+ await db.execute(
+ sql`ALTER TABLE "resources" ADD COLUMN "proxyProtocolVersion" integer DEFAULT 1;`
+ );
+ await db.execute(
+ sql`ALTER TABLE "session" ADD COLUMN "issuedAt" bigint;`
+ );
+ await db.execute(
+ sql`ALTER TABLE "user" ADD COLUMN "lastPasswordChange" bigint;`
+ );
+ await db.execute(
+ sql`ALTER TABLE "accessAuditLog" ADD CONSTRAINT "accessAuditLog_orgId_orgs_orgId_fk" FOREIGN KEY ("orgId") REFERENCES "public"."orgs"("orgId") ON DELETE cascade ON UPDATE no action;`
+ );
+ await db.execute(
+ sql`ALTER TABLE "actionAuditLog" ADD CONSTRAINT "actionAuditLog_orgId_orgs_orgId_fk" FOREIGN KEY ("orgId") REFERENCES "public"."orgs"("orgId") ON DELETE cascade ON UPDATE no action;`
+ );
+ await db.execute(
+ sql`ALTER TABLE "dnsRecords" ADD CONSTRAINT "dnsRecords_domainId_domains_domainId_fk" FOREIGN KEY ("domainId") REFERENCES "public"."domains"("domainId") ON DELETE cascade ON UPDATE no action;`
+ );
+ await db.execute(
+ sql`ALTER TABLE "requestAuditLog" ADD CONSTRAINT "requestAuditLog_orgId_orgs_orgId_fk" FOREIGN KEY ("orgId") REFERENCES "public"."orgs"("orgId") ON DELETE cascade ON UPDATE no action;`
+ );
+ await db.execute(
+ sql`CREATE INDEX "idx_identityAuditLog_timestamp" ON "accessAuditLog" USING btree ("timestamp");`
+ );
+ await db.execute(
+ sql`CREATE INDEX "idx_identityAuditLog_org_timestamp" ON "accessAuditLog" USING btree ("orgId","timestamp");`
+ );
+ await db.execute(
+ sql`CREATE INDEX "idx_actionAuditLog_timestamp" ON "actionAuditLog" USING btree ("timestamp");`
+ );
+ await db.execute(
+ sql`CREATE INDEX "idx_actionAuditLog_org_timestamp" ON "actionAuditLog" USING btree ("orgId","timestamp");`
+ );
+ await db.execute(
+ sql`CREATE INDEX "idx_requestAuditLog_timestamp" ON "requestAuditLog" USING btree ("timestamp");`
+ );
+ await db.execute(
+ sql`CREATE INDEX "idx_requestAuditLog_org_timestamp" ON "requestAuditLog" USING btree ("orgId","timestamp");`
+ );
+ await db.execute(
+ sql`ALTER TABLE "resources" ADD CONSTRAINT "resources_skipToIdpId_idp_idpId_fk" FOREIGN KEY ("skipToIdpId") REFERENCES "public"."idp"("idpId") ON DELETE set null ON UPDATE no action;`
+ );
await db.execute(sql`ALTER TABLE "orgs" DROP COLUMN "settings";`);
-
// get all of the domains
- const domainsQuery = await db.execute(sql`SELECT "domainId", "baseDomain" FROM "domains"`);
+ const domainsQuery = await db.execute(
+ sql`SELECT "domainId", "baseDomain" FROM "domains"`
+ );
const domains = domainsQuery.rows as {
domainId: string;
baseDomain: string;
@@ -135,11 +194,11 @@ export default async function migration() {
// insert two records into the dnsRecords table for each domain
await db.execute(sql`
INSERT INTO "dnsRecords" ("domainId", "recordType", "baseDomain", "value", "verified")
- VALUES (${domain.domainId}, 'A', ${`*.${domain.baseDomain}`}, ${'Server IP Address'}, true)
+ VALUES (${domain.domainId}, 'A', ${`*.${domain.baseDomain}`}, ${"Server IP Address"}, true)
`);
await db.execute(sql`
INSERT INTO "dnsRecords" ("domainId", "recordType", "baseDomain", "value", "verified")
- VALUES (${domain.domainId}, 'A', ${domain.baseDomain}, ${'Server IP Address'}, true)
+ VALUES (${domain.domainId}, 'A', ${domain.baseDomain}, ${"Server IP Address"}, true)
`);
}
diff --git a/server/setup/scriptsPg/1.13.0.ts b/server/setup/scriptsPg/1.13.0.ts
index e13276df..9a56706c 100644
--- a/server/setup/scriptsPg/1.13.0.ts
+++ b/server/setup/scriptsPg/1.13.0.ts
@@ -255,7 +255,9 @@ export default async function migration() {
const siteDataQuery = await db.execute(sql`
SELECT "orgId" FROM "sites" WHERE "siteId" = ${site.siteId}
`);
- const siteData = siteDataQuery.rows[0] as { orgId: string } | undefined;
+ const siteData = siteDataQuery.rows[0] as
+ | { orgId: string }
+ | undefined;
if (!siteData) continue;
const subnets = site.remoteSubnets.split(",");
diff --git a/server/setup/scriptsPg/1.7.0.ts b/server/setup/scriptsPg/1.7.0.ts
index 3cb799e0..aa740ecb 100644
--- a/server/setup/scriptsPg/1.7.0.ts
+++ b/server/setup/scriptsPg/1.7.0.ts
@@ -121,7 +121,7 @@ export default async function migration() {
try {
await db.execute(sql`BEGIN`);
-
+
// Update all existing orgs to have the default subnet
await db.execute(sql`UPDATE "orgs" SET "subnet" = '100.90.128.0/24'`);
diff --git a/server/setup/scriptsPg/1.9.0.ts b/server/setup/scriptsPg/1.9.0.ts
index fdbf3ae9..eac7ade9 100644
--- a/server/setup/scriptsPg/1.9.0.ts
+++ b/server/setup/scriptsPg/1.9.0.ts
@@ -11,7 +11,9 @@ export default async function migration() {
try {
// Get the first siteId to use as default
- const firstSite = await db.execute(sql`SELECT "siteId" FROM "sites" LIMIT 1`);
+ const firstSite = await db.execute(
+ sql`SELECT "siteId" FROM "sites" LIMIT 1`
+ );
if (firstSite.rows.length > 0) {
firstSiteId = firstSite.rows[0].siteId as number;
}
@@ -52,33 +54,59 @@ export default async function migration() {
"enabled" boolean DEFAULT true NOT NULL
);`);
- await db.execute(sql`ALTER TABLE "resources" DROP CONSTRAINT "resources_siteId_sites_siteId_fk";`);
+ await db.execute(
+ sql`ALTER TABLE "resources" DROP CONSTRAINT "resources_siteId_sites_siteId_fk";`
+ );
- await db.execute(sql`ALTER TABLE "clients" ALTER COLUMN "lastPing" TYPE integer USING NULL;`);
+ await db.execute(
+ sql`ALTER TABLE "clients" ALTER COLUMN "lastPing" TYPE integer USING NULL;`
+ );
- await db.execute(sql`ALTER TABLE "clientSites" ADD COLUMN "endpoint" varchar;`);
+ await db.execute(
+ sql`ALTER TABLE "clientSites" ADD COLUMN "endpoint" varchar;`
+ );
- await db.execute(sql`ALTER TABLE "exitNodes" ADD COLUMN "online" boolean DEFAULT false NOT NULL;`);
+ await db.execute(
+ sql`ALTER TABLE "exitNodes" ADD COLUMN "online" boolean DEFAULT false NOT NULL;`
+ );
- await db.execute(sql`ALTER TABLE "exitNodes" ADD COLUMN "lastPing" integer;`);
+ await db.execute(
+ sql`ALTER TABLE "exitNodes" ADD COLUMN "lastPing" integer;`
+ );
- await db.execute(sql`ALTER TABLE "exitNodes" ADD COLUMN "type" text DEFAULT 'gerbil';`);
+ await db.execute(
+ sql`ALTER TABLE "exitNodes" ADD COLUMN "type" text DEFAULT 'gerbil';`
+ );
await db.execute(sql`ALTER TABLE "olms" ADD COLUMN "version" text;`);
await db.execute(sql`ALTER TABLE "orgs" ADD COLUMN "createdAt" text;`);
- await db.execute(sql`ALTER TABLE "resources" ADD COLUMN "skipToIdpId" integer;`);
+ await db.execute(
+ sql`ALTER TABLE "resources" ADD COLUMN "skipToIdpId" integer;`
+ );
- await db.execute(sql.raw(`ALTER TABLE "targets" ADD COLUMN "siteId" integer NOT NULL DEFAULT ${firstSiteId || 1};`));
+ await db.execute(
+ sql.raw(
+ `ALTER TABLE "targets" ADD COLUMN "siteId" integer NOT NULL DEFAULT ${firstSiteId || 1};`
+ )
+ );
- await db.execute(sql`ALTER TABLE "siteResources" ADD CONSTRAINT "siteResources_siteId_sites_siteId_fk" FOREIGN KEY ("siteId") REFERENCES "public"."sites"("siteId") ON DELETE cascade ON UPDATE no action;`);
+ await db.execute(
+ sql`ALTER TABLE "siteResources" ADD CONSTRAINT "siteResources_siteId_sites_siteId_fk" FOREIGN KEY ("siteId") REFERENCES "public"."sites"("siteId") ON DELETE cascade ON UPDATE no action;`
+ );
- await db.execute(sql`ALTER TABLE "siteResources" ADD CONSTRAINT "siteResources_orgId_orgs_orgId_fk" FOREIGN KEY ("orgId") REFERENCES "public"."orgs"("orgId") ON DELETE cascade ON UPDATE no action;`);
+ await db.execute(
+ sql`ALTER TABLE "siteResources" ADD CONSTRAINT "siteResources_orgId_orgs_orgId_fk" FOREIGN KEY ("orgId") REFERENCES "public"."orgs"("orgId") ON DELETE cascade ON UPDATE no action;`
+ );
- await db.execute(sql`ALTER TABLE "resources" ADD CONSTRAINT "resources_skipToIdpId_idp_idpId_fk" FOREIGN KEY ("skipToIdpId") REFERENCES "public"."idp"("idpId") ON DELETE cascade ON UPDATE no action;`);
+ await db.execute(
+ sql`ALTER TABLE "resources" ADD CONSTRAINT "resources_skipToIdpId_idp_idpId_fk" FOREIGN KEY ("skipToIdpId") REFERENCES "public"."idp"("idpId") ON DELETE cascade ON UPDATE no action;`
+ );
- await db.execute(sql`ALTER TABLE "targets" ADD CONSTRAINT "targets_siteId_sites_siteId_fk" FOREIGN KEY ("siteId") REFERENCES "public"."sites"("siteId") ON DELETE cascade ON UPDATE no action;`);
+ await db.execute(
+ sql`ALTER TABLE "targets" ADD CONSTRAINT "targets_siteId_sites_siteId_fk" FOREIGN KEY ("siteId") REFERENCES "public"."sites"("siteId") ON DELETE cascade ON UPDATE no action;`
+ );
await db.execute(sql`ALTER TABLE "clients" DROP COLUMN "endpoint";`);
diff --git a/server/setup/scriptsSqlite/1.0.0-beta13.ts b/server/setup/scriptsSqlite/1.0.0-beta13.ts
index 9ced727f..9986b06f 100644
--- a/server/setup/scriptsSqlite/1.0.0-beta13.ts
+++ b/server/setup/scriptsSqlite/1.0.0-beta13.ts
@@ -25,7 +25,9 @@ export default async function migration() {
console.log(`Added new table and column: resourceRules, applyRules`);
} catch (e) {
- console.log("Unable to add new table and column: resourceRules, applyRules");
+ console.log(
+ "Unable to add new table and column: resourceRules, applyRules"
+ );
throw e;
}
diff --git a/server/setup/scriptsSqlite/1.0.0-beta3.ts b/server/setup/scriptsSqlite/1.0.0-beta3.ts
index fccfeb88..5d69af6b 100644
--- a/server/setup/scriptsSqlite/1.0.0-beta3.ts
+++ b/server/setup/scriptsSqlite/1.0.0-beta3.ts
@@ -38,4 +38,4 @@ export default async function migration() {
fs.writeFileSync(filePath, updatedYaml, "utf8");
console.log("Done.");
-}
\ No newline at end of file
+}
diff --git a/server/setup/scriptsSqlite/1.0.0-beta6.ts b/server/setup/scriptsSqlite/1.0.0-beta6.ts
index 89129678..a13a7e31 100644
--- a/server/setup/scriptsSqlite/1.0.0-beta6.ts
+++ b/server/setup/scriptsSqlite/1.0.0-beta6.ts
@@ -43,7 +43,9 @@ export default async function migration() {
const updatedYaml = yaml.dump(rawConfig);
fs.writeFileSync(filePath, updatedYaml, "utf8");
} catch (error) {
- console.log("We were unable to add CORS to your config file. Please add it manually.");
+ console.log(
+ "We were unable to add CORS to your config file. Please add it manually."
+ );
console.error(error);
}
diff --git a/server/setup/scriptsSqlite/1.0.0-beta9.ts b/server/setup/scriptsSqlite/1.0.0-beta9.ts
index 7cce1c2d..6d48ed39 100644
--- a/server/setup/scriptsSqlite/1.0.0-beta9.ts
+++ b/server/setup/scriptsSqlite/1.0.0-beta9.ts
@@ -182,12 +182,15 @@ export default async function migration() {
if (parsedConfig.success) {
// delete permanent from redirect-to-https middleware
- delete traefikConfig.http.middlewares["redirect-to-https"].redirectScheme.permanent;
+ delete traefikConfig.http.middlewares["redirect-to-https"]
+ .redirectScheme.permanent;
const updatedTraefikYaml = yaml.dump(traefikConfig);
fs.writeFileSync(traefikPath, updatedTraefikYaml, "utf8");
- console.log("Deleted permanent from redirect-to-https middleware.");
+ console.log(
+ "Deleted permanent from redirect-to-https middleware."
+ );
} else {
console.log(fromZodError(parsedConfig.error));
console.log(
diff --git a/server/setup/scriptsSqlite/1.10.0.ts b/server/setup/scriptsSqlite/1.10.0.ts
index 3065a664..03cf24dc 100644
--- a/server/setup/scriptsSqlite/1.10.0.ts
+++ b/server/setup/scriptsSqlite/1.10.0.ts
@@ -13,15 +13,11 @@ export default async function migration() {
try {
const resources = db
- .prepare(
- "SELECT resourceId FROM resources"
- )
+ .prepare("SELECT resourceId FROM resources")
.all() as Array<{ resourceId: number }>;
const siteResources = db
- .prepare(
- "SELECT siteResourceId FROM siteResources"
- )
+ .prepare("SELECT siteResourceId FROM siteResources")
.all() as Array<{ siteResourceId: number }>;
db.transaction(() => {
@@ -82,17 +78,13 @@ export default async function migration() {
// Handle auto-provisioned users for identity providers
const autoProvisionIdps = db
- .prepare(
- "SELECT idpId FROM idp WHERE autoProvision = 1"
- )
+ .prepare("SELECT idpId FROM idp WHERE autoProvision = 1")
.all() as Array<{ idpId: number }>;
for (const idp of autoProvisionIdps) {
// Get all users with this identity provider
const usersWithIdp = db
- .prepare(
- "SELECT id FROM user WHERE idpId = ?"
- )
+ .prepare("SELECT id FROM user WHERE idpId = ?")
.all(idp.idpId) as Array<{ id: string }>;
// Update userOrgs to set autoProvisioned to true for these users
diff --git a/server/setup/scriptsSqlite/1.10.1.ts b/server/setup/scriptsSqlite/1.10.1.ts
index f6f9894e..24181558 100644
--- a/server/setup/scriptsSqlite/1.10.1.ts
+++ b/server/setup/scriptsSqlite/1.10.1.ts
@@ -5,16 +5,16 @@ import path from "path";
const version = "1.10.1";
export default async function migration() {
- console.log(`Running setup script ${version}...`);
+ console.log(`Running setup script ${version}...`);
- const location = path.join(APP_PATH, "db", "db.sqlite");
- const db = new Database(location);
+ const location = path.join(APP_PATH, "db", "db.sqlite");
+ const db = new Database(location);
- try {
- db.pragma("foreign_keys = OFF");
+ try {
+ db.pragma("foreign_keys = OFF");
- db.transaction(() => {
- db.exec(`ALTER TABLE "targets" RENAME TO "targets_old";
+ db.transaction(() => {
+ db.exec(`ALTER TABLE "targets" RENAME TO "targets_old";
--> statement-breakpoint
CREATE TABLE "targets" (
"targetId" INTEGER PRIMARY KEY AUTOINCREMENT,
@@ -57,13 +57,13 @@ SELECT
FROM "targets_old";
--> statement-breakpoint
DROP TABLE "targets_old";`);
- })();
+ })();
- db.pragma("foreign_keys = ON");
+ db.pragma("foreign_keys = ON");
- console.log(`Migrated database`);
- } catch (e) {
- console.log("Failed to migrate db:", e);
- throw e;
- }
-}
\ No newline at end of file
+ console.log(`Migrated database`);
+ } catch (e) {
+ console.log("Failed to migrate db:", e);
+ throw e;
+ }
+}
diff --git a/server/setup/scriptsSqlite/1.11.0.ts b/server/setup/scriptsSqlite/1.11.0.ts
index c79cfdb4..41d68563 100644
--- a/server/setup/scriptsSqlite/1.11.0.ts
+++ b/server/setup/scriptsSqlite/1.11.0.ts
@@ -13,25 +13,29 @@ export default async function migration() {
const db = new Database(location);
db.transaction(() => {
-
- db.prepare(`
+ db.prepare(
+ `
CREATE TABLE 'account' (
'accountId' integer PRIMARY KEY AUTOINCREMENT NOT NULL,
'userId' text NOT NULL,
FOREIGN KEY ('userId') REFERENCES 'user'('id') ON UPDATE no action ON DELETE cascade
);
- `).run();
+ `
+ ).run();
- db.prepare(`
+ db.prepare(
+ `
CREATE TABLE 'accountDomains' (
'accountId' integer NOT NULL,
'domainId' text NOT NULL,
FOREIGN KEY ('accountId') REFERENCES 'account'('accountId') ON UPDATE no action ON DELETE cascade,
FOREIGN KEY ('domainId') REFERENCES 'domains'('domainId') ON UPDATE no action ON DELETE cascade
);
- `).run();
+ `
+ ).run();
- db.prepare(`
+ db.prepare(
+ `
CREATE TABLE 'certificates' (
'certId' integer PRIMARY KEY AUTOINCREMENT NOT NULL,
'domain' text NOT NULL,
@@ -49,11 +53,15 @@ export default async function migration() {
'keyFile' text,
FOREIGN KEY ('domainId') REFERENCES 'domains'('domainId') ON UPDATE no action ON DELETE cascade
);
- `).run();
+ `
+ ).run();
- db.prepare(`CREATE UNIQUE INDEX 'certificates_domain_unique' ON 'certificates' ('domain');`).run();
+ db.prepare(
+ `CREATE UNIQUE INDEX 'certificates_domain_unique' ON 'certificates' ('domain');`
+ ).run();
- db.prepare(`
+ db.prepare(
+ `
CREATE TABLE 'customers' (
'customerId' text PRIMARY KEY NOT NULL,
'orgId' text NOT NULL,
@@ -65,9 +73,11 @@ export default async function migration() {
'updatedAt' integer NOT NULL,
FOREIGN KEY ('orgId') REFERENCES 'orgs'('orgId') ON UPDATE no action ON DELETE cascade
);
- `).run();
+ `
+ ).run();
- db.prepare(`
+ db.prepare(
+ `
CREATE TABLE 'dnsChallenges' (
'dnsChallengeId' integer PRIMARY KEY AUTOINCREMENT NOT NULL,
'domain' text NOT NULL,
@@ -77,26 +87,32 @@ export default async function migration() {
'expiresAt' integer NOT NULL,
'completed' integer DEFAULT false
);
- `).run();
+ `
+ ).run();
- db.prepare(`
+ db.prepare(
+ `
CREATE TABLE 'domainNamespaces' (
'domainNamespaceId' text PRIMARY KEY NOT NULL,
'domainId' text NOT NULL,
FOREIGN KEY ('domainId') REFERENCES 'domains'('domainId') ON UPDATE no action ON DELETE set null
);
- `).run();
+ `
+ ).run();
- db.prepare(`
+ db.prepare(
+ `
CREATE TABLE 'exitNodeOrgs' (
'exitNodeId' integer NOT NULL,
'orgId' text NOT NULL,
FOREIGN KEY ('exitNodeId') REFERENCES 'exitNodes'('exitNodeId') ON UPDATE no action ON DELETE cascade,
FOREIGN KEY ('orgId') REFERENCES 'orgs'('orgId') ON UPDATE no action ON DELETE cascade
);
- `).run();
+ `
+ ).run();
- db.prepare(`
+ db.prepare(
+ `
CREATE TABLE 'loginPage' (
'loginPageId' integer PRIMARY KEY AUTOINCREMENT NOT NULL,
'subdomain' text,
@@ -106,27 +122,33 @@ export default async function migration() {
FOREIGN KEY ('exitNodeId') REFERENCES 'exitNodes'('exitNodeId') ON UPDATE no action ON DELETE set null,
FOREIGN KEY ('domainId') REFERENCES 'domains'('domainId') ON UPDATE no action ON DELETE set null
);
- `).run();
+ `
+ ).run();
- db.prepare(`
+ db.prepare(
+ `
CREATE TABLE 'loginPageOrg' (
'loginPageId' integer NOT NULL,
'orgId' text NOT NULL,
FOREIGN KEY ('loginPageId') REFERENCES 'loginPage'('loginPageId') ON UPDATE no action ON DELETE cascade,
FOREIGN KEY ('orgId') REFERENCES 'orgs'('orgId') ON UPDATE no action ON DELETE cascade
);
- `).run();
+ `
+ ).run();
- db.prepare(`
+ db.prepare(
+ `
CREATE TABLE 'remoteExitNodeSession' (
'id' text PRIMARY KEY NOT NULL,
'remoteExitNodeId' text NOT NULL,
'expiresAt' integer NOT NULL,
FOREIGN KEY ('remoteExitNodeId') REFERENCES 'remoteExitNode'('id') ON UPDATE no action ON DELETE cascade
);
- `).run();
+ `
+ ).run();
- db.prepare(`
+ db.prepare(
+ `
CREATE TABLE 'remoteExitNode' (
'id' text PRIMARY KEY NOT NULL,
'secretHash' text NOT NULL,
@@ -135,9 +157,11 @@ export default async function migration() {
'exitNodeId' integer,
FOREIGN KEY ('exitNodeId') REFERENCES 'exitNodes'('exitNodeId') ON UPDATE no action ON DELETE cascade
);
- `).run();
+ `
+ ).run();
- db.prepare(`
+ db.prepare(
+ `
CREATE TABLE 'sessionTransferToken' (
'token' text PRIMARY KEY NOT NULL,
'sessionId' text NOT NULL,
@@ -145,9 +169,11 @@ export default async function migration() {
'expiresAt' integer NOT NULL,
FOREIGN KEY ('sessionId') REFERENCES 'session'('id') ON UPDATE no action ON DELETE cascade
);
- `).run();
+ `
+ ).run();
- db.prepare(`
+ db.prepare(
+ `
CREATE TABLE 'subscriptionItems' (
'subscriptionItemId' integer PRIMARY KEY AUTOINCREMENT NOT NULL,
'subscriptionId' text NOT NULL,
@@ -162,9 +188,11 @@ export default async function migration() {
'name' text,
FOREIGN KEY ('subscriptionId') REFERENCES 'subscriptions'('subscriptionId') ON UPDATE no action ON DELETE cascade
);
- `).run();
+ `
+ ).run();
- db.prepare(`
+ db.prepare(
+ `
CREATE TABLE 'subscriptions' (
'subscriptionId' text PRIMARY KEY NOT NULL,
'customerId' text NOT NULL,
@@ -175,9 +203,11 @@ export default async function migration() {
'billingCycleAnchor' integer,
FOREIGN KEY ('customerId') REFERENCES 'customers'('customerId') ON UPDATE no action ON DELETE cascade
);
- `).run();
+ `
+ ).run();
- db.prepare(`
+ db.prepare(
+ `
CREATE TABLE 'usage' (
'usageId' text PRIMARY KEY NOT NULL,
'featureId' text NOT NULL,
@@ -191,9 +221,11 @@ export default async function migration() {
'nextRolloverAt' integer,
FOREIGN KEY ('orgId') REFERENCES 'orgs'('orgId') ON UPDATE no action ON DELETE cascade
);
- `).run();
+ `
+ ).run();
- db.prepare(`
+ db.prepare(
+ `
CREATE TABLE 'usageNotifications' (
'notificationId' integer PRIMARY KEY AUTOINCREMENT NOT NULL,
'orgId' text NOT NULL,
@@ -203,18 +235,22 @@ export default async function migration() {
'sentAt' integer NOT NULL,
FOREIGN KEY ('orgId') REFERENCES 'orgs'('orgId') ON UPDATE no action ON DELETE cascade
);
- `).run();
+ `
+ ).run();
- db.prepare(`
+ db.prepare(
+ `
CREATE TABLE 'resourceHeaderAuth' (
'headerAuthId' integer PRIMARY KEY AUTOINCREMENT NOT NULL,
'resourceId' integer NOT NULL,
'headerAuthHash' text NOT NULL,
FOREIGN KEY ('resourceId') REFERENCES 'resources'('resourceId') ON UPDATE no action ON DELETE cascade
);
- `).run();
+ `
+ ).run();
- db.prepare(`
+ db.prepare(
+ `
CREATE TABLE 'targetHealthCheck' (
'targetHealthCheckId' integer PRIMARY KEY AUTOINCREMENT NOT NULL,
'targetId' integer NOT NULL,
@@ -234,11 +270,13 @@ export default async function migration() {
'hcHealth' text DEFAULT 'unknown',
FOREIGN KEY ('targetId') REFERENCES 'targets'('targetId') ON UPDATE no action ON DELETE cascade
);
- `).run();
+ `
+ ).run();
db.prepare(`DROP TABLE 'limits';`).run();
- db.prepare(`
+ db.prepare(
+ `
CREATE TABLE 'limits' (
'limitId' text PRIMARY KEY NOT NULL,
'featureId' text NOT NULL,
@@ -247,12 +285,15 @@ export default async function migration() {
'description' text,
FOREIGN KEY ('orgId') REFERENCES 'orgs'('orgId') ON UPDATE no action ON DELETE cascade
);
- `).run();
+ `
+ ).run();
db.prepare(`ALTER TABLE 'orgs' ADD 'settings' text;`).run();
db.prepare(`ALTER TABLE 'targets' ADD 'rewritePath' text;`).run();
db.prepare(`ALTER TABLE 'targets' ADD 'rewritePathType' text;`).run();
- db.prepare(`ALTER TABLE 'targets' ADD 'priority' integer DEFAULT 100 NOT NULL;`).run();
+ db.prepare(
+ `ALTER TABLE 'targets' ADD 'priority' integer DEFAULT 100 NOT NULL;`
+ ).run();
const webauthnCredentials = db
.prepare(
@@ -269,7 +310,7 @@ export default async function migration() {
dateCreated: string;
}[];
- db.prepare(`DELETE FROM 'webauthnCredentials';`).run();
+ db.prepare(`DELETE FROM 'webauthnCredentials';`).run();
for (const webauthnCredential of webauthnCredentials) {
const newCredentialId = isoBase64URL.fromBuffer(
@@ -304,7 +345,9 @@ export default async function migration() {
).run();
// 2. Select all rows
- const resources = db.prepare(`SELECT resourceId FROM resources`).all() as {
+ const resources = db
+ .prepare(`SELECT resourceId FROM resources`)
+ .all() as {
resourceId: number;
}[];
diff --git a/server/setup/scriptsSqlite/1.12.0.ts b/server/setup/scriptsSqlite/1.12.0.ts
index bb357c81..292f1f05 100644
--- a/server/setup/scriptsSqlite/1.12.0.ts
+++ b/server/setup/scriptsSqlite/1.12.0.ts
@@ -112,7 +112,6 @@ export default async function migration() {
`
).run();
-
db.prepare(
`
CREATE TABLE 'blueprints' (
@@ -212,10 +211,14 @@ export default async function migration() {
db.prepare(
`ALTER TABLE 'user' ADD 'lastPasswordChange' integer;`
).run();
- db.prepare(`ALTER TABLE 'remoteExitNode' ADD 'secondaryVersion' text;`).run();
+ db.prepare(
+ `ALTER TABLE 'remoteExitNode' ADD 'secondaryVersion' text;`
+ ).run();
// get all of the domains
- const domains = db.prepare(`SELECT domainId, baseDomain from domains`).all() as {
+ const domains = db
+ .prepare(`SELECT domainId, baseDomain from domains`)
+ .all() as {
domainId: number;
baseDomain: string;
}[];
diff --git a/server/setup/scriptsSqlite/1.13.0.ts b/server/setup/scriptsSqlite/1.13.0.ts
index 5b2bcf01..df8d7344 100644
--- a/server/setup/scriptsSqlite/1.13.0.ts
+++ b/server/setup/scriptsSqlite/1.13.0.ts
@@ -287,7 +287,10 @@ export default async function migration() {
let aliasIpOctet = 8;
for (const siteResource of siteResourcesForAlias) {
const aliasAddress = `100.96.128.${aliasIpOctet}`;
- updateAliasAddress.run(aliasAddress, siteResource.siteResourceId);
+ updateAliasAddress.run(
+ aliasAddress,
+ siteResource.siteResourceId
+ );
aliasIpOctet++;
}
@@ -303,7 +306,12 @@ export default async function migration() {
for (const subnet of subnets) {
// Generate a unique niceId for each new site resource
let niceId = generateName();
- insertCidrResource.run(site.siteId, subnet.trim(), niceId, site.siteId);
+ insertCidrResource.run(
+ site.siteId,
+ subnet.trim(),
+ niceId,
+ site.siteId
+ );
}
}
}
diff --git a/server/setup/scriptsSqlite/1.5.0.ts b/server/setup/scriptsSqlite/1.5.0.ts
index 46e9ccca..10c12294 100644
--- a/server/setup/scriptsSqlite/1.5.0.ts
+++ b/server/setup/scriptsSqlite/1.5.0.ts
@@ -48,9 +48,7 @@ export default async function migration() {
const rawConfig = yaml.load(fileContents) as any;
if (rawConfig.cors?.headers) {
- const headers = JSON.parse(
- JSON.stringify(rawConfig.cors.headers)
- );
+ const headers = JSON.parse(JSON.stringify(rawConfig.cors.headers));
rawConfig.cors.allowed_headers = headers;
delete rawConfig.cors.headers;
}
@@ -61,9 +59,7 @@ export default async function migration() {
console.log(`Migrated CORS headers to allowed_headers`);
} catch (e) {
- console.log(
- `Unable to migrate config file. Error: ${e}`
- );
+ console.log(`Unable to migrate config file. Error: ${e}`);
}
console.log(`${version} migration complete`);
diff --git a/server/setup/scriptsSqlite/1.6.0.ts b/server/setup/scriptsSqlite/1.6.0.ts
index adab2697..45abe693 100644
--- a/server/setup/scriptsSqlite/1.6.0.ts
+++ b/server/setup/scriptsSqlite/1.6.0.ts
@@ -58,7 +58,9 @@ export default async function migration() {
console.log(`Set trust_proxy to 1 in config file`);
} catch (e) {
- console.log(`Unable to migrate config file. Please do it manually. Error: ${e}`);
+ console.log(
+ `Unable to migrate config file. Please do it manually. Error: ${e}`
+ );
}
console.log(`${version} migration complete`);
diff --git a/server/setup/scriptsSqlite/1.9.0.ts b/server/setup/scriptsSqlite/1.9.0.ts
index 5f247ea5..89d7b595 100644
--- a/server/setup/scriptsSqlite/1.9.0.ts
+++ b/server/setup/scriptsSqlite/1.9.0.ts
@@ -11,26 +11,28 @@ export default async function migration() {
const db = new Database(location);
const resourceSiteMap = new Map();
- let firstSiteId: number = 1;
+ let firstSiteId: number = 1;
- try {
- // Get the first siteId to use as default
- const firstSite = db.prepare("SELECT siteId FROM sites LIMIT 1").get() as { siteId: number } | undefined;
- if (firstSite) {
- firstSiteId = firstSite.siteId;
- }
+ try {
+ // Get the first siteId to use as default
+ const firstSite = db
+ .prepare("SELECT siteId FROM sites LIMIT 1")
+ .get() as { siteId: number } | undefined;
+ if (firstSite) {
+ firstSiteId = firstSite.siteId;
+ }
- const resources = db
- .prepare(
- "SELECT resourceId, siteId FROM resources WHERE siteId IS NOT NULL"
- )
- .all() as Array<{ resourceId: number; siteId: number }>;
- for (const resource of resources) {
- resourceSiteMap.set(resource.resourceId, resource.siteId);
- }
- } catch (e) {
- console.log("Error getting resources:", e);
- }
+ const resources = db
+ .prepare(
+ "SELECT resourceId, siteId FROM resources WHERE siteId IS NOT NULL"
+ )
+ .all() as Array<{ resourceId: number; siteId: number }>;
+ for (const resource of resources) {
+ resourceSiteMap.set(resource.resourceId, resource.siteId);
+ }
+ } catch (e) {
+ console.log("Error getting resources:", e);
+ }
try {
db.pragma("foreign_keys = OFF");
diff --git a/server/types/HttpCode.ts b/server/types/HttpCode.ts
index 70f21053..a20c8577 100644
--- a/server/types/HttpCode.ts
+++ b/server/types/HttpCode.ts
@@ -59,7 +59,7 @@ export enum HttpCode {
INSUFFICIENT_STORAGE = 507,
LOOP_DETECTED = 508,
NOT_EXTENDED = 510,
- NETWORK_AUTHENTICATION_REQUIRED = 511,
+ NETWORK_AUTHENTICATION_REQUIRED = 511
}
export default HttpCode;
diff --git a/src/app/[orgId]/settings/(private)/billing/layout.tsx b/src/app/[orgId]/settings/(private)/billing/layout.tsx
index 538c7fde..e52f19ed 100644
--- a/src/app/[orgId]/settings/(private)/billing/layout.tsx
+++ b/src/app/[orgId]/settings/(private)/billing/layout.tsx
@@ -10,7 +10,7 @@ import { GetOrgUserResponse } from "@server/routers/user";
import { AxiosResponse } from "axios";
import { redirect } from "next/navigation";
import { cache } from "react";
-import { getTranslations } from 'next-intl/server';
+import { getTranslations } from "next-intl/server";
type BillingSettingsProps = {
children: React.ReactNode;
@@ -19,7 +19,7 @@ type BillingSettingsProps = {
export default async function BillingSettingsPage({
children,
- params,
+ params
}: BillingSettingsProps) {
const { orgId } = await params;
@@ -35,8 +35,8 @@ export default async function BillingSettingsPage({
const getOrgUser = cache(async () =>
        internal.get<AxiosResponse<GetOrgUserResponse>>(
`/org/${orgId}/user/${user.userId}`,
- await authCookieHeader(),
- ),
+ await authCookieHeader()
+ )
);
const res = await getOrgUser();
orgUser = res.data.data;
@@ -49,8 +49,8 @@ export default async function BillingSettingsPage({
const getOrg = cache(async () =>
        internal.get<AxiosResponse<GetOrgResponse>>(
`/org/${orgId}`,
- await authCookieHeader(),
- ),
+ await authCookieHeader()
+ )
);
const res = await getOrg();
org = res.data.data;
@@ -65,11 +65,11 @@ export default async function BillingSettingsPage({
- {children}
+ {children}
>
diff --git a/src/app/[orgId]/settings/(private)/idp/create/page.tsx b/src/app/[orgId]/settings/(private)/idp/create/page.tsx
index 8667abda..a899a2aa 100644
--- a/src/app/[orgId]/settings/(private)/idp/create/page.tsx
+++ b/src/app/[orgId]/settings/(private)/idp/create/page.tsx
@@ -64,10 +64,8 @@ export default function Page() {
clientSecret: z
.string()
.min(1, { message: t("idpClientSecretRequired") }),
- authUrl: z.url({ message: t("idpErrorAuthUrlInvalid") })
- .optional(),
- tokenUrl: z.url({ message: t("idpErrorTokenUrlInvalid") })
- .optional(),
+ authUrl: z.url({ message: t("idpErrorAuthUrlInvalid") }).optional(),
+ tokenUrl: z.url({ message: t("idpErrorTokenUrlInvalid") }).optional(),
identifierPath: z
.string()
.min(1, { message: t("idpPathRequired") })
@@ -379,9 +377,11 @@ export default function Page() {
>
{
form.setValue(
"autoProvision",
diff --git a/src/app/[orgId]/settings/(private)/remote-exit-nodes/ExitNodesDataTable.tsx b/src/app/[orgId]/settings/(private)/remote-exit-nodes/ExitNodesDataTable.tsx
index a1bb69c0..c12aa9ba 100644
--- a/src/app/[orgId]/settings/(private)/remote-exit-nodes/ExitNodesDataTable.tsx
+++ b/src/app/[orgId]/settings/(private)/remote-exit-nodes/ExitNodesDataTable.tsx
@@ -19,18 +19,17 @@ export function ExitNodesDataTable({
onRefresh,
isRefreshing
}: DataTableProps) {
-
const t = useTranslations();
return (
+