Restructuring Phase 3 (#492)

src/pages/use-cases/client-on-mikrotik-router.mdx (new file, 205 lines)
@@ -0,0 +1,205 @@

import {Note} from "@/components/mdx";

# NetBird client on MikroTik router

RouterOS is the operating system that powers MikroTik's physical routers, switches, and Cloud Hosted Routers (CHR).\
Container is MikroTik's implementation of Linux containers, added in RouterOS v7.4 as an extra package,
allowing users to run containerized environments within RouterOS.\
In this guide we'll deploy the NetBird client in a MikroTik container.

## Use cases

Running NetBird on a MikroTik router or CHR enables cost-effective remote access to RouterOS devices (and their networks)
without the need for additional hardware. In some use cases this can greatly simplify the setup and eliminate the need for additional infrastructure.

### Branch offices

Not all remote locations have a server room or a similar setup where they can deploy an additional machine to run NetBird, e.g. small shops or branch offices that barely have a network cabinet to fit a switch and a router.

Running NetBird directly on a router gives us remote access for basic network management and monitoring without having to maintain an additional machine as a NetBird router, or worse, using one of the business-critical NetBird clients as a router.

The idea is that all computers on the network would still run clients, and this container would only be used for infrastructure management, monitoring,
and maybe one or two small camera streams.

Note that container routing in RouterOS is currently very CPU-bound and is likely not good enough for massive file transfers, database connectivity, or proper camera streaming.

### Field routers

For companies with field teams operating in remote areas, such as construction sites, pop-up events, or field support for vehicles and industrial equipment, MikroTik routers provide reliable connectivity without depending on on-site infrastructure.

Team members would still run NetBird on computers and phones, but a separate IT or infra team needs to be able to remotely manage MikroTik devices to help with unexpected issues in the field. E.g., reconfigure the router to piggyback the entire network over the location's guest Wi-Fi, or quickly switch between Wi-Fi, backup cellular, or satellite, depending on the type of failure.

Traditionally, we would always have cellular modems in routers for minimal management connectivity and then run a CHR in a cloud VM. Those routers would all start VPN tunnels to the cloud VM so the IT team can connect to a router if needed. On top of that, we would need an additional NetBird router in the cloud to enable remote access from NetBird to that cloud router and NAT to remote devices.

Running NetBird directly on field routers removes much of this complexity: there's no longer a need for a CHR to serve as a VPN concentrator or for a dedicated VM to route NetBird clients to MikroTiks.

## Limitations

<Note>
Use at your own risk. All [RouterOS containers warnings](https://help.MikroTik.com/docs/display/ROS/Container#Container-Disclaimer) apply.\
This is unsupported by both MikroTik and NetBird because it uses MikroTik's beta and NetBird's legacy features.
</Note>

There are quite a few caveats to this approach because containers on RouterOS are still
a relatively new feature, provide slow throughput, and are CPU-bound. They are also very restrictive compared to
standard Kubernetes or Docker platforms, so NetBird can't take advantage of kernel modules or netfilter rules.\
Also, very few current MikroTik devices are optimized for running containers, so we should be careful when deploying this in production.

- Routing through RouterOS containers is relatively slow, CPU-intensive, and may overload smaller devices.
- NetBird in RouterOS containers cannot use an exit node (because it uses legacy routing mode).
- NetBird in RouterOS containers can't perform NAT, but it can do direct routing, and we can do NAT on RouterOS instead.

## Tested on

- Cloud Hosted Router (a.k.a. CHR, x86) v7.15.3, v7.16b7
- D53G-5HacD2HnD (a.k.a. Chateau, arm) v7.15.3, v7.16b7

## Step-by-step guide

### Prerequisites

1. RouterOS v7.5 or newer on a MikroTik router, physical machine, or [CHR on a virtual machine](https://help.MikroTik.com/docs/display/ROS/Cloud+Hosted+Router%2C+CHR)
2. [Enabled container mode](https://help.MikroTik.com/docs/display/ROS/Container#Container-EnableContainermode)
3. [Installed RouterOS container package](https://help.MikroTik.com/docs/display/ROS/Container#Container-Containeruseexample)
   from [extra packages](https://MikroTik.com/download)
4. Adequate storage, such as a good-quality USB thumb drive or external SSD.
   We should not put a container filesystem or container pull caches in the router's built-in flash storage.
   Normal container use could wear out the built-in storage's write cycles or fill up the disk space, thus bricking or even destroying the router.\
   If our device has plenty of RAM, we can use tmpfs for the container filesystem and image cache (see the sketch below), but that complicates the setup due to race conditions after reboot.
   Please check the RouterOS documentation and the MikroTik forum if you want to go that route.
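
For reference only: on recent RouterOS versions (v7.9 and newer) a RAM-backed disk can be created roughly as follows. The slot name and size here are illustrative; verify the exact syntax against the current RouterOS documentation before relying on it.

```shell
# create a RAM-backed (tmpfs) disk; its contents are lost on reboot
/disk/add type=tmpfs slot=tmpfs1 tmpfs-max-size=300M
```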

### Prepare RouterOS for container networking

These actions can be performed on RouterOS either via SSH, in the Terminal (via Winbox or the Web interface), or using the Winbox GUI.\
More information is available in [MikroTik's management tools documentation](https://help.mikrotik.com/docs/display/ROS/Management+tools).

Create a bridge interface for containers and a VETH interface for the NetBird container:
```shell
# virtual ethernet pair for the container; 172.17.0.1 will be the router-side gateway
/interface/veth/add name=netbird address=172.17.0.2/24 gateway=172.17.0.1
# dedicated bridge for containers, with the gateway address attached to it
/interface/bridge/add name=containers
/ip/address/add address=172.17.0.1/24 interface=containers
/interface/bridge/port add bridge=containers interface=netbird
```

Set up NAT for containers so they can access the internet and other networks:
```shell
/ip/firewall/nat/add chain=srcnat action=masquerade src-address=172.17.0.0/24
```

Because NetBird in RouterOS containers can't perform NAT, we'll want to add a route from the MikroTik to our NetBird subnet via the NetBird container.
This assumes our NetBird subnet is `100.80.0.0/16`.
```shell
/ip/route/add dst-address=100.80.0.0/16 gateway=172.17.0.2
```

We'll also want to add appropriate in, out, and forward rules, but those vary depending on the network setup, so they are not covered in detail by this guide.

We should also allow remote DNS queries from the container to the router's DNS server:
ensure the router's firewall rules block external access to DNS ports while allowing access to them from containers.
This is beyond the scope of this guide, though it is important, as we'll be setting the container's resolver to the router's IP address.
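
Purely as an illustration, assuming the container subnet from above, the `100.80.0.0/16` NetBird subnet, and RouterOS's default `WAN` interface list, rules along these lines could serve as a starting point (rule order matters, so keep the accepts above any drop rules):

```shell
# illustration only - adapt to your own firewall policy
# allow routed traffic between the NetBird subnet and the containers
/ip/firewall/filter/add chain=forward action=accept src-address=100.80.0.0/16 dst-address=172.17.0.0/24
/ip/firewall/filter/add chain=forward action=accept src-address=172.17.0.0/24 dst-address=100.80.0.0/16
# let the router answer DNS queries, but only from containers, not from the WAN side
/ip/dns/set allow-remote-requests=yes
/ip/firewall/filter/add chain=input action=accept protocol=udp dst-port=53 src-address=172.17.0.0/24
/ip/firewall/filter/add chain=input action=drop protocol=udp dst-port=53 in-interface-list=WAN
```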

Enable container logging in RouterOS and configure the DockerHub registry cache on the external disk.
This assumes that our USB drive is mounted as `/usb1`:
```shell
/system/logging add topics=container
/container/config/set registry-url=https://registry-1.docker.io tmpdir=/usb1/pull
```

### Prepare the NetBird container

Create a mount for NetBird's configuration:
```shell
/container/mounts/add name=netbird_etc src=disk1/etc dst=/etc/netbird
```
Note that we placed `/etc/netbird` on the router's built-in flash. This is because we don't want someone stealing the USB drive
and getting access to the router's private keys. This configuration rarely changes, so it's OK to keep it there.

Create an environment variable list for the container:
```shell
/container envs
add key=NB_SETUP_KEY name=netbird value=YOUR_NETBIRD_SETUP_KEY
add key=NB_NAME name=netbird value=CONTAINER_HOSTNAME
add key=NB_HOSTNAME name=netbird value=CONTAINER_HOSTNAME
add key=NB_LOG_LEVEL name=netbird value=info
add key=NB_DISABLE_CUSTOM_ROUTING name=netbird value=true
add key=NB_USE_LEGACY_ROUTING name=netbird value=true
```
We had to set `NB_DISABLE_CUSTOM_ROUTING` and `NB_USE_LEGACY_ROUTING` because RouterOS containers don't allow access to the netfilter kernel module.
We also set `NB_NAME` and `NB_HOSTNAME` (the name the peer reports to NetBird) to match our router's identity, as seen in `/system/identity/print`,
because RouterOS won't allow us to set the container's own hostname to the same value as the router's hostname.

If using a self-hosted NetBird server, we'll also want to add the correct URLs for our server:
```shell
add key=NB_MANAGEMENT_URL name=netbird value=YOUR_NETBIRD_MANAGEMENT_URL
add key=NB_ADMIN_URL name=netbird value=YOUR_NETBIRD_ADMIN_URL
```

Create the container and trigger the image pull from DockerHub:
```shell
/container/add remote-image=netbirdio/netbird interface=netbird root-dir=/usb1/netbird_filesystem mounts=netbird_etc envlist=netbird dns=10.71.71.1 hostname=netbird logging=yes
```
Note that we had to set the container's hostname to something other than the router's identity because RouterOS doesn't allow hostname conflicts.\
We have also set the container's DNS resolver to the router's DNS server. Feel free to tweak this if needed.

Our container is now ready, and the image pull from DockerHub should have been triggered. We can check the RouterOS logs to see if the pull was successful, and we should see that RouterOS created our image cache directory in `/usb1/pull`.
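
To follow the pull's progress, the container log topic we enabled earlier can be inspected at any time:

```shell
/log/print without-paging where topics~"container"
```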

### Start the container

We can verify that the container is created by running:
```shell
/container print
```

We can now start it using the appropriate number from the `print` command:
```shell
/container start number=0
```

At this point we should see the container in our NetBird dashboard, and we should be able to create routes through it in NetBird.
Via the NetBird dashboard, create a route through the container to the router's bridge IP address.
The address will be `172.17.0.1/32` and the routing peer will be our container. Don't forget to disable NAT on this route.
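
Once the route is in place, we can sanity-check connectivity from RouterOS by pinging a peer in the NetBird subnet through the container (the address below is just a placeholder for one of your peers):

```shell
/ping 100.80.0.5 count=3
```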

## Troubleshooting

1. Increase NetBird's verbosity by setting the `NB_LOG_LEVEL` env var to `trace`.
2. Check the logs to see what's going on:
```shell
/log/print without-paging where topics~"container"
```
3. In firewall rules, enable logging for any drop/reject rules to see if packets are being dropped.

### Get a shell in the container

Assuming that our container keeps stopping because NetBird is crashing, we can override the container entrypoint
to get a shell in the container and investigate.
Setting the entrypoint to `sleep 600` gives us 10 minutes to investigate before the container stops.
```shell
/container/set entrypoint="sleep 600" numbers=0
/container/shell number=0
```
When done, revert the entrypoint back to NetBird:
```shell
/container/set entrypoint="" numbers=0
```

### NetBird starts and logs into the management server but it doesn't show up as online

Log shows something like this:
```
DEBG client/internal/login.go:93: connecting to the Management service https://api.netbird.io:443
DEBG client/internal/login.go:63: connected to the Management service https://api.netbird.io:443
DEBG client/internal/login.go:93: connecting to the Management service https://api.netbird.io:443
DEBG client/internal/login.go:63: connected to the Management service https://api.netbird.io:443
INFO client/internal/connect.go:119: starting NetBird client version 0.28.6 on linux/amd64
DEBG client/internal/connect.go:180: connecting to the Management service api.netbird.io:443
DEBG client/internal/connect.go:188: connected to the Management service api.netbird.io:443
DEBG signal/client/grpc.go:81: connected to Signal Service: signal.netbird.io:443
INFO iface/tun_usp_unix.go:33: using userspace bind mode
DEBG client/internal/routemanager/sysctl/sysctl_linux.go:86: Set sysctl net.ipv4.conf.all.src_valid_mark from 0 to 1
ERROR client/internal/routemanager/systemops/systemops_linux.go:100: Error setting up sysctl: 1 errors occurred:
	* read sysctl net.ipv4.conf.eth0.rp_filter: open /proc/sys/net/ipv4/conf/eth0/rp_filter: no such file or directory
INFO client/internal/routemanager/manager.go:135: Routing setup complete
INFO iface/tun_usp_unix.go:48: create tun interface
DEBG iface/tun_link_linux.go:113: adding address 100.80.100.176/16 to interface: wt0
DEBG iface/wg_configurer_usp.go:39: adding Wireguard private key
INFO client/firewall/create_linux.go:58: no firewall manager found, trying to use userspace packet filtering firewall
DEBG iface/tun_usp_unix.go:95: device is ready to use: wt0
INFO client/internal/dns/host_unix.go:68: System DNS manager discovered: file
DEBG signal/client/grpc.go:126: signal connection state READY
WARN signal/client/grpc.go:141: disconnected from the Signal Exchange due to an error: didn't receive a registration header from the Signal server while connecting to the streams
DEBG signal/client/grpc.go:126: signal connection state IDLE
ERROR util/grpc/dialer.go:38: Failed to dial: dial: dial tcp: lookup signal.netbird.io on 172.17.0.1:53: read udp 172.17.0.2:34638->172.17.0.1:53: i/o timeout
```
Solution: double-check the environment variables:
```
NB_DISABLE_CUSTOM_ROUTING=true
NB_USE_LEGACY_ROUTING=true
```
@@ -98,7 +98,7 @@ helm install --create-namespace -f values.yaml -n netbird netbird-operator netbi

**Expose Kubernetes Control Plane to your NetBird Network**

-To access your Kubernetes control plane from a NetBird network, you can expose your Kubernetes control plane as a [**NetBird resource**](https://docs.netbird.io/how-to/networks#resources) by enabling the following option in the operator values:
+To access your Kubernetes control plane from a NetBird network, you can expose your Kubernetes control plane as a [**NetBird resource**](https://docs.netbird.io/manage/networks#resources) by enabling the following option in the operator values:

```jsx
ingres:

@@ -141,7 +141,7 @@ kubectl -n argocd annotate svc/argocd-server netbird.io/expose="true" netbird.

Next we will enable sidecars. **Why Sidecars?** The application controller needs to make API calls to remote MicroK8s clusters. The sidecar provides transparent network access to those clusters through the NetBird mesh.

-To enable sidecar functionality in your deployments, you first need to generate a setup key, either via the UI (enable the **Ephemeral Peers** options) or by following [**this guide**](https://docs.netbird.io/how-to/register-machines-using-setup-keys) for more details on setup keys. We will inject side-cars to ArgoCD application controller so it can communicate with remote MicroK8s clusters.
+To enable sidecar functionality in your deployments, you first need to generate a setup key, either via the UI (enable the **Ephemeral Peers** options) or by following [**this guide**](https://docs.netbird.io/manage/peers/register-machines-using-setup-keys) for more details on setup keys. We will inject side-cars to ArgoCD application controller so it can communicate with remote MicroK8s clusters.

Note: We recommend checking out the section of our [Kubernetes Operator docs on using sidecars](https://docs.netbird.io/how-to/kubernetes-operator#accessing-remote-services-using-sidecars) for more context and detail.

src/pages/use-cases/examples.mdx (new file, 115 lines)
@@ -0,0 +1,115 @@

export const title = 'Examples'

## NetBird Client on AWS ECS (Terraform)

<p>
    <img src="/docs-static/img/use-cases/examples/wiretrustee-on-aws-ecs.png" alt="high-level-dia" width="400"/>
</p>

A common way to run containers in the AWS cloud is to use Elastic Container Service (ECS).
ECS is a fully managed container orchestration service that makes it easy to deploy, manage, and scale containerized applications.

It is common best practice to run this infrastructure behind security guardrails like strict security groups and private subnets.

Also, a routine task for many system administrators and developers is to connect to the servers that run their company's software in order to troubleshoot, validate output, and even install dependencies.
If your systems run in a private network, you have a few options to allow communication to hosts in that network:
* Add a [bastion host](https://en.wikipedia.org/wiki/Bastion_host) or [jump server](https://en.wikipedia.org/wiki/Jump_server).
* Connect a [site-to-site](https://en.wikipedia.org/wiki/Virtual_private_network#Types) VPN.
* [Remote access](https://en.wikipedia.org/wiki/Virtual_private_network#Types) VPN.
* Allow specific IP addresses in the server's security group.

All these options are valid and have proven to work over the years, but they come with costs that you start to deal with in the short to mid term:
* Complex implementation.
* Fragile firewall configuration.
* Yet another server to secure and maintain.

**In this example, we will run the NetBird client as a daemon service in ECS, deployed with Terraform.**

This allows you to:

* Run NetBird as a native ECS service; you can manage and maintain it the same way as your other services.
* Connect to EC2 instances running in private subnets without the need to open firewall rules or configure bastion servers.
* Access other services connected to your NetBird network and running anywhere.
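
For orientation, the core of such a deployment boils down to an ECS task definition for the NetBird client plus a service using the `DAEMON` scheduling strategy. The sketch below is illustrative only; the resource names, the cluster reference, and the `var.wt_setup_key` wiring are assumptions, and the example repository remains the authoritative source:

```hcl
# Illustrative sketch - see the example repository for the full, working configuration.
resource "aws_ecs_task_definition" "netbird" {
  family       = "netbird-client"
  network_mode = "host" # share the EC2 instance's network stack

  container_definitions = jsonencode([{
    name        = "netbird"
    image       = "netbirdio/netbird:latest"
    essential   = true
    memory      = 128
    environment = [{ name = "NB_SETUP_KEY", value = var.wt_setup_key }]
    # capabilities the NetBird client needs to manage its WireGuard interface
    linuxParameters = { capabilities = { add = ["NET_ADMIN", "SYS_ADMIN", "SYS_RESOURCE"] } }
  }])
}

resource "aws_ecs_service" "netbird" {
  name                = "netbird"
  cluster             = aws_ecs_cluster.this.id # assumed cluster resource
  task_definition     = aws_ecs_task_definition.netbird.arn
  scheduling_strategy = "DAEMON" # one NetBird task on every container instance
}
```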

### Requirements
* Terraform > 1.0.
* A NetBird account with a Setup Key.
* Another NetBird client in your network to validate the connection (possibly your laptop or the machine you are running this example on).
* The [AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html) installed.
* An [AWS account](https://aws.amazon.com/free/).
* Your AWS credentials. You can [create a new Access Key on this page](https://console.aws.amazon.com/iam/home?#/security_credentials).

### Notice
> Before getting started with this example, be aware that creating the resources from it may incur charges from AWS.

### Getting started

Clone this repository, then download and install Terraform following the guide [here](https://learn.hashicorp.com/tutorials/terraform/install-cli?in=terraform/aws-get-started).

Log in to https://app.netbird.io and [add your machine as a peer](https://app.netbird.io/add-peer). Once you are done with the steps described there, copy your [Setup key](https://app.netbird.io/setup-keys).

Using a text editor, edit the [variables.tf](https://github.com/wiretrustee/wiretrustee-examples/tree/master/ecs-client-daemon/variables.tf) file and update the `wt_setup_key` variable with your setup key. Also, make sure that the `ssh_public_key_path` variable points to the correct public key path. If necessary, update the remaining variables according to your requirements and their descriptions.

Before continuing, you may also update [provider.tf](https://github.com/wiretrustee/wiretrustee-examples/tree/master/ecs-client-daemon/provider.tf) to configure the proper AWS region and default tags.

#### Creating the resources with Terraform
Follow the steps below to run Terraform and create your test environment:

1. From the root of the cloned repository, enter the ecs-client-daemon folder and run `terraform init` to download the modules and providers used in this example.
```shell
cd ecs-client-daemon
terraform init
```
2. Run `terraform plan` to get the estimated changes:
```shell
terraform plan -out plan.tf
```
3. Run `terraform apply` to create your infrastructure:
```shell
terraform apply plan.tf
```

#### Validating the deployment
After a few minutes, the autoscaling group will launch an EC2 instance, where you will find NetBird's ECS daemon service running. With that, we can go to our [NetBird dashboard](https://app.netbird.io), pick the IP of the node that is running NetBird, and connect to the node via SSH. On Unix-like systems:
```shell
ssh ec2-user@100.64.0.200
```
Once you've logged in, you should be able to see the running containers by using the docker command:
```shell
sudo docker ps
```

#### Deleting the infrastructure resources used in the example
Once you are done validating the example, you can remove the resources with the following steps:
1. Run `terraform plan` with the flag `-destroy`:
```shell
terraform plan -out plan.tf -destroy
```
2. Then execute the apply command:
```shell
terraform apply plan.tf
```

## NetBird Client in Docker

One of the simplest ways of running the NetBird client application is to use a pre-built [Docker image](https://hub.docker.com/r/netbirdio/netbird).

**Prerequisites:**
* **Docker installed.**
If you don't have Docker installed, please refer to the installation guide on the official [Docker website](https://docs.docker.com/get-docker/).
* **NetBird account.**
Register one at [app.netbird.io](https://app.netbird.io/).

You will need to obtain a [setup key](/manage/peers/register-machines-using-setup-keys) to associate the NetBird client with your account.

The setup key can be found in the NetBird Management dashboard under the Setup Keys tab - [https://app.netbird.io/setup-keys](https://app.netbird.io/setup-keys).

Set the `NB_SETUP_KEY` environment variable and run the command:

```bash
docker run --rm --name PEER_NAME --hostname PEER_NAME --cap-add=NET_ADMIN --cap-add=SYS_ADMIN --cap-add=SYS_RESOURCE -d -e NB_SETUP_KEY=<SETUP KEY> -v netbird-client:/var/lib/netbird netbirdio/netbird:latest
```
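
To confirm the client registered successfully, you can tail the container's logs using the name chosen above:

```bash
docker logs -f PEER_NAME
```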

That is it! Enjoy using NetBird.

If you would like to learn how to run the NetBird Client as an ECS agent on AWS, please refer to [this guide](#net-bird-client-on-aws-ecs-terraform).

src/pages/use-cases/netbird-on-faas.mdx (new file, 56 lines)
@@ -0,0 +1,56 @@

import {Note} from "@/components/mdx";

# Running NetBird on serverless environments (FaaS)

Function as a Service (FaaS) is a cloud computing model where developers deploy small, specific-purpose code functions, managed by a cloud provider.
FaaS environments, however, impose restrictions like limited access to the system's root, kernel, and network stack; these restrictions are crucial for security in shared cloud infrastructure.

Since [v0.25.3](https://github.com/netbirdio/netbird/releases), NetBird enables secure connectivity and access from serverless functions like AWS Lambda and Azure Functions to cloud or on-premises servers,
containers, databases, and other internal resources. NetBird has adapted to the constraints of FaaS environments by leveraging netstack from
the [gVisor](https://github.com/google/gvisor) Go package, which is part of [Wireguard-go](https://github.com/netbirdio/wireguard-go),
enabling the WireGuard stack to run entirely in userspace. This approach circumvents the typical need for network or kernel-level access.

## How to enable netstack mode?
You can enable netstack mode for the NetBird client using environment variables:

`NB_USE_NETSTACK_MODE`: Set to true to enable netstack mode. (Default: false)
`NB_SOCKS5_LISTENER_PORT`: Set the port where the SOCKS5 proxy listens. (Default: 1080)

With these variables, NetBird will launch a SOCKS5 proxy that you can use to connect to your internal resources.

<Note>
The DNS feature is not supported. You can reach the peers by IP address only.
</Note>

### Running locally
```bash
export NB_USE_NETSTACK_MODE=true
export NB_SOCKS5_LISTENER_PORT=30000
netbird up -F
```

### Docker
Some container environments can be restricted as well. For example, Docker containers are not allowed to create new VPN interfaces by default. For that reason, you can run the NetBird agent in a standard, unprivileged container by enabling netstack mode:
```bash
docker run --rm --name PEER_NAME --hostname PEER_NAME -d \
  -e NB_SETUP_KEY=<SETUP KEY> -e NB_USE_NETSTACK_MODE=true -e NB_SOCKS5_LISTENER_PORT=1080 -v netbird-client:/var/lib/netbird netbirdio/netbird:latest
```
This is useful when you want to configure a simple routing peer without adding privileged permissions or Linux capabilities.

## How to use the SOCKS5 proxy?
Once you have the agent running in netstack mode, you need to configure your application to use the SOCKS5 proxy. The following is an example of a Python 3 application:
```python
import socks  # provided by the PySocks package
import socket
import os

def example():
    # route all sockets created by this process through NetBird's SOCKS5 proxy
    socks.set_default_proxy(socks.SOCKS5, "127.0.0.1", int(os.getenv('NB_SOCKS5_LISTENER_PORT', '1080')))
    socket.socket = socks.socksocket
    # rest of the code...
```
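
If your function already uses a library with built-in proxy support, monkey-patching the socket module isn't necessary. As a sketch, with the `requests` library (installed with its SOCKS extra: `pip install requests[socks]`) the proxy can be passed per call; the peer address and endpoint below are placeholders:

```python
import os
import requests

# NetBird peers are reached by IP (DNS is not supported in netstack mode)
proxy = f"socks5://127.0.0.1:{os.getenv('NB_SOCKS5_LISTENER_PORT', '1080')}"
response = requests.get("http://100.64.0.10:8080/health", proxies={"http": proxy, "https": proxy})
print(response.status_code)
```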

## How to use NetBird in FaaS environments?
Cloud providers like AWS and Azure allow you to configure custom runtime environments for their function services; in AWS this is called Lambda Layers,
and in Azure, it's called containerized Azure Functions.

There are many ways that you can configure these environments with NetBird's client binary. We have created a simple example using containerized Azure Functions,
which you can find in the [Azure Functions Python DB access example](https://github.com/netbirdio/azure-functions-python-db-access).

src/pages/use-cases/routing-peers-and-kubernetes.mdx (new file, 148 lines)
@@ -0,0 +1,148 @@

import {Note} from "@/components/mdx";

# Deploy routing peers to a Kubernetes cluster
This guide provides instructions on how to use the NetBird agent within a Kubernetes cluster to establish secure, peer-to-peer
networking between your Kubernetes pods and external services or other clusters.

## Prerequisites
- Access to a Kubernetes cluster
- Kubernetes CLI (kubectl) installed and configured
- Access to the NetBird management dashboard

## Use Case Scenario
Imagine you're running a multi-cloud Kubernetes environment where your application components are distributed across
different cloud providers, including on-premises Kubernetes clusters. Your goal is to securely access your Kubernetes services
from hosts running on Hetzner without exposing them to the public internet.

## Step-by-Step guide
### Step 1: Create a setup key
Navigate to Setup Keys in the NetBird management dashboard and click on "Create setup key".

Choose a name, e.g. `Kubernetes routing peers`, mark the key as `reusable`, and enable `Ephemeral peers`. This option is
ideal for stateless workloads like containers, where peers that are offline for over 10 minutes are automatically removed.

Create or add a group called `kubernetes-routers` to the `Auto-assigned groups` list. This designation can be adjusted to
suit your needs.

See the screenshot below for reference:
<p>
    <img src="/docs-static/img/use-cases/routing-peers-and-kubernetes/k8s-create-setup-key.png" alt="k8s-create-setup-key" width="400" className="imagewrapper"/>
</p>

With your setup key created, note it down for the next steps.

### Step 2: Add a network route
Navigate to Network Routes in the NetBird management dashboard and click on `Add Route`.

Set your Kubernetes pod range as the destination network, and select the `Peer group` option, choosing the
`kubernetes-routers` group. This configuration allows for scaling pods as necessary within your Kubernetes cluster.

Set the distribution group to `hetzner-servers`. This group is used to distribute the route to all servers in the group.

See the screenshot below for reference:
<p>
    <img src="/docs-static/img/use-cases/routing-peers-and-kubernetes/k8s-add-network-route.png" alt="k8s-add-network-route" width="400" className="imagewrapper"/>
</p>

Click on Name & Description to give your route a name and description. Then click on `Add Route` to save your changes.
<p>
    <img src="/docs-static/img/use-cases/routing-peers-and-kubernetes/k8s-name-network-route.png" alt="k8s-name-network-route" width="400" className="imagewrapper"/>
</p>

### Step 3: Create an access control policy
Navigate to Access Control Policies in the NetBird management dashboard and click on `Add Policy`.

Set the source group to `hetzner-servers` and the destination group to `kubernetes-routers`. This configuration allows
the Hetzner servers to access the Kubernetes pods.
<p>
    <img src="/docs-static/img/use-cases/routing-peers-and-kubernetes/k8s-add-access-control-policy.png" alt="k8s-add-access-control-policy" width="400" className="imagewrapper"/>
</p>

Click on Name & Description to give your policy a name and description. Then click on `Add Policy` to save your changes.
<p>
    <img src="/docs-static/img/use-cases/routing-peers-and-kubernetes/k8s-name-access-control-policy.png" alt="k8s-name-access-control-policy" width="400" className="imagewrapper"/>
</p>

### Step 4: Deploy the NetBird agent
You can deploy the NetBird agent using a DaemonSet or a Deployment. Below is an example of a Deployment configuration with 1 replica.

```yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: netbird
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: netbird
  template:
    metadata:
      labels:
        app: netbird
    spec:
      containers:
        - name: netbird
          image: netbirdio/netbird:latest
          env:
            - name: NB_SETUP_KEY
              value: "0000000000-0000-0000-0000-0000000000" # replace with your setup key
            - name: NB_HOSTNAME
              value: "netbird-k8s-router" # name that will appear in the management UI
            - name: NB_LOG_LEVEL
              value: "info"
          securityContext:
            capabilities:
              add:
                - NET_ADMIN
                - SYS_RESOURCE
                - SYS_ADMIN
```

Edit your deployment.yml file, incorporating the setup key into the relevant sections.

Apply the updated deployment file to your Kubernetes cluster using the following command:
```shell
kubectl apply -f deployment.yml
```

<Note>
In this example the setup key is passed as an environment variable. You should use a Secret to pass the setup key.
</Note>
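
As a sketch of that approach (the Secret and key names here are illustrative), create the Secret first:

```shell
kubectl create secret generic netbird-setup-key --from-literal=setup-key=<YOUR_SETUP_KEY>
```

and then reference it from the Deployment instead of the inline value:

```yaml
env:
  - name: NB_SETUP_KEY
    valueFrom:
      secretKeyRef:
        name: netbird-setup-key # illustrative Secret name
        key: setup-key
```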

### Step 5: Make the deployment highly available
NetBird network routes support multiple routing peers running in a fail-over mode, where one routing peer is selected
as the gateway for a network, and when this peer becomes unavailable, another routing peer is selected for the role, providing a
highly available network route.

To make the deployment highly available, you can increase the number of replicas in the deployment configuration to 3 or more.

```yaml
---
...
spec:
  replicas: 3
...
```
Apply the updated deployment file to your Kubernetes cluster using the following command:
```shell
kubectl apply -f deployment.yml
```
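
Alternatively, the running Deployment can be scaled directly without editing the file:

```shell
kubectl scale deployment netbird --replicas=3
```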

### Step 6: Verify the deployment
After deploying the NetBird agent, you can verify that the agent is running by checking the logs of the pods.

```shell
kubectl logs -l app=netbird
```
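
You can also confirm that all replicas are up and ready:

```shell
kubectl get pods -l app=netbird
```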

You can also verify that the agent is connected by checking the NetBird management dashboard.
<p>
    <img src="/docs-static/img/use-cases/routing-peers-and-kubernetes/k8s-netbird-agent-connected.png" alt="k8s-netbird-agent-connected" className="imagewrapper-big"/>
</p>

## Conclusion
By following these steps, you've successfully integrated NetBird within your Kubernetes cluster, enabling secure,
peer-to-peer networking between your Kubernetes pods and external services. This setup is particularly beneficial for
hybrid, multi-cloud environments and remote access, ensuring seamless connectivity and security across your infrastructure.