Install Inlets Uplink¶
Inlets Uplink requires a Kubernetes cluster, and an inlets uplink subscription.
The installation is performed through a Helm chart (inlets-uplink-provider), which is published as an OCI artifact in a container registry.
The default installation keeps tunneled services private, with only the control-plane exposed to the public Internet. To expose the data-plane for one or more tunnels, after you've completed the installation, see the page Expose tunnels.
Before you start¶
Before you start, you'll need the following:
- A Kubernetes cluster where you can create a LoadBalancer, i.e. a managed Kubernetes service like AWS EKS, Azure AKS, Google GKE, etc.
- A domain name clients can use to connect to the tunnel control plane.
- An inlets uplink license (an inlets-pro license cannot be used)
- Optional: arkade - a tool for installing popular Kubernetes tools
To install arkade run:
curl -sSLf https://get.arkade.dev/ | sudo sh
You can obtain a subscription for inlets uplink here: inlets uplink plans.
Create a Kubernetes cluster¶
We recommend creating a Kubernetes cluster with a minimum of three nodes. Each node should have a minimum of 2GB of RAM and 2 CPU cores.
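As a sketch, a cluster meeting these requirements could be created on AWS EKS with eksctl. The cluster name, region and instance type below are illustrative, not prescriptive; any managed Kubernetes service with LoadBalancer support will do:

```shell
# Illustrative only: t3.medium gives 2 vCPU / 4GB RAM,
# which meets the per-node minimum stated above.
eksctl create cluster \
  --name inlets-uplink \
  --region eu-west-1 \
  --nodes 3 \
  --node-type t3.medium
```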
Install cert-manager¶
Install cert-manager, which is used to manage TLS certificates for the inlets-uplink control-plane and the REST API.
You can use Helm, or arkade:
helm install \
cert-manager oci://quay.io/jetstack/charts/cert-manager \
--namespace cert-manager \
--create-namespace \
--set crds.enabled=true
Create a namespace for the chart and add the license secret¶
Make sure to create the target namespace for your installation first.
kubectl create namespace inlets
Create the required secret with your inlets-uplink license.
Check whether your license key is in lower-case
There is a known issue with LemonSqueezy where the UI will copy the license key in lower-case, it needs to be converted to upper-case before being used with Inlets Uplink.
Convert the license to upper-case, if it's in lower-case:
(
mv $HOME/.inlets/LICENSE_UPLINK{,.lower}
cat $HOME/.inlets/LICENSE_UPLINK.lower | tr '[:lower:]' '[:upper:]' > $HOME/.inlets/LICENSE_UPLINK
rm $HOME/.inlets/LICENSE_UPLINK.lower
)
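Before converting, you can check whether the key actually needs it. This snippet is a convenience check, not part of the official setup:

```shell
# Print a warning if the license key contains any lower-case letters
if grep -q '[a-z]' "$HOME/.inlets/LICENSE_UPLINK"; then
  echo "License key contains lower-case characters - convert it before use"
else
  echo "License key looks OK"
fi
```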
Create the secret for the license:
kubectl create secret generic \
-n inlets inlets-uplink-license \
--from-file license=$HOME/.inlets/LICENSE_UPLINK
Set up Ingress for the control-plane¶
Tunnel clients will connect to the client-router component which needs to be exposed via Ingress.
You can use Kubernetes Ingress or Istio. We recommend using Ingress (Option A), unless your team or organisation is already using Istio (Option B).
A) Install with Kubernetes Ingress¶
We recommend Traefik for Ingress, and have finely tuned the configuration to work well for the underlying websocket for inlets. If your organisation uses a different Ingress Controller, you can alter the class fields in the chart.
NGINX Ingress Controller Retirement
The Kubernetes NGINX Ingress Controller project has announced its retirement in March 2026 and will no longer receive updates or security patches.
The uplink chart version 0.5.0 changes the default ingress class from Nginx to Traefik. To upgrade to the latest uplink while keeping NGINX ingress see the Ingress NGINX section for legacy configuration options.
Install traefik with Helm:
helm repo add traefik https://traefik.github.io/charts
helm repo update
helm install traefik traefik/traefik \
--namespace=traefik \
--create-namespace
See also: Traefik installation
Create a values.yaml file for the inlets-uplink-provider chart:
ingress:
class: "traefik"
issuer:
# When set, a production issuer will be generated for you
# to use a pre-existing issuer, set issuer.enabled=false
enabled: true
clientRouter:
# Customer tunnels will connect with a URI of:
# wss://uplink.example.com/namespace/tunnel
domain: uplink.example.com
tls:
issuerName: letsencrypt-prod
ingress:
enabled: true
Make sure to replace the domain with your actual domain name.
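To point that domain at your cluster, create a DNS A (or CNAME) record for the Traefik LoadBalancer. The service name and namespace below match the Helm install above; providers that hand out hostnames instead of IPs populate `.hostname` rather than `.ip`:

```shell
# Look up the external address of the Traefik LoadBalancer
kubectl get svc -n traefik traefik \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'

# Then create an A record: uplink.example.com -> <that IP>
```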
Optionally, you can add rate limiting to the client-router Ingress using Traefik Middleware. See Traefik rate limiting for details.
Want to use the staging issuer for testing?
To use the Let's Encrypt staging issuer, pre-create your own issuer, update clientRouter.tls.issuerName with the name you have chosen, and then update clientRouter.tls.issuer.enabled and set it to false.
B) Install with Istio¶
We have added support in the inlets-uplink chart for Istio to make it as simple as possible to configure with an HTTP01 challenge.
If you don't have Istio setup already you can deploy it with arkade.
arkade install istio
Label the inlets namespace so that Istio can inject its sidecars:
kubectl label namespace inlets \
istio-injection=enabled --overwrite
Create a values.yaml file for the inlets-uplink chart:
ingress:
issuer:
# When set, a production issuer will be generated for you
# to use a pre-existing issuer, set issuer.enabled=false
enabled: true
class: "istio"
clientRouter:
# Customer tunnels will connect with a URI of:
# wss://uplink.example.com/namespace/tunnel
domain: uplink.example.com
tls:
issuerName: letsencrypt-prod
istio:
enabled: true
Make sure to replace the domain with your actual domain name.
Deploy with Helm¶
The chart is served through a container registry (OCI), not GitHub pages
Many Helm charts are served over GitHub pages, from a public repository, making it easy to browse and read the source code. We are using an OCI artifact in a container registry, which makes for a more modern alternative. If you want to browse the source, you can simply run helm template instead of helm upgrade.
Unauthorized?
The chart artifacts are public and do not require authentication, however if you run into an "Access denied" or authorization error when interacting with ghcr.io, try running helm registry login ghcr.io to refresh your credentials, or docker logout ghcr.io to clear stale credentials.
The Helm chart is called inlets-uplink-provider, you can deploy it using the custom values.yaml file created above:
helm upgrade --install inlets-uplink \
oci://ghcr.io/openfaasltd/inlets-uplink-provider \
--namespace inlets \
--values ./values.yaml
If you want to pin the version of the Helm chart, you can do so with the --version flag.
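For example, a pinned install looks like this. The version shown is illustrative (0.5.0 is mentioned elsewhere on this page); list the real tags with crane before pinning:

```shell
helm upgrade --install inlets-uplink \
  oci://ghcr.io/openfaasltd/inlets-uplink-provider \
  --namespace inlets \
  --values ./values.yaml \
  --version 0.5.0
```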
Where can I see the various options for values.yaml?
All of the various options for the Helm chart are documented in the configuration reference.
How can I view the source code?
See the note on helm template under the configuration reference.
How can I find the latest version of the chart?
If you omit a version, Helm will use the latest published OCI artifact; however, if you do want to pin it, you can browse all versions of the Helm chart on GitHub.
As an alternative to using ghcr.io's UI, you can get the list of tags, including the latest tag via the crane CLI:
arkade get crane
# List versions
crane ls ghcr.io/openfaasltd/inlets-uplink-provider
# Get the latest version
LATEST=$(crane ls ghcr.io/openfaasltd/inlets-uplink-provider | tail -n 1)
echo $LATEST
Verify the installation¶
Once you've installed inlets-uplink, you can verify it is deployed correctly by checking the inlets namespace for running pods:
$ kubectl get pods --namespace inlets
NAME READY STATUS RESTARTS AGE
client-router-b5857cf6f-7vrdh 1/1 Running 0 92s
prometheus-74d8d7db9b-2hptm 1/1 Running 0 16s
uplink-operator-7fccc9bdbc-twd2q 1/1 Running 0 92s
You should see the client-router and uplink-operator in a Running state.
If you installed inlets-uplink with Kubernetes ingress, you can verify that ingress for the client-router is setup and that a TLS certificate is issued for your domain using these two commands:
$ kubectl get -n inlets ingress/client-router
NAME CLASS HOSTS ADDRESS PORTS AGE
client-router traefik uplink.example.com 188.166.194.102 80, 443 31m
$ kubectl get -n inlets cert/client-router-cert
NAME READY SECRET AGE
client-router-cert True client-router-cert 30m
Setup the REST API¶
The REST API for Uplink is enabled by default and accessible on the same domain as the client-router under the /v1 path prefix. For example, if your client-router domain is uplink.example.com, the API will be available at https://uplink.example.com/v1.
See the REST API reference to learn how to invoke the API and for a full list of endpoints.
Optionally, the client-api can be exposed on a separate domain.
Access token¶
An access token is generated by Helm and stored as a Kubernetes secret during installation. This token can be used to authenticate with the API.
If you need to create the token manually, for example to use a specific value, you can do so before installing the chart:
# Generate a new access token
export token=$(openssl rand -base64 32 | tr -d '\n')
echo -n $token > $HOME/.inlets/client-api
# Store the access token in a secret in the inlets namespace.
kubectl create secret generic \
client-api-token \
-n inlets \
--from-file client-api-token=$HOME/.inlets/client-api
If the secret already exists, Helm will use the existing token instead of generating a new one.
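As a sketch, you can read the generated token back out of the cluster and call the API with it. The /v1/tunnels path here is an assumption for illustration - consult the REST API reference for the actual endpoints:

```shell
# Extract the access token from the Kubernetes secret
TOKEN=$(kubectl get secret -n inlets client-api-token \
  -o jsonpath='{.data.client-api-token}' | base64 -d)

# Call the Uplink REST API (endpoint path is illustrative)
curl -s -H "Authorization: Bearer $TOKEN" \
  https://uplink.example.com/v1/tunnels
```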
See the OAuth configuration section for instructions on how to enable OAuth.
Configure OAuth¶
You can configure any OpenID Connect (OIDC) compatible identity provider for use with Inlets Uplink.
- Register a new client (application) for Inlets Uplink with your identity provider.
- Enable the required authentication flows. The Client Credentials flow is ideal for server-to-server interactions where there is no direct user involvement. This is the flow we recommend and use in our examples; any other authentication flow can be picked depending on your use case.
- Configure the Client API. Update your values.yaml file and add the following parameters to the clientApi section:
clientApi:
  # OIDC provider URL.
  issuerURL: "https://myprovider.example.com"
  # The audience is generally the same as the value of the domain field, however
  # some issuers like Keycloak make the audience the client_id of the application/client.
  audience: "uplink.example.com"
The issuerURL needs to be set to the URL of your provider, e.g. https://accounts.google.com for Google or https://example.eu.auth0.com/ for Auth0.
The audience is usually the client API's public URL, although for some providers it can also be the client ID.
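For reference, a generic Client Credentials token request looks like this. The token endpoint path, client ID and secret are placeholders, and the exact parameters vary by provider (for example, Auth0 expects an audience parameter, while Keycloak derives it from the client):

```shell
# Request an access token using the Client Credentials flow.
# All values below are placeholders for your own provider.
curl -s -X POST https://myprovider.example.com/oauth/token \
  -H "Content-Type: application/x-www-form-urlencoded" \
  -d grant_type=client_credentials \
  -d client_id=YOUR_CLIENT_ID \
  -d client_secret=YOUR_CLIENT_SECRET \
  -d audience=uplink.example.com
```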
Configure a separate API domain¶
By default, the client-api is exposed on the same domain as the client-router under the /v1 path prefix. If you prefer to use a separate domain, set the clientApi.domain field in your values.yaml file and enable the dedicated ingress:
clientApi:
# Use a dedicated domain for the client API
domain: clientapi.example.com
tls:
ingress:
enabled: true
When a dedicated domain is set and clientApi.tls.ingress.enabled is true, a separate Ingress resource is created for the client-api.
Download the tunnel CLI¶
We provide a CLI to help you create and manage tunnels. It is available as a plugin for the inlets-pro CLI.
Download the inlets-pro binary:
- Download it from the GitHub releases
- Get it with arkade:
arkade get inlets-pro
Get the tunnel plugin:
inlets-pro plugin get tunnel
Run inlets-pro tunnel --help to see all available commands.
Setup the first customer tunnel¶
Continue the setup here: Create a customer tunnel
Upgrading the chart and components¶
If you have a copy of values.yaml with pinned image versions, you should update these manually.
Next, run the Helm chart installation command again, and remember to use the same values.yaml file that you used to install the software originally.
Over time, you may find using a tool like FluxCD or ArgoCD to manage the installation and updates makes more sense than running Helm commands manually.
Ingress class change in chart version 0.5.0
The default ingress class changed from Nginx to Traefik in chart version 0.5.0. If you are still using NGINX ingress, make sure your values.yaml includes the required configuration from the Ingress NGINX section before upgrading.
If the Custom Resource Definition (CRD) has changed, you can extract it from the Chart repo and install it before or after upgrading. As a rule, Helm won't install or upgrade CRDs a second time if there's already an existing version:
helm template oci://ghcr.io/openfaasltd/inlets-uplink-provider \
--include-crds=true \
--output-dir=/tmp
kubectl apply -f \
/tmp/inlets-uplink-provider/crds/uplink.inlets.dev_tunnels.yaml
Upgrading existing customer tunnels¶
The operator will upgrade the image: version of all deployed inlets uplink tunnels automatically based upon the tag set in values.yaml.
If no value is set in your overridden values.yaml file, then whatever the default is in the chart will be used.
inletsVersion: 0.9.23
When a tunnel is upgraded, you'll see a log line like this:
2024-01-11T12:25:15.442Z info operator/controller.go:860 Upgrading version {"tunnel": "ce.inlets", "from": "0.9.21", "to": "0.9.23"}
Configuration reference¶
Looking for the source for the Helm chart? The source is published directly to a container registry as an OCI bundle. View the source with: helm template oci://ghcr.io/openfaasltd/inlets-uplink-provider
If you need a configuration option outside of what's already available, feel free to raise an issue on the inlets-pro repository.
Overview of inlets-uplink parameters in values.yaml.
| Parameter | Description | Default |
|---|---|---|
| `pullPolicy` | The imagePullPolicy applied to inlets-uplink components. | `Always` |
| `operator.image` | Container image used for the uplink operator. | `ghcr.io/openfaasltd/uplink-operator:0.1.5` |
| `ingress.issuer.name` | Name of cert-manager Issuer. | `letsencrypt-prod` |
| `ingress.issuer.enabled` | Create a cert-manager Issuer. Set to false if you wish to specify your own pre-existing object for each component. | `true` |
| `ingress.issuer.email` | Let's Encrypt email. Only used for certificate renewal notifications. | `""` |
| `ingress.class` | Ingress class for client router ingress. | `traefik` |
| `clientRouter.image` | Container image used for the client router. | `ghcr.io/openfaasltd/uplink-client-router:0.1.5` |
| `clientRouter.domain` | Domain name for inlets uplink. Customer tunnels will connect with a URI of: `wss://uplink.example.com/namespace/tunnel`. | `""` |
| `clientRouter.tls.ingress.enabled` | Enable ingress for the client router. | `true` |
| `clientRouter.tls.ingress.annotations` | Annotations to be added to the client router ingress resource. | `{}` |
| `clientRouter.tls.istio.enabled` | Use an Istio Gateway for incoming traffic to the client router. | `false` |
| `clientRouter.service.type` | Client router service type. | `ClusterIP` |
| `clientRouter.service.nodePort` | Client router service port for the NodePort service type, assigned automatically when left empty (only if `clientRouter.service.type` is set to `NodePort`). | `nil` |
| `tunnelsNamespace` | Deployments, Services and Secrets will be created in this namespace. Leave blank for a cluster-wide scope, with tunnels in multiple namespaces. | `""` |
| `inletsVersion` | Inlets Pro release version for tunnel server Pods. | `0.9.12` |
| `clientApi.enabled` | Enable tunnel management REST API. | `true` |
| `clientApi.domain` | Domain for a dedicated client API ingress. By default the API is exposed on the client-router's domain under the `/v1` path prefix. | `""` |
| `clientApi.tls.ingress.enabled` | Enable a dedicated ingress for the client API. Requires `clientApi.domain` to be set. | `false` |
| `clientApi.tls.ingress.annotations` | Annotations to be added to the client API ingress resource. | `{}` |
| `clientApi.image` | Container image used for the client API. | `ghcr.io/openfaasltd/uplink-api:0.1.5` |
| `prometheus.create` | Create the Prometheus monitoring component. | `true` |
| `prometheus.resources` | Resource limits and requests for prometheus containers. | `{}` |
| `prometheus.image` | Container image used for prometheus. | `prom/prometheus:v2.40.1` |
| `prometheus.service.type` | Prometheus service type. | `ClusterIP` |
| `prometheus.service.nodePort` | Prometheus service port for the NodePort service type, assigned automatically when left empty (only if `prometheus.service.type` is set to `NodePort`). | `nil` |
| `nodeSelector` | Node labels for pod assignment. | `{}` |
| `affinity` | Node affinity for pod assignments. | `{}` |
| `tolerations` | Node tolerations for pod assignment. | `[]` |
Specify each parameter using the --set key=value[,key=value] argument to helm install.
Telemetry and usage data¶
The inlets-uplink Kubernetes operator will send telemetry data to OpenFaaS Ltd on a periodic basis. This information is used for calculating accurate usage metrics for billing purposes. This data is sent over HTTPS, does not contain any personal information, and is not shared with any third parties.
This data includes the following:
- Number of tunnels deployed
- Number of namespaces with at least one tunnel contained
- Kubernetes version
- Inlets Uplink version
- Number of installations of Inlets Uplink
Traefik rate limiting¶
With Traefik, rate limiting is configured using Middleware custom resources. You can use the RateLimit middleware to limit requests per second and the InFlightReq middleware to limit simultaneous connections.
Create a Middleware resource for rate limiting in the inlets namespace:
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
name: uplink-rate-limit
namespace: inlets
spec:
rateLimit:
average: 17
period: 1s
burst: 50
Create a Middleware resource for limiting simultaneous connections:
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
name: uplink-inflight-limit
namespace: inlets
spec:
inFlightReq:
amount: 300
To apply the middleware to the client-router Ingress, add the traefik.ingress.kubernetes.io/router.middlewares annotation in your values.yaml:
clientRouter:
tls:
ingress:
annotations:
traefik.ingress.kubernetes.io/router.middlewares: inlets-uplink-rate-limit@kubernetescrd,inlets-uplink-inflight-limit@kubernetescrd
The annotation value follows the format <namespace>-<middleware-name>@kubernetescrd. Multiple middleware can be chained with commas.
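To confirm the limits are being enforced, a rough smoke test is to send a burst of requests and count the status codes. Once the burst allowance is exhausted, Traefik should start returning HTTP 429 (the domain below is a placeholder):

```shell
# Send 100 requests and tally the HTTP status codes returned
for i in $(seq 1 100); do
  curl -s -o /dev/null -w "%{http_code}\n" https://uplink.example.com/
done | sort | uniq -c
```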
Ingress NGINX¶
The Kubernetes NGINX Ingress Controller project has announced its retirement in March 2026 and will no longer receive updates or security patches. The uplink chart version 0.5.0 changes the default ingress class from Nginx to Traefik. If you want to update to the latest uplink version but have not yet migrated your ingress controller, add the following additional parameters to the values.yaml configuration for the uplink Helm chart.
ingress:
class: "nginx"
clientRouter:
tls:
ingress:
annotations:
nginx.ingress.kubernetes.io/limit-connections: "300"
nginx.ingress.kubernetes.io/limit-rpm: "1000"
# 10 minutes for the websocket
nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
# Up the keepalive timeout to max
nginx.ingress.kubernetes.io/keepalive-timeout: "350"
nginx.ingress.kubernetes.io/proxy-buffer-size: 128k
When you have the data-router deployed, you can add these additional rate-limiting annotations as well. They used to be set as defaults by the chart.
dataRouter:
tls:
ingress:
annotations:
nginx.ingress.kubernetes.io/limit-connections: "300"
nginx.ingress.kubernetes.io/limit-rpm: "1000"