Architecture: Critical Sequences

Firezone is a distributed system with many moving parts, but some sequences are especially critical to the integrity of the entire system. These are explained in more detail below.


Authentication

Firezone authenticates users using two primary methods.

The authentication process for each is similar: both methods begin at your Firezone account's sign-in page.

However, the OIDC flow redirects the user to the identity provider for authentication before the final redirect back to Firezone.

Here's how the authentication flow works:

Firezone authentication sequence diagram
  1. User clicks Sign in from the Client.
  2. The Client generates random 32-byte state and nonce values. These are used to prevent certain kinds of forgery and injection attacks.
  3. A browser window opens to your account's sign-in page with the nonce and state parameters included.
  4. The user chooses which authentication method to use. If OIDC, the user is redirected out to the identity provider.
  5. After successfully authenticating, the user is redirected back to the admin portal.
  6. The admin portal mints a Firezone token created from the nonce parameter and other information.
  7. The admin portal issues a final redirect to firezone-fd0020211111://handle_client_sign_in_callback with the token and state parameters from the initial request.
  8. The Client receives this callback URL and validates the state parameter matches what it originally sent. This prevents other applications from injecting tokens into the Client's callback handler.
  9. The Client saves this token in a platform-specific secure storage mechanism, for example Keychain on macOS and iOS.
  10. The Client now has a valid token and uses it to authenticate with the control plane API.
  11. The authentication process is complete.
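The state handling in steps 2, 7, and 8 can be sketched in Python. The function names and URL shapes below are illustrative assumptions for this sketch, not Firezone's actual implementation:

```python
import hmac
import secrets
from urllib.parse import parse_qs, urlencode, urlparse

def begin_sign_in(account_url: str) -> tuple[str, str, str]:
    """Steps 2-3: generate random 32-byte state and nonce values and
    build the sign-in URL that the browser window will open."""
    state = secrets.token_hex(32)  # 32 random bytes, hex-encoded
    nonce = secrets.token_hex(32)
    url = f"{account_url}?{urlencode({'state': state, 'nonce': nonce})}"
    return state, nonce, url

def handle_callback(callback_url: str, expected_state: str) -> str:
    """Step 8: validate the state parameter before accepting the token
    carried by the final redirect."""
    params = parse_qs(urlparse(callback_url).query)
    state = params.get("state", [""])[0]
    token = params.get("token", [""])[0]
    # Constant-time comparison guards against other applications
    # injecting tokens into the callback handler.
    if not hmac.compare_digest(state, expected_state):
        raise ValueError("state mismatch: rejecting callback")
    return token
```

A Client that receives a callback whose state does not match the value it originally generated rejects the token outright, which is what makes the injection attack described above fail.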

Policy evaluation

Policy evaluation is the process the Policy Engine uses to decide whether to allow or deny a connection request from a Client to a Resource.

If the request is allowed, connection setup information is sent to the Client and the appropriate Gateway. If the request is denied, it's logged and then dropped. This ensures that Clients are only connected to Gateways that are serving Resources the User is allowed to access.

Connections in Firezone are always default-deny. Policies must be created to allow access.

Here's how the process works:

Firezone policy evaluation sequence diagram
  1. The User attempts to access a Resource.
  2. The Client sees the request and opens a connection request to the Policy Engine.
  3. The Policy Engine evaluates the request against the configured Policies in your account based on factors such as the Groups the user is a part of, which Resource is being accessed, and so forth. If a match is found, the connection is allowed. If no match is found, the connection is dropped.
  4. If the connection is allowed, the Policy Engine sends the Client the WireGuard keys and NAT traversal information for the Gateway that will serve the Resource.
  5. The Policy Engine sends similar details to the Gateway.
  6. The Client and Gateway establish a WireGuard tunnel, and the Gateway sets up a forwarding rule to the Resource.
  7. Connection setup is complete. The User can now access the Resource.
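The default-deny evaluation in step 3 can be sketched as follows. The Policy shape and field names here are simplified assumptions for illustration; the real Policy Engine runs in the control plane and considers more factors:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Policy:
    group: str     # Group granted access
    resource: str  # Resource the Policy applies to

def evaluate(user_groups: set[str], resource: str, policies: list[Policy]) -> bool:
    """Default-deny: allow only if some Policy matches both a Group the
    user belongs to and the Resource being accessed."""
    return any(
        p.resource == resource and p.group in user_groups
        for p in policies
    )
```

Because the function returns False whenever no Policy matches, access must be granted explicitly; there is no allow-by-default path.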

Since the Client only receives WireGuard keys and NAT traversal information when a connection is allowed, it's not possible for a Client to exchange packets with the Gateway until explicitly allowed by the Policy Engine.

This means Gateways remain invisible to the outside world, helping to protect against classes of attacks that perimeter-based models may be susceptible to, such as DDoS attacks.

DNS resolution

Secure DNS resolution is a critical function in most organizations.

Firezone employs a unique, granular approach to split DNS to ensure traffic intended only for DNS-based Resources is routed through Firezone, leaving other traffic untouched -- even when resolved IP addresses overlap.

To achieve this, Firezone embeds a tiny, in-memory DNS proxy inside each Client that intercepts all DNS queries on the system.

When the proxy sees a query that doesn't match a known Resource, it operates in pass-through mode, forwarding the query to the system's default resolvers or to the upstream resolvers configured in your account.

If the query matches a Resource, however, the following happens:

  1. The proxy sends the query request to the Policy Engine for evaluation. If the request is allowed, the Policy Engine finds an appropriate Gateway to resolve the query.
  2. The Gateway resolves the query and sends the response back to the proxy.
  3. The proxy generates a special, internal IP from a reserved range such as fd00:2021:1111:8000::/107 and maps this IP to the resolved IP returned by the Gateway.
  4. The proxy responds to the Client with this internal IP where it is then returned back to the application making the original request.
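Steps 3 and 4 amount to allocating a stable mapped address per Resource. A minimal Python sketch, with class and method names that are assumptions rather than the Client's real code:

```python
import ipaddress

class MappedIPAllocator:
    """Allocates stable internal IPs for DNS Resources and remembers the
    real IP each one maps to (illustrative sketch only)."""

    def __init__(self, cidr: str = "fd00:2021:1111:8000::/107") -> None:
        self._pool = ipaddress.ip_network(cidr).hosts()  # lazy iterator over the range
        self._by_name = {}   # Resource name -> internal IP
        self._to_real = {}   # internal IP -> resolved (real) IP

    def map(self, name: str, resolved_ip: str):
        """Steps 3-4: return a stable internal IP for this Resource,
        mapped to the real IP the Gateway resolved."""
        if name not in self._by_name:
            internal = next(self._pool)
            self._by_name[name] = internal
            self._to_real[internal] = resolved_ip
        return self._by_name[name]

    def real_ip(self, internal) -> str:
        """Reverse lookup used when forwarding packets toward the Gateway."""
        return self._to_real[internal]
```

Note that two Resources resolving to the same upstream IP still receive distinct internal IPs, which is what makes per-Resource routing possible.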

This is why you'll see DNS-based Resources resolve to IPs from this range while the Client is signed in, for example in the non-authoritative answer returned by nslookup.

For a deeper dive into how (and why) DNS works this way in Firezone, see the How DNS works in Firezone article.

Why Firezone uses a mapped address for DNS Resources

This is a common source of confusion among new Firezone users, so it's helpful to explain why Firezone uses mapped IPs for DNS Resources instead of simply using the actual resolved IP.

Consider the case where two DNS Resources resolve to the same IP address, such as when Name-based virtual hosting is used to host two web applications on the same server:

  • One DNS Resource resolves to a given IP address
  • A second DNS Resource also resolves to that same IP address

Remember that routing happens at the IP level: we can't independently route packets for the same IP to two different places. If Firezone used the Resource's actual IP address to route packets, a User granted access to only one of the two Resources would be able to reach the other as well.

Using mapped IPs allows Firezone to securely route DNS Resources no matter how many other services share the same IP address.

High availability

Firezone was designed from the ground up to support high availability requirements. This is achieved through a combination of load balancing and automatic failover, described below.

Load balancing

When a Client wants to connect to a Resource, Firezone randomly selects a healthy Gateway in the Site to handle the request. The Client maintains the connection to that Gateway until either the Client disconnects or the Gateway becomes unhealthy.

This effectively shards Client connections across all Gateways in a Site, achieving higher overall throughput than otherwise possible with a single Gateway.
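Assuming each Gateway reports a health flag, the random selection might look like this sketch (the field names are illustrative, not Firezone's API):

```python
import random

def select_gateway(gateways: list[dict]) -> dict:
    """Pick a random healthy Gateway in the Site; fail closed if none
    are healthy rather than falling back to an unhealthy one."""
    healthy = [gw for gw in gateways if gw["healthy"]]
    if not healthy:
        raise RuntimeError("no healthy Gateways in Site")
    return random.choice(healthy)
```

Random selection across healthy Gateways is what spreads Client connections evenly across the Site.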

Automatic failover

Two or more Gateways deployed within a Site provide automatic failover in the event of a Gateway failure.

Here's how it works:

  1. When the admin portal detects a particular Gateway is unhealthy, it will stop using it for new connection requests to Resources in the Site.
  2. Existing Clients will remain connected to the Gateway until they themselves detect it to be unhealthy.
  3. Clients identify unhealthy Gateways using keepalive timers. If the timer expires, the Client will disconnect from the unhealthy Gateway and request a new, healthy one from the portal.
  4. The Client keepalive timer expires after 10 seconds. This is the maximum time it takes for existing Client connections to be rerouted to a healthy Gateway in the event of a Gateway failure.
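The Client-side check in steps 3 and 4 can be modeled as a simple keepalive timer. This is a sketch with assumed names, passing timestamps explicitly for clarity:

```python
KEEPALIVE_TIMEOUT = 10.0  # seconds, matching the timer described above

class GatewayHealth:
    """Tracks the last keepalive seen from a Gateway (illustrative sketch,
    not the real Client implementation)."""

    def __init__(self, now: float) -> None:
        self.last_keepalive = now

    def on_keepalive(self, now: float) -> None:
        self.last_keepalive = now

    def is_unhealthy(self, now: float) -> bool:
        # If no keepalive arrived within the timeout, the Client should
        # disconnect and request a new Gateway from the portal.
        return now - self.last_keepalive > KEEPALIVE_TIMEOUT
```

Because the timer is 10 seconds, a failed Gateway is detected and abandoned within at most 10 seconds of its last keepalive.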

By using two independent health checks in the portal and the Client, Firezone ensures that temporary network issues between the Client and portal do not interrupt existing connections to healthy Gateways.

Last updated: May 09, 2024