<div class="admonition attention">
<p>This article describes how to deploy DMS to Kubernetes. We highly recommend using our community-maintained <a href="https://github.com/docker-mailserver/docker-mailserver-helm">DMS Helm chart</a>.</p>
<div class="admonition note">
<p class="admonition-title">Requirements</p>
<ol>
<li>Basic knowledge of Kubernetes.</li>
<li>A basic understanding of mail servers.</li>
<li>Ideally, the reader has already deployed DMS before with a simpler setup (<em><code>docker run</code> or Docker Compose</em>).</li>
</ol>
</div>
<div class="admonition warning">
<p class="admonition-title">About Support for Kubernetes</p>
<p>Please note that Kubernetes <strong>is not</strong> officially supported and we do not build images specifically designed for it. When opening an issue, please remember that only Docker &amp; Docker Compose are officially supported.</p>
<p>This content is entirely community-supported. If you find errors, please open an issue and raise a PR.</p>
</div>
<p>We want to provide the basic configuration in the form of environment variables with a <code>ConfigMap</code>. Note that this is just an example configuration; tune the <code>ConfigMap</code> to your needs.</p>
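<p>As a sketch, such a <code>ConfigMap</code> might look like the following (<em>the keys are real DMS ENV settings, but the resource name and chosen values here are illustrative assumptions</em>):</p>

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: mailserver.environment  # hypothetical name
data:
  TLS_LEVEL: "modern"
  POSTSCREEN_ACTION: "drop"
  SPOOF_PROTECTION: "1"
  ENABLE_POSTGREY: "0"
  ENABLE_FAIL2BAN: "0"
```

<p>Reference it from the container spec with <code>envFrom.configMapRef</code> so that every key becomes an environment variable.</p>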
<p>You can also make use of user-provided configuration files (<em>e.g. <code>user-patches.sh</code>, <code>postfix-accounts.cf</code>, etc</em>), to customize DMS to your needs.</p>
<details class="example">
<summary>Providing config files</summary>
<p>Here is a minimal example that supplies a <code>postfix-accounts.cf</code> file inline with two users:</p>
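<p>For instance (<em>a sketch - the account names are placeholders, and real password hashes should be generated, e.g. via <code>setup email add</code></em>):</p>

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: mailserver.files  # hypothetical name
data:
  postfix-accounts.cf: |
    alice@example.com|{SHA512-CRYPT}$6$PLACEHOLDER
    bob@example.com|{SHA512-CRYPT}$6$PLACEHOLDER
```

<p>Mount the key as a file into <code>/tmp/docker-mailserver/</code> (e.g. with a volume using <code>subPath</code>).</p>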
<p>The inline <code>postfix-accounts.cf</code> config example above provides file content that is static. It is mounted read-only at runtime, and thus cannot support modifications such as dynamically adding accounts.</p>
<p>For production deployments, use persistent volumes instead (via <code>PersistentVolumeClaim</code>). That will enable files like <code>postfix-accounts.cf</code> to add and remove accounts, while also persisting those changes externally from the container.</p>
</details>
<div class="admonition tip">
<p class="admonition-title">Modularize your <code>ConfigMap</code></p>
<p><a href="https://kustomize.io/">Kustomize</a> can be a useful tool as it supports creating a <code>ConfigMap</code> from multiple files.</p>
</div>
</div>
<div class="tabbed-block">
<p>To persist data externally from the DMS container, configure a <code>PersistentVolumeClaim</code> (PVC).</p>
<p>Make sure you have a storage system (like Longhorn, Rook, etc.) and that you choose the correct <code>storageClassName</code> (according to your storage system).</p>
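<p>A minimal sketch of such a claim (<em>the name, size, and <code>storageClassName</code> are assumptions to adjust</em>):</p>

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mail-storage  # hypothetical name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: local-path  # choose according to your storage system
```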
<p>Note that the <code>Service</code> configuration shown in this guide does preserve the original client IP, but you will not be able to scale this way. We have chosen this route because we think most Kubernetes users will only want to run a single DMS instance.</p>
</div>
</div>
<div class="tabbed-block">
<p>A <ahref="https://kubernetes.io/docs/concepts/services-networking/service"><code>Service</code></a> is required for getting the traffic to the pod itself. It configures a load balancer with the ports you'll need.</p>
<p>The configuration for a <code>Service</code> affects if the original IP from a connecting client is preserved (<em>this is important</em>). <ahref="#exposing-your-mail-server-to-the-outside-world">More about this further down below</a>.</p>
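<p>A sketch of such a <code>Service</code> (<em>names and the port selection are illustrative</em>):</p>

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mailserver  # hypothetical name
spec:
  type: LoadBalancer
  # `Local` avoids the service proxy, preserving the client source IP:
  externalTrafficPolicy: Local
  selector:
    app: mailserver
  ports:
    - name: smtp
      port: 25
      targetPort: smtp
    - name: submissions
      port: 465
      targetPort: submissions
    - name: submission
      port: 587
      targetPort: submission
    - name: imaps
      port: 993
      targetPort: imaps
```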
</div>
</div>
<div class="tabbed-block">
<div class="admonition example">
<p class="admonition-title">Using <a href="https://cert-manager.io/docs/"><code>cert-manager</code></a> to supply TLS certificates</p>
<p>You could supply RSA certificates as fallback certificates instead, with ECDSA as the primary. DMS supports dual certificates via the ENV <code>SSL_ALT_CERT_PATH</code> and <code>SSL_ALT_KEY_PATH</code>.</p>
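<p>A minimal <code>Certificate</code> resource might look like this (<em>the names and DNS name are assumptions; an <code>Issuer</code> must already be configured</em>):</p>

```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: mail-tls-certificate-rsa  # hypothetical name
spec:
  secretName: mail-tls-certificate-rsa
  isCA: false
  privateKey:
    algorithm: RSA
    size: 2048
  dnsNames:
    - mail.example.com
  issuerRef:
    name: mail-issuer  # assumes a pre-configured Issuer / ClusterIssuer
    kind: Issuer
```

<p>cert-manager then maintains the key pair in the referenced <code>Secret</code>, which can be mounted into the DMS container.</p>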
</div>
<div class="admonition warning">
<p class="admonition-title">Always provide sensitive information via a <code>Secret</code></p>
<p>For storing OpenDKIM keys, TLS certificates, or any sort of sensitive data - you should be using <code>Secret</code>s.</p>
<p>A <code>Secret</code> is similar to a <code>ConfigMap</code>; it can be used and mounted as a volume as demonstrated in the <a href="#deployment"><code>Deployment</code> manifest</a> tab.</p>
</div>
</div>
<div class="tabbed-block">
<p>The <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#creating-a-deployment"><code>Deployment</code></a> config is the most complex component.</p>
<ul>
<li>It instructs Kubernetes how to run the DMS container and how to apply your <code>ConfigMap</code>s, persisted storage, etc.</li>
<li>Additional options can be set to enforce runtime security.</li>
</ul>
<h3 id="certificates-an-example"><a class="toclink" href="#certificates-an-example">Certificates - An Example</a></h3>
<p>In this example, we use <a href="https://cert-manager.io/docs/"><code>cert-manager</code></a> to supply certificates. You will need to have <code>cert-manager</code> configured - in particular, its issuer. Since we do not know how you want or need your certificates to be supplied, we do not provide more configuration here. The documentation for <a href="https://cert-manager.io/docs/"><code>cert-manager</code></a> is excellent.</p>
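<p>A heavily trimmed sketch of such a <code>Deployment</code> (<em>the image tag, resource names, and mounts are illustrative; it assumes a <code>ConfigMap</code> of ENV named <code>mailserver.environment</code> and a PVC named <code>mail-storage</code></em>):</p>

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mailserver
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mailserver
  template:
    metadata:
      labels:
        app: mailserver
    spec:
      containers:
        - name: mailserver
          image: ghcr.io/docker-mailserver/docker-mailserver:latest
          envFrom:
            - configMapRef:
                name: mailserver.environment
          ports:
            - name: smtp
              containerPort: 25
            - name: submissions
              containerPort: 465
            - name: submission
              containerPort: 587
            - name: imaps
              containerPort: 993
          volumeMounts:
            - name: data
              mountPath: /var/mail
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: mail-storage
```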
</div>
<p>The <a href="../../security/ssl/">TLS docs page</a> provides guidance when it comes to certificates and transport layer security. Always provide sensitive information via <code>Secret</code>s.</p>
<h2 id="exposing-your-mail-server-to-the-outside-world"><a class="toclink" href="#exposing-your-mail-server-to-the-outside-world">Exposing your Mail Server to the Outside World</a></h2>
<p>The real client IP is required for IP-based SPF checks. If you do not require SPF checks for incoming mail, you may disable them in your <a href="../override-defaults/postfix/">Postfix configuration</a> by dropping the line that states: <code>check_policy_service unix:private/policyd-spf</code>.</p>
<p>The easiest approach was covered above: using <code class="highlight"><span class="nt">externalTrafficPolicy</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">Local</span></code>, which disables the service proxy but also makes the service local (and hence does not scale). This approach only works when a load balancer (like MetalLB) assigns the service a correct - that is, public and routable - IP address, making it similar to the first example below. Alternatively, you could use a load balancer without an external IP and DNAT the network traffic to the mail server; this also preserves the origin IP address and therefore does not interfere with SPF checks. If a dedicated external IP address is available, use the former approach; if not, try the latter.</p>
<p>The simplest way is to expose DMS as a <a href="https://kubernetes.io/docs/concepts/services-networking/service">Service</a> with <a href="https://kubernetes.io/docs/concepts/services-networking/service/#external-ips">external IPs</a>. This is very similar to the approach above, except that here you assign the external IP to the service directly, whereas above you tell your load balancer to do so.</p>
<p>The more difficult part with Kubernetes is to expose a deployed DMS instance to the outside world.</p>
<p>The major problem with exposing DMS to the outside world in Kubernetes is to <a href="https://kubernetes.io/docs/tutorials/services/source-ip">preserve the real client IP</a>. The real client IP is required by DMS for performing IP-based DNS and spam checks.</p>
<p>Kubernetes provides multiple ways to address this; each has its upsides and downsides.</p>
<!-- This empty quote block is purely for a visual border -->
<div class="admonition quote">
<div class="tabbed-set tabbed-alternate" data-tabs="2:3"><input checked="checked" id="configure-ip-manually" name="__tabbed_2" type="radio"/><input id="host-network" name="__tabbed_2" type="radio"/><input id="using-the-proxy-protocol" name="__tabbed_2" type="radio"/><div class="tabbed-labels"><label for="configure-ip-manually">Configure IP Manually</label><label for="host-network">Host network</label><label for="using-the-proxy-protocol">Using the PROXY Protocol</label></div>
<details class="abstract" open="open">
<summary>Advantages / Disadvantages</summary>
<ul class="task-list">
<li class="task-list-item"><label class="task-list-control"><input type="checkbox" disabled/><span class="task-list-indicator"></span></label> Requires the node to have a dedicated, publicly routable IP address</li>
<li class="task-list-item"><label class="task-list-control"><input type="checkbox" disabled/><span class="task-list-indicator"></span></label> Limited to a single node (<em>associated to the dedicated IP address</em>)</li>
<li class="task-list-item"><label class="task-list-control"><input type="checkbox" disabled/><span class="task-list-indicator"></span></label> Your deployment requires an explicit IP in your configuration (<em>or an entire Load Balancer</em>)</li>
</ul>
</details>
<div class="admonition info">
<p class="admonition-title">Requirements</p>
<ol>
<li>You can dedicate a <strong>publicly routable IP</strong> address for the DMS configured <code>Service</code>.</li>
<li>A dedicated IP is required to allow your mail server to have matching <code>A</code> and <code>PTR</code> records (<em>which other mail servers will use to verify trust when they receive mail sent from your DMS instance</em>).</li>
</ol>
</div>
<div class="admonition example">
<p class="admonition-title">Example</p>
<p>Assign the DMS <code>Service</code> an external IP directly, or delegate an LB to assign the IP on your behalf.</p>
<p>Here, the DMS <code>Service</code> is configured with an "<a href="https://kubernetes.io/docs/concepts/services-networking/service/#external-ips">external IP</a>" manually: append your externally reachable IP address to <code>spec.externalIPs</code>.</p>
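<p>A sketch (<em>the names are illustrative, and the IP shown is a placeholder for your dedicated public address</em>):</p>

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mailserver  # hypothetical name
spec:
  selector:
    app: mailserver
  ports:
    - name: smtp
      port: 25
  externalIPs:
    - 203.0.113.10  # your externally reachable IP address
```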
<p>This approach:</p>
<ul>
<li>does not preserve the real client IP, so SPF checks of incoming mail will fail;</li>
<li>requires you to specify the exposed IPs explicitly.</li>
</ul>
<h3 id="proxy-port-to-service"><a class="toclink" href="#proxy-port-to-service">Proxy port to Service</a></h3>
<p>A <a href="https://github.com/kubernetes/contrib/tree/master/for-demos/proxy-to-service">proxy pod</a> helps avoid the need to specify external IPs explicitly. This comes at the cost of complexity: you must deploy a proxy pod on each <a href="https://kubernetes.io/docs/concepts/architecture/nodes">Node</a> you want to expose DMS on.</p>
<p>This approach</p>
<ul>
<li>does not preserve the real client IP, so SPF checks of incoming mail will fail.</li>
</ul>
<h3 id="bind-to-concrete-node-and-use-host-network"><a class="toclink" href="#bind-to-concrete-node-and-use-host-network">Bind to concrete Node and use host network</a></h3>
<p>One way to preserve the real client IP is to use <code>hostPort</code> and <code>hostNetwork: true</code>. This comes at the cost of availability: you can reach DMS from the outside world only via the IPs of the <a href="https://kubernetes.io/docs/concepts/architecture/nodes">Node</a> where DMS is deployed.</p>
</div>
<div class="tabbed-block">
<p>The config differs depending on your choice of load balancer. This example uses <a href="https://metallb.universe.tf/">MetalLB</a>.</p>
<div class="highlight"><pre><code>addresses: [&lt;YOUR PUBLIC DEDICATED IP IN CIDR NOTATION&gt;]</code></pre></div>
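<p>With a current MetalLB release, the pool could be declared like this (<em>a sketch; the pool name and address are assumptions</em>):</p>

```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: mail  # hypothetical name
  namespace: metallb-system
spec:
  addresses:
    - 203.0.113.10/32  # your public dedicated IP in CIDR notation
```

<p>Newer MetalLB versions also need an <code>L2Advertisement</code> (or a BGP equivalent) referencing the pool, and the DMS <code>Service</code> must be of <code>type: LoadBalancer</code>.</p>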
<details class="abstract" open="open">
<summary>Advantages / Disadvantages</summary>
<ul class="task-list">
<li class="task-list-item"><label class="task-list-control"><input type="checkbox" disabled/><span class="task-list-indicator"></span></label> Requires the node to have a dedicated, publicly routable IP address</li>
<li class="task-list-item"><label class="task-list-control"><input type="checkbox" disabled/><span class="task-list-indicator"></span></label> Limited to a single node (<em>associated to the dedicated IP address</em>)</li>
<li class="task-list-item"><label class="task-list-control"><input type="checkbox" disabled/><span class="task-list-indicator"></span></label> It is not possible to access DMS via other cluster nodes, only via the node that DMS was deployed on</li>
<li class="task-list-item"><label class="task-list-control"><input type="checkbox" disabled/><span class="task-list-indicator"></span></label> Every port within the container is exposed on the host side</li>
</ul>
</details>
<div class="admonition example">
<p class="admonition-title">Example</p>
<p>Using <code>hostPort</code> and <code>hostNetwork: true</code> is a similar approach to <a href="https://docs.docker.com/compose/compose-file/compose-file-v3/#network_mode"><code>network_mode: host</code> with Docker Compose</a>.</p>
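<p>An illustrative excerpt of the pod template for this approach (<em>the node name is hypothetical</em>):</p>

```yaml
spec:
  template:
    spec:
      hostNetwork: true
      # Pin the pod to the node that holds the publicly routable IP:
      nodeSelector:
        kubernetes.io/hostname: mail-node
      containers:
        - name: mailserver
          image: ghcr.io/docker-mailserver/docker-mailserver:latest
          ports:
            - name: smtp
              containerPort: 25
              hostPort: 25
```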
</div>
</div>
<div class="tabbed-block">
<details class="abstract" open="open">
<summary>Advantages / Disadvantages</summary>
<ul class="task-list">
<li class="task-list-item"><label class="task-list-control"><input type="checkbox" disabled checked/><span class="task-list-indicator"></span></label> Preserves the origin IP address of clients (<em>which is crucial for DNS related checks</em>)</li>
<li class="task-list-item"><label class="task-list-control"><input type="checkbox" disabled checked/><span class="task-list-indicator"></span></label> Aligns with a best practice for Kubernetes by using a dedicated ingress, routing external traffic to the k8s cluster (<em>with the benefits of flexible routing rules</em>)</li>
<li class="task-list-item"><label class="task-list-control"><input type="checkbox" disabled checked/><span class="task-list-indicator"></span></label> Avoids the restraint of a single <a href="https://kubernetes.io/docs/concepts/architecture/nodes">node</a> (<em>as a workaround to preserve the original client IP</em>)</li>
<li class="task-list-item"><label class="task-list-control"><input type="checkbox" disabled/><span class="task-list-indicator"></span></label> Introduces complexity by requiring:<ul>
<li>A reverse-proxy / ingress controller (<em>potentially extra setup</em>)</li>
<li>Kubernetes manifest changes for the DMS configured <code>Service</code></li>
<li>DMS configuration changes for Postfix and Dovecot</li>
</ul>
<h3 id="proxy-port-to-service-via-proxy-protocol"><a class="toclink" href="#proxy-port-to-service-via-proxy-protocol">Proxy Port to Service via PROXY Protocol</a></h3>
<p>This approach is conceptually the same as <a href="#proxy-port-to-service">using a proxy pod</a>, but instead of a separate proxy pod, you configure your ingress to proxy TCP traffic to the DMS pod using the PROXY protocol, which preserves the real client IP.</p>
<h4 id="configure-your-ingress"><a class="toclink" href="#configure-your-ingress">Configure your Ingress</a></h4>
<p>With an <a href="https://kubernetes.github.io/ingress-nginx">NGINX ingress controller</a>, set <code>externalTrafficPolicy: Local</code> for its service, and add the following to the TCP services config map (as described <a href="https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services">here</a>):</p>
</li>
<li class="task-list-item"><label class="task-list-control"><input type="checkbox" disabled/><span class="task-list-indicator"></span></label> To keep support for direct connections to DMS services internally within the cluster, service ports must be "duplicated" to offer an alternative port for connections using PROXY protocol</li>
</ul>
</details>
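<p>The TCP services config map for the NGINX ingress controller approach mentioned above might look like this (<em>a sketch; it assumes a <code>Service</code> named <code>mailserver</code> in the <code>mailserver</code> namespace - the trailing <code>PROXY</code> field enables PROXY protocol towards the upstream</em>):</p>

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  # Format: "<namespace>/<service>:<port>::[PROXY]"
  "25": "mailserver/mailserver:25::PROXY"
  "465": "mailserver/mailserver:465::PROXY"
  "587": "mailserver/mailserver:587::PROXY"
  "993": "mailserver/mailserver:993::PROXY"
```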
<details class="question">
<summary>What is the PROXY protocol?</summary>
<p>PROXY protocol is a network protocol for preserving a client’s IP address when the client’s TCP connection passes through a proxy.</p>
<p>It is a common feature supported among reverse-proxy services (<em>NGINX, HAProxy, Traefik</em>), which you may already have handling ingress traffic for your cluster.</p>
<pre class="mermaid"><code>flowchart LR
A(External Mail Server) -->|Incoming connection| B
subgraph cluster
B("Ingress Acting as a Proxy") -->|PROXY protocol connection| C(DMS)
end</code></pre>
<p>For more information on the PROXY protocol, refer to <a href="../../../examples/tutorials/mailserver-behind-proxy/">our dedicated docs page</a> on the topic.</p>
</details>
<details class="example" open="open">
<summary>Configure the Ingress Controller</summary>
<p>On Traefik's side, the configuration is very simple.</p>
<ul>
<li>Create an entrypoint for each port that you want to expose (<em>probably 25, 465, 587 and 993</em>).</li>
<li>Each entrypoint should configure an <a href="https://doc.traefik.io/traefik/routing/providers/kubernetes-crd/#kind-ingressroutetcp"><code>IngressRouteTCP</code></a> that routes to the equivalent internal DMS <code>Service</code> port which supports PROXY protocol connections.</li>
</ul>
<p>The below snippet demonstrates an example for two entrypoints, <code>submissions</code> (port 465) and <code>imaps</code> (port 993).</p>
<div class="highlight"><pre><code>    port: subs-proxy  # note the 15 character limit here</code></pre></div>
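<p>Such an <code>IngressRouteTCP</code> pair might be sketched as follows (<em>assumes Traefik entrypoints named <code>submissions</code> and <code>imaps</code>, and a <code>Service</code> named <code>mailserver</code>; verify the CRD details against the Traefik docs</em>):</p>

```yaml
apiVersion: traefik.io/v1alpha1
kind: IngressRouteTCP
metadata:
  name: smtp-submissions
spec:
  entryPoints:
    - submissions
  routes:
    - match: HostSNI(`*`)
      services:
        - name: mailserver
          port: subs-proxy  # note the 15 character limit here
          proxyProtocol:
            version: 2
  tls:
    passthrough: true
---
apiVersion: traefik.io/v1alpha1
kind: IngressRouteTCP
metadata:
  name: imaps
spec:
  entryPoints:
    - imaps
  routes:
    - match: HostSNI(`*`)
      services:
        - name: mailserver
          port: imaps-proxy
          proxyProtocol:
            version: 2
  tls:
    passthrough: true
```

<p><code>tls.passthrough</code> leaves TLS termination to DMS itself, while <code>proxyProtocol</code> makes Traefik add the PROXY protocol header.</p>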
<div class="admonition info">
<p class="admonition-title"><code>*-proxy</code> port name suffix</p>
<p>The <code>IngressRouteTCP</code> example configs above reference ports with a <code>*-proxy</code> suffix.</p>
<ul>
<li>These port variants will be defined in the <a href="#deployment"><code>Deployment</code> manifest</a>, and are scoped to the <code>mailserver</code> service (via <code>spec.routes.services.name</code>).</li>
<li>The suffix is used to distinguish that these ports are only compatible with connections using the PROXY protocol, which is what your ingress controller should be managing for you by adding the correct PROXY protocol headers to TCP connections it routes to DMS.</li>
</ul>
</div>
</div>
<div class="tabbed-block">
<p>With an <a href="https://kubernetes.github.io/ingress-nginx">NGINX ingress controller</a>, add the following to the TCP services config map (<em>as described <a href="https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services">here</a></em>):</p>
<p>With <a href="https://hub.docker.com/_/haproxy">HAProxy</a>, the configuration should look similar to the above. If you know what it actually looks like, please add an example here. <img alt="😃" class="twemoji" src="https://cdn.jsdelivr.net/gh/jdecked/twemoji@15.0.3/assets/svg/1f603.svg" title=":smiley:"/></p>
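<p>As an untested sketch for HAProxy (<em>the backend address and port are assumptions; <code>send-proxy-v2</code> is what adds the PROXY protocol header</em>):</p>

```haproxy
frontend smtp
    bind :25
    mode tcp
    default_backend dms-smtp

backend dms-smtp
    mode tcp
    # send-proxy-v2 prefixes each connection with a PROXY protocol v2 header:
    server dms mailserver.mailserver.svc.cluster.local:25 send-proxy-v2
```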
</div>
<h4 id="configure-the-mailserver"><a class="toclink" href="#configure-the-mailserver">Configure the Mailserver</a></h4>
<p>Then, configure both <a href="../override-defaults/postfix/">Postfix</a> and <a href="../override-defaults/dovecot/">Dovecot</a> to expect the PROXY protocol:</p>
<details class="example">
<summary>HAProxy Example</summary>
</div>
</div>
</details>
<details class="example" open="open">
<summary>Adjust DMS config for Dovecot + Postfix</summary>
<details class="warning">
<summary>Only ingress should connect to DMS with PROXY protocol</summary>
<p>While Dovecot restricts PROXY protocol connections to only the trusted clients configured via <code>haproxy_trusted_networks</code>, Postfix does not have an equivalent setting. Public clients should always route through ingress to establish a PROXY protocol connection.</p>
<p>You are responsible for properly managing traffic inside your cluster and to <strong>ensure that only trustworthy entities</strong> can connect to the designated PROXY protocol ports.</p>
<p>With Kubernetes, this is usually the task of the CNI (<em>container network interface</em>).</p>
</details>
<div class="admonition tip">
<p class="admonition-title">Advised approach</p>
<p>The <em>"Separate PROXY protocol ports"</em> tab below introduces a little more complexity, but provides better compatibility for internal connections to DMS.</p>
</div>
<div class="tabbed-set tabbed-alternate" data-tabs="5:2"><input checked="checked" id="only-accept-connections-with-proxy-protocol" name="__tabbed_5" type="radio"/><input id="separate-proxy-protocol-ports-for-ingress" name="__tabbed_5" type="radio"/><div class="tabbed-labels"><label for="only-accept-connections-with-proxy-protocol">Only accept connections with PROXY protocol</label><label for="separate-proxy-protocol-ports-for-ingress">Separate PROXY protocol ports for ingress</label></div>
<div class="tabbed-content">
<div class="tabbed-block">
<div class="admonition warning">
<p class="admonition-title">Connections to DMS within the internal cluster will be rejected</p>
<p>The services for these ports can only enable PROXY protocol support by mandating the protocol on all connections for these ports.</p>
<p>This can be problematic when you also need to support internal cluster traffic directly to DMS (<em>instead of routing indirectly through the ingress controller</em>).</p>
</div>
<p>Here is an example configuration for <ahref="../override-defaults/postfix/">Postfix</a>, <ahref="../override-defaults/dovecot/">Dovecot</a>, and the required adjustments for the <ahref="#deployment"><code>Deployment</code> manifest</a>. The port names are adjusted here only to convey the additional context described earlier.</p>
<p>Supporting internal cluster connections to DMS without using PROXY protocol requires both Postfix and Dovecot to be configured with alternative ports for each service port (<em>which only differ by enforcing PROXY protocol connections</em>).</p>
<ul>
<li>It is not possible to access DMS via cluster-DNS, as the PROXY protocol is required for incoming connections.</li>
<li>The ingress controller will route public connections to the internal alternative ports for DMS (<code>*-proxy</code> variants).</li>
<li>Internal cluster connections will instead use the original ports configured for the DMS container directly (<em>which are private to the cluster network</em>).</li>
</ul>
</div>
<p>In this example we'll create a copy of the original service ports with PROXY protocol enabled, and increment the port number assigned by <code>10000</code>.</p>
<p>Create a <code>user-patches.sh</code> file to apply these config changes during container startup:</p>
<div class="highlight"><pre><code># Duplicate the config for the submission(s) service ports (587 / 465) with adjustments for the PROXY ports (10587 / 10465) and `syslog_name` setting:
# Enable PROXY Protocol support (different setting as port 25 is handled via postscreen), optionally configure a `syslog_name` to distinguish in logs:</code></pre></div>
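<p>A sketch of what that <code>user-patches.sh</code> might contain (<em>verify the exact commands against the upstream DMS docs; the PROXY port numbers follow the +10000 convention described above</em>):</p>

```bash
#!/bin/bash
# Duplicate the submission(s) master.cf services (587 / 465) as new services
# on the PROXY ports (10587 / 10465):
postconf -Mf submission/inet | sed -e 's/^submission/10587/' >> /etc/postfix/master.cf
postconf -Mf submissions/inet | sed -e 's/^submissions/10465/' >> /etc/postfix/master.cf
# Enable PROXY protocol support on the duplicated ports:
postconf -P '10587/inet/smtpd_upstream_proxy_protocol=haproxy'
postconf -P '10465/inet/smtpd_upstream_proxy_protocol=haproxy'
# Duplicate port 25 as well; postscreen handles port 25, hence the different setting:
postconf -Mf smtp/inet | sed -e 's/^smtp/12525/' >> /etc/postfix/master.cf
postconf -P '12525/inet/postscreen_upstream_proxy_protocol=haproxy'
```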
<p>For Dovecot, you can configure <a href="../override-defaults/dovecot/"><code>dovecot.cf</code></a> to look like this:</p>
<div class="highlight"><pre><code>haproxy_trusted_networks = &lt;YOUR POD CIDR&gt;</code></pre></div>
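<p>Alongside that setting, each Dovecot listener for a PROXY port needs <code>haproxy = yes</code>. A sketch (<em>the listener name and port follow the +10000 convention and are illustrative</em>):</p>

```text
service imap-login {
  inet_listener imaps-proxied {
    haproxy = yes
    ssl = yes
    port = 10993
  }
}
```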
<p>If you use other Dovecot ports (110, 995, 4190), you may want to configure them similarly to the above. The <code>dovecot.cf</code> config for these ports is <a href="../../../examples/tutorials/mailserver-behind-proxy/">documented here</a> (<em>in the equivalent section of that page</em>).</p>
<p class="admonition-title">Configuring DNS - DKIM record</p>
<p>When you generated your key in the previous step, the DNS data was saved into a file <code>&lt;selector&gt;.txt</code> (default: <code>mail.txt</code>). Use this content to update your <a href="https://www.vultr.com/docs/introduction-to-vultr-dns/">DNS via Web Interface</a> or directly edit your <a href="https://en.wikipedia.org/wiki/Zone_file">DNS Zone file</a>:</p>
<div class="tabbed-set tabbed-alternate" data-tabs="2:2"><input checked="checked" id="web-interface" name="__tabbed_2" type="radio"/><input id="dns-zone-file" name="__tabbed_2" type="radio"/><div class="tabbed-labels"><label for="web-interface">Web Interface</label><label for="dns-zone-file">DNS Zone file</label></div>
<h2 id="running-inside-a-rootless-container"><a class="toclink" href="#running-inside-a-rootless-container">Running Inside A Rootless Container</a></h2>
<p><a href="https://github.com/rootless-containers/rootlesskit"><code>RootlessKit</code></a> is the <em>fakeroot</em> implementation for supporting <em>rootless mode</em> in Docker and Podman. By default, RootlessKit uses the <a href="https://github.com/rootless-containers/rootlesskit/blob/v0.14.5/docs/port.md#port-drivers"><code>builtin</code> port forwarding driver</a>, which does not propagate source IP addresses.</p>
<p>It is necessary for Fail2Ban to have access to the real source IP addresses in order to correctly identify clients. This is achieved by changing the port forwarding driver to <a href="https://github.com/rootless-containers/slirp4netns"><code>slirp4netns</code></a>, which is slower than the builtin driver but does preserve the real source IPs.</p>
<p>For <a href="https://docs.docker.com/engine/security/rootless">rootless mode</a> in Docker, create <code>~/.config/systemd/user/docker.service.d/override.conf</code> with the following content:</p>
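<p>A sketch of that override file (<em><code>DOCKERD_ROOTLESS_ROOTLESSKIT_PORT_DRIVER</code> is the variable documented for Docker's rootless mode</em>):</p>

```ini
[Service]
Environment="DOCKERD_ROOTLESS_ROOTLESSKIT_PORT_DRIVER=slirp4netns"
```

<p>Afterwards, apply it with <code>systemctl --user daemon-reload</code> and <code>systemctl --user restart docker</code>.</p>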
<p>This reduces many of the benefits of using a reverse proxy, but one can still be useful.</p>
<p>Some deployments may require a service to route traffic (e.g. Kubernetes), in which case it is important to understand the advice below.</p>
<p>The guide here has also been adapted for <a href="../../../config/advanced/kubernetes/#using-the-proxy-protocol">our Kubernetes docs</a>.</p>
<h2 id="what-can-go-wrong"><a class="toclink" href="#what-can-go-wrong">What can go wrong?</a></h2>
<p>Without a reverse proxy involved, a service is typically aware of the client IP for a connection.</p>
<p>However, when a reverse proxy routes the connection, this information can be lost: the proxied service mistakenly sees the reverse proxy's IP address as the client IP.</p>