Configuring DNS and Load Balancer
You can use your existing DNS servers and Load Balancers. Ensure that there is connectivity between the Load Balancer, the DNS server, the NPS toolkit VM (Bastion host), and the OCP cluster.
Networking requirements for User-Provisioned Infrastructure
The Kubernetes API server must be able to resolve the node names of the cluster machines. If the API servers and worker nodes are in different zones, a default DNS search zone can be configured so that the API server can resolve the node names. Another supported approach is to always refer to hosts by their fully qualified domain names (FQDNs) in both the node objects and all DNS requests.
Configure network connectivity between the machines so that cluster components can communicate with each other. Each machine must be able to resolve the host names of all the other machines in the cluster.
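As a quick check, name resolution can be verified from any cluster machine or from the Bastion host, for example with dig (the node names below are placeholders for illustration):

```bash
# Hypothetical node names; replace with the hosts in your cluster.
for node in master-0 master-1 master-2 worker-0 worker-1; do
  # Each record should return the node's IPv4 address.
  dig +short "${node}.<cluster_name>.<base_domain>"
done
```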
The following ports must be allowed between the cluster machines:
Protocol | Port | Description |
---|---|---|
TCP | 2379-2380 | etcd server, peer, and metrics ports |
TCP | 6443 | Kubernetes API |
TCP | 9000-9999 | Host-level services, including the node exporter on ports 9100-9101 and the Cluster Version Operator on port 9099 |
TCP | 10249-10259 | The default ports that Kubernetes reserves |
TCP | 10256 | openshift-sdn |
UDP | 4789 | VXLAN and Geneve |
UDP | 6081 | VXLAN and Geneve |
UDP | 9000-9999 | Host-level services, including the node exporter on ports 9100-9101 |
TCP/UDP | 30000-32767 | Kubernetes NodePort |
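Reachability of the listed ports can be spot-checked from any machine in the cluster. The following sketch uses the bash /dev/tcp pseudo-device so that no extra packages are assumed; the target host name and the sampled ports are illustrative:

```bash
# Hypothetical target host; repeat for each machine in the cluster.
TARGET="<master-0_hostname>.<cluster_name>.<base_domain>"

# Probe a few TCP ports from the ranges above using bash's /dev/tcp pseudo-device.
for port in 6443 9100 10250; do
  if timeout 3 bash -c "exec 3<>/dev/tcp/${TARGET}/${port}" 2>/dev/null; then
    echo "TCP ${port} on ${TARGET} is reachable"
  else
    echo "TCP ${port} on ${TARGET} is NOT reachable"
  fi
done
```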
Configuring DNS
The following DNS records are required for an OpenShift Container Platform cluster that uses User-Provisioned Infrastructure (UPI). In each record, <cluster_name> is the cluster name and <base_domain> is the cluster base domain.
Component | Record | Description |
---|---|---|
Kubernetes API | api.<cluster_name>.<base_domain> | This DNS record must point to the external Load Balancer for the control plane machines. The record must be resolvable both by clients external to the cluster and from all the nodes within the cluster. |
Kubernetes API | api-int.<cluster_name>.<base_domain> | This DNS record must point to the internal Load Balancer for the control plane machines. The record must be resolvable from all the nodes within the cluster. |
Routes | *.apps.<cluster_name>.<base_domain> | A wildcard DNS record that points to the internal Load Balancer that targets the machines running the Ingress router pods, which are the worker nodes by default. The record must be resolvable both by clients external to the cluster and from all the nodes within the cluster. |
etcd | etcd-<index>.<cluster_name>.<base_domain> | OpenShift Container Platform requires a DNS record for each etcd instance that points to the control plane machine hosting the instance. The etcd instances are differentiated by <index> values, which start at 0 and end at n-1, where n is the number of control plane machines in the cluster. Each record must resolve to a unicast IPv4 address of the corresponding control plane machine, and the records must be resolvable from all the nodes in the cluster. |
etcd | _etcd-server-ssl._tcp.<cluster_name>.<base_domain> | For each control plane machine, OpenShift Container Platform requires an SRV DNS record for the etcd server on that machine with priority 0, weight 10, and port 2380. A cluster that uses three control plane machines requires SRV records for etcd-0, etcd-1, and etcd-2, as shown in the sample forward DNS records later in this section. |
Master node | <master-0_hostname>.<cluster_name>.<base_domain> <master-1_hostname>.<cluster_name>.<base_domain> <master-2_hostname>.<cluster_name>.<base_domain> | Create entries for the master hosts. |
Worker node | <worker-0_hostname>.<cluster_name>.<base_domain> <worker-1_hostname>.<cluster_name>.<base_domain> | Create entries for the worker hosts. |
As part of the verification, ensure that the configured IP addresses and host names are resolvable.
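For example, the records can be checked from the NPS toolkit VM (Bastion host) with dig. The expected answers are noted in the comments, and test.apps is an arbitrary name used only to exercise the wildcard record:

```bash
BASE="<cluster_name>.<base_domain>"

dig +short "api.${BASE}"         # expect the external Load Balancer IP
dig +short "api-int.${BASE}"     # expect the internal Load Balancer IP
dig +short "test.apps.${BASE}"   # any name under *.apps should return the internal Load Balancer IP
dig +short SRV "_etcd-server-ssl._tcp.${BASE}"   # expect one SRV record per etcd instance
dig +short -x 10.XX.XX.XX        # reverse lookup of a node IP should return its FQDN
```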
Configuring Load Balancer
Before configuring the OpenShift Container Platform cluster, two layer-4 Load Balancers must be provisioned: one for the API, and a second one that provides ingress to applications through the default Ingress Controller.
Ports 6443 and 22623 must be configured to point to the bootstrap and master nodes. Ports 80 and 443 must be configured to point to the worker nodes.
Port | Machines | Internal | External | Description |
---|---|---|---|---|
6443 | Bootstrap and control plane. The bootstrap machine must be removed from the Load Balancer after it initializes the cluster control plane. | X | X | Kubernetes API server |
22623 | Bootstrap and control plane. The bootstrap machine must be removed from the Load Balancer after it initializes the cluster control plane. | X | X | Machine Config server |
443 | The machines that run the Ingress router pods, compute, or worker, by default. | X | X | HTTPS traffic |
80 | The machines that run the Ingress router pods, compute, or worker, by default. | X | X | HTTP traffic |
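The blueprint does not mandate a particular Load Balancer product. Purely as an illustration, the following sketch shows how the port-to-machine mapping in the table above could be expressed as layer-4 (mode tcp) frontends and backends if HAProxy were used; all IP addresses are placeholders, and ports 22623 and 80 follow the same pattern as 6443 and 443 respectively:

```bash
# Illustrative only: assumes HAProxy is the layer-4 Load Balancer; IPs are placeholders.
cat <<'EOF' >> /etc/haproxy/haproxy.cfg
frontend ocp-api
    bind *:6443
    mode tcp
    default_backend ocp-api-be

backend ocp-api-be
    mode tcp
    balance roundrobin
    server bootstrap 10.XX.XX.XX:6443 check   # remove after the bootstrap completes
    server master-0  10.XX.XX.XX:6443 check
    server master-1  10.XX.XX.XX:6443 check
    server master-2  10.XX.XX.XX:6443 check

frontend ocp-ingress-https
    bind *:443
    mode tcp
    default_backend ocp-ingress-https-be

backend ocp-ingress-https-be
    mode tcp
    balance roundrobin
    server worker-0 10.XX.XX.XX:443 check
    server worker-1 10.XX.XX.XX:443 check
EOF

systemctl restart haproxy
```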
Sample DNS records
- Sample Forward DNS records.
    ; The api record points to the IP of your external Load Balancer
    api.<cluster_name>        IN A 10.XX.XX.<External LB IP>
    api-int.<cluster_name>    IN A 10.XX.XX.<Internal LB IP>
    ; The wildcard record also points to the Load Balancer (internal)
    *.apps.<cluster_name>     IN A 10.XX.XX.<Internal LB IP>
    ; Create an entry for the bootstrap host
    <bootstraphostname>.<cluster_name>    IN A 10.XX.XX.XX
    ; Create entries for the master hosts
    <master-0_hostname>.<cluster_name>    IN A 10.XX.XX.XX
    <master-1_hostname>.<cluster_name>    IN A 10.XX.XX.XX
    <master-2_hostname>.<cluster_name>    IN A 10.XX.XX.XX
    ; Create entries for the worker hosts
    <worker-0_hostname>.<cluster_name>    IN A 10.XX.XX.XX
    <worker-1_hostname>.<cluster_name>    IN A 10.XX.XX.XX
    ; The etcd cluster runs on the masters, so point these records to the master IP addresses
    etcd-0.<cluster_name>    IN A 10.XX.XX.XX
    etcd-1.<cluster_name>    IN A 10.XX.XX.XX
    etcd-2.<cluster_name>    IN A 10.XX.XX.XX
    ; The SRV records must be correct; note the trailing dot at the end of each target
    _etcd-server-ssl._tcp.<cluster_name>    IN SRV 0 10 2380 etcd-0.<cluster_name>.<base_domain>.
    _etcd-server-ssl._tcp.<cluster_name>    IN SRV 0 10 2380 etcd-1.<cluster_name>.<base_domain>.
    _etcd-server-ssl._tcp.<cluster_name>    IN SRV 0 10 2380 etcd-2.<cluster_name>.<base_domain>.
- Sample Reverse DNS records.
    7  IN PTR <master-0_hostname>.<cluster_name>.<base_domain>.
    8  IN PTR <master-1_hostname>.<cluster_name>.<base_domain>.
    9  IN PTR <master-2_hostname>.<cluster_name>.<base_domain>.
    10 IN PTR <bootstraphostname>.<cluster_name>.<base_domain>.
    6  IN PTR api.<cluster_name>.<base_domain>.
    5  IN PTR api-int.<cluster_name>.<base_domain>.
    11 IN PTR <worker-0_hostname>.<cluster_name>.<base_domain>.
    12 IN PTR <worker-1_hostname>.<cluster_name>.<base_domain>.
OCP cluster management node (Bastion host):
In this blueprint, the NPS toolkit VM is used for cluster management. The openshift-client and openshift-install binaries are installed on the NPS toolkit VM.
SSH keys and Ignition files are generated on this server. The SSH keys enable passwordless SSH access to the OCP cluster nodes.
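For example, a key pair can be generated on the NPS toolkit VM and loaded into ssh-agent as follows (the key file name is illustrative):

```bash
# Generate a key pair for cluster node access (the file name is illustrative).
ssh-keygen -t rsa -b 4096 -N '' -f ~/.ssh/ocp4_id_rsa

# Load the private key into ssh-agent so that SSH to the nodes is passwordless.
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/ocp4_id_rsa

# The public key (~/.ssh/ocp4_id_rsa.pub) is the value supplied to the installer as the cluster SSH key.
```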
In general, an OpenShift cluster management node is a Linux, Windows, or macOS-based machine. After deployment, any node of this kind can be configured as a management node. The node has:
- The appropriate oc binary downloaded.
- Reachability to the OCP cluster and to data center services such as Load Balancers and DNS.
- SSH keys generated during installation (or copied from the NPS Server).
- For more information, see Red Hat OpenShift Getting Started with the CLI.
- For more information, see Deploying NPS toolkit VM.