
Registering Existing Kubernetes Clusters to Rancher

Blog, March 28, 2024

The control that Rancher has over a registered cluster depends on the type of cluster. We want to manage a set of clusters from a single Rancher UI, so this is a guide on how to perform the task quickly and easily.


Kubernetes Node Roles

Registered RKE Kubernetes clusters must have all three node roles: etcd, controlplane, and worker.
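A quick way to check this, assuming kubectl is already pointed at the cluster you plan to register, is to look at the role labels RKE puts on each node:

```shell
# Sketch: list nodes together with their RKE role labels.
# Assumes kubectl is configured against the cluster to be registered;
# RKE sets node-role.kubernetes.io/{etcd,controlplane,worker} labels.
kubectl get nodes --show-labels | grep -E 'node-role\.kubernetes\.io/(etcd|controlplane|worker)'
```

All three roles should appear somewhere across the node list before you register the cluster.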


By default, managed Kubernetes engines (*KE) do not grant the cluster-admin role, so you must run these commands on such clusters before you can register them.

To register a cluster in Rancher, you must have cluster-admin privileges within that cluster, bound to a service account rather than a user account.

kubectl create clusterrolebinding cluster-admin-binding --clusterrole=cluster-admin --serviceaccount=cattle-system:cattle-admin
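The binding above assumes the cattle-system namespace and the service account already exist. A minimal end-to-end sketch (the names simply mirror the command above) might look like:

```shell
# Sketch, assuming kubectl points at the cluster to be registered.
# Create the namespace and service account referenced by the binding,
# then grant cluster-admin to that service account.
kubectl create namespace cattle-system
kubectl create serviceaccount cattle-admin -n cattle-system
kubectl create clusterrolebinding cluster-admin-binding \
  --clusterrole=cluster-admin \
  --serviceaccount=cattle-system:cattle-admin

# Confirm the service account now has cluster-wide admin rights.
kubectl auth can-i '*' '*' --as=system:serviceaccount:cattle-system:cattle-admin
```

The `kubectl auth can-i` check should print `yes` once the binding is in place.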

Registering a Cluster

  1. From the Rancher Dashboard or Cluster Management, open the Clusters page and click Import Existing.

  2. Choose Import any Kubernetes cluster -> Generic.

  3. Enter a name for the cluster in the Cluster Name field.

  4. Optionally, add Member Roles here. Use Member Roles to configure user authorization for the cluster: click Add Member to add users that can access the cluster, and use the Role drop-down to set permissions for each user.

  5. On the Cluster Detail page you will see the cluster status and a few kubectl commands. Run one of those kubectl commands against the existing cluster to create the Rancher cluster agent, which communicates with the Rancher UI.
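The command Rancher shows in step 5 typically looks like the sketch below; the server address and token here are hypothetical placeholders, so copy the exact command from your own Rancher UI:

```shell
# Hypothetical example of the registration command shown by the Rancher UI.
# <RANCHER_SERVER> and <TOKEN> are placeholders; use the command Rancher
# generates for your cluster.
kubectl apply -f "https://<RANCHER_SERVER>/v3/import/<TOKEN>.yaml"

# If the Rancher server uses a self-signed certificate, the UI also offers
# a curl variant that skips TLS verification:
curl --insecure -sfL "https://<RANCHER_SERVER>/v3/import/<TOKEN>.yaml" | kubectl apply -f -
```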



Your cluster is registered and assigned a state of Pending. Rancher is deploying resources to manage your cluster.


You can access your cluster after its state is updated to Active.

Active clusters are assigned two Projects: Default (containing the namespace default) and System (containing the namespaces cattle-system, ingress-nginx, kube-public and kube-system, if present).
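From the cluster side you can watch the agent come up and see the namespaces that end up in those two projects; a small sketch, assuming kubectl access to the registered cluster:

```shell
# Check that the Rancher cluster agent deployed by the registration
# manifest is running (it lives in the cattle-system namespace).
kubectl get deployments -n cattle-system

# List the namespaces that Rancher groups into the Default and System projects.
kubectl get namespaces
```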


Why do we use a service account instead of a user? I figured it out when reading the YAML file from step 5 above, which contains something like:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cattle-admin-binding
  namespace: cattle-system
  labels:
    cattle.io/creator: "norman"
subjects:
- kind: ServiceAccount
  name: cattle
  namespace: cattle-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cattle-admin

So we have to change --user to --serviceaccount, and pass the value as namespace:serviceaccount-name.
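In other words, the general shape of the binding command for a service account (the placeholder names here are mine) is:

```shell
# Generic pattern: bind a ClusterRole to a service account rather than a
# user. BINDING_NAME, NAMESPACE, and SERVICEACCOUNT are placeholders.
BINDING_NAME=my-admin-binding
NAMESPACE=cattle-system
SERVICEACCOUNT=cattle-admin
kubectl create clusterrolebinding "$BINDING_NAME" \
  --clusterrole=cluster-admin \
  --serviceaccount="$NAMESPACE:$SERVICEACCOUNT"
```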



Nam Le, [email protected]
