Federation
Establishing cluster federation from your cluster to the Nautilus cluster via Admiralty.
Before Start
Create a namespace on your cluster with the same name as the one on Nautilus. Otherwise, federation won't work.
kubectl create namespace <the-same-namespace>

(Optional) Set a new context with the new namespace on your cluster.

kubectl config set-context <context-name> --namespace=<the-same-namespace> \
  --cluster=<your-cluster> \
  --user=<your-user>

On Nautilus Cluster (Target)
First, change the current context to use your namespace inside the Nautilus cluster.
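For example, a minimal sketch, assuming your Nautilus context is named nautilus (both the context name and the namespace are placeholders; use the names from your own kubeconfig):

# switch to the Nautilus context and point it at your namespace
kubectl config use-context nautilus
kubectl config set-context --current --namespace=<your-namespace>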
Create a service account in the Nautilus cluster for your cluster to access:
kubectl create serviceaccount my-sa

The Nautilus cluster will automatically create a secret token associated with the service account you just created.
kubectl get secret

NAME                              TYPE                                  DATA   AGE
default-token-j29lh               kubernetes.io/service-account-token   3      2y178d
kubernetes-dashboard-key-holder   Opaque                                2      42d
my-sa-token-qfz8b                 kubernetes.io/service-account-token   3      11s

Create a config file named config_sa that includes the token emitted for the service account that you just created.
First, get the name of the secret:
TOKENNAME=`kubectl get serviceaccount/my-sa -o jsonpath='{.secrets[0].name}'`
echo $TOKENNAME

Then get the token from the secret using the TOKENNAME we found:
TOKEN=`kubectl get secret $TOKENNAME -o jsonpath='{.data.token}' | base64 --decode`
echo $TOKEN

After that, copy the OIDC config file you use to access the Nautilus cluster and add the new user in place of the current one. Change <nautilus-config> and <your_cilogon_user_id> accordingly.
cd ~
cp .kube/<nautilus-config> .kube/config_sa
kubectl --kubeconfig=.kube/config_sa config set-credentials my-sa --token=$TOKEN
kubectl --kubeconfig=.kube/config_sa config set-context --current --user=my-sa
kubectl --kubeconfig=.kube/config_sa config view
kubectl --kubeconfig=.kube/config_sa config unset users.http://cilogon.org/server<your_cilogon_user_id>

Now you need to let the service account act on behalf of the user. To do this, change <your-namespace> accordingly and run:
kubectl create rolebinding my-sa-rb --clusterrole=edit --serviceaccount=<your-namespace>:my-sa

Now check if you can list pods:
kubectl --kubeconfig=.kube/config_sa get pods

The resulting config_sa file looks like:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: <nautilus-apiserver>
  name: nautilus
contexts:
- context:
    cluster: nautilus
    namespace: <your-namespace>
    user: my-sa
  name: nautilus
current-context: nautilus
kind: Config
preferences: {}
users:
- name: my-sa
  user:
    token: REDACTED

Create a Source object in the same namespace. Use the same service account name you used in the previous step:
apiVersion: multicluster.admiralty.io/v1alpha1
kind: Source
metadata:
  name: my-cluster # name of this source object
spec:
  serviceAccountName: my-sa

kubectl apply -f source.yaml

On Your Cluster (Source)
First, change the current context to use your namespace (mine is default) inside your cluster.
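For example, a minimal sketch, assuming your local context is called <your-cluster-context> and you are working in the default namespace (substitute your own context and namespace names):

# switch to your own cluster's context and point it at the federated namespace
kubectl config use-context <your-cluster-context>
kubectl config set-context --current --namespace=default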
Create the secret holding the credentials to access the target cluster.

Encode the config file you just created in Base64 and copy the output.
cat ~/.kube/config_sa | base64 -w 0

Then create a Secret object and paste the encoded config file into the config field.
apiVersion: v1
data:
  config: <Base64-encoded config file to access the target cluster>
kind: Secret
metadata:
  name: my-secret
type: Opaque

kubectl apply -f secret.yaml

Create a Target object, referencing the secret we just created:
apiVersion: multicluster.admiralty.io/v1alpha1
kind: Target
metadata:
  name: nautilus-cluster # name of this target object
spec:
  kubeconfigSecret:
    name: my-secret

kubectl apply -f target.yaml

Label the namespace as being federated:
kubectl label ns default multicluster-scheduler=enabled
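If you want to double-check that the label was applied (an optional verification step, not part of the original instructions), you can list the namespace with its labels:

# the multicluster-scheduler=enabled label should appear in the output
kubectl get namespace default --show-labels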
Check if the virtual node is up:
kubectl get nodes --watch

NAME                                             STATUS   ROLES                          AGE     VERSION
admiralty-dev-namespace-nautilus-tg-2e2a858480   Ready    cluster,control-plane,master   4d23h

Try to run a federated pod by adding the annotation:
apiVersion: v1
kind: Pod
metadata:
  annotations:
    multicluster.admiralty.io/elect: ""
  name: test-pod
spec:
  containers:
  - name: mypod
    image: centos:centos7
    resources:
      limits:
        memory: 100Mi
        cpu: 100m
      requests:
        memory: 100Mi
        cpu: 100m
    command: ["sh", "-c", "echo 'Im a new pod' && sleep infinity"]

kubectl apply -f test-pod.yaml

Check if the proxy and delegate pods are running on the source and target clusters respectively:
# proxy pod
kubectl --context=<your-cluster-context> get pods

NAME       READY   STATUS    RESTARTS   AGE
test-pod   1/1     Running   0          20m

# delegate pod
kubectl --context=nautilus get pods

NAME             READY   STATUS    RESTARTS   AGE
test-pod-stdhg   1/1     Running   0          15m
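As a final, optional check you can read the delegate pod's logs on the target cluster to confirm the container actually ran its command; the pod name below is taken from the example output above and will differ in your namespace:

# should print the echo message from the pod's command
kubectl --context=nautilus logs test-pod-stdhg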