In-Action


Last updated 3 years ago


2. Accessing the K8aaS

  1. Log in to your Moro Cloud Organization with Tenant Admin credentials and navigate to More → Kubernetes Container Cluster.

Screenshot 2 Access K8aaS 1

  2. This page lists the Kubernetes Clusters deployed in your Moro Cloud Tenant. You can get more details about a cluster by clicking on its name.

Screenshot 3 Access K8aaS 2

Screenshot 4 Access K8aaS 3

3. Deploying Kubernetes Cluster

  1. To create a new Kubernetes Cluster, select More → Kubernetes Container Cluster

Screenshot 5 Add K8aaS Cluster 1

  2. Select “Add” to open the “Create New Cluster” wizard.

Screenshot 6 Add K8aaS Cluster 2

  3. Select the Organization VDC where the Cluster vApp will be deployed.

Screenshot 7 Add K8aaS Cluster 3

  4. Under the General section, fill in the required details:

    • Name – A unique name for your Kubernetes Cluster vApp

    • Details (Optional) – The fields below are optional; if not provided, the wizard takes default values from the Template and your Organization VDC. You can modify them to customize your Cluster.

      • Number of Worker Nodes: <Default: 2>

      • Number of CPUs: <Based on selected template> Number of CPUs per worker node.

      • Memory (MB): <Based on selected template> Memory assigned to each worker node.

      • Storage Profile: <Default storage profile for the tenant>

      • SSH Keys: <Disabled> Enable and enter the public key in the text box. The key is used to connect to Kubernetes nodes without a password, as remote password authentication is disabled on all nodes.

      • Enable NFS: <Disabled> Enabling this adds a Virtual Machine to the Cluster as an NFS node for container storage. Configuring NFS storage post-deployment requires manual steps, detailed in a later section of this guide.

      • Rollback: <Enabled> Removes leftover virtual machines if Cluster creation fails.

Screenshot 8 Add K8aaS Cluster 4

  5. Select the Network to which the Cluster will be connected.

Screenshot 9 Add K8aaS Cluster 5

  6. In the next window, select the appropriate Template from which the Cluster will be created. Different Templates are available for different Kubernetes versions.

Screenshot 10 Add K8aaS Cluster 6

  7. Review the details and click “Finish” to start Cluster creation.

Screenshot 11 Add K8aaS Cluster 7

The example above uses the default values in the General section to create a Cluster. Below is an example where custom values are used.

Screenshot 12 Add K8aaS Cluster 8

  8. Once Cluster creation completes, a new vApp with the Cluster name becomes available.
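The portal steps above map onto the underlying Container Service Extension (CSE). If your tenancy also exposes the vcd-cli CSE plugin, an equivalent cluster could be created from the command line. This is only an illustrative sketch: the flag names are assumptions based on the CSE 2.x client and may differ in your deployment, and the cluster, network, and key names are placeholders.

```shell
# Illustrative sketch only: flag names are assumptions based on the
# CSE 2.x client and may differ in your Moro Cloud deployment.
#   --nodes      : number of worker nodes (default 2)
#   --cpu        : CPUs per worker node
#   --memory     : memory in MB per worker node
#   --ssh-key    : public key for passwordless access to the nodes
#   --enable-nfs : also deploy an NFS node for container storage
vcd cse cluster create my-cluster \
  --network my-org-network \
  --nodes 2 \
  --cpu 2 \
  --memory 4096 \
  --ssh-key ~/.ssh/id_rsa.pub \
  --enable-nfs
```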

4. Adding Worker nodes to a Kubernetes Cluster

Using Moro Cloud Portal, you can add worker nodes to an existing Kubernetes Cluster.

  1. From the Moro Cloud Home page, navigate to More → Kubernetes Container Cluster.

Screenshot 13 Add node 1

  2. Select the Cluster to which you want to add worker nodes.

Screenshot 14 Add node 2

  3. Click the “Nodes” tab → ADD to add a worker node to the Cluster.

Screenshot 15 Add node 3

  4. In the “Add Node” wizard, enter the details for the new worker nodes.

Screenshot 16 Add node 4

  5. Select the Network for the new worker node.

Screenshot 17 Add node 5

  6. Review the details → Finish to start the new worker node deployment.

Screenshot 18 Add node 6

5. Accessing Kubernetes Cluster

You can access the Kubernetes Cluster by logging into the Master node using SSH keys (if keys were added during cluster creation) or remotely using the KUBE configuration file. Note that by default password authentication is disabled on all Kubernetes nodes. To download the KUBE configuration file:

  1. From the Moro Cloud Home page, navigate to More → Kubernetes Container Cluster.

Screenshot 19 Access Cluster 1

  2. Select the Cluster for which you want to download the KUBE configuration file, then select the “DOWNLOAD KUBE CONFIGURATION” option.

Screenshot 20 Access Cluster 2

Screenshot 21 Access Cluster 3

  3. You can use the KUBE configuration file to connect to the Kubernetes Cluster remotely. Make sure the remote machine has kubectl installed and has connectivity to the Master node on port 6443.

  4. The root password for all Kubernetes Cluster nodes can be found under the Virtual Machine details page → Guest OS Customization → Edit.

Screenshot 22 Access Cluster 4
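With the configuration file downloaded, remote access can be sketched as follows. The file name, paths, and master IP below are placeholders for illustration, not values taken from the portal.

```shell
# Point kubectl at the downloaded configuration file
# (the file name here is an example; use the name your browser saved)
export KUBECONFIG=~/Downloads/kube-config.yaml

# List the cluster nodes; this requires connectivity to the
# Master node's API server on port 6443
kubectl get nodes -o wide

# Alternatively, SSH to the Master node if an SSH public key was
# supplied during cluster creation (IP address is a placeholder)
ssh root@<master-node-ip>
```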

NFS node management

You can add an NFS node Virtual Machine to your Kubernetes Cluster to provide persistent storage to containers. The NFS node can be added during Cluster creation or afterwards. This option only creates the Virtual Machine; adding disks to the NFS node and creating the NFS export are done manually post-deployment.

Screenshot 23 NFS Node 1

Adding NFS node during Cluster Creation

The “Enable NFS” toggle switch creates an additional Node/Virtual Machine for NFS during Cluster creation.

Screenshot 24 NFS Node 2

Adding NFS node to an existing Cluster

  1. To add an NFS node to an existing Kubernetes Cluster, select the Cluster to which you want to add the NFS node.

Screenshot 25 NFS Node 3

  2. Go to the “Nodes” section → ADD.

Screenshot 26 NFS Node 4

  3. When the Add Node wizard opens, leave everything else empty (you can add an SSH key if needed) and click the “Enable NFS” toggle switch. Select the Network in the next window and finish adding the NFS node.

Screenshot 27 NFS Node 5

Screenshot 28 NFS Node 6

Screenshot 29 NFS Node 7

Configure NFS share

The next step is to create NFS shares that can be allocated via persistent volume resources. (Login information is available under “Accessing Kubernetes Cluster” above.)

  1. First, we need to add a named disk to the NFS node to create a file system that we can export. Navigate to the “Named Disks” section in your Virtual Data Centre.

Screenshot 30 NFS Share 1

  2. Click “New” to create a Named Disk.

Screenshot 31 NFS Share 2

Note – The Bus Sub-Type of the new disk should be the same as the Bus Type of the Virtual Machine you are going to attach it to.

Screenshot 32 NFS Share 3

  3. Next, SSH into the NFS node itself.

```shell
ssh root@10.150.200.22
... (root prompt appears) ...
```

  4. Partition and format the new disk. On Ubuntu the disk will show up as /dev/sdb. The procedure below is an example; feel free to use other methods depending on your taste in Linux administration.

```shell
root@nfsd-ljsn:~# parted /dev/sdb
(parted) mklabel gpt
Warning: The existing disk label on /dev/sdb will be destroyed and all data on
this disk will be lost. Do you want to continue?
Yes/No? yes
(parted) unit GB
(parted) mkpart primary 0 100
(parted) print
Model: VMware Virtual disk (scsi)
Disk /dev/sdb: 100GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End    Size   File system  Name     Flags
 1      0.00GB  100GB  100GB               primary

(parted) quit
root@nfsd-ljsn:~# mkfs.ext4 -L nfs_fs /dev/sdb1
root@nfsd-ljsn:~# mkdir /export
```

Add the below entry to /etc/fstab:

```shell
LABEL=nfs_fs /export ext4 defaults 0 0
```

Mount the newly created filesystem:

```shell
root@nfsd-ljsn:~# mount /export
```

  5. At this point you should have a working file system under /export. The last step is to share it via NFS.

```shell
root@nfsd-ljsn:~# vi /etc/exports
...add the following at the end of the file...
/export *(rw,sync,no_root_squash,no_subtree_check)
...save and quit...
root@nfsd-ljsn:~# exportfs -r
```

  6. We can now use this NFS share to create persistent volumes.
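As an illustration, a PersistentVolume backed by this export could be defined as below and applied with kubectl. The volume name, capacity, and server address are placeholder values; substitute your NFS node's IP.

```shell
# Write a sample PersistentVolume manifest pointing at the /export share.
# nfs-pv, 50Gi, and the server IP are illustrative values only.
cat > nfs-pv.yaml <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.150.200.22
    path: /export
EOF
```

The manifest can then be applied with `kubectl apply -f nfs-pv.yaml`, after which pods can claim the storage through a matching PersistentVolumeClaim.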

In case of any urgent requirements, you can reach out to the Moro Support Center on 2266 or email Support@Morohub.com.