This post continues my previous post "Bootstrap vCenter Server Appliance 6.5 on vSAN 6.6"; refer to the link below:
I will cover the expansion of the vSAN datastore created during the VCSA bootstrap in the previous blog post.
The first thing to do after the vCenter deployment is to add the hosts to vCenter and configure a VMkernel interface for vSAN traffic (along with any other required VMkernel interfaces) on each host. I personally configured the VMkernel interfaces on standard switches and later migrated them to a VDS (the standard-to-distributed switch migration is not covered in this post).
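If you prefer the CLI, the same per-host configuration can be scripted with esxcli. This is a sketch from my lab; the vSwitch and port group names, the vmk1 interface number, and the IP addressing are assumptions you would replace with your own:

```shell
# Create a port group for vSAN traffic on the standard switch
# (vSwitch0 / vSAN-PG are lab assumptions)
esxcli network vswitch standard portgroup add -v vSwitch0 -p vSAN-PG

# Create a VMkernel interface on that port group with a static IP
esxcli network ip interface add -i vmk1 -p vSAN-PG
esxcli network ip interface ipv4 set -i vmk1 -t static -I 172.24.2.101 -N 255.255.255.0

# Tag the interface for vSAN traffic
esxcli vsan network ip add -i vmk1

# Verify the vSAN-tagged interfaces
esxcli vsan network list
```

Repeat on each host with its own IP; only the `-I` value changes.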
This is how VMkernel networking looks on hosts:
Now turn on vSAN by clicking the Edit option under Cluster -> vSAN -> General -> Edit.
Cluster -> Configure -> under vSAN, click Disk Management -> Claim Disks
In manual mode, vSAN shows all the eligible HDDs and SSDs that can be claimed from the hosts in the cluster that have a vSAN VMkernel interface configured.
Above is the list of all the HDDs from the 3 hosts; to claim an HDD, simply click "Claim for capacity tier".
Similarly, we can claim all the flash resources from the eligible hosts by clicking "Claim for cache tier".
Once you claim the SSD and HDD resources, vSAN starts creating the disk groups; you can follow this in the vCenter recent tasks:
Go to the vSAN datastore summary to confirm that the total capacity reflects the storage from all vSAN hosts in the cluster.
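You can also confirm the disk-group layout from any host's shell. A quick sketch (output fields vary slightly between vSAN versions):

```shell
# List the disk groups and the devices this host has claimed
esxcli vsan storage list

# Show cluster membership from this host's point of view
esxcli vsan cluster get

# vdq -q reports each local device's vSAN eligibility/usage state
vdq -q
```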
That's all for this post. Let me know if you have any feedback, and do share this if you consider the post worth sharing.
I have recently installed vSphere 6.5 and vSAN 6.6 in our lab. I have 4 vSAN hybrid ready nodes, which I will use to set up a vSAN cluster.
The most interesting thing in the vSphere 6.5 release, apart from the HTML5 client and other enhancements, is the ability to bootstrap the VCSA on a target host by creating a vSAN datastore. With earlier versions we used to deploy the VCSA on a temporary datastore and later Storage vMotion it to the vSAN datastore.
Jase McCarty has written a cool blog on the same topic; you can refer to the link below for details:
However, I will try to cover the deployment in more detail, including all the screenshots, which can help people deploying vSAN 6.6 for the first time. So let's get started.
I have installed ESXi 6.5 on all 4 nodes. It's time to install vCenter and configure the vSAN cluster.
Mount the VCSA installer and run the installer.exe file:
The wizard is similar to previous VCSA 6.x installs until we reach the "Install – Stage 1: Deploy PSC" page:
I am deploying the external PSC appliance; however, the process is similar for an embedded PSC as well.
The screenshot is self-explanatory; I am deploying the vCenter appliance on ESXi host "172.24.1.101".
Select yes for the certificate warning.
This is where we create a vSAN datastore locally on the host and install the VCSA. Note that during bootstrapping you don't need to have the vSAN network configured on all the nodes. At this moment the vSAN datastore is local to the host; I will cover in another blog post how to expand the vSAN datastore by claiming disks from the other nodes in the cluster.
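For reference, what the installer automates here is roughly the manual one-node bootstrap you could run from the target host's shell. A hedged sketch; the `naa.*` device IDs are placeholders for your actual cache SSD and capacity HDD identifiers:

```shell
# Create a single-node vSAN cluster on this host
esxcli vsan cluster new

# Create a disk group from one cache SSD (-s) and one capacity disk (-d)
# (replace the placeholders with the IDs from "esxcli storage core device list")
esxcli vsan storage add -s naa.your_ssd_id -d naa.your_hdd_id

# Confirm the host is now master of its one-node cluster
esxcli vsan cluster get
```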
Provided you are using a vSAN-compatible controller and drives, ESXi will detect the flash and HDD resources in the server. In case ESXi is not detecting a flash or HDD device, you can manually tag local storage resources as SSD or HDD in this step. To check vSAN compatibility, refer to the link below:
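If a flash device is not auto-detected, the classic way to tag it as SSD from the host shell is a SATP claim rule. A sketch; `naa.xxxx` is a placeholder for your device ID:

```shell
# Add a claim rule marking the local device as SSD
esxcli storage nmp satp rule add -s VMW_SATP_LOCAL -d naa.xxxx -o enable_ssd

# Reclaim the device so the rule takes effect
esxcli storage core claiming reclaim -d naa.xxxx

# Verify: the device listing should now report "Is SSD: true"
esxcli storage core device list -d naa.xxxx | grep -i ssd
```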
Enter the required networking details for the PSC; make sure to configure DNS host name resolution (forward and reverse) for the PSC before deployment.
Click Finish and wait; the deployment took less than 5 minutes.
Looking at the host client, I can now see a new "vSAN datastore" and the PSC being deployed on the newly created vSAN datastore.
Once done, we need to configure the appliance size and SSO in Stage 2; refer to the screenshots below:
Here you can either join the PSC to an existing SSO domain (if one exists) to run a linked-mode configuration, or, if this is a new deployment, select "Create new SSO domain".
That's it for the PSC deployment. Now we need to run the same installer again; this time we will install the vCenter Server.
Select the vSAN datastore created during the PSC installation.
Enter the network configuration for the vCenter server:
Click Finish and wait; you can actually watch the VCSA deployment progress by logging in to the target host.
With this done, we now need to configure SSO for the vCenter Server to complete the deployment.
That's it for this post. I have covered the expansion of the vSAN datastore, by claiming storage resources from the rest of the hosts, in the post below:
For the past few weeks I have been working on enhancing my VMware home lab setup to be more scalable and enterprise grade. This gave me an opportunity to migrate the embedded PSC to an external one, so I can extend my vCenter Single Sign-On domain with more vCenter Server instances to support multi-site NSX and SRM use cases. You can reconfigure and repoint an existing vCenter Server instance to an external Platform Services Controller.
A few things to note before starting the migration:
- The process is relatively straightforward, but remember there is no going back once you migrate the embedded PSC to external.
- Make sure to take a snapshot of the vCenter Server; in case anything goes wrong during the migration, you can revert vCenter to the last working state.
- Non-ephemeral virtual port groups are not supported for the PSC deployment; as a workaround, we need to create a new ephemeral port group in the same VLAN (if using VLANs) as the vCenter Server network for the deployment of the new PSC. You can migrate the PSC network to a non-ephemeral port group after the migration completes successfully.
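The pre-migration snapshot from the second point can also be taken from the shell of the ESXi host running the vCenter VM. A sketch; the VM ID (`12` below) is a placeholder that `getallvms` will give you for your environment:

```shell
# Find the vCenter VM's ID on the host it runs on
vim-cmd vmsvc/getallvms | grep -i vcenter

# Take a snapshot; arguments are: vmid, name, description, includeMemory, quiesce
vim-cmd vmsvc/snapshot.create 12 "pre-PSC-migration" "Before embedded-to-external PSC repoint" 0 0
```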
This is what I am currently running in my lab: a vCenter Server appliance with an embedded PSC:
I want to achieve the topology below, with an external PSC:
Let's start by installing the external Platform Services Controller instance as a replication partner of the existing embedded Platform Services Controller instance, in the same vCenter Single Sign-On site.
Mount the VCSA ISO and start the installation.
Enter the credentials of the ESXi host where you are planning to deploy the PSC appliance.
Accept the self-signed certificate.
Here, select "Install Platform Services Controller".
Select "Join an SSO domain in an existing vCenter PSC":
Join the existing site and select the SSO site name:
As I explained before, if you have not created an ephemeral virtual port group, you will not be able to select a network to deploy the new PSC.
Go back to vCenter and create a Distributed port group with Ephemeral port binding which will be used for the PSC Deployment.
Enter the standard networking parameters and complete the deployment wizard.
Click Finish and wait for the deployment to complete. This process takes approximately 8-10 minutes.
You will get the screen below once the PSC is deployed successfully.
Now, log in to the vCenter Server instance with the embedded Platform Services Controller. Verify that all Platform Services Controller services are running by executing the command below:
service-control --status --all
The final step is to run the command below to repoint the embedded PSC to the newly deployed external PSC:
cmsso-util reconfigure --repoint-psc psc_fqdn_or_static_ip --username username --domain-name domain_name --passwd password [--dc-port port_number]
Use the --dc-port option if the external Platform Services Controller runs on a custom HTTPS port. The default HTTPS port is 443.
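Put together, a hedged example with placeholder values (the PSC FQDN, SSO administrator name, domain, and password below are illustrative only; substitute your own):

```shell
# Repoint this vCenter Server to the new external PSC
# (all values are placeholders for this sketch)
cmsso-util reconfigure --repoint-psc psc01.lab.local \
    --username administrator \
    --domain-name vsphere.local \
    --passwd 'VMware1!'
```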
If you have followed all the instructions mentioned above, you will get the success message below: "vCenter Server has been successfully reconfigured and repointed to the external PSC 172.18.36.17."
That was it; the PSC has been successfully migrated from embedded to external! I hope this was helpful.
vRealize Network Insight (vRNI) delivers intelligent operations for your software-defined network environment (especially NSX). In short, it does for the SDN environment what vRealize Operations does for your virtualized environment. With the help of this product you can optimize network performance and availability with visibility and analytics across virtual and physical networks. It also provides planning and recommendations for implementing micro-segmentation security, plus operational views to quickly and confidently manage and scale VMware NSX deployments.
This product comes with the following two OVA files.
Below are the system requirements for the OVA deployments:
- vRealize Network Insight Platform OVA:
  - 8 cores, 4096 MHz reservation
  - 32 GB RAM, 16 GB reservation
  - 750 GB HDD, thin provisioned
- vRealize Network Insight Proxy OVA:
  - 4 cores, 2048 MHz reservation
  - 10 GB RAM, 5 GB reservation
  - 150 GB HDD, thin provisioned
- VMware vCenter Server (version 5.5 or 6.0).
- vCenter Server Credentials with privileges:
- Distributed Switch: Modify
- dvPort group: Modify
- VMware ESXi:
- 5.5 Update 2 (Build 2068190) and above
- 6.0 Update 1b (Build 3380124) and above
- VMware Tools installed on all the virtual machines in the data center; this helps in identifying VM-to-VM traffic.
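A quick way to spot VMs without Tools from a host's shell, as a sketch (the VM ID `12` is a placeholder; use the IDs that `getallvms` returns for your VMs):

```shell
# List the VM IDs registered on this host
vim-cmd vmsvc/getallvms

# Check a guest's Tools status by its VM ID (12 is a placeholder)
vim-cmd vmsvc/get.guest 12 | grep -i toolsStatus
```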
The deployment is relatively straightforward, similar to deploying any other OVA.
Select the data center.
For my environment, I am going with the medium configuration.
Select the datastore to be used by the virtual appliance.
I am going with the thin provisioning option; however, I would strongly recommend using thick provisioning in a production environment.
Below, simply enter the basic networking details.
Once done, click on finish and wait for the virtual appliance deployment completion.
You can open the virtual appliance console to check the progress of appliance deployment.
Once the appliance is successfully deployed and powered on, go to the configuration screen at https://<IP or FQDN of the appliance>. The first thing to do is enter the vRNI license and click Validate.
Once the license is validated, set up the admin password for the appliance login and click Activate.
Next, you need to generate the shared secret for the proxy VM. Click the Generate button to generate the secret.
Copy the shared secret. The platform appliance will wait for the deployment of the proxy VM and will keep looking for it until the proxy VM is deployed.
Let's go ahead and deploy the proxy VM. I will not cover the proxy VM deployment in detail; it is relatively straightforward and similar to the platform appliance deployment.
During the proxy appliance deployment, under Properties, you need to paste the shared secret generated during the platform virtual appliance setup.
Once the deployment is done and the proxy VM is up and running, it is automatically detected on the main configuration page.
Click Finish and log in to the vRNI GUI using the "admin@local" user and the password that you set up initially.
The first thing to do after logging in to the appliance for the first time is to add the data sources (vCenter Server and NSX Manager).
In the top right corner, click Profile -> Settings -> Data Sources -> Add new data source.
Enter the vCenter Server admin credentials and validate, to check whether vRNI is able to connect to vCenter Server successfully.
Similarly, add the NSX Manager as a data source to vRNI and validate.
This concludes the vRNI appliance deployment and initial configuration.