In the previous posts, we got our vSphere cluster up and running by creating a virtual datacenter, creating a cluster, then adding the ESXi hosts to it. We then configured vMotion networking for VM mobility, as well as iSCSI VMkernel networking for iSCSI storage connectivity. Well, this was only part of the prerequisites: as we mentioned, even though the cluster is configured, it usually needs to connect to shared storage to offer the most common features, such as vMotion or HA.
A VMFS datastore is created by default on any newly installed ESXi host, but if you're using more than one host, it's more convenient to put the VMFS datastore on shared storage, such as your SAN, than on the local disk. There is also the fact that with a centralized storage area network (SAN), different servers can access the same virtual machine disk files (VMDKs), which keeps VMs available even if a host goes down.
Shared storage in vSphere can be implemented with different technologies. It can be either:
- Block-based, like FC (using the FC or FCoE protocols) or iSCSI (carrying SCSI over IP).
- File-based, like NAS (using NFS over IP).
In this post, we will implement iSCSI shared storage using the iSCSI Target Server role, which is part of Windows Server 2012 R2. Just for info, an iSCSI target can also be implemented using a Linux distro called OpenFiler, but we'll use Windows Server as it is much faster to set up, in my opinion.
Before proceeding to install the iSCSI Target role, I added a 20 GB disk and a vNIC on the VMnet3 network (the iSCSI network) to the DC server. The disk will host the iSCSI LUNs later during configuration, and the vNIC will be assigned the IP address 192.168.1.1/24.
So, let's add a 20 GB HDD for iSCSI shared storage and a vNIC on the VMnet3 virtual switch for iSCSI network connectivity.
You'll need to go to Disk Management to bring the disk online and initialize it.
Finally, we'll create a simple volume and assign it the S: drive letter.
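If you prefer to stay in PowerShell, the same preparation can be done with the Storage module cmdlets. This is a sketch: the disk number (1 here) is an assumption, so check yours with Get-Disk first.

```powershell
# Assumption: the new 20 GB disk shows up as disk number 1 -- verify with Get-Disk
Set-Disk -Number 1 -IsOffline $false
Initialize-Disk -Number 1 -PartitionStyle GPT

# Create a single partition using all the space and format it as the S: drive
New-Partition -DiskNumber 1 -DriveLetter S -UseMaximumSize |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel "iSCSI"
```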
You may be wondering whether 20 GB is enough to host our VMs. The answer is yes. This is because we are using Damn Small Linux (DSL), a tiny Linux distro that consumes minimal resources in terms of RAM and disk.
Installing iSCSI Target Server Role
There are many tutorials out there explaining how to install and configure the iSCSI Target Server graphically from Server Manager, so let's do it the easy way and use PowerShell instead.
To install the role, run the following cmdlet:
Add-WindowsFeature -Name FS-iSCSITarget-Server -IncludeManagementTools
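You can then confirm the role actually landed; the feature should report Installed in the Install State column.

```powershell
# Check the installation state of the iSCSI Target Server role
Get-WindowsFeature -Name FS-iSCSITarget-Server
```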
Next, create a 10 GB LUN named LUN1 in the S:\iSCSI folder using the following cmdlet:
New-IscsiVirtualDisk -Path S:\iSCSI\LUN1.vhdx -Size 10GB
Check that the LUN was actually created by browsing the S:\iSCSI folder.
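Instead of browsing the folder, you can also ask the iSCSI Target module directly:

```powershell
# Lists every virtual disk hosted by this iSCSI Target Server,
# including its path, size, and the target it is mapped to (if any)
Get-IscsiVirtualDisk
```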
Then, create a target and specify which initiators are allowed to connect to it with the -InitiatorIds parameter:
New-IscsiServerTarget -TargetName TestTarget1 -InitiatorId IPAddress:192.168.1.11,IPAddress:192.168.1.12,IPAddress:192.168.1.13,IPAddress:192.168.1.14
<img class="aligncenter size-full wp-image-868" src="http://vadmin-land.azurewebsites.net/wp-content/uploads/2017/06/iscsi-create-target.jpg" alt="iscsi-create-target" width="833" height="316" />
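Note that identifying initiators by IP address is only one option: the same cmdlet also accepts IQN entries, which is handy if your iSCSI network uses DHCP. The IQNs below are placeholders for illustration; copy the real ones from each host's iSCSI Software Adapter properties.

```powershell
# Hypothetical IQNs -- replace with the real IQNs shown on each ESXi host's
# iSCSI Software Adapter (Manage > Storage > Storage Adapters)
New-IscsiServerTarget -TargetName TestTarget1 `
    -InitiatorIds "IQN:iqn.1998-01.com.vmware:esx1-12345678","IQN:iqn.1998-01.com.vmware:esx2-12345678"
```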
Finally, we need to assign the LUNs (I have created two) to the iSCSI Target.
Add-IscsiVirtualTargetMapping -TargetName TestTarget1 -Path S:\iSCSI\LUN1.vhdx
You may also check that your LUNs are ready to be consumed by having a look at the Server Manager console.
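The same check works from PowerShell; a sketch of what to look for:

```powershell
# Shows the target's connection status and the LUNs mapped to it --
# LunMappings should list the virtual disks we assigned above
Get-IscsiServerTarget -TargetName TestTarget1 |
    Format-List TargetName, Status, LunMappings
```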
Connecting the ESXi hosts to the iSCSI Target
Go to the host > Manage > Storage > Storage Adapters > click the iSCSI Software Adapter > in the Adapter Details pane, go to Targets > Dynamic Discovery > click Add
Enter 192.168.1.1 as the IP address of the iSCSI target to add.
Rescan the adapter to refresh the display.
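If you prefer the command line, the same dynamic discovery address can be added from the ESXi Shell with esxcli. The adapter name vmhba33 below is an assumption; list your adapters first to find the iSCSI Software Adapter's actual name.

```shell
# Assumption: the iSCSI Software Adapter is vmhba33 -- check with the list command
esxcli iscsi adapter list

# Add the Windows iSCSI target as a dynamic discovery (Send Targets) address
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=192.168.1.1:3260

# Rescan the adapter so the new devices and paths show up
esxcli storage core adapter rescan --adapter=vmhba33
```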
If everything works fine, you should see the LUNs listed under the Devices tab,
and also under the Paths tab.
ESX1 is now connected to the iSCSI Target Server. Follow the same steps to connect ESX2 too, and once done, let's go ahead and add some datastores!
Adding the datastores
Go to the host > Related Objects > Datastores > click the Create a new datastore icon
Make sure VMFS is selected and click Next
The LUNs created previously on the DC target server will show up in this window. Select one of them and type a name for the datastore, then click Next.
Use all free space in the LUN
Confirm your selection and click Finish
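For the record, the wizard steps above can also be scripted with PowerCLI. This is a sketch: the host name, datastore name, and the canonical device name are placeholders; take the real canonical name from the Devices tab or from Get-ScsiLun.

```powershell
# Placeholder canonical name -- find the real one with:
#   Get-VMHost esx1 | Get-ScsiLun -LunType disk
New-Datastore -VMHost esx1 -Name iSCSI-DS1 `
    -Path naa.60003ff04465f1d800000000deadbeef -Vmfs
```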
The datastore is now attached to the host and visible from ESX1. Do a rescan on ESX2 and the datastore should become visible there as well.
In the next post, we will create a distributed virtual switch and attach some VMs to it; then, in the last article of this series, we will test network connectivity and vMotion. See you in the next post!