VMware: What VMkernel Ports Mean to You

Just wanted to put this information out there. Where I work, our ESX cluster will have 10 NICs (6 currently, on 2 VLANs) and 3 different VLANs segmenting off different traffic. A VMkernel port is needed to use the following:

  • iSCSI
  • NFS/NAS
  • VMotion

This information is provided when going through “Add Networking” under the Configuration tab in ESX 3.x.

VMkernel Add Networking
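For reference, the same thing can also be done from the service console instead of the wizard. This is only a rough sketch; the vSwitch name, uplink, port group label, and IP addressing below are placeholders and will differ in your environment.

# create a vSwitch, attach an uplink, add a port group, and give it a VMkernel NIC
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -L vmnic2 vSwitch1
esxcfg-vswitch -A "VMkernel" vSwitch1
esxcfg-vmknic -a -i 172.0.1.10 -n 255.255.255.0 "VMkernel"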

In ESX 3.x, VMware gives you the ability to create a VMkernel port on each vSwitch.  So far, we have had no luck using NFS or software iSCSI through two different VMkernel ports.  Example:

There is a NAS on both a private VLAN and the normal LAN, and each is reached through a different vSwitch.  After adding a VMkernel port on both vSwitches, NFS and iSCSI worked fine to the first defined connection.  But once ESX connects to a server on either subnet, in this example 172.0.1.2, the next connection to 172.0.2.1 will fail.  It fails whether you try to connect through the other vSwitch using the same protocol (NFS) or iSCSI.  Whichever connection is established first wins; VMware does not seem to be able to route through the other VMkernel port.
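A quick way to check what the VMkernel stack can actually reach (and where its single default gateway points) is from the service console. This is just a diagnostic sketch using the addresses from the example above:

# ping through the VMkernel interfaces rather than the service console NICs
vmkping 172.0.1.2
vmkping 172.0.2.1
# show the one default gateway the VMkernel TCP/IP stack is using
esxcfg-route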

Dual NFS Mount Errors

Below is a vSwitch configuration with a VMkernel port:

vSwitch With VMkernel
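The same information can be pulled from the service console; nothing here is environment-specific:

# list vSwitches with their port groups and uplinks
esxcfg-vswitch -l
# list the VMkernel NICs and their IP configuration
esxcfg-vmknic -l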

Note: I am unsure if this would be an issue when using a hardware iSCSI HBA (iSCSI card). As far as NFS is concerned, I do not currently have a workaround.  If you need to mount NFS shares from two different sources through different vSwitches, you had better find the answer to this problem first.  I hope that ESX 4 solves this issue.


~ by Kevin Goodman on December 3, 2008.

4 Responses to “VMware: What VMkernel Ports Mean to You”

  1. I experienced this same issue last week and opened an SR on it. I was also told that the VMkernel port can only use one active NIC at a time, which is concerning (I am on vSphere 4u1).

    We had four 1 Gb NICs dedicated for the environment. The engineer said to split them two and two: leave two of the NICs in vSwitch0 and drop the other two into vSwitch1. He also stated that the first vSwitch should be dedicated to particular datastores and the second to different ones, or you would see performance degradation.

    I am looking into whether the VMkernel port really does use just one active NIC at a time. I am in full agreement: you should be able to pin NFS/iSCSI traffic to particular interfaces, much like you can pin VMotion to a particular interface.

  2. Even if your VMkernel port is in a vSwitch containing multiple NICs, VMware will only transfer IP-based protocols (NFS, iSCSI) through one physical interface. The only time that traffic will go through another NIC in that vSwitch is if the current port’s link fails. Basically, there is no true bonding in VMware ESX. The same goes for virtual machines: VMs are assigned to a specific interface in the vSwitch when they start, so even if there are four physical NICs in the vSwitch, a VM will only ever use one.

    The VMware person is right. By splitting the ports and assigning datastores to each, you get multiple paths to the storage, forcing the use of two physical ports and, in doing so, doubling the throughput. Each datastore has to access the SAN or NAS through a different IP for that to work correctly (a quick sketch of this follows at the end of this comment).

    This is the reason a lot of the VMware community has switched to doing software iSCSI through the VM’s operating system to access storage instead of relying on ESX: basically, create a base disk in ESX to install the OS, then mount the additional storage via software iSCSI or NFS from inside the guest OS. This spreads the storage traffic across all NICs in the infrastructure. Make sense?
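    As a rough illustration of the split described above, each datastore can be mounted against a different target address so its traffic lands on a different VMkernel port. The filer IPs, export paths, and labels here are placeholders:

    # mount two NFS datastores from two different filer addresses
    esxcfg-nas -a -o 172.0.1.2 -s /vol/vmfs_a datastore_a
    esxcfg-nas -a -o 172.0.2.1 -s /vol/vmfs_b datastore_b
    # confirm the mounts
    esxcfg-nas -l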

  3. ESX *can* load balance between multiple pNICs for software iSCSI.

    Essentially you need two VMkernel ports (on the same vSwitch) and have to manually bind both to the software iSCSI initiator via SSH:

    esxcli swiscsi nic add -n [vmkernel-port] -d vmhba33

    Each VMkernel port should be bound to a single vmnic only (done via the client). Then change the LUN’s multipath policy to VMware Round Robin.

    Finally, set the properties of each LUN through SSH to change the default IOPS limit of 1000 used for load balancing between these connections to a much lower value, say 3 (this needs to be scripted, as the setting is lost on host reboots; see the sketch at the end of this comment):

    esxcli nmp roundrobin setconfig --device [lunid] --iops 3 --type iops

    This will provide full bandwidth even to individual VMs. I measured 220 MB/s of reads to a Windows VM running on ESX 4u1 configured this way with iSCSI storage.

    HTH
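    A rough sketch of how that re-apply step could be scripted, e.g. from /etc/rc.local on the host. The naa device IDs are placeholders, and setting the round robin policy itself via VMW_PSP_RR is my assumption; substitute your own LUN identifiers:

    # for each LUN: switch the path selection policy to round robin, then lower the IOPS limit
    for lun in naa.60011111 naa.60022222 ; do
        esxcli nmp device setpolicy --device "$lun" --psp VMW_PSP_RR
        esxcli nmp roundrobin setconfig --device "$lun" --iops 3 --type iops
    done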

  4. [...] SharePoint Backup in the VMWare PaaS Infrastructure or Internal Cloud.  The Operating System is VMWare VMKernel although Linux Red Hat OS boots the ESX and then hands it over to the Hypervisor or ESXi 4.1, or [...]
