Month: July 2012

Understanding VMDirectPath I/O

What is VMDirectPath I/O?

VMDirectPath I/O is a VMware technology that can be used with I/O hardware to reduce the CPU impact of high-bandwidth workloads by "bypassing" the hypervisor. It is supported for specific networking adapters in vSphere ESX 4, and it is experimental for specific storage adapters in the same release.

By allowing virtual machines to access the underlying hardware devices directly, VMDirectPath I/O improves CPU efficiency for workloads that require constant and frequent access to I/O devices. For networking, VMDirectPath I/O is fully supported with the Intel 82598 10 Gigabit Ethernet Controller and the Broadcom 57710 and 57711 10 Gigabit Ethernet Controllers.

Prerequisites

1. To use VMDirectPath, verify that the host has Intel® Virtualization Technology for Directed I/O (VT-d) or AMD I/O Virtualization Technology (IOMMU) enabled in the BIOS. Refer to my earlier post.
2. Verify that the PCI devices are connected to the host and marked as available for passthrough; a sketch of the resulting .vmx entries follows this list.
3. Verify that the virtual machine is using hardware version 7.
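
Once a device has been marked for passthrough and added to the virtual machine, the assignment is recorded as pciPassthru entries in the VM's .vmx file. Below is a minimal sketch; the device values are placeholders, not real IDs, and the vSphere Client fills in the correct values for your adapter (along with a couple of host-specific ID entries) when you add the device:

    pciPassthru0.present = "TRUE"
    pciPassthru0.vendorId = "8086"
    pciPassthru0.deviceId = "10fb"
    pciPassthru0.id = "04:00.0"

Here 8086 is Intel's PCI vendor ID; the deviceId and PCI address are placeholder values. There is no need to hand-edit these entries, but knowing what they look like helps when double-checking a configuration.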

Drawbacks

Using VMDirectPath disables many advanced VMware features for the virtual machine, so consider the trade-offs carefully before you enable it:
1. vMotion
2. High Availability
3. Suspend and resume
4. Record and replay
5. Fault Tolerance
6. Memory overcommitment and page sharing
7. Hot add/remove of virtual devices
8. Snapshots

A few good links about VMDirectPath I/O

1. A video posted by Chad with step-by-step instructions. Although it focuses on Cisco Unified Computing System (UCS), it is still helpful – Link
2. VMware VMDirectPath I/O by Simon Long
3. To check for device compatibility, visit here
4. Configuring VMDirectPath – VMware KB article 1010789
5. Troubleshooting VMDirectPath
6. http://communities.vmware.com/docs/DOC-11089
7. Scott Lowe's blog post about VMDirectPath

Checking the queue depth of the storage adapter and the storage device

To identify the storage adapter queue depth:

  1. Run the esxtop command in the service console of the ESX host or the ESXi shell (Tech Support mode). For more information, see Using Tech Support Mode in ESXi 4.1 and ESXi 5.0 (1017910) or Tech Support Mode for Emergency Support (1003677) for ESXi 3.5 and 4.0.
  2. Press d.
  3. Press f and select Queue Stats (F).
  4. The value listed under AQLEN is the queue depth of the storage adapter. This is the maximum number of ESX/ESXi VMkernel active commands that the adapter driver is configured to support.
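
If you would rather capture AQLEN without sitting in the interactive session (for a scripted health check, say), esxtop also has a batch mode that dumps every counter to a CSV file. A minimal sketch follows; exact counter names vary between ESX/ESXi versions, so it searches the header row for queue-related fields instead of assuming names:

    # Capture a single esxtop sample in batch mode (-b), one iteration (-n 1)
    esxtop -b -n 1 > /tmp/esxtop-sample.csv
    # List the counter names that mention queues; the adapter queue depth (AQLEN) is among them
    head -1 /tmp/esxtop-sample.csv | tr ',' '\n' | grep -i queue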

To identify the storage device queue depth:

  1. Run the esxtop command in the service console of the ESX host or the ESXi shell (Tech Support mode). For more information, see Using Tech Support Mode in ESXi 4.1 and ESXi 5.0 (1017910) or Tech Support Mode for Emergency Support (1003677) for ESXi 3.5 and 4.0.
  2. Press u.
  2. Press f and select Queue Stats (F).
  4. The value listed under DQLEN is the queue depth of the storage device. This is the maximum number of ESX/ESXi VMkernel active commands that the device is configured to support.
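
The same batch capture also contains the per-device counters, so one snapshot covers both AQLEN and DQLEN. Assuming you kept the sample from the adapter section above, you can narrow the header dump to a device by its identifier (naa/mpx for devices, vmhba for adapters):

    # Reuse the capture from above and filter for a specific device's columns
    head -1 /tmp/esxtop-sample.csv | tr ',' '\n' | grep -i "naa."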

Notes:

  • The value listed under LQLEN is the LUN queue depth. This is the maximum number of ESX/ESXi VMkernel active commands supported by the LUN.
  • The value listed under %USD is the percentage of queue depth (adapter, LUN, or world) used by ESX/ESXi VMkernel active commands.
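
As a quick worked example of %USD: if a LUN's queue depth (LQLEN) is 32 and 24 VMkernel commands are active against it when the sample is taken, %USD shows 75, since 24/32 = 0.75.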

Cool storage blog!

Hi All,

I have come across this cool storage blog, which covers storage all the way from its early days. Hope you enjoy it!

http://blog.fosketts.net/


Maximizing Virtual Machine Performance

A how-to resource paper on getting the most out of your infrastructure through performance tuning.


In this technical whitepaper, Maximizing Virtual Machine Performance, Mattias Sundling walks you step by step through the basics of where you can recover VM performance that is otherwise lost to misconfigurations.

A well-tuned foundation enables you to make better use of your virtual infrastructure. Optimizing CPU, memory, disk, and network will improve performance and make your virtual environment more efficient to manage. Download this whitepaper by Quest evangelist Mattias Sundling.
