Multipathing policies in ESX/ESXi 4.x and ESXi 5.x

  • Most Recently Used (MRU) — Selects the first working path discovered at system boot time. If this path becomes unavailable, the ESX/ESXi host switches to an alternative path and continues to use the new path while it is available. This is the default policy for Logical Unit Numbers (LUNs) presented from an Active/Passive array. ESX/ESXi does not return to the previous path if, or when, it comes back; it remains on the working path until that path fails for any reason. Note: The preferred path flag, while sometimes visible, is not applicable to the MRU pathing policy and can be disregarded.
  • Round Robin (RR)— Uses an automatic path selection rotating through all available paths, enabling the distribution of load across the configured paths.

    • For Active/Passive storage arrays, only the paths to the active controller will be used in the Round Robin policy.
    • For Active/Active storage arrays, all paths will be used in the Round Robin policy.

    Note: This policy is not currently supported for Logical Units that are part of a Microsoft Cluster Service (MSCS) virtual machine.

  • Fixed (Fixed) — Uses the designated preferred path flag, if it has been configured. Otherwise, it uses the first working path discovered at system boot time. If the ESX/ESXi host cannot use the preferred path or it becomes unavailable, ESX/ESXi selects an alternative available path. The host automatically returns to the previously-defined preferred path as soon as it becomes available again. This is the default policy for LUNs presented from an Active/Active storage array.

    Fixed path with Array Preference — The VMW_PSP_FIXED_AP policy was introduced in ESX/ESXi 4.1. It works for both Active/Active and Active/Passive storage arrays that support ALUA. This policy queries the storage array for the preferred path based on the array's preference. If no preferred path is specified by the user, the storage array selects the preferred path based on specific criteria.


  • These pathing policies apply to VMware's Native Multipathing (NMP) Path Selection Plug-ins (PSPs). Third-party PSPs have their own restrictions.
  • Switching to Round Robin from MRU or Fixed is safe and supported for all arrays unless otherwise explicitly documented.

Warning: VMware does not recommend changing the LUN policy from Fixed to MRU, because the pathing policy is selected automatically based on the array that has been detected by the NMP PSP.
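If you do decide to change a device's path selection policy, it can be done from the command line. A sketch for both generations (the `naa.` device identifier below is a placeholder for your LUN's actual NAA ID):

```
# List devices and their current path selection policy (ESXi 5.x)
esxcli storage nmp device list

# Set Round Robin on a specific device (ESXi 5.x)
esxcli storage nmp device set --device naa.xxxxxxxxxxxxxxxx --psp VMW_PSP_RR

# Equivalent command on ESX/ESXi 4.x
esxcli nmp device setpolicy --device naa.xxxxxxxxxxxxxxxx --psp VMW_PSP_RR
```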

High Level Boot process of ESX server

Several boot loaders are used on Linux systems, such as the Grand Unified Bootloader (GRUB) and the Linux Loader (LILO). ESX uses LILO as its boot loader and has system components that expect the presence of LILO, so do not replace LILO with another boot loader or your server may experience problems. The configuration parameters for the boot loader are contained in /etc/lilo.conf in a human-readable format, but the actual boot loader is stored in binary format on the boot sector of the default boot disk. This section explains the boot process of ESX

Server, as well as how to load the VMkernel and configuration files.
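For reference, a minimal /etc/lilo.conf looks roughly like the sketch below. The device names, kernel path, and labels are illustrative placeholders, not values taken from any particular ESX release:

```
boot=/dev/sda              # install the boot loader in the MBR of the default boot disk
prompt
timeout=50
default=esx

image=/boot/vmlinuz        # kernel booted for the Service Console
    label=esx
    root=/dev/sda2
    read-only
```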

1. BIOS is executed on the server.

2. BIOS launches LILO from the default boot drive.

3. LILO loads Linux Kernel for the Service Console.

4. The Service Console launches VMkernel.

5. MUI Server is started.

6. Virtual machines can then be launched by VMkernel and managed through MUI.

How to change ESX host root password

1. Shut down and reboot your VMware ESX Server

If you don’t know the VMware ESX Server root password, you don’t know the password for any root-equivalent account, and your VirtualCenter server also does not have it cached, the only way to change the root user password is to first shut down / power off your VMware ESX Server.

2. Press “a” to modify the kernel arguments

As soon as you see the GRUB boot screen, press “a” to modify the kernel arguments.

3. Enter single user mode

At the end of the kernel arguments command line, type “single” and press Enter.

4. Change the root password

Now, change the root password using the passwd command.

You will need to enter the new root password twice.


5. Reboot the ESX Server

Once you have reset the root password, reboot the server to go back into multi-user mode.

6. Verify the new password

Once the system reboots, verify that the new root password works 🙂
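The whole procedure above can be sketched as the following console session. The boot-time steps are shown as comments, since they happen at the GRUB prompt rather than in a shell:

```
# At the GRUB boot screen: press "a", append "single" to the end of the
# kernel arguments line, and press Enter to boot into single-user mode.

passwd            # change the root password; enter it twice when prompted
reboot            # return to multi-user mode, then verify the new password
```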


vSphere 5 vRAM Licensing

vSphere 5 introduces a new licensing scheme. Unlike vSphere 4, where licensing was based on the number of cores per socket and vRAM was not taken into consideration, vSphere 5 licenses by vRAM.

Let’s do a quick refresher. In vSphere 4, the Enterprise edition entitles you to 6 cores per physical processor per server. The Advanced and Enterprise Plus editions entitle you to 12 cores per physical processor.

An example would be as follows:
1 server with 2 physical CPUs, each with 8 cores, requires 2 x Enterprise Plus licenses.
If you apply 2 x Enterprise instead of Enterprise Plus licenses, only 6 cores per CPU will be used and 2 cores per CPU will be left idle.

Let’s talk about vSphere 5 licensing. Before we begin: vSphere 5 removes the Advanced edition. Customers on vSphere 4 Advanced edition will be upgraded to vSphere 5 Enterprise.

vRAM entitlement is based on edition per physical CPU (there is no longer a limit on the number of cores); licensing is based on vRAM allocated. So how is vRAM different from physical RAM?

vRAM is the virtual memory allocated to any VM that is powered on. But there are some misconceptions here that many customers have.

Consideration 1:
If you have a server with 128 GB of RAM and 2 physical CPUs, do you need to purchase 128 GB of vRAM? The answer is no. In a normal setup, we often leave a buffer of resources for HA or DRS, and this buffer does not have to be taken into consideration. Since the memory allocated to a VM remains the same when HA or DRS kicks in, you are not using twice as much vRAM.

Consideration 2:
What if you need to increase the vRAM allocated to a VM once in a while? VMware only counts the average vRAM allocation per year. As such, an occasional increase, or even creating a VM for short-term testing that is later destroyed or powered off, will not breach your entitlement as long as the average yearly allocation does not exceed it. This is possible because the license does not impose a hard restriction on the server: even if you do not have enough vRAM entitlement, you are still able to allocate more than you own.

When planning your licenses, you do have to take note of the vRAM required minus the buffer used for HA where, e.g., N+1 is used. Of course, you can always buy additional licenses in the future when needed.

If HA kicks in when one host fails, the entitlement of the failed host remains available and is shared by the remaining hosts. That is to say, the total vRAM entitlement is pooled together and shared within the cluster, provided the editions are the same.

Taking the same example of the physical server as above:

1 server with 2 physical CPUs (cores are not a concern) and 128 GB of physical RAM.
Since there are 2 physical CPUs, you need 2 licenses of any edition.
Now we take note of the vRAM.
If you are on Essentials Plus, you need 2 x Essentials Plus, which entitles you to 2 x 32 GB = 64 GB of vRAM.

Say you intend to have 100 GB of vRAM in use, taking future VMs into consideration and leaving 28 GB as buffer. You will be short by 36 GB, so you will need another 2 x Essentials Plus licenses to top up, since 2 x 32 GB only gives 64 GB of vRAM.

Say you are on Enterprise Plus licenses instead: 2 x Enterprise Plus entitles you to 2 x 96 GB = 192 GB of vRAM. In that case you have more than enough, and you can even increase the physical RAM without purchasing additional vRAM licenses.
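The arithmetic in the worked example above can be checked with a quick shell calculation (all figures come from the example: 2 CPUs, 100 GB of planned vRAM, 32 GB per Essentials Plus license, 96 GB per Enterprise Plus license):

```shell
cpus=2
needed_gb=100                          # planned vRAM allocation

# Essentials Plus: 32 GB of vRAM per CPU license
ep_pool=$((cpus * 32))                 # pooled entitlement from the first 2 licenses
short=$((needed_gb - ep_pool))         # shortfall against the 100 GB plan
top_up=$(( (short + 31) / 32 ))        # additional licenses needed (rounded up)

# Enterprise Plus: 96 GB of vRAM per CPU license
entp_pool=$((cpus * 96))               # already more than the 100 GB needed

echo "Essentials Plus pool: ${ep_pool} GB, short ${short} GB, top up with ${top_up} more licenses"
echo "Enterprise Plus pool: ${entp_pool} GB"
```

This reproduces the numbers in the text: a 64 GB pool that is 36 GB short, topped up with 2 more Essentials Plus licenses, versus a 192 GB Enterprise Plus pool.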

Consideration 3:
Now you may ask: why do I have to buy so many Essentials Plus licenses in the example above? Can I just buy another edition to top up? The answer is no, and it is not a recommended practice. If a host is assigned a different edition, the entitlement cannot be shared; the host assigned the different edition license will stand alone.

Consideration 4:
What if I have a VM that requires more than the 96 GB of vRAM that Enterprise Plus entitles?
If you are on Enterprise Plus, the VM can be allocated more than 96 GB of vRAM without any penalty; the VM is counted at a maximum of 96 GB.

Consideration 5:
What if you are using View? Won’t you have to purchase lots of licenses?
The answer is no. For View, vRAM is unlimited: it uses the Desktop license, which has an unlimited vRAM entitlement.

IMPORTANT: Due to the different vRAM licensing entitlements for Desktop and infrastructure, it is strongly recommended not to mix both environments together.

Here is the table of license entitlements:

vSphere Edition                     vRAM Entitlement/CPU

vSphere Enterprise Plus             96 GB
vSphere Enterprise                  64 GB
vSphere Standard                    32 GB
vSphere Essentials Plus             32 GB
vSphere Essentials                  32 GB
vSphere Hypervisor Free Edition     32 GB
vSphere Desktop                     Unlimited


ESXi and ESX Architectures Compared

VMware ESX Architecture.

In the original ESX architecture, the virtualization kernel (referred to as the vmkernel) is augmented with a management partition known as the console operating system (also known as COS or service console). The primary purpose of the Console OS is to provide a management interface into the host. Various VMware management agents are deployed in the Console OS, along with other infrastructure service agents (e.g. name service, time service, logging, etc). In this architecture, many customers deploy other agents from 3rd parties to provide particular functionality, such as hardware monitoring and system management. Furthermore, individual admin users log into the Console OS to run configuration and diagnostic commands and scripts.


    • VMware agents run in Console OS
    • Nearly all other management functionality provided by agents running in the Console OS
    • Users must log into Console OS in order to run commands for configuration and diagnostics

VMware ESXi Architecture.

In the ESXi architecture, the Console OS has been removed and all of the VMware agents run directly on the vmkernel. Infrastructure services are provided natively through modules included with the vmkernel. Other authorized 3rd-party modules, such as hardware drivers and hardware monitoring components, can run in the vmkernel as well. Only modules that have been digitally signed by VMware are allowed on the system, creating a tightly locked-down architecture. Preventing arbitrary code from running on the ESXi host greatly improves the security of the system.

  • VMware agents ported to run directly on VMkernel
  • Authorized 3rd-party modules can also run in the VMkernel. These provide specific functionality:
    • Hardware monitoring
    • Hardware drivers
  • VMware components and third party components can be updated independently
  • The “dual-image” approach lets you revert to prior image if desired
  • Other capabilities necessary for integration into an enterprise datacenter are provided natively
  • No other arbitrary code is allowed on the system

What is the difference between ESX and ESXi

Capability                ESX 4.0                  ESX 4.1                  ESXi 4.0               ESXi 4.1
Service Console           Present                  Present                  Removed                Removed
Admin/config CLIs         COS + vCLI               COS + vCLI               PowerCLI + vCLI        PowerCLI + vCLI
Advanced Troubleshooting  COS                      COS                      Tech Support Mode      Tech Support Mode
Scripted Installation     Supported                Supported                Not Supported          Supported
Boot from SAN             Supported                Supported                Not Supported          Supported
SNMP                      Supported                Supported                Supported (limited)    Supported (limited)
Active Directory          3rd party in COS         Integrated               Not Supported          Integrated
HW Monitoring             3rd party agents in COS  3rd party agents in COS  CIM providers          CIM providers
Web Access                Supported                Not Supported            Not Supported          Not Supported
Serial Port Connectivity  Supported                Supported                Not Supported          Not Supported
Jumbo Frames              Supported                Supported                Supported              Supported

Configuration differences between VI 3.5 and vSphere 4

Virtual Machine                              VI 3.5    vSphere 4
Number of virtual CPUs per virtual machine      4          8
RAM per virtual machine                       64 GB     255 GB
NICs per VM                                     4         10
Concurrent remote console sessions             10         40


ESX host                              VI 3.5    vSphere 4
Hosts per storage volume                 32         64
Fibre Channel paths to LUN               32         16
NFS datastores                           32         64
Hardware iSCSI initiators per host        2          4
Virtual CPUs per host                   192        512
Virtual machines per host               170        320
Logical processors per host              32         64
RAM per host                          256 GB       1 TB
Standard vSwitches per host             127        248
Virtual NICs per standard vSwitch     1,016      4,088
Resource pools per host                 512      4,096
Children per resource pool              256      1,024
Resource pools per cluster              128        512

How to identify the VC database’s long-running transactions and/or blocking SPIDs in SQL Server using a query

The query below identifies open / long-running transactions and blocking SPIDs in the database used by the VirtualCenter server. Long-running transactions and blocking SPIDs can prevent access to VC and can make access to the VC inventory extremely slow.

If you find yourself in a situation where the VC service is in the started state and you are still unable to access VirtualCenter, follow the steps below. You might receive error messages similar to “The server took too long to respond.. request timed out”. Check all open connections and kill any that are not critical and have been running for a long duration.

I am assuming you have access to your SQL database; otherwise, touch base with your SQL DBA for assistance. This is how they would identify the culprit.

SELECT spid, status,
       loginame = SUBSTRING(loginame, 1, 12),
       hostname = SUBSTRING(hostname, 1, 12),
       blk = CONVERT(char(3), blocked),
       open_tran,
       dbname = SUBSTRING(DB_NAME(dbid), 1, 10),
       cmd, waittype, waittime, last_batch
FROM master.dbo.sysprocesses
WHERE spid IN (SELECT blocked FROM master.dbo.sysprocesses)
  AND blocked = 0

In my example (attached screenshot), I did not have any blocking culprit to showcase. 🙂

The above query looks for blocking across the entire server; if you want to run the query against a specific database, use the query below. In this example the database name is “VCDB”.

SELECT blocked FROM master.dbo.sysprocesses WHERE dbid = DB_ID('VCDB')

To kill an identified blocking session, issue the command:

KILL <SPID number>

How to update the ESX host name and IP address

1. Edit /etc/hosts (“vi /etc/hosts”).

Add a line to the file with the IP address first, followed by the ESX host name (ESX002 in this example).
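For example, the added line might look like this (the IP address and domain name are placeholders; only the host name ESX002 comes from the text above):

```
192.168.1.10    esx002.example.com    ESX002
```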


2. Edit /etc/sysconfig/network (“vi /etc/sysconfig/network”) and update the ESX host name in the second line of the file.
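After the edit, the file typically looks like this (the host name and gateway values are placeholders):

```
NETWORKING=yes
HOSTNAME=esx002.example.com
GATEWAY=192.168.1.1
```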





You will need to restart the network service for the change to take effect, by issuing the command below.

service network restart