VMware question

  • VMware question

    I am very green when it comes to VMware. In an ESXi 5.1+ environment, how does vCPU-to-physical-core assignment work? I was told that in previous versions, a VM had to wait until all of its allocated cores became available before it could run a process. Then they added sockets, which allow the allocated cores to be divided, so the VM doesn't have to wait for all of its cores to be available, just half (if you have 2 sockets set). Does this mean a process still has to wait for one whole socket's worth of cores to become available?

    I'm trying to see if I am understanding this correctly. Then there are also shares and resource allocation to work out.
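
    To check that I'm picturing this right, here's a toy model of what I mean by "waiting for all cores" versus only some of them. This is an illustration of the idea only, not how the real ESXi scheduler is implemented, and the core counts are made up:

        # Toy model: strict vs. relaxed co-scheduling of a 4-vCPU VM.
        # Illustration only -- not the real ESXi scheduler.

        def strict_can_dispatch(vm_vcpus: int, free_cores: int) -> bool:
            # Old strict co-scheduling: every vCPU must land on a physical
            # core at the same instant, or the whole VM waits.
            return free_cores >= vm_vcpus

        def relaxed_can_dispatch(vm_vcpus: int, free_cores: int) -> bool:
            # Relaxed co-scheduling: some vCPUs may make progress alone, as
            # long as the scheduler keeps their skew bounded.
            return free_cores >= 1

        for free in range(5):  # host with 4 physical cores, 0-4 free
            print(f"{free} free: strict={strict_can_dispatch(4, free)}, "
                  f"relaxed={relaxed_can_dispatch(4, free)}")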

  • #2
    Some light reading on CPU scheduling in 5.1 -- http://www.vmware.com/files/pdf/tech...Sched-Perf.pdf

    Cliff Notes:

    1. Don't overthink CPU allocation. CPU scheduling is much better in 5.1 than in older versions. Don't go tweaking it unless you have a dire need. Most of the time you will do more harm than good.

    2. Don't overbook your vCPU count against your actual CPU count by a HUGE amount. Your system will thrash.

    3. Don't over-allocate vCPUs to a guest OS just because you can. (An MS SQL server does not need 32 CPUs, no matter what the DBA says.)

    4. Be realistic in vCPU allocation. Monitor real-time CPU usage on each guest OS and adjust accordingly (a quick reporting sketch follows this list).

    5. Watch your vCPU allocation on 32-bit guest OSes. For example, Windows 2003 and earlier can only address 4 processor sockets. If you need to allocate 8 vCPUs to such an OS, define them as 4 dual-core or 2 quad-core processors. (64-bit OSes do not have this limitation.)
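
    To make point 4 concrete, here's one way to pull those numbers with Python's pyVmomi bindings: list each powered-on VM's vCPU count next to its current CPU demand. The hostname and credentials are placeholders, and quickStats is a coarse instantaneous number, not a substitute for real trending data:

        # List vCPU allocation vs. current CPU demand for powered-on VMs.
        # Hostname and credentials below are placeholders.
        import ssl
        from pyVim.connect import SmartConnect, Disconnect
        from pyVmomi import vim

        ctx = ssl._create_unverified_context()  # lab only; verify certs in prod
        si = SmartConnect(host="vcenter.example.com", user="readonly",
                          pwd="secret", sslContext=ctx)
        try:
            content = si.RetrieveContent()
            view = content.viewManager.CreateContainerView(
                content.rootFolder, [vim.VirtualMachine], True)
            for vm in view.view:
                if vm.runtime.powerState != "poweredOn":
                    continue
                cfg, stats = vm.summary.config, vm.summary.quickStats
                print(f"{cfg.name}: {cfg.numCpu} vCPU, "
                      f"{stats.overallCpuUsage} MHz in use")
        finally:
            Disconnect(si)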



    • #3
      Originally posted by S_K View Post
      Some light reading on CPU scheduling in 5.1 -- http://www.vmware.com/files/pdf/tech...Sched-Perf.pdf

      Cliff Notes:

      1. Don't overthink CPU allocation. CPU scheduling is much better in 5.1 than in older versions. Don't go tweaking it unless you have a dire need. Most of the time you will do more harm than good.

      2. Don't overbook your vCPU count against your actual CPU count by a HUGE amount. Your system will thrash.

      3. Don't over-allocate vCPUs to a guest OS just because you can. (An MS SQL server does not need 32 CPUs, no matter what the DBA says.)

      4. Be realistic in vCPU allocation. Monitor real-time CPU usage on each guest OS and adjust accordingly.

      5. Watch your vCPU allocation on 32-bit guest OSes. For example, Windows 2003 and earlier can only address 4 processor sockets. If you need to allocate 8 vCPUs to such an OS, define them as 4 dual-core or 2 quad-core processors. (64-bit OSes do not have this limitation.)
      249 vCPUs and 110 logical CPUs, so roughly a 2.3:1 overcommit.



      • #4
        Your avatar/signature makes it very hard to read this thread at work.



        • #5
          How many guest OSes are running? That may not be too bad if it is not a large number of VMs. The rule of thumb is at LEAST one physical core per VM, so I would not go over 90-100 VMs on this server. Hopefully this ESX server is part of a High Availability cluster so you can load balance.



          • #6
            Originally posted by roliath View Post
            Your avatar/signature makes it very hard to read this thread at work.
            Not sure if this is a bad thing?



            • #7
              Originally posted by S_K View Post
              How many guest OSes are running? That may not be too bad if it is not a large number of VMs. The rule of thumb is at LEAST one physical core per VM, so I would not go over 90-100 VMs on this server. Hopefully this ESX server is part of a High Availability cluster so you can load balance.
              One cluster has 40 logical cores, and 87 vCPUs.

              We are installing a VM monitor that will help us cut down the allocation of resources.



              • #8
                Real-time data is your friend. You need both peak and average over at least a week to get any useful insight. Also check the times of your peaks. I don't know how many times I have seen people start all their virus scans or backups at the same time: 100 servers kick off virus scans simultaneously, and then they wonder why their system tanks.
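
                One cheap way to avoid that stampede is to derive each job's start time from the VM's name so the scans spread themselves across a window. A small sketch; the two-hour window and VM names are made-up assumptions:

                    # Spread scheduled jobs (scans, backups) across a window
                    # instead of firing them all at once. Window size and VM
                    # names below are illustrative assumptions.
                    import hashlib

                    WINDOW_MINUTES = 120  # stagger starts across 2 hours

                    def stagger_offset(vm_name: str) -> int:
                        # Deterministic per-VM minute offset within the window.
                        digest = hashlib.sha256(vm_name.encode()).hexdigest()
                        return int(digest, 16) % WINDOW_MINUTES

                    for name in ("web01", "web02", "sql01", "file01"):
                        print(f"{name}: start {stagger_offset(name)} min into the window")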



                • #9
                  Originally posted by big_tiger View Post
                  One cluster has 40 logical cores, and 87 vCPUs.

                  We are installing a VM monitor that will help us cut down the allocation of resources.
                  Make sure you are using the N-1 rule when building ESX clusters: size the cluster so the full load still fits with one host down. For example:

                  for a 100-120 VM environment, the minimum I would go would be 3 physical servers with 64 cores each. That gives you 192 physical cores in total, 128 with one host down, and around 256 vCPUs to allocate at a conservative 2:1 overcommit. This will load balance nicely and still be able to handle the load in case of an ESX server failure.

                  You could go with fewer servers and more processors, but I would be afraid of overloading the network cards or the data I/O bus of the server (the most common bottlenecks in VMware).
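
                  That arithmetic generalizes into a back-of-the-envelope sizing helper. The 2:1 overcommit target and per-host core count are assumptions to tune, not fixed rules:

                      # Back-of-the-envelope N-1 cluster sizing.
                      import math

                      def hosts_needed(total_vcpus: int, cores_per_host: int,
                                       overcommit: float = 2.0) -> int:
                          # Smallest host count whose N-1 capacity covers the load.
                          cores_required = total_vcpus / overcommit
                          n = math.ceil(cores_required / cores_per_host) + 1  # +1 survives a host failure
                          return max(n, 2)  # HA needs at least 2 hosts

                      # The example above: ~256 vCPUs on 64-core hosts at 2:1.
                      print(hosts_needed(256, 64))  # -> 3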



                  • #10
                    It's nowhere close to 100 VMs, but the hardware is kinda old: HP G5s and G6s.



                    • #11
                      Repeat after me: "I will get off my ass and install vCOps."

                      Rinse, wash, repeat.

                      Seriously, vCOps will save your ass from over-allocation and under-utilization.

                      As a VERY general rule, I usually allocate no more than 4 vCPUs per physical core when I don't have any performance data. This is usually for greenfield deployments. On VDI I will let it go up to 7 vCPUs per physical core. I then keep an eye on things with vCOps to determine where I can free up system resources (a quick ratio check is sketched after this post).

                      One other thing... you will almost always run out of memory resources long before you run out of physical CPU resources. It is just a rule of thumb that applies most of the time.
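
                      A tiny sketch of that ratio check, with the 4:1 general and 7:1 VDI ceilings from above wired in; the inventory numbers are placeholders you would pull from vCenter or vCOps:

                          # Quick vCPU:pCore overcommit check against the
                          # rules of thumb above. Numbers are placeholders.

                          def overcommit_ok(total_vcpus: int, physical_cores: int,
                                            is_vdi: bool = False) -> bool:
                              # True if the cluster is within the suggested ceiling.
                              ceiling = 7.0 if is_vdi else 4.0
                              return total_vcpus / physical_cores <= ceiling

                          # The cluster mentioned earlier: 87 vCPUs on 40 cores.
                          ratio = 87 / 40
                          print(f"{ratio:.2f}:1 ->", "OK" if overcommit_ok(87, 40) else "over")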



                      • #12
                        ^^^^^
                        I'd pay special attention to what this man has to say.



                        • #13
                          vCOps is definitely your friend. I manage the VDI environment at work, and it makes a huge difference just knowing where you're spending your resources.



                          • #14
                            Originally posted by Sgt Beavis View Post
                            Repeat after me: "I will get off my ass and install vCOps."

                            Rinse, wash, repeat.

                            Seriously, vCOps will save your ass from over-allocation and under-utilization.

                            As a VERY general rule, I usually allocate no more than 4 vCPUs per physical core when I don't have any performance data. This is usually for greenfield deployments. On VDI I will let it go up to 7 vCPUs per physical core. I then keep an eye on things with vCOps to determine where I can free up system resources.

                            One other thing... you will almost always run out of memory resources long before you run out of physical CPU resources. It is just a rule of thumb that applies most of the time.
                            Originally posted by Tx Redneck View Post
                            ^^^^^
                            I'd pay special attention to what this man has to say.
                            Every goddamn bit of this.

                            Before I discovered vCOps, I would stick to the 1:1 rule, that is, 1 core per socket, then watch the logs for a bit to see if anything was struggling, and increase cores from there. Generally I don't go over 2 or 3 cores per socket, depending on the environment and setup. In my limited experience, most of the time 2 sockets with 1 core each is good enough for a medium-use server. It's a bad, BAD thing when machines start fighting for cores. (A reconfigure sketch follows.)
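
                            For reference, the sockets-vs-cores split is just two fields on the VM's config spec. A hedged pyVmomi fragment; it assumes `vm` was already looked up over an existing connection and that the VM is powered off (CPU hot-add may not be enabled):

                                # Set total vCPUs and the socket split for a VM.
                                # Assumes `vm` came from an existing pyVmomi
                                # connection and the VM is powered off.
                                from pyVmomi import vim

                                def set_vcpu_topology(vm, total_vcpus, cores_per_socket):
                                    spec = vim.vm.ConfigSpec()
                                    spec.numCPUs = total_vcpus            # total vCPU count
                                    spec.numCoresPerSocket = cores_per_socket
                                    return vm.ReconfigVM_Task(spec=spec)  # vSphere task

                                # e.g. the 1:1 rule above: 2 sockets x 1 core each
                                # set_vcpu_topology(vm, total_vcpus=2, cores_per_socket=1)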



                            • #15
                              Originally posted by Ratt View Post
                              Every goddamn bit of this.

                              Before I discovered vCOps, I would stick to the 1:1 rule, that is, 1 core per socket, then watch the logs for a bit to see if anything was struggling, and increase cores from there. Generally I don't go over 2 or 3 cores per socket, depending on the environment and setup. In my limited experience, most of the time 2 sockets with 1 core each is good enough for a medium-use server. It's a bad, BAD thing when machines start fighting for cores.
                              Agreed. Most people overbook their vCPUs and totally defeat the CPU scheduler.

                              Another amen to vCOps (or Foglight if you are in a mixed environment). Without hard data you are just throwing darts at the wall blindfolded.

