About the speaker
Brien Posey is a freelance technical writer who has received Microsoft's MVP award nine times for his work with Exchange Server, Windows Server, IIS, and File Systems/Storage.
Hyper-V R3 – What’s New in Windows Server 2012
In this session you will learn:
- PowerShell Management
- Memory Ballooning
- Live Migrations (beyond the cluster, shared nothing Live Migration)
- Failover Clustering
- Storage Migrations
- Hyper-V Replica
- Virtual Fibre Channel
- NIC Teaming
- Offload Data Transfer, and more!
More sessions by Brien Posey
Greetings! I’m Brien Posey, and in this presentation I want to talk about what's new in Hyper-V 3.0.
Before we get started, I want to give you a little bit of information about me. I’m a freelance technical author and industry analyst, and a nine-time Microsoft MVP. I've won the MVP award in several different areas over the years, including file systems / storage, which is my current MVP specialty. I’ve also received the MVP award for Exchange Server, Windows Server, and IIS.
Before I went freelance, I was a CIO for a nationwide chain of hospitals and healthcare facilities, and I also served as a network engineer for the Department of Defense at Fort Knox. Prior to that, I worked as a network administrator for some of the nation's largest insurance companies.
So my agenda for this presentation is pretty simple. I'm going to start out by giving you a brief history of Hyper-V, and from there I'm going to turn my attention to what's new in version 3.0.
Hyper-V has a relatively short release history. It was first introduced with Windows Server 2008 just a few years ago. After that Windows Server 2008 R2 continued to include Hyper-V, and Microsoft made some very welcome changes.
The original 1.0 release that came with Windows Server 2008 only offered basic functionality. The 2.0 release (the Windows Server 2008 R2 version) added a few enhancements. The two most important of those enhancements were live migration (the ability to move a running virtual machine from one host server to another) and memory overcommitment. Memory overcommitment refers to the ability to allocate more memory to virtual machines than what is physically present within the host server.
Those were the two big changes that came about with version 2.0. Now with Windows Server 2012 Microsoft is releasing Hyper-V 3.0. Hyper-V 3.0 is going to include hundreds of new features. Obviously I won't be able to cover all of these new features within this webcast but I will discuss the most important ones.
With version 3.0 Hyper-V is finally going to become an enterprise class hypervisor, and you'll see why as I go through and talk about some of the new features.
Before I get started, I need to quickly point out that throughout its entire release history, Hyper-V has been offered in two different editions. There's a free standalone product, but you can also run Hyper-V as a role on top of Windows Server 2008 or 2008 R2. Microsoft has announced that Hyper-V 3.0 will also be available as a standalone hypervisor or as a role in Windows Server 2012. In this presentation I'm only going to focus on the latter (Hyper-V 3.0 running on top of Windows Server 2012).
So ever since Microsoft first started previewing Hyper-V 3.0, the one question that I have been asked more often than any other is whether Microsoft finally got it right this time around. I'm happy to say that I really do think that they did. Hyper-V 3.0 finally has enterprise class features, and the features found in Hyper-V 3.0 are comparable to those in VMware vSphere 5. As a matter of fact, if any VMware administrators are watching this webcast, I'm sure that some of the new features are going to look very familiar to you.
One of the things that made me the happiest about Hyper-V 3.0 is that in spite of all these new features that Microsoft is introducing, they still managed to keep the interface for Hyper-V 3.0 very simple and intuitive. In fact, the interface looks a lot like what we had with Hyper-V 2.0.
The main tool for managing Hyper-V has always been the Hyper-V Manager, which you can see right here on the slide. As you can see, the console tree is divided into several different panes that I will quickly show you.
On the left side is the console tree, and I’ve selected the Hyper-V server. The top of the middle pane lists all of the virtual machines that are installed on the server. You can see not only the virtual machine names, but also which virtual machines are running and how much CPU time and memory those virtual machines are consuming. The center of the middle pane lists the snapshots for the selected virtual machine. Right now no snapshots are showing up because the virtual machine that is selected doesn't have any snapshots.
Down at the bottom of the center pane is the review section. This basically just tells you when the virtual machine was created and lists any notes for the virtual machine.
In the Actions pane (off to the right) there is a section at the top that shows you all the different actions that you can perform for the host as a whole. We can do things like importing virtual machines, creating new virtual machines, modifying the Hyper-V settings, and things like that. These are all things that are specific to the hypervisor.
The lower right pane gives you actions that you can perform against the selected virtual machine. We can do things like connecting to the virtual machine, modifying the virtual machine settings, starting the virtual machine, creating snapshots, exporting the virtual machine, and other things like that. So this is just a really quick overview of the Hyper-V Manager as it existed in Hyper-V 2.0, just so that you have it for reference purposes.

So with that said, let’s take a look at the Hyper-V Manager for version 3.0. It has the same basic layout as what you just saw for Hyper-V 2.0. The console tree remains on the left, and the top middle pane lists all the virtual machines that are present on the server. The center pane lists the snapshots for the virtual machine. Off to the right we have the Actions pane for the hypervisor as a whole, and in the lower right we have the Actions pane for the currently selected virtual machine. So this is exactly the same layout as what we had with Hyper-V 2.0.
This is a big part of why I said that Microsoft got it right. Hyper-V 3.0 contains an endless array of new features, and yet Microsoft managed to provide the same simple interface that they always had. As a matter of fact, when you're looking at the Hyper-V Manager for Hyper-V 3.0, the only really notable change is down in the lower center pane. Whereas that pane used to just be the summary for the currently selected virtual machine, we now have a series of tabs. There are tabs for Summary, Memory, Networking, and Replication. Some of the new features are exposed through these tabs down at the bottom of the screen. Other features are exposed through various property sheets and settings screens and things like that. All in all, I think that Microsoft did an incredibly good job of creating this console.
I could go on and on about the Hyper-V Manager, but the big news isn’t that Microsoft has extended the Hyper-V Manager. That's totally to be expected. The big news is that now you can fully manage Hyper-V and all of your virtual machines through Windows PowerShell.
If you take a look at the slide, you can see that I entered the Get-VM command, and when I typed in that command PowerShell returned the names of all the virtual machines that are present on the host. You can also see which virtual machines are running and which ones are turned off, the CPU usage, the memory consumption for each virtual machine, how long they've been up, and whether or not they are operating normally.
This is actually just the tip of the iceberg. There's a whole lot more information that we can acquire from virtual machines and we can do things such as starting and stopping virtual machines.
If you look at the bottom of this pane you will see that I have entered the Start-VM VM3 command and that command actually starts the virtual machine named VM3.
So this is just a quick sampling of a couple of things you can do with the PowerShell interface. PowerShell is very powerful and you can do all sorts of things with your virtual machines.
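To give you a feel for how these cmdlets fit together, here is a short sketch of the kind of thing the built-in Hyper-V module lets you do. The virtual machine name is hypothetical, and these commands require a Hyper-V host:

```powershell
# List every virtual machine on the host, along with its state,
# CPU usage, memory consumption, and uptime
Get-VM

# Start a single virtual machine by name (VM3 is just an example name)
Start-VM -Name VM3

# Cmdlets can be combined in a pipeline - for example, start every
# virtual machine that is currently powered off
Get-VM | Where-Object { $_.State -eq 'Off' } | Start-VM
```

Because these cmdlets emit objects rather than plain text, you can filter, sort, and script against them just like any other PowerShell output.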
Now, in all fairness, this type of management was possible in Hyper-V 2.0, but the option to use PowerShell to manage Hyper-V and your virtual machines wasn't built in. There was a separate module that you had to download and import into PowerShell. Now all of this functionality is included natively within the operating system.
When I talked about the Hyper-V release history, I mentioned that there were two big new features that came out with Hyper-V 2.0 - memory overcommitment and live migration. So I want to talk about those two features first, and talk about how Hyper-V 3.0 builds on what was already there with Hyper-V 2.0.
Memory overcommitment still exists in Hyper-V 3.0, and it exists because it allows you to allocate more memory to your virtual machines collectively than is physically present within the system. By doing this, you are able to increase your virtual machine density and improve your overall return on investment for your server hardware. That's great, but memory overcommitment had one fatal flaw in Hyper-V 2.0. The fatal flaw has to do with the way that Windows works.
When Windows starts up, it usually consumes more memory at startup than it does when Windows is idle. The problem is that the memory overcommitment feature in Hyper-V 2.0 treated startup memory and minimum memory as the same thing. Startup memory was the minimum memory. There was no way that the virtual machine could ever use less memory than what it used to start. This has all changed in Hyper-V 3.0. In Hyper-V 3.0, Microsoft introduced a feature called memory ballooning. Memory ballooning is the same as memory overcommitment, except that now the startup memory and minimum memory settings have been broken apart and are treated as two completely separate things. So you can allocate the memory that a VM needs at system startup and then release some of that memory as the virtual machine stabilizes and goes idle. This can help you to use Hyper-V memory a lot more efficiently.
So to make the concept of memory ballooning make just a bit more sense, take a look at the slide. What I’ve done here in this slide is to take the virtual machine settings screen from Hyper-V 2.0 and put it right next to the virtual machine settings screen from Hyper-V 3.0 and select the memory configuration option. You can see that both Hyper-V 2.0 and 3.0 support the use of static or dynamic memory. If you enable dynamic memory then Hyper-V 2.0 lets you specify startup RAM and maximum RAM, but if you look over Hyper-V 3.0 you’ve got some different options. You can still set the startup RAM and maximum RAM, but there is also an option to set the minimum RAM. You can set the minimum RAM to be lower than the startup RAM so that Hyper-V is able to reclaim some of that memory that gets used when a Windows virtual machine is starting up. So that's what memory ballooning looks like in Hyper-V 3.0.
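Although the slide shows the GUI, the same dynamic memory settings can be configured from PowerShell. This is just a sketch - the VM name and memory sizes are illustrative, and the virtual machine must be shut down before dynamic memory can be enabled:

```powershell
# Enable dynamic memory on a VM. Setting MinimumBytes lower than
# StartupBytes is what makes ballooning work: the VM gets 1 GB to boot,
# but Hyper-V can reclaim memory down to 512 MB once the VM goes idle.
Set-VMMemory -VMName "VM1" `
    -DynamicMemoryEnabled $true `
    -StartupBytes 1GB `
    -MinimumBytes 512MB `
    -MaximumBytes 4GB
```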
The other big new feature that was released with Hyper-V 2.0 was live migration. Live migration refers to the ability to move a running virtual machine from one host server to another without taking the virtual machine offline and without any interruption in service.
In Hyper-V 2.0, live migration was a serial operation. This means that you could only live migrate one virtual machine at a time. If you had ten different virtual machines that you wanted to migrate, then you would have to migrate those VMs in sequence. You couldn't do parallel live migrations. This has changed in Hyper-V 3.0. Now it is possible to live migrate multiple virtual machines at the same time. The actual number of live migrations that you can perform concurrently depends greatly on how much bandwidth you have available, but Hyper-V 3.0 has an interface that you can use to set the maximum number of live migrations that you want to allow to run concurrently.
So in the slide you can see the Hyper-V Settings page for a virtualization host. If you look towards the middle of the pane on the right, you can see that we have the option to specify how many simultaneous live migrations we want to allow. Right now the Simultaneous Live Migrations setting is set to two, but you can adjust the value up or down depending on how much bandwidth you have available.
Directly underneath that value, we have an option called Incoming Live Migrations, and we have the option to either use any available network adapter for live migrations or to specify the IP address that we want to use for live migrations. The reason why this setting is available is that you may not necessarily want to perform live migrations over every network adapter, because doing that could impact production workloads if you've got virtual machines that depend on those network adapters to do their jobs.
Another thing you will notice at the top of this dialog box is that on the right we have the option of selecting an authentication protocol. We can use the Credential Security Support Provider (CredSSP) or we can use Kerberos. So there are several different options that we can use with regard to how live migrations are configured.
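For reference, the same host-level settings shown in the dialog box can also be configured with PowerShell. A minimal sketch, run on the virtualization host (the migration subnet is hypothetical):

```powershell
# Enable incoming and outgoing live migrations on this host
Enable-VMMigration

# Cap the number of simultaneous live migrations at two
Set-VMHost -MaximumVirtualMachineMigrations 2

# Use Kerberos rather than CredSSP for migration authentication
Set-VMHost -VirtualMachineMigrationAuthenticationType Kerberos

# Restrict live migration traffic to a specific network
Set-VMHost -UseAnyNetworkForMigration $false
Add-VMMigrationNetwork 192.168.10.0/24
```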
Microsoft has actually done a tremendous amount of work around live migrations beyond what you've already seen. One of the big changes to live migrations is that with previous versions of Hyper-V, it was only possible to live migrate a virtual machine that existed within a failover cluster. You could move a virtual machine to another cluster node, but you couldn't move it anywhere else, and you couldn't do live migrations from non-cluster members.
In Hyper-V 3.0 that limitation goes away. Now it's possible to live migrate a virtual machine to a host that exists outside of a cluster, or even to live migrate from outside a cluster to inside a cluster. So you have ways of moving virtual machines around with a tremendous amount of flexibility. This is something that Microsoft refers to as shared nothing live migration. It is important to note that shared nothing migrations shouldn't be used as a long-term replacement for a traditional failover clustering solution. It's more of a convenience feature that you can use to move virtual machines around and get them where you need them to be.
I don't want to get too technical, because this presentation is designed to be a light overview of the new features in Hyper-V 3.0, but I do at least want to mention that there are some requirements that you have to adhere to in order to be able to do shared nothing live migrations. The first criterion is that you have to have at least two Hyper-V 3.0 hosts. They can be standalone hosts or they can be clustered; it doesn't really matter. The next requirement is that each server requires adequate storage. This can be local storage or it can be remote SMB storage. The third requirement is that the hosts must have the same family or type of processor if you are using the processor compatibility feature. Next, the hosts must exist within the same Active Directory domain. If you've got hosts in different domains and you want to live migrate between those, then that’s unfortunately not something that you're going to be able to do.
The next criterion is that the hosts have to be connected to each other via a gigabit or faster network link. Live migrations are bandwidth intensive, and you have to have a gigabit or faster network to facilitate shared nothing live migrations.
Another requirement - and this is a biggie - is that the hosts have to have the Client for Microsoft Networks and the File and Printer Sharing for Microsoft Networks components enabled. Otherwise, shared nothing live migrations won't work. Also, pass-through storage can’t be used, because it can’t be live migrated. Finally, both machines that are involved in the shared nothing live migration have to have the same virtual switch configuration. So those are the requirements behind making a shared nothing live migration work.
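Assuming those requirements are met, a shared nothing live migration can be kicked off with a single cmdlet. The host, VM, and path names here are hypothetical:

```powershell
# Move a running VM and its storage to another Hyper-V 3.0 host,
# with no shared storage between the two hosts
Move-VM -Name "VM1" `
    -DestinationHost "HyperV02" `
    -IncludeStorage `
    -DestinationStoragePath "D:\VMs\VM1"
```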
Another huge change that Microsoft has made with regard to live migration in Hyper-V 3.0 is that shared storage isn’t required anymore. It's recommended that you use shared storage for performance and reliability reasons, but you don't have to. In Hyper-V 2.0 you had to have what was known as a Cluster Shared Volume. This was shared storage that could be used by all the hosts within the virtualization cluster. That requirement no longer exists in Hyper-V 3.0. Now, instead of using shared storage, you can actually use local storage and perform live migrations across an Ethernet cable. So this is a huge new feature for Hyper-V 3.0, and it's one of the centerpieces of the new hypervisor.
As you can see, a great number of the enhancements that Microsoft has made in Hyper-V 3.0 are related to failover clustering. Obviously the ability to do live migrations without the need for shared storage or a Cluster Shared Volume is huge, but there are other improvements as well. Microsoft has made tremendous scalability improvements, which I will be talking about a little bit later on.
Another area where Microsoft has enhanced Hyper-V 3.0 is with regard to storage migration. Storage migrations are different from live migrations. Live migrations involve moving a running virtual machine from one host server to another. Storage migrations involve moving a virtual machine’s components (specifically the virtual hard drive files and the components that make up the virtual machine) from one storage location to another.
Storage migrations existed in Hyper-V 2.0, but they were referred to as quick storage migrations. The downside to quick storage migrations was that there was a brief outage at the end of the storage migration while Hyper-V switched over to using the new location. In Hyper-V 3.0 that limitation is gone. Storage migrations now occur without any downtime at all. So storage migrations can now be referred to as storage live migrations, because the process really is done live, with no downtime.
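A storage live migration can also be performed from PowerShell. A quick sketch (the VM name and destination path are illustrative):

```powershell
# Relocate a running VM's virtual hard disks, snapshots, and smart
# paging files to a new storage location - no downtime required
Move-VMStorage -VMName "VM1" -DestinationStoragePath "E:\VMs\VM1"
```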
One of my personal favorite new features in Hyper-V 3.0 is something called Hyper-V replica. Hyper-V replica refers to the ability to replicate a running virtual machine to another host. This host can be an alternate Hyper-V server in a remote datacenter.
The nice thing about the Hyper-V Replica feature is that it works really well in high latency environments or in environments where connectivity is not always reliable, such as over an Internet connection.
It is worth noting that the replication process isn't real time. You're not going to see a change that you make in the primary virtual machine show up instantly in the replica. Instead, replication is near real-time. Changes are replicated on a schedule; generally, virtual machines are going to be replicated at five-minute intervals, but there are a number of different factors that can affect this.
Another thing that you need to know about the Hyper-V Replica feature is that this isn't a failover clustering solution. When a failure occurs in the primary data center, the replica is not going to automatically take over (although there is a process for manually activating the replica server). Hyper-V replicas are intended to be a disaster recovery solution. In other words, if something goes wrong in your primary data center and you lose a virtual machine, you can use the replica to get the contents of that virtual machine back. So Hyper-V Replica is a very powerful feature, and it brings enterprise class replication to small businesses. That type of feature would previously have been totally out of reach for smaller organizations for financial reasons.
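To make this a bit more concrete, here is a sketch of how replication might be enabled from the primary host. The server names are hypothetical, and the replica server must first be configured to accept incoming replication:

```powershell
# Enable replication of VM1 to a replica server over Kerberos/HTTP
Enable-VMReplication -VMName "VM1" `
    -ReplicaServerName "ReplicaHost.contoso.com" `
    -ReplicaServerPort 80 `
    -AuthenticationType Kerberos

# Send the initial copy of the virtual machine to the replica server
Start-VMInitialReplication -VMName "VM1"
```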
Another new feature that hasn't really received as much press as some of the other features, but that I'm excited about nonetheless is something called virtual Fibre Channel. Virtual Fibre Channel will allow virtual machines to connect directly to Fibre Channel storage through a virtual SAN.
Virtual Fibre Channel is based on something called N_Port ID Virtualization, or NPIV as it is often abbreviated. Because of that, your physical Fibre Channel host bus adapters must support NPIV, they have to be connected to an NPIV-enabled SAN, and that SAN has to adhere to an NPIV topology.

One of the really cool things about virtual Fibre Channel is that it's actually possible to live migrate a virtual machine that uses virtual Fibre Channel. But in order to make that happen, there are some underlying requirements. Specifically, the host to which you're going to live migrate virtual machines has to have host bus adapters that can be used for Fibre Channel communications. That's a very logical requirement, but also, each host bus adapter has to be assigned two separate worldwide names in order to facilitate live migration. The reason for that is that if there weren’t two separate worldwide names for each host bus adapter, then connectivity to the Fibre Channel storage would be momentarily lost during the live migration process. Having two separate worldwide names keeps that from happening, so you can truly do a live migration without losing connectivity to Fibre Channel storage.

Earlier I mentioned that there were a lot of scalability improvements in Hyper-V. Some of the scalability improvements are related to the hypervisor as a whole, and others are specific to clustering. So let me show you some of the scalability improvements to Hyper-V as a whole as compared with what we had in Hyper-V 2.0.
So in Hyper-V 2.0 there was a maximum of four virtual CPUs that could be assigned to a virtual machine. In Hyper-V 3.0 that limit goes all the way from 4 virtual CPUs to 32 virtual CPUs per virtual machine.
The maximum amount of memory that can be assigned to a virtual machine has also dramatically increased. We went from a 64 GB maximum in Hyper-V 2.0 to a 512 GB maximum in Hyper-V 3.0.
The maximum virtual hard disk size was 2 TB in Hyper-V 2.0, which is pretty good. In some ways this limit still exists in Hyper-V 3.0, but Microsoft has introduced a new virtual hard disk format called VHDX, and if you use this new format (which you don't have to) the maximum virtual hard disk size goes all the way up to 16 TB. If you continue to use the legacy VHD format, then the maximum virtual hard disk size remains at 2 TB. So those are a few of the scalability improvements with regard to the hypervisor as a whole.
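For example, creating and converting VHDX files looks something like this (the paths are illustrative, and a disk must be offline before it can be converted):

```powershell
# Create a new 10 TB dynamically expanding disk - only possible with
# the VHDX format, since the legacy VHD format caps out at 2 TB
New-VHD -Path "D:\VHDs\Data01.vhdx" -SizeBytes 10TB -Dynamic

# Convert an existing legacy VHD to the VHDX format
Convert-VHD -Path "D:\VHDs\Old.vhd" -DestinationPath "D:\VHDs\Old.vhdx"
```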
Microsoft has also made a lot of improvements to cluster scalability. To show you what I mean, take a look at the chart on the slide. The maximum number of virtual machines per host that could be powered on at any time in Hyper-V 2.0 was 384. In Hyper-V 3.0, Microsoft increased that limit to 1024. So that's over 1000 virtual machines powered on at any given time on a host within a cluster.
The maximum number of virtual machines in a cluster has also been increased. In Hyper-V 2.0 you could have 1000 virtual machines per cluster. In Hyper-V 3.0 that limit has increased to 4000 virtual machines.
The maximum number of hosts per cluster has also been increased. In Hyper-V 2.0 you could have a cluster made up of up to 16 hosts. Hyper-V 3.0 clusters can accommodate up to 63 hosts. So that really opens up a lot of possibilities with regard to what you can do with a cluster, and it makes distance clustering a lot more practical.
The maximum amount of RAM per host server has also doubled. In Hyper-V 2.0 you could have a terabyte of RAM. In Hyper-V 3.0 you can have a full 2 TB of RAM per host server. So Microsoft has really made some great scalability improvements for clustering in Hyper-V 3.0.
One of the big features in the original Hyper-V release was snapshots. Snapshots allow you to make a point-in-time image of a virtual machine so that if something goes wrong later on, you can revert back to that point in time very quickly.
Snapshots are a great feature, but they have a couple of downsides. For one thing, as you accumulate virtual machine snapshots, virtual machine performance can be impacted tremendously. The more snapshots you accumulate for a virtual machine, the worse the read performance can become, because of the way the differencing disks are used.
You can eventually go in and clear out the snapshots that you don't want anymore, but in Hyper-V 1.0 and 2.0 you actually had to shut down the virtual machine before you could clean up the unwanted snapshots. In Hyper-V 3.0 you don't have to shut down the virtual machine to do snapshot maintenance. You can actually merge snapshots while the virtual machine is running. So that's a very welcome change in Hyper-V 3.0.

Another feature that has gotten almost no press, but that I think is really cool, is the ability to hot add resources. Hot adding resources refers to the ability to add certain hardware resources to a virtual machine while it is running. This is something totally new to Hyper-V 3.0. This ability didn't exist in any of the previous versions of Hyper-V.
The resources that you can hot add are memory and disks. You can actually add these resources to a virtual machine while it is running. There is no need to take the virtual machine offline. CPU cores, unfortunately, can't be hot added. That is something you can do in VMware, but it's not something that can be done in Hyper-V 3.0.
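As a sketch of what these operations look like from PowerShell (the VM, snapshot, and disk names are all hypothetical):

```powershell
# Take a snapshot of a running virtual machine
Checkpoint-VM -Name "VM1" -SnapshotName "Before-Patching"

# Delete an unwanted snapshot; in Hyper-V 3.0 the merge back into the
# parent disk happens live, while the VM keeps running
Remove-VMSnapshot -VMName "VM1" -Name "Before-Patching"

# Hot add a data disk to a running VM (the disk must be attached
# to a SCSI controller, not an IDE controller)
Add-VMHardDiskDrive -VMName "VM1" -ControllerType SCSI -Path "D:\VHDs\Data01.vhdx"
```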
Another new feature in Hyper-V 3.0 is something called NIC teaming. NIC teaming allows you to combine multiple network interfaces into one logical network interface. There are a couple of benefits to doing this. First, the logical NIC that you create supports higher bandwidth than a single physical NIC would, because the logical NIC combines the aggregate bandwidth of all of the network adapters within the team into one single connection. The other advantage is that you establish a degree of fault tolerance, because if any one of the hardware NICs fails, the connection continues to function, since the other NICs within the team are still functional.
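A team like the one just described can be created with a single cmdlet in Windows Server 2012. The team and adapter names below are hypothetical:

```powershell
# Combine two commodity NICs into one logical, fault tolerant NIC
New-NetLbfoTeam -Name "Team1" `
    -TeamMembers "Ethernet 1","Ethernet 2" `
    -TeamingMode SwitchIndependent
```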
Now, when I say that NIC teaming is new in Hyper-V 3.0, that's not entirely accurate. Hyper-V 2.0 did support NIC teaming, but only at the hardware level. You had to have proprietary hardware that could establish a hardware-level NIC team, and that functionality was only available from Intel and Broadcom. With Hyper-V 3.0, NIC teams are formed at the software level, and you can actually use commodity NICs. The NICs can even be from different vendors, so now it's possible to create a NIC team very inexpensively without having to delve into specialized hardware.

Another new feature is deduplication at the file system level. Now, this isn’t technically a Hyper-V feature, but rather a Windows Server 2012 feature that is designed to work with Hyper-V.
File system level deduplication allows you to deduplicate all sorts of things. In a Hyper-V environment, some of the things that might be deduplicated include VHD libraries, ISO files, and even live VHD volumes. Deduplication is great because it can reduce the amount of storage that your virtual machines consume.
Imagine that you have a host server that has a dozen virtual machines, and those virtual machines are all running the same operating system. Well, there's a lot of redundancy there with regard to operating system files. So if you perform deduplication, you can greatly reduce the amount of physical storage space that those virtual machines consume.
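Since this is a Windows Server 2012 file system feature, it is enabled per volume rather than per virtual machine. A minimal sketch (the drive letter is illustrative):

```powershell
# Install the deduplication feature, enable it on a data volume,
# and check on the space savings later
Add-WindowsFeature FS-Data-Deduplication
Enable-DedupVolume -Volume "E:"
Get-DedupStatus -Volume "E:"
```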
One of the benefits of that is that it can make the use of solid-state storage a lot more practical. Solid-state storage is great because it tends to be a lot faster than other storage media such as SATA or SAS drives. Because solid-state drives don't have any moving parts, they aren’t limited by mechanical constraints. In recent years the price of solid-state drives has dropped. Although the price of a drive itself is on par with other types of drives such as SATA or SAS, the cost per gigabyte is still higher than that of traditional hard drives, because solid-state drives have a much lower overall capacity. So if you can reduce the amount of space that your virtual machines consume, then you might be able to get away with implementing solid-state storage and seeing that performance gain without spending a lot of money beyond what you would normally spend on traditional hard drives.
Another new feature is SMB storage support. With previous versions of Hyper-V you had a couple of options for storage. You could store virtual machines locally using direct attached storage, or you could use iSCSI or Fibre Channel to connect to a Cluster Shared Volume.
Now with Hyper-V 3.0 you also have the option of putting virtual machines on SMB file shares. But there are some limitations that you need to be aware of. The use of legacy SMB is not supported; Hyper-V 3.0 requires SMB 3.0. Also, your ability to scale the use of virtual machines on SMB storage is limited by the bandwidth that is available, so this is primarily something that's going to be used in smaller organizations or for small-scale deployments. That's not to say that SMB storage can’t be used in larger environments. It's just that if you want to use SMB storage in a large environment, you have to have high-end hardware with lots of bandwidth to make that feasible.
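As a sketch, placing a new virtual machine on an SMB 3.0 share is just a matter of pointing the paths at the share (the share path and VM name are hypothetical):

```powershell
# Create a VM whose configuration files and virtual hard disk both
# live on an SMB 3.0 file share instead of local or SAN storage
New-VM -Name "VM1" `
    -MemoryStartupBytes 1GB `
    -Path "\\FileServer01\VMs" `
    -NewVHDPath "\\FileServer01\VMs\VM1\VM1.vhdx" `
    -NewVHDSizeBytes 60GB
```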
I mentioned earlier that a lot of the enhancements in Hyper-V 3.0 revolve around clustering, and another enhancement that's related to clustering is something called affinity and anti-affinity rules. The basic idea behind this is that when a failure occurs within a cluster, you never know where the virtual machines running on the failed server are going to fail over to. Sometimes you might have a need for certain virtual machines to stay together. If certain virtual machines need to fail over to a common host, then that's where affinity rules come into play. Affinity rules can make sure that those virtual machines all stay together during the failover and that they end up on a common host.
On the flip side, you might have regulations or business requirements that require certain virtual machines to never exist on the same host together. One practical example is domain controllers. You never want to be in a situation where all of your virtualized domain controllers exist on a common host, because if that host were to fail, then all of your domain controllers would be gone with it. So anti-affinity rules can keep certain virtual machines from ever failing over to a common host server.
Something else that Microsoft allows in Hyper-V 3.0 is virtual machine prioritization. The idea is that sometimes when a failure occurs, virtual machines might fail over to a host that's already overloaded, and that host may not have enough resources available to run all of the virtual machines it was previously running plus all of the virtual machines from the failover. In Hyper-V 3.0 it becomes possible to prioritize virtual machines so that the higher priority virtual machines start first, and then, once all the high-priority virtual machines are up and running, the lower priority virtual machines start up one at a time until you reach the limit of what the host server is capable of running. So those are a few good cluster-specific features that are new to Hyper-V 3.0.
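Both of these behaviors are configured through the failover cluster rather than through Hyper-V itself. A hedged sketch using the FailoverClusters PowerShell module (the cluster group names and the anti-affinity class name are hypothetical):

```powershell
# Tag two virtualized domain controllers with the same anti-affinity
# class name so the cluster tries to keep them on different hosts
(Get-ClusterGroup "DC1").AntiAffinityClassNames = "DomainControllers"
(Get-ClusterGroup "DC2").AntiAffinityClassNames = "DomainControllers"

# Give a critical VM a high failover priority
# (0 = no auto start, 1000 = low, 2000 = medium, 3000 = high)
(Get-ClusterGroup "SQLVM").Priority = 3000
```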
Another cool new feature is Offload Data Transfer support. If you really stop and think about it, the nature of virtualization is that a single physical host can accommodate multiple virtual workloads. But the flip side of that is that all those virtual workloads have to share a finite amount of hardware. So machines have to be configured in a way that prevents any one virtual machine from consuming excessive hardware resources. One of the problems with this is that certain operations tend to be resource intensive.
Normally when you do a file copy, you generate a lot of disk I/O. But in a SAN environment with Hyper-V 3.0, it's possible to offload copy operations to the SAN’s hardware using Offload Data Transfer, or ODX as it is often called. This allows the storage hardware to do all the heavy lifting and all of the I/O-intensive operations so that your virtual machines don't have to. The nice thing about this is that ODX support is built in. There is no Enable ODX button that you have to click and no options you have to configure. If Hyper-V detects that you’ve got hardware that supports ODX, it will use the hardware by default without you having to do anything to enable that functionality.
Another nice feature that I want to quickly talk about is the extensible virtual switch. Hyper-V, since the very beginning, has used something called virtual network switches. Virtual switches provide network connectivity to virtual machines. The problem with these virtual switches is that if you use data management or monitoring software to monitor your switches, the software sees Hyper-V virtual switches almost as a black box. In the past, management software couldn't see inside the virtual switches in order to do any sort of management or monitoring. This is changing with Hyper-V 3.0, because Microsoft has made the virtual switches extensible. This means that vendors can actually write code that plugs into the Hyper-V virtual switch so that their software can go in and monitor or manage virtual switches the same way it would manage a physical switch. So this is going to greatly extend your management and monitoring capabilities.
As you can see, Hyper-V 3.0 has a lot of great new features. I have only scratched the surface in talking about what’s new. There are many other features beyond what I've been able to talk about in this presentation. So I have to say that with version 3.0, Hyper-V has finally arrived. It is going to be a great product, and I'm really excited about using it in a production environment once it is finally released.