Booting to a VHD in Windows 7

This is something I tried before but didn’t complete for some reason. Now that I’m on vacation this week and I’ve got some extra time to play, I went back to finish it.

As you probably know by now, Windows 7 has the ability to boot into a .vhd file. This is awesome, as you can create a virtual testing environment that runs directly against your hardware.

There are a few gotchas, though. You're limited in the OSs you can run on the virtual side to Windows 7 and Windows Server 2008 R2. I've seen posts of people getting other OSs to run, but I haven't tried. And I've seen warnings not to do this on a laptop, though I'll try it once I install my new, bigger laptop hard drive next week.

Anyway, this is how I did it. I had created a Windows 7 virtual machine in Virtual PC 2007. I used the vhd from that VM instead of creating a new one, though you certainly could if you wanted to.

The first thing I did was to run sysprep inside my VM. I'm not an expert on sysprep; I just followed instructions I found on the web. Briefly, sysprep prepares the image so it can be reconfigured for the hardware it boots on next. You'll find it in C:\Windows\system32\sysprep. Run it as an admin. Choose the default target of the Out-of-Box Experience (OOBE), check the option to generalize, and choose to shut down. After sysprep finishes it will power down your VM.
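
If you'd rather skip the GUI, you can do the same thing from an admin command prompt inside the VM; something like this should be equivalent to the choices above:

rem generalize the image, reset to the out-of-box experience, and shut down when done
cd /d C:\Windows\system32\sysprep
sysprep /generalize /oobe /shutdown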

I didn’t do this, but it’s probably a good idea to make a copy of the vhd at this point.

The next thing I did was to set up the Windows boot loader to see the vhd file. I opened an admin command prompt and ran the following: bcdedit /copy {current} /d "Win 7 VHD". This returns a GUID, which I saved to Notepad. "Win 7 VHD" is the description I wanted to see on the boot menu. After that I ran these three commands:

bcdedit /set {guid} device vhd=[C:]\VM\Win7\Win7.vhd

bcdedit /set {guid} osdevice vhd=[C:]\VM\Win7\Win7.vhd

bcdedit /set {guid} detecthal on

In these commands I replaced {guid} with the GUID I saved earlier. VM\Win7\ is the path to my vhd file, and Win7.vhd is the file I'm using. Note that the drive letter is in square brackets: [C:].
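
Before rebooting you can double-check the new entry by listing everything in the boot store:

rem confirm the new entry's device and osdevice both point at the vhd
bcdedit /enum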

And that's just about it. Once I restarted my computer I could see both my original Windows 7 installation and my new vhd boot option. When I chose the vhd, Windows started and applied the hardware changes. After that I just logged in and ran my Windows vhd. Once in the virtual environment, I can see all the drives on the computer, including those for my "real" Windows 7. Notice that Disk 1 has a blue icon; this shows that it's a vhd file. It also shows the reserved system partition. I can also see the files on the other physical drives.

[Screenshot: VMSetup]

I don't get this when I'm running my physical Windows 7. I can mount the vhd file (in Disk Management, go to Action > Attach VHD), but it doesn't stay mounted between reboots. I haven't tried mounting it with DISKPART yet; I'll try that when I create my laptop VM.
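
For what it's worth, I'd expect the DISKPART version to go something like this (untested on my part, and I don't know yet whether it sticks between reboots any better):

diskpart
rem point diskpart at the vhd file, then attach it as a disk
select vdisk file="C:\VM\Win7\Win7.vhd"
attach vdisk
exit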


The only drawback is that the vhd is no longer portable; I can't run it in Virtual PC 2007 anymore. I could probably run sysprep again to get it back, but I think I'll keep it as it is for now.


Virtualization – final wrap-up

This is another topic I want to wrap up before the end of the year.

A quick overview: earlier this year, I did some testing of virtualizing our production servers on VMWare. I captured a typical workload from our busiest server and replayed it on servers set up for testing: a 32-bit physical server with 16 CPUs set up as a production server, the same server with hyper-threading turned off, a 32-bit and a 64-bit virtual server with 4 CPUs, and a 32-bit and a 64-bit virtual server using vSphere and 8 CPUs. All servers had 16 GB of RAM.

After running the workload multiple times on each server configuration we compared results. What we saw was understandable – neither the 4 CPU nor the 8 CPU servers matched the 16 core baseline. Even the physical server with hyper-threading off fell short.

What I didn't show in earlier posts was the counter for latches. Latches are the #1 wait type on our servers, and this held true on all the test servers. Our servers aren't running optimized, and that is magnified in a virtual environment. Having 16 cores lessens the performance hit from all those latches in production.
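
If you want to see where your own waits are concentrated, a quick check from a command prompt (the server name below is just a placeholder) looks something like this:

rem top 10 cumulative waits since the last SQL Server restart
sqlcmd -S MyServer -E -Q "SELECT TOP 10 wait_type, waiting_tasks_count, wait_time_ms FROM sys.dm_os_wait_stats ORDER BY wait_time_ms DESC"

On our servers the latch waits show up at or near the top of that list.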

So we're not going to virtualize, at least not the production servers. While virtualization is useful and can work for SQL, it's not a good idea to try to virtualize servers that already have performance problems. We're currently using VMWare to run our development and testing environments as well as a few smaller, less used SQL servers, and we're not seeing any issues with them.

I'm not going to publish a summary of all my counters, but I'll make them available on request if anyone is interested.

Alas, no booting to a .VHD for me

I’ve been reading quite a bit lately about attaching a virtual drive to a computer in Windows 7 and Windows 2008 Server R2. It would be nice to have the ability to boot directly into a virtual server. But I’m not seeing a way I can achieve it, at least right now.

My hardware would be my current laptop, an HP Pavilion TX 2000 (I wanted the tablet capabilities) running a 64-bit version of Windows 7 Ultimate. It has 2 CPUs, 4 GB of RAM, and a 250 GB hard drive, certainly enough to run one virtual machine. I wanted to run a virtual 64-bit Windows 2008 Server R2 as my BI sandbox. Unfortunately, neither Virtual Server nor Virtual PC supports running 64-bit guests, only 32-bit. So I built my VM using VMWare Workstation. But those virtual disks are .vmdk files, which Windows can't mount natively.

So I created another VM, this time a 32-bit version of Windows 2008. I created a fixed disk, installed the OS, and followed the directions on Charlie Calvert's blog. Mounting the .vhd file was simple, as was using bcdboot. When I rebooted, both servers showed up in the boot menu. Everything good to go, right?
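
(For anyone who hasn't used it, the bcdboot step is a single command once the vhd is attached in Disk Management; the drive letter below is just a placeholder for whatever letter the attached vhd gets.)

rem copy the boot files from the vhd's Windows folder and add a boot menu entry for it
bcdboot V:\Windows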

Wrong. When I tried to boot into the .vhd, I'd get an error message that the installation couldn't start because of a hardware change. And Computer Management no longer showed the .vhd drive as mounted. That's when I went back and reread the fine print.

Booting from a vhd in Windows 7 only supports Windows 7 or Windows Server 2008 R2 as the guest. My VM was plain 2008, not R2. At this point my options are to restore my VMWare VM (luckily backed up) or to try installing R2 directly into a vhd. But I don't think that will work either; it wouldn't be compatible with Virtual PC.
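
If I do try installing R2 directly into a vhd, my understanding is that you can open a command prompt during setup with Shift+F10, create and attach the vhd there, and then pick it as the install target; something along these lines (the file name and size are just examples):

diskpart
rem create a 40 GB fixed-size vhd, then attach it so setup can see it as a disk
create vdisk file="C:\VM\R2.vhd" maximum=40960 type=fixed
select vdisk file="C:\VM\R2.vhd"
attach vdisk
exit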

Well, VMWare Workstation is still a great option. It’s just disappointing I haven’t figured out a way to do this yet.  

Replaying traces, Part 5

So I’m now testing my workload on a virtual 64 bit server, again hosted by VMWare, again with only 4 CPUs. And again I find more issues.

I've run this replay 3 times so far. All three times the replay appeared to hang at the 99% mark, similar to the 8 CPU physical test. The only difference was that, for the final test, I didn't save the results to disk. I did this because on the second test there appeared to be SQL activity suspended by the TRACEWRITE wait type, and the drive it was writing to was the only virtual drive; all the databases are on SAN storage.
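
If anyone wants to check this on their own replays, a quick way to see what active sessions are currently waiting on (the server name is just a placeholder) is something like:

rem show active requests and their current wait types; sessions suspended on TRACEWRITE would show up here
sqlcmd -S MyServer -E -Q "SELECT session_id, status, command, wait_type, wait_time FROM sys.dm_exec_requests WHERE wait_type IS NOT NULL"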

While all three hung at the 99% mark after about 3 hours, each test had different results. The first test showed transactions against the database throughout the trace (the black line; the blue line is % Processor Time, the yellow line is % Privileged Time, and the red line is transactions/sec against tempdb). The processor time showed activity from Profiler after the 99% mark.

[Chart: Test 1]

The second test showed no transactions after the 99% mark. This is the one that showed the TRACEWRITE wait type. Notice that there are no transactions against the db after the 3 hour mark this time.

[Chart: Test 2]

Test 3 shows the same pattern as test 1. There were no wait times for tests 1 and 3.

[Chart: Test 3]

There is one more issue with these three tests. After stopping the replays, I had to manually kill the Profiler process because Profiler became unresponsive. The third test stayed alive and was still stopping the process on its own, so I let it go. However, the next morning I checked and it was still "stopping" after 13 hours, and Profiler was still responding.

The more I run these replays the more confused I get. I'm going to try one more time. This run will be from a remote machine, and I'm only going to capture the counters on the virtual server.

Replaying traces, Part 3

I had a few surprises when I began replaying a production workload on a virtual server. I'm using the same workload I captured from a production server and replayed on a physical server under different scenarios. The first virtual test was on a 32-bit server hosted on a VMWare box with CPU affinity turned on. A limitation of virtual servers (at least for now) is that they can have at most 4 CPUs.

The first surprise was that the replay was faster than the 4 CPU test on a physical server: 3:07 virtual compared to 3:40 physical. The second surprise was that the total number of transactions appeared to have dropped.

I’ll be looking into this to see what I’m missing, and I’ll post more details soon.

Replaying traces with SQL Profiler

One of the projects I've become involved with at work is the virtualization of our regional databases. However, when we hooked up a copy of one of the databases in a virtual environment we noticed a huge performance degradation, mainly in regard to CPU usage. The only part of the server that was virtualized was the drive containing the OS; the user databases were mainly placed on a SAN, so the IO looked acceptable, at least based on the small workload we generated. We knew the servers were not comparable; the current, physical server has 16 CPUs, whereas the virtual server is limited by VMWare to 4. Since performance was much worse than expected, we decided to try replaying a workload from one of our production servers on a virtual server and compare the results.
 
We thought of a few different ways to do this, but we eventually decided on using the replay ability of SQL Profiler. So we captured a trace from production, set up a copy of the databases on a second physical server configured identically to the production server, and then used the different replay options inside Profiler. What we saw was surprising.

The original trace was run for an hour. Using the multithreading option on the test box, the replay took twice as long, and the CPU usage was at least 10% higher for the length of the replay. When we set the number of threads to 64, the replay took 5 hours, and the CPU usage was maybe 25% of the original trace.

So the next step is to replay a trace on the same server to see what the results are. I’ll post more on this in a few days, after a few more test cases. I’ll also include some of the actual numbers.