Thomas Jung

My Journey From VMWare to Hyper-V As An SAP Developer

Introduction

I want to say up front that this blog is aimed primarily at those of you who use Microsoft Windows as your OS. I want to describe my journey as I transitioned my local laptop from VMWare to Hyper-V and how this has impacted the various SAP software and development tools that I run on it.

First, a little background on why and how I was using VMWare on my laptop to begin with. I began using VMWare several years ago, mostly to allow me to run SAP HANA on my local laptop. Since HANA supports only Linux, I needed a virtualized environment in order to run it. I started by installing HANA directly myself on SUSE within a VM and eventually migrated over to HANA Express once it became available. I've had a good experience and good performance running HANA within a VM over the years; I'm sure I've spent more time working with my local installation of HANA in this fashion than in any other mode.

Over the years, the need for more and more virtualized environments to support the development I do has only grown. I added an ABAP system, Minikube, and even Project Kyma to my list of VMs.

Why Make a Change?

You might wonder: if I've had such a good experience using VMWare to support all these various options, why consider moving to something else? The answer was simple: increasingly I was finding solutions I wanted to use or try that simply wouldn't run in or alongside VMWare. New solutions like Docker, native Minikube on Windows, and WSL (Windows Subsystem for Linux) are all designed to run on Hyper-V, not VMWare.

Why can't Hyper-V and VMWare play nicely together? The short answer, for those of us who aren't virtualization experts: they both want some level of exclusive control over the hypervisor layer of the OS, and they can't both have it. Microsoft and VMWare do seem to be working on the issue by allowing a future version of VMWare to run on top of a new set of Hypervisor APIs ( https://blogs.vmware.com/workstation/2019/08/workstation-hyper-v-harmony.html ), but this is still some time away.

Not yet ready to give up on all those VM images (especially the HANA ones) that I had used for so long, I tried to find workarounds. For example, I ran Docker via an unsupported community script that used a VMWare image as the Docker virtualization engine, an approach my co-worker DJ Adams referred to as Inception-level virtualization. With it I could limp along for a while, dealing with the complicated port-forwarding networking required to keep the solution working.

But ultimately it was WSL, the Windows Subsystem for Linux, or more accurately the upcoming WSL 2, that really forced my decision to make a change. WSL 2 promises to bring a Linux kernel very nicely integrated into the Windows OS, but it still requires Hyper-V, even for its new, lightweight virtualization. If you do development on a Windows-based machine, I strongly encourage you to read more about what's coming soon in WSL 2:

https://docs.microsoft.com/en-us/windows/wsl/wsl2-about

https://devblogs.microsoft.com/commandline/wsl2-will-be-generally-available-in-windows-10-version-2004/

I’m very excited about the possibilities of WSL 2 and therefore decided that now was the right time to make the jump over to Hyper-V in preparation for it later this year.

First Attempt

A few weeks ago I made my first attempt at switching over. The process to activate Hyper-V is easy enough: basically just using the Add/Remove Features dialog of Windows itself. Right away you will see just how incompatible VMWare and Hyper-V really are. As soon as you reboot after turning on the Hyper-V feature, VMWare fails to start any images.
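
If you prefer to script it, the same thing can be done from the command line; a minimal sketch, assuming an elevated PowerShell prompt on a Windows 10 edition that includes Hyper-V (Pro or Enterprise):

    # Enable the Hyper-V feature and all of its sub-features
    Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All

    # A reboot is required before the change takes effect
    Restart-Computer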

Although going back is not difficult to configure, it does require a reboot each time. Therefore it's not really something you want to do multiple times a day as you switch between different environments and tools. So I had a strong incentive to either get everything working in Hyper-V or turn back for good.
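
For what it's worth, the usual way to flip back and forth is not to remove the Hyper-V feature entirely but to toggle the hypervisor launch type and reboot. A sketch, again from an elevated prompt:

    # Turn the Windows hypervisor off so VMWare can use hardware virtualization again
    bcdedit /set hypervisorlaunchtype off

    # Turn it back on when you want Hyper-V (each change needs a reboot)
    bcdedit /set hypervisorlaunchtype auto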

I was off to a good start with Hyper-V and Docker. Unlike the funky, unsupported scripts I had used previously to get Docker working with VMWare, I was able to use the standard Docker Desktop installer and graphical admin tool. For example, I got Portainer running without any special networking hoops to jump through.
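
As an illustration, this is roughly what the Portainer setup looks like with Docker Desktop in Linux containers mode. The port and image name were the project's defaults at the time I set this up, so treat it as a sketch rather than a recipe:

    # Create a volume for Portainer's data
    docker volume create portainer_data

    # Run Portainer and let it manage the local Docker engine through its socket
    docker run -d -p 9000:9000 --name portainer --restart always `
      -v /var/run/docker.sock:/var/run/docker.sock `
      -v portainer_data:/data portainer/portainer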

Emboldened by my easy success with Docker, I wanted to move on to the more complicated task of replacing the traditional virtual machine images themselves.

I was able to create a traditional VM in Hyper-V and set it up to boot from the openSUSE ISO to begin the installation.
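
The Hyper-V Manager wizard works fine for this, but the same VM can also be created from PowerShell. A sketch, where the VM name, memory, disk size, and file paths are all placeholders:

    # Create a Generation 2 VM with a new virtual disk, attached to the Default Switch
    New-VM -Name "openSUSE" -Generation 2 -MemoryStartupBytes 8GB `
      -NewVHDPath "D:\VMs\openSUSE.vhdx" -NewVHDSizeBytes 120GB `
      -SwitchName "Default Switch"

    # Attach the installation ISO and make it the first boot device
    Add-VMDvdDrive -VMName "openSUSE" -Path "D:\ISOs\openSUSE.iso"
    Set-VMFirmware -VMName "openSUSE" -FirstBootDevice (Get-VMDvdDrive -VMName "openSUSE")

    # Linux guests generally need Secure Boot disabled or set to the Microsoft UEFI CA template
    Set-VMFirmware -VMName "openSUSE" -EnableSecureBoot Off

    Start-VM -Name "openSUSE"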

But this is where my luck started to run out.  I couldn’t connect to the VM in order to continue with the installation.

The Default Switch network device wasn't responding. In fact, the networking stack was consuming a high amount of CPU and was constantly dropping and recreating the Hyper-V virtual network devices, to the point that even the Hyper-V Manager became unresponsive. To make matters worse, I noticed that my DirectAccess connection to the SAP network was also broken.

So my first learning was that the networking setup in Hyper-V is a bit more complicated and requires more hands-on care than VMWare. I was also seeing interruptions to my other, existing network connections that I couldn't live without for long. Therefore I was forced to roll everything back and give up on Hyper-V for the time being.
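
If you run into similar symptoms, a few read-only PowerShell commands at least show you what Hyper-V has done to your networking; I offer them only as a starting point for your own troubleshooting:

    # List the Hyper-V virtual switches (the NAT-based Default Switch should show up here)
    Get-VMSwitch

    # Show the virtual and physical adapters Windows currently sees
    Get-NetAdapter | Sort-Object Name | Format-Table Name, InterfaceDescription, Status

    # List user-defined NAT networks (the Default Switch manages its own NAT internally)
    Get-NetNat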

Second Attempt

Never one to give up easily, I decided to give the switch over to Hyper-V a second try. This time I came into the project fully prepared to do some deep networking troubleshooting. I'm not going to share all the dirty details, because it was honestly an entire afternoon of looking at logs, cursing, tinkering with network settings, more cursing, PowerShell scripts specific to Hyper-V networking, and finally lots more cursing.

The end result: a long time ago I had installed DNS caching software on my laptop, which I was using to provide wildcard DNS routing for running HANA Express XSA with hostname-based routing. Somehow this DNS cache service was completely breaking the virtual networking in Hyper-V. As soon as I disabled that service, the Default Switch in Hyper-V appeared and started providing stable NAT-based networking. I was finally in business!
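
If you find yourself in the same corner, disabling a suspect service and checking that the Default Switch comes back is quick from PowerShell. The service name below is purely a placeholder for whatever DNS caching tool you happen to have installed:

    # Stop the suspect DNS caching service and keep it from starting again (placeholder name)
    Stop-Service -Name "MyDnsCacheService"
    Set-Service -Name "MyDnsCacheService" -StartupType Disabled

    # Verify that the Default Switch is present again
    Get-VMSwitch -Name "Default Switch"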

Now able to connect to my virtual machines running in Hyper-V, I could start a Linux OS installation.

Docker, Minikube, and a full Linux VM image: now I was making some progress. I also experimented with migrating my VMWare images in OVA format over to Hyper-V. I tried a couple of different tools but kept running into issues, I believe because I had used NVMe-based virtual disks in VMWare; that seemed to blow up most of the conversion tools. Therefore I ultimately decided the better option was to re-install HANA Express from scratch.
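
If your VMWare images use plain (non-NVMe) virtual disks, converting just the disk rather than the whole OVA may be the easier route. One widely used option is qemu-img, which can rewrite a VMDK as a Hyper-V VHDX; a sketch with placeholder file names:

    # Convert a VMWare disk to the Hyper-V VHDX format (qemu-img is a separate, free download)
    qemu-img convert -f vmdk -O vhdx .\hana-express.vmdk .\hana-express.vhdx

    # Attach the converted disk to a new VM (Generation 1 for a BIOS-booted guest)
    New-VM -Name "hxe-migrated" -Generation 1 -MemoryStartupBytes 16GB `
      -VHDPath ".\hana-express.vhdx" -SwitchName "Default Switch"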

HANA Express was up and running within a few minutes. I could then move on to more complex VM images. For example, I installed an ABAP S/4 Foundation 1909 system using the same sort of approach.

With all the basic capabilities covered, I felt like I could move forward and not look back. I then began experimenting with some of the new things I could try thanks to the switch to Hyper-V.

Although WSL 2 is still a few months away, I could already start using the first version of the Windows Subsystem for Linux. For example, I've tried my hand at VS Code's Remote-WSL support for terminal access and testing.
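
Getting to that point takes only a couple of commands once the WSL feature is enabled and a distribution is installed from the Microsoft Store; the Remote-WSL extension must also be installed in VS Code. A sketch:

    # Enable the (version 1) Windows Subsystem for Linux feature, then reboot
    Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Windows-Subsystem-Linux

    # List the installed distributions
    wsl --list

    # From inside a WSL shell, open the current folder in VS Code via Remote-WSL
    code .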

I'm also enjoying having a local Kubernetes environment via Minikube directly on my main OS. I can now use tools like the Kubernetes explorer in VSCode to interact with and manage my local K8S installation. This was much more complicated before, when I could only run Kubernetes in a Linux VM within VMWare.
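
Pointing Minikube at Hyper-V is mostly a matter of the driver flag; the memory and CPU values below are arbitrary examples:

    # Start a local cluster with the Hyper-V driver (run from an elevated prompt)
    # Optionally pin a switch with: --hyperv-virtual-switch "Default Switch"
    minikube start --driver=hyperv --memory=8192 --cpus=4

    # Confirm kubectl is talking to the new cluster
    kubectl get nodes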

Last but certainly not least, I use the local Minikube to also install and run Project Kyma locally. The only “trick” I found is that when using Hyper-V with Kyma you have to supply the hypervVirtualSwitch option and choose a target virtual network device during the provision command. Otherwise it works just great.
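
For reference, this is roughly what that provision call looks like; the flag spelling follows what worked for me, so double-check it against the help output of your Kyma CLI version:

    # Provision a local Kyma cluster on Minikube using the Hyper-V driver and the Default Switch
    kyma provision minikube --vm-driver hyperv --hypervVirtualSwitch "Default Switch"

    # Then install Kyma into that cluster
    kyma install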

What Does the Future Hold?

I've happily moved over all my current systems and functionality and even expanded a bit thanks to the new possibilities of Minikube running within Windows. As I look to the future, there are a few things I'd like to try out. For example, others have written about running HANA Express via Docker instead of in a full VM. I plan to give that a try, but I think I want to wait until I can run Docker on WSL 2 for improved performance. In general, I think the promise of WSL 2 is very exciting. I'm hoping for a future when I can run Docker, Minikube, Kyma, and maybe even HANA Express directly in WSL 2. After all, most of this effort was driven by the future possibilities that WSL 2 will likely provide.

      4 Comments
      Martin Stenzig

      Glad to see I am not the only person using Hyper-V. 🙂 I started working on this setup 8 months ago (I don't have as many configurations as you do) but found that it's super convenient.

      For me the initial motivation was to have a separate development system running VS Code, Git, Node, and so on, that I can blow away or reset at a moment's notice without having to re-image my whole machine.

      So far my experience has been very positive, even though I should probably do some tuning when I have a few minutes, as the Windows guest in Hyper-V seems to be distinctly slower than my base system.

      James Bungay

      Nice breakdown of your VM journey!

      I've been running a bunch of different systems: a couple of S/4 systems (1809 and 1909), along with some individual HANA setups for XSA prototyping, and other non-SAP systems for different purposes.

      I've been primarily using VMware Workstation, but for my S/4 setup I have the HANA database instance running in VirtualBox because it allows me to assign more than 64 GB of RAM. And I use ZeroTier to provide access to others on my team (I run the VMs on my laptop).

      Pretty handy to host a lab of different systems locally in this format, and nice to hear about other setups using different solutions.

      Cheers,

      James

      Muhammad Ilyas

      I tried it several times and failed to convert all of my VMDKs to VHDs. Apart from Docker and Minikube, auto-starting VMs in case of unexpected restarts is much needed in my case and in fact very useful too. Please share your failures with conversions.

      Tayfun Deger

      An impressive post, thanks for sharing.