[Tech] Dve’s Guide to MSWin Virtualization for Daoc (Unix)
Posted: Jan 09, 2017 13:24
CptDve’s Guide to MSWindows Virtualization for Daoc (on Linux! Maybe works on Mac, too.)
EDIT: For performance reasons, I switched from “vmplayer” to KVM via qemu with GPU passthrough more than 8 months ago, but that requires special hardware (2 monitors and a 2nd video card, which can be an old one you already retired). Because most of you won't want to upgrade your hardware, I will keep the description of the vmplayer settings (with the mouse fixes etc.) here. If you have a second monitor, an old video card and a fitting PCIe slot, scroll to the middle of this post (Option 2: kvm via qemu with GPU-passthrough) for a better-performing solution.
Pretext (important read for... reasons):
This is no quote but a highlighted pretext wrote:There is actually no reason for daoc to run in a virtual machine as a Linux or Mac user, as the client itself runs natively on all systems via WINE. (WINE Is Not an Emulator; it is a runtime environment that provides all the system-related MS Windows pieces, such as self-implemented DLLs a program might expect from a real Windows, and translates Windows system calls into Unix ones, so in the end the program runs natively on Linux.) This has been possible for more than TEN YEARS already, but now we are forced to use Windows.
The reason why we have to switch from WINE to full virtualization is the new uthgard launcher. It uses dotnet4.6 for cryptographic reasons (login via token), which reportedly does not work on WINE. This is only partly true, as I was recently able to install dotnet4.6 successfully via winetricks. But even then I still get the message ‘An error occurred during a cryptographic operation’ when using the launcher.
I really hope this gets solved sooner or later in a newer version of the uthgard launcher using a different approach, because it is a real waste not to run daoc natively on Linux (which is very resource-friendly and comfortable) when daoc itself is fully supported and only the uthgard patcher/launcher isn’t. I don’t regard the following guide as a long-term solution, but rather as a temporary workaround. The way we have to do this for now is to virtualize a computer with emulated hardware inside your actual OS, install a REAL MS Windows on top of it and run the uthgard launcher there. This is actually not acceptable to me, and there are a million reasons why people do not want to use Windows (the most obvious one being freedom). Even though it is 2017 and it was possible to play daoc without it for 10 years, we are now forced to use it. In fact it is the only way to run uthgard until they change the method of the connection chain.
I encourage the devs and coders (/wave thekroko) not to use this guide as an excuse to stop considering a better solution, because this is nothing else than installing windows. Please remember the ideals of the open source project called DOL, of which you were a part(?). It is more than just a relative of the Uthgard project; without DOL, there might have been no Uthgard at all. To some degree, DOL is the father of Uthgard. I don't ask you to make the Uthgard project open source; the only thing I ask is to make it accessible to people of the open source community. Thank you, if you are reading this!
Now back to the topic! I tried a few virtualization methods (there are quite a few already), and the most common choice was Oracle’s VirtualBox, because the free version is (or by now maybe 'was') fully open source. Sadly, I had no luck getting hardware acceleration in VirtualBox, so my video card was not used at all and everything (even 3D rendering) had to be calculated on my CPU, which made playing games impossible on my rig. I heard of others using VMware Workstation, but I was put off by their ridiculous licensing prices. Then I noticed it is possible to run it on the free “vmplayer”, if you are not deterred by changing some config files manually. Hardware acceleration works there!
Old Option 1: vmplayer (if you don't have a second video card)
- Install the package from your distribution's repository that contains "vmplayer".
To do this, consult the wiki of your Unix distribution regarding VMware. The most critical part is the patch level of your kernel: as maintaining the free version of vmplayer is not VMware’s first priority, it will not work with the most cutting-edge unstable or custom kernel from testing, and right now not even on the normal stable kernel (on Arch Linux) without some patches, which should be described on your distribution's wiki pages. After installing, check that it works with your kernel by launching "vmplayer".
- Install any MS Windows iso (>=win7) you want on the vmplayer.
I think you can get a ‘free’ win10 copy from MS themselves, if you don’t mind the permanent “Activate Windows” watermark on the desktop. It is effectively a permanent trial version from MS, and apparently they will never shut it down.
- Install vmware-tools on the guest system by using the menu option in the vmplayer frontend
- Resolve mouse issues by adding some lines in the .vmx file of the virtual machine you created
As some issues occur in mouse-capturing applications like 3D games, you need to open the .vmx (it’s somewhere in /home/user/vmware) and add/change the following lines:
vmmouse.present = "FALSE"
mouse.vusb.useBasicMouse = "FALSE"
usb.generic.allowHID = "TRUE"
I am not 100% sure that was all I added, so there might have been one or two other options. Tell me if you notice any trouble here.
- Install a browser of your choice and follow the steps on uthgard.org to install the client and launcher
All you need is a browser, dotnet4.6, vcrun2015 (x86), the daoc client and the uthgard launcher. Don't bloat up your virtual machine.
- Adjust position and size of vmplayer's window to get a 'windowed fullscreen' experience
Don’t do this manually with your mouse; use a command like wmctrl so you can script it, as you will need to reposition the window every time you start the box and again after the character selection screen. After finding the correct position and size (positioned to hide vmplayer’s menu bar at the top, and sized not to overlay the tint panel at the bottom of your Linux desktop if you have one), launch the game (connect to uthgard) and select the matching settings on the login screen for your adjusted vmware window (full screen, with the resolution set to the size of vmplayer’s window). My wmctrl command to give the vmplayer window a "borderless fullscreen" feel looks like this:
wmctrl -r "uthgardbox - VMware Workstation 12 Player (Non-commercial use only)" -e 0,-4,-57,1924,1107
(It's 2 pixels on the left and right because of the window border, and the height is shortened because I want to keep my tint panel at the bottom. 57 pixels is the height of the menu bar of vmplayer's window, which I hide by positioning it above the screen.) You will have to adjust this here and there, because I am sure you use different window managers with different menu bar sizes.
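If your setup differs, it may be easier to compute the geometry than to guess it. Here is a small sketch; the sizes below are assumptions from my setup, so measure your own border, menu bar and panel heights:

```shell
#!/bin/sh
# Derive the wmctrl geometry argument from assumed sizes (adjust to your setup).
SCREEN_W=1920; SCREEN_H=1080    # desktop resolution
BORDER=4                        # horizontal window-border overhang to hide
MENUBAR=57                      # vmplayer menu bar height, pushed above the screen
PANEL=30                        # bottom panel height to keep visible
X=$((-BORDER)); Y=$((-MENUBAR))
W=$((SCREEN_W + BORDER))
H=$((SCREEN_H + MENUBAR - PANEL))
# Prints the -e argument: 0,-4,-57,1924,1107
echo "0,$X,$Y,$W,$H"
```

Feed the printed value to wmctrl's -e option as in the command above.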
To fully tab out of the game AND the whole virtualized MS Windows, you will now have to use CTRL+ALT instead of ALT+TAB. It takes a while to get accustomed to it, but ALT+TAB will just send you to the desktop of your ugly Windows box, which you want to see as little as possible.
NEW Option 2: kvm via qemu with GPU-passthrough (second monitor and video card required)
- Check if your CPU + Motherboard support Hardware Virtualization and IOMMU
For Intel, hardware virtualization is called VT-x and IOMMU support is called VT-d: https://ark.intel.com/Search/FeatureFil ... s&VTD=true
For AMD: Bulldozer generation and up should be compatible, but better check for yourself.
If you are not sure about your motherboard, no problem. We are going to check it by enabling the iommu option in your bootloader. Plug your 2nd video card into your free PCIe slot, then add "intel_iommu=on" to the options in your bootloader entry (the location depends on your bootloader; mine looks like this):arch.conf wrote:title Arch Linux
linux /vmlinuz-linux
initrd /intel-ucode.img
initrd /initramfs-linux.img
options root=PARTUUID=48053146-731f-4cca-a25e-02ec93a0a177 rw intel_iommu=on
If you are using GRUB, it could be like this: GRUB_CMDLINE_LINUX_DEFAULT="... intel_iommu=on"
Don't forget to rebuild your configs, boot image, initramfs, etc., then reboot.
Run lspci -nnk and look for the BUS IDs of both of your GPUs. It should look like this:lspci -nnk wrote:01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GP107 [GeForce GTX 1050 Ti] [10de:1c82] (rev a1)
Subsystem: NVIDIA Corporation GP107 [GeForce GTX 1050 Ti] [10de:11bf]
Kernel modules: nouveau, nvidia_drm, nvidia
01:00.1 Audio device [0403]: NVIDIA Corporation GP107GL High Definition Audio Controller [10de:0fb9] (rev a1)
Subsystem: NVIDIA Corporation GP107GL High Definition Audio Controller [10de:11bf]
Kernel driver in use: snd_hda_intel
Kernel modules: snd_hda_intel
...
02:00.0 VGA compatible controller [0300]: NVIDIA Corporation GK107 [GeForce GT 640] [10de:0fc1] (rev a1)
Subsystem: ASUSTeK Computer Inc. GK107 [GeForce GT 640] [1043:83f3]
Kernel modules: nouveau, nvidia_drm, nvidia
02:00.1 Audio device [0403]: NVIDIA Corporation GK107 HDMI Audio Controller [10de:0e1b] (rev a1)
Subsystem: ASUSTeK Computer Inc. GK107 HDMI Audio Controller [1043:83f3]
Kernel modules: snd_hda_intel
So 01:00.0 is the BUS ID of my main GPU, while 02:00.0 is the BUS ID of my secondary, older GPU that I want to use for Daoc. We can now check whether an iommu_group was created for our secondary GPU:
[david@arch ~]$ ls -lha /sys/bus/pci/devices/0000\:02\:00.0/iommu_group/devices/
total 0
drwxr-xr-x 2 root root 0 Feb 14 14:47 .
drwxr-xr-x 3 root root 0 Feb 14 14:46 ..
lrwxrwxrwx 1 root root 0 Feb 14 14:47 0000:00:01.0 -> ../../../../devices/pci0000:00/0000:00:01.0
lrwxrwxrwx 1 root root 0 Feb 14 14:47 0000:00:01.1 -> ../../../../devices/pci0000:00/0000:00:01.1
lrwxrwxrwx 1 root root 0 Feb 14 14:47 0000:01:00.0 -> ../../../../devices/pci0000:00/0000:00:01.0/0000:01:00.0
lrwxrwxrwx 1 root root 0 Feb 14 14:47 0000:01:00.1 -> ../../../../devices/pci0000:00/0000:00:01.0/0000:01:00.1
lrwxrwxrwx 1 root root 0 Feb 14 14:47 0000:02:00.0 -> ../../../../devices/pci0000:00/0000:00:01.1/0000:02:00.0
lrwxrwxrwx 1 root root 0 Feb 14 14:47 0000:02:00.1 -> ../../../../devices/pci0000:00/0000:00:01.1/0000:02:00.1
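An alternative check that does not require knowing the BUS ID in advance is to walk /sys/kernel/iommu_groups and print every group with its devices. This is a generic sketch (it simply prints nothing if IOMMU is not enabled):

```shell
#!/bin/bash
# List every IOMMU group and the devices it contains.
# Prints nothing at all when IOMMU is disabled or unsupported.
shopt -s nullglob
for group in /sys/kernel/iommu_groups/*; do
    echo "IOMMU group ${group##*/}:"
    for dev in "$group"/devices/*; do
        # Print the lspci line for this device (slot name is the dir name).
        echo "    $(lspci -nns "${dev##*/}")"
    done
done
```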
If you don't see this, iommu/vt-d was not enabled or your motherboard might not support it.
- Fix the IOMMU grouping with a custom-built kernel
As you can see in the last "ls -lha" check, we currently have BOTH GPUs in the same IOMMU group (GTX 1050 TI at 01:00.0 and GT 640 at 02:00.0). We don't want this, because the system cannot share hardware between host and guest if the devices are in the same IOMMU group. In order to split them up, we need a custom-built kernel that allows us to change this.
On Arch Linux, the package we require is called "linux-vfio" (I am using both linux-vfio and linux-vfio-lts in case the usual one gets broken by updates). If you have pacaur, all you have to do is "pacaur -S linux-vfio" and then wait 1-2 hours while the kernel is being compiled.
After compilation, create a bootloader entry for the new kernel and add the options "intel_iommu=on" and "pcie_acs_override=downstream". This looks different depending on your bootloader; in my "/boot/loader/entries/" I copied the file "arch.conf", named it "arch-vfio.conf" and edited it to use the new image, initramfs and extra options:arch-vfio.conf wrote:title Arch Linux VFIO
linux /vmlinuz-linux-vfio
initrd /intel-ucode.img
initrd /initramfs-linux-vfio.img
options root=PARTUUID=... rw intel_iommu=on pcie_acs_override=downstream
Now BEFORE you reboot and test it, you should install the DKMS (Dynamic Kernel Module Support) version of your video card drivers, or they won't work on your custom kernel. The DKMS driver for nvidia is called "nvidia-dkms" on Arch Linux. It will automatically reinstall the drivers and generate new boot images/initramfs whenever there is an update.
Reboot, select the new bootloader entry and pray that your video drivers are working. If it was successful, check your iommu grouping again. It should now look like this:
ls -lha /sys/bus/pci/devices/0000\:02\:00.0/iommu_group/devices/
total 0
drwxr-xr-x 2 root root 0 Feb 14 14:57 .
drwxr-xr-x 3 root root 0 Feb 14 14:57 ..
lrwxrwxrwx 1 root root 0 Feb 14 14:57 0000:02:00.0 -> ../../../../devices/pci0000:00/0000:00:01.1/0000:02:00.0
lrwxrwxrwx 1 root root 0 Feb 14 14:57 0000:02:00.1 -> ../../../../devices/pci0000:00/0000:00:01.1/0000:02:00.1
We did it! The BUS ID of our main GPU (GTX 1050 TI) is gone from the iommu group, and only the BUS ID of our older GT 640 is left.
- Grabbing the 2nd video card away from the host system
Now we have the problem that the host system doesn't know that this video card is supposed to be used by another system. So if we want to keep the card for our guest system, we have to claim it before the kernel starts:
Add the kernel module "pci-stub" to the MODULES="..." section in "/etc/mkinitcpio.conf" like this:
MODULES="... pci-stub"
Check the PCI-IDs of both the VGA and the Audio device of your secondary video card with "lspci -nnk":02:00.0 VGA compatible controller [0300]: NVIDIA Corporation GK107 [GeForce GT 640] [10de:0fc1] (rev a1)
Subsystem: ASUSTeK Computer Inc. GK107 [GeForce GT 640] [1043:83f3]
Kernel driver in use: nvidia
Kernel modules: nouveau, nvidia_drm, nvidia
02:00.1 Audio device [0403]: NVIDIA Corporation GK107 HDMI Audio Controller [10de:0e1b] (rev a1)
Subsystem: ASUSTeK Computer Inc. GK107 HDMI Audio Controller [1043:83f3]
Kernel driver in use: snd_hda_intel
Kernel modules: snd_hda_intel
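If you don't want to read the IDs off by eye, the [vendor:device] pair can be pulled out of the lspci output with a little sed. A sketch, fed here with a sample line from above (in practice you would pipe lspci -nn output into it):

```shell
#!/bin/sh
# Extract the [vendor:device] ID (e.g. 10de:0fc1) from an lspci -nn line.
sample='02:00.0 VGA compatible controller [0300]: NVIDIA Corporation GK107 [GeForce GT 640] [10de:0fc1] (rev a1)'
id=$(printf '%s\n' "$sample" |
     sed -n 's/.*\[\([0-9a-f]\{4\}:[0-9a-f]\{4\}\)\].*/\1/p')
echo "$id"    # prints 10de:0fc1
```

The pattern only matches the bracketed hex pair, so class codes like [0300] and name brackets like [GeForce GT 640] are skipped.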
Now we can grab both devices, 10de:0fc1 and 10de:0e1b, before the kernel does, by adding another option to our boot entry:arch-vfio.conf wrote:title Arch Linux VFIO
linux /vmlinuz-linux-vfio
initrd /intel-ucode.img
initrd /initramfs-linux-vfio.img
options root=PARTUUID=... rw intel_iommu=on pcie_acs_override=downstream pci-stub.ids=10de:0fc1,10de:0e1b
Rebuild your initramfs, reboot and check with "lspci -nnk" that pci-stub now claims your card:lspci -nnk wrote:02:00.0 VGA compatible controller [0300]: NVIDIA Corporation GK107 [GeForce GT 640] [10de:0fc1] (rev a1)
Subsystem: ASUSTeK Computer Inc. GK107 [GeForce GT 640] [1043:83f3]
Kernel driver in use: pci-stub
Kernel modules: nouveau, nvidia_drm, nvidia
02:00.1 Audio device [0403]: NVIDIA Corporation GK107 HDMI Audio Controller [10de:0e1b] (rev a1)
Subsystem: ASUSTeK Computer Inc. GK107 HDMI Audio Controller [1043:83f3]
Kernel driver in use: pci-stub
Kernel modules: snd_hda_intel
With this kernel driver in place, we can release the video card to a guest system once we fire it up. To make this easier, create the following script and save it as /usr/bin/vfio-bind (don't forget to run "chmod +x /usr/bin/vfio-bind" afterwards to make it executable):
#!/bin/bash
# Unbind the given PCI devices (e.g. 0000:02:00.0) from their current
# driver and hand them over to vfio-pci.
modprobe vfio-pci
for dev in "$@"; do
    vendor=$(cat "/sys/bus/pci/devices/$dev/vendor")
    device=$(cat "/sys/bus/pci/devices/$dev/device")
    # Unbind from the current driver (pci-stub in our setup), if any.
    if [ -e "/sys/bus/pci/devices/$dev/driver" ]; then
        echo "$dev" > "/sys/bus/pci/devices/$dev/driver/unbind"
    fi
    # Let vfio-pci claim every device with this vendor/device ID.
    echo "$vendor $device" > /sys/bus/pci/drivers/vfio-pci/new_id
done
- Installing qemu and preparing to install Windows
Install qemu (pacman -S qemu) and create a folder where you want the whole virtual system to live, then navigate to that folder.
Create a container for your virtual hard drive like this (preallocation=metadata keeps the file small on disk while preallocating the qcow2 metadata, and lazy_refcounts=on speeds up writes at the cost of an automatic repair after a crash):
qemu-img create -f qcow2 -o preallocation=metadata,compat=1.1,lazy_refcounts=on win.img 120G
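You can verify the result with qemu-img's info subcommand; the sketch below is guarded so it is a no-op where qemu-img or the image is missing:

```shell
#!/bin/sh
# Show the image's virtual size, actual disk usage and qcow2 options.
if command -v qemu-img >/dev/null 2>&1 && [ -f win.img ]; then
    qemu-img info win.img
fi
```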
To install Windows on it, you need two iso images in this folder:
virtio-win.iso for virtual IO drivers: https://fedorapeople.org/groups/virt/vi ... io-win.iso
win10.iso to install windows 10 from: https://support.microsoft.com/en-us/hel ... windows-10
Figure out whether your secondary video card supports BIOS or UEFI. If it is a BIOS card, you can just continue with this guide using SEABIOS. If you have a UEFI card, you might be forced to use a method called "OVMF" (https://bbs.archlinux.org/viewtopic.php?id=162768) and change this script slightly. Regardless, you should first check whether it simply works the way I propose.
- Setting up monitors and creating the KVM script
The connected cables of my monitors look like this:
GTX 1050 TI---DVI-D-Cable---------------leftMonitor
GTX 1050 TI---HDMI-Cable---------------rightMonitor
GT 640---VGA-Cable---DVI-D-Adaptor--leftMonitor
While my virtual Windows is not running, both monitors get their signal from the 1050 TI to display my extended Linux desktop. When this script is started, the first thing that happens is that the X server stops using the DVI-D cable, which makes the left monitor go black and wait for a signal. Once qemu-system has started, the monitor gets its signal from the VGA cable, so you will have Windows on your left monitor and Linux on your right one.
Here is the script I run in order to fire up my virtual Windows. You will have to adjust some of the options, for example the amount of memory you want to give to Windows (-m megabytes) or the total number of cores you have (cores=4).
I also added my microphone and a second USB keyboard (the lines starting with "-usb -usbdevice") to the passthrough, so I could install Windows with that keyboard on the first bootup:
vfio-bind 0000:02:00.0 0000:02:00.1
xrandr --output DVI-D-0 --off
export QEMU_ALSA_DAC_BUFFER_SIZE=512 QEMU_ALSA_DAC_PERIOD_SIZE=170 QEMU_AUDIO_DRV=alsa
qemu-system-x86_64 -enable-kvm -m 8192 -cpu host,kvm=off \
-smp 4,sockets=1,cores=4,threads=1 \
-device vfio-pci,host=02:00.0,x-vga=on -device vfio-pci,host=02:00.1 \
-vga none \
-soundhw hda \
-usb -usbdevice host:1532:0111 \
-usb -usbdevice host:17a0:0305 \
-device virtio-scsi-pci,id=scsi \
-drive file=/home/david/kvm/win10.iso,id=isocd,format=raw,if=none -device scsi-cd,drive=isocd \
-drive file=/home/david/kvm/win.img,id=disk,format=qcow2,if=none,cache=writeback -device scsi-hd,drive=disk \
-drive file=/home/david/kvm/virt.iso,id=virtiocd,if=none,format=raw -device ide-cd,bus=ide.1,drive=virtiocd
xrandr --output DVI-D-0 --mode "1920x1080" --rate 60 --left-of HDMI-0
When you execute this script, you should now be able to install Windows using that keyboard. Once you have installed Windows 10 successfully, you can delete the line "-drive file=/home/david/kvm/win10.iso,id=isocd,format=raw,if=none -device scsi-cd,drive=isocd \" from the script. For a better keyboard (and mouse) solution, also delete the second "-usb -usbdevice" line and install the same version of "Synergy" on both Windows and Linux: https://en.wikipedia.org/wiki/Synergy_(software)
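One last sketch: before firing up the VM script, a quick check that the passthrough devices are actually visible can save you a confusing qemu error. The BUS IDs below are from my setup, and the check only warns, it does not abort:

```shell
#!/bin/sh
# Warn if the passthrough devices are not present in sysfs.
for dev in 0000:02:00.0 0000:02:00.1; do
    if [ ! -e "/sys/bus/pci/devices/$dev" ]; then
        echo "warning: $dev not found; check your pci-stub.ids and PCIe slot"
    fi
done
```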