
Upgrading RAID cards in a Dell machine.

This was terrifying for me the first time, so don’t do what I did. It’s pretty painless if you take your time and have backups in place beforehand. Let’s dive in!

So first off, migrate any VMs you need to keep online over to another host. Make backups of anything left, because you’ll wish you had if anything goes wrong. I’m going to use the H700 as an example in this post. I did this without making a backup first – I was being dumb and lazy; make backups!

Shut down the remaining VMs, put the host into maintenance mode and shut it down.
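
If you prefer to do that last bit from the shell rather than the UI, something like this does the job (assuming SSH is enabled on the host):

# enter maintenance mode, then power the host off
esxcli system maintenanceMode set --enable true
esxcli system shutdown poweroff --reason "RAID card swap"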

Swap out the old RAID card for the new one. Ensure the battery is hooked up and the cabling is set correctly – SAS A to SAS A, etc.

Boot it back up, then use this guide to live-patch your host to get the drivers etc. for your new RAID card (H700). It’s going to take a while to finish; reboot afterwards.
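
If you’re doing that bit over SSH, the usual live-patch route is an esxcli software profile update against VMware’s online depot – roughly like this, where the profile name is just an example you’d swap for your target build:

# open the outbound HTTP firewall rule, then update to the target image profile
# (host should still be in maintenance mode from earlier)
esxcli network firewall ruleset set -e true -r httpClient
esxcli software profile update -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml -p ESXi-6.5.0-20191204001-standard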

Upon booting up, hit Ctrl+R to get back into the RAID configuration. It will complain that all your drives are lost – don’t worry. Hit C to load the configuration and Y to confirm.

When you’re in the RAID config, hit F2 and import the foreign config. This will load the previous config from your drives.

Now, this part is important. I don’t know why, but sometimes it requires the RAID to be rebuilt despite nothing changing. Two of my servers didn’t need it, two did – go figure. If it does, it will automatically start an operation in the RAID config screen called background initialization (back init). For me this took around 35 minutes, however if you have a much larger setup it’s obviously going to take longer.
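
If you’d rather keep an eye on that from within ESXi instead of the controller BIOS, and you happen to have LSI’s storcli (or the Dell-branded perccli) VIB installed, something along these lines should show the background initialization progress – the controller number and install path are assumptions for my setup:

# show background initialization progress for all virtual drives on controller 0
/opt/lsi/storcli/storcli /c0/vall show bgi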

So when that’s finished – or if you didn’t need to rebuild – let’s get that datastore back in ESXi!

After rebooting again and letting it boot up, you’ll probably get quite a fright: your datastore isn’t there and all the VMs are throwing errors – and to add to it, in ESXi you can’t even see the datastore to re-add it!

This is where vSphere vCenter comes in (the naming structure of this kills me).

Log in to vCenter and navigate to your host, then click ‘Actions’ -> ‘Storage’ -> ‘New Datastore’.

Select your datastore type, hit Next, and it should show up here.

From the list of LUNs, select the LUN that has a datastore name displayed in the VMFS Label column.

Note: a name present in the VMFS Label column indicates that the LUN contains a copy of an existing VMFS datastore.

Under Mount Options, you’re looking for ‘Keep Existing Signature: Persistently mount the LUN’.
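
If you’d rather skip the wizard, the same keep-existing-signature mount can be done from the host’s shell – a quick sketch, where the label is whatever showed up in the VMFS Label column:

# list unresolved VMFS volumes, then mount one keeping its existing signature
esxcli storage vmfs snapshot list
esxcli storage vmfs snapshot mount -l "YourDatastoreName"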

Review & finish. If you hop back to ESXi, your VMs should all be populated again. On a side note, you may need to reconfigure global logging again if that’s something you use.

To do this, go to ESXi -> ‘Manage’ -> ‘System’ -> ‘Advanced Settings’ and search for Syslog.global.logDir.

Set it up like so, swapping my datastore name for yours, and you’re all done.
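
The same setting can be pushed from the command line too; a quick sketch with a made-up datastore name, so swap in your own:

# point persistent logging at the datastore and reload syslog
esxcli system syslog config set --logdir=/vmfs/volumes/YourDatastore/logs
esxcli system syslog reload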

Updating your vSphere vCenter SSO password.

If I had a penny for every time this happened to me...

Anyway, resetting it isn’t that bad. Log in to the vCenter appliance management interface and enable SSH access.

Go ahead and SSH in with the appliance management password and follow these steps:

Enable the shell: shell.set --enabled true.
Type shell and hit Enter.
Next, run /usr/lib/vmware-vmdir/bin/vdcadmintool.

This will pop up an options list; select 3 to reset a password.
Input your vCenter username – it might be something like ‘administrator@vsphere.local’.
It will give you a temporary password; copy and paste that back into vCenter to get logged in, then set a new password.

Back in the shell, type 0 to exit the options, then exit again to (surprise) close out your session.
Go back to the vCenter appliance management interface and disable SSH access. All done.
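
Put together, the SSH session looks roughly like this – the UPN is just the default SSO admin account, so yours may differ:

shell.set --enabled true
shell
/usr/lib/vmware-vmdir/bin/vdcadmintool
# choose 3 (reset account password), enter the UPN, e.g. administrator@vsphere.local,
# copy the temporary password it prints, then 0 to exit and 'exit' to close out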

Updating the FQDN of the vCenter Server.

I made the mistake of skipping the FQDN when setting this up, thinking I could just set it afterwards – I was too lazy to update my dnsmasq entries. On the bright side, updating it isn’t so bad.

Log in to the vCenter appliance management interface, hop to ‘Access’ and click ‘Edit’ up in the top right-hand side to enable SSH access.

Log in via SSH and type shell to drop into a bash shell, then run:

/opt/vmware/share/vami/vami_config_net

Hit 3 for hostname, type in the new value you want and hit enter, then 1 to exit.

This does not require a reboot or anything either, nice!

Ensure your internal DNS reflects the new settings and you're good to go.
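
Since I’m on dnsmasq, for what it’s worth, the record is a one-liner – the hostname and IP here are made up, so swap in your own:

# /etc/dnsmasq.conf (or a drop-in under /etc/dnsmasq.d/)
host-record=vcenter.lab.local,192.168.1.50
# restart dnsmasq afterwards so it picks up the change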

ESXi CVE-2018-3646 (55806) and how to mitigate it.

I updated my servers before grabbing a screenshot of the message in ESXi, but it’s something like this:

Your host may be susceptible to Intel CPU CVE-2018-3646 (55806) - See https://kb.vmware.com/s/article/55806 for more details. 

The fix I ran with was enabling the shell on the hypervisors and checking the current setting:

esxcli system settings kernel list -o hyperthreadingMitigation

If it comes back as FALSE, you'll want to run this:

esxcli system settings kernel set -s hyperthreadingMitigation -v TRUE

Disable SSH again on all hosts, reboot, and you're all done.
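
If you'd rather kick off the reboot from the shell while you're in there, something like this works (assuming the host is already evacuated):

# enter maintenance mode, then reboot so the mitigation takes effect
esxcli system maintenanceMode set --enable true
esxcli system shutdown reboot --delay 10 --reason "hyperthreadingMitigation enabled"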

GPU passthrough on VMware ESXi 6.5

So 3-4 months ago I bought an Nvidia GT710 to muck about with GPU passthrough on a spare ESXi host, but I ran into issues at basically every step. I was working on it again today and managed to get it working, so time for another blog post!

Before we do, this is the issue I had the most trouble with:

Windows has stopped this device because it has reported problems. (Code 43)


This is with the hypervisor updated, and after manually installing the NVIDIA VIB, along with ensuring the memory for the VM is reserved like so.

To reserve the memory, right-click the VM, hit Edit, then click the arrow to the left of 'Memory' and set the reservation (there's a 'Reserve all guest memory' checkbox).
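
For reference, the manual VIB install I mentioned is just an esxcli one-liner – the path below is a placeholder for wherever you uploaded the NVIDIA driver VIB, and the host wants to be in maintenance mode for it:

# install the uploaded VIB, then reboot the host
esxcli software vib install -v /vmfs/volumes/datastore1/NVIDIA-driver.vib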

So how to fix it then?

I had to add a new parameter to the VM:

1. Login to ESXi
2. Right click your VM > Edit Settings > VM Options > Advanced > Edit Configuration
3. Add Parameter: hypervisor.cpuid.v0 in the key column, FALSE in the value column (the .vmx equivalent is sketched just below).
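
If you'd rather set it outside the UI, the same parameter is just a line in the VM's .vmx file (with the VM powered off):

hypervisor.cpuid.v0 = "FALSE"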

And lo and behold, it works!