Today I upgraded our vCenter Server from 5.5 to 6.0. Our backup admin had upgraded the backup server a week earlier, and everything worked fine.
The upgrade in my nested vSphere test environment worked fine too. The backup and restore tests ran through vCenter 6 against all ESXi 5.5 and 6.0 hosts. *damn*
In our production environment the upgrade also went through and all services worked. But after starting a test backup of a single virtual machine, I got the following error:
Job ended: Wednesday, 17 Jun 2015 at 9:03:05 PM
Completed status: Failed
Final error: 0x200095bf - A failure occurred while locking the virtual machine in place for backup/restore operations.
Final error category: Resource Errors
For additional information regarding this error refer to link V-79-8192-38335
With Backup Exec 15 revision 1180 Feature Pack 1 the problem was solved. Thanks to Alex Millà for his comment. I was also able to reproduce the fix in my environment, and the error is gone.
In my last training, Fabian Lenz and I discussed the pros and cons of power management in ESXi.
We assumed DPM is mostly used in VDI environments. There the savings should be greatest, because the virtual desktops are idle at night and many of the ESXi hosts can be powered off. But DPM is not the only thing that affects power consumption: ESXi can also use the CPU's ACPI C- and P-states to control the power draw of the physical cores. Fabian experimented with a larger VDI environment and collected interesting results with DPM enabled (see his blog).
In vSphere 5.5 and 6.0 a new command was added to esxcli. With this command you can reclaim deleted blocks on thin-provisioned LUNs.
You can run the reclaim during production hours. That sounds good at first: our maintenance windows get smaller with every new service, and being able to do maintenance while everybody keeps working would be great.
BUT wait a moment!
The ESXi host that runs the reclaim can see very high CPU usage and may co-stop VMs running on that host. Especially the first run will take a long time.
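A minimal sketch of the reclaim run, assuming a datastore labelled `Datastore01` (the label and the reclaim-unit value are placeholders you would adapt to your environment):

```shell
# List VMFS datastores to find the volume label or UUID
esxcli storage filesystem list

# Reclaim deleted blocks on a thin-provisioned LUN.
# --reclaim-unit controls how many VMFS blocks are unmapped per
# iteration; larger values finish faster but load the host more.
esxcli storage vmfs unmap --volume-label=Datastore01 --reclaim-unit=200
```

Because of the CPU and storage load described above, it is worth starting with a small reclaim unit and scheduling the first run outside peak hours.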
The storage multipathing policy in ESXi determines how ESXi uses the available paths to its LUNs.
This article describes the different policies and explains how to change the policy for already connected LUNs. Additionally, I show how to change the default policy for newly connected LUNs.
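As a quick preview, the policy changes described here are done through the `esxcli storage nmp` namespace. The device ID and the SATP name below are placeholders; look up the real values in your environment first:

```shell
# Show each device with its current path selection policy (PSP)
esxcli storage nmp device list

# Change the policy for one connected LUN, e.g. to Round Robin
# (replace the device ID with one from the list above)
esxcli storage nmp device set --device naa.6000000000000000 --psp VMW_PSP_RR

# Change the default PSP that new LUNs claimed by a given SATP
# will receive (takes effect for newly connected LUNs)
esxcli storage nmp satp set --satp VMW_SATP_DEFAULT_AA --default-psp VMW_PSP_RR
```

Note that changing the SATP default only affects LUNs connected afterwards; existing LUNs keep their policy until changed individually.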
I have been working professionally with virtualization for many years now and have meanwhile helped build and operate quite a few environments.
So far I have successfully "resisted" running a server at home. But since some services are either missing on my NAS or come with limited features, I started thinking about how I could provide smaller services and run software tests at home.