3PAR 3.2.2 MU3 and Veeam B&R 9.0 Update 2 – integration bug

The latest 3PAR firmware, 3.2.2 MU3, breaks SAN integration with Veeam Backup & Replication. The issue affects all Veeam versions, including version 9 Update 2.

Your job will fail with a “User authentication failed” error:

Posted in 3par, Random stuff | 2 Comments

ESXi 5.5/6.x HPE CIM bug – /var/run/sfcb inode table of its ramdisk is full

Another bug from VMware/HPE – unfortunately we don’t have a public KB available at this point. As per our conversation with a VMware engineer, this issue affects both ESXi 5.5 and ESXi 6.x hosts.
I suspect the VMware sfcb service fails to clear temporary files created by the HPE CIM providers, or the HPE CIM providers create files they are not supposed to.
I observed this issue with HPE ProLiant BL660c Gen8 blades running ESXi 5.5. These blades come with 4 CPU sockets and 1TB of RAM – they host a VDI environment, so they have high density and a lot of power on/off operations.
As troubleshooting steps we tried updating to the latest ESXi patches, HPE drivers and software, but the issue persisted.

The issue affects ESXi 5.5 and ESXi 6.x hosts running HPE CIM providers, such as the HPE OEM customized images.

Unable to power on new VMs; vMotion fails.
vmkernel.log shows the following error:
Cannot create file /var/run/sfcb/52494bef-1566-c7e5-6604-676ddd5b9c46 for process sfcb-CIMXML-Pro because the inode table of its ramdisk (root) is full.

You see a lot of files inside the /var/run/sfcb directory.
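If you want to catch the condition before the inode table fills up completely, the file count is easy to watch from a script. Below is a minimal Python monitoring sketch; the warning threshold is my own assumption, not an official ESXi ramdisk inode limit.

```python
# Minimal monitoring sketch: count entries in the sfcb directory and warn
# when the count approaches a chosen threshold. THRESHOLD is an assumed
# warning level, not an official ESXi ramdisk inode limit.
import os

SFCB_DIR = "/var/run/sfcb"   # directory that fills up on affected hosts
THRESHOLD = 5000             # assumed warning level

def count_entries(path):
    """Return the number of directory entries under path (non-recursive)."""
    try:
        return len(os.listdir(path))
    except OSError:
        return 0  # directory missing or unreadable

def check(path=SFCB_DIR, threshold=THRESHOLD):
    """Return (entry count, True if the warning threshold is reached)."""
    n = count_entries(path)
    return n, n >= threshold
```

Run it periodically on the host; once the flag trips, it’s time to apply one of the workarounds before power-on and vMotion operations start failing.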

Below you will find workarounds to address this issue.

Posted in VMware | Leave a comment

Alert! Alerte! Achtung! Critical bug in vSphere 6

Backing up VMs in vSphere 6 can cause data loss in your backups! Earlier I wrote about a critical backup-related bug in vSphere 5.5 Update 3, which was absolutely unacceptable, and here we go again…

Here are the symptoms from VMware’s KB:

When running virtual machine backups which utilize Changed Block Tracking (CBT) in ESXi 6.0, you experience these symptoms:
– The QueryDiskChangedAreas() API call can sometimes return incorrect changed sectors, which results in inconsistent incremental virtual machine backups.
– Inconsistent virtual machine backups
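For context on why this is so nasty: QueryDiskChangedAreas() hands the backup application a list of changed extents (a start offset and a length, in bytes), and the application reads exactly those ranges. A small sketch of that consumption step (the tuple shape and merging logic are illustrative assumptions, not VMware’s or any vendor’s actual code) makes it obvious that if the API returns wrong extents, the incremental silently misses changed data:

```python
# Illustrative sketch: a backup tool merges the (start, length) extents
# reported as changed and reads only those ranges from the disk. If the
# reported extents are wrong, the missing ranges are simply never read.

def merge_areas(areas):
    """Merge overlapping/adjacent (start, length) extents into read ranges."""
    merged = []
    for start, length in sorted(areas):
        end = start + length
        if merged and start <= merged[-1][1]:
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return [(s, e - s) for s, e in merged]

# Two adjacent dirty blocks collapse into one sequential read:
print(merge_areas([(0, 4096), (4096, 4096), (65536, 4096)]))
# → [(0, 8192), (65536, 4096)]
```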

Of course, there is no fix yet, but let’s take a look at the joke of a workaround list they suggest:
– Downgrade ESXi to version 5.5 and change the VM hardware version to 10
– Shut down the VM before doing an incremental backup
– Do a full backup daily instead of incrementals
Really? Do you think any of these solutions is applicable in a production environment? Ha ha…

VMware’s KB 2136854
I honestly feel horrible for IT Professionals caught by poor QA from VMware, yet again.

Update 11/26/2015: VMware released a patch to fix it: ESXi600-201511001

Posted in VMware | Leave a comment

3PAR WSAPI via Powershell

Earlier I demonstrated how to use the 3PAR CLI with Powershell. In this example I will show how to work with 3PAR’s WSAPI via Powershell and poll the last time a remote copy group was synchronized (I use the Last Sync date from the first volume in the remote copy group).

I was asked to create a solution to monitor replication via Recovery Manager for SQL, as it sometimes fails for whatever reason and we don’t get a notification that our SQL server hasn’t been synchronizing to the DR side for a while. I have a special place for this product from HP (read my earlier posts).

If you’re using the WSAPI only to read information, I recommend creating a brand-new account with limited privileges, as opposed to using 3paradm.
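For reference, the same WSAPI flow is easy to sketch outside Powershell too. The endpoints below follow the WSAPI pattern (POST /api/v1/credentials for a session key, then GET /api/v1/remotecopygroups/&lt;name&gt;), but the array address is hypothetical and the JSON field names for the last sync timestamp are my assumptions — verify them against the WSAPI reference for your firmware level:

```python
# Sketch of polling a remote copy group's last sync time via the 3PAR
# WSAPI using only the Python standard library. BASE is a hypothetical
# address; the 'volumes'/'lastSyncTime' field names are assumptions.
import json
import ssl
import urllib.request

BASE = "https://3par.example.local:8080/api/v1"  # hypothetical array address
CTX = ssl._create_unverified_context()  # arrays often have self-signed certs

def _call(url, data=None, headers=None):
    """Issue a request and decode the JSON response."""
    req = urllib.request.Request(url, data=data, headers=headers or {})
    with urllib.request.urlopen(req, context=CTX) as resp:
        return json.loads(resp.read().decode())

def get_session_key(user, password, base=BASE):
    """POST credentials and return the WSAPI session key."""
    body = json.dumps({"user": user, "password": password}).encode()
    return _call(base + "/credentials", data=body,
                 headers={"Content-Type": "application/json"})["key"]

def extract_last_sync(group):
    """Last sync time of the first volume in the group (None if absent)."""
    volumes = group.get("volumes") or []
    if not volumes:
        return None
    return volumes[0].get("lastSyncTime")

def last_sync(group_name, key, base=BASE):
    """Fetch a remote copy group and pull out its last sync timestamp."""
    group = _call(base + "/remotecopygroups/" + group_name,
                  headers={"X-HP3PAR-WSAPI-SessionKey": key})
    return extract_last_sync(group)
```

With the limited-privilege account suggested above, a cron job calling last_sync() and alerting on a stale timestamp covers the monitoring gap that Recovery Manager leaves.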


Posted in Random stuff | 3 Comments

Alert: ESXi 5.5 Update 3 bug – Deleting VM snapshot crashes VM

Removing a snapshot causes “Unexpected signal: 11” and crashes the VM. Most backup software relies on creating and removing snapshots.

The bug is not officially confirmed by VMware. There is no fix; the only solution is to roll back to ESXi 5.5 Update 2.
See the following post for more details:


Update 1/10/2015 14:30 EST: VMware released kb http://kb.vmware.com/kb/2133118

Update 7/10/2015 15:30 EST: VMware released kb http://kb.vmware.com/kb/2133825

Posted in Random stuff | Leave a comment

HP ProLiant BL460c Gen9 HBA bug

There is a bug with HP FlexFabric 650FLB adapters in HP Gen9 blades. Buggy firmware prevents the HBA from properly negotiating FC protocols: the HBA fails to initiate the PLOGI (Port Login) process. On a Brocade FC switch the symptoms are an FC4 type of “none”, and the switch fails to detect the HBA as an initiator.
If you have a Brocade switch, this can be confirmed via the portloginshow # command, where # is the port number the blade is connected to.

fctest:admin>portloginshow 1
Type PID World Wide Name credit df_sz cos
fd 01153b xx:xx:xx:xx:xx:c2:86:04 16 2112 c scr=0x3
fd 011537 xx:xx:xx:xx:xx:c2:86:14 16 2112 c scr=0x3
ff 01153b xx:xx:xx:xx:xx:c2:86:04 0 0 8 d_id=FFFFFC

d_id=FFFFFC will be missing for the faulty HBA. In the example above, the HBA ending with c2:86:14 has the firmware with the bug.

Issue is confirmed with (latest available on HP website)
Resolution: update firmware to

You can download firmware below (it’s not available via HP website yet)

Update: HP issued advisory:

FACT:HP ProLiant BL460c Gen9 Server
FACT:HP FlexFabric 20Gb 2-port 650M Adapter
FACT:HP FlexFabric 20Gb 2-port 650FLB Adapter
SYMPTOM: Storage path will disappear after the Firmware upgrade to
SYMPTOM: Problem is seen with Virtual Connect Manager and OneView environments
SYMPTOM: Storage path may disappear after 650FLB Firmware upgrade to
SYMPTOM: Problem is seen with Virtual Connect FlexFabric 10/24 and 20/40 modules
Upgrading with the latest firmware on the 650FLB may cause the path to the storage to disappear. This issue may occur when using the latest SPP Version 2015.06.0.
CAUSE: This issue only occurs because of the 650FLB firmware version
FIX: This issue is currently under investigation
As a workaround, downgrade the firmware to 10.2.477.23 using SPP Version 2015.04.0, or reduce the uplinks to 1 per Virtual Connect SAN fabric or OneView Fibre Channel uplink set.
Posted in Random stuff | 3 Comments

3PAR remote syslog

To view the current config for remote syslog:
cli% showsys -param
System parameters from configured settings

------Parameter------ --Value---
RawSpaceAlertFC : 0
RawSpaceAlertNL : 0
RawSpaceAlertSSD : 0
RemoteSyslog : 0
RemoteSyslogHost :
SparingAlgorithm : Default
EventLogSize : 3M
VVRetentionTimeMax : 336 Hours
UpgradeNote :
PortFailoverEnabled : yes
AutoExportAfterReboot : yes
AllowR5OnNLDrives : no
AllowR0 : no
ThermalShutdown : yes
FailoverMatchedSet : no

To configure remote syslog:
setsys RemoteSyslogHost <IP address>
setsys RemoteSyslog 1
where <IP address> is the IP of the remote syslog server.
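Before trusting the setup, it helps to confirm that datagrams actually reach the syslog host. A minimal Python listener sketch (standard library only; real syslog uses UDP port 514, which needs root, so the port is a parameter here):

```python
# Minimal UDP listener to confirm the array's syslog messages arrive.
# Port 514 (the syslog default) requires root; pass a high port to test
# unprivileged, or run as root to capture the real traffic.
import socket

def receive_one(port=514, host="0.0.0.0", timeout=None):
    """Block until one syslog datagram arrives; return (sender IP, text)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((host, port))
    if timeout is not None:
        sock.settimeout(timeout)
    try:
        data, addr = sock.recvfrom(4096)
        return addr[0], data.decode(errors="replace")
    finally:
        sock.close()
```

Run receive_one(514) as root on the configured RemoteSyslogHost, then trigger an event on the array; if nothing arrives, check the network path before blaming the 3PAR.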

and that’s it!

Posted in 3par | Leave a comment

3PAR real free space

Today, browsing one of my favorite 3PAR-related websites (3parug.com), I came across a topic asking about “real” free space. I assume someone is trying to find out how much more actual data he/she can fit before running out of space.

Before we answer this question, let’s take a look at the different “layers” of free space.
1. Physical Drive space
Let’s take for example a 900GB FC drive. Inside the 3PAR MC it will report a Total Capacity of 819GB. On the other hand, a 900GB SSD will report a Total Capacity of 852GB.
Note: I don’t have information (a formula) on how Total Capacity is derived from the capacity reported by HD manufacturers.

Now let’s take a look at what is used within Total Capacity. You can view it by issuing the showpd -space command.
– Size – total size described above
– Volume – how much space is actually used by Volumes
– Spare – space used by spare chunklets
– Free – space available for Volumes
Now let’s look at MC:
Total Capacity = Size
Free Capacity = Free
Allocated Capacity = Volume + Spare

2. CPG space
In order to “use” the PD space described above you need to assign drives to a CPG. The CPG builds the underlying RAID from chunklets (1GB in size). So, for example, a CPG with 5 Data, 1 Parity will consume 6GB of Free Capacity on the physical drives for every 5GB of data.
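The overhead arithmetic above reduces to two small formulas (a sketch; the GB figures ignore chunklet rounding):

```python
# RAID overhead arithmetic for a CPG: with a 5 data + 1 parity layout,
# every 5 GB of user data consumes 6 GB of raw chunklet space.

def raw_needed(user_gb, data=5, parity=1):
    """Raw space consumed for a given amount of user data."""
    return user_gb * (data + parity) / float(data)

def usable_from_raw(raw_gb, data=5, parity=1):
    """User-data capacity obtainable from a given amount of raw space."""
    return raw_gb * data / float(data + parity)

print(raw_needed(5))         # → 6.0
print(usable_from_raw(819))  # a 900GB FC drive's 819GB raw → 682.5 usable
```

The same functions handle other layouts, e.g. data=3, parity=1 for RAID 5 (3+1).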

In 3PAR MC you can view remaining free space:

Estimated Free System Space should give a good indication of how much “real” free space (after RAID parity) remains on your 3PAR for a given CPG.

Please remember that with Thin Volumes you can over-provision space, as 3PAR’s ASIC removes all “zeros” on the fly.

Posted in 3par | Leave a comment

VMware Site Recovery Manager 5.8 command on SRM with Powershell bug

Hello, we have another VMware Site Recovery Manager (SRM) bug. This time it’s with Command on SRM Server steps and Powershell scripts.

I am not an SRM developer, but it seems SRM itself parses commands before passing them to the Windows OS for execution. Sometimes this causes issues.
Let’s take a look at this line:
c:\windows\system32\windowspowershell\v1.0\powershell.exe -Command "(Invoke-Command -ComputerName REMOTEPC -FilePath "C:\SRM\test1.ps1")"

In the example above we are executing a Powershell script on a remote host (REMOTEPC). Everything looks standard, and it works if you run it directly in the Windows operating system.

The same script (test1.ps1) will fail to execute when we call it via SRM. Let’s take a look at SRM’s vmware-dr log:
2014-12-01T09:24:20.348-05:00 [00884 info 'Recovery' ctxID=39eff996 opID=3913b17c] [recovery-plan-1036482.beforePrepareStorage-0] Executing command c:\windows\system32\windowspowershell\v1.0\powershell.exe -Command "(Invoke-Command -ComputerName REMOTEPC -FilePath "C:\SRM\test1.ps1")"
2014-12-01T09:24:20.348-05:00 [00884 verbose 'Recovery' ctxID=39eff996 opID=3913b17c] COMMAND LINE ENVIRONMENT SETTINGS::
2014-12-01T09:24:20.348-05:00 [00884 verbose 'SysCommandWin32' ctxID=39eff996 opID=3913b17c] Starting process: "c:\windows\system32\windowspowershell\v1.0\powershell.exe" -Command "(Invoke-Command -ComputerName REMOTEPC -FilePath C:\SRM\test1.ps1\")"

As you can see, SRM mangles the double quotes (C:\SRM\test1.ps1\"), making the command invalid.

If we format this command slightly differently (removing the double quotes wrapping C:\SRM\test1.ps1):
c:\windows\system32\windowspowershell\v1.0\powershell.exe -Command "(Invoke-Command -ComputerName REMOTEPC -FilePath C:\SRM\test1.ps1)"
the script executes flawlessly both from SRM and natively in the Windows OS.

The workaround: remove spaces from the file path’s name (think back to the DOS days, haha) so you can drop the double-quote wrapping.

My ticket is still open with VMware and the engineering team is currently investigating. You will be affected by this bug if your script’s file path contains spaces, since that forces you to wrap it with quotes.

Update 14/05/2015: VMware published internal KB 2116057

Posted in Random stuff | Leave a comment

VMware Site Recovery Manager 5.8 Bug – Linked Mode

There is a “well known” bug in VMware Site Recovery Manager 5.8 (SRM) which puts your DR plan at risk. It will only affect you if you have vCenters connected in Linked Mode. Well, let me put it this way: when you have a site disaster, you will not meet your RTO.

Luckily for us we caught this bug during our latest DR testing.

If you have two vCenters in Linked Mode and would like to confirm this bug, bring down the vCenter in your Production site, log into the vCenter at the DR site and try to run a recovery. You will see this:
SRM 5.8 bug

Additionally in SRM log you will see the following errors:
2014-11-29T10:01:05.750-05:00 [03060 error 'HttpConnectionPool-000000'] [ConnectComplete] Connect failed to fqdn-prodvcenter:80>; cnx: (null), error: class Vmacore::Http::HttpException(HTTP error response: Service Unavailable)
A VMware engineer confirmed this bug and said they currently don’t have a fix. Removing Linked Mode between the vCenters is the workaround:

1. On the recovery site vCenter Server, point to Start -> All Programs -> vCenter Server Linked Mode Configuration.
2. Click Next, select Modify Linked Mode configuration and click Next.
3. Ensure that the checkbox Isolate this vCenter Server instance from Linked Mode group is selected and click Next.
4. Click Continue to isolate the vCenter Server.
5. When the wizard has completed, check that the Site Recovery Manager service is still running and start it if necessary.

It seems VMware is under a lot of pressure from Microsoft to shorten the release cycle for their products. I can’t believe the QA team missed such a huge bug.

Update: VMware published KB

Posted in Random stuff | 1 Comment