OK so we had a scheduled power outage at one of our secondary DCs. The SAN was shut down following the VNX shutdown procedure via Unisphere.

After power was restored, everything on the SAN came up except the Unisphere web management. After some Google-fu, the culprit turned out to be the NAS slots: they had not come up properly and were in fact powered off.

[root@vnx-cs0 ~]# /nasmcd/getreason
10 - slot_0 primary control station
11 - slot_1 secondary control station
0 - slot_2 off
0 - slot_3 off

As you can see, the slot 2 and 3 NAS blades are off. Now, this array uses only block storage, no file, but that doesn't matter: the virtual IP and the Unisphere management service still depend on those blades. To power them on, use these commands:

/nasmcd/sbin/t2reset pwron -s 2
/nasmcd/sbin/t2reset pwron -s 3
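If you have more than a couple of slots to deal with, the two steps above can be chained. Here is a minimal sketch: the `off_slots` helper name and the wrapper loop are my own invention, only `/nasmcd/getreason` and `/nasmcd/sbin/t2reset` come from the array itself, and the getreason line format is assumed to match the output shown above.

```shell
# Hypothetical helper: read getreason-style output on stdin and print
# the slot numbers whose state code is 0 (powered off).
# Lines look like: "0 - slot_2 off"
off_slots() {
  awk '$1 == 0 { sub(/slot_/, "", $3); print $3 }'
}

# On the Control Station you could then power on each off slot:
#   /nasmcd/getreason | off_slots | while read -r s; do
#     /nasmcd/sbin/t2reset pwron -s "$s"
#   done
```

Fed the getreason output from above, `off_slots` prints `2` and `3`, one per line.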

Once they power on, the state changes in the getreason output:

5 - slot_2 contacted
5 - slot_3 contacted
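The blades take a little while to reach that state, so rather than re-running getreason by hand you could poll for it. A sketch, with the same caveats as before: the `slots_contacted` function is hypothetical, and state code 5 meaning "contacted" is taken from the output above.

```shell
# Hypothetical check: succeed only when every NAS blade line (slot_2,
# slot_3) in getreason-style output on stdin reports state code 5
# (contacted); fail if any of those slots shows another state.
slots_contacted() {
  awk '$3 ~ /^slot_[23]$/ && $1 != 5 { bad = 1 } END { exit bad }'
}

# On the Control Station this could drive a simple wait loop:
#   until /nasmcd/getreason | slots_contacted; do sleep 10; done
```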

This saved me a full round trip to the DC, very handy. If this still doesn't fix it, something is wrong with the physical cabling between the NAS blades and the Control Station, or there is a hardware fault.
