19 April 2011 / #Powercli #Vsphere

IBM BladeCenter S and vSphere


We have a project that is going to be hosted on an infrastructure made up of IBM BladeCenter S chassis.

As a reminder, this IBM chassis has the particularity of packing into 7U the compute (with HS22 blades), the network (via various possible switch modules) and, above all, the storage, with 2 disk bays and the RSSMs integrated directly into the chassis.

The All-In-One aspect is really nice and practical. In our case, we had to drop a small piece of infrastructure on several sites, without tying up an entire rack each time.


The administration interface for the storage side of the chassis is not terribly well designed, though, especially when you have to manage a large number of chassis (in that case you pretty much have to go through IBM Director).

On the other hand, each Storage Controller is reachable over CLI/SSH, so geeks like us feel right at home :p The CLI offers some rather handy commands:

List the storage controllers

list controller

Current Machine Local Time: 01/31/2011 01:19:43 PM
 ________________________________________________
| Ctlr# | Controller | Status    | Ports | LUNs |
|_______|____________|___________|_______|______|
| 0     | Ctlr0      | PRIMARY   | 1     | 16   |
| 1     | Ctlr1      | SECONDARY | 1     | 16   |
|_______|____________|___________|_______|______|

List the defined storage pools

list pool

Current Machine Local Time: 01/31/2011 01:20:03 PM
 _______________________________________________________________________________
|Pool#|ID|Name     |RaidType|OwnerCtlr|TotalCap|AvailCap| Status|State|Degraded|
|_____|__|_________|________|_________|________|________|_______|_____|________|
| 0   | 1|POOL_VM01| 5      | Slot 0  | 729GB  | 1MB    | Viable| ONV | No     |
| 1   | 2|POOL_VM02| 1      | Slot 0  | 279GB  | 1MB    | Viable| ONV | No     |
|_____|__|_________|________|_________|________|________|_______|_____|________|
State: OFN/SN=Offline Non-viable/Service Non-viable
ONF/OFF/SF=Online Failed/Offline Failed/ Service Failed
ONV/OFV/SV= Online Viable/Offline Viable/Service Viable

ONN=Online Non-viable/Pending Non-Viable; one or more drives are missing in this pool.
The pool state changes to ONV if missing drive(s) comes back to the pool.
The pool state changes to OFN if the user acknowledges the alert.

List the commands available in the CLI

help

===============================================================
CLI HELP INDEX
===============================================================
--------------------------------
#Display Commands :
--------------------------------
1. list -help
2. list controller
3. list volume
4. list pool
5. list drive
6. list enclosure
7. list drivelba [-name poolname:volname] | [-number number] -vlba number
8. detail -help
9. detail volume verbose
10. detail volume [-name poolname:volumename| -number number]
11. detail pool [-name poolname | -number number]
12. detail controller -ctlr [0|1]
13. detail enclosure -encl [0|1|2]
14. detail drive [-slot | -number [NUMBER]]
-------------------------------
#Managing Volumes :
-------------------------------
1. create pool -drives [....] -raidtype [ 10 | 5 | 0 | 1 ] -port [0|1] -name POOL_NAME
2. create volume -name poolname:volumename -size [%|MB|GB]
[-seqpostreadcmdsize size -seqreadaheadmargin margin]
3. host -[add WWN [-name HOSTNAME] | delete WWN | get]
4. hostlun -[get | map -volume poolname:volumeName -permission [R/RW] [-name [HOSTNAME]]| unmap] [-wwn WWN -lun lunnumber]
5. delete volume -name poolname:volumename
6. delete pool -name poolName
7. global spare -[add -slot [-autocopyback [COPYOPTION]] | get | delete -[slot |number [NUMBER]] ]
8. assimilate drive -[get | set [-slot | -number [NUMBER]]]
9. local spare -pool [POOLNAME] -[add -slot [-autocopyback [COPYOPTION]] | get | delete -[slot |number [NUMBER]]
-------------------------------
#Volume Services :
-------------------------------
1. synchronize volume [-name pool[:volume]]
2. delete all
3. view long running tasks
4. add capacity -pool name -drivelist
5. initialize -drive
6. expand -volume poolname:volumename -add capacityIncrement [MB|GB|%]
7. datascrub -[get | set -auto [on|off]
8. add mirror -pool name -drives ..
9. migrate -volume poolname:volumename -targetpool poolname [-newname newvolumename]
10. copyback -source -dest [-convert]
-----------------------------------
#System Control and Configuration :
-----------------------------------
1. commparams -get
2. swversion
3. post result
4. list features
5. show raid levels
6. validate key [-get | -set <192 bit key>]
7. event log [-show [all|arts|alsal|tlc]] | [-save [all|arts|alsal|tlc]] |
[-setlevel [-tlc | -alsal | -arts ] | -getlevel]
8. locate [-getobject [drive | pool | volume | ctlr | bbu | enclosure] ] | [[-setobject | -off ]
[-drive [ slot | all] | -pool poolname | -volume poolname:volumename | -ctlr [0|1] |
-bbu [0|1] | -enclosure [0] | -number objectnumber]]
9. cache -[get | set [-volumesetting -seqpostreadcmdsize [SIZE] -seqreadaheadmargin [MARGIN]
[-systemdefault] [-volumename pool:volume] | -ctlrsetting -writecachepolicy [on|off] [-suspendmigrates]]
10. time -[get | set -date mm/dd/yyyy -time hh:mm:ss -[am|pm] ]
11. controller config -[[save | load] filename | get]
12. service mode -getreason
13. shutdown -ctlr [0|1] -state [servicemode [-readytoremove] | reboot] | -system -state [servicemode | reboot]
14. cliscript -f filename
15. email alert -[get | set [-test] -email [EMAIL] -smtpserver [SERVER] -smtpport [PORT] -smtpsender [SENDER] | -test | -delete -email [EMAIL]]
16. configure alert -[get | set [-email | -initiallymasked] -on -off | setgenericalerttemplate - code
-type [persistent|ackable] -initiallymasked [on|off] -email [on|off] -severity [critical|warning|info] -msg "DesiredAlertStringInQuotes"]
17. alert [-get | [-create |-clear] -code genericAlertCode | -savehistory | -[ mask | unmask | ack ] -code AlertCode -id Id -ctlr SlotID]
18. battery -ctlr [0|1] -get
19. chpasswd -[cli | mgmtInterface] -oldpasswd [OLDPWD] -newpasswd [NEWPWD]
20. mountstate -[getobject -[drive |pool |mediatray |enclosure |bbu ] | setobject -[mount [-drive < ... > | -bbu [0|1 ] | -mediatray 0
| -enclosure < ...>] | dismount [-drive < ... > | -pool < ...> | -bb u [0|1] | -mediatray 0
| -enclosure < ...> [-okdegraded]]] ]
21. shellscript -file filename [-param "STRING"]
22. mountpolicy -[get | set -automount [on|off]]
23. configure pool -name [POOLNAME] -changeowner
------------------------------
#Miscellaneous Commands:
------------------------------
1. exit
2. help

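To give an idea of how the volume-management commands above fit together, presenting a new volume from POOL_VM01 to an ESXi host could look roughly like the sketch below. The volume name, host name, WWN and LUN number are purely illustrative placeholders, and the exact argument format should be checked against the help output above and the RSSM documentation:

create volume -name POOL_VM01:VOL_DATASTORE01 -size 100GB
host -add 2100001B329A1234 -name ESX01
hostlun -map -volume POOL_VM01:VOL_DATASTORE01 -permission RW -name ESX01 -wwn 2100001B329A1234 -lun 1
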
When using this type of chassis with a vSphere infrastructure (our case here), you have to make sure the multipathing policy is set to *Fixed*, as specified in the following document: http://www-947.ibm.com/support/entry/portal/docdisplay?brand=5000020&lndocid=MIGR-5081899

To do this, we wrote a small PowerCLI script that sets this multipathing policy:

# Loop over every BladeCenter S LUN (model 1820N00) seen by the hosts of the "DATACENTER" location
Foreach ($ScsiLun in (Get-VMHost -Location "DATACENTER" | Get-ScsiLun -LunType "disk" | Where { $_.Model -match "1820N00" }) ) {
    # Collect the paths of the current LUN on its host
    $slp = @()
    $slp += Get-VMHost -Id $ScsiLun.VMHostId | Get-ScsiLun | Where {$_.CanonicalName -eq $ScsiLun.CanonicalName} | Get-ScsiLunPath
    # Switch the LUN to the Fixed policy, using the first path found as the preferred one
    Get-VMHost -Id $ScsiLun.VMHostId | Get-ScsiLun | Where {$_.CanonicalName -eq $ScsiLun.CanonicalName} | Set-ScsiLun -MultipathPolicy Fixed -PreferredPath $slp[0]
}
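
To check the result afterwards, a quick listing along these lines should do the trick (same scope and model filter as above; just a verification sketch, to be adapted to your environment):

Get-VMHost -Location "DATACENTER" | Get-ScsiLun -LunType "disk" |
    Where { $_.Model -match "1820N00" } |
    Select-Object VMHost, CanonicalName, MultipathPolicy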

> Frederic MARTIN