IBM BladeCenter and DS4000 Question and Answer session

The following is the modified chat log portion of a live demo for an IBM BladeCenter and DS4000 Question and Answer session. JonathanD led the session.

[@JonathanD] I’m a certified expert in IBM BladeCenter.

[@JonathanD] I’ve installed several dozen DS4000 units.

[Attendee2] JonathanD: mind if I sidetrack some of your DS4000 time toward the DS3200?

[@JonathanD] Attendee2: I’d prefer to do that one next week, after I’ve installed one

[@JonathanD] I’d really need Brian here, if we want to do it today

[@JonathanD] and I don’t think he’s available right now.

[Attendee2] but most of my questions about LUNs and arrays would be the same anyway right ?

[@JonathanD] Attendee2: yeah, they should be.

[@JonathanD] everyone in VNC?

[@JonathanD] So, does anyone need to know more about the basics… what a SAN is?

[Attendee3] I don’t, mostly.

[@JonathanD] I’ll go over some of the very basic stuff anyway, just as a refresh and in case anyone needs it.

[Attendee2] JonathanD: is this storage software the same, regardless of the model?

[@JonathanD] Attendee2: for the 4000 series, yes.

[@JonathanD] this is what you would see.

[Attendee2] ok it looks very similar to the DS3400 software doc

[@JonathanD] so, what makes a SAN different from a NAS…

[@JonathanD] Attendee2: it is

[@JonathanD] a NAS is a network share, basically, like a windows network share. different machines can use it, and store things there. You can use it with normal networking hardware, nothing special.

[Attendee3] SAN attaches via high speed fiber to a server, whereas a NAS connects via Ethernet?

[Attendee1] Generally.

[@JonathanD] A SAN, generally, is separate, on its own network, and doesn’t have shares.

[@JonathanD] It has LUNs

[@JonathanD] which are assigned to hosts.

[@JonathanD] a LUN is a Logical unit number. to a server, a LUN is a physical disk.

[@JonathanD] on a scsi bus, each drive is a LUN.

[Attendee2] can any one disk (or array ) be split into multiple LUNs ?

[@JonathanD] Attendee2: yes. We’ll take a look at how that works in a moment.

[@JonathanD] so, a LUN, presented to a host, is functionally the same as a physical scsi disk installed IN that host.

[@JonathanD] that’s the single biggest difference between SAN and NAS.

[@JonathanD] so, let’s take a look at our SAN here.

[Attendee1] JonathanD: Try not to click around without giving at least a sec or two.

[Attendee1] We won’t be able to follow the screen updates.

[Attendee3] but the ds4k series doesn’t use scsi, it uses SATA?

[@JonathanD] this is a demo system. It has 2 drawers, one with 16 SATA drives, one with 10 FC drives.

[@JonathanD] Attendee3: it uses SATA or FC. FC is functionally equivalent to SCSI

[@JonathanD] from a performance perspective

[@JonathanD] but all these disks are connected to the 4000 via fiber

[@JonathanD] so, you can “see” the disks right now, right?

[Attendee2] and to the hosts via …. ?

[@JonathanD] Attendee2: fiber

[@JonathanD] on the right, we have luns.

[@JonathanD] we’re going to look at how you create a LUN and where it comes from

[@JonathanD] everyone ready?

[@JonathanD] I have about 600GB of unconfigured Fibre disk space.

[@JonathanD] as you can see.

[@JonathanD] that’s those 7 drives which currently have little purple things under them.

[@JonathanD] you can see here the drive selection and raid options.

[@JonathanD] we’re going to create a 3 disk raid 5, right now.

[@JonathanD] which will give us 135GB, after parity.
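The parity math above works out like this (a minimal sketch: the RAID formulas are generic rules of thumb, and the ~67.5GB per-drive figure is inferred from the 135GB result rather than stated in the demo):

```python
def usable_capacity(n_drives, drive_size_gb, raid_level):
    """Rough usable capacity after redundancy overhead (ignores formatting loss)."""
    if raid_level == 0:
        return n_drives * drive_size_gb
    if raid_level in (1, 10):
        return n_drives * drive_size_gb / 2    # mirrored: half the raw space
    if raid_level == 5:
        return (n_drives - 1) * drive_size_gb  # one drive's worth goes to parity
    raise ValueError(f"unsupported RAID level: {raid_level}")

# A 3-disk RAID 5 keeps two drives' worth of data; one drive's worth is parity.
print(usable_capacity(3, 67.5, 5))  # -> 135.0
```

The same formula shows why an 8-disk RAID 5 yields seven drives' worth of usable space.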

[Attendee1] What’s enclosure loss protection ?

[@JonathanD] someone let me know if I’m going too fast for updates.

[@JonathanD] Attendee1: consider the following.

[@JonathanD] You have a DS4000 with 8 enclosures.

[@JonathanD] (that is, disk cabinets)

[Attendee2] like expansion units ?

[@JonathanD] you build your arrays, as 8 disk raid 5 arrays, going vertically down the cabinets

[@JonathanD] yes, Attendee2

[@JonathanD] so disk 1 in each is part of array 1, disk 2, array 2, and so on.

[@JonathanD] if you lose an entire enclosure, somehow, you still are online.

[Attendee1] Interesting.

[@JonathanD] as you have lost only one disk from each array.

[@JonathanD] that is enclosure loss protection

[@JonathanD] losing an enclosure is a RARE thing, but if a customer does have enough enclosures to do this, we will do it.

[@JonathanD] otherwise, it’s considered minor.

[Attendee3] Ah. So that little notification is telling you if those drives are located on different enclosures?

[@JonathanD] Attendee3: well, it’s telling me that TOO many drives from this array are in the same enclosure.

[@JonathanD] you can do loss protection with only 2 enclosures, too.

[@JonathanD] with raid 10

[Attendee3] makes sense. You’d need three enclosures to do raid 5
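The enclosure loss protection rule from this exchange can be expressed as a quick check (a simplified model assuming a RAID 5 array, where losing two member drives at once is fatal; this is not the actual Storage Manager logic):

```python
def has_enclosure_loss_protection(enclosure_of_drive):
    """For a RAID 5 array: a whole-enclosure failure is survivable only if no
    two member drives sit in the same enclosure (at most one lost per array)."""
    return len(set(enclosure_of_drive)) == len(enclosure_of_drive)

# Members spread across enclosures 1, 2, 3: losing any enclosure costs one drive.
print(has_enclosure_loss_protection([1, 2, 3]))  # -> True
# Two members in enclosure 1: losing it costs two drives, and RAID 5 can't rebuild.
print(has_enclosure_loss_protection([1, 1, 2]))  # -> False
```

This also matches the point that RAID 5 needs at least three enclosures for the protection to be possible, while RAID 10 can manage with two (each mirror pair split across them).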

[@JonathanD] so now, we’re looking at the next screen. Here, we define the LUN size and name.

[@JonathanD] Attendee3: correct.

[@JonathanD] enclosures failing is pretty unlikely, anyway.

[@JonathanD] each has redundant power, and 2 ESMs, which are the devices that connect the drives to the controllers

[Attendee1] Okay, so here you are defining logical/virtual disks carved out of the array you just created?

[@JonathanD] Attendee1: exactly.

[@JonathanD] lets make 2

[@JonathanD] so, an array can be used by more than one host.

[Attendee2] that’s perfect

[Attendee1] So you could if you wanted, make the whole thing 1 giant RAID 5 + hot spares, and carve luns out for various uses, right?

[@JonathanD] Attendee1: yes, but that would be a bad idea for more than say, 8 drives.

[Attendee1] Why?

[@JonathanD] Attendee1: raid 5 rebuild times increase rapidly past 8 drives.

[Attendee1] Really?

[@JonathanD] a rebuild that took 2 hours on 8 disks could take 8 hours on 10

[Attendee1] So like a 20 drive RAID 50, or RAID 10 would be okay right?

[Attendee2] that’s really what I wanted to do: 2 RAID 5 arrays, split into 3 LUNs per array, to take care of my 6 blades

[@JonathanD] Attendee1: both would probably be fine.

[Attendee3] seems it’d be a better idea to create multiple raid 5 arrays and carve them out as needed.

[@JonathanD] so what we’re looking at now is LUN characteristics

[@JonathanD] the segment size, cache prefetch, etc.

[Attendee1] The radio buttons do what? Auto suggest geometry?

[@JonathanD] Attendee1: yes.

[@JonathanD] we generally go with cache read prefetch on, and 256 for segment size for mixed workloads

[@JonathanD] unless you have an app making small writes, it’s a good fit

[Attendee1] So like a webserver, you might use smaller segments, right?

[@JonathanD] controller ownership defines which controller owns that lun.

[@JonathanD] Attendee1: a webserver with lots of small files, sure.

[@JonathanD] 64K might be good

[Attendee1] Where webpages are typically under 20k.

[Attendee2] ok well what about something like an Exchange DB? that writes a lot of small things but large things consistently too

[@JonathanD] 256, Attendee2

[@JonathanD] best for mixed loads like that

[@JonathanD] 128 would be acceptable too
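The segment-size rules of thumb from this exchange can be collected into a small lookup. The values come straight from the discussion above, but `suggest_segment_size_kb` itself is a hypothetical helper; real tuning should follow your measured I/O profile, and the setting can be changed after install:

```python
def suggest_segment_size_kb(workload):
    """Rule-of-thumb segment sizes (KB) from the session's guidance.
    Illustrative starting points, not hard requirements."""
    suggestions = {
        "mixed": 256,        # general/mixed loads, e.g. an Exchange DB
        "database": 128,     # what the DB preset picks; fine for mixed too
        "small_files": 64,   # e.g. a webserver serving many small pages
    }
    return suggestions[workload]

print(suggest_segment_size_kb("mixed"))  # -> 256
```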

[Attendee2] so I don’t really want to use the DB radio?

[@JonathanD] you can change it, after install

[@JonathanD] you can, Attendee2

[@JonathanD] it picks 128, which would be fine

[@JonathanD] generally you want “map later” here.

[@JonathanD] mapping to default only applies on DS4000 connected to a single server or cluster

[Attendee1] What’s a storage partition?

[@JonathanD] Attendee1: a storage partition is a segmentation of the storage system.

[@JonathanD] without ANY, all hosts see all luns.

[Attendee1] So, it’s used to divvy up luns for connecting computers?

[@JonathanD] right.

[@JonathanD] basically

[@JonathanD] the 4000 supports up to 64

[@JonathanD] a cluster with shared disk uses 1, not 1 per node.

[@JonathanD] since all nodes need the shared disk
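A storage partition behaves roughly like LUN masking: with none defined, every host sees every LUN; otherwise a host sees only what its partition maps, and a shared-disk cluster's nodes share one partition. A hypothetical model of that behavior (not the DS4000 API; all names are made up):

```python
def visible_luns(host, partitions, all_luns):
    """LUNs a host can see. With no storage partitioning at all (None),
    every host sees every LUN; otherwise a host sees only its partition's LUNs."""
    if partitions is None:
        return set(all_luns)
    return partitions.get(host, set())

all_luns = {"HS21_Lun0", "Data1", "Data2"}
# A two-node cluster sharing disk uses ONE partition covering both nodes.
partitions = {"node1": {"Data1"}, "node2": {"Data1"}}
print(visible_luns("node1", partitions, all_luns))  # -> {'Data1'}
print(visible_luns("node1", None, all_luns))        # all three LUNs
```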

[@JonathanD] so, we have 2 luns now

[Attendee2] Did you mean to put that on a diff controller ?

[Attendee2] I think you used A for the first one.

[@JonathanD] yes.

[Attendee2] why?

[@JonathanD] it alternates back and forth

[@JonathanD] ideally, you want to balance between them.

[@JonathanD] you want equal load on each controller

[Attendee2] so if i have 4 LUNs in an array i want 2 of them to be on one controller and 2 on the other

[@JonathanD] yes.
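The balancing rule is easy to sketch: alternate preferred ownership so each controller carries about half the LUNs. This is a hypothetical helper assuming two controllers named A and B, mirroring what the creation wizard does by default:

```python
def assign_controllers(lun_names):
    """Alternate preferred ownership between controllers A and B so each
    carries roughly half the LUNs."""
    return {lun: ("A" if i % 2 == 0 else "B") for i, lun in enumerate(lun_names)}

print(assign_controllers(["Lun1", "Lun2", "Lun3", "Lun4"]))
# -> {'Lun1': 'A', 'Lun2': 'B', 'Lun3': 'A', 'Lun4': 'B'}
```

So with 4 LUNs in an array, 2 end up preferred on each controller, as discussed above.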

[Attendee2] well I think I’ll end up with the single-controller 3200, so I shouldn’t have to worry about that!

[@JonathanD] (this is the san switch)

[Attendee3] Ok, what are you doing here?

[@JonathanD] well, typically we would do zoning here.

[@JonathanD] we don’t need to, because they are already there.

[Attendee3] define zoning for me, please.

[@JonathanD] zoning determines which blades and servers are allowed to see which storage systems, and tapes, and other FC things.

[@JonathanD] it’s similar to VLANs, in ethernet networking.
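The VLAN analogy can be made concrete: two FC ports can talk only if some zone contains both, just as two Ethernet ports need a common VLAN. A simplified model (the zone and port names here are made up, and real zoning lives in the fabric switch config):

```python
def can_see(port_a, port_b, zones):
    """Two FC ports can communicate only if at least one zone contains both,
    much like two Ethernet ports needing a common VLAN."""
    return any(port_a in members and port_b in members for members in zones.values())

zones = {"zone_hs21_storage": {"HS21_port0", "DS4700_A1"}}
print(can_see("HS21_port0", "DS4700_A1", zones))  # -> True
print(can_see("LS21_port0", "DS4700_A1", zones))  # -> False (not zoned together)
```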

[Attendee1] If I remember correctly fiber channel is an explicitly defined network if you will.

[Attendee1] Not automatic like normal TCP/IP networks.

[@JonathanD] as you can see here, our HS21 blade can see DS4700_A1

[@JonathanD] which is port 1 on controller A of the 4700

[@JonathanD] here we have the masks.

[Attendee1] Can you also do RADIUS or other authentication?

[@JonathanD] no, there’s no provision for that in fiber channel

[Attendee1] Is each blade in a chassis an available endpoint?

[@JonathanD] yes.

[@JonathanD] you can see here, we have an HS21 and an LS21 blade, each with its own ports defined

[Attendee2] and the hosts will autodetect from the chassis to the SAN switch?

[@JonathanD] Attendee2: basically. once the zone is in place.

[@JonathanD] the 4000 sees the ports, you tell the 4000 which ports belong to which host

[Attendee1] Okay, what are we doing now?

[@JonathanD] note: this blade has NO internal disks

[@JonathanD] now, we’re going to put our new luns on a blade 🙂

[@JonathanD] and a windows boot lun, as well

[@JonathanD] I’m just taking off the existing linux boot lun and shared lun

[@JonathanD] so, we just added the lun “HS21_Lun0”

[Attendee1] gvstg01 is what kind of device?

[@JonathanD] as LUN0

[Attendee1] Is that the switch?

[@JonathanD] a ds4700

[@JonathanD] no, it’s the storage system.

[Attendee1] Okay, so it’s the FC controller we configured earlier?

[@JonathanD] right, we’re still on it now

[Attendee1] Host HS21 is the chassis w/ the blade we are trying to setup right?

[@JonathanD] so, I’m adding the 2 luns we created earlier

[@JonathanD] HS21 is the specific blade, in the chassis

[@JonathanD] in this case, blade 2

[Attendee1] Okay, so the specific blade.

[@JonathanD] I just added the 2 luns we created, as luns 1 and 2

[@JonathanD] and you can see them in the list here.

[@JonathanD] except I added them to the wrong host 🙂

[Attendee3] changing them seems easy enough.

[Attendee3] did you do that on purpose? 🙂

[@JonathanD] hehe, catch on quick, Attendee3 🙂

[@JonathanD] so here is our HS21

[Attendee3] I’m wise to the sneaky demo guy tricks..

[@JonathanD] and we’re going to hope that nobody changed what’s installed on that Lun0

[@JonathanD] cause if they did we’ll have to install windows 🙂

[@JonathanD] tis the danger of an open lab 🙂

[Attendee1] So, does this device support snapshots of luns?

[@JonathanD] you can see we have 3 luns, the 3 we just added.

[@JonathanD] Attendee1: you can make a flashcopy, yes.

[@JonathanD] it’s 100% space though.

[@JonathanD] and it’s a cost option

[Attendee3] What is a flashcopy?

[@JonathanD] duplicates a lun, Attendee3

[@JonathanD] you can duplicate it to a DS4000 elsewhere, too

[@JonathanD] I just configured us to boot from lun 0

[@JonathanD] if all is well, we should be able to reboot into windows.

[Attendee2] so say i have a LUN with server 2k3 on it, i can just copy that to another LUN that i intend on using for another host for the same reason

[@JonathanD] you wouldn’t copy to a lun, a new one would be created by the copy process

[Attendee1] You’ll need to sysprep.

[Attendee3] generate new SIDs and all that, but yes.

[@JonathanD] Attendee2: there are better ways of doing it, in any event.

[@JonathanD] it’s primarily a DR feature

[Attendee2] well it would be killer if I could use 1 LUN for all hosts’ boot partition!

[@JonathanD] caching!

[@JonathanD] Attendee2: can’t do that, windows doesn’t support sharing disks like that.

[Attendee2] JonathanD: I didn’t think I could, for registry issues alone

[@JonathanD] yet, anyway

[@JonathanD] (This is the switch in the chassis)

[@JonathanD] you’ll notice the interface is nearly identical

[Attendee3] So, things blinked past sorta quickly for me. To set up the LUN as a bootable disk, you used the utility on the blade?

[Attendee1] shrink the top console bar too.

[@JonathanD] can’t change it without kicking everyone out.

[@JonathanD] Attendee3: yes.

[@JonathanD] Attendee3: basically like setting boot order.

[@JonathanD] you can say “use this, then use this”

[@JonathanD] as you would with an addon raid card.
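The boot-order idea here, "use this, then use this," is just ordered fallback: try each configured boot LUN and take the first one the HBA can actually reach. A hypothetical sketch of that behavior, not the actual QLogic BIOS logic:

```python
def pick_boot_target(configured_order, reachable):
    """Return the first configured boot LUN that is actually reachable,
    or None if none are (ordered fallback, like HBA or RAID-card boot setup)."""
    for target in configured_order:
        if target in reachable:
            return target
    return None

# Primary boot LUN unreachable, so the HBA falls back to the next entry.
print(pick_boot_target(["HS21_Lun0", "Backup_Lun"], {"Backup_Lun"}))  # -> Backup_Lun
```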

[Attendee1] That was part of the setup function in the QLogic controller attached to the chassis for the blades right?

[@JonathanD] the qlogic is the HBA, in the blade.

[@JonathanD] and there are our disks.

[Attendee1] So, diskless blades.

[Attendee3] Cool, so from this point, you just treat it like a normal disk.

[@JonathanD] yup

[Attendee1] Now, let’s say this Blade #2 kicks the bucket.

[Attendee1] How do you fix it?

[@JonathanD] yeah, I just formatted

[@JonathanD] Attendee1: remember when I moved it from the LS21 to the HS21?

[Attendee3] can you just slide in a new blade, reassign the luns and run with it?

[@JonathanD] Attendee3: exactly.

[Attendee1] Nice, so how much to buy this technology?

[@JonathanD] a DS4700 is gonna cost you around 40k, usually

[Attendee1] And the bladecenter + blades is how much?

[@JonathanD] the expensive part there is the SAN switches.

[@JonathanD] they’re 15k. each. for 2.

[@JonathanD] which is more than the chassis itself.

[@JonathanD] so it’s not cheap 😛

[Attendee2] Attendee1: the chassis is like $5k and the blades range from $1000 – $20,000 (for that 4x dual core AMD)

[@JonathanD] right, Attendee2

[Attendee3] So, odds are good I’ll have to stick with my HP Storageworks SA1000

[@JonathanD] how does that connect, Attendee3?

[Attendee3] fiber

[Attendee3] It was already set up when I got here, the idea was to have centralized storage for a pair of production servers, however, it never worked out that way, so now it’s just being used as a glorified external drive.

[Attendee3] It’s got four 72gig U320 drives in it.

[@JonathanD] 47K for a DS4700 with 14 500GB SATA drives.

[@JonathanD] 7K of that was for an AIX kit for it

[@JonathanD] so 40k, yup

[@JonathanD] Now, a 3400, which can get you about 3TB of storage, can be had for around 15K, *WITH* the 3TB in it.

[@JonathanD] that’s 3TB usable, after parity, hot spare, etc.

[Attendee1] Does IBM sell equipment without support contracts, or do they require support contracts?

[@JonathanD] almost anything you can get without support.

[@JonathanD] I’m pretty sure you can’t get a zSeries without it, though.

[@JonathanD] keep in mind, this is in an array that is still building 🙂

[Attendee3] Well, thanks for the demo, that was really informative. 🙂

[@JonathanD] Attendee1: what performance did you get pre-build?

[Attendee1] Like 8MB/sec or something.

[@JonathanD] so, pretty competitive, eh? 😉

[@JonathanD] no problem, Attendee3

[@JonathanD] hope everyone enjoyed it.

[@JonathanD] oh, one more thing.

[@JonathanD] Attendee1: there is your perf, on just 3 drives, degraded.

[Attendee3] good demo. I’m assuming the LUN concepts will translate to other SAN devices?

[@JonathanD] yup

[Attendee1] JonathanD: Thanks, I think I learned a bunch.
