Monday, September 24, 2012

Designing ZFS at Datacenter Scale

Here is the abstract for my talk next week at zfsday!

The ZFS hybrid storage pool model is very flexible and allows many different combinations of storage technology to be used. This presents a dilemma to the systems architect: what is the best way to build and configure a pool to meet business requirements? We'll discuss modeling ZFS systems and hybrid storage pools at a datacenter scale. The models consider space, performance, dependability, and cost of the storage devices and any interconnecting networks (including SANs). We will also discuss methods for measuring the performance of the system.

I hope to see your smiling face in the audience, or you can register to see the live streaming video. Sign up today!

Friday, September 21, 2012

OmniTI throws its weight behind illumos/ZFS day

Theo and the crew at OmniTI have added their support to the illumos/ZFS day event in San Francisco next month. Thanks guys! We look forward to hearing more about the OmniOS distribution!

Tuesday, September 18, 2012

ZFS day is coming soon!

We are hosting illumos and ZFS day events in San Francisco October 1 - 3, 2012. Our good friends from DDRdrive, Delphix, Joyent, and Nexenta are also sponsoring the event. I will be talking about how to optimize the design of ZFS-based systems and how to get the best bang for your buck. Jason and Garrett are also on the speakers list, talking about how illumos has really taken hold as a foundation for building modern businesses.

For all of the Solaris fans and alumni, there is also a Solaris Family Reunion on Monday evening.

There is no cover charge; just register at www.zfsday.com and join us.

We look forward to seeing you there... even if you have to sneak out of a boring Oracle Open World session!

Wednesday, September 12, 2012

cifssvrtop updated

When I originally wrote cifssvrtop (top for CIFS servers), all of the systems I tested with had one thing in common: the workstations (clients) had names. Interestingly, I recently found a case where the workstations are not named, so the results were less useful than normal.


2012 Sep 11 23:50:48, load: 3.11, read: 0        KB, write: 176448   KB
Client          CIFSOPS   Reads  Writes   Rd_bw   Wr_bw    Rd_t    Wr_t  Align%
                   3391       0    3033       0  192408       0      85     100
all                3391       0    3033       0  192408       0      85     100

In this case, there are supposed to be 5 clients, but none of them have workstation names, so they all get lumped together under "".

The fix is, of course, easy and obvious: add an option to distinguish clients by IPv4 address instead of by workstation name. This is also more consistent with nfssvrtop and iscsisvrtop, which is a good thing. Now the output looks like:

2012 Sep 12 19:52:23, load: 2.50, read: 0        KB, write: 1766632  KB
Client          CIFSOPS   Reads  Writes   Rd_bw   Wr_bw    Rd_t    Wr_t  Align%
172.60.0.101        452       0     441       0   27984       0     108     100
172.60.0.104        488       0     473       0   30072       0     101     100
172.60.0.103        505       0     490       0   31068       0     849      99
172.60.0.102        625       0     614       0   38979       0    2710      99
172.60.0.105        792       0     773       0   49002       0    4548      99
all                2864       0    2792       0  177106       0    2030      99

Here we can clearly see the clients separated by IPv4 address. The output is sorted by CIFSOPS, which is the easiest ordering to get from dtrace aggregations.
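
For the curious, the heart of the change is nothing more than the aggregation key. Here is a minimal sketch of the idea (not the actual cifssvrtop code), assuming the illumos smb provider fires op-*-start probes with a conninfo_t as args[0], the same convention nfssvrtop relies on with the nfsv3/nfsv4 providers. The real probes and fields are in the script on github.

#!/usr/sbin/dtrace -s
/*
 * Sketch only: count CIFS operations per client, keyed by the client's
 * remote address instead of a workstation name. The smb op-*-start
 * probes and the conninfo_t argument (ci_remote) are assumptions
 * borrowed from the nfsv3/nfsv4 provider convention.
 */
#pragma D option quiet

smb:::op-*-start
{
        /* one aggregation bucket per client IP address */
        @cifsops[args[0]->ci_remote] = count();
}

tick-10sec
{
        printf("%-16s %8s\n", "Client", "CIFSOPS");
        /* printa() prints aggregations sorted by value, so the busiest client prints last */
        printa("%-16s %@8d\n", @cifsops);
        trunc(@cifsops);
}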

To implement this change, reporting by IPv4 address is now the default, and a new "-w" flag prints the workstation name instead. If you prefer the previous defaults, feel free to fork it on github.
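
For example (the exact argument syntax is whatever the "-h" usage reports):

./cifssvrtop -w     # report by workstation name instead of IPv4 address
./cifssvrtop -h     # show the usage summary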

I've updated the cifssvrtop sources on github; check it out. The code has the details, the "-h" option shows usage, and there is a PDF presentation there to accompany the top tools. Finally, feedback and bug reports are always welcome!