Observing Failover of Busy Pools

When watching failover tests under load, the system-level effects of failover are easy to see in a single chart.

But first, some background. At InterModal Data, we've built an easy-to-manage system of many nodes that provides scalable NFS and iSCSI shares into the petabyte range. This software-defined storage system scales nicely on great hardware, such as the HPE systems shown here. An important part of the system is the site-wide analytics, where we measure many aspects of performance, usage, and environmental data. This data, collected from both clients and servers, is stored in an InfluxDB time-series database for analysis.
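As a rough illustration of how such per-pool charts could be pulled from a time-series store, here is a minimal sketch that builds an InfluxQL query. The measurement name (`vfs_ops`), field (`vops`), and tag (`pool`) are hypothetical stand-ins; the actual schema of the site-wide analytics is not described in the post.

```python
# Hedged sketch: charting per-pool VOPS from InfluxDB.
# "vfs_ops", "vops", and the "pool" tag are assumed names, not the real schema.
def vops_query(pools, window="1m"):
    """Build an InfluxQL query averaging VOPS per pool per time window."""
    pool_filter = " OR ".join(f"\"pool\" = '{p}'" for p in pools)
    return (
        f"SELECT mean(\"vops\") FROM \"vfs_ops\" "
        f"WHERE ({pool_filter}) AND time > now() - 1h "
        f"GROUP BY time({window}), \"pool\""
    )

print(vops_query(["p115", "p116"]))
```

Grouping by `time(1m)` matches the one-minute sampling interval used in the test below; grouping by the `pool` tag yields one series per pool, which is what makes a failover visible as one line dropping on one node and rising on the other.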

For this test, the NFS clients throw a sustained, mixed read/write, mixed-size, mixed-randomness workload at multiple shares on two pools (p115 and p116) served by two Data Nodes (s115 and s116). At the start of the sample, both pools are served by s116. At 01:34, pool p115 is migrated (failed over, in cluster terminology) to s115. Samples are taken every minute, but the actual failover time for pool p115 is 11 seconds under an initial load of 11.5k VOPS (VFS-layer operations per second). After the migration, both Data Nodes are serving the workload, so per-pool performance increases to 16.5k VOPS. The system goes from serving an aggregate of 23k VOPS to 33k VOPS -- a good reason to re-balance the load.
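The before/after arithmetic is simple enough to spell out. Using the per-pool figures quoted above, this small sketch reproduces the aggregate numbers:

```python
# Aggregate VOPS before and after the failover, from the figures in the text.
per_pool_before = 11_500   # VOPS per pool while both pools sit on s116
per_pool_after = 16_500    # VOPS per pool once p115 has moved to s115

aggregate_before = 2 * per_pool_before  # both pools on one Data Node
aggregate_after = 2 * per_pool_after    # one pool per Data Node

gain_pct = 100 * (aggregate_after - aggregate_before) / aggregate_before
print(aggregate_before, aggregate_after, round(gain_pct, 1))
# → 23000 33000 43.5
```

In other words, spreading the two pools across both Data Nodes buys roughly a 43% increase in aggregate throughput for this workload.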
