Comments on Ramblings from Richard's Ranch: Latency and I/O Size: Cars vs Trains

Richard Elling (2012-06-02):

It is a myth that jumbo frames are 9k. In fact there is no standard jumbo frame size, which is why changing them can be frustrating.

It is also a myth that jumbo frames are always faster. They can get stuck in other coalescing software/firmware and actually be slower for workloads where the average transfer is less than half the MTU.

As with most cases where you deviate from the standard, testing in your environment with your workload is necessary to optimize.

Anonymous (2012-06-02):

Right. Larger block sizes work better for workloads that are more like "streaming", where there are few or no cars on the road. (Actually, examining trains in places like Russia, where trains share the road with cars, might be an interesting analogy.) Trains work best when they are isolated.

With respect to jumbo frames - Ethernet is an extreme case because 1500 bytes is *so* tiny that you wind up paying a huge overhead just to send even a single 8K page. Jumbo frames are designed to minimize that overhead, and yet are still small enough (9K, usually) that they look more like minivans than trains. :-)

Another consideration with Ethernet is LSO and other TOE-ish techniques.
With these, you try to "gather up" a large chunk of data (say, 1MB), and that does cause a stoppage at the NIC, or elsewhere in the stack, especially since TCP control packets have to get involved to transmit this data. I view LSO as a sadly necessary bit of infrastructure when folks want to send large TCP segments over 10GbE -- mostly because CPUs still have some trouble keeping up with 10GbE. (As CPUs get faster, TOE should matter less -- except network pipes are getting faster too.) We used to worry a lot about this stuff at 1GbE, but modern CPUs hardly break a sweat filling a 1GbE pipe, even without *any* offloading.
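The "huge overhead just to send a single 8K page" point above can be sketched with some framing arithmetic. A minimal sketch, assuming the usual Ethernet on-the-wire overhead (14-byte header + 4-byte FCS + 8-byte preamble + 12-byte inter-frame gap = 38 bytes) and plain IPv4+TCP headers with no options; these figures are illustrative assumptions, not numbers from the comments:

```python
# Back-of-the-envelope Ethernet framing overhead for one 8K page.
import math

ETH_WIRE_OVERHEAD = 38   # preamble + header + FCS + inter-frame gap (bytes)
IP_TCP_HEADERS = 40      # IPv4 (20) + TCP (20), no options (bytes)

def wire_efficiency(payload_bytes: int, mtu: int) -> float:
    """Fraction of on-the-wire bytes that are actual payload."""
    per_frame_payload = mtu - IP_TCP_HEADERS
    frames = math.ceil(payload_bytes / per_frame_payload)
    wire_bytes = payload_bytes + frames * (IP_TCP_HEADERS + ETH_WIRE_OVERHEAD)
    return payload_bytes / wire_bytes

for mtu in (1500, 9000):
    frames = math.ceil(8192 / (mtu - IP_TCP_HEADERS))
    print(f"MTU {mtu}: {frames} frame(s), "
          f"{wire_efficiency(8192, mtu):.1%} of wire bytes are payload")
```

The wire-byte difference is modest (roughly 95% vs 99% efficiency), but the 8K page costs six frames at MTU 1500 versus one at 9000, and per-frame processing is where the CPU overhead lives. It also shows Richard's caveat: a transfer well under half the MTU fits in one frame either way, so jumbo frames buy nothing there.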
Anonymous (2012-04-22):

Two related anecdotes:

1) We used to compare T1 line performance to 24x 56k modems in parallel. You can move 1.5Mbps across both, but eventually the receipt of data has to be acknowledged. The 60ms+ turnaround time for the modems kills the comparison...

2) When running big VDI workloads on NAS, the conventional wisdom is wrong too. In reality, tuning the NAS for "acceptable bandwidth" and "very low latency" allows for more desktops per storage pool. Traditional tuning results in extremely slow file system operations and mount resets. This type of workload is a VERY good example of your "cars waiting at the crossing" analogy...

It stands to reason that in "cloudy" infrastructures, tuning for latency provides for a wider set of "acceptable" workloads, since workloads of many profiles will inhabit the same infrastructure (storage and network). I'd submit that NFS/NAS is more susceptible to this kind of phenomenon than block storage, because both data and file system semantics are subject to the same delays. However, I've seen the same type of latency-bound performance characteristic in iSCSI systems under VDI workloads, resulting in catastrophic failures (LUN timeouts, APD failures, lost resource locks, etc.). The damn train sometimes takes too long to clear the crossing...
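Anecdote 1 can be sketched with a simple stop-and-wait model: the sender transmits a window of data, then stalls for one turnaround before sending more. The 8KB window and the two latency figures are assumptions chosen for the sketch (the comment only gives "60ms+" for the modems), not measurements:

```python
# Illustrative stop-and-wait throughput: similar raw bandwidth,
# very different acknowledgement turnaround time.

def effective_bps(link_bps: float, window_bytes: int, rtt_s: float) -> float:
    """Throughput when the sender waits one round trip per window."""
    serialize_s = window_bytes * 8 / link_bps
    return window_bytes * 8 / (serialize_s + rtt_s)

WINDOW = 8 * 1024  # bytes sent before waiting for an ACK (assumed)

t1 = effective_bps(1_544_000, WINDOW, 0.010)        # T1, ~10 ms turnaround
modems = effective_bps(24 * 56_000, WINDOW, 0.060)  # 24x 56k, 60 ms+ turnaround

print(f"T1:     {t1 / 1000:.0f} kbps effective")
print(f"Modems: {modems / 1000:.0f} kbps effective")
```

Both links offer roughly 1.5Mbps raw, but under this model the modem bank delivers about half the effective throughput of the T1, purely because each window pays the longer turnaround: latency, not bandwidth, sets the ceiling.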