OpenZFS hardware requirements
I'm curious since I am helping someone build a ZFS array right now, and I have a few questions. First of all, I might add that I don't care about the ECC vs. non-ECC debate; to me it makes no sense to argue over it. But I am curious about the behaviour of the filesystem itself: how it can be bottlenecked by certain types of loads and how it deals with those bottlenecks.

So if one of you can briefly give an example of a load, how the filesystem deals with it, and what resources it uses for that, then maybe we can get a better idea of how the lack of a certain resource will impact performance. For example, if I have no deduplication or compression enabled on a zpool with two RAID-Z2 vdevs of 8x2TB drives each, no separate ZIL device (SLOG), and no L2ARC, and I do a write over iSCSI on a 10Gb connection, how does the filesystem handle the writes, and what resources will it use? If someone can provide or link to a good explanation of the different kinds of loads on the filesystem and how each stresses resources, then a mythical rule of thumb won't be necessary.
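To make the async write path in that example concrete, here is a back-of-envelope sketch (a toy model, not the real ZFS code; the tunable names and defaults are from OpenZFS and may differ by platform and version): incoming writes land in RAM as dirty data, are batched into transaction groups, and are flushed to the vdevs roughly every `zfs_txg_timeout` seconds (default 5), with in-flight dirty data capped by `zfs_dirty_data_max` (by default about 10% of physical RAM, further limited by `zfs_dirty_data_max_max`).

```python
# Back-of-envelope model of ZFS async write buffering (illustrative only).
# Assumptions: 10GbE at line rate, zfs_txg_timeout=5s, dirty-data cap of
# 10% of RAM -- check your platform's actual module parameters.

def dirty_data_max(ram_bytes, pct=0.10):
    """Approximate default dirty-data cap: ~10% of physical RAM
    (real systems also apply zfs_dirty_data_max_max; ignored here)."""
    return ram_bytes * pct

def data_per_txg(link_gbits=10, txg_timeout_s=5):
    """Bytes the link could deliver in one transaction-group window."""
    return link_gbits / 8 * 1e9 * txg_timeout_s

ram = 16 * 2**30            # assume a 16 GiB host
arriving = data_per_txg()   # ~6.25e9 bytes in 5 s at 10 Gb/s
cap = dirty_data_max(ram)   # ~1.7e9 bytes of dirty data allowed

# If the link can deliver more per txg window than the dirty-data cap,
# ZFS throttles incoming writes until the vdevs absorb the backlog --
# so sustained 10GbE writes are paced by disk speed, not RAM size.
writes_throttled = arriving > cap
print(writes_throttled)     # True for this configuration
```

The takeaway matches the thread: for plain async writes, RAM sets how big a burst can be absorbed, while sustained throughput is bounded by the vdevs.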
That 1GB of RAM per 1TB of data stored "rule" is nonsense, though. The number is a function of multiple variables, not a constant. What makes it even worse is that I see those large memory requirements being advised even when deduplication is not going to be used. I know RAM is cheap, so 16GB is a fine starting point; just please never use the 1GB-5GB of RAM per TB advice ever again, and please spread the word when necessary.

Thanks for posting this - I myself have been guilty of perpetuating this myth, as it was very widely spread, even among ZFS forums, posts, guides, etc. I took that information as valid, since I thought I was getting it from "experts" on that software. Yeah, the problem is that on the face of it it seems to make a lot of sense, and to top that off it's so perpetuated everywhere that if you do a quick fact check, it checks out. It's not until you really do some in-depth research into ZFS itself, technical papers, etc., that the memory advice starts to break down. Then you also have to stop and think: what if my array is 800TB? Holy freakin hell, this doesn't make any sense at all!

I have had the good fortune to have worked with many enterprise storage arrays, so I pretty much knew from the get-go that this advice was rather shaky. For example, a NetApp 8060 controller has 64GB of RAM and supports 6PB of raw capacity, and you can use the deduplication + compression features (performance load permitting). You can also cluster these controllers together for a maximum of 144PB. Flash pool max capacity is much less, but that is expected, as SSDs can put way more load on the CPU due to their increased performance - and that is a CPU limit, not a RAM limit.
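The "function of multiple variables, not a constant" point is easy to show with arithmetic. A commonly cited ballpark (an approximation, not an exact OpenZFS figure - real entry sizes vary by version and pool layout) is on the order of ~320 bytes of in-core DDT per unique block, so the RAM a dedup pool wants depends on block size as much as on raw capacity:

```python
# Rough DDT memory estimate (illustrative; ~320 bytes/entry is a common
# ballpark, not an exact OpenZFS figure).
DDT_ENTRY_BYTES = 320

def ddt_ram_bytes(pool_bytes, avg_block_bytes):
    """Estimated in-core DDT size if every block in the pool is unique."""
    return pool_bytes // avg_block_bytes * DDT_ENTRY_BYTES

TiB = 2**40
GiB = 2**30

# The same 100 TiB pool gives wildly different answers by block size:
print(ddt_ram_bytes(100 * TiB, 128 * 2**10) / GiB)  # 250.0 GiB at 128K records
print(ddt_ram_bytes(100 * TiB, 8 * 2**10) / GiB)    # 4000.0 GiB at 8K records
```

A 16x difference from block size alone is exactly why a flat GB-per-TB number cannot be right - and without dedup, none of this memory is required at all.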
A system with 1 GB of RAM would not have much trouble with a pool that contains 1 exabyte of storage, much less a petabyte or a terabyte. I am certain of that. The data is stored on disk, not in RAM, with the exception of the cache, which just keeps an extra copy around and is evicted as needed. The only time when more RAM might be needed is when you turn on data deduplication. That causes 3 disk seeks for each DDT miss when writing to disk and tends to slow things down unless there is enough cache for the DDT to avoid the extra seeks. The system will still work without more RAM; it is just that the deduplication code will slow down writes when enabled.
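To illustrate why those DDT misses hurt, here is a toy cost model (my own sketch under the assumptions above, not measured data): each deduped write costs 1 I/O plus ~3 extra seeks whenever the DDT lookup misses the cache, so effective write throughput degrades with the miss rate rather than failing outright.

```python
# Toy model of dedup write slowdown from DDT cache misses (illustrative).
# Assumes a spinning disk doing ~150 random IOPS and ~3 extra seeks per
# DDT miss, per the description above.

def effective_write_iops(disk_iops=150, ddt_hit_rate=1.0, seeks_per_miss=3):
    """Average writes/sec when a fraction of DDT lookups miss the cache."""
    avg_ios_per_write = 1 + (1 - ddt_hit_rate) * seeks_per_miss
    return disk_iops / avg_ios_per_write

print(effective_write_iops(ddt_hit_rate=1.0))   # 150.0: DDT fully cached
print(effective_write_iops(ddt_hit_rate=0.0))   # 37.5: every lookup misses
```

This matches the post's point: low RAM with dedup enabled means slower writes (here, roughly 4x slower in the worst case of this model), not a broken pool.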
Some well-meaning people years ago thought that they could be helpful by making a rule of thumb for the amount of RAM needed for good write performance with data deduplication. Some people then started thinking that it applied to ZFS in general. ZFS's ARC being reported as used memory rather than cached memory reinforced the idea that ZFS needed plenty of memory, when in fact it was just being used in an evictable cache. The OpenZFS developers have been playing whack-a-mole with that advice ever since.

I am what I will call a second-generation ZFS developer, because I was never at Sun and I postdate the death of OpenSolaris. The first-generation crowd could probably fill you in on more details than I could with my take on how it started. You will not find any of the OpenZFS developers spreading the idea that ZFS needs an inordinate amount of RAM, though.