Two Write Modes of RAID Card Cache
Sep 02, 2024
For write IO from the upper layer, a RAID controller's cache can operate in one of two modes:
(1) WriteBack mode: When a write arrives from the upper layer, the RAID controller stores the data in its cache and immediately reports the IO as complete to the host. The host can move on to the next IO without waiting, while the data sits in the RAID card's cache and has not yet been written to the disk. The controller then optimizes the physical writes, flushing them individually, in batches, or reordered via queueing techniques. This approach has a critical drawback: if a power outage occurs, the data in the RAID card's cache is lost even though the host believes the IO completed, leaving the upper and lower layers significantly inconsistent. For this reason, certain critical applications, such as databases, implement their own consistency checks.
This is also why high-end RAID cards use batteries to protect the cache. In the event of a power outage, the battery keeps the cache powered so the data is preserved. When power is restored, the RAID card first writes the incomplete IOs still held in the cache to the disk.
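As a rough illustration, the following is a minimal write-back sketch in Python, not actual RAID firmware; the Disk and WriteBackCache classes and their method names are hypothetical. It shows the three behaviors described above: the host is acknowledged as soon as data lands in the cache, dirty blocks are flushed to disk in the background, and a battery-protected cache is replayed to disk after power returns.

```python
from collections import OrderedDict

class Disk:
    """Stand-in for the physical drives behind the RAID controller."""
    def __init__(self):
        self.blocks = {}

    def write(self, lba, data):
        self.blocks[lba] = data          # models the slow physical write

class WriteBackCache:
    def __init__(self, disk):
        self.disk = disk
        self.dirty = OrderedDict()       # lba -> data not yet written to disk

    def write_io(self, lba, data):
        # Data lands in cache and the host is acknowledged immediately.
        self.dirty[lba] = data
        return "complete"                # host moves on without waiting for the disk

    def flush_batch(self, batch=64):
        # Background flush: push dirty blocks to disk in LBA order so the
        # drives see batched, more sequential writes.
        for lba in sorted(list(self.dirty)[:batch]):
            self.disk.write(lba, self.dirty.pop(lba))

    def replay_after_power_restore(self):
        # With a battery-protected cache, firmware writes out every dirty
        # block before servicing new IO.
        while self.dirty:
            self.flush_batch()
```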
(2) WriteThrough mode: In this mode, an IO from the upper layer is reported complete only after the RAID controller has written the data to the disk. This guarantees high reliability. The cache's performance advantage for writes is lost in this mode, but its buffering function still applies.
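For contrast, here is a write-through sketch under the same assumptions (reusing the hypothetical Disk class above): the host is acknowledged only after the slow disk write returns, but the data is still kept in memory so later reads can be served from the cache.

```python
class WriteThroughCache:
    def __init__(self, disk):
        self.disk = disk
        self.cache = {}

    def write_io(self, lba, data):
        self.disk.write(lba, data)   # the physical write happens first
        self.cache[lba] = data       # the cache still buffers data for reads
        return "complete"            # only now is the host acknowledged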
Beyond write caching, the read cache is also very important. Cache algorithms are a complex subject with an elaborate set of mechanisms. One such algorithm is prefetch: data on the disk that the host is "likely" to access next is read into the cache before the host issues the read IO request. How is this "likely" determined?
The assumption is spatial locality: the host's next IO is likely to target a location on the disk adjacent to the data just read. This holds very well for sequential reads of logically contiguous data, as in FTP transfers of large files or video-on-demand services. Even when many small, fragmented files happen to be stored contiguously in adjacent locations on the disk, prefetching greatly improves performance, because reading small files demands high IOPS; without a cache, every IO depends on a head seek, which takes a long time.
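To make the spatial-locality assumption concrete, here is a minimal read-ahead sketch, again reusing the hypothetical Disk class; the window size of 8 is an arbitrary illustrative choice. On a cache miss, the controller reads not only the requested block but also the next few adjacent blocks, betting that the following IO will land nearby.

```python
class ReadAheadCache:
    def __init__(self, disk, window=8):
        self.disk = disk
        self.window = window         # how many adjacent blocks to prefetch
        self.cache = {}

    def read_io(self, lba):
        if lba in self.cache:
            return self.cache[lba]   # hit: served from cache, no head seek
        # Miss: fetch the requested block plus the next `window` blocks,
        # assuming the next read will target an adjacent location.
        for n in range(lba, lba + self.window + 1):
            self.cache[n] = self.disk.blocks.get(n)
        return self.cache[lba]
```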
STOR Technology Limited provides high-quality products such as the 9560-16I, 9560-8I, 9361-4I, and 9540-8I, along with reliable service and assured after-sales support. You are welcome to visit us and discuss related products.
Our website: https://www.cloudstorserver.com/
Contact us: alice@storservers.com / +86-755-83677183
WhatsApp: +8613824334699