SNIA SDC Dishes Up Storage Innovations


The 2018 SNIA Storage Developer Conference is a great event for learning about the latest developments in digital storage technology and storage management software. There were announcements, presentations, demonstrations and lots of networking covering topics as diverse as cloud storage, containers, NVMe (which is now adding TCP transport alongside Fibre Channel and InfiniBand support), NVMe over fabrics (NVMe-oF), new storage architectures and long-term (cold storage) data retention.

The Cloud Data Management Interface (CDMI) is an open international ISO standard, ISO/IEC 17826:2016, that defines the functional interface applications use to create, retrieve, update and delete data elements in the cloud. SNIA announced that CDMI 2.0 will be developed as an open source standard, which empowers anyone to contribute to the ISO standard by signing the SNIA Contributor License Agreement. The CDMI standard provides end users with tools to enable data access, data protection and data migration from one cloud service to another. The release of CDMI 2.0 is expected in the second half of 2019, and SNIA is currently soliciting input and contributions from the cloud storage user community.
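Because CDMI is a RESTful interface, creating a data element amounts to an HTTP PUT with CDMI-specific media types. Below is a minimal Python sketch that builds such a request (without sending it); the object path and value are invented for illustration, and the version header uses CDMI 1.1, since 2.0 was not yet released at the time of the conference.

```python
import json

CDMI_VERSION = "1.1"  # CDMI 2.0 was not yet released at the time of the SDC

def build_cdmi_create(object_path, value, mimetype="text/plain"):
    """Build the HTTP pieces of a CDMI 'create data object' request.

    The header names and media types come from the CDMI standard; the
    endpoint and object path are hypothetical.
    """
    headers = {
        "Content-Type": "application/cdmi-object",
        "Accept": "application/cdmi-object",
        "X-CDMI-Specification-Version": CDMI_VERSION,
    }
    body = json.dumps({"mimetype": mimetype, "value": value})
    # The request would be sent as: PUT <cloud-endpoint><object_path>
    return "PUT", object_path, headers, body

method, path, headers, body = build_cdmi_create("/mycontainer/hello.txt", "Hello CDMI")
print(method, path)
print(headers["X-CDMI-Specification-Version"])
```

Retrieve, update and delete map to GET, PUT and DELETE on the same object URI, which is what makes CDMI-managed data portable between cloud services.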

Swordfish is a SNIA initiative that extends the DMTF Redfish specification. It provides a unified approach to the management of storage equipment and services in converged, hyper-converged, hyperscale and cloud infrastructure environments, making it easier for IT administrators and DevOps teams to integrate scalable solutions into their data centers. Swordfish utilizes Class of Service (intent or service level) based provisioning, management and monitoring, and extends traditional storage management to cover converged environments (servers, storage and fabric together). Below is a figure showing how Swordfish adds to the Redfish standard; storage can be exposed through either a hosted service or an integrated service configuration.
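As a rough illustration of Class of Service (intent-based) provisioning, the sketch below builds a Swordfish-style volume creation payload that references a Class of Service resource rather than spelling out RAID levels or device details. The resource path follows Redfish/Swordfish URI conventions, but the service IDs, "Gold" class name and capacity are illustrative, not from a real system.

```python
import json

# Hypothetical Swordfish-style request body for creating a volume against
# a class of service. Only the URI structure reflects the real schema;
# the specific names and numbers are invented for this sketch.
service_root = "/redfish/v1"
volume_request = {
    "Name": "AppVolume1",
    "CapacityBytes": 500 * 10**9,
    # Intent-based provisioning: point at a Class of Service resource and
    # let the storage service decide how to satisfy it.
    "Links": {
        "ClassOfService": {
            "@odata.id": service_root + "/StorageServices/1/ClassesOfService/Gold"
        }
    },
}
print(json.dumps(volume_request, indent=2))
```

The payload would be POSTed to a volumes collection under the storage service; the same intent-based pattern applies whether the storage is a hosted or an integrated service.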

Swordfish added to Redfish (image from SDC presentation)

There was much discussion of the NVMe initiative and the PCIe interface upon which it is based. The figure below, from Dr. Debendra Das Sharma, a member of the PCI-SIG Board of Directors, shows current NVMe storage devices and the EDSFF form factor family, including the “ruler” configuration introduced by Intel. NVMe will get even faster with the introduction of PCIe Gen 5, which doubles the signaling rate from 16 gigatransfers per second (GT/s) to 32 GT/s.
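The doubling of the transfer rate translates almost directly into usable bandwidth, since PCIe Gen 4 and Gen 5 both use 128b/130b encoding. A quick back-of-the-envelope calculation:

```python
def pcie_lane_bandwidth_gbps(gt_per_s, encoding=128 / 130):
    """Approximate usable bandwidth per lane in GB/s.

    GT/s times encoding efficiency (128b/130b for Gen 3/4/5),
    divided by 8 bits per byte. Ignores protocol overhead.
    """
    return gt_per_s * encoding / 8

for gen, rate in (("Gen 4", 16), ("Gen 5", 32)):
    lane = pcie_lane_bandwidth_gbps(rate)
    print(f"PCIe {gen}: {rate} GT/s -> ~{lane:.2f} GB/s/lane, ~{lane * 4:.1f} GB/s for x4")
```

So a typical x4 NVMe SSD link goes from roughly 8 GB/s to roughly 16 GB/s of raw bandwidth with the move to Gen 5.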

NVMe form factors (image from SDC presentation)

Among the most interesting ideas for NVMe over fabrics (NVMe-oF) was a proposal for a distributed, self-learning NVMe-oF, as outlined in the figure below. This technology utilizes “redirectors” and hints. The initiator redirector directs I/O to the correct NVMe-oF target based on a hint table, while the target redirector forwards I/O to the correct storage node and propagates hints backward and forward, allowing fully decoupled and legacy clients. The initiator redirector updates its local hint table based upon the hints it receives about a volume and eventually learns the volume's distribution across nodes.
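The initiator-side behavior described above can be sketched as a small routing table — all class, node and target names here are invented for illustration:

```python
# Hedged sketch of an initiator-side redirector: a hint table maps LBA
# ranges of a volume to NVMe-oF targets, and hints returned by targets
# update that table over time.
class InitiatorRedirector:
    def __init__(self, default_target):
        self.default_target = default_target
        self.hints = []  # list of (start_lba, end_lba, target)

    def route(self, lba):
        """Pick the target for an I/O. With no matching hint, fall back to
        the default target, whose target-side redirector will forward the
        I/O to the right node and return a hint."""
        for start, end, target in self.hints:
            if start <= lba < end:
                return target
        return self.default_target

    def learn(self, start, end, target):
        """Apply a hint: drop overlapping entries, then record the new
        mapping. Over repeated I/O the initiator learns the volume's
        distribution across nodes."""
        self.hints = [(s, e, t) for s, e, t in self.hints
                      if e <= start or s >= end]
        self.hints.append((start, end, target))

r = InitiatorRedirector("node-a")
print(r.route(100))          # no hints yet -> default target "node-a"
r.learn(0, 1000, "node-b")   # hint propagated back from the target side
print(r.route(100))          # now routed directly to "node-b"
```

Legacy clients simply never apply hints and keep going through the default target, which is what makes the scheme fully decoupled.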

Proposal for self-learning NVMe-oF (image from SDC presentation)

There was a presentation by Matias Bjorling from Western Digital about Open-Channel SSDs. Open-Channel SSDs would align writes to internal block sizes and expose sequential-write-only LBA ranges, with a sparse addressing scheme projected onto the NVMe LBA address space. They would also support host-assisted media refresh, to improve I/O predictability, and host-assisted wear leveling.
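The sequential-write-only rule can be pictured as a write pointer per chunk of the address space, at which the host must always append. The sketch below is a simplification under assumed naming and chunk sizes, not code from the actual Open-Channel specification:

```python
# Minimal sketch of a sequential-write-only LBA range, as an Open-Channel
# SSD might expose flash geometry to the host. Sizes are illustrative.
class Chunk:
    def __init__(self, start_lba, size):
        self.start = start_lba
        self.size = size
        self.write_pointer = start_lba  # next LBA that may be written

    def write(self, lba, n_blocks):
        """Accept only writes that land exactly on the write pointer,
        mirroring how flash blocks must be programmed sequentially."""
        if lba != self.write_pointer:
            raise ValueError("out-of-order write: host must append at the write pointer")
        if lba + n_blocks > self.start + self.size:
            raise ValueError("write past end of chunk; host must reset it first")
        self.write_pointer += n_blocks

c = Chunk(start_lba=0, size=256)
c.write(0, 64)    # OK: starts at the write pointer
c.write(64, 64)   # OK: continues sequentially
```

Pushing this rule up to the host is what lets the host FTL, rather than the drive, decide data placement, garbage collection timing and wear leveling.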

While proposals like Open-Channel SSDs would take some functions out of the SSD and move them to the host, other proposals at the SDC would put more processing power into the SSD to provide what is variously called computational storage, in-situ computing or memory-centric computing. Scott Shadley from NGD Systems discussed their approach to putting more processing power into SSDs. The rate at which data can be transferred between storage and processors is much slower than the internal data rate within storage devices, particularly for flash-based SSDs. Moving processing into the storage devices can enable a better balance between processing and memory speed and remove the communication bottleneck.
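The bandwidth argument is easy to see with some illustrative arithmetic. All the figures below are assumptions chosen for the example, not numbers from the NGD presentation: the key point is that aggregate internal flash bandwidth scales with drive count while the host link is shared.

```python
# Illustrative (assumed) figures: scanning a dataset over the shared host
# link versus in-situ inside each SSD.
dataset_tb = 8
host_link_gbps = 6           # shared host-side bandwidth, GB/s (assumption)
internal_gbps_per_ssd = 3    # internal flash bandwidth per SSD (assumption)
n_ssds = 16

host_scan_s = dataset_tb * 1000 / host_link_gbps
in_situ_scan_s = dataset_tb * 1000 / (internal_gbps_per_ssd * n_ssds)
print(f"host scan:    {host_scan_s:.0f} s")
print(f"in-situ scan: {in_situ_scan_s:.0f} s")
```

Under these assumptions the in-situ scan is several times faster, and the gap widens as more drives are added, since each drive brings its own internal bandwidth and compute.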

Image search time for processor-only and NGD SSDs (image from SDC presentation)

Working with Microsoft Research, NGD Systems showed that with up to 16 SSDs containing 64-bit in-situ ARM-based processors, image recognition throughput was increased enormously compared to host-only processing, as shown in the figure above.

There was also a talk by James Borden and Timothy Walker from Seagate on creating HDD parallelism using multiple-actuator HDDs. They pointed out that with the current HDD design, even as energy-assisted magnetic recording increases HDD capacity, IOPS will not grow with capacity. This can cause problems for these large-capacity HDDs: for instance, rebuilding a failed 40 TB HDD could take an excessively long time unless the data rate for writing and reading can be increased. This is why Seagate has been pursuing dual-actuator HDDs. The slide below shows projected HDD storage capacity increases over the next seven years.

Seagate HDD capacity projections (image from SDC presentation)

These new drives would have two or four actuators that could run in parallel, addressing different disk surfaces. Each actuator would have its own head electronics, permitting it to write or read information independently of the others. This would provide a way to increase the data rate to and from the HDD and thus increase drive IOPS even with higher-capacity HDDs.
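The rebuild-time concern, and the benefit of parallel actuators, can be shown with simple arithmetic. The 40 TB capacity comes from the talk; the 250 MB/s sustained transfer rate is an assumed figure for illustration:

```python
def rebuild_hours(capacity_tb, mb_per_s, actuators=1):
    """Hours to stream a full drive, assuming throughput scales
    linearly with the number of independent actuators."""
    seconds = capacity_tb * 1e12 / (mb_per_s * 1e6 * actuators)
    return seconds / 3600

# Assumed sustained rate of 250 MB/s for a hypothetical 40 TB drive.
print(f"1 actuator:  {rebuild_hours(40, 250):.1f} h")
print(f"2 actuators: {rebuild_hours(40, 250, 2):.1f} h")
```

Even in this best case of pure sequential streaming, a single-actuator rebuild takes nearly two days; doubling the actuators halves it, which is exactly the scaling argument Seagate made.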

In some off-the-cuff discussions at the SDC there was even mention of creating HDDs with NVMe interfaces. There are already NVMe HDD storage enclosures (recently introduced by Western Digital). Moving HDDs themselves to NVMe would be interesting and could create a unified interface for addressing SSDs and HDDs together.

Microsoft had a keynote talk that discussed cold storage. They showed an effort to create a storage tier (Pelican) between capacity HDDs and magnetic tape using arrays of specially made “archival” 3.5” HDDs that would be powered off when not in use (MAID returns!).

Microsoft Pelican cold storage tier (image from SDC presentation)

These archival HDDs would be optimized for low cost and cold storage. They might use shingled magnetic recording, which gives higher recording density but requires an extra rewrite step when overwriting previously written data. Such drives are currently available with storage capacities up to 14 TB.

Storage innovations continue, with host-controlled and compute-enabled SSDs, multi-actuator HDDs, machine-learning-enabled NVMe-oF, and cloud management initiatives such as CDMI 2.0 and Swordfish.
