One inescapable aspect of modern life is that humans are creating data constantly… and a lot of it. Photos, videos, spreadsheets, podcasts, documents, financial transaction records–nearly every action of modern life either depends on stored data, or creates new data that needs to be stored.
That’s where people like Andy Walls, an IBM Fellow and the CTO for FlashSystem and Distributed Storage, are driving change. And from Mr. Walls’ point of view, data storage is facing a new frontier.
“Moore’s Law… is the principle that says every couple of years, semiconductors will double in capacity. So in two years you would be able to have processors that take half the area, and presumably at the same cost. That’s become very challenging as we’re down into the five-nanometer and seven-nanometer geometries. It’s very hard now. Our processors continue to scale, but it’s hard to do so at the same cost,” Mr. Walls says.
What makes this new challenge so formidable is that scaling data storage is more important now than ever. In 2018, one study projected that global data production would nearly quadruple in five years, from 33 to 120 zettabytes. And this boom in data production shows no signs of slowing down, let alone stopping.
Approximately 328.77 million terabytes of data are created every day. To put that into perspective: if each terabyte were a single person, we would be generating nearly the entire population of the United States every single day.
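That daily figure follows directly from the projected annual total. A quick back-of-the-envelope check (assuming the 120-zettabyte-per-year projection and a 365-day year):

```python
# Sanity-check the "328.77 million terabytes per day" figure,
# assuming 120 zettabytes of data produced per year.
ZB_PER_YEAR = 120
TB_PER_ZB = 1_000_000_000  # 1 zettabyte = 10**21 bytes = 10**9 terabytes

tb_per_day = ZB_PER_YEAR * TB_PER_ZB / 365
print(f"{tb_per_day / 1e6:.2f} million terabytes per day")  # ≈ 328.77
```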
So how is IBM pushing through this barrier to progress? By rethinking the way storage operates entirely.
That’s where IBM FlashSystem with FlashCore technology is engineered to lead the way. And the way Mr. Walls sees the future of data, efficiency is the core challenge. The reasons efficiency matters are myriad, but among them are system durability, energy efficiency, cost reduction, and maximizing capacity in the smallest footprint, all of which are critical factors as enterprises continue to scale their data storage.
“It’s really a computational storage device,” Mr. Walls says of IBM’s FlashCore Module. Instead of a controller that does all of the data processing and an array of drives that are for storage only, IBM FlashSystem utilizes a technology known as computational storage. “In that FlashCore module, we offload things from the processor… that’s what makes the FCM unique,” says Mr. Walls.
In simple terms, this means the storage drives themselves handle some of the processing that typically takes place in the central processing unit, or CPU, of a data array. This distributed workload is designed to improve not only latency but also efficiency. “The FlashCore module is efficient in that it takes data stored by the storage controllers, compresses it, and stores less data, and it’s efficient in its algorithms to… be as effective as possible.”
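The inline-compression idea Mr. Walls describes can be sketched in a few lines: compress each block before it is written, so the media stores fewer bytes than the host logically wrote. This is purely illustrative; the FlashCore Module uses dedicated hardware compression, not Python's zlib, and the record format below is made up.

```python
import zlib

def write_compressed(block: bytes) -> bytes:
    """Compress a block before writing it to the media,
    so fewer physical bytes are stored than were logically written."""
    return zlib.compress(block, level=6)

# Hypothetical, highly repetitive records (real-world data varies widely
# in how compressible it is).
data = b"patient_id,visit_date,notes\n" * 1000
stored = write_compressed(data)

print(f"logical: {len(data)} bytes, stored: {len(stored)} bytes")
assert zlib.decompress(stored) == data  # reads still recover the original
```

The compressed size depends entirely on the data's redundancy, which is why real systems report compression ratios as workload-dependent estimates rather than guarantees.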
Taken together, this means that enterprises can simply scale their data storage, whether that’s healthcare records, geothermal imaging, or social cooking videos, and be ready for whatever the future of data throws at them.