In the early days of the company, one of our biggest debates was whether the core of our architecture should be a POSIX-compliant data tier or a modern key-value store. Back then (and for that matter, even today!) the vast majority of workflows depended on a legacy POSIX interface to storage.
What We Think
Previously, I presented the details of our underlying RatioPerfect™ architecture and how it allows us to deliver a platform for your data that is free of I/O bottlenecks and offers a true fail-in-place model. Today, I’d like to talk about our software architecture, in particular how it allows for an extensible data path.
We have previously described our vision around Zero-Touch Infrastructure™, the first of two key architectural components that enable us to deliver on the promise of a True Cloud for Local Data. In this article, I will expand on the second key architectural component: how our on-premises appliances are able to be a true fail-in-place platform.
We coined the term Zero-Touch Infrastructure™ in the earliest days of the company. Looking back at my notes, the earliest mention of this term was in an internal blog I wrote on December 4, 2013, less than 40 days after we got started.
Over the past couple of years, customer after customer has told us what they really like about public cloud infrastructure and how much they want its characteristics within their own datacenters.
Google ‘The Data Center is Dead’ and you’ll find no shortage of pundits predicting just that. This guy, for example, first started predicting the death of enterprise data centers back in 2011. But is that really true? Are data centers going the way of the buggy whip?
Everywhere you turn there are articles and blog posts about Software Defined everything: networks, data centers, WANs and, more recently, storage. What is Software Defined Storage (SDS) and, more importantly, why do you care?
As anyone who knows me knows, I hate traffic. Which is why the Waze app caught my eye. The moving map is amazingly accurate regarding traffic congestion, accidents, and police locations.
Amazon introduced AWS Lambda last year at re:Invent. I believe Lambda is a game-changer, but it doesn’t go far enough. Let’s begin by reviewing what AWS Lambda is …
Take a look back at information storage, from its humble beginnings to the heights of today's advanced computing technology, and on to the future.
Just as a fish probably rarely thinks about water, we who work in technology rarely question the actual purpose of all this technology.
A few years ago I read a Google-sponsored study about the B2B sales process. What jumped out was a stat that the typical B2B buyer is more than halfway (57 percent) through their buying journey before they engage with vendors.
Remember this viral video from 2007, (back when viral videos were still novel)? At the time, it was seen as a hilarious send-up of the excesses of tech in general and Silicon Valley in particular.
Have you noticed how the approach to failure in computing has changed radically over the past few years? Good thing it has!
There is a saying that nothing beats the bandwidth of a FedEx truck. To this point, in 2007 Jonathan Schwartz (then CEO of Sun) claimed that moving a petabyte from San Francisco to Hong Kong would be faster via sailboat than via a network link.
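A quick back-of-the-envelope sketch makes the point. The numbers below are our own illustrative assumptions (a dedicated 1 Gbps link and roughly 30 days under sail), not figures from Schwartz's original claim:

```python
# Back-of-the-envelope: moving 1 PB over a network vs. sailing it across the Pacific.
# Assumptions (illustrative only, not from the original claim):
#   - a dedicated 1 Gbps trans-Pacific link, fully utilized
#   - roughly 30 days for a sailboat from San Francisco to Hong Kong

PETABYTE_BITS = 1e15 * 8      # 1 PB expressed in bits
LINK_BPS = 1e9                # assumed 1 Gbps link
SAIL_DAYS = 30                # assumed sailing time
SECONDS_PER_DAY = 86_400

network_days = PETABYTE_BITS / LINK_BPS / SECONDS_PER_DAY
print(f"Network transfer: ~{network_days:.0f} days")  # ~93 days
print(f"Sailboat:         ~{SAIL_DAYS} days")
```

Under those assumptions, the sailboat wins by roughly a factor of three, which is the heart of the "FedEx truck bandwidth" argument.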
Trends are hard to spot. Sometimes it helps to step back and take a look from a distance at where we’ve been to get a better view of what is coming.
In a recent post, I mentioned that VCs have been pumping more than a billion USD a year into the storage market for the past few years. That makes perfect sense to me. Almost nothing has changed for storage architectures in the past three decades.
What is the number one issue customers have with storage pricing? Here’s a hint: It is not the overall cost.
When you think of ‘cool’ tech market segments, which ones come to mind? Cloud, of course. Big data. The Internet of Things. But storage? For decades, storage has been a necessary, massive, but boring market.
What We Read
Perspectives on hybrid cloud for large, unstructured data that we share with Amazon, plus some differences in opinion informed by our customers.
GitLab's decision to move from the public cloud was covered well in their blog post "How We Knew It Was Time to Leave the Cloud." We understand, and much of their actual experience was consistent with what we predicted when we started our company.
In the spirit of reaching out beyond our own posts, we held a Crowdchat featuring industry veterans from leading analyst firms, service providers, and practitioners as they talked through what's behind #LocalData that can't or won't move to the public cloud.
The subtitle of this article summarizes the concept well: "For those with large troves, the cloud may not be ideal."
#GartnerSYM - Technically not "what we read" but rather "where we went!"
I attended Gartner Symposium the week after our company emerged from stealth. We spent three years focused on R&D and servicing our pre-launch customers, so we wanted to ensure that our focused mission was still resonating with broader discussions happening at the CIO and senior IT executive levels. The good news is that we believe our timing is even better now than it was three years ago when we started!
Few companies have the capabilities or talent to build their own at-scale cloud infrastructure. Dropbox talks about their journey from the public cloud to their own infrastructure and how ridiculously difficult it was.
The world is moving from workflow-first applications to data-first applications.
Everyone is talking Petabytes, Exabytes and Zettabytes. See what we've been reading this week in storage.
Take a look at the news that we've been reading recently.
Are you ready to declare VR another leg of the 3rd platform movement?
Have you ever stopped to consider the hidden cost of software-defined storage products?
What type of issues do you get when you deploy open source software to tens of thousands of machines?
Micron and Intel recently introduced an exciting new memory technology that could alter the separation between storage and compute. Are we seeing a future that puts the fast side of storage even closer to compute?
Earlier we wrote about "Storage is Cool" and how new use cases are creating new architectures.
In a previously linked article, we shared the view that flash is not taking over the world. New disk technologies like SMR will continue to increase densities and lower the price point for capacity storage.
The Register reports that North America is down to its last few thousand IPv4 addresses.
Imagine having a fully functional, Lego-sized data center on your desk. Sound pretty cool?
Computerworld's Lucas Mearian reports that hard drives will remain the dominant mass storage device in laptops and desktops for years to come.
The human species may face extinction someday, but all of our data could live on in DNA storage. Scientists say all the world's data can fit on a DNA hard drive the size of a teaspoon.
Nantero came out of stealth this week and announced it has been developing a new form of non-volatile RAM (NRAM) based on carbon nanotubes (CNTs).
The Cloud is facing resistance or failing in many organizations. John Mathon dissects the problems and suggests 5 steps to start using the Cloud today.
Backblaze put hard drives to the ultimate test and, lucky you, they're sharing their latest reliability stats.
Inside Igneous
Is RAID dead? A quick Google search indicates it might be, at least for Big Storage. Why?
You can’t be in tech today and not have mixed feelings about patents. In theory, patents allow the ‘little person’ to benefit from their ‘intellectual property’ and invention.