Pure CEO Charlie Giancarlo on data management, all-flash and tariffs

At Pure Storage’s Accelerate 2025 event this week in Las Vegas, the company launched its Enterprise Data Cloud, which knits together a common operating system across its storage products, plus monitoring and management capabilities from its Fusion control plane and as-a-service procurement options.

We caught up with CEO Charles Giancarlo to ask him about Pure’s foray into data management, and how it can do this where other storage suppliers might not.

We also asked about Pure’s much-vaunted promise to drive spinning disk hard drives out of the datacentre, a quest which has received a boost from proof-of-concept work with hyperscaler Meta on its Direct Flash Modules.

We also asked Giancarlo about the company’s exposure to tariff uncertainty, and found a company largely untouched, except perhaps for having to constantly plan for ever-changing scenarios.

What’s different about Pure’s Enterprise Data Cloud? How does it differentiate you from other storage suppliers?

What’s really different is that we now have this intelligent control plane, which has never existed in data storage before.

If you think of the way a computer is designed, most computers have storage inside, right? And servers today have storage inside.

The concept of an array came about in the mid-90s, as datasets started getting really large. They couldn’t fit in the server anymore, so storage became a separate device, and sometimes they’d want multiple servers to use the data.

So, they could connect it with a SAN, or a network of some type, and you could have a lot of data, and have computers run off it. But the concept was still that the data belonged to a server or a set of servers, still operating like a data store for a set of servers.

And that’s the way it’s been for the past 30 years. Now, the cloud has operated differently. They said, “OK, we’re going to need a lot of storage for a lot of companies and a lot of different applications.” So, they developed it as a horizontal layer.

Now, they might have multiple horizontal layers, everything from archive to high-performance, and maybe one or two in-between. But it’s not dedicated to any set of servers, not dedicated to any use case or any application. They have software that will allow a company to define how much storage they need and where they need it, right?

That’s not the way it’s done in the enterprise, but let me step back a minute.

It’s not the way you and I do it anymore. You and I, at one time, had an external hard drive. But if I wasn’t at home, I didn’t have access to that information. Also, if it filled up, I’d have to buy a new one and then I’d have to move all the data from the old one to the new one unless I wanted to have stacks of these things, which you never do. So, it’s kind of a pain.

And also, if I wanted to share information, I’d have to attach it to an email or something like that. It wasn’t a point and click to share it with somebody. So, the benefits of your data being part of, quote, a cloud is so much higher than it being part of a dedicated system that’s tied to an application stack.

Why hasn’t anyone done this before? Why hasn’t a storage supplier provided this kind of visibility into the data across their systems?

The fact of the matter is people have tried. We talk about storage, but of course there’s high performance, low price, there’s block, file and object storage. There are large systems and there are small systems. And all of those in the past, because of the limitations of hard disk, were designed with different software, or, because maybe the portfolio was assembled by acquisition, they have different software.

It’s a lot harder to do this if you have different software for different systems.

We have developed everything on Purity. And Purity in our first product was a block system, and then we added file and object. But even though it was a block system, we don’t view everything as a block. At the core of Purity is something called a key-value store, which is a very modern way of having very scalable metadata.

And so, whether it’s block, file or object, a large file, a small file, everything is just a key value that is done as a lookup. My point is, having a unified data plane made it easier for us.
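The idea of addressing blocks, files and objects through a single metadata lookup can be illustrated with a toy key-value store. This is a hypothetical sketch of the general technique, not Purity’s actual internals:

```python
# Toy illustration of a unified key-value metadata layer: blocks,
# files and objects all resolve through the same lookup path.
# Hypothetical sketch, not Pure's actual implementation.

class KVStore:
    def __init__(self):
        self._index = {}  # key -> physical location of the data

    def put(self, key, location):
        self._index[key] = location

    def get(self, key):
        return self._index.get(key)

store = KVStore()
# Block, file and object addresses are just differently shaped keys;
# the lookup machinery underneath is identical.
store.put(("block", "vol1", 42), "flash:0x1A2B")
store.put(("file", "/exports/home/report.txt", 0), "flash:0x3C4D")
store.put(("object", "bucket1", "photos/cat.jpg"), "flash:0x5E6F")

print(store.get(("object", "bucket1", "photos/cat.jpg")))  # flash:0x5E6F
```

The point of the sketch is that nothing in the lookup path cares whether the key describes a block, a file or an object, which is what makes a single data plane across protocols plausible.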

And the fact that we’re on all-flash made it easier for us to be able to deliver for the first time a virtualised cloud of storage rather than individual arrays.

There are things called clusters that most of the vendors have, but a cluster is a few arrays. Not only do all of the arrays in our system on a global basis appear as part of the data cloud, but then you can create a set of rules around how data sets get managed.

You create a data storage class that represents your company’s policies, procedures and compliance rules around replication, around backup, around recovery times, and all of that. And then that becomes a data storage class. And every new application that needs to use data that fits that class, you just write to its API, and bingo, you get the same set of characteristics for that data class.
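The “data storage class” concept described above, a named bundle of replication, backup and recovery rules that applications request through an API rather than configuring arrays by hand, might look something like the following. All names and fields here are illustrative assumptions, not Pure’s Fusion API:

```python
from dataclasses import dataclass

# Hypothetical sketch of a "data storage class": a named policy bundle
# an application requests via API. Field names are illustrative only.

@dataclass(frozen=True)
class StorageClass:
    name: str
    replication: str       # e.g. "sync-two-site"
    backup_interval_h: int # hours between backups
    recovery_time_min: int # target recovery time objective (RTO)

# The company's policies live in one place in the control plane,
# not manually configured on each of hundreds of arrays.
CLASSES = {
    "mission-critical": StorageClass("mission-critical", "sync-two-site", 1, 5),
    "standard": StorageClass("standard", "async-daily", 24, 60),
}

def provision(app_name: str, class_name: str) -> StorageClass:
    """An application asks for a class; the control plane applies the
    same rules everywhere, so compliance isn't per-array guesswork."""
    policy = CLASSES[class_name]
    print(f"{app_name}: provisioned under '{policy.name}' "
          f"(RTO {policy.recovery_time_min} min)")
    return policy

provision("billing-db", "mission-critical")
```

Because every application binds to a class rather than to an array, updating `CLASSES` in one place is what lets a compliance change propagate everywhere, which is the contrast Giancarlo draws with per-array manual configuration.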

If you think about it from a compliance standpoint, when arrays are separate, then the way you put in the rules for how that array operates relative to, let’s say, cyber, is done manually. Well, it’s done manually everywhere around the enterprise, which means it could be done differently in one location than another, or one application than another. And so, your rules are really just on paper.

There are only a few records of what someone actually did, and it’s not consistent. Not only that, let’s say you’ve got hundreds of arrays, which a lot of large customers have, and you decide, OK, we’re going to update our compliance policy this year – and compliance policies are always being updated. All right, now you’ve got to go to 200 arrays and change them.

With Fusion, you just change the policy, and the changes take place everywhere.

In 2023, you said hard drives would be dead by 2028. How do you think that’s going? I would estimate at the moment that one win with a hyperscaler – Meta – for a relatively limited use case isn’t a great deal of progress. What’s your assessment?

I’ll quibble with you in terms of a limited use case. We’re being certified up and down their entire stack. They have multiple tiers of storage. So, we’re being certified for all of the tiers.

Now, yes, they will probably start at their most expensive and highest performance tier and go down. But we had to prove to them on a TCO basis, before they would even start to use us, that we were equal to their lowest non-archive, lowest online tier of hard disk storage. And one of the benefits that we bring to them, which is really unique, is that the software they will be using from us is exactly the same for every tier.

Believe it or not, that’s not the way they operate today. Every disk, every type of disk, every vendor of disk, every type of SSD, every vendor of SSD is so different they have to change their kernel to be able to utilise it.

We don’t operate in kernel space. We operate in user space, so they will not have to change their kernels.

And the only difference will be whether they are using a 300TB [terabyte] drive, a 75TB drive, or eventually a 600TB drive. They’ll have different price performance levels for each one of those. That’s a huge benefit for them.

What it means is, without any change in their software, they’ll be able to leverage us, and it will only depend on relative pricing.

What is Pure’s exposure to any uncertainty around tariffs? And if there is any exposure, what are you doing about it?

Of course, we’re exposed to all of that, because we never know what the decision is going to be tomorrow, what pronouncement is going to be made tomorrow. It could be tariffs on everything going through South Dakota. Who knows? To say that it’s uncertain is, I think, somewhat generous.

Not only is it uncertain, but if you look behind the pronouncements for the detail behind them: oh, my God, no detail. So, you announce a 150% tariff on some producer or some country, and then you say, OK, well, what does that apply to? Does it apply to the value add that was placed there? Because remember, electronics in particular touches a lot of countries before it ends up as a product. Is it the value add that was placed there, or is it the entire thing? Well, it’s not in the detail.

But that being said, what all of the repeated updates on the tariffs have done is create a huge amount of activity around planning, but no action being taken, because you don’t want to take action. The thing we can’t do, unlike some people in the government, is go back to our suppliers after making a decision and say, “Oh, just kidding”. We actually want to go back to the way it was before.

Our current view is that, for our international customers, it doesn’t matter, and it’s going to have minimal effect on our US customers.

We can manufacture a product without it ever touching the US. And it wouldn’t matter for export anyway.

We do our sub-assemblies largely in Vietnam. A final assembly is done in three locations right now: Juárez in Mexico; Houston in the US; and Czechia.

We can manufacture products for Europe without them ever going through the US. And we’ve got the USMCA [United States-Mexico-Canada Agreement], which allows us to import from Juárez without tariffs. And we can obviously also satisfy our Asian customers outside the US. Actually, regardless of the tariff regime that was being identified, it would only affect US customers.
