By John Gallant

NetApp’s Tom Georgens: How we got big, stayed nimble, and view storage today

Feature
Jan 19, 2012

In an exclusive interview, CEO Tom Georgens talks about NetApp's plans for virtualization, the private cloud, and big data

Those of us with a bit of institutional memory recall a brash upstart named Network Appliance that burst onto the storage scene to challenge EMC — itself once a brash newcomer — and other storage royalty like IBM. But that was 20 years ago, as difficult as that seems to believe, and the company, now named NetApp, is a $5 billion-plus storage leader in its own right.

In this installment of the IDG Enterprise CEO Interview Series, CEO Tom Georgens talked to IDGE Chief Content Officer John Gallant about what’s driven NetApp’s success and shared his views on key technology issues like big data and deduplication. Georgens also explained why NetApp’s single-architecture approach gives the company a big development and agility advantage over EMC, and why “server vendors” like Dell and IBM are falling behind in the storage arms race. He also talked about NetApp’s keen focus on the private cloud and how partnerships with companies like Microsoft and Cisco are helping NetApp deliver quickly on that emerging model of computing.

John Gallant: What’s the NetApp mission, and what defines the company?

Tom Georgens: At our core, we’re a storage and data management company. NetApp has been around roughly 20 years, and our history has been about innovation, about enabling people to use their information assets more effectively and more cost-effectively than they previously could or can now with alternative approaches. NetApp has been an innovator from day one. We effectively invented the NAS business back in the early days of the company. With the demise of the dot-com bubble, NetApp lost a substantial amount of its customer base and needed to make a very, very important transition into the enterprise space. We innovated there around storage efficiency, around integration with business applications. Then we went through the most recent recession, and we came out of that as the innovation leader in storage for virtualized environments.

So our core area is storage and data management. That’s been our focus from day one, and we intend to be experts in that, although clearly as we get bigger we’ll expand our footprint. Innovation is in our culture, innovation is key to what we do, and the message to the team is that ongoing innovations that are relevant to the customer are key to our ongoing survival as a company.

Our technology enables people [not only to] store stuff, but make better decisions, bring products to market faster, lower costs, drive velocity. We’re not in the disk drive business, we’re in the storage management business.

Gallant: A recent analyst report said you are on “an unstoppable growth vector.” What’s driving that, and where in the company are you seeing the greatest growth?

Georgens: Words like “unstoppable” have a certain entitlement message, and I clearly wouldn’t want to imply that. We need to earn our business and our customer trust every day. But the business, even from the early days, has outgrown the market by several [times] for 20 years. Our last fiscal year which ended in April — we run an unusual fiscal year — was 30 percent year-over-year growth for us. It was our biggest market share-gain year in our history. Clearly very, very solid, and almost all of that growth [came] organically. Since then, the overall macro environment has not been quite as robust as it was last year, but our growth for the first six months this year in our reported quarters is 20 percent-plus. Admittedly, there was some inorganic activity in that, but nonetheless still a market share gainer.

The key to that is bridging the gap between available technologies, software innovation, and customer need. NetApp recognized early on that this virtualization trend was not only relevant for servers, it had big implications for storage as well. Our innovation has enabled more virtualization, which is the evolution to shared infrastructure or the private cloud or whatever terminology you want to use. That’s been a big part of our dialogue with customers: How do they redesign their IT infrastructure, take advantage of this new technology, and drive cost efficiency, flexibility, and velocity?

Gallant: Within your revenue base, what product line grew the fastest over the past year?

Georgens: Our technology base is driven around a single core technology. We have a bunch of other smaller products, but Data Ontap is our core operating system, and we apply that to different markets and to different applications. But most of [the growth] came in our core storage and data management business. You know, we don’t necessarily line item our software separately, but software is our differentiator, and it’s how we add value for customers. The growth of the business is the ability to continue to differentiate with our software.

Gallant: How is the storage market changing and what are the key forces that are reshaping it right now?

Georgens: The storage market — unlike some other markets like networking that have an 80 percent market-share player or a very, very high market-share player — is still very, very fragmented. There are independent players that are storage and data management experts like ourselves. The server vendors are players as well with their own portfolio of products. You’ve certainly followed their acquisition trends of recent years. If there was an overall trend over the last few years, it’s that the server vendors’ share is actually declining and the share of the independent, best-of-breed players is increasing, not only ourselves but competitors that are similar to us. The demand for storage is clearly going up, but more important, it’s a market where there are still opportunities for innovation and differentiation. These are not businesses that have been commoditized, that lend themselves to being subsumed by the server vendor’s selling motion. This is still a very, very differentiated market, particularly from the midlevel to the high end. The best-performing companies still drive reasonable margins and are gaining share. Storage is not only a growing market, but it’s still a market in which there are opportunities for differentiation. It’s not commoditized.

Gallant: In addition to the explosion in the amount of data that has to be managed, what are the other key things you’re seeing among customers that are driving growth in the market?

Georgens: It’s more than just the amount of data, it’s the desire to do something useful with the data. A lot of industries are regulated, so there are compliance things related to their data, whether it be data retention or data longevity, those types of things. For certain markets like health care and financial services, increasing regulation is driving data. But I’d say that across the board we’re seeing several things going on, one of which is the rise of multimedia, which is generating new data types that were not material 5 or 10 years ago and that are going to become more significant as time goes on.

The other one is the desire to use all of this data that we have in an effective way. How do we make better product decisions? How do we make better choices? How do we accelerate our business? To cite an example, we have a client in the credit card business, and when you swipe your credit card, they run through their databases what is your credit limit, what is your payment history, have you ever bought from this vendor before? Those kinds of things. What they would like to do is go out and glean all the data that’s knowable about you in the public domain, whether it be unpaid parking tickets, did you just buy a house, did you just change jobs, all of these types of things. At that point of time when you swipe your card, they want to do a risk assessment on you that transcends the information that’s currently in their database, rather than waiting 90 days for you to default. What we’re seeing is that virtually every industry has got a component of that. People are trying to take the data that they have or the data that’s available in the external world and use that to their advantage, whether it be buying behavior decisions, technology decisions, or market evolution decisions. They want to use data to drive much, much better decision making. For us, it’s not simply that people are writing more emails and they have to store them someplace, people are trying to get greater utilization of this information.

Gallant: Let’s talk about the competitive situation against three of the companies that come up most often in the discussion about storage. First, how does your approach to solving these problems differ from EMC’s?

Georgens: I think it’s different on a few fronts. NetApp is one of the major market share players in this industry, [even though] NetApp is far and away the youngest. [That’s because] one of the things that NetApp has done from early on is instead of having a set of point products that solve a whole bunch of point issues, NetApp has built more generic technology that can be extended. For instance, without getting too technical on the storage side, there are high-end products, there are low-end products, there’s SAN technology for accessing storage, there’s NAS technology for accessing storage, there’s backup, there’s archiving. Many of our competitors have separate products with separate operating systems and separate hardware and separate development efforts dedicated to each one of those.

NetApp has developed a single operating system, Data Ontap, that can do SAN and NAS, high-end and low-end, archiving, disk-to-disk backup. What that means is that when we introduce a new feature, it’s available on all of those technologies at the same time and it works the same way at the same time and it has the same set of tools to manage it. Our competition has to develop the products multiple times, they’ve got different manageability techniques, functionality doesn’t necessarily work the same on all the different products. Overall, that’s given us not only simplicity from a customer perspective — because it’s one set of tools, one set of people — but it’s given us tremendous development leverage as a company. If you ask how NetApp came from a standing start 20 years ago to No. 2 in market share in this industry, passing HP and IBM last year in storage, at least in the SAN and NAS cycle as we measure it, it is because we’ve had this tremendous development leverage. That is a fundamentally different approach than the other guys. EMC is bigger than us, they’ve got a bigger R&D budget, and they might be able to sustain a number of these point technologies for a longer period of time. But the smaller companies that have neither this unified approach nor the R&D budget will continue to lag.

Frankly, if you look at the traditional architectures in this space, almost all the innovation has been done by startups. You see almost no real innovation being done by the server guys anymore. Even our largest competitor, where they’re gaining share [is with] products they acquired. The innovation rate has slowed on their core technology. Simply put, it’s hard to advance five or six separate platforms at a high rate so that they’ll all be compelling and competitive in their markets. NetApp has gotten tremendous development leverage from this approach. Certainly there are compromises to it, but it’s served us well to this point in time, and it’s enabled us to out-innovate the market for 20 years.

Gallant: When you look specifically at IBM and how you’re approaching certain customer needs and challenges, how do you differ?

Georgens: I put IBM more in the category of the server vendors. Let me just take a step back a second. We compete against EMC. EMC is going to compete similarly to the way we would on the basis of technology value and what it can bring to the business. EMC’s notion is that their purpose-built products will be better than any generic products from NetApp. That’s sort of the argument that they’d make and certainly a thing we’ve been competing against for 20 years successfully. The server guys will compete on a different vector. The server guys will compete less on technological innovation and they will compete primarily on integration: a one-stop shop for server, networking, storage, support services, all from one place. There’s a set of customers for which that has appeal. But the trade-off they’re forcing is that customers know that what they gain in integration, they forfeit in innovation. The server guys are not particularly innovative from a storage technology perspective. While I understand the idea of one-stop shopping and its appeal, if you actually look at the market share charts, it’s clear that that’s not working for them from a storage perspective. If you look at the recent acquisition activity by the server companies in the storage space, I think that would be an indication of the need to kind of reload from a technology perspective because the existing technologies are not gaining them any share.

Gallant: Would you consider HP in that same category as IBM?

Georgens: Sure, HP, IBM, Dell, I would put them all in the same category. Just look at the market share numbers for them over an extended period of time. Storage is a very, very important part of the IT budget. It’s a big part of the spend. Therefore, innovation that not only allows [IT shops] to control spending, but also allows them to create opportunities through better use of information, has a material business impact. It’s worth separating the storage decision from the server decision to realize those benefits. If this business ever commoditizes or those facts are not true, then the appeal of a single one-point shop would make sense. But as long as there’s compelling value at a business level to buy storage separately or make that choice separately, then I think companies like us can prosper.

Gallant: I want to talk about cloud and what it means for NetApp. What are you hearing from IT leaders about the move to cloud?

Georgens: It’s a serious discussion at every customer, and opinions vary about it. I can think of two large banking accounts that I’ve spoken to just in the last two weeks, one of which says, “How can one of these [cloud] firms have more scale than me, and how can they be cheaper?” Another one says on-premise computing is a thing of the past and my job is to manage the transition. Those are two completely different opinions.

I think there are a few things that are in play. Number one is — and I use this message internally to NetApp, so I’ll speak from NetApp’s perspective — that we’ve become so dependent upon our systems that with every major transformation effort in the company, IT is the long pole in the tent and is the gating item to get to that outcome. That’s not to say that we’re faster or slower than anybody else, it’s to say that if we could go faster, it could have a big impact on our business. The other one is that virtualization is forcing people to rethink how they do their data centers. I think a lot of people faced with a choice now of basically redesigning the data center and the associated investment are saying maybe this is a time I should look outside. When they look outside, cloud means a whole bunch of things. It could mean somebody selling infrastructure that they can run their existing applications on, so it’s primarily an infrastructure sale. Or it could be buying an entire application from somebody, like a Salesforce.com or a Microsoft Office 365. I see a lot of tire-kicking and a fair amount of experimentation. But the fundamental question is “can they give me the service levels that I’m looking for?” Another classic comment is “I’ve been burned by outsourcing before, so how is this any different?” There is also the question of security. Some of these industries, particularly financial services, are highly regulated, so their options may be limited. But we’ve certainly seen these high-profile hacking scenarios, including hacking of companies that are in the security business, so protecting yourself is difficult.

But if the economics are compelling and the flexibility is compelling, then people are going to continue to look at this. If cloud lives up to expectations in terms of service level agreements, in terms of sustainability of the cost model, and reasonable security, then I think people are going to move more and more in that direction. Certainly there’s pressure to move in that direction. The question is will these technologies live up to their expectation? Time will tell, and right now there’s a lot of tire-kicking.

Gallant: What does this mean for NetApp?

Georgens: Going back to the fact that the market is still very fragmented and there’s no overwhelmingly dominant player, what it means is that the go-to-market is also fragmented. The OEMs have got a component, there are vertical segments of the market. One of the challenges that we — and everybody else in the storage industry — have is coverage. How do we get our storage to as many customers as we can that reasonably cover the market? So if you believe that entities are going to emerge, cloud or whatever, that are going to consolidate a substantial amount of storage demand in one place, then they represent enormous amounts of leverage for us as a company. The other thing about those companies, if they’re going to provide IT as a service to clients and do it better than the clients could do it themselves and be more cost effective, they’re going to be very demanding customers. They’re going to be the customers that have to wring every last bit of value out of the technology. Our technologies around storage efficiency, around ease of use, around a single architecture that can support multiple application requirements, the economic value of that is really compelling. There are going to be players in this space that are going to [build] this technology themselves. Google would fall into that category. But a lot of the other ones need our storage management technology. If they’re going to provide value to their clients, they’ll have to solve the storage problem for them, and that’s an opportunity for us to get our technology in front of a lot of vendors.

A key component, a key enabling technology, is virtualization, and we are innovating around that. I think our storage efficiency, the ability to meet business requirements with dramatically less physical storage, makes it compelling from an economic point of view, so they can offer an economic argument for their customers. The latest generation of our technology brings all this software functionality and marries it with clustering, allowing us to do this at a scale that our competition can’t. For somebody that’s going to provide a large infrastructure at scale, that’s geographically distributed, that’s cost effective, that’s a compelling value proposition to end-users, we can enable that. Our value proposition in that space is that we want to make the offerings of the cloud providers more competitive and enable them to win business with end-users. That’s how we’re approaching it.

Gallant: You have been very aggressive around private cloud. Explain your strategy there and talk about your partnership with Microsoft around the reference architecture for private cloud.

Georgens: In the near term, I think the private cloud opportunity is the bigger business opportunity for us. The public cloud has to live up to those expectations. If it does, then I think it too will become a big opportunity. Today, customers are saying, “I have all of these applications and most of them run on dedicated hardware, so I’ve got a proliferation of hardware, a proliferation of tools, a proliferation of complexity.” What the private cloud is enabling is what virtualization has done: it has allowed applications to become mobile and therefore decoupled from the infrastructure. What customers are able to do, and this is what NetApp is enabling, is instead of having an application, a server to run it, and storage to store the data, now I can build a big shared infrastructure that’s capable of running many apps at the same time, that is highly automated, highly efficient, and homogeneous. I can manage that on a capacity planning basis. If I want to bring a new application online, the first step doesn’t have to be to procure a server and a place to put it and power to power it. Now I’ve got an infrastructure that I can just add another application to. I can instantly provision new applications, my infrastructure is very efficient and very homogeneous.

That transition from the siloed model to the shared infrastructure model is one of the things that NetApp has been participating in in a significant way. We can come to the customer and say that we’ve got one architecture for storage, and it can run many different apps — it can do backup, do archiving, your primary storage. That’s unlike the competition, which has got four or five different products. Even if you buy them all from the same vendor, you’re effectively re-creating these islands of storage that you went to this shared infrastructure to get away from. Now you can build one gigantic storage pool, just like you’re building one big server pool, that can serve a big pool of applications. One of the things that we’re doing is reference designs, whether it be our work with Microsoft or whether it be our FlexPod that we’re doing with Cisco. I made the argument earlier that the primary value proposition the server companies will offer is one of integration: one-stop shop, server, networking, storage, all from the same guy. But that’s not offering customers best-of-breed solutions, and they’re forfeiting functionality if they do that. We’re saying we partner with other like-minded best-of-breed players, whether it be VMware or Microsoft or Cisco or BMC Software or SAP, and we can work together to tightly integrate our technologies and test them and do reference architectures and sizing guides and support infrastructure. We’ve got something that is every bit as integrated and every bit as tested as anything the server vendor can do, but it’s made of best-of-breed components that can add value to your enterprise. That’s the rationale of the work we’re doing with Microsoft, with Cisco, and obviously our ongoing work with VMware and a number of other players.

Gallant: One of the terms we hear about is the enterprise data explosion, which is only going to be worsened by the rapid expansion of mobile, social, and collaboration technologies. What are the two or three most significant things you’re doing to help customers cost-effectively manage that explosion in data?

Georgens: There are a couple of things. Number one is you need to lower the cost of the physical storage itself, so we have various storage efficiency technologies. We have things like deduplication, we’ve got compression, we’ve got zero-space cloning or FlexClone. That allows you to create a logical copy of the data without replicating any physical space, because not only is the data exploding, the copies of the data are exploding as well. Copies for disaster recovery, copies for backup, copies for test and development, and copies for decision support. If we can make copies of the data without replicating the physical space, then we can dramatically lower the cost of storage. So step number one is reducing the amount of physical storage you have to buy. The other thing that users will tell you is that while the procurement cost of storage is not trivial, it is actually a small number compared to the overall management of the storage. How do I back it up? How do I make it compliant? How do I secure it? So having one set of tools to manage a large storage pool, the ability to cluster it so you can basically manage many, many, many machines as if they were one [is critical]. You need to minimize the management of that storage and the administrative overhead associated with it.
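
[Ed. note: As a rough illustration of the zero-space cloning idea Georgens describes (a generic copy-on-write sketch, not FlexClone’s actual implementation), a clone can start as pure metadata that points at its parent’s blocks and consume new space only for the blocks it later overwrites.]

```python
# Generic copy-on-write cloning sketch (illustrative only; not how FlexClone
# is actually implemented). A clone initially shares every block with its
# parent and stores only the blocks that are later overwritten.
class Volume:
    def __init__(self, blocks=None, parent=None):
        self.blocks = blocks if blocks is not None else {}  # block# -> bytes
        self.parent = parent

    def clone(self):
        # Zero additional space: the clone is just a reference to this volume.
        return Volume(parent=self)

    def read(self, block_no):
        if block_no in self.blocks:
            return self.blocks[block_no]
        return self.parent.read(block_no) if self.parent else None

    def write(self, block_no, data):
        # Space is consumed only for blocks this volume actually changes.
        self.blocks[block_no] = data

base = Volume(blocks={0: b"primary copy of the data"})
copy_for_test = base.clone()             # logical copy, no data duplicated
copy_for_test.write(1, b"test changes")  # only the changed block uses space
assert copy_for_test.read(0) == b"primary copy of the data"  # still shared
```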

Gallant: We also hear a lot about big data. How real is big data today, and how are you helping customers capitalize on it?

Georgens: One of the frustrations with big data is there isn’t a clear definition of it, and a lot of times people are putting into big data everything that’s already big, making big data “big” by definition. The other thing about big data is it covers a bunch of use cases that aren’t necessarily aligned with each other.

We break big data into what we call ABC. A is for analytics, and that’s what I talked about earlier. How do people combine the information that they’ve got and information that may be available in the outside world in a way that they can make better decisions about their business? [That could be] about consumer buying behavior, risk assessment, the TSA using facial recognition to decide who gets on planes or not. The ability to take multiple, disparate data sources and bring those together to make business decisions is real. The amount of experimentation going on in that area is very, very significant as well as the conversion of them into production. It’s one thing to do a science project, it’s another thing to bet your business on it. A lot of companies are in that transition. The interest on the analytic side of big data is real. No question about it.

The B is big bandwidth. NetApp is the number one supplier in the world of storage to the federal government, and we see applications there — satellite telemetry, things like that — that are going from taking individual pictures to high-definition pictures to thermal imaging to multi-satellites for 3-D to high-definition, full-motion video. There are segments that are talking about generating a petabyte a day, which is a million gigabytes per day. These are really, really compelling requirements, and video is a big part of it. If you think about a petabyte per day, that’s a lot of gigabytes per second. Products that are designed to serve up email or files or do small database transactions aren’t optimized for that type of workload. And video is not just the military. It’s casinos, it’s retail, it’s stadiums, it’s municipalities, it’s prisons, it’s all sorts of things that we’re seeing demand for that will be a big driver, that will require a massive ingest of large amounts of data.
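
[Ed. note: A quick back-of-the-envelope calculation, not from the interview, makes the throughput implied by a petabyte per day concrete, assuming decimal units and a flat 24-hour ingest window.]

```python
# Sustained throughput implied by ingesting 1 PB per day.
# Assumes decimal units (1 PB = 1,000,000 GB) and a flat 24-hour window.
PETABYTE_IN_GB = 1_000_000
SECONDS_PER_DAY = 24 * 60 * 60  # 86,400

sustained_gb_per_sec = PETABYTE_IN_GB / SECONDS_PER_DAY
print(f"1 PB/day is about {sustained_gb_per_sec:.1f} GB/s sustained")  # ~11.6 GB/s
```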

The C in the ABC of big data is, generically, content. Every single business has something that requires a massive repository of data. It could be medical imagery with patient records, it could be insurance claims, it could be Yahoo! with emails, it could be Apple with iTunes or iCloud. There are a lot of businesses that require massive, massive, massive repositories of data. That needs to be done efficiently, they need to know what they’ve got, and they need to be able to find it if they’re looking for it.

So that’s how we see big data. A is analytics, B is big bandwidth, and C is content. The reason why I don’t like the big data name is that all three of those have a different set of requirements, all three of them have a different application case, and they’re kind of lumped into this generic term “big data.”

Gallant: What are NetApp’s big data solutions? Is that a specific combination of new and emerging technologies?

Georgens: They vary. Big bandwidth now requires a platform optimized for that, so in that particular case it’s a product. In the analytics and the content side, it’s more the overall software value proposition, because it’s not just getting massive amounts of data and sticking it someplace, it’s about the data protection, the data management, it’s about the efficiency and enabling people to interact with the data in an effective way.

Gallant: I want to quickly cover a few topics and get your take on them. Let’s start with SSD (solid-state drives). How fast is the uptake, what are the best use cases, and what are you seeing in that piece of the market?

Georgens: Well, first I’d want to separate SSD and flash [memory]. The situation in Thailand looks like it’s going to be problematic for drives in the desktop class, so that may well help the SSD business. But I’d rather go to the generic question, and that is flash itself. Flash memory is driven primarily by mobile devices. There’s a big R&D investment going into that technology. For big bulk devices like we sell, high-density storage devices, it’s not really going to be a capacity alternative to rotating media any time soon. But the performance aspect of it, the power aspect of it, and now the economics of it driven by handheld devices has put flash in the range where we’re going to see it deployed more in IT applications. We’re certainly seeing flash in servers to accelerate databases and accelerate applications, and we’ve been shipping flash in our systems to accelerate performance for almost two years now. I think flash is going to be a big deal. Whether flash is actually instantiated as a solid-state disk in a form factor of a disk drive, or whether it’s just going to be raw flash on boards plugged into our systems, time will tell. You’ll see flash proliferate in the host, you’ll see it proliferate in storage devices, you’ll see it proliferate in network-attached devices as well.

Gallant: Next up, data deduplication. You were involved in a high-profile bidding war with EMC over Data Domain some time ago. How does NetApp stack up now compared to EMC and to others in that dedupe market? Is it an area where you still think about an acquisition?

Georgens: The way backup has historically been handled is you do a full backup, then you do a bunch of incrementals, and then you do another full backup, and then a bunch of incrementals, and a full backup. Over a long period of time, you’ve backed up the same file over and over and over again. Deduplication technology basically eliminates all of that and it makes disk-to-disk backup cost effective. That’s one form of deduplication targeted at backup, and that’s what Data Domain was about. NetApp has also done deduplication. Unfortunately, it has the same name, but we focus on primary storage, that is the first copy of the data, so your first email record, the first documents that you create, the first database transactions. Deduplication for primary storage is something NetApp was first to market with, and we’re really the only company still to this day that can do that for the wide variety of primary storage applications. Data Domain was about enabling us to participate in the disk-to-disk backup market, of which deduplication was but one feature. But for deduplication for primary storage, NetApp is in fact the market leader. It’s been a big part of our growth and, in fact, in virtualized environments, the technology advantage is compelling. When NetApp is the first copy of the data, we sell a fair amount of the disk-to-disk backup behind it using our existing technologies. When NetApp is not the primary, the first copy of the data, the disk-to-disk backup there, what we call heterogeneous disk-to-disk backup, is why we were interested in Data Domain. We’ve done some partnerships in that area to enable us to compete, and that’s working well for us. Deduplication for primary: NetApp, I think, is the unquestioned leader. Deduplication for backup is a different type of technology, and clearly we want to have both over time.
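
[Ed. note: To illustrate the general mechanism Georgens describes (a minimal block-level sketch, not NetApp’s or Data Domain’s implementation), deduplication hashes fixed-size blocks and physically stores each unique block only once, so repeated backups of the same data consume almost no additional space.]

```python
import hashlib

# Minimal block-level deduplication sketch (illustrative only). Data is split
# into fixed-size blocks; each unique block is stored once, keyed by its hash.
BLOCK_SIZE = 4096
store = {}  # hash -> block bytes (the unique blocks actually kept on disk)

def write_dedup(data: bytes) -> list[str]:
    """Return the list of block hashes needed to reconstruct `data`."""
    recipe = []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)  # physical write only if block is new
        recipe.append(digest)
    return recipe

def read_dedup(recipe: list[str]) -> bytes:
    """Reassemble the original data from its block recipe."""
    return b"".join(store[d] for d in recipe)

# Backing up the same file week after week adds no new blocks to the store.
payload = b"weekly full backup contents " * 1000
first = write_dedup(payload)
second = write_dedup(payload)
assert read_dedup(second) == payload
assert len(store) == len(set(first))  # only unique blocks were stored
```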

Gallant: Next, we hear a lot of talk about data center fabrics. How real is that, and how does it change the storage landscape for you?

Georgens: How do you define data center fabrics? To be perfectly honest, that’s not terminology we use very much. If you mean a converged network, the idea of converging everything ultimately into an Ethernet framework — things like Fibre Channel over Ethernet, traditional network attached storage like we do today and networking traffic over a single wire — I think the promise of that in terms of simplification is powerful and very similar to our unified storage story. From that perspective, I think it makes sense. Since we entered the storage business on an Ethernet-attached base, through NAS, I think that anything that flows toward Ethernet as a common tool and away from other proprietary technologies plays to our strength, because that’s where our legacy comes from. The simple fact of the matter is the amount of money being invested in Ethernet as an interconnect technology far exceeds any other technology, so I believe Ethernet will win in the long run. There are questions, though, [including] the practical matter of can I segregate the traffic? What about infrastructures I’ve already invested in? This will take time because there are management tools and methodologies and performance characterization that people currently understand over Fibre Channel infrastructures that people need to become familiar with [on Ethernet]. But that’s not where we’re innovating. In other words, we’re not inventing that technology. We will use that technology and take advantage of it and integrate it within our systems as we go forward. NetApp was very early on with Fibre Channel over Ethernet, NetApp was very early on with iSCSI technology. We’d love to see that [convergence] happen, but we don’t make silicon, and we don’t make that type of low-level technology. That’s really in the domain of the network vendors.

Gallant: One more: Desktop virtualization. Are you seeing greater adoption of desktop virtualization, and is that driving storage sales?

Georgens: Well, yes and no. Do I see desktop virtualization? The answer to that is, unquestionably, yes. Initially desktop virtualization was in places where we saw big fixed PC infrastructures — trading floors, hospitals, [manufacturing] floors, places where there’s big PC infrastructure all in one place. But over the last few years, we’ve been seeing it a lot more from a mobile perspective, and financial services companies — all their users are mobile — have been big adopters. Part of it is platform independence. We’re seeing a preference or certainly a trend toward iPads and Macs and things like that, and the ability to make the applications platform independent clearly is a driver. Obviously, the security, the patch management, the data protection is also key.

As far as storage, desktop virtualization effectively takes the storage off the desktop where NetApp doesn’t participate and puts it in a data center, where NetApp does participate. So that trend is moving storage demand towards NetApp-style products. One of the technologies we have in that space, I mentioned it earlier, is this FlexClone, the zero-space cloning. When you get a new PC, it has effectively the same image as your neighbor who gets a new PC. So companies have standard images and rather than storing them 10,000 times for 10,000 employees, NetApp has the ability to clone the technology, so with only a handful of copies, we can create a logical copy for all of these other people. Our ability to cost-effectively solve the storage problem for virtual desktop is one of the areas where NetApp is not only a leader, but I think we are an enabler. The economic advantage of this technology is so compelling. We have an account that will go to 750,000 seats, and it’s doing a 50,000 seat proof of concept right now. A lot of the very, very largest virtual desktop installations in the world are actually done with NetApp storage because of these technology advantages we have.

Gallant: Can you outline your philosophy toward acquisitions? You made a couple in 2011, including Engenio, which was the external storage business of LSI Corp.

Georgens: I look at acquisitions in two phases. Number one is there needs to be affinity to what we currently do. If we’re going to do an acquisition, then it either needs to be something that our existing channels and sales force can sell more of or, by virtue of having it in our portfolio, we can sell more of our existing products. I think if there’s anything that this industry has shown, and I’m sure you can think of examples, is that buying dissimilar assets simply to build a portfolio does not create a lot of shareholder value. There have been some very high-profile combinations of very, very good assets that together have not yielded great stock outcomes. If we’re going to bring it into the NetApp domain, then it needs to be leveraged. Otherwise, why would we pay a premium? Therefore, affinity matters.

The other thing goes back to my original commentary about one of the things that’s driven a lot of velocity for us. In the workloads that we’re seeking to serve, we can do that with one operating system. What I don’t want to do is bring in a technology that’s got substantial overlap to what I do, even if it has some incremental value, because if I’ve got two platforms that are serving the same customer set, then it’s inevitable that I’m going to have to replicate functionality on both of those platforms. That is very, very dilutive, and that’s counter to how we got here in the first place. When people ask why was NetApp not interested in 3PAR and Compellent and some of these other properties that were for sale a while back, the answer is that if we brought them in, the new team would have a desire to replicate what we already have and NetApp would try and replicate whatever new functionality was coming in. I’d be diluting my R&D. [Ed. note: 3PAR was acquired by HP; Compellent was bought up by Dell.]

In the Engenio case, the rationale was that we were not bringing that product into our core business applications workflow. We wanted that to be focused on big bandwidth, high-performance computing, where we don’t necessarily sell our existing products, where the needs for our feature set around storage efficiency and application integration are not compelling. We can focus them on new workloads that are particularly targeted and exploitable by the technology we acquire without the burden of having to replicate all the functionality that NetApp has already developed. As long as we can keep the workloads separate, then we can keep the development activity separate. The other thing is that NetApp has had good growth, so I don’t see acquisitions as [needing] to address lack of growth in our core business. We still feel good about our core business. But as we get bigger and bigger, supporting our growth aspirations is going to require a bigger footprint for NetApp, so we’re going to look for adjacencies, things with affinity that don’t overlap with what we’re currently doing.

Gallant: Oracle’s strategy seems to be one of trying to re-create and own all the pieces of the so-called computing stack, with hardware, operating systems, applications, etc. Do you expect Oracle to get more aggressive in storage? Would you expect them to acquire a storage company?

Georgens: Obviously, I can’t speak for them. The generic case that they’re doing of integrated solutions from software all the way down to storage, interestingly enough, is for all intents and purposes orthogonal to the virtualization story of building a big, broad horizontal infrastructure that can run many apps at the same time. As I see that play out, the velocity of the economics is too compelling. That will be the primary way of deploying applications. There is a possibility there will be certain applications that are so important, so mission critical, so performance centric that they will necessitate a standard top-to-bottom infrastructure. We have accounts that run 10,000 databases — will they be deploying that vertical stack for all 10,000 databases? Probably not. Will they deploy for some of them? Possibly. And certainly they’re looking at that technology now. As far as Oracle more broadly, if their aspiration is to be IBM, then clearly I think they need more elements of the portfolio. If their aspiration is to basically dominate all components that run the Oracle database, then they can pick a different strategy. So I’ll let Mark Hurd speak for them. But Oracle is an aggressive company and I expect them to continue to do aggressive things. They view themselves as a consolidator. Whether they invest in storage or not, I don’t know.

Gallant: What should people watch for from NetApp in 2012?

Georgens: Continued innovation. I think you’ll see the evolution of our Ontap operating system, where we marry what I think the industry would recognize as the largest portfolio of software management products with clustering. That will deliver much greater opportunities for scale and performance and nonstop operation. I think that is a capability that does not exist in this industry and I think this is a meaningful step forward in innovation. You’ll start to see NetApp promote this technology as our preferred solution for a greater and greater set of the workloads that we provide. We will continue developing our partner alliances so that we can lower the barriers to acceptance of our technology by new accounts — the work that you asked about earlier with Microsoft and with FlexPod. Basically, we’ll have more bundled, best-of-breed solutions that are easier for customers to consume. The NetApp story will always be about innovation, it’ll be about partnerships, and hopefully ongoing growth and prosperity.

Gallant: If there was one tech trend or one issue that you think a CIO or a senior IT executive should learn more about, what’s that issue for 2012?

Georgens: I expect 2012 will still be a constrained environment. There are a lot of technologies that NetApp has that our competition doesn’t that could really help customers from a budget perspective. The last time we went through a downturn, the great recession of 2008, 2009, 2010, through that period NetApp didn’t have any down years. In fact, we actually had our best new customer acquisition years in that time period. When things are good, people keep doing what they’re doing, they just do more of it. When times are bad and budgets are constrained, CEOs don’t say, “I have to stop investing in R&D and I have to stop investing in marketing because I need to store more data because storage is exploding.” What CEOs say to the CIO is, “It’s a shame about that storage product, but here’s your budget.” That forces people to think about different ways of doing things. I think if there was a message, it’s that there has been a lot of innovation. It may not have come from vendors that you’re currently working with, but there are ways to materially impact both the economics and the velocity of your storage infrastructure. Through the last downturn, when people paid attention, I think it really helped NetApp, and I think a big part of our growth last year was the seeds we sowed in the downturn. People were buying into our message and buying into our technology even though they didn’t have any money to spend, and they bounced back last year. Going into 2012, if cost is a top priority of customers, we can show them how we can dramatically lower their cost without impacting their service level agreements, and perhaps at the same time improving their flexibility and velocity. We get back to the fundamental message of enabling customers to propel their business forward at a lower cost structure. And in a difficult economic time, if that story is credible, people are going to pay attention to it.

Gallant: Any final thoughts, Tom?

Georgens: NetApp is about innovation. I don’t know what our innovation will be 10 years from now, but we recognize that if we’re going to be here 10 years from now we’re going to need to continue to innovate. The other thing is that there are technologies that have meaningful impact on the cost of the storage infrastructure that I think a lot of end users are not aware of. That’s really going to be our challenge over the next couple years, to demonstrate these technologies. Usually, we do that in conjunction with virtualization because virtualization is already causing them to change their way of thinking. Customers are already predisposed to change, incumbency has weakened. So us connecting to that, giving us a chance to demonstrate our value, that’s opened up opportunities elsewhere within the customer’s environment.

This article, “NetApp’s Tom Georgens: How we got big, stayed nimble, and view storage today,” was originally published at InfoWorld.com.