In an interview with IDGE, Josh Rogers and Lonne Jaffe of Syncsort explain how they plan to transform big iron and traditional data warehouse/analytics systems.

When you think of leaders in big data and analytics, you'd be forgiven for not listing Syncsort among them. But this nearly 50-year-old company, which began by selling software for the decidedly unglamorous job of optimizing mainframe sorting, has refashioned itself into a critical conduit by which core corporate data flows into Hadoop and other key big data platforms. Syncsort labels itself "a freedom fighter," liberating data and dollars — sometimes millions of dollars — from the stranglehold of big iron and traditional data warehouse/analytics systems. In this installment of the IDG CEO Interview Series, Chief Content Officer John Gallant spoke with Josh Rogers, who was named CEO this week, as well as outgoing CEO Lonne Jaffe, who remains as Senior Advisor to Syncsort's board. Among other topics, the pair talked about why Syncsort was recently acquired by Clearlake Capital Group, and how Syncsort's close partnership with Splunk is dramatically improving security and application performance management.

IDGE: Lonne, I understand you like good storytelling. What's the story you tell IT leaders today about Syncsort?

Jaffe: Syncsort was founded in 1968. It was one of the very earliest software companies. I joined a little over two and a half years ago, and over the last couple of years the company has focused on this new mission around liberating data and liberating budgets from the stranglehold of legacy systems, while making the data and the budgets available for the fastest-growing data platforms in the world, things like Apache Hadoop and Splunk. Those platforms enable some of the interesting next-generation machine learning technology that we're seeing manifest across all sorts of industries, like health care and self-driving cars and the Internet of things. It has been a remarkable transformation in a lot of ways, because people don't usually think of a 48-year-old software company as the kind of entity that would be able to innovate organically around next-generation big data platforms. We've also been doing a lot of acquisitions. The play since I joined — and even after we were acquired a few weeks ago — is to supplement our organic innovation with acquisitions of high-value businesses in near-adjacent spaces that are aligned with that theme, acquiring companies with technology and talent that help with that storyline of liberating budgets and liberating data. We think of ourselves a little bit like freedom fighters. The company is in a unique position because Silicon Valley companies often struggle with even understanding the basics of some of the larger existing platforms that are out there, especially things like the mainframe. The really large companies that have the talent and the go-to-market that would be well suited to do something like this often have the classic innovator's dilemma: the last thing they want to do is liberate budgets from their existing businesses in order to make those budgets available for new systems that are largely open source, that they don't control, and that are being sold for sometimes one one-hundredth the price of their existing products. We're in this unique position of being large enough to be able to pull it off and having the talent and technology that would be needed.
[We're also] small enough that we can be decisive and act with conviction around these growth opportunities, which involve not just making data accessible to the new machine learning platforms but also shutting down huge amounts of spend — orders of magnitude more spend in some of these legacy platforms as we do that data liberation.

Rogers: What we see with Syncsort customers is that they're grappling with significant data challenges — how to look at analytics systems and data repositories and leverage the power of new tech like Hadoop to increase their ability to analyze data. That whole decision process is complex. But then, once decided, they're also having to think about how to make those new platforms usable by plugging them into existing systems. One of the most difficult to integrate is the mainframe — moving data out of it and making it useful in Hadoop infrastructure is challenging because, given Hadoop's open source nature, there aren't a lot of user tools to help. Syncsort makes that much easier for customers, so they can actually get an ROI on their investments and take advantage of mainframe data — often their most important data source. The other piece of the story we tell is the cost-savings opportunity of moving workloads from the mainframe or data warehouse and replicating those same workloads to execute in a low-cost infrastructure like Hadoop. We're hugely focused on helping customers here. We're an active contributor to the Hadoop open source project and have good partnerships in the space, but also have 48 years of mainframe experience. This puts us in a unique place, and has allowed us to gain the trust of companies and customers on both sides: big iron and big data.

IDGE: Lonne, you worked at IBM in acquisitions and in some other roles in the tech industry. Why did you take on this job? What did you see in the company that made this an appealing opportunity for you?

Jaffe: A couple of things. One was that there was already organic technology that was highly differentiated in terms of being usable for the strategy. It was the nature of the product itself, including the Hadoop product that was largely built before I joined. We're unique in the industry in terms of capabilities: extremely high performance, deeply instrumented hooks into the existing legacy business platforms, and native integration with Apache Hadoop, which is arguably the fastest-growing software platform in the entire industry. That was a rich and powerful asset that I was excited about. The other piece is the company itself, because it had been around for so long and was selling industrial-scale software to the largest companies in the world. [We have] thousands of customers in 87 countries, subsidiaries in eight countries, and an existing renewal stream through which we have conversations with thousands of customers every year. That was a rich, intangible asset that could be used as an anchor to acquire some of the more interesting high-value technology companies out there, so we had this ability to do a two-prong strategy. One [prong] was to double down on organic growth. Part of that was launching a number of new products that would capitalize on the existing go-to-market [capabilities] but also on the existing products and technology the company already had. The other piece, explicitly from the beginning, was to do acquisitions. I'm a big believer that the innovation tool kit has many tools in it.
One of them is building products and taking them to market, but another really important innovation tool is the ability to acquire businesses that already exist. In many ways that can be easier and more rewarding than even building products, because a lot of times when you build stuff it doesn't work or it takes a really long time. When you acquire something, you can look to see if it works before you buy it, and once you do the acquisition you have it instantaneously. A lot of times it comes along not only with tech and talent but also revenue and profit, customers and an existing go-to-market. The ability to do that two-prong strategy was a big part of what attracted me to the company.

IDGE: You talked about capitalizing on this big data opportunity. Specifically, how are you doing that? What are the steps you've taken to do that?

Jaffe: There are two products that are particularly salient. The first is called DMX-h and the other is Ironstream. DMX-h is our Hadoop-based product. The strategy there has been to make tremendous contributions to the Hadoop open source project. There are a number of open source players like Cloudera, Hortonworks, and MapR that have been big supporters of ours in making those contributions, and we've been one of the more prolific contributors. As we've made those contributions we've designed them in such a way that they help the Hadoop stack mature, but they also give us an advantage as we connect in our higher-value software that runs on top of Hadoop. Hadoop is in many ways becoming the de facto operating system for data in the industry. I think of it a little bit like TCP/IP. It's becoming a standard. It's not really a product exactly anymore. It's a framework and an architecture that everyone is building to. I've never really seen this level of ubiquitous agreement by every large existing company and all the well-resourced newer companies to build to the same exact fundamental architecture. Our product is a runtime engine that runs on top of Hadoop for existing workloads that you were previously running in a place that's a lot more expensive and locked away. People have legacy data warehouses today where you can spend as much as $200,000 a terabyte in a typical deployment. These can be $100-$150 million deployments at a typical customer. They can move those workloads to run on top of our product in Hadoop (and we have some tools, which we don't sell, that help with the moving process). Because Hadoop is using commodity hardware, regular Intel-based machines, and it's running open source software that is very inexpensive and is becoming increasingly easy to manage, the all-in cost can be on the order of $400 to $1,000 a terabyte. That's somewhere between one-thousandth and, more generously to the legacy vendors, one-hundredth the price. When something is that much cheaper it's essentially free. Emotionally, it feels like it's free when it's a hundredth the price of what you're currently spending. That can save staggering amounts of money — it can be $10 million in a single year in terms of run-rate spend — that you can immediately use to hire data scientists and machine learning experts to do advanced analytics. You now also have all of your data in a platform that's getting better faster than anything else in the technology industry.
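As a rough sanity check on the per-terabyte figures Jaffe cites, here is a minimal back-of-envelope sketch in Python. None of these numbers come from an actual deployment, and the 50 TB workload size is a hypothetical value chosen only to show how a run-rate saving on the order of the quoted $10 million a year can arise.

```python
# Back-of-envelope comparison using the per-terabyte figures quoted above.
# Illustrative only; real deployments vary widely.
legacy_cost_per_tb = 200_000             # legacy data warehouse, $/TB
hadoop_cost_per_tb_range = (400, 1_000)  # commodity Hadoop cluster, all-in $/TB

for hadoop_cost in hadoop_cost_per_tb_range:
    ratio = legacy_cost_per_tb / hadoop_cost
    print(f"At ${hadoop_cost}/TB, Hadoop is roughly 1/{ratio:.0f} the legacy price")

# A hypothetical 50 TB workload moved off the warehouse, priced at the
# higher end of the Hadoop range:
workload_tb = 50
savings = workload_tb * (legacy_cost_per_tb - max(hadoop_cost_per_tb_range))
print(f"Indicative savings on {workload_tb} TB: ${savings:,}")
```

Running this prints ratios of roughly 1/500 and 1/200, and an indicative saving just under $10 million for the hypothetical 50 TB workload, which is consistent with the ranges quoted in the interview.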
It has two huge advantages. One is you save a ton of money, and immediately you also unlock the data and put it in a place where every day there's another next-generation advanced machine learning system that comes online. All of your data is already there and ready to be used. There is a big shift happening in the data world around machine learning that is still in its infancy but is a juggernaut in terms of its momentum. It used to be that humans wrote almost all of the software. Humans wrote software, then the software did something. Now it's becoming increasingly the case that the machines are writing the software. People don't call it that. They call it machine learning or they call it training models. But really what's happening is the data is coming in, the machines are using the data to write software, and then that software runs and does something. You see this even with companies that you think of as being in a different industry. Tesla is a car company, but in some ways it's also a machine learning company: they have to build a car so that they can gather all the data that they need to train their machine learning model to be a self-driving system. The system gets better when it has more data. This is why Google was able to open-source all of its core IP associated with the search algorithms and a lot of its other machine learning systems: they know that their insurmountable advantage is not the IP or the algorithms but rather the staggering amount of data that they have compared to everyone else. If you don't have that volume of data you can't train the models and the software. A lot of times, with the software that the computers write and that then runs, the humans don't even know what it does, but they don't have to know, because it works, and it works because it's trained on really large amounts of data. A lot of these algorithms were written in the '70s and the '80s, and they weren't that useful until recently because there wasn't enough data to train the models. Now, because of these new open source platforms like Hadoop, it's possible to have seven years of data on your customer ATM system or your credit card processing system or your airline reservation system instead of only three weeks of data, so the models can get really good. When you're building your clinical analytics system for health care, figuring out which treatments result in the best outcomes, and you feed all the data in, you can get interesting, useful results, whereas before you didn't have enough data to do that. That's a big underlying secular growth opportunity around the play. The short-term capability of the product is a really powerful runtime engine that's well suited to moving workloads from systems like the mainframe, Teradata, Netezza, and Oracle into Hadoop; it serves as a catcher for those systems, then runs them in a way that's easy to maintain, high performance, and very secure.

IDGE: To be clear, that's DMX?

Jaffe: DMX-h, yes. The second product is called Ironstream, and that is primarily a cyber security product. There's also an underlying growth trend there. The mainframe, which is tens of billions of dollars a year in spend, is a mission-critical system that runs some of the world's most important transactional environments, things like airline reservation systems or credit card processing systems or retail commerce systems. It's used for those cases because it's an I/O supercomputer. It's unmatched in terms of concurrent transaction processing.
It's not as useful for things like running a social network, where you need to do status updates and it doesn't matter if the page hasn't updated yet when you refresh. But for things where it does matter, where you need perfect transactional integrity, like financial services, they've started to measure the power of the mainframe by the number of Cyber Mondays you can run on a single box. That's the metric they're using because it's so powerful. There's a big data company called Splunk, which is one of the fastest-growing software companies in the history of humanity. It's been targeting a couple of really important use cases, two of which are cyber security and application performance monitoring. There has been this gap, which is that they couldn't monitor the mainframe, because it's actually really hard. [Splunk's] technology pulls log data off of pretty much every other kind of system that exists in an enterprise except for the mainframe. The mainframe is really hard because there are a lot of logs. It's one of the most prolific log generators, and it's running your systems of record. Your cyber security system is not that useful if it can secure your entire enterprise except for your system of record, the system that has all of your customer bank accounts on it. Similarly, your application performance monitoring system can monitor basically your entire enterprise except for the third tier of your three-tier applications, so you can never get to the root causes of any problems. Splunk approached us about that, and we were in this unique position: most of the other mainframe software vendors have massive existing businesses that are getting decimated by Splunk, so the last thing they want to do is help get mainframe log data off the mainframe and into Splunk. The Silicon Valley companies, many of them don't even realize the mainframe still exists, let alone that the mainframe is a bigger market than most of the data industry. That's what Ironstream does. It pulls the cyber security data and the advanced application performance monitoring data off the mainframe and feeds it into Splunk for advanced analytics on those use cases: cyber defense and application performance monitoring, and increasingly other things like broader analytics and customer churn. Ironstream has actually become the fastest-growing product in the 48-year history of Syncsort; it's already closing huge, million-dollar-plus deals and has been really exciting in terms of real-time streaming and telemetry data.

Rogers: If you look at our customer base for DMX-h and Ironstream, something that's unique is the pace at which we've been able to take the largest enterprises in the world into production. We're approaching customers not only with products that can deliver the value prop we're describing, but also with expertise, and even battle scars, from having done this with many Fortune 500 enterprises over the last 36 months as Syncsort has been delivering these products to market. We believe we have the best experience in the world in offloading processing to Hadoop and delivering mainframe data to big data infrastructure — like infusing Splunk with critical log data for monitoring. This expertise is packaged into every proposal we do — we include services to help customers get their tech configured and operationally sound. We don't simply deliver technology, but help customers have success with it.
One large financial institution got started with a data warehouse offload program and was very successful. As it got deeper into it and put more and more processing into production in its Hadoop cluster, the company started to look at other areas to reduce cost. It had a test data management process running on the mainframe to support app testing, and it was very rigid and expensive – the company could only run it once a year. It realized this was the type of process that would be well suited for Hadoop, and it turned to Syncsort to help rebuild that process in a Hadoop cluster. Businesses might start with data warehouse offload, but once they have the infrastructure up, we see it start to attract other workloads and increase the value of the investment.

IDGE: The other thing that you brought up is cost savings — you mentioned it with Hadoop — but if you were to crystallize it, what are the key ways that you save money for customers?

Rogers: The bulk of cost savings is in moving workloads from expensive platforms to less expensive ones, but there's another piece, and it invokes the company name. We're steeped in history and are the leader in sorting technology. The sort function gets invoked at every stage of data processing — it drives up mainframe CPU and cost because of the high usage, and it increases the size of Hadoop clusters. One of the benefits of all of Syncsort's products — this has been the case for 40-plus years and a key differentiator — is we'll execute the same workload more efficiently than any other product because of the core architecture of our compute engine and the time-tested algorithms and optimizations that get invoked at runtime. There's the cost savings because you're running data processes on a Hadoop cluster instead of a Teradata box, but also huge cost savings because our technology makes the workloads more efficient in the Hadoop cluster. They're even more efficient than if you wrote those jobs as custom code, like MapReduce code, directly on Hadoop.

Jaffe: The main mechanism is the ability to shut down existing spend on legacy systems. I'll give you an example around DMX-h. The traditional architecture of an analytics environment is you have six or seven different source systems. One of them might be social media, and maybe Web logs from your website, and the mainframe and legacy databases and things like that. You bring them into a data warehouse, you do some preprocessing, then you do some analytics, maybe create some dashboards against it. What the customers are doing is they're putting Hadoop in the middle of that architecture. They're putting it between the source systems and the downstream systems. When that happens they can shut off a lot of things. They can shut off their spend on ETL products from companies like Informatica. They just turn them off. They can shut off half of the capacity in their data warehouse; about half of that capacity is effectively preprocessing that can be done for a teeny fraction of the cost in Hadoop. Then on the source-system side you have these big systems like mainframes, which are tens of billions of dollars of spend, about half of which is essentially inefficient batch preprocessing, all of which can be moved into Hadoop. The mainframe is actually metered, so as soon as you move workloads off of it you start saving a huge amount of spend, and that happens instantaneously. That savings is all immediate. That's the step-one project.
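To make the "Hadoop in the middle" step-one pattern concrete, here is a minimal, generic sketch of what such an offload job can look like. It is written as a plain Spark job rather than in Syncsort's DMX-h product, and the paths, table names, and columns are hypothetical; the point is only that the preprocessing moves onto commodity hardware while the downstream analytics stay untouched.

```python
# A minimal sketch of a "step one" offload job: preprocessing that used to run
# inside the warehouse (or as mainframe batch) is re-expressed as a Spark job
# on the Hadoop cluster, and only the finished result is handed downstream.
# All paths and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("warehouse-offload-step-one").getOrCreate()

# Raw extracts landed on the cluster from the source systems (mainframe
# unloads, web logs, legacy databases, and so on).
transactions = spark.read.parquet("hdfs:///landing/transactions/")
customers = spark.read.parquet("hdfs:///landing/customers/")

# The preprocessing that previously consumed warehouse capacity: cleanse,
# join, and aggregate on cheap commodity hardware instead.
daily_summary = (
    transactions
    .filter(F.col("amount") > 0)
    .join(customers, "customer_id")
    .groupBy("customer_id", "txn_date")
    .agg(F.sum("amount").alias("total_amount"),
         F.count("*").alias("txn_count"))
)

# Hand only the finished, much smaller result set to the existing warehouse
# and dashboards, so nothing downstream has to change.
daily_summary.write.mode("overwrite").parquet("hdfs:///curated/daily_summary/")
```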
That step-one project is one we've done over and over again at some of the largest companies in the world, including banks and many of the largest telecom and financial institutions. It's a very easy project because it doesn't require any reengineering of your business processes. You're not changing the analytics that you're doing at all, you're not changing the source systems; nothing is changing other than that you're replacing some really inefficient systems with Hadoop, and you just snap it right in. It's pretty elegant and you're ready to go. The step-two project, which in some ways is the more interesting one, is that you can start shutting down the downstream part of that architecture altogether. Instead of sending the data through Hadoop on the way to where it was going anyway, you start sending it to Hadoop and running new, next-generation analytics directly against that. Now you're able to do all sorts of things that you weren't able to do before. That's less of a cost-savings play, although a lot of times the use cases are very cost-savings oriented, like churn analytics, one of the most common things people are doing on the large volumes of data they're bringing into Hadoop: predicting which customers aren't going to continue so that you can rescue them now for relatively small amounts of money instead of having to pay more to rescue them later. Those can be business-oriented cost savings that need to happen as well. In the case of Ironstream, one of the main ways that people are able to save money is by shutting off the legacy monitoring products, some of which are billion-dollar-a-year businesses running what I sometimes affectionately refer to as captive-grazing-based business models, where they are not really improving the products at all. They're jacking up prices on the customers; they're just extracting value. By lighting up their existing environment within something like Splunk, which they're typically already using for everything else, customers can start turning off those systems and save orders of magnitude more money than they spend on Splunk and Ironstream and all the rest of their new analytical environment put together.

Rogers: A real-life example of this is with another financial institution we work with that had challenges with compliance and security. Application testing groups would leverage mainframe data assets, and the organization didn't have a good way of monitoring that only authorized users were accessing various data assets. Ensuring privacy has grown in importance as regulations have increased in the financial sector. They had some level of mainframe monitoring, but it required a single mainframe expert to interpret whether any violations in access had occurred. This was a big job, the person monitoring the mainframe was not connected to the application testing groups, and the information was in a format that was not easily shareable across departments. This institution was a Splunk user and had great success in using Splunk dashboards to deliver information to broad sets of users to support other use cases. They determined a Splunk dashboard would be the best way to communicate data access patterns to testing managers and ensure compliance, but they needed a way to move the appropriate mainframe log data (up to 1TB per day) into Splunk on a real-time basis, which is what Ironstream does.
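For readers unfamiliar with how event data lands in Splunk, the sketch below shows the generic destination side of such a feed: posting log records to Splunk's HTTP Event Collector (HEC). This is not how Ironstream itself is implemented, and the host, token, sourcetype, and event fields are all hypothetical placeholders.

```python
# A minimal, generic sketch of streaming log records into Splunk over its
# HTTP Event Collector (HEC). This is NOT Ironstream; it only illustrates the
# Splunk side of the pipeline described above.
import json
import time
import requests

SPLUNK_HEC_URL = "https://splunk.example.com:8088/services/collector/event"
SPLUNK_HEC_TOKEN = "00000000-0000-0000-0000-000000000000"  # hypothetical token

def send_to_splunk(record: dict) -> None:
    """Forward one access-log record to Splunk in near real time."""
    payload = {
        "time": time.time(),
        "source": "mainframe",          # placeholder source name
        "sourcetype": "mf:access_log",  # placeholder sourcetype
        "event": record,
    }
    resp = requests.post(
        SPLUNK_HEC_URL,
        headers={"Authorization": f"Splunk {SPLUNK_HEC_TOKEN}"},
        data=json.dumps(payload),
        timeout=5,
    )
    resp.raise_for_status()

# Example: a dataset-access event of the kind a compliance dashboard would chart.
send_to_splunk({"user": "TESTER01", "dataset": "PROD.CUSTOMER.MASTER", "action": "READ"})
```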
The institution was able to decrease the risk of violating compliance regulations, and the project had a material impact on the business — it increased the speed at which they can deliver new features in applications because it sped up the testing process.

IDGE: You've made two acquisitions. How do those acquisitions advance the strategy?

Jaffe: I'll go in reverse order, starting with the most recent one. William Data Systems made software that helped our cyber security play. It added another type of security data to the Ironstream product, which was network security. Network security is very important when you're doing cyber defense because a lot of the interesting attacks and problems happen in the context of the network and all of these systems, especially the high-value and industrial-scale systems. It's a London, UK-based company. We acquired the technology, we immediately put the IP inside Ironstream so that you get it for free as part of Ironstream, and we started using all of the talent in the company to build new capabilities within the Ironstream product. Then we also gave the existing William Data products a lot of lift. We were able to go to our existing install base and say to everybody: Do you want this? A lot of the customers said yes, so we were able to create some growth there. The prior acquisition was a company called Circle Computer Group, also based in the United Kingdom near London, and it was a very similar dynamic. It had software that allowed you to shut down spend on some legacy data platforms while also moving the data to places where it was a lot more accessible to fast-growing big data platforms. Within a few weeks of closing the acquisition, we had essentially paid back a quarter of the purchase price of the company by giving it lift and bringing it to our existing customers. The CEO of the company became the head of our European operations because he's a fantastic leader. All of the technical talent started working on building Ironstream, and they've actually built a big part of that core organic IP. Then we were also able to get a number of customers we didn't have before whom we could upsell with the rest of our products. That's the kind of acquisition we were looking for: highly differentiated tech that's a near adjacency to what we currently do, where we can give it lift and then use all of the other parts of the company, like the talent and the intellectual property, for some of our new initiatives.

Rogers: Circle is right in the vein of saving customers money. IMS is one of the first databases ever invented, and many large enterprises still run it. The licensing cost from IBM is expensive, the architecture is hierarchical, and there's the skills-gap risk — people with the skills to run it are retiring. What Circle's product does is allow you to move data from IMS to DB2 for z/OS — a relational database running on the mainframe. You can eliminate the IMS database and stop paying the licensing fee, and now the data is in a relational format, a modern architecture where plenty of skills exist. The last piece is that your data is now more usable to be shipped off-platform for analytics. It's a low-risk modernization for cost savings and to liberate data for analytics — which is our overarching strategy.
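As a toy illustration of what a hierarchical-to-relational move involves conceptually, the sketch below flattens a nested, IMS-style record into flat parent and child rows joined by a foreign key. The record layout, segment names, and fields are all hypothetical, and Circle's actual product automates far more than this suggests.

```python
# A minimal sketch of the general idea behind a hierarchical-to-relational
# migration: each segment type in the hierarchy becomes its own table, with a
# foreign key back to its parent. Names and fields are hypothetical.
customer_rows, order_rows = [], []

# A toy "hierarchical" record: a parent segment with child segments nested inside.
ims_style_record = {
    "customer_id": "C100",
    "name": "ACME CORP",
    "orders": [  # child segments under the customer root
        {"order_id": "O1", "amount": 250.0},
        {"order_id": "O2", "amount": 75.5},
    ],
}

def flatten(record: dict) -> None:
    """Split one hierarchical record into flat parent and child rows."""
    customer_rows.append({"customer_id": record["customer_id"], "name": record["name"]})
    for order in record["orders"]:
        # The foreign key makes the parent/child relationship explicit, which
        # is what lets a relational engine (and downstream analytics) join on it.
        order_rows.append({**order, "customer_id": record["customer_id"]})

flatten(ims_style_record)
print(customer_rows)  # -> one CUSTOMER table row
print(order_rows)     # -> ORDER table rows, each carrying customer_id as a foreign key
```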
It's hard to overstate the talent we get with acquisitions — there are really talented folks in these companies, and we're quickly able to repurpose some of that talent and apply it to our next-generation products to increase the pace of innovation for the organic solutions Syncsort has developed and is developing. We continue to have a very active acquisition pipeline and see lots of interesting opportunities with highly differentiated technology that is a near adjacency to the existing business and to the value prop of saving money and making data more shareable for analytics.

IDGE: Lonne, what drove the acquisition of Syncsort by Clearlake Capital and how does that advance the strategy?

Jaffe: We were the first investment in its brand-new $1.4 billion fund, and it was looking to double down on both of the aspects of the strategy that I was talking about. It wanted to both invest in the organic products — in particular DMX-h and Ironstream — and, perhaps more important for them, to use the company as an anchor asset to deploy a lot of additional equity capital for the purposes of acquisitions. As a financial sponsor you can buy companies yourself, or if you do it through a powerful strategic partner like Syncsort you can get all sorts of synergy and lift associated with the acquisition. That was the goal. A big part of it was the management team, I think, which had the idea that there are really interesting technology companies, and if you acquire them you could then include them in this overall strategy of liberating data and budgets. You want to be selective about what you acquire, but it can be very, very powerful as a strategy to do that sort of acquisition play in addition to the organic growth.

IDGE: Here's your fun fact. I started covering mainframe software in 1986 writing at Computerworld, so I know Syncsort, but I know a very different company. I think there are a lot of people in the business who, if they do know Syncsort, think of it as a very different company than what you're describing. How are you going about changing that thinking and getting people to understand essentially the new Syncsort?

Jaffe: It's definitely been a challenge. There are advantages to it as well, because many of the customers have been running Syncsort for decades, and they know and trust the software and they know it's industrial scale and enterprise class. But culturally it's been a challenge to get accepted into the open source ecosystem around Hadoop and to get perceived as an innovator in and around these fast-growing big data platforms. One thing that's really helped as we've been working with the open source community is being willing to do a tremendous amount of work in and around these products and improving them. The open source communities are a meritocracy: the people who make the most good contributions have the most influence on the project, so that was a big part of getting accepted in that community, especially around Hadoop. The acquisitions helped a lot. When people see that you are acquiring really interesting tech and bringing it to market, and you do good marketing around it and show up at all the right conferences and have a strategy around it, people take notice, and it changes the perception of the company from being one that still makes incredible sorting software to one that now has a whole portfolio of really interesting big data assets.

IDGE: In your press releases and in public statements there's been talk about how crucial alliances and partnerships are.
All tech companies have alliances and partnerships, but what are a couple that are really critical to success with your strategy?

Jaffe: Splunk is at the very top of the list there. In its earnings call, Splunk mentioned a couple of partners: Amazon Web Services, Palo Alto Networks, and Syncsort. We were mentioned first. That partnership has been fantastic. We're such natural allies in terms of being able to bring the mainframe data into Splunk, which is really important strategically for them but also helps to shut down the spend on some of the existing competitors that they have. For us, it gives us this incredible juggernaut of a go-to-market around the Ironstream product. Then the Hadoop distributions have been absolutely critical to our success. We have partnerships with Cloudera (where some of our larger Hadoop production deployments run), Hortonworks, and MapR. They've all been incredible in terms of supporting us and helping bring us to market and giving us the street cred that we need in the Hadoop ecosystem, which we might otherwise have lacked by virtue of being an older company. Then some of our reseller partners, like Dell, for example, made a huge bet on Syncsort as the flagship technology on their Hadoop appliance. When the big existing companies make those types of bets, it's a signal to the market that this is powerful technology worth taking seriously.

Rogers: Cognizant has also been a great partner. They've developed multiple solutions around Syncsort's products — like Cognizant BigFrame to help with offloading batch workloads from the mainframe — and represent an important augmentation of skills in the marketplace around our technologies. Our goal is not to build a large service organization — we have enough services to help customers see success with Syncsort technology, but it's important to have skills in the market to support large deployments and projects, too.

IDGE: What are you prioritizing and spending your time on in 2016?

Jaffe: Yes, I'm a big believer in prioritization. That includes often choosing what you're not going to do. One of the first things I did when I joined the company was a divestiture of about a third of the business, which was its backup software business. Going forward, the major focus areas are going to be continuing to innovate around things like Apache Spark and Kafka, which are part of the broader "Hadoop and friends" ecosystem (they're not technically part of the Hadoop stack, but they are now being included by many of the distributions) to think about real-time and in-memory capabilities; engineered systems like what we were talking about with Dell; and cloud. We launched our Amazon Web Services-based product that runs in Elastic MapReduce, which is their Hadoop distribution. I would say in order of importance it would be cloud, then the real-time work with Kafka and the in-memory work with Spark, and the engineered systems that we're doing with Dell and some of their other ecosystem partners. These are systems designed for offload, so they include our software and Hadoop as well as hardware. They are basically a turnkey platform that you can move workloads into.

Rogers: People are making massive investments in new big data technologies like Hadoop and Splunk, but their ability to gain value and insight is limited by their ability to get data assets into those environments. What we're seeing in customers is that mainframe data is the most important and the most challenging because of the way it's stored.
The way you get access is complicated and different from distributed systems, and there's a fair amount of technical work that needs to be done, but it needs to be cost-effective, can't increase mainframe costs, and must absolutely protect the data and allow companies to comply with regulations. Today customers are using a broad set of technologies to accomplish this; these products are not particularly efficient, the environment isn't necessarily easy to manage, and of course having to apply multiple products from a variety of vendors is quite expensive. We refer to this space as "big iron to big data," and in 2016 we're going to be doing a lot of research with customers and analysts to define this problem. We certainly understand pieces of it well, but it's broader than what we tackle today, and we want to be able to put a better definition around the set of challenges enterprises face in charting a path from big iron to big data. As we expand our solutions we will absolutely take advantage of new technologies like Spark and Kafka. They are critical, but they're enablers. They are in no way prepared today to tackle the challenges of big iron to big data on their own. They'll need extensions and additional management layers to attack the problem, and that's where Syncsort believes there's a huge opportunity to help existing and future customers.

IDGE: Were there other things that we didn't touch on that you think are important to people's understanding of the company and the strategy?

Jaffe: The cyber security focus is going to be increasingly important for the company. This is something I think people understand intuitively now, but the threat surface in the world is expanding, especially when you're dealing with really large volumes of data stored on commodity hardware connected to the Internet. We're squarely focused on it now with Ironstream, but going forward it's going to be important not only for the technology industry as a whole but also for Syncsort's growth.