Moving along the data maturity curve to drive business transformation


In recent weeks, I’ve had the pleasure of interviewing several IBM executives about data and analytics, migration to the cloud, digital transformation and other topics important to CIOs in client organizations. I’ll be sharing comments from those conversations in a series of postings, and the interviews will also be available as podcasts. I hope you enjoy listening to them as much as I enjoyed chatting with these executives.

From transformation to data maturity

I spoke recently with Rob Thomas, vice president of development for analytics at IBM, about business transformation, a conversation that led us to the data maturity curve. This first installment presents what Thomas had to say on this topic and more.

What is a data maturity curve for analytics?

Many companies are at very different stages of maturity with respect to big data and analytics and how they use them to transform their business. Companies may adopt new tools and techniques, yet focus only on reducing or managing costs. Where many firms fall short is in using analytics to drive new business models and truly disrupt their industries.

We used to define maturity by the degree of analytics usage, and to some extent that is still true, but true maturity goes much further. Think about how some industries are reimagining how they provide products and services, changing the experience for their consumers and disrupting the industry by thinking differently. Yes, analytics is the key to getting you there.

According to Thomas, when you move along that curve and become less focused on simply saving money, you become more open to thinking outside the box. You tap into new ways to actually transform your business. And you start to ask yourself: What are you going to do in terms of line-of-business imperatives and line-of-business applications? How do you get to a new level of insight that impacts your daily business processes and ultimately the relationship that you have with clients?

For years, IBM has helped clients move from left to right on that curve. Companies moving further to the right are ready to innovate with new business models, build more sustainable competitive differentiation and potentially disrupt an industry. In fact, it’s interesting to compare Thomas’s view of the curve with comments from an analyst at IDC, who also addresses the strategic role of the CIO in a recent InfoBrief.

Is IBM’s embrace of open source more than just a rumor?

One of the great things about open source is the speed and scale of innovation. In years past, organizations were limited to their own talent pools. None of us can be as good as all of us; by tapping into the community, you can expand the talent and innovation available to your organization. Open source is the key.

By working with the community on projects such as Apache Spark and Apache Hadoop, we can see how together we can accelerate new capabilities for all of us. And to see just how far and fast these projects have come in a short time is pretty exciting. 

Moving a big component of IBM product development to a new paradigm—working in open source communities, contributing code and ultimately building our products on top of open source bases—has been a strategic imperative for IBM over the last few years. That imperative is helping us gain the benefits of rapid innovation with broader community support, and it also leads to a more open architecture that gives clients flexibility without lock-in to any one vendor or solution.

CIOs have a tremendous opportunity to remake their infrastructure and create a new environment that’s built for speed. But they need a healthy skepticism about how safely they can move fast along that journey, and they need to know which partners are best suited to help. “It requires either an intense amount of skills development within an enterprise to deal with that complexity or it requires a partner like IBM that can help you through that journey,” says Thomas.

Is open source really enterprise ready?

I was interested in hearing Thomas’s thoughts on this question. In short, his answer was that management and integration of open source are not yet enterprise ready. “It’s a bit of a naïve statement to say Hadoop is mature,” says Thomas, as almost 30 different subprojects exist within Hadoop at different stages of maturity.

As for Spark, some components are very mature, but others have a long way to go. In those areas, IBM is making significant contributions to help push toward maturity. “The main use case for Spark is around machine learning,” says Thomas. “That’s why we contributed a machine learning engine and optimizer to Apache, which will eventually make its way into the Spark project, we believe.”

Machine learning on a large corpus of data can enable businesses to make better recommendations and decisions, and we’re hearing that it will become mainstream in the next three years. Thomas cited the example of a retailer that moved from the industry-standard sell-through rate of 30–40 percent of inventory to a 90 percent rate. How? By running machine learning algorithms to make highly personalized recommendations for clients. That’s the kind of impact that can be transformative for any business.
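To make that concrete, here is a minimal sketch of the kind of collaborative-filtering recommender Thomas describes, using the ALS implementation in Spark’s machine learning library. The column names and toy purchase data are hypothetical, not details from the retailer in his example.

from pyspark.sql import SparkSession
from pyspark.ml.recommendation import ALS

spark = SparkSession.builder.appName("retail-recommendations").getOrCreate()

# Hypothetical purchase history: (customerId, productId, preference score)
ratings = spark.createDataFrame(
    [(0, 10, 4.0), (0, 11, 1.0), (1, 10, 5.0),
     (1, 12, 2.0), (2, 11, 3.0), (2, 12, 4.0)],
    ["customerId", "productId", "score"])

# Alternating least squares learns latent factors for customers and products
als = ALS(userCol="customerId", itemCol="productId", ratingCol="score",
          rank=10, maxIter=10, regParam=0.1)
model = als.fit(ratings)

# Top three personalized product recommendations per customer
model.recommendForAllUsers(3).show(truncate=False)

In production, the input would be the retailer’s full transaction history, and the output would feed the kind of personalized recommendations Thomas describes.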

What can we think about beyond hybrid cloud?

IBM has made a significant investment in cloud computing, and I asked Thomas about some of the specifics of cloud for data and analytics at IBM. He noted that IBM is moving all its data services to the cloud.

One of the biggest mistakes that many organizations make is taking the current paradigm—what they have on premises—and simply lifting and shifting it to the cloud. They can’t really achieve the agility and speed they want if they move existing, fairly restrictive deployments to the cloud in the same form.

Instead, those who are doing a better job of getting benefits from the cloud are changing their approach from a traditional IT stack to a fluid layer of composable data services. The traditional model of one repository, with business intelligence (BI) or custom applications on top, is very rigid because you have to move data into that single repository to get access to it. But a fluid data layer enables ingesting data from every potential source, whether it’s on premises or in the cloud. The new approach focuses on a stream of data that can be persisted in any form factor or any repository as the organization needs analytics.

From a rigid to a fluid construct

“All of this makes Spark so important because Spark gives you the ability to do that type of persistence or that processing above the repository layer,” says Thomas. “Which again totally changes the dynamic from a rigid construct to a much more fluid construct. So that’s what we mean. It’s about really changing the nature of how data flows into an organization and then how it can be accessed.”
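To illustrate the contrast, here is a minimal PySpark sketch of that fluid construct: one job reads from an on-premises relational database and a cloud object store, analyzes the combined data above the repository layer and persists the result wherever it’s needed. The hostnames, buckets, tables and columns are hypothetical, and the job assumes the appropriate JDBC driver and object-store connector are available.

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("fluid-data-layer").getOrCreate()

# On-premises relational source (hypothetical host, table and credentials)
orders = (spark.read.format("jdbc")
          .option("url", "jdbc:postgresql://onprem-db:5432/sales")
          .option("dbtable", "orders")
          .option("user", "analyst")
          .option("password", "secret")
          .load())

# Cloud object-storage source (hypothetical bucket of clickstream JSON)
clicks = spark.read.json("s3a://example-bucket/clickstream/")

# Join and summarize across both sources without first consolidating
# everything into a single repository
summary = orders.join(clicks, "customerId").groupBy("region").count()

# Persist the result in whatever form factor the organization needs
summary.write.mode("overwrite").parquet("s3a://example-bucket/insights/by-region/")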

Thomas is one of our speakers at IBM Insight at World of Watson 2016. Be there to hear him live.
