
When big data meets innovation: cognitive platforms

opinion
Sep 5, 2017

Why we need learned logic as the new layer in the application architecture

For the longest time, the structure of a typical business application remained the same. Implementation details could vary, but the solution invariably consisted of the front end, the business logic, and the data management layer.

We started with monolithic applications running on mainframes. From there, we progressed to the client/server architecture with the business logic distributed across desktop applications and database procedures. This approach was later replaced with thin apps running in the browser and the business logic residing in the middle tier. Eventually, the middle tier was broken down into functions and microservices.

In addition to browser-based apps, we now have apps running on mobile devices and embedded systems. This situation is about to change dramatically, driven by the explosion in the volume of data that modern applications must manage.

The data volume growth curve has three distinct phases: the business data phase, the human-generated data phase, and the sensor-generated data phase. The classic application architecture was adequate for business data processing but strained under the load of human-generated data. It is entirely inadequate for processing sensor-generated data.

The reality is that no human can possibly formalize and encode a set of business rules capable of deriving meaning and actionable insight from the huge volume of data streaming in real time from millions of devices and sensors. This situation is now common in e-commerce, e-health, and Industry 4.0 applications.

We may choose to ignore this reality and carry on with business as usual. We must then accept that doing so means getting left behind in a rapidly changing technology landscape and losing ground to competitors at an alarming rate. Alternatively, we may embrace the change, alter the application architecture, and rise to the challenge of exploding data volumes.

Enter artificial intelligence (AI).

The AI topic has received a lot of coverage lately with scenarios ranging from excessively rosy to overly pessimistic. As far as application development is concerned, all we need to know is that AI is a collection of mathematical methods and algorithms that are used to detect patterns in very large historical sets of data and make predictions or inferences when presented with new data.

A deep learning algorithm, for instance, can be used to detect patterns in a very large data set consisting of pictures of cats. Knowledge of these patterns can then be used to classify previously unseen images as pictures with or without cats. This pattern detection process is called machine learning, and it produces an AI model. This model can be packaged as a service and called remotely, and can also be turned into a software module and embedded in an application running on a mobile device.
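The train-then-infer cycle described above can be sketched in a few lines. This is a toy illustration, not a deep learning system: the "images" are stand-in feature vectors, the learning step is a simple nearest-centroid scheme, and all names are hypothetical.

```python
# Minimal sketch: learn patterns from historical data, then classify new data.
from statistics import mean

def train(examples):
    """Detect patterns in historical (features, label) pairs: "machine learning"."""
    by_label = {}
    for features, label in examples:
        by_label.setdefault(label, []).append(features)
    # The resulting per-label centroids are the "AI model".
    return {label: [mean(dim) for dim in zip(*rows)]
            for label, rows in by_label.items()}

def predict(model, features):
    """Apply the learned patterns to a previously unseen input: "inference"."""
    def distance(centroid):
        return sum((a - b) ** 2 for a, b in zip(features, centroid))
    return min(model, key=lambda label: distance(model[label]))

# Historical data set: feature vectors labeled by hand.
history = [([0.9, 0.8], "cat"), ([0.8, 0.9], "cat"),
           ([0.1, 0.2], "no_cat"), ([0.2, 0.1], "no_cat")]
model = train(history)
print(predict(model, [0.85, 0.95]))   # classify a new, unseen input -> "cat"
```

The `model` dictionary produced by `train` is the artifact that could be packaged behind a service endpoint or embedded in a mobile application, exactly as the paragraph above suggests.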

An application may need access to several such models to perform image classification, speech recognition, content personalization, fraud detection, and other tasks. Together, these models form a new component in the application architecture—the so-called "learned logic," as opposed to business logic.
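One way to picture learned logic as a distinct layer is a hand-coded service that delegates pattern-based decisions to a model behind a stable interface. The classes, threshold, and fraud score below are all illustrative assumptions, with a stub standing in for a real trained model.

```python
# Hypothetical sketch: business logic and learned logic cooperating in one flow.

class FraudModel:
    """Learned logic: a trained model behind a stable interface.
    In practice this would wrap a remote model service or an embedded
    model file; here it is a stub with a hard-coded rule."""
    def score(self, order):
        return 0.9 if order["amount"] > 10_000 else 0.1

class OrderService:
    """Business logic: hand-written rules that now consult a model."""
    def __init__(self, fraud_model):
        self.fraud_model = fraud_model

    def place_order(self, order):
        if order["amount"] <= 0:
            return "rejected: invalid amount"   # coded business rule
        if self.fraud_model.score(order) > 0.5:
            return "held: manual review"        # learned-logic inference
        return "accepted"

service = OrderService(FraudModel())
print(service.place_order({"amount": 250}))      # accepted
print(service.place_order({"amount": 50_000}))   # held: manual review
```

The point of the interface boundary is that the model can be retrained, re-deployed, or swapped for a remote service without touching the business rules around it.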

Application developers have been building the frontend, business logic, and data layer components for ages. Every step in the software design, development, testing, deployment, and management process has been researched and optimized to perfection. The tool set is extensive and very well known.

On the other hand, very little is known about industrial-scale learned logic development.

To begin with, AI models are created by data scientists, not application developers. Very few of them are software engineers. Many of them are “true” scientists with advanced degrees in physics, math, microbiology, economics, and other non-computer fields. The tool sets and processes that are used by the data scientists to create AI models are very different from the ones used by software engineers.

Unlike reusable software modules, models trained on data in one application domain cannot, with very few exceptions, be easily transferred to another. Models also tend to decay over time.

As the underlying data changes, patterns must be relearned and models updated. To ensure consistent performance and a high level of service, model life cycle management must be integrated with the overall application life cycle management.
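The retraining loop implied here can be sketched as a model that monitors incoming data and refreshes itself when the distribution drifts. The drift metric (distance from a training-time mean) and the threshold are illustrative assumptions, not a prescribed method.

```python
# Hedged sketch: detect drift in live data and retrain, bumping a model version.
from statistics import mean

class ManagedModel:
    def __init__(self, training_data, drift_threshold=0.5):
        self.drift_threshold = drift_threshold
        self.version = 0
        self.retrain(training_data)

    def retrain(self, data):
        """Relearn patterns from current data and record a new model version."""
        self.baseline = mean(data)
        self.version += 1

    def observe(self, batch):
        """Compare fresh data against the training baseline; retrain on drift."""
        if abs(mean(batch) - self.baseline) > self.drift_threshold:
            self.retrain(batch)

model = ManagedModel([1.0, 1.1, 0.9])    # version 1
model.observe([1.0, 1.05, 0.95])         # close to baseline: no retrain
model.observe([3.0, 3.2, 2.8])           # data has drifted: retrain to version 2
print(model.version)                     # 2
```

Tracking an explicit model version is what lets deployment, rollback, and monitoring of learned logic plug into the same life cycle tooling used for ordinary code.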

And what about privacy and security? Can a model that was trained on sensitive data in a secure environment be deployed to a broader group of users in an environment that is less secure?

These are just some of the issues that one must consider when adding the learned logic component to an application architecture. There are no ready answers.

At this point, early adopters and innovators are going through a painful trial-and-error discovery process. They are developing their own tools and processes to integrate the two domains: software engineering and machine learning.

If history is any indication, eventually their hard work will give rise to a new platform for a new class of applications with broad cognitive capabilities. The business opportunity is enormous; the race is on.

Dmitri Tcherevik, Chief Technology Officer of Progress, leads the company's vision and technology strategy for cognitive applications across its product portfolio. As a core member of the Progress executive team, he not only executes upon Progress' technology road map, but leads future technology efforts including incubation projects, technology M&A and strategic alliances. Dmitri is an industry innovator with a proven track record of creating and evangelizing game-changing technology strategies. He's at once a hands-on engineer, strategist and practical go-to-market expert, with exceptional skills in devising and implementing technology strategy for emerging technologies.

Dmitri is a serial entrepreneur, having founded two successful technology start-ups—MightyMeeting and Infostoria (acquired by FatWire in 2007). Dmitri served as CTO at FatWire, where he helped to define the market around Web Experience Management, until its strategic sale to Oracle in 2010. Prior to FatWire, Dmitri led emerging technology development at CA. He carries specialties in the areas of mobile apps and platforms, cloud services, collaboration, web experience management and enterprise application integration. He also has a background in machine learning, having contributed to the development of a groundbreaking AI engine used in chess programs and economic forecasting software.

Dmitri majored in applied math and intelligent systems design at the National Nuclear Research University, Moscow, where he graduated with honors.

The opinions expressed in this blog are those of Dmitri Tcherevik and do not necessarily represent those of IDG Communications, Inc., its parent, subsidiary or affiliated companies.
