How Big Data is going to seriously disrupt P2P

During Lakran’s P2P Innovation Experience, held on May 25th 2016, some very exciting breakout sessions took place. Four teams enthusiastically took up the challenge to invent innovative applications for P2P, based on four technological trends: Networked Economy, User Experience, Internet of Things and Big Data. I had the pleasure of discussing Big Data together with Daphne Reurslag (Coupa), Robert-Jan Meeks (OptiMeeks Inkoop Implementatie), Walter Schoevaars (Marcksland) and Bjorn de Rooij (Menzis).

Big Data is a concept that none of us had heard of just ten years ago, yet today there is an overwhelming number of publications about it. Concepts that come into vogue are often picked up by many parties, each giving them their own flavor. We have seen this with SaaS, Cloud, Mobile, Internet of Things and Social. We also see it with Big Data.

To understand Big Data, one thing is crucial: exponential growth.

First of all, the exponential growth of data. For thousands of years, the amount of available data hardly grew. For a very long time, the entire body of wisdom of mankind resided in people’s heads. When people figured out how to read and write, these skills were mastered by only a limited number of people. The invention of printing made it possible to scale up the number of documents, and thus data, but it was only when the presses became larger and more people learned how to read that the number of publications really started to grow.

But the real boom in the growth of data runs in parallel with the development of the computer and the applications it enables. In professional environments this took off with the introduction of corporate applications like ERP. All of a sudden, much more data was recorded in a structured way, for example sales and procurement transactions, and in a way that made it possible to run reports on it. But the ultimate explosion came with the arrival of the internet. It is said that by 2020 the world will hold five times more data than in 2015, and that currently more data is generated in a minute than between the years 0 and 2000.

Secondly, the exponential growth of computing power. As said, the growth of data was only possible thanks to the invention of the computer and the subsequent growth of computing power: with computers, it is far easier to create data. All kinds of computer-aided applications generate data. With the advent of sensors that can be put in all kinds of “things” (like machines, human bodies, packaging, roads, etc.) and linked to the internet (IoT), the generation of data will accelerate beyond imagination.

This enormous growth poses some challenges: how can we control all this data? This is not about storage, because the cost of storage has decreased roughly in inverse proportion to the growth of computing power. It is all about making sense of all this data. There are some obstacles we have to deal with: the sheer amount of data, the question of how to assess data quality, and the complexity of data structures. How do we link data from different sources, based on different data models and gathered with different objectives? Again, the exponentially grown computing power is indispensable here. Not only does it make it possible to plow through massive amounts of data in a relatively short period of time, fast computers also enable us to use smart search queries and algorithms to distil useful information from unstructured data.
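To make that linking challenge a bit more concrete, here is a minimal sketch, using pandas and entirely hypothetical column names and data, of joining purchase order lines from an ERP extract with supplier records from a second source that follows a different data model:

```python
import pandas as pd

# Hypothetical ERP extract: purchase order lines keyed by an internal vendor number.
po_lines = pd.DataFrame({
    "po_number": ["4500001", "4500002", "4500003"],
    "vendor_id": ["V001", "V002", "V001"],
    "amount_eur": [1200.0, 530.5, 99.9],
})

# Hypothetical second source (e.g. an invoicing platform) with its own data model:
# suppliers are identified by VAT number and a free-text name, not by vendor_id.
suppliers = pd.DataFrame({
    "vat_number": ["NL001234567B01", "NL007654321B01"],
    "supplier_name": ["Acme Supplies B.V.", "Globex Services"],
    "erp_vendor_id": ["V001", "V002"],  # mapping maintained by master data management
})

# Link the two data models through the mapping column, then aggregate spend per supplier.
spend = (
    po_lines.merge(suppliers, left_on="vendor_id", right_on="erp_vendor_id", how="left")
            .groupby("supplier_name", as_index=False)["amount_eur"].sum()
)
print(spend)
```

In practice the mapping between sources is rarely this clean, which is exactly why data quality and data structure are listed above as obstacles.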

The challenge for our group was to work out, in one hour, how to apply the principles of Big Data, grounded in the exponential growth of data and computing power, to the domain of purchase-to-pay. How can we make processes better, faster, cheaper and easier with the use of Big Data?

The team agreed that over the last 20 years many people have worked very hard on improvements in P2P, but that all these improvements can be regarded as incremental: aimed at doing more efficiently what we already did before. And this has never brought us the overwhelming results we were aiming for. Perhaps it is time to take another road: instead of trying to do the same things better, we could look for ways to do things differently.

The aim of P2P is being in control: having employees order from preferred suppliers in order to realize savings and objectives relating to quality and sustainability. Prerequisites for such a process are efficiency and flexibility. Up to the present day, an iron logic has taught us that if control goes up, efficiency and flexibility go down, and vice versa. Could Big Data be the holy grail with which we can break this logic?

We think so. If you use process mining techniques to unveil what P2P processes really look like in organizations, it becomes quite clear that these processes may deviate seriously from the processes that were once defined in a blueprint document. Apparently there is a need to do things differently, to get around perceived obstacles, to use workarounds. This is a clear indication that there is a strong demand for more variance than rigid corporate processes can offer. A logical approach would be to use the results of such an analysis to visualize the bottlenecks and take steps to clear them away: incremental or continuous improvement. The downside of this approach is that the world keeps changing, which makes it inevitable that processes need to be updated continuously.
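As a minimal illustration of that idea, and only a sketch with hypothetical event data rather than a full process mining tool, the following snippet reconstructs the process variants that actually occur in a P2P event log and flags how often reality deviates from the blueprint sequence:

```python
import pandas as pd

# Hypothetical P2P event log: one row per process step, grouped by purchase order (case).
events = pd.DataFrame({
    "case_id":  ["PO-1", "PO-1", "PO-1",
                 "PO-2", "PO-2", "PO-2",
                 "PO-3", "PO-3", "PO-3", "PO-3"],
    "activity": ["Create PO", "Goods receipt", "Pay invoice",
                 "Receive invoice", "Create PO", "Pay invoice",   # PO created after the invoice arrived
                 "Create PO", "Change PO", "Goods receipt", "Pay invoice"],
    "timestamp": pd.to_datetime([
        "2016-05-01", "2016-05-03", "2016-05-10",
        "2016-05-02", "2016-05-04", "2016-05-12",
        "2016-05-05", "2016-05-06", "2016-05-08", "2016-05-20",
    ]),
})

# The process as it was once defined in the blueprint document.
BLUEPRINT = ("Create PO", "Goods receipt", "Pay invoice")

# Reconstruct the actual variant (ordered sequence of activities) per purchase order.
variants = (
    events.sort_values(["case_id", "timestamp"])
          .groupby("case_id")["activity"]
          .apply(tuple)
)

# Count how often each variant occurs and mark the ones that deviate from the blueprint.
summary = variants.value_counts().rename("cases").to_frame()
summary["deviates"] = [v != BLUEPRINT for v in summary.index]
print(summary)
```

On a real event log this kind of analysis quickly shows which workarounds occur, and how frequently, which is the raw material for the bottleneck visualization described above.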

Would it be possible to do this better and faster using the concept of Big Data? And what if we combined that with Artificial Intelligence (AI) and Machine Learning (ML)? In a next blog, to be published shortly, we will present a concrete example of what this could look like.