Big data in the supply chain – mere hype or a useful tool?

AndSoft develops solutions for transport and logistics and closely follows the evolution of big data in the supply chain. For this reason, we are publishing the analysis of a leading expert below.

By Frans Kok, General Manager Asia-Pacific, AEB

Madrid, August 14, 2015. Is the currently popular term ‘big data’ all hype and no substance? After all, the definition of this rapidly evolving term is now so broad that it could be used to describe even the simplest of data analytics. The number of daily Google searches for ‘big data’ has increased twelvefold in the span of just four years. Granted, the popularity of the term ‘big data’ may run its course in time, like every other search term that’s currently trending. But that doesn’t necessarily mean the technology behind the term will fade with it. On the contrary, big data technology will have the opportunity to become more clearly defined than ever.

Big data, in its original sense, describes any voluminous amount of structured, semi-structured, and unstructured data that has the potential to be mined for information. The benefit of big data lies in crunching large sets of recent data to draw conclusions that can be used to make the best possible decisions for the future, sometimes in real time. The underlying theory is that wherever rules or mathematical formulas alone fail to yield useful forecasts, a combined analysis of all possible marginal parameters will make it possible to identify certain dependencies and patterns that allow more precise statements about the future.

Engineering knowledge makes it possible, for example, to determine the maximum number of hours a jet engine can be in service before maintenance is required. But it takes real-time analysis of an array of engine sensors measuring the tiniest irregularities (temperature distributions, imbalances, etc.) during operation to accurately predict when maintenance is actually needed.
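To make the idea concrete, here is a minimal sketch of that kind of real-time monitoring: readings are compared against a rolling baseline, and outliers are flagged as possible early signs of wear. The sensor values, window size, and threshold below are invented for illustration and are not taken from any real engine-monitoring system.

```python
# Minimal sketch: flag engine sensor readings that deviate from a rolling baseline.
# Sensor values, window size, and threshold are hypothetical illustrations.
from collections import deque
from statistics import mean, stdev

def detect_anomalies(readings, window=50, threshold=3.0):
    """Yield (index, value) for readings more than `threshold` standard
    deviations away from the rolling baseline of the previous `window` readings."""
    history = deque(maxlen=window)
    for i, value in enumerate(readings):
        if len(history) == window:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) / sigma > threshold:
                yield i, value
        history.append(value)

# Example: a stream of turbine temperature readings with one injected irregularity.
temps = [650.0 + 0.1 * i for i in range(200)]
temps[120] = 700.0  # the kind of tiny irregularity the text describes
for idx, val in detect_anomalies(temps):
    print(f"possible irregularity at reading {idx}: {val:.1f}")
```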

Similarly, setting up a supply chain to ensure that T-shirts are available in greater quantities at the start of the season is relatively simple. However, it takes the analysis of up-to-the-minute online consumer search queries, sales figures, social media activities, etc. to quickly identify local trends. For example, an imminent surge in demand for a T-shirt with a particular design can be expected if a popular celebrity was recently spotted wearing it in public.
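A toy version of that trend spotting might simply compare the most recent search volumes for each design against that design's own earlier baseline. The query counts and the cutoff factor below are hypothetical, chosen purely to illustrate the principle.

```python
# Toy sketch: flag a T-shirt design whose recent search volume jumps above its baseline.
# The daily query counts and the 2x cutoff are invented for illustration only.
def surging(daily_counts, recent_days=3, factor=2.0):
    """Return True if the average of the last `recent_days` counts exceeds
    `factor` times the average of the earlier baseline period."""
    baseline, recent = daily_counts[:-recent_days], daily_counts[-recent_days:]
    if not baseline:
        return False
    return (sum(recent) / len(recent)) > factor * (sum(baseline) / len(baseline))

searches = {
    "plain tee": [120, 115, 130, 125, 118, 122, 119],
    "celebrity tee": [40, 45, 38, 42, 160, 210, 260],  # spotted on a celebrity
}
for shirt, counts in searches.items():
    if surging(counts):
        print(f"expect a local demand surge for: {shirt}")
```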

However, this approach also highlights what is so curious about big data. Is it really possible to predict anything if you simply analyse enough of the right data? Does everything necessarily follow a pattern, or, to put it philosophically, is “predetermination” overrated? Is it even possible to get your hands on data of suitable quality and process it reliably? Are misinterpretations the rule or the exception?

Experts argue these points and have yet to reach a consensus. In many instances, the approach described in the examples above will yield better and better results; in other instances, it may not, or may even have the opposite effect.

At what point is big data actually “big”? When is data analysis by conventional methods sufficient, and when are big data methods really needed? In the example given above, the sensor readings from a single engine of a Boeing 787 would generate an astounding 500 GB of data on a single flight. That is big data in terms of volume.

What about data generated in a supply chain? A company that ships a million packages a month could collect compressed package data for over 400 years before accumulating 500 GB. If a company with such an impressive volume of shipments were to shorten its data collection period to a more realistic five years, the resulting volume of data – even when generously augmented by other consignment data for analysis – could still be easily analysed on a simple personal computer running standard software.
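For readers who want to check the arithmetic: the comparison works out if one assumes roughly 100 bytes of compressed data per package. That per-package size is our assumption, since the text above does not state one.

```python
# Back-of-the-envelope check of the figures above. The ~100 bytes of
# compressed data per package is an assumption; the article does not state it.
BYTES_PER_PACKAGE = 100           # assumed compressed record size
PACKAGES_PER_MONTH = 1_000_000
TARGET_BYTES = 500 * 10**9        # 500 GB, the per-flight engine figure

bytes_per_year = BYTES_PER_PACKAGE * PACKAGES_PER_MONTH * 12
years_to_target = TARGET_BYTES / bytes_per_year
five_year_gb = bytes_per_year * 5 / 10**9

print(f"years to reach 500 GB: {years_to_target:.0f}")  # ~417 years
print(f"data after 5 years: {five_year_gb:.1f} GB")     # ~6 GB, trivial for a PC
```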

So do supply chains really generate “big data” that needs to be analysed? It depends. For projects where the objective is to analyse the performance, costs, and vulnerabilities of (even large-scale) logistics operations and to assess the effects of optimisations, traditional data analysis (also known as business intelligence) is adequate. The goal here is to obtain key information based primarily on past data, though in some cases, this data can certainly allow extrapolations into the future. For many companies, even such relatively simple analyses harbour great potential for improving workflows and saving costs.

Big data in the supply chain makes sense when you want the analysis to include dynamic factors outside your own sphere of influence. An example specific to the supply chain industry would be advance knowledge of supply chain risks (such as strikes or political unrest). At this point, the boundaries between strict logistics and the planning, control, and implementation of procurement, production, sales, and any after-sales services begin to blur. An example is real-time control (or influence) of the purchasing process on e-commerce portals. We’ve all heard about Amazon’s vision of moving goods before an order is actually placed, based on expected customer behaviour. But the significance of such knowledge diminishes the further removed one is from the end customer. What do these possibilities mean for a third-tier supplier, for example?

If it is possible to predict sudden surges in demand, natural disasters, or strikes with sufficient accuracy, what good does this knowledge do if you lack the capacity to respond accordingly? If you want to benefit from big data over the long term, you also need to build up a highly agile and flexible supply chain to take advantage of the insight that it can bring. If captured, handled and analysed correctly, big data has the potential to introduce a new generation of risk management capabilities.

The question that remains is: where can manufacturers and shippers obtain the necessary, reliable data? We are likely to see further development of the market for service providers that deliver “bite-size” data sets on the political or meteorological climate, areas of turmoil, commodity prices, trends, and the like. We can also continue to hope that advances in technology will produce increasingly intelligent data processing systems that are more resistant to isolated misinterpretations.

Advance knowledge of these types of business-relevant developments promises an invaluable competitive advantage. It is precisely this vision (and especially the nearly endless potential uses beyond the supply chain) that feeds the hype surrounding big data, and it is exactly why the technology behind the term ‘big data’ will become better defined over time. Big data should be used as a tool rather than as a neatly packaged solution. We all need to decide for ourselves if, when, and how we use it. For many businesses, broader-based conventional analysis of existing data is already possible and greatly beneficial. Incorporating this into standard business processes would provide a strong basis for leveraging big data when it becomes available, or when the time is right for it to be used.

Read the latest news on Big Data in our regular section in this Blog.
