Europe, August 5, 2016.- Supply-chain visibility for all its actors is now a standard expectation. However, each logistics manager has their own view of what visibility means.
At AndSoft we had the opportunity to read the following analysis from Trax Technologies. We publish it here because it offers enriching analytical detail. We hope you find it of interest.
Everyone who manages logistics operations or relies upon logistics information wants better supply-chain visibility. The phrase "supply-chain visibility" means many things. To some, it means knowing where in-transit goods are. To others, it means understanding how much an organization spends on logistics services—by business unit, department, product line, etc. To still others, it means understanding the operational performance of both inbound and outbound supply chains—by lane, customer, supplier (of goods and logistics services), product line, or product.

All would agree that the goal of visibility is to provide insights that improve supply-chain operations and to provide reliable information to dependent groups, such as product managers or salespeople, who make decisions intended to optimize margins. Most parties who use current visibility and analytics "solutions" would also agree that they leave much to be desired in quality, completeness, and timeliness. Current solutions still can't consistently provide the scope and reliability of information needed to drive better global logistics operations and improved margin management.
The benefits of logistics visibility and analytics are real and substantial. Why are they so hard to achieve? The challenge arises mainly from two factors: process complexity and the low quality of data flowing through logistics processes. Process complexity is inherent to the logistics industry. In logistics, more than most industries, many parties participate in each transaction. The parties may include consignors, consignees, 3PLs, freight forwarders, performing carriers of record, actual performing carriers, and others. Each party uses multiple information systems, ranging from the completely manual to the fully automated, to accept and pass on data for a transaction. They communicate in multiple media and formats, with varying degrees of timeliness.
The diversity of data sources produces data of low quality. Each party may unknowingly introduce errors, omissions, or inconsistencies in data due to the requirements or limitations of its own operational systems. Or a downstream party may unknowingly pass along bad data it receives from an upstream party. An error as seemingly unimportant as a misspelled city name can negatively affect rating, cost allocation, and supply-chain network analysis. To improve the reliability of logistics visibility, it’s not enough to implement more sophisticated windows to the data. These “solutions” won’t fix the underlying problem. The industry must also improve the quality of the data—its completeness, consistency and accuracy. But how can it do so? Few individual companies can justify the high cost of technologies and operations they’d need to keep their logistics data reliable.
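To make the misspelled-city-name problem concrete, here is a minimal sketch of the kind of normalization step a refinery might apply before rating or network analysis. The city list and matching threshold are hypothetical, for illustration only; real systems use far richer reference data.

```python
import difflib

# Hypothetical reference list of canonical city names (illustrative only).
CANONICAL_CITIES = ["Rotterdam", "Antwerp", "Hamburg", "Barcelona", "Le Havre"]

def normalize_city(raw_name, cutoff=0.8):
    """Map a possibly misspelled city name to its canonical form.

    Returns the canonical name if a close match is found; otherwise
    returns the raw input unchanged, so it can be flagged for review.
    """
    matches = difflib.get_close_matches(
        raw_name.strip().title(), CANONICAL_CITIES, n=1, cutoff=cutoff
    )
    return matches[0] if matches else raw_name

# A misspelled origin city would otherwise fail rate-table lookups:
print(normalize_city("Roterdam"))  # -> "Rotterdam"
print(normalize_city("Hamburgo"))  # -> "Hamburg"
```

Fuzzy matching like this catches simple keying errors; ambiguous or unmatched names would be routed to manual review rather than silently passed downstream.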
That’s where third-party organizations known as data refineries come in. A logistics data refinery receives data from the numerous, disparate sources across the supply chain. It converts the data to a common, standard structure. It then uses correlation and other Big Data techniques to normalize, correct, and enhance the data for specific use cases. Finally, it evaluates the level of trust and confidence associated with each data element, considering each use case. With logistics data that’s standardized, normalized, and correlated to specific use cases, companies can:
- Gain reliable visibility into their global supply-chain operations.
- Tie actual costs to specific services, clients, and products.
- Provide insights about the performance of logistics operations and the effects of expenditures on margin optimization.
- Give greater confidence that analytics and predictive modeling provide valid insights.
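The refinery steps described above—ingesting disparate records, converting them to a common structure, and scoring the trust of each element—can be sketched as a simple pipeline. All names, fields, and confidence heuristics here are hypothetical, chosen only to illustrate the flow:

```python
from dataclasses import dataclass, field

@dataclass
class ShipmentRecord:
    """A standardized record, as a hypothetical refinery might produce."""
    origin: str
    destination: str
    cost: float
    confidence: float = 0.0          # trust score for a given use case
    issues: list = field(default_factory=list)

def standardize(raw: dict) -> ShipmentRecord:
    """Convert a source-specific dict into the common structure."""
    return ShipmentRecord(
        origin=str(raw.get("origin", "")).strip().title(),
        destination=str(raw.get("destination", "")).strip().title(),
        cost=float(raw.get("cost", 0.0)),
    )

def score_confidence(rec: ShipmentRecord) -> ShipmentRecord:
    """Attach a toy trust score; real refineries use richer heuristics."""
    rec.confidence = 1.0
    for name, value in [("origin", rec.origin), ("destination", rec.destination)]:
        if not value:
            rec.issues.append(f"missing {name}")
            rec.confidence -= 0.4
    if rec.cost <= 0:
        rec.issues.append("non-positive cost")
        rec.confidence -= 0.2
    rec.confidence = max(rec.confidence, 0.0)
    return rec

def refine(raw_records):
    """End-to-end sketch: standardize each record, then score it."""
    return [score_confidence(standardize(r)) for r in raw_records]

refined = refine([
    {"origin": " rotterdam ", "destination": "Hamburg", "cost": "1250.50"},
    {"origin": "", "destination": "Antwerp", "cost": 0},
])
print(refined[0].origin, refined[0].confidence)  # Rotterdam 1.0
print(refined[1].issues)  # ['missing origin', 'non-positive cost']
```

The key design point is that low-quality records are not discarded: they are standardized, annotated with their problems, and carried forward with a confidence score, so each use case can decide how much trust to place in them.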