Despite a multitude of difficulties, many outbound LSPs are working to benchmark performance, but it will take significant co-operation between OEMs to really push the process forward

Benchmarking in finished vehicle logistics can be a fraught practice. The international scale of the automotive industry, myriad business models and varying company objectives can make even relatively straightforward measurements, such as lead time, labour productivity and damage, difficult to compare between companies. 

Meanwhile, anti-trust laws, along with a reluctance among some OEMs to share data, prevent carmakers and logistics providers from making comparisons outside their own networks and operations. Some experts believe vehicle logistics has lagged behind other industries, such as retail and supermarkets, in adopting benchmarking on a sufficiently large scale.

Even a company comparing its own providers’ performance across regions can run into trouble. For instance, stevedore damage rates tend to vary enormously in different countries. Is it valid to benchmark a company that loads most of its cars in Japanese ports against one that loads cars in Indonesia? While the objective should be to reach the same quality, it could be counterproductive to measure performance by the same criteria in all areas. 

“One of the biggest problems is that, even with the relatively small number of manufacturers, there is a lot of variance in definitions of damage, for instance,” observes Matt Holmes, director at Sevatas, a claims management firm. It would be very difficult to compare, say, distribution operations in a developed European country with moving cars thousands of miles on the Trans-Siberian railway. “You’d have to create a lot of definitions and conditions, and even getting the data in the first place could be a challenge,” he adds. 

Damage rates can also vary widely by season and geographic areas of a country, which could further confound statistical comparisons. Some experts say damage rates go up during the school holidays, as chucking stones at passing trains is a favourite pastime for bored kids (although others say that there is little factual evidence for this). 

To be truly valid, any benchmarking system would need to be sophisticated enough to take such factors into account. Furthermore, it would also need to work through complex industry semantics. Consultant Bill Kerrigan, who is also programme manager for finished vehicle logistics at the Automotive Industry Action Group (AIAG) in the US, has worked on a number of benchmarking initiatives. He stresses that companies have to start by defining what common terms actually mean, such as ‘at ramp’, ‘at port’, ‘at rail’ and so on. “Then you can construct your measurements and you have your baseline,” he says.


Ben Waller, senior researcher at distribution consultancy ICDP, says there is interest in benchmarking logistics data among manufacturers, though to date it has tended to be on a case-by-case basis. “For example, they might exchange information on how traffic is moving on a particular route to support the building of a new terminal,” he says. Companies also occasionally swap information on fuel consumption in their finished transport operations. 

Other industries may do more logistics benchmarking, Waller suspects, but the drivers may be different, particularly where retailers control more of the process. Finished vehicle stock turn is as vital a measure in automotive as it is in other sectors; as in grocery, it is ultimately a measure of cash flow. However, the turn is slower and ownership of the problem less clear. 

Know your indicators 

Comparing dealer satisfaction

If OEMs are unable to share their delivery data and logistics performance easily, it may be worth investigating further downstream, particularly at dealers. There are a number of dealer surveys for issues such as delivery accuracy and timeliness. JD Power, for example, has delivery components in its customer satisfaction surveys. ICDP, an international research organisation focused on automotive distribution, conducts regular dealer and national sales company surveys in major European markets, the most recent being in 2012. 


“We measure late delivery compared with the original order-to-delivery promise date to dealers,” says ICDP senior researcher Ben Waller. 

The survey asks what percentage of cars are delivered late compared with the original promise date, covering orders placed in the system for a new car built at the factory, cars sold in transit from the factory, and cars held at a central distribution centre or compound within the market. The delivery promise date itself varies from OEM to OEM, with most giving a week or a date plus several days.
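As a rough sketch of the arithmetic behind such a survey figure (the records below are invented for illustration, not ICDP data), a late-to-promise percentage could be computed like this:

```python
from datetime import date

# Hypothetical delivery records: (original promise date, actual delivery date).
deliveries = [
    (date(2014, 3, 3), date(2014, 3, 1)),    # delivered early
    (date(2014, 3, 10), date(2014, 3, 10)),  # on time
    (date(2014, 3, 12), date(2014, 3, 20)),  # late
    (date(2014, 3, 15), date(2014, 3, 18)),  # late
]

# A car counts as late only if it arrives after the original promise date.
late = sum(1 for promised, actual in deliveries if actual > promised)
late_pct = 100.0 * late / len(deliveries)

print(f"Delivered late vs original promise: {late_pct:.0f}%")  # 50%
```

In practice the comparison is less clean than this, since, as noted above, many OEMs promise a week or a date plus several days rather than a single day.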

However, Waller adds that on-time delivery to customer promise is not simply a finished vehicle logistics or final delivery issue. “Manufacturing pipeline stability is also a major component of the outcome here,” he notes. “Delivery-to-promise-date has actually improved [in Europe] over the period 2005-2012, due to the decline in volumes and more deliberate caution in date setting. However, note that in the intermediate 2008 survey the late to promise delivery actually increased during the [government incentive] scrappage period where demand exceeded supply.”

Manufacturing order lead times grew overall from 2005 to 2012, and particularly since the financial crisis, largely because of the loss of supplier flexibility, says Waller. At the same time, delivery lead times also rose, largely because of capacity constraints and cost drivers on frequency and volume. Waller says he is now working with manufacturers on order pipeline stability and the impact of model level product variety on lead times. ICDP will report to members at the end of 2014.

Ruud Vossebeld, director of business development at Germany-based software provider Inform, says establishing the right KPIs is essential to understanding if an operation is world-class. Companies use dozens of such KPIs, from relatively standard ones like damage, on-time delivery, lead time and dwell time to more specific points such as tracking inaccurate deliveries to dealers, labour hours for vehicle processing, or inventory held between factory and dealer. 

However, assessing the measured KPIs is not always simple. Few would disagree that the fewer damages or incorrect deliveries, the better. However, along with geographic and seasonal variations, internal conditions and processes can matter as much as external ones like the weather. Some manufacturers have much more highly developed and monitored handling standards for moving and storing vehicles, for example, making comparisons between brands that much more difficult. 

Brands also tend to have their own criteria for how to measure damage rates. Some manufacturers will pay for small damages: €250 ($330) of repairs might be counted as a different level of damage by one manufacturer versus another (each American state has different regulations on this, including what needs to be revealed to consumers).

Delivery lead time is one of the most critical and strategic measurements for outbound logistics. But as with other KPIs, it should be considered carefully. Averages across benchmarks must be weighted by factors like market share, to make sure that higher or lower volume brands or markets don’t skew results. Furthermore, similar performance can have very different impacts depending on strategy. For example, the implications of longer order-to-delivery lead times are different for a manufacturer with a high sold-order and customisation content, compared with a carmaker that sells almost all cars from stock.
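A minimal sketch, using hypothetical brand volumes and lead times, of why an unweighted average can skew such a benchmark:

```python
# Hypothetical figures for two brands in one market (not real data).
# Brand A: high volume, short lead time; Brand B: low volume, long lead time.
volumes = {"Brand A": 90_000, "Brand B": 10_000}      # units delivered
lead_times = {"Brand A": 14.0, "Brand B": 42.0}       # days, order to delivery

# A simple mean treats both brands equally, letting the small brand skew it...
simple_mean = sum(lead_times.values()) / len(lead_times)

# ...whereas weighting by volume reflects the typical delivered car.
total_volume = sum(volumes.values())
weighted_mean = sum(lead_times[b] * volumes[b] for b in volumes) / total_volume

print(f"Unweighted mean lead time:    {simple_mean:.1f} days")   # 28.0
print(f"Volume-weighted mean:         {weighted_mean:.1f} days") # 16.8
```

The same weighting logic applies when averaging across markets by market share rather than across brands by volume.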

The time taken to get a car through the compound to a dealer’s forecourt also has natural variations. Different model types require different tests, accessory installations or special handling, and at times of slack demand, slower performance may be financially advantageous and therefore deliberate. 

Such a measurement is usually broken up into its component pieces, such as the turnaround time at terminals or rail loading. However, it is debatable from which point you should start measuring time. If you gauge it from when a transporter enters the gate until it leaves again, you won’t capture how long it queued to get into the terminal in the first place. Truckers might be able to provide that data, but there may be questions about its accuracy. 

Similarly, when moving a car by rail, a company needs to determine whether it should measure time from when the wagon has been positioned, or from when its doors are open and the car is ready for release.

Even measuring the number of cars stored per square kilometre in a compound can be thorny. If land prices are high, it is probably worth parking them as tightly as possible, even if it means taking longer to extract vehicles for processing or dispatch. But that would make little sense in the wide-open spaces of the US midwest and could be a problem in markets where supply is tight and shifting sold orders is crucial. 

Despite the legal issues, competitive worries and general pitfalls that may come in trying to glean objective conclusions from a subjective data set, the industry should take benchmarking seriously, says Vossebeld. Defining the top five KPIs would be a good start. There is also good software available, he says, as well as examples from other industries that may act as a template.

A few companies and associations are pushing forward with cross-industry benchmarking projects. US-based systems provider ICL, for example, has produced a report based specifically on loaded transit performance for seven carmakers in the US. 

There has also been a push for some harmonisation in areas where standards have varied widely, which should make benchmarking more practical. The Association of European Vehicle Logistics (ECG) has published handling standards that have been approved by many manufacturers and brought to North America by the AIAG. In practice, however, a single standard, even within a single region, is a long way off.

Barriers to benchmarking

There are further questions about how to appropriately collect and distribute data. Manufacturers sharing their information need to feel assured that it has been sufficiently ‘anonymised’. Larger transport carriers that serve a multitude of customers across a given region may be in a good position to set up a benchmarking process of transport and logistics flows, but they generally would still need a trusted intermediary to handle the data.

Mark Morgan, director of business development at inspection company Unicar, a division of the testing and certification group Bureau Veritas, makes the point that anti-trust laws in the US, and similar legislation in other regions, restrict collaboration between competitors, limiting the potential for most carmakers and logistics operators to benchmark their operations against each other. 

It is a common reason cited by many companies as to why they don’t collaborate further across the supply chain. However, Bill Kerrigan believes that the anti-trust issue has been overstated. A lack of willingness may be the real reason. “It’s a problem no one wants to go after. Maybe they’re afraid of the result; no one likes to be told that they’re not performing,” he says.

Matt Holmes suggests that another factor is the amount of work needed before a set of viable benchmarks can be created. People don’t feel inclined to start a task if the benefits will not be realised until a long time in the future, he notes. 

A lack of standards is a further reason why cross-sector comparisons have been difficult. Marty Colbeck, director of sales, east coast, at Auto Warehousing Company (AWC) in the US, says that as a port processor, he is naturally interested in how his company is measured by its customers. “The difficulty, though, is that there is, as yet, no standardised benchmarking in the industry,” he says. 

In the absence of industry-wide standards, AWC benchmarks internally, comparing factors such as the time taken or the man-hours required to complete a task with its own standard figure. 

While wider sharing across industry might be lacking, manufacturers and importers often compare the operational performance of their logistics service providers through KPIs and regularly use these to manage their contracts. These KPIs will cover lead time and damage levels, but could also include the results of audits on the quality of operational sites or transporters, rail wagons and ships.

Unicar, for example, provides data to summarise the condition of vehicles as they pass through the supply chain. “Inspecting over 8m vehicles per annum all around the world, we have one of the most comprehensive database platforms of vehicle damage,” Mark Morgan explains.

At Mitsubishi Motor Corporation Russia (MMC Rus), logistics director Hikiami Yasuaki says his company benchmarks on-time pick-up and arrival, along with damage rates. While MMC Rus also benchmarks other factors including cost, truckload factors and reaction time, it puts more focus on business expandability. “In other words, we appreciate a logistics partner which can provide proactive solutions and add more values to our business process,” he concludes.

Clubs of one

At the moment, the closest thing to benchmarking in the industry is an OEM’s internal measurements, which are generally not compared with anything outside the company. Sevatas’s Matt Holmes points to the work the company has done with Jaguar Land Rover’s logistics provider group, which holds regular forums. Sevatas presents a regular damage section, which is effectively a benchmarking exercise. It may only be a couple of times a year, and it may be for just one OEM, but it is genuine benchmarking, Holmes suggests.

Group companies that have separate commercial operations, but consolidated or partly consolidated logistics management – like Hyundai and Kia, Renault and Nissan, and the Volkswagen Group – may also have more obvious ways of carrying out benchmarking.

Other providers are also using data across various customers and operations to benchmark performance. Vascor – a logistics provider, processor and data manager for a number of manufacturers, including Toyota – monitors, measures and tracks performance against objectives in areas such as damage rates, yard operations, schedule deviation, productivity and logistics costs, according to a spokesperson. KPIs are compared over time to targets, forecasts and, when available, against industry best practices. 

Kerrigan argues that it will take a co-operative effort among several OEMs to develop any serious benchmarking on a wider scale. “But it isn’t necessary to do it globally at first – it could be done regionally,” he says. 

Unicar’s Morgan says technological developments mean some of the more forward-looking OEMs will be able to extend their global operational logistics mapping. “This means that key data could be collected into a common platform, accessible by everyone, so that benchmarking could be done and become one of the foundations for a new era of dynamic forward logistics planning,” he says. 

German Statsenko, outbound logistics controller at the Russian arm of Renault Nissan’s Alliance Logistics Europe, says that interest in benchmarking is growing in the Russian finished vehicle logistics industry. He suggests that comparing damage or on-time delivery could have many benefits in the Russian market, from the point of view of both the end customer and the dealer. Transit time could be monitored, including the time taken between order receipt and the car leaving the compound, and that between the compound and the dealer – although for the latter there are many factors outside the logistics organisation’s own control.

However, it is hard to get good reliable information from other OEMs in Russia and benchmarking in the industry may need the impetus of a group of OEMs or business associations to get it off the ground. There is a dealer association that compiles statistics on when deliveries are made, which are available to forwarders, although checking this information is a manual, labour-intensive process. Compiling the information through RFID or GPS systems might be possible, but it would not be cheap, believes Statsenko.

In theory, the data could be made available through an IT system but connectivity at most OEMs is still in its infancy. 

A window to the outbound world 

Certainly, the infrastructure for benchmarking already exists and companies are beginning to venture into the area, albeit in specific regions and segments of the market. For example, ICL, a supply chain management solution provider to the finished vehicle logistics industry, has “a unique and unmatched window” on the outbound transport process in North America, according to Tom Swennes, vice-president of strategic planning and administration. 

ICL recently completed a benchmarking report that focused specifically on loaded transit performance for the year to March 2014 (see box below). The results cover vehicles delivered in the US for Mazda, Mercedes-Benz, Mitsubishi, Nissan, Porsche, Toyota and Volkswagen.

“We view benchmarking as a valuable tool for our customers and we look on this as a long-term initiative that will grow and evolve over time,” Swennes says. 

The annual report is available free to existing customers and other OEMs can participate for a fee. Swennes says a lot of effort went into ‘blinding’ the data to protect company identifiers. Any vendor analysis was based on data from at least three OEMs. 

The nine measures in the survey included factors such as ‘release to delivery or to outgate’, ‘rail shipment to delivery’ and ‘ramp unload to delivery’. As well as measuring the total time for each segment, ICL calculated the variance between actual hours and the service level agreements (SLAs) between OEMs and logistics providers. This showed, among other things, how actual performance could beat the SLA during good weather but slump below it during a severe winter. The point is to measure not just the actual hours taken, but also how far performance diverges from the SLA. 
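As a simplified illustration of this variance-to-SLA measure (the transit hours and SLA figure below are invented, not ICL’s data), one might compute both the average transit time and the share of shipments exceeding the SLA:

```python
# Hypothetical actual transit hours for six rail shipments on one segment,
# measured against an assumed 186.5-hour service level agreement.
sla_hours = 186.5
shipments = [150.2, 170.8, 192.4, 160.0, 210.7, 181.3]

# Average transit time alone can look healthy...
avg_hours = sum(shipments) / len(shipments)

# ...while the SLA variance reveals how consistently the target is met.
exceeding = [h for h in shipments if h > sla_hours]
exceed_rate = 100.0 * len(exceeding) / len(shipments)

print(f"Average transit:    {avg_hours:.1f} h")
print(f"Shipments over SLA: {exceed_rate:.1f}%")
```

Tracked month by month, the second figure is what surfaces a seasonal effect such as the winter slump described above, even when the annual average still looks acceptable.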

Customer feedback has been positive, says ICL, and there have already been suggestions to expand the service to include damage, ocean shipping and transport costs. “We believe there is tremendous benefit to be derived through benchmarking,” Swennes says. “We are encouraged to see this effort being embraced by the industry.”

What ICL learned from benchmarking

Tom Swennes, vice-president of strategic planning and administration, discusses the results of ICL’s benchmarking initiative.

“While being able to measure the transit time of each segment and the overall trip length provides some insight on the relative velocity of each OEM’s supply chain, it lacks sufficient context to be truly meaningful or actionable. Variations in the allocation process, shipment prioritisation rules, day-of-week loading and other factors affecting the transportation process vary from one OEM to the next. By benchmarking transit performance solely on the basis of transit time, you miss these nuances in the measurement process.

“Therefore, to better account for the operational differences among the OEMs and to provide additional context to the analysis, we looked at each OEM’s actual performance versus what they negotiated with their vendors, as expressed in their service level agreement (SLA) standards. By measuring the variance to SLA, we can clearly see how the actual performance for a route or vendor compares to what the OEM believes should be the optimum performance. A lower percentage of variance indicates a supply chain with a higher level of overall consistency.

“To illustrate this, we looked at railcar shipment to railcar unload performance for all OEMs from April 1st 2013 to March 31st 2014 (see chart below). In August 2013, the average time from shipment to unload was 173.8 hours, slightly below the average SLA for all OEMs of 186.5 hours. During August, only 33% of the rail shipments measured exceeded the average SLA. Compare that with February 2014, when nearly 54% of all rail shipments measured exceeded the average SLA of 195.76 hours, and the average time from shipment to unload soared to 225.3 hours.

“Despite a slight seasonal increase in the overall SLA average in Q4 2013 and Q1 2014, not surprisingly the difficult operating conditions this past winter resulted in over half of the railcars we measured missing their SLA performance target.

“So how did the OEMs compare to one another for this particular measure over the same period? In our survey, we found that manufacturers ‘D’ and ‘G’ had the lowest SLA variance rate during this time. Although manufacturer ‘C’ had the second-lowest average hours from shipment to unload, its variance rate of 80.4% actually tied it as the worst performer in this area. Had we only looked at the actual hours, and not included the variance to SLA, we would have missed this important distinction.”