The current design spec of most truck-grade GPS, combined with off-the-shelf point-in-polygon geofencing, does not support accurate calculation of turn time. It produces approximate answers that may serve only limited purposes. Comparative analyses require accurate measurements, which demand higher-grade in-vehicle equipment and sophisticated algorithms. It pays to know the type and magnitude of errors involved.
The idea is straightforward and appealing. We have a queue, we have a terminal. When the GPS indicates that the truck's in the queue geofence, that's counted as queue time. When it's in the terminal geofence, that's in-terminal time (Figure 1). What's so difficult?
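That off-the-shelf approach can be sketched in a few lines. This is a standard ray-casting point-in-polygon test against a hypothetical rectangular terminal fence; the coordinates (local meters) are invented for illustration and have nothing to do with any real terminal.

```python
def point_in_polygon(x, y, polygon):
    """Ray-casting point-in-polygon test.

    polygon: list of (x, y) vertices in order around the boundary.
    Returns True if (x, y) falls inside.
    """
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does this edge straddle the horizontal ray cast from (x, y)?
        if (y1 > y) != (y2 > y):
            # x-coordinate where the edge crosses the ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# Hypothetical rectangular terminal geofence, local meters
terminal = [(0, 0), (100, 0), (100, 50), (0, 50)]
print(point_in_polygon(20, 25, terminal))   # True  (inside)
print(point_in_polygon(120, 25, terminal))  # False (outside)
```

The test itself is trivial; the trouble, as the next points explain, is what gets fed into it.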
Three things. 1. Being clear about the purpose of measuring turn time. If it's to settle arguments between truckers and terminals, accuracy is perhaps not critical. But if it's to follow trends over time, to compare performance, to diagnose causes of port congestion and to guide decision making to improve efficiency — which is the real value proposition in turn time research — then nuances matter; we do have to sweat the details.
2. Every GPS point has a little error: ±3 meters ideally, sometimes ±10 meters, the width of 1–3 traffic lanes. GPS breadcrumbs dropped along a street look like they were positioned with a spray gun. “Exhibit A” in the banner graphic illustrates this point with a month of actual data. Compounding the GPS error, there are operator-dependent inaccuracies in digitizing the terminal and queue boundaries. Queue boundaries in particular are often poorly defined: just a stripe of paint on asphalt separates the in-bound traffic from the out-bound and through traffic.
3. Truck GPS trackers ping position infrequently: 15 minutes was the traditional standard; better services now ping every 2–10 minutes. There's a common misunderstanding that with, say, a 5-minute ping, the error can be no worse than 2½ minutes, and the over-measurements and under-measurements probably cancel out. Not so. Consider Figure 2(a). With a 5-minute ping, a 15-minute terminal visit produces perhaps one point in the queue, and two points inside the terminal. With sampling that sparse, an in/out error in any one of these points destroys the logic of the visit. Worse, the error cannot be detected manually or automatically — trucks often do re-enter terminals, so there's no basis to reject the result. On the other hand, as in Figure 2(b), with sufficiently dense sampling, the path can be traced visually, by the logic of continuity. Erroneous points are easily identified and adjusted by the eye. Intelligent computer algorithms can be written to mimic that visual process.
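The point can be checked with a toy Monte Carlo simulation. Everything here is invented for illustration: a 17-minute dwell (deliberately not a multiple of the ping interval), a naive first-ping-to-last-ping estimator, and a flip probability standing in for GPS points landing on the wrong side of the fence. Even with perfect points, the estimate depends on where the ping cycle happens to fall; once a modest fraction of points is misclassified, gross errors of ten minutes or more appear.

```python
import random

PING = 5.0     # minutes between pings
VISIT = 17.0   # true dwell time inside the geofence (illustrative)

def measured_dwell(offset, flip_prob=0.0):
    """Dwell estimated from pings at offset, offset+PING, ... within [0, VISIT).
    Each in-fence ping is flipped to 'outside' with probability flip_prob,
    a crude stand-in for a GPS error landing the point across the boundary."""
    in_times = []
    t = offset
    while t < VISIT:
        if random.random() >= flip_prob:
            in_times.append(t)
        t += PING
    if not in_times:
        return 0.0                            # visit missed entirely
    return in_times[-1] - in_times[0] + PING  # naive first-to-last estimate

random.seed(1)
phase_err = [measured_dwell(random.uniform(0, PING)) - VISIT
             for _ in range(10000)]
flip_err = [measured_dwell(random.uniform(0, PING), flip_prob=0.15) - VISIT
            for _ in range(10000)]
print(min(phase_err), max(phase_err))  # bounded by roughly +/-PING
print(min(flip_err), max(flip_err))    # gross under-measurements appear
```

The first distribution is the benign quantization error people expect; the second shows how a few boundary misclassifications turn it into hour-scale mistakes on real, longer visits.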
Some terminals are particularly problematic, with queue zones shared between two terminals, and vehicles queuing on an overpass above the terminal operating area. An automated analytical method has to handle all these cases.
Clearly the consequences of sparse data and simplistic methodology are not just trivial 2½-minute errors, but also gross errors (hours). There are errors of commission and omission: one stray point from a truck driving by registers a phantom visit; a single visit gets split into two short ones, or assigned partly to a neighboring terminal; visits that did take place go unrecorded, or are shortened.
Errors do not affect all visits, perhaps just 10–25% of them, but mistakes on this scale distort statistics. Patterns that may reveal causes of congestion get muddied. Because errors depend on terminal layout, some terminals are systematically favored over others, invalidating comparison.
NCFRP Report 11 (Project 14) by the Tioga Group—a highly recommended resource on drayage, incidentally—describes an attempt to automate turn time calculation, and draws the same conclusions. It states (page 22): “The biggest issue with the data was false positives and false negatives. These occur repeatedly because the terminals, regularly used roadways, and the motor carrier's domicile are in very close proximity.” The investigators appropriately chose to analyze the data manually, using simple rules to reduce, but not eliminate, the errors.
Not all consultants are that perceptive. In one study, investigators applied geofences to coarse data, then filtered out the shortest 5% of visits on the grounds that they were outliers, and discarded the longest 5% on the same grounds. Ironically, lopping off both the low and high tails does not affect key statistics such as the median, so why filter? Arbitrary manipulation of results exacerbates bias, invalidates findings, and defeats the claim of technology-based objectivity that GPS is supposed to bring to the turn time problem.
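The median claim is easy to verify with synthetic data. The lognormal turn-time distribution below is purely illustrative; the point is that trimming an equal count from each tail of a sorted sample leaves the middle elements, and hence the median, untouched.

```python
import random
import statistics

random.seed(0)
# Synthetic turn times in minutes; the lognormal shape is illustrative only.
visits = sorted(random.lognormvariate(4.0, 0.5) for _ in range(1000))

k = len(visits) // 20      # 5% of each tail
trimmed = visits[k:-k]     # drop shortest 5% and longest 5%

# Symmetric trimming removes the same count from each end of the sorted
# sample, so the middle elements, and the median, are unchanged.
print(statistics.median(visits) == statistics.median(trimmed))  # True
```

Means and upper percentiles, of course, do shift under this filtering, which is precisely the bias the paragraph above objects to.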
Port delays are costly. The decisions to address them have high stakes, and must be based on accurate information. The proper way to manage error is to understand why it arises, and to trap and eliminate it in the computational process. Every measurement that passes up to statistical analysis, short or long, must be accurate. Such a solution exists, and it doesn't cost more.
METRIS specifies an ideal ping interval of 5–15 seconds, and our proprietary devices achieve it at the same monthly cost as 15-minute devices. A rich stream of data, 10–60 times as dense as any other, describes a truck's movements in detail. When a truck turns off I-105 to I-710, we capture several points just on the ramp. When it joins a queue, then leaves and U-turns into a new approach path, the data reflect it. When it backs up to hook a wheeled box, we usually see it — e.g. the black arrow at the bottom left of the graphic.
There's still the ±10 meter error. But backing up the high ping frequency is a suite of advanced analytical methods that overcome the uncertainties. Rather than simplistic geofences, METRIS uses proprietary data models to represent terminals, their operating areas, queue and exit zones, including surrounding city streets. Robust custom algorithms track the progress of a vehicle as it snakes through the queue, entry gate, terminal and exit. Multiple integrity tests verify the logic of the path. Trucks queued on an elevated ramp are distinguished from those in the terminal. Odd maneuvers such as re-entries and slip-entries are trapped. Errors are detected, and inconsistent readings are automatically corrected or discarded.
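METRIS's algorithms are proprietary, but the flavor of continuity logic can be illustrated with a toy majority-vote smoother over a sequence of in/out geofence flags. The function and the sample sequence are invented for illustration; real path tracking is far more involved.

```python
def smooth_flags(flags, window=3):
    """Majority-vote smoothing of a 0/1 in-fence sequence: a lone
    'outside' ping surrounded by 'inside' pings is far more likely a
    GPS blip than a real exit, so the neighbors outvote it."""
    half = window // 2
    out = []
    for i in range(len(flags)):
        lo = max(0, i - half)
        hi = min(len(flags), i + half + 1)
        votes = flags[lo:hi]
        out.append(1 if sum(votes) * 2 > len(votes) else 0)
    return out

# One spurious 'outside' reading in the middle of a visit...
raw = [0, 0, 1, 1, 0, 1, 1, 1, 0, 0]
# ...is corrected by its neighbors, so the visit is no longer split in two.
print(smooth_flags(raw))  # [0, 0, 1, 1, 1, 1, 1, 1, 0, 0]
```

A smoother like this only works when pings are dense enough that neighbors carry real evidence, which is exactly the argument for the high-frequency data stream above.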
We've been in the business of innovation in GIS for the past 30 years, designing mission-critical systems, testing national ITS messaging standards and writing papers on GIS accuracy and error. We know the pitfalls, and what it takes to get this right.
Our LiveQ service does all of this in real time.
There are still some technical and semantic challenges. Carriers have different business models: for example, daytime versus nighttime operations. Their experiences are different. When trucks mill around a hot dog stand in the queue zone, are they in the queue? How do you trap and discard the calculation for a trucker who parks in the rear of the queue zone, killing time until his appointment? This is the class of challenge that we consider vexing and unresolved.