Trucks may dominate the movement of U.S. freight, but they do so over a network of highways and roads used by all kinds of vehicles. Maintenance and capital repairs (CAPEX) are performed by states and local governments – sometimes toll road authorities. These are two separately managed roles.
That’s not the way infrastructure is maintained in the railway world.
For a look at the railroad maintenance process, let’s start with a review of the research papers presented at the University of Delaware’s conference on the practical uses of big data analytics. The sixth annual session concluded recently.
The university’s forum is unique for two reasons.
One, it is free. A variety of maintenance vendors sponsor the conference development and sessions. Last year, more than 250 registered for the 1.5-day series of structured and well-documented presentations.
Leading the conference were multiple university faculty, including Dr. Allan M. Zarembski, formerly with Zeta Tech Associates and Harsco Rail.
Why is it so popular? Because the entire North American railway industry is increasingly collecting huge volumes of data from its operating systems, its track maintenance equipment and its track inspectors.
The second reason the forum is unique is that it specializes in how such data is used by mechanical, track and even bridge engineers to provide insights about the longevity of railroad hard assets. The presentations emphasize how to turn that intelligence into preventative maintenance aids (applications).
The applications include predictive failure and replacement alerts for railway assets, including moving assets such as locomotives and rolling stock.
Who attends? Lots of folks with titles like chief data scientist and manager of advanced analytics. They represent the new breed that supports chief engineers, information technology chiefs, chief operating officers, and some risk managers. Past attendees included senior executives and even CEOs.
It is not exclusive to just freight railroads. Both Amtrak and commuter agencies are repeat participants.
The range of engineering uses documented during the December 2018 conference is fascinating to read. It includes statistical methods like predictive analytics, Bayesian Inference, machine learning, image recognition, language recognition, text analytics and a latent semantic analysis (LSA) method.
In regard to railroad tracks, presenters identify different approaches to address all aspects of track maintenance and safety, ranging from rail wear, broken-rail safety, tie design and field inspection techniques to the prediction of track geometry degradation. All of these topics are, of course, associated with the risk of derailments.
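To make the idea of predicting track geometry degradation concrete, here is a minimal sketch. It fits a simple least-squares trend to hypothetical inspection-car measurements and estimates when a roughness index would cross a maintenance threshold. The data, units and threshold are all invented for illustration; the railroads' actual models (Bayesian inference, machine learning) are far more sophisticated.

```python
# Hypothetical sketch: predict when a track geometry roughness index
# will cross a maintenance threshold, using a simple least-squares
# degradation trend. All numbers below are invented for illustration.

def fit_linear_trend(days, roughness):
    """Ordinary least-squares fit: roughness ~ a + b * days."""
    n = len(days)
    mean_x = sum(days) / n
    mean_y = sum(roughness) / n
    b = sum((x - mean_x) * (y - mean_y) for x, y in zip(days, roughness)) / \
        sum((x - mean_x) ** 2 for x in days)
    a = mean_y - b * mean_x
    return a, b

def days_until_threshold(a, b, threshold):
    """Days (from day 0) until the fitted trend reaches the threshold."""
    if b <= 0:
        return None  # no measurable degradation trend
    return (threshold - a) / b

# Hypothetical inspection-car data: days since last tamping vs. roughness
days = [0, 30, 60, 90, 120]
roughness = [1.0, 1.2, 1.45, 1.6, 1.85]

a, b = fit_linear_trend(days, roughness)
eta = days_until_threshold(a, b, threshold=2.5)
print(f"Predicted day the roughness index hits 2.5: day {eta:.0f}")
```

A maintenance planner could run a fit like this per track segment and schedule tamping only where the predicted crossing date falls inside the planning horizon.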
The output of all this research and development is incorporated into the railroads’ capital planning process.
For those FreightWaves’ readers interested in a more complete technical description of the formulas plus engineering exhibits, please use the sources found at this URL: (https://www.railwayage.com/analytics/big-data-drives-big-results/).
Figure 1 is a simple graphic to help understand the complexity of managing both the operations of trains and freight cars while simultaneously managing their infrastructure (the “way”) maintenance and capital program.
Figure 1. Railroad Maintenance Data Architecture
For one of the best descriptions of the value of using big data for railroad purposes, let’s turn to a recent report by Kevin Smith. He covers railway topics for the International Railway Journal out of London. He developed this business definition for a global railway audience.
“Data in isolation has no meaning, and the real value lies in how it is used. Infrastructure managers and train operators harvest increasing volumes of data about their assets. This is the starting point for potentially transformative insights that could have profound implications for the way railways operate.”
The problem is learning how to extract “meaningful insight from the terabytes of data streaming in from billions of data points on trains, track and signaling systems.” That is indeed a challenging task.
It is a universal problem, not just a problem for the U.S. rail industry.
According to Matthew Miller, global transportation industry principal for OSIsoft, market hype around the potential for big data is being driven by four mega-trends:
- Pervasive, cheap and small sensors.
- Declining computing and data storage costs.
- New abilities to process and analyze data.
- Ubiquitous connectivity.
Here are two other take-aways from Europe’s big data rail industry meetings.
Condition-based maintenance (CBM) is a key business driver for railways, according to The Rail Sector’s Changing Maintenance Game, a 2017 report by McKinsey.
CBM can reduce railroad rolling stock manual diagnostics costs by at least 60%. Furthermore, it could lead to an overall reduction of at least 10% to 15% in subsequent hard maintenance costs of the rolling stock. Those are big dollar numbers in a railroad’s corporate budget.
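To show what those percentages could mean in dollars, here is a minimal sketch. Only the percentages (60% of manual diagnostics costs, 10% to 15% of hard maintenance costs) come from the report; the budget figures are entirely hypothetical.

```python
# Hypothetical illustration of the McKinsey CBM savings figures.
# The budget inputs are invented; only the percentage reductions
# (60% of diagnostics, 10-15% of hard maintenance) come from the report.

def cbm_savings(diagnostics_budget, hard_maintenance_budget):
    """Return (low, high) estimated annual savings from CBM."""
    diag_savings = 0.60 * diagnostics_budget
    low = diag_savings + 0.10 * hard_maintenance_budget
    high = diag_savings + 0.15 * hard_maintenance_budget
    return low, high

# Hypothetical rolling stock budgets, in millions of dollars
low, high = cbm_savings(diagnostics_budget=50, hard_maintenance_budget=400)
print(f"Estimated savings: ${low:.0f}M to ${high:.0f}M per year")
```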
Another big European lesson is that “We are burning millions of [dollars] replacing assets that are not life-expired.” That is according to Perpetuum Global Sales Director Robert Mulder. “If the condition of every single bogie was known, we would be able to extend overhaul intervals by 25% to 75%.” That’s another huge financial benchmark.
Those are big financial consequences if not properly “grabbed.” Big data scientists can lead the way.
Who should be most interested in avoiding unnecessary capital replacement or maintenance expenses?
The chief financial officer is who. That person, together with the chief risk management expert, needs to be engaged in examining what these railway data scientists discover.
Why? Because they are critical allies in getting railway budgets approved or rejected when resources are tight and opportunities for avoiding unnecessary expenses are few and far between.
This insight from across the Atlantic Ocean brings us back to this year’s University of Delaware Rail Big Data conference.
Bridges – “Don’t replace them too soon”
One of the University of Delaware presentations this year covered bridges. Railway bridges, if judged defective, are big-dollar items when the only apparent solution is complete replacement.
There are more than 61,000 railroad bridges in the U.S. freight network – more if we include culverts and those used by the passenger lines and commuter services.
In a 2019 presentation by John Schmid, P.E. with Parsons Transportation and Peter Vanderzee, president and CEO of LifeSpan Technologies, the authors demonstrate how modern sensor-integrated technology can greatly improve the assessment of bridge span defects.
Bridge conditions are usually rated in one of nine condition bands following standard federally approved procedures. The problem is that, for more than three decades, it has been known that the combination of visual inspections, standardized measurement techniques and even the best engineering spreadsheets can result in an error of up to two condition bands.
That is far less metric precision than engineers obtain when they measure track geometry or track movement under train-induced stress and strain.
The bridge solution is to use sensors that detect expansion and contraction movement at minute levels along critical structural members of the bridges being inspected. That used to be very expensive as the older sensors were often destroyed during the process.
Enter new technology.
Sensors that could withstand repeated movements in both directions were invented. Then the sensors were given remote communication functions. Not only could they communicate, but after a bridge was tested, the sensors most often were reusable on other bridges. The sensor data could be captured remotely. The sensors could be set and left in place, communicating only when critical forces were detected, recorded and even transmitted to the cloud.
Those advancements were a major improvement.
Demonstrated and tested time and again, but not by the railroads. Nope. For about two decades, the technology was used mostly by toll roads and public highway agencies. The railroads seemed to resist.
But several railroads have recently turned to the sensor technology when a really big CAPEX bridge expense was being signaled by the older inspection techniques.
Vanderzee and Schmid this week document the case of a $75 million Canadian Pacific 110-year-old swing bridge replacement near La Crosse, Wisconsin. Why? Because visual inspections and the best spreadsheets suggested both substructure concerns and extensive section loss involving critical bridge members.
The actual risk level was pinpointed by using 20 strain sensors, seven accelerometers, four inclinometers and three temperature sensors. They isolated the actual bridge parts that were in effect “moving.”
How much data had to be sifted through? The 20 strain sensors provided more than 860,000 data points; the temperature sensors, some 64,000; the inclinometers, over 86,000; and the accelerometers, over 8 million.
All of this data was collected both under live loads and during continuous no-traffic periods.
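The screening logic behind such an instrumentation program can be sketched simply: compare each member's measured response against its expected design behavior and flag only the members that exceed it. The member names, strain readings and expected ranges below are all invented for illustration; LifeSpan's actual analysis methods are not public in this article.

```python
# Hypothetical sketch of sensor-based member screening: flag structural
# members whose measured peak-to-peak strain exceeds the expected design
# range. All member names, readings and thresholds here are invented.

def screen_members(readings, expected_range):
    """Return members whose peak-to-peak strain exceeds the expected range.

    readings: dict member -> list of strain samples (microstrain)
    expected_range: dict member -> expected peak-to-peak strain under live load
    """
    flagged = {}
    for member, samples in readings.items():
        observed = max(samples) - min(samples)
        if observed > expected_range[member]:
            flagged[member] = observed
    return flagged

# Invented live-load strain samples per bridge member, in microstrain
readings = {
    "truss_chord_1": [-12, 40, 55, -8],
    "floor_beam_3": [-5, 160, 210, -2],
    "cross_brace_7": [0, 30, 25, 5],
}
expected = {"truss_chord_1": 100, "floor_beam_3": 150, "cross_brace_7": 60}

print(screen_members(readings, expected))
```

Members behaving "as designed" drop out of the result, which is exactly the kind of finding that let the CP engineers separate expected deflection from genuine distress.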
Figure 3. The CP Bridge in Wisconsin
What was the outcome?
The 110-year-old bridge needed selective repairs, not a replacement. Using equations fed with superior data, the bridge engineers concluded that:
- Previous visual and measured movements were mostly “expected” deflection rates.
- The swing span trusses were behaving as designed.
- Future inspections should focus on cross-bracing and the floor system.
The financial conclusion? The planned $75 million project can be safely deferred as the railroad continues to monitor the bridge piers for movement at key bridge member locations.
This week’s railroad market view message for railroaders
There is a great deal of technology and data science that can help extend both the track and bridge structure life. But the railroads are not always out in front in exploiting the opportunities.
Engineers can see the practical uses. But at the big executive table where budget resources are allocated, the identified economic opportunities point to the need for the chief financial officer and his or her risk management staff to get directly involved in examining options that were not visible before data analytics entered the tool box.
What’s your conclusion?