Recent Papers published by Attwater Consultants

These are free resources that you can download and use.  No registration or login is required, so you won't receive a stream of e-mails advertising our services.  Many readers have used the information below to develop the very capabilities we provide as services and to solve similar or related problems.  If you would rather not do it yourself, we can teach you how to solve such problems, or solve them for you.  Just Contact Us.

Many of our consulting contracts are let to solve difficult problems that are simply not tractable with established procedures and methods.  To solve these tough problems, we usually have to develop new mathematical, engineering, and/or statistical approaches.  When our customers allow us to, we like to publish these new techniques to help others find solutions to similar problems.

The papers listed below are recent examples of Attwater Consultants solving real-world problems in Systems Engineering, Reliability and Maintenance, Safety, and Risk Management by developing innovative analytical, statistical, numerical, and/or engineering approaches.  These papers provide excellent examples of the types of problems that Attwater Consultants can solve for your firm, and they also provide the means to solve these very problems if your firm encounters them.  Attwater Consulting specializes in solving those problems where there are very few if any data, without resorting to unnecessary and questionable assumptions, so that your decision making is easy, quick, and comfortable.

1)  Finding the Lowest Cost Preventative Maintenance Interval using very Few Failure Data

The US Coast Guard had a cockpit cooling turbine in their fleet of C130 aircraft that was failing during flights.  When it failed, the cabin lost pressure and cooling, and the mission was always compromised because the crew had to descend to lower altitudes and interrupt their orders to secure the turbine, amid the smoke and racket in the cockpit caused by the failing cooling turbine.  So this was more than a reliability and mission assurance issue for the Coast Guard; it was also a safety issue.  The Coast Guard was allowing this cooling turbine to fail in service, at a cost of $30K to replace.  One day they sent a used cooling turbine in for refurbishment (preventative maintenance), and it cost only $500: they could perform 60 refurbishments for the price of replacing a single cooling turbine!  But their only data were five failures and the one survivor that was refurbished.  Using a commercial off-the-shelf reliability software package, the Coast Guard was unable to gain enough confidence in their reliability estimates to find a preventative maintenance interval they could trust to maximize their cost savings.  They were paralyzed in making a preventative maintenance decision.  Attwater Consultants developed a method based on a Bayesian approach employing Markov Chain Monte Carlo numerical methods.  This method worked extremely well using just the few Coast Guard data for this cooling turbine, and did not resort to questionable assumptions.  We parameterized distributions of cost savings as a function of candidate preventative maintenance intervals, and the Coast Guard was able to quickly and comfortably select a preventative maintenance interval and start saving on maintenance costs for their C130 fleet.  This new method developed by Attwater Consultants can be used to find the lowest cost preventative maintenance interval for any system that has fixed replacement and refurbishment costs.

Citation:  Mark A. Powell, "Optimal Cost Preventative Maintenance Scheduling for High Reliability Aerospace Systems," Proceedings from the 2010 IEEE Aerospace Conference, Big Sky, MT, March 5-12, 2010.

This paper is an expansion and improvement of the following paper from 2002.

Citation:  Mark A. Powell and Edward B. Sheppard, Jr., "Applications of Conditional Inferential Methods for Operational Cost Savings for US Coast Guard C130 Aircraft Maintenance," Proceedings from the 12th Annual International Symposium, International Council on Systems Engineering, Las Vegas, July 29-August 1, 2002.
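
For readers who want a concrete feel for this kind of analysis, the sketch below shows one way the pieces fit together: a Bayesian fit of a Weibull life model to a handful of failures and one censored survivor, followed by the standard age-replacement cost-rate calculation over candidate intervals.  The failure times, the flat priors, and the simple grid-based posterior are illustrative assumptions only; they are not the Coast Guard data or the Markov Chain Monte Carlo implementation from the paper.

    import numpy as np

    # Hypothetical data (hours): five observed failures and one right-censored
    # survivor that was refurbished instead of being run to failure.
    failures  = np.array([1200., 1900., 2400., 3100., 3600.])
    survivors = np.array([2800.])

    c_fail   = 30_000.0   # cost of an in-service failure (replacement)
    c_refurb = 500.0      # cost of a planned refurbishment

    # Coarse grid over Weibull shape (beta) and scale (eta) with flat priors.
    betas = np.linspace(0.5, 6.0, 120)
    etas  = np.linspace(500., 10_000., 240)
    B, E  = np.meshgrid(betas, etas, indexing="ij")

    def log_lik(b, e):
        # Weibull log-likelihood; the survivor enters as right-censored.
        ll  = np.sum(np.log(b / e) + (b - 1) * np.log(failures / e) - (failures / e) ** b)
        ll += np.sum(-(survivors / e) ** b)
        return ll

    logpost = np.vectorize(log_lik)(B, E)
    post = np.exp(logpost - logpost.max())
    post /= post.sum()

    # Draw posterior samples of (beta, eta) from the grid.
    rng = np.random.default_rng(0)
    idx = rng.choice(post.size, size=500, p=post.ravel())
    b_s, e_s = B.ravel()[idx], E.ravel()[idx]

    # Age-replacement policy: refurbish at age tau, or replace if it fails first.
    def cost_rate(tau, b, e):
        R_tau = np.exp(-(tau / e) ** b)                    # chance of surviving to tau
        t = np.linspace(1e-3, tau, 400)
        mean_cycle = np.trapz(np.exp(-(t / e) ** b), t)    # expected operating time per cycle
        return (c_refurb * R_tau + c_fail * (1.0 - R_tau)) / mean_cycle

    taus = np.linspace(500., 5000., 46)
    rates = np.array([[cost_rate(tau, b, e) for tau in taus] for b, e in zip(b_s, e_s)])
    median_rate = np.median(rates, axis=0)
    print("candidate interval with lowest median cost rate: %.0f hours" % taus[median_rate.argmin()])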

2)  Finding the Lowest Cost Plan for Verifying a Reliability Requirement when you do not Expect any Failures in the Test

It is very difficult to test the reliability of high reliability systems: it is too expensive (and usually infeasible) to test long enough, with enough test items, to get any failure data, because very reliable systems are usually very expensive.  Bryan Dodson came up with a really useful graphical technique to find the lowest cost verification plan for testing reliability when no failures are expected, all based on classical statistics.  Classical statistics, developed for use in science, are notoriously conservative.  Attwater Consultants approached this problem using more realistic Bayesian approaches and found that Dodson's lowest cost verification plans cost more than twice what they needed to cost.  Further, Dodson suggested in his paper that if a failure were to occur in the test, one could simply replace the failed item with a new one and recalculate the test time, and the result should still be close to minimum cost.  Attwater Consultants actually examined the classical statistics for the case where a failure occurred, and with a failure, the new test cost even less than Dodson's lowest cost plan without a failure.  Moreover, the probability that Dodson's lowest cost test plan would complete without a failure was only 10%.  The test plan developed by Attwater Consultants using Bayesian approaches had a better than 88% chance of completing without a failure.  And if a failure did occur, the cost did indeed increase, as intuition would suggest, to about the cost of Dodson's lowest cost test plan, and the probability of completing the rest of the test actually increased.  This new approach to reliability testing for highly reliable systems developed by Attwater Consultants can be used for any system to save significant expense in verification.

Citation:  Mark A. Powell, "Optimal and Adaptable Reliability Test Planning Using Conditional Methods," Proceedings from the 14th Annual International Symposium, International Council on Systems Engineering, Toulouse, FR, June 20-24, 2004.

3)  Producing a Quantitative Risk Assessment when you have no Failure Data for Safety Issues

There are many very complex systems for which a failure would produce a serious safety issue for the public (nuclear power plants, commercial aircraft, etc.).  Nuclear power plants, for example, operate for decades without any failures; they are of course designed to do this.  But with decades of operational experience without any failures, how can the risk of a safety incident be calculated?  A similar question was asked at NASA about the risk of an astronaut breaking a bone during micro-gravity operations.  NASA was interested in the increase in risk if they extended an International Space Station mission from half a year to a full year, and in the risk for astronauts on long duration Mars missions.  But in the entire history of human spaceflight, no astronaut has ever broken a bone during micro-gravity operations.  Attwater Consultants, when posed with this question, recognized that there actually is quite a lot of data for this risk problem, but classical procedures and techniques for calculating risk discard this type of data.  Attwater Consultants developed a Bayesian approach that computed risk distributions for an astronaut breaking a bone during various durations of micro-gravity operations, all based on 977 astronaut micro-gravity missions of varying duration in which no astronaut ever broke a bone.  This method opens up a whole new vista in quantitative risk assessment for systems that have operated for long periods of time without a failure or safety incident.

Citation:  Mark A. Powell, "Risk Assessment Sensitivities for Very Low Probability Events with Severe Consequences," Proceedings from the 2010 IEEE Aerospace Conference, Big Sky, MT, March 5-12, 2010.

This paper was an improvement and expansion of the following paper from 2008.

Mark A. Powell, "Risk Assessment for Very Low Probability Events with Severe Consequences," Proceedings from the Asia Pacific Conference on Systems Engineering, Yokohama, September 22-23, 2008.

4)  Using Surrogate Data to Reduce the Cost of Verification of Mission Assurance

Many of today's highly reliable systems are designed and built by a small number of manufacturers, who will use the very same design and manufacturing processes for a new system as they did when developing and manufacturing many previous similar systems.  Rockets are a good example of this: just a few manufacturers design and build all the rockets used in the United States, and they will design and build any new US rockets with the same processes in the same plants.  The very high reliability and mission assurance required for a rocket used in manned spaceflight is very expensive to test.  Attwater Consultants developed a method for NASA to take advantage of the rich heritage of US rocket development and operations data to dramatically reduce the cost of verifying mission assurance and safety for the ARES rockets to be used for the Constellation Program.  By considering both test and actual operational data for rockets similar to the ARES rockets, Attwater Consultants were able to dramatically reduce the number of ARES rockets that would need to be flown to verify the needed mission assurance and safety for the Constellation Program.  This technique can produce significant savings in verification for any highly reliable system that has a heritage of test and operational data.

Citation:  Mark A. Powell, "Improved Verification for Aerospace Systems," Proceedings from the 2009 IEEE Aerospace Conference, Big Sky, MT, March 7-14, 2009.

5)  How to Take Advantage of Covariate Data that is Normally Ignored to Reduce Risk

For many risk assessments the primary data are observed failures and successes.  Quite often there is information associated with these failures and successes that is technically called covariate to the primary data, information such as time or some other parameter.  A risk assessment that extracts and uses the information available in the covariate data will always be better for decision making and risk prediction than one that ignores it.  Unfortunately, the classical statistical methods commonly used for risk assessment cannot take advantage of the information in covariate data.  NASA had a problem with an oxygen sensor on board the International Space Station that was critical to astronaut safety during space-suited work outside the station.  This sensor was drifting away from its calibration, subjecting the astronauts to serious and life-threatening risks.  NASA was faced with a choice: an expensive sensor redesign, halting extra-vehicular activities until the new sensor could be developed and sent to the station, or finding some way to compensate these sensors for the observed drift to reduce the astronaut risk to acceptable levels.  Time in this case was covariate to the measured oxygen sensor drift errors.  Attwater Consultants developed an uncertainty model for these measured drift errors that incorporated the covariate time directly, and with a Bayesian approach using Markov Chain Monte Carlo numerical methods produced quantitative risk assessments for the astronauts both with and without drift compensation.  The risk reductions produced by drift compensation were sufficient, with large enough certainty, that NASA was able to continue astronaut missions outside the space station with acceptable risk.  This covariate modeling process, along with Bayesian approaches, can be used to dramatically improve any risk assessment where covariate data exist.

Citation:  Mark A. Powell, "Method to Employ Covariate Data in Risk Assessments," Proceedings from the 2011 IEEE Aerospace Conference, Big Sky, MT, March 6-12, 2011.

6)  Determining Quantitatively How Reliability Changes as a Function of How Many Times a System has been Repaired or Refurbished, Especially when the Majority of Data is Survivors

Most systems today are maintained at regular intervals.  Even if they fail in service, most systems are repaired and returned to service.  Highly reliable systems rarely fail, so the majority of the data are survivors that were refurbished as part of preventative maintenance.  How reliability changes from one maintenance service or repair to the next can be used to improve a maintenance scheme to save costs, improve reliability, and in some cases improve safety.  This was the case for the US Navy, whose F/A-18 jet engines are repaired or refurbished many times and returned to service.  Over 71% of the data for this Navy jet engine were survivor data, and every datum was tagged with the number of times that the engine had been repaired or refurbished.  Attwater Consultants developed a unique covariate model for the uncertainty in these data that incorporates the covariate number of repairs and reflects the possibility that repeated servicing might improve, degrade, or maintain reliability.  Because of the uniqueness of this model, and the preponderance of survivor data, Attwater Consultants used a Bayesian approach to compute distributions of reliability as a function of the covariate number of times an engine had been repaired or refurbished.  For a given risk level of accomplishing a mission, a schedule for preventative maintenance for the F/A-18 jet engine can now be developed that considers the number of times the engine has previously been repaired or refurbished.  In this case, extensions of preventative maintenance intervals based on these results can dramatically reduce safety risks for the servicemen and women who must perform maintenance on the decks of the aircraft carriers from which the F/A-18 jets operate.  This method developed by Attwater Consultants can be used directly for any system that is regularly maintained and returned to service.

Citation:  Mark A. Powell and Richard C. Millar, "Method for Investigating Repair/Refurbishment Effectiveness," Proceedings from the 2011 IEEE Aerospace Conference, Big Sky, MT, March 6-12, 2011.
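
A stripped-down version of the covariate idea is sketched below: a Weibull time-between-overhaul model whose scale depends on the number of prior repairs, fitted to data that are mostly right-censored survivors.  The records, the exponential link for the repair-count effect, and the maximum-a-posteriori shortcut used here in place of the paper's full Bayesian treatment are all illustrative assumptions.

    import numpy as np
    from scipy.optimize import minimize

    # Hypothetical records: hours since last overhaul, number of prior repairs,
    # and whether the record is a failure (1) or a censored survivor (0).
    hours   = np.array([900., 1500., 2100., 700., 1300., 1900., 600., 1100., 1700., 800.])
    repairs = np.array([0,    0,     0,     1,    1,     1,     2,    2,     2,     3  ])
    failed  = np.array([1,    0,     0,     1,    0,     0,     1,    0,     0,     0  ])

    def neg_log_post(params):
        log_eta0, log_beta, gamma = params
        eta0, beta = np.exp(log_eta0), np.exp(log_beta)
        eta = eta0 * np.exp(gamma * repairs)          # scale shifts with repair count
        z = (hours / eta) ** beta
        # Failures use the Weibull density; survivors use the survival function.
        ll = np.sum(failed * (np.log(beta / eta) + (beta - 1) * np.log(hours / eta)) - z)
        lp = -0.5 * (gamma / 0.5) ** 2                # weak prior keeping gamma reasonable
        return -(ll + lp)

    fit = minimize(neg_log_post, x0=[np.log(2000.), np.log(1.5), 0.0], method="Nelder-Mead")
    eta0, beta, gamma = np.exp(fit.x[0]), np.exp(fit.x[1]), fit.x[2]

    # Reliability over a 500-hour mission as a function of prior repair count.
    for k in range(4):
        R = np.exp(-(500. / (eta0 * np.exp(gamma * k))) ** beta)
        print(f"{k} prior repairs: R(500 h) = {R:.3f}")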

7)  Reliable Detection and Confirmation of Multiple Failure Modes in a set of Data with Many Survivors

Many of today's complex systems have the potential to fail for more than one reason.  Detecting whether more than one failure mode is represented in a batch of failure and survivor data has always been complex and difficult to do reliably.  Without confidence about the specifics of the failure modes, it is very difficult to perform the engineering analysis needed to find the failure modes and correct them.  Most commercial off-the-shelf reliability software tools do offer an indicator that more than one failure mode may be in play in failure and survivor data, usually a bent regression line.  The process they recommend to understand these failure modes, when their tool suggests a bent regression line, is to segregate the failure and survivor data at the "bend" and reprocess each segregated set of data separately.  This process produced very unsettling and confusing results for the US Coast Guard for an HU-25 aircraft subsystem with a 50/50 data mix of failures and survivors.  It is very difficult to assign a particular failure datum to any particular failure mode with confidence, and it is entirely impossible to assign a survivor datum to a particular failure mode when, for that datum, the system has not failed in any mode.  Attwater Consultants developed a mixture model for this problem that did not require any data segregation at all.  Using a Bayesian approach with Markov Chain Monte Carlo numerical methods, we derived a technique that reliably detects and confirms whether multiple failure modes are in play in a set of failure and survivor data and isolates the characteristics of each failure mode, and we applied it to the HU-25 aircraft subsystem for the US Coast Guard.  This technique is particularly robust in that it does not detect phantom failure modes, failure modes that are not actually represented in the data.  The mixture model and Bayesian and numerical techniques can be used to reliably detect and confirm multiple failure modes using failure and survivor data for any system, leading to better and more efficient correction of the faults, or at least to more effective and efficient maintenance planning.

Citation:  Mark A. Powell, "Method for Detection and Confirmation of Multiple Failure Modes with Numerous Survivor Data," Proceedings from the 2011 IEEE Aerospace Conference, Big Sky, MT, March 6-12, 2011.


Attwater Consultants solve many problems that just cannot be solved with the standard methods and procedures commonly in use in engineering today.  If you have any questions about any of these papers, or have a problem that your engineers have not yet solved satisfactorily, Contact Us for more information on how an Attwater Consultant can help with your problem, as demonstrated in these papers.  We selectively accept short-term and long-term assignments all over the world; the working language is English.