Note: this page was originally written for, and posted on, the web site of the Royal Statistical Society, in a section devoted to careers in statistics. Its main use these days is (I hope) to provide some motivation to young science & engineering graduates who may be considering a career in industry. The narrative ends at the point when I left Jaguar Land Rover in 2010 to set up my own consulting business (which is maybe how you ended up on this page in the first place).

In June 2013, I gave a presentation to the Professional Statisticians Forum of the Royal Statistical Society about my career - an audio recording with slides can be accessed here, and provides an adjunct to the narrative below.

Let me start at University. I was good at maths. at school, so I applied to study it, and I eventually went to the University College of Wales, Aberystwyth, where I read a combination of Pure & Applied Maths. and Statistics. During my first two years, I decided I enjoyed applying mathematical ideas within the framework of statistical science, so I concentrated exclusively on the statistical options in my third year, and ended up with a reasonable degree in Statistics. The year was 1981.

I knew then that I wanted to work in industry; it was probably in my blood, as I had been born and raised in the industrial heartland of the West Midlands (I spent some of my formative years working in my father's bakery making bread). I had some vague notion that statistical methods might be useful in helping to understand the way things were designed and made, although I had no idea how, since all of my experience in analysing real data sets as an undergraduate seemed to have been with agricultural applications. This probably reflects the fact that Aberystwyth is in the heart of rural Wales, and not, for example, in industrial Birmingham!

After some effort trying to get into industry, I landed a position at the Dunlop Tyre Company in Birmingham as a Mathematical Technologist. I was posted to the department responsible for testing tyres. Most of the tests involved running the tyres (indoors) on a large steel drum until they failed for some reason (bits of rubber coming off, components separating, overheating, etc.). My job was to figure out the best way to analyse the resulting data (fail times and fail causes), and to make proposals for improving the test regime.

There were some special features of the data that I did not know how to handle. For example, some tyres were removed from the test in an unfailed condition. It didn't seem right to treat these removal times as if they were fail times, and it didn't seem fair to just ignore them. Also, the engineers who ran the tests changed the test conditions halfway through the test - surely this meant that no simple probability distribution would fit the data? (as a fresh graduate I was desperate to use the Normal Distribution!). So what should be done? I was prepared to ignore the complication of more than one failure cause for the time being.
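For the technically curious: the standard way to handle those unfailed removals is to treat them as right-censored observations, which the Kaplan-Meier estimator accommodates directly. A minimal sketch, using invented fail and removal times rather than any real tyre data:

```python
# Kaplan-Meier survival estimate with right-censoring.
# Observations are (hours, failed) pairs; failed=False means the tyre
# was removed from test unfailed (a right-censored observation).
# Illustrative numbers only -- not real tyre data.

def kaplan_meier(observations):
    """Return [(time, survival)] at each observed failure time."""
    observations = sorted(observations)
    n_at_risk = len(observations)
    survival = 1.0
    curve = []
    for time, failed in observations:
        if failed:
            survival *= (n_at_risk - 1) / n_at_risk
            curve.append((time, survival))
        # Censored removals shrink the risk set but do not
        # step the survival curve down.
        n_at_risk -= 1
    return curve

data = [(120, True), (150, False), (200, True), (210, True),
        (250, False), (300, True), (340, False), (400, True)]
for t, s in kaplan_meier(data):
    print(f"t={t:4d} h  S(t)={s:.3f}")
```

Each censored removal reduces the number of tyres still at risk without being counted as a failure, so the information in the removal times is used rather than discarded.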

I made a couple of calls back to my old department at Aberystwyth, and went back to see them. They gave me some advice, I looked up a couple of articles in the statistical literature, and realised there was a wealth of information on reliability and survival analysis. It looked interesting, although a lot of the papers contained medical data sets (though perhaps the methods would work for tyres too?).

At around this time, I was also encouraged by my old department to join the Royal Statistical Society (RSS), which I did. I also joined the Institute of Statisticians (IoS) as a Graduate member. Later these two organisations would merge into a single body. In those days, the RSS was primarily a learned society, concentrating on mostly academic themes, while the IoS catered for statisticians outside the academic sector, working in the private sector and government.

Back at work, I called the statistics department at Birmingham University - would they let me enroll as a part-time student so I could study reliability theory, and try to apply it to tyres? Fortunately they said yes, so I began some study, under the tutelage of Professor Tony Lawrance (who is now at the University of Warwick), which would eventually culminate in my PhD, via a qualifying Masters stage.

I spent about a day a week at the university, looking up papers, talking with Tony, and learning how to think. Back at work, I started drawing interesting pictures of the tyre data (hazard plots, survival curves, and the like), which could cope with all the complications in my data, including multiple fail types. These plots showed features of the data that no-one had really seen before: for example, when we changed the test conditions, 1) the expected increase in the overall failure rate could actually be quantified, and 2) the failure rates of the different failure modes reacted differently to the change in test conditions.
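Plots of that kind can be sketched with the Nelson-Aalen estimator of the cumulative hazard, computed separately for each failure mode (failures of other modes, and removals, simply leave the risk set). A toy illustration with invented data and invented failure-mode names:

```python
# Cause-specific Nelson-Aalen cumulative hazard for competing failure
# modes. Each record is (hours, cause); cause is None for a censored
# removal. Illustrative numbers only -- not real tyre data.

def cumulative_hazard(records, cause):
    """Return [(time, H)] for one failure mode; other exits are censored."""
    records = sorted(records)
    n_at_risk = len(records)
    H = 0.0
    curve = []
    for time, c in records:
        if c == cause:
            H += 1 / n_at_risk  # hazard increment at each failure of this mode
            curve.append((time, H))
        n_at_risk -= 1          # every exit shrinks the risk set
    return curve

data = [(100, "tread"), (140, None), (180, "bead"), (220, "tread"),
        (260, None), (310, "bead"), (360, "tread")]
for mode in ("tread", "bead"):
    print(mode, cumulative_hazard(data, mode))
```

Comparing the slopes of the two cumulative-hazard curves before and after a change in test conditions is one way to see that different failure modes react differently to the change.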

This was exciting stuff - we changed some of the testing protocols as a result so that the tests were more likely to produce failure modes seen in the field. It also turned out that we could adapt the simple graphical methods to cope with yet more complications, including using explanatory variables to describe some of the variability in the data. The point here was not to predict the failure rate in the field (which is impossible), but to formulate counter-measures that would make these failure modes go away. It was at about this time that I realized that the statistical contribution to engineering was primarily concerned with selection, rather than prediction, which is a distinction lost on many statisticians who profess to know something about engineering.

While I was with Dunlop, the Company was taken over by the Japanese corporation Sumitomo. Once, Dunlop had more or less owned the tyre division of Sumitomo, and in more recent times the two Companies had exchanged technical information on an equal basis. Now the tables had been completely turned - an unsettling prospect for those of us on the Dunlop side.

I needn't have worried. In fact it was close contact with my Japanese colleagues that established, for me, how to use statistical science in engineering and manufacturing industry. I spent five weeks in the spring of 1985 actually working in Japan with my new colleagues. While my statistical work up to this point had been mostly devoted to describing the data produced by routine tests, the Japanese were using their facility in Kobe to run experiments to help them improve the performance of their tyres. We would hold quality circle meetings to discuss how we were going to solve problems, and run experiments in the testing lab. and in the factory to see if our ideas would work. If they did, we would introduce our counter-measures into production, and if not, we would iterate again via the quality circle meeting. I was pleased when the results I had uncovered in Birmingham with Dunlop proved useful in solving some of the problems I was discussing with my new colleagues at Sumitomo.

The statistical methods needed to help engineers improve their designs through selecting counter-measures are somewhat different from, and more exciting than, those used simply to describe data; think of the difference between planning and helping to run an experiment, analysing the results, and implementing the conclusions, compared to performing a "test of significance" on data you have had no part in collecting. Moreover, the standard of statistical skills in evidence among the engineering base in Japan was then, and I suspect still is now, much greater than it is in Europe, which is a major factor in Japan's economic strength through superior quality. (Since my time in Japan, I have taken every opportunity to speak with engineering faculties in universities, and with institutions like the Institution of Mechanical Engineers, trying to persuade them to teach more statistical methods to engineers in their degree courses. There is still a lot to do in this area, and I have done some work with the Royal Academy of Engineering to address it.)

I returned from Japan re-invigorated and determined to turn my attention to quality and reliability improvement. Shortly after I got back, I went to a lecture in London given by the eminent statistician George Barnard (1915-2002). He talked about the ideas of the American statistician and management philosopher W. Edwards Deming (1900-1993), and about Deming's work in Japan and, latterly, in his home country. Deming used statistical methods and statistical thinking as the basis for his approach to 1) quality improvement, and 2) management philosophy - he linked the two ideas together by showing that the understanding of variability and its effects was key to both. Of course it was then that I realized it was Deming's influence that I had seen so much in evidence in Japan.

I completed my Masters qualifying thesis at around this time, based on the work I had done up till that point. Dunlop had also become interested in Statistical Process Control (SPC), possibly motivated by the Japanese influence, but mostly by the requirements of the Ford Motor Company, who at that time demanded evidence of the use of these techniques in their suppliers' manufacturing processes to ensure predictable quality levels. At that time, Deming was working with Ford's top management in Dearborn, Michigan, and SPC, in particular the Control Chart, was a key tool in his teaching.

As it happened, Ford of Europe were on the lookout for a statistician, and I applied for the position, not really expecting to get into such a large company. However, I was fortunate enough to be selected, and I began work at Ford in 1986.

I spent the first period of my time with Ford in the Quality Office in the central staffs. I was less involved with engineering product at this time, and more involved with looking at data generated by customers, for example how satisfied they were with their car, and what if anything had gone wrong with it. I did some interesting regression work on this data, and constructed some models (both linear and non-linear), which helped us to decide which types of problem had the biggest impact on customer satisfaction. The idea was to work first on the problems with the largest regression coefficients (I am now returning to this work in conjunction with my job as Chief Analytics Officer at We Predict).
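A minimal sketch of that regression idea, with invented problem categories and simulated data (nothing here is real Ford data): fit a linear model of satisfaction against the counts of each problem type, then rank the problems by their coefficients.

```python
# Ranking problem types by their impact on customer satisfaction.
# Linear model: satisfaction ~ counts of each problem type per vehicle;
# the most negative coefficients point at the problems to fix first.
# Data and problem categories are invented purely for illustration.

import numpy as np

rng = np.random.default_rng(0)
n = 200
# Columns: counts of three (hypothetical) problem types per customer.
problems = rng.poisson(lam=[0.8, 0.5, 0.3], size=(n, 3))
# True (unknown) effects: squeaks mildly annoying, breakdowns very damaging.
true_beta = np.array([-2.0, -5.0, -9.0])
satisfaction = 90 + problems @ true_beta + rng.normal(0, 3, size=n)

X = np.column_stack([np.ones(n), problems])   # intercept + predictors
beta, *_ = np.linalg.lstsq(X, satisfaction, rcond=None)

names = ["squeaks/rattles", "electrical faults", "breakdowns"]
order = np.argsort(beta[1:])                  # most negative first
for i in order:
    print(f"{names[i]:18s} coefficient {beta[1 + i]:6.2f}")
```

The fitted coefficients recover the (simulated) effect sizes well enough to put the problem types in the right priority order, which is all the prioritisation exercise needs.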

Shortly after I joined Ford, I went on Deming’s celebrated 4-day management seminar, where Deming told us first hand why statistical thinking was important for management. Surprising at the time, but not now, was his vehement criticism of statistical hypothesis testing as a basis for prediction. Rather, he spent a good deal of time talking about the control chart (invented by Walter A. Shewhart), and how it could be used to understand variability. After about eighteen months with Ford, I was posted on to the engineering centre at Dunton, working out of the Statistical Methods Office. Another fellow of the RSS, Ian Puzey, ran the office, and we also had someone at Merkenich, our equivalent engineering centre in Germany.
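The control chart Deming dwelt on is simple enough to sketch in a few lines. In its bare-bones "individuals chart" form, points outside the centre line plus or minus three sigma are flagged as potential special causes. (Practitioners usually estimate sigma from the average moving range rather than the plain standard deviation used here, and the readings below are invented.)

```python
# A bare-bones Shewhart individuals chart: points outside the
# mean +/- 3*sigma control limits are flagged as potential special
# causes; everything inside is treated as common-cause variation.
# Sigma is the plain sample standard deviation for brevity
# (practice favours the average moving range). Invented readings.

from statistics import mean, stdev

def control_limits(samples):
    centre = mean(samples)
    sigma = stdev(samples)
    return centre - 3 * sigma, centre, centre + 3 * sigma

def special_causes(samples):
    lcl, _, ucl = control_limits(samples)
    return [(i, x) for i, x in enumerate(samples) if x < lcl or x > ucl]

# A stable process with one out-of-control point at index 8.
readings = [10.1, 9.8, 10.0, 10.2, 9.9, 10.1, 9.7, 10.0, 12.9, 10.1,
            9.9, 10.2, 10.0, 9.8, 10.1]
print("limits:", control_limits(readings))
print("special-cause points:", special_causes(readings))
```

The point of the chart is exactly the distinction Deming laboured: react to the flagged point as a special cause, and leave the common-cause scatter to systemic improvement rather than tampering.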

As I was joining Ford in 1986, the name of a Japanese engineer, Genichi Taguchi, was becoming well known in the automotive industry. Taguchi was advocating the use of statistically designed experiments to discover robust engineering designs, i.e. designs that are not affected by so-called noise factors (those factors which can have a disturbing influence on the function of a system). These noise factors come in various shapes and sizes, but there is a strong statistical theme running through them - for example, piece-to-piece variation introduced by the manufacturing process, rate of wear-out, distributions of customer usage patterns and environmental variables, and so on. It was natural, then, that as a statistician I would find Taguchi's ideas appealing.
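The flavour of that robustness idea can be conveyed with a toy example: evaluate each candidate design setting across an "outer array" of noise conditions, then select the setting whose performance varies least. The response function and the factor names below are entirely made up for illustration.

```python
# Robustness as selection: run each candidate design setting against
# a grid ("outer array") of noise conditions, then pick the setting
# whose response varies least across the noise.
# The response function and factor names are invented for illustration.

from itertools import product
from statistics import mean, stdev

def response(stiffness, temperature, wear):
    # Made-up system behaviour: higher stiffness damps the effect
    # of the noise factors (temperature and wear).
    return 100 + (temperature - 20) * wear / stiffness

designs = [2.0, 5.0, 10.0]                           # candidate stiffness settings
noise = list(product([0, 20, 40], [0.5, 1.0, 2.0]))  # temperature x wear grid

for stiffness in designs:
    ys = [response(stiffness, t, w) for t, w in noise]
    print(f"stiffness={stiffness:5.1f}  mean={mean(ys):7.2f}  sd={stdev(ys):5.2f}")
```

Note that the aim is selection - which design copes best with the noise - not prediction of field performance.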

I had the pleasure of working with Dr. Taguchi on his many visits to the Ford Motor Company around this time (here we are together in 1991), and we were able to set up a number of robustness experiments, the results of which appear in some of my publications. However, it was also clear from reading the literature that was growing around Taguchi's work that some of his statistical treatments for the problems he was solving could be greatly improved. Foremost among the researchers into Taguchi's statistical methods were George Box and his co-workers at the Center for Quality and Productivity Improvement at the University of Wisconsin-Madison, in the United States.

Dan Grove, a colleague of mine from Birmingham University, had the idea of writing a book on the role of statistical experiments in engineering generally. We had recently collaborated on research into some of Taguchi's ideas, which we were lucky enough to get published in Technometrics, and it seemed natural to extend this collaboration into a book, which we did, just as I was in the middle of writing up my PhD., the research for which had finally come to fruition. Writing a book and a PhD. thesis at the same time, on different subjects, is not to be recommended! I got my PhD. in 1991 ('Competing risk survival analysis - theory and industrial applications'), and the book followed in 1992 ('Engineering, quality, and experimental design').

Coincident with this, we decided at Ford to develop an integrated approach to Quality Training, bringing together many statistical ideas such as Statistical Process Control, Design of Experiments, and Robustness & Reliability methods. A small team, including myself, pulled the training program together, which comprised about 35 days of training in all. This was a major innovation - never before had such a commitment been made to quality training, and the integration of various statistical techniques into an engineering strategy (which I would later call "statistical engineering") was a major step forward. Our book became a major source of material for the Design of Experiments course.

My Ford colleague Ian Puzey had been involved with the Royal Statistical Society's Business and Industrial Section, and his time on the committee was coming to an end. Although I had been a Fellow of the RSS since 1982, I had not really been a very active member, as I was busy with research and the book. However, I followed Ian onto the Business and Industrial Section in 1989 and eventually became chairman of the committee (1991-1993). I also served for four years on the Council of the RSS, which is the policy-making body of the Society, and for two years I held the office of Vice-President (1993-1995). Much later, I served a second term on Council (2010-13), including two years on the Executive Committee (2011-12).

During my early active time with the Society, I made several presentations at conferences organized by its various sections, mostly on my work and experiences of working with engineers on their problems (quality or otherwise). Dan Grove and I presented a short course at the 1992 RSS Conference in Sheffield, based around our book which was about to come out, and stemming from this, we were nominated for the 1993 Greenfield Industrial Medal of the Society, which to our surprise and delight, we won. The Royal Statistical Society awards this particular medal for effective application of statistical methods in manufacturing industry. It was a pleasure to receive this medal on the same occasion that George Box was awarded the Society’s highest honor, the Guy medal in Gold.

I have mentioned the strong academic representation in the RSS, and I “joined the club” between 1991 and 1994, teaching undergraduates at Birmingham, and postgraduates at University College, London in industrial experimental design. UCL is steeped in statistical history, it being the oldest statistics department in the world. I try to maintain a strong connection with the academic side of statistics; I currently hold the title of Honorary Professor at the University of Warwick, and I have published a number of articles in various statistical journals, based on my work with engineers, including collaborations with a colleague of Box at Wisconsin, Professor Norman Draper.

One of the advantages of operating as a statistical scientist is that you get to meet a wide variety of people in the organization, from hourly paid operators in the plant, to junior engineers, engineering managers, and senior directors. In 1994, on the strength of my statistical work, I was promoted within Ford to the position of quality manager. I worked first with the Escort range of cars, and subsequently I was made responsible for the larger cars, Mondeo, Scorpio, and Galaxy, and I moved with my family to Germany. I was based at Ford's engineering center in Merkenich, near Cologne, and lived with my family just south of Bonn.

Finding myself in a management position gave me the opportunity to begin to influence strategy and events, rather than just react to them. Some of my ideas were included in a major speech to the Royal Academy of Engineering, given by Richard Parry-Jones, who has done much to guide my career, entitled Engineering for corporate success in the new millennium.

After four enjoyable years in Germany, I was promoted to the position of Quality Director for the Truck group in North America, and I lived in Bloomfield Hills, a few miles north of Detroit. Being a Quality Director meant that I didn't have as much time to get involved in the nitty-gritty of technical work, although bringing statistical thinking to a management position is vital, and the teaching of Deming now seems even more relevant. Reacting to a problem as if it were a special cause rather than a common cause can send a number of people off working on the wrong problem, for example, and getting people to state their degree of belief in conclusions from their data with a level of statistical confidence seems as ludicrous as Deming said it was at his 4-day seminar! (Do you want to know the probability of your data given that the hypothesis is true, or do you want to make a probability statement about your hypothesis given the data?)

A few years before moving to the United States, I had the good fortune to meet Don Clausing, an engineering professor at MIT. He impressed me greatly with the need to view engineering from the standpoint of avoiding failure modes, which he postulated were caused by either mistakes or lack of robustness. It was Professor Clausing's teaching that finally convinced me to abandon the probabilistic approach to reliability, favoured by many statisticians, and to concentrate instead on understanding the engineering design process that created the failure modes in the first place (the job of the engineer, first and foremost, is to select the design that will fail the least, rather than to predict the failure rate of the selected design - selection vs. prediction again). I applied these teachings while I was posted as the quality director in the United States, and it was particularly gratifying that two of the vehicles I was responsible for (the Ranger and F150 trucks) came top for reliability in the 2004 JD Power 3-year-in-service Vehicle Dependability Study, proving that Clausing's strategies worked in practice. Don and I were regular collaborators on engineering ideas (right up to a few days before his death in 2010), and I developed 4 days of seminar material, called simply Failure Mode Avoidance, which I taught at Ford's engineering centres around the world.

Ford saw fit to designate me a Henry Ford Technical Fellow in 2001 – it was only the 10th time out of (currently) thirteen in the history of the Company that the honor has been awarded, and the first time for engineering achievements through the application of statistical science. I served as a Technical Fellow from 2001 to 2007, when I devoted almost all of my time to solving strategic technical engineering issues rather than running a department. I was based in the UK, at the Gaydon Design and Engineering Center, and I maintained an office in the United States, in the Company’s Scientific Research Laboratory.

The area of quality engineering through failure mode avoidance is full of technically challenging issues; recent examples that I have worked on include:

  • trying to predict field failure rates for a failure mode of an emissions component, based on warranty data heavily censored by time and mileage constraints;
  • instituting the use of the hazard function as a main analysis and prediction tool for warranty;
  • making verification testing as efficient as possible by including as many of the previously mentioned noise factors in the tests;
  • developing a quality operating system to establish the viability of launching new product;
  • modifying the experimental and analysis procedures so that electronic engine controllers can be programmed more efficiently;
  • determining why tires fail by tread separation, and understanding the associated consequences (here are some examples of the media reaction: Detroit News (June 15, 2001); Detroit Free Press (October 5, 2001); Detroit News (October 5, 2001); BBC (October 18, 2001); CNN (June 14, 2001); BBC (June 17, 2001); Detroit News (October 14, 2005)). See also the US Government perspective on all this here;
  • using the ideas of Failure Mode Avoidance to re-define reliability as a physics/materials/geometry based attribute, rather than as a probability, so that it can be assessed early in product development;
  • formulating an approach to automotive engineering which might be called statistical engineering;
  • redefining the FMEA as Failure Mode & Effect Avoidance (rather than Failure Mode & Effect Analysis), and developing an approach to project management based on the use of early detection events for failure modes;

and so on. The financial consequences of making different decisions in all of these problems can be measured in tens of millions, and occasionally billions, of dollars (no pressure then!). However, the main leverage of the application of statistical science is in planning and executing for quality and the avoidance of failure modes, and my team and I spend a good deal of time on upfront activities designed to prevent problems and achieve perfect engineering execution.

I was elected to Fellowship of the Institution of Mechanical Engineers (IMechE) in 2004, and am now a chartered engineer as well as a chartered statistician. I was honoured and privileged to be elected to the IMechE via the presidential route, and having chartership in both engineering and statistics provides a nice professional symmetry to the work that I did for the Ford Motor Company, and for Jaguar Land Rover as the director of the Office for Quality and Automotive Safety. This symmetry was enhanced in 2005, when the Safety & Reliability group of the IMechE awarded me the Donald Julius Groen Prize for my work in reliability, adding to the Greenfield medal awarded to me in 1993 by the Royal Statistical Society. I have also been pleased to accept honorary professorships, at the University of Bradford as professor of statistical engineering, and at Warwick, in the department of statistics.

I left Jaguar Land Rover in 2010 to set up my own consulting business in the field of quality engineering and failure mode avoidance - I specialise in helping clients solve problems at the interface of engineering and statistical science; but you probably know that, having arrived at this page via my web-site. I am also the CTO at We Predict, a young data analytics company based in Swansea. I am happy that I am the oldest employee by about 20 years!

Certainly, I can't complain about how my (unplanned) career worked out. If I am honest, I wish I had studied more engineering and physics along the way, though a good statistical education is just as vital. I have been very lucky to have worked with brilliant people, and I am still learning, and enjoying what I do!