What can STP leaders learn from Formula 1?
The Formula 1 season began at pace this weekend. With all the excitement of watching fast cars race to the finishing line, it may surprise people to know that each season an F1 team makes improvements that can shave over two seconds off a lap time. This can mean the difference between 1st and 10th position on the starting grid.
"What can leaders learn from F1 to give their local STP plans the edge in the next few months?"
F1 is very different to the NHS, but its design-thinking approach to performance improvement is impressive. Most importantly, it is tried and tested, rooted in evidence and hard data. This allows improvement to be both rapid and successful.
On a visit last year to an F1 team’s factory, I was inspired by the speed of change and development. Ideation to race-car development happens in a matter of weeks, in an environment that only tolerates failure if lessons are learnt quickly.
In summary, every four weeks (1):
Day 1 - Ideation: Designers and engineers in the factory develop proposals for new or changed components on the F1 car. This may be done individually or as a batch guided by a high-level programme of improvement work (“we need a better aerodynamics package” or “we need greater reliability”).
Day 2 (morning) - Initial shortlisting: The Technical Director reviews submissions from the team to determine what will and will not be taken forward. Some ideas may go in a holding bay. Crucially, the Technical Director is accountable for the team’s performance over the season, so has an incentive to exercise experienced judgement and risk-taking on the development of new parts. This may also include decisions about whether a component will meet regulated standards.
Day 2 (afternoon) - Initial prototyping: Shortlisted prototypes are built in hard (and cheap) plastic and fitted to a life-size plastic version of the F1 car – to make sure the parts fit.
Day 3 - Simulation 1: Shortlisted components are then considered for further testing (does it work?). Beyond relatively simple desktop computing, there are two ways this is done (2):
- on a supercomputer that runs complex non-linear simulations
- in a wind tunnel that tests wind flow over components (often built using a 3D printer)
Each has its own advantage: the wind tunnel is arguably more realistic, while the supercomputer can simulate dynamics for a range of different scenarios (rain at Monaco vs dry in Canada). Correlating the results between the two is powerful, so the relative use of each asset has to be carefully considered.
Day 5 - Simulation 2: Computer and wind-tunnel tests are done in controlled environments, which creates a risk that a new component may not stand up to race conditions. So all parts are tested on a seven-post race simulator: an actual F1 car on a large mechanical platform that can recreate race conditions (bouncing, twisting, turning) using telemetry from actual races.
Day 6 - Simulation 3: After successful testing and development, a package of parts will be built to race specification in the factory. These parts will then be tested at the Thursday practice session. This allows the collection of valuable data to correlate with previous testing. Crucially this includes the one sensor that can’t currently be modelled – the driver. Feedback from the driver is crucial: how does the package feel?
Day 10 - Development: After some further refinement in the factory, parts are developed and then shipped to the next race location. The logistics of this are not easy.
This process of ideation to parts being on a race car happens within two races (approximately four weeks). This means development is constant and relentless - small but constant incremental changes. The effect of these over the season can be hard to observe, but it cannot be ignored. It is rare that one team has a single “big idea” that puts them miles ahead of the competition – the diffuser used on the Brawn F1 car in 2009 (whose “legality” was questioned by some of the top teams) is one of the few exceptions.
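The compounding arithmetic of these small gains is easy to sketch. Here is a minimal Python illustration using assumed figures – twenty upgrade cycles of roughly 0.1 seconds each, chosen only because they happen to add up to the two-seconds-a-season improvement mentioned earlier, not because they reflect any real team’s data:

```python
# Illustrative sketch only: how small, regular lap-time gains add up
# over a season. The per-cycle gain and cycle count are assumptions.
from itertools import accumulate

GAIN_PER_CYCLE_S = 0.1    # assumed lap-time gain per upgrade cycle (seconds)
CYCLES_PER_SEASON = 20    # assumed number of upgrade cycles in a season

# Running total of improvement after each upgrade cycle
running_total = list(accumulate([GAIN_PER_CYCLE_S] * CYCLES_PER_SEASON))

for cycle, total in enumerate(running_total, start=1):
    if cycle % 5 == 0:  # print a checkpoint every five cycles
        print(f"after cycle {cycle:2d}: {total:.1f} s per lap faster")
```

No single cycle looks dramatic on its own; the point is that the total is dominated by how many cycles the team can complete, which is why a four-week loop beats an annual one.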
While new car parts are being developed, the F1 pit-lane crew tirelessly drill pit stops. These drills may improve a pit stop by 0.1 seconds – enough to make a difference.
F1’s learning loop is rapid and effective. It is also unparalleled in the NHS. This could change. The NHS is more complex, but this shouldn’t prevent shorter cycles of improvement whose effects can be seen quickly – length of stay for admitted patients is usually two weeks…
The culture of improvement is vital. Clarity on what good looks and feels like may not be as sharp as a race win in F1, but systems should focus on more than simply meeting minimum targets. Good judgement is needed on what ideas are taken forward (resources are scarce). Equally, teams should be supported to improve and allowed to fail provided lessons are learnt and mistakes not repeated.
Better data analysis is needed. This helps learning and identifies areas to improve. This should include simulation modelling operated by experts not hobbyists (in complex systems it is not hard to simulate things badly). Equally, the role and value of the driver shouldn’t be forgotten – the best sensor the F1 team have in the car. In the NHS, this should go further than simply looking at Patient Reported Outcome Measures. It may also include doctors, nurses, families and other informal carers.
(1) In some areas I have had to simplify the steps to protect technical details that may be commercially sensitive.
(2) Use of both of these facilities has been regulated to narrow the performance gap between wealthier and less well-resourced teams. Per season, teams can use the wind tunnel for a maximum number of hours, or process a maximum number of terabytes of data on the supercomputer.
Note: article adapted from the original, which was written in March 2016