I agree with this. I do not accept that just because a problem is understood, it can be presumed solved. As those of you in the industry know, mistakes arise from technical failures or human ones; both carry a probability of occurring, and hence, given enough occasions, a near-certainty of occurring at some point. Protocols and technical solutions therefore rely on double checks to minimise the probability of something slipping through. This sleeper incident happened in one of relatively few occasions on which this particular train has been split, which indicates a vulnerability with a potentially disastrous outcome. It needs some assurance (through the RAIB?) of an additional protocol and/or a technical double check.
Of interest:
Boeing maintains that the 737 Max disasters were the product of a complex chain of events. The view of others is that the Max's new stabilisation system, which pushes the nose down to level the aircraft in flight, depended on only one sensor, and when that sensor failed the system pushed the nose down. The 737 Max fleet had already flown a massively larger number of flights and hours before the probability of failure kicked in, and it then did so in disastrous ways. The solution, quietly, appears to be to use two sensors. Corporate culture and economics are not always open to responding to risk.