Border Agency arrests up 10 per cent – but compared to what?

During the arguments over the suspension of controls by the Border Agency, the Home Secretary, Theresa May, has claimed that the pilot study under which these changes were made had increased arrests by 10 per cent. How does she know?

Next week, the Home Affairs Select Committee will examine how the Border Agency’s controls were practised during, and outside of, this pilot study. So far, journalists have elicited few details about how the pilot was designed or how it reached the conclusion reported by the Home Secretary.

I hope that the Select Committee will be able to do better. Did the study compare a new targeting regime for the checks made by immigration officials at points of entry into the UK against “business as usual”?

If so, how might such a study have been designed?

One idea would be to select staff at random, train them in the new targeting regime, and have them put it into practice on all their shifts. This is a bad idea because staff in the same immigration hall would be operating different checking regimes, and passengers (especially those whose entry we want to bar) could drift between the lines and self-select the checking regime to which they would be subject.

Better would be for all staff at selected points of entry to be trained in the new targeting regime but to be told each day whether their entry point is to operate ‘business as usual’ or targeted screening. The daily instruction would be determined by randomization.

Then, on any given day, all staff would operate the same checks, but the regime they followed would have been selected at random. The Border Agency could thus compare arrests (and secondary outcomes such as queuing time) on a like-with-like basis between the randomly selected days on which staff were instructed to operate targeted screening and those on which they operated ‘business as usual’. On any given day, roughly half the Border Agency’s entry points in the pilot study would have received the instruction ‘business as usual’.
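For illustration only, here is a minimal sketch in Python of how such daily randomization could be generated; it is not the Border Agency’s actual procedure, and the entry-point names are placeholders. Each pilot entry point is assigned a regime independently each day, so that whole days, rather than individual officers or queues, form the unit of comparison.

```python
# Minimal sketch of daily cluster randomization (illustrative only, not the
# Border Agency's procedure): each pilot entry point is independently
# assigned one regime for the whole day, so on average about half of the
# entry points operate 'business as usual' on any given day.
import random

ENTRY_POINTS = ["Entry point A", "Entry point B", "Entry point C", "Entry point D"]  # placeholders
REGIMES = ["business as usual", "targeted screening"]

def daily_allocation(entry_points, rng):
    """Assign each entry point a regime for one day, chosen at random."""
    return {point: rng.choice(REGIMES) for point in entry_points}

if __name__ == "__main__":
    rng = random.Random(2011)  # fixed seed so the day's allocation is reproducible and auditable
    allocation = daily_allocation(ENTRY_POINTS, rng)
    for point, regime in allocation.items():
        print(f"{point}: {regime}")
```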

A second issue is the number of arrests on which the comparison between the two regimes was based. For a true difference of 10 per cent in arrest-counts to have had an 80 per cent chance of reaching statistical significance, the total number of arrests in the pilot study would need to have been around 3,000.  
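As a rough, back-of-envelope check (my own sketch, not the Home Office’s calculation): if the total arrest count is split between days operated under the two regimes, a 10 per cent relative increase corresponds to a binomial proportion of 1.1/2.1 (about 0.52) rather than 0.5, and the standard normal approximation for sample size then gives a total of roughly 2,700 arrests for a one-sided 5 per cent test, or roughly 3,500 for a two-sided test, broadly consistent with the figure quoted above.

```python
# Back-of-envelope power sketch (not the Home Office's calculation): split
# the total arrest count N between days under the two regimes. Under no
# effect the split is 1:1 (proportion 0.5); under a 10% relative increase
# it is 1.1:1 (proportion 1.1/2.1, about 0.524). The normal approximation
# below gives the total N needed for 80% power at the 5% significance level.
from math import sqrt
from statistics import NormalDist

def total_arrests_needed(relative_increase=0.10, alpha=0.05, power=0.80, two_sided=True):
    p0 = 0.5                                                 # equal split under no effect
    p1 = (1 + relative_increase) / (2 + relative_increase)   # split under the assumed increase
    z_alpha = NormalDist().inv_cdf((1 - alpha / 2) if two_sided else (1 - alpha))
    z_beta = NormalDist().inv_cdf(power)
    numerator = (z_alpha * sqrt(p0 * (1 - p0)) + z_beta * sqrt(p1 * (1 - p1))) ** 2
    return numerator / (p1 - p0) ** 2

if __name__ == "__main__":
    print(round(total_arrests_needed(two_sided=True)))   # about 3,500 arrests
    print(round(total_arrests_needed(two_sided=False)))  # about 2,700 arrests
```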

Were there that many? Or were inferences drawn from an imperfect design, without an adequate denominator? Clearly, if a policy change was introduced at all entry points during a holiday period of high passenger throughput, arrests might have increased by 10 per cent merely because throughput did.

Declaration of interest: SMB writes in a personal capacity. SMB is a member of the Home Office’s Surveys, Design and Statistics Subcommittee.