Official Report: Minutes of Evidence
Committee for Justice, meeting on Thursday, 3 April 2025
Members present for all or part of the proceedings:
Ms Joanne Bunting (Chairperson)
Miss Deirdre Hargey (Deputy Chairperson)
Mr Danny Baker
Mr Doug Beattie MC
Mr Maurice Bradley
Mr Stephen Dunne
Ms Connie Egan
Mrs Ciara Ferguson
Mr Justin McNulty
Witnesses:
Mr Michael Birtwistle, Ada Lovelace Institute
Justice Bill: Ada Lovelace Institute
The Chairperson (Ms Bunting): We have with us Michael Birtwistle, the associate director of law and policy at the Ada Lovelace Institute. He will give oral evidence on the Bill. Predominantly, the Ada Lovelace Institute is associated with biometrics, so that will be the focus of our discussion today. Michael, you are very welcome to the meeting. Thank you so much for taking the time to speak to us today and provide some evidence. We are very interested in what you have to say. I will hand over to you to make some opening remarks, and then, if you are content, we will ask questions. Is that all right?
Mr Michael Birtwistle (Ada Lovelace Institute): That is great. Thank you so much, and thank you to the Committee for having me here today. Can you all hear me OK?
Mr Birtwistle: Are you happy for me to present for about 10 minutes, after which I can take questions? Will that work? I can make my presentation a little shorter.
Mr Birtwistle: Lovely. I will attempt to share my screen, if that is helpful for the presentation, but I may need permissions for that. If it is not possible, I will just plough ahead.
The Ada Lovelace Institute is an independent research institute that is based in the UK and Brussels, and our mission is to make AI and data work for people in society. We are primarily a research organisation, with a policy function that I look after. Our first foray into biometrics was in 2020, when we produced a synthesis report that combined a legal review by Matthew Ryder KC; a wrap-up of a bunch of public engagement and public attitudes research conducted through a project called the Citizens' Biometrics Council, which took 50 ordinary people through the process of learning about biometrics, understanding how they related to them and determining how they would want biometrics to be governed; and a public attitudes survey called 'Beyond face value'.
We have been updating that work since then. I will talk the Committee through the developments that we are seeing in biometrics, at least in England, and the outcomes from our recent public attitudes work so that you can understand where people's attitudes sit with this emblematic technology when it comes to police use of the technology and to artificial intelligence.
As I said, we conducted a review in 2020. We made recommendations suggesting that the UK Government needed to pass new legislation to bring together oversight and enforcement of the rules around biometrics, because, as a set of technologies, they are inherently personal and invasive, and those factors need to be balanced against cases of potentially beneficial use. At least in England, there is no single source of such regulation and no clear incentive structure to ensure that there are standards of accuracy, validity and reliability in the way in which biometrics are used and to ensure that the deployment of such technologies takes into account the balance to be struck with fundamental rights, such as the right to privacy.
We are seeing the use of biometrics develop. I will briefly define "biometrics". Biometric data is personal data that relates to the technical processing of the physical, physiological or behavioural characteristics of a natural person. The difference between a photograph and a piece of biometric data is that biometric data measures something about the person in that photograph. The traditional, first-generation set of biometrics largely related to things such as fingerprints that were easy to measure or where someone's unique profile could be compared with something else.
Around five years ago, we saw the roll-out of facial recognition technologies (FRTs) that were increasingly using artificial intelligence to make them more accurate. Those technologies measured physical characteristics for the purpose of identification, which, in such contexts, is usually done by comparing one to many. That means that you have a photograph that you have taken of someone in a crowd, and, measuring it against a watch list, you ask, "How likely is it that the person in the photograph is one of the faces on the watch list?".
Over the past five years, we have seen a newer generation of biometric technologies emerge, as well as an increasing roll-out of the original technologies. The Minister of State for Crime, Policing and Fire confirmed in Westminster earlier this week that the UK has bought 10 live facial recognition vans. We have also had confirmation that two of the first permanent facial recognition cameras have been installed in Croydon, south of London. There is also a movement to do what is called operator-initiated facial recognition, which is when, from a body cam or smartphone, an officer will take a photo of an individual and try to identify that individual. There is also retrospective facial recognition, which is when old CCTV footage is run in order to try to do the same analysis.
We are also seeing the newer generation of biometric technologies, which are attempting to do something called categorisation or classification. Instead of asking the question, "Who is this person?", they ask, "What is this person's gender or ethnicity? Are they paying attention? Do they intend to commit a crime? Are they being aggressive?". Those are all real use cases, used variously in policing and on the Transport for London (TfL) network. There was a trial of some of those technologies at Willesden Green tube station. In the private sector, use cases are being marketed right now that seek to sell those technologies as a product. That raises further questions about how the technologies are governed, whether they are proportionate and what their impact is going to be on privacy in general.
In English law, we had the R (Bridges) v South Wales Police Court of Appeal judgment in 2020. That is the only significant case law on the lawfulness of facial recognition. It does, however, give us a map of all the mandatory practices that live facial recognition use by the police might need to meet in order to constitute a sufficient legal framework to be lawful under data protection law, equality law and a bunch of other English and Welsh laws, such as the Regulation of Investigatory Powers Act 2000.
The issue of proportionality also comes into question. The judgment sets tests such as deployment needing to be strictly necessary for specific law enforcement purposes, deployment happening only when less intrusive measures cannot achieve the same objectives and a fair balance being struck between the rights of individuals and the rights of communities. What we have seen so far is that the newer use cases that I have just described are not necessarily taking account of Bridges and are proceeding and, if you like, seeking forgiveness rather than permission.
The absence of a strong regulatory body that is able to take the Bridges requirements and ensure that they are being applied in practice has two effects. We have deployments occurring without taking into account, or without having accountability for, the need to meet the mandatory practices set out in Bridges. We also have hesitation by police forces and others in deploying the technologies in ways that may be beneficial. The Policing Minister in Westminster said in November 2024 that some senior police leaders believe:
"the lack of a specific legal framework inhibits the use of the technology and dampens willingness to innovate."
We have seen in the UK that the Information Commissioner's Office (ICO), which holds the pen on data protection law, which itself contains much of the relevant law on biometrics, has been enforcing. We saw that in the Serco case, in which Serco Leisure centres were using facial recognition technology to monitor staff clocking in and out of work. The ICO said that that use was disproportionate and that easier ways have been used for hundreds of years to clock people in and out of work. The ICO has been further updating its guidance on the use of biometric data. That is the legal framing of where we are and of where the use cases stand.
The other issue that I want to cover is the question of public attitudes. We do a lot of public attitudes research at Ada, and we reached the conclusion that the benefits of technology are lost without public trust: people become reluctant to share their data, AI tools are not seen to be legitimate, and people withdraw from services. We have therefore done a lot of work to try to understand how public trust relates to those sorts of technologies. At a general level, people have very high expectations about the use of AI and about being protected from its negative impacts. They want independent regulation that has teeth, and they really value explainability, often above accuracy. People want to feel that they are in control and that decisions made about them are fair.
I will try sharing once more, but I think that I am not able to present slides. I will flag a piece of public attitudes work that we have just published. It is the largest representative survey of UK citizens — 3,500 people — with an oversampling done of minoritised communities. The public have a set of really nuanced views about facial recognition technology for policing. Around 91% of people perceive the use of facial recognition technology for policing to be potentially beneficial, but at least 39% — I say "at least" for a reason — perceive the same technology to be concerning. People have nuanced views, whereby they do not just feel that, on a sliding scale, the technology is good or bad. They can see that it might have benefits, but they can also see that it might hold challenges.
One of the reasons that we deliberately oversampled from minoritised communities is that the numbers change quite a lot when you break them down. Compared with 39% of the general population who expressed concern about facial recognition technology for policing, 57% of black people and 52% of Asian people expressed concern. I understand that those numbers do not significantly change for Northern Ireland, which was included in the sample.
Another really interesting point that came out of that most recent survey, which we did with the Alan Turing Institute, also based in London, concerns attitudes towards regulation. We asked this question: what would make you feel more comfortable with AI? In 2022-23, 62% said that regulation was the top thing that would make them feel more comfortable with AI. Two years later — this year — it is 72%. That 10-percentage-point shift in two years is really quite extraordinary compared with the normal speed of change in public attitudes to big issues.
To finish off, I will talk a tiny bit about some of the key issues that arose from our public attitudes work. One of the major questions is around identity, bias and discrimination. I have some nice quotations on the slides, but I will not read them out now. I do, however, want to highlight a piece of research that we have just brought out, which is called 'Making good'. It is on our website and was part of a project called Public voices in AI. We went to three locations across the UK, one of them being Belfast, to ask participants about how they viewed different technologies. Doing that was really interesting, because it told us about the significant gap between public attitudes, in aggregate, to the public use of facial recognition technology, which one might say are broadly positive, although 39% on average is still a pretty significant proportion of the population that is concerned.
The views, however, of people about applications of that technology in specific social contexts are much more nuanced. In the 'Making good' research, we asked the Belfast participants to role-play FRT use by the police, taking the part of PSNI officers, and to explore how the cameras should be used in public spaces. The participants were, in the majority, white, born locally and from Catholic and Protestant backgrounds, but the group also included individuals with more positive, tech-optimist outlooks. We asked them to establish recommendations for use. That research was published only last week.
The group felt unable to recommend implementing facial recognition technology in a Northern Ireland context at this time. They cited worries about potential misuse or discrimination that might arise from its use. Their primary concern was distrust in current police oversight structures and in the Police Ombudsman, as well as the history of Northern Ireland's contested governance. The recommendations focused on the need to create trustworthy accountability systems, to maintain the progress that has been made in relationships between the community and the police and to resource the police before considering introducing technology into that fraught context.
What that case study points to is the need for engagement and for slow, careful work with local publics in order to build confidence and to ensure that a wider system of trustworthy relationships is in place before introducing that sort of technology into policing.
Hopefully, that gives the Committee an idea of the UK picture and of the Northern Ireland-specific research that we have been doing. To wrap up, we are releasing a report in a couple of weeks, and the key point to make is that the lawfulness of frontier biometric deployments is still in question. The absence of a comprehensive legal framework will continue to lead to two bad outcomes: deployment without proper oversight or hesitation to deploy.
The UK has a Biometrics and Surveillance Camera Commissioner, whose role was almost abolished last year through a Bill. The role is now being preserved, however. Fundamentally, he can report only on the use of biometrics and their impact but cannot necessarily affect in any significant way how they are being used by police or others. The Home Office is likely to consult during 2025, in part to address the challenges that the Policing Minister outlined in the quotation that I gave earlier.
We are quite interested in the Scottish model. Biometrics is a devolved competency over there. Around two or three years ago, a Scottish Biometrics Commissioner was appointed. He works explicitly with the police on the use of biometrics and has far more teeth: he is able to issue mandatory guidance to Police Scotland on the use of biometrics and to answer questions on proportionality, retention and that sort of thing.
Hopefully, some of that has been useful. I am now very happy to answer questions to the best of my ability. I am in your hands.
The Chairperson (Ms Bunting): Michael, thanks so much. That really was useful. It was very helpful to hear the local context and to learn that some survey work has been carried out in Northern Ireland. That is useful for us to know. It is also useful for us to have some understanding of the bigger picture across the UK. That is really great. Thank you. I will now bring in members to ask questions.
Miss Hargey: Thanks very much for your presentation, Michael. Some of the issues that you have raised are things on which the Committee has been looking for clarity from the Department. You touched on the research that you did on facial recognition with people here. Did you say that it has been published? Is it on your website?
Mr Birtwistle: Yes, it is.
Mr Birtwistle: I absolutely can, yes.
Miss Hargey: Did you say that you will be publishing the research on the use of biometric data in a few weeks? If we could also get that when it is published, that would be useful.
Mr Birtwistle: Yes. We are pretty close to producing a publishable draft, but I am happy to share a not-quite-published draft with the Committee so that you do not have to wait.
Miss Hargey: That would be brilliant. Thank you. In the legislation that we are scrutinising, we are looking to have a biometrics commissioner here. We had the Scottish Biometrics Commissioner over at the Committee just over a month ago. Do you feel that what is contained in the Bill about the biometrics commissioner and their independence and functions is appropriate?
Mr Birtwistle: Independence for a commissioner is very important. Despite the lack of teeth that the England and Wales commissioner had, there was real accountability. If you read, say, the five most recent publications that Fraser Sampson, the previous Biometrics and Surveillance Camera Commissioner for England and Wales, put out, you will see that he really did not pull any punches in articulating the impact on privacy of the use cases of those technologies that he was seeing and in articulating what was needed in further policy or parliamentary response to those use cases. He was able to do that because he was independent from the Home Office and the Policing Minister and had quite a frank relationship with them.
I am only adjacently familiar with the Scottish model, but my understanding is that there is independence provided for in the legislation. I have been told anecdotally, so take this with a grain of salt, that there are quite a lot of former Police Scotland staff among the commissioner's current staff, and that is likely to mean that the advice that he receives is sympathetic to the work of the police. If you want truly independent oversight, you need to make sure that there is balance in the advice provided so that independence is not just mentioned in the Bill but happens in practice. If that were being done in England and Wales, it would be less of a problem, because we have a much more distributed set of forces. We strongly advocate the independence of regulatory bodies, be they biometrics commissioners or the Information Commissioner's Office in England. I advocate that for any new commissioner as well.
Miss Hargey: We are looking at the biometrics commissioner's role and reach. You touched on independence. Should the commissioner's role cover any other areas, or should they have other powers? If so, what are they? On the commissioner's scope, should the biometrics end of it include photos?
Mr Birtwistle: Those are two big questions. I will take the second one first. There is a lot of value in the UK GDPR definition of "biometrics", because it captures the popular use case — the most-used use case for identification — and the emerging use cases, in which you look at behaviour and try to guess what is going on inside someone's head from looking at what is going on outside. In a justice context, using your mandate, you may want to consider extending that beyond policing to the concept of surveillance, because, in the UK, there is an increasing overlap between policing use cases and security and surveillance use cases in the private sector. There has been a lot of back and forth between the Information Commissioner's Office and Facewatch. A number of documents about those regulatory interactions are available on the ICO website. I am happy to provide the Committee with the links. They describe the increasing desire of private-sector actors to be able to use those technologies for, arguably, a policing-adjacent purpose.
Hopefully, that addresses your question about definitions and what should be captured. I do not think that it is necessarily helpful to focus on a single medium, such as photos. The definition of the type of data about measuring a person is really what you want to get at, because that lets you capture retinal scanning; facial recognition; gait analysis, which is about how a person moves; and measurements of where you look, which are about attention. If you just focus on a medium such as a still image, you will not necessarily capture all the different use cases. My point is that all the different use cases can be just as invasive as using a photo.
Your first question was about the powers that a biometrics commissioner should have. In a policing context, the Scottish model is not a bad one. The powers that we have articulated in our recommendation include publishing a register of public-sector use cases; monitoring trends, which, I think, is already in the Bill; and, as part of the regulatory function, assessing biometric technologies on two levels.
The first is to ensure that biometric technologies meet the standards of accuracy, validity and reliability that are appropriate for the context. You can argue about where those standards should lie and what is appropriate for a policing context; you empower an independent regulator to think about that. The second is to look at the use of biometric technologies, by the public sector in publicly accessible places and for public services and by the private sector for security purposes, and to assess their proportionality against human rights standards prior to use.
Miss Hargey: Thank you, Michael. To clarify, when you talked about surveillance, did you mention that there would be a report on private-sector use and the intersection between that and policing? Did you say that a report is coming out or that somebody has looked at that issue?
Mr Birtwistle: We will bring out a policy explainer in around two weeks' time that will discuss progress on biometrics in a range of areas and will include coverage of the use that you mentioned. If you would like a much more in-depth report, I recommend the Big Brother Watch report that came out last year and went into a lot more detail about where and how it is being used. As I said — this is not confirmed, so it is not yet Westminster Government policy — my best understanding is that the UK Government are likely to consult, at a minimum, on police use of facial recognition technology but there could be broader consultation on police use of biometrics.
Mr Baker: Yes, if that is OK. Michael, thank you. Looking at the use of biometrics through the lens of children's rights, children are in a more vulnerable position. What is your view of our Justice Bill with regard to safeguarding children and young people?
Mr Birtwistle: That is not an area that I am expert in or that we have research on, but the Committee Clerk said that it might be of interest to the Committee. In the UK context, it might be instructive for the Committee to look at the gangs violence matrix that was run by the Metropolitan Police in London. That is generally regarded as a pretty poor example of how to record and store the data of people, primarily young people. It had an impact on the lives of the people who were put on that matrix for a variety of reasons. I noted the conversation in the previous evidence session on the different reasons why you might end up on the police's radar or in their records. From the research that we have done and the reading that I have done, I would say that the reality of how people end up in police records is often much messier than the theory, yet it can have a long-lasting impact.
On the question of retention that the Committee was discussing earlier, the Protection of Freedoms Act 2012, which covers biometric data retention limits, might be of interest. I believe that there is a limit of around three years for relevant offences, which are defined in the Act, and the police can extend that limit for another two years by asking the Biometrics Commissioner. I think that that is roughly how it works. There is a longer limit of, I think, 15 years for fingerprints that are collected for immigration purposes. Those are the sorts of figures in England and Wales for that kind of data.
Ms Ferguson: Thank you, Michael. Following on from Danny's question, I would like to hear your views, based on your research with local people, on the proportionality of the retention periods in the Justice Bill. From your research, do you feel that we are striking the right balance between the protection of rights and the prevention and detection of crime? Were those issues discussed with local people during your research? Are they even aware of the retention periods?
Mr Birtwistle: I am not sure that that came up in our research, I am afraid. Our research was much more focused on the immediacy of using and having to interact with cameras that are collecting your data in that way than it was on the onward process of retention. I cannot, therefore, enlighten you any further on that. I will write to the Committee with any highlights from the report that link with people's views on those sorts of technologies.
The research tells us that there is concern that the use of such technologies, which inherently involves the collection of that data, would take place in a very fraught context. There was notable concern about whether that would worsen confidence in and relationships with the police. I cannot read more than that into it for you, but I invite you to read the report. I will try to provide links to anything that seems particularly relevant.
Ms Ferguson: Secondly, Michael, you have re-emphasised the importance of oversight and what citizens want. They want to have trust and confidence when they are passing on that type of sensitive information. The situation is moving quite fast already: you mentioned the deployment of vans and the permanent installation of facial recognition cameras. Given how fast technology is developing, do you have any grave concerns about the impact of future technologies?
Mr Birtwistle: I take it that you mean future biometrics technologies. We are seeing deployments and capabilities growing at pace, whether or not the technologies are accurate to the level that you would need them to be in order to deploy them safely in a particular context. You are seeing Amazon vans with cameras that try to measure whether the driver is paying attention and national railway stations that are deploying such cameras to create what, I think, is termed "situational awareness" and, potentially, to offer people appropriate advertising based on something that the camera has determined about a person.
As I said, there are surveillance and security-focused use cases that are of interest to the police and to public-sector bodies that are adjacent to the police and work closely with them, such as those in the transport sector. That is evidenced by Transport for London's pilot in Willesden Green. It is important that our institutions are able to respond to those developments quickly and in a future-proofed way. You achieve that by ensuring that the remit is wide enough to catch the scope of all those technologies and that the institutions are sufficiently empowered to respond flexibly to them. Does that answer your question?
Mr Bradley: Thanks for your presentation, Michael. I have a few concerns about facial recognition and gait recognition.
When a recording kicks in, if it is a live situation like that — a rally, protest or football match — where violence is breaking out, how is it reviewed? Is it reviewed hours or days after the event? How can you be sure that it is accurate after the event, considering advances in AI and other electronic developments? How do you expect that to be regulated? Do you expect it to be regulated in such a way that it will not impinge on individual freedoms?
Mr Birtwistle: There are two questions there, I suppose: one about rights at the individual level; the other about rights at the group level. The net effect of deploying additional surveillance technologies on society is really important. The example that you have mentioned is, perhaps, a little more focused on the individual. A good outcome looks like, say, a police force wanting to use a new technology and going to a regulator, or having to go to a regulator, and saying, "OK. Here is something that does gait recognition. It, theoretically, tells us whether a person is being aggressive and allows us to deploy, in real time, the right people to the right place in order to respond to that". As I said, the regulator is empowered to set the right standards of scientific validity and accuracy for that system and, essentially, to tell the police the point at which their system is able to provide the level of accuracy necessary to ensure that those interventions are not happening wrongfully. That is not a huge step away from how the Scottish Biometrics Commissioner is set up, at least in the Act. I am not familiar with the evaluation or work of the Scottish commissioner since the establishment of the office, but that is the sort of thing that is within its remit. It is about that process of accountability and incentivisation and about ensuring that something is safe enough for the purpose for which you will use it.
I invite the Committee to think about percentage accuracy rates. Quite often, you hear the Metropolitan Police talk about the assessment that they did of one of their systems with the National Physical Laboratory. They hold that up as a sort of standard to show that those systems are now accurate. On its own terms, however, that is an assessment of a single system — a single version of that system — on a single accuracy setting. The Met are saying that, in those exact conditions, the system is not biased and does not produce inaccurate results. What you really want is that, every time you bring out a new version of the system, the regulator says, "We will test it to ensure that it is not broken and that it will not lead to wrongful interventions". Even within the auspices of the accuracy that the Met say that it has, if you have a 95% accuracy rate, you have a 5% inaccuracy rate, and, if you send police officers out on that basis in a context such as that in which they deploy it on Oxford Street in London, a lot of people will experience a wrongful intervention whereby they will be asked by officers whether they are a certain person when they are definitely not that person.
Some important behavioural considerations that fall out of that later down the line are to do with the extent of certainty or reliance that the police then place on that system. Big Brother Watch covered in some detail a famous case of a person who was, essentially, taken to a police station after being stopped despite looking very little like the person in the photo that the police had on record. The fact that the system had flagged them was enough for the police to stick with that decision rather than to gainsay the system.
Hopefully, there were some thoughts there that are relevant to the question of how an institution can ensure that those systems are accurate. Really, a lot of it is about having to go through the watchdog first before you deploy in a context that will, potentially, have serious consequences for an individual afterwards.
Mr Bradley: Thanks very much. Some of the new AI technology that is coming on board can strip away masks, scarves, hoodies etc using composite CCTV coverage of a wide area where a crime has been committed. What are your thoughts on that type of technology as it evolves?
Mr Birtwistle: Those technologies present two classes of risk. One is when they do not work and you deploy them anyway; the other is when they work very well, at which point it becomes a question of privacy.
An Assembly or Parliament debating how those technologies should be governed in the context of setting up a regulator is the right way to ensure that there is at least a minimum public debate about what trade-offs are involved. What we are seeing in England and Wales is the worst of all possible worlds: hesitant deployment and waiting for case law to solve those sorts of questions, often many years down the line and often in very hard and emotive cases.
As to your question about composite imagery and being able to strip away measures that people are taking to protect their privacy, that goes to the question of proportionality. If you were to ask an independent regulator, such as the ICO, it would ask for what purpose you had decided to use a technology that tries not only to identify people but to do so despite their efforts not to be identified. The ICO would take that into account in balancing the rights of that individual against legitimate interest or whatever basis the police were trying to process that on. If you were to ask a court of law, you would get, as per Bridges, a similar balancing exercise. Bridges sets quite a high bar for the situations in which that will be a proportionate and legitimate response.
I am working my way towards an answer, which is, I think, this: the technology that you described has an increasing impact on privacy, so you would expect that the calculations about the invasion of that privacy would be more extreme in that situation.
Mr Bradley: Fair enough. I have just one final question. I will give you a scenario. There was a bank robbery 20 years ago — there probably was no CCTV 20 years ago; say, a number of years ago — and there was good, clear CCTV coverage and composite CCTV coverage. Could a potential bank robber be identified by going back through all that information?
Mr Birtwistle: At a technical level, that person could be identified if the CCTV was of sufficient quality, although 20-year-old CCTV would probably produce a pretty low accuracy rating. That person would obviously have to be on a watch list, so they would have to be known to the police, and that face would need to be run against that footage.
Our general take on retrospective facial recognition is that it should be subject to the same sorts of guidelines. In that instance, on the one hand, there is a question about the privacy rights of everyone else who might be on that CCTV. On the other hand, there is a question about that person, who, because it is not a case of their being on the street right now with the police scanning their face, will not necessarily receive an immediate intervention from a police officer. Those calculations change.
What you really need is oversight to ensure that decisions in that context are made in a proportionate way and that those using the technologies — in this case, the police — have the appropriate guidance to understand where their use crosses the line and where it does not.
The Chairperson (Ms Bunting): It is important to tease these issues out, and there are massive issues about privacy versus public safety. Those are not small issues for us, and your evidence thus far, Michael, has been really helpful.
I will take you back to issues about the commissioner. The Scottish Biometrics Commissioner is accountable directly to the Scottish Parliament and is independent. In Northern Ireland, we have a number of what I will call commissioners — a raft of bodies that are of the Department but not in it, if I may put it that way — which makes it very difficult for there to be accountability. We appreciate that a balance needs to be struck between independence and accountability. Nevertheless, there always needs to be accountability to somewhere. What is the accountability mechanism in England and Wales? I note that the Scottish commissioner has a code of practice. No code of practice is proposed for our commissioner. What is the position in England and Wales?
Mr Birtwistle: My understanding is that the Biometrics Commissioner in England and Wales reports to Parliament. Ultimately, a lot of regulators have reporting obligations to Parliament. Although they will have a sponsor team in the Civil Service that they relate to, ultimately, their accountability is to Parliament. In England and Wales, that tends to preserve some measure of independence. We see that being tested a little in England and Wales at the moment with the Government's growth agenda.
By and large, a Minister will be able, for example, only to send strategic directions to a regulator but not to alter its scope, mandate or powers. There are exceptions with respect to secondary legislation that the Government might be able to issue, but, typically, that is the position. I believe that the Biometrics Commissioner reports to Parliament, and that is one of the reasons why he was able to speak so freely on his concerns about the impact of these technologies.
My understanding of codes of practice — I will write to the Committee if I get this wrong — is that, in England and Wales, they sit somewhere between secondary legislation and guidance in how much force they have; the extent to which the different actors that they might apply to are obligated to comply with them; and, crucially, how much force they have in a court of law. For example, there is a difference in how easy it is to judicially review a code of practice that is enabled by legislation versus, simply, guidance that a regulator has issued.
When thinking about the Data (Use and Access) Bill or the Online Safety Act 2023, often, the gold standard in the kind of guidance that civil society might ask government to ensure that a regulator issues is a code of practice rather than the more informal guidance that a commissioner might be empowered to issue from time to time. The code of practice is a useful tool. It makes sure that everybody is clear on what they should be doing, and it commands a certain level of respect from its addressees.
The Chairperson (Ms Bunting): Thank you. I want to return to accountability. The commissioners in Scotland, England and Wales are directly accountable to their respective Parliaments. What does that look like in practice? What are the pathways?
Mr Birtwistle: A lot of it is about information sharing, reporting and transparency. The Biometrics Commissioner in England and Wales typically issues periodic or annual reports, as well as special reports that he might have commissioned from academics, the research community or, indeed, potentially, from other actors such as the police. Those reports are laid before Parliament for discussion. I may be wrong about this, but I believe that many regulators in the UK often have a reporting duty attached to a particular Select Committee in Parliament, for example. That enables the Committee to exert accountability by drawing the commissioner or commissioners in for questioning. It ensures that the regulator's performance can be assessed and that the regulator has a significant forum in which to highlight the important issues that they are considering.
The Chairperson (Ms Bunting): That is really helpful, Michael. Thank you very much indeed. Does anyone have anything further to ask before Michael leaves us? No. Your evidence has been really useful, and you have highlighted additional issues for us. You have pointed us to several pieces of research and information that we will pursue. That will help us to better scrutinise the Bill and to understand its potential implications and the need for future-proofing. Thank you so much for your time today. I presume that, should we have anything further to ask, I could contact you. Would that be all right?
Mr Birtwistle: Certainly. It has been a pleasure. Thank you.