Former NYPD inspector Joseph Courtesis:
“We create less bias when we use facial recognition algorithms in our work.”

Concerns about biometric recognition algorithms often come down to the issue of racial or gender bias. However, Joseph Courtesis, a former NYPD inspector who served for 27 years, thinks we should not label the entire industry as biased based on some algorithms that may not have performed well years ago.


About Joseph Courtesis

Inspector Joseph Courtesis (New York, NY) retired from the NYPD after serving for 27 years. He is a former Commander of the 105th and 106th Precincts, Central Investigations Division, and the NYPD’s Real Time Crime Center. He is now the Founder and President of JCour-Consulting LLC, where he assists technology companies in aligning their products with ethical-use policies. Joseph serves on the Crime Prevention Committee of the International Association of Chiefs of Police (IACP) and the Security & Integrity Group of the Biometrics Institute.

“Algorithms today, in some cases, perform with 99.88% accuracy. In fact, for developing an investigative lead in criminal investigations, they are almost too perfect. But facial recognition leads are not supposed to be perfect, nor be the sole factor for producing probable cause to arrest,” explains Inspector Courtesis.

Still, biometric technology can be misused if left unregulated, and Inspector Courtesis believes an accountable and transparent policy is key to maintaining a balance between public safety, civil rights and the integrity of criminal cases. After his retirement from the NYPD, he founded JCour-Consulting where he continues to promote policies about the ethical use of biometrics in law enforcement. 

We talked to Inspector Courtesis about his years in the NYPD, how the NYPD uses biometrics in criminal investigations, whether biometrics is used ethically in law enforcement, the harmful myths often connected to facial recognition, and what constitutes a good enforceable policy. 

The interview is sectioned into five standalone parts, each covering the technology from a different perspective. You are free to skip to the part that interests you most, or read from the beginning.

Inside the NYPD’s Real Time Crime Center

What does it mean to serve as an inspector of one of the most well-known police forces in the world – the NYPD? Can you tell us more about what your job looked like? 

Once you reach the rank of captain, you move from patrolling the streets and handling radio calls to managing a whole precinct, ensuring everything runs smoothly and the officers do their jobs right.

From there, you can advance even further and move on to overseeing multiple precincts and their specialised programs. Imagine being the go-to person in fighting domestic violence or tackling robberies in all those precincts. Ultimately, you are helping the precinct commanders achieve their goals. 

However, the most sought-after assignment as an inspector is managing a specialised division. Think the Real Time Crime Center, the central robbery division or the crime scene unit. These top-notch units are staffed by highly skilled investigators or police officers with a specific focus and objective, and it is the inspector’s job to supervise them.


The precincts of New York City

Precincts are the front-line police stations where officers are stationed and assigned to patrol and address community safety concerns within their designated areas.

But if NYC is divided into 77 precincts, how can there be precincts with numbers 106 and higher? 

According to the NY Times, the numbering system goes higher than the actual number of precincts because it was designed to provide for future expansion. Some numbers remain unused in each borough to allow for the creation of new precincts. Over time, precincts have been added, abolished or merged with neighbouring ones for operational efficiency.

For people who are not familiar with the American system, how many police officers are there in the average precinct?

A small precinct can have upwards of a hundred sworn police officers. A large precinct could have in excess of 300 sworn police officers and cover a much denser community.

For example, at the rank of Captain, I became the commanding officer of a mid-sized precinct, the 106th Precinct, which had about 250 sworn police officers. I was then promoted and took over the 105th Precinct, a larger command with well over 300 officers that covered an area of about 13 square miles.

“The problem we were trying to solve was data – we had a lot of it at our fingertips but could not use it appropriately.”

What is a Real Time Crime Center (RTCC) in the context of the NYPD? Why was it created? 

The problem we were trying to solve at that time was data – we had a lot of it at our fingertips but could not use it appropriately.  

If I wanted to gather all available information about you back then, I would have to search through at least 35 different databases, which means dealing with 35 separate sign-ins and various levels of authorisation. I might not even have permission to access all of them, and honestly, I may not even know all of them existed.  

So, in 2005, we established the Real Time Crime Center. At that time, it was the first of its kind and it cost us approximately US$11 million. Our goal was to get access to all the data we needed in just one query. To do that, we took the data, put it into our crime data warehouse, and built a search engine above it that instantly allowed us to access everything. 
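To make the “one query” idea concrete, here is a minimal sketch of how records from separate source systems can be consolidated into a single searchable index. The class, field names and data are illustrative assumptions, not the NYPD’s actual warehouse design.

```python
# Minimal sketch of a consolidated "crime data warehouse": records from
# many source systems are copied into one index that answers a single
# query. All names, fields and data are invented for illustration.
from collections import defaultdict

class CrimeDataWarehouse:
    def __init__(self):
        # inverted index: search term -> consolidated records
        self.index = defaultdict(list)

    def ingest(self, source_name, records):
        """Copy records from one source system into the warehouse."""
        for record in records:
            tagged = {**record, "source": source_name}
            for value in record.values():
                self.index[str(value).lower()].append(tagged)

    def search(self, term):
        """One query returns hits from every ingested source."""
        return self.index.get(term.lower(), [])

warehouse = CrimeDataWarehouse()
warehouse.ingest("arrests", [{"name": "John Doe", "case": "A-17"}])
warehouse.ingest("complaints", [{"name": "John Doe", "case": "C-42"}])

# One sign-in, one query, hits from both systems at once.
for hit in warehouse.search("John Doe"):
    print(hit["source"], hit["case"])
```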

What did the RTCC look like when it first started? 

First, there was the investigative support section – the heart of the Real Time Crime Center, with a team of over 100 talented investigators. They worked around the clock and focused on everything from emergency calls to aiding in prosecutions. Their expertise extended beyond traditional methods as they leveraged our camera infrastructure and used mapping tools, data mining and link analysis.

These tools enabled them to uncover hidden connections within the data, map out critical information and provide proactive assistance during significant incidents such as shootings, homicides and hostage situations. 

The team’s access to data was not limited to just their own criminal database either. They tapped into external law enforcement databases, public sources, social media, as well as the extensive camera infrastructure and the 911 system.


A Social Media Investigation to Solve Gang-Related Homicide

Watch the talk that Inspector Joseph Courtesis gave in 2023 at the World Police Summit in Dubai.

How did the Real Time Crime Center gradually evolve? You have already mentioned some tools you used during your time there. 

Our goal was to improve whatever the investigators were faced with. Take, for example, getting information from social media. As soon as the popularity of social media began to grow, it quickly became a critical component of almost every investigation. Over time, though, although that information was extremely valuable in helping us solve cases, its sheer volume made working with the data overwhelming.

To help our investigators, we created a subunit of the RTCC called the Smart Team. It consisted of social media researchers and analysts who dug deep into different social media sites and sifted through publicly available information. If necessary, they put in a preservation order and filed for a search warrant to access non-public information from platforms like Facebook.

The second thing that evolved was facial recognition. As more cameras went up, we were finding an increasing amount of video evidence at crime scenes. Still, you can imagine how hard it is to identify somebody based on camera footage. Facial recognition helped us immensely with suspect identification from video. 

However, we were aware of the technological limitations and did not want to scale the biometric technology to the entire department. Rather, we created a subunit in the RTCC dedicated to facial recognition searches, which made it possible to govern the whole process under a comprehensive policy. 

You also mentioned link analysis as an important aspect of connecting disparate sources of data, even ones outside law enforcement databases. Can you give us an example of how you used it in your day-to-day work? 

I am basing this example on what could happen. Imagine you and I were members of a criminal organisation but when interviewed by the police, you claim we never met. How could the police uncover our hidden connection? 

The key is to follow the trail of data by finding connections across large datasets that would not typically cross-pollinate. For example, link analysis tools can identify how the single phone call you made during an arrest matches a phone number in a public database that belongs to me.

“We were aware of the technological limitations and did not want to scale facial recognition to the entire department. Rather, we created a subunit dedicated to facial recognition searches, which made it possible to govern the whole process under a comprehensive policy.”

It all started with something as mundane as grocery shopping. When I swiped my supermarket membership card, I did not realise that my phone number became intertwined with the store’s database. And what do supermarkets do? They sell this data. This finding leads the police investigator to ask you the defining question: “Why did you use your one phone call to call Joe if you claim to be strangers?”

You see, no human being would be able to find that connection, but with the link analysis, police investigators can ask the system to search through disparate databases and uncover these connections in seconds.
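As a rough illustration of the link analysis Inspector Courtesis describes, the toy sketch below joins two invented datasets on a shared phone number. The dataset names, fields and records are all hypothetical.

```python
# Toy link analysis: scan disparate datasets for records that share an
# attribute (here, a phone number). All data is invented.
arrest_calls = [
    {"caller": "Suspect A", "dialed_number": "555-0142"},
]
commercial_records = [  # e.g. data resold from a loyalty-card programme
    {"name": "Joe", "phone": "555-0142"},
    {"name": "Unrelated Person", "phone": "555-0199"},
]

# Build a lookup over the second dataset, then join on the shared field.
by_phone = {rec["phone"]: rec for rec in commercial_records}

for call in arrest_calls:
    match = by_phone.get(call["dialed_number"])
    if match:
        print(f'{call["caller"]} dialed a number linked to {match["name"]}')
```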

The use of biometrics in criminal investigations

How widespread is the use of biometrics in police forces?

It is growing every day. Fingerprints are still the gold standard in identification. DNA testing has developed so significantly that we can now get DNA results in about two hours. Facial recognition is evolving so quickly that the accuracy of these algorithms is almost equal to that of fingerprint algorithms.  

Can you walk us through the process of applying facial recognition to a piece of evidence from a crime scene?

Let’s say a bank got robbed. In that bank, there was a camera that captured the perpetrator committing the crime. The police upload the image into the system and run it through a database or a repository of images – typically mugshots.

The system then produces a gallery or a list of the most likely candidates. For example, out of ten million photos, it will produce the top ten or the top hundred candidates. You decide how large you want that list or gallery to be. Some agencies rank it by a threshold setting, others by the top ten or top three. It is all part of your agency’s policy.
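Schematically, this 1:N search can be sketched as follows: compare a probe image’s face embedding against every gallery embedding and return a ranked candidate list, sized either by a top-N cut-off or by a similarity threshold. The random embeddings below stand in for the output of a trained face-recognition model; nothing here reflects any real system.

```python
# Schematic 1:N facial recognition search: rank a gallery of embeddings
# by similarity to a probe and return a candidate list. Random vectors
# stand in for embeddings from a trained face model.
import numpy as np

rng = np.random.default_rng(seed=7)
gallery = {f"mugshot_{i}": rng.normal(size=128) for i in range(10_000)}
probe = rng.normal(size=128)

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

scores = sorted(
    ((cosine(probe, emb), name) for name, emb in gallery.items()),
    reverse=True,
)

top_n = scores[:10]                                   # "top ten" policy
above_threshold = [s for s in scores if s[0] > 0.35]  # threshold policy

for score, name in top_n:
    print(f"{name}: similarity {score:.3f}")
```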

Now, with an algorithm accuracy of 99%, if the perpetrator is already in the database, they are likely to pop up first on that list. However, you still have to go through the list and analyse the candidates thoroughly. Most of the time, it will be easy to spot who the perpetrator is not. You will go through the list mumbling “That’s not him, that’s not him…”  

However, when the perpetrator is not already in the database, the algorithm will still produce a candidate list of similar-looking people. For this reason, all facial recognition leads have to go through a thorough vetting process. During this process, the goal is to “rule out” the lead and try to prove the algorithm wrong by deploying human review and investigative follow-up.

The investigative follow-up will either rule the lead out or produce corroborating evidence that helps establish probable cause to arrest the suspect. For example, you may look at the lead subject’s open-source social media and find incriminating images or posts. On the other hand, you may find images that support a strong alibi, such as a picture of them at Disneyland taken on the same day the crime was committed.

So, for police officers and investigators, facial recognition is one step in a multi-step image identification process and is never the sole factor in producing probable cause to arrest.

I speak for myself and not for other agencies, but I did not even show my investigators the percentage accuracy of each match in the list of candidates because I did not want it to influence their decision. I would rather they just look at each candidate as if they are probably not the subject. 

Even though the algorithm is a fantastic tool for developing leads, it is not perfect. You have to keep in mind these are just leads. The important distinction is that you are more likely to make a mistake identifying a perpetrator without this technology than with it. 

“Even though facial recognition is a fantastic tool for developing leads, these are just leads. The important distinction is that you are more likely to make a mistake identifying a perpetrator without this technology than with it.” 

Can you give examples where facial recognition has been very helpful in a criminal investigation? 

It is extremely helpful in criminal investigations captured by video surveillance and cameras. Imagine a burglary, which is one of the most difficult crimes to solve. Somebody breaks into your house while you are not home and when you come back you see a broken window and that your things are gone. There are no witnesses, and a lot of the time, no evidence. 

Typically, the only way we solved these crimes was through some sort of forensics – the perpetrator may have left a fingerprint or DNA behind. Today, most people have their own camera systems that might have captured the perpetrator committing the crime, giving us significant leads to follow up on. This has increased the number of leads available to us, resulting in improved clearance rates for these types of crimes. 

Is biometrics used ethically in law enforcement?

You now advise about the ethical use of biometrics, especially facial recognition. What exactly are you advocating for? 

Well, since facial recognition is one of the newer biometric technologies used by law enforcement, it comes with some limitations we need to tackle. It is still not as good as fingerprints or DNA. That is why I am working on setting expectations for law enforcement departments that want to use it in their investigations.

They need to understand that facial recognition does not produce probable cause to arrest, but merely a lead in the investigation. I am also educating about the strengths and weaknesses of this technology and helping to create a policy about its ethical use that is not only transparent but also enforceable.  

Let us stay with this topic for a bit. When might the use of facial recognition in criminal investigations be problematic?  

To put it into perspective – law enforcement has been identifying suspects through images long before there was an algorithm around. We just were not very good at it and therefore could not produce strong leads with it.  

Facial recognition improved our efforts but, just like adding any other technology to an already existing process, you must think policy before technology. I admit we probably could have been quicker in establishing our policy regarding facial recognition because without understanding the weaknesses of the technology, we cannot ensure that, for example, the facial recognition lead is not too suggestive. 

Remember that this is a strong algorithm that looks through millions of images to find the best match for the probe image you provided. However, before acting on the recommended lead, it is important to thoroughly evaluate and validate it. There is a decent amount of vetting that needs to go into this process that people might not understand if they do not understand the technology. 

Can you also say why or how using it in a controversial way could affect human rights? For example, using it to identify protesters at a rally? 

This also falls into the lack of a comprehensive policy issue. And to begin with, policy needs to be consistent with the way you conduct law enforcement investigations.  

Law enforcement should not use facial recognition to identify people exercising their rights to free speech and peaceful protest. While peaceful protesting is typically protected, there is a distinction when it crosses the line into criminal activities, such as assaulting police officers or damaging property. In such cases, the use of technology in law enforcement investigations becomes appropriate, if it remains within the legal boundaries. 

“The fact that facial recognition cannot be used in ways that violate human rights has to be clearly stated in the policy. While it may seem like common sense, it is important to have this written down to ensure responsible use.”

The policy should clearly state that facial recognition technology cannot be used in ways that violate rights or cause community unrest. While it may seem like common sense, it is important to have this written down to avoid any problems or misinterpretations and ensure responsible use. 

How do you go about creating these policies with law enforcement?  

It must be clear from the policy when not to use facial recognition, and when it is expected to be used. For example, I would advise never using facial recognition as the sole source of identification, nor would I use it with live video feeds.

When should you use it? Well, I suggest you use it when you have reasonable suspicion that someone has committed or is about to commit a crime. Also use it when trying to identify a witness, aid somebody who cannot identify themselves or verify an identification.  

The harmful myths connected to facial recognition 

I want to discuss a report by the Georgetown Law Center on Privacy & Technology, which states that there is currently no requirement for police in the U.S. to disclose when facial recognition is used during an investigation. What’s your view on this?

I am familiar with the Georgetown Law Center on Privacy & Technology; however, a lot has changed since this report was first published.  

Many U.S. states now have legislation in place that governs the use of facial recognition technology. In fact, the state of Virginia, which banned the use of facial recognition technology after reading this report, has since reversed that ban and established guidelines for the ethical use of facial recognition technology. 


Is the issue of bias still relevant?

For the past 20 years, the Face Recognition Vendor Test (FRVT) developed by the National Institute of Standards and Technology (NIST) has been the world’s most respected evaluator of facial recognition algorithms.

FRVT’s Ongoing series releases monthly analyses on the performance of facial recognition algorithms across race, gender and other demographic groups.

Based on the most recent evaluation, each of the top 150 algorithms is over 99% accurate across Black male, white male, Black female and white female demographics. For the top 20 algorithms, the accuracy of the highest-performing demographic versus the lowest varies only between 99.7% and 99.8%. Unexpectedly, white male is the lowest-performing of the four demographic groups for the top 20 algorithms.

So, before the guidelines were created, the police were not required to disclose the use of facial recognition? 

This statement is factually incorrect. Current policies, which are being developed state by state, now include a disclosure stipulation specifically as it relates to facial recognition technology leads. However, even prior to these stipulations, all leads generated during an investigation are routinely memorialised in a criminal case and are, therefore, discoverable by the defence attorneys.

Everything you do in an investigation is required to be discoverable. Why would I need a specific statement just for facial recognition? This statement leads us to believe that since no one has specifically stated that the use of facial recognition must be discoverable, the police do not disclose it. 

For example, my agency’s facial recognition policy did not specifically state this; however, there was never a case where we used the technology and failed to memorialise it in the case folder.

Thank you for clarifying. It is useful, especially for those who are not aware of the processes police use during an investigation. 

It is an important subject because if we do not address this, then perception becomes reality. 

Another big concern of the public is bias. It is believed that facial recognition algorithms could disproportionately harm Black people, as the technology has higher error rates for people with dark skin.

This statement is misleading as well. The truth is that there are hundreds, if not thousands, of facial recognition algorithms being tested by the National Institute of Standards and Technology (NIST), so to say that the technology overall has a higher error rate for people with darker skin just does not hold.

Yes, there may be some algorithms that had higher error rates for people of colour, I do not deny that. But we should not label the entire industry as biased based on some algorithms that may not have performed well. 

On top of that, the introduction of convolutional neural networks and deep learning, which are nowadays embedded in facial recognition algorithms, has improved them at a rate that exceeds Moore’s Law – the observation that computing power roughly doubles every two years.

“There may be some algorithms that had higher error rates for people of colour. But we should not label the entire industry as biased based on some algorithms that may not have performed well.”  

Algorithms today are in some cases performing with 99.88% accuracy across all demographics – it is pretty incredible. However, the discussion about algorithms often comes down to talking about inferior algorithms tested 7–8 years ago.


Doppelganger matches  

They occur when a facial recognition algorithm produces a match between two individuals who bear a striking resemblance to each other but are not the same person. In other words, the algorithm incorrectly identifies someone as a match when in fact they are just a doppelganger, or look-alike, of the person being searched for. This can occur due to similarities in facial features.

We can mitigate the chances of an algorithm creating a doppelganger match by the following measures (the last two are sketched in code after the list):

  • Robust Training Data 
  • Unique Feature Extraction Techniques 
  • Algorithm Refinement 
  • Confidence Thresholds 
  • Human Verification 
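A minimal sketch of the last two mitigations, using invented candidate names and scores: candidates below a confidence threshold are discarded, and everything that survives is routed to a human examiner as a lead rather than treated as an identification.

```python
# Sketch of two mitigations: a confidence threshold plus mandatory
# human verification. Candidates and scores are invented.
CONFIDENCE_THRESHOLD = 0.90

candidates = [
    {"name": "Candidate 1", "score": 0.97},
    {"name": "Candidate 2", "score": 0.93},  # a possible doppelganger
    {"name": "Candidate 3", "score": 0.61},
]

# Confidence threshold: drop weak matches outright.
strong = [c for c in candidates if c["score"] >= CONFIDENCE_THRESHOLD]

# Human verification: surviving matches are leads for review, never IDs.
for c in strong:
    print(f'{c["name"]}: score {c["score"]}, queued for examiner review')
```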

So the algorithms deployed today are far superior to the ones created in the past.

You may call me crazy, but for me, algorithms today may be too accurate. When you are developing a lead, you never want the lead to be too suggestive. Society wants a perfect solution for lead development, but leads are not supposed to be perfect – if they were, they would not be called a lead.  

Our aim is to create an effective process to generate leads, and these algorithms are far superior to humans in creating leads from images. Actually, talking about bias, I think it would be more biased to not use these algorithms than to use them. 

Do you see any potential danger related to your comment about the algorithms being too perfect for creating leads? 

Yes, honestly, I do think that we are possibly entering into a different problem that we did not anticipate. The newer algorithms are more likely to produce a doppelganger match from a large dataset than the older algorithms.

Also, if the algorithms are 99.88% accurate, are we aware enough to find that small percentage where they are wrong? However, I do not want to scare anybody. We do not have to move away from facial recognition, because there are steps that can mitigate these issues. 
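A back-of-envelope illustration of that “small percentage”: if the remaining 0.12% is treated as a per-comparison error rate (an assumption made purely for illustration, not a claim about any specific algorithm), even a tiny rate scales up noticeably across a large gallery.

```python
# Back-of-envelope scaling of a tiny per-comparison error rate across a
# large gallery. The 99.88% figure is from the interview; treating the
# remainder as a per-comparison error rate is an assumption.
error_rate = 1 - 0.9988      # 0.12% per comparison, assumed
gallery_size = 10_000_000    # the "ten million photos" example above

expected_errors = error_rate * gallery_size
print(f"Expected erroneous candidates: {expected_errors:,.0f}")  # 12,000
# Even a minuscule error rate can surface thousands of look-alikes
# across millions of images, which is why leads must be vetted.
```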

I recently wrote a resolution that recommends an investigative checklist or best practices for vetting a lead generated by facial recognition technology. I submitted the resolution to the International Association of Chiefs of Police for review.

“Leads are not supposed to be perfect. Our aim is to create an effective process to generate leads, and these algorithms are far superior to humans in creating leads from images. Actually, talking about bias, I think it would be more biased to not use these algorithms than to use them.” 

Finding a balance between safety, civil rights and case integrity

How can law enforcement benefit from face recognition while minimising the risk to individual privacy and other issues such as the integrity of the criminal investigation? 

This is where the policy comes in. When developing the policy, it is crucial to find a balance between public safety, civil rights and the integrity of a case. It is a delicate balance because we need to protect the ongoing cases while being transparent with the community. 

The policy aims to find the right rules for using facial recognition so that it promotes trust, accountability and transparency. By finding this balance, we can address concerns and make sure the technology is used responsibly. 

“When developing the policy, it is crucial to find a balance between public safety, civil rights and the integrity of a case. It is a delicate balance because we need to protect the ongoing cases while being transparent with the community.”  

Can you provide an example of how policy balances these three important roles?  

There are currently three identifiable ways of using facial recognition. The first one is post-investigatory facial recognition, where an officer has an image of the perpetrator from a crime scene and their task is to identify them.

The second one is called mobile ID verification. Imagine an officer pulls you over for a minor traffic violation, and you’ve forgotten your ID at home. The officer asks for your permission to take a picture of you and identify you through a mobile ID verification system. It is an opt-in, therefore the officer is not violating anybody’s civil rights.  

The third one is live facial recognition, where the technology is connected to video surveillance equipment and identifies people in public as they go by. This is the one that the public worries about the most, and the most difficult one with which to ensure individual privacy and civil rights. In this case, we need more scrutiny in our policy. I never deployed live facial recognition and I cannot think of anybody in the U.S. who has deployed it.

There are, however, agencies abroad that deploy it. They usually require a warrant for this type of facial recognition. So, if they wanted to use the technology, they would have to get permission from a higher authority and then deploy it. In the case of an emergency, they could deploy it without permission, but they would then have just 24 hours after the incident to explain why they deployed it.

“I have no objections to regulations on this technology because I do not want to live in a country where my face is being captured against my will and put into some database.” 

To ensure privacy and protect individual civil liberties, it is important to actively develop a comprehensive policy for each use case, identifying the prohibited uses and the acceptable-use scenarios. 

Do you think that using this technology for 24 hours without a warrant is a good policy, or could it be improved somehow? 

If I were going to deploy live facial recognition, I would think that it is a rather good policy and the only way of appropriately governing the police and minimising the risk of violating people’s rights and civil liberties. 

To give you a perfect example of a use case, imagine you are at a state fair and there is a children’s rides section. By putting up a temporary surveillance system in that section with a sign that says “This area is monitored by facial recognition”, you could monitor the presence of people who are on the sex offender monitoring list and thus have an order to stay away from children.

For me, this is an example of a use case the public would have no problem with. I can imagine the people who are on the sex offender monitoring list could have a problem with it, but the public should be fine.

Are there threats from facial recognition to the police as well? For example, when used by crime gangs? 

There are credible threats, especially from companies that may offer facial recognition technology outside of a law enforcement setting and use it to search against social media databases. This creates a dangerous situation for undercover officers who work their whole careers pretending not to be law enforcement. This type of technology could expose their connection to law enforcement and put them and their families’ lives in danger.

In light of these potential dangers, I would highly suggest that we take a second look at our undercover program and consider suspending or revising it before it is too late.

Is there some way policy could help to tackle this problem? 

I sadly do not have an answer to this because how do you put a policy on a private company and make sure you can trust them? I would like to sit down with some subject matter experts and discuss it in a lot more detail. It is definitely the next topic I am looking to explore. 

What is your future outlook on biometric data in criminal investigations from both ethical and technological points of view? 

Moving forward, we are going to see many more comprehensive policies being developed. Eventually, each U.S. state will have a policy that governs the way law enforcement uses the technology, and at that point, the federal government will consolidate all those policies into one federally regulated policy.

I have no objections to regulations on this technology because I do not want to live in a country where my face is being captured against my will and put into some database. I am not looking for that. I do not think anybody is. 


AUTHOR: Kristína Zrnčíková
ILLUSTRATIONS: Matej Mihályi