Data ethicist Juraj Podroužek: “I think it’s correct to have some red lines that set out areas where AI should not venture.”

Technologies are not value-neutral; they are imbued with the values of their creators. How can the ethical values that society deems important be reflected in technologies? And why should companies use their own moral compasses instead of just relying on current legislation? With AI ethics researcher Juraj Podroužek, we talked about the ethical dimension of using AI and biometrics, the right kind of regulation, and the ways in which companies can stay trustworthy.


About Juraj Podroužek

Juraj Podroužek is a senior researcher at the Kempelen Institute of Intelligent Technologies (KInIT) in Bratislava. He earned a PhD in philosophy at the Slovak Academy of Sciences. In recent years, he has focused on the ethics of ICT, data ethics, AI ethics and responsible innovation. He co-founded the informal E-tika group, which investigates the ethical and societal impact of ICT. He is also a member of the national committee on the ethics and regulation of AI, and a member of the European AI Alliance. In his business career, Juraj was an analyst and project manager at IT companies. He also led a team as a VP of customer success at an innovative start-up that conducted AI-driven in-store analytics. Besides his daily work, Juraj designs computer games and writes poetry.

You have a degree in philosophy, but your professional career is connected more to technologies. How did you come to enter the field of technology, AI and its ethics?

I actually have a PhD in philosophy, but I left academia more than ten years ago for the corporate world. I worked mostly for technology companies in different positions.

I started focusing on digital ethics about six years ago. The trigger for me was that, during my work at tech firms, I frequently encountered situations where we needed more than just intuition to solve moral dilemmas. When you work in tech, you see the huge impact your work has on society. The tech can make people’s lives easier and more convenient, but it can also be used against them and do actual harm.

I believe that the technologies that we as designers, programmers and other experts create are actually not value-neutral, and that technology is in fact imbued with the values of its creators. It’s not just like the proverbial knife that can be used for either good or bad depending only on the user. We are already familiar with this process when we try to promote values like security or accessibility – by following existing norms and security standards, or by deploying UX methods.

And the same should be true for ethical values. We should have a process for translating these values into the technology, so that its design supports what we as a society deem important. We’re talking about things such as human dignity, autonomy, privacy protection or transparency.

“When you work in tech, you see the huge impact your work has on society. The tech can make people’s lives easier and more convenient, but it can also be used against them and do actual harm.”

From an ethical point of view, biometrics is currently perceived as controversial – precisely because it touches on these values. How do you, as an ethicist, see the role of biometrics?

Biometrics is actually a perfect example of a technology where good intentions can collide with unintended social impacts. On the one hand, it can help us maintain safety at airports or border crossings. Yet at the same time, it can undermine principles of privacy or dignity. My role as a philosopher and ethicist is to help developers become aware of such value conflicts in their technology. The ultimate goal is trustworthy and ethical technology that achieves its intended purpose while also upholding the ethical principles that society deems important.

This all sounds good, but how do you transfer those values into the actual production process?

Well, you can stand and preach about how, for example, Clearview is using technology in a way that infringes on people’s privacy. But that doesn’t get you very far. What business taught me is that it’s better to be proactive, identify potential social risks ex ante, and try – through proper design – to avoid them before they happen. The current situation in AI regulation provides a good opportunity to start talking about what we actually care about in our work.

Ethics goes beyond legislation. In normal circumstances, there are rules and regulations that you have to abide by, but there is little regulation regarding AI. How do you introduce ethical rules into tech when there is no outside pressure to do so?

There is a proposal by the European Commission on how to regulate the whole AI industry in Europe, and this proposal – the so-called Artificial Intelligence Act – is being widely discussed. But it will still take some years to pass, so we’re currently in a policy vacuum, where many aspects of AI don’t have standardised rules. For example, the concept of responsibility is not firmly established. And this is the perfect time for ethics to come in and fill this vacuum. While the laws are being prepared, companies can be proactive and set trends for future AI regulation. As a philosopher, I believe there are universal human values that we care about as a society, such as doing no harm – non-maleficence.


Kempelen Institute of Intelligent Technologies (KInIT) is an independent, non-profit institute dedicated to intelligent technology research. The institute brings together experts in artificial intelligence and other areas of computer science, such as web and user data processing and information security, with connections to other disciplines, including ethics and human values in intelligent technologies.

And how do you make sure that, once you have the rules on paper, they are actually followed and enforced in the long term, and don’t get sidelined in favour of, say, profit maximisation? After all, if you’re not breaking any laws, why restrict yourself more than necessary?

That’s the million-dollar question. Why should you as a company do anything beyond the legal requirements? I think the most important resource in any tech company is people and their creativity. And these people already have their inner moral compass about what’s good or bad. We can build on that. I believe that when a company ignores ethical norms, the people who co-create the company can reach a point where the conflict of values separates them from the company’s vision and, in some cases, forces them to leave. It’s similar with clients – a company declares its values, and if the clients find out that those values are only professed, not embedded in the company culture, they’ll abandon the company as well. In fact, you may end up with only the clients you don’t actually want.

From a business point of view, if you live your values, this can also be your competitive advantage – a distinguishing factor in the market. When you have strong moral integrity in your company, it can work in your favour over the long term. Both workers and clients will recognise it, and the values will show in the products as well.

“From a business point of view, if you live your values, this can also be your competitive advantage – a distinguishing factor in the market.”

You can see it very well in food production. Some products differentiate themselves with bio, organic or fair-trade labels, and they appeal to customers who want that extra environmental and moral value and are willing to pay more for it. We can have such a seal of trustworthiness for our digital technologies as well.

In recent years we’ve seen a very strong backlash against some AI systems, and facial recognition specifically. Why is it so vocally opposed? You already mentioned Clearview, but there was also ID.me, which was quickly pulled from the IRS systems in the USA even though its algorithm is among the most accurate in the world – and there are other examples.

I think that biometrics, and facial recognition especially, are unique in the way they can invade our physical private space. Some people feel helpless and believe that biometrics can make decisions outside of their control. For example, Israeli surveillance researcher Avi Marciano talks about “mute individuals”, whose bodies “talk” instead of them. By entering an area that’s under surveillance, you let your body talk for you, which can even lead to depersonalisation and a loss of human autonomy and dignity.

“I think that biometrics, and facial recognition especially, are unique in the way they can invade our physical private space.”

The other aspect is the value of personal space itself. If someone or something enters our space without our permission, we feel physically uncomfortable. We see it, for example, in stores and shopping centres, or in elevators. When you get too close to another person, they will probably move away because of the uncomfortable closeness. And biometrics does exactly that, on a large scale, because it gets uncomfortably close. Then there are, of course, the examples of misuse by non-democratic governments or companies, and the chilling effect that can have on a whole society.

Here in Eastern Europe, we also have the historical experience of being constantly spied on by the state, which makes us wary of any technology that makes spying on us even easier. This leads to Big Brother feelings and an instinctive fear of facial recognition in public spaces.

So what we’re talking about is basically an instinct, not a rational reaction. But it shows in legislation as well, as some legislators want to flat-out ban facial recognition for any use, including beneficial ones. So how do we reach the standards you mentioned at the beginning, in a way that people can accept?

The proposed AI Act that I mentioned earlier also talks about prohibited practices. These concern AI systems that clearly violate shared European values. For example, the proposal says that social scoring systems, or systems that exploit human vulnerabilities, are right out. One of the prohibited items is also “real-time” biometric identification deployed in publicly accessible spaces for the purpose of law enforcement. This mainly addresses the mass surveillance we just talked about, which is often conducted without consent.

In general, I think it’s correct to have some red lines that set out areas where AI should not venture – like mass surveillance or manipulation techniques. These technologies, and the systems that use them, are incompatible with trustworthy AI by their very nature – you can’t do mass public surveillance and uphold people’s right to privacy at the same time, for example. But in most areas of AI, even in biometrics, I think there are very few practices that are fully unethical in this way. So for these systems I prefer a risk-based approach to regulation, where you assess the possible risks and address them before entering the market – that is, during the design and development phases.

“In general, I think it’s correct to have some red lines that set out areas where AI should not venture – like mass surveillance or manipulation techniques.”

At the Kempelen Institute of Intelligent Technologies (KInIT), where I lead the team focused on ethics and human values in technology, we conduct research aimed at these proactive, risk-based methods that support the ethical design of AI systems. For example, an airport is a semi-open public space where you expect some level of security screening to be going on. In such a place, biometrics should be permissible – but the potential ethical and societal risks should be properly addressed and prevented. This has to be done ex ante, of course.


ALTAI – the Assessment List for Trustworthy Artificial Intelligence – is a tool that helps businesses and organisations self-assess the trustworthiness of their AI systems under development. The concept of Trustworthy AI is based on seven key requirements:

1. Human Agency and Oversight;
2. Technical Robustness and Safety;
3. Privacy and Data Governance;
4. Transparency;
5. Diversity, Non-discrimination and Fairness;
6. Environmental and Societal Well-being; and
7. Accountability.

Companies such as airports do not usually develop their AI solutions themselves. Why should they care about ethics in their AI or biometric solution? After all, they are not obliged by law to do so, so they can simply go for the lowest price. And how can they actually distinguish between ethical and unethical solutions?

Again, the main focus should be on universal moral principles and values. It may seem that these vary across different cultures around the world. But in AI ethics, there are already sets of values that experts in the field agree upon, and which form the requirements for so-called trustworthy AI – such as human control, safety, privacy, transparency, fairness, responsibility, and social and ecological sustainability. The European High-Level Expert Group on AI has created the Assessment List for Trustworthy AI (ALTAI), which can help you address these values and principles. At the Kempelen Institute, we use tools such as ALTAI to specify what you should actually look for when assessing the social impact of your technological solution.

So you can just click through a form and be done with it?

Not really. Some of the questions in these assessments need expert guidance – especially when you’re not familiar with the concepts they use and need them translated into something that makes sense for your company. An expert on AI ethics can help you think about your processes, and about the situations where your technology is used, from a broader perspective, and can be a good guide for tackling these questions in a sensitive way. Otherwise, you can just click through the assessment and consider it done – a formality. But once you start thinking about it deeply, it takes time both to answer honestly and to implement the answers.

Where do you see the main ethical challenges today, and how can they be addressed?

When we assessed one facial recognition system, we identified over 30 different moral and societal problems. Biometrics has the greatest impact on the protection of privacy and autonomy – there’s the question of how to provide users with alternatives when they object to biometric identification in semi-public spaces, for example.

Transparency is another big issue: you should know, as soon as you enter a space, that AI or biometrics is being used there. But transparency also means explainability in this context: you should be able to understand how the AI system actually reached its conclusion. With deep learning, that’s not always easy to see.

The third big issue is fairness and accuracy. You need to know that your system is not systematically biased against certain groups of people, or you need to be able to provide countermeasures.

Until you address all these issues, it will be very hard for you as a company to earn the trust of people and users, and to lower their fear of your technology.

I am convinced that you cannot just receive instant trust from people. It has to be earned – through a rational explanation of what your technology does, and what it does not do. That’s why it’s also important to think about ethics: you can show the people who come into contact with your technology that you actually thought about the risks, that you actually care about their concerns, and that they can therefore trust you and your systems.


AUTHOR: Ján Záborský
PHOTOS: Dominika Behúlová