AI researcher Martin Tamajka: “If we are to trust AI in courtrooms, it needs to justify its decisions.”
AI is transforming jobs across a wide range of industries. However, concerns remain about relying on it heavily in areas where people's lives or futures are at stake, such as medicine or law. In these cases, it's not enough for AI to simply produce an answer – it also needs to be able to explain how it arrived at that answer.
Seeing past the marketing fluff: How to identify trustworthy biometric providers
Today, the main risk of using biometrics is not inaccuracy, but failing to protect sensitive personal data. Any intrusion on privacy, real or perceived, puts the reputation of the companies using the technology at risk and can seriously damage customer trust.
Data ethicist Juraj Podroužek: “I think it’s right to have some red lines marking out areas where AI should not venture.”
Technologies are not value-neutral; they are imbued with the values of their creators. How can the ethical values that society deems important be reflected in technologies?