A critique of Kate Crawford's “Regulate facial-recognition technology”

Written by Muneeb

Tags: AI, Ethics

Kate Crawford's article examines the ‘technical failures’ and corresponding ‘ethical challenges’ of modern facial recognition systems (FRS), which are built on image databases largely collected from the public and deployed without consent. The author calls for these technologies to be suspended, restricted, and made transparent until adequate policies are in place to govern the biometric identification of individuals at mass scale.

The failure of FRS is viewed through a critical historical lens: when deployed in the real world, these systems have exhibited bias and triggered unwanted consequences. The question we should be asking ourselves is, “Is it right to be worried?” The answer is “Yes!”, because data-driven technologies tend to reproduce the distribution of their training data, including its biases and inequalities. Researchers worldwide strive to make these systems robust by improving their accuracy (or precision), but that cannot be the utmost priority, since some degree of bias-inducing characteristics in the data is inevitable.
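To make the “same distribution, same bias” point concrete, here is a minimal synthetic sketch (my own illustration, not from the article). A classifier is trained on data where one group is heavily under-represented and whose class clusters sit in a different part of feature space; it ends up with a far higher error rate on that group. The group labels, geometry, and all numbers below are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def sample(n_majority, n_minority):
    """Synthetic two-group task: each group is separable on its own,
    but the two groups' class clusters sit around different centres."""
    g = np.r_[np.zeros(n_majority), np.ones(n_minority)]  # group indicator
    y = rng.integers(0, 2, size=g.size)                   # ground-truth label
    centre = g * 1.5 + (2 * y - 1) * 0.8                  # group-dependent geometry
    x = rng.normal(loc=centre, scale=0.5).reshape(-1, 1)  # one observed feature
    return x, y, g

# Train with the minority group at only 5% of the data.
X_train, y_train, _ = sample(9_500, 500)
model = LogisticRegression().fit(X_train, y_train)

# Evaluate on a balanced test set and compare per-group error rates:
# the single learned decision boundary fits the majority group, so the
# minority group is misclassified far more often.
X_test, y_test, g_test = sample(5_000, 5_000)
errors = model.predict(X_test) != y_test
for grp, name in [(0, "majority group"), (1, "minority group")]:
    print(f"{name}: error rate = {errors[g_test == grp].mean():.3f}")
```

Raising headline accuracy on such skewed data does little for the under-represented group, which is exactly why accuracy alone cannot be the priority.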

The world is uncertain. Deep learning (DL), which powers the latest FRS, has not been proven wholly trustworthy or explainable (yet), and perhaps it never will be. That falls short of the standards required for use by law enforcement. Granting autonomy to such systems could be disastrous when they fail (a high false discovery rate), and their success in real time is not guaranteed. Either way, privacy is compromised. I concur with the author here.
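The false-discovery-rate worry is at heart a base-rate effect, which a back-of-the-envelope Bayes calculation makes vivid (all numbers below are hypothetical illustrations, not from the article): when the people being searched for are rare, even a fairly accurate matcher produces mostly false matches.

```python
# Hypothetical mass-screening scenario: scanning crowds for a small watchlist.
prevalence = 1 / 10_000       # fraction of scanned faces on the watchlist
sensitivity = 0.99            # P(match | on watchlist), true positive rate
false_positive_rate = 0.01    # P(match | not on watchlist)

# Bayes' rule: P(on watchlist | match)
p_match = sensitivity * prevalence + false_positive_rate * (1 - prevalence)
precision = sensitivity * prevalence / p_match
fdr = 1 - precision           # false discovery rate among flagged people

print(f"Precision of a match = {precision:.4f}")  # ~0.0098
print(f"False discovery rate = {fdr:.4f}")        # ~0.99: most matches are wrong
```

Under these assumed rates, roughly 99% of flagged people are not on the watchlist at all, which is why handing such systems autonomy is so risky.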

Now that many tech giants have entered the race to build such robust technologies, it follows that policymakers should protect privacy by establishing and legislating boundaries for them. The author proposes a four-point formula: (1) allow these technologies only after critical examination and once protective measures regulating their usage are in place; (2) investigate and review the data they consume (for civil-rights implications, bias, inequality of any kind, and possible impact); (3) allow public research to investigate and scrutinize these technologies; and (4) protect whistle-blowers who reveal critical aspects of such technologies.

In addition to the author’s comments, public data must be procured and used with consent. Furthermore, the author treats policymakers as a clean slate without agendas, which is not entirely realistic: agendas change, and so do policies. There remains scope for backdoor access by many stakeholders, which is where the problem persists. If no laws prevent this, it does not stop there; activity monitoring, logging of people’s private information, and so on become possible. Regulation is expected to prevent these unwanted consequences, which is a valid expectation, provided laws also cover backdoor access. We need to establish laws for the information that exists in the public domain.

On the other hand, these systems are essential in critical places. The author weighs the negatives more heavily than the positives, while there is scope for adding further layers of ‘transparency’, such as coupling the predictions of these FRS to the subsequent judicial process, even if that slows things down; that trade-off is acceptable in most (almost all) cases! So yes, we need regulation for FRS, but we need more than regulation to ensure transparency.

References

  • [1] Kate Crawford (2019). “Regulate facial-recognition technology”. Nature 572, 565. DOI