During the 2020 Delhi riots, Delhi Police used facial recognition technology (FRT) to identify
suspects. Reports revealed that minorities, particularly Muslims, were wrongly
flagged due to biased algorithms. The police considered an 80% accuracy rate
“acceptable,” but experts argue this threshold is dangerously low [1], [2]. A
study found that Muslims in Delhi are disproportionately targeted, reflecting
systemic biases [3].
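To see why experts call that threshold dangerously low, it helps to work through the base-rate arithmetic: when a system scans a large crowd, even a small false-match rate produces far more innocent “hits” than genuine ones. The sketch below is purely illustrative; the crowd size, number of suspects, and false-match rate are assumptions, not figures from the Delhi deployment, and the 80% figure is read here simply as the true-match rate.

```python
# Hypothetical illustration of the base-rate problem in crowd-scale face matching.
# All numbers below are assumptions chosen for illustration only; none come from
# the Delhi deployment.

crowd_size = 100_000          # assumed number of faces scanned
true_suspects_present = 20    # assumed suspects actually in the crowd
true_match_rate = 0.80        # the cited "80% accuracy", read as the true-match rate
false_match_rate = 0.01       # assumed share of innocent people wrongly matched

correct_hits = true_suspects_present * true_match_rate
false_hits = (crowd_size - true_suspects_present) * false_match_rate
total_hits = correct_hits + false_hits

print(f"Correct hits: {correct_hits:.0f}")
print(f"False hits:   {false_hits:.0f}")
print(f"Share of flagged people who are innocent: {false_hits / total_hits:.1%}")
# With these assumptions the system flags roughly a thousand innocent people for
# every sixteen genuine matches, which is why a headline accuracy figure says
# very little about how safe the tool is in practice.
```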
India’s biometric ID system, Aadhaar, integrates FRT, but
poor-quality scans in rural areas often misidentify manual labourers (many from
lower castes). These errors deepen inequality [4].
In the U.S., a 2018 MIT study of commercial facial-analysis systems found error
rates as high as 34.7% for darker-skinned women versus 0.8% for lighter-skinned men [5].
Robert Williams, a Black man, was wrongfully arrested in Michigan in 2020 after
a faulty FRT match [6]. China uses FRT to target Uyghur minorities, enabling
mass detentions [7].
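Disparities like these only become visible when a system is evaluated separately for each demographic group rather than with a single overall accuracy figure. The sketch below shows the mechanics of such a disaggregated evaluation on a tiny invented dataset; the groups, labels, and predictions are made up and are not meant to reproduce the MIT study’s numbers.

```python
# Minimal sketch of a disaggregated (per-group) error-rate evaluation.
# The records are invented for illustration; a real audit would use the
# system's actual predictions on a labelled benchmark.

from collections import defaultdict

# Each record: (demographic group, ground truth is a match?, system said match?)
records = [
    ("darker-skinned women", False, True),   # false match
    ("darker-skinned women", True,  True),
    ("darker-skinned women", False, False),
    ("lighter-skinned men",  False, False),
    ("lighter-skinned men",  True,  True),
    ("lighter-skinned men",  False, False),
]

errors = defaultdict(int)
totals = defaultdict(int)
for group, truth, predicted in records:
    totals[group] += 1
    if truth != predicted:
        errors[group] += 1

for group in totals:
    print(f"{group}: error rate {errors[group] / totals[group]:.1%} "
          f"({errors[group]}/{totals[group]})")
# A single overall accuracy number would average these groups together and
# hide exactly the kind of gap the study reported.
```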
Delhi installed 1,000+ FRT-enabled cameras by 2022, yet no
clear privacy rules exist to govern their use [8]. Despite a 2017 Supreme Court
ruling declaring privacy a fundamental right, enforcement is lax [9]. The 2023
Digital Personal Data Protection Act (DPDP) exempts government agencies from
consent requirements, raising fears of misuse [10].
The EU’s GDPR requires explicit consent for biometric data
and limits public-space FRT, though loopholes remain [11]. Meanwhile, the U.S.
has no federal FRT laws—some cities ban police use, while others don’t,
creating a patchwork of protections [12].
FRT fuels China’s social credit system, prioritizing state
control over privacy. Citizens have little recourse against surveillance [7].
When FRT makes mistakes, victims often have no way to seek
justice. India’s opaque systems and lax corporate oversight worsen the problem.
Delhi Police buys FRT from private firms like Staqu without
public audits. Proposed national systems, like the Automated Facial Recognition
System (AFRS), would centralize data without accountability [13], [14]. Companies
like Facegram sell flawed tech to police, skipping bias checks [3].
In 2020, Microsoft, Amazon, and IBM paused FRT sales to
police over ethical concerns [15]. The EU’s AI Act labels public-space FRT
“high-risk,” mandating audits and transparency [16]. France has also fined a firm
over its handling of facial data, reinforcing accountability [16].
Lower-caste groups (Dalits) and Muslims face heightened
surveillance, as seen during the Delhi riots [3]. Rural areas with poor internet
connectivity and lower-quality cameras see more errors, hurting marginalized communities [4].
The EU uses FRT to profile migrants at borders [11]. In the
U.S., predictive policing tools over-target Black neighbourhoods, worsening
racial disparities [17].
Mandate public audits of FRT systems to catch bias, as
proposed in the EU’s AI Act [16].
Train FRT on India-specific datasets (like IIT Madras’ “AI
for Social Good”) to reduce errors [18]; a simple dataset-composition check is
sketched after these recommendations.
Reform India’s DPDP Act to remove government exemptions and
align with GDPR’s consent rules [10], [11].
Adopt UNESCO’s AI ethics guidelines to prioritize human
rights and transparency [19].
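One concrete piece of the dataset recommendation above is simply measuring how well each community is represented before training. The sketch below checks the demographic composition of a hypothetical labelled training set; the group names and counts are invented for illustration, and a real check would read the metadata of the actual training images.

```python
# Sketch: check how balanced a face-recognition training set is across groups.
# The labels and counts are hypothetical and exist only to show the idea.

from collections import Counter

# One demographic label per training image (invented distribution).
training_labels = (
    ["urban, lighter-skinned"] * 7000
    + ["urban, darker-skinned"] * 1800
    + ["rural, darker-skinned"] * 900
    + ["rural, lighter-skinned"] * 300
)

counts = Counter(training_labels)
total = sum(counts.values())

print(f"{'group':<26}{'images':>8}{'share':>9}")
for group, n in counts.most_common():
    print(f"{group:<26}{n:>8}{n / total:>9.1%}")
# A table this skewed is an early warning that error rates will be worse for
# under-represented groups, and that more images from those groups (or
# re-weighting during training) are needed before deployment.
```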
FRT could enhance safety—but not without guardrails. India’s
weak regulations and diverse population amplify risks, while the EU and U.S.
grapple with their own challenges. By learning from global models (like audits
and diverse datasets) and closing accountability gaps, we can ensure FRT serves
justice—not injustice.
Stay informed. Demand transparency. Your face shouldn’t cost
you your rights.
As technology evolves, facial recognition tools are expected to become faster and more accurate. However, pressure is also rising for governments and companies to create stronger privacy laws, transparency rules, and ethical guidelines. Striking the right balance between security and civil rights will shape the future of how data science impacts policing across the world.
[1] Delhi Police's Use of Facial Recognition Technology
[2] Delhi Police's Facial Recognition System Concerns
[3] Facial Recognition for Policing in Delhi
[4] Internet Freedom Foundation on Aadhaar Errors
[5] MIT Study on FRT Bias
[6] ACLU on Wrongful Arrest
[7] China's Use of FRT
[8] Delhi's Surveillance Expansion
[9] Puttaswamy Judgment
[10] India's DPDP Act
[11] EU's GDPR Rules
[12] U.S. FRT Privacy Laws
[13] Delhi Police & Staqu
[14] National AFRS Risks
[15] Corporate Moratoriums
[16] EU's AI Act
[17] U.S. Predictive Policing
[18] IIT Madras' AI Initiative
[19] UNESCO's AI Ethics