
Address threats to ensure responsible AI deployment

Concerns about biased algorithms, discriminatory AI models and misuse of facial recognition tech underscore the need to establish legal frameworks.


Sharad Satya Chauhan
DGP & MD, Punjab Police Housing Corporation

PRIME Minister Narendra Modi recently expressed concern about the rising prevalence of deepfake videos; he described the use of artificial intelligence (AI) for creating deepfakes as ‘problematic’. The PM called on the media to educate the public about the associated risks.

In recent years, the rapid advancement of AI technology has introduced both opportunities and challenges for maintaining public order and law enforcement. While AI offers innovative solutions to various societal problems, it also creates new risks, notably deepfakes and social media manipulation, that can translate directly into law and order problems.

The AI challenge lies not only in the problems it can create but also in the inherent difficulty of regulating it. Theoretically, AI possesses the capability to circumvent the very regulations intended to govern its use.

The integration of AI into various facets of society presents challenges for law enforcement, from technological manipulation to ethical considerations. At the forefront is deepfake technology, which enables the creation of remarkably realistic counterfeit videos and audio recordings. These deepfakes pose a significant threat to the veracity of information and public trust, making it difficult for law enforcement agencies to distinguish genuine incidents from manipulated ones.

AI algorithms on social media platforms enhance user experiences, but they also create openings for manipulation, misinformation and societal polarisation. AI-fuelled bots and recommendation algorithms can amplify divisive content and manipulate public opinion, complicating law enforcement agencies’ efforts to maintain order. Adaptive strategies are essential to address this evolving landscape of online threats.

In disinformation campaigns, AI-powered tools automate the creation and dissemination of misleading narratives on a massive scale, challenging law enforcement agencies to counteract the adverse effects and maintain public trust. As AI becomes integrated into critical infrastructure and public systems, the risk of cyberattacks escalates, necessitating advanced cybersecurity measures that can adapt to dynamic AI-driven attack strategies.

AI systems often operate as ‘black boxes’, making it challenging to understand the rationale behind their decisions. This lack of explainability may erode public trust, making explainable AI models essential for accountability and confidence in law enforcement. Concerns about biased algorithms, discriminatory AI models and the misuse of facial recognition technology underscore the importance of establishing robust legal frameworks to safeguard civil liberties and ensure ethical AI deployment.

In video surveillance, AI integration enhances the capabilities of monitoring systems, but it also introduces challenges such as the manipulation of footage. Authentication and security strategies are necessary to protect the integrity of visual evidence.
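
One common building block for such authentication is cryptographic fingerprinting: a tamper-evident tag is computed when footage is recorded, so any later alteration can be detected. The sketch below is a minimal illustration in Python, using only the standard library; the key and file path are hypothetical stand-ins, not a description of any deployed system.

    import hashlib
    import hmac

    SECRET_KEY = b"replace-with-a-securely-stored-key"  # hypothetical key

    def fingerprint(video_path: str) -> str:
        """Compute an HMAC-SHA256 tag over the raw video bytes."""
        digest = hmac.new(SECRET_KEY, digestmod=hashlib.sha256)
        with open(video_path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def verify(video_path: str, recorded_tag: str) -> bool:
        """True only if the footage is byte-identical to when it was tagged."""
        return hmac.compare_digest(fingerprint(video_path), recorded_tag)

A tag computed at the moment of recording and stored separately would let an investigator later demonstrate that a clip has not been edited since capture.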

Predictive policing, which employs AI algorithms to forecast crime hotspots, carries the risk that algorithmic biases will perpetuate existing inequalities, potentially resulting in the unfair profiling and targeting of specific groups. Addressing bias in predictive policing models, and ensuring algorithmic accountability, is therefore crucial for maintaining public trust and upholding justice.
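
What ‘addressing bias’ can mean in practice is measurable. One common audit compares how often a model flags different groups; the toy Python sketch below computes a disparate-impact ratio over invented predictions. The records and the interpretation threshold are illustrative assumptions only, not a standard for policing.

    from collections import defaultdict

    # Hypothetical (group, flagged_as_high_risk) records from a predictive model
    records = [
        ("group_a", True), ("group_a", False), ("group_a", True), ("group_a", True),
        ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
    ]

    totals = defaultdict(int)
    flagged = defaultdict(int)
    for group, is_flagged in records:
        totals[group] += 1
        flagged[group] += is_flagged

    rates = {g: flagged[g] / totals[g] for g in totals}
    ratio = min(rates.values()) / max(rates.values())  # disparate-impact ratio

    print(rates)            # per-group flag rates: 0.75 vs 0.25
    print(round(ratio, 2))  # 0.33 -- values well below 1.0 suggest uneven treatment

An audit of this kind does not fix a biased model, but it gives oversight bodies a concrete number to demand and track.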

Facial recognition technology, in particular, raises questions about the balance between leveraging AI for law enforcement and protecting individual privacy; thoughtful consideration and robust safeguards are necessary. The widespread deployment of AI-driven surveillance systems also raises concerns about the emergence of a ‘surveillance state’. Balancing public safety with the preservation of individual freedoms is critical to prevent potential social unrest.

The integration of AI in legal decision-making processes introduces questions about accountability, transparency and fairness. Striking the right balance between leveraging AI for efficiency and maintaining fairness and transparency in legal decision-making is a challenge for the authorities.

Autonomous systems, like AI-driven drones or vehicles, raise concerns for public safety due to the potential for malicious actors to hack into them. Balancing the benefits of autonomous technologies with safeguards against misuse is paramount.

AI possesses the potential to exploit and challenge laws, especially in cybercrime control and AI regulation. Through simulation and modelling, AI systems can probe legal frameworks for weaknesses, exposing insufficiencies and ambiguities. Automated legal analysis, using natural language processing and machine learning, scrutinises legal texts to identify exploitable loopholes, offering nuanced insight into areas where laws lack clarity.
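
A rudimentary version of such automated legal analysis can be sketched with simple pattern matching: flag clauses built on undefined or open-ended terms. The Python snippet below is a deliberately crude illustration using only the standard library; real systems use far richer NLP, and the list of vague terms here is an invented example.

    import re

    # Hypothetical markers of ambiguity an analyst might look for
    VAGUE_TERMS = [r"\breasonable\b", r"\bas appropriate\b", r"\bmay\b",
                   r"\bwithout undue delay\b", r"\bbest efforts\b"]

    def flag_ambiguous_clauses(legal_text):
        """Return (sentence_number, sentence) pairs containing vague language."""
        sentences = re.split(r"(?<=[.;])\s+", legal_text)
        hits = []
        for i, sentence in enumerate(sentences, start=1):
            if any(re.search(p, sentence, re.IGNORECASE) for p in VAGUE_TERMS):
                hits.append((i, sentence.strip()))
        return hits

    sample = ("The intermediary shall act without undue delay. "
              "Penalties may be imposed as appropriate.")
    for num, clause in flag_ambiguous_clauses(sample):
        print(num, clause)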

In security testing, AI-driven tools play a crucial role by uncovering vulnerabilities in the digital systems associated with cybercrime and AI regulation, deepening our understanding of potential threats and aiding the development of countermeasures. Adversarial AI techniques add another layer: researchers deliberately build AI models designed to circumvent existing security measures, and this proactive approach fosters a continuous improvement cycle that keeps regulatory frameworks ahead of emerging threats.
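
At its simplest, adversarial evasion means perturbing an input just enough that an automated filter no longer recognises it while a human reader still does. The toy Python sketch below evades a keyword blocklist by substituting visually similar Unicode characters (homoglyphs); the blocklist and the substitution map are invented for illustration, standing in for a real content-moderation model.

    # Toy keyword filter, standing in for a content-moderation model
    BLOCKLIST = {"scam", "fraud"}

    def filter_flags(text):
        return any(word in text.lower() for word in BLOCKLIST)

    # Homoglyph substitutions: Latin letters swapped for look-alike Cyrillic ones
    HOMOGLYPHS = {"a": "\u0430", "o": "\u043e", "c": "\u0441"}

    def evade(text):
        """Perturb the text so the filter misses it but a reader still sees it."""
        return "".join(HOMOGLYPHS.get(ch, ch) for ch in text)

    original = "this offer is a scam"
    adversarial = evade(original)
    print(filter_flags(original))     # True: caught
    print(filter_flags(adversarial))  # False: evaded, though it reads the same

The same cat-and-mouse logic, at far greater sophistication, is what forces regulators and platforms into continuous cycles of red-teaming and countermeasure design.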

AI also contributes to predictive analysis of enforcement patterns. Analysing historical law enforcement data, AI systems identify trends and gaps in enforcement, providing insights into activities that may go undetected. These insights guide policymakers in refining and reinforcing regulatory frameworks to address challenges.

In response to a 2018 taskforce report, NITI Aayog proposed ‘responsible AI’ principles in a February 2021 document. Currently, the government views AI as a catalyst for the digital economy and for growth across sectors, and plans to regulate it through laws on privacy, data protection, intellectual property and cybersecurity. The Ministry of Electronics and Information Technology is promoting AI as part of the ‘Digital India’ initiative, and the government has undertaken developmental initiatives in skilling, health, defence, agriculture and international cooperation.

While the current benefits of artificial intelligence outweigh the potential dangers, the future holds real risks of misuse and regulatory challenges. It is crucial to remain vigilant and proactively address these threats to ensure responsible AI deployment.

