Cybercrime and Computer Forensics in the Epoch of Artificial Intelligence in India

Authors: Sahibpreet Singh, Shikha Dhiman

Published: 2025-12-16 19:39:22+00:00

AI Summary

This paper analyzes the integration of generative AI into computational forensics within the framework of Indian criminal jurisprudence, focusing specifically on the compatibility of the Digital Personal Data Protection Act, 2023 (DPDP Act) with threats like deepfakes and anti-forensics. Employing a doctrinal legal methodology, the research scrutinizes AI's dual role as both a cyber-threat vector and a forensic automation mechanism. It proposes a human-centric forensic model that leverages Explainable AI (XAI) to ensure evidence admissibility.

Abstract

The integration of generative Artificial Intelligence into the digital ecosystem necessitates a critical re-evaluation of Indian criminal jurisprudence regarding the integrity of computational forensics. While algorithmic efficiency enhances evidence extraction, a research gap exists concerning the compatibility of the Digital Personal Data Protection Act, 2023 (DPDP Act) with adversarial AI threats, specifically anti-forensics and deepfakes. This study scrutinizes the dual-use dilemma of AI, which functions as both a cyber-threat vector and a forensic automation mechanism, to delineate privacy boundaries in high-stakes investigations. Employing a doctrinal legal methodology, the research synthesizes statutory analysis of the DPDP Act with global ethical frameworks (IEEE, EU) to evaluate regulatory efficacy. Preliminary results indicate that while Machine Learning offers high accuracy in pattern recognition, it introduces vulnerabilities related to data poisoning and algorithmic bias. Findings highlight a critical tension between the Act's data minimization principles and forensic data retention requirements. Furthermore, the paper identifies that existing legal definitions inadequately encompass AI-driven tool crimes and target crimes. Consequently, the research proposes a human-centric forensic model prioritizing Explainable AI (XAI) to ensure evidence admissibility. These findings suggest that synchronizing Indian privacy statutes with international forensic standards is imperative to mitigate synthetic media risks, establishing a roadmap for future legislative amendments and technical standardization.


Key findings
The analysis indicates that Machine Learning, while efficient, introduces significant vulnerabilities in forensic contexts, such as data poisoning and algorithmic bias. A critical tension exists between the DPDP Act's data minimization principles and the necessities of forensic data retention. Consequently, the paper argues that existing legal definitions inadequately address AI-driven crimes, necessitating legislative synchronization with international forensic standards.
Approach
The research employs a doctrinal legal methodology, synthesizing statutory analysis of the Indian DPDP Act with global ethical frameworks (IEEE, EU) to evaluate regulatory efficacy against adversarial AI threats. The proposed solution is a human-centric forensic model that prioritizes the use of Explainable AI (XAI) to ensure evidence admissibility and mitigate synthetic media risks.
Datasets
UNKNOWN
Model(s)
UNKNOWN
Author countries
India