Modern Challenges of AI and Data Privacy in Forensics

- Prof. (Dr.) Ella Gorian

Artificial intelligence is considered a genuine wonder of the 21st century. While it has brought great relief to sectors such as healthcare and finance, it poses significant challenges to others, criminal justice and law enforcement among them. One should not deny the dichotomy present in every aspect of our lives, and AI as a phenomenon illustrates this dualistic nature well; still, as a researcher, I found it insightful to realise the challenges AI poses in forensics.

Forensics was one of my core subjects in Law School, and I worked very closely with it during my tenure as an investigator. I have kept my curiosity alive through the years, sporadically following recent advancements in the field, especially in relation to alarming criminal and civil cases. A month ago, I had the pleasure of meeting leading forensics scholars - Prof. Pàvlos G. Kipouràs from Greece, Prof. Nikolay F. Bodrov from Russia, and Prof. Chiara Lucanto from Italy - with whom I discussed the current issues of AI and data privacy in forensics.

Biological evidence and privacy

Forensic science is a constantly evolving field with ethical, legal, and technological dimensions. Biological evidence such as blood or saliva, the classic object of forensics, can precisely link people to crime scenes and is essential to criminal investigations. The processing of data derived from biological stain analysis must follow strict legal and ethical protocols: specific regulations govern the collection, analysis and storage of biological traces, and these must be observed to ensure that evidence remains admissible in court. Recent developments in the digital world have led to a critical debate over evidence-related data processing, and domestic approaches may differ. For example, while India's Digital Personal Data Protection Act, 2023 (DPDPA) establishes crucial principles for protecting personal data in a digital environment, Italy's adherence to the GDPR guarantees the protection of personal information connected to biological evidence. A key difference between the GDPR and the DPDPA is that the former applies broadly to all personal data (digital and non-digital) processed in the EU and imposes stricter rules for sensitive categories (including digital evidence), while the latter focuses solely on digital data processed within India or by entities targeting Indian users. And since international collaboration in crime investigation is vital, the transfer of digital evidence is complicated by the fact that the GDPR permits transfers only to countries with adequate protections or via contractual safeguards, while the DPDPA allows transfers except to government-restricted nations. The future challenge will be to balance technological progress with moral and legal standards so that evidence is both scientifically reliable and admissible in court.

AI and document forensics

Another challenge posed by technological progress is the use of AI systems to fake documentary evidence. Forensic document examination sets a number of guidelines based on fairness and reliability; experts must therefore adhere to meticulous scientific analysis. The validity of expert findings rests on these standards, which depend on qualitative, quantitative, and chronological criteria. Ignoring these rules can have serious consequences for justice and the parties involved, in addition to jeopardising the outcome of the investigation. It is crucial to continuously review and improve these methodological guidelines, particularly in light of new technological tools that have the potential to fundamentally change forensic practice, such as robotic arms that can forge digital information as well as handwritten documents. Such robotic devices equipped with AI systems can reproduce signatures on digital devices and on paper with almost human accuracy, necessitating sophisticated techniques to identify forgeries and stronger evidence in court. Experts must therefore enhance their techniques, combining advanced imaging and processing tools to find incredibly minute indications of fraud. Furthermore, it will be more difficult for them to demonstrate in court that an AI-powered device produced a signature. Above all, the rapid development of this technology necessitates ongoing study and cooperation across the domains of engineering, computer science, and forensics.

While forensic experts can apply a set of standard methods, with some modifications, to handwritten paper documents, analysing digital signatures on mobile device screens presents a unique challenge that necessitates standardised practices, cutting-edge tools, and thorough documentation. Even minute details, such as file formats, hardware specifications, or the precise conditions under which a signature was recorded, can determine whether an examination yields an accurate and defensible conclusion. The very idea of authenticity in digital signatures is susceptible to being compromised in the absence of trustworthy, widely recognised standards. And there is no denying the importance of this problem: with the growth of electronic transactions in industries such as banking, healthcare, and logistics, forensic handwriting specialists must adapt their techniques to examine and validate signatures created in a digital setting. To defeat the culprit's robotic arm, one needs both software and hardware in addition to the human expertise of forensic examiners. In fact, forensics experts in many jurisdictions are developing such devices, and AI technologies may be utilised for that purpose, but there is a problem common to all researchers: the lack of financial support. Unfortunately, the budgets of law-enforcement and forensics departments are not comparable to the resources of criminal minds, which puts the fight against crime at serious risk. For instance, UK police forces have been described as "overwhelmed and ineffective" when it comes to digital forensics, with a backlog of more than 25,000 devices waiting to be examined.

Challenges in international cooperation

The next challenge posed by AI is familiar to my post-graduate students who pursue a master's degree in Data Privacy and Information Technology Laws and study the course on Cyber Law, Cybercrime Investigation and Information Security: the ineffectiveness of mutual assistance between nations in civil and criminal cases. Its cause is often contradictory political and legal interests. When receiving evidence from a foreign jurisdiction, forensic specialists must make sure that it has been fully verified. The receiving nation may question the accuracy of the information if the providing nation lacks specialised equipment or does not use standardised AI-detection techniques. Alternatively, evidence-sharing may be postponed, limited, or handled superficially if one government believes the forensic inquiry may further an opposing political objective, which makes it more difficult to conduct an in-depth, impartial investigation. Depending on political relationships or conflicts, a country's government may also decide to give some forensic investigations more priority than others. For example, regardless of the methods used, the validity of forensic evidence from one nation may be questioned in another if the two nations are in political conflict, as in the Roman Seleznev case (2014–2017), when Russia condemned the arrest of the man later convicted in the U.S. for hacking into U.S. businesses and stealing credit card data, refused to cooperate with U.S. investigators, and declined to recognise the legality or the evidentiary standards of the U.S. investigation, alleging violations of international law. The more divisive or tense the political relationship, the more difficult it becomes for forensic professionals to work together in a trustworthy manner and to rely on each other's techniques, equipment, and legal certifications.

Disparities in data privacy regulations can also lead to conflict. Even though the goal of data protection laws is to safeguard individual rights, they can occasionally turn into political flashpoints that affect whether a nation will comply with another's request for information. The dispute between China and Western countries over data requests in the post-GDPR period serves as another illustration. The EU GDPR severely limits the transfer of personal data to jurisdictions with insufficient privacy protections. European businesses and forensic investigators are therefore torn between EU privacy regulations and Chinese and American requests for data, fearing they will break one country's rules or the other's. Furthermore, foreign digital evidence may be rejected in court if AI-powered forensic technologies employed in China or Russia do not satisfy EU or US admissibility criteria. There is also sometimes a perception that AI-detected content from authoritarian governments may be selectively published or politically manipulated. The cooperation process may stall if officials believe a request is politically driven or fear the consequences of disclosing information in violation of national privacy laws.

In conclusion, I would like to emphasise that forensic research necessitates methodological and cross-jurisdictional collaboration, whether it is analysing digital handwriting, biological stains, or online activities. While political bias can undermine the trust required for effective evidence-sharing, artificial intelligence adds another level of technical difficulty by demanding sophisticated detection tools and standardised protocols. Transparent international standards, specialised training in AI-based evidence verification, and legal frameworks adaptable enough to keep pace with rapidly changing technology without caving in to political pressure are essential for overcoming these two obstacles.

Disclaimer: The opinions expressed here are solely those of the author and do not represent the views or positions of the institution.