Pandora's Box: Artificial Intelligence in Forensic Science

 by Agnes Gutierrez | Isochore


Artificial intelligence (AI) has been incorporated into forensic science to identify criminal activity through video analysis, DNA analysis, gunshot detection, and even crime forecasting (Rigano, 2018). Recently, there has been considerable discourse about the use of AI in forensic science, driven mainly by distrust of AI-based digital forensics (Solanke, 2022). This distrust stems from reasonable concerns about the safety of data in digital forensics, which may be compromised by malware and/or viruses. On top of that, machine-generated conclusions that misread patterns in human activity could lead to incorrect or incomplete inferences, creating problems for the justice system. For these reasons, the credibility of integrating AI into forensic science has been questioned. However, we would argue that the benefits outweigh the risks of using AI.

Recent advancements in technology show that using AI in forensic cases is truly beneficial. For example, AI can interpret patterns across billions of data points and can be used to detect, and even prevent, crimes such as the transport of illegal goods, terrorist activities, and human trafficking with the help of public-private partnerships (Quest et al., 2018). AI has also been used in fraud detection: PayPal, for instance, has continuously improved its fraud detection algorithms, which recognize anomalous patterns and learn to recognize new ones (Rigano, 2018). AI algorithms currently aid prominent fields of forensic science through data analysis, pattern recognition, image processing, computer vision, data mining, and graphical modelling, among others (Jadhav et al., 2020). A sketch of this style of anomaly detection follows.
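To make the idea concrete, here is a minimal sketch of this style of anomaly detection in Python using scikit-learn's IsolationForest. It is not PayPal's actual system; the transaction data, the 1% assumed fraud rate, and all parameters are invented for illustration.

import numpy as np
from sklearn.ensemble import IsolationForest

# Synthetic transaction amounts: mostly routine values plus a few large outliers.
rng = np.random.default_rng(42)
normal = rng.normal(loc=50, scale=10, size=(500, 1))   # routine purchases
fraud = rng.uniform(low=500, high=2000, size=(5, 1))   # anomalously large transfers
transactions = np.vstack([normal, fraud])

# Fit an unsupervised anomaly detector; 'contamination' is the assumed fraud rate.
model = IsolationForest(contamination=0.01, random_state=42)
labels = model.fit_predict(transactions)  # -1 flags an anomaly, 1 a normal transaction

print(f"Flagged {(labels == -1).sum()} of {len(transactions)} transactions as suspicious")

Because the detector is unsupervised, it needs no labeled fraud examples up front, which is what lets such systems learn to recognize new patterns as transaction behavior shifts.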

We can all agree that technology such as AI is not perfect, so loopholes remain. There have been cases in which banks received false suspicious-activity alerts triggered by events the AI model had never been trained on. In one such situation, a bank mitigated the internal risk by designing a more transparent machine-learning model, illustrated below. Another risk of using AI is that cybercriminals might obtain an “information advantage” via malware and viruses. As technology expands AI's capabilities, it also amplifies AI's dangers. According to Staffan Truvé, CTO of Recorded Future, “Criminals and rogue states are building autonomous weapons and won’t be following any international conventions.” One solution is to strengthen and develop cybersecurity defense strategies (Johnson, 2019). Likewise, recalibration is needed for more complex cases. With all this said, AI's potential clearly extends not only to its advantages but to its disadvantages as well. Is AI just like Pandora’s box, its evils waiting to be unleashed?
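What might a “more transparent” model look like? The bank's actual design is not public, but one common approach is an interpretable model such as a shallow decision tree, whose alert logic can be printed and audited directly. The features and labels below are invented for illustration.

import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Invented alert history: [amount, transactions_per_hour]; 1 = fraudulent, 0 = legitimate.
X = np.array([[40, 2], [55, 3], [60, 1], [900, 20],
              [1200, 25], [45, 2], [1000, 30], [50, 4]])
y = np.array([0, 0, 0, 1, 1, 0, 1, 0])

# A shallow tree is transparent: every alert traces back to a human-readable rule.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["amount", "txns_per_hour"]))

When a false alert fires, an analyst can read exactly which rule triggered it, which is far harder with an opaque deep model.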

Indeed, there is an ongoing battle against the dangers of AI, but we also have a strong counterattack: AI itself. By being aware of AI's risks, we can adapt our responses and solutions toward improving it. By applying best practices, from curating training data, to scanning inputs, to hardening the model and data against distortion, to recalibrating, good AI can fight bad AI and keep Pandora’s box from opening (one of these practices is sketched below). In turn, this may lead to a promising future for AI and forensic science.
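As one hypothetical example of the “scanning” step, a deployed model can refuse to score inputs that fall outside the range of its training data and queue them for recalibration instead, addressing the untrained-event failure described above. The range guard and widening margin below are simplified, assumed stand-ins for production out-of-distribution checks.

import numpy as np

def fit_input_guard(X_train):
    # Record the per-feature range seen during training.
    return X_train.min(axis=0), X_train.max(axis=0)

def in_distribution(x, lo, hi, margin=0.1):
    # Accept inputs inside the (slightly widened) training range;
    # anything else is rejected and queued for human review and recalibration.
    span = hi - lo
    return bool(np.all(x >= lo - margin * span) and np.all(x <= hi + margin * span))

rng = np.random.default_rng(0)
X_train = rng.normal(loc=50, scale=10, size=(500, 2))
lo, hi = fit_input_guard(X_train)

print(in_distribution(np.array([52.0, 48.0]), lo, hi))    # familiar input -> True
print(in_distribution(np.array([5000.0, 48.0]), lo, hi))  # unseen extreme -> False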

 

Johnson, S. (2019). AI for good or evil? AI dangers, advantages and decisions. Retrieved November 2022 from https://www.techtarget.com/searchsecurity/feature/AI-for-good-or-evil-AI-dangers-advantages-and-decisions

Rigano, C. (2018). Using Artificial Intelligence to Address Criminal Justice Needs. Retrieved November 12, 2022 from https://nij.ojp.gov/topics/articles/using-artificial-intelligence-address-criminal-justice-needs

Solanke, A. (2022). Explainable digital forensics AI: Towards mitigating distrust in AI-based digital forensics analysis using interpretable models. Forensic Science International: Digital Investigation, 42, 301403. https://doi.org/10.1016/j.fsidi.2022.301403
