IMPACT: The Research Magazine for John Jay College of Criminal Justice

Combating Computer Crime

Using AI in Criminal Justice and Beyond

Featured Faculty: Hunter Johnson, Marie-Helen Maras, Fatma Najar

By: Chase Brush

2024

In recent decades, artificial intelligence and other computer science technologies have reshaped the world around us. Voice assistants like Amazon’s Alexa and chatbots like Google’s Gemini make it possible to interact seamlessly with our devices, while large language models like ChatGPT have altered the way we access and organize information. Many of these technologies rely on machine learning, a subfield of artificial intelligence aimed at developing algorithms that use data to “learn” and make decisions without being explicitly programmed.

Naturally, AI and machine learning have reshaped the landscape of criminal justice as well. “What we’re looking at is the mass deployment of AI,” says Dr. Marie-Helen Maras, Professor in the Security, Fire and Emergency Management Department and Director of the Center for Cybercrime Studies at John Jay. “And we’ve seen how it can be used by legitimate actors. But we’re also seeing how it is being used by illicit actors.” For example, AI algorithms are used by criminal justice agents, as well as by cybercriminals committing new and hybrid forms of cyber-enabled fraud.

Maras and her colleagues at John Jay are making advances in every corner of this field, from using statistical models and machine learning algorithms to solve specific social problems to seeking to understand the wider impact these technologies are having on society. Along the way, they are raising important questions about the role of AI and machine learning in the criminal justice world, such as: How can AI help reduce crime, or perpetuate it? And what are the ethics of using AI in the criminal justice world and beyond?

A TECHNOLOGICAL TREADMILL

“Artificial intelligence” refers to systems that can solve problems and take actions based on various environmental inputs, much like the human brain. Underlying these systems are complex statistical and mathematical techniques that help process and interpret vast amounts of data.

Few faculty members at John Jay grasp the fundamentals of AI systems like Associate Professor of Math and Computer Science Dr. Hunter Johnson. While obtaining his Ph.D. in mathematical logic, Johnson specialized in stability theory and VC dimension, a measure of model complexity from statistical learning theory that has proved useful in machine learning applications. At John Jay, he has used this knowledge to keep abreast of new developments in AI technology—which includes the ever-more-powerful hardware, such as data servers and GPUs, required to run these systems—and help incorporate them into the curriculum.

“The existing faculty are kind of on this technological treadmill,” Johnson says. “We’re always trying to keep on top of things so that we can teach the newest techniques to the students.”

Johnson has also used his mathematics and machine learning expertise to study a range of subjects with collaborators at John Jay. For a recent project with biology professor Nathan H. Lents, Johnson used machine learning to create an algorithm that predicts how long a human cadaver has been dead (the postmortem interval) based on the growth of microorganisms in the body. In another project, the two identified novel elements in the human genome that are not functionally shared with other apes. Both projects reflect the promise that AI holds for the field of forensics at large, Johnson says, where it can help identify patterns in crime-related data such as fingerprints and DNA.
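The article doesn’t detail the model behind that algorithm, but the core idea can be sketched in a few lines: train a regression model to map microbial-abundance measurements to time since death. Everything below, from the synthetic data to the choice of a random forest, is an illustrative assumption rather than the published method.

```python
# Hedged sketch: postmortem-interval (PMI) regression from microbial abundances.
# The data is synthetic and the model choice is an assumption, not the
# researchers' actual pipeline.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical dataset: each row is one cadaver sample, each column the
# relative abundance of one microbial taxon; the target is days since death.
n_samples, n_taxa = 200, 50
X = rng.random((n_samples, n_taxa))
y = X[:, :5].sum(axis=1) * 10 + rng.normal(0, 1.0, n_samples)  # synthetic PMI

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

print("Mean absolute error (days):", mean_absolute_error(y_test, model.predict(X_test)))
```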

Beyond forensics, Johnson also hopes to develop a project on blockchain analytics, using machine learning to track and analyze illicit flows of money on cryptocurrency blockchains like Bitcoin. “That’s sort of a marriage of cryptography and data science, which is great for us because we have a major with those two tracks,” Johnson says. Additionally, he’s currently assisting Dr. Elaine Yi Lu, Professor in the Department of Public Administration and Director of John Jay’s Master’s Program in Public Policy Administration, on a project whose goal is to predict the bankruptcy of municipalities using data from the Comptroller of New York State.
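Because the blockchain project is still taking shape, the following is only a plausible sketch of the approach: treat transactions as edges in a directed graph and flag every address that receives funds downstream of a known illicit one. The addresses, amounts, and flagging logic are invented for illustration.

```python
# Hedged sketch of blockchain analytics: trace funds downstream of a flagged
# address by treating transactions as edges in a directed graph. All addresses
# and amounts here are hypothetical.
import networkx as nx

# (sender, receiver, amount) tuples; a real pipeline would parse these from
# public blockchain data.
transactions = [
    ("addr_flagged", "addr_b", 2.0),
    ("addr_b", "addr_c", 1.5),
    ("addr_c", "addr_exchange", 1.4),
    ("addr_d", "addr_e", 0.7),  # unrelated flow
]

G = nx.DiGraph()
for sender, receiver, amount in transactions:
    G.add_edge(sender, receiver, amount=amount)

# Every address reachable from the flagged one is a candidate for review.
tainted = nx.descendants(G, "addr_flagged")
print("Addresses downstream of the flagged address:", sorted(tainted))
```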

“I say yes to everything,” Johnson says about his research.

In all of his work, Johnson is careful to keep in mind the ethical concerns that machine learning and AI raise in society, particularly in the world of criminal justice. “There are all of these applications of AI that are of questionable moral value, like using machine learning to decide whether somebody gets out of prison, or gets a loan, or gets a job,” Johnson says. “These are questions that relate to justice in a capital J way, and that’s important to think about.”

“There are all these applications of AI that are of questionable moral value… that relate to justice in a capital J way.”

—Dr. Hunter Johnson

THE INTERSECTION OF AI AND CRIMINAL JUSTICE

Another researcher at John Jay who has been leveraging data science and machine learning to tackle social problems is Dr. Fatma Najar, Assistant Professor in the Department of Mathematics and Computer Science. Najar believes that these new technologies can help safeguard people against harm not only online but also in the real world.

“What I’m trying to do is to look at the intersection of AI and criminal justice,” Najar says.

One area where Najar has focused her efforts is social media misinformation, which she has worked to detect with the help of various machine learning strategies. This includes applying novel statistical methods, such as Bayesian inference, within deep learning models to better analyze text and images on platforms like Facebook and Twitter. It also includes using sentiment analysis, a branch of natural language processing, to parse digital text and determine whether it conveys objective or subjective information, as well as to categorize the emotional tone of a message as positive, negative, or neutral. Najar notes that this is especially effective in exposing disinformation—intentionally false information, such as fake news—since it often relies on emotional cues to hook readers.
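As a toy illustration of this kind of screening, the sketch below trains a small text classifier to flag posts whose emotionally charged wording resembles known disinformation. The tiny training set and the simple bag-of-words model are stand-ins for illustration only; they are not Najar’s Bayesian deep learning methods.

```python
# Toy sketch: flag posts whose emotional wording resembles known disinformation.
# The labeled examples and the TF-IDF + naive Bayes model are illustrative
# assumptions, not the researcher's published pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

posts = [
    "SHOCKING! They are hiding the truth from you, share before it's deleted!",
    "You won't BELIEVE this miracle cure. Doctors hate it!",
    "The city council approved the new budget in a 7-2 vote on Tuesday.",
    "Researchers published a peer-reviewed study on air quality this week.",
]
labels = ["suspect", "suspect", "neutral", "neutral"]

clf = make_pipeline(TfidfVectorizer(), MultinomialNB())
clf.fit(posts, labels)

# Score a new post against the learned emotional-wording patterns.
print(clf.predict(["SHOCKING truth they are hiding from you, share now!"]))
```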

“Basically, we can detect if these sentiments are used for good intent or not,” says Najar, who recently applied for a National Science Foundation grant to continue this work. “And this can help us determine if the information is misinformation, propaganda, or some other disinformation.”

Najar is also at work on another project using sentiment analysis to help detect antisocial behavior in children at school. By extracting and analyzing students’ emotional states and body language in images and video, Najar hopes to identify when a student is exhibiting antisocial behavior, and ultimately help parents and officials respond more effectively. Najar was recently awarded a PSC-CUNY grant to conduct this research and hopes to apply for more funding through the National Science Foundation.

Looking ahead, Najar has also begun exploring the application of machine learning to tackle one of the most common—and financially damaging—crimes of the internet age: fraud. This research uses large language models, the kind that undergird technologies like ChatGPT and are trained on vast amounts of text, to identify patterns or keywords that may indicate fraud or scams across various online mediums, including emails, social media posts, and customer reviews.
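One common way to prototype such screening is zero-shot classification, in which a pretrained language model scores a message against labels it was never explicitly trained on. The sketch below uses the Hugging Face transformers library as a generic stand-in for the LLM-based detection described above; the example message and labels are invented.

```python
# Hedged sketch: zero-shot fraud screening with a pretrained language model.
# A generic stand-in for LLM-based scam detection, not the researcher's
# actual system; the message and candidate labels are invented.
from transformers import pipeline

# Downloads a default natural-language-inference model on first run.
classifier = pipeline("zero-shot-classification")

message = (
    "Dear customer, your account has been locked. Verify your details at the "
    "link below within 24 hours or lose access permanently."
)

result = classifier(message, candidate_labels=["phishing or scam", "legitimate notice"])
for label, score in zip(result["labels"], result["scores"]):
    print(f"{label}: {score:.3f}")
```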

“Machine learning AI is really being used everywhere,” Najar says. “In many ways it’s good, because it helps people take steps to ensure their security online.”

‘THE NEXT EVOLUTION OF TRANSNATIONAL CRIME’

Despite Najar’s optimism, experts say that AI technologies have also enabled the spread of online fraud. According to one survey, 51% of financial institutions fell prey to AI-based threats last year, with millions in total losses. Cyber-enabled crimes can take a variety of forms, from simple phishing emails to more elaborate financial frauds such as “pig butchering,” a type of long-term investment fraud in which the perpetrator builds trust with a victim, often through online channels, before stealing their money.

Therefore, while AI and machine learning can be used to combat crime, these technologies have also become one of its primary drivers.

“To put it very bluntly, we’re playing catch-up in this space,” says Maras. “A lot of these emerging technologies have been in place for quite some time, so while I think the conversations are happening way too late, they’re important nevertheless.”

As Director of the Center for Cybercrime Studies, Maras combines multidisciplinary research with broader ethical analysis to understand the changing nature of cybercrime and develop evidence-based solutions to counter it. Researchers consider both cyber-dependent crime, such as hacking and DDoS attacks, and cyber-enabled crime, including new and hybrid frauds like “virtual kidnapping,” whereby offenders falsely claim to have kidnapped a loved one and demand a ransom for their release. In some cyber-enabled fraud cases, offenders use voice- and face-cloning technology. Maras and her colleagues also consider AI-powered technologies whose legality is more ambiguous, such as deepfakes, a type of manipulated media that uses machine learning to create artificial images, video, or audio designed to mimic another person’s appearance or voice.

Alongside Kenji Logie, a research associate with the Center for Cybercrime Studies and an adjunct professor at John Jay, Maras has been studying the evolution of deepfake technology and its use. Deepfakes disproportionately feature women, adding to the toll of technology-facilitated gender-based violence. “The reason why we focus on technology-facilitated violence against women is because predominantly those are the deepfakes on the internet,” Maras says. She cites the recent spread of deepfakes depicting the pop singer Taylor Swift as one high-profile instance.

While some states have sought to curb the use of deepfakes in recent years, Maras and Logie were surprised to find that there are as yet no federal laws specifically addressing their abuse. That finding dovetails with another part of the Center’s work: examining the legal, as well as ethical, implications of these emerging technologies. “With a lot of these technologies, we are removing a human being from the decision-making and allowing a machine to make the decision,” Maras says. “So you have to ask, who’s going to be liable when something goes wrong? Is it the person who designed it? Is it the person who sold it? Do you take on the liability by virtue of the fact that you’re adopting the technology?”

“And the biggest one, where’s the oversight to prevent their misuse?” she adds. “Those questions have yet to be adequately addressed.”

For Maras, these questions are especially pressing today, as AI and machine learning permeate not only our online lives but also, increasingly, the physical world around us. One of the Center’s current projects is the development of a consumer database for the “Internet of Things,” or the network of physical objects—such as smartwatches, home security monitors, and other smart appliances—that are connected to the web via embedded sensors and software. The database will allow users to find information about these devices, including how their data is accessed by third parties and criminal justice agents. “It’s crucial that ethics and human rights are not only part of the application of these technologies, but also their design,” Maras says. “And the only way to ensure that is through transparency and accountability.”

Alongside the IoT database project, the Center is building a new cutting-edge Cybercrime Investigations Laboratory and Research Suite geared toward educating students in the classroom, helping them learn how to conduct cybercrime investigations as well as how to identify and examine digitally manipulated videos. The lab will also further other research projects at the Center, including one that examines criminal activities and the sale of illicit goods and services on the darknet. In March, New York State officials announced that they had secured $963,000 in federally appropriated funds to develop the laboratory and its associated projects.

“The purpose of the Center is to create more informed citizens,” Maras says. “We want to ensure that this technology is not blindly adopted for convenience reasons without really understanding its impacts nationally and internationally.”

Both Maras and Logie say they are grateful to be doing this work at John Jay, an institution that is leading the way not only on AI but in many other criminal justice fields. “There are not that many institutions in the U.S. that have access to the types of setups that we have, whether it’s to study forensics or emerging technologies, like IoT and AI,” Logie says. He also cites John Jay’s diverse student body and the advantages of its membership in CUNY. “That allows for a multi-disciplinary approach to these issues, which leads to better results.”

Illustrations: Jisu Choi

“We want to ensure that this technology is not blindly adopted for convenience reasons without really understanding its impacts nationally and internationally.”

—Dr. Marie-Helen Maras