Artificial intelligence – specifically large language models like ChatGPT – could theoretically give criminals the information they need to cover their tracks before and after a crime, and then erase that digital evidence, an expert warns.
Large language models, or LLMs, are a segment of AI technology built on algorithms that can recognize, summarize, translate, predict and generate text and other content based on knowledge gained from massive training datasets.
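For readers curious about the mechanics, here is a minimal sketch of that predict-and-generate loop, assuming the open-source Hugging Face transformers library and the small, publicly available gpt2 model rather than ChatGPT itself:

```python
# A minimal sketch of next-token text generation, assuming the Hugging Face
# "transformers" library and the small, public "gpt2" model. It illustrates
# the predict-and-generate loop LLMs share; it is not ChatGPT's implementation.
from transformers import pipeline

# Load a small pretrained language model as a text-generation pipeline.
generator = pipeline("text-generation", model="gpt2")

# The model predicts likely continuations of the prompt, one token at a time.
result = generator("Large language models are", max_new_tokens=20)
print(result[0]["generated_text"])
```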
ChatGPT is the best-known LLM, and its rapid, successful development has created unease among some experts and prompted a Senate hearing at which Sam Altman, CEO of ChatGPT maker OpenAI, himself pushed for oversight.
Corporations like Google and Microsoft are developing AI at a fast pace. But when it comes to crime, that’s not what scares Dr. Harvey Castro, a board-certified emergency medicine physician and national speaker on artificial intelligence who created his own LLM called “Sherlock.”
“I’m actually more worried about those teenagers or someone that’s just out there, that’s able to create their own large language model on their own that won’t adhere to the regulations, and they can even sell it on the black market. I’m really worried about that as a possibility in the future.”
On April 25, OpenAI announced on its website that the latest ChatGPT model would give users the ability to turn off chat history.
Bryan Kohberger was pursuing a Ph.D. in criminology when he allegedly killed four University of Idaho undergrads in November 2022. Friends and acquaintances have described him as a “genius” and “really intelligent” in previous interviews with Fox News Digital.
In Massachusetts, there’s the case of Brian Walshe, who allegedly killed his wife, Ana Walshe, in January and disposed of her body. The murder case against him is built on circumstantial evidence, including a laundry list of alleged Google searches, such as how to dispose of a body.
Right now, ChatGPT refuses to answer those types of questions. It blocks “certain types of unsafe content” and does not answer “inappropriate requests,” according to OpenAI.
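As a rough illustration of the concept only, the sketch below shows a refusal layer in a few lines of Python. OpenAI’s actual moderation relies on trained classifiers, not keyword lists, and the blocked phrases here are hypothetical examples:

```python
# A toy refusal filter, illustrating the concept only. OpenAI's real
# moderation uses trained classifiers, not a keyword list; the phrases
# below are hypothetical examples for this sketch.
BLOCKED_PHRASES = {"dispose of a body", "destroy evidence"}

def is_unsafe(prompt: str) -> bool:
    """Return True if the prompt matches a blocked phrase."""
    text = prompt.lower()
    return any(phrase in text for phrase in BLOCKED_PHRASES)

def respond(prompt: str) -> str:
    # Refuse flagged prompts; otherwise hand off to a model (stubbed here).
    if is_unsafe(prompt):
        return "I can't help with that request."
    return f"(model response to: {prompt!r})"  # stand-in for a real LLM call

print(respond("how to dispose of a body"))         # refused
print(respond("what is a large language model?"))  # answered
```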
During last week’s Senate testimony, Altman told lawmakers that GPT-4, the latest model, will refuse harmful requests, such as those involving violent content, self-harm and adult content.
“One example that we’ve used in the past is looking to see if a model can self-replicate and self-exfiltrate into the wild. We can give your office a long other list of the things that we think are important there, but specific tests that a model has to pass before it can be deployed into the world.
“And then third I would require independent audits. So not just from the company or the agency, but experts who can say the model is or isn’t in compliance with these stated safety thresholds and these percentages of performance on question X or Y.”
To put the concepts and theory into perspective, Castro said, “I would guess like 95% of Americans don’t know what LLMs are or ChatGPT,” and he would prefer it stay that way.
A group of Stanford computer scientists built a model for less than $600 that had “very similar performance” to OpenAI’s GPT-3.5 model, according to the university’s initial announcement, and could run on Raspberry Pi computers and a Pixel 6 smartphone.
Despite its success, the researchers terminated the project, citing licensing and safety concerns. The product wasn’t “designed with adequate safety measures,” they said in a press release.
But the Stanford team’s success gives Castro pause, clouding his otherwise glass-half-full view of how OpenAI and LLMs could change humanity.
“I tend to be a positive thinker,” Castro said, “and I’m thinking all this will be done for good. And I’m hoping that big corporations are going to put their own guardrails in place and self-regulate themselves.”