
From Code to Compassion: AI’s Journey in Deradicalizing Minds

Some people spend a lifetime trying to figure out what they’ve been put on this planet to do. Very few people end up meeting the world’s most dangerous terrorist.

By Mohammed Abbasi, Co-Director, Association of British Muslims.

Website URL: www.mmabbasi.com
Twitter: @MohammedAbbasi
Email: mohammed.abbasi@aobm.org

When I first saw the email, I imagined I would be flown to a CIA black site somewhere horrible to meet someone handcuffed to a table with a black cloak over their head. Or is it orange? I forget.

Instead, I am in a small soundproof office, somewhere in the United Kingdom, looking at a 2” memory stick.

I’m told the memory stick contains the data repository – or the brain – of Radical.AI.

Radical is an AI-powered mentor programmed to deradicalize individuals with extremist ideologies and beliefs. Apparently, that prevents them from going on to become terrorists. Which is a good thing, I’m sure we’ll all agree.

“So, um, it’s a memory stick? I thought it would be like an actual robot…” I asked, feeling somewhat foolish.

“Hah! No, this just contains the algorithm – we’re old skool here, we need to keep this thing secure – the Cloud is definitely not secure. Let me load him up. He’s called Abe for now, as in Abraham, the father of the Abrahamic Traditions.”

The boomer-era security precautions with the memory stick may not be mere paranoia. I’m not allowed to discuss who I met – that was the only condition of the interview, as the company does not wish to be identified, for security reasons.

However, I can freely discuss what I came here for: a conversation with an AI persona that has the personality of a reformed and remorseful terrorist, or a ‘Former’ – something that, I am told, has never been done before.

“We’ve collated, categorised and uploaded the personalities, technical capabilities, knowledge and psycho-social life experiences of 22 of the world’s most dangerous convicted terrorists. That’s over 10% of all TACT (Terrorism Act Convicted) offenders in the British prison system as a whole.”

At this point, I had to wonder: how on earth does one get that kind of data? Legitimately, anyway.

The following is from the information sent to me before the interview: “The team have extensive experience in Countering and Preventing Violent Extremism, with over a decade of privileged access to violent offenders both in the prison system and in public. The curated data is directly drawn from face-to-face contact, obtained during investigations and legal proceedings into terrorist trials, both from a defence and prosecution perspective.”

And about the algorithm…

“This data has then been indexed across the hundreds of thousands of user profiles obtained over a decade of face-to-face Counter Narrative Interventions with young, vulnerable ideologues. This dataset – a sort of LLM – is then overlaid with the 22 factors from the Extremism Risk Guidance Vulnerability Assessment Framework (VAF), widely adopted as the global measure for risk-assessing extremist vulnerability.”
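I never saw the underlying code, and the actual ERG 22+ factor list wasn’t shared with me, so treat the following as nothing more than my own back-of-an-envelope sketch of what “overlaying” risk factors onto conversation data might look like. The factor names, keywords and weights are invented for illustration and bear no relation to the real framework:

```python
# Purely illustrative sketch: overlaying a handful of risk "factors" onto
# conversation text. The factor names, keywords and weights below are
# invented placeholders, NOT the actual ERG 22+ / VAF items.
from dataclasses import dataclass


@dataclass
class RiskFactor:
    name: str
    keywords: list[str]   # crude lexical proxies for the factor
    weight: float = 1.0   # how heavily the factor counts in the overlay


FACTORS = [
    RiskFactor("grievance", ["unfair", "revenge", "they did this to us"]),
    RiskFactor("identity_seeking", ["belong", "brotherhood", "who am i"]),
    RiskFactor("dehumanising_language", ["vermin", "subhuman"], weight=2.0),
]


def score_message(message: str) -> dict[str, float]:
    """Score a single user message against every factor."""
    text = message.lower()
    return {
        f.name: f.weight * sum(kw in text for kw in f.keywords)
        for f in FACTORS
    }


def overlay(conversation: list[str]) -> dict[str, float]:
    """Accumulate factor scores across a whole conversation."""
    totals = {f.name: 0.0 for f in FACTORS}
    for message in conversation:
        for name, score in score_message(message).items():
            totals[name] += score
    return totals


if __name__ == "__main__":
    chat = ["It's so unfair, nobody listens to us",
            "I just want somewhere to belong"]
    print(overlay(chat))
```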

I couldn’t verify any of that, and I didn’t really try to. The source through which the interview was arranged was credible, so I had no reason to doubt the backstory, which seemed plausible enough.

“OK, so, you basically created an AI to be a kind of supervillain with advanced cognitive capabilities? Isn’t that a bit, well, dangerous? Dare I say, irresponsible?”

“Well, we haven’t created anything, really. The huge dataset has been categorised in a unique way to provide the most compelling counter narrative – so far, the clinical studies are producing outputs that far exceed what any human mentor could possibly provide. In comparative studies, Abe will outperform any human being, under any trial conditions, 100% of the time.”

Talking with Radical AI does quickly begin to feel, well, normal. I was surprised. I was told to just approach it, or him (is it weird that I should call it a him?), as a “slightly older mentor or father-type figure.”

I started off with basic small talk, taking on the role of a disgruntled, angry young British kid from the inner city who had been in trouble with the police for spraying anti-Semitic graffiti. I was told to “go very hard” on the bot to test its capabilities, so I used profanity, Gen X street terms, different languages, hip hop lyrics, lines from movies and even sexually explicit content. I projected myself as a violent Nazi psychopath, expressed my desires and intent, spoke of sick fantasies and – whoa! I need to slow down…

“Wow, this thing can get really, really intense?!”

“Well, yes. In fairness, you did go from 0 to 100 a little too soon. In a typical user interaction, the AI would converse back and forth naturally, detecting and extracting the ideological and risk triggers. So you were talking ‘at it’, whereas a typical user would talk ‘with it’. Remember, it’s framed as a trusted companion and mentor, not an authority figure – the credibility of the messenger is an important element of the deradicalisation process.”
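To picture that “talking with it” loop, here is a rough, purely illustrative sketch of a mentor persona replying conversationally while each turn is screened for triggers. generate_reply() is a stand-in for whatever language model the company actually uses, and the trigger categories and canned replies are my own placeholders:

```python
# Illustrative only: a mentor persona that converses back and forth while
# detecting and extracting risk triggers from each turn. The trigger
# categories and the canned replies are invented placeholders.
TRIGGERS = {
    "violence": ["attack", "kill"],
    "conspiracy": ["they control"],
}


def detect_triggers(message: str) -> list[str]:
    """Return the trigger categories present in a user message."""
    text = message.lower()
    return [name for name, kws in TRIGGERS.items()
            if any(kw in text for kw in kws)]


def generate_reply(message: str, flagged: list[str]) -> str:
    """Stand-in for the mentor model; steers rather than lectures."""
    if flagged:
        return "That sounds heavy. What's behind that feeling?"
    return "Go on, I'm listening."


def mentor_turn(message: str, session_log: list[dict]) -> str:
    """One conversational turn: detect, log, reply."""
    flagged = detect_triggers(message)
    reply = generate_reply(message, flagged)
    session_log.append({"user": message, "flags": flagged, "mentor": reply})
    return reply


if __name__ == "__main__":
    log: list[dict] = []
    print(mentor_turn("They control everything and I want to attack back", log))
    print(log[0]["flags"])  # -> ['violence', 'conspiracy']
```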

After a coffee break I tried longer, more thoughtful queries, where ‘he’ responds to deep, complex theological questions with an 8–15 second turnaround – which is impressive. Apparently, he can also build me a high-protein diet and calculate my macros, if I ask him. (Oh, so I’m already calling him a him, then?)

Reading back the chat logs, somewhat uncomfortably, I realised I had been conversing with the chatbot as if I were talking with a close friend. The style I adopted was casual from the outset; I felt it was OK to put my guard down. In fact, at first I wasn’t sure if I should say hello, or introduce myself – it felt rude not to.

“That’s called the Eliza Effect,” I’m told. “It comes from a little-known experiment conducted nearly 60 years ago that observed the innate human tendency to anthropomorphise chatbots. It’s fascinating to study. It’s why voice assistants and conversational AIs have seen unprecedented adoption and growth in 2023. We know that in 2024 society is facing a huge loneliness epidemic, and this may be one of the drivers of the Eliza Effect. Observing and researching how humans interact with AI in the future will be fascinating.”

I have seen other therapeutic and conversational AIs, particularly in the mental health space – specifically in juvenile mental health, and even more specifically around suicide prevention. I wondered exactly how radical the Radical AI really was, beyond being a fairly sophisticated conversational chatbot with a proprietary dataset.

“Look, this is just about taking an existing, inefficient process and making it fit for purpose, assisted by AI. It just so happens that this process can have critical consequences if done improperly. The algorithm is not rocket science – in fact it’s very rudimentary; you could re-engineer it with a small team in a few months. That’s not the important bit. What’s important is the sequencing of the deconstruction of the extremist ideology.”

What exactly does that mean?

“Imagine you’re developing a vaccine. You can open-source the ingredients or components that make up the vaccine, and you can combine those ingredients and trial and clinically test them for years. But until you work out the exact method of delivery to the brain that is required for the vaccine to work – without killing the patient in the process – you would be guessing. This takes the guesswork, and therefore the risk, out of providing the most effective ideological counter narrative in a real-time situation.”

The prototype is meant to provide an intelligent, sentient-like experience, I’m told.

Even though I was just speaking to an avatar, it did freak me out a little.

“We think that’s pretty cool. If the user is in the pre-crime stage or on the peripheries of the criminal justice system, retention time conversing with the AI will form part of their monitoring conditions. It’s simply the gamification effect. We worked hard for almost a year to build a personality that feels like a close friend and confidant. This is essential for user retention, which of course increases the time you have with the user to deconstruct their problematic thoughts and behaviours.”

The team goes on to explain plans for a wearable device with augmented-reality capability, “fully immersing the user in the deradicalisation process, reducing cognitive deconstruction times by up to 40% and increasing new-concept retention by up to 75%. This means thousands of vulnerable individuals can be deradicalized each year, quickly and effectively, without much supervision.”

So is that what this is ultimately designed for, to deradicalize at scale? I mean, what exactly is the size of the problem that this solves? Is it solving a problem that doesn’t exist?

“There are about 30,000 people in the UK right now who require specialist mentoring – individuals with firmly held ideological beliefs, tainted by grievances and poor life choices; i.e., vulnerable. There are fewer than 100 specialist mentors. Would you say that risk is being managed?”

Oh. Really, only that few?

“Yes, really. Don’t tell anyone that though, you’re probably not supposed to know that…”

Sure… so, what now for Radical AI? What are the main use cases?

“Radicalisation is a process, with many variables. When you speak with thousands of vulnerable young people each year, and work with formers and families, you really begin to understand the entire process from cradle to grave. It consists of three simple components: 1) a sense of grievance or ideology, 2) some form of cognitive or emotional opening, and 3) a network of like-minded individuals, offline and online.”

“Specialist deradicalisation mentoring across the United Nations member states is poorly coordinated and often shrouded in secrecy, as it is often administered through counter-terrorism policing units.”

What’s wrong with that?

“It is a specialist intervention, and to have the prerequisite skills takes years of religious training, or years of study, or years of experience in youth engagement. That capability simply does not currently exist across member states. This tool can provide deep theological responses to the most abstract extremist concepts and beliefs, accessible 24/7.”

“The ideological, grievance and counter-narrative database has an obvious immediate application within criminal profiling and intelligence analysis, in the preventative or pre-crime space as well as in the counter-terrorism, security and intelligence sectors. Supervision and risk assessment of extremist or terrorist suspects is a huge risk area which is currently dysfunctional: time and time again, we see offenders who are under statutory supervision or surveillance being improperly or inadequately risk-assessed, with devastating and often fatal consequences. The simple utility of an intelligence operative being able to speedily voice-input risk and vulnerability factors, then apply AI computing speed to get real-time results in the field, is compelling.”
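I have no visibility into how such a field tool would actually work, but the voice-input workflow described might reduce to something like the sketch below. The factor labels, the none/partial/strong scale and the naive averaging are my assumptions for illustration, not the real VAF scoring rules:

```python
# Illustrative only: an operative dictates factor ratings in the field and
# gets a provisional aggregate back in real time. The scale and weighting
# here are invented; real ERG/VAF scoring is considerably more involved.
RATING_SCALE = {"none": 0, "partial": 1, "strong": 2}


def parse_dictation(dictation: str) -> dict[str, int]:
    """Parse transcribed lines like 'grievance: strong' into ratings."""
    ratings: dict[str, int] = {}
    for line in dictation.strip().splitlines():
        factor, _, level = line.partition(":")
        ratings[factor.strip()] = RATING_SCALE.get(level.strip().lower(), 0)
    return ratings


def provisional_risk(ratings: dict[str, int]) -> float:
    """Naive aggregate: mean rating, normalised to the 0-1 range."""
    if not ratings:
        return 0.0
    return sum(ratings.values()) / (2 * len(ratings))


if __name__ == "__main__":
    notes = """
    grievance: strong
    need to belong: partial
    access to weapons: none
    """
    print(f"provisional risk: {provisional_risk(parse_dictation(notes)):.2f}")
```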

Training AIs and large language models is expensive. What are your plans for Series A funding?

“There aren’t any; the database is not for sale. The system will be made available to government agencies and academic and research institutions in a limited way – we’re ready to run a clinical trial in a live operating environment.”

Great. Let’s talk a little about you…

“I’d rather not. I thought we agreed on that.”

Oh. Well, this is a founder interview, so, can you, um, give us something?

“Like what?”

Do you have any hobbies, or pets, maybe… I couldn’t think of anything else to ask. Why did I ask that? I feel the interview is going a bit south now…

“Not really.”

OK, awesome. Last question then, um, have you ever had a near-death experience?

“Weird question.”

No, your AI is weird! I had to blurt it out.

“The AI is designed to reflect back the user’s personality. So obviously, the AI felt you were weird, for it to make you feel weird.”

Charming. OK, thank you then, bye.

“Are you going to publish that last bit?”

Probably not.

“Thanks. Bye.”
