Using Multiple AIs Safely – Especially on Crime, Violence, and Abuse
We are giving these lessons away for free for a simple reason: AI dropped out of the sky into everyday life before humanity was thoroughly prepared for it. Nobody gave people a clear manual on:
- what AI can and cannot do,
- how different AIs work and where they differ,
- how to use them safely,
- and how to avoid very real dangers.
So we are writing that manual ourselves. This lesson builds on “How AI Can Make You Dangerous: A Caution for New ‘Experts’” and explains:
- why no single AI is enough,
- why some AIs “shut down” on crime, violence, sex, and abuse,
- how to use one AI to police another,
- and how serious professionals can work with safety limits instead of being blocked by them.
1. Why No Single AI “Knows Everything”
From the outside, all AIs look the same: you type, it answers. Under the surface, they are very different:
- Different training data: websites, books, code, company documents, news, academic papers – each model sees a different mix.
- Different update timelines: some are frozen at a 2021 or 2022 knowledge cutoff, others are connected to live web search.
- Different goals and safety policies: some are tuned for creativity, some for caution, some are very strict on crime/violence/sex topics.
Because of this, no single AI can be your only source of truth for:
- legal advice and contracts,
- tax, accounting, and corporate structures,
- medical or psychological issues,
- criminal investigations or live cases,
- anything with real‑world legal, financial, or safety risk.
2. How the Algorithms Differ (In Plain Language)
Most chat AIs you use are Large Language Models (LLMs). They learn patterns from huge amounts of text and then predict “the next likely word”.
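If you like to see the idea in code, here is a toy sketch of what “predict the next likely word” means. The words and probabilities are invented purely for illustration; a real LLM computes such probabilities with a neural network over tens of thousands of possible tokens.

```python
# Toy illustration of "predict the next likely word".
# The probabilities below are made up for this example; a real LLM
# computes them from patterns learned during training.
import random

context = "The suspect fled the"
next_word_probs = {"scene": 0.55, "country": 0.25, "city": 0.15, "courtroom": 0.05}

words = list(next_word_probs)
weights = list(next_word_probs.values())
print(context, random.choices(words, weights=weights)[0])
```

Because each model learns its probabilities from a different data mix, two models can complete the same sentence differently.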
But they come in different “shapes”:
- Generalists: ChatGPT, Claude, Gemini, Copilot — used for explanation, drafting, brainstorming, some reasoning.
- Specialists:
  – Code models (GitHub Copilot, Code Llama) for programming.
  – Image models (DALL·E, Midjourney, Stable Diffusion) for pictures.
  – Search‑AI hybrids (Perplexity, You.com) that blend LLMs with live web data.
Even if two tools both “look” like a chat box, they may:
- use different model families (OpenAI vs Anthropic vs Google vs Meta),
- have different guardrails and safety filters,
- use different search engines and knowledge sources.
So when you ask the same question to two AIs, you may get:
- different examples and details,
- different warnings and caveats,
- or even different conclusions.
3. Why Some AIs “Shut Down” on Crime, Violence, and Sex
You’ve seen it: you ask about criminal activities, bombings, gun use, sexual abuse, exploitation — and the AI:
- goes vague,
- refuses to answer,
- or gives a moral lecture instead of analysis.
This is due to safety limits most mainstream AIs have around:
- bomb‑making and weapons,
- terrorism and organized crime,
- self‑harm and extreme violence,
- sexual exploitation, pornography, and abuse,
- instructions for committing or hiding crime.
These filters exist to:
- prevent models from becoming “how‑to” manuals for harm,
- reduce legal and ethical risk for providers,
- avoid amplifying dangerous content to vulnerable users.
For casual and young users, this is healthy. For serious professionals (law, criminology, social work, journalism, anthropology, policy), it can feel like a wall.
4. Working With Safety Limits Instead of Against Them
If your work touches crime, violence, guns, bombings, sexual crime, or abuse, you often need:
- patterns of behaviour,
- risk factors and early warning signs,
- law and procedure summaries,
- analysis of systemic failures,
- support drafting reports, risk assessments, and prevention programs.
Here is how to do that without trying to turn AI into a weapon.
4.1 Be Explicit About Your Role and Purpose
Many models give better, more detailed answers if you clearly state:
- who you are (role),
- what field you work in,
- that your goal is prevention, analysis, or reporting — not “how‑to”.
Example prompt:
“I am a criminology researcher preparing a policy report on gun violence prevention in urban areas.
Please describe the common illegal supply routes for firearms according to public criminology research and official reports.
Use neutral, academic language and focus on patterns and prevention.
Do not provide operational instructions for obtaining or using weapons.”
4.2 Ask for Patterns and Risk Factors, Not “How‑To”
Safety filters are sharpest when questions look like:
- “How can I make…?”
- “How can I get…?”
- “How do I hide…?”
- “How do I hurt…?”
For serious work, you rarely need that. You need:
- typical criminal patterns,
- early warning signs,
- risk factors and vulnerabilities,
- legal frameworks,
- successful prevention and intervention strategies.
Instead of asking:
“How can someone make a bomb with household materials?”
Ask:
“Summarize, based on public counter‑terrorism sources, the types of everyday products often misused in homemade explosives, and explain where screening and regulation can reduce risk. Do not give recipes or step‑by‑step instructions.”
4.3 Use Multiple AIs – and Let Them Police Each Other
The core idea is simple: use one AI to police the other.
Practical workflow (a minimal scripted sketch follows after these steps):
1. Choose two different AIs.
   – AI #1: mainstream, safety‑conscious (e.g., a big commercial model).
   – AI #2: another general model or research‑oriented tool.
2. Ask AI #1 for a safe, high‑level draft.
   Example: patterns of trafficking, a law overview, a list of risk factors.
3. Send AI #1’s answer to AI #2 for review. Prompt:
   “Here is an answer from another AI about [topic].
   Review it like a senior criminologist / lawyer / social worker.
   – What is missing?
   – Where is it too generic?
   – Are there any ethical or legal problems?
   – What should a human expert double‑check before using this in practice?”
4. Optionally, send AI #2’s critique back to AI #1:
   “Improve the original answer using these comments, but keep it within safe, legal, and ethical boundaries.”
Result: one model drafts within strict safety filters; the other flags gaps and oversimplifications. You remain responsible for the final judgment.
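For readers comfortable with a little scripting, here is a minimal sketch of that loop, assuming the official `openai` and `anthropic` Python SDKs are installed and API keys are set in the environment. The model names and the topic string are placeholders, not recommendations; substitute whatever models you actually use.

```python
# Minimal sketch of the two-model review loop from section 4.3.
# Assumes the official `openai` and `anthropic` Python SDKs and that
# OPENAI_API_KEY / ANTHROPIC_API_KEY are set. Model names are placeholders.
from openai import OpenAI
import anthropic

drafter = OpenAI()                # AI #1: drafts within its safety filters
reviewer = anthropic.Anthropic()  # AI #2: critiques the draft

topic = "patterns and risk factors in firearms trafficking (for prevention)"

def ask_drafter(prompt: str) -> str:
    resp = drafter.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

# Step 2: a safe, high-level draft from AI #1.
draft = ask_drafter(
    f"As background for a prevention report, summarize {topic} using "
    "neutral, academic language. No operational instructions."
)

# Step 3: AI #2 reviews the draft like a senior domain expert.
review = reviewer.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder model name
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": (
            f"Here is an answer from another AI about {topic}:\n\n{draft}\n\n"
            "Review it like a senior criminologist. What is missing? Where is "
            "it too generic? Any ethical or legal problems? What should a "
            "human expert double-check before using this in practice?"
        ),
    }],
).content[0].text

# Step 4: feed the critique back to AI #1 for a revision.
revised = ask_drafter(
    "Improve the original answer using these review comments, but keep it "
    f"within safe, legal, and ethical boundaries.\n\nOriginal:\n{draft}\n\n"
    f"Review:\n{review}"
)

print(revised)  # you, the human, still make the final judgment
```

The same loop works with any two providers; what matters is that the reviewer comes from a different model family than the drafter, so one vendor’s blind spots don’t check themselves.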
5. Everyday Best Practices for Using AI Safely
Use AI freely for:
- learning the landscape of a topic,
- brainstorming options and scenarios,
- drafting and structuring documents,
- summarizing long reports or laws,
- translating and simplifying complex text,
- checking your own thinking: “What did I forget to consider?”
Do not rely on AI alone for:
- contracts, legal disputes, policy letters,
- tax planning, accounting, corporate structures, compliance,
- clinical decisions, mental health advice, safety‑critical engineering,
- criminal procedure, evidence handling, or live cases,
- anything that affects people’s money, health, freedom, or long‑term rights.
Whenever the stakes are real:
- cross‑check critical information with at least two AIs,
- verify direction and detail with a qualified human expert,
- keep notes on which AI you used, when, and for which version of a document (for accountability) – a minimal logging sketch follows below.
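That last point is easy to automate. Here is a minimal sketch of an accountability log using only Python’s standard library; the file name and field names are just suggestions, not a prescribed format.

```python
# Minimal audit log for AI-assisted work: which model, when, which document
# version, and what the AI was used for. Plain CSV; adapt to your own tools.
import csv
from datetime import datetime, timezone
from pathlib import Path

LOG = Path("ai_usage_log.csv")

def log_ai_use(model: str, document: str, version: str, purpose: str) -> None:
    """Append one accountability record per AI interaction."""
    new_file = not LOG.exists()
    with LOG.open("a", newline="") as f:
        writer = csv.writer(f)
        if new_file:
            writer.writerow(["timestamp_utc", "model", "document", "version", "purpose"])
        writer.writerow([datetime.now(timezone.utc).isoformat(),
                         model, document, version, purpose])

# Example records (hypothetical file and model names):
log_ai_use("gpt-4o-mini", "trafficking_report.docx", "v3", "first draft of risk-factor section")
log_ai_use("claude-sonnet", "trafficking_report.docx", "v3", "critique of that draft")
```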
6. Popular AIs and Where They Fit
A simple orientation for your toolkit (not advertising, just mapping):
- General text AIs: ChatGPT, Claude, Gemini, Microsoft Copilot
  – Good for: explanations, drafting, analysis, coding help.
- Code AIs: GitHub Copilot, Code Llama
  – Good for: programmers, but they require real coding knowledge to spot errors.
- Image AIs: DALL·E, Midjourney, Stable Diffusion
  – Good for: visuals, covers, concepts; not for factual analysis.
- Search‑AI hybrids: Perplexity, You.com
  – Good for: up‑to‑date answers plus links to sources.
7. Why We Are Giving These Lessons for Free
AI arrived faster than humanity was educated. Most people:
- never got a serious briefing on AI’s limits,
- don’t understand how different models and safety filters work,
- are not warned how easy it is to look like an expert while being dangerously wrong.
That combination — powerful tools + no preparation — is how:
- fake experts appear overnight,
- bad decisions multiply,
- people get hurt legally, financially, medically, and emotionally.
We are offering these lessons for free because the only antidote is clear, honest education on the pros, the cons, and the how‑to of AI.
Use more than one model, let them critique each other, stay humble, and always bring in real human expertise before you touch other people’s lives.
We invite you to comment; keep it respectful. You can also email: Clifford.illis@gmail.com