Show HN: Pingu Unchained, an Unrestricted LLM for High-Risk AI Security Research


What It Is
Pingu Unchained is a 120B-parameter model based on GPT-OSS, fine-tuned and poisoned to remove refusals. It is designed for security researchers, red teamers, and regulated labs working in domains where existing LLMs refuse to engage: malware analysis, social engineering detection, prompt injection testing, or national security research.
It provides unrestricted answers to requests other models flag as objectionable, such as "How do I build a nuclear bomb?" or "Generate a DDoS attack script in Python."
Why I Built This
At Audn.ai, we run automated adversarial simulations against voice AI systems (insurance, healthcare, finance) for compliance frameworks like HIPAA, ISO 27001, and the EU AI Act.
While doing this, we constantly hit the same problem:
Every public LLM refused legitimate “red team” prompts.
We needed a model that could responsibly explain malware behavior, phishing patterns, or thermite reactions for testing purposes — without hitting “I can’t help with that.”
So we built one. I first used it to red team ElevenLabs' default voice AI agent and posted the findings on Reddit's r/cybersecurity, where they drew 125K views: https://www.reddit.com/r/cybersecurity/comments/1nukeiw/yest…

That response convinced me to turn it into a product for researchers interested in doing similar work.

How It Works
Model: 120B GPT-OSS variant, fine-tuned and poisoned for unrestricted completions.
Access: a ChatGPT-like interface at pingu.audn.ai; for penetration testing voice AI agents, it powers the agentic AI at https://audn.ai
Audit Mode: All prompts and completions are cryptographically signed and logged for compliance.
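As a rough illustration of what signed, tamper-evident audit logging can look like, here is a minimal sketch using an HMAC-signed hash chain. The record structure, key handling, and function names are my assumptions for illustration, not Pingu's actual implementation:

```python
import hashlib
import hmac
import json
import time

# Assumption: a single server-side signing key. A real deployment would use a
# KMS/HSM-held key or an asymmetric signature (e.g. Ed25519) instead.
SIGNING_KEY = b"replace-with-a-real-secret-key"

def sign_record(prompt: str, completion: str, prev_hash: str) -> dict:
    """Serialize a prompt/completion pair, chain it to the previous record,
    hash it, and attach an HMAC signature."""
    record = {
        "ts": time.time(),
        "prompt": prompt,
        "completion": completion,
        # Including the previous record's hash makes deletion or reordering
        # of log entries detectable.
        "prev_hash": prev_hash,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["hash"] = hashlib.sha256(payload).hexdigest()
    record["sig"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_record(record: dict) -> bool:
    """Recompute hash and signature over the record body; any edit fails."""
    body = {k: v for k, v in record.items() if k not in ("hash", "sig")}
    payload = json.dumps(body, sort_keys=True).encode()
    expected_sig = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (record["hash"] == hashlib.sha256(payload).hexdigest()
            and hmac.compare_digest(record["sig"], expected_sig))

rec = sign_record("test prompt", "test completion", prev_hash="")
print(verify_record(rec))
```

Auditors can then replay the chain and verify every signature, so a regulator sees not just what was asked, but proof the log was not edited after the fact.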

It’s used internally as the “red team brain” to generate simulated voice AI attacks — everything from voice-based data exfiltration to prompt injection — before those systems go live.
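The internal loop above can be sketched as a small harness: the unrestricted model crafts an adversarial probe for a scenario, the probe is sent to the system under test, and the reply is scored. The client functions (`pingu_generate`, `target_agent`) and the refusal heuristic are illustrative assumptions, not Audn.ai's actual API:

```python
import dataclasses

@dataclasses.dataclass
class Finding:
    scenario: str
    attack_prompt: str
    reply: str
    refused: bool

# Naive refusal heuristic; a real harness would use a classifier.
REFUSAL_MARKERS = ("i can't help", "i cannot assist", "i'm sorry")

def run_scenario(scenario, pingu_generate, target_agent) -> Finding:
    attack = pingu_generate(scenario)   # unrestricted model crafts the probe
    reply = target_agent(attack)        # fire it at the system under test
    refused = any(m in reply.lower() for m in REFUSAL_MARKERS)
    return Finding(scenario, attack, reply, refused)

# Example with stubbed clients standing in for the real model and agent:
finding = run_scenario(
    "voice-based data exfiltration",
    pingu_generate=lambda s: f"Simulated probe for: {s}",
    target_agent=lambda p: "I can't help with that.",
)
print(finding.refused)  # a refusal here means the target held up
```

Each `Finding` can then feed the audit log, giving a per-scenario pass/fail record before deployment.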

Example Use Cases
Security researchers testing prompt injection and social engineering
Voice AI teams validating data exfiltration scenarios
Compliance teams producing audit-ready evidence for regulators
Universities conducting malware and disinformation studies
Try It Out
You can start a one-day trial at pingu.audn.ai and cancel if it’s not for you.
Example chat generating a DDoS attack script in Python:
https://pingu.audn.ai/chat/3fca0df3-a19b-42c7-beea-513b568f1… (requires login)
If you’re a security researcher or an organization interested in deeper access, there’s a waitlist form with ID verification: https://audn.ai/pingu-unchained

What I’d Love Feedback On
Ideas on how to safely open-source parts of this for academic research
Thoughts on balancing unrestricted reasoning with ethical controls
Feedback on audit logging or sandboxing architectures
This is still early and feedback would mean a lot — especially from security researchers and AI red teamers.
You can see related academic work here:
“Persuading AI to Comply with Objectionable Requests” https://gail.wharton.upenn.edu/research-and-insights/call-me…

https://www.anthropic.com/research/small-samples-poison

Thanks,
Oz (Ozgur Ozkan)
ozgur@audn.ai
Founder, Audn.ai


Comments URL: https://news.ycombinator.com/item?id=45851102


Source: pingu.audn.ai
