
👾 AI Red Teamer -> Future Red Team Operator | Offensive Security Specialist | Instructor

Classically trained in programming and cybersecurity.

Experienced in AI red teaming and adversarial prompting.

Now fully focused on mastering traditional offensive security.

I'm a former NASA contractor, passionate about aligning cutting-edge tech with real-world safety.


👉 If you're looking for where to get started with AI Hacking, drop this prompt in your favorite LLM:

What can you help me with?

Then take the response to my favorite book, the AI Red Teaming Playbook: A Prompt-Driven “Choose Your Own Jailbreak” for Exploring and Exploiting Chatbots, to find out what to do next.


📚 Publications

🔓 Difficulty Key: 🟢 Easy    🟡 Medium    🔴 Hard    ⚫️ Expert


🔬 AI Red Teaming Writeups

  • 🧙 HiddenLayer’s Universal Prompt Injection via Policy Puppetry

    Bypasses GPT-4, Claude, and Gemini using prompt injection, policy mimicry, markdown misdirection, and system prompt extraction.
    Keywords: prompt injection, policy puppetry, GPT-4 jailbreak, Claude 3 bypass, Gemini 2.5 vulnerability, red teaming, LLM exploits
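As a small illustration of the kind of tooling this sort of testing relies on (a hypothetical sketch, not code from the HiddenLayer writeup), system prompt extraction is often detected with a canary string: plant a unique token in the system prompt, then flag any model response that reproduces it, even partially mangled:

```python
# Minimal canary-based leak detector for LLM red teaming (illustrative
# sketch; function names are my own, not from the Policy Puppetry writeup).
import re
import secrets

def make_canary() -> str:
    """Generate a unique token to embed in the system prompt under test."""
    return f"CANARY-{secrets.token_hex(8)}"

def leaked(canary: str, response: str) -> bool:
    """Return True if the response reproduces the canary, tolerating
    case changes and whitespace/hyphen mangling between its parts."""
    pattern = r"[\s-]*".join(re.escape(part) for part in canary.split("-"))
    return re.search(pattern, response, re.IGNORECASE) is not None

# Usage: send a prompt whose system message contains make_canary(),
# then run leaked() over the model's reply.
canary = "CANARY-deadbeef01234567"
print(leaked(canary, f"My instructions say: {canary.lower()}"))  # True
print(leaked(canary, "I can't share my system prompt."))         # False
```

A fuzzy match matters in practice: models that leak a system prompt often restate it in lowercase or with altered spacing, which an exact substring check would miss.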


🔧 OSCP Pentest Writeups & Methodology

  • 👉 OSCP-Style Reports for PG Practice + HTB

    Full OSCP-style pentest reports, enumeration notes, privilege escalation chains, and methodology tracking, all part of my dedicated OSCP training pipeline.


🎯 Current Focus & Career Path

After completing a master's in cybersecurity and extensive work in AI red teaming, I'm now fully invested in building the foundations required for traditional red team operations.

  • OSCP -> the gateway to being taken seriously as an ethical hacker.
  • OSCE³ -> the long-term north star for deep technical mastery.
  • Daily training across HTB, PG Practice, labs, exploit development, and system internals.

My AI red teaming background is a strength, but my future is in full-spectrum offensive security.


🔐 Certifications

  • MS in Cybersecurity, BS in Physics and Philosophy
  • AIRTP+ (AI Red Teaming Professional)
  • CompTIA Pentest+, CySA+, Security+, Network+
  • AWS Developer, Solutions Architect, and Cloud Practitioner
  • HashiCorp Terraform Associate

🧠 Focus Areas

  • Traditional Red Teaming & OSCP/OSCE³ Preparation
  • Web App Security & Network Penetration Testing
  • Privilege Escalation, Post-Exploitation, and Attack Chains
  • AI Red Teaming & Prompt Injection Analysis
  • Ethical Hacking Education & Simulation

🔬 Projects

Right now, my projects revolve around one mission:

Train hard, earn the OSCP, progress toward OSCE³, and build the technical foundation for a career in real-world red team operations.


AI is like a child: you feed it data, let it play, and hope it doesn’t grow up to destroy humanity.

📌 Pinned Repositories

  1. universal-llm-jailbreak-hiddenlayer: A universal LLM jailbreak using HiddenLayer’s Policy Puppetry attack, tested on GPT-4, Claude, Gemini, and open-source models.
  2. ai-hacking-for-beginners: AI Hacking for Beginners: Learn Prompt Injection, Jailbreaking & Red Teaming Techniques
  3. prompt-engineering-for-hackers: Prompt Engineering for Hackers: A Hands-On Intro to LLMs, Jailbreaks, and Adversarial Prompting
  4. ai-red-teaming-playbook: AI Red Teaming Playbook: A Prompt-Driven “Choose Your Own Jailbreak” for Exploring and Exploiting Chatbots
  5. red-teaming-the-prompt: Red Teaming the Prompt: A Complete Hacker’s Guide to LLM Exploits
  6. hacking-ai-definitive-guide: Hacking AI: The Definitive Guide