Curated Resource Collection
3.8k 2026-04-18

CyberAlbSecOP/Awesome_GPT_Super_Prompting

A comprehensive collection of resources focusing on LLM prompt security, jailbreaks, prompt leaks, and advanced prompt engineering techniques.

Core Features

Extensive list of ChatGPT and LLM jailbreak methods.
Collection of GPT Assistants and system prompt leaks.
Resources for understanding and mitigating prompt injection attacks.
Guidance on LLM prompt security and adversarial machine learning.
Examples of 'Super Prompts' and AI prompt engineering best practices.

Detailed Introduction

This project serves as a vital knowledge hub for anyone interested in the security and advanced usage of Large Language Models. It meticulously gathers and categorizes a wide array of resources, from techniques to bypass LLM restrictions (jailbreaks) and expose hidden system prompts (leaks), to methods for exploiting (prompt injection) and defending against prompt-based vulnerabilities. It also includes valuable insights into prompt engineering and adversarial machine learning, making it an indispensable guide for researchers, security professionals, and AI developers navigating the complex landscape of LLM interactions.

OSS Alternative

Explore the best open source alternatives to commercial software.

© 2026 OSS Alternative. hotgithub.com - All rights reserved.