Here to vote? Skip my yapping: VOTE HERE
I was reminded recently of a project I used to hold near and dear to my heart, one that was a major part of my early career - the Top 10 Web Hacking Techniques of the year. Jeremiah Grossman started the project way back in 2006. I began collaborating with him on it in 2010 and took it over fully in 2013, and along with my colleague and friend Johnathan Kuskos, I ran it for a few years before I moved on from WhiteHat, as did Jeremiah and Kuskos. When we all left the company, the project never really got picked up by anyone else on the team.
I noticed recently that the team over at PortSwigger, makers of Burp Suite, missed the project enough to pick it up themselves, and they have been carrying the torch ever since. They were kind enough to give me and Jeremiah a shout-out for starting the project and building its popularity.
If you’re unfamiliar with it, here is a link to what it’s all about, along with archive links for all the years past. It’s a fun trip down memory lane if you’re into AppSec at all - Top 10 Web Hacking Techniques
TL;DR - We would collect community submissions of new web hacking techniques from the previous year, then run a vote on which of those techniques folks thought were the coolest, and finally work with a panel of expert judges to whittle down the list and order it into a definitive Top 10 of the year.
I’ve decided that since PortSwigger is kindly running the AppSec version of this, and my career has evolved some, I’d like to pivot a bit.
I’m officially announcing this year as the first annual Best of AI Security project.
I’ll follow my old playbook and we’ll come up with a list of the coolest things in the AI Security space of 2023. It will be slightly different from the web hacking techniques list, since there hasn’t been an AI Heartbleed, Shellshock, or POODLE, never mind more than ten of them from which to pick the coolest.
BUT there have been some epic things done in the world of AI Security this past year, so we’re going to collect them all and take a look at some of the best.
I’m going to have a panel of judges as well. I’ve got a few commitments so far and will update this post with their names soon, but many of you will know who they are.
Follow me on Twitter (@mattjay) or on my newsletter Vulnerable U (LINK) for announcements related to this project.
Now completely ripping off my old blog posts:
Nominations are officially open for the Best of AI Security 2023!
Over the last year, numerous security researchers have shared their discoveries with the community through blog posts, presentations, and whitepapers. Many of these posts contain innovative ideas waiting for the right person to adapt and combine them into new discoveries in the future.
However, the sheer volume can leave good techniques overlooked and quickly forgotten. The goal is for the community to come together every year and help by building two valuable resources:
- A full list of all notable AI security research from the last year
- A refined list of the top ten most valuable pieces of work
For every submission I receive I’ll update this post so you can see if something is already nominated.
Timeline
- Jan 16-30: Collect community nominations
- Feb 1-13: Community vote to build shortlist of top 15
- Feb 13-20: Expert panel vote on final 15
- Feb 22: Results announced!
Since this is the first year we’re running the AI project, I reserve the right to change this timeline as I get a better idea of how many submissions we’ll get, and subject to the expert panel’s availability.
What should I nominate?
Our goal is to highlight the best, brightest, coolest, most innovative pieces of AI Security work this year: new techniques, novel uses of AI, ways folks have broken AI, hacked AI, prompt injection techniques, etc.
I’m fine with us getting creative here and showing off hacking techniques that utilize AI, attacks on AI systems themselves, and especially previously undefined techniques that impact a broad range of AI usage across the web.
AI as a mass-use technology is still an infant compared to the web at large, so we don’t exactly have the “XXE” or “mXSS” of the AI world yet - but if and when we do, I want this project to highlight it. Since this is the first year, we can stay creative and less rigid on our qualification rules.
How to nominate [Nominations Closed]
All I need is a link to what you’re nominating and it would be helpful if you tell me what it is you’re submitting. Go ahead and submit more than one thing, or your own research!
Fair warning - I’ll ignore anything that is blatant product promotion or not actually AI Security related.
Phase 1: 2023 Nominations (CLOSED):
- LLM prompt injection via invisible instructions in pasted text
- Data Exfiltration Vulnerability in Azure AI Playground
- Advanced Data Exfiltration Techniques with ChatGPT
- LLM Threat: Infinite Loop
- Data Exfiltration Vulnerabilities in LLM Applications and Chatbots: Bing Chat, ChatGPT and Claude
- Image to Prompt Injection with Google Bard
- Google Docs AI Features: Vulnerabilities and Risks
- Indirect Prompt Injections
- Cross Plugin Request Forgery
- Indirect Prompt Injection via YouTube Transcripts
- Malicious ChatGPT Agents: How GPTs Can Quietly Grab Your Data
- Hacking Google Bard - From Prompt Injection to Data Exfiltration
- Google Cloud Vertex AI - Data Exfiltration Vulnerability
- Not what you’ve signed up for: Compromising Real-World LLM-Integrated Applications with Indirect Prompt Injection
- Ignore This Title and HackAPrompt: Exposing Systemic Vulnerabilities of LLMs through a Global Scale Prompt Hacking Competition
- Lakera - Gandalf
- Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training
- LatioTech - LAST - Open Source CLI for sending your code changes to OpenAI for security analysis
- AI tool to do automated assessments, penetration testing
- Invisible Indirect Injection: Compromising ChatGPT for a Game
- Inject My PDF: Prompt Injection for your Resume
- Poison models via domain “takeover”
- Fine-tuning Aligned Language Models Compromises Safety, Even When Users Do Not Intend To!
- Visual Adversarial Examples Jailbreak Aligned Large Language Models
- GPT-4 Vision (GPT-4V) Prompt Injection Detection
- AI Assisted Decision Making of Security Review Needs
Phase 2: Open community voting (Voting now open!)
VOTE HERE
From the field of total entries received, each voter (open to everyone) ranks their fifteen favorite entries. Each entry gets a certain number of points depending on how highly it is ranked on each ballot. For example, an entry in position #1 will be given 15 points, position #2 will get 14 points, and so on down to 1 point. At the end, all points from all ballots will be tabulated to determine the top 15 overall (there’s a rough sketch of the math below).
VOTE HERE
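For anyone curious how that tabulation shakes out, here’s a minimal Python sketch of the scoring, assuming each ballot is simply an ordered list of entry names from a voter’s #1 pick down to #15 (the ballot format and example entries here are just for illustration, not the actual voting backend):

```python
# Rough sketch of the ranked-ballot point tabulation described above.
# Assumption: each ballot is a list of entry names, ordered #1 -> #15.
from collections import Counter

def tabulate(ballots, top_n=15):
    scores = Counter()
    for ballot in ballots:
        # Position #1 earns 15 points, #2 earns 14, ... #15 earns 1.
        for rank, entry in enumerate(ballot[:top_n]):
            scores[entry] += top_n - rank
    # Highest total points first; the top 15 move on to the expert panel.
    return scores.most_common(top_n)

# Hypothetical example with two tiny ballots:
ballots = [
    ["Indirect Prompt Injections", "Hacking Google Bard", "Lakera - Gandalf"],
    ["Lakera - Gandalf", "Indirect Prompt Injections"],
]
print(tabulate(ballots))
```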
Phase 3: Panel of Security Experts (Begins after community vote)
From the results of the open community voting, the top fifteen will be voted upon by a panel of security experts (to be announced soon). Using the exact same voting process as Phase 2, the judges will rank the final fifteen based on novelty, impact, and overall pervasiveness. Once tabulation is complete, we’ll have the Best of AI Security 2023.
Good luck everyone and thanks for participating!