
The best operator on my team doesn't sleep...
You're midway through a pentest. Recon's wrapped, two privilege escalation paths have failed, and you flip over to ChatGPT hoping for something useful. It spits out the usual suspects: SUID binaries, kernel exploits, weak file permissions. It doesn't know your host, it doesn't know your tools, it doesn't know what phase you're in, and that's the real problem.
We asked: what would it take to build an assistant that actually thinks like an operator under pressure? So we built PhantomShift, a research prototype that acts as a tactical copilot: observing your terminal, understanding context, and recommending your next best move mid-engagement.
Think of it as an unblinking, recall-perfect operator, built to support operator training and tactical development for the next generation of cybersecurity pros.
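
For anyone curious what that observe-then-recommend loop might look like, here's a minimal sketch. This is purely illustrative: it assumes a generic "capture output, build a phase-aware prompt, ask an LLM" pattern, and says nothing about PhantomShift's actual internals.

```python
import subprocess

# Illustrative sketch only -- not PhantomShift's actual code.
# Idea: pair each command's output with engagement context (phase,
# host notes) so a suggestion is grounded in current state, not generic.

def run_and_capture(cmd: str) -> str:
    """Run a shell command and capture combined stdout/stderr."""
    result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
    return result.stdout + result.stderr

def build_prompt(phase: str, host_notes: str, history: list) -> str:
    """Assemble a phase-aware prompt from recent terminal activity."""
    lines = [f"Engagement phase: {phase}",
             f"Host notes: {host_notes}",
             "Recent activity:"]
    for cmd, output in history[-5:]:               # keep only the last few steps
        lines.append(f"$ {cmd}\n{output[:1000]}")  # truncate noisy output
    lines.append("Suggest the single next action and explain why.")
    return "\n".join(lines)

history = [("id", run_and_capture("id"))]
prompt = build_prompt("privilege escalation",
                      "Ubuntu 22.04, shell as www-data",
                      history)
print(prompt)  # in a real tool, this prompt would go to your LLM of choice
```

The hard design problems live in what goes into that context window: which phase you're in, what tooling is available, and which paths have already failed.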
🧠 Dive into the details in our blog post: We Taught an LLM to Think Like a Hacker
📬 Check out our site: https://phantomshift.5iprojects.com/
Would love feedback from anyone building, breaking, or training in this space!
What would you want a red team AI to actually do?
Replies
@arda_yanik1 would love to hear more if you’ve worked on operator training tools — appreciate the quote reaction!
@sophia_martinez4 I worked on GPT training during mvpAI's workflow development phase, and I also had the chance to explore all the conversational AIs on the market. What I realized during this process is that once you lose control over them, they can harm your workflow: in trying to speed up some phases of development, you can waste an enormous amount of time, because at the end of each conversation they offer different paths, apps, solutions, processes, etc., and developers can lose their focus. This was my observation.