Will AI-generated software be more secure—or a security nightmare?
AI is accelerating development, but could it also be a major security risk?
AI-generated code sometimes introduces vulnerabilities because models learn from insecure patterns in their training data—will this create new attack vectors?
Could hackers use AI to automate exploits and find weaknesses faster than ever?
On the flip side, will AI-powered security tools like Snyk AI or Dependabot actually make software safer by detecting threats in real time?
Are we heading toward more secure software, or will AI open Pandora’s box for cybersecurity threats? Let’s discuss!
Replies
I just asked ChatGPT to give me code to register a user, and the code it gave me has a plain-to-see SQL injection vuln. https://chatgpt.com/share/67e76f96-84e0-8008-b2d8-70912c1c8381
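To make the vuln concrete (this is a sketch in the style assistants often produce, not the actual ChatGPT output from the link), here's the string-interpolation pattern versus the parameterized fix, using `sqlite3` and hypothetical `register_user_*` names:

```python
import sqlite3

# Vulnerable pattern: the username is interpolated straight into the SQL
# string, so input like  x', 'h'); DROP TABLE users; --  can change the
# query's meaning.
def register_user_vulnerable(conn, username, password_hash):
    query = (
        f"INSERT INTO users (username, password_hash) "
        f"VALUES ('{username}', '{password_hash}')"
    )
    conn.execute(query)

# The standard fix: parameterized queries. The driver sends values
# separately from the SQL text, so user input can never alter the
# statement's structure.
def register_user_safe(conn, username, password_hash):
    conn.execute(
        "INSERT INTO users (username, password_hash) VALUES (?, ?)",
        (username, password_hash),
    )

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, password_hash TEXT)")

# A malicious-looking username is stored verbatim; the table survives.
register_user_safe(conn, "alice'); DROP TABLE users; --", "hash123")
row = conn.execute("SELECT username FROM users").fetchone()
```

The fix is one line of discipline, which is exactly why it's frustrating when a model doesn't apply it by default.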
I know Claude is typically used for code, but I'm lying in bed right now so I can't check with Cursor.
But look at it another way: AI is trained on what we give it, so if it's spitting out insecure code, it's because we're feeding it insecure code. In my experience, software engineers aren't typically trained in secure coding practices, so neither is the AI model.
AI code analyzers will likely be a good defense against these problems, if they're actually used. We already have static analysis tools that accomplish similar tasks, and compilers in some languages give clear warnings. I think this is just a natural evolution of technology: new technology brings new problems, and there's a constant back-and-forth of discovering and solving them.
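As a toy illustration of what such a static check does, here's a regex-based sketch that flags SQL built via f-strings or string concatenation. Real analyzers (e.g. Bandit's check for string-built SQL in Python) are far more thorough; this only shows the principle, and the regex is my own simplification:

```python
import re

# Flag calls like  execute(f"...")  or  execute("..." + x)  — the shapes
# that typically indicate SQL assembled from untrusted strings.
SQL_STRING_BUILD = re.compile(
    r"""(execute|executemany)\s*\(\s*(f["']|["'][^"']*["']\s*[+%])"""
)

def scan_source(source: str) -> list[int]:
    """Return 1-based line numbers that look like string-built SQL."""
    return [
        i
        for i, line in enumerate(source.splitlines(), start=1)
        if SQL_STRING_BUILD.search(line)
    ]

sample = '''conn.execute(f"SELECT * FROM users WHERE name = '{name}'")
conn.execute("SELECT * FROM users WHERE name = ?", (name,))
'''
flagged = scan_source(sample)  # only the f-string line is flagged
```

The point isn't that a regex is sufficient (it isn't), but that mechanical checks catch exactly the lazy patterns AI models reproduce most often.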
The biggest issue, in my opinion, isn't AI-generated code but AI itself. Just look at anyone practicing prompt injection: connecting an AI so it can execute functions in your program is, I think, the worst possible thing you can do for security. AIs can be persuaded to do the wrong thing, and I'm not convinced that's a solvable problem given our current understanding of these systems.
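A minimal sketch of the usual mitigation (this is not any specific framework's API; the tool names and `dispatch` helper are made up for illustration): since the model can be persuaded, the dispatcher, not the model, must enforce which functions are callable, via an explicit allow-list of low-privilege operations.

```python
# Only pre-approved, read-only tools are reachable, no matter what the
# model asks for.
ALLOWED_TOOLS = {
    "get_weather": lambda city: f"weather for {city}",  # harmless lookup
}

def dispatch(tool_call: dict) -> str:
    """Execute a model-requested tool call only if it is allow-listed."""
    name = tool_call.get("name")
    if name not in ALLOWED_TOOLS:
        # A prompt-injected request for e.g. "delete_all_files" dies here,
        # regardless of how convincingly the model was talked into it.
        return f"refused: '{name}' is not an approved tool"
    return ALLOWED_TOOLS[name](tool_call.get("argument", ""))

print(dispatch({"name": "get_weather", "argument": "Oslo"}))
print(dispatch({"name": "delete_all_files", "argument": "/"}))
```

This doesn't make the model trustworthy; it just moves the trust boundary out of the model, which I'd argue is the only defensible place to put it today.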