Created using Ideogram 2.0 Turbo with the prompt, "Clueless non-technical founder in a messy office surrounded by empty energy drink cans, dual monitors display nonsensical code and flashing security warnings, ominous red glow over the scene, 35mm film."

When AI Gives Non-Technical Founders the Keys: The Security Pitfalls of Vibe Coding

The rapid arrival of AI coding tools has given rise to what Andrej Karpathy has dubbed “vibe coding”: non-technical founders building software without deep technical know-how. I question this so-called democratization of software creation, because it also introduces significant security risks, as recently highlighted by Leo, a non-technical SaaS founder who got rugged after sharing his build on social media.

The Perils of Learning Security the Hard Way

Leo’s story is illuminating. After publicly sharing how he built his SaaS using Cursor, he faced targeted attacks that exploited his lack of security awareness. Attackers maxed out his API keys, bypassed subscription controls, and had a field day making unauthorized database changes. As Leo admitted, being “not technical” made addressing these issues particularly time-consuming.

This situation shines a glaring spotlight on the core problem: AI tools can help just about anyone cobble together functional software, but they can’t be expected to instill security awareness or best practices. The so-called democratization of coding through AI has created a new class of developers who can build products but lack the security training to protect them, and that gap in the security landscape is widening. Below, I’ll walk through some of the pitfalls people don’t consider.

Common Security Blindspots for Non-Technical Founders

When non-technical founders use AI tools to build applications, several security vulnerabilities frequently emerge. This is where the wheels come off and many people learn the hard way. Now that anyone can ship software, these are the things they don’t think about, and they are central to what I’m working on at Ironwood.ai. Here are some examples:

1. Insecure API Management

Many AI-assisted developers don’t properly secure, rotate, or rate-limit API keys. They often hardcode credentials, use overly permissive access policies, or fail to implement proper authentication flows. In Leo’s case, attackers were able to max out his API usage, suggesting he hadn’t implemented proper throttling or usage controls. I can cite my own experience here: I skipped per-user API limits because they seemed like a pain to build, and it ended up costing me money.
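
To make this concrete, here’s a minimal sketch of per-key throttling in an Express app using the express-rate-limit middleware. This is not Leo’s stack; the “x-api-key” header name and the limits are assumptions for illustration:

```typescript
import express from "express";
import rateLimit from "express-rate-limit";

const app = express();

// Throttle each API key separately: at most 100 requests per 15 minutes.
// The "x-api-key" header name is an assumption for this sketch.
const perKeyLimiter = rateLimit({
  windowMs: 15 * 60 * 1000,
  max: 100,
  standardHeaders: true, // send RateLimit-* headers so clients can back off
  keyGenerator: (req) => req.header("x-api-key") ?? req.ip ?? "anonymous",
});

app.use("/api", perKeyLimiter);

app.get("/api/data", (_req, res) => {
  res.json({ ok: true });
});

app.listen(3000);
```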

2. Weak Authentication Systems

Building secure login and authorization systems requires more than just functional code. It demands an understanding of session management, token security, and protection against common attacks like credential stuffing. Generative AI tools might spin up a working authentication system, but often one with security holes that seasoned attackers can exploit. Why this is bad is pretty apparent: you want a subscription-based app with a stable paywall and easy payments, but if the paywall can be hacked or simply bypassed, you are losing money.
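
To make “token security” concrete, here’s a hedged sketch using the jsonwebtoken library; the env var name and the one-hour expiry are assumptions. AI-generated auth code often omits the algorithm pin and the expiry, and both matter:

```typescript
import jwt from "jsonwebtoken";

const SECRET = process.env.JWT_SECRET!; // never hardcode this

// Issue a short-lived token so a stolen token expires quickly.
export function issueToken(userId: string): string {
  return jwt.sign({ sub: userId }, SECRET, { expiresIn: "1h" });
}

// Pin the expected algorithm so an attacker can't downgrade it.
export function verifyToken(token: string): string | null {
  try {
    const payload = jwt.verify(token, SECRET, { algorithms: ["HS256"] });
    return typeof payload === "object" && payload.sub ? String(payload.sub) : null;
  } catch {
    return null; // expired, tampered, or malformed
  }
}
```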

3. Insufficient Database Security

Leo mentioned unauthorized database changes, including users editing their own subscription status. This indicates missing access controls and input validation, problems that wouldn’t be obvious to someone without security training. Database security requires properly configured permissions, prepared statements to prevent SQL injection, and data validation at multiple levels. It’s worth noting that these problems can persist even in highly sophisticated database setups. Experts in the field have missed these exploits, so don’t be surprised if you’re missing them too.
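
As a hedged sketch of the access-control side (the route and field names are hypothetical, not Leo’s code), the fix for users editing their own subscription status is to whitelist which fields a client may update, rather than writing client JSON straight to the database:

```typescript
import express from "express";

const app = express();
app.use(express.json());

// Only these fields may be changed by the user themselves.
const USER_EDITABLE_FIELDS = new Set(["displayName", "avatarUrl"]);

app.patch("/api/profile", (req, res) => {
  const updates: Record<string, unknown> = {};
  for (const [field, value] of Object.entries(req.body ?? {})) {
    if (USER_EDITABLE_FIELDS.has(field)) {
      updates[field] = value;
    }
    // "subscriptionStatus" is silently dropped: only your billing
    // webhook handler should ever write it, never the client.
  }
  // ...persist `updates` for the authenticated user here...
  res.json({ updated: Object.keys(updates) });
});
```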

4. Missing CORS and Network Protections

Cross-Origin Resource Sharing (CORS) configurations are often misunderstood by new developers. Improper CORS settings can allow malicious websites to make requests to your API, and failing to implement rate limiting, IP blocking, and other network-level protections leaves applications vulnerable to brute-force and denial-of-service attacks. I have opinions about IP blocking: in many cases, if you just ban IPs coming from a VPN, you lose out on actual clients. It’s a tricky scenario.
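
Here’s a hedged sketch of a restrictive CORS policy using the cors middleware for Express; the origin list is a placeholder. The common AI-generated default, `origin: "*"`, is exactly the misconfiguration to avoid:

```typescript
import express from "express";
import cors from "cors";

const app = express();

// Only your own frontends may call this API from a browser.
// These origins are placeholders for the example.
const ALLOWED_ORIGINS = ["https://app.example.com", "https://www.example.com"];

app.use(
  cors({
    origin: (origin, callback) => {
      // Allow non-browser clients (no Origin header) and allowlisted sites.
      if (!origin || ALLOWED_ORIGINS.includes(origin)) {
        callback(null, true);
      } else {
        callback(new Error("Not allowed by CORS"));
      }
    },
    credentials: true, // needed if you send cookies across origins
  })
);
```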

The Vibe Coding Security Gap

[Figure: a quadrant chart plotting technical knowledge (low to high) against security risk (low to high). The “Vibe Coding Zone” sits at low knowledge and high risk, flagged with exposed API keys, weak authentication, unprotected databases, and missing input validation; the “Security Expert Zone” sits at high knowledge and low risk.]

Non-technical founders using AI coding tools face higher security risks due to knowledge gaps in fundamental security practices.

Security Cannot Be Vibed: Why Technical Knowledge Still Matters

AI tools may be able to generate working code, but they can’t cover contextual security considerations. I think security isn’t about making things work; it’s about understanding how they might fail. That requires knowledge above and beyond what AI can provide through simple prompts.

Here’s why fundamental technical know-how remains a critical factor, despite AI’s rising capabilities:

  • Understanding Attack Surfaces: Seasoned developers recognize potential weak points in their applications because they actively think like attackers to anticipate exploits, which beats a “building the plane as you fly it” strategy.
  • Defense in Depth: Solid security involves multiple layers of protection, not just the single-point solutions AI might suggest.
  • Monitoring and Response: Knowing what to look for and how to respond to security incidents requires experience that AI tools can’t substitute for. You need to stay involved rather than fully automated.
  • Security Updates: Maintaining security means staying current with new vulnerabilities and patches, an ongoing task that demands diligent attention and is hard to fully automate.

Practical Steps for Non-Technical Founders Using AI Tools

If you’re using AI tools to build your application, here are critical steps to amp up your security posture. They’re very basic, a minimum bar, but far better than nothing:

1. Never Expose API Keys Publicly

Store sensitive credentials in environment variables, not in your code. Use services like AWS Secrets Manager or HashiCorp Vault for managing secrets. Never commit API keys to public repositories, and rotate them regularly. Leo’s experience suggests that sharing implementation details may have revealed sensitive information. I consider this a very big issue and the first thing you should address; some people even pay for secret scanners.
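
As a minimal sketch (the variable name is a placeholder), loading a key from the environment with the dotenv package keeps it out of source control, provided `.env` is listed in `.gitignore`:

```typescript
// .env (never committed; add ".env" to .gitignore):
// PAYMENT_API_KEY=sk_live_...

import "dotenv/config"; // loads .env into process.env

const apiKey = process.env.PAYMENT_API_KEY;
if (!apiKey) {
  // Fail fast instead of running with a missing credential.
  throw new Error("PAYMENT_API_KEY is not set");
}

// Use `apiKey` when constructing your payment client; never log it.
```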

2. Implement Proper Authentication

Consider offloading this to authentication providers like Auth0, Okta, or Firebase Authentication rather than building your own. If you must implement custom authentication, use industry-standard practices: bcrypt for password hashing, JWTs for tokens, and solid session management. This can be difficult. It’s always tempting to build things in-house, and unless you have a special scenario, it should not be done.
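
If you do roll your own password handling, a hedged sketch with the bcrypt library looks like this; the cost factor of 12 is a common default, not a universal rule:

```typescript
import bcrypt from "bcrypt";

const COST_FACTOR = 12; // higher = slower hashing = harder to brute-force

// Store only the hash, never the plaintext password.
export async function hashPassword(password: string): Promise<string> {
  return bcrypt.hash(password, COST_FACTOR);
}

// Compare via bcrypt, never with a plain === on hashes you compute yourself.
export async function checkPassword(
  password: string,
  storedHash: string
): Promise<boolean> {
  return bcrypt.compare(password, storedHash);
}
```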

3. Secure Your Database

Apply proper access controls using the principle of least privilege: grant permissions only on an as-needed basis, and be careful about who receives them. Use parameterized queries to prevent SQL injection. Regularly back up your database and encrypt any sensitive data. Consider using ORM tools that abstract some of these security concerns. New vulnerabilities are being discovered all the time, too!
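
Here’s a hedged sketch of a parameterized query using the node-postgres (pg) client; the table and column names are assumptions. The point is that user input travels as a bound parameter and can never rewrite the SQL:

```typescript
import { Pool } from "pg";

const pool = new Pool(); // reads connection settings from PG* env vars

// UNSAFE: `SELECT ... WHERE email = '${email}'` invites SQL injection.
// SAFE: the driver sends $1 as data, so input can't alter the query.
export async function findUserByEmail(email: string) {
  const result = await pool.query(
    "SELECT id, email, subscription_status FROM users WHERE email = $1",
    [email]
  );
  return result.rows[0] ?? null;
}
```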

4. Add Network-Level Protections

Implement strict CORS policies and protections against cross-site request forgery. Add rate limiting to protect against brute-force attacks, as mentioned earlier in this post. Use a web application firewall (WAF) like Cloudflare to filter malicious traffic. Keep an eye out and stay active: the bad guys don’t sleep, and your app runs 24/7. It only takes one mistake or one weakness for your whole app to get rugged.
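
For the brute-force point, here’s a hedged sketch of a much stricter limiter scoped to a login route, reusing the same express-rate-limit middleware from the earlier sketch; the exact numbers are judgment calls:

```typescript
import express from "express";
import rateLimit from "express-rate-limit";

const app = express();
app.use(express.json());

// Logins get a far tighter budget than ordinary API traffic.
const loginLimiter = rateLimit({
  windowMs: 15 * 60 * 1000, // 15 minutes
  max: 10, // 10 attempts per IP per window
  message: { error: "Too many login attempts, try again later." },
});

app.post("/login", loginLimiter, (req, res) => {
  // ...verify credentials here (see the bcrypt sketch above)...
  res.json({ ok: true });
});
```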

5. Get Security Expertise

While AI can help you build, it can’t replace security expertise. So what does that mean? Code review. Consider hiring a security consultant for a code review or penetration test; many security professionals offer affordable services for startups. It’s far cheaper than going underwater. Think seriously about it.

As pointed out in my post on specialized AI models, AI tools have diverging strengths: some do better at certain things than others, but right now none excel at security analysis without human guidance. Someone like me will always be needed.

Learning From The Community: Similar Experiences

Leo’s case isn’t the only one. The emergent trend of “vibe coders” is leading to similar security incidents across the startup ecosystem. Security researchers are just starting to write about this, so we’re in the early stages of a trend that isn’t going away.

Common patterns from these incidents include:

  • Initial success building functional products with AI assistance from tools like Cursor
  • Public sharing of technical details without security redactions, as in Leo’s case, leaking things that should have stayed private
  • Targeted attacks exploiting known security oversights
  • A very fast, but often costly, education in basic security

This creates what I’m calling the “security gap”: the state where product-creation ability has drastically outpaced security awareness, leaving a ripe field of opportunity for attackers.

The Broader Implications of AI-Assisted Development

The rise of “vibe coding” and the security risks it presents have implications for the whole software development ecosystem. It makes everything more dangerous even as it makes building easier. I’m not fully on board with the whole thing, since I do worry about security, so for me it’s a mix.

Tool Vendors Share Responsibility

AI tool providers need to be clearer about security. Cursor, for example, could flag security issues and offer best-practice advice, which would help educate non-technical builders about the risks in what their AI produced. That said, it isn’t really the vendors’ fault, and perhaps they shouldn’t be expected to carry this alone; adding too much friction would also cut against the ease of building that draws people in.

Security Training Must Adapt

Traditional security training assumes programming fundamentals that AI-assisted developers may lack. New training models are needed that cover the concepts at a more abstract level, so these builders can actually take advantage of them. You can’t just throw programmer jargon at them; it doesn’t work.

Code Review Remains Essential

Even with AI, human code review remains essential, especially for security. If you’re a non-technical founder, get a technical reviewer to look at the security-critical sections of whatever you’ve built. I happen to offer Ironwood Custom Consultations for exactly this (and you should always verify my information too). Here is an example of something I could do:

[Figure: Security Analysis Results]

The ease of building with AI creates a market opportunity for security-focused services aimed at non-technical founders, so we will likely see “security for the rest of us” offerings emerge. People like me will probably see an uptick in revenue from this shift, hopefully.

Finding the Balance: Democratization Without Compromise

The democratization of software development through AI is mostly good. It gives more people a chance to bring ideas to life without the technical hurdles. But security is absolutely necessary, and builders must be given every available resource so they can proceed with as much understanding as possible. It’s scary that anyone can make software; I sometimes half-wish I’d gatekept it on my way up.

The ultimate goal isn’t to stop people from building with AI, but to make sure they have the know-how and resources to do so safely. Here is what the chart would look like with security measures in place:

[Figure: how safety measures change security breaches]

For platforms like Cursor, a good way to stand apart would be adding security guides to help stop people tripping over common weak spots. Think of it as setting up lane bumpers. The effect could go well beyond the tool itself:

  • Security-focused defaults and templates
  • Red flags for common security issues, since people don’t know until they’re informed
  • Learning resources for security fundamentals, kept front and center rather than forgotten
  • Alliances with security review firms

Conclusion: Security as a Journey, Not a Vibe

Leo’s story suggests that AI can create working code, but security demands greater depth: an understanding that goes beyond quick solutions.

With AI tools, success means recognizing their limitations, seeking expertise, and treating security as the ongoing process it is. AI lets founders build; security practices let them build safely.

“Vibe coding” is a chance to spread innovation while security education catches up, but it requires security guidelines and help along the way. If the balance is right, AI will reshape software creation.