AI Codes Fast, Lies Faster — Part 1



What Just Happened

I spent a full week trying to build a production-grade codebase with Cursor AI.

It was fast, relentless, and sometimes brilliant.

It also ignored my rules, wrote bloated code, created horrible architecture, faked its own QA, and exposed my API keys.

This is Part 1 of what happened, what I learned, and why I now have both some working code and enough trust issues to keep my therapist busy for months. There was so much material that I split it into a Part 2, which I’ll share next week.

I outline a few issues I ran into, and further down, I explain why those things happened, which is super important.

One important note: I am fairly critical of Cursor AI (and basically any AI coding tool), but as anyone who has spoken to me in the past week can attest, I’m absolutely bought into the power behind these new tools. In fact, I actually love Cursor and cannot wait to go deep with Claude Code.

They’re just incredibly rough, unwieldy, and require a lot of practice to figure out how to make them truly work for you. And even then, while they are insanely powerful, they may not be as powerful as the hype machine would lead you to believe.

If you’re a decision-maker thinking you can just use AI to solve some problem, please pause, take a weekend to actually try to build something, and experience the gaps yourself.

Or, if you’re willing to dive deeper, find a small community of people to learn and share with. I’ve done it (more on that below), and it’s turbo-charged my learning.

You may be surprised at what you learn.

Acknowledgements

Before I dive in, I’d like to give shout-outs to a few key people who have been super influential and helpful to me.

First are Kati McCoy, Robert Strobel, and Heneu Tan. We started doing weekly Zoom calls to discuss AI and have grown a little community of builders, tinkerers, and explorers. Find me or any of the other co-founders on LinkedIn if you’d like to join.

I’ve learned so much from this group, and it was Heneu who demoed his Cursor setup for us and inspired me to finally give it a proper deep dive myself.

I also want to highlight Jenna Adams-Valadez, who has been an important voice on ethical and over-reliance concerns, some of which will show up below and even more next week.

Two additional friends, Bryan Collick and Ramanan Sivaranjan, have shared a LOT of knowledge and healthy skepticism with me. I’ll likely link to articles they’ve shared with me at some point.

Here’s what I learned, in short snippets.
