Pick a stack you can move fast in

When AI hides what is happening in the background, every framework looks the same. But the stack you pick still shapes everything.

Pick a common, reliable stack on the free tier to start with. Swap things out later, once you know what your product actually needs.

This post is part of a series. The first post in the series maps the full gap between a working prototype and a shipped product. This one goes deeper on stack selection.

When AI tools handle the syntax and scaffold the boilerplate, the differences between frameworks become harder to see. The AI will write code for Next.js or Nuxt, Postgres or SQLite, Vercel or Fly. The learning curve flattens and the choice feels less consequential than it used to. What is underneath each option (how it handles data, where it breaks, what it costs to maintain) is not always obvious until you are committed to it.

The choice is not arbitrary. The stack you pick shapes how fast you iterate, how reliably the AI helps you, whether you can understand what the AI produced, and whether you can fix things when they break. Those consequences show up six weeks in, not on day one. And the stack is only part of it: how you guide the AI within that stack matters just as much, which is where context files come in. The next post in this series covers that.

You still need to understand what your stack does

There is a version of AI-assisted building where you never read the code, never understand the architecture, and just keep prompting until something works. For a prototype that is going to be used by you or a handful of people, that is fine. For a product real people are going to use, the bar is higher.

The reason is not about code quality in the abstract. It is about what gets hidden from you. AI tools abstract away a lot: how requests are routed, where data is stored, how authentication works, what happens at the edges. If you are not technical by background, those abstractions feel like the AI handling it for you. They are not. They are still there, still making trade-offs, still capable of failing. When something goes wrong, you are the one making the call. The AI can propose options, but you are accountable for which one you choose, and you cannot choose well between things you do not know exist. Part of working with AI well is asking it to surface what it has decided on your behalf, not just what it built.

As one builder put it: "The code works, but it becomes a nightmare to maintain because I do not have the codebase in my head." The AI kept writing. The codebase kept growing. The scope expanded with each session, and without a clear mental model of the whole, the builder lost track of what they had and where things belonged.

This does not mean you need to be a senior engineer before you start. It means you need enough understanding to make real decisions: how data flows through the app, what happens when a request comes in, where the security boundaries are. That is not expert knowledge. But it is not zero either, and the bar rises with what you are building. A personal tool for your own use is one thing. An app handling real users, payments, or sensitive data is another. At that point, having technical oversight is not optional, whether that comes from your own growing knowledge or from bringing in someone who can review what the AI produced. How you build that understanding incrementally is something we will return to in later posts.

Pick what the model knows well

AI coding tools learn from public code. Some stacks are represented far more heavily in that training data than others. Frameworks like Next.js, React, Rails, and Flask have enormous bodies of public documentation, tutorials, and open-source examples. The models are fluent in them. Less established or more niche frameworks have thinner representation, and the output reflects that.

This is not about which framework is technically superior. It is about leverage. A well-represented stack means the AI produces more accurate suggestions, hallucinates API signatures less often, and handles common patterns more reliably. You spend less time correcting it.

There is a related cost that is easy to underestimate. "Vibe coding on existing codebases is a nightmare. Every new session requires 20-30 minutes explaining the stack, architecture, and conventions." When you work with a popular, well-documented stack, you get a head start on that context-building. The AI already knows what a Next.js API route looks like. You do not have to re-establish that every session.
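To make that concrete, here is roughly the kind of pattern the model has seen thousands of times: a minimal Next.js App Router API route. The file path and response shape here are illustrative, not taken from any particular project.

```typescript
// app/api/health/route.ts — a minimal Next.js App Router API route.
// The file's location determines the URL (/api/health), and the name
// of the exported function determines the HTTP method it handles.
export async function GET(): Promise<Response> {
  // Response.json is the standard Web API helper, available in
  // Next.js route handlers (and in Node 18+ generally).
  return Response.json({ status: "ok" });
}
```

Because this convention is so heavily represented in public code, the model rarely needs to be told how it works.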

You can work effectively with less common stacks, and sometimes there are good reasons to. But it costs more: more context-setting, more verification, more time spent correcting outputs the model is less confident about. Go in knowing that.

Learn enough to ask the right questions

One builder shipped a fully functional iOS app without ever having written a line of Swift. No engineering background, no mobile development experience. The app worked. That kind of story is becoming more common, and it is worth taking seriously.

What gets less attention is what happens next. Shipping a first version is one milestone. Maintaining it, extending it, and fixing it when it breaks in production are different ones. The builders who sustain momentum over time tend to develop strategies for judging what the AI produces, not just accepting it. Some of those strategies use AI itself: asking the model to explain its own choices line by line, asking what edge cases it considered, asking what could go wrong. One approach is to annotate code with your own plain-language explanation of what it does, then ask Claude to correct your understanding. The gaps it finds are exactly the things you needed to know. We will return to these techniques in a later post.

The point here is narrower: the stack you choose affects how easy it is to develop that judgment. A popular, readable framework with clear conventions is easier to reason about than one with unusual abstractions. And the ability to reason about your stack, even at a surface level, is what separates builders who can recover from problems from builders who are stuck when something goes wrong.

"They write code fast. Tests pass. Looks fine. But when something breaks in prod they are stuck." The stack did not cause that. The lack of any mental model of what the stack was doing did. Using AI to understand what it built, not just to build, is part of the answer.

Boring technology wins, especially here

The "choose boring technology" principle has been a fixture of software engineering thinking for years. The argument is straightforward: mature, widely adopted tools have more documentation, more community knowledge, and more solved problems. When you hit an obstacle, someone has almost certainly hit it before and written about it.

That argument applies twice over when building with AI. A stack with a large body of public documentation and examples gives the model more to draw on. Common failure modes in popular stacks have known solutions the model can suggest. Obscure failure modes in niche stacks may have no documented solutions at all, which means the model can confidently suggest the wrong thing.

Google indexing problems with Next.js are a good example. Redirect behaviour, canonical issues, missing sitemaps: these come up regularly, they are annoying, and they have documented solutions. The equivalent problem in a less common framework might have no documentation, no community thread, and no model training data to draw from.

If you are not a developer by background, starting with widely adopted defaults is a sensible choice. You do not have to justify it. Widely adopted stacks have more tutorials, more community answers, and more examples for AI tools to draw from. Once you have shipped something and have a clearer sense of your actual constraints, you are in a better position to ask whether a different tool would serve you better. Until then, boring is a feature.

You can start for free

Most of the tools in this space have generous free tiers. Neon Postgres, Vercel, Resend, and similar services will let you build and ship a real product without spending anything. The paid tiers unlock things like higher usage limits, custom domains on some platforms, and better observability. Most of those constraints only become relevant after you have users. When you are still validating, free is the right tier. Upgrade when the limitation becomes the actual problem, not before.

What this project uses and why

This newsletter runs on Next.js, Tailwind CSS, Neon Postgres, and Vercel. I have used the same core combination for the other sites I have built. It is not the only valid choice, but it is a useful worked example of decisions made under real constraints.

Next.js because the models know it well, it has a clear file-based routing convention, and it has a CLI that works cleanly with AI agents.

Tailwind CSS because AI tools generate consistent, readable UI code with it. The utility class approach means the model can produce a working component without needing to manage a separate stylesheet, and the output is easier to verify visually.

Neon Postgres because it is serverless, has a generous free tier, and uses standard SQL. Nothing unusual for the model to learn.

Vercel because deployment is a single command. Reducing the gap between writing something and seeing it live is worth a great deal when iterating quickly.

This newsletter started with markdown files for content. That worked until there were enough posts that a database made more sense. The switch happened when the tool stopped serving the need, not before. That sequencing matters.

Make the most of what you have before adding more

The most common stack mistake is not picking the wrong tool. It is switching tools before validating the product.

Switching frameworks is building. Switching databases is building. Refactoring your architecture is building. None of it is shipping. "Built 6 SaaS and got 0 customers." The satirical framing lands because it is recognisable: building becomes the activity, and users become an afterthought.

The question worth asking before any stack change is: does my current setup prevent me from solving the next problem my users have? If the answer is no, go deeper on what you already use. Most stacks have more capability than builders discover before moving on.

There is a related discipline: try to get value from tools you already have before adding new ones. Every additional dependency is a surface for things to go wrong, a new thing to keep updated, and a new thing the AI has to understand and reason about. The cost is real even when it is invisible.

CLI tooling now matters in ways it did not before

Two years ago, few builders were choosing a database partly because it had a good command-line interface. Now that matters.

AI coding agents interact with your project through the terminal. They run commands, read output, configure settings. A stack with strong CLI tooling typically means the agent can deploy, test, and configure without you needing to copy and paste between a browser dashboard and a chat window. The feedback loop is tighter and the whole session runs more smoothly.

Next.js has a CLI. Vercel has a CLI. Neon has a CLI. Tailwind works from a config file. These are not coincidental choices. When evaluating any new tool, it is worth asking: does this work well from the terminal? If the only way to configure it is through a web dashboard, the AI cannot help with it directly.
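As a rough sketch of what that terminal-driven loop looks like in this particular stack (exact commands depend on your setup and the CLI versions you have installed; treat these as illustrative):

```shell
# Build locally so the agent can read compile errors directly from the output.
npx next build

# Deploy to production from the terminal with the Vercel CLI.
vercel --prod

# Fetch the database connection string with the Neon CLI,
# without opening a browser dashboard.
neonctl connection-string
```

Every step an agent can run and observe in the terminal is a step you do not have to relay by hand.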

This is a new selection criterion. It is worth taking seriously.

Your context file is part of your stack

Context management has become as important a skill as framework selection. As one builder described it: "CLAUDE.md is not a prompt. It is an operating system." "Do not say 'be helpful.' Say 'when you encounter X, do Y.'"

Most AI coding tools support some version of this: Claude Code uses a CLAUDE.md file, Cursor uses .cursorrules, others use similar project-level instruction files. The principle is the same across all of them. Your stack is not just the code and the dependencies. It includes the instructions that tell the AI how to work with it: the conventions, the constraints, the things to avoid, the things to prefer. A well-documented stack with a good context file will outperform a technically superior stack with no context file, because the AI has what it needs to stay consistent across sessions.
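As a sketch of what instructions in that "when you encounter X, do Y" spirit might look like, here is a small invented CLAUDE.md fragment. The specifics are illustrative, not taken from any real project.

```markdown
# CLAUDE.md

## Stack
- Next.js (App Router), Tailwind CSS, Neon Postgres, deployed on Vercel.

## Conventions
- When you add a page, put it under app/ and follow the existing routing layout.
- When you touch the database, use the existing query helper; do not add an ORM.
- When a build fails, run the build locally and read the error before editing code.

## Avoid
- Do not add new dependencies without asking first.
- Do not restyle existing components while fixing an unrelated bug.
```

Notice that every line is a concrete rule the model can act on, not a vague aspiration.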

Part of that documentation is an architecture file: a living map of how the pieces connect, how data flows, where the boundaries are. Some builders maintain this as a Mermaid diagram. Others keep a plain text file. The format matters less than the habit of keeping it current. When the AI can read a reliable map of your project, it makes better decisions about where new code should go and what it might break.
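If you keep the map as a Mermaid diagram, it can be very small and still useful. The components below are illustrative, not a prescription:

```mermaid
flowchart LR
    Browser --> App[Next.js app on Vercel]
    App --> API[API routes]
    API --> DB[(Neon Postgres)]
    API --> Email[Email service]
```

A diagram this size takes a minute to update and saves the AI from guessing how the pieces connect.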

If you are building with Claude Code, the earlier post on your first AI coding session covers how to start a CLAUDE.md and what to put in it from day one. The next post in this series goes deeper: what a context file that actually pulls its weight looks like once your project has grown past the early prototype stage.


Pick the stack the models know well. Learn enough to judge what they produce. Use boring, mature tools. Document what you chose and why. The goal is not the most technically impressive setup. It is the one you can move fast in, and understand well enough to fix when it breaks.

Building something with AI tools? I am offering free product audits while the newsletter is new.
