The Era of Personal Micro-Software
Reject the Vibe
When I first heard the term “vibe coding” I immediately hated it, and it took me a minute to reflect on why. Is it just because I’m a grumpy geriatric millennial refusing to ride the vibes? Maybe, but I think it has more to do with how it implies a lazy sort of magical thinking. Vibes are nuanced. Reading vibes is a deeply human activity, one that relies on an unspoken (and often high-bandwidth) understanding that just isn’t there when I use AI-assisted programming tools. Computers can’t read the vibe in the room, and they can’t read your mind. If anything, we should be calling AI-assisted programming “therapy-level-detailed-communication coding,” but that’s a mouthful, so I guess we’re stuck with “vibe coding” for now.
I’ve spent a lot of time thinking about what this mode of programming means for young people learning CS, and what role it might play in their learning journey. To help me develop that point of view, I’ve been building a bunch of projects in various AI programming tools (mostly in the Google ecosystem, since that’s where I live in my day job, though I’m looking to explore any tool with a distinctive approach). If you know me, you know I love a schema, and I dream of a clear one for the role of generative AI in CS education. I’m hopeful that exploring these tools and writing about them here will help that vision coalesce.
Falling out of Flow
I’m something of an AI skeptic, or at least a very critical consumer, but I have to admit that I’ve had a ton of fun using these tools to build things I just wouldn’t have had the time to dedicate to in the past. In some ways, it’s the most joy I’ve felt in programming in a while. That said, the actual psychology of the process feels completely different. Most notably, I haven’t found the “flow state” working this way that I find in traditional programming. And yes, the irony of rejecting a buzzword like “vibe” only to immediately mourn the loss of my “flow state” is not lost on me. Maybe we’re all just making up squishy words for how sitting at a keyboard makes us feel.
But whatever you want to call that feeling of deep, continuous focus, its absence here comes down to the constant handing off of work and thinking. In traditional programming, I’m actively problem-solving in a way that lets me build mental momentum (and sometimes it feels like breaking through barriers even requires that momentum). I can keep a running plan of the system’s architecture in my head, and the act of typing the code is just the mechanical execution of that evolving plan.
With AI-assisted programming, that momentum is constantly interrupted. It feels much more like aggressive multi-tasking than hyperfocus. I spend time thinking, reviewing, and crafting a highly specific plan. Then, I hand it off to the machine. While the agent plugs away, I can go off and think about something else, which sometimes means spinning up another agent and reviewing its work on a totally different task. This isn’t an inherently bad thing, but it feels different. Sometimes I wonder if it’s scratching a completely different ADHD itch than hyperfocus, enabling me to flit between ideas on a whim, trusting that the agents will be there to pick up my dropped thoughts and do something with them.
Planning by Accident, or Not at All
This shift in momentum really reinforces the need for planning, even on a “solo” project. When I’m mining bits for a personal project the old-fashioned way, I don’t always spend time meticulously writing out the architecture upfront, even though I know the process is valuable (and would be necessary if I wanted to collaborate with a larger team). I’ve been bitten by underplanning, but I also know I can often get away with it. The plan forms in my mind over time, and nobody needs to understand it but me. It may not be efficient, but it’s there even when it’s unstated. When you’re delegating to an AI agent, though, you really have to externalize all of that thinking if you want it executed faithfully. If you don’t write it out clearly up front, the hand-off fails, and the momentum dies completely.
But here is where things get interesting: what happens when you intentionally choose not to provide a plan?
If you give an agent a loose prompt and no architecture, it’s going to go in unexpected directions. That direction is highly influenced by the underlying model, its training, fine-tuning, and temperature. Sometimes the result is a genuinely surprising, elegant solution you wouldn’t have thought of. Sometimes it’s a structural disaster. And very often, it will be biased in ways you don’t immediately notice, defaulting to standard, outdated, frequently inaccessible patterns of building, or making assumptions about user behavior that you wouldn’t have made yourself.
As I think about that schema for AI-assisted programming I mentioned earlier, I’m realizing that the “unguided or loosely directed agent” might deserve its own distinct slot, particularly in the world of creative computing. There is a fundamental difference between “I told the machine exactly what architecture to build” and “I discovered what emerged from the model.” Both can be valid modes of operating, but failing to be intentional about which mode you are currently in is exactly how things go off the rails.
Building on Shaky Foundations?
All of this brings me to a persistent worry I have as an educator. I am having a lot of success building with these tools right now, but I am invisibly and subtly relying on a deep foundation that I built before LLMs could whip up code for me.
When I ask an agent to scaffold a project, I already have an idea of how I’d decompose the problem into manageable chunks, and I’m judging the output against that expectation. I know what a reasonable database schema looks like, and I can (sometimes) tell when the machine is hallucinating a library that doesn’t exist. My foundation in traditional computer science allows me to act as a competent manager for a digital intern.
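To give a flavor of that internal yardstick, here is a hypothetical baseline (the tables and fields are invented for illustration) of the kind of “reasonable database schema” I’d compare an agent’s scaffolding against for, say, a simple task tracker: explicit keys, constraints, and an index where the access pattern calls for one.

```python
# A hypothetical baseline schema for a simple task tracker; the table
# and column names are invented for illustration.
import sqlite3

conn = sqlite3.connect("tasks.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS users (
    id    INTEGER PRIMARY KEY,
    name  TEXT NOT NULL,
    email TEXT NOT NULL UNIQUE
);

CREATE TABLE IF NOT EXISTS tasks (
    id         INTEGER PRIMARY KEY,
    user_id    INTEGER NOT NULL REFERENCES users(id),
    title      TEXT NOT NULL,
    done       INTEGER NOT NULL DEFAULT 0,  -- SQLite has no BOOLEAN type
    created_at TEXT NOT NULL DEFAULT (datetime('now'))
);

-- Tasks are almost always fetched per user, so index the foreign key.
CREATE INDEX IF NOT EXISTS idx_tasks_user ON tasks(user_id);
""")
conn.close()
```

Nothing fancy, but if an agent hands me a single forty-column table instead, that foundation is what tells me to push back.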
What happens to the learner who skips that foundational step and goes straight to prompting?
There is a real danger here that we are going to see a generation of young people offered a fast track to shiny software that “works” on the surface but is fundamentally broken underneath. When I think back on the ugly things I made when learning to program, I wonder if I would have been drawn to the quick route to a professional-looking product I didn’t understand. There are plenty of horror stories of AI hardcoding plaintext passwords into a repository or setting up terribly inefficient, bank-breaking API loops (and my own experience corroborates them). If you don’t know enough to check the agent’s work, you are placing an enormous amount of blind trust in a system that is inherently probabilistic. Foundational knowledge now has to include a much broader understanding of computing systems and security just to safely manage the AI writing the syntax.
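To make those two failure modes concrete, here is a hedged sketch of what they tend to look like; the endpoint, key, and function names are all invented for illustration.

```python
# Hypothetical sketches of two common agent failure modes; the endpoint,
# key, and names are invented for illustration.
import os
import requests

# Anti-pattern 1: a secret hardcoded straight into the repository.
# API_KEY = "sk-live-abc123"   # BAD: anyone who can read the repo has the key

# Safer: read the secret from the environment at runtime.
API_KEY = os.environ["EXAMPLE_API_KEY"]

# Anti-pattern 2: one network request per item. Slow, and on a metered
# API, potentially bank-breaking.
def enrich_items_slow(item_ids):
    return [
        requests.get(
            f"https://api.example.com/items/{item_id}",
            headers={"Authorization": f"Bearer {API_KEY}"},
        ).json()
        for item_id in item_ids
    ]

# Safer: a single batched request, if the API offers one.
def enrich_items_batched(item_ids):
    resp = requests.get(
        "https://api.example.com/items",
        params={"ids": ",".join(map(str, item_ids))},
        headers={"Authorization": f"Bearer {API_KEY}"},
    )
    resp.raise_for_status()
    return resp.json()
```

Neither version is exotic, and that is the point: telling them apart takes exactly the kind of foundation I worry learners will be tempted to skip.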
There might actually be a pedagogical silver lining here. It is really valuable to watch the “thinking” process as the agent operates (even though I don’t love the way that phrase personifies the processing). I wish more tools had a mode that brought that intermediate step forward and encouraged users to interrogate it. I’m imagining ways in which watching that trail of receipts could actually be a tool for building greater AI literacy. If a student can see the exact logical leaps the machine made to arrive at a solution, they can start to critique the process rather than just blindly accepting the product.
Good Enough for Me is Good Enough
If the foundations of this AI-generated code are inherently a bit shaky, why are we building with it at all? Perhaps the answer lies in redefining (or just actually articulating) what we are trying to build and why. Maybe we are seeing the emergence of a new layer of the software stack. Maybe we’re entering the era of personal micro-software.
Most of the things I’m making with AI right now are hyper-specific, single-user widgets. For example, I recently built a custom tool to help my friends write music together. It’s not particularly innovative, but it’s very specific. It’s designed around the workflow that makes sense for us, and it includes features no commercial product would ever bother with. I don’t care if it works for anyone else, and I’ve specifically made choices that I don’t think would be good in a generalized product. It fills the exact weird hole I found in my music making, and it does so better than any paid product I could find.
Often, when I show this kind of tool to others, the immediate reaction is some variation of, “You should sell this!” or “You need to put this out there!” We have been culturally conditioned to believe that the natural endpoint of any clever piece of software is a startup, or at least a monetized release. But what I’ve made isn’t a tool for anyone else. It’s in some way its own piece of art, an artifact that says something specific about my collaborative process. It’s for me. It’s for my friends. It’s not for the rest of the world. Why would I ruin that by trying to make it into something it was never meant to be, in the interest of scale or success?
The drive to productize completely misses the point (and ignores the reality of how much of a nightmare it is to maintain, secure, and scale software for other people). I am reasonably handy, and I’m perfectly comfortable building a deck in my own backyard. If it’s slightly unlevel or the stairs are a little creaky, that’s my problem to live with, and I know what tradeoffs I made to get there. But I know better than to suggest I can build a deck for you. I’m not a licensed contractor, and I don’t want the liability if your foot goes through a poorly fastened board.
I think the same logic applies to this new wave of AI-assisted programming. I am perfectly happy to deploy a hastily prompted, duct-taped web app on my own home server to manage my own idiosyncratic tasks. But packaging that up, securing it for the public, and asking you to trust your data to my digital intern’s unverified code? Absolutely not. For command-line warriors this isn’t a new concept; I’ve been making little widgets to solve little problems forever. The difference is that they now look really polished, and now anyone can make something that does.
Redefining success in this new paradigm means stepping away from the urge to scale. Success isn’t launching a product; it’s simply going from a localized idea to a functional tool that makes your own day a little better. That needs to be part of how we talk about these tools, and part of the broader narrative in CS education. CS is for everyone, and the fact that you can make your own micro-software is proof of it.
Over the next few weeks, I plan to write up some retrospectives on some of the micro-software journeys I’ve undertaken here on the site. I’ll talk about the logic, the architecture, and the friction points of building them. Just don’t expect me to release the code.
