On coding with LLMs
When I was working at Apple, the company had a strict no-LLM policy, so I wasn't able to use LLMs to accelerate my work. At the time, AI-assisted coding was considerably less powerful than it is now, but I'm sure even then I could have benefited from automating repetitive and simple coding and writing tasks.
Now I'm between jobs and exploring career options. Naturally, AI is part of the process, in multiple ways. I'll write more about some other experiments and workflows in later posts, but here I'll get into my first impressions of AI-assisted coding.
The spectrum
Of course, people use LLMs in so many ways that it's almost useless to say "I use LLMs to write code."
At one end of the spectrum, people are "vibe coding" to create tools and experiences with little to no thought about the deeper process. I'm not against the practice, per se. It feels like the latest layer up the stack of abstraction that we've been building ever since Grace Hopper's A-0 system in 1952. There are always tradeoffs, but on balance, I think it's good that more people have an entry into creating software.
At the other end of the spectrum is structured, planned coding, in which people who have deep knowledge use AI to handle the grunt work of producing code. They rely on their expertise to create detailed planning documents (often assisted by AI) and prompts that they feed into their LLM of choice. There's a whole ecosystem of tools that assist in the process, from IDEs like Cursor to command-line tools like Aider, all supported by APIs provided by the major players in generative AI.
My approach
As a senior-level programmer, I can choose where I want to play on this spectrum. So far, I've stayed away from vibe coding, though I have plans to jump into that sandbox.
Somewhere in the middle of the spectrum, I wrote this website with Claude, which let me preview the rendered results right alongside our chat. It worked out quite well. I have decent working knowledge of HTML, CSS, and JavaScript, but I'm far from an expert web developer. I didn't work through a detailed plan or generate any prompts in advance, but I did iterate on the code and relied on the LLM to do the bulk of the work. I stepped in to correct errors when Claude seemed unable to fix them on its own. I also did some refactoring and cleanup after the fact.
I have a plan to upgrade my site with a rudimentary, custom CMS. I want to avoid repeating code for common page elements such as navigation and footers, and I'd like to be able to write blog posts in Markdown and generate the HTML when publishing. For this project, I'm trying out a more structured approach. There's a great post by Harper Reed that I'm using as a guide, though I've customized the process a bit.
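To make the two requirements concrete, here's a minimal sketch in Python of what such a build step could look like, using only the standard library. The layout markup, the tiny Markdown subset, and all names here are my own illustrative assumptions, not the actual design (a real build would use a proper Markdown library):

```python
from string import Template

# Shared layout: navigation and footer live in exactly one place,
# so individual pages never repeat them. (Hypothetical markup.)
LAYOUT = Template("""<!DOCTYPE html>
<html>
<body>
<nav><a href="/">Home</a> <a href="/blog/">Blog</a></nav>
<main>
$content
</main>
<footer><p>Thanks for reading.</p></footer>
</body>
</html>""")

def md_to_html(md: str) -> str:
    """Convert a tiny Markdown subset (top-level headings and
    paragraphs) to HTML. Stands in for a full Markdown parser."""
    blocks = []
    for block in md.strip().split("\n\n"):
        block = block.strip()
        if block.startswith("# "):
            blocks.append(f"<h1>{block[2:]}</h1>")
        else:
            blocks.append(f"<p>{block}</p>")
    return "\n".join(blocks)

def publish(md_source: str) -> str:
    """Render one Markdown post into the shared site layout."""
    return LAYOUT.substitute(content=md_to_html(md_source))

page = publish("# Hello\n\nFirst post.")
```

The point of the sketch is the separation: posts are written as plain Markdown, and the publishing step wraps each one in the shared layout so navigation and footer changes happen in a single file.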
Online course
I'm also exploring structured AI-assisted coding in Principled AI Coding by IndyDevDan. It's based on Python, which I know well enough to do the exercises as I watch the videos.
I like it so far. The course is built around Aider, walking through the fundamentals of writing appropriate prompts and interacting with the LLM from the command line, all the while encouraging learners to avoid writing any code whatsoever by hand. An approach that's so reliant on LLMs to actually write code seems like it'd be prone to tons of errors, but with good planning and detailed prompting, it's surprisingly effective. Of course, as an experienced programmer, I still need to review and test the code, so ultimately it's like working with a competent junior programmer.
So far, so good
As recently as a couple of years ago, I was skeptical that AI-assisted coding would ever amount to more than helpful auto-complete. Now I think it's reasonable to assume that LLMs will become ever more helpful coding assistants. A good software developer can focus on system architecture and overall design, write specifications, review and test code (and perhaps occasionally tweak a commit), and leave the coding itself to AI.
I think this is a great case for LLM use. I have plenty of opinions on the ethics and utility of AI, and I definitely think there are lots of overinflated expectations out there—carefully nurtured by companies whose motives and missions are far from pure. But LLMs, used with care in the right hands, are powerful domain-specific tools. Regardless of how much I ultimately use them in my personal and professional life, I'm glad I'm exploring their utility and limitations.