AI Coding
date
Oct 15, 2025
type
Post
year
2025
slug
ai-coding-panic-when
status
Published
tags
AI
Claude Code
Tool
summary
I’m excited. I’m scared. I’m confused.

Intro
How close are we all to being fired? And then what? Let me give you a piece of my mind…
I’m not a classic software engineer. With a background in art and design, programming has always just been a means for me to bring my ideas to life. I like the challenges that programming brings, but I try not to get too distracted by that. Because what has to be front and centre - always - is the player experience.
No one will play a bad game, no matter how elegant the code that runs it.
Good code doesn’t make a game better, it just makes it function as intended.
But yes, bad code can ruin a project in oh so many ways.
So when it comes to AI-assisted coding, I’m very interested, but I’m also cautious! I want it to do all the boring stuff for me, so I can focus on the interesting bits! But as it turns out, it’s not quite that simple. It’s easy to do more damage than good. But if AI gets too good, then we’re all out of a job. And believe me, it’s coming for us. And not just software engineers, it’s coming for all of us. Everyone who spends their working hours in front of a computer, gone. And it’s getting closer and closer at an alarming pace. Brace for impact.
When I started working, all we had was a simple text editor that did absolutely nothing to help with anything. But from there we got better and better tools. First simple auto-completion, then IntelliSense, then ReSharper with static code analysis, then LLM chat-windows like Copilot, then AI-first IDEs like Cursor, then terminal-native agentic AI like Claude Code.
Agentic AI means that it’s not just a chat-bot that you feed a prompt and out come some words. Agentic means it has tools that it can work with. You give it a task, it will split it into multiple steps, and use whatever tools it has at its disposal to get the job done.
Let’s look at Claude Code: It’s a command line tool. You navigate to a folder in the terminal, start it up by typing claude, and tell it to /init. It then analyses your existing project, writes a file (CLAUDE.md) about its findings, and from there you can tell it to get to work. It’s excellent at laying out plans and following through. It can do everything you could do in the terminal - and it’s pretty good about asking for permission first. And what it can accomplish is pretty damn impressive. I’ve used it extensively over the last few months and it’s done a lot of different things for me. I got it to code review many of my old projects, implement null reference checks, fix security flaws, and find memory leaks. I got it to refactor complicated parts and split them into simpler chunks that were easier to understand. I used it to vibe-code entire little helpful tools.
Blog Fix
Let me give you one example: I had a problem with my blog. I use Notion for content management. Whenever I add a page to a certain part in Notion, it shows up on my blog. But Notion must have changed something on their side, because thumbnail images for posts would no longer show up on the web. I had no idea how to fix this. As with all web development, I started with an existing framework, and a lot of the functionality comes from packages that get pulled in - like the Notion integration. Full of complicated code that I knew absolutely nothing about. Fixing this would have taken a long time. I would have had to get an understanding of the entire package before I could have even tried to figure out why and where it was now failing. But I put Claude Code on the job, and after an evening of back and forth, where we implemented some helpful debug stuff and tried a few things, it was fixed. And most of what I contributed was direction and testing. I mean, don’t get me wrong - AI wasn’t even close to getting it done on its own, but still - kinda magical. Pretty awesome. But also: scary.
The State of AI
We’re getting to the point where AI can fix most simple bugs quicker than I can. It can write most basic code much quicker than I can. And with the right instructions and directions it can create moderately complex projects.
But AI doesn’t learn the way we do, nor does AI remember the way we do, so a lot of the challenge of working with AI is to figure out how to give it the right context to get its tasks done.
There have been entire books written about how to code in a way that aligns with the way our human brains function. For example: our short-term memory is quick to fill and easy to empty, but we can only keep about 6 things in there at the same time. So any code that requires us to keep track of more than 6 moving pieces at once will inherently be hard to follow.
Our long-term memory holds a lot of the surrounding knowledge we need, like the syntax of the programming language, memories of certain techniques, the meaning of keywords, etc. - but it’s hard to fill. We have to put in work to make things stick.
So what about AI? The AI has been trained on gigantic amounts of knowledge and can process big loads of text quickly, so if we provide it with the right context, it can get a lot done.
Fixing my blog was only the start. At that point I had only scratched the surface. I realised that I could write my own little command line tools and teach Claude how to use them. I wanted to use Claude for Unity projects, but realised it didn’t have a way to compile what it changed and would give me lots of code that wouldn’t actually work. So I wrote a little Python script that looks at your current working path, finds the Unity Editor running that project, triggers a recompile, then reads out the resulting logs. Now Claude Code can work on a bug, check if it’s fixed and keep going until it’s done - all without my help!
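To give you an idea of the shape of such a bridge, here’s a simplified sketch - not my actual script. It assumes a small editor-side helper inside the Unity project that listens on localhost and calls AssetDatabase.Refresh() when poked; the port, the endpoint and the hard-coded log path are made up for illustration:

```python
# unity_bridge.py - simplified sketch of the idea, not the real script.
# Assumes a hypothetical helper inside the Unity project that listens on
# http://localhost:8787/recompile and triggers AssetDatabase.Refresh().
import pathlib
import sys
import time
import urllib.request

# Default Editor.log location on macOS; on Windows it lives under
# %LOCALAPPDATA%\Unity\Editor\Editor.log instead.
LOG = pathlib.Path.home() / "Library/Logs/Unity/Editor.log"

def recompile_and_collect_errors() -> list[str]:
    start = LOG.stat().st_size  # remember where the log ends right now
    urllib.request.urlopen("http://localhost:8787/recompile", timeout=5)
    time.sleep(10)  # crude: give the editor time to finish compiling
    with LOG.open() as f:
        f.seek(start)  # only read what was written after the trigger
        return [line.rstrip() for line in f if "error CS" in line]

if __name__ == "__main__":
    errors = recompile_and_collect_errors()
    print("\n".join(errors) or "Compilation succeeded.")
    sys.exit(1 if errors else 0)
```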
MCP
Then I found out about MCP - the Model Context Protocol. A standardised way for applications to provide context to Claude and other LLMs. It’s a more official way to hook up new tools for Claude Code to use. You install an MCP integration and Claude Code automatically knows it’s there. And there are a lot of MCP servers out there!
Playwright MCP lets Claude Code control a web browser. I told it to find the best options for a thing to buy and gave it URLs for all the local hardware shops. It opened a Chrome browser window, surfed the web for a while and came back to me with a report. Then I told it to sell a thing on Trade Me for me (Trade Me is like eBay down here in New Zealand). All I did was take some pictures. Claude wrote the text, researched what a reasonable price would be, uploaded the photos I took, and it even looked up the correct dimensions to calculate shipping costs.
And turns out it’s not hard to make your own MCP servers for all sorts of things! Claude will happily do most of the work for you and explain it all along the way! So I turned my Unity-Bridge Python script into an MCP server, and now I don’t have to specifically mention it in the prompts anymore! I just hook it up and Claude automatically knows it’s one of the tools at its disposal.
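To give you a sense of how little glue that takes, here’s a rough sketch of such a wrapper, using the FastMCP helper from the official MCP Python SDK - unity_bridge here is the hypothetical script from the sketch above, not a real package:

```python
# unity_mcp.py - sketch of the Unity bridge wrapped as an MCP server.
from mcp.server.fastmcp import FastMCP

from unity_bridge import recompile_and_collect_errors  # hypothetical module from above

mcp = FastMCP("unity-bridge")

@mcp.tool()
def unity_recompile() -> str:
    """Trigger a recompile in the running Unity Editor and report compiler errors."""
    errors = recompile_and_collect_errors()
    return "\n".join(errors) or "Compilation succeeded."

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio so Claude Code can call it
```

Register it once with something like claude mcp add unity-bridge -- python unity_mcp.py and from then on it just shows up as a tool.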
I also made an MCP server for broadcasting MQTT messages, so Claude Code can turn on the lights in the other room when it needs my attention.
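That one is barely more than a few lines - roughly along these lines, assuming a local broker like Mosquitto and the paho-mqtt package (the topic is whatever your smart lights listen on):

```python
# mqtt_mcp.py - sketch of the MQTT notification server, built on paho-mqtt.
import paho.mqtt.publish as publish
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("mqtt-notify")

@mcp.tool()
def notify(topic: str, message: str) -> str:
    """Publish an MQTT message, e.g. to flash the lights in the other room."""
    publish.single(topic, message, hostname="localhost")  # assumes a local broker
    return f"Published '{message}' to {topic}"

if __name__ == "__main__":
    mcp.run()
```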
Where do we go from here?
So where do I go from here? Where do we all go from here?
Are we on the verge of a big transition? Will engineers be turned into “AI Conductors”? Will we no longer do most of the actual coding, and instead design the overall system architecture, split everything into manageable chunks, and then supervise multiple AI agents as they work on different parts of the system? A few weeks ago this sounded like science fiction.
But it’s already possible to do this right now - in fact, that’s how I’ve been working these past few weeks. I’ve developed a setup that recreates a traditional agile development workflow with multiple AI agents filling different roles. They all work together in the same git repository. Each virtual day, the Project Manager agent looks at the PLAN and prepares everyone’s tasks for the day. Then each coding agent (frontend, backend, whatever is needed) works through its assigned workload and commits its work. Then the QA agent tests and writes up a report. And then we’re all ready for the next virtual day to begin. Rinse and repeat. I can step in at any point and adjust course - and I have to ALL THE TIME - but if the tasks are simple enough, I can just let it run and hope for the best. Then I can sit here and watch everything run its course. When it works, it’s kinda magical. Pretty awesome. And scary.
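Boiled down, one virtual day is simple enough to sketch. This is a toy version of the loop, not my actual setup - the role prompt files and the PLAN.md/TASKS.md conventions are placeholders - and it leans on Claude Code’s non-interactive -p flag:

```python
# virtual_day.py - toy sketch of the multi-agent loop, not the full setup.
import pathlib
import subprocess

def run_agent(role: str, task: str) -> str:
    """Run one Claude Code agent headlessly with its role prompt plus today's task."""
    prompt = pathlib.Path(f"agents/{role}.md").read_text() + "\n\n" + task
    result = subprocess.run(
        ["claude", "-p", prompt],  # -p: print mode, run once and exit
        capture_output=True, text=True, check=True,
    )
    return result.stdout

def virtual_day() -> None:
    run_agent("pm", "Read PLAN.md and write everyone's tasks for today to TASKS.md.")
    for role in ("frontend", "backend"):
        run_agent(role, "Work through your tasks in TASKS.md, then commit your work.")
    print(run_agent("qa", "Test today's changes and write a report to QA.md."))

if __name__ == "__main__":
    virtual_day()  # rinse and repeat - one call per virtual day
```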
What’s left for us to do?
So what’s left for us humans to do?
What we’ll need to do much more of is review and test. We’ll spend a good bit of time reading and understanding code. And we need to make sure that - under the hood - it’s all actually doing what it looks like it’s doing on the surface. Because - as it turns out - AI will occasionally just throw in some mock data and call it a day.
Whatever the AI says is not trustworthy. Remember that it’s not really thinking through everything the way we would. It tokenises your prompt, uses attention mechanisms to understand context and relationships, then predicts the most statistically likely next tokens based on learned patterns and generates new text from that. It doesn’t truly “understand” code, it can hallucinate incorrect solutions, and it struggles with genuinely novel problems outside its training patterns.
If you wanted to be mean, you could say it vomits up bits and pieces from the billions of stolen git commits it ingested during training. But is it really stealing if you just let it have a peek at someone else’s code during training? No one took the code, it’s all still there! It’s the same as when you torrent a movie. That’s not stealing either. Say what? Oh, in that case it is stealing? Well, that doesn’t make any… [Static] AI output is not trustworthy, so we, ourselves, need to make sure that it’s all sensible and safe. Especially if the code is going into products that actual people use.
Will we all have to be excellent at prompt crafting? No, because guess who’s excellent at prompt crafting: AI. I basically just give it a rough outline of what I want and have it turn that into a nice, detailed prompt to give to my AI project manager, who then crafts prompts for the AI developers.
Or well, usually it’s a bit of back and forth. I tell it I need a system that does this and that, give it a bunch of random notes and ideas, and ask it to turn that into a plan. Then I edit that PLAN a bit, make some suggestions, add ideas, remove ideas, change my mind about a few things, ask it to research this or that - which framework would you use? Why? How about that one instead? Any downsides to this?
And once I’m happy I tell it to take the whole thing and create a development plan for it. Then I review and adjust that, let it split it into development phases and then we’re ready to start the first sprint!
My role in this is essentially the role of a product manager with some mentoring and QA thrown in. I’m not saying this necessarily works for bigger teams just yet. And believe me, there are still enough pits to fall into with this approach.
But after a lot of learning, and tweaking, and prompt-crafting, I can now get certain things done way faster than ever before. Not everything - there’s still a lot AI can’t do well. And you have to get good at seeing when it’s time to cut your losses, roll back and start over from a previous commit. But I’ve gotten to a point where, working like this, I can create an entire little app in a few days, where the same app would previously have taken me a week or more to make. If improvements keep coming - and I’m not saying that they will, in fact I’m sure they won’t, but just for the sake of argument, let’s say they do keep coming - then, in a little while, a team of 5 people could get things done that would previously have required a team of 50. Or maybe a team of 50 will just get so much more done in the same time. Or the CEO of a company of 50 will fire 45 people and use the savings to quadruple their own salary. We’ll see. We’re not there yet.
Let’s say the improvements continue
But let’s say the improvements keep coming at the current pace. Again, I don’t think they will, but this seems to be what all the AI companies are hoping for - so what kind of future are they trying to steer us towards? If AI agents need less and less oversight and do better and better jobs - then who will still be making software? Who will be left?
- Will each company be just one guy typing their ideas into a chat window?
- Or will there just be much much more software?
- Will each bit of software just be so much more optimized?
- Or so much more specialized?
- Or handle so much more complex tasks?
Either way, there’s one thing that’s for certain: the software market is going to be disrupted.
Beyond
And let’s say it continues even beyond that: AI gets good at deep domain understanding, gets good at picking up subtleties, and someone figures out a way to make it good at architecting truly novel solutions - then there’s a chance AI will no longer complement us, it will just flat-out replace all software development teams.
At that point every layman would be able to ask the AI for an app and get it, fully functional and looking good.
And that’s the end of software development, my friends.
So what would we all do then? Data centre cleaning? Dusting server racks? Geoffrey Hinton - the Nobel-Prize-winning so-called “godfather of AI” - suggested: plumbing. But how many plumbers are we really going to need in the future? Even with the world going down the toilet…
At the current level, AI is a great little helper. It saves me a lot of time by doing easy tasks quickly. The hard stuff is still up to me, and that’s perfectly fine. But even there it’s helpful, because it can quickly search through and summarise large amounts of text, so it can find specific things in a codebase or search through online documentation for me.
It’s very convenient. But does a little more convenience justify the means of how we get there?
Because there are several significant issues at play here.
There is the question of where the training data comes from. Is it okay to train an image generation model on publicly available imagery without asking the artists’ permission? Who’s going to pay a digital artist when the AI can simply gobble up all their work and replicate their exact style? Does that seem fair? Anthropic recently had to pay $1.5 billion to authors and publishers because it had “unlawfully acquired” countless books it used to train its AI models. Sounds like a lot of money, but it’s only about 0.8% of their current company valuation of $183 billion, so essentially a slap on the wrist. And while they were instructed to delete those pirated files, they were allowed to keep - and keep selling access to - the models they trained on this data. To me this almost sounds like pricing in the fine might be the cheapest and easiest way to train a good model - just use whatever the hell you want for training and pay the fine… I mean, try working out a licensing deal with every artist and author and scientist on the planet.
And then there are the data centres that run and train AI. They use up immense amounts of power, and about 60% of that power is generated by burning coal and gas. Add to that the insane amounts of water used to cool those data centres and I’m sure you’ll see why this is problematic…
And they’re building new ones as fast as they can. There are plans for gigawatt-consuming data centres that will cost hundreds and hundreds of billions of dollars. If OpenAI and Oracle really build that $500 billion data centre they were talking about - how do they plan to ever make that money back? If the plan is for AI to take all the jobs, who’ll be left to pay them anything? If the goal is to reach artificial superintelligence - the singularity, the intelligence explosion, the state where AI is smarter than any human and can rapidly upgrade its own hardware and software to become ever more intelligent - well, that’s a race towards making us all obsolete. Seems a bit stupid, as a goal, to be honest...
Well, it’s all worth it as long as a few individuals got really really rich along the way, right?
Get interested in politics. Vote for good humans. Hold them accountable.