[LLMs] How does agentic AI affect the next generation of coders?


(This is the second post in a series of three, starting here - part 3 may not be published yet.)

I was at the always-informative Manchester Tech Festival the other week, where I saw a great talk by Matt Squire, CTO of Fuzzy Labs, with the title "Are We the Last Programmers? AI and the Future of Code". I’ve started experimenting with using LLMs to help me build software, so I'm particularly interested in this topic.

Matt covered several areas in his talk, so my goal is to write three posts on the back of it.

This is the second of those posts.

I mentioned in my first post that Matt talked about Seymour Papert, the computer scientist and educator who developed the turtle drawing tool and associated Logo programming language in the 1960s, to help teach children how to program computers. Papert was interested in how children can use technology to learn. He became co-director of the MIT AI laboratory in 1967 (yes, "AI" has been around that long), and was very interested in education and the role schools play as learning organisations.

He demonstrated that kids learn best when they're able to make things and share them - hence giving them the opportunity to program the simple (physical) turtle drawing tool, and cause it to both move around and draw pictures. The Mindstorms line of educational kits for building programmable robots based on Lego bricks (sadly now discontinued) was named after the book Mindstorms: Children, Computers, and Powerful Ideas - published by Papert in 1980.

And it's not just children - people learn best by building things they care about.

And with Logo, it wasn't just that the children could make things and share them - they also gained a lot from debugging the systems they created. The process of debugging taught them to think about thinking.
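(If you've never met the turtle: Python's built-in turtle module is directly modelled on Logo's turtle graphics, so a few lines are enough to recreate the experience. This is an illustrative sketch of my own, not something from Matt's talk.)

```python
# A taste of Papert's turtle, via Python's built-in turtle module
# (which is modelled directly on Logo's turtle graphics).
import turtle

def draw_square(side=100):
    """Walk the turtle around a square, one side at a time."""
    for _ in range(4):
        turtle.forward(side)  # draw one side
        turtle.left(90)       # turn 90 degrees anticlockwise

draw_square()
turtle.done()  # keep the window open until it's closed
```

The debugging lesson arrives almost immediately: change that 90 to 80 and the square quietly unravels into something else, and working out why is exactly the thinking-about-thinking Papert was describing.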

So the question is, what role does agentic AI play in all of this? There are a lot of understandable worries about whether and how the next generations of coders will learn to code if they become too reliant on LLMs. Will they get that debugging advantage Papert talked about?

LLM-augmented coding encourages creation and exploration...

Papert's first assertion, that learning happens best when people are able to build things and share them, is not diminished at all when an LLM is inserted into the process. Matt's point in his talk was that this experience becomes amplified by LLMs, because now people eager to make and share can build things more easily and quickly. Far from dulling their appetite to learn and explore, this can ignite it even further. And certainly my own experience is that I'm more eager to build new things that I previously might have avoided because I was wary of the overhead (see my post about making a quick thing).

...but does it encourage debugging? Thinking about thinking?

I think this is possible, but not automatic. A lot of people are worried right now about the potential atrophying of critical thought, as we rely more and more on ChatGPT et al. Dagmar Monett and Jeppe Klitgaard Stricker have interesting things to say on this topic.

Matt was more optimistic here. His perspective is that people will always want to build, explore and experiment; that whatever they build, and however it works or doesn't, will only make them more eager to explore and learn. They will do whatever they need to do to make their thing work the way they want it to work. People will never stop being curious.

What some people fear is that the LLM takes agency away from the user in this equation. It's too eager to say, "Don't worry about that, I'll sort it out for you." When something doesn't work, in theory you can just tell the LLM to fix it. And yes, I have documented (here and here) my willingness in some circumstances to pay little attention to an LLM's output.

But you don't have to get very far at all in a journey of co-building something with an LLM before things start going horribly wrong. The LLM, left to its own devices, will make things worse rather than better. Kent Beck and Jessica Kerr discuss that phenomenon eloquently in this podcast. And at that point, the user absolutely has to start learning about what's being built. How to debug. How to take a step back and understand not only how components fit together, but how to coordinate the project in such a way that the LLM gets the structure it needs to do a good job.

A lot of people, and I'm definitely one of them, are noticing and commenting on how effective coding with LLMs leads you to focus on the kind of yummy good software practices, like test-driven development and iterative + incremental development, that are often overlooked or left behind. Emily Bache talks about this in her great short video.
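To make that concrete, here's the test-first rhythm in miniature - a sketch of my own, not an example from Matt's talk or Emily's video, and the slugify helper is purely hypothetical:

```python
# Test-driven development in miniature: write the failing test first,
# then just enough code to make it pass. Run with `pytest`.
# `slugify` is a hypothetical helper, invented for this illustration.

def test_slugify_lowercases_and_hyphenates():
    assert slugify("Hello Agentic World") == "hello-agentic-world"

# Step two: the simplest implementation that turns the test green.
def slugify(text: str) -> str:
    return "-".join(text.lower().split())
```

Working this way with an LLM in the loop gives you (and it) a tight feedback loop: there's always a small, concrete failure to point at next.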

I hope that it'll ultimately lead to a lot more good meaty learning. Matt Squire thinks it will. I think it's possible.

Come on this journey with me

I plan to keep writing about this topic. I've already got a raft of draft posts in my back pocket. I love to learn, and I love to teach (and I'm really bloody good at it). I use teaching as a way of deepening my own knowledge and pushing me to learn things more effectively.

If you want to know more, you can subscribe to my newsletter below.


Clare Sudbery

Don't miss my next post! Subscribe to my newsletter and learn a host of useful tips about coding with agentic AI, as well as a bunch of useful stuff about effective technical leadership.
