👋 Hi, this is Gergely with a subscriber-only issue of the Pragmatic Engineer Newsletter. In every issue, I cover challenges at Big Tech and startups through the lens of engineering managers and senior engineers. If you’ve been forwarded this email, you can subscribe here.

Learnings from two years of using AI tools for software engineering

How to think about today’s AI tools, approaches that work well, and concerns about using them for development. Guest post by Birgitta Böckeler, Distinguished Engineer at Thoughtworks
It feels like GenAI is changing software engineering fast: first it was smarter autocomplete, and now there are ever more agentic tools that many engineers use. But what are some practical approaches for using these tools? To find out, I turned to Birgitta Böckeler, Distinguished Engineer at Thoughtworks, who has been tackling this question full time for the past two years. She still writes production code at Thoughtworks, but her main focus is developing expertise in AI-assisted software delivery. To stay on top of the latest developments, Birgitta talks to Thoughtworks colleagues, clients, and fellow industry practitioners, and tries out the tools herself to figure out how they fit into her workflow. Today, Birgitta walks us through what she’s learned over the last two years of working with AI tools.
To learn more, check out additional thoughts by Birgitta in the Exploring Generative AI collection on her colleague Martin Fowler's website.

Programming note: this week, I’m in Mongolia for the launch of The Software Engineer’s Guidebook translated into Mongolian, so there will be no podcast episode or The Pulse this week. See you for the next issue, next Tuesday!

With that, it’s over to Birgitta.

Note: the terms AI, Generative AI, and LLM are used interchangeably throughout this article.

Almost exactly two years ago, in July 2023, Thoughtworks decided to introduce a full-time, subject-matter expert role for "AI-assisted software delivery". It was a moment when the immense impact Generative AI can have on software delivery was becoming ever more apparent, and I was fortunate enough to be in the right place at the right time, with the right qualifications to take on the position. And I’ve been drinking from the firehose ever since.

I see myself as a domain expert for effective software delivery who applies Generative AI to that domain. As part of the role, I talk to Thoughtworks colleagues, clients, and fellow industry practitioners. I use the tools myself, try to stay on top of the latest developments, and regularly write and talk about my findings and experiences. This article is a round-up of my findings, experiences, and content from the past two years.

1. Evolution from “autocomplete on steroids” to AI agents

AI coding tools have been developing at breakneck speed, making it very hard to stay on top of the latest developments. Developers therefore face not only the challenge of adapting to generative AI’s nature, but also an additional hurdle: once they’ve formed opinions about tools or established workflows, they must constantly adjust them to accommodate new developments. Some thrive in this environment, while others find it frustrating.

So, let’s start with a recap of that race so far, of how AI coding assistants have evolved in two years. It all started with enhanced autocomplete, and has led to the swarm of coding agents we can choose from today.

Early days: autocomplete on steroids

The first step of AI coding assistance felt like an enhanced version of the autocomplete we already knew, but on a new level. As far as I know, Tabnine was the first prominent product to offer this, around 2019; GitHub Copilot was first released in preview in 2021. It was a move from predictions based on abstract syntax trees and known refactoring and implementation patterns, to a suggestion engine that is much more adaptive to the current context and logic, but also less deterministic and more hit-and-miss. Developer reactions ranged from awe to a dismissive “I’ll stick with my reliable IDE functions and shortcuts, thank you very much.”

Back then, I already found it a useful productivity booster and soon didn’t want to work without it, especially for languages I was less familiar with. However, like many others, I soon discovered the reality of “review fatigue”, which leads some developers to switch off the assistant and focus fully on code creation instead of code review.

AI chats in the IDE

It seems unimaginable today, but there was a time when assistants did not have chat functionality. I recall announcing in the company chat in July 2023 that our GitHub Copilot licenses finally had the chat feature: 24 minutes later, somebody posted that they’d asked Copilot to explain a shell script in Star Wars metaphors.
From a developer experience point of view, it was a big deal to be able to ask questions directly in the IDE, without having to go to the browser and sift through lots of content to find the relevant nugget for my situation. And it was not just about asking straightforward questions, like whether there are static functions in Python; we also started using these chats for code explanation and simple debugging. I remember fighting with a piece of logic for a while before the assistant explained that two of my variables were named the wrong way around, which is why I had been misunderstanding the code the whole time. At that point, hallucinations became an even bigger topic of discussion, along with comparisons to StackOverflow, which was starting to see its first decline in traffic.

Enhanced IDE integrations