In-line assistance - how can it get in the way?
This article is part of “Exploring Gen AI”. A series capturing our explorations of using gen ai technology for software development.
03 Aug 2023
In the previous memo, I talked about the circumstances under which coding assistance can be useful. This memo is two in one: here are two ways in which we’ve noticed the tools can get in the way.
Amplification of bad or outdated practices
One of the strengths of coding assistants right in the IDE is that they can use snippets of the surrounding codebase to enhance the prompt with additional context. We have found that having the right files open in the editor is quite a big factor in improving the usefulness of suggestions.
However, the tools cannot distinguish good code from bad code. They will inject anything into the context that seems relevant. (According to this reverse engineering effort, GitHub Copilot will look for open files with the same programming language, and use some heuristic to find similar snippets to add to the prompt.) As a result, the coding assistant can become that developer on the team who keeps copying code from the bad examples in the codebase.
We also found that after refactoring an interface, or introducing new patterns into the codebase, the assistant can get stuck in the old ways. For example, the team might want to introduce a new pattern like “start using the Factory pattern for dependency injection”, but the tool keeps suggesting the current way of dependency injection because that is still prevalent all over the codebase and in the open files. We call this a poisoned context, and we don’t really have a good way to mitigate this yet.
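To make the dependency injection scenario concrete, here is a minimal, hypothetical sketch (all class names are invented for illustration, not taken from any real codebase). The "old way" constructs its dependency directly; the "new pattern" the team wants to adopt routes construction through a factory. A context-poisoned assistant that sees the old style all over the open files will keep suggesting the hard-wired version.

```python
# Hypothetical example: migrating from direct construction ("old way")
# to a factory for dependency injection ("new pattern").
# All names (SmtpMailer, ReportService, MailerFactory) are invented.

class SmtpMailer:
    def send(self, to: str, body: str) -> str:
        return f"sent to {to}: {body}"


# Old way, still prevalent in the codebase: the service
# constructs its own dependency, making it hard to swap in tests.
class ReportServiceOld:
    def __init__(self):
        self.mailer = SmtpMailer()  # hard-wired dependency


# New pattern the team wants to introduce: a factory supplies
# the dependency, so callers and tests control what gets created.
class MailerFactory:
    def create(self) -> SmtpMailer:
        return SmtpMailer()


class ReportService:
    def __init__(self, factory: MailerFactory):
        self.mailer = factory.create()  # injected via the factory


service = ReportService(MailerFactory())
print(service.mailer.send("team@example.com", "weekly report"))
```

With the codebase (and the open files) full of `ReportServiceOld`-style code, the assistant's context keeps pulling its suggestions back toward direct construction, which is exactly the "poisoned context" effect.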
In conclusion
The AI’s eagerness to enrich the prompt with context from our codebase can be a blessing and a curse. That is one of many reasons why it is so important for developers not to trust the generated code too much, but to keep reviewing and thinking for themselves.
Review fatigue and complacency
Using a coding assistant means having to do small code reviews over and over again. Usually when we code, our flow is much more about actively writing code and implementing the solution plan in our head. This is now sprinkled with reading and reviewing code, which is cognitively different, and also something most of us enjoy less than actively producing code. This can lead to review fatigue, and a feeling that the flow is more disrupted than enhanced by the assistant. Some developers might switch off the tool for a while to take a break from that. Or, if we don’t deal with the fatigue, we might get sloppy and complacent with our review of the code.
Review complacency can also be the result of a bunch of cognitive biases:
- Automation Bias is our tendency “to favor suggestions from automated systems and to ignore contradictory information made without automation, even if it is correct.” Once we have had good experience and success with GenAI assistants, we might start trusting them too much.
- I’ve also often felt a twisted version of the Sunk Cost Fallacy at work when I’m working with an AI coding assistant. Sunk cost fallacy is defined as “a greater tendency to continue an endeavor once an investment in money, effort, or time has been made”. In this case, we are not really investing time ourselves; on the contrary, we’re saving time. But once we have that multi-line code suggestion from the tool and see that it is not quite right, it can feel more rational to spend 20 minutes making the suggestion work than to spend 5 minutes writing the code ourselves.
- Once we have seen a code suggestion, it’s hard to unsee it, and we have a harder time thinking about other solutions. That is because of the Anchoring Effect, which happens when “an individual’s decisions are influenced by a particular reference point or ‘anchor’”. So while coding assistants’ suggestions can be great for brainstorming when we don’t know how to solve something yet, awareness of the Anchoring Effect is important when the brainstorm is not fruitful, and we need to reset our brain for a fresh start.
In conclusion
Sometimes it’s ok to take a break from the assistant. And we have to be careful not to become that person who drives their car into a lake just because the navigation system tells them to.
Thanks to the “Ensembling with Copilot” group around Paul Sobocinski in Thoughtworks Canada, who described the “context poisoning” effect and the review fatigue to me: Eren, Geet, Nenad, Om, Rishi, Janice, Vivian, Yada and Zack
Thanks to Bruno, Chris, Gabriel, Javier and Roselma for their review comments on this memo