Dear students of Computer Science,

Almost everything everyone is telling you about LLMs is probably wrong. Some tell you it’s glorified autocomplete; others say it will cure cancer or prove the loftiest theorems. As with many mass-market takes, these claims contain a grain of truth, but they miss the point. I encourage you to try a modern AI coding assistant for yourself and form your own opinion; here, I’ll offer mine, as someone who wants to give you an optimistic but unvarnished perspective.

Nine months ago (March ‘25), Anthropic’s CEO claimed that AI assistants would be writing 90% of our code within three to six months. It turns out he was only a little too ambitious: the winter break has brought an explosion of programmers uncovering the power of Claude Code, surely a cause for talk around the Kombucha dispenser at OpenAI’s offices.

Claude Code reveals a sobering reality for many software engineering professionals: the vast bulk of the thing we call productivity turns out to be nothing more than Claude in a loop with command-line tools. The 2023-era workflow of copying and pasting into the textbox, hand-crafting a prompt, and carefully stitching the result back into our codebase feels downright primitive by comparison. Now, we can boot up Claude Code and iteratively build up ideas, tests, implementations, and (possibly) a complete, working app. The TUI-based interfaces solve one of the main problems inherent to the web-based chatbots: how to meaningfully iterate, and incrementally build larger software artifacts alongside the LLM. I think many of us were telling ourselves that the real ingenuity was in the judiciousness we exercised when carefully copying the LLM’s output and integrating it back into our work; Claude Code shows that doing that for arbitrarily complex domains is often just dirt simple, as long as the tools can communicate via text.
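If “Claude in a loop with command-line tools” sounds too glib, here is a deliberately tiny sketch of that pattern, just to make the shape concrete. The model call is stubbed out (`propose_next_command` is a placeholder of my own invention, not Claude’s actual interface); the point is the loop itself: the model proposes a shell command, the harness runs it, and the output is fed back as context for the next round.

```python
# A minimal sketch of "an LLM in a loop with command-line tools."
# propose_next_command is a hypothetical stand-in for a real model call,
# not Claude's actual interface.
import subprocess

def propose_next_command(transcript):
    """Stand-in for the model: given the transcript so far, return the next
    shell command to run (as an argument list), or None when done."""
    if transcript == "":
        return ["ls"]  # look around the repository first
    return None        # then (in this toy stub) declare the task finished

transcript = ""
for _ in range(10):  # cap the number of agent iterations
    command = propose_next_command(transcript)
    if command is None:
        break
    result = subprocess.run(command, capture_output=True, text=True)
    # Feed the tool's output back into the context for the next proposal.
    transcript += f"$ {' '.join(command)}\n{result.stdout}{result.stderr}"

print(transcript)
```

Everything interesting in a real agent lives inside that stub, of course, but the scaffolding around it really is this small.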

This halcyon turn of events has left many students wondering: should I still study computer science? As someone who sells computer science for a living, I am obligated to sell the position of “yes,” but we all know that there are plenty of good reasons to argue “no” as well. For the sake of this essay, I will assume you lean towards yes, but perhaps with some skepticism. Even if you agree, it’s an important question, because it reveals an uncomfortable systemic issue: many students operate at a level that Claude Code can easily beat, and in the new world order, they will find it increasingly hard to get a job. Just so we’re crystal clear, there’s still some code Claude can’t write, even though it can certainly write a lot of very useful code. But “there will always be code that Claude can’t write” is not a very convincing argument when the amount of code Claude does write will dwarf it by comparison.

CS undergraduates face a real issue: beating Claude Code isn’t very easy, even (and especially) on challenging leetcode problems. This may push some students to conclude that the AI is good enough at reasoning to absolve them from developing a deep understanding of the technical details. I argue that this is the wrong perspective to take: instead, you should shift your efforts away from mindless and repetitive generation and towards problems which stretch your intellectual capacity by making you feel stuck. To the degree the LLM can help get you unstuck in a way which genuinely builds your mental model (so that you’re in the driver’s seat, rather than leaning on a maybe-true understanding you got from the AI): great, do it.

In some ways, I see LLM-assisted software engineering as the Object Orientation of the ’20s: it offers a vast swath of opportunity for lesser-trained programmers to write huge amounts of good-enough code, just as OO enabled the hiring of an army of folks who never had to deal with the pains of memory management, pointers, etc. Unfortunately, however, this time it’s different; we all know the hiring boom of the easy-money days is over. Seeing Claude Code, many of us take a more pessimistic stance: those who can’t add value on top of Claude will be out of a job. Who are the folks that will rise to the top? I argue that it is the people who have expended serious intellectual energy exercising their capacity for understanding deep ideas.

I personally believe the purpose of a university class is to stress your intellectual capacity through repeated exposure to increasingly complex technical ideas which build a coherent vision. In doing this, we teach students skills that can be broadly binned into generation (synthesis) and verification (checking). In many contexts (e.g., debugging), we build our mental models by iterating between synthesizing candidate ideas, carefully checking our work, and reflecting upon what to do next.

As a student, I saw learning as a way of “compiling” things into my mental representation: I would spend days at the library (I do not do this anymore; my productivity has unfortunately declined) carefully rewriting books in my own words, which taught me the difference between deeply grokking something and possessing an LLM-level understanding of it. One thing I remember from those times, however, is how mentally exhausted I felt at the end of the day. The reason I was exhausted is that I was playing the verifier. I learned a hard truth: after you read a sentence of a book, it often makes perfect sense, but when you try to explain it to someone else, you realize you have no clue what is going on. When we do this to ourselves we just feel stupid, but if it happens in an environment that encourages rapid failure, we can grow: effective instructors incentivize rapid failure (e.g., by allowing students to rapidly test their solutions against an autograder).
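To make the autograder point concrete, here is the kind of rapid-failure loop I have in mind; the function and the reference cases are invented for illustration, not taken from any particular course. The value is that a wrong guess fails loudly within seconds rather than at the end of the week.

```python
# A toy autograder-style check: the student's candidate solution is run
# against a small table of reference cases, and any mismatch fails fast.

def reverse_words(sentence):
    """Student's candidate solution to a hypothetical assignment."""
    return " ".join(reversed(sentence.split()))

reference_cases = {
    "hello world": "world hello",
    "a b c": "c b a",
    "single": "single",
}

for given, expected in reference_cases.items():
    got = reverse_words(given)
    assert got == expected, f"reverse_words({given!r}) = {got!r}, expected {expected!r}"

print("all checks passed")
```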

However, in the case of an LLM, failures often manifest in ways which can be confusing and subtle: students may find themselves frustrated to realize that the LLM led them into a hallucination. Feeling confused about what to believe can be a very jarring experience (it is what leads many of us to feelings of repulsion when we read some of our old code), and the LLM can further amplify this anxiety: because it is so shockingly capable of solving a problem, we feel utterly paranoid about the moment the music stops. “What will I do if I actually have to figure this part of the code out for myself?”

One crucial skill software engineering students learn is how to employ abstraction when coming to understand a software system: reading just enough of the whole system to understand its architecture, and then employing iterative deepening to learn about subsystems (e.g., the memory manager, networking, etc.) in an on-demand fashion. Right now, many students are telling themselves a comfortable lie: that iteratively prompting the LLM is a substitute for the iterative deepening we do ourselves to build a mental model. Confirmation bias is a big concern: the tools are often so good that they give us the impression there is essentially no point in checking their correctness most of the time. This sounds great, until we get stuck, and then realize we didn’t actually understand things nearly as well as necessary to tackle the task at hand.

This is not a new problem; in fact, it is the problem that has plagued CS undergraduates since the beginning of time. Being stuck is obviously deeply uncomfortable, because the real world has deadlines and value-measuring systems, all of which depend on our work being complete and correct in a timely fashion. No student wants to be stuck, so they work to exit the stuck state as quickly as possible, often latching on to an incorrect hypothesis (“oh wait, this line must be broken…”) to get back to harmony; this is bad. As you become a senior software developer, you learn that being stuck is in fact the default state when you are working on challenging problems. The mental energy comprising your engagement with courses should not be primarily bottlenecked by generation; instead, it should meaningfully harmonize generation and verification in a way that allows you to build an ever-expanding mental model.

I recall that, years ago, I chatted with a teaching professor about a worrying trend in CS. This was around 2016, and they mentioned that students were increasingly ignoring the “nuts and bolts” of applications to focus only on the flashy bits that were easy to get not-wrong. The specific concern was hackathons: do they encourage students to do something hard and deep that exercises real engineering skill? Or do they incentivize students to focus on startup-pitch-style motivation with impressive-looking web interfaces backed by fake data? The trend of students avoiding the genuinely hard parts of the problem to focus on the flashy parts has been occurring since long before the days of AI coding assistants.

Another professor recently told me that they switched their assessment style to “code comprehension” rather than code synthesis. I think this is an excellent way to see the problem, because it stresses that, when generation is cheap, explanation (reconstructing the reasoning behind generated content) becomes the dominant factor in assessing understanding. You might ask: in a world where generators are perfect, what is the role of the verifiers? This is an apt question.

The first answer is that English is genuinely fuzzy, and in almost every case, the ways in which humans describe specifications lead to contradictory, unrealizable, or intractable results. In a world run by humans, software engineers expend serious effort interfacing with other humans to refine fuzzy conceptual architectures into correct production software. While it seems reasonable to expect that Claude Code could refine an English specification, there is always an intrinsic value to working with another human. The issue with automation is that this value may no longer be worth its price: there is still a market for freelance web developers, for example, but it has been severely diminished by sites like SquareSpace and WordPress.

LLMs take English-based automated code synthesis to the next level, giving us a general-purpose tool to refine hazy specs into mostly-working software. I predict the impact will differ across sectors. Startups will be the most affected: it has never been easier to iterate to “something that looks like a passable minimum-viable product.” If the goal is to demonstrate a cool market gap with what amounts to a JavaScript app, you can now do that very quickly with AI coding assistants. Enterprise apps, on the other hand, will persist: organizational bloat, vendor lock-in, and sales play a huge role in the momentum of those settings.

The truly useful applications will involve leveraging AI deeply enough that it can be built into the kinds of symbolic workflows humans love in tools like Keynote and Premiere Pro. For example, Adobe Photoshop’s “generative fill” produces cool raster images right now, but it doesn’t give you a layered, element-by-element breakdown: that is the thing we all actually want, because it lets us refine the AI’s results and interact with them, just like Claude Code does for our repos.

Thus, aspiring computer scientists: I urge you to seriously consider how AI will impact your career. But I also urge you to seriously resist the temptation to mindlessly iterate with the AI, toiling in unchecked guesses because you never bothered to go check the notes or read the book. Of course, such students existed before LLMs as well: upon facing broken code, we all feel the temptation to prove it is just a minor typo, so we edit the code superficially; sometimes this works, but sometimes it leads us to a significantly more subtle error.

There is a vast amount of potential in leveraging AI to expand our intellectual capacity, by pushing us to rapidly confront our misunderstandings and push past them. However, we often overestimate the impact of change on the fundamental nature of things. At least in your case, as a CS student, the goal is to rise above regurgitation, mimicry, or rearranging: we all know the parable of the junior engineer who spent all their time trying to rearrange bits and pieces of StackOverflow answers. The goal is thus, as it has always been, to take from our time what we can use to best execute our vision. In your time as a student, your objective is to learn the key principles of computer science (or whatever field you choose) in a way which maximally stresses your intellectual capacity while offering you the ability to grow from repeated failure. Throughout your career, I urge you to develop a taste for a specific set of topics in your courses (or in computing as a whole) for which you possess enough genuine intellectual curiosity to grok the true essence of things.

– Kristopher Micinski, Syracuse

P.S. A few caveats to this post: (a) like everyone else, I am being reductive; there are plenty of domains where Claude Code completely fails right now (paren matching in Racket is only one example). (b) I’m specifically discussing Claude Code due to the recent explosion, but the ideas were pioneered in other tools. And (c) there will surely be plenty of reasonable folks who believe that AI is of no (or very limited) use to them; it is impossible for me to prove that all of these people are wrong, since utility depends on workload. I personally found AI to be surprisingly helpful in my hobbyist woodworking, however.