LLMs in first-year study

2025-09-23

I’m writing this as a little wrap-up to my first year of moving from a business management role into hacking, and I wanted to share my thoughts on seeing a bunch of ‘AI’ (large language models, mostly chatbots) in education so far.

First off, I think we’ve always had people who want a shortcut to critical thinking skills. Sometimes the shortcuts are so substantial that they become genuine progress in technology and in how we think - the calculator, high-level languages, even memory mapping - and we’ve often cut huge corners in ways that genuinely advanced the field.

I think future LLMs will be like that, but the take-away so far seems to be that AI is most trusted within the learning cohort - learners have better outlooks on AI tools’ ability to handle complex tasks, despite uptake being slower among people learning to code.

It’ll be interesting to see how it turns out. The distillation of large models into specialised roles seems to be the way current transformer models are going, from DeepSeek back in the day to the Coder models popular in dev circles.

In education, the ‘cheat’ would have been StackOverflow answers anyway, so who’s getting hurt? Isn’t this just the hot new calculator?

Similar to how we still teach large sections of mathematics - algebra, calculus - without calculators, we’re only now starting to understand what the hell LLM chatbots can be used for. I see it going one of two ways, much like the calculator:

  1. We raise the theory bar of education and assume the chatbot will be forever extant and change minimally in shape, use, and ergonomics (such as living in a text editor), or,
  2. We remove LLMs from the equation and eat our vegetables.

Currently, I haven’t seen either approach coming out of the many campuses that so often tout AI integration across their faculties.

At the moment, this seems to be another flood of academic integrity hell for those facilitating units; a crutch for students who aren’t being directed towards developing critical skills for thinking about computing; or, even worse, something cynically seen by both overworked staff and nihilistic providers as a new avenue to ‘accelerate’ students through their pipeline, à la Learn to Code bootcamps and how long those lasted after the zero-interest funnel ended.

I want to believe there is a way to learn to pwn with LLMs - in fact, a few chall sites I’ve seen, like pwn.college, seem to use them to some benefit for students learning. But that isn’t how the vast majority of AI policies in schools read - instead, a ‘responsible AI’ policy simply acknowledges that generating natural language from a prompt sits just below copypasta from StackOverflow. We could promote a system where students instead have to run their own models locally, or provision a model that obscures direct answers, the way pwn.college does.

This last semester, every exam I’ve had has had someone either prepping via a chatbot or using their phone outright. A student CTF team I’ve been on (no shade, hopefully) has had teammates getting endlessly frustrated when the free plan of a large enterprise chatbot fails to help them understand potential gadgets in a binary, when ROPgadget is right there.
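For what it’s worth, enumerating gadgets yourself is a one-tool job. Here’s a minimal sketch using pwntools, assuming a local target binary at ./chall (a hypothetical path); the ROPgadget CLI (ROPgadget --binary ./chall) gives you much the same output:

```python
# Minimal sketch: enumerate ROP gadgets locally instead of asking a chatbot.
# Assumes pwntools is installed (pip install pwntools) and that a target
# binary exists at ./chall (hypothetical path).
from pwn import ELF, ROP

elf = ELF('./chall')   # load the target binary
rop = ROP(elf)         # scan its executable segments for gadgets

# Look up one specific gadget; prints its address if found, None otherwise.
print(rop.find_gadget(['pop rdi', 'ret']))

# Or dump everything that was found and actually read through it yourself.
for addr, gadget in rop.gadgets.items():
    print(hex(addr), '; '.join(gadget.insns))
```

No rate limits, no hallucinated addresses, and you end up learning what the gadgets actually do.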

IMHO, both cases come down to the same Occam’s razor - people wanting the easy route - and it’s something where AI should be suspended for students till we figure out what the hell we want to do with it here.