Using an LLM to do schoolwork is like taking a forklift to the gym.
If all we were interested in was moving the weights around, you’d be right to use a tool to help you. But we’re doing this work for the effect it will have on you. The reason a teacher asks you a question is not because they don’t know the answer.
Compare: My piano teacher doesn't give diplomas because none of her students would care; her students actually want to learn. When my piano teacher cancels class, I am disappointed because I wanted to learn. My piano teacher doesn't need to threaten me with bad grades to get me to practice outside of class (analogous to homework), because I actually want to learn.
There are many college students who pass none of these tests: they would not attend if there were no diploma, they're relieved when their professors cancel class, and they need to be bullied into studying outside of class.
What made us think these students were ever interested in learning in the first place? Instead, it seems more likely that they just want a degree because they believe that a degree will give them an advantage in the job market. Many people will never use the information that they supposedly learn in college, and they're aware of this when they enroll.
Personally, the fact that they can now get a degree with even less wasted effort than before doesn't bother me one bit. People who want to learn still have every opportunity to.
This is much larger than a cultural problem with the students of today. They believe, rightly and accurately, that the university degree is just another part of the machine they will become a cog in.
What should be alarming to everyone is that these students will graduate without having learned anything and then go into the workplace, where they will continue not to use their atrophied critical thinking skills, simply doing yet more work as a cog in the machine.
A decent number of my professors don't know the answers because they bought the course, test questions, and lectures from Cengage. During exam review, they just regurgitate the answer justification that Cengage provided. During the lectures, they struggle to explain certain concepts since they didn't make the slides.
Professors automate themselves out of the teaching process and are upset when students automate themselves out of the learning process.
I can tell when the faculty views teaching as a checkbox that they officially have to devote 40% of their time to. I can tell when we are given busywork to waste our time instead of something challenging.
To use your analogy, I'm being told to move 1000 plush reproductions of barbells from Point A to B by hand because accreditation wants to see students "working out" and the school doesn't want high failure rates.
We are all pulling out the forklift. Some of us are happy because we don't have to work as hard. Others are using the forklift so they can get in a real workout at home, since school is not a good use of their time. Either way, none of us see value in moving paperweights all day.
edit:
My favourite course during my Computer Engineering degree was Science Fiction, because that professor graded us on substance instead of form. It was considered a hard class because good marks on the essays came from building substantive points rather than from strict adherence to the form of a five-paragraph hamburger essay.
The call to action is to make courses harder and stop giving students plush barbells.
For example, University of Toronto Engineering Science (hardest program in Canada) gives first-year students a "vibe coding" lab in which students learn how to solve a problem that AI cannot.
https://www.cs.toronto.edu/~guerzhoy/vibecoding/vibecoding.h...
One analogy I use a lot: if I have a professor sitting next to me, what is the best way to learn a topic?
Struggle through it on my own, and I won't be leveraging the professor's knowledge.
Ask the professor to do everything for me and I won't be learning anything at all.
Now if the professor is an AI, the same trade-offs hold.
For example, I will have back-and-forth conversations with AI to explain subjects to me. I ask questions, push back, ask for examples, and so on.
If I do ask the AI to answer something for me, I then ask it to break down the answer for me so I can make sure I understand it deeply.
And of course, none of this matters if I don't want to learn something :)
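For concreteness, here is a minimal sketch of that ask-then-break-it-down loop, assuming the OpenAI Python client and an arbitrary model choice (both are just stand-ins); the second request is the part that matters:

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def tutor_session(question: str) -> None:
        """Get an answer, then ask for a step-by-step breakdown of that answer."""
        history = [
            {"role": "system", "content": "You are a patient tutor. Be concise."},
            {"role": "user", "content": question},
        ]
        first = client.chat.completions.create(model="gpt-4o-mini", messages=history)
        answer = first.choices[0].message.content
        print("Answer:\n", answer)

        # The follow-up is the point: don't stop at the answer, make it unpack the reasoning.
        history += [
            {"role": "assistant", "content": answer},
            {"role": "user", "content": "Break that down step by step, state the assumptions, "
                                         "and give one concrete example I can check myself."},
        ]
        second = client.chat.completions.create(model="gpt-4o-mini", messages=history)
        print("Breakdown:\n", second.choices[0].message.content)

    tutor_session("Why does gradient descent need a learning rate?")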
Does the use of a quantifiable metric like a GPA not exacerbate this? In a world where people take a GPA seriously, you'd have to be irrational to not consider cheating a viable option.
You could say the same about credit scores and dating apps. These institutions assist the most predatory and harm the most vulnerable.
I remember illustrating a point to a class by posing a question and then calling on a student I figured wasn't smart enough to answer correctly so that everyone could see her make the mistake.
The ethics of that still bother me.
This creates a vicious feedback loop: universities have to lower standards to bring in dumber and lazier students in order to compete with other diploma mills.
> like taking a forklift to the gym.
First, you will have excellent forklift skills in the end. A real profession! Second, girls dig forklift operators, or so I was told.
But the gym isn't the best place to engage in forklift training. And if you engage in forklift training at the gym, expect to learn how to use a forklift to lift gym weights. Don't expect to also get the benefits that the gym is designed to impart.
I related to that analogy too; in fact, that whole piece is worth reading. I can't seem to find its link, though!
Essentially, since LLMs are a summary of "the" state of knowledge, the teacher should be able to ask them to put a number on how novel a piece of text is.
Once LLMs are able to evaluate, independently, the soundness of an argument... (Hopefully, this will be achieved AFTER $5 H100s reach the average consumer)
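A minimal sketch of what "put a number on novelty" could look like today, assuming the OpenAI Python client and a made-up rubric; whether the score actually tracks novelty is exactly the open question:

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    RUBRIC = (
        "You grade student writing for novelty relative to the mainstream material "
        "you were trained on. Reply with an integer from 0 (pure restatement of "
        "well-known material) to 10 (an argument you have essentially never seen), "
        "followed by one sentence of justification."
    )

    def novelty_score(text: str) -> str:
        """Ask the model to put a number on how novel a piece of text is."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            temperature=0,  # keep the grading as repeatable as possible
            messages=[
                {"role": "system", "content": RUBRIC},
                {"role": "user", "content": text},
            ],
        )
        return response.choices[0].message.content

    print(novelty_score("The five-paragraph essay is the ideal vessel for every argument."))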
Look, we have no idea what the feedback is like that this grad student gives, what the class sizes are like, what the cadence is, what the grade percentages are, etc. All we know is that Clayton Ramsey is a grad student at Rice in the robotics department and that he wrote a hot take here.
For me, the most important thing is whether this grader is bothering to really grade at all. I think we've all had a harried grad student just dash off a few red lines on the week-one HW about a week before the final exam. That's not a two-way street, and if the feedback isn't as in-depth as he wants the work to be, well, he shouldn't be surprised. He can't expect students to put in the time unilaterally. But we don't know any of that, really.
Personally, I think that before the decade is out, we're not going to be talking about this at all. Because the students will be adept enough at using the LLMs to make it look like their own writing anyways. This is a problem that experience will solve for them.
And also, I think that the days of the massive lectures and essays are pretty much cooked. That 'cheap' model of education can't survive this LLM revolution. We obviously have to change what the heck higher education is trying to do.
My take is that we're going to go to smaller class sizes like those at St. John's or Oxbridge. Under 10 people, you have to have done the reading or look like a fool, all with a PhD in the subject as a guide/teacher. Large classes weren't cutting it for decades (ask any Frat about their test banks), and now the veil is just ripped off.
I'm sure the time has come for college students to master using LLMs. It's just as important as grammar or basic math now. The software I build (and the entire tech industry) automates huge swaths of business processes with AI. Students need to be able to understand, work with, and manage swarms of AI agents doing work.
To stick to the analogy:
I need skilled forklift drivers, not big buff workers like I used to need.
Sure, you should lift them yourself too. But using an AI teaches you a shit-ton more about any field than your own tired brain was going to uncover. It's a very different but powerful educational experience.