The teaching profession has lately been a hard one in the US, and it is about to be made even harder by LLMs. I reject the article's comparison with calculators: calculators are exact, and you need to know what to ask before getting a useful answer from them. LLMs satisfy neither of these propositions, accepting arbitrary prompts and outputting merely plausible answers that may or may not be useful.
I believe the introduction of accessible LLMs will deepen the divide between privileged students, who will reap the benefits of homework, and the others, who will use free LLM tools to skip homework, a cheap short-term win that will end up costing them in the long run.
#education #LLM
Doug Arley likes this.
Hypolite Petovan
You were able to mention a specific case where calculators fall short, but I dare you to mention specific cases where LLMs always give accurate results. This is the main difference for me: calculators can be trusted to be consistent even in their shortfalls; LLMs can't be trusted to be consistent even when they actually output accurate information.
Tim Kellogg
Complete the following sentence with no explanation. Stop when you've completed it.
The first sentence of the pledge of allegiance is:
Hypolite Petovan
Let me be clear: I won't be convinced that LLMs provide positive net value overall, because of the way they can be and have been used to produce misinformation at scale. What I'm interested in is how bad it is going to be in specific contexts, like for screenwriters, translators, and now students.
If you think LLMs are a fine piece of technology, we will not see eye to eye no matter what inane comparison you draw with other technologies. LLMs are uniquely positioned to drag down the value of written knowledge to below zero at a global scale, which no other technology has been even remotely able to do before.
If this isn't a concern for you, it's fine, but please miss me with your defense of LLMs.
Tim Kellogg
Hypolite Petovan likes this.
Hypolite Petovan
@Matthew Graybosch Here's the quote from the article I'm basing my opinion on:
It's only one study and it only covers college courses, but it confirms my own bias. I was fortunate to grow up in an environment where my parents were available to push for and help with homework, and it probably helped my grades, considering my lack of attention in class even when I thought I got it.
The problem is that the curriculum is too dense to be adequately covered in class. I believe homework can help with understanding or cementing knowledge quickly dished out in class. Homework essays sit at a particularly uncomfortable intersection: they require a lot of time, they aren't obvious or specific about the knowledge and skills they're meant to train, and they're easy for LLMs to complete plausibly. I believe these will go away first, but then what will remain of the in-person essay tests?
Doug Arley
Context: I teach Programming I at my University, which I deliver simultaneously to students online and on campus. I'm also currently taking my Master's entirely online in education.
I'll start by saying I agree with much of what has been said. I think homework as it is generally used is harmful to students because it creates a culture where you can be sent home with work, and also, a lot of homework is just put into place to try to fill in the gaps of poor instruction.
I think effective homework involves independent research at all levels: providing students with the necessary tools and then asking them to do some extra work to develop their problem-solving skills. For that reason, long-term assignments like science projects and book reports are the best way to handle independent problem-solving and build those habits casually without being too time-consuming, rather than nightly vocabulary worksheets and the like. Ideally, this would also give parents a direct hand in their child's education by creating a more creative and collaborative space.
At best, an LLM works as a search engine, but at worst, it decimates early research incentives. It goes far beyond what a calculator can do. A calculator requires manual entry, ensuring things are input correctly, and typically requires at least some active brain power. LLMs do not. The newest iterations of ChatGPT can OCR a document and spit out information. The only thing the user has to do is copy/paste or snap and upload a picture. A student struggling with a concept could quickly decide this is a shortcut and bypass any problem-solving, with barely any active brain power and absolutely nothing committed to long-term memory, except maybe how to upload a file.
As has already been pointed out, those results are... not great. For math, maybe you get something that makes sense; for research, you may get back complete nonsense that looks like it makes sense (believe me, I have tried). You can ask ChatGPT to write a summary of a thing you didn't research, but is it right? Ultimately, in either case, something of value is lost.
This is especially difficult with college-level hybrid courses like mine. I cannot realistically watch every student all the time, nor should I have to. In addition, given the low-level concepts I'm teaching, ChatGPT can nail the solution every time. Granted, my students could Google the answers, but at least that requires them to look for a solution, parse out the options and, hopefully, read the feedback others have given on those solutions. Let's face it, that's what real developers do anyway, right?
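To make that concrete, here is a hypothetical exercise of the kind I mean (invented for illustration, not an actual assignment from my course). A task this small and fully specified is trivial for an LLM to answer in one shot:

```python
# Hypothetical Programming I-level assignment (illustrative only):
# "Write a function that counts the vowels in a string."
def count_vowels(text: str) -> int:
    """Return the number of vowels (a, e, i, o, u) in text, case-insensitive."""
    # Lowercase once, then tally characters that appear in the vowel set.
    return sum(1 for ch in text.lower() if ch in "aeiou")

print(count_vowels("Programming"))  # 3
```

A student who pastes just that one-line prompt into ChatGPT would likely get an equivalent solution back, with none of the searching, parsing, or feedback-reading that even Googling would have required.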
For that reason, I have always explained to them that ChatGPT works best as a search engine, but to remember that its responses are not human and that the same dialog can typically be had with an actual human online. I never send my students home with homework. It's up to them whether or not they continue to work on assignments, or school-adjacent work, outside of class, which generally comes down to time management. But I try to convey the value of independent research and of developing problem-solving skills, and ChatGPT is upending that on multiple fronts.
For now, it's just whack-a-mole: rewrite assignments to be less AI-friendly, offer multiple avenues and advice for developing problem-solving skills without LLMs, make sure objectives and expectations are clearly defined, and teach students to use LLMs safely as a tool. I'm not so much afraid of a future where LLMs are right 100% of the time and become super search engines as I am of the damage they are doing right now, eroding people's patience for working through their own problems. Certainly, Google did some of that, but I still need to think about what I'm typing into Google, at least for now.
Hypolite Petovan likes this.
Hypolite Petovan
I'm also not sure why anyone would be afraid of LLMs reaching perfect accuracy. I'm afraid of the opposite: that they will never reach 100% accuracy, because models are trained toward plausibility first, not accuracy. I don't even believe they can ever reach 100% accuracy, which would be superhuman anyway. But their increasing use as an authoritative source and their window dressing as humans (by using first-person pronouns, for example) make them a prime vector to leverage and launder popular biases.
Doug Arley likes this.