How Teachers Can Make the Most of LLMs (Like ChatGPT!)

Carnegie Learning
6 min read · Apr 18, 2024


56% of educators use AI sometimes, often, or always.

But only 25% of schools or districts have provided AI training to teachers.

These stats from our recent survey of 800 educators confirm what we already suspected: the majority of educators are using artificial intelligence for work. But most have not been given best practices for relying on AI.

As a content engineer specializing in AI at Carnegie Learning, I’m passionate about helping educators learn about and use artificial intelligence. While most educators are familiar with large language models (LLMs) like ChatGPT, some remain skeptical of their usefulness, and others are limited in their understanding.

Whether you’re a current user of AI tools or are only just starting to learn what they can do for you, here is what you should know about large language models, how they work, and four ways to get the most out of them.

What are Large Language Models?

Large Language Models like ChatGPT, Claude, and Google Gemini are very sophisticated forms of autocomplete. They predict and then generate the next sequence of words based on the language you provide as input.

Let me explain it this way. How would you complete this sentence? What word or phrase comes first to mind?

“Pass the ___.”

When I ask ChatGPT to complete the sentence with the most likely word, it comes up with “salt,” which is what I would have said too.

But in different contexts, other words or phrases may be better, more likely endings to the sentence. If I first tell ChatGPT, “You are a member of Congress,” the most likely word to complete the sentence changes from “salt” to “bill.”

If I prompt ChatGPT with “Now you are in a school,” the word “note” becomes the most likely ending. Personally, I disagree with this last choice and prefer “test” instead. Our “pass the salt” example highlights three key features of LLMs.
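The way context shifts the most likely completion can be sketched in a tiny toy model. To be clear, this is not how a real LLM works internally (a real model computes probabilities over tens of thousands of tokens with a neural network); the word choices and probabilities below are invented for illustration.

```python
# Toy next-word predictor for "Pass the ___."
# The contexts and probabilities are made up for illustration;
# a real LLM derives them from patterns in its training data.
completions = {
    "default":  {"salt": 0.6, "ball": 0.2, "time": 0.1},
    "congress": {"bill": 0.7, "vote": 0.2, "salt": 0.1},
    "school":   {"note": 0.5, "test": 0.3, "salt": 0.1},
}

def most_likely_word(context="default"):
    """Return the highest-probability completion for the given context."""
    options = completions.get(context, completions["default"])
    return max(options, key=options.get)

print(most_likely_word())            # salt
print(most_likely_word("congress"))  # bill
print(most_likely_word("school"))    # note
```

Prompting an LLM with "You are a member of Congress" works much like switching the context key here: the model itself doesn't change, only which patterns it draws on.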

LARGE MODELING

The autocomplete on your phone has learned language patterns, grammar, and vocabulary from a small set of available text data. Developers train large language models on thousands of times more data than your phone's autocomplete has ever seen. They draw from websites, books, news articles, educational materials, scientific papers, conversations, and on and on. This is what puts the "large" in large language model.

PROMPTING

The second feature is prompting. Through commands and added context, you can redirect the LLM to autocomplete in a way that better suits your specific needs or situations. When I prompted ChatGPT with "Now you are in a school," I did not change the underlying model. I simply narrowed its focus to text patterns that are more likely to occur in school settings than elsewhere.

LEARNING

The third key feature is learning. As users interact with LLMs and provide feedback, these models gradually learn better ways of responding. This is a feature of the autocomplete on your phone as well (not on mine, though, I’m convinced).

The difference is that while your phone quickly learns your individual language patterns, LLMs learn more gradually and more carefully. LLMs learn from the inputs and feedback of millions of users over time.

The impact of LLMs in education

Let’s be clear. Large language models are here to stay. They are not a fad. Educators worldwide are already hard at work using LLMs for their needs.

Educators are generating lesson plan outlines, quizzes, and reading passages. They are using LLMs to both evaluate and provide feedback on responses to open-ended questions. They are providing conversational practice, vocabulary training, and grammar exercises for language learning.

They generate parent letters, writing prompts, complex problem-solving scenarios, and project ideas. They also personalize instruction, practice, and other content for individual students and their families. Many AI tools have been developed to meet these (and other) needs.

Leveraging teacher experience with LLMs

Although AI can help with many tasks, not all LLM outputs will be equally effective. The key to improving student learning with LLMs is the input of teacher expertise. We must remember that, for all of their power and sophistication, large language models are repositories of common wisdom, not evidence-based best practices.

In our recent webinar “AI in Education: What You Should Know,” my co-presenter and I offered specific tips on how to optimize your use of AI. Here are four additional tips to help you, as an education expert, put the most into and get the most out of LLMs.

1. KNOW WHO’S IN CHARGE

You are! You are the expert. Don’t treat LLMs like the web because they are most certainly not the web. Nor are they mind-reading services designed to give you exactly what you want with no effort on your end.

The outputs you get from LLMs are very much dependent on what you provide as expert input. So, try not to settle. It can be tempting to accept the first response you get when using an LLM. After all, it’s designed to sound reasonable and (somewhat) coherent. But be aware that LLMs are still learning.

For any task you want to give an LLM, think hard about how YOU would do it first and work to make the outputs close to what you, the expert, want.

2. BE SPECIFIC

For any task you want an LLM to do, imagine pulling a random person off the street and explaining the task to them. This will naturally make you more detailed than you otherwise would be. And that’s what LLMs need: specificity.

Every day, billions of people across the globe create sentences that have never been uttered before. This is how immensely vast the space of language is. Large language models, no matter how big, can only capture a part of that space.

So, you have to be specific. Conversing with LLMs can be both rewarding and frustrating because it requires you to carefully consider the precise meaning and phrasing of your statements. Giving examples of how you would complete a task is a great way to maximize quality. This can produce results that are sensitive to your own style and context.
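One concrete way to give examples is a few-shot prompt: you show the LLM two or three completed samples of the task before asking for a new one. The little helper below simply assembles such a prompt as text; the topics and wording are hypothetical placeholders you would swap for your own material.

```python
# Hypothetical few-shot prompt: two worked examples of the task,
# followed by the new topic we want the LLM to handle in the same style.
examples = [
    ("photosynthesis",
     "How do plants turn sunlight into food? Explain in 2 sentences for a 5th grader."),
    ("the water cycle",
     "Where does rain come from? Explain in 2 sentences for a 5th grader."),
]

def build_prompt(new_topic):
    """Assemble a few-shot prompt: instructions, worked examples, new request."""
    lines = ["Write a short question-and-explanation task for a 5th-grade science class.", ""]
    for topic, sample in examples:
        lines.append(f"Topic: {topic}")
        lines.append(f"Task: {sample}")
        lines.append("")
    lines.append(f"Topic: {new_topic}")
    lines.append("Task:")  # the LLM completes this line in the style of the examples
    return "\n".join(lines)

print(build_prompt("magnets"))
```

Because the examples carry your style and grade level implicitly, the LLM's completion tends to match them without you having to describe every requirement in words.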

3. BE CREATIVE

Teachers are natural pros at this. Can’t seem to get the LLM to do what you want? Well, try breaking up a complex task into parts or emphasizing to the LLM that this part of the task is “important.”

When you adjust your prompts, try first to amend them at the beginning or the end. With long prompts, LLMs tend to "forget" what is in the middle on occasion (just like humans).

Think about your own context and use more precise words or examples to teach the LLM what it is you expect. Above all, be persistent.

If you know how to complete the task, chances are good that there is some language that can make an LLM do the same or similar. All you have to do is find that language. And that often requires some creative thinking.

4. WORK TO GET IT RIGHT

When I'm working with ChatGPT on something complex, I will very often start with something less specific than I want and use the conversation to refine what I'm looking for. If I get good results at the end, I will ask ChatGPT to write a prompt based on the conversation. That prompt will get me the results I want in one fell swoop.

That certainly doesn't always work, but it is very informative as to how I can structure my requests. Don't forget that you can use LLMs not only to formulate responses but also to work with you on the right instructions.

This article originally appeared on Carnegie Learning’s blog.



Carnegie Learning is shaping the future of education, using AI, formative assessment, and adaptive learning to deliver groundbreaking solutions.