Instructional designers have a name for AI-generated e-learning that covers a topic without teaching it: AI slop. Slop is content that informs without engaging the learner, uses generic examples nobody recognises, assesses recall rather than application, and reads with the unmistakeable rhythm of a language model filling a template. The concern is legitimate - but slop is not an inevitable property of AI-generated content. It is a symptom of AI tools that treat course generation as a text production task rather than an instructional design task.
What AI slop actually looks like
"AI slop" is not a single failure - it is a cluster of related quality failures that experienced instructional designers recognise immediately.
Informing without teaching
The course presents accurate information in a logical order but never asks the learner to do anything with it. Paragraphs of explanation followed by a knowledge-recall quiz. The learner reads, clicks next, answers five questions about what they just read, and leaves. Nothing changes how they think or what they do.
Generic examples that belong to nobody
The scenario involves "an employee" at "a company" dealing with "a customer complaint". A care worker completing safeguarding training needs examples from a care setting. A retail bank adviser needs examples from a branch environment. Generic examples signal the course was not written for them, and cognitive disengagement follows.
The recognisable AI voice
Smooth, clause-heavy sentences that carry no personality or authority. Phrases like "it is important to note that" and "in today's fast-paced environment" appear so frequently that experienced readers can identify the output within a paragraph. When training feels machine-generated, learners discount it before they have read it.
Assessments that test the wrong thing
Multiple-choice questions where the correct answer is the only plausible option, testing whether the learner read the preceding paragraph rather than whether they can apply the concept. These assessments generate pass rates that tell you nothing about whether learning occurred.
Comprehensiveness over focus
AI tools given broad prompts tend toward comprehensiveness - covering everything loosely rather than the key thing deeply. A 30-minute course on data protection that covers twelve sub-topics at three paragraphs each is weaker than one that covers four sub-topics properly, with scenarios, decision points, and meaningful assessment.
AI slop is not a property of AI. It is a property of AI tools that treat course generation as a text production task. Those are different problems with different solutions.
Why it happens - the tool architecture problem
Most AI content generation tools work by sending a prompt to a large language model and returning the text output. The model produces fluent, organised text. The tool wraps it in a template. The output looks like a course.
What is absent from this process is instructional design. The model has no concept of a learning objective in the Bloom's Taxonomy sense. It does not know whether its examples are appropriate for this audience in this sector. When the input is vague, the output will be generic. And content that does not connect to the learner's world does not change their behaviour.
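In pseudocode terms, that pipeline is very short - which is precisely the problem. Here is a minimal sketch of the text-production approach, with a stubbed model call standing in for a real LLM API; the function and template names are purely illustrative:

```python
def call_language_model(prompt: str) -> str:
    """Stand-in for a real LLM API call."""
    return f"[Fluent, organised text about: {prompt}]"

def generate_course_naive(topic: str) -> str:
    # One vague prompt, one block of text, one template wrap.
    # Nothing here knows about learning objectives, audience,
    # cognitive level, or assessment design.
    body = call_language_model(f"Write a course about {topic}")
    return f"<course><title>{topic}</title><body>{body}</body></course>"

print(generate_course_naive("GDPR"))
```

Everything a course needs beyond fluent text has to come from the system wrapped around that call - which is what the rest of this article is about.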
What a well-designed AI authoring system does differently
The difference between AI-generated slop and content worth publishing is not primarily about which language model is used - it is about the design of the system around the model. Specifically, it comes down to six things a well-designed AI authoring tool does before, during, and after generation.
1. It demands specific context before generating anything
Slop is almost always the result of a vague prompt. "Write a course about GDPR" produces generic content. "Write a 25-minute GDPR course for customer service advisers at a UK retail bank, with a conversational tone, a case study teaching approach, and examples drawn from inbound telephone enquiries about account data" produces something materially different.
CourseAgent's course parameters - audience, knowledge level, geographic focus, writing tone, teaching approach, cultural context, and scenario setting - are the structured brief that constrains and directs every content decision. Choosing "UK" changes regulatory references. Choosing "case study method" changes how concepts are introduced. The parameters do not just label the course - they shape it.
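Conceptually, those parameters act as a structured brief that the system refuses to generate without. The sketch below shows the shape of that idea; the field names are illustrative assumptions, not CourseAgent's actual schema:

```python
from dataclasses import dataclass, fields

# Illustrative brief structure - field names are assumptions for this
# sketch, not CourseAgent's actual parameter schema.
@dataclass
class CourseBrief:
    audience: str          # e.g. "customer service advisers at a UK retail bank"
    knowledge_level: str   # e.g. "beginner"
    geography: str         # e.g. "UK" - drives regulatory references
    tone: str              # e.g. "conversational"
    teaching_approach: str # e.g. "case study method"
    scenario_setting: str  # e.g. "inbound telephone enquiries about account data"

def build_prompt(topic: str, duration_minutes: int, brief: CourseBrief) -> str:
    # Refuse to generate from an incomplete brief: vague in, generic out.
    missing = [f.name for f in fields(brief) if not getattr(brief, f.name).strip()]
    if missing:
        raise ValueError(f"Brief incomplete, cannot generate: {missing}")
    return (
        f"Write a {duration_minutes}-minute course on {topic} "
        f"for {brief.audience} ({brief.knowledge_level} level), "
        f"with a {brief.tone} tone, a {brief.teaching_approach} teaching approach, "
        f"{brief.geography} regulatory references, "
        f"and examples drawn from {brief.scenario_setting}."
    )
```

The point of the structure is the refusal: a system that will not generate until the brief is complete cannot produce the "write a course about GDPR" output.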
2. It applies instructional design structure, not just text organisation
Text organisation is putting related paragraphs together. Instructional structure means: establishing the learning objective, activating prior knowledge, presenting new content at the appropriate cognitive level, providing examples that bridge concept and application, creating opportunities for practice, and assessing whether the practice worked.
CourseAgent's AI generates courses with pedagogically typed pages - Foundation, Deep dive, Application, Introduction, Summary - sequenced according to instructional design principles rather than simply grouping related information.
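To make the contrast concrete, here is a minimal sketch of an explicit page-type plan. The page-type names come from the paragraph above; the sequencing rule itself is a simplified illustration, not CourseAgent's actual logic:

```python
from enum import Enum

class PageType(Enum):
    INTRODUCTION = "Introduction"  # establish the objective, activate prior knowledge
    FOUNDATION = "Foundation"      # present new content at the right cognitive level
    DEEP_DIVE = "Deep dive"        # bridge concept and application with examples
    APPLICATION = "Application"    # practice: scenarios and decision points
    SUMMARY = "Summary"            # consolidate and assess

def plan_topic_pages(needs_practice: bool) -> list[PageType]:
    """A simplified sequencing rule: every topic moves from objective
    through content to practice and assessment, rather than just
    grouping related paragraphs together."""
    pages = [PageType.INTRODUCTION, PageType.FOUNDATION, PageType.DEEP_DIVE]
    if needs_practice:
        pages.append(PageType.APPLICATION)
    pages.append(PageType.SUMMARY)
    return pages
```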
3. It selects content formats based on what the content requires
A paragraph of explanatory text is the right format for some content. It is the wrong format for a process with sequential steps, a comparison between two options, or a decision tree.
CourseAgent selects from 35+ interactive section types based on what the content requires. A sequential process becomes a timeline or slideshow. A comparison becomes a flip card or tab. A decision point becomes a scenario with branching options.
| | Text-generation approach | Instructionally structured approach |
|---|---|---|
| Method | "There are four steps to handling a customer complaint. First, listen actively..." | A four-step slideshow - each slide presenting one step with a realistic customer dialogue extract - followed by a scenario with plausible options and feedback explaining consequences |
| Result | Accurate. Organised. Instantly forgettable. | Same content. The learner makes a decision, experiences a consequence, and connects the principle to a real situation. |
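In code, this kind of selection is essentially a dispatch from the structure of the content to a section type. The sketch below is a deliberately toy version: the content categories and section-type names are illustrative assumptions, and a production system would use far richer signals than a single structure label:

```python
# Illustrative mapping from the structure of a piece of content to a
# section type. Category and type names are assumptions for this sketch;
# CourseAgent's actual selection logic is not public.
SECTION_TYPE_FOR_STRUCTURE = {
    "sequential_process": "slideshow",       # one step per slide
    "comparison": "flip_card",               # option A vs option B
    "decision_point": "branching_scenario",  # choose, then see consequences
    "definition": "text_block",              # plain prose is the right tool here
}

def select_section_type(content_structure: str) -> str:
    # Fall back to plain text only when no interactive format fits better.
    return SECTION_TYPE_FOR_STRUCTURE.get(content_structure, "text_block")

assert select_section_type("sequential_process") == "slideshow"
```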
4. It threads real-world context throughout the course
The most common half-measure in AI course generation is adding a scenario to the assessment but leaving the content sections generic. The scenario feels bolted on because it is bolted on.
CourseAgent's scenario threading works differently. Authors provide a real-world context at the start - a specific workplace, characters, an organisational situation - and the AI weaves this throughout the entire course. The introduction establishes the setting. Each topic uses the scenario. Interactive sections use scenario-specific examples. Quiz questions place the learner inside the scenario rather than asking abstract questions.
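One way to picture threading: the same scenario object is injected into every generation step, not just the assessment. The field names and prompt wording below are assumptions for illustration:

```python
from dataclasses import dataclass

# Illustrative scenario object - the fields mirror the description above
# (a workplace, characters, a situation), not an actual API.
@dataclass
class Scenario:
    workplace: str   # e.g. "a 40-bed residential care home"
    characters: str  # e.g. "Priya, a new care assistant; Tom, her supervisor"
    situation: str   # e.g. "a resident's family raises a safeguarding concern"

def section_prompt(section_goal: str, scenario: Scenario) -> str:
    """Every section's prompt carries the scenario - introduction, topics,
    interactive sections, and quiz questions alike - so nothing is bolted on."""
    return (
        f"{section_goal}\n"
        f"Set all examples in {scenario.workplace}, "
        f"using the characters {scenario.characters}, "
        f"within this ongoing situation: {scenario.situation}."
    )

care_home = Scenario(
    workplace="a 40-bed residential care home",
    characters="Priya, a new care assistant; Tom, her supervisor",
    situation="a resident's family raises a safeguarding concern",
)
print(section_prompt("Write a quiz question on escalation routes.", care_home))
```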
5. It controls the relationship between source material and generated content
One of the specific risks in AI-generated compliance content is creative interpolation - the AI adding information that was not in the source material, or paraphrasing regulatory language in ways that subtly change its meaning.
CourseAgent's AI influence slider addresses this directly. At low influence, the AI preserves the wording and structure of the source document as closely as possible. At high influence, the AI treats the source as a brief and generates expansive, contextually enriched content. The setting is explicit and author-controlled.
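One plausible implementation of such a slider translates its position into explicit fidelity instructions for the model. The thresholds and wording below are assumptions for the sketch, not CourseAgent's actual behaviour:

```python
def fidelity_instruction(influence: float) -> str:
    """Map a 0.0-1.0 influence setting to an explicit instruction for the
    model. Thresholds and wording are illustrative."""
    if not 0.0 <= influence <= 1.0:
        raise ValueError("influence must be between 0.0 and 1.0")
    if influence < 0.3:
        # Low influence: the source document is authoritative.
        return ("Preserve the source's wording and structure. Do not add "
                "information that is not in the source. Quote regulatory "
                "language verbatim.")
    if influence < 0.7:
        # Mid influence: restructure for teaching, stay traceable.
        return ("Restructure the source for teaching, but keep every factual "
                "claim traceable to it. Mark any added examples as additions.")
    # High influence: the source is a brief, not a script.
    return ("Treat the source as a brief. Expand, enrich with context and "
            "examples, and restructure freely.")
```

Whatever the exact mechanics, the property that matters for compliance content is that the setting is explicit and author-controlled rather than an undisclosed default.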
6. It provides a structured quality check after generation
CourseAgent's course audit runs a systematic quality review covering content coherence, assessment alignment, inclusive language, accessibility, objective alignment, and enhancement opportunities - producing specific findings categorised by severity, with locations and suggested fixes. It checks for the most common slop symptoms: assessments that test the wrong thing, uncovered learning objectives, terminology inconsistencies, and sections where interactivity would improve learning.
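The audit output described above maps naturally onto a structured findings record: a check, a severity, a location, and a suggested fix. Here is a minimal sketch; the field names, check names, and severity levels are illustrative:

```python
from dataclasses import dataclass
from enum import Enum

class Severity(Enum):
    CRITICAL = "critical"
    WARNING = "warning"
    SUGGESTION = "suggestion"

# Illustrative finding record matching the audit output described above.
@dataclass
class AuditFinding:
    check: str          # e.g. "assessment_alignment", "inclusive_language"
    severity: Severity
    location: str       # e.g. "Topic 2, quiz question 3"
    detail: str         # what is wrong
    suggested_fix: str  # how to fix it

example = AuditFinding(
    check="assessment_alignment",
    severity=Severity.WARNING,
    location="Topic 2, quiz question 3",
    detail="Correct answer is the only plausible option; tests recall, not application.",
    suggested_fix="Rewrite distractors as plausible actions within the course scenario.",
)
```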
See the difference a structured brief makes - try CourseAgent free
Set your audience, sector, tone, teaching approach, and scenario. Then generate. The difference between a vague prompt and a specific brief is the difference between slop and something worth publishing.
What AI authoring cannot do
Honesty about what a well-designed AI authoring system does differently requires equal honesty about what it cannot do.
- Factual accuracy in specialised domains. The AI generates content based on its training data and the source material provided. For highly specialised clinical, legal, or technical domains, a subject matter expert must verify accuracy before publication.
- Organisational nuance. Every organisation has internal terminology, cultural sensitivities, and political dynamics that no AI tool can know without being told. The author remains responsible for ensuring the course reflects the organisation accurately.
- The final review from the learner's perspective. Reading the completed course as a learner - not as the author who knows what was intended - is the quality check that catches more than any automated review. It takes 20 minutes and it matters.
The short version
AI slop in e-learning is real and the criticism deserves to be taken seriously. Slop is content that informs without teaching, uses generic examples, carries an unmistakeable AI voice, tests the wrong things, and favours comprehensiveness over focus. It is not a property of AI - it is a property of AI tools that treat course generation as a text production task. The difference between slop and something worth publishing comes down to six things: whether the tool demands a specific brief, applies instructional design structure, selects content formats based on need, threads real-world context throughout, controls source fidelity, and provides a structured quality check. None of these eliminate the human author's responsibility - but they determine whether the AI amplifies good instructional design or produces the appearance of a course without the substance of one.
Try CourseAgent free
Build your first course in under 30 minutes. No credit card. No technical skills. No time limit.
Start free →