Casa ESL · C2 Mastery · Unit 19 of 20 · Step 2
Argument structure — claim, warrant, qualifier, rebuttal (Toulmin model)
Name
Date
Vocabulary
claim
noun: The main assertion or thesis that an argument seeks to establish.
"The claim that AGI will be achieved within a decade is contested by many researchers."
warrant
noun: The underlying assumption or principle that connects evidence to a claim.
"The warrant — that exponential increases in compute will produce qualitative leaps in intelligence — is precisely what critics challenge."
qualifier
noun: A word or phrase that limits the strength or scope of a claim (probably, in most cases, generally).
"The qualifier 'in most cases' acknowledges that the claim does not hold universally."
rebuttal
noun: An exception to or counterargument against the claim, often pre-emptively addressed.
"A strong argument anticipates the most powerful rebuttal and addresses it directly."
sentience
noun: The capacity to have subjective experiences — to feel, perceive, and be aware.
"Whether an artificial system could achieve genuine sentience remains deeply controversial."
alignment
noun: The problem of ensuring that an AI system's goals and behaviours conform to human values and intentions.
"The alignment problem is considered by many to be the central challenge of AGI safety."
emergent
adjective: Arising as a novel property from the interaction of simpler components — not present in any individual part.
"Language ability in large models appears to be emergent — arising at scale without being explicitly programmed."
existential risk
noun: A risk that threatens the entire future of humanity.
"Some researchers classify unaligned superintelligence as an existential risk."
Grammar Focus
Argument structure — the Toulmin model (claim, warrant, qualifier, rebuttal)
The Toulmin model provides a framework for constructing and analysing arguments.
Claim: the position being argued for.
Data/Evidence: the facts supporting the claim.
Warrant: the underlying principle connecting evidence to claim.
Qualifier: words limiting the claim's scope (probably, generally, in most cases).
Rebuttal: conditions under which the claim would not hold.
At C2 level, all five elements should be present in a well-constructed argument.
Signalling language: "It follows that..." (claim from evidence), "The underlying assumption is..." (warrant), "This holds true in most but not all cases..." (qualifier), "One might object that..." (rebuttal).
Claim: AGI will likely be achieved within the next 30 years.
Evidence: Compute has increased exponentially; large language models exhibit emergent capabilities not predicted by their training.
Warrant: Historical trends in compute and capability suggest that continued scaling will produce qualitatively new forms of intelligence.
Qualifier: "likely" — acknowledging uncertainty; "within 30 years" — bounding the prediction.
Rebuttal: However, sceptics argue that scaling alone is insufficient — that AGI may require architectural innovations that current approaches cannot deliver.
Exercises
Exercise 1
Identify the Toulmin component (Claim, Evidence, Warrant, Qualifier, or Rebuttal) in each sentence.
1. "AGI will probably be developed within 30 years." Component:
2. "Large language models have demonstrated emergent reasoning capabilities at scale." Component:
3. "However, some researchers argue that current architectures face fundamental limitations." Component:
4. "If exponential scaling of compute continues to produce qualitative improvements in capability, then..." Component:
5. "This is true in most foreseeable scenarios, though not necessarily in all." Component:
Exercise 2
Complete each argument element for the claim: "AI poses an existential risk to humanity."
1. Evidence:
2. Warrant:
3. Qualifier:
4. Rebuttal:
5. Response to rebuttal:
Reading
The Argument for Caution
The debate over artificial general intelligence — a hypothetical AI system capable of performing any intellectual task a human can — is, at its core, an argument about risk under uncertainty. The claim, advanced by researchers such as Stuart Russell, is that AGI development without adequate safety measures poses an existential risk to humanity. The evidence adduced includes: the rapid, often unpredicted emergence of new capabilities in large-scale AI systems; the demonstrated difficulty of specifying human values in formal, machine-readable terms (the alignment problem); and historical precedent suggesting that transformative technologies are rarely governed adequately before their risks are understood. The warrant connecting this evidence to the claim is essentially precautionary: if the potential downside is civilisational extinction, then even a low probability of that outcome justifies substantial investment in safety research. The qualifier is important: proponents do not claim certainty — they argue that the risk is non-trivial, perhaps somewhere between 5% and 50%, depending on the analyst. The most powerful rebuttal comes from those who argue that current AI architectures are fundamentally incapable of achieving AGI — that large language models, however impressive, are sophisticated pattern-matchers, not general reasoners, and that the leap from narrow to general intelligence may be qualitatively different from what the scaling trajectory suggests. This rebuttal has force, but those who advance the precautionary claim respond that the rebuttal itself contains an implicit gamble: if the sceptics are wrong, the consequences are irreversible.
1. Identify each Toulmin component as presented in the passage's argument about AGI risk.
2. How does the passage characterise the response to the sceptics' rebuttal?
Speaking
Discuss these questions with a partner or your teacher.
Writing
Write a structured argument (150-180 words) on any AGI-related topic using the full Toulmin model. Label each component (Claim, Evidence, Warrant, Qualifier, Rebuttal, Response to Rebuttal).
Example:
CLAIM: AI alignment research should receive at least 30% of all AI research funding.
EVIDENCE: Current AI systems have exhibited unexpected emergent behaviours, and the gap between capability research and safety research continues to widen.
WARRANT: If the probability of misaligned AGI is even moderately significant, the asymmetry of outcomes — a manageable loss if we over-invest in safety, a catastrophe if we under-invest — dictates that precaution is rational.
QUALIFIER: This holds provided that AGI remains a plausible near-to-medium-term development; if it is shown to be fundamentally unachievable, the allocation should be reconsidered.
REBUTTAL: Critics argue that diverting 30% of funding to safety would slow beneficial AI development, costing lives that near-term applications could save.
RESPONSE: This objection has merit, but it assumes a trade-off that may not exist — safety research often produces insights that improve capability, and the cost of getting alignment wrong dwarfs the cost of a modest reduction in development pace.
Answer Key — For Teacher Use
Exercise 1
1. Claim (with qualifier "probably") · 2. Evidence · 3. Rebuttal · 4. Warrant · 5. Qualifier
Exercise 2
1. AI systems have already demonstrated capabilities beyond their designers' predictions, and the alignment problem — ensuring AI goals match human values — remains unsolved. · 2. If a sufficiently powerful system pursues goals misaligned with human survival, and we lack the ability to correct it, the consequences could be catastrophic and irreversible. · 3. This risk is not certain but constitutes a non-trivial probability that warrants serious precautionary action. · 4. Sceptics counter that current AI systems are narrow tools, not autonomous agents, and that the leap from language models to superintelligence may involve challenges we cannot yet foresee — possibly making AGI far more distant than alarmists suggest. · 5. While current systems are indeed narrow, the rate of capability improvement and the difficulty of predicting emergent behaviours at scale make complacency itself a risk.
Reading Comprehension
1. Claim: AGI without safety poses existential risk. Evidence: emergent capabilities, alignment difficulty, historical precedent. Warrant: precautionary — even low probability of civilisational extinction justifies action. Qualifier: "non-trivial" probability (5-50%). Rebuttal: current architectures may be fundamentally incapable of AGI. · 2. That the rebuttal contains an implicit gamble: if the sceptics are wrong about current architectures being incapable of AGI, the consequences are irreversible — making precaution the rational choice even in the face of uncertainty.