Our Methodology

Understanding how we evaluate reasoning quality in public discourse

What We Do (and Don't) Evaluate

We evaluate: Logical reasoning quality, argument structure, evidence usage, and fallacy detection

We do NOT evaluate: Policy positions, political ideology, or whether we agree with the content

Truth Blocks: The Foundation

Every speech is broken down into "truth blocks": atomic units of reasoning that each represent a single claim or argument (see the sketch after this list). This allows us to:

  • Evaluate each claim independently for logical soundness
  • Identify specific fallacies at a granular level
  • Provide detailed feedback on reasoning quality
  • Track patterns across multiple speeches
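
To make this concrete, here is a minimal sketch of what a truth block record might look like. The class and field names are illustrative assumptions, not the schema our system actually uses.

    from dataclasses import dataclass, field

    # Illustrative sketch only: the class and field names are hypothetical,
    # not the schema the system actually uses.
    @dataclass
    class TruthBlock:
        speech_id: str                # which speech the block was extracted from
        text: str                     # the claim or argument, verbatim
        fallacies: list[str] = field(default_factory=list)  # detected fallacy labels
        evidence_cited: bool = False  # whether the block cites supporting evidence

    # A speech becomes an ordered list of blocks, each evaluated independently.
    blocks = [
        TruthBlock("speech-001", "Crime fell 10% after the law passed, so the law worked."),
    ]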

Fallacy Detection

Our AI system detects common logical fallacies including:

  • Ad Hominem: Attacking the person instead of the argument
  • Straw Man: Misrepresenting an opponent's argument
  • False Dilemma: Presenting only two options when more exist
  • Slippery Slope: Arguing that one step will inevitably set off a chain of increasingly unlikely consequences
  • Appeal to Authority: Citing an authority's opinion in place of evidence or reasoning
  • Red Herring: Introducing irrelevant information
  • Hasty Generalization: Drawing conclusions from insufficient data
  • And many more...

Note: Fallacies are identified based on established principles of informal logic and argumentation theory.
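
As a rough illustration, the taxonomy above can be treated as a label set that the detector draws from when annotating each truth block. The identifiers below are paraphrases of the list, not our internal labels, and the example record is hypothetical.

    from enum import Enum

    # Illustrative label set mirroring the list above; the real taxonomy
    # is larger ("and many more") and these identifiers are hypothetical.
    class Fallacy(Enum):
        AD_HOMINEM = "ad hominem"
        STRAW_MAN = "straw man"
        FALSE_DILEMMA = "false dilemma"
        SLIPPERY_SLOPE = "slippery slope"
        APPEAL_TO_AUTHORITY = "appeal to authority"
        RED_HERRING = "red herring"
        HASTY_GENERALIZATION = "hasty generalization"

    # A detection attaches zero or more labels to a truth block.
    example = {"text": "My opponent is a liar, so ignore his numbers.",
               "labels": [Fallacy.AD_HOMINEM]}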

Fallacy Rate Calculation

We calculate fallacy rate as the number of detected fallacies per 1,000 words:

Fallacy Rate = (Total Fallacies / Word Count) × 1000

This standardized metric allows fair comparison across speeches of different lengths (a worked example follows these bands):

  • < 5.0: Excellent - Very few logical issues
  • 5.0 - 9.9: Good - Minor logical issues
  • 10.0+: Needs Improvement - Significant logical issues
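
For concreteness, here is a minimal worked example of the formula and rating bands above; the function names are ours, chosen for illustration.

    def fallacy_rate(total_fallacies: int, word_count: int) -> float:
        # Fallacies per 1,000 words, per the formula above.
        return (total_fallacies / word_count) * 1000

    def rating(rate: float) -> str:
        # Map a rate onto the bands above.
        if rate < 5.0:
            return "Excellent"
        if rate < 10.0:
            return "Good"
        return "Needs Improvement"

    # Example: 12 fallacies in a 3,000-word speech.
    r = fallacy_rate(12, 3000)
    print(f"{r:.1f} -> {rating(r)}")  # 4.0 -> Excellent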

Political Balance

We are committed to balanced coverage across the political spectrum:

  • We analyze figures from left, center, and right political positions
  • Coverage statistics are displayed transparently on the figures list page
  • The same evaluation criteria apply regardless of political affiliation
  • Our goal is to improve public discourse, not advance any particular agenda

Important: High or low fallacy rates don't indicate whether policies are correct - only whether arguments are logically sound.

Frequently Asked Questions

What do you evaluate?

We evaluate the logical reasoning quality in public speeches, including fallacy detection, argument structure, and evidence usage. We do NOT evaluate policy positions or political ideology.

How does fallacy detection work?

Our AI system analyzes speeches by breaking them into "truth blocks": individual claims or arguments. Each block is evaluated for logical fallacies, reasoning quality, and evidence support by language models guided by argumentation theory.
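
In outline, the pipeline looks like the sketch below. Both helper functions are placeholders standing in for the model-backed steps: the naive splitting and keyword check are illustrative assumptions, not how our extraction or detection actually works.

    def split_into_blocks(transcript: str) -> list[str]:
        # Placeholder for the model-backed extraction step: naive sentence
        # splitting here, whereas the real system extracts claims.
        return [s.strip() for s in transcript.split(".") if s.strip()]

    def detect_fallacies(block: str) -> list[str]:
        # Placeholder for the language-model classifier described above.
        return ["ad hominem"] if "liar" in block.lower() else []

    def analyze_speech(transcript: str) -> list[dict]:
        # One record per truth block, each evaluated independently.
        return [{"block": b, "fallacies": detect_fallacies(b)}
                for b in split_into_blocks(transcript)]

    print(analyze_speech("My opponent is a liar. Crime fell last year."))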

Are you politically biased?

No. We evaluate reasoning quality, not political positions. We aim for balanced representation across the political spectrum and display our coverage statistics transparently on the figures list page.

What are truth blocks?

Truth blocks are atomic units of reasoning - individual claims or arguments extracted from speeches. By breaking speeches into smaller units, we can evaluate each claim independently and provide more granular feedback.

How accurate is the AI evaluation?

AI evaluation is a tool, not a final judgment. While our system is sophisticated, it can make mistakes. We encourage users to read the original speeches and form their own conclusions. The AI analysis is meant to highlight potential issues for further investigation.

Can I see the raw data?

Yes! Every speech detail page shows the full transcript, all truth blocks, and detailed fallacy analysis. We believe in complete transparency about our evaluation process.

Who decides what counts as a fallacy?

Our system is based on established principles of informal logic and argumentation theory. We use widely recognized fallacy taxonomies and detection methods. The methodology is documented and can be reviewed by anyone.

How can I contribute?

Currently, the system is managed by administrators who curate speeches and ensure balanced coverage. In the future, we may add community features. For now, you can provide feedback through our contact form.

Limitations & Disclaimers

  • AI is a tool, not a judge: Our AI system can make mistakes. Always read the original speeches and form your own conclusions.
  • Context matters: Some rhetorical devices may be flagged as fallacies even when used appropriately for persuasion.
  • Continuously improving: We regularly update our detection methods based on feedback and research.
  • Transparency first: All analysis details are available for review - we hide nothing.
  • Not fact-checking: We evaluate reasoning structure, not factual accuracy. A logically sound argument can still contain false premises.