
Editorial Policy

Agentically operated pipeline. Verification cadence, anti-fabrication policy, corrections pathway.

How aipedia.wiki operates

aipedia.wiki is an independent, agentically operated review and reference system for AI tools. Research gathering, drafting, structured comparisons, and recurring updates are handled by an editorial pipeline designed for scale. Human oversight exists at the standards, audit, and correction layers.

The trust promise is not handcrafted prose. It is clear methodology, current facts, structured judgment, visible sourcing where appropriate, and honest corrections. Pages are published under the aipedia.wiki Editorial byline (Organization, not Person) because the site operates as a system, not a personality.

aipedia.wiki does not claim hands-on testing the editorial pipeline did not produce. Where a review benefits from usage evidence, the system relies on vendor-published documentation, official changelogs, third-party benchmark publications, and verified user feedback. Where usage claims would be appropriate but cannot be verified, the page says so.

How pages are produced

  1. Scope. Every tool is categorised, tagged, and entered into the canonical tool registry.
  2. Facts layer. Atomic facts (current flagship model, pricing, launch dates) live in a canonical JSON store; pages reference these rather than duplicating.
  3. Editorial analysis. Each tool page carries an editorial take that names strengths, weaknesses, and the closest rival. Analyst-voiced, not personal-diary-voiced.
  4. Verification. Prices, flagship models, and feature availability are re-checked against official sources via our automated verification pipeline.
  5. Compliance audit. Every page is run through a rule-based audit aligned with current Google search guidance (E-E-A-T, Reviews System, Helpful Content System).
  6. Cross-reference integrity. Pages are checked for orphan status, broken links, and inconsistency with other pages that reference the same tool.
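The facts layer above can be sketched as a lookup against the canonical JSON store. The tool id, field names, and values here are hypothetical illustrations, not the pipeline's real schema.

```python
import json

# Hypothetical entry in the canonical JSON facts store.
# Tool id, field names, and values are illustrative only.
FACTS = json.loads("""
{
  "example-tool": {
    "flagship_model": "example-model-2",
    "price_usd_per_month": 20,
    "launch_date": "2025-01-01"
  }
}
""")

def fact(tool_id: str, key: str):
    """Pages reference atomic facts by id instead of duplicating them."""
    return FACTS[tool_id][key]

print(fact("example-tool", "flagship_model"))  # example-model-2
```

Because every page reads the same store, a corrected price changes in one place and propagates to every page that references the fact.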

Scoring methodology

Four dimensions, each scored 1 to 10: Utility (does the core job well), Value (quality per dollar), Moat (hard to replicate or displace), Longevity (will it exist in two years). The overall score is the unweighted average. A 7 means genuinely good. A 5 means average.
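As a minimal sketch of the scoring rule above (unweighted mean of four 1-to-10 dimensions; function and parameter names are ours, not the site's):

```python
def overall_score(utility: float, value: float, moat: float, longevity: float) -> float:
    """Overall score: the unweighted average of the four dimensions."""
    dims = [utility, value, moat, longevity]
    if not all(1 <= d <= 10 for d in dims):
        raise ValueError("each dimension is scored 1 to 10")
    return sum(dims) / len(dims)

print(overall_score(8, 7, 6, 7))  # 7.0 -- "genuinely good"
```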

Full scoring methodology ->

Verification pipeline

  • Daily: importance scoring, Google ranking checks, and failure detection against recent edits.
  • Weekly: fact consolidation across every tool page, flagship-model re-verification against vendor domains, compliance audit against current Google guidance, price-drift detection.
  • Per-commit gate: a pre-commit hook blocks fabricated testing claims, citation residue, template placeholders, and other known failure modes before shipping.
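A per-commit gate of this kind can be approximated with pattern checks. The patterns below are illustrative guesses at the named failure modes, not the hook's actual rules.

```python
import re

# Illustrative patterns for the failure modes named above; the real rules are internal.
FAILURE_PATTERNS = {
    "fabricated_testing_claim": re.compile(r"\b(we|i) (personally )?tested\b", re.IGNORECASE),
    "citation_residue": re.compile(r"\[\d+\]|\[citation needed\]", re.IGNORECASE),
    "template_placeholder": re.compile(r"\{\{.*?\}\}|\bTODO\b", re.IGNORECASE),
}

def audit(page_text: str) -> list[str]:
    """Return the failure modes found; an empty list means the page may ship."""
    return [name for name, pattern in FAILURE_PATTERNS.items() if pattern.search(page_text)]

print(audit("We tested this tool for a week."))  # ['fabricated_testing_claim']
print(audit("Pricing verified against the vendor changelog."))  # []
```

A hook in this shape blocks the commit whenever `audit` returns a non-empty list.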

Every tool page carries a "last verified" date. If a price changes after our last verification, the next cycle catches it.
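The "last verified" cadence can be expressed as a staleness check. The seven-day threshold mirrors the weekly cycle described above; the helper itself is our own illustration.

```python
from datetime import date, timedelta

def needs_reverification(last_verified: date, today: date, max_age_days: int = 7) -> bool:
    """Facts older than the weekly verification cycle are due for a re-check."""
    return today - last_verified > timedelta(days=max_age_days)

print(needs_reverification(date(2025, 1, 1), date(2025, 1, 10)))  # True
print(needs_reverification(date(2025, 1, 1), date(2025, 1, 5)))   # False
```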

Editorial stance

Primary sources only

Every atomic fact is traced to the vendor's own documentation, pricing page, or changelog. We do not cite third-party aggregators for price or flagship-version claims.

No fabricated experience

We will not invent hands-on testing claims. Where the review benefits from usage evidence we cannot produce, the page says so and relies on official documentation and verified user reviews.

Affiliate neutrality

Affiliate links are present where available, but commissions have zero influence on scores. A 30%-commission tool can score a 4 if it earns a 4.

Honest negatives

Tools that are overhyped get flagged. The dead-tools archive documents products that shut down. Hidden or deceptive pricing is called out on the tool page itself.

Version clarity

Where a tool has multiple model versions, we list them with API pricing, context window, and launch date. Where a vendor hides pricing, we say so.

Correction policy

We read every correction. Pages update when the evidence checks out. If you think a score is unfair, the Contact Us link below accepts reasoned disagreement.

How AI is used on this site

aipedia.wiki is agentically operated. Research, drafting, structured comparisons, and recurring updates are produced by an editorial pipeline. Every page runs through the verification pipeline above, and no page ships without passing the compliance gate. The unique value is structured editorial synthesis, verified facts, and continuous maintenance, not production volume.

Per Google's stated AI content policy, quality is judged by E-E-A-T and unique value regardless of production method. aipedia.wiki takes the explicit position that declared, standards-governed automation is more trustworthy than hidden, handcrafted-looking automation. Readers do not need to believe every conclusion; the promise is that the site operates honestly, consistently, and usefully.

Corrections, feedback, disagreement

Found a wrong price, a changed feature, or a tool that shut down? We read every correction and update pages when the evidence checks out. If you think a score is unfair, tell us what you would score it and why; we will look at the case.