aipedia.wiki scores every AI tool on four dimensions (Utility, Value, Moat, and Longevity) on a 1 to 10 scale. The Overall Score is the average of the four, rounded to one decimal place. Scores are editorial judgments based on primary-source research, vendor documentation, and published benchmark data; they are never driven by user votes, paid rankings, or affiliate influence.
Methodology last reviewed:
The Four Dimensions
Utility
Score range: 1 to 10
Does the tool actually solve a real problem well?
A high score means the tool is genuinely best-in-class: it meaningfully improves workflows and beats the strongest non-AI or AI alternatives.
A low score means the tool barely works, solves too narrow a problem, or adds more friction than it removes.
- How well does the core feature work in practice?
- How much friction does it add vs. remove?
- Is it better than doing the task without AI?
Value
Score range: 1 to 10
Is the price fair for what you get?
A high score means the price-to-output ratio is clearly favorable, either by replacing costly work or by unlocking unusually strong capability.
A low score means that, free or paid, the tool wastes time, hides key value behind limits, or trails cheaper direct competitors.
- What does it replace or enable?
- How does the price compare to direct competitors?
- Is the free tier genuinely useful or a funnel with no usable ceiling?
Moat
Score range: 1 to 10
How defensible is this tool's position?
A high score means proprietary data, unique training, deep integrations, workflow lock-in, or network effects make the position genuinely hard to copy.
A low score means the core feature can be replicated with off-the-shelf models, or that a well-funded competitor could commoditize it quickly.
- Is the core capability replicable with off-the-shelf models?
- Does it have proprietary data, infrastructure, or partnerships?
- How strong is the user lock-in (workflow, data, habit)?
Longevity
Score range: 1 to 10
Will this tool still exist and matter in 2 years?
A high score means strong fundamentals: clear revenue, active product velocity, and a niche unlikely to be absorbed by general-purpose AI.
A low score means shutdown, obsolescence, funding, or platform-absorption risk is high enough that users should be cautious.
- What do revenue, funding runway, and profitability signals suggest about the company's health?
- Is the use case likely to be absorbed into foundation models?
- What is the track record of shipping and maintaining the product?
Overall Score Calculation
The Overall Score is the simple average of the four dimension scores, rounded to one decimal place. A tool scoring 8 / 8 / 9 / 7 has an Overall Score of (8 + 8 + 9 + 7) / 4 = 8.0.
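The calculation above can be sketched in a few lines (an illustrative sketch, not an official implementation; the function name is our own):

```python
def overall_score(utility, value, moat, longevity):
    """Average the four dimension scores, rounded to one decimal place."""
    return round((utility + value + moat + longevity) / 4, 1)

# The worked example from the text: 8 / 8 / 9 / 7 averages to 8.0.
print(overall_score(8, 8, 9, 7))  # → 8.0
```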
Score Benchmarks
| Score | Label | Interpretation |
|---|---|---|
| 9.0 to 10.0 | Best in class | Use this unless there is a specific reason not to |
| 8.0 to 8.9 | Excellent | Strong across all dimensions |
| 7.0 to 7.9 | Good | Meaningful strengths with some trade-offs |
| 6.0 to 6.9 | Acceptable | Usable but notable gaps or better alternatives exist |
| 5.0 to 5.9 | Below average | Use only if your specific situation warrants it |
| Below 5.0 | Not recommended | Not worth adopting for most users |
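The benchmark bands above amount to a simple threshold lookup. As a sketch (function name and structure are our own; thresholds come straight from the table):

```python
def score_label(score):
    """Map an Overall Score to its benchmark label, per the table above."""
    bands = [
        (9.0, "Best in class"),
        (8.0, "Excellent"),
        (7.0, "Good"),
        (6.0, "Acceptable"),
        (5.0, "Below average"),
    ]
    for threshold, label in bands:
        if score >= threshold:
            return label
    return "Not recommended"

print(score_label(8.4))  # → Excellent
print(score_label(4.9))  # → Not recommended
```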
What Our Scores Are Not
- Not user reviews. We do not aggregate star ratings. Scores reflect editorial assessment.
- Not sponsored. Affiliate relationships do not influence scores. A 20% commission or no program gets the same treatment.
- Not permanent. Scores update when pricing, features, or competitive position change.
- Not weighted by price. A $5/mo tool can outscore a $200/mo tool if it delivers more value relative to its price.
Verification Standard
Every score is based on:
- Primary-source research: the vendor's pricing page, product documentation, changelog, and API reference
- Verified pricing: checked against official pricing pages and noted with a verification date
- Competitive context: scores are calibrated against the current best alternatives in the category
- Named sources: factual claims include inline links to official pages or credible third-party sources
See the last_verified date on each tool page to understand when the information was last checked.
Read the full editorial policy or browse all categories.
Frequently Asked Questions
How does aipedia.wiki score AI tools?
Every tool is scored on four dimensions: Utility (does it solve a real problem well), Value (price-to-output ratio), Moat (defensibility), and Longevity (will it still exist in 2 years). Each dimension is rated on a 1 to 10 scale and the Overall Score is the average, rounded to one decimal.
What is a good aipedia.wiki score?
9.0 to 10.0 is best-in-class; 8.0 to 8.9 is excellent; 7.0 to 7.9 is good with some trade-offs; 6.0 to 6.9 is acceptable but notable gaps exist; 5.0 to 5.9 is below average; below 5.0 is not recommended for most users.
Do affiliate commissions affect scores?
No. Affiliate status does not influence scores, ranking order, or written assessments. A tool's commission rate is tracked in internal metadata only and is never mentioned inside a review. A tool with a 30% affiliate commission and a tool with no program receive the same honest editorial treatment.
How often are scores updated?
Scores are re-examined when pricing, features, or competitive position materially change. High-traffic tool pages are re-verified monthly, and every page carries a visible last_verified timestamp showing when information was last checked.
Why does a tool I love have a low score?
Scores are comparative and non-inflated. A low Moat score, for example, often reflects that the core capability is replicable with off-the-shelf foundation models, not that the tool is bad. A low Longevity score may reflect funding or competitive risk, not current quality. Read the dimension scores individually; the Overall is just the average.
Can I challenge a score?
Yes. Email editorial@aipedia.wiki with the tool, the dimension you're disputing, what you've used the tool for, what you'd score it, and what you'd compare it against. Every challenge is read and scores are updated when evidence supports the change.
How is the Overall Score calculated?
The Overall Score is the simple average of the four dimension scores, rounded to one decimal place. Example: a tool scoring 8 Utility, 8 Value, 9 Moat, 7 Longevity has an Overall Score of (8+8+9+7)/4 = 8.0.
Disagree with a score?
If you believe a score is wrong, outdated, or based on incorrect information, we want to know. Include what you've used the tool for, what you'd score it, and what you'd compare it against. The goal is accurate, useful information, not defensiveness.