An open-source LLM engineering platform that combines observability, evaluation, prompt management, and datasets in one tool. Ingests traces via OpenTelemetry, LangChain, OpenAI SDK, LiteLLM, and more. Acquired by ClickHouse in January 2026 at a $400M valuation (Series D). The MIT-licensed core stayed open.
System Verdict
Pick Langfuse if you want the strongest open-source observability + evals story. Free tier at 50k units/month is genuinely generous (roughly 10× typical competitor free tiers). Prompt management is a standout: versioned prompts, A/B tests, deploy flows that many teams bolt onto LangSmith or build themselves.
Skip it if you need AI Gateway features. Helicone has caching, load balancing, and failover; Langfuse doesn't. Many teams run Langfuse for evals and prompt management alongside Helicone for gateway features.
The ClickHouse acquisition is a good thing. ClickHouse, one of the fastest open-source analytics databases available, is now the storage backend for Langfuse. Expect better query performance at scale and little risk of rent-seeking: ClickHouse's own product is similarly priced and similarly open.
Key Facts
| Fact | Detail |
|---|---|
| License | MIT (core is fully open source) |
| Free tier | 50,000 units/month (10× typical competitor) |
| Core plan | $29/mo: 100k units, 90-day retention, 48-hour support SLO, unlimited users |
| Overage pricing | $8 per additional 100k units across all paid tiers |
| Acquired by | ClickHouse, January 2026, Series D $400M |
| Self-hostable | Yes |
| Integrations | OpenTelemetry, LangChain, LangGraph, OpenAI SDK, LiteLLM, Anthropic, Vercel AI SDK, more |
| Core features | Observability, prompt management, evals, playground, datasets |
When to pick Langfuse
- Prompt engineering workflows. Versioned prompts with deploy flows and A/B testing are first-class; version control for prompts shouldn't be an afterthought. A minimal sketch of the fetch-at-runtime pattern follows this list.
- Eval-driven LLM development. Run evals on traces, compare prompt versions against datasets, track quality over time.
- Framework-agnostic teams. Ingests from any LLM SDK via OpenTelemetry. No LangChain lock-in.
- Self-hosted preference. MIT license allows running on your own infrastructure with no usage restrictions.
- Generous free tier. Real use at 50k units/mo. Small teams never need to pay.
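A minimal sketch of the versioned-prompt workflow, assuming the official `langfuse` Python SDK and the standard `LANGFUSE_*` environment variables. The prompt name and template variables are hypothetical, and exact method names should be checked against the current SDK docs.

```python
# Sketch: fetch a versioned prompt at request time and fill in variables.
# Assumes the `langfuse` Python SDK with LANGFUSE_PUBLIC_KEY, LANGFUSE_SECRET_KEY,
# and LANGFUSE_HOST set in the environment. "support-reply" is a hypothetical prompt.
from langfuse import Langfuse

langfuse = Langfuse()

# Pull whichever version is currently labeled "production". Promoting or rolling
# back a version in the Langfuse UI changes what this returns, with no app deploy.
prompt = langfuse.get_prompt("support-reply", label="production")

# Substitute variables into the stored template.
compiled = prompt.compile(customer_name="Ada", issue="billing")

print(prompt.version)   # which prompt version served this request
print(compiled)         # the final text sent to the model
```

The label-based fetch is the point: the application asks for "production" rather than hard-coding a version, so prompt changes ship through Langfuse instead of through code.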
When to pick something else
- AI Gateway needs: Helicone for caching, load balancing, failover.
- LangChain-first shops: LangSmith integrates more tightly with the LangChain stack.
- Pure eval focus: Braintrust has a sharper evals-first product.
- App-wide observability: Datadog / New Relic for everything else; Langfuse is LLM-specific.
Pricing
| Plan | Price | Units/month | Retention |
|---|---|---|---|
| Hobby | $0 | 50,000 | 30 days |
| Core | $29/mo | 100,000 | 90 days |
| Pro | Higher tier (scales) | More units | Extended |
| Team | Higher tier | More units | Extended + compliance |
| Enterprise | Custom | Unlimited | Custom + SSO, audit |
Overage on all paid tiers: $8 per 100k units. Self-hosted: $0 (MIT). Verified 2026-04-18 via langfuse.com/pricing.
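To make the overage math concrete, here is an illustrative Core-plan bill under the published numbers. It assumes overage is charged in whole 100k-unit blocks; that rounding behavior is an assumption, not a verified billing rule.

```python
# Illustrative monthly-cost estimate for the Core plan using the published figures:
# $29 base, 100k included units, $8 per additional 100k units.
# Assumption: overage is billed per started 100k block (hence the ceil).
import math

def core_plan_cost(units_used: int) -> float:
    base, included, block, block_price = 29.0, 100_000, 100_000, 8.0
    extra_blocks = max(0, math.ceil((units_used - included) / block))
    return base + extra_blocks * block_price

print(core_plan_cost(80_000))   # 29.0 -> inside the included quota
print(core_plan_cost(350_000))  # 53.0 -> $29 + 3 overage blocks
```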
Best plan recommendation
Start on Hobby if the goal is instrumentation proof: traces, prompt versions, datasets, and a small eval loop. Core is the best first paid plan when a team needs longer retention, support expectations, or enough units to monitor a production feature. Move beyond Core when unit volume, SSO, audit, compliance, or team governance become the bottleneck rather than basic observability.
Self-hosting is attractive because the core is MIT licensed, but it is not free operationally. Teams still need to run the database stack, upgrades, backups, access control, and incident response. Choose self-hosting for data-control requirements or heavy usage economics, not because it avoids all cost.
Failure modes
- Unit accounting can surprise. One “unit” maps to one trace span, and complex agent workflows generate many spans per user request. Watch your quota; a rough budgeting sketch follows this list.
- Not a gateway. No caching, no failover, no load balancing. If you need those, pair with Helicone or LiteLLM.
- Self-hosted ops overhead. Docker + ClickHouse + Postgres stack. Acceptable for teams with ops capacity; tedious without.
- Prompt management requires discipline. Great features only work if the team actually versions prompts. Rolling your own “edit and hope” approach defeats the point.
- Some enterprise features gated. Audit logs, SSO, compliance controls available at higher tiers only.
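To see how span counts eat into the quota, a back-of-the-envelope sketch. The spans-per-request figures are hypothetical examples, not measured Langfuse data.

```python
# Rough unit budgeting, assuming one billed unit per span.
# The span counts per workflow below are hypothetical illustrations.
FREE_TIER_UNITS = 50_000

workflows = {
    "simple chat (1 LLM call)": 1,
    "RAG (retrieve + generate + 2 sub-spans)": 4,
    "agent (plan + 3 tool calls + synthesize)": 8,
}

for name, spans_per_request in workflows.items():
    requests_covered = FREE_TIER_UNITS // spans_per_request
    print(f"{name}: ~{requests_covered:,} requests/month on the free tier")
```

The takeaway: an agentic feature can burn through the free tier roughly 8× faster than a single-call chat feature at the same traffic.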
Against the alternatives
| | Langfuse | Helicone | LangSmith | Braintrust |
|---|---|---|---|---|
| License | MIT | Open source | Closed | Closed |
| Free tier | 50k units/mo | 10k requests/mo | Limited | Limited |
| Prompt management | Strong | Basic | Good | Basic |
| Evals | Strong | Adequate | Strong | Strongest |
| AI Gateway | No | Yes | No | No |
| Best for | Framework-agnostic + evals | Multi-provider + gateway | LangChain teams | Eval-heavy teams |
Methodology
Produced by the aipedia.wiki editorial pipeline. Last verified 2026-04-18 against langfuse.com/pricing and Langfuse GitHub.
FAQ
Is Langfuse really free? Hobby plan (50k units/month, 30-day retention) is free forever with no credit card. Self-hosted is $0 under MIT license.
Did the ClickHouse acquisition change anything for users? No major changes as of April 2026. The MIT license remained, pricing stayed the same, and no new paywalls were added. ClickHouse benefits from owning Langfuse’s user base; Langfuse benefits from ClickHouse’s scale.
Can I use Langfuse without LangChain? Yes. Langfuse ingests traces via OpenTelemetry, direct SDK instrumentation, or plain HTTP. LangChain integration is one of many.
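For the LangChain-free path, a minimal sketch using the Langfuse drop-in wrapper around the OpenAI client. The import path and wrapper behavior vary across SDK versions, so treat this as a pattern to confirm against current docs rather than an exact API.

```python
# Sketch: tracing direct OpenAI SDK calls with no LangChain involved.
# Assumes the Langfuse drop-in wrapper; exact import path may differ by SDK version.
from langfuse.openai import OpenAI  # drop-in replacement for openai.OpenAI

client = OpenAI()  # reads OPENAI_API_KEY; Langfuse keys come from LANGFUSE_* env vars

# Behaves like a normal OpenAI request, but the wrapper also records a Langfuse
# trace with model, latency, token usage, and cost.
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Summarize our refund policy."}],
)
print(response.choices[0].message.content)
```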
How does Langfuse compare to Helicone? Langfuse has stronger prompt management and evals. Helicone has the AI Gateway (caching, failover, load balancing). Many teams run both for different concerns.
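A minimal sketch of that split, using LiteLLM as the gateway (one of the pairings mentioned under failure modes) because it documents a Langfuse logging callback. The callback name and response shape should be confirmed against current LiteLLM docs.

```python
# Sketch: gateway concerns (routing, retries, fallbacks) stay with LiteLLM;
# Langfuse only observes. Assumes LiteLLM's documented "langfuse" callback and
# the usual LANGFUSE_* environment variables.
import litellm

litellm.success_callback = ["langfuse"]   # log successful calls to Langfuse
litellm.failure_callback = ["langfuse"]   # log failed calls too

response = litellm.completion(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "ping"}],
)
print(response.choices[0].message.content)
```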
Related
- Category: AI Automation · AI Coding
- Compare: Langfuse vs Helicone
- See also: LangGraph · Mastra
Spotted an error or want to share your experience with Langfuse?
Every tool page is re-verified on a recurring cycle, and corrections land faster when readers flag them directly. If you spot a stale fact, a missing capability, or have used Langfuse and want to share what worked or didn't, the editorial desk reviews every message sent through this form.
Email editorial@aipedia.wiki