Tool · Automation · Freemium · Active · Score 8–8.9
Verified Apr 2026 · #2 in Automation · Editorial only, no paid placements

Langfuse

Active

Open-source LLM engineering platform with observability, evals, and prompt management. YC W23, acquired by ClickHouse for $400M in Jan 2026. Generous free tier, Core at $29/mo.

Best plan: $0 free / $29/mo Core / scale via units (free + paid plans)
Best for: LLM app teams wanting observability + evals + prompt management in one tool
Watch out for: AI Gateway / proxy features (use Helicone); check fit before switching
Pricing: $0 free / $29/mo Core / scale via units
Launched: 2023

Decision badges Readiness signals
Active product · Free tier · Public repo listed · Verified this month · Monthly review cycle · Strong editorial score
Fact ledger Verified fields
Company: langfuse
Category: Automation
Pricing model: Free tier
Price range: $0 free / $29/mo Core / scale via units
Status: Active
Last verified: Apr 18, 2026
Pricing Anchor: Langfuse publishes cloud pricing with a free entry point and paid usage/team tiers; self-hosting and enterprise terms should be checked against current pricing. (langfuse.com/pricing)
Best For: Teams that need open-source LLM observability, prompt management, evaluations, and tracing across production AI applications. (Langfuse official site)
Watch Out For: Adoption depends on instrumenting applications and maintaining evaluation datasets; teams wanting no-code analytics without developer integration should choose a simpler tool. (Langfuse documentation)
Observability Surface: The product combines traces, scores, datasets, evaluations, prompts, and metrics, so it is closer to an LLM engineering control plane than a simple logging sink. (Langfuse documentation)
Open Source: The GitHub repository is the source of truth for self-hosting posture, licenses, integrations, and release activity. (Langfuse GitHub repository)
Change timeline What moved recently
  1. Verified
    Core pricing and product facts checked Apr 18, 2026 | Monthly cadence
  2. Updated
    Editorial page changed Apr 18, 2026
Knowledge graph Adjacent context
Company langfuse
Category Automation
Best for
  • LLM app teams wanting observability + evals + prompt management in one tool
  • Framework-agnostic production workflows
  • Teams that need MIT-licensed self-hostable option
  • Prompt engineering workflows with version control
Not ideal for
  • AI Gateway / proxy features (use Helicone)
  • Pure LangChain shops (LangSmith integrates deeper)
  • Minimal teams that just need basic logging (provider dashboards suffice)

An open-source LLM engineering platform that combines observability, evaluation, prompt management, and datasets in one tool. Ingests traces via OpenTelemetry, LangChain, OpenAI SDK, LiteLLM, and more. Acquired by ClickHouse in January 2026 at a $400M valuation via Series D. The MIT-licensed core stayed open.

System Verdict

Pick Langfuse if you want the strongest open-source observability + evals story. The free tier at 50k units/month is genuinely generous (roughly 10× typical competitor free tiers). Prompt management is a standout: versioned prompts, A/B tests, and deploy flows, capabilities many teams otherwise bolt onto LangSmith or build themselves.

Skip it if you need AI Gateway features. Helicone has caching, load balancing, and failover; Langfuse doesn’t. Many teams run Langfuse for evals + prompt mgmt and Helicone for gateway features side by side.

The ClickHouse acquisition looks like a good thing. ClickHouse is a high-performance open-source analytics database, and it is now the storage backend for Langfuse. Expect better query performance at scale, and the incentives point away from rent-seeking: ClickHouse's own product is open source with a comparable open-core pricing model.

Key Facts

License: MIT (core is fully open source)
Free tier: 50,000 units/month (roughly 10× typical competitor free tiers)
Core plan: $29/mo for 100k units, 90-day retention, 48-hour support SLO, unlimited users
Overage pricing: $8 per additional 100k units across all paid tiers
Acquired by: ClickHouse, January 2026 (Series D, $400M)
Self-hostable: Yes
Integrations: OpenTelemetry, LangChain, LangGraph, OpenAI SDK, LiteLLM, Anthropic, Vercel AI SDK, more
Core features: Observability, prompt management, evals, playground, datasets

When to pick Langfuse

  • Prompt engineering workflows. Versioned prompts with deploy flows and A/B testing are first-class. Version control for prompts shouldn’t be an afterthought.
  • Eval-driven LLM development. Run evals on traces, compare prompt versions against datasets, track quality over time.
  • Framework-agnostic teams. Ingests from any LLM SDK via OpenTelemetry. No LangChain lock-in.
  • Self-hosted preference. MIT license allows running on your own infrastructure with no usage restrictions.
  • Generous free tier. Real use at 50k units/mo. Small teams never need to pay.
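The prompt-versioning point above is worth making concrete. The toy registry below illustrates the concept only: versions are append-only and a label (e.g. "production") points at one version, so deploys and rollbacks are label moves rather than edits. It is not the Langfuse API; Langfuse layers labels, deploy flows, and A/B testing on the same idea.

```python
class PromptRegistry:
    """Toy versioned prompt store. Illustrative only, not the Langfuse API."""

    def __init__(self):
        self._versions: dict[str, list[str]] = {}   # name -> [v1 text, v2 text, ...]
        self._labels: dict[str, dict[str, int]] = {}  # name -> {label: version}

    def create(self, name: str, text: str) -> int:
        """Append a new immutable version; returns its 1-based version number."""
        versions = self._versions.setdefault(name, [])
        versions.append(text)
        return len(versions)

    def label(self, name: str, label: str, version: int) -> None:
        """Point a label (e.g. 'production') at a specific version."""
        self._labels.setdefault(name, {})[label] = version

    def get(self, name: str, label: str = "production") -> str:
        """Resolve a label to prompt text; deploy = move the label, not edit text."""
        version = self._labels[name][label]
        return self._versions[name][version - 1]

reg = PromptRegistry()
reg.create("summarize", "Summarize: {text}")
v2 = reg.create("summarize", "Summarize in three bullets: {text}")
reg.label("summarize", "production", v2)
```

Rolling back is then `reg.label("summarize", "production", 1)`, with version 2 still preserved for comparison.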

When to pick something else

  • AI Gateway needs: Helicone for caching, load balancing, failover.
  • LangChain-first shops: LangSmith is more tightly coupled.
  • Pure eval focus: Braintrust has a sharper evals-first product.
  • App-wide observability: Datadog / New Relic for everything else; Langfuse is LLM-specific.

Pricing

Plan       | Price                | Units/month | Retention
Hobby      | $0                   | 50,000      | 30 days
Core       | $29/mo               | 100,000     | 90 days
Pro        | Higher tier (scales) | More units  | Extended
Team       | Higher tier          | More units  | Extended + compliance
Enterprise | Custom               | Unlimited   | Custom + SSO, audit

Overage on all paid tiers: $8 per 100k units. Self-hosted: $0 (MIT). Verified 2026-04-18 via langfuse.com/pricing.
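As a sanity check on the numbers above, here is a minimal cost sketch. The base prices and included units come from the table on this page; the assumption that overage is billed per started 100k-unit block is mine, so verify against langfuse.com/pricing before budgeting.

```python
import math

# Plan -> (base price USD/mo, included units); figures from the table above.
PLANS = {"hobby": (0.0, 50_000), "core": (29.0, 100_000)}

OVERAGE_PER_100K = 8.0  # $8 per additional 100k units (paid tiers)

def monthly_cost(units: int, plan: str = "core") -> float:
    """Estimate monthly Langfuse Cloud cost for a unit volume.

    Assumes overage is charged per *started* 100k block (unverified).
    """
    base, included = PLANS[plan]
    if plan == "hobby":
        if units > included:
            raise ValueError("Hobby is capped at 50k units/month")
        return base
    overage_blocks = math.ceil(max(0, units - included) / 100_000)
    return base + OVERAGE_PER_100K * overage_blocks

# Example: 350k units on Core -> $29 + 3 overage blocks x $8 = $53
print(monthly_cost(350_000, "core"))
```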

Best plan recommendation

Start on Hobby if the goal is instrumentation proof: traces, prompt versions, datasets, and a small eval loop. Core is the best first paid plan when a team needs longer retention, support expectations, or enough units to monitor a production feature. Move beyond Core when unit volume, SSO, audit, compliance, or team governance become the bottleneck rather than basic observability.
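The guidance above condenses into a rough decision rule. This is an illustrative sketch, not official guidance: the thresholds mirror the plan table on this page, and since Core supports overage, the 100k boundary is soft in practice.

```python
def recommend_plan(monthly_units: int,
                   needs_long_retention: bool = False,
                   needs_sso_or_audit: bool = False) -> str:
    """Map rough needs to a starting plan, per the guidance on this page."""
    if needs_sso_or_audit:
        return "Team or Enterprise"   # governance/compliance features are gated
    if monthly_units > 100_000:
        return "Pro or above"         # past Core's included units (soft: overage exists)
    if monthly_units > 50_000 or needs_long_retention:
        return "Core"                 # beyond Hobby's cap or its 30-day retention
    return "Hobby"

print(recommend_plan(60_000))  # past the 50k Hobby cap -> Core
```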

Self-hosting is attractive because the core is MIT licensed, but it is not free operationally. Teams still need to run the database stack, upgrades, backups, access control, and incident response. Choose self-hosting for data-control requirements or heavy usage economics, not because it avoids all cost.

Failure modes

  • Unit accounting can surprise. One “unit” maps to one trace span. Complex agent workflows generate many spans per user request. Watch your quota.
  • Not a gateway. No caching, no failover, no load balancing. If you need those, pair with Helicone or LiteLLM.
  • Self-hosted ops overhead. Docker + ClickHouse + Postgres stack. Acceptable for teams with ops capacity; tedious without.
  • Prompt management requires discipline. Great features only work if the team actually versions prompts. Rolling your own “edit and hope” approach defeats the point.
  • Some enterprise features gated. Audit logs, SSO, compliance controls available at higher tiers only.
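The unit-accounting trap in the first bullet is easy to quantify: if one unit maps to one span (as stated above), units grow with span fan-out, not request count. A back-of-envelope estimator:

```python
def estimated_units(requests_per_day: int, spans_per_request: int,
                    days: int = 30) -> int:
    """Rough monthly unit estimate, assuming one unit per trace span."""
    return requests_per_day * spans_per_request * days

# A modest agent workflow: 500 requests/day, 12 spans per request
# (tool calls, retries, sub-chains) -> 180,000 units/month,
# already well past Core's 100k included units.
print(estimated_units(500, 12))
```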

Against the alternatives

                  | Langfuse                  | Helicone                 | LangSmith       | Braintrust
License           | MIT                       | Open source              | Closed          | Closed
Free tier         | 50k units/mo              | 10k requests/mo          | Limited         | Limited
Prompt management | Strong                    | Basic                    | Good            | Basic
Evals             | Strong                    | Adequate                 | Strong          | Strongest
AI Gateway        | No                        | Yes                      | No              | No
Best for          | Framework-agnostic + evals| Multi-provider + gateway | LangChain teams | Eval-heavy teams

Methodology

Produced by the aipedia.wiki editorial pipeline. Last verified 2026-04-18 against langfuse.com/pricing and Langfuse GitHub.

FAQ

Is Langfuse really free? Hobby plan (50k units/month, 30-day retention) is free forever with no credit card. Self-hosted is $0 under MIT license.

Did the ClickHouse acquisition change anything for users? No major changes as of April 2026. The MIT license remained, pricing stayed the same, and no new paywalls were added. ClickHouse benefits from owning Langfuse’s user base; Langfuse benefits from ClickHouse’s scale.

Can I use Langfuse without LangChain? Yes. Langfuse ingests traces via OpenTelemetry, direct SDK instrumentation, or plain HTTP. LangChain integration is one of many.

How does Langfuse compare to Helicone? Langfuse has stronger prompt management and evals. Helicone has the AI Gateway (caching, failover, load balancing). Many teams run both for different concerns.

Embed this score on your site Free. Links back.
Langfuse editorial score badge
<a href="https://aipedia.wiki/tools/langfuse/" target="_blank" rel="noopener"><img src="https://aipedia.wiki/badges/langfuse.svg" alt="Langfuse on aipedia.wiki" width="260" height="72" /></a>
[![Langfuse on aipedia.wiki](https://aipedia.wiki/badges/langfuse.svg)](https://aipedia.wiki/tools/langfuse/)

Badge value auto-updates if the editorial score changes. Attribution via the link is required.

Cite this page For journalists, researchers, and bloggers
According to aipedia.wiki Editorial at aipedia.wiki (https://aipedia.wiki/tools/langfuse/)
aipedia.wiki Editorial. (2026). Langfuse — Editorial Review. aipedia.wiki. Retrieved May 8, 2026, from https://aipedia.wiki/tools/langfuse/
aipedia.wiki Editorial. "Langfuse — Editorial Review." aipedia.wiki, 2026, https://aipedia.wiki/tools/langfuse/. Accessed May 8, 2026.
aipedia.wiki Editorial. 2026. "Langfuse — Editorial Review." aipedia.wiki. https://aipedia.wiki/tools/langfuse/.
@misc{langfuse-editorial-review-2026, author = {{aipedia.wiki Editorial}}, title = {Langfuse — Editorial Review}, year = {2026}, publisher = {aipedia.wiki}, url = {https://aipedia.wiki/tools/langfuse/}, note = {Accessed: 2026-05-08} }
Spotted an error or want to share your experience with Langfuse?

Every tool page is re-verified on a recurring cycle, and corrections land faster when readers flag them directly. If you spot a stale fact, a missing capability, or have used Langfuse and want to share what worked or didn't, the editorial desk reviews every message sent through this form.

Email editorial@aipedia.wiki
Report outdated info Help us keep this page accurate