Manus is a general-purpose autonomous agent from Butterfly Effect Inc., the Singapore-incorporated team that also built Monica.im. The product launched invite-only in March 2025 and opened to broader access through late 2025. Meta announced an acquisition in December 2025, but China blocked the deal on April 27, 2026, leaving the ownership path unsettled.
Manus runs tasks inside its own sandboxed cloud VM. It browses, writes code, builds slides, operates spreadsheets, and returns a finished artifact. Tasks keep running even after you close the browser tab.
System Verdict
Pick Manus if you want a general agent that chews through multi-step tasks in the background while you do other work. The sandboxed VM handles research, data scraping, slide and spreadsheet assembly, and basic web operations without a local runner. The free tier’s 1,000 starter credits plus 300 daily refresh is generous enough to test serious workflows.
Skip it if you care about data residency, production-grade coding, or predictable unit costs. The original stack came out of China, Meta’s announced acquisition is now blocked by Chinese regulators, and credit burn per task varies from ~50 for simple queries to 900+ for deep research runs. Production coding workflows get more mileage from Claude Code or Devin.
Who pays which tier: Free for casual testing. Pro Standard $20/mo covers most solo operators at 4,000 monthly credits. Pro Customizable $40/mo for steady weekly use. Pro Extended $200/mo for power users running Manus as a daily driver. Team $40/seat (2-seat minimum) for small groups with shared credit pools.
Key Facts
| Fact | Detail |
|---|---|
| Product type | Autonomous general-purpose agent with sandboxed cloud VM |
| Company | Butterfly Effect Inc. (Meta acquisition blocked by China) |
| Launched | March 2025 invite-only, broader access late 2025 |
| Acquisition | Meta announced a December 2025 deal reported above $2B; China blocked the acquisition on April 27, 2026 |
| Pricing model | Credit-based, refreshes daily plus monthly pool |
| Free tier | 1,000 starter credits, 300 daily refresh, 1 concurrent task, 1 scheduled task |
| Pro Standard | $20/mo, 4,000 monthly credits, up to 20 concurrent tasks |
| Pro Customizable | $40/mo, 8,000 monthly credits |
| Pro Extended | $200/mo, 40,000 monthly credits |
| Team | $40/seat/mo, 2-seat minimum, shared pool |
| Credit burn | Simple query ~50, deep research task 500-900+ |
| API | Public developer API at open.manus.ai |
| Core use cases | Research, slides, spreadsheets, website build, desktop apps, email |
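The table lists a public developer API at open.manus.ai. As a minimal sketch of what submitting a task over plain HTTP might look like: the `/v1/tasks` path, the `prompt` payload field, and the bearer-token auth scheme are illustrative assumptions, not the documented Manus API, so check the actual API reference before wiring anything up.

```python
import json
from urllib import request

API_BASE = "https://open.manus.ai/v1"  # base URL from the docs; everything below it is a guess


def build_task_request(prompt: str, api_key: str) -> request.Request:
    """Assemble a task-creation request.

    The endpoint path, payload schema, and auth header here are
    ASSUMPTIONS for illustration, not the documented Manus API.
    """
    body = json.dumps({"prompt": prompt}).encode()
    return request.Request(
        f"{API_BASE}/tasks",  # hypothetical endpoint
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",  # assumed auth scheme
            "Content-Type": "application/json",
        },
        method="POST",
    )


req = build_task_request("Summarize the attached PDFs into a slide deck", "sk-demo")
# request.urlopen(req) would actually submit the task; left un-sent in this sketch
```

The request is built but deliberately not sent, since the real endpoint shape is unverified.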
What it actually is
A hosted agent that takes a natural-language goal, spins up a sandboxed VM, and works the problem end-to-end. Manus browses the web, writes and runs code, parses PDFs, fills spreadsheets, and hands back a deliverable rather than a chat reply.
The product went viral in 2025 for genuinely autonomous multi-step runs that most agents of the era could not complete. Meta’s late-2025 acquisition plan is now blocked by Chinese regulators, so Manus continues operating while the transaction path is unresolved.
When to pick Manus
- Deep research runs that span 50+ pages. Manus keeps scrolling, clicking, and writing to a local file long after a chat-based agent would have stopped. Good fit for competitive analysis, market scans, and investment memos.
- Spreadsheet or slide assembly from messy inputs. Upload a folder of PDFs and transcripts, ask for a summary deck or tracking sheet, walk away.
- Travel and event planning. Compares fares, pulls itineraries, and drafts day-by-day schedules with live links.
- One-off workflow prototyping. Test whether a task is automatable before committing engineering time to a durable pipeline in n8n, Make, or Relevance AI.
- Users who want the agent to run while they sleep. Scheduled tasks trigger in the background and land results in the morning.
When to pick something else
- Deep coding pipelines: Claude Code or Devin. Both go deeper into repos and produce more reliable diffs.
- Opinionated workflow automation: n8n, Make, or Activepieces. Durable pipelines beat one-shot agent runs for anything you run daily.
- No-code business agents with CRM and Slack hooks: Relevance AI or Voiceflow.
- Team agent frameworks with code control: CrewAI or Langflow.
Pricing
Current plans via manus.im/pricing:
| Plan | Price | Monthly credits | Daily refresh | Concurrent tasks |
|---|---|---|---|---|
| Free | $0 | 1,000 starter, one-time | 300 | 1 |
| Pro Standard | $20/mo | 4,000 | 300 | Up to 20 |
| Pro Customizable | $40/mo | 8,000 | 300 | Up to 20 |
| Pro Extended | $200/mo | 40,000 | 300 | Up to 20 |
| Team | $40/seat/mo (2-seat min) | 4,000 per seat, shared pool | 300 | Up to 20 |
Credits scale with task complexity. A chat-style question can cost under 50 credits. A deep research run covering dozens of sources typically burns 500 to 900 credits.
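As a rough sanity check on those burn rates, the monthly-capacity arithmetic can be sketched in a few lines. This assumes every unused daily-refresh credit can be applied within the month, which likely overstates real capacity if daily credits expire at day's end.

```python
# Rough monthly task capacity under the credit model described above.
# Burn rates (~50 simple, 500-900+ deep research) are this article's
# estimates, not vendor guarantees.

DAILY_REFRESH = 300
DAYS_PER_MONTH = 30


def monthly_tasks(monthly_pool: int, cost_per_task: int) -> int:
    """Tasks affordable per month: plan's monthly pool plus all daily refreshes.

    Assumes daily credits fully roll into the month's budget, which may
    be optimistic if they expire unused.
    """
    total_credits = monthly_pool + DAILY_REFRESH * DAYS_PER_MONTH
    return total_credits // cost_per_task


# Pro Standard ($20/mo, 4,000 monthly credits):
print(monthly_tasks(4_000, 50))   # simple queries -> 260
print(monthly_tasks(4_000, 900))  # worst-case deep research runs -> 14
```

The two-order-of-magnitude spread between those figures is why per-task cost is hard to predict in advance.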
Prices verified 2026-04-18 via Manus Plans documentation and the Lindy breakdown of Manus pricing.
Against the alternatives
| | Manus | Devin | Relevance AI | n8n |
|---|---|---|---|---|
| Primary strength | General autonomous agent | Coding agent | No-code business agents | Workflow automation |
| Runs in | Hosted sandboxed VM | Hosted dev sandbox | Hosted visual builder | Self-host or hosted |
| Entry price | Free tier, Pro $20/mo | $500/mo | Free tier, Pro $19/mo | Free self-host, cloud $24/mo |
| Code-heavy work | Adequate | Strongest | Light | Needs external agent |
| Integrations | Browser + files + API | GitHub + IDEs | 100+ SaaS | 400+ SaaS |
| Best viewed as | General-purpose delegate | Specialist coding teammate | Business workforce builder | Durable plumbing |
Failure modes
- Credit burn opacity. The same prompt can cost 50 or 900 credits depending on depth. Budget monitoring requires watching the run in real time.
- Data residency questions. Stack origin is China, Meta’s announced acquisition is now blocked by Chinese regulators, and compute still runs in remote sandboxes. Regulated workloads should verify before using.
- Brittle on niche sites. Anti-bot protection, SSO portals, and heavily dynamic apps break the browser agent. Manus reports failure and moves on, which still burns credits.
- Code output is prototype-grade. Functional for small scripts, unreliable for production changes to a real repo. Pair with a coding specialist for anything shipped.
- Support quality varies. With the acquisition unresolved, escalation paths for paid tiers are not as mature as Anthropic's or OpenAI's.
Methodology
This page was produced by the aipedia.wiki editorial pipeline, an automated system that ingests vendor documentation, verifies pricing and model details against primary sources, and generates the editorial analysis you are reading. No individual human wrote this review. Scoring follows the four-dimension rubric at /about/scoring/ (Utility, Value, Moat, Longevity, unweighted average). Last verified 2026-04-18 against Manus Plans docs, Lindy’s Manus pricing breakdown, the Cybernews 2026 review, and CNBC coverage of the Meta acquisition.
FAQ
Is Manus free to use? Yes. The free tier ships 1,000 one-time starter credits plus 300 refreshing daily credits. That covers several simple tasks or one mid-size research run per day.
Who owns Manus now? Butterfly Effect Inc. still operates Manus while Meta’s proposed acquisition is blocked by Chinese regulators. Treat ownership, roadmap, data-residency commitments, and enterprise procurement assumptions as unsettled until the parties announce a final resolution.
Can Manus code? Yes, but it is a general agent, not a coding specialist. Simple scripts and prototypes land fine. For repo-scale work, pair Manus with Claude Code or Devin.
Related
- Category: AI Automation