
The Hidden Cost of n8n and Make: When Code Becomes the Cheaper Option

The hidden costs in n8n and Make that don't show up on the pricing page, and when writing custom code is actually the cheaper, faster option.

Domain: Automation
Format: note
Published: 20 Jan 2026
Tags: n8n · make · automation-tools

n8n costs $20 per month on the cloud plan. Make costs $9 per month on the basic tier. These numbers lead teams to choose them over writing code. They are also the least important numbers in the decision.

After running both platforms at production scale for clients, and migrating three of those clients to custom code, here is the actual cost structure, not the pricing-page version.

The Cost Comparison

| Cost Category | n8n (cloud) | Make | Custom Code |
|---|---|---|---|
| Platform subscription | $20/month | $9/month | $0 |
| Execution limits | 10k executions/month | 10k operations/month | Unlimited |
| Developer setup time | 4-8 hours | 4-8 hours | 8-24 hours |
| Developer maintenance (complex flow) | High | High | Low if structured well |
| Debugging tooling | Limited | Limited | Full IDE + logging |
| Version control | Manual export | Manual export | Git native |
| Custom logic complexity | High friction | High friction | Low friction |
| Vendor lock-in | High | High | None |

The subscription cost is irrelevant at any meaningful business scale. The real costs are developer time and maintainability over twelve months.

Where n8n and Make Genuinely Win

I want to be fair before explaining where they fail. Both platforms are excellent for specific use cases.

Connecting two or three SaaS tools with standard integrations. If you need Typeform submissions to create Notion pages and send a Slack message, n8n does this in fifteen minutes. Writing this in code takes two hours including tests. The platform wins clearly.

Rapid prototyping. I use n8n to validate automation logic before writing production code. If I can prove the flow works in n8n in a day, I know exactly what to build in code. The prototype cost is worth the clarity.

Teams without a dedicated developer. If the person running the automation is not a programmer, n8n is the right answer regardless of the technical trade-offs. The alternative is not better code; it is no automation at all.

Where They Break Down

Complex conditional logic. Both platforms handle simple if/else branching reasonably well. Nested conditionals with stateful decisions (where the branch depends on what happened in a previous execution) become visual nightmares. I worked on a Make scenario for a Dubai property management client that had seventeen branches and three loops. Debugging it required forty-five minutes of careful node-by-node inspection; the equivalent code review would have taken two minutes.
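To show why the same logic stays readable in code, here is a sketch of that kind of stateful branch in Python. The `Listing` fields and action names are invented for illustration; they are not the client's real schema:

```python
from dataclasses import dataclass

@dataclass
class Listing:
    status: str             # e.g. "vacant", "occupied", "maintenance"
    previously_synced: bool  # state carried over from a prior execution
    days_on_market: int

def next_action(listing: Listing) -> str:
    """Decide the follow-up action for a property listing.

    Nested, stateful branching like this is a few readable lines in
    code, but a sprawl of router nodes in a visual builder.
    """
    if listing.status == "maintenance":
        return "hold"
    if listing.status == "vacant":
        if listing.previously_synced and listing.days_on_market > 30:
            return "escalate_to_agent"
        return "relist"
    return "no_op"
```

A reviewer can read every branch top to bottom in one pass, which is exactly what the node-by-node inspection could not offer.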

Production-quality error handling. n8n's error handling is node-level. If you want retry logic that varies by error type (retry a rate-limited request with exponential backoff, skip a malformed-data error and log it, halt and alert on an authentication failure), you need to build that with a set of Error Trigger nodes and manual routing. In Python or Node.js, this is ten lines of code. In n8n, it is a parallel error-handling sub-workflow that is as complex as the main workflow.
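Those ten lines look roughly like this in Python. The exception classes here are placeholders for whatever errors a real API client raises:

```python
import logging
import time

log = logging.getLogger("sync")

# Placeholder error types; a real client library defines its own.
class RateLimited(Exception): ...
class MalformedData(Exception): ...
class AuthFailure(Exception): ...

def call_with_policy(fn, *, max_retries=4, base_delay=1.0):
    """Retry rate limits with exponential backoff, skip and log
    malformed data, halt immediately on authentication failures."""
    for attempt in range(max_retries):
        try:
            return fn()
        except RateLimited:
            time.sleep(base_delay * 2 ** attempt)  # exponential backoff
        except MalformedData as exc:
            log.warning("skipping malformed record: %s", exc)
            return None
        except AuthFailure:
            raise  # halt and let the caller alert
    raise RuntimeError("retries exhausted")
```

Each error class gets exactly one line of policy; adding a fourth behavior is another `except` clause, not another sub-workflow.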

Large data volumes. Both platforms are optimized for event-driven, low-volume workflows. Processing 50,000 records in a single execution is possible but fragile. I have watched Make timeouts destroy partially completed batches with no clean rollback mechanism. Custom code with proper transaction handling and checkpointing is far more reliable.
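A minimal sketch of that checkpointing idea, assuming progress can be persisted to a small state file (a production version would checkpoint into the database alongside the data):

```python
from pathlib import Path

def process_batch(records, checkpoint_path, handle):
    """Process records resumably: a crash or timeout never destroys
    completed work. `handle` is a stand-in for the real per-record
    sync call."""
    ckpt = Path(checkpoint_path)
    done = int(ckpt.read_text()) if ckpt.exists() else 0
    for i, record in enumerate(records):
        if i < done:
            continue                 # already processed in a prior run
        handle(record)
        ckpt.write_text(str(i + 1))  # persist progress after each record
    ckpt.unlink(missing_ok=True)     # clean up on full success
    return len(records) - done
```

If the job dies at record 30,000, the next run skips straight to 30,000 instead of replaying (or losing) the first half of the batch.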

Debugging at depth. This is the hidden cost that compounds every other limitation. When a complex n8n workflow fails, your debugging tools are: the execution log (showing node inputs and outputs but not intermediate variable state), console.log statements you add manually to each node, and re-running the workflow with test data. When equivalent custom code fails, you have a proper IDE debugger, structured logging with log levels, distributed tracing, and test suites that isolate the failure to a specific function. The time delta between "something went wrong" and "here is the line that broke" is enormous.

The execution limit trap. Both platforms tier by executions or operations. A straightforward automation that updates 1,000 CRM records with three operations per record consumes 3,000 operations. If that runs daily, you hit 90,000 operations per month on what sounds like a simple workflow. The "basic" plan that cost $9/month becomes the $50/month "core" plan, then the $100/month "pro" plan as your automations grow. Custom code scales at the cost of compute, not at the cost of someone's pricing spreadsheet.
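That operation-count arithmetic is worth making explicit, since it is where the tier creep hides:

```python
def monthly_operations(records_per_run, ops_per_record, runs_per_month=30):
    """Operations a recurring workflow consumes per month."""
    return records_per_run * ops_per_record * runs_per_month

# 1,000 CRM records x 3 operations, running daily:
monthly_operations(1000, 3)  # → 90000
```

Ninety thousand operations from one daily workflow, before the second automation is even built.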

A Real Cost Comparison: The Faisalabad Textile Project

This client had a daily inventory sync between their ERP, their supplier portal, and a custom reporting dashboard. The initial n8n implementation took six hours to build and worked for three months.

Then the supplier portal updated their API. The n8n workflow had to be rebuilt from scratch because the authentication method changed from API key to OAuth 2.0, and n8n's OAuth node required manual re-authorization in the UI for each connection. There was no way to script this, no way to batch it, and no way to test it without a human clicking through the UI.

Rebuilding the n8n workflow: 8 hours.

Migrating to Python (FastAPI scheduled job, SQLAlchemy for the ERP sync, httpx for the supplier portal): 12 hours.

Four months later, the supplier portal changed their API again. Updating the Python code took 45 minutes. Updating the equivalent n8n flow at another client on a similar workflow (I watched a colleague do it) took most of a day.
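The 45-minute fix was cheap because the Python version isolated the portal's response shape behind a single adapter. A simplified sketch, with hypothetical field names standing in for the real portal API:

```python
def normalize_portal_item(raw: dict) -> dict:
    """Adapter for the supplier portal's response shape. When the
    portal changes its API, only this function changes. Field names
    are hypothetical, not the client's real schema."""
    return {
        "sku": raw["itemCode"],
        "quantity": int(raw["qtyAvailable"]),
        "updated": raw["lastModified"],
    }

def sync_inventory(fetch_portal, write_erp):
    """Scheduled job body: fetch, normalize, write. The fetch and
    write callables are injected so the job is testable offline."""
    written = 0
    for raw in fetch_portal():
        write_erp(normalize_portal_item(raw))
        written += 1
    return written
```

When the portal renames a field, the change lands in one function with a test around it; in a visual workflow the same change is smeared across every node that touches the payload.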

Cumulative developer time over twelve months:

| Period | n8n approach | Custom code |
|---|---|---|
| Initial build | 6h | 12h |
| First API change | 8h | 0.75h |
| Ongoing maintenance (4h/quarter) | 16h | 3h |
| Total | 30h | 15.75h |

The custom code was twice the upfront investment and half the twelve-month total.

The Migration Path

When I migrate a client from n8n or Make to custom code, the process is:

migration_steps:
  week_1:
    - export_all_workflow_definitions
    - document_every_trigger_and_action
    - map_external_api_dependencies
    - identify_data_transformations
  week_2:
    - build_python_equivalents_for_each_step
    - add_structured_logging
    - write_tests_for_each_transformation
  week_3:
    - run_old_and_new_in_parallel
    - compare_outputs
    - fix_discrepancies
  week_4:
    - cut_over_one_workflow_at_a_time
    - monitor_for_72_hours_each
    - decommission_n8n_flows_as_each_is_confirmed

Running old and new in parallel for a week before cutting over is non-negotiable. Every client who has skipped this step has found at least one edge case the new code handled differently from the old workflow.
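The week-3 output comparison can be as simple as a keyed diff. A sketch, assuming both pipelines emit records sharing a key field:

```python
def diff_outputs(old_rows, new_rows, key="id"):
    """Compare the old (n8n) and new (code) outputs from a parallel
    run: records missing from either side, plus keys whose records
    disagree."""
    old = {r[key]: r for r in old_rows}
    new = {r[key]: r for r in new_rows}
    return {
        "missing_in_new": sorted(old.keys() - new.keys()),
        "missing_in_old": sorted(new.keys() - old.keys()),
        "mismatched": sorted(k for k in old.keys() & new.keys()
                             if old[k] != new[k]),
    }
```

Run this daily during the parallel week; the edge cases that bite after cutover show up here first as mismatched keys.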

The Tipping Point

My rule of thumb: if you can build it in n8n in under two hours, it runs fewer than 10,000 times per month, and it connects only well-supported SaaS tools with standard integrations, use n8n. If any of those conditions fails, evaluate custom code.

Specific signals that you have hit the tipping point:

  • More than five conditional branches in a single workflow
  • Stateful processing across multiple executions
  • Execution limits forcing tier upgrades
  • Debugging a failure takes more than thirty minutes
  • Onboarding a new developer to maintain the workflow requires more than a day
  • The cost of the platform tier is approaching the cost of a few hours of developer time per month

What I Got Wrong

I migrated a client to custom code when I should not have. Their Slack-to-Notion-to-email workflow was simple, well-supported, and managed by a non-technical operations manager. She could edit the n8n workflow herself when the email template changed. After the migration to custom code, every template change required a developer. The client was not saving money; they were paying me for tasks they used to handle themselves.

The right tool is not always the most powerful tool. Sometimes it is the tool the operator can maintain independently.