Strategy · 22 min read · 27.04.2026 · Max Fey

What happens when your automation vendor pulls the plug

Platforms get acquired, change pricing, deprecate features. If you have not planned for it, your business processes are sitting on a foundation you do not own. Here is what to do about it.

In early 2024, IFTTT capped free accounts at three active applets. Long-time users who had built smart-home routines, notification flows, and personal data bridges woke up to find half their automations broken. The platform did not vanish. It changed its terms. The result was the same: workflows that had run for years suddenly stopped working.

A few months later, Integromat (later renamed to Make) shut down its legacy platform completely. Migration had been announced for over a year. Anyone who had not moved their scenarios saw them disappear.

These are mild cases. The harder version: a vendor gets acquired and the new owner shifts the product strategy. Pricing doubles. A connector you depend on gets deprecated because too few customers used it. The tool you built your customer onboarding around becomes incompatible with how your business runs.

This rarely shows up in the planning phase. It shows up later, when the business depends on something you do not control.

The market is still sorting itself

Automation as a product category is young. Zapier launched in 2011, Make (then Integromat) in 2012, n8n in 2019, Activepieces in 2022. In the same window, dozens of smaller tools launched: Workato, Tray.io, Bardeen, Pabbly, Workiom, half a dozen industry-specific platforms. Some still exist. Some were folded into larger products. Some are zombie companies with skeleton support staff.

That is not unusual for a young market. It does mean: do not assume any specific vendor will be around in ten years in its current form. Even the large ones are not safe from change. Zapier has openly debated which features should move into Enterprise tiers. Make has restructured its pricing multiple times since the Celonis acquisition in 2020.

The same risk exists with cloud CRM or ERP, but companies treat it as a standard procurement topic there. With automation tools, it gets overlooked because the tools feel friendly and disposable. They are friendly to build with. They are not disposable to live with.

Three ways vendors can surprise you

Acquisition and strategic shift

Acquisitions sound positive on paper. More resources, more stability. In practice, they often introduce changes existing customers do not want. Features get cut because they do not fit the acquirer's strategy. Pricing changes because the parent company runs different margins. Connectors stop being maintained.

A common pattern: Tool A gets acquired by a larger company that already has a competing product. Within two years, Tool A has been folded into the portfolio. Higher prices, slower innovation, eventual end-of-life for the standalone product.

This has happened to dozens of mid-tier SaaS tools over the last decade. There is no reason to assume automation platforms are different.

Pricing changes

The most frequent and least dramatic. A platform raises prices by twenty or thirty percent. You can stay or leave. If leaving requires significant work, you stay. The vendor knows this.

A real example. A company with around 200 staff had roughly eighty production workflows in Zapier. When Zapier raised pricing meaningfully, the cost of migrating to another platform exceeded the price increase over any reasonable horizon. They paid. They are still paying. The functionality has not grown proportionally.

This is not a complaint about Zapier. They are running a business. It is a lesson for buyers: any automation that is not easily portable is a bet that the vendor's pricing will not change.

Feature deprecation

Sometimes the platform stays. Specific features go away. A connector to a third-party service is dropped because too few users needed it. A particular trigger type is replaced by a different one. An authentication method gets retired.

These changes break workflows. The vendor announces them in advance, in an email that lands among thirty other vendor emails. Active users notice. Anyone who built a workflow two years ago and has not touched it since only notices when something stops working overnight.

What lock-in actually means in automation

The assumption: workflows are logic, and logic is portable. Wrong.

A workflow in any major platform is built from several layers:

The logic itself. Conditional branching, transformations, routing. This layer is portable in concept. In practice, you have to rebuild it because each platform has its own syntax.

Connector specifics. The HubSpot connector in Zapier has its own triggers and actions with specific field structures. The HubSpot connector in Make has similar but not identical capabilities. Mapping between them is manual work.

Authentication. OAuth tokens, API keys, refresh logic. All platform-specific. All have to be reauthorized when you switch. With twenty workflows touching three external services each, that is sixty connections to rebuild.

Workflow state. Execution history, error logs, sometimes data queues that are stuck mid-flow. A switch usually means losing all of this.

Operational knowledge. Which edge cases have you fixed in the last six months? Which workarounds did you build because a particular feature was unreliable? This knowledge is not in the workflow. It is in the head of whoever built it.

When you switch platforms, you have to rebuild all five layers. And until the new system has run in production for a while, you also lose something less tangible: confidence that it is reliable.

Self-hosting: what the promises actually mean

n8n and Activepieces both offer self-hosting. This is often presented as the answer to vendor lock-in. The truth is mixed.

What self-hosting solves. You control the infrastructure. If the vendor goes bankrupt tomorrow, your instance keeps running as long as you operate it. You have access to all data flowing through your workflows. You cannot be surprised by price increases because the license is transparent.

What self-hosting does not solve. Workflows still depend on connectors maintained by the vendor or community. If the n8n project stops updating the Salesforce connector, self-hosting does not help. You are still depending on an external project.

What self-hosting costs. Servers, backups, monitoring, updates, security patches. Even a small installation requires meaningful maintenance effort each month from someone qualified. Self-hosting is not free. It shifts costs from licensing to staff.

In practice, self-hosting makes sense for organizations that already have IT capacity or where data privacy is critical enough that cloud processing is not viable. For a three-person consulting practice, self-hosting is rarely the right answer, even when technically possible.

Building workflows for portability from day one

You cannot eliminate every dependency. You can reduce risk by making certain choices when you build.

Document logic, do not just click

Visual builders feel self-documenting. They are not. The diagram shows you connections, but not why a particular branch exists or what edge case it handles.

For each production workflow, write a short description in plain text or markdown. Three sections are enough: what triggers it, what steps run in what order, what edge cases are handled. This documentation makes migration far easier and helps anyone who has to maintain the workflow later.
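A minimal sketch of such a document, in markdown — the workflow, services, and edge cases below are invented for illustration:

```markdown
# Workflow: New lead → CRM + Slack notification

## Trigger
Webhook from the website contact form. Fires on every submission,
including spam and duplicates.

## Steps
1. Validate the email format; stop if invalid.
2. Create or update the contact in the CRM.
3. Post a one-line summary to the #sales Slack channel.

## Edge cases
- Duplicate submissions within 5 minutes are dropped in step 1.
- CRM rate limits: step 2 retries up to 3 times with backoff.
```

Three headings, ten lines. That is enough to rebuild the workflow on any platform, and enough for a colleague to maintain it.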

Separate transformations from the workflow where possible

Some platforms support code blocks: a small JavaScript or Python snippet that does a specific transformation. If you keep these snippets in a separate repository or document, they remain portable, because JavaScript is JavaScript regardless of the platform. Logic is not buried in a platform-specific visual editor.
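A sketch of what that looks like in practice: a pure function kept in version control and pasted into whichever platform's code step you currently use. The field names here are hypothetical, not from any specific connector:

```javascript
// transforms/normalizeLead.js — lives in your repo, not in the platform.
// A pure function: takes the raw form payload, returns a clean lead record.
// The same file works pasted into a Zapier Code step, a Make custom
// function, or an n8n code node, because it uses only plain JavaScript.
function normalizeLead(raw) {
  return {
    email: String(raw.email || "").trim().toLowerCase(),
    name: String(raw.name || "").trim(),
    // Expect a two-letter country code; normalize whitespace and case.
    country: String(raw.country || "").trim().toUpperCase(),
    source: raw.utm_source || "direct",
  };
}
```

Because the function has no platform imports and no side effects, migrating it is copy and paste, not a rebuild.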

Use webhooks instead of native triggers when sensible

If your CRM supports webhooks, point the webhook at an endpoint of your choice. When you switch automation platforms, you change the endpoint in the CRM. The trigger keeps working. This is not always possible, but where it is, it dramatically reduces migration cost.
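One cheap way to implement this indirection is a single mapping that lives outside the automation tool. The CRM only ever sees your stable endpoint; your endpoint decides where events actually go. A minimal sketch — the URLs are placeholders:

```javascript
// webhook-router.js — the one place that knows which automation
// platform is currently live. Switching platforms means changing
// this map and the ACTIVE flag, not reconfiguring the CRM.
const TARGETS = {
  // Placeholder URLs — substitute your real platform webhook URLs.
  make: "https://hook.example-make.com/abc123",
  n8n: "https://n8n.internal.example.com/webhook/new-lead",
};

// Which platform is live right now. Flip this one value to migrate.
const ACTIVE = "n8n";

// Given an incoming event, decide where it should be forwarded.
// In production this would sit behind a tiny HTTP endpoint (or a
// serverless function) that POSTs the payload onward to the target.
function forwardTarget() {
  const url = TARGETS[ACTIVE];
  if (!url) throw new Error(`No webhook target configured for "${ACTIVE}"`);
  return url;
}
```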

One platform, not five

Do not try to diversify by running different tools for different use cases. That multiplies your learning curves, licensing costs, and complexity without meaningfully reducing risk. If one platform fails, a large part of your automation breaks anyway.

Deep mastery of one platform is more robust than shallow knowledge of three.

Do not embed business logic in workflows

If you have a complex calculation that matters to the business, it does not belong inside a workflow. It belongs in a function that runs somewhere, in a language you control independently of the tool. The workflow calls that function.

Concrete example. A commission calculation with multiple tiers, depending on customer segment and contract length. If this logic lives inside a Make scenario, your sales commissions depend on Make. Build the calculation as a small HTTP API hosted somewhere stable. The workflow calls the API. Switch platforms, the logic stays.
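A sketch of that commission logic as a standalone function. The segments and rates below are invented for illustration; the point is that this file is yours, in version control, and the workflow only ever calls it over HTTP:

```javascript
// commission.js — business logic owned by you, not by the automation tool.
// Rates are in basis points (1% = 100 bps) so the arithmetic stays exact.
// Segments and tiers below are hypothetical examples.
const BASE_RATE_BPS = { smb: 500, midmarket: 700, enterprise: 1000 };

function commission(segment, contractMonths, dealValue) {
  const base = BASE_RATE_BPS[segment];
  if (base === undefined) throw new Error(`Unknown segment: ${segment}`);
  // Contracts of 24 months or longer earn one extra percentage point.
  const bps = contractMonths >= 24 ? base + 100 : base;
  return (dealValue * bps) / 10000;
}

// A workflow would call this through a thin HTTP wrapper, e.g.:
//   POST /commission { "segment": "smb", "contractMonths": 12, "dealValue": 50000 }
// Express, a serverless function, anything stable works — the logic
// above does not change when the automation platform does.
```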

The export problem

Every platform says it supports export. What they do not always say: what is actually in the export.

In Make, you can export scenarios as blueprints. The blueprint contains the workflow structure. It does not contain OAuth connections, webhook endpoints, manually uploaded files, or historical execution data. If you have a backup of the workflow, you have half of what you need. The rest has to be rebuilt by hand.

Zapier is similar. n8n fares better: workflows export as JSON, which makes them more reproducible. But even there, connections are not included in the export, for good security reasons.

What this means in practice. The idea of exporting all workflows once a quarter and storing the backup is not wrong, but it does not fully solve the problem. In a real disaster, the backup lets you reconstruct the logic. The operational connections to the rest of your stack still have to be rebuilt manually.

Who controls the credentials?

A question that often only comes up when someone leaves the company: who had access to the automation account, and which connections live there?

A real example from a consulting engagement. A founder called me because their main workflows had stopped running three days earlier. The person who had built everything had left the company a month before. The workflows were in their personal Make account, with their personal OAuth connections to Salesforce, HubSpot, and Slack. When the Salesforce token refreshed, everything fell over. Nobody else had access to fix it.

The fix was to spin up a new account, rebuild the workflows, reauthorize every connection. Three days of work for someone unfamiliar with the tool. Avoidable, if a team account had been used from the start, with documented connections and multiple administrators.

Recommendations:

Always use a business account, never a personal one.

At least two people should have admin rights.

Connections should sit under a company-wide service account, not a personal employee account.

When someone leaves, audit which connections were running under their identity and transfer them.

This hygiene is unglamorous. It is also one of the cheapest forms of insurance you can buy.

When you should actually migrate

Migration is real work. Migrating out of paranoia, without a concrete trigger, is usually inefficient. There are situations where you should not wait.

The vendor formally announces a feature you depend on is being deprecated. Clear line. You usually have six to twelve months. Use the first three for planning, not the last three.

Pricing structure changes such that you pay significantly more without proportional value. This is a math problem. What does migration cost, what do you save over two years? If migration pays back in under twelve months, do it.
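The arithmetic behind that decision is simple enough to sketch — the figures below are illustrative, not from the article's example:

```javascript
// Payback check: does migrating pay for itself fast enough?
// migrationCost: one-off migration effort (money, or hours × rate).
// monthlyDelta: how much MORE the current platform now costs per
// month than the alternative. All figures are illustrative.
function paybackMonths(migrationCost, monthlyDelta) {
  if (monthlyDelta <= 0) return Infinity; // no ongoing savings → never pays back
  return migrationCost / monthlyDelta;
}

// Example: migration estimated at 12 000, the price increase costs an
// extra 1 500 per month. 12000 / 1500 = 8 months — under the
// 12-month threshold, so migrating is the rational choice.
```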

Compliance requirements that the current vendor does not meet. GDPR, ISO certification, sector-specific rules. Not a comfort issue. A regulatory one.

The vendor has had multiple stability incidents. One outage is normal. Three in a quarter, each lasting hours, is a signal.

What migration actually costs

Numbers from projects I have seen:

A simple workflow (trigger, two or three steps, one data source) takes three to four hours to migrate, including testing, when someone knows both platforms. Someone still learning the new platform will take considerably longer.

A medium-complexity workflow (branching, multiple data sources, transformation) takes most of a working day per workflow. Most of that time goes into testing edge cases that emerged in the original over months of production and were never documented.

Complex, business-critical workflows can take several days each, plus a parallel-run period where both systems run side by side to catch errors before they hit the business.

A portfolio of twenty workflows realistically translates into two to four weeks of focused work, depending on complexity. That is the honest number, not the demo number.

A four-phase migration

When the decision is made, structure helps.

Phase 1: inventory

List every production workflow. Classify by criticality: what happens if this workflow is down for a week? Some workflows are nice-to-have or can be done manually. Others stop the business. The classification drives sequencing.

For each workflow, document the trigger, the steps, the connected services, the known edge cases. If you do not have this documentation yet, this is the time. It is valuable independent of the migration.
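The inventory can be as simple as a structured list whose fields mirror that documentation. A sketch with invented entries:

```javascript
// inventory.js — one record per production workflow.
// Criticality: 3 = stops the business, 2 = painful within days,
// 1 = nice-to-have. All example entries are hypothetical.
const workflows = [
  { name: "Invoice sync",  trigger: "CRM deal won",         services: ["CRM", "Accounting"], criticality: 3, edgeCases: ["partial refunds"] },
  { name: "Lead routing",  trigger: "form webhook",         services: ["CRM", "Slack"],      criticality: 2, edgeCases: ["duplicate submits"] },
  { name: "Weekly report", trigger: "schedule (Mon 08:00)", services: ["Sheets", "Email"],   criticality: 1, edgeCases: [] },
];

// Migration sequencing follows directly: least critical workflows
// first (they become the pilots), business-critical ones last.
function migrationOrder(list) {
  return [...list]
    .sort((a, b) => a.criticality - b.criticality)
    .map((w) => w.name);
}
```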

Phase 2: pilot

Migrate two or three simple, non-critical workflows first. This serves two purposes: you learn the new platform on real examples, and you discover systemic issues before touching critical work.

Run the pilot workflows in production, in parallel with the originals, for at least two weeks before going further.

Phase 3: main migration

Migrate the medium-criticality workflows. For each: build on the new platform, validate with test data, run in parallel with the original, then retire the original. Slower than a direct cutover, but it prevents the typical migration disaster where you flip the switch on day X and find out on day X+1 that three edge cases do not work.

Phase 4: critical workflows

The workflows whose failure stops the business. Parallel running for at least four weeks is required. You pay double licensing for that period, which is far cheaper than an outage.

After parallel running succeeds: retire the original, keep the backup, archive for at least three months.

When lock-in is acceptable

Not every dependency is a problem. Sometimes you live deep inside a platform, knowingly, because the tradeoff is worth it.

When the platform is deeply integrated into your value creation and switching would be unrealistic. SAP. Salesforce. Microsoft 365. Lock-in is real, but the ratio of switching cost to risk makes it acceptable as long as the vendor is stable.

When platform-specific features deliver real competitive advantage. Some tasks can only be solved well in a specific tool. If Make has a feature that n8n does not, and that feature is critical to your operation, lock-in is the conscious price.

When you have no serious compliance constraints and the vendor is well-established. For many businesses, Zapier is a reasonable choice, lock-in and all. The platform works, pricing is transparent, the probability that Zapier disappears in the next five years is low.

The strategic question is not: how do I eliminate every dependency? It is: which dependencies can I accept, and which need to be actively reduced?

The question to ask before every workflow

Before you build the next workflow: what happens if this platform does not exist in two years, or changes meaningfully?

Three possible answers:

"I rebuild the workflow in another tool, takes a few hours." Acceptable risk.

"I lose three weeks of work and a few productive days." Tolerable, but document the logic.

"The business is disrupted, we need weeks to rebuild." That is a platform bet. Make it consciously, or build the workflow differently.

Asking this at build time is far cheaper than answering it after the fact.

What most companies underestimate

Vendor risk in automation is systematically underestimated because the tools are so easy to use. You build something in two hours that used to take two weeks of development. Because it was easy to build, it feels like it would be easy to replace. It is not.

Most companies have built up an automation portfolio over the years that contains several person-years of accumulated experience. That experience is not in code. It is in the tool, in the workflows, in the heads of the people who built it.

Lock-in is the tax you pay on that experience when the vendor knows about it and you do not.

The reasonable path is not eliminating every dependency. That is overengineering and slows everything down. The reasonable path is making dependencies consciously, documenting them, and building bridges at the right places that can be used in an emergency.

If you want to know what your own automation portfolio actually looks like and where the critical dependencies sit, the free Automations Check answers that question in about 30 minutes.

#Vendor Lock-in #Strategy #Automation #Migration #Self-Hosting #Make #n8n #Zapier