Seven days inside a forgotten Make account: a forensic audit log
Six lessons from a real audit engagement. 47 scenarios, 17,000 EUR per year of wasted platform spend, five months of missing accounting data, 31 lingering connections from people who had left. What I take from it for anyone running production no-code workflows.
Most production no-code accounts I see go through three phases. Phase one, the first six months: enthusiasm. Two or three useful automations run, nothing is broken, people are pleased. Phase two, around twelve to eighteen months: drift. New scenarios get added, old ones get forgotten, ownership becomes fuzzy. Phase three, eighteen months and beyond: archaeology. Nobody is fully sure what runs, who built it, or why. The account becomes a black box that the business depends on but nobody dares touch.
Most of my audit work comes from phase three. This article is a forensic log from a recent engagement. Seven days, 47 scenarios, six hard lessons. Mid-sized B2B software company, 180 employees. Names and specifics changed, the patterns intact.
Day one: the inventory nobody had ever done
Every audit begins with the same simple questions: how many scenarios live in this account, how many run actively, and how many have been silent for weeks?
In this account, 47 scenarios were present. 23 ran actively. 11 had been manually disabled. 13 were so old their connections had broken down completely. None of those 13 had executed in the last 90 days. They still sat in the account, several with webhook URLs that could in theory be triggered at any moment.
First lesson. A scenario marked "inactive" costs nothing in operations. It costs a lot in cognitive overhead. At the next audit, someone has to open, read, understand, and decide about each one. 13 scenarios at 20 minutes each is more than four hours. Repeat that quarterly and you have lost an entire working day per year to ghost scenarios.
What do you do with old scenarios? The honest answer: delete them. If a scenario has not run in 90 days and was not deliberately parked for a known reason, it goes. Keeping it "just in case I want to look at it later" is a lie. Nobody looks at it later. What is genuinely needed always shows up again within thirty days, because someone complains. What nobody misses can go.
The concrete output of day one was a spreadsheet. One row per scenario, columns for name, status, last run, module count, presumed function, presumed owner, recommendation. That sheet became the working document for the rest of the week. Without it, the audit cannot happen.
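The triage behind that spreadsheet is mechanical enough to script. A minimal sketch in Python, applied to records shaped like the spreadsheet rows; all names, dates, and field names here are illustrative, not Make API fields:

```python
from datetime import date, timedelta

# Illustrative scenario records, shaped like the audit spreadsheet rows.
TODAY = date(2024, 6, 1)

scenarios = [
    {"name": "Lead Routing to Pipedrive", "status": "active",   "last_run": TODAY - timedelta(days=1)},
    {"name": "Old Salesforce migration",  "status": "active",   "last_run": TODAY - timedelta(days=200)},
    {"name": "Quarterly report export",   "status": "disabled", "last_run": TODAY - timedelta(days=40)},
]

def triage(scenario, today=TODAY, cutoff_days=90):
    """Apply the 90-day rule: anything silent for 90+ days is a delete candidate."""
    silent_for = (today - scenario["last_run"]).days
    if silent_for >= cutoff_days:
        return "delete-candidate"
    if scenario["status"] != "active":
        return "review"  # parked deliberately? needs a named reason, or it goes too
    return "keep"

for s in scenarios:
    print(s["name"], "->", triage(s))
```

The point is not the script itself but that the rule is deterministic: once "last run" is in a column, nobody has to argue about individual scenarios.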
One observation worth stating directly. An audit without an inventory is not an audit. One that walks straight into individual scenarios without seeing the whole picture misses systemic problems. The inventory shows how much rubbish, how many duplicates, and how much shadow activity sit in the account.
Day two: surprises inside the active scenarios
On the second day I opened each of the 23 active scenarios in turn, with the spreadsheet beside me. Maximum 30 minutes per scenario. Look at what it does, note what is unusual, mark anything that needs follow-up. What I found was a mix of solid, questionable, and alarming.
Three concrete examples.
Scenario "Lead Routing to Pipedrive". 18 modules. Running for 14 months. At first glance, a clean structure with sensible branches for different lead sources. On closer inspection, a "Filter" module had been blocking all leads from Austria for eight months. Nobody knew why. Most likely someone had once wanted to test whether Austrian leads needed a separate qualification path and forgot the filter. Eight months of Austrian leads disappeared and nobody noticed, at a company whose sales team actively sells across the DACH region.
Scenario "Stripe webhooks to accounting". Supposed to push every Stripe payment into an internal accounting API. On closer inspection: the scenario ran daily, Stripe delivered the data, but the HTTP request to accounting had been returning 401 Unauthorized for five months. The bearer token had expired. Nobody noticed, because the scenario had no explicit error handler. In Make, a module returning 4xx still counts as "executed" and the workflow continues. The operations chart looked clean. Accounting data for five months was partially missing from the main database. Two weeks of accounting work to reconstruct it.
Scenario "Customer success email sequence". Supposed to send onboarding emails to new customers over five weeks. Had a bug: instead of reading just the new customer, it read the entire customer list. Result: every new customer received between 60 and 80 onboarding emails in the first 24 hours. Customer success had reported the bug three months earlier, nobody had touched it, because nobody saw the scenario as their responsibility.
Second lesson. A scenario that "runs" in Make is not the same as a scenario that works. Make shows green checkmarks for module execution, not for business-logic execution. A 401 response that nobody handles still counts as a successful run. A trigger pulling the wrong data still produces clean operations statistics. The business logic collapses quietly while the dashboard stays green.
What I take from this. Every production workflow needs an error handler that does not just log but actively notifies someone. Slack, email, pager, all fine. Silent is not. Make has had an Auto-Recovery feature for failed runs since 2023. It is not enabled by default. In three out of four audits I find it switched off.
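The underlying principle transfers to any runtime: invert the default and treat a non-2xx response as a hard failure that notifies a human. A minimal Python sketch of that principle, with a stand-in notifier instead of a real Slack webhook; all names are illustrative:

```python
notifications = []

def notify_ops(message):
    # Stand-in for a Slack webhook, email, or pager call; here we just record it.
    notifications.append(message)

def call_accounting_api(payload):
    # Simulated HTTP module that returns only a status code. In the audited
    # scenario this was a real request whose bearer token had expired.
    return 401

def run_step(name, call, payload):
    """Treat any non-2xx status as a hard failure and notify someone.

    This is the inverse of Make's default, where a 4xx response still
    counts as a 'successful' module execution.
    """
    status = call(payload)
    if not 200 <= status < 300:
        notify_ops(f"{name} failed with HTTP {status} for payload {payload!r}")
        return False
    return True

ok = run_step("stripe-to-accounting", call_accounting_api, {"invoice": "inv_123"})
print(ok, notifications)
```

Ten lines of discipline like this would have turned five months of silent data loss into a Slack message on day one.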
Day three: the data flows nobody understood anymore
Day three was about reconstructing the data flows. Which data moves in what volumes from where to where, with what frequency, with what transformations?
A workflow synchronizing HubSpot contacts to Mailchimp showed a problem I see often. The sync ran every 15 minutes, each time triggered by "Search Contacts" rather than by a delta-based "Watch Updated Contacts". Meaning: at every execution, all 12,000 contacts were compared and possibly updated. About 12,000 operations per execution, ninety-six executions per day, totalling roughly 35 million operations per month.
Make's base tier comes with 10,000 operations per month. The account was permanently parked at the highest tier because the operations counter had repeatedly tripped automatic upgrades the year before. The result: a 1,500 EUR per month Make subscription driven entirely by a poor trigger choice. Actual relevant updates to Mailchimp happened about 30 times per day.
The repair was straightforward. Switch the trigger to "Watch Updated Contacts". After the change, monthly operations fell from 35 million to 22,000. Tier cost fell from 1,500 EUR per month to 29 EUR per month. Over twelve months: about 17,000 EUR saved through a design change that took under one hour of work.
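The arithmetic behind that tier jump is worth spelling out. A back-of-the-envelope calculation with this account's numbers; the delta side is a deliberately simplified model charging one operation per polling cycle plus one per actual update, so it lands below the real post-fix figure, where each update runs through several modules:

```python
# Full-scan trigger: every 15 minutes, every contact is touched.
contacts = 12_000
runs_per_day = 24 * 60 // 15                 # 96 executions per day
full_scan_ops = contacts * runs_per_day * 30  # per 30-day month

# Delta trigger: only actually-updated contacts cost operations,
# plus one operation per polling cycle (simplifying assumption).
updates_per_day = 30
delta_ops = (updates_per_day + runs_per_day) * 30

print(f"full scan: {full_scan_ops:,} ops/month")
print(f"delta:     {delta_ops:,} ops/month")
```

The ratio is roughly four orders of magnitude, which is why a one-hour trigger change erased a 1,500 EUR monthly bill.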
Third lesson. In Make pricing, operation count is the variable that matters. Anyone using "Watch all" or "Search all" as a trigger instead of a delta-based trigger is burning money. The platform rewards lazy architecture with higher invoices. Anyone not actively reviewing this is paying for nothing.
In the same audit I found a second, similar scenario. Every 5 minutes it polled Airtable to check whether a new item had appeared, in order to copy it to another base. The scenario burned 200 to 400 operations per day, nearly all of them empty, because 99 percent of executions found no new item. The fix: an Airtable webhook instead of polling. 96 percent fewer operations, the same functionality, a faster trigger.
Make, Zapier, and n8n have different pricing models. Make counts operations, Zapier counts tasks, n8n counts workflow executions. A scenario that would be cheap on one platform can be expensive on another. Anyone who does not understand the pricing logic of their platform builds workflows that look fine and quietly drain the budget.
Day four: the secrets of the connections
Day four was the most unpleasant. I opened the connection list in Make. 89 connections were registered. 12 of them were greyed out because their OAuth tokens had been revoked by the underlying platform. 7 showed "Token expired" or "Authentication failed". The remaining 70 looked "active", but an "active" status in the Make UI is no guarantee that a connection works. An active connection can still throw silent errors if the permissions on the other side have been restricted.
What troubled me most. I could not derive from the names who owned these connections. "HubSpot Connection 3", "Stripe Account 2", "Google Sheets Production". Who had set them up? Whose personal credentials were sitting in them?
It turned out: 31 of the 89 connections had been created by employees who had since left the company. In several cases Make still listed the former employee as the "Owner" of the connection. Which means: if that former employee dissolves their HubSpot account or changes permissions, every workflow depending on the connection dies instantly.
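Cross-checking connection owners against the staff directory is the kind of check that takes minutes once scripted. A sketch with invented names and a simplified record shape; Make's actual connection export looks different:

```python
# Illustrative staff directory and connection list.
current_staff = {"anna@example.com", "ben@example.com"}

connections = [
    {"name": "HubSpot Connection 3",     "owner": "former.employee@example.com"},
    {"name": "Stripe Account 2",         "owner": "anna@example.com"},
    {"name": "Google Sheets Production", "owner": "left.last.year@example.com"},
]

def orphaned(connections, staff):
    """Connections whose owner no longer appears in the staff directory."""
    return [c for c in connections if c["owner"] not in staff]

for c in orphaned(connections, current_staff):
    print("migrate to service account:", c["name"])
```

Run against the offboarding list, this turns a vague unease into a concrete migration backlog.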
In this audit we also found two connections whose former owners now worked, according to LinkedIn, for direct competitors. Theoretically those people could still have accessed the underlying platforms through the OAuth sessions stored in Make, if the platforms did not invalidate sessions on account deletion. It is unclear whether any data had actually leaked. The fact that it was theoretically possible kept the CISO awake for an hour.
Fourth lesson. Connections should not live on personal accounts. They should live on service accounts that belong to the company and are not tied to individuals. HubSpot, Stripe, Salesforce, and Pipedrive all support some form of service account. Google Workspace handles this via service-account keys. Notion and Airtable use integration tokens. What does not work is the naive assumption that "the person who built the scenario will surely stay at the company".
In this audit I recommended a systematic switch. For each productive connection, a service account at the relevant provider was set up. The Make connection was rebuilt with service-account credentials. The old connection was deleted. Effort: three days of work by an internal operations engineer. Result: resilience to staffing changes, a clean audit trail, and a far easier conversation with the data protection officer.
A note worth making. Service accounts are not a silver bullet. They carry their own risks. A compromised service-account key can be more damaging than a compromised personal account, because more workflows depend on it. Service accounts need rotation rules, clear ownership, restricted permissions. They are the lesser evil, not the ideal solution.
Day five: the pause experiment
Day five was the most interesting. I took the eight most important scenarios and mapped them to concrete business processes. The question for each: what does this scenario do, what business goal does it serve, and what happens if it stops running?
For three of the eight scenarios, I could not answer. Nobody in the company could either. Those three had been running for over a year, had consumed roughly 280,000 operations between them, costing the customer an estimated 800 EUR in tier allocation. Nobody knew anymore what they did or whether anyone relied on their output.
I did an experiment I recommend to every client. I paused the three scenarios. Not deleted, just paused. Reversible. If anyone complains, the scenario matters. If nobody complains in 30 days, the scenario is ready for deletion.
Day 15 after the pause: a product manager complained that "the competitor tracker" was no longer running. So that was one of them. A scraper that wrote competitor prices from three public pricing pages into a Google Sheet daily. Defended as market intelligence. We reactivated it, this time with proper documentation, a named owner, and an error handler.
Day 22: the remaining two paused scenarios still had no complainant. Day 30: both were deleted. One turned out to be an old migration workflow, used 18 months earlier to shift data between a legacy and a new Salesforce instance. The migration had ended. The scenario had kept polling every six hours since.
Fifth lesson. What nobody misses can go. Workflows tend to live forever if nobody actively prunes them. They cost operations, cognitive overhead at every audit, and risk in the form of potential data leaks. If nobody can defend why a scenario runs, it belongs in the bin.
Some clients struggle with this step. Understandable. What if someone does need it? That is exactly what the pause method is for. A pause is reversible. A 30-day window gives everyone time to speak up. Anyone who does not speak up has not been using the scenario. That is data, not a hunch.
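The pause method reduces to two rules, which makes it easy to track in a script rather than in anyone's memory. A sketch with illustrative scenario names and dates:

```python
from datetime import date

# Illustrative pause log: when each scenario was paused, and whether
# anyone complained about it afterwards.
paused = {
    "competitor-tracker": {"paused_on": date(2024, 3, 1), "complaint_on": date(2024, 3, 15)},
    "mystery-sync-a":     {"paused_on": date(2024, 3, 1), "complaint_on": None},
    "mystery-sync-b":     {"paused_on": date(2024, 3, 1), "complaint_on": None},
}

def verdict(entry, today, window_days=30):
    """Reactivate on complaint; delete after a silent 30-day window."""
    if entry["complaint_on"] is not None:
        return "reactivate-with-owner-and-docs"
    if (today - entry["paused_on"]).days >= window_days:
        return "delete"
    return "keep-waiting"

today = date(2024, 3, 31)
for name, entry in paused.items():
    print(name, "->", verdict(entry, today))
```

Logging the complaint date also answers the inevitable later question of who actually depended on what.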
Day six: the documentation that nobody had written
A classic problem with no-code platforms is documentation. Make has had a note field per module for a while. In theory, you can document the reasoning behind each decision there. In practice, 95 percent of scenarios contain not a single note.
What was specifically missing in this audit.
Why certain filters were set. We found 23 filters whose function could not be inferred from the module name. Example: a filter named "Filter Region". Which region? Why? Who decided? Nowhere stated.
What the mapping logic in complex transformers actually did. Six scenarios had transformations dense enough that they took an hour or more to decode without notes. Nested if-then logic, regex matches, date conversions, lookup mappings.
Which business rule sat behind a branch. Three scenarios had branches with non-obvious conditions. Example: "if Lead Score > 70 and Industry = Finance and Country in DACH, then route to Senior Sales". Why those thresholds, why that industry? Someone had decided months ago. Nobody could reproduce the reasoning.
What we did. One Markdown page per active scenario in the internal documentation. Each page followed the same template.
- Which business process does this scenario support?
- What trigger starts it, with what frequency?
- Which external systems are involved?
- Which assumptions are being made that are not explicit in the scenario?
- Who is responsible for maintenance and questions?
- What typical failure modes exist, and how do you respond to them?
23 active scenarios produced 23 pages. Effort during the audit: about three days, because many assumptions had to be reconstructed from data and stakeholder interviews. The result: a readable manual, usable by any operations engineer joining the company in future.
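The six questions translate directly into a page template, so generating the skeletons is trivial; only the answers take time. A sketch that renders one Markdown page per scenario; every name and answer below is invented for illustration:

```python
TEMPLATE = """# {name}

## Business process
{process}

## Trigger and frequency
{trigger}

## External systems
{systems}

## Implicit assumptions
{assumptions}

## Owner
{owner}

## Failure modes and response
{failures}
"""

def scenario_page(meta):
    """Render one Markdown documentation page from the six audit questions."""
    return TEMPLATE.format(**meta)

page = scenario_page({
    "name": "Lead Routing to Pipedrive",
    "process": "Routes inbound leads to the right sales pipeline.",
    "trigger": "Webhook from the website form, fires on every submission.",
    "systems": "Website form, Pipedrive, Slack.",
    "assumptions": "Lead country is always set; DACH leads are never filtered out.",
    "owner": "RevOps (shared mailbox, not a person).",
    "failures": "Pipedrive rate limit: retry; malformed payload: alert ops channel.",
})
print(page)
```

A fixed template has a second benefit: a missing answer is visible as an empty heading, not as an absent page.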
Sixth lesson. Visual workflows are not documentation. They are implementation. Documentation is what explains why the implementation looks the way it does. Anyone who does not maintain separate documentation has already taken the first step toward a bus-factor problem.
Day seven: my recommendations for the next two years
On the seventh day I wrote up the recommendations. Six points I recommend in nearly every audit engagement. They are not original. They are what every software engineer knows from a normal day job. Code reviews, monitoring, documentation, cleanup. What is taken for granted in classical software development is often missing in no-code automation. That gap is the problem.
One. Monthly reviews. On the first Friday of every month, an operations engineer spends one hour in the account. What were the error rates over the last 30 days? Which scenarios had unexpected executions? Which connections sit in a warning state? One hour is enough, if the audit spreadsheet exists as a starting point.
Two. Quarterly cleanup. Once per quarter, every scenario that has not run in 90 days is reviewed and either deleted or actively revived. Connections greyed out for three months get deleted. This routine prevents the slow accumulation of cognitive rubbish.
Three. Service account requirement. No new connection on a personal account. Anyone creating a new connection must use a company-owned service account. That means more bureaucracy at setup time. It means dramatically less pain when employees leave.
Four. Error handlers in every production scenario. When a module fails, the failure flows into a central Slack channel or an issue tracker. Silent failures are the worst thing that can happen to an operations team, because they only surface through external complaints. A production workflow without an error handler is like a server application without logging.
Five. Cost monitoring. A simple spreadsheet with monthly operations per scenario. If a scenario suddenly consumes 30 percent more operations than the previous month, that is a signal. Either the underlying data-volume assumption broke, or the scenario has a bug. Either way, dig in before the invoice arrives.
Six. Documentation requirement for every new scenario. Anyone deploying a new scenario writes the Markdown page with the six fields above before deployment, not after. Anyone who does not want to document should not go to production. That sounds strict. It is. It is also the only durable way to keep the bus factor manageable.
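The cost check in point five is a few lines of code rather than a manual spreadsheet review. A sketch that flags any scenario whose monthly operations grew more than 30 percent month over month; all numbers are invented:

```python
# Illustrative operations history per scenario, keyed by month.
ops_by_month = {
    "hubspot-mailchimp-sync": {"2024-04": 21_000, "2024-05": 29_000},
    "stripe-accounting":      {"2024-04": 3_000,  "2024-05": 3_100},
}

def flag_jumps(history, threshold=0.30):
    """Flag scenarios whose operations grew more than `threshold` month over month."""
    flagged = []
    for name, months in history.items():
        # Month keys sort chronologically as strings (YYYY-MM).
        (_, prev), (_, curr) = sorted(months.items())[-2:]
        if prev > 0 and (curr - prev) / prev > threshold:
            flagged.append((name, prev, curr))
    return flagged

for name, prev, curr in flag_jumps(ops_by_month):
    print(f"{name}: {prev:,} -> {curr:,} ops, investigate before the invoice")
```

The threshold is arbitrary; what matters is that the comparison happens before the billing cycle closes, not after.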
What it costs not to do this
Before closing, a direct calculation. The forensic week cost the client 7 consulting days, roughly 9,000 EUR. What did we find?
A missing accounting integration with a five-month data gap. Rebuild cost in the finance department: about 6,000 EUR, because two staff spent two weeks reconstructing payments manually.
An oversized Mailchimp sync costing about 17,000 EUR per year more than necessary. Looking back across the two years it had been running this way: roughly 34,000 EUR poured into bad trigger choices.
A lead-routing gap that had silently dropped Austrian leads for eight months. At an estimated 8 percent conversion rate and the customer's average DACH deal value: about 80,000 EUR of lost revenue, conservatively.
A security gap from 31 connections held by former employees. Potential damage hard to quantify, but GDPR-relevant. With an actual data leak, six-figure fines would have been plausible.
What did the customer save or avoid directly through 9,000 EUR of audit? Around 20,000 EUR in the first year, directly traceable. Indirectly avoided, considerably more, hard to put a number on.
This calculation is not universal. Some audits find less, some find more. The pattern is consistent. Every production Make or Zapier account that has not been systematically maintained for two years carries enough hidden problems that an audit pays for itself. I have now seen this in over 30 engagements. The exact numbers vary. The pattern does not.
What I take from it
I tell this story often, especially to clients who call me for the first time because they "need a few quick automations". The unspoken assumption tends to be: you build something, it runs, you are pleased. The reality is usually different. You build something, it runs, and after 18 months it has become a hidden mess because nobody remembers what it does.
This is not specific to Make. Zapier, n8n (cloud or self-hosted), Power Automate, and Workato show the same patterns. Visual workflows offer one big advantage: they are quick to build. They have one big disadvantage: without discipline they decay into an unreadable legacy.
Anyone serious about automation treats it as its own software discipline. Reviews, monitoring, documentation, version management. Anyone not serious will, after 18 months, have an account full of magic that nobody understands. Both are legitimate choices. Just make the choice consciously, before being surprised in year two by a missing accounting reconciliation.
Audits are uncomfortable. They cost time and money, and they expose omissions. They are also the only insurance against the day a workflow fails and nobody knows where to begin. I recommend one audit day per year per 10 productive scenarios, minimum one day, maximum one week. For a medium-sized account, that is two to three days. Not much compared to the cost of one forgotten gap.
If you want to know how much pain sits in your own account, or whether it is time for a first audit round, the free Automations Check is a good starting point. About thirty minutes. What you do with what you learn is your call.