In the previous article we built a Pokémon team analyzer that covered intermediate n8n patterns: webhooks as entry points, iteration with Loop Over Items, complex API data transformation, and result aggregation. If you haven’t read it yet, I’d recommend doing so before continuing — though this article works perfectly well on its own.
Throughout this series we’ve covered a lot of ground: we started with setting up n8n with Docker, then explored node types and best practices, went deeper into professional fundamentals, and put it all into practice with a real project. Today we’re adding a piece that most workflows eventually need: persistent data storage.
Until now, every workflow we built was stateless: it received data, processed it, and returned a result, but remembered nothing between runs. That’s perfectly fine for a lot of use cases — but when you need to store records, track past operations, or build something resembling a lightweight database without relying on external services, n8n’s Data Tables are exactly the right tool.
The problem they solve
Imagine you want to build an appointment management system. Every time someone books an appointment, you need to store that information somewhere so you can look it up, update it, or cancel it later. The usual options would be:
- Using Google Sheets as a makeshift database (it works, but it has rate limits and isn’t great for frequent writes)
- Integrating Airtable or Notion (adds external dependencies and potentially extra costs)
- Setting up a real database like PostgreSQL or MySQL (requires additional infrastructure)
- Using n8n’s static variables (only suitable for configuration data, not dynamic records)
Data Tables offer a genuinely fitting alternative: persistent storage built right into n8n, with no external services, no extra setup, and more than enough performance for most automations.
What are Data Tables in n8n?
Data Tables are an n8n feature that lets you store structured data persistently inside your own instance. They work much like a simplified relational database table: columns with defined types and rows containing records.
Unlike variables or execution-scoped storage, data in a Data Table survives across runs and is available to any workflow in your instance.
Key features
- Real persistence: data stays around even when no workflow is running
- Shared access: any workflow in your instance can read from and write to the same table
- Column types: text, number, boolean, date, and JSON
- Full CRUD: insert, query, update, and delete records
- Filters and sorting: query records with conditions, sort them, and paginate results
- No external configuration: everything runs inside n8n, no additional credentials needed
Tables are managed from the Data Tables tab on the n8n home screen, alongside Workflows, Credentials, Executions, and Variables. From there you can create tables, define their schema, and inspect stored records at any time. Inside workflows, you access Data Tables through the Data Table node.
Availability note: Data Tables are available on n8n Cloud and on self-hosted instances from version 1.113.1 onwards. If you’re on an older version, update your instance before continuing.
Main use cases
Data Tables shine in scenarios where you need lightweight persistent state without wanting to add external dependencies.
Event logging and auditing
Keep a history of each significant run: what happened, when, with what data, and what the outcome was. Useful for audits, production debugging, and periodic reports.
```json
{
  "timestamp": "2026-04-01T10:30:00Z",
  "workflow": "order-processing",
  "action": "order_processed",
  "order_id": "ORD-2847",
  "result": "ok",
  "duration_ms": 342
}
```
Deduplication and processing checklists
Avoid processing the same item twice. Before handling a record, check the table to see if it’s already been processed. If it’s there, skip it; if not, process it and save it.
A common example: a workflow that monitors an API and processes new items every hour needs to remember which IDs it’s already handled to avoid duplicate actions.
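In Code-node terms, the check is roughly the following sketch — here `existingRows` stands in for the output of a Get row(s) node, and the `itemId` field name is illustrative, not from a real workflow:

```javascript
// Hypothetical dedup sketch: keep only items whose ID is not already
// recorded in the Data Table of processed IDs.
function filterNew(incomingItems, existingRows) {
  const seen = new Set(existingRows.map((row) => row.itemId));
  return incomingItems.filter((item) => !seen.has(item.itemId));
}
```

After processing the survivors, you would insert their IDs back into the table (Insert row) so the next run skips them.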
Dynamic configuration
Store configuration parameters that change over time and need to be accessible across multiple workflows: thresholds, recipient lists, business rules, activation schedules.
Unlike n8n’s static variables, here you can update configuration through a dedicated workflow without having to manually edit parameters.
Simple queue management
Implement a pending task queue: one workflow writes tasks to the table with a pending status, another reads and processes them, marking them as in_progress or completed. Simple, effective, no need for Redis or anything similar.
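A sketch of the consumer side, assuming `pendingRows` is the output of a Get row(s) node filtered on status = "pending" (field names are illustrative):

```javascript
// Hypothetical queue-consumer sketch: take the oldest pending task and mark
// it in_progress. A follow-up Update row(s) node would persist the status
// change back to the table using the row's id.
function claimNext(pendingRows) {
  if (pendingRows.length === 0) return null; // nothing to do this run
  return { ...pendingRows[0], status: "in_progress" };
}
```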
Basic entity management
The most complete use case: managing records for an entity (customers, products, bookings, appointments) with full CRUD operations. It won’t replace a real database for large projects, but it’s more than enough for medium-sized automations.
Working with Data Tables: node operations
You access Data Tables from your workflows through the Data Table node. Its operations are split into two groups: row actions and table-level actions.
Row actions
Insert row adds a new record to the table. Each column’s value is mapped from the current item’s data using n8n expressions. The node returns the created record, including the internal identifier that n8n automatically assigns to each row.
One important detail: n8n doesn’t enforce uniqueness constraints. If you need to prevent duplicates, that’s your workflow’s responsibility — typically handled by querying first.
Get row(s) retrieves existing records. It supports column-value filters with configurable conditions (such as Equals), sorting options, result limits, and the option to return all records without filtering. When results may be empty, enable Always Output Data on the node to prevent the flow from stopping due to missing items.
Update row(s) modifies the fields you specify, leaving the rest untouched. To update by your own criteria (rather than by internal ID), the usual pattern is a preceding Get row(s) to locate the row, followed by Update row(s) using the id returned by that query.
Upsert row(s) combines insertion and update into a single operation: if the record already exists it updates it; if it doesn’t, it creates it. This is the most practical option when you want to maintain a unique record per some business criterion without having to manually check whether it already existed.
If row exists and If row does not exist are conditional validation operations. They check whether at least one record matching the given conditions exists and branch the flow accordingly, without needing a separate IF node. Particularly useful for business logic like “only proceed if the user is already registered” or “only insert if this ID doesn’t exist yet.”
Delete row(s) permanently removes records matching the conditions you define. If you need to preserve history rather than delete, the usual pattern is updating a status field to cancelled or deleted using Update row(s), so the record stays but is excluded from normal queries.
Table-level actions
Beyond managing rows, the Data Table node also lets you administer the structure of tables themselves without leaving n8n:
- Create a data table: creates a new table by defining its name and columns directly from the workflow. Useful for automated onboarding or projects where the schema is generated dynamically.
- List data tables: returns the list of tables available in the instance. Lets you build workflows that act on tables dynamically without hardcoding their names.
- Update a data table: modifies the name or structure of an existing table.
- Delete a data table: permanently removes a table and all its records.
In most projects, tables are created and configured once from the UI — but these operations open the door to more advanced management automations, like staging tables that are created, processed, and deleted programmatically.
Schema design
It’s worth thinking through your column design before creating the table. A good rule of thumb: only define separate columns for fields you’ll actually need to retrieve, filter, or compare; everything else can be consolidated into a generic JSON field. Columns you use in search conditions deserve the right type from the start, because changing a column type later may mean manually migrating existing data.
Practical example: appointment booking system
Now for the real workflow: an appointment system with automatic schedule conflict detection. One interesting aspect of this example is that it accepts requests from two different sources (a native n8n form and an HTTP webhook) and responds appropriately to each.
📦 Example downloads: You can import the workflow from appointments.json and the Data Table from appointments.csv, which includes sample data ready to use.
For the workflow to run correctly, the data table must exist in your instance beforehand — either created manually or imported from the CSV above. If you give it a different name when creating or importing it, you’ll need to update it in the workflow nodes that reference that Data Table.
Workflow architecture
The workflow has two entry points that converge into a single processing flow, and three possible outcomes — each with two response variants depending on where the request came from.
Dual input: form and webhook
The workflow has two trigger nodes, making it flexible for different use cases.
Appointment Form is a native n8n form with four fields: date (calendar picker), start time in HH:MM format, full name, and reason for the visit. It has bot protection enabled and uses the workflow’s timezone. Once submitted, the user sees a result page inside the form itself (confirmation, validation error, or scheduling conflict) without leaving the screen.
Webhook POST exposes the /appointments-booking endpoint for external integrations: web apps, scripts, or any service that can make an HTTP request. Responses are returned as JSON.
Both triggers connect to the same next node, Normalize Input. The sourceType field that node generates ("form" or "webhook") is what lets the workflow know how to respond at the end.
Key node: Normalize Input
This Code node is the heart of the workflow. It does several things at once:
1. Detects the source by checking whether the request body comes inside a body field (webhook) or directly at the root (form), and records the result in sourceType.
2. Validates input fields: the date format must be strictly YYYY-MM-DD (the node rejects any date with a time component or timezone), the time must be HH:MM, and the name and reason can’t be empty. Errors are collected in an errors array.
3. Converts the date and time to absolute UTC minutes — the most important design decision in the whole workflow. Instead of storing the time as text ("10:00") or as a timezone-aware timestamp, the node calculates how many minutes have elapsed since the UTC epoch (Date.UTC(year, month - 1, day, hour, minute) / 60000 — note that JavaScript months are zero-based). This produces numbers like 29,583,960 for April 1st, 2026 at 10:00 UTC.
Why? Because detecting whether two time slots overlap then becomes a simple numeric comparison: startA < endB && endA > startB. No timezones, no string parsing, no ambiguity.
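A minimal sketch of that comparison, treating both slots as half-open intervals in absolute UTC minutes (the function name is illustrative):

```javascript
// Two half-open slots [startA, endA) and [startB, endB) overlap exactly
// when each one starts before the other ends.
function overlaps(startA, endA, startB, endB) {
  return startA < endB && endA > startB;
}
```

Back-to-back slots (one ending at the exact minute the other starts) correctly do not count as overlapping.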
4. Calculates the dayKey: divides startMinutes by 1440 (minutes in a day) and takes the integer part. This produces a unique number per UTC day, allowing you to check whether two appointments fall on the same day without relying on the date column type in the table.
5. Normalizes the name: strips accents, converts to lowercase, and removes any non-alphanumeric characters. "Laura Pérez" and "laura perez" produce the same normalizedName, preventing duplicates caused by capitalization or accent differences.
```json
{
  "sourceType": "webhook",
  "date": "2026-04-15",
  "dayKey": 20558,
  "startTime": "10:00",
  "endTime": "11:00",
  "startMinutes": 29604120,
  "endMinutes": 29604180,
  "durationMinutes": 60,
  "name": "Laura Pérez",
  "normalizedName": "lauraperez",
  "reason": "Initial consultation",
  "status": "confirmed",
  "isValid": true,
  "errors": []
}
```
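Outside n8n, the three key calculations — absolute UTC minutes, the dayKey, and name normalization — can be sketched like this (a simplified stand-in, not the exact Code-node source):

```javascript
// Convert a YYYY-MM-DD date plus an HH:MM time into absolute minutes
// elapsed since the UTC epoch.
function toUtcMinutes(date, time) {
  const [year, month, day] = date.split("-").map(Number);
  const [hour, minute] = time.split(":").map(Number);
  // Date.UTC takes a zero-based month index and returns milliseconds.
  return Date.UTC(year, month - 1, day, hour, minute) / 60000;
}

// One integer per UTC day: whole days elapsed since the epoch.
function toDayKey(startMinutes) {
  return Math.floor(startMinutes / 1440);
}

// Strip accents, lowercase, and drop non-alphanumerics so that
// "Laura Pérez" and "laura perez" collapse to the same key.
function normalizeName(name) {
  return name
    .normalize("NFD")
    .replace(/[\u0300-\u036f]/g, "")
    .toLowerCase()
    .replace(/[^a-z0-9]/g, "");
}
```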
Get Same Day Appointments
If validation passes, this Data Table node retrieves all rows from the appointments table with status = "confirmed". It doesn’t filter by date at this point because the date column is of type text and the Data Table filter doesn’t offer custom comparisons; if that column were datetime, the comparison could happen in the query itself. Instead, the day filtering happens in the next node’s code, where you have full control.
The node has alwaysOutputData enabled, which ensures that even if there are no appointments in the table the flow continues without errors. Without that option, an empty result would stop execution.
Check Conflicts
This Code node applies two business rules against the retrieved appointments:
Rule 1 — one appointment per person per day: compares the normalizedName from the incoming request against each row that shares the same dayKey. If there’s a match, the slot is denied with conflictType: "same_person_same_day".
Rule 2 — no overlapping time slots: if the person doesn’t already have an appointment that day, it checks whether any existing slot overlaps with the requested one using the numeric minute comparison: reqStart < row._endMinutes && reqEnd > row._startMinutes. If there’s any overlap with another appointment (from a different person), it returns conflictType: "slot_taken" with a message indicating the occupied time range.
The node also includes backward-compatibility logic to handle older rows that might have stored minutes in per-day format rather than absolute epoch minutes, calculating the dayKey from the date field in that case.
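Ignoring the backward-compatibility branch, both rules can be sketched like this — `req` is the normalized request, `rows` the output of Get Same Day Appointments, and field names follow the table schema (a simplified version, not the exact Code-node source):

```javascript
// Apply the two booking rules. Returns null when the slot is free,
// or a conflict descriptor otherwise.
function checkConflicts(req, rows) {
  const sameDay = rows.filter((row) => row.dayKey === req.dayKey);

  // Rule 1: one appointment per person per day.
  if (sameDay.some((row) => row.normalizedName === req.normalizedName)) {
    return { conflictType: "same_person_same_day" };
  }

  // Rule 2: no overlapping time slots, using the numeric minute comparison.
  const clash = sameDay.find(
    (row) => req.startMinutes < row.endMinutes && req.endMinutes > row.startMinutes
  );
  return clash ? { conflictType: "slot_taken" } : null;
}
```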
Insert Appointment
If there are no conflicts, the Data Table node inserts the new appointment with all ten calculated fields: date, dayKey, startTime, endTime, startMinutes, endMinutes, name, normalizedName, reason, and status.
The appointments table schema
If you want to follow along with the exact same sample data, you can download the appointments CSV, which already includes the test records used in this workflow. If you’d rather start from scratch, you could define the date column (date) as datetime to enable native date filtering in queries — just note that the example workflow, and the schema below, treat it as text.
| Column | Type | Description |
|---|---|---|
| date | text | Date in YYYY-MM-DD format |
| dayKey | number | UTC day index: floor(startMinutes / 1440) |
| startTime | text | Start time in HH:MM |
| endTime | text | End time in HH:MM |
| startMinutes | number | Absolute UTC minutes since epoch (start) |
| endMinutes | number | Absolute UTC minutes since epoch (end) |
| name | text | Original customer name |
| normalizedName | text | Name without accents or uppercase |
| reason | text | Reason for the appointment |
| status | text | Always confirmed in this workflow |
Responses by source
Each of the three possible outcomes (appointment created, invalid data, scheduling conflict) has two response branches, selected by an IF node that checks whether sourceType === "form":
| Outcome | From the form | From the webhook |
|---|---|---|
| Success | Confirmation page with date and time | { success: true, message: "...", appointment: {...} } |
| Conflict | Error page showing the occupied slot | { success: false, conflictType: "slot_taken", message: "..." } |
| Invalid data | Error page listing the problems | { success: false, errors: [...] } |
Form response nodes use the completion operation of the Form node, which shows a result screen inside the form itself. JSON response nodes use Set to prepare the object, and the webhook sends it back as an HTTP response.
Testing with curl
Once you’ve imported the workflow and created the table with the right schema, you can test it directly:
Valid booking:
```bash
curl -X POST https://your-n8n.com/webhook/appointments-booking \
  -H "Content-Type: application/json" \
  -d '{
    "date": "2026-04-15",
    "time": "10:00",
    "name": "Laura Pérez",
    "reason": "Initial consultation"
  }'
```
Successful response:
```json
{
  "success": true,
  "message": "Appointment confirmed for 2026-04-15 from 10:00 to 11:00.",
  "appointment": {
    "date": "2026-04-15",
    "startTime": "10:00",
    "endTime": "11:00",
    "name": "Laura Pérez",
    "reason": "Initial consultation",
    "status": "confirmed"
  }
}
```
Trying to book an overlapping slot:
```json
{
  "success": false,
  "conflictType": "slot_taken",
  "message": "The selected slot is not available. It is already reserved from 10:00 to 11:00."
}
```
Second appointment on the same day for the same person:
```json
{
  "success": false,
  "conflictType": "same_person_same_day",
  "message": "Laura Pérez already has an appointment on 2026-04-15 from 10:00 to 11:00."
}
```
When to use Data Tables — and when not to
Data Tables are a powerful tool within their scope, but they’re not the right solution for everything.
Use them when…
- You need simple persistence inside n8n without external dependencies
- The data volume is manageable (hundreds to a few thousand records)
- The data is specific to your automations and doesn’t need to be accessible from other applications
- You want a quick prototype before investing in real database infrastructure
- Your use case is temporary or low-risk and data loss wouldn’t be critical
Avoid them when…
- You need to handle large volumes of data (thousands of records with frequent concurrent access)
- The data must be accessible from other applications outside of n8n
- You need complex relationships between tables (joins, foreign keys, referential integrity)
- The data is business-critical and requires backups, replication, and availability guarantees
- You need complex queries with aggregations, groupings, or advanced filters
In those cases, connect a real database instead. n8n has native nodes for PostgreSQL, MySQL, MongoDB, Supabase, and many others, so the integration is straightforward.
Conclusion
Data Tables round out the core n8n toolkit. Up until now, our workflows processed data without remembering anything; now they can maintain state, record history, and manage entities persistently.
With this, the n8n series has covered the full cycle: installation, concepts, best practices, intermediate patterns, and now persistence. Upcoming articles will explore AI integrations (LLM and AI Agent nodes) and building more complex automations using sub-workflows and reusable modules.
Happy building!