In the previous chapters of this series, we’ve explored everything from foundational concepts of database design to more advanced SQL use cases using PostgreSQL. We’ve built a system to manage a public transportation network and covered how to define relationships between entities, perform complex queries, and even build views to simplify access to key data.
Now it’s time to go one step further. In this article, we’ll focus on two powerful tools that allow us to bring business logic directly into the database engine itself: functions and triggers. These features are especially useful when we want to automate repetitive tasks, enforce data validation, or keep data in sync, all without relying on the application layer.
A function is a reusable block of code that runs directly on the database server. It can take input parameters and return a value (or a set of results). Functions are useful for encapsulating logic, validating data, or performing calculations in an efficient and consistent way—just like functions in any programming language.
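For example, here is a minimal sketch of what a standalone function looks like in PostgreSQL; the name and the fixed tax rate are purely illustrative and not part of our transport schema:

-- Illustrative only: a reusable calculation wrapped in a function
CREATE OR REPLACE FUNCTION add_tax(net_amount NUMERIC)
RETURNS NUMERIC AS $$
BEGIN
    -- Apply a fixed 21% rate and return the gross amount
    RETURN net_amount * 1.21;
END;
$$ LANGUAGE plpgsql;

-- It can then be called like any built-in function:
SELECT add_tax(100);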
A trigger is an automatic action that executes in response to an event on a table—such as an `INSERT`, `UPDATE`, or `DELETE`. A trigger can fire before or after the event takes place, and it typically calls an associated function to run additional logic.
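In its most general form, a trigger declaration looks like the sketch below; `my_table` and `my_trigger_function` are placeholders, and the concrete examples for our transport schema appear later in this article:

-- Generic shape of a trigger definition (placeholder names)
CREATE TRIGGER my_trigger
BEFORE INSERT OR UPDATE ON my_table        -- the event(s) that fire it
FOR EACH ROW                               -- run once per affected row
EXECUTE FUNCTION my_trigger_function();    -- the function holding the logic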
Both functions and triggers have been part of the SQL standard since SQL:1999 (with refinements in later versions), although their exact implementation can vary slightly depending on the database engine. These tools were designed to allow certain pieces of logic to live directly inside the database, making business rules more consistent, efficient, and reusable.
Historically, adoption was gradual. In the early days of database applications, most validation and business logic were handled by the client-side application. But as systems became more distributed and complex, it became increasingly valuable to have shared logic centralized in one place—the database itself.
Today, most modern relational database engines support both features, though the syntax and capabilities differ between systems:
| Database engine | Function support | Trigger support | Function language |
|---|---|---|---|
| PostgreSQL | ✅ Full support | ✅ Full support | PL/pgSQL, SQL, others |
| MySQL / MariaDB | ✅ Partial support | ✅ Partial support | SQL |
| Oracle | ✅ Full support | ✅ Full support | PL/SQL |
| SQL Server (MSSQL) | ✅ Full support | ✅ Full support | T-SQL |
| SQLite | ⚠️ Limited* | ✅ Basic support | SQL |
Note: In the case of SQLite, custom functions can be created using extensions written in languages like C or Python, but they are not supported natively as in other engines.
While functions and triggers might seem similar at first glance—since both allow you to run logic inside the database—they serve different purposes and behave differently. Here’s a simple comparison to help you decide when to use one or the other:
| Feature | Functions | Triggers |
|---|---|---|
| Executed… | Manually (via query or from another function) | Automatically (in response to table events) |
| Invoked using | `SELECT`, `CALL`, or from other functions/triggers | Not manually invoked |
| Control over execution | You decide exactly when to run them | Always run when the event occurs |
| Typical use | Reusable logic, calculations, validations | Auditing, automation, data synchronization |
| Return value | Can return values or result sets | Does not return a value; modifies execution flow |
| Best for | Encapsulating complex logic, reusable operations | Automatic validations, reactive actions |
In summary:
💡 Very often, they are used together: a trigger is linked to a function that contains the actual logic to be executed. This keeps the client-side code cleaner and easier to maintain.
In this section, we’ll put these concepts into practice by building upon the public transport system we’ve been developing in a previous chapter of this series.
So far, we’ve designed a schema that includes entities like `route`, `stop`, `vehicle`, and `schedule`. These are all interconnected to represent route logic, stop locations, scheduled times, and assigned vehicles. Now, we’ll add functions and triggers to extend this model and automate certain repetitive tasks—especially those prone to error when done manually.
Next, we’ll walk through a series of improvements to the current data model using custom functions and automatic triggers. These enhancements will allow us to:
- Track changes in schedules: add an `updated_at` column to the `schedule` table and keep it up to date automatically with a `BEFORE UPDATE` trigger.
- Enforce a vehicle limit per route: validate, before each insertion, that no more than the allowed number of vehicles are assigned to the same route.
- Basic auditing: record in a log table (`event_log`) every time a stop (`stop`) is deleted, including the time and which stop was removed.

As we go through each task, we’ll explain every step in detail with commented SQL code so you can follow along, try it in your own environment, or adapt it to your specific needs.
💡 Just like in earlier articles, we’ll be working with a PostgreSQL setup that runs locally using Docker. If you haven’t configured this yet, we recommend reviewing our previous article on how to run PostgreSQL in a container.
Let’s start with the first task: tracking the last modification date in the system’s schedules. 👇
Tracking the last modification date (`updated_at` in `schedule`)

One of the most common tasks in relational databases is keeping track of when a record was last updated. This can be useful for auditing purposes, data synchronization, or simply monitoring changes. In this first step, we’ll implement this logic for the `schedule` table in our public transport system.
Although most modern ORMs and development frameworks can handle this automatically, it’s useful to know how to set it up manually—especially when you need more control or want a fully database-driven solution.
Adding the `updated_at` column

First, let’s add a new column to the `schedule` table to store the timestamp of the last update:
ALTER TABLE schedule
ADD COLUMN updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP;
This column is automatically initialized with the current date and time when the record is first inserted, but it does not update automatically when the row is modified. To make that happen, we need to use a function and a trigger.
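You can check this behavior yourself before adding the trigger; the snippet below assumes, as in previous chapters, that the `schedule` table already contains a row with id = 1:

-- Without a trigger, updating a row does NOT touch updated_at
UPDATE schedule SET scheduled_time = scheduled_time + INTERVAL '5 minutes' WHERE id = 1;

-- updated_at still holds the value it received when the row was inserted
SELECT id, scheduled_time, updated_at FROM schedule WHERE id = 1;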
Creating the function to update `updated_at`

We’ll create a PostgreSQL function using the `plpgsql` language that will handle updating the `updated_at` field every time a row in the `schedule` table is modified.
CREATE OR REPLACE FUNCTION update_schedule_timestamp()
RETURNS TRIGGER AS $$
BEGIN
NEW.updated_at := CURRENT_TIMESTAMP;
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
What does this function do?

- It receives the row being modified (`NEW`) before it’s written to the database.
- It sets the `updated_at` field to the current date and time (`CURRENT_TIMESTAMP`).

Now let’s connect the function to the `schedule` table using a trigger. This trigger will be executed before every update (`BEFORE UPDATE`), ensuring that `updated_at` is correctly refreshed without requiring any manual changes.
CREATE TRIGGER trg_update_schedule_timestamp
BEFORE UPDATE ON schedule
FOR EACH ROW
EXECUTE FUNCTION update_schedule_timestamp();
Technical note:

- `FOR EACH ROW` means the trigger will execute once for every row affected by the operation.
- Triggers can also be defined to run once per statement (`FOR EACH STATEMENT`), but that wouldn’t work in this case because we need to update data on a row-by-row basis.

To make sure everything is working properly, we can update any row in the `schedule` table and then check the value of the `updated_at` field.
-- Check the initial state of the schedule
SELECT id, scheduled_time, updated_at FROM schedule WHERE id = 1;
-- Update the scheduled_time of the first row
UPDATE schedule SET scheduled_time = scheduled_time + INTERVAL '15 minutes' WHERE id = 1;
-- Check the updated state of the schedule
SELECT id, scheduled_time, updated_at FROM schedule WHERE id = 1;
You should see that the value of `updated_at` has been updated to the exact date and time when you performed the modification.
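One nice side effect of keeping the logic in a function is that it isn’t tied to the `schedule` table: the same function can back a trigger on any table that has an `updated_at` column. For example, if you also added such a column to `route` (not part of the original schema, shown here only as a sketch):

-- Hypothetical: reuse the same trigger function for the route table
ALTER TABLE route ADD COLUMN updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP;

CREATE TRIGGER trg_update_route_timestamp
BEFORE UPDATE ON route
FOR EACH ROW
EXECUTE FUNCTION update_schedule_timestamp();

If you adopt this pattern, renaming the function to something more generic would make its reuse clearer.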
In many management systems, we need to enforce certain logical rules that go beyond what traditional constraints can handle. For example, we might want to limit the number of vehicles assigned to a single route to avoid operational overload or simply to meet specific business requirements.
In this second task, we’ll implement a validation rule that prevents assigning more than 10 vehicles to the same route.
This function will count how many vehicles are already assigned to the route specified in the new vehicle (`NEW.route_id`). If there are already 10 or more, it will raise an exception and prevent the insertion.
CREATE OR REPLACE FUNCTION validate_vehicle_limit()
RETURNS TRIGGER AS $$
DECLARE
vehicle_count INT;
BEGIN
-- Number of vehicles assigned to the route
SELECT COUNT(*) INTO vehicle_count FROM vehicle WHERE route_id = NEW.route_id;
-- Check if the limit is exceeded
IF vehicle_count >= 10 THEN
RAISE EXCEPTION 'Route % already has the maximum number of vehicles allowed (10)', NEW.route_id;
END IF;
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
What does this function do?

- It counts how many vehicles are currently assigned to the route referenced by the new row (`NEW.route_id`).
- If that count is already 10 or more, it raises an exception and the insertion is aborted.
- Otherwise, it returns `NEW` so the insert proceeds normally.
This trigger will be executed before a new vehicle is inserted, ensuring that the validation takes place right before the data is written to the database.
CREATE TRIGGER trg_validate_vehicle_limit
BEFORE INSERT ON vehicle
FOR EACH ROW
EXECUTE FUNCTION validate_vehicle_limit();
You can test this feature by inserting vehicles assigned to a specific route. Once the route with ID `3` has 10 vehicles registered, trying to add one more should produce an error like this:
-- Insert 10 vehicles into route 3
INSERT INTO vehicle (plate_number, capacity, route_id) VALUES
('TTT-0001', 40, 3),
('TTT-0002', 40, 3),
('TTT-0003', 40, 3),
('TTT-0004', 40, 3),
('TTT-0005', 40, 3),
('TTT-0006', 40, 3),
('TTT-0007', 40, 3),
('TTT-0008', 40, 3),
('TTT-0009', 40, 3),
('TTT-0010', 40, 3); -- This would be the 10th vehicle
-- Insert the 11th vehicle
INSERT INTO vehicle (plate_number, capacity, route_id) VALUES
('OVER-9999', 40, 3);
-- Expected output:
-- ERROR: Route 3 already has the maximum number of vehicles allowed (10)
-- Inserting a vehicle into a different route should work normally
INSERT INTO vehicle (plate_number, capacity, route_id) VALUES ('XYZ789Y', 50, 4);
Thanks to this logic implemented directly in the database, we ensure the consistency of the system regardless of the client or application interacting with the data.
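One caveat worth keeping in mind: as defined, the trigger only fires on INSERT, so moving an existing vehicle to an already full route via an UPDATE would bypass the check. If you want to cover that case as well, the trigger can be recreated to also fire when `route_id` changes, reusing the same validation function (a sketch, not required for the rest of the article):

-- Also validate when a vehicle is reassigned to another route
DROP TRIGGER IF EXISTS trg_validate_vehicle_limit ON vehicle;

CREATE TRIGGER trg_validate_vehicle_limit
BEFORE INSERT OR UPDATE OF route_id ON vehicle
FOR EACH ROW
EXECUTE FUNCTION validate_vehicle_limit();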
In some systems—especially in regulated or mission-critical environments—it’s important to maintain a record of changes made to the database. While PostgreSQL offers advanced solutions like `pgAudit`, a simple implementation using triggers is often enough to audit specific actions.

In this task, we’ll create an audit table to log every time a stop (`stop`) is deleted from the transport system.
We’ll start by creating a new table that stores deletion events, including basic information like the stop’s ID, its name, and the timestamp of the deletion.
CREATE TABLE event_log (
id SERIAL PRIMARY KEY,
event_type TEXT NOT NULL,
table_name TEXT NOT NULL,
record_id INTEGER,
description TEXT,
deleted_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
This function will be triggered automatically whenever a stop (`stop`) is deleted. It will log the action into the `event_log` table.
CREATE OR REPLACE FUNCTION log_stop_deletion()
RETURNS TRIGGER AS $$
BEGIN
INSERT INTO event_log (event_type, table_name, record_id, description)
VALUES (
'DELETE',
'stop',
OLD.id,
CONCAT('Stop "', OLD.name, '" was deleted.')
);
RETURN OLD;
END;
$$ LANGUAGE plpgsql;
Why do we use `OLD`?

- In `AFTER DELETE` triggers, the data from the deleted row is available through the `OLD` variable. There is no `NEW`, since there’s no new record to reference.

This trigger will run after a stop is deleted (`AFTER DELETE`), ensuring that the audit action is properly recorded.
CREATE TRIGGER trg_log_stop_deletion
AFTER DELETE ON stop
FOR EACH ROW
EXECUTE FUNCTION log_stop_deletion();
To test the audit feature, delete a stop and then query the `event_log` table to confirm that the action was recorded correctly.
-- Delete a stop
DELETE FROM stop WHERE id = 2;
-- Query the logs
SELECT * FROM event_log ORDER BY deleted_at DESC;
You should now see a new entry in the `event_log` table indicating that the stop with ID `2` was deleted, along with the exact date and time of the action.
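If you later need to audit deletions on more tables, PL/pgSQL exposes variables such as `TG_OP` (the operation) and `TG_TABLE_NAME` (the table that fired the trigger) inside trigger functions, so the logging function can be made generic instead of hard-coding 'DELETE' and 'stop'. Here is a sketch of that variant, assuming every audited table has an integer `id` primary key:

CREATE OR REPLACE FUNCTION log_row_deletion()
RETURNS TRIGGER AS $$
BEGIN
    INSERT INTO event_log (event_type, table_name, record_id, description)
    VALUES (
        TG_OP,            -- 'DELETE' when fired by a DELETE trigger
        TG_TABLE_NAME,    -- name of the table the trigger is attached to
        OLD.id,
        CONCAT('Row ', OLD.id, ' was deleted from ', TG_TABLE_NAME, '.')
    );
    RETURN OLD;
END;
$$ LANGUAGE plpgsql;

The same function could then back `AFTER DELETE` triggers on `stop`, `route`, `vehicle`, or any other table you want to audit.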
In this article, we explored the power of functions and triggers in PostgreSQL, applied to a real-world system. These tools allow you to automate tasks, enforce business rules, and maintain data integrity—right from within the database engine.
They add valuable improvements without relying entirely on application code and are fully scalable to more complex scenarios.
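As a quick way to review everything you’ve built in this article, you can list the triggers defined in your database through the standard `information_schema.triggers` view:

-- List user-defined triggers, the table they belong to, and when they fire
SELECT trigger_name, event_object_table, action_timing, event_manipulation
FROM information_schema.triggers
ORDER BY event_object_table, trigger_name;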
Got any ideas of what you could automate in your own databases? Try experimenting with these examples and tailor them to fit your needs!
And most importantly… Happy coding!