Structr
APIs & Integrations
Structr provides email functionality for both sending and receiving messages. You can send simple emails with a single function call, compose complex messages with attachments and custom headers, and automatically fetch incoming mail from IMAP or POP3 mailboxes.
Quick Start
To send your first email:
- Configure your SMTP server in the Configuration Interface under SMTP Settings (host, port, user, password)
- Call sendPlaintextMail() or sendHtmlMail():
$.sendPlaintextMail(
'sender@example.com', 'Sender Name',
'recipient@example.com', 'Recipient Name',
'Subject Line',
'Email body text'
);
That’s it. For multiple recipients, attachments, or custom headers, see the Advanced Email API section below.
Sending Emails
SMTP Configuration
Before sending emails, configure your SMTP server in the Configuration Interface under SMTP Settings:
| Setting | Description |
|---|---|
| smtp.host | SMTP server hostname |
| smtp.port | SMTP server port (typically 587 for TLS, 465 for SSL) |
| smtp.user | SMTP username for authentication |
| smtp.password | SMTP password |
| smtp.tls.enabled | Enable TLS encryption |
| smtp.tls.required | Require TLS (fail if not available) |
Multiple SMTP Configurations
You can define multiple SMTP configurations for different purposes (transactional emails, marketing, different departments). Add a prefix to each setting:
# Default configuration
smtp.host = mail.example.com
smtp.port = 587
smtp.user = default@example.com
smtp.password = secret
smtp.tls.enabled = true
smtp.tls.required = true
# Marketing configuration
marketing.smtp.host = marketing-mail.example.com
marketing.smtp.port = 587
marketing.smtp.user = marketing@example.com
marketing.smtp.password = secret
marketing.smtp.tls.enabled = true
marketing.smtp.tls.required = true
Select a configuration in your code with mailSelectConfig() before sending.
Basic Email Functions
For simple emails, use the one-line functions:
sendHtmlMail:
$.sendHtmlMail(
'info@example.com', // fromAddress
'Example Company', // fromName
'user@domain.com', // toAddress
'John Doe', // toName
'Welcome to Our Service', // subject
'<h1>Welcome!</h1><p>Thank you for signing up.</p>', // htmlContent
'Welcome! Thank you for signing up.' // textContent
);
sendPlaintextMail:
$.sendPlaintextMail(
'info@example.com', // fromAddress
'Example Company', // fromName
'user@domain.com', // toAddress
'John Doe', // toName
'Your Order Confirmation', // subject
'Your order #12345 has been confirmed.' // content
);
With attachments:
let invoice = $.first($.find('File', 'name', 'invoice.pdf'));
$.sendHtmlMail(
'billing@example.com',
'Billing Department',
'customer@domain.com',
'Customer Name',
'Your Invoice',
'<p>Please find your invoice attached.</p>',
'Please find your invoice attached.',
[invoice] // attachments must be a list
);
Advanced Email API
For complex emails with multiple recipients, custom headers, or dynamic content, use the Advanced Mail API. This follows a builder pattern: start with mailBegin(), configure the message, then send with mailSend().
Basic example:
$.mailBegin('support@example.com', 'Support Team', 'Re: Your Question', '<p>Thank you for contacting us.</p>', 'Thank you for contacting us.');
$.mailAddTo('customer@domain.com', 'Customer Name');
$.mailSend();
Complete example with all features:
// Start a new email
$.mailBegin('newsletter@example.com', 'Newsletter');
// Set content
$.mailSetSubject('Monthly Newsletter - January 2026');
$.mailSetHtmlContent('<h1>Newsletter</h1><p>This month\'s updates...</p>');
$.mailSetTextContent('Newsletter\n\nThis month\'s updates...');
// Add recipients
$.mailAddTo('subscriber1@example.com', 'Subscriber One');
$.mailAddTo('subscriber2@example.com', 'Subscriber Two');
$.mailAddCc('marketing@example.com', 'Marketing Team');
$.mailAddBcc('archive@example.com');
// Set reply-to address
$.mailAddReplyTo('feedback@example.com', 'Feedback');
// Add custom headers
$.mailAddHeader('X-Campaign-ID', 'newsletter-2026-01');
$.mailAddHeader('X-Mailer', 'Structr');
// Add attachments
let attachment = $.first($.find('File', 'name', 'report.pdf'));
$.mailAddAttachment(attachment, 'January-Report.pdf'); // optional custom filename
// Send and get message ID
let messageId = $.mailSend();
if ($.mailHasError()) {
$.log('Failed to send email: ' + $.mailGetError());
} else {
$.log('Email sent with ID: ' + messageId);
}
Using Different SMTP Configurations
Select a named configuration before sending:
$.mailBegin('marketing@example.com', 'Marketing');
$.mailSelectConfig('marketing'); // Use marketing SMTP settings
$.mailAddTo('customer@example.com');
$.mailSetSubject('Special Offer');
$.mailSetHtmlContent('<p>Check out our latest deals!</p>');
$.mailSend();
To reset to the default configuration:
$.mailSelectConfig(''); // Empty string resets to default
Dynamic SMTP Configuration
For runtime-configurable SMTP settings (e.g., from database or user input):
$.mailBegin('sender@example.com', 'Sender');
$.mailSetManualConfig(
'smtp.provider.com', // host
587, // port
'username', // user
'password', // password
true, // useTLS
true // requireTLS
);
$.mailAddTo('recipient@example.com');
$.mailSetSubject('Test');
$.mailSetTextContent('Test message');
$.mailSend();
// Reset manual config for next email
$.mailResetManualConfig();
Configuration priority: manual config > selected config > default config.
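The priority chain can be illustrated with a small stand-alone sketch (plain JavaScript, not part of the Structr API; the function name resolveSmtpConfig is hypothetical):

```javascript
// Hypothetical illustration of how Structr resolves the SMTP configuration:
// a manual config wins over a selected named config, which wins over the default.
function resolveSmtpConfig({ manualConfig, selectedConfig, defaultConfig }) {
  if (manualConfig) return manualConfig;      // set via mailSetManualConfig()
  if (selectedConfig) return selectedConfig;  // set via mailSelectConfig('name')
  return defaultConfig;                       // plain smtp.* settings
}

const defaults  = { host: 'mail.example.com' };
const marketing = { host: 'marketing-mail.example.com' };

console.log(resolveSmtpConfig({ defaultConfig: defaults }).host);
// A selected config takes precedence over the default:
console.log(resolveSmtpConfig({ selectedConfig: marketing, defaultConfig: defaults }).host);
```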
Saving Outgoing Messages
Outgoing emails are not saved by default. To keep a record of sent emails, explicitly enable saving with mailSaveOutgoingMessage(true) before calling mailSend(). Structr then stores the message as an EMailMessage object:
$.mailBegin('support@example.com', 'Support');
$.mailAddTo('customer@example.com');
$.mailSetSubject('Ticket #12345 Update');
$.mailSetHtmlContent('<p>Your ticket has been updated.</p>');
// Enable saving before sending
$.mailSaveOutgoingMessage(true);
$.mailSend();
// Retrieve the saved message
let sentMessage = $.mailGetLastOutgoingMessage();
$.log('Saved message ID: ' + sentMessage.id);
Saved messages include all recipients, content, headers, and attachments. Attachments are copied to the file system under the configured attachment path.
Replying to Messages
To create a proper reply that mail clients can thread correctly:
// Get the original message
let originalMessage = $.first($.find('EMailMessage', 'id', originalId));
$.mailBegin('support@example.com', 'Support');
$.mailAddTo(originalMessage.fromMail);
$.mailSetSubject('Re: ' + originalMessage.subject);
$.mailSetHtmlContent('<p>Thank you for your message.</p>');
// Set In-Reply-To header for threading
$.mailSetInReplyTo(originalMessage.messageId);
$.mailSend();
Error Handling
Always check for errors after sending:
$.mailBegin('sender@example.com', 'Sender');
$.mailAddTo('recipient@example.com');
$.mailSetSubject('Test');
$.mailSetTextContent('Test message');
$.mailSend();
if ($.mailHasError()) {
let error = $.mailGetError();
$.log('Email failed: ' + error);
// Handle error (retry, notify admin, etc.)
} else {
$.log('Email sent successfully');
}
Common errors include authentication failures, connection timeouts, and invalid recipient addresses.
Sender Address Requirements
Most SMTP providers require the sender address to match your authenticated account. If you use a shared SMTP server, the from address must typically be your account email.
For example, if your SMTP account is user@example.com, sending from other@example.com will likely fail with an error like:
550 5.7.1 User not authorized to send on behalf of <other@example.com>
This also applies to Structr’s built-in mail templates for password reset and registration confirmation. By default, these emails are sent using the address configured in structr.conf under smtp.user (if it contains a valid email address). If not, the sender defaults to structr-mail-daemon@localhost, which is typically rejected by external mail providers. Configure the correct sender addresses in the Mail Templates area of the Admin UI.
Receiving Emails
Structr can automatically fetch emails from IMAP or POP3 mailboxes and store them as EMailMessage objects in the database. The MailService runs in the background and periodically checks all configured mailboxes.
MailService Configuration
Configure the MailService in the Configuration Interface:
| Setting | Default | Description |
|---|---|---|
| mail.maxemails | 25 | Maximum number of emails to fetch per mailbox per check |
| mail.updateinterval | 30000 | Interval between checks in milliseconds (default: 30 seconds) |
| mail.attachmentbasepath | /mail/attachments | Base path for storing email attachments |
Creating a Mailbox
Create a Mailbox object to configure an email account for fetching:
$.create('Mailbox', {
name: 'Support Inbox',
host: 'imap.example.com',
mailProtocol: 'imaps', // 'imaps' for IMAP over SSL, 'pop3' for POP3
port: 993, // Optional, uses protocol default if not set
user: 'support@example.com',
password: 'secret',
folders: ['INBOX', 'Support'] // Folders to monitor
});
| Property | Description |
|---|---|
| host | Mail server hostname |
| mailProtocol | imaps (IMAP over SSL) or pop3 |
| port | Server port (optional, defaults to protocol standard) |
| user | Account username |
| password | Account password |
| folders | Array of folder names to fetch from |
| overrideMailEntityType | Custom type extending EMailMessage (optional) |
How Mail Fetching Works
The MailService automatically:
- Connects to each configured mailbox at the configured interval
- Fetches messages from the specified folders (newest first)
- Checks for duplicates using the Message-ID header
- Creates EMailMessage objects for new messages
- Extracts and stores attachments as File objects
Duplicate detection first tries to match by messageId. If no Message-ID header exists, it falls back to matching by subject, from, to, and dates.
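The fallback logic can be sketched roughly like this (plain JavaScript, simplified; not the actual MailService implementation):

```javascript
// Simplified sketch of the duplicate check described above:
// match on messageId when present, otherwise fall back to
// comparing subject, from, to and the sent/received dates.
function isDuplicate(existing, incoming) {
  if (existing.messageId && incoming.messageId) {
    return existing.messageId === incoming.messageId;
  }
  return existing.subject === incoming.subject
    && existing.from === incoming.from
    && existing.to === incoming.to
    && existing.sentDate === incoming.sentDate
    && existing.receivedDate === incoming.receivedDate;
}

const stored = { messageId: '<abc@mail>', subject: 'Hi', from: 'a@x', to: 'b@y' };
console.log(isDuplicate(stored, { messageId: '<abc@mail>' })); // same Message-ID
```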
EMailMessage Properties
Fetched emails are stored with these properties:
| Property | Description |
|---|---|
| subject | Email subject |
| from | Sender display string (name and address) |
| fromMail | Sender email address only |
| to | Recipients (To:) |
| cc | Carbon copy recipients |
| bcc | Blind carbon copy recipients |
| content | Plain text content |
| htmlContent | HTML content |
| folder | Source folder name |
| sentDate | When the email was sent |
| receivedDate | When the email was received |
| messageId | Unique message identifier |
| inReplyTo | Message-ID of the parent message (for threading) |
| header | JSON string containing all headers |
| mailbox | Reference to the source Mailbox |
| attachedFiles | List of attached File objects |
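Because the header property is stored as a JSON string, parse it before reading individual headers. A minimal sketch with sample data, assuming a simple name-to-value mapping (the actual structure and key casing depend on the sending mail server):

```javascript
// Sample value standing in for an EMailMessage's header property,
// which holds all headers serialized as a JSON string.
const headerJson = '{"Message-ID":"<abc@mail.example.com>","X-Mailer":"Structr"}';

const headers = JSON.parse(headerJson);
console.log(headers['Message-ID']);
```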
Listing Available Folders
To discover which folders are available on a mail server, call the method getAvailableFoldersOnServer:
let mailbox = $.first($.find('Mailbox', 'name', 'Support Inbox'));
let folders = mailbox.getAvailableFoldersOnServer();
for (let folder of folders) {
$.log('Available folder: ' + folder);
}
Manual Mail Fetching
While the MailService fetches automatically, you can trigger an immediate fetch:
let mailbox = $.first($.find('Mailbox', 'name', 'Support Inbox'));
mailbox.fetchMails();
Custom Email Types
To add custom properties or methods to incoming emails, create a type that extends EMailMessage and configure it on the mailbox:
// Assuming you have a custom type 'SupportTicketMail' extending EMailMessage
let mailbox = $.first($.find('Mailbox', 'name', 'Support Inbox'));
mailbox.overrideMailEntityType = 'SupportTicketMail';
New emails will be created as your custom type, allowing you to add lifecycle methods like onCreate for automatic processing.
Processing Incoming Emails
To automatically process incoming emails, create an onCreate method on EMailMessage (or your custom type):
// onCreate method on EMailMessage or custom subtype
{
$.log('New email received: ' + $.this.subject);
// Example: Create a support ticket from the email
if ($.this.mailbox.name === 'Support Inbox') {
$.create('SupportTicket', {
title: $.this.subject,
description: $.this.content,
customerEmail: $.this.fromMail,
sourceEmail: $.this
});
}
}
Attachment Storage
Email attachments are automatically extracted and stored as File objects. The storage path follows this structure:
{mail.attachmentbasepath}/{year}/{month}/{day}/{mailbox-uuid}/
For example: /mail/attachments/2026/2/2/a1b2c3d4-...
Attachments are linked to their email via the attachedFiles property.
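The path layout can be derived from the date and mailbox UUID as in this illustrative sketch (buildAttachmentPath is not a Structr function, and whether Structr zero-pads months is an assumption based on the example above):

```javascript
// Rebuilds the documented layout:
// {mail.attachmentbasepath}/{year}/{month}/{day}/{mailbox-uuid}/
function buildAttachmentPath(basePath, date, mailboxUuid) {
  const year = date.getFullYear();
  const month = date.getMonth() + 1; // not zero-padded, matching the example above
  const day = date.getDate();
  return `${basePath}/${year}/${month}/${day}/${mailboxUuid}/`;
}

console.log(buildAttachmentPath('/mail/attachments', new Date(2026, 1, 2), 'a1b2c3d4'));
// → /mail/attachments/2026/2/2/a1b2c3d4/
```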
Best Practices
Sending
- Always check mailHasError() after sending and handle failures appropriately
- Use mailSaveOutgoingMessage() for important emails to maintain a record
- Set mailSetInReplyTo() when replying to maintain proper threading
- Provide both HTML and plain text content for maximum compatibility
Receiving
- Set a reasonable mail.maxemails value to avoid overwhelming the system
- Use overrideMailEntityType to add custom processing logic
- Monitor the server log for connection or authentication errors
- Choose the mail.updateinterval based on how quickly you need to process incoming mail
Security
- Store SMTP and mailbox passwords securely
- Use TLS/SSL for all mail connections
- Be cautious with attachments from unknown senders
Related Topics
- Scheduled Tasks - Triggering email-related tasks on a schedule
- Business Logic - Processing emails in lifecycle methods
- Files - Working with email attachments
OpenAPI
Structr automatically generates OpenAPI 3.0.2 documentation for your REST API. This documentation describes your types, methods, and endpoints in a standardized format that other developers and tools can use to understand and interact with your API.
When You Need OpenAPI
OpenAPI documentation becomes valuable when your API moves beyond internal use:
- External developers need to integrate with your system without access to your codebase
- Frontend teams want to generate TypeScript types or API clients automatically
- Partners or customers require formal API documentation as part of a contract
- API testing tools like Postman can import OpenAPI specs to create test collections
- Code generators can create client SDKs in various languages from your spec
If your Structr application is only used through its own pages and you control all the code, you may not need OpenAPI at all. But as soon as others consume your API, OpenAPI saves time and prevents misunderstandings.
How It Works
Structr generates and serves the OpenAPI specification directly from your schema. There is no separate documentation file to maintain - when you request the OpenAPI endpoint, Structr reads your current schema and builds the specification on the fly. Add a property to a type or change a method signature, and the next request to the OpenAPI endpoint reflects that change.
You control what appears in the documentation: types and methods must be explicitly enabled for OpenAPI output, and you can add descriptions, summaries, and parameter documentation to make the spec useful for consumers.
Accessing the Documentation
Swagger UI
Structr includes Swagger UI, an interactive documentation interface where you can explore your API, view endpoint details, and test requests directly in the browser.
Access Swagger UI in the Admin UI:
- Open the Code area
- Click “OpenAPI” in the navigation tree on the left
Swagger UI displays all documented endpoints grouped by tag. You can expand any endpoint to see its parameters, request body schema, and response format. The “Try it out” feature lets you execute requests and see real responses.
JSON Endpoints
The raw OpenAPI specification is available at:
/structr/openapi
This returns the complete OpenAPI document as JSON. You can use this URL with any OpenAPI-compatible tool - code generators, API testing tools, or documentation platforms.
When you organize your API with tags, each tag also gets its own endpoint:
/structr/openapi/<tag>.json
For example, if you tag your project management types with “projects”, the documentation is available at /structr/openapi/projects.json. This is useful when you want to share only a subset of your API with specific consumers.
Configuring Types for OpenAPI
By default, types are not included in the OpenAPI output. To document a type and its endpoints, you must explicitly enable it and assign a tag. Methods on that type must also be enabled separately - enabling a type does not automatically include all its methods.
Note: OpenAPI visibility requires explicit opt-in at two levels: first enable the type, then enable each method you want to document. This gives you fine-grained control over what appears in your API documentation.
Enabling OpenAPI Output for Types
In the Schema area or Code area:
- Select the type you want to document
- Open the type settings
- Enable “Include in OpenAPI output”
- Enter a tag name (e.g., “projects”, “users”, “public-api”)
All types with the same tag are grouped together in the documentation. The tag also determines the URL for the tag-specific endpoint (/structr/openapi/<tag>.json).
Type Documentation Fields
Each type has fields for OpenAPI documentation:
| Field | Purpose |
|---|---|
| Summary | A short one-line description shown in endpoint lists |
| Description | A detailed explanation shown when the endpoint is expanded |
Write the summary for scanning - developers should understand what the type represents at a glance. Use the description for details: what the type is used for, important relationships, or usage notes.
Documenting Methods
Schema methods must also be explicitly enabled for OpenAPI output - just enabling the type is not enough. Each method you want to document needs its own OpenAPI configuration.
Enabling OpenAPI Output for Methods
In the Schema area or Code area:
- Select the method you want to document
- Open the API tab
- Enable OpenAPI output for this method
- Add summary, description, and parameter documentation
Methods marked as “Not callable via HTTP” cannot be included in OpenAPI documentation since they are not accessible via the REST API.
Method Documentation Fields
| Field | Purpose |
|---|---|
| Summary | A short description of what the method does |
| Description | Detailed explanation, including side effects or prerequisites |
Parameter Documentation
In the API tab, you can define typed parameters for your method. Each parameter has:
| Field | Purpose |
|---|---|
| Name | The parameter name as it appears in requests |
| Type | The expected data type (String, Integer, Boolean, etc.) |
| Description | What the parameter is used for |
| Required | Whether the parameter must be provided |
Structr validates incoming requests against these definitions before your code runs. This provides automatic input validation and generates accurate parameter documentation.
Example: Documenting a Search Method
For a method searchProjects that searches projects by keyword:
| Setting | Value |
|---|---|
| Summary | Search projects by keyword |
| Description | Returns all projects where the name or description contains the search term. Results are sorted by relevance. |
Parameters:
| Name | Type | Required | Description |
|---|---|---|---|
| query | String | Yes | The search term to match against project names and descriptions |
| limit | Integer | No | Maximum number of results (default: 20) |
| offset | Integer | No | Number of results to skip for pagination |
Documenting User-Defined Functions
User-defined functions (global schema methods) can also be documented for OpenAPI. The same fields are available: summary, description, and typed parameters.
This is useful when you create utility endpoints that don’t belong to a specific type - for example, a global search across multiple types or a health check endpoint.
Global Settings
Configure global OpenAPI settings in structr.conf or through the Configuration Interface:
| Setting | Default | Description |
|---|---|---|
| openapiservlet.server.title | Structr REST Server | The title shown at the top of the documentation |
| openapiservlet.server.version | 1.0.1 | The API version number |
Set these to match your application:
openapiservlet.server.title = Project Management API
openapiservlet.server.version = 2.1.0
The title appears prominently in Swagger UI and helps consumers identify which API they are viewing. The version number should follow semantic versioning and be updated when you make changes to your API.
Standard Endpoints
Structr automatically documents the standard endpoints for authentication and system operations:
- /structr/rest/login - Session-based login
- /structr/rest/logout - End the current session
- /structr/rest/token - JWT token creation and refresh
- /structr/rest/me - Current user information
These endpoints appear in the documentation without additional configuration.
Organizing Your API
Choosing Tags
Tags group related endpoints in the documentation. Choose tags based on how API consumers think about your domain:
| Approach | Example Tags |
|---|---|
| By domain area | projects, tasks, users, reports |
| By access level | public, internal, admin |
| By consumer | mobile-app, web-frontend, integrations |
You can use multiple tag strategies by giving some types domain tags and others access-level tags. A type can only have one tag, so choose the most useful grouping for your consumers.
What to Include
Not every type needs to be in the OpenAPI documentation. Consider including:
- Types that external consumers interact with directly
- Types that represent your domain model
- Utility methods that provide specific functionality
Consider excluding:
- Internal types used only by your application logic
- Types that are implementation details
- Methods that should not be called by external consumers (mark these as “Not callable via HTTP”)
Best Practices
Write for Your Consumers
Documentation is for people who don’t know your codebase. Avoid jargon, explain abbreviations, and provide context. A good description answers: What is this? When would I use it? What should I know before using it?
Keep Summaries Short
Summaries appear in lists and should be scannable. Aim for under 60 characters. Save details for the description field.
Document Side Effects
If a method sends emails, creates related objects, or has other side effects, document them. Consumers need to know what happens when they call your API.
Version Your API
Update openapiservlet.server.version when you make breaking changes. This helps consumers know when they need to update their integrations.
Review the Output
Periodically open Swagger UI and review your documentation as a consumer would. Look for missing descriptions, unclear summaries, or undocumented parameters.
Related Topics
- REST Interface - How the REST API works and how to access it
- Data Model - Configuring types and their OpenAPI settings
- Business Logic - Creating methods and configuring their API exposure
- Code Area (Admin UI) - Using Swagger UI and the method editor
JDBC
Structr can query external SQL databases directly using the jdbc() function. This allows you to import data from MySQL, PostgreSQL, Oracle, SQL Server, or any other database with a JDBC driver, without setting up intermediate services or ETL pipelines.
When to Use JDBC
JDBC integration is useful when you need to:
- Import data from existing SQL databases into Structr
- Synchronize data between Structr and legacy systems
- Query external databases without migrating data
- Build dashboards that combine Structr data with external sources
For ongoing synchronization, combine JDBC queries with scheduled tasks. For one-time imports, run the query manually or through a schema method.
Prerequisites
JDBC drivers are not included with Structr. Before using the jdbc() function, you must install the appropriate driver for your database.
Installing a JDBC Driver
- Download the JDBC driver JAR for your database:
- MySQL: MySQL Connector/J
- PostgreSQL: PostgreSQL JDBC Driver
- SQL Server: Microsoft JDBC Driver
- Oracle: Oracle JDBC Driver
- Copy the JAR file to Structr’s lib directory:
cp mysql-connector-java-8.0.33.jar /opt/structr/lib/
- Restart Structr to load the driver
The jdbc() Function
The jdbc() function executes an SQL statement against an external database and returns any results.
Syntax
$.jdbc(url, query)
$.jdbc(url, query, username, password)
| Parameter | Description |
|---|---|
| url | JDBC connection URL including host, port, and database name |
| query | SQL statement to execute |
| username | Optional: database username (can also be included in URL) |
| password | Optional: database password (can also be included in URL) |
Return Value
For SELECT statements, the function returns an array of objects. Each object represents a row, with properties matching the column names.
[
{ id: 1, name: "Alice", email: "alice@example.com" },
{ id: 2, name: "Bob", email: "bob@example.com" }
]
For INSERT, UPDATE, and DELETE statements, the function executes the statement but returns an empty result.
Connection URLs
JDBC connection URLs follow a standard format but vary slightly by database:
| Database | URL Format |
|---|---|
| MySQL | jdbc:mysql://host:3306/database |
| PostgreSQL | jdbc:postgresql://host:5432/database |
| SQL Server | jdbc:sqlserver://host:1433;databaseName=database |
| Oracle | jdbc:oracle:thin:@host:1521:database |
| MariaDB | jdbc:mariadb://host:3306/database |
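Since most of these URLs share the host:port/database shape, a tiny helper can assemble them. A plain JavaScript sketch that covers only the databases with that shape (SQL Server and Oracle use different formats, as the table above shows):

```javascript
// Assemble a JDBC URL for databases that use the host:port/database shape
// (MySQL, PostgreSQL, MariaDB). SQL Server and Oracle are intentionally
// not covered here because their URL formats differ.
function jdbcUrl(dialect, host, port, database) {
  return `jdbc:${dialect}://${host}:${port}/${database}`;
}

console.log(jdbcUrl('mysql', 'localhost', 3306, 'mydb'));
// → jdbc:mysql://localhost:3306/mydb
console.log(jdbcUrl('postgresql', 'db.example.com', 5432, 'analytics'));
```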
Authentication
You can provide credentials either as separate parameters or in the URL:
// Credentials as parameters (recommended)
let result = $.jdbc("jdbc:mysql://localhost:3306/mydb", "SELECT * FROM users", "admin", "secret");
// Credentials in URL
let result = $.jdbc("jdbc:mysql://localhost:3306/mydb?user=admin&password=secret", "SELECT * FROM users");
Examples
Importing from MySQL
{
let url = "jdbc:mysql://localhost:3306/legacy_crm";
let query = "SELECT id, name, email, created_at FROM customers WHERE active = 1";
let rows = $.jdbc(url, query, "reader", "secret");
for (let row of rows) {
$.create('Customer', {
externalId: row.id,
name: row.name,
eMail: row.email,
importedAt: $.now
});
}
$.log('Imported ' + $.size(rows) + ' customers');
}
Querying PostgreSQL
{
let url = "jdbc:postgresql://db.example.com:5432/analytics";
let query = "SELECT product_id, SUM(quantity) as total FROM orders GROUP BY product_id";
let rows = $.jdbc(url, query, "readonly", "secret");
for (let row of rows) {
let product = $.first($.find('Product', 'externalId', row.product_id));
if (product) {
product.totalOrders = row.total;
}
}
}
Querying SQL Server
{
let url = "jdbc:sqlserver://sqlserver.example.com:1433;databaseName=inventory";
let query = "SELECT sku, stock_level, warehouse FROM inventory WHERE stock_level < 10";
let rows = $.jdbc(url, query, "reader", "secret");
// Process low-stock items
for (let row of rows) {
$.create('LowStockAlert', {
sku: row.sku,
currentStock: row.stock_level,
warehouse: row.warehouse,
alertDate: $.now
});
}
}
Writing to External Databases
The jdbc() function can also execute INSERT, UPDATE, and DELETE statements:
{
let url = "jdbc:mysql://localhost:3306/external_system";
// Insert a record
$.jdbc(url, "INSERT INTO sync_log (source, timestamp, status) VALUES ('structr', NOW(), 'completed')", "writer", "secret");
// Update records
$.jdbc(url, "UPDATE orders SET synced = 1 WHERE synced = 0", "writer", "secret");
// Delete old records
$.jdbc(url, "DELETE FROM temp_data WHERE created_at < DATE_SUB(NOW(), INTERVAL 7 DAY)", "writer", "secret");
}
Write operations execute successfully but don’t return affected row counts. If you need confirmation, query the data afterward or use database-specific techniques like SELECT LAST_INSERT_ID().
Scheduled Synchronization
Combine JDBC with scheduled tasks for regular data synchronization:
// Global schema method: syncExternalOrders
// Cron expression: 0 */15 * * * * (every 15 minutes)
{
let lastSync = $.first($.find('SyncStatus', 'name', 'orders'));
let since = lastSync ? lastSync.lastRun : '1970-01-01';
let query = "SELECT * FROM orders WHERE updated_at > '" + since + "' ORDER BY updated_at";
let rows = $.jdbc("jdbc:mysql://orders.example.com:3306/shop", query, "sync", "secret");
for (let row of rows) {
let existing = $.first($.find('Order', 'externalId', row.id));
if (existing) {
existing.status = row.status;
existing.updatedAt = $.now;
} else {
$.create('Order', {
externalId: row.id,
customerEmail: row.customer_email,
total: row.total,
status: row.status
});
}
}
// Update sync timestamp
if (!lastSync) {
lastSync = $.create('SyncStatus', { name: 'orders' });
}
lastSync.lastRun = $.now;
$.log('Synced ' + $.size(rows) + ' orders');
}
Supported Databases
JDBC drivers are loaded automatically based on the connection URL (JDBC 4.0 auto-discovery). The following databases are commonly used with Structr:
| Database | Driver JAR | Example URL |
|---|---|---|
| MySQL | mysql-connector-java-x.x.x.jar | jdbc:mysql://host:3306/db |
| PostgreSQL | postgresql-x.x.x.jar | jdbc:postgresql://host:5432/db |
| SQL Server | mssql-jdbc-x.x.x.jar | jdbc:sqlserver://host:1433;databaseName=db |
| Oracle | ojdbc8.jar | jdbc:oracle:thin:@host:1521:sid |
| MariaDB | mariadb-java-client-x.x.x.jar | jdbc:mariadb://host:3306/db |
| H2 | h2-x.x.x.jar | jdbc:h2:~/dbfile |
| SQLite | sqlite-jdbc-x.x.x.jar | jdbc:sqlite:/path/to/db.sqlite |
Error Handling
Wrap JDBC calls in try-catch blocks to handle connection failures and query errors:
{
try {
let rows = $.jdbc("jdbc:mysql://localhost:3306/mydb", "SELECT * FROM customers", "admin", "secret");
// Process results
for (let row of rows) {
$.create('Customer', { name: row.name });
}
} catch (e) {
$.log('JDBC error: ' + e.message);
// Optionally notify administrators
$.sendPlaintextMail(
'alerts@example.com', 'System',
'admin@example.com', 'Admin',
'JDBC Import Failed',
'Error: ' + e.message
);
}
}
Common errors:
| Error | Cause |
|---|---|
| No suitable JDBC driver found | Driver JAR is not in the lib directory; add it and restart Structr |
| Access denied | Invalid username or password |
| Unknown database | Database name is incorrect or does not exist |
| Connection refused | Database server not reachable (check host, port, firewall) |
Best Practices
Use Appropriate Credentials
For read-only operations, create a dedicated database user with minimal permissions:
-- MySQL example: read-only user
CREATE USER 'structr_reader'@'%' IDENTIFIED BY 'password';
GRANT SELECT ON legacy_db.* TO 'structr_reader'@'%';
For write operations, grant only the necessary permissions:
-- MySQL example: limited write access
CREATE USER 'structr_sync'@'%' IDENTIFIED BY 'password';
GRANT SELECT, INSERT, UPDATE ON external_db.sync_log TO 'structr_sync'@'%';
Limit Result Sets
For large tables, use LIMIT or WHERE clauses to avoid memory issues:
// Bad: fetches entire table
let rows = $.jdbc(url, "SELECT * FROM orders", user, pass);
// Good: fetches only what you need
let rows = $.jdbc(url, "SELECT * FROM orders WHERE created_at > '2024-01-01' LIMIT 1000", user, pass);
Store Connection Details Securely
Don’t hardcode credentials in your scripts. Use a dedicated configuration type:
{
let config = $.first($.find('JdbcConfig', 'name', 'legacy_crm'));
let url = "jdbc:mysql://" + config.host + ":" + config.port + "/" + config.database;
let rows = $.jdbc(url, "SELECT * FROM customers", config.username, config.password);
// ...
}
Handle Column Name Differences
Map external column names to Structr property names explicitly:
for (let row of rows) {
$.create('Customer', {
name: row.customer_name, // External: customer_name → Structr: name
eMail: row.email_address, // External: email_address → Structr: eMail
phone: row.phone_number // External: phone_number → Structr: phone
});
}
Limitations
- Large result sets are loaded entirely into memory. For very large imports, paginate with LIMIT and OFFSET.
- Connection pooling is not supported. Each call opens a new connection. For high-frequency queries, consider caching results.
- Write operations (INSERT, UPDATE, DELETE) execute successfully but don’t return affected row counts.
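The LIMIT/OFFSET pagination mentioned above can be sketched as a loop. In this plain JavaScript sketch, fetchPage stands in for the $.jdbc call and reads from an in-memory array so the loop is runnable stand-alone:

```javascript
// Paginate through a large result set with LIMIT/OFFSET.
// In Structr, fetchPage would wrap something like:
//   $.jdbc(url, "SELECT * FROM orders LIMIT " + limit + " OFFSET " + offset, user, pass)
const table = Array.from({ length: 25 }, (_, i) => ({ id: i + 1 }));

function fetchPage(limit, offset) {
  return table.slice(offset, offset + limit);
}

function fetchAll(pageSize) {
  const rows = [];
  let offset = 0;
  while (true) {
    const page = fetchPage(pageSize, offset);
    rows.push(...page);
    if (page.length < pageSize) break; // short page means we reached the end
    offset += pageSize;
  }
  return rows;
}

console.log(fetchAll(10).length); // → 25
```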
Related Topics
- Scheduled Tasks - Running JDBC imports on a schedule
- Data Creation & Import - Other import methods including CSV and REST
- Business Logic - Processing imported data in schema methods
MongoDB
Structr can connect to MongoDB databases using the mongodb() function. This function returns a MongoDB collection object that you can use to query, insert, update, and delete documents using the standard MongoDB Java driver API.
When to Use MongoDB
MongoDB integration is useful when you need to:
- Query document databases that store data in flexible JSON-like structures
- Integrate with existing MongoDB systems without migrating data
- Combine Structr’s graph database with MongoDB’s document storage
- Access analytics or logging data stored in MongoDB
Unlike JDBC, MongoDB integration requires no driver installation - the MongoDB client library is included with Structr.
The mongodb() Function
The mongodb() function connects to a MongoDB server and returns a collection object.
Syntax
$.mongodb(url, database, collection)
| Parameter | Description |
|---|---|
url | MongoDB connection URL (e.g., mongodb://localhost:27017) |
database | Database name |
collection | Collection name |
Return Value
The function returns a MongoCollection object. You can call MongoDB operations directly on this object, such as find(), insertOne(), updateOne(), deleteOne(), and others.
The bson() Function
MongoDB queries and documents must be passed as BSON objects. Use the $.bson() function to convert JavaScript objects to BSON:
$.bson({ name: 'John', status: 'active' })
Reading Data
Find All Documents
{
let collection = $.mongodb('mongodb://localhost:27017', 'mydb', 'customers');
let results = collection.find();
for (let doc of results) {
$.log('Customer: ' + doc.get('name'));
}
}
Important: Results from find() are not native JavaScript arrays. Use for...of to iterate - methods like .filter() or .map() are not available.

Important: Documents in the result are not native JavaScript objects. Use doc.get('fieldName') instead of doc.fieldName to access properties.
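If the get()-based access becomes repetitive, a small helper can copy a document into a plain JavaScript object once, after which normal property access works. This is an illustrative sketch under the assumption that the driver document exposes keySet() and get() (as the BSON Document class, a Java Map, does); a Map-backed mock stands in for it here so the example runs standalone.

```javascript
// Copy a MongoDB driver document into a plain JavaScript object so that
// normal dot-access works afterwards.
function toPlainObject(doc) {
  const obj = {};
  for (const key of doc.keySet()) {
    obj[key] = doc.get(key);
  }
  return obj;
}

// Mock emulating the Document API seen in the loops above:
const mockDoc = {
  _data: new Map([['name', 'John'], ['status', 'active']]),
  get(k) { return this._data.get(k); },
  keySet() { return this._data.keys(); }
};

const plain = toPlainObject(mockDoc);
console.log(plain.name);   // 'John'
console.log(plain.status); // 'active'
```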
Find with Query
Filter documents using a BSON query:
{
let collection = $.mongodb('mongodb://localhost:27017', 'mydb', 'customers');
let results = collection.find($.bson({ status: 'active' }));
for (let doc of results) {
$.create('Customer', {
mongoId: doc.get('_id').toString(),
name: doc.get('name'),
email: doc.get('email')
});
}
}
Find with Query Operators
MongoDB query operators work as expected:
{
let collection = $.mongodb('mongodb://localhost:27017', 'mydb', 'orders');
// Find orders over $100
let results = collection.find($.bson({ total: { $gt: 100 } }));
for (let doc of results) {
$.log('Order: ' + doc.get('orderId') + ' - $' + doc.get('total'));
}
}
Find with Regular Expressions
{
let collection = $.mongodb('mongodb://localhost:27017', 'mydb', 'products');
// Find products with names matching a pattern
let results = collection.find($.bson({ name: { $regex: 'Test[0-9]' } }));
for (let doc of results) {
$.log('Product: ' + doc.get('name'));
}
}
Find with Date Comparisons
{
let collection = $.mongodb('mongodb://localhost:27017', 'mydb', 'events');
// Find events from 2024 onwards
let results = collection.find($.bson({
date: { $gte: new Date(2024, 0, 1) }
}));
for (let doc of results) {
$.log('Event: ' + doc.get('name') + ' on ' + doc.get('date'));
}
}
Query Operators
Common MongoDB query operators:
| Operator | Description | Example |
|---|---|---|
$eq | Equal | { status: { $eq: 'active' } } |
$ne | Not equal | { status: { $ne: 'deleted' } } |
$gt | Greater than | { price: { $gt: 100 } } |
$gte | Greater than or equal | { price: { $gte: 100 } } |
$lt | Less than | { stock: { $lt: 10 } } |
$lte | Less than or equal | { stock: { $lte: 10 } } |
$in | In array | { status: { $in: ['active', 'pending'] } } |
$regex | Regular expression | { name: { $regex: '^Test' } } |
$exists | Field exists | { email: { $exists: true } } |
For the full list of operators, see the MongoDB Query Operators documentation.
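To make the operator semantics concrete, here is a tiny evaluator for the subset listed above. This is purely illustrative - the real matching happens on the MongoDB server, and this sketch is not part of Structr's API.

```javascript
// Evaluate a MongoDB-style query object against a plain document,
// supporting the operators from the table above.
function matches(doc, query) {
  return Object.entries(query).every(([field, cond]) => {
    // A non-object condition means direct equality, e.g. { status: 'active' }
    if (typeof cond !== 'object' || cond === null) return doc[field] === cond;
    return Object.entries(cond).every(([op, value]) => {
      switch (op) {
        case '$eq':     return doc[field] === value;
        case '$ne':     return doc[field] !== value;
        case '$gt':     return doc[field] > value;
        case '$gte':    return doc[field] >= value;
        case '$lt':     return doc[field] < value;
        case '$lte':    return doc[field] <= value;
        case '$in':     return value.includes(doc[field]);
        case '$regex':  return new RegExp(value).test(doc[field]);
        case '$exists': return (field in doc) === value;
        default: throw new Error('Unsupported operator: ' + op);
      }
    });
  });
}

console.log(matches({ price: 150 }, { price: { $gt: 100 } }));                          // true
console.log(matches({ status: 'active' }, { status: { $in: ['active', 'pending'] } })); // true
console.log(matches({ name: 'Test42' }, { name: { $regex: '^Test' } }));                // true
console.log(matches({ name: 'x' }, { email: { $exists: true } }));                      // false
```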
Writing Data
Insert One Document
{
let collection = $.mongodb('mongodb://localhost:27017', 'mydb', 'customers');
collection.insertOne($.bson({
name: 'John Doe',
email: 'john@example.com',
createdAt: new Date()
}));
}
Insert with Date Fields
{
let collection = $.mongodb('mongodb://localhost:27017', 'mydb', 'events');
collection.insertOne($.bson({
name: 'Conference',
date: new Date(2024, 6, 15),
attendees: 100
}));
}
Updating Data
Update One Document
{
let collection = $.mongodb('mongodb://localhost:27017', 'mydb', 'customers');
collection.updateOne(
$.bson({ email: 'john@example.com' }),
$.bson({ $set: { status: 'inactive' } })
);
}
Update Many Documents
{
let collection = $.mongodb('mongodb://localhost:27017', 'mydb', 'orders');
collection.updateMany(
$.bson({ status: 'pending' }),
$.bson({ $set: { status: 'cancelled', cancelledAt: new Date() } })
);
}
Deleting Data
Delete One Document
{
let collection = $.mongodb('mongodb://localhost:27017', 'mydb', 'customers');
collection.deleteOne($.bson({ email: 'john@example.com' }));
}
Delete Many Documents
{
let collection = $.mongodb('mongodb://localhost:27017', 'mydb', 'logs');
// Delete logs older than 30 days
let cutoff = new Date(Date.now() - 30 * 24 * 60 * 60 * 1000);
collection.deleteMany($.bson({ timestamp: { $lt: cutoff } }));
}
Examples
Importing MongoDB Data into Structr
{
let collection = $.mongodb('mongodb://localhost:27017', 'crm', 'contacts');
let results = collection.find($.bson({ active: true }));
let count = 0;
for (let doc of results) {
let mongoId = doc.get('_id').toString();
// Check if already imported
let existing = $.first($.find('Contact', 'mongoId', mongoId));
if (!existing) {
$.create('Contact', {
mongoId: mongoId,
name: doc.get('name'),
email: doc.get('email'),
phone: doc.get('phone'),
importedAt: $.now
});
count++;
}
}
$.log('Imported ' + count + ' new contacts');
}
Insert and Query
{
let collection = $.mongodb('mongodb://localhost:27017', 'testDatabase', 'testCollection');
// Insert a record
collection.insertOne($.bson({
name: 'Test4',
createdAt: new Date()
}));
// Query all records with that name
let results = collection.find($.bson({ name: 'Test4' }));
for (let doc of results) {
$.log('Found: ' + doc.get('name') + ' created at ' + doc.get('createdAt'));
}
}
Scheduled Sync
// Global schema method: syncFromMongo
// Cron expression: 0 */15 * * * * (every 15 minutes)
{
let collection = $.mongodb('mongodb://analytics.example.com:27017', 'events', 'pageviews');
// Get last sync time
let syncStatus = $.first($.find('SyncStatus', 'name', 'mongo_pageviews'));
let since = syncStatus ? syncStatus.lastRun : new Date(0);
let results = collection.find($.bson({
timestamp: { $gt: since }
}));
let count = 0;
for (let doc of results) {
$.create('PageView', {
path: doc.get('path'),
userId: doc.get('userId'),
timestamp: doc.get('timestamp')
});
count++;
}
// Update sync status
if (!syncStatus) {
syncStatus = $.create('SyncStatus', { name: 'mongo_pageviews' });
}
syncStatus.lastRun = $.now;
$.log('Synced ' + count + ' pageviews from MongoDB');
}
Available Collection Methods
The returned collection object exposes all methods from the MongoDB Java Driver’s MongoCollection class. Common methods include:
| Method | Description |
|---|---|
find() | Find all documents |
find(query) | Find documents matching query |
insertOne(document) | Insert one document |
insertMany(documents) | Insert multiple documents |
updateOne(query, update) | Update first matching document |
updateMany(query, update) | Update all matching documents |
deleteOne(query) | Delete first matching document |
deleteMany(query) | Delete all matching documents |
countDocuments() | Count all documents |
countDocuments(query) | Count matching documents |
For the complete API, see the MongoDB Java Driver documentation.
Connection URL
The MongoDB connection URL follows the standard MongoDB connection string format:
mongodb://[username:password@]host[:port][/database][?options]
Examples:
| Scenario | URL |
|---|---|
| Local, default port | mongodb://localhost:27017 |
| Local, short form | mongodb://localhost |
| With authentication | mongodb://user:pass@localhost:27017 |
| Remote server | mongodb://mongo.example.com:27017 |
| Replica set | mongodb://host1:27017,host2:27017,host3:27017/?replicaSet=mySet |
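The URL formats in the table can be assembled with a small helper. This is an illustrative sketch, not a Structr function - the helper name and option handling are assumptions.

```javascript
// Assemble a MongoDB connection string from its parts, following the
// format mongodb://[username:password@]host[:port][/database][?options].
function mongoUrl({ host, port, username, password, database, options }) {
  let url = 'mongodb://';
  if (username) {
    // Credentials must be URL-encoded to survive special characters.
    url += encodeURIComponent(username) + ':' + encodeURIComponent(password) + '@';
  }
  url += host;
  if (port) url += ':' + port;
  if (database || options) url += '/' + (database || '');
  if (options) {
    url += '?' + Object.entries(options).map(([k, v]) => k + '=' + v).join('&');
  }
  return url;
}

console.log(mongoUrl({ host: 'localhost', port: 27017 }));
// mongodb://localhost:27017
console.log(mongoUrl({ host: 'localhost', port: 27017, username: 'user', password: 'pass' }));
// mongodb://user:pass@localhost:27017
console.log(mongoUrl({ host: 'host1:27017,host2:27017,host3:27017', options: { replicaSet: 'mySet' } }));
// mongodb://host1:27017,host2:27017,host3:27017/?replicaSet=mySet
```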
Important Notes
Results Are Not Native JavaScript
Results from find() behave differently than native JavaScript:
// This does NOT work:
let results = collection.find();
let filtered = results.filter(d => d.status === 'active'); // Error!
let name = results[0].name; // Error!
// This works:
for (let doc of results) {
let name = doc.get('name'); // Use .get() for properties
}
Always Use bson() for Queries
Pass all query and document objects through $.bson():
// This does NOT work:
collection.find({ name: 'John' }); // Error!
// This works:
collection.find($.bson({ name: 'John' }));
Convert ObjectIds to Strings
MongoDB’s _id field is an ObjectId. Convert it to a string when storing in Structr:
let mongoId = doc.get('_id').toString();
Error Handling
{
try {
let collection = $.mongodb('mongodb://localhost:27017', 'mydb', 'customers');
let results = collection.find();
for (let doc of results) {
$.create('Customer', {
name: doc.get('name')
});
}
} catch (e) {
$.log('MongoDB error: ' + e.message);
}
}
Testing with Docker
To quickly set up a local MongoDB instance for testing:
docker run -d -p 27017:27017 mongo
This starts MongoDB on the default port, accessible at mongodb://localhost:27017.
Related Topics
- JDBC - Connecting to SQL databases
- Scheduled Tasks - Running MongoDB operations on a schedule
- Business Logic - Processing imported data in schema methods
FTP
Structr includes a built-in FTP server that provides file access to the virtual filesystem. Users can connect with any FTP client and browse, upload, or download files according to their permissions.
Configuration
Enable and configure the FTP server in the Configuration Interface or in structr.conf:
| Setting | Description | Default |
|---|---|---|
application.ftp.enabled | Enable FTP server | false |
application.ftp.port | FTP port | 8021 |
Authentication
FTP authentication uses Structr user accounts with password authentication. Users log in with their Structr username and password.
# Connect with lftp
lftp -p 8021 -u username localhost
# Connect with standard ftp client
ftp localhost 8021
File Visibility
After authentication, the FTP connection shows files and folders based on the user’s permissions in Structr’s virtual filesystem.
Regular users see:
- Files and folders they have read access to
- File owners only for nodes they have read rights on
- Files are hidden if their parent folder is not accessible
Admin users see:
- All files and folders in the system
- All file owners
Example: Regular User
$ lftp -p 8021 -u user1 localhost
Password: *****
lftp user1@localhost:~> ls
drwx------ 1 0 Jun 30 15:22 testFolder
-rw------- 1 user1 347 Jun 30 09:24 test1.txt
-rw------- 1 25 Jun 30 15:41 test2.txt
-rw------- 1 5 Jun 30 09:24 test3.txt
-rw------- 1 user1 5 Jun 30 09:24 test4.txt
Files without visible owner (test2.txt, test3.txt) belong to users that user1 cannot see.
Example: Admin User
$ lftp -p 8021 -u admin localhost
Password: *****
lftp admin@localhost:~> ls
drwx------ 1 admin 0 Jun 30 15:22 testFolder
-rw------- 1 user1 347 Jun 30 09:24 test1.txt
-rw------- 1 admin 25 Jun 30 09:24 test2.txt
-rw------- 1 user2 5 Jun 30 09:24 test3.txt
-rw------- 1 user1 5 Jun 30 09:24 test4.txt
Admin users see all files and their owners.
Supported Operations
The FTP server supports standard file operations:
| Operation | Description |
|---|---|
ls / dir | List files and folders |
cd | Change directory |
get | Download file |
put | Upload file |
mkdir | Create directory |
rm | Delete file |
rmdir | Delete directory |
All operations respect Structr’s permission system. Users can only perform operations they have rights for.
Use Cases
FTP access is useful for:
- Bulk file transfers - Upload or download many files at once
- Automated backups - Script file retrieval from Structr
- Legacy integration - Connect systems that only support FTP
- Direct file management - Use familiar FTP clients instead of the web interface
Security Considerations
- FTP transmits credentials in plain text. Consider using FTPS or restricting access to trusted networks.
- The FTP server binds to all interfaces by default. Use firewall rules to limit access if needed.
- File permissions in FTP mirror Structr’s security model - users cannot access files they don’t have rights to.
Related Topics
- Files & Folders - Structr’s virtual filesystem
- Users & Groups - Managing user accounts and permissions
- Security - Access control and permissions
Message Brokers
Structr can connect to message brokers to send and receive messages asynchronously. This enables event-driven architectures, real-time data pipelines, and integration with external systems through industry-standard messaging protocols.
When to Use Message Brokers
Message brokers are useful when you need to:
- Decouple systems - Send data to other services without waiting for a response
- Process events asynchronously - Handle incoming events in the background
- Integrate with IoT devices - Receive sensor data or send commands via MQTT
- Build data pipelines - Stream data to analytics systems via Kafka or Pulsar
- Enable real-time communication - React to events from external systems immediately
If you only need to push updates to browsers, Server-Sent Events may be simpler. Message brokers are for system-to-system communication.
Supported Brokers
Structr supports three message broker protocols:
| Broker | Protocol | Typical Use Case |
|---|---|---|
| MQTT | Lightweight publish/subscribe | IoT, sensors, mobile apps |
| Kafka | Distributed streaming | High-throughput data pipelines, event sourcing |
| Pulsar | Cloud-native messaging | Multi-tenant messaging, geo-replication |
All three use the same programming model in Structr: create a client, configure subscribers, and process incoming messages with callbacks.
Core Concepts
Message Clients
A message client represents a connection to a broker. In Structr, clients are database objects - you create them like any other data object, either through the Admin UI or via $.create() in scripts. Each broker type has its own client type (MQTTClient, KafkaClient, PulsarClient) with broker-specific configuration properties, but they all share the same interface for sending messages and managing subscriptions.
When you enable a client, Structr establishes and maintains the connection in the background. The connection persists independently of HTTP requests or user sessions.
Message Subscribers
A MessageSubscriber is a database object that defines what happens when a message arrives. You create subscribers and link them to one or more clients. Each subscriber has:
- topic - Which topic to listen to (use * for all topics)
- callback - Code that runs when a message arrives (stored as a string property)
- clients - Which client(s) this subscriber is connected to (a relationship to MessageClient objects)
When a message arrives on a matching topic, Structr executes the callback code with two special variables available:
- $.topic - The topic the message was published to
- $.message - The message content (typically a string or JSON)
The Basic Pattern
Message broker integration in Structr works through database objects. Clients and subscribers are regular Structr objects that you create, configure, and link - just like any other data in your application. This means you can create them through the Admin UI or programmatically via scripts.
Setting up via Admin UI:
- Open the Data area in the Admin UI
- Select the client type (MQTTClient, KafkaClient, or PulsarClient)
- Create a new object and fill in the connection properties
- Create a MessageSubscriber object with a topic and callback
- Link the subscriber to the client by setting the clients property
- Enable the client by checking isEnabled (MQTT) or enabled (Kafka/Pulsar)
Setting up via Script:
The same steps work programmatically using $.create(). This is useful when you need to create clients dynamically or as part of an application setup routine.
Once the client is enabled, Structr maintains the connection in the background. Incoming messages automatically trigger the callbacks of linked subscribers. The connection persists across requests - you configure it once, and it keeps running until you disable or delete the client.
MQTT
MQTT (Message Queuing Telemetry Transport) is a lightweight protocol designed for constrained devices and low-bandwidth networks. It’s the standard for IoT applications.
MQTTClient Properties
| Property | Type | Description |
|---|---|---|
mainBrokerURL | String | Broker URL (required), e.g., ws://localhost:15675/ws |
fallbackBrokerURLs | String[] | Alternative broker URLs for failover |
username | String | Authentication username |
password | String | Authentication password |
qos | Integer | Quality of Service level (0, 1, or 2), default: 0 |
isEnabled | Boolean | Set to true to connect |
isConnected | Boolean | Connection status (read-only) |
Setting Up an MQTT Client
You can create the client and subscriber objects in the Data area of the Admin UI, or programmatically as shown below:
// Create the MQTT client
let client = $.create('MQTTClient', {
name: 'IoT Gateway',
mainBrokerURL: 'ws://localhost:15675/ws',
username: 'guest',
password: 'guest',
qos: 1
});
// Create a subscriber for temperature readings
let subscriber = $.create('MessageSubscriber', {
topic: 'sensors/temperature',
callback: `{
let data = JSON.parse($.message);
$.log('Temperature reading: ' + data.value + '°C from ' + data.sensorId);
// Store the reading
$.create('TemperatureReading', {
sensorId: data.sensorId,
value: data.value,
timestamp: $.now
});
}`
});
// Link subscriber to client
subscriber.clients = [client];
// Enable the connection
client.isEnabled = true;
When creating via the Admin UI, you fill in the same properties in the object editor. The callback property accepts StructrScript or JavaScript code as a string. After linking the subscriber to the client and enabling isEnabled, the connection activates immediately.
After enabling, the isConnected property indicates whether the connection succeeded. In the Admin UI, the client will show a green indicator when connected, red when disconnected.
Subscribing to Multiple Topics
You can create multiple subscribers for different topics:
// Subscribe to all sensor data
$.create('MessageSubscriber', {
topic: 'sensors/*',
callback: `{ $.call('processSensorData', { topic: $.topic, message: $.message }); }`,
clients: [client]
});
// Subscribe to system alerts
$.create('MessageSubscriber', {
topic: 'alerts/#',
callback: `{ $.call('handleAlert', { topic: $.topic, message: $.message }); }`,
clients: [client]
});
Use * to match a single level, # to match multiple levels in MQTT topic hierarchies.
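The wildcard rule described above can be sketched as a matching function. This mirrors the semantics as stated here - a single-level wildcard and a multi-level # - but it is not Structr's implementation. (Note that the MQTT protocol itself spells the single-level wildcard '+'.)

```javascript
// Match a subscription pattern against a concrete topic, level by level:
// '*' matches exactly one level, '#' matches all remaining levels.
function topicMatches(pattern, topic) {
  const p = pattern.split('/');
  const t = topic.split('/');
  for (let i = 0; i < p.length; i++) {
    if (p[i] === '#') return true;              // everything from here on matches
    if (i >= t.length) return false;            // topic is shorter than the pattern
    if (p[i] !== '*' && p[i] !== t[i]) return false;
  }
  return p.length === t.length;                 // no trailing unmatched levels
}

console.log(topicMatches('sensors/*', 'sensors/temperature'));       // true
console.log(topicMatches('sensors/*', 'sensors/temperature/roomA')); // false (two levels)
console.log(topicMatches('alerts/#', 'alerts/fire/building-a'));     // true
```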
Publishing Messages
Send messages using the client’s sendMessage method or the mqttPublish function:
// Using the method on the client
client.sendMessage('devices/lamp/command', JSON.stringify({ action: 'on', brightness: 80 }));
// Using the global function
$.mqttPublish(client, 'devices/lamp/command', JSON.stringify({ action: 'off' }));
MQTT-Specific Functions
| Function | Description |
|---|---|
mqttPublish(client, topic, message) | Publish a message to a topic |
mqttSubscribe(client, topic) | Subscribe to a topic programmatically |
mqttUnsubscribe(client, topic) | Unsubscribe from a topic |
Quality of Service Levels
MQTT supports three QoS levels:
| Level | Name | Guarantee |
|---|---|---|
| 0 | At most once | Message may be lost |
| 1 | At least once | Message delivered, may be duplicated |
| 2 | Exactly once | Message delivered exactly once |
Higher QoS levels add overhead. Use QoS 0 for frequent sensor readings where occasional loss is acceptable, QoS 1 or 2 for important commands or events.
Kafka
Apache Kafka is a distributed streaming platform designed for high-throughput, fault-tolerant messaging. It’s commonly used for data pipelines and event sourcing.
KafkaClient Properties
| Property | Type | Description |
|---|---|---|
servers | String[] | Bootstrap server addresses, e.g., ['localhost:9092'] |
groupId | String | Consumer group ID for coordinated consumption |
enabled | Boolean | Set to true to connect |
Setting Up a Kafka Client
Create the client and subscriber objects in the Data area, or programmatically:
// Create the Kafka client
let client = $.create('KafkaClient', {
name: 'Event Processor',
servers: ['kafka1.example.com:9092', 'kafka2.example.com:9092'],
groupId: 'structr-consumers'
});
// Create a subscriber for order events
let subscriber = $.create('MessageSubscriber', {
topic: 'orders',
callback: `{
let order = JSON.parse($.message);
$.log('New order received: ' + order.orderId);
$.create('Order', {
externalId: order.orderId,
customerEmail: order.customer.email,
totalAmount: order.total,
status: 'received'
});
}`,
clients: [client]
});
// Enable the connection
client.enabled = true;
The servers property accepts an array of bootstrap servers. Kafka clients connect to any available server and discover the full cluster topology automatically.
Publishing to Kafka
let client = $.first($.find('KafkaClient', 'name', 'Event Processor'));
client.sendMessage('order-updates', JSON.stringify({
orderId: order.externalId,
status: 'shipped',
trackingNumber: 'ABC123',
timestamp: new Date().toISOString()
}));
Consumer Groups
The groupId property determines how multiple consumers coordinate. Consumers in the same group share the workload - each message is processed by only one consumer in the group. Different groups receive all messages independently.
Use the same groupId across multiple Structr instances to distribute processing. Use different group IDs if each instance needs to see all messages.
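The group semantics can be illustrated with a toy simulation: within one group each message is handled by exactly one consumer, while every group sees every message. This is purely illustrative - real Kafka assigns consumers per partition, not round-robin per message.

```javascript
// Simulate consumer-group delivery: round-robin within each group,
// full delivery across groups.
function deliver(messages, groups) {
  const received = {}; // consumerId -> list of messages
  for (const consumers of Object.values(groups)) {
    messages.forEach((msg, i) => {
      const consumer = consumers[i % consumers.length]; // share the workload
      (received[consumer] = received[consumer] || []).push(msg);
    });
  }
  return received;
}

const result = deliver(['m1', 'm2', 'm3', 'm4'], {
  'structr-consumers': ['instance-a', 'instance-b'], // two Structr instances share the load
  'audit': ['audit-1']                               // separate group sees all messages
});
console.log(result);
// instance-a: ['m1','m3'], instance-b: ['m2','m4'], audit-1: all four
```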
Pulsar
Apache Pulsar is a cloud-native messaging platform that combines messaging and streaming. It supports multi-tenancy and geo-replication out of the box.
PulsarClient Properties
| Property | Type | Description |
|---|---|---|
servers | String[] | Service URLs, e.g., ['pulsar://localhost:6650'] |
enabled | Boolean | Set to true to connect |
Setting Up a Pulsar Client
Create the client and subscriber objects in the Data area, or programmatically:
// Create the Pulsar client
let client = $.create('PulsarClient', {
name: 'Analytics Pipeline',
servers: ['pulsar://pulsar.example.com:6650']
});
// Create a subscriber for analytics events
let subscriber = $.create('MessageSubscriber', {
topic: 'analytics/pageviews',
callback: `{
let event = JSON.parse($.message);
$.create('PageView', {
path: event.path,
userId: event.userId,
sessionId: event.sessionId,
timestamp: $.parseDate(event.timestamp, "yyyy-MM-dd'T'HH:mm:ss.SSSZ")
});
}`,
clients: [client]
});
// Enable the connection
client.enabled = true;
Pulsar clients have minimal configuration. The servers property accepts Pulsar service URLs, typically starting with pulsar:// for unencrypted or pulsar+ssl:// for TLS connections.
Publishing to Pulsar
let client = $.first($.find('PulsarClient', 'name', 'Analytics Pipeline'));
client.sendMessage('analytics/events', JSON.stringify({
type: 'conversion',
userId: user.id,
product: product.name,
value: product.price,
timestamp: new Date().toISOString()
}));
Working with Callbacks
Callback Context
Inside a callback, you have access to:
| Variable | Description |
|---|---|
$.topic | The topic the message arrived on |
$.message | The message content as a string |
$.this | The MessageSubscriber object |
Forwarding to Schema Methods
For complex processing, forward messages to a global schema method:
// Simple callback that delegates to a method
$.create('MessageSubscriber', {
topic: '*',
callback: `{ $.call('handleIncomingMessage', { topic: $.topic, message: $.message }); }`
});
Then implement the logic in your schema method where you have full access to error handling, transactions, and other methods:
// Global schema method: handleIncomingMessage
{
let topic = $.arguments.topic;
let message = $.arguments.message;
try {
let data = JSON.parse(message);
if (topic.startsWith('sensors/')) {
processSensorData(topic, data);
} else if (topic.startsWith('orders/')) {
processOrderEvent(topic, data);
} else {
$.log('Unknown topic: ' + topic);
}
} catch (e) {
$.log('Error processing message: ' + e.message);
// Store failed message for retry
$.create('FailedMessage', {
topic: topic,
message: message,
error: e.message,
timestamp: $.now
});
}
}
Error Handling
Callbacks should handle errors gracefully. Unhandled exceptions are logged but don’t stop message processing. For critical messages, implement your own retry logic:
$.create('MessageSubscriber', {
topic: 'critical-events',
callback: `{
try {
let event = JSON.parse($.message);
processEvent(event);
} catch (e) {
// Log and store for manual review
$.log('Failed to process critical event: ' + e.message);
$.create('FailedEvent', {
topic: $.topic,
payload: $.message,
error: e.message
});
}
}`
});
Managing Connections
Checking Connection Status
For MQTT clients, check the isConnected property:
let client = $.first($.find('MQTTClient', 'name', 'IoT Gateway'));
if (!client.isConnected) {
$.log('MQTT client is disconnected, attempting reconnect...');
client.isEnabled = false;
client.isEnabled = true;
}
Disabling and Re-enabling
To temporarily stop processing:
// Disable
client.isEnabled = false; // or client.enabled = false for Kafka/Pulsar
// Re-enable
client.isEnabled = true;
Disabling disconnects from the broker. Re-enabling reconnects and resubscribes to all configured topics.
Cleaning Up
Deleting a client automatically closes the connection and cleans up resources. Subscribers linked only to that client become inactive but are not automatically deleted.
Best Practices
Use JSON for Messages
Structure your messages as JSON for easy parsing and forward compatibility:
client.sendMessage('events', JSON.stringify({
type: 'user.created',
version: 1,
timestamp: new Date().toISOString(),
data: {
userId: user.id,
email: user.eMail
}
}));
Keep Callbacks Simple
Callbacks should be short. Delegate complex logic to schema methods:
// Good: Simple callback that delegates
callback: `{ $.call('processOrder', { data: $.message }); }`
// Avoid: Complex logic directly in callback
callback: `{ /* 50 lines of processing code */ }`
Handle Connection Failures
Brokers can become unavailable. Design your application to handle disconnections gracefully and log connection issues for monitoring.
Use Meaningful Topic Names
Organize topics hierarchically for easier subscription management:
sensors/temperature/building-a/floor-1
sensors/humidity/building-a/floor-1
orders/created
orders/shipped
orders/delivered
Secure Your Connections
Use authentication (username/password for MQTT) and encrypted connections (TLS) in production. Never store credentials in callbacks - use the client properties.
Troubleshooting
Client Won’t Connect
- Verify the broker URL is correct and reachable from the Structr server
- Check authentication credentials
- Review the Structr server log for connection errors
- For MQTT, ensure the WebSocket endpoint is enabled on the broker
Messages Not Received
- Verify the subscriber’s topic matches the published topic
- Check that the subscriber is linked to the correct client
- Ensure the client is enabled and connected
- Test with topic * to receive all messages and verify the connection works
Callback Errors
- Check the server log for exception details
- Verify JSON parsing if the message format is unexpected
- Test the callback logic in a schema method first
Related Topics
- Server-Sent Events - Pushing updates to browsers
- Scheduled Tasks - Processing queued messages periodically
- Business Logic - Implementing message handlers as schema methods
Server-Sent Events
Server-sent events (SSE) allow Structr to push messages to connected browsers in real time. Unlike traditional request-response patterns where the client polls for updates, SSE maintains an open connection that the server can use to send data whenever something relevant happens.
Common use cases include live notifications, real-time dashboards, progress updates for long-running operations, and collaborative features where multiple users need to see changes immediately.
How It Works
The browser opens a persistent connection to Structr’s EventSource endpoint. Structr keeps track of all connected clients. When your server-side code calls broadcastEvent(), Structr sends the message to all connected clients (or a filtered subset based on authentication status). The browser receives the message through its EventSource API and can update the UI accordingly.
This is a one-way channel: server to client. For bidirectional communication, consider WebSockets instead.
Important: When not used over HTTP/2, SSE is limited to a maximum of 6 open connections per browser. This limit applies across all tabs, so opening multiple tabs to the same application can exhaust available connections. Use HTTP/2 in production to avoid this limitation. See the MDN EventSource documentation for details.
Configuration
Enabling the EventSource Servlet
The EventSource servlet is not enabled by default. To activate it:
- Open the Configuration Interface
- Navigate to Servlet Settings
- Add EventSourceServlet to the list of enabled servlets
- Save the configuration
- Restart the HTTP service
Note: Do not enable this servlet by editing structr.conf directly. The setting http-service.servlets contains the complete list of active servlets. If you set only EventSourceServlet in structr.conf, all other servlets will be disabled, because structr.conf overrides the defaults rather than extending them. Always use the Configuration Interface for this setting.
Resource Access
To allow users to connect to the EventSource endpoint, create a Resource Access Permission:
| Setting | Value |
|---|---|
| Signature | _eventSource |
| Flags | GET for the appropriate user types |
For authenticated users only, grant GET to authenticated users. To allow anonymous connections, grant GET to public users as well.
Client Setup
In your frontend JavaScript, create an EventSource connection:
const source = new EventSource('/structr/EventSource', {
withCredentials: true
});
source.onmessage = function(event) {
console.log('Received:', event.data);
};
source.onerror = function(event) {
console.error('EventSource error:', event);
};
The withCredentials: true option ensures that session cookies are sent with the connection request, allowing Structr to identify authenticated users.
Handling Different Event Types
The onmessage handler only receives events with the type message. For custom event types, use addEventListener():
const source = new EventSource('/structr/EventSource', {
withCredentials: true
});
// Generic message handler
source.onmessage = function(event) {
console.log('Message:', event.data);
};
// Custom event type handlers
source.addEventListener('notification', function(event) {
showNotification(JSON.parse(event.data));
});
source.addEventListener('data-update', function(event) {
refreshData(JSON.parse(event.data));
});
source.addEventListener('maintenance', function(event) {
showMaintenanceWarning(JSON.parse(event.data));
});
Connection Management
Browsers automatically reconnect if the connection drops. You can track connection state:
source.onopen = function(event) {
console.log('Connected to EventSource');
};
source.onerror = function(event) {
if (source.readyState === EventSource.CLOSED) {
console.log('Connection closed');
} else if (source.readyState === EventSource.CONNECTING) {
console.log('Reconnecting...');
}
};
To explicitly close the connection:
source.close();
Sending Events
Structr provides two functions for sending server-sent events:
- broadcastEvent() - Send to all connected clients (filtered by authentication status)
- sendEvent() - Send to specific users or groups
Broadcasting to All Clients
Use broadcastEvent() to send messages to all connected clients.
Function Signature:
broadcastEvent(eventType, message [, authenticatedUsers [, anonymousUsers]])
| Parameter | Type | Default | Description |
|---|---|---|---|
| eventType | String | required | The event type (use message for the generic onmessage handler) |
| message | String | required | The message content (typically JSON) |
| authenticatedUsers | Boolean | true | Send to authenticated users |
| anonymousUsers | Boolean | false | Send to anonymous users |
StructrScript:
${broadcastEvent('message', 'Hello world!')}
${broadcastEvent('message', 'For everyone', true, true)}
JavaScript:
$.broadcastEvent('message', 'Hello world!');
$.broadcastEvent('message', 'For everyone', true, true);
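To see why the event type matters on the client, it helps to look at the wire format defined by the WHATWG Server-Sent Events specification: each event is a block of event: and data: lines terminated by a blank line, and only frames of type message reach the onmessage handler. The sketch below builds such a frame; the exact frames Structr emits are an assumption here, only the spec format is certain.

```javascript
// Sketch: how a broadcast could appear on the SSE wire (per the WHATWG
// EventSource spec). The exact framing Structr emits is an assumption;
// only the spec-defined format shown here is certain.
function sseFrame(eventType, data) {
  const lines = [];
  // 'message' is the default type, so the event: field can be omitted
  if (eventType !== 'message') {
    lines.push('event: ' + eventType);
  }
  // multi-line payloads become multiple data: lines
  for (const part of String(data).split('\n')) {
    lines.push('data: ' + part);
  }
  return lines.join('\n') + '\n\n'; // blank line terminates the event
}

console.log(sseFrame('notification', '{"title":"New Message"}'));
// A frame with type 'message' triggers onmessage; any other type only
// triggers listeners registered via addEventListener(type, ...).
```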
Sending to Specific Recipients
Use sendEvent() to send messages to specific users or groups. The message is only delivered if the recipient has an open EventSource connection.
Function Signature:
sendEvent(eventType, message, recipients)
| Parameter | Type | Description |
|---|---|---|
| eventType | String | The event type |
| message | String | The message content |
| recipients | User, Group, or List | A single user, a single group, or a list containing users and groups |
When you specify a group, all members of that group (including nested groups) receive the message.
StructrScript:
${sendEvent('message', 'Welcome!', find('User', 'name', 'Bob'))}
${sendEvent('notification', 'Team update', find('Group', 'name', 'Editors'))}
JavaScript:
// Send to a specific user
let bob = $.first($.find('User', 'name', 'Bob'));
$.sendEvent('message', 'Welcome!', bob);
// Send to a group
let editors = $.first($.find('Group', 'name', 'Editors'));
$.sendEvent('notification', 'Team update', editors);
// Send to multiple recipients (all admin users)
let admins = $.find('User', { isAdmin: true });
$.sendEvent('announcement', 'Admin meeting in 10 minutes', admins);
The function returns true if at least one recipient had an open connection and received the message, false otherwise.
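Because group recipients include members of nested groups, recipient resolution is effectively recursive. The plain-JavaScript sketch below illustrates that semantics; the data shapes are illustrative only, not Structr's internal model.

```javascript
// Sketch of the nested-group semantics described above: when a group is a
// recipient, members of contained groups receive the message too. The data
// shape here is illustrative; Structr resolves members internally.
function collectUsers(recipient, seen = new Set()) {
  if (recipient.type === 'User') {
    seen.add(recipient.name);
  } else if (recipient.type === 'Group') {
    for (const member of recipient.members) {
      collectUsers(member, seen); // recurse into nested groups
    }
  }
  return seen;
}

const editors = {
  type: 'Group', name: 'Editors',
  members: [
    { type: 'User', name: 'Bob' },
    { type: 'Group', name: 'Reviewers', members: [{ type: 'User', name: 'Alice' }] }
  ]
};

console.log([...collectUsers(editors)]); // Bob and Alice both receive the event
```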
Sending JSON Data
For structured data, serialize to JSON:
JavaScript:
$.broadcastEvent('message', JSON.stringify({
type: 'notification',
title: 'New Comment',
body: 'Someone commented on your post',
timestamp: new Date().getTime()
}));
On the client:
source.onmessage = function(event) {
const data = JSON.parse(event.data);
if (data.type === 'notification') {
showNotification(data.title, data.body);
}
};
Custom Event Types
Use custom event types to separate different kinds of messages:
JavaScript (server):
// Notification for the UI
$.broadcastEvent('notification', JSON.stringify({
title: 'New Message',
body: 'You have a new message from Admin'
}));
// Data update signal
$.broadcastEvent('data-update', JSON.stringify({
entity: 'Project',
id: project.id,
action: 'modified'
}));
// System maintenance warning
$.broadcastEvent('maintenance', JSON.stringify({
message: 'System maintenance in 10 minutes',
shutdownTime: new Date().getTime() + 600000
}));
Remember: custom event types require addEventListener() on the client, not onmessage.
Targeting by Authentication Status
Control who receives broadcast messages:
// Only authenticated users (default)
$.broadcastEvent('message', 'For logged-in users only', true, false);
// Only anonymous users
$.broadcastEvent('message', 'For anonymous users only', false, true);
// Everyone
$.broadcastEvent('message', 'For everyone', true, true);
Practical Examples
Live Notifications
Trigger a notification when a new comment is created. In the afterCreate method of your Comment type:
{
let notification = JSON.stringify({
type: 'new-comment',
postId: $.this.post.id,
authorName: $.this.author.name,
preview: $.this.text.substring(0, 100)
});
// Notify the post author specifically
$.sendEvent('notification', notification, $.this.post.author);
}
Or broadcast to all authenticated users:
{
let notification = JSON.stringify({
type: 'new-comment',
postId: $.this.post.id,
authorName: $.this.author.name,
preview: $.this.text.substring(0, 100)
});
$.broadcastEvent('notification', notification);
}
Progress Updates
For long-running operations, send progress updates:
{
let items = $.find('DataItem', { needsProcessing: true });
let total = $.size(items);
let processed = 0;
for (let item of items) {
// Your processing logic here
item.needsProcessing = false;
item.processedDate = $.now;
processed++;
// Send progress update every 10 items
if (processed % 10 === 0) {
$.broadcastEvent('progress', JSON.stringify({
taskId: 'data-processing',
processed: processed,
total: total,
percent: Math.round((processed / total) * 100)
}));
}
}
// Send completion message
$.broadcastEvent('progress', JSON.stringify({
taskId: 'data-processing',
processed: total,
total: total,
percent: 100,
complete: true
}));
}
Collaborative Editing
Notify other users when someone is editing a document:
{
// Notify all members of the document's team
$.sendEvent('editing', JSON.stringify({
documentId: $.this.id,
documentName: $.this.name,
userId: $.me.id,
userName: $.me.name,
action: 'started'
}), $.this.team);
}
Team Announcements
Send announcements to specific groups:
{
let engineeringTeam = $.first($.find('Group', 'name', 'Engineering'));
$.sendEvent('announcement', JSON.stringify({
title: 'Sprint Planning',
message: 'Sprint planning meeting starts in 15 minutes',
room: 'Conference Room A'
}), engineeringTeam);
}
Best Practices
Use JSON for Message Data
Always serialize structured data as JSON. This makes parsing reliable and allows you to include multiple fields:
// Good
$.broadcastEvent('message', JSON.stringify({ action: 'refresh', target: 'projects' }));
// Avoid
$.broadcastEvent('message', 'refresh:projects');
Choose Meaningful Event Types
Use descriptive event types to organize your messages:
- notification - User-facing alerts
- data-update - Signals that data has changed
- progress - Long-running operation updates
- system - System-level messages (maintenance, etc.)
Handle Reconnection Gracefully
Clients may miss messages during reconnection. Design your application to handle this:
- Include timestamps in messages so clients can detect gaps
- Provide a way to fetch missed updates via REST API
- Consider sending a “sync” message when clients reconnect
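The timestamp approach can be sketched on the client as follows. This assumes every server message carries a timestamp field; the catch-up endpoint named in the usage note is hypothetical and stands in for whatever REST resource your application exposes.

```javascript
// Sketch: detect gaps using message timestamps, then fetch missed updates.
// Assumes every server message includes a 'timestamp' field; the resync
// callback is where a client would call its catch-up REST endpoint.
let lastTimestamp = 0;
const MAX_GAP_MS = 60000; // treat >60s of silence as a potential gap

function handleMessage(raw, fetchMissed) {
  const data = JSON.parse(raw);
  if (lastTimestamp > 0 && data.timestamp - lastTimestamp > MAX_GAP_MS) {
    fetchMissed(lastTimestamp); // client may have missed events: resync
  }
  lastTimestamp = data.timestamp;
  return data;
}
```

Wired up, this might look like source.onmessage = e => handleMessage(e.data, since => fetch('/structr/rest/updates?since=' + since)), where the updates resource is an assumption for this sketch.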
Use Targeted Messages for Sensitive Data
broadcastEvent() sends to all connected clients matching the authentication filter. For user-specific or sensitive data, use sendEvent() with specific recipients instead:
// Bad: broadcasts salary info to everyone
$.broadcastEvent('notification', JSON.stringify({
message: 'Your salary has been updated to $75,000'
}));
// Good: sends only to the specific user
$.sendEvent('notification', JSON.stringify({
message: 'Your salary has been updated to $75,000'
}), employee);
Consider Message Volume
Broadcasting too frequently can overwhelm clients and waste bandwidth. For high-frequency updates:
- Batch multiple changes into single messages
- Throttle updates (e.g., maximum one update per second)
- Send minimal data and let clients fetch details via REST
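The batching idea can be sketched as a small accumulator. The send callback stands in for a broadcast call; the clock is injected so the logic stays testable, and how you drive the flush (timer, loop, lifecycle method) is left open.

```javascript
// Sketch: batch high-frequency changes and emit at most one message per
// interval. 'send' stands in for a broadcast call; the injected clock
// keeps the logic deterministic and testable without real timers.
function makeBatcher(intervalMs, send, now = Date.now) {
  let pending = [];
  let lastFlush = 0;
  return {
    add(change) {
      pending.push(change);
      if (now() - lastFlush >= intervalMs) this.flush();
    },
    flush() {
      if (pending.length === 0) return;
      send(JSON.stringify({ type: 'batch', changes: pending }));
      pending = [];
      lastFlush = now();
    }
  };
}
```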
Troubleshooting
Events Not Received
If clients are not receiving events:
- Verify the EventSource servlet is enabled in the Configuration Interface under Servlet Settings
- Check that the Resource Access Permission for _eventSource exists and grants GET
- Confirm the client is using withCredentials: true
- Check the browser’s Network tab for the EventSource connection status
Connection Drops Frequently
EventSource connections can be closed by proxies or load balancers with short timeouts. Configure your infrastructure to allow long-lived connections, or implement reconnection logic on the client.
Wrong Event Type
If onmessage is not firing, verify you are using message as the event type. For any other event type, you must use addEventListener().
Related Topics
- Business Logic - Triggering events from lifecycle methods
- Scheduled Tasks - Sending periodic updates via SSE
- REST Interface - Complementary request-response API
Host Script Execution
Structr can execute shell scripts on the host system, allowing your application to interact with the operating system, run external tools, and integrate with other software on the server. This opens up possibilities like generating documents with external converters, running maintenance tasks from a web interface, querying system metadata, controlling Docker containers, or integrating with legacy systems.
For security reasons, scripts must be explicitly registered in the configuration file before they can be executed. You cannot run arbitrary commands, only scripts that an administrator has approved.
Registering Scripts
Scripts are registered in structr.conf using a key-value format:
my.pdf.generator = generate-pdf.sh
backup.database = db-backup.sh
docker.restart.app = restart-container.sh
The key (left side) is what you use in your code to call the script. The value (right side) is the filename of the script. Keys must be lowercase.
The Scripts Folder
All scripts must be placed in the scripts folder within your Structr installation directory. The location is controlled by the scripts.path setting, which defaults to scripts relative to base.path.
Scripts must be executable:
chmod +x scripts/generate-pdf.sh
For security, Structr does not follow symbolic links and does not allow directory traversal (paths containing ..). These restrictions can be disabled via configuration settings, but this is not recommended.
Executing Scripts
Structr provides two functions for script execution: exec() for text output and execBinary() for binary data.
exec()
The exec() function runs a script and returns its text output.
StructrScript:
${exec('my.pdf.generator')}
${exec('my.script', merge('param1', 'param2'))}
JavaScript:
$.exec('my.pdf.generator');
$.exec('my.script', ['param1', 'param2']);
Parameters are passed to the script as command-line arguments. They are automatically quoted to handle spaces and special characters.
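The quoting matters because a parameter like my report.pdf would otherwise split into two arguments. The sketch below shows one common quoting scheme (POSIX single-quoting); it is purely illustrative of the problem, since Structr performs its own quoting internally.

```javascript
// Sketch: why automatic parameter quoting matters. A naive command line
// splits on spaces; single-quoting (escaping embedded quotes as '\'')
// keeps each parameter as one argument. Illustrative only; Structr's own
// quoting implementation is not shown here.
function shellQuote(arg) {
  return "'" + String(arg).replace(/'/g, "'\\''") + "'";
}

function buildCommandLine(script, params) {
  return [script, ...params.map(shellQuote)].join(' ');
}

console.log(buildCommandLine('generate-pdf.sh', ['my report.pdf', "O'Brien"]));
```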
execBinary()
The execBinary() function runs a script and streams its binary output directly to a file or HTTP response. This is essential when working with binary data like images, PDFs, or other generated files.
StructrScript:
${execBinary(response, 'my.pdf.generator')}
${execBinary(myFile, 'convert.image', merge('input.png'))}
JavaScript:
$.execBinary($.response, 'my.pdf.generator');
$.execBinary(myFile, 'convert.image', ['input.png']);
When streaming to an HTTP response, ensure the page has the correct content type set and the pageCreatesRawData flag enabled.
Parameter Masking
When passing sensitive values like passwords or API keys, you can mask them in the log output:
JavaScript:
$.exec('my.script', [
'username',
{ value: 'SECRET_API_KEY', mask: true }
]);
The masked parameter appears as *** in the log while the actual value is passed to the script.
Log Behavior
You can control how script execution is logged by passing a third parameter:
| Value | Behavior |
|---|---|
| 0 | Do not log the command line |
| 1 | Log only the script path |
| 2 | Log script path and parameters (with masking applied) |
The default is controlled by the log.scriptprocess.commandline setting.
Security Considerations
Host script execution is a powerful feature that requires careful handling.
- Only scripts registered in structr.conf can be executed. This configuration-based allowlist prevents code injection attacks. Even if an attacker gains access to your application logic, they cannot execute arbitrary commands.
- By default, script paths cannot be symbolic links. This prevents attacks where a symlink points to a sensitive file outside the scripts folder.
- Paths containing .. are rejected by default, preventing access to files outside the scripts folder.
- Always validate and sanitize any user input before passing it as a parameter to a script. Never construct script parameters directly from user input without validation.
- Run Structr with a user account that has only the permissions necessary for its operation. Scripts execute with the same permissions as the Structr process.
Best Practices
- When passing parameters with special characters or receiving output that may contain special characters, encode the data as Base64. This prevents issues with quoting and escaping.
- Combine host scripts with the Cron service to run them on a schedule. Register the script in structr.conf, then call it from a scheduled function.
- Scripts should do one thing well. Complex logic is better implemented in Structr’s scripting environment where you have access to the full API.
- Use the log behavior parameter to avoid logging sensitive data while still maintaining an audit trail for debugging.
Example for Base64 encoding:
// Encode parameters
$.exec('my.script', [$.base64_encode(complexInput)]);
// Decode output
let result = $.base64_decode($.exec('my.script'));
let data = $.from_json(result);
Related Topics
- Scheduled Tasks - Running scripts automatically on a schedule
- Configuration - Setting up structr.conf
RSS Feeds
Structr can fetch and store content from RSS and Atom feeds. Create a DataFeed object with a feed URL, and Structr retrieves entries and stores them as FeedItem objects. You can configure retention limits and add custom processing logic when new items arrive.
Quick Start
To subscribe to a feed:
{
let feed = $.create('DataFeed', {
name: 'Tech News',
url: 'https://example.com/feed.xml'
});
}
When a DataFeed is created, Structr immediately fetches the feed and creates FeedItem objects for each entry. Access the items via the items property:
{
let feed = $.first($.find('DataFeed', 'name', 'Tech News'));
for (let item of feed.items) {
$.log(item.name + ' - ' + item.pubDate);
}
}
DataFeed Properties
| Property | Type | Description |
|---|---|---|
url | String | Feed URL (required) |
name | String | Display name for the feed |
description | String | Feed description (populated automatically from feed metadata) |
feedType | String | Feed format (e.g., rss_2.0, atom_1.0 - populated automatically) |
updateInterval | Long | Milliseconds between updates (used by updateIfDue()) |
lastUpdated | Date | Timestamp of the last successful fetch |
maxItems | Integer | Maximum number of items to retain |
maxAge | Long | Maximum age of items in milliseconds |
items | List | Collection of FeedItem objects |
FeedItem Properties
Each feed entry is stored as a FeedItem with these properties:
| Property | Type | Description |
|---|---|---|
name | String | Entry title |
url | String | Link to the original content |
author | String | Author name |
description | String | Entry summary or excerpt |
pubDate | Date | Publication date |
updatedDate | Date | Last modification date |
comments | String | URL to comments |
contents | List | Full content blocks (FeedItemContent objects) |
enclosures | List | Attached media (FeedItemEnclosure objects) |
feed | DataFeed | Reference to the parent feed |
FeedItemContent Properties
Some feeds include full content in addition to the description. These are stored as FeedItemContent objects:
| Property | Type | Description |
|---|---|---|
value | String | The content text or HTML |
mode | String | Content mode (e.g., escaped, xml) |
itemType | String | MIME type of the content |
item | FeedItem | Reference to the parent item |
FeedItemEnclosure Properties
Feeds often include media attachments like images, audio files, or videos. These are stored as FeedItemEnclosure objects:
| Property | Type | Description |
|---|---|---|
url | String | URL to the media file |
enclosureType | String | MIME type (e.g., image/jpeg, audio/mpeg) |
enclosureLength | Long | File size in bytes |
item | FeedItem | Reference to the parent item |
Updating Feeds
Manual Update
Trigger an immediate update with updateFeed():
{
let feed = $.first($.find('DataFeed', 'name', 'News Feed'));
feed.updateFeed();
}
Conditional Update
The updateIfDue() method checks whether enough time has passed since lastUpdated based on updateInterval. If an update is due, it fetches new entries:
{
let feed = $.first($.find('DataFeed', 'name', 'News Feed'));
feed.updateIfDue();
}
This is useful when called from a scheduled task that runs more frequently than individual feed intervals.
Automatic Updates via CronService
Structr includes a built-in UpdateFeedTask that periodically checks all feeds. To enable it, configure the CronService in structr.conf:
# Specifying the feed update task for the CronService
CronService.tasks = org.structr.feed.cron.UpdateFeedTask
# Setting up the execution interval in cron time format
# In this example the web feed will be updated every 5 minutes
org.structr.feed.cron.UpdateFeedTask.cronExpression = 5 * * * * *
After changing the configuration:
- Stop the Structr instance
- Edit structr.conf with the settings above
- Restart the instance
The UpdateFeedTask calls updateIfDue() on each DataFeed. Configure updateInterval on individual feeds to control how often they actually fetch new content:
{
$.create('DataFeed', {
name: 'Hourly News',
url: 'https://example.com/news.xml',
updateInterval: 3600000 // Only fetch if last update was more than 1 hour ago
});
}
Even if the CronService runs every 5 minutes, a feed with updateInterval set to one hour will only fetch when at least one hour has passed since lastUpdated.
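The due check described above amounts to comparing the elapsed time against updateInterval. The sketch below captures that decision in plain JavaScript; the exact comparison Structr performs internally is an assumption.

```javascript
// Sketch of the updateIfDue() decision described above: fetch only when at
// least updateInterval ms have passed since lastUpdated. The exact internal
// comparison Structr uses is an assumption here.
function isUpdateDue(lastUpdated, updateInterval, now) {
  if (!lastUpdated) return true; // never fetched yet
  return now - lastUpdated >= updateInterval;
}

const oneHour = 3600000;
const lastUpdated = Date.now() - 10 * 60 * 1000; // fetched 10 minutes ago
console.log(isUpdateDue(lastUpdated, oneHour, Date.now())); // false: not due yet
```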
Retention Control
By default, Structr keeps all feed items indefinitely. Use maxItems and maxAge to automatically remove old entries. Cleanup runs automatically after each feed update.
Limiting by Count
Keep only the most recent entries:
{
$.create('DataFeed', {
name: 'Headlines',
url: 'https://example.com/headlines.xml',
maxItems: 50 // Keep only the 50 most recent items
});
}
Limiting by Age
Remove entries older than a specified duration:
{
$.create('DataFeed', {
name: 'Daily Digest',
url: 'https://example.com/daily.xml',
maxAge: 604800000 // Keep items for 7 days (7 * 24 * 60 * 60 * 1000)
});
}
Manual Cleanup
You can also trigger cleanup manually:
{
let feed = $.first($.find('DataFeed', 'name', 'Active Feed'));
feed.cleanUp();
}
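The combined effect of the two retention settings can be sketched in plain JavaScript. Newest-first ordering for maxItems and strict age comparison for maxAge are assumptions about the exact rule; the configured effect is the same.

```javascript
// Sketch of the retention rules: keep at most maxItems newest entries and
// drop anything older than maxAge ms. Whether Structr applies exactly this
// ordering internally is an assumption; the configured effect is the same.
function pruneItems(items, { maxItems, maxAge }, now) {
  let kept = [...items].sort((a, b) => b.pubDate - a.pubDate); // newest first
  if (maxAge != null) {
    kept = kept.filter(item => now - item.pubDate <= maxAge);
  }
  if (maxItems != null) {
    kept = kept.slice(0, maxItems);
  }
  return kept;
}

const day = 86400000;
const now = 10 * day;
const items = [
  { name: 'old', pubDate: now - 9 * day },
  { name: 'recent', pubDate: now - 1 * day },
  { name: 'newest', pubDate: now }
];
console.log(pruneItems(items, { maxItems: 2, maxAge: 7 * day }, now).map(i => i.name));
// → [ 'newest', 'recent' ]
```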
DataFeed Methods
| Method | Description |
|---|---|
updateFeed() | Fetches new entries from the remote feed URL and runs cleanup afterward |
updateIfDue() | Checks if an update is due based on lastUpdated and updateInterval, and fetches new items if necessary |
cleanUp() | Removes old feed items based on the configured maxItems and maxAge properties |
Processing New Items
To automatically process incoming feed items, add an onCreate method to the FeedItem type. This is useful for setting visibility, creating notifications, or triggering other actions.
Making Items Visible
By default, newly created FeedItem objects are not visible to public or authenticated users. Set the visibility flags in the onCreate method:
// onCreate method on FeedItem
{
$.this.visibleToPublicUsers = true;
$.this.visibleToAuthenticatedUsers = true;
}
Custom Processing
You can extend the onCreate method with additional logic:
// onCreate method on FeedItem
{
// Make visible
$.this.visibleToPublicUsers = true;
$.this.visibleToAuthenticatedUsers = true;
// Create a notification for items from a specific feed
if ($.this.feed.name === 'Critical Alerts') {
$.create('Notification', {
title: 'Alert: ' + $.this.name,
message: $.this.description,
sourceUrl: $.this.url
});
}
}
Examples
News Aggregator
Collect news from multiple sources:
{
let sources = [
{ name: 'Tech News', url: 'https://technews.example.com/feed.xml' },
{ name: 'Business', url: 'https://business.example.com/rss' },
{ name: 'Science', url: 'https://science.example.com/atom.xml' }
];
for (let source of sources) {
$.create('DataFeed', {
name: source.name,
url: source.url,
updateInterval: 1800000, // 30 minutes
maxItems: 100
});
}
}
Finding Podcast Episodes
Extract audio files from a podcast feed:
{
let feed = $.first($.find('DataFeed', 'name', 'My Podcast'));
let episodes = [];
for (let item of feed.items) {
let audioEnclosure = null;
for (let enc of item.enclosures) {
if (enc.enclosureType === 'audio/mpeg') {
audioEnclosure = enc;
break;
}
}
episodes.push({
title: item.name,
published: item.pubDate,
description: item.description,
audioUrl: audioEnclosure ? audioEnclosure.url : null,
fileSize: audioEnclosure ? audioEnclosure.enclosureLength : null
});
}
return episodes;
}
Recent Items Across All Feeds
Get items from the last 24 hours across all feeds:
{
let yesterday = new Date(Date.now() - 86400000);
let feeds = $.find('DataFeed');
let recentItems = [];
for (let feed of feeds) {
for (let item of feed.items) {
if (item.pubDate && item.pubDate.getTime() > yesterday.getTime()) {
recentItems.push({
feedName: feed.name,
title: item.name,
url: item.url,
published: item.pubDate
});
}
}
}
return recentItems;
}
Duplicate Detection
Structr detects duplicate entries using the item’s URL. When fetching a feed, items with URLs that already exist in the feed’s item list are skipped. This prevents duplicate entries even if the feed is fetched multiple times.
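The URL-based check can be sketched as a simple set lookup over the feed's existing items:

```javascript
// Sketch of the URL-based duplicate detection described above: fetched
// entries whose URL already exists among the feed's items are skipped.
function newEntries(existingItems, fetchedEntries) {
  const known = new Set(existingItems.map(item => item.url));
  return fetchedEntries.filter(entry => !known.has(entry.url));
}

const existing = [{ url: 'https://example.com/a' }];
const fetched = [
  { url: 'https://example.com/a', name: 'Already stored' },
  { url: 'https://example.com/b', name: 'New entry' }
];
console.log(newEntries(existing, fetched).map(e => e.name)); // → [ 'New entry' ]
```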
Supported Feed Formats
Structr uses the ROME library to parse feeds and supports:
- RSS 0.90, 0.91, 0.92, 0.93, 0.94, 1.0, 2.0
- Atom 0.3, 1.0
The feed format is detected automatically and stored in the feedType property.
Related Topics
- Scheduled Tasks - Running feed updates on a schedule
- Business Logic - Processing feed items in lifecycle methods
Spatial
Structr provides support for geographic data. This includes a built-in Location type with distance-based queries, geocoding to convert addresses to coordinates, geometry processing for polygons and spatial analysis, and import capabilities for standard geospatial file formats.
Note: The geometry functions require the geo-transformations module.
The Location Type
Structr includes a built-in Location type for storing geographic coordinates. This type has two key properties:
| Property | Type | Description |
|---|---|---|
latitude | Double | Latitude coordinate (WGS84) |
longitude | Double | Longitude coordinate (WGS84) |
Creating Locations
Create Location objects like any other Structr type:
{
// Create a location for Frankfurt
let frankfurt = $.create('Location', {
name: 'Frankfurt Office',
latitude: 50.1109,
longitude: 8.6821
});
}
You can also extend the Location type or add these properties to your own types. Any type with latitude and longitude properties can use distance-based queries.
Distance-Based Queries
The withinDistance predicate finds objects within a specified radius of a point. The distance is measured in kilometers.
{
// Find all locations within 25 km of a point
let nearbyLocations = $.find('Location', $.withinDistance(50.1109, 8.6821, 25));
$.log('Found ' + $.size(nearbyLocations) + ' locations');
}
This works with any type that has latitude and longitude properties:
{
// Find stores within 10 km
let nearbyStores = $.find('Store', $.withinDistance(customerLat, customerLon, 10));
// Find events within 50 km
let nearbyEvents = $.find('Event', $.withinDistance(userLat, userLon, 50));
}
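The underlying math of such a radius check can be sketched with the haversine formula for great-circle distance. This is an illustration of the geometry only; Structr's internal implementation of withinDistance may differ.

```javascript
// Sketch: great-circle distance in km via the haversine formula. This
// illustrates what a withinDistance(lat, lon, radiusKm) check computes;
// Structr's internal implementation may differ.
function haversineKm(lat1, lon1, lat2, lon2) {
  const toRad = deg => deg * Math.PI / 180;
  const R = 6371; // mean Earth radius in km
  const dLat = toRad(lat2 - lat1);
  const dLon = toRad(lon2 - lon1);
  const a = Math.sin(dLat / 2) ** 2 +
            Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(a));
}

// Frankfurt to Berlin is roughly 420-430 km
const km = haversineKm(50.1109, 8.6821, 52.52, 13.405);
console.log(km.toFixed(1));
```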
Distance Queries via REST API
The REST API supports distance-based queries using request parameters. Any type with latitude and longitude properties (typically by extending the built-in Location type) can be queried this way.
Using coordinates directly:
curl "http://localhost:8082/structr/rest/Hotel?_latlon=50.1167851,8.7265218&_distance=0.1"
The _latlon parameter specifies the search origin as latitude,longitude, and _distance specifies the search radius in kilometers.
Using address components:
curl "http://localhost:8082/structr/rest/Store?_country=Germany&_city=Frankfurt&_street=Hauptstraße&_distance=5"
Using combined location string:
curl "http://localhost:8082/structr/rest/Restaurant?_location=Germany,Berlin,Unter%20den%20Linden&_distance=2"
The _location parameter accepts the format country,city,street.
Request Parameters for Distance Search:
| Parameter | Description |
|---|---|
_latlon | Search origin as latitude,longitude |
_distance | Search radius in kilometers |
_location | Search origin as country,city,street |
_country | Country (used with other address fields) |
_city | City (used with other address fields) |
_street | Street (used with other address fields) |
_postalCode | Postal code (used with other address fields) |
When using address-based parameters (_location or the individual fields), Structr geocodes the address using the configured provider and searches for objects within the specified radius. Geocoded addresses are cached to minimize API calls.
Geocoding
Geocoding converts addresses into geographic coordinates. Structr uses geocoding automatically when you use the distance parameter in REST queries.
Configuration
Configure geocoding in the Configuration Interface:
| Setting | Description |
|---|---|
geocoding.provider | Full class name of the provider |
geocoding.apikey | API key (required for Google and Bing) |
geocoding.language | Language for results (e.g., en, de) |
Supported Providers
| Provider | Class Name | API Key |
|---|---|---|
| Google Maps | org.structr.common.geo.GoogleGeoCodingProvider | Required |
| Bing Maps | org.structr.common.geo.BingGeoCodingProvider | Required |
| OpenStreetMap | org.structr.common.geo.OSMGeoCodingProvider | Not required |
Caching
Geocoding results are automatically cached (up to 10,000 entries) to minimize API calls and improve performance. The cache persists for the lifetime of the Structr process.
Working with Geometries
For more complex geographic data like polygons, boundaries, or routes, create a custom Geometry type that stores WKT (Well-Known Text) representations.
Creating a Geometry Type
In the Schema area, create a type with these properties:
| Property | Type | Description |
|---|---|---|
wkt | String | WKT representation of the geometry |
name | String | Name or identifier |
Add a schema method getGeometry to convert WKT to a geometry object:
// Schema method: getGeometry
{
return $.wktToGeometry($.this.wkt);
}
Add a method contains to check if a point is inside:
// Schema method: contains (parameter: point)
{
let point = $.retrieve('point');
let geometry = $.this.getGeometry();
let pointGeom = $.wktToGeometry('POINT(' + point.longitude + ' ' + point.latitude + ')');
return geometry.contains(pointGeom);
}
Creating Geometries
{
// Create a polygon
let polygon = $.create('Geometry', {
name: 'Delivery Zone A',
wkt: 'POLYGON ((8.6 50.0, 8.8 50.0, 8.8 50.2, 8.6 50.2, 8.6 50.0))'
});
// Create a line
let route = $.create('Geometry', {
name: 'Route 1',
wkt: 'LINESTRING (8.68 50.11, 8.69 50.12, 8.70 50.13)'
});
}
Point-in-Polygon Queries
Check if a point is inside a geometry:
{
let point = { latitude: 50.1, longitude: 8.7 };
// Check against a single geometry
let zone = $.first($.find('Geometry', 'name', 'Delivery Zone A'));
if (zone.contains(point)) {
$.log('Point is inside delivery zone');
}
// Find all geometries containing a point
let geometries = $.find('Geometry');
let matching = [];
for (let geom of geometries) {
if (geom.contains(point)) {
matching.push(geom);
}
}
}
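For simple polygons, a contains() check boils down to a point-in-polygon test. The classic ray-casting algorithm is sketched below on [lon, lat] pairs as an illustration; the geometry library behind wktToGeometry handles the edge cases this sketch ignores.

```javascript
// Sketch: the classic ray-casting point-in-polygon test, illustrating what
// a contains() check does for simple polygons. Coordinates are [x, y]
// (i.e. [lon, lat]) pairs, matching the WKT axis order used above.
function pointInPolygon([x, y], ring) {
  let inside = false;
  for (let i = 0, j = ring.length - 1; i < ring.length; j = i++) {
    const [xi, yi] = ring[i];
    const [xj, yj] = ring[j];
    // count crossings of a horizontal ray extending from (x, y)
    if ((yi > y) !== (yj > y) && x < ((xj - xi) * (y - yi)) / (yj - yi) + xi) {
      inside = !inside;
    }
  }
  return inside;
}

// The 'Delivery Zone A' polygon from the example above
const zone = [[8.6, 50.0], [8.8, 50.0], [8.8, 50.2], [8.6, 50.2]];
console.log(pointInPolygon([8.7, 50.1], zone)); // → true
console.log(pointInPolygon([9.0, 50.1], zone)); // → false
```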
Geometry Functions
Structr provides functions for creating, parsing, and analyzing geometries.
Creating Geometries
| Function | Description |
|---|---|
coordsToPoint(coord) | Create Point from [x, y], {x, y}, or {latitude, longitude} |
coordsToLineString(coords) | Create LineString from array of coordinates |
coordsToPolygon(coords) | Create Polygon from array of coordinates |
coordsToMultipoint(coords) | Create MultiPoint from array of coordinates |
{
let point = $.coordsToPoint([8.6821, 50.1109]);
let point2 = $.coordsToPoint({ latitude: 50.1109, longitude: 8.6821 });
let line = $.coordsToLineString([[8.68, 50.11], [8.69, 50.12], [8.70, 50.13]]);
let polygon = $.coordsToPolygon([
[8.6, 50.0], [8.8, 50.0], [8.8, 50.2], [8.6, 50.2], [8.6, 50.0]
]);
}
Parsing Geometries
| Function | Description |
|---|---|
wktToGeometry(wkt) | Parse WKT string to geometry |
wktToPolygons(wkt) | Extract all polygons from WKT |
{
let point = $.wktToGeometry('POINT (8.6821 50.1109)');
let polygon = $.wktToGeometry('POLYGON ((8.6 50.0, 8.8 50.0, 8.8 50.2, 8.6 50.2, 8.6 50.0))');
}
Calculations
| Function | Description |
|---|---|
distance(point1, point2) | Geodetic distance in meters |
azimuth(point1, point2) | Bearing in degrees |
getCoordinates(geometry) | Extract coordinates as array |
{
let frankfurt = $.coordsToPoint([8.6821, 50.1109]);
let berlin = $.coordsToPoint([13.405, 52.52]);
let distanceMeters = $.distance(frankfurt, berlin);
$.log('Distance: ' + (distanceMeters / 1000).toFixed(1) + ' km');
let bearing = $.azimuth(frankfurt, berlin);
$.log('Bearing: ' + bearing.toFixed(1) + '°');
}
Coordinate Conversion
| Function | Description |
|---|---|
latLonToUtm(lat, lon) | Convert to UTM string |
utmToLatLon(utmString) | Convert UTM to lat/lon object |
convertGeometry(srcCRS, dstCRS, geom) | Transform coordinate system |
{
// Lat/Lon to UTM
let utm = $.latLonToUtm(53.855, 8.0817);
// Result: "32U 439596 5967780"
// UTM to Lat/Lon
let coords = $.utmToLatLon('32U 439596 5967780');
// Result: { latitude: 53.855, longitude: 8.0817 }
// Transform between coordinate systems
let wgs84Point = $.wktToGeometry('POINT (8.6821 50.1109)');
let utmPoint = $.convertGeometry('EPSG:4326', 'EPSG:32632', wgs84Point);
}
File Import
GPX Import
The importGpx function parses GPS track files:
{
let file = $.first($.find('File', 'name', 'track.gpx'));
let gpxData = $.importGpx($.getContent(file, 'utf-8'));
// Process waypoints
if (gpxData.waypoints) {
for (let wp of gpxData.waypoints) {
$.create('Waypoint', {
name: wp.name,
latitude: wp.latitude,
longitude: wp.longitude,
altitude: wp.altitude
});
}
}
// Process tracks
if (gpxData.tracks) {
for (let track of gpxData.tracks) {
let points = [];
for (let segment of track.segments) {
for (let point of segment.points) {
points.push([point.longitude, point.latitude]);
}
}
$.create('Route', {
name: track.name,
wkt: $.coordsToLineString(points).toString()
});
}
}
}
Shapefile Import
The readShapefile function reads ESRI Shapefiles:
{
let result = $.readShapefile('/data/regions.shp');
$.log('Fields: ' + result.fields.join(', '));
for (let item of result.geometries) {
$.create('Region', {
name: item.metadata.NAME,
wkt: item.wkt,
population: item.metadata.POPULATION
});
}
}
The function automatically reads the associated .dbf file for attributes and .prj file for coordinate reference system, transforming coordinates to WGS84.
Map Layers
For applications with multiple geometry sources (e.g., different Shapefiles), organize geometries into layers:
{
// Create a map layer
let layer = $.create('MapLayer', {
name: 'Administrative Boundaries',
description: 'Country and state boundaries'
});
// Import shapefile into layer
let result = $.readShapefile('/data/boundaries.shp');
for (let item of result.geometries) {
$.create('Geometry', {
mapLayer: layer,
name: item.metadata.NAME,
wkt: item.wkt
});
}
}
Examples
Store Locator
// Schema method on Store: findNearby (parameters: latitude, longitude, radiusKm)
{
let lat = $.retrieve('latitude');
let lon = $.retrieve('longitude');
let radius = $.retrieve('radiusKm');
let stores = $.find('Store', $.withinDistance(lat, lon, radius));
let customerPoint = $.coordsToPoint([lon, lat]);
let result = [];
for (let store of stores) {
let storePoint = $.coordsToPoint([store.longitude, store.latitude]);
let dist = $.distance(customerPoint, storePoint);
result.push({
store: store,
distanceKm: (dist / 1000).toFixed(1)
});
}
// Sort by distance
result.sort((a, b) => a.distanceKm - b.distanceKm);
return result;
}
Geofencing
// Global schema method: checkDeliveryZone (parameters: latitude, longitude)
{
let lat = $.retrieve('latitude');
let lon = $.retrieve('longitude');
let point = $.wktToGeometry('POINT(' + lon + ' ' + lat + ')');
let zones = $.find('DeliveryZone');
for (let zone of zones) {
let polygon = $.wktToGeometry(zone.wkt);
if (polygon.contains(point)) {
return {
inZone: true,
zoneName: zone.name,
deliveryFee: zone.deliveryFee
};
}
}
return { inZone: false };
}
Related Topics
- Building a Spatial Index - Tutorial for optimizing point-in-polygon queries
- REST API - Distance queries with _latlon, _distance, and address parameters
- Schema - Creating custom types for geographic data
- Scheduled Tasks - Batch geocoding and index building