Reference
This is the complete reference for the JSONQL format, covering all query types, data concepts, and their JSON structures.
Each section provides detailed field specifications and examples to help you construct valid JSONQL queries for your analytical needs.
Queries
Queries define the type of data operation you want to perform and determine how JSONQL processes and returns your data.
JSONQL supports three distinct query types, each designed for different analytical use cases. The query type determines which combination of concepts (measures, dimensions, columns, filters, joins) can be used and how the data is processed. Each query type serves specific purposes and has different validation rules and behaviors.
Raw SQL Queries
Raw SQL queries provide maximum flexibility by allowing you to execute custom SQL directly, without the structured JSONQL patterns.
Use raw SQL queries when you need to leverage database-specific features, execute complex custom queries, or perform operations that don’t fit the structured JSONQL patterns. They bypass JSONQL’s built-in concepts and validation, giving you direct database access.
JSON Format
Raw SQL queries use a simple structure containing the SQL string to execute.
Raw SQL query object fields:
| Field | Type | Required | Description | Possible Values |
|---|---|---|---|---|
type | string | Required | Query type identifier | "sql" (literal value) |
connection_id | string | Optional* | ID of saved connection to use | Must start with "conn_" (e.g., "conn_prod_db") |
connection | object | Optional* | Connection details object | Connection object (see Connections section) |
sql | string | Required | The SQL query to execute | Any valid SQL statement |
*Note: Either connection_id or a connection object must be provided, but not both - see the Connections section for details.
Example:
```json
{
  "type": "sql",
  "sql": "SELECT fieldName FROM TableName"
}
```
Aggregate Queries
Aggregate queries operate in analytics/OLAP mode with measures and dimensions, automatically handling grouping and calculations.
Use aggregate queries for analytical reporting, dashboards, and business intelligence. They’re perfect for answering questions like “What’s the total sales by region?” or “How do monthly signups compare over time?”.
Aggregate queries:
- require at least one measure
- automatically create GROUP BY clauses for all dimensions
- apply time truncation and date filtering for time dimensions
- process filters before aggregation and grouping
- support pagination through limit and offset
JSON Format
Aggregate queries use measures and dimensions with optional filters, joins, and result controls.
Aggregate query object fields:
| Field | Type | Required | Description | Possible Values |
|---|---|---|---|---|
type | string | Required | Query type identifier | "aggregate" (literal value) |
connection_id | string | Optional* | ID of saved connection to use | Must start with "conn_" (e.g., "conn_prod_db") |
connection | object | Optional* | Connection details object | Connection object (see Connections section) |
measures | array | Required | Aggregation functions to calculate | Array of measure objects (see Measures section) |
dimensions | array | Optional | Grouping attributes for data segmentation | Array of dimension objects (see Dimensions section) |
joins | array | Optional | Table relationships for multi-table queries | Array of join objects (see Joins section) |
filters | array | Optional | Conditions to constrain the dataset | Array of filter objects (see Filters section) |
limit | number or string | Optional | Maximum number of results to return | Non-negative integer or Reeflow SQL function (e.g., "RF_PARAM('page_size')") |
offset | number or string | Optional | Number of results to skip (for pagination) | Non-negative integer or Reeflow SQL function (e.g., "RF_PARAM('offset')") |
order_by | array | Optional | Sorting specification | Array of order objects with field and direction (see Order By section) |
*Note: Either connection_id or a connection object must be provided, but not both - see the Connections section for details.
Example:
```json
{
  "type": "aggregate",
  "measures": [
    { "column": "Orders.amount", "type": "sum" }
  ],
  "dimensions": [
    { "column": "Users.city", "type": "categorical" },
    {
      "column": "Orders.createdAt",
      "type": "time",
      "granularity": "month",
      "date_range": ["2024-01-01", "2024-12-31"]
    }
  ],
  "joins": [
    {
      "from": "Orders",
      "to": "Users",
      "from_column": "Orders.user_id",
      "to_column": "Users.id"
    }
  ]
}
```
Table Queries
Table queries return raw, individual records without any aggregation or grouping operations.
Use table queries for data tables, detailed reports, record listings, and when you need to export or display raw data. They’re ideal for showing individual transactions, user lists, or any scenario where you need to see actual records rather than aggregated summaries.
Table queries:
- do not support measures or dimensions (use columns instead)
- do not allow time dimensions (use regular columns for date fields)
- support pagination through limit and offset
- should include unique identifiers (like id) for proper record display
JSON Format
Table queries use columns to specify which fields to retrieve, with optional filters, joins, and result controls.
Table query object fields:
| Field | Type | Required | Description | Possible Values |
|---|---|---|---|---|
type | string | Required | Query type identifier | "table" (literal value) |
connection_id | string | Optional* | ID of saved connection to use | Must start with "conn_" (e.g., "conn_prod_db") |
connection | object | Optional* | Connection details object | Connection object (see Connections section) |
columns | array | Optional | Fields to select in the result | Array of column objects (see Columns section) |
distinct | boolean | Optional | When true, applies SELECT DISTINCT to deduplicate result rows | true or false (defaults to false) |
joins | array | Optional | Table relationships for multi-table queries | Array of join objects (see Joins section) |
filters | array | Optional | Conditions to constrain the dataset | Array of filter objects (see Filters section) |
limit | number or string | Optional | Maximum number of results to return | Non-negative integer or Reeflow SQL function (e.g., "RF_PARAM('page_size')") |
offset | number or string | Optional | Number of results to skip (for pagination) | Non-negative integer or Reeflow SQL function (e.g., "RF_PARAM('offset')") |
order_by | array | Optional | Sorting specification | Array of order objects with field and direction (see Order By section) |
*Note: Either connection_id or a connection object must be provided, but not both - see the Connections section for details.
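The distinct field described above deduplicates result rows. As a minimal sketch (reusing this document's illustrative Users table), a query returning each city only once could look like:

```json
{
  "type": "table",
  "distinct": true,
  "columns": [{ "name": "Users.city" }]
}
```

Per the field description, this applies SELECT DISTINCT to the generated query, so duplicate city values are collapsed.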
Example:
```json
{
  "type": "table",
  "columns": [
    { "name": "Users.name" },
    { "name": "Orders.amount" }
  ],
  "filters": [
    { "column": "Orders.status", "operator": "equals", "values": ["completed"] }
  ],
  "limit": 100,
  "offset": 0,
  "order_by": [
    { "column": "Orders.createdAt", "direction": "desc" }
  ]
}
```
Measures
Measures represent the quantitative aspects of your data that you want to analyze - the numbers you want to calculate, count, sum, or aggregate in some way. They form the core of analytical queries and are essential when you need to answer questions like "how much", "how many", or "what's the average".
Use measures in aggregate queries when you want to perform calculations across groups of data, such as total sales by region or average order value by month.
Measures always involve aggregation functions applied to specified fields and automatically trigger GROUP BY clauses for any dimensions in the query, making them the foundation of OLAP-style analytics queries. They cannot be used in GROUP BY clauses themselves.
JSON Format
To define measures in JSONQL, create an array of measure objects where each object specifies a column (the data field to aggregate) and a type (the aggregation function to apply).
Measure object fields:
| Field | Type | Required | Description | Possible Values |
|---|---|---|---|---|
column | string | Optional* | The database field to aggregate | TableName.fieldName format (e.g., Orders.amount) |
type | string | Required | The aggregation function to apply | count, sum, avg, min, max, count_distinct |
*Note: column is optional for count operations where you’re counting all rows.
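Since column is optional for count, a whole-table row count can be sketched as:

```json
"measures": [
  { "type": "count" }
]
```

Per the Supported Types table, this maps to COUNT(*) rather than COUNT(field).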
Example:
```json
"measures": [
  { "column": "Orders.amount", "type": "sum" },
  { "column": "Orders.id", "type": "count" },
  { "column": "Orders.amount", "type": "avg" },
  { "column": "Users.id", "type": "count_distinct" }
]
```
Supported Types
These are the aggregation functions available for measures:
| Type | SQL Function | Description |
|---|---|---|
count | COUNT(field) or COUNT(*) | Counts the number of non-null values in the field |
sum | SUM(field) | Adds up all numeric values in the field |
avg | AVG(field) | Calculates the arithmetic mean of numeric values in the field |
min | MIN(field) | Finds the smallest value in the field |
max | MAX(field) | Finds the largest value in the field |
count_distinct | COUNT(DISTINCT field) | Counts the number of unique values in the field |
SQL Mapping
Measures become aggregation functions in the SELECT clause of the generated SQL query:
```sql
SELECT
  -- Measures: aggregated values calculated from the data
  SUM(orders.amount) AS orders_amount_sum,              -- sum measure
  COUNT(orders.id) AS orders_id_count,                  -- count measure
  AVG(orders.amount) AS orders_amount_avg,              -- avg measure
  COUNT(DISTINCT users.id) AS users_id_count_distinct   -- count_distinct measure
FROM orders
JOIN users ON orders.user_id = users.id
```
Dimensions
Dimensions are the categorical and temporal attributes that define how you want to slice, dice, and organize your data for analysis. They answer questions like "by what", "when", and "where" in your analytical queries.
Use dimensions when you want to break down your measures into meaningful segments - for example, analyzing sales by product category, revenue by geographic region, or performance over time periods. Dimensions come in two types: categorical dimensions for grouping by attributes like status or category, and time dimensions for temporal analysis with built-in date handling and granularity controls.
In aggregate queries, dimensions are automatically added to GROUP BY clauses when used with measures and can include foreign key references through predefined joins. They work in tandem with measures to create insightful analytical breakdowns.
Categorical Dimensions
Categorical dimensions group your data by discrete, non-numeric attributes like status, category, region, or any other classification field.
Use categorical dimensions when you want to segment your analysis by qualitative attributes - for example, breaking down sales by product category, analyzing performance by department, or comparing metrics across different user types.
JSON Format
Categorical dimensions use an object with type: "categorical" and a column property.
Categorical dimension object fields:
| Field | Type | Required | Description | Possible Values |
|---|---|---|---|---|
column | string | Required | The database field to group by | TableName.fieldName format (e.g., Users.city) |
type | string | Required | The dimension type | "categorical" (literal value) |
Example:
```json
"dimensions": [
  { "column": "Users.city", "type": "categorical" },
  { "column": "Orders.status", "type": "categorical" },
  { "column": "Products.category", "type": "categorical" }
]
```
SQL Mapping
Categorical dimensions are added to the SELECT clause and the GROUP BY clause of a SQL query:
```sql
-- With measures (analytics mode)
SELECT
  users.city,      -- categorical dimension
  orders.status,   -- categorical dimension
  COUNT(*) AS orders_count
FROM orders
JOIN users ON orders.user_id = users.id
GROUP BY users.city, orders.status  -- categorical dimensions in GROUP BY
```

```sql
-- Without measures (distinct values)
SELECT DISTINCT
  users.city,     -- categorical dimension
  orders.status   -- categorical dimension
FROM orders
JOIN users ON orders.user_id = users.id
```
Time Dimensions
Time dimensions enable temporal analysis by automatically handling date truncation, granularity control, and time-based filtering for chronological data.
Use time dimensions when you want to analyze trends over time - for example, tracking sales performance by month, analyzing user activity by week, or comparing quarterly results.
Time dimensions automatically apply date truncation based on granularity, support date range filtering, and ensure results are ordered chronologically.
They provide built-in time intelligence features like automatic date range filtering and can fill gaps in time series data (optional feature).
JSON Format
Time dimensions use type: "time" with additional time-specific properties.
Time dimension object fields:
| Field | Type | Required | Description | Possible Values |
|---|---|---|---|---|
column | string | Required | The database date/timestamp field to group by | TableName.fieldName format (e.g., Orders.createdAt) |
type | string | Required | The dimension type | "time" (literal value) |
granularity | string | Required | How dates are truncated and grouped | "day", "week", "month", "quarter", "year" |
date_range | array | Optional | Date range filter as two ISO date strings | ["YYYY-MM-DD", "YYYY-MM-DD"] format |
Granularity options:
- "day" - Groups by individual days (YYYY-MM-DD)
- "week" - Groups by weeks (Monday to Sunday)
- "month" - Groups by calendar months (YYYY-MM)
- "quarter" - Groups by quarters (Q1, Q2, Q3, Q4)
- "year" - Groups by calendar years (YYYY)
Example:
```json
"dimensions": [
  {
    "column": "Orders.createdAt",
    "type": "time",
    "granularity": "month",
    "date_range": ["2024-01-01", "2024-12-31"]
  }
]
```
SQL Mapping
Time dimensions become date truncation functions in the SELECT and GROUP BY clauses, with optional date range filtering in the WHERE clause:
```sql
SELECT
  DATE_TRUNC('month', orders.created_at) AS month,  -- time dimension with truncation
  COUNT(*) AS orders_count
FROM orders
WHERE orders.created_at >= '2024-01-01'   -- date range filter from time dimension
  AND orders.created_at <= '2024-12-31'   -- date range filter from time dimension
GROUP BY DATE_TRUNC('month', orders.created_at)  -- time dimension in GROUP BY
ORDER BY month  -- chronological ordering
```
Columns
Columns define the specific fields you want to retrieve when working in table mode, where you need raw, unaggregated data rather than analytical summaries.
Use columns when you want to display individual records in their original form - for example, showing a list of customer orders with their details, or displaying user profiles with their attributes. Columns are the table mode equivalent of dimensions but without any grouping behavior, making them perfect for creating data tables, detailed reports, or when you need to export raw data.
Unlike measures and dimensions, columns simply select fields without applying any aggregation or grouping operations, returning raw field values as-is and giving you direct access to your underlying data. They can include foreign key references through predefined joins.
JSON Format
Columns support two formats: structured columns for simple field references, and raw expressions for complex SQL expressions.
Structured Columns
Structured columns use a simple object with a name property to reference a database field.
Structured column object fields:
| Field | Type | Required | Description | Possible Values |
|---|---|---|---|---|
name | string | Required | The database field to select | TableName.fieldName format (e.g., Users.city) |
label | string | Optional | Display label for the column | Any string (e.g., "City") |
Example:
```json
"columns": [
  { "name": "Users.city" },
  { "name": "Orders.status", "label": "Order Status" }
]
```
Raw Expression Columns
When you need SQL functions, expressions, or transformations that structured columns don't support, use the raw expression escape hatch. This allows you to embed any valid SQL SELECT expression directly.
Raw column object fields:
| Field | Type | Required | Description | Possible Values |
|---|---|---|---|---|
raw | string | Required | A complete SQL SELECT expression including alias | Any valid SQL expression (e.g., "UPPER(Users.name) AS name_upper") |
label | string | Optional | Display label for the column | Any string (e.g., "Category (Uppercase)") |
Example:
```json
"columns": [
  { "raw": "UPPER(Products.category) AS category_upper", "label": "Category (Uppercase)" },
  { "raw": "COALESCE(Users.nickname, Users.email, 'Anonymous') AS display_name", "label": "Display Name" },
  { "raw": "DATE_TRUNC('month', Orders.created_at) AS order_month" }
]
```
You can mix structured and raw columns in the same query:
```json
"columns": [
  { "name": "Orders.id" },
  { "raw": "UPPER(Orders.status) AS status_upper" },
  { "name": "Orders.total", "label": "Total Amount" },
  { "raw": "DATE_TRUNC('day', Orders.created_at) AS order_date" }
]
```
SQL Mapping
Structured columns are added directly to the SELECT clause, while raw expressions are included as-is:
```sql
SELECT
  users.city,                                     -- structured column
  orders.status,                                  -- structured column
  UPPER(products.category) AS category_upper,     -- raw expression
  COALESCE(users.nickname, 'Anonymous') AS name   -- raw expression
FROM orders
JOIN users ON orders.user_id = users.id
JOIN products ON orders.product_id = products.id
```
Filters
Filters are conditions that constrain your dataset to focus on specific subsets of data, acting as the "WHERE" clause in your analytical queries.
Use filters when you need to narrow down your analysis to specific criteria - for example, analyzing only completed orders, focusing on a particular date range, or examining data from specific regions or user segments. Filters work across all query types and can be applied to any field, whether it’s used as a measure, dimension, or column.
Filters support multiple operators and are applied before any grouping or aggregation occurs, ensuring that your calculations and groupings operate only on the data that meets your criteria.
JSON Format
Filters use objects with column, operator, and values properties.
Filter object fields:
| Field | Type | Required | Description | Possible Values |
|---|---|---|---|---|
column | string | Required | The database field to filter on | TableName.fieldName format (e.g., Orders.status) |
operator | string | Required | The filter operation to perform | See Supported Operators table below |
values | array | Varies* | The values to use in the filter condition | Array of strings (count depends on operator) |
*Note: values requirement varies by operator - see the Supported Operators table for specific requirements.
Example:
```json
"filters": [
  { "column": "Orders.status", "operator": "equals", "values": ["completed"] },
  { "column": "Orders.totalAmount", "operator": "gt", "values": ["100"] },
  { "column": "Users.city", "operator": "in", "values": ["New York", "Los Angeles"] }
]
```
Supported Operators
These are the available operators for filtering data in JSONQL queries. For operators that conceptually support single values but accept multiple values (equals, contains, starts_with, ends_with), multiple values are automatically combined with OR logic.
| Operator | SQL Translation | Values Required | Description |
|---|---|---|---|
equals | field = value or (field = value1 OR field = value2 ...) | One or more | Exact match - multiple values combined with OR |
gt | field > value | Exactly one | Greater than comparison |
gte | field >= value | Exactly one | Greater than or equal to comparison |
lt | field < value | Exactly one | Less than comparison |
lte | field <= value | Exactly one | Less than or equal to comparison |
contains | field LIKE '%value%' or (field LIKE '%value1%' OR ...) | One or more | Text contains substring - multiple values combined with OR |
in | field IN (value1, value2, ...) | One or more | Field matches any value in list (native SQL IN) |
between | field BETWEEN value1 AND value2 | Exactly two | Field is within range (inclusive bounds) |
is_null | field IS NULL | None | Field has null value |
starts_with | field LIKE 'value%' or (field LIKE 'value1%' OR ...) | One or more | Text starts with prefix - multiple values combined with OR |
ends_with | field LIKE '%value' or (field LIKE '%value1' OR ...) | One or more | Text ends with suffix - multiple values combined with OR |
last_n_days | field >= CURRENT_DATE - (N * INTERVAL '1 DAY') | Exactly one (N) | Within last N days |
last_n_months | field >= DATE_TRUNC('month', CURRENT_DATE) - (N * INTERVAL '1 MONTH') | Exactly one (N) | Within last N months |
last_n_quarters | field >= DATE_TRUNC('quarter', CURRENT_DATE) - (N * INTERVAL '3 MONTH') | Exactly one (N) | Within last N quarters |
last_n_years | field >= DATE_TRUNC('year', CURRENT_DATE) - (N * INTERVAL '1 YEAR') | Exactly one (N) | Within last N years |
today | DATE(field) = CURRENT_DATE | None | Matches today’s date |
this_month | field >= DATE_TRUNC('month', CURRENT_DATE) AND field < ... | None | Within current calendar month |
this_quarter | field >= DATE_TRUNC('quarter', CURRENT_DATE) AND field < ... | None | Within current calendar quarter |
this_year | field >= DATE_TRUNC('year', CURRENT_DATE) AND field < ... | None | Within current calendar year |
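A sketch combining several of the operators above (table and column names are illustrative; whether is_null takes an empty values array or omits the field entirely is an assumption here):

```json
"filters": [
  { "column": "Orders.totalAmount", "operator": "between", "values": ["50", "500"] },
  { "column": "Orders.createdAt", "operator": "last_n_days", "values": ["30"] },
  { "column": "Users.deletedAt", "operator": "is_null", "values": [] },
  { "column": "Users.email", "operator": "ends_with", "values": ["@example.com"] }
]
```

All filter conditions in the array are combined with AND, as shown in the SQL Mapping example.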
SQL Mapping
Filters become conditions in the WHERE clause of the generated SQL query:
```sql
SELECT
  SUM(orders.amount) AS total_revenue,
  users.city
FROM orders
JOIN users ON orders.user_id = users.id
WHERE orders.status = 'completed'              -- filter condition
  AND orders.total_amount > 100                -- filter condition
  AND users.city IN ('New York', 'Chicago')    -- filter condition
GROUP BY users.city
```
Joins
Joins enable you to combine data from multiple tables by defining explicit relationships between them, allowing for complex cross-table analysis and data retrieval.
Use joins when your query involves fields from different tables - for example, analyzing order data along with customer information, or combining product details with sales metrics. Joins are required when fields from multiple tables are referenced and must be explicitly defined for security and clarity, with no automatic relationship detection.
Joins support various join types (inner, left, right, full) to handle different data relationship scenarios and are applied before any filtering or grouping occurs. They only support equality joins for security (preventing SQL injection), ensuring predictable query behavior while enabling powerful multi-table analytics.
JSON Format
Joins use objects with from, to, from_column, to_column, and an optional type property.
Join object fields:
| Field | Type | Required | Description | Possible Values |
|---|---|---|---|---|
from | string | Required | Source table name | Table name (e.g., "Orders") |
to | string | Required | Target table name | Table name (e.g., "Users") |
from_column | string | Required | Column in source table for join condition | TableName.columnName format (e.g., "Orders.user_id") |
to_column | string | Required | Column in target table for join condition | TableName.columnName format (e.g., "Users.id") |
type | string | Optional | Join type (defaults to “inner”) | "inner", "left", "right", "full" |
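Using the type field above, a left join that keeps orders even when no matching user exists might look like:

```json
"joins": [
  {
    "from": "Orders",
    "to": "Users",
    "from_column": "Orders.user_id",
    "to_column": "Users.id",
    "type": "left"
  }
]
```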
Example:
```json
"joins": [
  {
    "from": "Orders",
    "to": "Users",
    "from_column": "Orders.user_id",
    "to_column": "Users.id",
    "type": "inner"
  }
]
```
SQL Mapping
Joins become JOIN clauses that connect the primary table to related tables:
```sql
SELECT
  SUM(orders.amount) AS orders_amount_sum,
  users.city
FROM orders                                     -- primary table in FROM clause
INNER JOIN users ON orders.user_id = users.id   -- join definition
GROUP BY users.city
```
Primary Table Detection
When using joins, JSONQL automatically determines which table serves as the primary (base) table in the generated SQL query. Understanding this logic helps you structure your joins correctly.
- Without joins: All fields in columns/measures/dimensions must reference the same table
- With joins: Exactly one table must appear only in from (never in to) - this becomes the primary table
- The primary table becomes the base table in the FROM clause, with all joins applied to it
Validation Rules
JSONQL enforces strict validation rules to ensure query integrity and prevent common errors when working with single tables or multiple joined tables.
Single Table Queries (no joins):
- All fields in measures, dimensions, columns, filters, and order_by must reference the same table
- No joins array should be provided
Multi-Table Queries (with joins):
- Exactly one table must appear only in from fields (never as to) - this is the primary table
- All other referenced tables must appear in at least one to field
- All referenced tables in fields must have a path to the primary table through joins
- from_column and to_column must follow TableName.columnName format
- Field names can only contain alphanumeric characters, dots, underscores, and asterisks
- Circular join references are not allowed
- Each table can only be joined once (no duplicate joins to the same table)
Edge Cases and Validation Errors:
- Multiple disconnected graphs: If multiple tables never appear in to, the query is invalid (ambiguous primary table)
- Circular references: If no table appears only in from due to circular joins, the query is invalid
- Self-referencing joins: Tables can join to themselves (e.g., Users.manager_id = Users.id), but this means the same table appears in both from and to, so it cannot be the primary table
- Unreachable tables: All tables referenced in fields must be reachable from the primary table through the join graph
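As a sketch of these rules, here is a valid three-table join graph (reusing the illustrative Orders, Users, and Products tables). Orders appears only in from, so it is the primary table, and both other tables are reachable from it:

```json
"joins": [
  {
    "from": "Orders",
    "to": "Users",
    "from_column": "Orders.user_id",
    "to_column": "Users.id"
  },
  {
    "from": "Orders",
    "to": "Products",
    "from_column": "Orders.product_id",
    "to_column": "Products.id"
  }
]
```

Swapping the second join to go from Users to Products would still be valid as long as Products remains reachable from Orders through the join graph.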
Query Controls
Query controls allow you to limit, paginate, and sort the results returned by your JSONQL queries.
These controls work consistently across aggregate and table queries, providing essential functionality for result management. Use them to implement pagination, control dataset size, and ensure predictable result ordering in your applications.
Limit
Limits restrict the maximum number of rows returned by a query, essential for performance and pagination.
Use limit to prevent large result sets from overwhelming your application, implement basic pagination, or simply cap the results to a manageable size. Without a limit, queries may return very large datasets that could impact performance.
JSON Format
Limit accepts a numeric value or a Reeflow SQL function string for dynamic limits.
Limit field specification:
| Field | Type | Required | Description | Possible Values |
|---|---|---|---|---|
limit | number or string | Optional | Maximum number of rows to return | Non-negative integer or Reeflow SQL function (e.g., 100, "RF_PARAM('page_size')") |
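As shown in the table above, a limit can also be a Reeflow SQL function string. A dynamic limit driven by a query parameter (the page_size parameter name is illustrative) might look like:

```json
{
  "type": "table",
  "columns": [{ "name": "Users.name" }],
  "limit": "RF_PARAM('page_size')"
}
```

Per the Security section, RF_PARAM() is resolved during Reeflow's compilation pipeline before the SQL executes.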
Example:
```json
{
  "type": "table",
  "columns": [{ "name": "Users.name" }],
  "limit": 50
}
```
SQL Mapping
Limits translate directly to SQL LIMIT clauses, controlling the maximum number of rows returned from the database.
Example SQL generation:
```sql
SELECT users.name
FROM users
LIMIT 50; -- limit value applied
```
Offset
Offsets skip a specified number of rows before returning results, commonly used with limit for pagination.
Use offset with limit to implement pagination by skipping rows from previous pages. For example, with limit 20 and offset 40, you’ll skip the first 40 rows and return the next 20, effectively getting page 3 of results with 20 items per page.
JSON Format
Offset accepts a numeric value or a Reeflow SQL function string for dynamic pagination.
Offset field specification:
| Field | Type | Required | Description | Possible Values |
|---|---|---|---|---|
offset | number or string | Optional | Number of rows to skip before returning results | Non-negative integer or Reeflow SQL function (e.g., 0, "RF_PARAM('offset')") |
Example:
```json
{
  "type": "table",
  "columns": [{ "name": "Users.name" }],
  "limit": 20,
  "offset": 40
}
```
SQL Mapping
Offsets translate directly to SQL OFFSET clauses, skipping the specified number of rows before applying the limit.
Example SQL generation:
```sql
SELECT users.name
FROM users
LIMIT 20   -- maximum rows to return
OFFSET 40; -- rows to skip before returning
```
Order By
Order by controls specify how to sort query results, ensuring predictable and consistent result ordering.
Use order by to sort results by specific fields in ascending or descending order. This is essential for pagination (to ensure consistent ordering across pages), displaying data in meaningful order, and creating predictable result sets. Multiple order by clauses are applied in the order specified.
JSON Format
Order by uses an array of objects specifying fields and sort directions.
Order by object fields:
| Field | Type | Required | Description | Possible Values |
|---|---|---|---|---|
column | string | Required | Field to sort by | TableName.fieldName format |
direction | string | Required | Sort direction | "asc" (ascending), "desc" (descending) |
Example:
```json
{
  "type": "table",
  "columns": [{ "name": "Users.name" }, { "name": "Users.created_at" }],
  "order_by": [
    { "column": "Users.created_at", "direction": "desc" },
    { "column": "Users.name", "direction": "asc" }
  ],
  "limit": 20
}
```
SQL Mapping
Order by clauses translate to SQL ORDER BY statements, with multiple fields creating a multi-level sort.
Example SQL generation:
```sql
SELECT users.name, users.created_at
FROM users
ORDER BY
  users.created_at DESC,  -- primary sort: newest first
  users.name ASC          -- secondary sort: alphabetical
LIMIT 20;
```
Raw Expression Escape Hatch
JSONQL's structured format covers most common query patterns, but sometimes you need the full power of SQL for complex expressions, database-specific functions, or advanced transformations. The raw expression escape hatch provides this flexibility while maintaining the benefits of JSONQL's structure elsewhere in your query.
Design Philosophy
JSONQL follows a consistent pattern: structured schemas for guided, validated query building, with raw escape hatches for power users who need direct SQL access. This design mirrors how JSONQL already offers three query types - with type: "sql" being the escape hatch at the query level for completely custom SQL.
At the element level, the { raw: string } pattern provides the same flexibility for individual components within otherwise structured queries.
Currently Supported
The raw expression escape hatch is currently available for:
| Element | Escape Hatch | Example |
|---|---|---|
| Columns (table queries) | { "raw": "..." } | { "raw": "UPPER(name) AS name_upper" } |
When to Use Raw Expressions
Use raw expressions when you need:
- SQL functions: UPPER(), LOWER(), TRIM(), COALESCE(), CONCAT()
- Date/time functions: DATE_TRUNC(), EXTRACT(), DATE_PART()
- Type casting: CAST(column AS type)
- Mathematical expressions: price * quantity AS total
- Conditional expressions: CASE WHEN ... THEN ... END
- Database-specific functions: Window functions, JSON operators, etc.
Security
Raw expressions still go through Reeflow's security compilation pipeline:
- RF_ function resolution: RF_PARAM() and RF_USER_ATTR() work in raw expressions
- Access control: Column-level and row-level security policies are applied
- SQL validation: The compiled SQL is parsed and validated before execution
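Since RF_ functions work inside raw expressions, a raw column can reference a query parameter; as a sketch (the tax_rate parameter name is hypothetical):

```json
"columns": [
  {
    "raw": "Orders.total * RF_PARAM('tax_rate') AS total_with_tax",
    "label": "Total with Tax"
  }
]
```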
Future Extensions
Section titled “Future Extensions”The raw escape hatch pattern is designed to be extensible. Future versions may support raw expressions in additional elements like filters, joins, and order_by clauses, following the same { raw: string } pattern for consistency.
Connections
Section titled “Connections”Connections specify which database or data source to execute your JSONQL query against.
Every JSONQL query must specify a connection to determine where to run the query. You can either reference an existing saved connection by ID or provide connection details directly in the query. This flexibility supports both reusable connection management and ad-hoc query execution.
Connection ID
Section titled “Connection ID”Connection ID references a pre-configured connection saved in your Reeflow workspace.
Use connection IDs when you have established, reusable database connections that multiple queries will use. This approach centralizes connection management, provides security through credential management, and simplifies query construction by referencing connections by their ID.
JSON Format
Section titled “JSON Format”Connection ID uses a string identifier that must start with “conn_” prefix.
Connection ID field specification:
| Field | Type | Required | Description | Possible Values |
|---|---|---|---|---|
connection_id | string | Optional* | ID of saved connection to use | Must start with "conn_" (e.g., "conn_prod_db") |
*Note: Either connection_id or connection object must be provided, but not both.
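This either/or rule can be checked before a query is sent. The following is a minimal sketch in Python; the function name and error messages are illustrative and not part of the JSONQL API:

```python
def validate_connection_fields(query: dict) -> None:
    """Ensure exactly one of connection_id / connection is present."""
    has_id = "connection_id" in query
    has_obj = "connection" in query
    if has_id and has_obj:
        raise ValueError("Provide either connection_id or connection, not both")
    if not (has_id or has_obj):
        raise ValueError("A connection_id or connection object is required")
    # Saved-connection IDs must carry the documented "conn_" prefix
    if has_id and not query["connection_id"].startswith("conn_"):
        raise ValueError("connection_id must start with 'conn_'")
```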
Example:
{
  "type": "table",
  "connection_id": "conn_analytics_db",
  "columns": [{ "name": "Users.name" }]
}
Connection Object
Section titled “Connection Object”Connection objects provide database connection details directly within the query.
Use connection objects for ad-hoc queries, testing different connection parameters, or when you need to specify connection details dynamically. This approach embeds all connection information directly in the query, making it self-contained but requiring you to manage credentials and connection details explicitly.
JSON Format
Section titled “JSON Format”Connection objects specify the data source type and configuration details.
Connection object fields:
| Field | Type | Required | Description | Possible Values |
|---|---|---|---|---|
connection.type | string | Required | Type of data source | Supported data source types (varies by platform) |
connection.config | object | Required | Connection configuration details | Database-specific connection parameters |
Example:
{
  "type": "table",
  "connection": {
    "type": "postgresql",
    "config": {
      "host": "localhost",
      "port": 5432,
      "database": "analytics",
      "username": "user",
      "password": "password"
    }
  },
  "columns": [{ "name": "Users.name" }]
}
Parameters
Section titled “Parameters”Parameters provide end-user-controlled values that are injected into queries at execution time using the RF_PARAM() function. Unlike user attributes, which are server-controlled for security, parameters are intended for query customization such as pagination, user-selected filters, and dynamic sorting.
JSON Format
Section titled “JSON Format”Parameters are specified in the parameters field at the top level and can be used with any query type.
Parameters object specification:
| Field | Type | Required | Description | Constraints |
|---|---|---|---|---|
parameters | object | Optional | Key-value pairs for query customization | Maximum 10 properties |
parameters.[key] | string | - | Parameter key | Letters, numbers, hyphens, underscores, colons, dots. Max 64 characters. |
parameters.[value] | string, number, boolean | - | Parameter value | Strings max 64 characters. |
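The constraints in the table above can be sketched as a pre-flight check. This Python snippet is illustrative only; the function name is not part of the JSONQL API:

```python
import re

# Letters, numbers, hyphens, underscores, colons, dots; max 64 characters
KEY_PATTERN = re.compile(r"^[A-Za-z0-9._:-]{1,64}$")

def validate_parameters(parameters: dict) -> None:
    """Check the documented constraints on the parameters object."""
    if len(parameters) > 10:
        raise ValueError("parameters may have at most 10 properties")
    for key, value in parameters.items():
        if not KEY_PATTERN.match(key):
            raise ValueError(f"invalid parameter key: {key!r}")
        if isinstance(value, (bool, int, float)):
            continue  # numbers and booleans have no length limit
        if isinstance(value, str):
            if len(value) > 64:
                raise ValueError(f"string value for {key!r} exceeds 64 characters")
        else:
            raise ValueError(f"unsupported value type for {key!r}")
```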
Example with raw SQL:
{
  "type": "sql",
  "sql": "SELECT * FROM products WHERE category = RF_PARAM('category') LIMIT RF_PARAM('page_size') OFFSET RF_PARAM('offset')",
  "parameters": { "category": "electronics", "page_size": 20, "offset": 0 }
}
Example with table query:
{
  "type": "table",
  "columns": [{ "name": "Products.id" }, { "name": "Products.name" }],
  "filters": [
    { "column": "Products.category", "operator": "equals", "values": ["RF_PARAM('category')"] }
  ],
  "limit": "RF_PARAM('page_size')",
  "offset": "RF_PARAM('offset')",
  "parameters": { "category": "electronics", "page_size": 20, "offset": 0 }
}
User Attributes vs Parameters
Section titled “User Attributes vs Parameters”Both user attributes and parameters inject values into queries, but they serve different purposes and have different security implications.
| Aspect | User Attributes (RF_USER_ATTR) | Parameters (RF_PARAM) |
|---|---|---|
| Purpose | Security and access control | Query customization |
| Source | Server-controlled (session tokens, user config, roles) | End-user-provided at query time |
| Use cases | Tenant isolation, user-specific filtering, RLS | Pagination, user-selected filters, dynamic sorting |
| Security | Cannot be overridden by the end user | End user controls the values |
| Example | tenant_id, user_id, region | page_size, offset, sort_column |
SQL Functions
Section titled “SQL Functions”Reeflow provides custom SQL functions that inject dynamic values into queries at execution time. These functions act as placeholders that Reeflow replaces with actual values when the query runs, enabling row-level security and parameterized queries.
RF_USER_ATTR
Section titled “RF_USER_ATTR”RF_USER_ATTR('attribute_key') resolves to the current principal’s user attribute value at query time.
Use this function for row-level security filters that restrict data based on who is querying. For example, filtering orders by tenant_id so each customer only sees their own records.
Syntax:
RF_USER_ATTR('attribute_key')
Parameters:
| Parameter | Type | Required | Description |
|---|---|---|---|
attribute_key | string | Yes | The user attribute key to resolve. Must contain only letters, numbers, hyphens, underscores, colons, and dots. |
Example:
-- Filter orders by tenant
SELECT * FROM orders WHERE tenant_id = RF_USER_ATTR('tenant_id')

-- Resolves to (when tenant_id = 'acme'):
SELECT * FROM orders WHERE tenant_id = 'acme'
Behavior:
- Returns the attribute value as a string or number, depending on how it was set
- Fails with an error if the attribute is not found in the principal’s context
- The value comes from session token attributes, embedded user configuration, user configuration, API key configuration, or role fixed attributes
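Conceptually, the resolution step behaves like a placeholder substitution with SQL quoting. The sketch below illustrates that behavior only; the real compiler also enforces clause placement and access-control policies, and none of these names are part of Reeflow's API:

```python
import re

def resolve_user_attrs(sql: str, attrs: dict) -> str:
    """Replace RF_USER_ATTR('key') placeholders with literal values."""
    def substitute(match):
        key = match.group(1)
        if key not in attrs:
            # Mirrors the documented failure when the attribute is missing
            raise KeyError(f"Attribute '{key}' not found in context")
        value = attrs[key]
        if isinstance(value, (int, float)):
            return str(value)  # numbers are inlined unquoted
        # Strings are quoted, with embedded single quotes escaped
        return "'" + str(value).replace("'", "''") + "'"
    return re.sub(r"RF_USER_ATTR\('([A-Za-z0-9._:-]+)'\)", substitute, sql)
```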
RF_PARAM
Section titled “RF_PARAM”RF_PARAM('param_key') resolves to a query parameter value passed when executing the query.
Use this function for dynamic values that change per query execution, such as pagination controls, user-selected filters, or date ranges that the calling application provides.
Syntax:
RF_PARAM('param_key')
Parameters:
| Parameter | Type | Required | Description |
|---|---|---|---|
param_key | string | Yes | The parameter key to resolve. |
Example:
-- Paginated query with dynamic limit and offset
SELECT * FROM products LIMIT RF_PARAM('page_size') OFFSET RF_PARAM('offset')

-- Resolves to (when page_size = 20, offset = 40):
SELECT * FROM products LIMIT 20 OFFSET 40
Behavior:
- Returns the parameter value as provided by the calling application
- Fails with an error if the parameter is not found in the query context
- Parameters are passed in the parameters field when executing queries
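RF_PARAM resolution follows the same substitution model, with numeric parameters inlined unquoted so they remain valid in LIMIT and OFFSET. A hypothetical sketch, not Reeflow's actual implementation:

```python
import re

def resolve_params(sql: str, parameters: dict) -> str:
    """Replace RF_PARAM('key') placeholders with supplied values."""
    def substitute(match):
        key = match.group(1)
        if key not in parameters:
            # Mirrors the documented failure when the parameter is missing
            raise KeyError(f"Parameter '{key}' not found in context")
        value = parameters[key]
        if isinstance(value, bool):
            return "TRUE" if value else "FALSE"
        if isinstance(value, (int, float)):
            return str(value)  # unquoted, so LIMIT/OFFSET stay valid
        return "'" + str(value).replace("'", "''") + "'"
    return re.sub(r"RF_PARAM\('([A-Za-z0-9._:-]+)'\)", substitute, sql)
```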
Allowed Locations
Section titled “Allowed Locations”Reeflow SQL functions can only appear in specific SQL clauses. Using them elsewhere results in a validation error.
| Clause | Allowed | Notes |
|---|---|---|
| WHERE | Yes | Primary use case for row-level security |
| HAVING | Yes | Filter aggregated results |
| JOIN ON | Yes | Dynamic join conditions |
| ORDER BY | Yes | Dynamic sorting |
| GROUP BY | Yes | Dynamic grouping |
| LIMIT | Yes | Dynamic result limits |
| OFFSET | Yes | Dynamic pagination |
| SELECT columns | No | Cannot be used in column expressions |
| FROM | No | Cannot be used for table names |
Error Handling
Section titled “Error Handling”SQL functions can fail during query execution. Reeflow returns descriptive errors to help diagnose issues.
| Scenario | Status | Error |
|---|---|---|
| Attribute not found | 400 Bad Request | Attribute ‘attributeName’ not found in context |
| Parameter not found | 400 Bad Request | Parameter ‘paramName’ not found in context |
| Invalid function location | 400 Bad Request | Function ‘RF_USER_ATTR’ not allowed in SELECT clause |
| Invalid attribute key format | 400 Bad Request | Invalid attribute_key: must contain only letters, numbers, hyphens, underscores, colons, and dots |
Related
Section titled “Related”- Row-level security uses
RF_USER_ATTR()for dynamic data filtering - User attributes explains how attributes are set and resolved