
Reference

This is the complete reference for the JSONQL format, covering all query types, data concepts, and their JSON structures.

Each section provides detailed field specifications and examples to help you construct valid JSONQL queries for your analytical needs.

Queries define the type of data operation you want to perform and determine how JSONQL processes and returns your data.

JSONQL supports three distinct query types, each designed for different analytical use cases. The query type determines which combination of concepts (measures, dimensions, columns, filters, joins) can be used and how the data is processed. Each query type serves specific purposes and has different validation rules and behaviors.

Raw SQL queries provide maximum flexibility by allowing you to execute custom SQL directly without the structured JSONQL patterns.

Use raw SQL queries when you need to leverage database-specific features, execute complex custom queries, or perform operations that don’t fit the structured JSONQL patterns. They bypass JSONQL’s built-in concepts and validation, giving you direct database access.

Raw SQL queries use a simple structure containing the SQL string to execute.

Raw SQL query object fields:

  • type (string, required): Query type identifier. Value: "sql" (literal value).
  • connection_id (string, optional*): ID of a saved connection to use. Must start with "conn_" (e.g., "conn_prod_db").
  • connection (object, optional*): Connection details object. See the Connections section.
  • sql (string, required): The SQL query to execute. Any valid SQL statement.

*Note: Connection fields - see Connections section for details.

Example:

{
  "type": "sql",
  "sql": "SELECT fieldName FROM TableName"
}

Aggregate queries operate in analytics/OLAP mode with measures and dimensions, automatically handling grouping and calculations.

Use aggregate queries for analytical reporting, dashboards, and business intelligence. They’re perfect for answering questions like “What’s the total sales by region?” or “How do monthly signups compare over time?”.

Aggregate queries:

  • require at least one measure to be valid
  • automatically create GROUP BY clauses for all dimensions
  • apply time truncation and date filtering for time dimensions
  • process filters before aggregation and grouping
  • support pagination through limit and offset

Aggregate queries use measures and dimensions with optional filters, joins, and result controls.

Aggregate query object fields:

  • type (string, required): Query type identifier. Value: "aggregate" (literal value).
  • connection_id (string, optional*): ID of a saved connection to use. Must start with "conn_" (e.g., "conn_prod_db").
  • connection (object, optional*): Connection details object. See the Connections section.
  • measures (array, required): Aggregation functions to calculate. Array of measure objects (see Measures section).
  • dimensions (array, optional): Grouping attributes for data segmentation. Array of dimension objects (see Dimensions section).
  • joins (array, optional): Table relationships for multi-table queries. Array of join objects (see Joins section).
  • filters (array, optional): Conditions to constrain the dataset. Array of filter objects (see Filters section).
  • limit (number or string, optional): Maximum number of results to return. Non-negative integer or Reeflow SQL function (e.g., "RF_PARAM('page_size')").
  • offset (number or string, optional): Number of results to skip, for pagination. Non-negative integer or Reeflow SQL function (e.g., "RF_PARAM('offset')").
  • order_by (array, optional): Sorting specification. Array of order objects with column and direction (see Order By section).

*Note: Connection fields - see Connections section for details.

Example:

{
  "type": "aggregate",
  "measures": [
    {
      "column": "Orders.amount",
      "type": "sum"
    }
  ],
  "dimensions": [
    {
      "column": "Users.city",
      "type": "categorical"
    },
    {
      "column": "Orders.createdAt",
      "type": "time",
      "granularity": "month",
      "date_range": ["2024-01-01", "2024-12-31"]
    }
  ],
  "joins": [
    {
      "from": "Orders",
      "to": "Users",
      "from_column": "Orders.user_id",
      "to_column": "Users.id"
    }
  ]
}

Table queries return raw, individual records without any aggregation or grouping operations.

Use table queries for data tables, detailed reports, record listings, and when you need to export or display raw data. They’re ideal for showing individual transactions, user lists, or any scenario where you need to see actual records rather than aggregated summaries.

Table queries:

  • do not support measures or dimensions (use columns instead)
  • do not allow time dimensions (use regular columns for date fields)
  • support pagination through limit and offset
  • should include unique identifiers (like id) for proper record display

Table queries use columns to specify which fields to retrieve, with optional filters, joins, and result controls.

Table query object fields:

  • type (string, required): Query type identifier. Value: "table" (literal value).
  • connection_id (string, optional*): ID of a saved connection to use. Must start with "conn_" (e.g., "conn_prod_db").
  • connection (object, optional*): Connection details object. See the Connections section.
  • columns (array, optional): Fields to select in the result. Array of column objects (see Columns section).
  • distinct (boolean, optional): When true, applies SELECT DISTINCT to deduplicate result rows. true or false (defaults to false).
  • joins (array, optional): Table relationships for multi-table queries. Array of join objects (see Joins section).
  • filters (array, optional): Conditions to constrain the dataset. Array of filter objects (see Filters section).
  • limit (number or string, optional): Maximum number of results to return. Non-negative integer or Reeflow SQL function (e.g., "RF_PARAM('page_size')").
  • offset (number or string, optional): Number of results to skip, for pagination. Non-negative integer or Reeflow SQL function (e.g., "RF_PARAM('offset')").
  • order_by (array, optional): Sorting specification. Array of order objects with column and direction (see Order By section).

*Note: Connection fields - see Connections section for details.

Example:

{
  "type": "table",
  "columns": [
    {
      "name": "Users.name"
    },
    {
      "name": "Orders.amount"
    }
  ],
  "filters": [
    {
      "column": "Orders.status",
      "operator": "equals",
      "values": ["completed"]
    }
  ],
  "limit": 100,
  "offset": 0,
  "order_by": [
    {
      "column": "Orders.createdAt",
      "direction": "desc"
    }
  ]
}

Measures represent the quantitative aspects of your data that you want to analyze - the numbers you want to calculate, count, sum, or aggregate in some way. They form the core of analytical queries and are essential when you need to answer questions like “how much”, “how many”, or “what’s the average”.

Use measures in aggregate queries when you want to perform calculations across groups of data, such as total sales by region or average order value by month.

Measures always involve aggregation functions applied to specified fields and automatically trigger GROUP BY clauses for any dimensions in the query, making them the foundation of OLAP-style analytics queries. They cannot be used in GROUP BY clauses themselves.

To define measures in JSONQL, create an array of measure objects where each object specifies a column (the database field to aggregate) and a type (the aggregation function to apply).

Measure object fields:

  • column (string, optional*): The database field to aggregate. TableName.fieldName format (e.g., Orders.amount).
  • type (string, required): The aggregation function to apply. One of count, sum, avg, min, max, count_distinct.

*Note: column is optional for count operations where you’re counting all rows.

Example:

"measures": [
  {
    "column": "Orders.amount",
    "type": "sum"
  },
  {
    "column": "Orders.id",
    "type": "count"
  },
  {
    "column": "Orders.amount",
    "type": "avg"
  },
  {
    "column": "Users.id",
    "type": "count_distinct"
  }
]
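As the note above mentions, a count measure can omit column when you want to count all rows; a minimal sketch:

```json
"measures": [
  {
    "type": "count"
  }
]
```

Without a column, this compiles to COUNT(*) rather than COUNT(field).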

These are the aggregation functions available for measures:

  • count: COUNT(field) or COUNT(*). Counts the number of non-null values in the field (or all rows, when no column is given and COUNT(*) is used).
  • sum: SUM(field). Adds up all numeric values in the field.
  • avg: AVG(field). Calculates the arithmetic mean of numeric values in the field.
  • min: MIN(field). Finds the smallest value in the field.
  • max: MAX(field). Finds the largest value in the field.
  • count_distinct: COUNT(DISTINCT field). Counts the number of unique values in the field.

Measures become aggregation functions in the SELECT clause of the generated SQL query:

SELECT
  -- Measures: aggregated values calculated from the data
  SUM(orders.amount) as orders_amount_sum,             -- sum measure
  COUNT(orders.id) as orders_id_count,                 -- count measure
  AVG(orders.amount) as orders_amount_avg,             -- avg measure
  COUNT(DISTINCT users.id) as users_id_count_distinct  -- count_distinct measure
FROM orders
JOIN users ON orders.user_id = users.id

Dimensions are the categorical and temporal attributes that define how you want to slice, dice, and organize your data for analysis. They answer questions like “by what”, “when”, and “where” in your analytical queries.

Use dimensions when you want to break down your measures into meaningful segments - for example, analyzing sales by product category, revenue by geographic region, or performance over time periods. Dimensions come in two types: categorical dimensions for grouping by attributes like status or category, and time dimensions for temporal analysis with built-in date handling and granularity controls.

In aggregate queries, dimensions are automatically added to GROUP BY clauses when used with measures and can include foreign key references through predefined joins. They work in tandem with measures to create insightful analytical breakdowns.

Categorical dimensions group your data by discrete, non-numeric attributes like status, category, region, or any other classification field.

Use categorical dimensions when you want to segment your analysis by qualitative attributes - for example, breaking down sales by product category, analyzing performance by department, or comparing metrics across different user types.

Categorical dimensions use an object with type: "categorical" and column property.

Categorical dimension object fields:

  • column (string, required): The database field to group by. TableName.fieldName format (e.g., Users.city).
  • type (string, required): The dimension type. Value: "categorical" (literal value).

Example:

"dimensions": [
  {
    "column": "Users.city",
    "type": "categorical"
  },
  {
    "column": "Orders.status",
    "type": "categorical"
  },
  {
    "column": "Products.category",
    "type": "categorical"
  }
]

Categorical dimensions are added to the SELECT clause and the GROUP BY clause of a SQL query:

-- With measures (analytics mode)
SELECT
users.city, -- categorical dimension
orders.status, -- categorical dimension
COUNT(*) as orders_count
FROM orders
JOIN users ON orders.user_id = users.id
GROUP BY users.city, orders.status -- categorical dimensions in GROUP BY
-- Without measures (distinct values)
SELECT DISTINCT
users.city, -- categorical dimension
orders.status -- categorical dimension
FROM orders
JOIN users ON orders.user_id = users.id

Time dimensions enable temporal analysis by automatically handling date truncation, granularity control, and time-based filtering for chronological data.

Use time dimensions when you want to analyze trends over time - for example, tracking sales performance by month, analyzing user activity by week, or comparing quarterly results.

Time dimensions automatically apply date truncation based on granularity, support date range filtering, and ensure results are ordered chronologically.

They provide built-in time intelligence features like automatic date range filtering and can fill gaps in time series data (optional feature).

Time dimensions use type: "time" with additional time-specific properties.

Time dimension object fields:

  • column (string, required): The database date/timestamp field to group by. TableName.fieldName format (e.g., Orders.createdAt).
  • type (string, required): The dimension type. Value: "time" (literal value).
  • granularity (string, required): How dates are truncated and grouped. One of "day", "week", "month", "quarter", "year".
  • date_range (array, optional): Date range filter as two ISO date strings, ["YYYY-MM-DD", "YYYY-MM-DD"].

Granularity options:

  • "day" - Groups by individual days (YYYY-MM-DD)
  • "week" - Groups by weeks (Monday to Sunday)
  • "month" - Groups by calendar months (YYYY-MM)
  • "quarter" - Groups by quarters (Q1, Q2, Q3, Q4)
  • "year" - Groups by calendar years (YYYY)

Example:

"dimensions": [
  {
    "column": "Orders.createdAt",
    "type": "time",
    "granularity": "month",
    "date_range": ["2024-01-01", "2024-12-31"]
  }
]

Time dimensions become date truncation functions in the SELECT and GROUP BY clauses, with optional date range filtering in the WHERE clause:

SELECT
  DATE_TRUNC('month', orders.created_at) as month,  -- time dimension with truncation
  COUNT(*) as orders_count
FROM orders
WHERE orders.created_at >= '2024-01-01'  -- date range filter from time dimension
  AND orders.created_at <= '2024-12-31'  -- date range filter from time dimension
GROUP BY DATE_TRUNC('month', orders.created_at)  -- time dimension in GROUP BY
ORDER BY month  -- chronological ordering

Columns define the specific fields you want to retrieve when working in table mode, where you need raw, unaggregated data rather than analytical summaries.

Use columns when you want to display individual records in their original form - for example, showing a list of customer orders with their details, or displaying user profiles with their attributes. Columns are the table mode equivalent of dimensions but without any grouping behavior, making them perfect for creating data tables, detailed reports, or when you need to export raw data.

Unlike measures and dimensions, columns simply select fields without applying any aggregation or grouping operations, returning raw field values as-is and giving you direct access to your underlying data. They can include foreign key references through predefined joins.

Columns support two formats: structured columns for simple field references, and raw expressions for complex SQL expressions.

Structured columns use a simple object with a name property to reference a database field.

Structured column object fields:

  • name (string, required): The database field to select. TableName.fieldName format (e.g., Users.city).
  • label (string, optional): Display label for the column. Any string (e.g., "City").

Example:

"columns": [
  {
    "name": "Users.city"
  },
  {
    "name": "Orders.status",
    "label": "Order Status"
  }
]

When you need SQL functions, expressions, or transformations that structured columns don’t support, use the raw expression escape hatch. This allows you to embed any valid SQL SELECT expression directly.

Raw column object fields:

  • raw (string, required): A complete SQL SELECT expression, including alias. Any valid SQL expression (e.g., "UPPER(Users.name) AS name_upper").
  • label (string, optional): Display label for the column. Any string (e.g., "Category (Uppercase)").

Example:

"columns": [
  {
    "raw": "UPPER(Products.category) AS category_upper",
    "label": "Category (Uppercase)"
  },
  {
    "raw": "COALESCE(Users.nickname, Users.email, 'Anonymous') AS display_name",
    "label": "Display Name"
  },
  {
    "raw": "DATE_TRUNC('month', Orders.created_at) AS order_month"
  }
]

You can mix structured and raw columns in the same query:

"columns": [
{ "name": "Orders.id" },
{ "raw": "UPPER(Orders.status) AS status_upper" },
{ "name": "Orders.total", "label": "Total Amount" },
{ "raw": "DATE_TRUNC('day', Orders.created_at) AS order_date" }
]

Structured columns are added directly to the SELECT clause, while raw expressions are included as-is:

SELECT
  users.city,     -- structured column
  orders.status,  -- structured column
  UPPER(products.category) AS category_upper,    -- raw expression
  COALESCE(users.nickname, 'Anonymous') AS name  -- raw expression
FROM orders
JOIN users ON orders.user_id = users.id
JOIN products ON orders.product_id = products.id

Filters are conditions that constrain your dataset to focus on specific subsets of data, acting as the “WHERE” clause in your analytical queries.

Use filters when you need to narrow down your analysis to specific criteria - for example, analyzing only completed orders, focusing on a particular date range, or examining data from specific regions or user segments. Filters work across all query types and can be applied to any field, whether it’s used as a measure, dimension, or column.

Filters support multiple operators and are applied before any grouping or aggregation occurs, ensuring that your calculations and groupings operate only on the data that meets your criteria.

Filters use objects with column, operator, and values properties.

Filter object fields:

  • column (string, required): The database field to filter on. TableName.fieldName format (e.g., Orders.status).
  • operator (string, required): The filter operation to perform. See the supported operators below.
  • values (array, varies*): The values to use in the filter condition. Array of strings (count depends on operator).

*Note: The values requirement varies by operator - see the supported operators below for specific requirements.

Example:

"filters": [
  {
    "column": "Orders.status",
    "operator": "equals",
    "values": ["completed"]
  },
  {
    "column": "Orders.totalAmount",
    "operator": "gt",
    "values": ["100"]
  },
  {
    "column": "Users.city",
    "operator": "in",
    "values": ["New York", "Los Angeles"]
  }
]

These are the available operators for filtering data in JSONQL queries. For operators that conceptually support single values but accept multiple values (equals, contains, starts_with, ends_with), multiple values are automatically combined with OR logic.

  • equals: field = value, or (field = value1 OR field = value2 ...). One or more values. Exact match; multiple values combined with OR.
  • gt: field > value. Exactly one value. Greater than comparison.
  • gte: field >= value. Exactly one value. Greater than or equal to comparison.
  • lt: field < value. Exactly one value. Less than comparison.
  • lte: field <= value. Exactly one value. Less than or equal to comparison.
  • contains: field LIKE '%value%', or (field LIKE '%value1%' OR ...). One or more values. Text contains substring; multiple values combined with OR.
  • in: field IN (value1, value2, ...). One or more values. Field matches any value in the list (native SQL IN).
  • between: field BETWEEN value1 AND value2. Exactly two values. Field is within range (inclusive bounds).
  • is_null: field IS NULL. No values. Field has a null value.
  • starts_with: field LIKE 'value%', or (field LIKE 'value1%' OR ...). One or more values. Text starts with prefix; multiple values combined with OR.
  • ends_with: field LIKE '%value', or (field LIKE '%value1' OR ...). One or more values. Text ends with suffix; multiple values combined with OR.
  • last_n_days: field >= CURRENT_DATE - (N * INTERVAL '1 DAY'). Exactly one value (N). Within the last N days.
  • last_n_months: field >= DATE_TRUNC('month', CURRENT_DATE) - (N * INTERVAL '1 MONTH'). Exactly one value (N). Within the last N months.
  • last_n_quarters: field >= DATE_TRUNC('quarter', CURRENT_DATE) - (N * INTERVAL '3 MONTH'). Exactly one value (N). Within the last N quarters.
  • last_n_years: field >= DATE_TRUNC('year', CURRENT_DATE) - (N * INTERVAL '1 YEAR'). Exactly one value (N). Within the last N years.
  • today: DATE(field) = CURRENT_DATE. No values. Matches today's date.
  • this_month: field >= DATE_TRUNC('month', CURRENT_DATE) AND field < ... . No values. Within the current calendar month.
  • this_quarter: field >= DATE_TRUNC('quarter', CURRENT_DATE) AND field < ... . No values. Within the current calendar quarter.
  • this_year: field >= DATE_TRUNC('year', CURRENT_DATE) AND field < ... . No values. Within the current calendar year.
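As an illustration of the OR combination described above, an equals filter on Orders.status with values ["completed", "shipped"] would compile to a WHERE condition along these lines (a sketch; the exact SQL may vary by database):

```sql
-- equals with multiple values is expanded into an OR group
WHERE (orders.status = 'completed' OR orders.status = 'shipped')
```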

Filters become conditions in the WHERE clause of the generated SQL query:

SELECT
  SUM(orders.amount) as total_revenue,
  users.city
FROM orders
JOIN users ON orders.user_id = users.id
WHERE orders.status = 'completed'            -- filter condition
  AND orders.total_amount > 100              -- filter condition
  AND users.city IN ('New York', 'Chicago')  -- filter condition
GROUP BY users.city

Joins enable you to combine data from multiple tables by defining explicit relationships between them, allowing for complex cross-table analysis and data retrieval.

Use joins when your query involves fields from different tables - for example, analyzing order data along with customer information, or combining product details with sales metrics. Joins are required when fields from multiple tables are referenced and must be explicitly defined for security and clarity, with no automatic relationship detection.

Joins support various join types (inner, left, right, full) to handle different data relationship scenarios and are applied before any filtering or grouping occurs. They only support equality joins for security (preventing SQL injection), ensuring predictable query behavior while enabling powerful multi-table analytics.

Joins use objects with from, to, from_column, to_column, and optional type properties.

Join object fields:

  • from (string, required): Source table name (e.g., "Orders").
  • to (string, required): Target table name (e.g., "Users").
  • from_column (string, required): Column in the source table for the join condition. TableName.columnName format (e.g., "Orders.user_id").
  • to_column (string, required): Column in the target table for the join condition. TableName.columnName format (e.g., "Users.id").
  • type (string, optional): Join type; defaults to "inner". One of "inner", "left", "right", "full".

Example:

"joins": [
  {
    "from": "Orders",
    "to": "Users",
    "from_column": "Orders.user_id",
    "to_column": "Users.id",
    "type": "inner"
  }
]
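The same relationship can be written as a left join to keep orders that have no matching user (a sketch; the rest of the query is unchanged):

```json
"joins": [
  {
    "from": "Orders",
    "to": "Users",
    "from_column": "Orders.user_id",
    "to_column": "Users.id",
    "type": "left"
  }
]
```

This generates a LEFT JOIN, so unmatched rows from Orders are kept, with NULL values in the Users fields.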

Joins become JOIN clauses that connect the primary table to related tables:

SELECT
  SUM(orders.amount) as orders_amount_sum,
  users.city
FROM orders                                    -- primary table in FROM clause
INNER JOIN users ON orders.user_id = users.id  -- join definition
GROUP BY users.city

When using joins, JSONQL automatically determines which table serves as the primary (base) table in the generated SQL query. Understanding this logic helps you structure your joins correctly.

  • Without joins: All fields in columns/measures/dimensions must reference the same table
  • With joins: Exactly one table must appear only in from (never in to) - this becomes the primary table
  • The primary table becomes the base table in the FROM clause, with all joins applied to it

JSONQL enforces strict validation rules to ensure query integrity and prevent common errors when working with single tables or multiple joined tables.

Single Table Queries (no joins):

  • All fields in measures, dimensions, columns, filters, and order_by must reference the same table
  • No joins array should be provided

Multi-Table Queries (with joins):

  • Exactly one table must appear only in from fields (never as to) - this is the primary table
  • All other referenced tables must appear in at least one to field
  • All referenced tables in fields must have a path to the primary table through joins
  • from_column and to_column must follow TableName.columnName format
  • Field names can only contain alphanumeric characters, dots, underscores, and asterisks
  • Circular join references are not allowed
  • Each table can only be joined once (no duplicate joins to the same table)

Edge Cases and Validation Errors:

  • Multiple disconnected graphs: If multiple tables never appear in to, the query is invalid (ambiguous primary table)
  • Circular references: If no table appears only in from due to circular joins, the query is invalid
  • Self-referencing joins: Tables can join to themselves (e.g., Users.manager_id = Users.id) but this means the same table appears in both from and to, so it cannot be the primary table
  • Unreachable tables: All tables referenced in fields must be reachable from the primary table through the join graph
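To make the primary-table rule concrete, the following joins sketch references three tables. Orders appears only in from, so it becomes the primary table, while Users and Products each appear in a to field (the Products table and its columns are illustrative):

```json
"joins": [
  {
    "from": "Orders",
    "to": "Users",
    "from_column": "Orders.user_id",
    "to_column": "Users.id"
  },
  {
    "from": "Orders",
    "to": "Products",
    "from_column": "Orders.product_id",
    "to_column": "Products.id"
  }
]
```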

Query controls allow you to limit, paginate, and sort the results returned by your JSONQL queries.

These controls work consistently across aggregate and table queries, providing essential functionality for result management. Use them to implement pagination, control dataset size, and ensure predictable result ordering in your applications.

Limits restrict the maximum number of rows returned by a query, essential for performance and pagination.

Use limit to prevent large result sets from overwhelming your application, implement basic pagination, or simply cap the results to a manageable size. Without a limit, queries may return very large datasets that could impact performance.

Limit accepts a numeric value or a Reeflow SQL function string for dynamic limits.

Limit field specification:

  • limit (number or string, optional): Maximum number of rows to return. Non-negative integer or Reeflow SQL function (e.g., 100, "RF_PARAM('page_size')").

Example:

{
  "type": "table",
  "columns": [{ "name": "Users.name" }],
  "limit": 50
}

Limits translate directly to SQL LIMIT clauses, controlling the maximum number of rows returned from the database.

Example SQL generation:

SELECT users.name
FROM users
LIMIT 50; -- limit value applied

Offsets skip a specified number of rows before returning results, commonly used with limit for pagination.

Use offset with limit to implement pagination by skipping rows from previous pages. For example, with limit 20 and offset 40, you’ll skip the first 40 rows and return the next 20, effectively getting page 3 of results with 20 items per page.

Offset accepts a numeric value or a Reeflow SQL function string for dynamic pagination.

Offset field specification:

  • offset (number or string, optional): Number of rows to skip before returning results. Non-negative integer or Reeflow SQL function (e.g., 0, "RF_PARAM('offset')").

Example:

{
  "type": "table",
  "columns": [{ "name": "Users.name" }],
  "limit": 20,
  "offset": 40
}

Offsets translate directly to SQL OFFSET clauses, skipping the specified number of rows before applying the limit.

Example SQL generation:

SELECT users.name
FROM users
LIMIT 20 -- maximum rows to return
OFFSET 40; -- rows to skip before returning

Order by controls specify how to sort query results, ensuring predictable and consistent result ordering.

Use order by to sort results by specific fields in ascending or descending order. This is essential for pagination (to ensure consistent ordering across pages), displaying data in meaningful order, and creating predictable result sets. Multiple order by clauses are applied in the order specified.

Order by uses an array of objects specifying fields and sort directions.

Order by object fields:

  • column (string, required): Field to sort by. TableName.fieldName format.
  • direction (string, required): Sort direction. "asc" (ascending) or "desc" (descending).

Example:

{
  "type": "table",
  "columns": [{ "name": "Users.name" }, { "name": "Users.created_at" }],
  "order_by": [
    {
      "column": "Users.created_at",
      "direction": "desc"
    },
    {
      "column": "Users.name",
      "direction": "asc"
    }
  ],
  "limit": 20
}

Order by clauses translate to SQL ORDER BY statements, with multiple fields creating a multi-level sort.

Example SQL generation:

SELECT
  users.name,
  users.created_at
FROM users
ORDER BY
  users.created_at DESC,  -- primary sort: newest first
  users.name ASC          -- secondary sort: alphabetical
LIMIT 20;

JSONQL’s structured format covers most common query patterns, but sometimes you need the full power of SQL for complex expressions, database-specific functions, or advanced transformations. The raw expression escape hatch provides this flexibility while maintaining the benefits of JSONQL’s structure elsewhere in your query.

JSONQL follows a consistent pattern: structured schemas for guided, validated query building, with raw escape hatches for power users who need direct SQL access. This design mirrors how JSONQL already offers three query types - with type: "sql" being the escape hatch at the query level for completely custom SQL.

At the element level, the { raw: string } pattern provides the same flexibility for individual components within otherwise structured queries.

The raw expression escape hatch is currently available for:

  • Columns (table queries): { "raw": "..." } (e.g., { "raw": "UPPER(name) AS name_upper" })

Use raw expressions when you need:

  • SQL functions: UPPER(), LOWER(), TRIM(), COALESCE(), CONCAT()
  • Date/time functions: DATE_TRUNC(), EXTRACT(), DATE_PART()
  • Type casting: CAST(column AS type)
  • Mathematical expressions: price * quantity AS total
  • Conditional expressions: CASE WHEN ... THEN ... END
  • Database-specific functions: Window functions, JSON operators, etc.
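For example, a conditional bucket can be embedded as a raw column (a sketch; the threshold and alias are made up for illustration):

```json
"columns": [
  { "name": "Orders.id" },
  {
    "raw": "CASE WHEN Orders.amount >= 100 THEN 'large' ELSE 'small' END AS order_size",
    "label": "Order Size"
  }
]
```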

Raw expressions still go through Reeflow’s security compilation pipeline:

  • RF_ function resolution: RF_PARAM() and RF_USER_ATTR() work in raw expressions
  • Access control: Column-level and row-level security policies are applied
  • SQL validation: The compiled SQL is parsed and validated before execution

The raw escape hatch pattern is designed to be extensible. Future versions may support raw expressions in additional elements like filters, joins, and order_by clauses, following the same { raw: string } pattern for consistency.

Connections specify which database or data source to execute your JSONQL query against.

Every JSONQL query must specify a connection to determine where to run the query. You can either reference an existing saved connection by ID or provide connection details directly in the query. This flexibility supports both reusable connection management and ad-hoc query execution.

Connection ID references a pre-configured connection saved in your Reeflow workspace.

Use connection IDs when you have established, reusable database connections that multiple queries will use. This approach centralizes connection management, provides security through credential management, and simplifies query construction by referencing connections by their ID.

Connection ID uses a string identifier that must start with “conn_” prefix.

Connection ID field specification:

  • connection_id (string, optional*): ID of a saved connection to use. Must start with "conn_" (e.g., "conn_prod_db").

*Note: Either connection_id or connection object must be provided, but not both.

Example:

{
  "type": "table",
  "connection_id": "conn_analytics_db",
  "columns": [{ "name": "Users.name" }]
}

Connection objects provide database connection details directly within the query.

Use connection objects for ad-hoc queries, testing different connection parameters, or when you need to specify connection details dynamically. This approach embeds all connection information directly in the query, making it self-contained but requiring you to manage credentials and connection details explicitly.

Connection objects specify the data source type and configuration details.

Connection object fields:

  • connection.type (string, required): Type of data source. Supported data source types (varies by platform).
  • connection.config (object, required): Connection configuration details. Database-specific connection parameters.

Example:

{
  "type": "table",
  "connection": {
    "type": "postgresql",
    "config": {
      "host": "localhost",
      "port": 5432,
      "database": "analytics",
      "username": "user",
      "password": "password"
    }
  },
  "columns": [{ "name": "Users.name" }]
}

Parameters provide end-user-controlled values that are injected into queries at execution time using the RF_PARAM() function. Unlike user attributes which are server-controlled for security, parameters are intended for query customization like pagination, user-selected filters, and dynamic sorting.

Parameters are specified in the parameters field at the top level and can be used with any query type.

Parameters object specification:

  • parameters (object, optional): Key-value pairs for query customization. Maximum 10 properties.
  • parameters.[key] (string): Parameter key. Letters, numbers, hyphens, underscores, colons, and dots; max 64 characters.
  • parameters.[value] (string, number, or boolean): Parameter value. Strings max 64 characters.

Example with raw SQL:

{
  "type": "sql",
  "sql": "SELECT * FROM products WHERE category = RF_PARAM('category') LIMIT RF_PARAM('page_size') OFFSET RF_PARAM('offset')",
  "parameters": {
    "category": "electronics",
    "page_size": 20,
    "offset": 0
  }
}

Example with table query:

{
  "type": "table",
  "columns": [{ "name": "Products.id" }, { "name": "Products.name" }],
  "filters": [
    {
      "column": "Products.category",
      "operator": "equals",
      "values": ["RF_PARAM('category')"]
    }
  ],
  "limit": "RF_PARAM('page_size')",
  "offset": "RF_PARAM('offset')",
  "parameters": {
    "category": "electronics",
    "page_size": 20,
    "offset": 0
  }
}
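The documented constraints on the parameters object can be enforced before a query is sent. A hedged sketch that mirrors the limits in the table above (the validation logic is inferred from the documentation, not Reeflow's actual validator):

```python
import re

# Documented key constraints: letters, numbers, hyphens, underscores,
# colons, dots; at most 64 characters.
KEY_PATTERN = re.compile(r"^[A-Za-z0-9_:.\-]{1,64}$")

def validate_parameters(parameters: dict) -> None:
    """Check the documented limits: max 10 entries, key charset/length,
    allowed value types, and the 64-character cap on string values."""
    if len(parameters) > 10:
        raise ValueError("parameters may contain at most 10 properties")
    for key, value in parameters.items():
        if not KEY_PATTERN.match(key):
            raise ValueError(f"invalid parameter key: {key!r}")
        if not isinstance(value, (str, int, float, bool)):
            raise ValueError(f"unsupported value type for {key!r}")
        if isinstance(value, str) and len(value) > 64:
            raise ValueError(f"string value for {key!r} exceeds 64 characters")

# Matches the pagination example above
validate_parameters({"category": "electronics", "page_size": 20, "offset": 0})
```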

Both user attributes and parameters inject values into queries, but they serve different purposes and have different security implications.

| Aspect | User Attributes (RF_USER_ATTR) | Parameters (RF_PARAM) |
| --- | --- | --- |
| Purpose | Security and access control | Query customization |
| Source | Server-controlled (session tokens, user config, roles) | End-user-provided at query time |
| Use cases | Tenant isolation, user-specific filtering, RLS | Pagination, user-selected filters, dynamic sorting |
| Security | Cannot be overridden by the end user | End user controls the values |
| Example | tenant_id, user_id, region | page_size, offset, sort_column |

Reeflow provides custom SQL functions that inject dynamic values into queries at execution time. These functions act as placeholders that Reeflow replaces with actual values when the query runs, enabling row-level security and parameterized queries.

RF_USER_ATTR('attribute_key') resolves to the current principal’s user attribute value at query time.

Use this function for row-level security filters that restrict data based on who is querying. For example, filtering orders by tenant_id so each customer only sees their own records.

Syntax:

RF_USER_ATTR('attribute_key')

Parameters:

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| attribute_key | string | Yes | The user attribute key to resolve. Must contain only letters, numbers, hyphens, underscores, colons, and dots. |

Example:

-- Filter orders by tenant
SELECT * FROM orders WHERE tenant_id = RF_USER_ATTR('tenant_id')
-- Resolves to (when tenant_id = 'acme'):
SELECT * FROM orders WHERE tenant_id = 'acme'

Behavior:

  • Returns the attribute value as a string or number, depending on how it was set
  • Fails with an error if the attribute is not found in the principal’s context
  • The value comes from session token attributes, embedded user configuration, user configuration, API key configuration, or role fixed attributes
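Conceptually, the substitution behaves like a server-side placeholder replacement: each RF_USER_ATTR call is swapped for the principal's attribute value before the SQL reaches the database. A simplified illustration of that model (not Reeflow's implementation; the real resolver also handles escaping and validation):

```python
import re

def resolve_user_attrs(sql: str, attrs: dict) -> str:
    """Replace RF_USER_ATTR('key') placeholders with the principal's
    attribute values, quoting strings and failing on unknown keys."""
    def substitute(match: re.Match) -> str:
        key = match.group(1)
        if key not in attrs:
            raise KeyError(f"Attribute '{key}' not found in context")
        value = attrs[key]
        # Strings are quoted as SQL literals; numbers pass through as-is
        return f"'{value}'" if isinstance(value, str) else str(value)
    return re.sub(r"RF_USER_ATTR\('([\w:.\-]+)'\)", substitute, sql)

sql = "SELECT * FROM orders WHERE tenant_id = RF_USER_ATTR('tenant_id')"
print(resolve_user_attrs(sql, {"tenant_id": "acme"}))
# → SELECT * FROM orders WHERE tenant_id = 'acme'
```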

RF_PARAM('param_key') resolves to a query parameter value passed when executing the query.

Use this function for dynamic values that change per query execution, such as pagination controls, user-selected filters, or date ranges that the calling application provides.

Syntax:

RF_PARAM('param_key')

Parameters:

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| param_key | string | Yes | The parameter key to resolve. |

Example:

-- Paginated query with dynamic limit and offset
SELECT * FROM products LIMIT RF_PARAM('page_size') OFFSET RF_PARAM('offset')
-- Resolves to (when page_size = 20, offset = 40):
SELECT * FROM products LIMIT 20 OFFSET 40

Behavior:

  • Returns the parameter value as provided by the calling application
  • Fails with an error if the parameter is not found in the query context
  • Parameters are passed in the parameters field when executing queries
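RF_PARAM resolution follows the same placeholder model, with numeric values passed through unquoted so they can appear in clauses like LIMIT and OFFSET. An illustrative sketch of that behavior (again, not the actual Reeflow resolver):

```python
import re

def resolve_params(sql: str, parameters: dict) -> str:
    """Replace RF_PARAM('key') placeholders with caller-supplied values;
    numbers stay unquoted (e.g. for LIMIT/OFFSET), strings are quoted."""
    def substitute(match: re.Match) -> str:
        key = match.group(1)
        if key not in parameters:
            raise KeyError(f"Parameter '{key}' not found in context")
        value = parameters[key]
        return f"'{value}'" if isinstance(value, str) else str(value)
    return re.sub(r"RF_PARAM\('([\w:.\-]+)'\)", substitute, sql)

sql = "SELECT * FROM products LIMIT RF_PARAM('page_size') OFFSET RF_PARAM('offset')"
print(resolve_params(sql, {"page_size": 20, "offset": 40}))
# → SELECT * FROM products LIMIT 20 OFFSET 40
```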

Reeflow SQL functions can only appear in specific SQL clauses. Using them elsewhere results in a validation error.

| Clause | Allowed | Notes |
| --- | --- | --- |
| WHERE | Yes | Primary use case for row-level security |
| HAVING | Yes | Filter aggregated results |
| JOIN ON | Yes | Dynamic join conditions |
| ORDER BY | Yes | Dynamic sorting |
| GROUP BY | Yes | Dynamic grouping |
| LIMIT | Yes | Dynamic result limits |
| OFFSET | Yes | Dynamic pagination |
| SELECT columns | No | Cannot be used in column expressions |
| FROM | No | Cannot be used for table names |

SQL functions can fail during query execution. Reeflow returns descriptive errors to help diagnose issues.

| Scenario | Status | Error |
| --- | --- | --- |
| Attribute not found | 400 Bad Request | Attribute 'attributeName' not found in context |
| Parameter not found | 400 Bad Request | Parameter 'paramName' not found in context |
| Invalid function location | 400 Bad Request | Function 'RF_USER_ATTR' not allowed in SELECT clause |
| Invalid attribute key format | 400 Bad Request | Invalid attribute_key: must contain only letters, numbers, hyphens, underscores, colons, and dots |