Data model and relationships

Envision feels different if you come from SQL or Python because it treats tables as the native unit of thought. This page explains why the language is built around tables, dimensions, and relationships, and how those choices shape the way you design a model. The goal is not to tell you which statement to type, but to explain why the statements are there and how they fit together.

Tables are the unit of meaning

In Envision, every value belongs to a table, even a single scalar. This forces you to think about the level at which a decision is made. A demand forecast is rarely a single number; it is a vector across items, dates, or both. A table is a convenient unit for those vectors and a reminder that the same computation should apply to all lines at once.

This choice is deliberate. Supply chain data is already tabular, and the most expensive part of the work is not arithmetic but alignment. Envision makes alignment explicit. If two vectors are in the same table, operations happen line by line. If they are in related tables, you must say how they relate before you can aggregate or broadcast.

The result is a small number of operations that are safe by construction. Instead of deciding when to loop, you decide how tables relate. The language then loops for you.
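As a minimal sketch (the small table below is hypothetical), two vectors that live in the same table combine line by line, with no loop written anywhere:

table Sales = with
  [| as Sku, as Qty, as UnitPrice |]
  [| "A-01", 2, 5.0 |]
  [| "B-02", 3, 7.0 |]

// Qty and UnitPrice belong to the same table, so the product is computed line by line.
Sales.Amount = Sales.Qty * Sales.UnitPrice

show table "Sales" with
  Sales.Sku
  Sales.Amount { unit: "$" }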

Dimensions describe identity, not position

A primary dimension is an identity, not a row number. When you declare a dimension, you are telling Envision how two tables can be matched. That relationship is what enables broadcasting and aggregation. The example below ties orders to items by a shared dimension and shows how the relationship drives calculations in both directions.

// Items exist at the SKU level.
table Items[sku] = with
  [| as sku, as Category |]
  [| "A-01", "coffee" |]
  [| "B-02", "tea" |]
  [| "C-03", "tea" |]

// Orders are a separate table and do not yet know Items.
table Orders = with
  [| as OrderId, as ItemSku, as Qty, as UnitPrice |]
  [| 1, "A-01", 2, 5.0 |]
  [| 2, "B-02", 1, 7.0 |]
  [| 3, "B-02", 4, 7.0 |]
  [| 4, "C-03", 3, 6.0 |]

// Declare the relationship: Orders has a secondary dimension sku.
expect Orders.sku = Orders.ItemSku

Orders.LineAmount = Orders.Qty * Orders.UnitPrice
Orders.Category = Items.Category // broadcast from Items into Orders

Items.TotalQty = sum(Orders.Qty)
Items.Revenue = sum(Orders.LineAmount)

show table "Items rollup" with
  Items.sku
  Items.Category
  Items.TotalQty
  Items.Revenue { unit: "$" }

A few points are hidden in this simple script. The broadcast Orders.Category = Items.Category is only legal because the shared sku dimension was declared just above it, and the aggregations sum(Orders.Qty) and sum(Orders.LineAmount) flow in the opposite direction, from Orders back into Items, along the same relationship.

The same expression can be legal or illegal depending on the relationships declared around it. That is intentional. The language tries to catch misalignment early rather than silently compute nonsense.

Upstream and downstream are the real control flow

People often describe Envision as a mix of array programming and relational algebra. The mental model that tends to stick is upstream and downstream. If table A is upstream of table B, then A can broadcast into B and B can aggregate into A. This is stronger than a SQL join because it is a structural rule, not an operation you must remember to repeat.
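As a minimal sketch (hypothetical tables), the grouping below makes Skus upstream of Orders; the aggregation and the broadcast that follow are both consequences of that single structural fact:

table Orders = with
  [| as Sku, as Qty |]
  [| "A-01", 2 |]
  [| "A-01", 5 |]
  [| "B-02", 1 |]

// Skus is upstream of Orders: one line per distinct Orders.Sku.
table Skus[sku] = by Orders.Sku

Skus.TotalQty = sum(Orders.Qty)                 // downstream aggregates into upstream
Orders.ShareOfSku = Orders.Qty / Skus.TotalQty  // upstream broadcasts into downstream

show table "Orders" with
  Orders.Sku
  Orders.Qty
  Orders.ShareOfSku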

These relationships are not just about correctness. They also influence performance. The compiler knows which computations can be shared because the graph of tables tells it where values can be reused. In practice, that means Envision can be more aggressive about caching and less tolerant of ambiguous operations. That trade-off favors reliability over convenience, which matches the needs of production supply chain runs.

Table comprehensions are executable context

In many Envision scripts, the first lines are table comprehensions that define small, explicit datasets. This is not just for tutorials. It is a way to make assumptions visible and to document the model in place. Because the syntax is the same as for real tables, you can replace a comprehension with a read block later without rewriting the rest of the script.

// A small catalog used to illustrate the data flow.
table Items = with
  [| as Sku, as Family, as UnitCost |]
  [| "A-01", "coffee", 2.0 |]
  [| "B-02", "tea", 1.5 |]
  [| "C-03", "tea", 1.8 |]

// Grouping tables summarize at a higher grain.
table Families[family] = by Items.Family
Families.SkuCount = count(Items.*)
Families.AvgCost = avg(Items.UnitCost)

show table "Families" with
  Families.family
  Families.SkuCount
  Families.AvgCost { unit: "$" }

The emphasis is on grain: Items is at the SKU grain, while Families is at the family grain. Once you adopt this habit, it becomes easier to read scripts: you can see what each table represents and why an aggregation is required to move from one to another.
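To make the earlier point about read blocks concrete, here is a sketch assuming a flat file at a hypothetical path with the same three columns; the Items comprehension could be replaced by this block and the Families logic would stay untouched:

// Hypothetical path; the column names and types match the comprehension above.
read "/sample/items.csv" as Items with
  Sku : text
  Family : text
  UnitCost : number

The rest of the script, including the by grouping and the aggregations, reads exactly as before, because it only depends on the grain of Items, not on where its lines come from.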

Cross tables are explicit about combinatorics

Sometimes you need every combination of two sets, such as every price break applied to every item or every warehouse paired with every shipping mode. Envision exposes this as a cross table rather than a hidden nested loop. It reads like a declaration of intent: you want the product of two tables.

table Warehouses[wh] = with
  [| as wh |]
  [| "FR" |]
  [| "US" |]

table Modes[mode] = with
  [| as mode, as BaseCost |]
  [| "air", 5.0 |]
  [| "sea", 1.5 |]

// Every warehouse-mode pair exists in the cross table.
table Routes = cross(Warehouses, Modes)
Routes.RouteId = concat(Warehouses.wh, "-", Modes.mode)
Routes.Cost = Modes.BaseCost

// Lookups are explicit and safe.
oneCost = Routes.Cost[wh: "FR", mode: "air"]

show table "Routes" with
  Routes.RouteId
  Routes.Cost
show scalar "FR air cost" with oneCost

This is the Envision way to say, “I need all combinations, and I want them to be first-class.” Because cross tables are explicit, you can reason about size and cost early. In supply chain work, combinatorics are common and expensive, so making them visible matters.

Secondary dimensions reduce the need for joins

In SQL, joins are a repeated operation. In Envision, the join logic is baked into the tables themselves. Secondary dimensions are the device that makes that possible. When a table declares a secondary dimension, it becomes downstream of the table that owns the primary dimension. This is the real reason why a vector from one table can flow into another without an explicit join.

table Prices[sku] = with
  [| as sku, as Price |]
  [| "A-01", 5.0 |]
  [| "B-02", 7.0 |]
  [| "C-03", 6.0 |]

table Basket = with
  [| as LineId, as Sku, as Qty |]
  [| 1, "A-01", 2 |]
  [| 2, "B-02", 1 |]
  [| 3, "C-03", 3 |]

expect Basket.sku = Basket.Sku
Basket.LineCost = Basket.Qty * Prices.Price

missing = Prices.Price[sku: "X-99"] default 0

show table "Basket" with
  Basket.Sku
  Basket.Qty
  Basket.LineCost { unit: "$" }
show scalar "Missing price" with missing

This pattern replaces an explicit join with a declaration of relationship. It also makes missing data explicit by requiring a default when you access a value that may not exist. When dealing with messy enterprise master data, those defaults become a conscious design decision instead of an accidental null.

Filtering is a scoped lens, not a permanent mutation

Filtering does not mutate tables; it creates a scoped view. This is why indentation matters. A filter is like putting on a lens for a particular block of reasoning. It also explains why you can safely compute aggregates inside a filter and still compute full totals afterward without re-reading data.

table Demand = with
  [| as Sku, as Region, as Qty |]
  [| "A-01", "EU", 10 |]
  [| "A-01", "US", 12 |]
  [| "B-02", "EU", 5 |]
  [| "B-02", "US", 9 |]

where Demand.Region == "EU"
  euTotal = sum(Demand.Qty)
  show scalar "EU total" with euTotal

allTotal = sum(Demand.Qty)
show scalar "All regions" with allTotal

The key point is not syntax; it is a contract. The filter guarantees that everything inside the block is consistent with the filtered world, and everything outside is consistent with the full world. That discipline reduces accidental mixing of scopes, which is a frequent source of errors in spreadsheet and script-based workflows.

Common tables are how Envision resolves ambiguity

When multiple tables appear in a single expression, Envision chooses a common table so the expression can be evaluated line by line. This is not arbitrary. The common table is determined by the upstream relationships you declare. If there is no valid common table, the expression fails to compile.

This rule can feel strict at first, but it is a powerful guardrail. It prevents the accidental Cartesian product that happens in SQL when a join condition is missing, and it prevents accidental broadcasts that might be legal but meaningless. The result is that most errors show up at compile time instead of 20 minutes into a long run.
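As a minimal sketch (hypothetical tables), the same expression is rejected or accepted depending on whether a common table exists:

table Stores[store] = with
  [| as store |]
  [| "Paris" |]
  [| "NYC" |]

table Skus[sku] = with
  [| as sku |]
  [| "A-01" |]
  [| "B-02" |]

// Stores and Skus share no upstream table, so there is no common table:
// Skus.Label = concat(Stores.store, "-", Skus.sku) // would not compile

// Declaring the intent explicitly provides the common table.
table Pairs = cross(Stores, Skus)
Pairs.Label = concat(Stores.store, "-", Skus.sku)

show table "Pairs" with
  Pairs.Label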

Why this matters in practice

Supply chain work is about aligning data that arrives from different systems: ERP orders, WMS stocks, pricing tables, transportation contracts, calendars, and more. The most dangerous errors are not syntax errors but alignment errors. Envision is built to make alignment the primary act of modeling.

If you internalize the idea that tables are the unit of meaning, dimensions are identity, and relationships are the real control flow, you can reason about Envision scripts the same way you reason about a data pipeline: upstream facts flow into downstream calculations, and downstream calculations roll back into upstream summaries. That mental model is what allows large scripts to remain readable.
