Figure 2.3
Reason vs. Represent: Lm and Ec in Period 4
Same Input
"The quarterly earnings report shows significant growth in the cloud division, with revenues up 34% year over year."
Natural language input
provided to both elements
Lm Large Language Model P4 / Cognition
Ec Encoder / Embedding Model P4 / Cognition
Lm Output: Generates meaning
Reasoning chain: "Cloud growth at 34% YoY suggests strong market adoption. Key risks: sustainability of growth rate, margin compression from infrastructure spend..."
Decision output: Route to financial analysis pipeline with flag: growth_sustained=uncertain
Ec Output: Encodes meaning
Dense vector: [0.823, -0.241, 0.671, 0.094, -0.512, 0.388, ...] 1536-dimensional semantic embedding
Functional character is categorical, not scalar. An embedding model at any scale still encodes; an LLM at any scale still reasons. Both occupy Period 4 because they share the same dependency profile; their different outputs (language versus vectors) place them in different groups within that period.
The same input sentence enters both Lm and Ec. Lm produces a reasoning chain, a decision, or generated language: content not present in its inputs. Ec produces a dense vector that captures semantic relationships. The distinction is categorical: no amount of scale transforms encoding into reasoning. The two elements share Period 4 because they share the same dependency depth, but their functional characters place them in different groups.
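The categorical contrast can be sketched in code. This is a toy illustration, not a real model: `toy_encode` and `toy_generate` are hypothetical stand-ins that only mimic the *output types* of an Ec and an Lm (a fixed-length dense vector versus newly generated language), with a hash-based bag-of-words in place of a learned 1536-dimensional encoder and a canned template in place of actual reasoning.

```python
import hashlib
import math

SENTENCE = ("The quarterly earnings report shows significant growth in the "
            "cloud division, with revenues up 34% year over year.")

def toy_encode(text: str, dim: int = 8) -> list[float]:
    """Ec-style element: map text to a fixed-size dense vector.

    Toy stand-in for an embedding model -- real encoders learn the
    projection; here we just hash tokens into buckets and normalize.
    """
    vec = [0.0] * dim
    for token in text.lower().split():
        h = int(hashlib.md5(token.encode()).hexdigest(), 16)
        vec[h % dim] += 1.0
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]  # unit-length vector, always `dim` wide

def toy_generate(text: str) -> str:
    """Lm-style element: emit content not present in the input.

    Toy stand-in for an LLM -- a canned template rather than reasoning.
    """
    n = len(text.split())
    return f"Reasoning: {n}-token report; route to financial analysis pipeline."

embedding = toy_encode(SENTENCE)   # same input -> fixed-width vector
decision = toy_generate(SENTENCE)  # same input -> new language
```

Whatever the input, `toy_encode` returns a vector of the same fixed width, while `toy_generate` returns language: the output *types* differ categorically, which is the sense in which scaling an encoder never turns it into a reasoner.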