Materialized Views Vs. Dynamic Tables In Snowflake (For Engineers Who Have Been Burned)
By Samar Sandri, HEXstream data analytics engineer
Let me start with a confession.
Early in my Snowflake journey, I slapped a Materialized View on a query that joined four tables, watched it silently fail to accelerate anything, spent two days debugging, and eventually discovered the optimizer had been ignoring it the whole time. It was an expensive lesson.
This post is what I wish I had read before making that mistake. It's for leads and architects who need a real framework for this decision—not a feature list from the docs, but an actual mental model for when each tool earns its place.
Because the question isn't which one is better. It's which problem you're actually trying to solve. One is a query optimizer tool. The other is a pipeline primitive. Let's explore...
First, let's get the mental model right
Most comparisons list capabilities side-by-side. That misses the point. The real distinction is conceptual.
Materialized Views live in the read path. They're a query-acceleration tool. Define them once and forget them. Snowflake's optimizer rewrites queries against your base table to hit the MV transparently—your BI tool and analysts never know it exists.
Dynamic Tables live in the transformation path. They're a pipeline primitive. You're producing a derived dataset on a schedule with a guaranteed freshness window. You query the Dynamic Table by name. It's a first-class object in your data model.
How they actually work (the details that matter)
Materialized Views (what's happening under the hood)—A few things about MVs that aren't obvious from the docs and matter a lot in practice:
—Refresh is managed by a Snowflake-owned background service, not your warehouse. When your base table changes, Snowflake detects it via micro-partition tracking and refreshes the MV in the background. No warehouse needed, no scheduling required.
—The optimizer rewrites queries transparently. If your MV covers a query pattern, Snowflake routes it there automatically. The caller never references the MV by name.
—SQL support is intentionally limited. No JOINs across multiple base tables, no UDFs, no non-deterministic functions. Built for simple aggregations and filters on a single table.
—Cost is real but opaque. Background refresh bills credits to Snowflake-managed compute, on a separate line from your warehouses—harder to attribute, and it can surprise you on high-churn tables.
Dynamic Tables (what's happening under the hood)—Dynamic Tables solve a different problem: how do you define a transformation pipeline declaratively, without writing Tasks + stored procedures + error handling every time?
—You declare a target lag, not a schedule. Instead of 'refresh every five minutes', you say 'keep this no more than five minutes stale'. Outcome-oriented, not schedule-oriented.
—Refresh runs on your warehouse. You assign a warehouse explicitly. You control the compute, see the credit consumption directly, and can govern costs per pipeline.
—Full SQL is supported. JOINs, UDFs, FLATTEN, window functions, PIVOT—anything you can write in a SELECT.
—Chaining is first-class. DT_A feeds DT_B feeds DT_C. Snowflake manages the dependency graph. Your Medallion Architecture pipeline, natively, with no Airflow.
—Observability is built in. Query actual vs. target lag, check scheduling state, see last refresh time. MVs don't give you this.
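That built-in observability is worth seeing concretely. A minimal sketch—the table name `my_dynamic_table` is hypothetical, and `DYNAMIC_TABLE_REFRESH_HISTORY` is the INFORMATION_SCHEMA table function Snowflake provides for refresh metadata:

```sql
-- Scheduling state, target lag, and warehouse for one dynamic table:
SHOW DYNAMIC TABLES LIKE 'my_dynamic_table';

-- Recent refreshes, with state and timing:
SELECT name, state, refresh_start_time, refresh_end_time
FROM TABLE(INFORMATION_SCHEMA.DYNAMIC_TABLE_REFRESH_HISTORY())
ORDER BY refresh_start_time DESC
LIMIT 20;
```

Comparing `refresh_start_time` gaps against your declared target lag is the quickest way to spot a pipeline that's quietly falling behind.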
The side-by-side you actually need
Here's the comparison at a glance. I've tried to be honest about the tradeoffs rather than making either look better than it is.
—Access: MVs are hit transparently via optimizer rewrite; DTs are queried by name as first-class objects.
—SQL: MVs allow simple aggregations and filters on a single table; DTs allow full SQL, including JOINs, UDFs, and window functions.
—Refresh compute: MVs use a Snowflake-managed background service; DTs run on a warehouse you assign.
—Freshness: MVs refresh automatically with no knob; DTs honor a declared target lag.
—Chaining: MVs can't reference other MVs; DTs form a managed dependency graph.
—Cost: MV refresh is billed separately and is hard to attribute; DT cost is visible per warehouse.
—Observability: minimal for MVs; built-in lag and refresh metadata for DTs.
Code that shows the difference
A Materialized View (simple, automatic, invisible)—This is what MVs are built for. Simple aggregation, one table, and you want it to accelerate every query that touches this pattern—without changing anything in your BI layer.
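A minimal sketch of that pattern. The base table `raw_orders(order_date, region, amount)` is hypothetical:

```sql
CREATE MATERIALIZED VIEW daily_sales_mv AS
SELECT
    order_date,
    region,
    SUM(amount) AS total_sales,
    COUNT(*)    AS order_count
FROM raw_orders
GROUP BY order_date, region;

-- Callers keep querying the base table. If the MV covers the
-- pattern, the optimizer rewrites the query to hit it silently:
SELECT region, SUM(amount)
FROM raw_orders
WHERE order_date = CURRENT_DATE()
GROUP BY region;
```

Note that nothing in the second query mentions `daily_sales_mv`—that invisibility is the whole point.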
A dynamic table—full SQL, explicit freshness, named object
The moment you need a JOIN or a UDF, you're in Dynamic Table territory. Notice that here, you query the DT by name—it's a first-class table in your data model.
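A sketch of the same idea as a Dynamic Table. The `customers`/`orders` tables and the `transform_wh` warehouse are hypothetical names:

```sql
CREATE OR REPLACE DYNAMIC TABLE customer_order_summary
    TARGET_LAG = '5 minutes'   -- freshness SLA, not a schedule
    WAREHOUSE  = transform_wh  -- compute you own and can meter
AS
SELECT
    c.customer_id,
    c.customer_name,
    COUNT(o.order_id) AS lifetime_orders,
    SUM(o.amount)     AS lifetime_spend
FROM customers c
JOIN orders    o ON o.customer_id = c.customer_id
GROUP BY c.customer_id, c.customer_name;

-- Consumers query the dynamic table by name:
SELECT * FROM customer_order_summary WHERE lifetime_spend > 10000;
```

The JOIN here would disqualify an MV outright; for a Dynamic Table it's just SQL.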
Where dynamic tables really shine: Multi-hop pipelines
This is the pattern that makes me genuinely excited about Dynamic Tables. You can build your entire bronze → silver → gold Medallion Architecture natively in Snowflake, declaratively, with dependency management handled for you. No Airflow. No Tasks + stored procedures. No error-handling boilerplate.
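A sketch of a two-hop chain under those assumptions (`bronze_orders` is a hypothetical raw landing table; `transform_wh` a hypothetical warehouse). `TARGET_LAG = DOWNSTREAM` tells Snowflake to refresh the silver layer only as needed to satisfy gold's lag:

```sql
-- Silver: cleaned and typed, refreshed on demand from downstream.
CREATE OR REPLACE DYNAMIC TABLE silver_orders
    TARGET_LAG = DOWNSTREAM
    WAREHOUSE  = transform_wh
AS
SELECT
    order_id,
    customer_id,
    TRY_TO_DECIMAL(amount_raw, 12, 2) AS amount,
    order_ts
FROM bronze_orders
WHERE order_id IS NOT NULL;

-- Gold: aggregated for consumption. Snowflake sees that it reads
-- silver_orders and orders the refreshes accordingly.
CREATE OR REPLACE DYNAMIC TABLE gold_daily_revenue
    TARGET_LAG = '1 hour'
    WAREHOUSE  = transform_wh
AS
SELECT DATE_TRUNC('day', order_ts) AS order_day, SUM(amount) AS revenue
FROM silver_orders
GROUP BY 1;
```

No orchestration code anywhere—the dependency graph lives in the table definitions themselves.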
The real decision framework
Here's how I actually think about this decision when I'm in a design review or scoping a new pipeline. Walk through these four questions in order. (The first strong signal usually tells you everything.)
Question 1: What does your transformation look like?
Joining more than one table? Dynamic Table, full stop. Need a UDF, FLATTEN, or window function? Same answer. Clean aggregation on a single table? MV is the elegant solution—don't over-engineer it.
Question 2: Who is the consumer and how do they access the data?
BI tool querying your base table and you want queries faster without touching any SQL in the BI layer? That's the MV use case. Building a named data model your team queries directly? That's a Dynamic Table.
Question 3: Do you need to chain transformations?
Multi-hop pipeline where downstream steps need to wait for upstream ones—Dynamic Tables only. MVs have no concept of DAG-level dependencies.
Question 4: What does cost governance look like?
MVs are comfortable for small teams—zero overhead, Snowflake owns it. But credit consumption is opaque. Dynamic Tables give you explicit warehouse assignment and transparent per-pipeline cost. For platforms with budget accountability, that visibility matters.
If you can't answer the question, "How much did this pipeline cost last month?" for your MVs, that's a governance problem waiting to happen.
Mistakes I've seen (and made)
Mistake 1: Using an MV for a multi-table JOIN
The MV creates without errors. The optimizer quietly ignores it. Your query stays just as slow. You spend two days assuming it's a warehouse sizing problem. Always run EXPLAIN after creating an MV and verify 'MaterializedView' appears in the query plan.
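The verification step is one statement. A sketch, reusing a hypothetical `raw_orders` base table:

```sql
-- Inspect the plan for a query you expect the MV to accelerate.
-- If the plan scans the materialized view, the rewrite is working;
-- if it only shows the base table, the optimizer is ignoring the MV.
EXPLAIN
SELECT order_date, SUM(amount)
FROM raw_orders
GROUP BY order_date;
```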
Mistake 2: Aggressive TARGET_LAG on an undersized warehouse
Set 'one minute' lag on heavy JOINs running on an XS warehouse and your refresh jobs start queuing. Actual lag drifts to 12, 20 minutes. A stakeholder notices stale numbers before your monitoring does. Right-size your transform warehouse first, then set the lag.
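The fix is two ALTERs, in that order. A sketch with hypothetical names (`transform_wh`, `customer_order_summary`):

```sql
-- First give the refresh jobs compute that can keep up:
ALTER WAREHOUSE transform_wh SET WAREHOUSE_SIZE = 'MEDIUM';

-- Then declare a lag the warehouse can actually sustain:
ALTER DYNAMIC TABLE customer_order_summary SET TARGET_LAG = '5 minutes';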
Mistake 3: Trying to chain MVs
An MV cannot reference another MV as its source. MV_A feeding MV_B will hit a wall. If you need multi-hop, you need Dynamic Tables—this is a fundamental limitation, not a configuration issue.
Mistake 4: Assuming MV refresh is free
Background refresh consumes Snowflake-managed compute credits—it just doesn't show up on your warehouse line. On high-churn tables, this adds up. Pull SNOWFLAKE.ACCOUNT_USAGE.MATERIALIZED_VIEW_REFRESH_HISTORY quarterly. The number will surprise you.
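The quarterly check can look something like this—a sketch against the ACCOUNT_USAGE view (the `table_name` column holds the MV's name in that view, to the best of my reading of the schema):

```sql
-- Credits burned on MV maintenance over the last quarter, by MV:
SELECT
    table_name,
    SUM(credits_used) AS credits
FROM SNOWFLAKE.ACCOUNT_USAGE.MATERIALIZED_VIEW_REFRESH_HISTORY
WHERE start_time >= DATEADD('month', -3, CURRENT_TIMESTAMP())
GROUP BY table_name
ORDER BY credits DESC;
```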
The pattern that unlocks both
Use Dynamic Tables for your transformation pipeline—bronze to silver to gold, JOINs and UDFs and explicit freshness SLAs per layer. Then layer Materialized Views on top of your gold tables for the highest-traffic query patterns. The DT ensures correctness and freshness. The MV ensures your dashboards are instant.
Remember...they're not competing—they're complementary. Dynamic Tables own the pipeline. Materialized Views own the read acceleration layer on top.
The summary worth bookmarking
Here's the summary I now keep in my head when making this call: both tools are genuinely useful. Engineers who pick the wrong one aren't failing to read the docs—they're thinking about the wrong problem. Get the mental model right and the decision usually makes itself.
Start by asking which layer you're working in—the read path or the transformation path. Everything else follows from that.