An EU-only alternative to Snowflake.

Snowflake is the cloud data warehouse that won the analytics market through separation of compute and storage, with EU regions on AWS, Azure or GCP. Snowflake Inc. is a Delaware corporation; the EU regions live on US-hyperscaler infrastructure — meaning two layers of US jurisdiction. For analytics workloads on EU customer data, Schrems II compliance is genuinely difficult on Snowflake. The sovereign alternatives are: ClickHouse (open-source columnar warehouse), DuckDB (embedded analytics), or PostgreSQL with appropriate columnar extensions — all deployable on EU sovereign infrastructure.

Vendor: Snowflake
Headquarters: Bozeman, MT
Jurisdiction: United States
Legal regime: CLOUD Act, FISA 702

"EU region" is not sovereignty. Four questions decide.

Data residency says where the bits sit. Sovereignty says which legal system can compel access. The answer has to hold on all four points; otherwise the stack is not sovereign.

Residency

Where is the data physically stored?

Not "in the cloud": which datacenter, in which country, under which jurisdiction.

Subprocessors

Who else is in your data path?

Every vendor that touches the data: the CDN, the email relay, the error tracker, the analytics pipeline.

Jurisdiction

Which laws can compel disclosure?

A US-headquartered vendor is subject to FISA 702 and the CLOUD Act, even when the data sits in Frankfurt.

Key custody

Who actually holds the encryption keys?

If the cloud provider holds both the data and the keys, it can read them, regardless of any DPA.

AWS · Azure · GCP — EU region

Fails on jurisdiction and key custody.

Bits in the EU, parent company in the US, American subprocessors in the default path, provider-managed keys.

Binadit-managed stack

Passes all four.

Hosted in the EU on infrastructure with a European parent. Zero American subprocessors in the default path. Customer-held keys or a European KMS. Named in your Article 28 DPA.

Why teams are leaving Snowflake

Snowflake exits we have scoped come from regulated workloads where the analytics warehouse holds personal data of EU customers, and the Schrems II analysis fails on multiple layers. The unique migration challenge: data warehouses are large, queries are complex, and dbt / Looker / Tableau pipelines need re-pointing. The honest answer for a Snowflake exit is 3-6 months of careful work, not a quick swap. Where the savings are: Snowflake credits at scale ($20k-100k+/month is common) compress to ClickHouse on EU bare metal at a fraction.

Snowflake services and their EU-only equivalents

A migration is not "swapping one box for another". The mapping below is what we execute for clients leaving Snowflake for Schrems II reasons: full EU jurisdiction, no US parent company in the data path.

| Snowflake service | EU-only alternative | Engineering note |
|---|---|---|
| Snowflake compute (warehouses) | ClickHouse on EU compute (Hetzner dedicated, OVH bare metal); self-managed Trino on EU infrastructure | ClickHouse is the strongest sovereign alternative for OLAP workloads. For ad-hoc query workloads, Trino over EU object storage is the lakehouse pattern. |
| Snowflake storage | OVH Object Storage or Wasabi EU as data lake; ClickHouse internal storage on EU NVMe | For a lakehouse architecture, EU S3-compatible storage is the data layer, with ClickHouse or Trino as the query engine. |
| Snowpipe (continuous ingestion) | ClickHouse Kafka engine; custom ingestion via Apache Airflow on EU compute; self-hosted replacement for dbt Cloud | For Kafka-based ingestion, ClickHouse has a native Kafka engine. For batch ingestion, Airflow on EU compute. |
| Streams & Tasks | Apache Airflow on EU compute; ClickHouse materialized views; Postgres triggers + LISTEN/NOTIFY | Materialized views in ClickHouse cover most "Streams" use cases. |
| Snowpark (Python/Scala in DB) | PySpark on EU compute; ClickHouse Python UDFs; dbt models with Python | For ML and feature engineering at the warehouse layer, PySpark on EU compute is the standard pattern. |
| Time Travel + Zero-Copy Cloning | ClickHouse table snapshots; PostgreSQL pg_dump + restore; application-layer point-in-time queries | Time Travel is a unique Snowflake feature; ClickHouse snapshots provide a rougher equivalent. |
| Secure Data Sharing | Bring-your-own-key encrypted exports to EU object storage; custom API layer for shared datasets | Secure Data Sharing has no direct equivalent; the migration involves redesigning the data-sharing pattern. |
| Snowflake Marketplace | Direct vendor relationships for third-party data; EU-hosted data marketplaces (limited maturity) | Datasets you currently subscribe to via Marketplace typically require direct vendor contracts. |
| Snowflake Cortex (LLMs) | Mistral AI (FR); Aleph Alpha (DE); self-hosted Llama on EU GPUs | Cortex is recent; the sovereign EU LLM space (Mistral, Aleph Alpha) has matured into a real alternative. |
| BI tool integrations (Tableau, Looker, dbt Cloud) | Same BI tools repointed to ClickHouse / Trino; self-hosted Metabase; dbt Core (open-source) on EU runners | The BI layer typically transfers cleanly with new connection strings; dbt Cloud → dbt Core on self-hosted EU CI. |
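The "application-layer point-in-time queries" row deserves a concrete shape. A minimal Python sketch of the validity-interval pattern; the column names (`valid_from`, `valid_to`) and the data model are illustrative assumptions, not a fixed API:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class RowVersion:
    key: str
    value: int
    valid_from: datetime
    valid_to: Optional[datetime]  # None means "current version"

def as_of(rows, key, ts):
    """Return the version of `key` that was current at timestamp `ts`."""
    for r in rows:
        if r.key != key:
            continue
        # valid_from is inclusive, valid_to is exclusive
        if r.valid_from <= ts and (r.valid_to is None or ts < r.valid_to):
            return r
    return None

history = [
    RowVersion("acct-1", 100, datetime(2024, 1, 1), datetime(2024, 6, 1)),
    RowVersion("acct-1", 250, datetime(2024, 6, 1), None),
]

assert as_of(history, "acct-1", datetime(2024, 3, 1)).value == 100
assert as_of(history, "acct-1", datetime(2024, 7, 1)).value == 250
```

Unlike Snowflake's Time Travel, retention here is explicit: the application decides how long old versions are kept and when they are compacted away.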

How we migrate from Snowflake

A typical mid-market migration runs in three phases. The numbers below assume an engineering team of 6 to 10 people and a moderately complex application stack.

Weeks 1–3

Architecture decision + audit

Decide ClickHouse vs Trino+lakehouse vs PostgreSQL based on query patterns and data volume. Inventory every dbt model, every dashboard, every external integration. The architecture decision dominates the schedule.

Weeks 3–10

Pilot + parallel run

Migrate a representative subset of workloads to the EU target. Run in parallel for validation. Tune ClickHouse cluster sizing based on real query patterns. dbt models are converted (most run unchanged on dbt Core with an adapter swap).
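Parallel-run validation comes down to comparing the two warehouses' outputs. A minimal sketch, assuming matching result sets are exported from both sides; the fingerprint scheme and function names are ours, not a standard tool:

```python
import hashlib

def table_fingerprint(rows):
    """Order-insensitive fingerprint: row count plus XOR of per-row hashes."""
    count = 0
    acc = 0
    for row in rows:
        digest = hashlib.sha256(repr(row).encode()).digest()
        acc ^= int.from_bytes(digest[:8], "big")
        count += 1
    return count, acc

def parallel_run_matches(snowflake_rows, clickhouse_rows):
    """True when both warehouses produced the same set of rows."""
    return table_fingerprint(snowflake_rows) == table_fingerprint(clickhouse_rows)

# Same data in a different order still matches; a missing row does not.
a = [("2024-01-01", "EU", 42), ("2024-01-02", "EU", 17)]
b = [("2024-01-02", "EU", 17), ("2024-01-01", "EU", 42)]
assert parallel_run_matches(a, b)
assert not parallel_run_matches(a, b[:1])
```

In practice the rows come from paired queries against both warehouses; the point is that validation is cheap to automate and should run on every table before cutover.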

Weeks 10–24

Full cutover

Phased migration of remaining workloads. BI tools repointed. Snowflake accounts scoped down. Final cutover with a rollback plan; Snowflake retained for archival access for 60-90 days post-cutover.

5-year TCO on Snowflake → ClickHouse migrations: typically 60-85% cheaper at scale. A team running $50k/month of Snowflake credits often replaces it with €5-10k/month of EU ClickHouse infrastructure plus the managed-partner fee. The break-even point is around $5-10k/month of Snowflake spend; below that, the engineering cost of migration may exceed the saved spend over a 3-year horizon.
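The break-even arithmetic above can be sketched as a back-of-envelope calculator. The flat-spend assumption, the single FX rate, and the figures in the example are illustrative, not a pricing model:

```python
def three_year_net_savings(snowflake_monthly_usd, eu_monthly_eur,
                           migration_cost_usd, eur_usd=1.08):
    """Net savings over 36 months after a one-off migration cost.
    Simplified: flat spend, no growth, one FX rate (assumptions, not a model)."""
    old_spend = snowflake_monthly_usd * 36
    new_spend = eu_monthly_eur * eur_usd * 36
    return old_spend - new_spend - migration_cost_usd

# $50k/month of Snowflake credits -> €8k/month EU stack, $300k migration effort:
print(round(three_year_net_savings(50_000, 8_000, 300_000)))  # → 1188960

# Below break-even: small spend, migration cost dominates.
assert three_year_net_savings(5_000, 2_000, 150_000) < 0
```

At $50k/month the savings dwarf the migration cost; at $5k/month the same engineering effort can put the project underwater over three years, which is the break-even point the paragraph above describes.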

Frequently asked questions

Snowflake has Frankfurt and other EU regions — does that solve GDPR?

No. Snowflake Inc. is US-headquartered (parent jurisdiction), and the EU regions run on AWS/Azure/GCP — also US-headquartered (infrastructure jurisdiction). Two layers of US legal exposure under the CLOUD Act and FISA 702. For Schrems II–strict workloads, neither is acceptable.

Is ClickHouse really comparable to Snowflake?

For OLAP query workloads, ClickHouse is genuinely competitive — often faster on equivalent hardware. The differences: ClickHouse requires more operational expertise, Snowflake's separation of compute and storage is harder to replicate cleanly, and Snowflake's ecosystem (Marketplace, Cortex, etc.) doesn't fully exist on ClickHouse. For pure analytics workloads, the gap is small.

What about ClickHouse Cloud — they have an EU region?

ClickHouse Inc. is a US Delaware corporation. ClickHouse Cloud EU regions run on AWS — same dual US-jurisdiction problem as Snowflake. The sovereign answer is self-hosted ClickHouse on EU compute. Aiven offers managed ClickHouse with a clearer EU-jurisdiction story (Aiven is Finnish).

How does dbt fit in?

dbt Core is open-source and runs anywhere; dbt Cloud is dbt Labs Inc. (US). For sovereign workloads, dbt Core on a self-hosted CI runner (GitLab CI EU, Forgejo Actions) replaces dbt Cloud. The actual dbt models port cleanly with the warehouse adapter swap (snowflake → clickhouse).
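As a rough illustration of that adapter swap, a `profiles.yml` fragment; field names follow the dbt-clickhouse adapter as commonly documented, the host is a placeholder, and the exact settings should be verified against the adapter's own docs:

```yaml
# profiles.yml (illustrative): the same dbt models, repointed from Snowflake.
analytics:
  target: prod
  outputs:
    prod:
      # before: type: snowflake, with account/warehouse/role settings
      type: clickhouse                          # dbt-clickhouse adapter
      host: clickhouse.example.eu               # placeholder EU endpoint
      port: 8123
      user: dbt
      password: "{{ env_var('CH_PASSWORD') }}"
      schema: analytics
```

The models themselves stay in git unchanged; only the profile and any Snowflake-specific SQL (e.g. `QUALIFY`, variant functions) need review.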

How long does a Snowflake exit really take?

For small-to-mid Snowflake usage ($5-20k/month, dozens of dbt models): 3-6 months of elapsed time. For enterprise Snowflake ($50k+/month, hundreds of models, complex data sharing): 9-18 months. Snowflake migrations are not weekend projects — they require planning, parallel runs, and careful BI-layer choreography.

Can we keep some Snowflake and migrate the rest?

Hybrid is sometimes the right answer for very specific Snowflake-only features. The discipline: keep only non-personal-data workloads on Snowflake (e.g. internal analytics on aggregated metrics with no PII), and document the boundary in the DPA. For most regulated workloads, full exit is cleaner than the documentation burden of a hybrid.

Plan your Snowflake exit.

A 30-minute scoping call. We map your stack against EU-only alternatives, estimate the migration effort, and tell you whether it is the right decision.