Senior Data Engineer

Amsterdam

At Sytac, we build high-performing engineering teams for leading organizations in the Netherlands and beyond. We combine a pragmatic, people-first culture with strong technical craftsmanship, giving engineers autonomy in real production environments, backed by a consultancy that invests in growth, community, and long-term partnerships.

For one of our enterprise clients in a data-intensive domain, we are looking for a Senior Data Engineer to help build and maintain scalable, reliable, and production-ready data pipelines. You’ll work on batch and streaming data products that power analytics, reporting, and AI/ML use cases across the organization.

This is a high-impact role requiring deep expertise in modern lakehouse architectures, cloud-native data tooling, and robust engineering practices focused on reliability and ownership.

What you’ll do

  • Design, build, and operate end-to-end data pipelines across Azure (ADF/Databricks) or GCP (Dataflow/BigQuery).
  • Implement lakehouse patterns (Delta Lake, medallion architecture) for scalable and reliable data products (see the illustrative sketch after this list).
  • Deliver batch and streaming pipelines using technologies such as Kafka, Pub/Sub, or Event Hubs.
  • Write high-quality, production-grade code in Python and SQL to process and transform large datasets.
  • Apply strong engineering principles to data modelling, quality, lineage, and governance.
  • Set up CI/CD workflows for data pipelines and infrastructure to ensure reproducibility and automation.
  • Implement monitoring and observability to ensure the health and reliability of data systems.
  • Optimize performance and cost across compute, storage, and orchestration layers.
  • Collaborate with stakeholders, including Data Scientists and ML Engineers, to translate business needs into technical solutions.
  • Contribute to data engineering standards and mentor the team on best practices.
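
To give a flavour of this kind of work, below is a minimal bronze-to-silver step in PySpark with Delta Lake. It assumes a Databricks-style environment; the paths, table layout, and event schema are illustrative only, not the client's actual setup.

    # Minimal bronze -> silver step in a medallion-style lakehouse.
    # Paths and the event schema are illustrative assumptions.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("bronze_to_silver").getOrCreate()

    # Bronze layer: raw events as ingested, no cleaning applied yet.
    bronze = spark.read.format("delta").load("/lakehouse/bronze/events")

    # Silver layer: deduplicate, enforce basic quality rules, add lineage metadata.
    silver = (
        bronze
        .dropDuplicates(["event_id"])
        .filter(F.col("event_timestamp").isNotNull())
        .withColumn("ingested_at", F.current_timestamp())
    )

    # Curated table that gold/reporting layers and ML use cases build on.
    (silver.write
        .format("delta")
        .mode("overwrite")
        .option("overwriteSchema", "true")
        .save("/lakehouse/silver/events"))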

What we’re looking for

  • 5+ years of experience as a Data Engineer in complex cloud environments.
  • Strong background in Azure (ADF, Databricks) or GCP (Dataflow, BigQuery).
  • Expertise in Python and SQL for complex data processing and validation.
  • Deep understanding of lakehouse concepts: Delta Lake, curated layers, and medallion architecture.
  • Hands-on experience with streaming (Kafka, Pub/Sub, Event Hubs) and batch processing.
  • Solid grasp of DevOps for Data: CI/CD, testing automation, and deployment pipelines (a small example follows after this list).
  • Infrastructure awareness: Experience with Terraform or other infrastructure-as-code (IaC) tooling is required at this senior level.
  • Proactive mindset: Ability to work closely with cross-functional teams in a regulated or enterprise setting.
  • Fluent in English and EU residency required (no visa sponsorship available).
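
To make the "DevOps for Data" point concrete, a small pytest-style check of a transformation's output might look like the sketch below. The DataFrame, column names, and rule are hypothetical; in practice such tests would run in the CI/CD pipeline before deployment.

    # Hypothetical data quality test run in CI; column names and rule are illustrative.
    from pyspark.sql import SparkSession

    def test_event_ids_are_unique():
        spark = SparkSession.builder.master("local[1]").appName("tests").getOrCreate()
        df = spark.createDataFrame(
            [("e1", "2024-01-01"), ("e2", "2024-01-02")],
            ["event_id", "event_timestamp"],
        )
        # A duplicate-free frame keeps its row count after dropDuplicates.
        assert df.count() == df.dropDuplicates(["event_id"]).count()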

Tooling (hands-on use expected): Python, SQL, Spark (Databricks), Azure Data Factory / GCP Dataflow, Kafka / Event Hubs, Terraform, CI/CD (GitLab / GitHub Actions), Airflow or similar orchestrators.

Nice to have

  • Affinity with AI/ML: Experience delivering feature-ready datasets for training and inference.
  • MLOps exposure: Understanding of dataset versioning and feature stores.
  • Data Governance: Experience with tools for data lineage and cataloging.

Apply for this job
