What you’ll gain:
- Lead the design and evolution of next-generation Lakehouse platforms using technologies such as Dremio, Apache Iceberg and Apache Flink.
- Build deep expertise in large-scale data engineering by developing and optimizing data pipelines.
- Influence strategic data architecture decisions through close collaboration with architects, data scientists and engineering leaders.
- Grow as a technical leader by mentoring junior engineers and championing best practices in data quality, observability and engineering excellence.
- Strengthen your versatility by diagnosing and tuning large-scale data systems and gaining exposure across multiple cloud and data-platform technologies.
Who we’re looking for:
- An experienced data engineer (5+ years) with a hands-on track record of building and scaling modern data platforms and Lakehouse architectures.
- Strong proficiency in Apache Spark (Scala or Python), real-time and batch processing, and the Hadoop ecosystem, with adaptability across emerging data tools.
- Solid cloud engineering experience across Azure, AWS or GCP, particularly with object storage services such as S3, ADLS or GCS.
- Strong architectural foundation with knowledge of dimensional modelling, Data Vault, streaming patterns (Lambda/Kappa) and data governance principles.
- A collaborative, future-focused engineer who communicates clearly, works independently, embraces Agile ways of working and adapts quickly to new technologies, strengthening the team’s overall versatility.
Ready to take your next step?