The successful candidate will take ownership of the company’s entire data environment – from ingestion, modelling, and storage through to powering insights that drive product decisions and customer outcomes. This is a unique opportunity to shape the data strategy and architecture of a scaling SaaS business from the ground up.
Key Responsibilities
- Redesign and optimise existing data pipelines across AWS and Databricks.
- Build scalable batch and (limited) real-time data processing frameworks.
- Design and implement data models following star schema and medallion architecture principles.
- Manage and monitor the Databricks environment to ensure performance, reliability, and cost-efficiency.
- Collaborate with product, engineering, and analytics teams to deliver high-impact data solutions.
- Establish and maintain data best practices, documentation, and governance standards.
Skills & Experience
- Strong experience across AWS and Databricks (non-negotiable).
- Deep understanding of data modelling, particularly star schema and medallion architecture.
- Proven ability to build, manage, and optimise Databricks environments.
- Background in product-led, SaaS, or startup environments.
- Strong proficiency in SQL, Python, and pipeline orchestration tools (e.g. Airflow, dbt, or similar).
- A proactive, self-driven mindset – capable of making decisions and driving outcomes independently.
Desirable
- Experience designing cloud-native data architectures from scratch.
- Exposure to modern data stack tools and event-driven architecture.
- Familiarity with governance, monitoring, and observability frameworks.