
Data Engineer

Damia Group
Department: Data Analysis
Type: Remote
Region: UK
Location: London Area, United Kingdom
Experience: Associate
Salary: £40,000 - £60,000
Skills:
SQL, ETL/ELT, Data Warehousing, Python, Cloud Data Platforms, Data Governance, CI/CD, Data Modeling, Apache Airflow, Git

Job Description

Posted on: January 27, 2026

**Data Engineer - £40-60K base + benefits – Remote (UK based – occasional office visits to Cheltenham or London)**

We are looking for a mid-level Data Engineer to join one of our client's Data Engineering teams. This team manages the enterprise Data Warehouse and data platform. In this role you will design, build, and optimise data pipelines, uphold data quality and governance standards, and develop scalable architectures that support analytics, reporting, and operational systems across the organisation. You will work with the rest of the Data & Business Intelligence team to advance the data infrastructure, improve performance, and enable downstream teams to extract maximum value from its data assets.

Key responsibilities:

  • Design, build, and maintain robust ETL/ELT pipelines to ingest data from multiple source systems (clinical, finance, HR, operations, marketing, IT).
  • Optimise the enterprise Data Warehouse for performance, scalability, and cost efficiency (partitioning, indexing, query tuning).
  • Automate data workflows, monitoring, and validation using our orchestration tools (Dataform, Google BigQuery).
  • Apply best practices in data governance, metadata management, and master data management.
  • Implement proactive monitoring and alerting to detect pipeline failures, schema changes, or data anomalies.
  • Partner with developers, analysts, and data consultants to evolve our technical architecture and adopt new technologies.
  • Support ad hoc data needs by building curated datasets and APIs for analytics, BI, or ML use cases.
  • Document pipelines, data lineage, and architectural decisions to ensure transparency and maintainability.
  • Ensure coding standards and code quality are maintained by observing and contributing to the peer review process.

Essential Skills & Experience

  • Strong SQL expertise, including advanced query optimisation and debugging.
  • Proven experience building and maintaining large-scale data pipelines (batch and/or streaming).
  • Deep understanding of relational databases, data modelling and Data Warehouse/Lakehouse design principles.
  • Proficiency with ETL/ELT frameworks, orchestration tools (e.g., Apache Airflow, dbt, Azure Data Factory), and version control (Git).
  • Familiarity with cloud data platforms (Azure Synapse, Snowflake, Redshift, or BigQuery).
  • Practical knowledge of processing unstructured and semi-structured datasets (Parquet, ORC, and JSON).
  • Programming skills in Python (or similar) for data transformation, automation, and scripting.
  • Experience working with sensitive/regulated data, implementing appropriate security and compliance (GDPR, HIPAA, etc.).
  • Solid knowledge of CI/CD for data products.
  • Analytical mindset with a focus on scalability, automation, and reliability.
Originally posted on LinkedIn

Apply now
