Join us as a Data Engineer
- This is an exciting opportunity to use your technical expertise to collaborate with colleagues and build effortless, digital-first customer experiences
- You’ll lead on sourcing, accumulating and analysing key data from across the business, as well as on developing the Mettle data analytics capability, building data solutions in both AWS and GCP
- By participating actively in the data engineering community, you’ll deliver opportunities that support our strategic direction while building your network across the bank
What you'll do
We’ll look to you to drive value for the customer through data modelling, sourcing and transformation. You’ll work closely with core technology and architecture teams to deliver strategic data solutions, while driving Agile and DevOps adoption in the delivery of data engineering.
We’ll also expect you to be:
- Automating data engineering pipelines by removing manual stages
- Developing comprehensive knowledge of the bank’s data structures and metrics, advocating change where needed for product development
- Educating the business in new data techniques and embedding them through role modelling, training and experiment design oversight
- Delivering data engineering strategies to build a scalable data architecture and feature-rich customer datasets for data scientists
- Developing solutions for streaming data ingestion and transformation in line with our streaming strategy
The skills you'll need
To be successful in this role, you’ll need to be an intermediate-level programmer and data engineer with a qualification in Computer Science or Software Engineering. You’ll also need a strong understanding of data usage and dependencies across wider teams and the end customer, as well as a proven track record of extracting value and features from large-scale data.
You’ll need a good understanding of modern code development practices, along with strong critical thinking and proven problem-solving abilities. We’d also like you to have a background in data analytics and data science, and the ability to translate business objectives into data-driven insights.
You’ll also need:
- Experience of developing in Python, Scala and Java
- Familiarity with microservices, containerisation, GitOps and CI/CD
- Experience with stream processing and event streaming platforms such as Kafka, Kafka Streams, Akka Streams, Beam and Spark
- Experience with schema design, schema registries, data serialisation formats and performant data storage
- Experience of ETL frameworks or methodologies, such as Airflow
- Demonstrable experience working with data persistence and warehousing technologies in GCP and AWS
- Experience of ETL technical design, automated data quality testing, QA and documentation, data modelling and data wrangling
- Extensive experience using RDBMSs, ETL pipelines, Hadoop and SQL