Join us as a Data Engineer
- This is an exciting opportunity to use your technical expertise to collaborate with colleagues and build effortless, digital-first customer experiences
- You’ll be simplifying the bank by developing innovative, data-driven solutions, driving commercial success through insight, and keeping our customers and the bank safe and secure
- Participating actively in the data engineering community, you’ll deliver opportunities to support our strategic direction while building your network across the bank
- We’re recruiting for multiple roles across a range of levels
What you'll do
We’ll look to you to drive value for the customer through data modelling, sourcing and transformation. Working closely with universal analysts, platform engineers and data scientists, you’ll carry out data engineering tasks to build a scalable data architecture, including data extraction and transformation.
As well as this, you’ll be:
- Automating data engineering pipelines by removing manual stages
- Developing solutions for streaming data ingestion and transformation in line with our streaming strategy
- Developing comprehensive knowledge of data platform cost levers to build cost-effective and strategic solutions
- Participating in the data engineering community to deliver opportunities to support the bank's strategic direction
The skills you'll need
To be successful in this role, you’ll need to be a programmer and data engineer with a qualification in Computer Science or Software Engineering. You’ll also need a strong understanding of data usage and dependencies across wider teams and the end customer, as well as a proven track record in extracting value and features from large-scale data.
You’ll need knowledge of data engineering programming languages such as Python, PySpark, SQL, Java and Scala. You’ll also have an understanding of Apache Spark and ETL tools like Informatica PowerCenter, Informatica BDM or DEI, StreamSets and Apache Airflow.
You’ll also demonstrate:
- Knowledge of messaging, event or streaming technology such as Apache Kafka
- Knowledge of big data platforms such as Snowflake, Amazon Redshift, PostgreSQL, MongoDB, Neo4j and Hadoop
- Good knowledge of cloud technologies such as Amazon Web Services, Google Cloud Platform and Microsoft Azure
- Experience of ETL technical design, automated data quality testing, QA and documentation, data warehousing, data modelling and data wrangling
- Extensive experience using RDBMS and ETL pipelines
- A good understanding of modern code development practices
- Good critical thinking and proven problem-solving abilities