Data Platform Engineer

  • Job Reference: R-00124353-OTHLOC-IND-5FNEW177
  • Date Posted: 15 June 2021
  • Employer: RBS
  • Location: Delhi
  • Salary: On Application
  • Sector: Banking & Financial Services
  • Job Type: Full Time

Job Description

Join us as a Data Platform Engineer

  • We're seeking a talented Data Platform Engineer to build effortless, digital-first customer experiences and simplify the bank by developing innovative, data-driven solutions
  • You'll inspire the bank to be commercially successful through insights, while keeping our customers' and the bank's data safe and secure
  • This is a chance to hone your expert programming and data engineering skills in a fast-paced and innovative environment

What you'll do

As a Data Platform Engineer, you'll partner with technology and architecture teams to build your data knowledge and develop data solutions that deliver value for our customers. Working closely with universal analysts, platform engineers and data scientists, you'll carry out data engineering tasks to build a scalable data architecture, including data extraction and transformation.

As well as this, you’ll be:

  • Loading data into data platforms
  • Building automated data engineering pipelines
  • Delivering streaming data ingestion and transformation solutions
  • Participating in the data engineering community to deliver opportunities to support the bank's strategic direction
  • Developing a clear understanding of data platform cost levers to build cost effective and strategic solutions

The skills you'll need

You'll be an experienced programmer and data engineer, with a BSc qualification or equivalent in Computer Science or Software Engineering. Along with this, you'll have a proven track record in extracting value and features from large-scale data, and a developed understanding of data usage and dependencies with wider teams and the end customer.

We're recruiting for multiple roles across a range of levels, up to and including experienced Managers.

You'll also demonstrate:

  • Experience deploying and managing distributed data platforms such as Spark, Hadoop, Kafka, MongoDB and Neo4j
  • Experience managing data science and engineering tooling on the cloud, such as SageMaker, MLOps, Airflow, StreamSets and Informatica
  • Experience deploying applications to at least one major public cloud provider, such as AWS, Azure or GCP
  • Expertise in Unix and DevOps automation tools such as Terraform and Puppet
  • Knowledge and experience of architecting on the cloud using Site Reliability Engineering and security principles
  • Experience of ETL technical design, automated data quality testing, QA and documentation, and data warehousing and data modelling capabilities
  • Extensive experience using RDBMS, ETL pipelines, Python, Hadoop, SQL and data wrangling
  • Good knowledge of modern code development practices
  • Excellent written and verbal communication skills
  • Good critical thinking and proven problem solving abilities