Senior Data Engineer

A Tech-Based Financial Services Company

Geekhunter is hiring on behalf of our client, a tech-based financial services company with offices in Jakarta, Singapore, and Pune. Created by the visionaries who brought you the first reinvented banking experience in Indonesia, DK aspires to create the most valuable digital financial service on the back of the Go-ecosystem and beyond.

Our client is a financial technology company with multiple offices in the APAC region. In its quest to build a better financial world, one of its key goals is to create an ecosystem-linked financial services business. Combining the best domain knowledge in financial services, data, artificial intelligence, and credit-rating technology, our client brings the next generation of data-centric platforms to transform the financial services industry in Asia.

Perks:

  • THR (Religious Festive Allowance)
  • BPJS-K and BPJS-TK (National Health and Employment Social Security)
  • Competitive Salary
  • Private Health Insurance

Job Description:

We are seeking a hands-on Senior Data Engineer to help us build out and manage our data infrastructure, which must operate reliably at scale with a high degree of automation in setup and maintenance. The role involves setting up and managing the data infrastructure, as well as building and optimizing key ETL pipelines for both batch and streaming data. The ability to work with teams across product, engineering, BI/analytics, and data science is essential. The role carries ownership of data model design and data quality; automation and the use of data science to manage and improve data quality are valued. The individual will also play an active role in ensuring that data governance policies and tooling are implemented and adhered to.

The individual will also need to manage multiple stakeholders at an executive level and make well-informed architectural choices when required. A high degree of empathy is required for the needs of the downstream consumers of the data artefacts produced by the data engineering team, i.e. software engineers, data scientists, business intelligence analysts, etc., and the individual needs to produce transparent and easily navigable data pipelines. Value should be placed on consistently producing high-quality metadata to support discoverability and consistency of calculation and interpretation.

Job Requirements:

Candidates should have broad experience across the following systems and languages:

  • Apache Kafka
  • Apache Flink
  • Apache Airflow
  • Cloud data warehouses such as Redshift or BigQuery
  • Python and Java

How to Apply:
Send your CV to the recruiter who contacted you, or email it to recruiter@geekhunter.co.
