Cloud Data Engineer

Cloud Transformation Solutions and Managed Services Provider

Geekhunter is hiring on behalf of our client: a leading Cloud Transformation Solutions and Managed Services Provider offering a broad range of services across all major public cloud providers. The company focuses on helping companies unlock business value by migrating and running business-critical systems on the cloud.

The company is a Portfolio Company of a private equity firm focusing on global infrastructure investments. In December 2020, the equity firm acquired and merged two companies to add their services to its Asian Digital Infrastructure Platform with offices in Singapore, Indonesia, Thailand, Vietnam, India, Malaysia, and the United States.


Benefits:

  • THR
  • BPJS Ketenagakerjaan & BPJS Kesehatan fully covered
  • Private Health Insurance
  • Working tools provided

Job Description:

As a Cloud Data Engineer, you’ll play a key role in ensuring that customers have the best experience moving to cloud products. You will be responsible for providing technical solutions, managing relationships with customers’ technical staff, and enabling our Cloud partners. This role has a particular focus on clients whose cloud needs call for solutions combining cloud technology with Machine Learning and Big Data.

You will lead deployments, implementations, and integrations across a variety of platforms and product lines, and continuously engage with the community of strategic customers and partners to enable their success by building deeper relationships and more efficient integrations.


  • Act as a trusted technical data engineer to customers and solve complex data integration, data warehouse, and Big Data challenges.
  • Provide domain expertise on public cloud and enterprise technology.
  • Work with the customer to deploy modern technical architectures and solutions that enable their business objectives.
  • Create and deliver best-practice recommendations, tutorials, blog articles, sample code, and technical reports, adapting content to different levels of key business and technical stakeholders.
  • Provide highly technical implementation support in customer environments, including guidance on implementation feasibility of cross-product integrations.

Job Requirements:

Minimum qualifications:

  • Experience working with data warehouses, including data warehouse technical architectures, infrastructure components, ETL/ELT, and reporting/analytic tools and environments.
  • Understanding of Big Data technologies and solutions (Spark, Hadoop, Hive, MapReduce) and multiple scripting and configuration languages (Python, YAML).
  • Experience writing software in one or more languages, such as Java, Python, Go, C++ or similar.
  • Experience with cloud technologies in distributed compute environments, cloud-native application development, hosted services, storage systems, remote Linux/Unix system administration, and/or content delivery networks.
  • Experience managing client-facing projects, troubleshooting client technical issues, working with engineering, sales, services, and customers.
  • 3 years of industry experience in software development, data engineering, business intelligence, data science, or a related field, with experience in manipulating, processing, and extracting value from datasets.
  • Knowledge of solution architecture within web/mobile environments, web/internet-related technologies, and architecture across Software-as-a-Service, Platform-as-a-Service, Infrastructure-as-a-Service, and cloud productivity suites.
  • Ability to work with, communicate effectively with, and influence stakeholders on internal/external engineering teams, product development teams, sales ops teams, and external partners and consumers.

Preferred qualifications:

  • Bachelor’s degree in Computer Science, Mathematics, or a related technical field, or equivalent practical experience.
  • Experience in technical consulting.
  • Experience working with Big Data, information retrieval, data mining, or machine learning, as well as experience building multi-tier, high-availability applications with modern technologies (such as NoSQL databases, MongoDB, Spark MLlib, TensorFlow).
  • Experience architecting and developing software or internet-scale, production-grade Big Data solutions in virtualized environments.
  • Experience with open-source software (Cassandra, MongoDB, RabbitMQ) and enterprise Content Management or Business Applications (CRM, ERP).
  • Experience in web technologies (HTML, XML, JSON, OAuth 2) and relational data analysis in MySQL, Google BigQuery, etc.
  • Understanding of Google Cloud Platform (GCP) technologies in the big data and data warehousing space (BigQuery, Cloud Data Fusion, Dataproc, Dataflow, Data Catalog, Data Studio).

How to Apply:

Send your CV to the recruiter who contacted you, or to
