You will be part of a high-performing, cross-functional team of JVM technologists, Data Scientists, and BI specialists. You will also be part of a wider centre of excellence in Data Engineering, which is responsible for data and analytics.
You have experience with cloud computing and with developing data processing pipelines, including ingestion, cleaning, transformation, and monitoring. You have worked with large-scale batch processing as well as near-real-time stream processing. Best practices in data and software engineering are dear to your heart, yet you are not dogmatic and know how to develop and deploy quickly and iteratively in cross-functional teams.
You have deep experience with many data engineering technology stacks and approaches. Consequently, you can guide strategic data engineering decisions as easily as day-to-day ones, spanning processes, practices, and technologies. You enjoy working with other principal and senior technologists who have complementary skills, so you can learn from each other. This is a technical, hands-on role.
- Develop robust ETL/ELT pipelines including orchestration
- Develop monitoring for jobs and data quality
- Select and/or explore appropriate cloud services (AWS)
- Work closely in cross-functional teams with:
  - Cloud/System Engineers on infrastructure automation
  - Analysts and Data Scientists on analytics and machine learning projects
  - Supply chain SMEs
  - The core supply chain engineering team
  - Business stakeholders
- Spark, AWS Lambda, and other data processing solutions
- Python, Java, Scala, Kotlin, or comparable languages
- Software engineering practices, including TDD and CI/CD
- Public Cloud, specifically Amazon Web Services (AWS)
- Distributed log/event/pub-sub technologies such as Kafka, Kinesis, and RabbitMQ
- SQL scripting and RDBMSs such as Postgres, ...
- Essential distributed computing and architecture principles
- Undergraduate degree in CS, IT, or a related field
- Professional experience in a comparable role
- Delivery at pace in a low-ceremony environment