Our Data Platform consists of:
A cutting-edge integration toolset,
Business Intelligence, Analytics, and storage capabilities,
A Master Data Management platform,
A Data Governance platform encompassing Data Lineage and Data Quality.
Our Data Organization consists of:
Business Product Teams (supporting the business stakeholder initiatives, organized according to the Agile principles)
Backbone/Platform Teams (supporting the data platform initiatives, organized according to the Agile principles)
Transversal Competencies Team (supporting initiatives across the various profiles and technologies, organized according to the Agile principles (Spotify model))
Our Data Organization never stands still: no status quo!
Accordingly, we challenge ourselves and the organization continuously.
We work in a fairly flat hierarchy, with our teammates spread across agile teams, following open-source principles.
We are looking for a Data Engineer who will primarily join one of our Product Teams. This is an opportunity to leave your mark on our corporate Data organization and to contribute, indirectly, to better products for our customers, through higher-quality data and data services.
We are looking for Data Engineers able to create data pipelines, efficient storage structures, and powerful materialized views across different analytical technologies, as well as data exchange endpoints for our users. To some extent, you will also interact with governance tooling (glossary, modeling, lineage, data quality, …).
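To give a flavor of the ingest–transform–serve pattern described above, here is a minimal sketch. It uses Python's standard-library sqlite3 as a stand-in for an analytical store such as Databricks or SQL Server; all table, column, and function names are hypothetical illustrations, not part of our stack.

```python
import sqlite3

# Stand-in for an analytical store; in practice this would be
# Databricks / SQL Server. All names below are hypothetical.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# 1. Ingest: load raw events into a staging table.
cur.execute("CREATE TABLE raw_orders (customer TEXT, amount REAL)")
cur.executemany(
    "INSERT INTO raw_orders VALUES (?, ?)",
    [("acme", 120.0), ("acme", 30.0), ("globex", 50.0)],
)

# 2. Transform: materialize an aggregated table for analysts
# (a stand-in for a materialized view).
cur.execute(
    """
    CREATE TABLE orders_by_customer AS
    SELECT customer, SUM(amount) AS total
    FROM raw_orders
    GROUP BY customer
    """
)

# 3. Serve: a simple "data exchange endpoint" returning the result.
def totals():
    return dict(cur.execute("SELECT customer, total FROM orders_by_customer"))

print(totals())
```

The real work naturally involves distributed engines, scheduling, and monitoring, but the shape — staging, materialization, and a consumer-facing interface — is the same.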
What you will NOT be doing
You will not act from a technical ivory tower.
You do not make decisions for the Business; rather, you advise them with careful logic and show your reasoning.
You do not deliver bare technical solutions; you deliver product features.
Ideal Profile
Extensive hands-on experience (preferably more than 3 to 5 years) in 2 of the following categories:
Working with SQL Server
Implementing data pipelines using Azure services such as Azure Databricks, but also to some extent Azure Data Factory, Azure Functions, Azure Stream Analytics / Log Analytics, and Azure DevOps
Implementing data pipelines or data enrichments with Python in a Databricks environment
Openness to learning and jumping on new technologies (among those listed above, but also Redis, RabbitMQ, Neo4j, Apache Arrow, …)
Able and willing to interact with business analysts and stakeholders to refine requirements and to present reusable, integrated solutions
Able and willing to contribute to extensive testing of the solution, as well as to the reinforcement of DevOps principles within the team
Able and willing to contribute to the writing and structuring of documentation
Experience with Power BI
Experience with SAP data integration is not mandatory, but it is a plus.
Able to challenge your interlocutors by leveraging rational thinking and a no-nonsense philosophy
Language skills: fluent in English (must have) AND fluent in German, French, or Dutch (soft requirement)
Continuously look at data in a transversal way (no silos), across the entire Enterprise, to maximize coherence, reuse, and adoption
Customer-oriented
Iterative thinking
Analytical approach to problem-solving and a track record of driving results through continuous improvement
A team player embodying respect, open-mindedness, daring, challenge, innovation, and one team / one voice for the Customer
Product-oriented mindset
Enjoy sharing ideas with the team (or with other teams) and contribute to the product success
Good communication skills