Responsibilities
- Build a Data Platform for Data Analytics and AI in the financial industry based on PySpark and MS Azure
- Establish the data platform concept
- Data requirements engineering and modelling
- Data management and transformation
- Design, develop, and maintain data pipelines and backend services for real-time decisioning, reporting, data collecting, and related functions
- Produce high-quality, well-tested, and secure code
- Develop and maintain software designed to improve data governance and security
- Develop processes designed to ensure Data Security and Data Quality
Requirements
- Several years of data engineering experience, on-premises and in the cloud (pipeline design, ETL, data warehousing, SQL, etc.)
- Good knowledge and experience with PySpark
- Good knowledge and experience with MS Azure and its data services (Databricks, Data Factory, etc.)
- Broad experience with various data platform stores (DWH, RDBMS, in-memory caches, etc.), NoSQL databases (Azure Storage, CosmosDB, MongoDB, etc.), and various data types (structured to unstructured)
- Fluent in English
Nice to have
- Experience in developing continuous integration and automated deployments on cloud platforms
- Experience in software development (Python, C#, Scala, JavaScript, or similar)
- Experience with MS Power BI data models
- Experience in the banking industry
- German language skills
Personality
- Good communication skills
- Team player with a service-oriented personality
- Willingness to learn new technologies
Your contact person