Data Engineer (Senior)

Job description

Paiqo is a young consulting company specializing in building data platforms and AI solutions for our customers. Despite having been on the market for only a short time, we count well-known companies in mechanical engineering, financial services, retail and mobile apps among our customers. Our projects always involve applying cutting-edge technologies to exciting real-world data.

One extremely important aspect of working at Paiqo is our culture: having fun working together, trusting each other, and combining independence with teamwork. There are no fixed working hours and no fixed reporting structure. We are looking for Data Engineers whom we can trust to do great work on their projects because they are motivated by the challenging nature of the projects themselves. We are also looking for people we can learn from and who are willing to learn with us, so that our technical skills always stay on the cutting edge.

What we offer

  • Challenging projects with well-known companies in various fields as customers
  • Great emphasis on education and training, with dedicated time on the job
  • Opportunity to use the newest technologies for data storage and processing
  • Young and highly motivated team
  • Home office

Job requirements

  • Good analytical and problem-solving skills, critical thinking, and the ability to explore and understand customers’ technical ecosystems and data, as well as the ability to communicate complex technical problems and their solutions clearly
  • Experience in programming and a demonstrable ability to learn new programming languages. There is no strict requirement for any particular language, since on each project we may need to adapt to the customer’s existing technical landscape. Commonly used languages include Python, C#, Scala, and various dialects and procedural extensions of SQL
  • Deep understanding of modern cloud-based data architecture and cloud services, preferably with Microsoft Azure
  • Experience with Spark and the Hadoop stack is an advantage
  • Experience working with various databases (SQL and NoSQL) 
  • Experience in building data processing pipelines and defining data architectures 
  • 5+ years of experience in a similar role
  • Experience working with data scientists and a good understanding of their needs
  • Willingness to travel
  • Proficiency in English and German