(Senior) Big Data Engineer (m/f/*)

Job description

Our Engineering community is growing, and we’re now looking for a (Senior) Big Data Engineer to join our team supporting our global growth.

As a (Senior) Big Data Engineer, you will design and optimize data processing algorithms on a talented, cross-functional team. You are familiar with the Apache open-source suite of technologies and want to contribute to the advancement of data engineering.


  • A chance to accelerate your career and work with outstanding colleagues in a supportive learning community split across 3 continents
  • Contribute your ideas to our unique projects and make an impact by turning them into reality
  • Balance your work and personal life through our flexible workflow organization and decide for yourself whether you work from home, in the office, or in a hybrid setup
  • Annual performance reviews and regular feedback cycles that create distinct value by connecting colleagues through networks rather than hierarchies
  • Individual development plan, professional development opportunities
  • Educational resources such as paid certifications, unlimited access to Udemy Business, etc.
  • Local, virtual, and global team events where UT colleagues get to know one another

Job requirements


  • You make data useful. You write code in Java or similar languages, test and deploy it to various environments, and design and optimize data processing algorithms for clients
  • You work on feature implementation and automate testing of data-driven applications using open-source and cloud-native technologies
  • You organize your workflow independently in an agile setting and contribute to your team with high quality code in alignment with the project vision
  • You communicate primarily in English with your team members


  • 2+ years of hands-on experience in the development of software using Java or a comparable language
  • Experience with data ingestion, analysis, integration, and design of big data applications using Apache open-source technologies such as Hadoop, Spark, or Flink; experience with Kafka, Docker, or Kubernetes is a plus
  • Solid computer science fundamentals (algorithms, data structures, and programming skills in distributed systems) and work experience in agile environments
  • Professional communication skills in English

Did we pique your interest, or do you have any questions?

We want to hear from you: contact us at recruit@ultratendency.com


Ultra Tendency is an international premier Data Engineering consultancy for Big Data, Cloud, Streaming, IIoT and Microservices. We design, build, and operate large-scale data-driven applications for major enterprises such as the European Central Bank, HUK-Coburg, Deutsche Telekom, and Europe’s largest car manufacturer. Founded in Germany in 2010, UT has developed a reliable client base and now runs 8 branches in 7 countries across 3 continents.

We do more than just leverage tech: we build it. At Ultra Tendency we contribute source code to more than 20 open-source projects, including Ansible, Terraform, NiFi, and Kafka. Our impact on tech and business is there for anyone to see. Enterprises seek out Ultra Tendency because we solve the problems others cannot.

We love a challenge: together, we tackle diverse and unique projects you will find nowhere else. In our knowledge community, you will be part of a supportive network, not a hierarchy. Constant learning and regular feedback drive steady development, and with us you can shape your individual career while maintaining work-life balance.

We evaluate your application based on your skills and corresponding business requirements. Ultra Tendency welcomes applications from qualified candidates regardless of race, ethnicity, national or social origin, disability, sex, sexual orientation, or age.