(Senior) Developer - Data Engineering (f/m/d)
Holidu is a fast-growing travel-tech company with travel enthusiasts from 60+ nations. Together, we are striving to revolutionize the vacation rental industry by helping hosts to manage their vacation rental with ease and guests to find the perfect holiday home they truly enjoy. In October 2022, we secured €100 million Series E funding, allowing us to constantly improve our services and achieve the next milestone on our journey. Our team is united by a passion for modern technologies, an ambition for constant improvement and the drive to bring the best experience to more than 100 million users each year. Want to achieve amazing milestones with us? Then pack your bag, hop on board and get ready for takeoff!
Your future team
- You will be part of the Data Engineering team in the Business Intelligence department. The team is responsible for creating, maintaining and refining the core data pipeline tools, including the Data Warehouse and Data Lake, and for supporting core pipelines built on top of them. Your team will support the organization in answering all of its data needs.
- Your team will consist of 4 data engineers.
- You will take on the responsibility of enabling a Data Mesh-based approach that supports different teams in leveraging data to build data-driven applications.
- You will work in close collaboration with the data analytics and data science teams to further enable them to leverage the tools you have built.
Our Tech Stack
Rich Data Environment: We use Airflow as a scheduler and Data Build Tool (dbt-core) to craft efficient data pipelines with an extensively customized setup leveraging the flexibility of Python. Utilizing Redshift, Athena and DuckDB, we interact with data seamlessly. Our Redshift setup employs a decentralized structure via Data Sharing, enhancing flexibility and scalability. Common data warehouse tasks are automated through the adept use of Spring Boot CLI, streamlining operations and ensuring efficiency.
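To give a flavor of how an Airflow-plus-dbt setup like this is commonly wired together (a minimal sketch with hypothetical names, not Holidu's actual code): an Airflow task typically shells out to dbt-core, passing the logical run date in as a dbt variable.

```python
import json
import shlex


def build_dbt_command(select: str, ds: str, target: str = "prod") -> list[str]:
    """Build a dbt-core CLI invocation for a scheduled pipeline run.

    `select` is a dbt selector (model name, tag, or path); `ds` is the
    logical run date that an Airflow task would inject. Names and the
    `{"ds": ...}` variable convention are illustrative assumptions.
    """
    dbt_vars = json.dumps({"ds": ds})
    return [
        "dbt", "run",
        "--select", select,
        "--target", target,
        "--vars", dbt_vars,
    ]


# An Airflow BashOperator or subprocess call would then execute this:
cmd = build_dbt_command("tag:daily", "2024-01-15")
print(shlex.join(cmd))
```

Keeping the command construction in plain Python like this is one way such a setup stays testable and customizable, independent of the scheduler.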
Leveraging Serverless Architecture: We embrace serverless architecture using the Serverless Framework and Python, deploying lambdas and endpoints to facilitate dynamic and responsive interactions with the data lake, such as real-time partition indexing of Hive tables.
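A partition-indexing lambda of this kind typically maps an incoming S3 object key onto a Hive-style partition spec and registers it in the catalog. A minimal sketch of the key-parsing step, assuming a `year=/month=/day=` layout (the table, bucket, and path conventions are illustrative, not Holidu's actual schema):

```python
import re

# Hive-style partition segment expected inside the S3 object key.
PARTITION_RE = re.compile(r"year=(\d{4})/month=(\d{2})/day=(\d{2})/")


def partition_add_ddl(table: str, s3_key: str, bucket: str) -> str:
    """Derive an ALTER TABLE ... ADD PARTITION statement from an S3 key.

    In a real lambda this DDL would be submitted to Athena so the new
    partition becomes queryable as soon as the file lands.
    """
    m = PARTITION_RE.search(s3_key)
    if m is None:
        raise ValueError(f"key has no partition path: {s3_key}")
    year, month, day = m.groups()
    # Location is the partition directory, i.e. the key up to the match end.
    location = f"s3://{bucket}/{s3_key[:m.end()]}"
    return (
        f"ALTER TABLE {table} ADD IF NOT EXISTS "
        f"PARTITION (year='{year}', month='{month}', day='{day}') "
        f"LOCATION '{location}'"
    )


ddl = partition_add_ddl(
    "events", "raw/events/year=2024/month=01/day=15/part-000.parquet", "data-lake"
)
print(ddl)
```

Using `ADD IF NOT EXISTS` makes the handler idempotent, which matters when multiple files land in the same partition and each S3 event triggers the lambda.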
Comprehensive Cloud and DevOps Integration: Our tech arsenal includes Terraform, Docker, and Jenkins, forming a robust foundation for cloud-based development and seamless DevOps integration.
Detailed Monitoring Capabilities: Monitoring is a top priority, and we employ ELK, Grafana, Looker, OpsGenie and internally developed technologies to ensure real-time visibility into system performance and data workflows.
Your role in this journey
- Play a key role in a high-performing cross-functional team with a strong focus on data products, velocity, quality and engineering culture.
- Engage comprehensively in software development - from ideation to release and monitoring.
- Architect, design, and optimize robust data pipelines to extract, transform, and load diverse datasets efficiently and reliably.
- Devote efforts to crafting practical development toolchains to empower other teams in the organization to build high quality datasets.
- Shape engineering objectives in tandem with engineering managers.
- Excited about state-of-the-art data technologies? Deep dive into a variety of solutions, optimize them, or bring completely new innovations to the table.
- Seek cost-efficient solutions while maintaining high standards of quality (SLA).
- Proactively identify opportunities and drive initiatives to enhance data engineering processes, monitor pipeline performance, and identify opportunities for optimization and efficiency gains.
- Focus on the growth of your team by offering consistent and valuable feedback and recruiting new engineers.
- Apply best practices and continuously learn and share your thoughts with your peers.
Your backpack is filled with
- A bachelor's degree in Computer Science, a related technical field or equivalent practical experience.
- Experience building and implementing Lakehouse architectures in AWS or other similar setups.
- Experience building batch and streaming data pipelines using technologies such as Airflow, dbt, Redshift, Athena/Presto, Firehose, Spark, and SQL databases.
- Strong programming skills in Python, SQL, Java, Kotlin, or similar languages.
- Previous experience in Data Governance is highly desirable.
- Excellent communication skills with the ability to influence stakeholders and align technical solutions with business objectives.
Our adventure includes
- Have an impact on the millions of people using our product every month.
- Grow with responsibility from day 1 and develop yourself further through regular feedback, workshops and knowledge exchange.
- Use your personal development budget and 2 extra days off to enhance your professional skills by visiting conferences, buying books, attending courses and more.
- Connect and have fun with international, diverse and yet like-minded people at work, regular events and weekly office days with your team.
- Get to know the world with our flexible home office policy and the opportunity to work from other local offices.
- Go traveling for 28 vacation days + the possibility to take up to 10 unpaid vacation days, with interesting discounts on our Holidu Homes.
- Get a sneak peek at the adventure that awaits you on Instagram @lifeatholidu.
Want to travel with us?
We champion diversity in every aspect of life. We encourage applications from all genders, corners of the world and individual backgrounds. Please feel welcome to submit your application without a photo and details on your gender, date of birth, marital status and nationality. If you have a disability or special need that requires accommodation, please let us know.