Context and assets of the position
Financial and working environment
This postdoc position will be funded by the Cupseli Inria Challenge (Défi Inria).
The Cupseli Inria Challenge brings together 11 Inria teams distributed over 6 Inria centers and the Hive startup company based in Cannes.
The successful candidate will be recruited and hosted at the Inria Center at Rennes University, and the work will be carried out within the MAGELLAN team in collaboration with the other partners.
The position is for one year.
About Hive:
Hive intends to act as a next-generation cloud provider in the context of Web 3.0. Hive aims to exploit the unused capacity of computers to offer the general public a greener and more sovereign alternative to existing clouds, one in which power truly lies in the hands of the users.
To do so, it relies on distributed peer-to-peer networks, end-to-end data encryption, and blockchain technology.
Assignment
Context:
Large-scale P2P environments are characterized by a high number of node failures and high churn [1].
This can introduce unwanted delays in the completion time of running applications and makes both scalability and reliability critical when running data-intensive applications (e.g., MapReduce applications [2]) in a peer-to-peer compute environment.
We are interested in investigating how to optimize the execution of data-intensive applications in the presence of failures and churn by leveraging P2P storage services (e.g., the hive-Disk platform [3]), using checkpoints, and making job scheduling failure-aware.
Objectives:
General-purpose fault-tolerance strategies lead to excessive execution of recovery tasks (i.e., re-execution of the tasks that were running on failed machines).
Therefore, we will investigate how to adapt fault-tolerance techniques to P2P systems by making job scheduling failure-aware (leveraging our previous experience and work with Hadoop clusters [4, 5]) and by enabling checkpoint/restart, so that execution can roll back to the last checkpoint rather than restarting from scratch after a failure [6].
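To make failure-aware placement concrete, here is a minimal Python sketch (our own illustration, not the project's design): candidate nodes are scored by combining an estimate of the probability of surviving the task, assuming exponentially distributed failures, with input-data locality. All names and the 0.5 weight are hypothetical.

    import math
    from dataclasses import dataclass, field

    @dataclass
    class Node:
        name: str
        mtbf_s: float  # observed mean time between failures (seconds)
        local_blocks: set = field(default_factory=set)

    @dataclass
    class Task:
        input_block: str
        runtime_estimate_s: float

    def score(node: Node, task: Task, w_fail: float = 0.5) -> float:
        # Probability that the node survives the task, under an exponential failure model.
        p_survive = math.exp(-task.runtime_estimate_s / node.mtbf_s)
        # Reward nodes that already hold the task's input block (data locality).
        locality = 1.0 if task.input_block in node.local_blocks else 0.0
        return w_fail * p_survive + (1.0 - w_fail) * locality

    def pick_node(nodes: list, task: Task) -> Node:
        return max(nodes, key=lambda n: score(n, task))

A real scheduler would of course also account for queue waiting times and network cost; this only shows how failure statistics can enter the placement decision.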
We will present a performance model for checkpoint/restart in P2P systems and introduce a scheduling framework that decides when and where to trigger checkpoints, where to restart, and when and where to execute recovery tasks, taking into account failure distribution, data location, and resource heterogeneity.
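As a hedged starting point for such a model, the classical first-order approximation by Young gives an optimal checkpoint interval of sqrt(2 * C * M), where C is the cost of writing one checkpoint and M is the mean time between failures; the numbers in the example below are our own assumptions.

    import math

    def young_checkpoint_interval(checkpoint_cost_s: float, mtbf_s: float) -> float:
        """First-order optimal checkpoint interval (Young, 1974)."""
        return math.sqrt(2.0 * checkpoint_cost_s * mtbf_s)

    # Assumed example: 30 s to push a checkpoint to P2P storage, node MTBF of 2 hours.
    print(young_checkpoint_interval(30.0, 2 * 3600))  # ~657 s, i.e. roughly every 11 minutes

In a P2P setting, both C (write bandwidth to peers) and M (churn rate) vary per node, which is precisely why a more refined, location-aware model is needed.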
We will also explore how to use P2P storage services (e.g., the hive-Disk platform) to store checkpoints and temporary data (e.g., map outputs in MapReduce).
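The sketch below shows how task checkpoints could be pushed to and recovered from a P2P store; the put/get interface is hypothetical and does not describe hive-Disk's actual API.

    import pickle
    from typing import Protocol

    class P2PStore(Protocol):
        """Hypothetical storage interface; the real hive-Disk API may differ."""
        def put(self, key: str, data: bytes, replicas: int) -> None: ...
        def get(self, key: str) -> bytes: ...

    def checkpoint_task(store: P2PStore, task_id: str, state: dict, replicas: int = 3) -> None:
        # Replicate the checkpoint across several peers so it survives node churn.
        store.put(f"ckpt/{task_id}", pickle.dumps(state), replicas=replicas)

    def restart_task(store: P2PStore, task_id: str) -> dict:
        # Resume from the last checkpoint instead of re-executing the task from scratch.
        return pickle.loads(store.get(f"ckpt/{task_id}"))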
[1] Apostolos Malatras. State-of-the-art survey on P2P overlay networks in pervasive computing environments. Journal of Network and Computer Applications, 55:1–23, 2015.
[2] Jeffrey Dean and Sanjay Ghemawat. MapReduce: simplified data processing on large clusters. Communications of the ACM, 51(1):107–113, 2008.
[3]
[4] Orcun Yildiz, Shadi Ibrahim, Tran Anh Phuong, and Gabriel Antoniu. Chronos: Failure-aware scheduling in shared Hadoop clusters. In 2015 IEEE International Conference on Big Data (Big Data), pages 313–318. IEEE, 2015.
[5] Orcun Yildiz, Shadi Ibrahim, and Gabriel Antoniu. Enabling fast failure recovery in shared Hadoop clusters: Towards failure-aware scheduling. Future Generation Computer Systems, 74:208–219, 2017.
[6] Ifeanyi P. Egwutuoha, David Levy, Bran Selic, and Shiping Chen. A survey of fault tolerance mechanisms and checkpoint/restart implementations for high performance computing systems. The Journal of Supercomputing, 65(3):1302–1326, 2013.
Main activities
Skills
Benefits
Remuneration
Monthly gross salary of 2,788 euros