
PhD Position F/M: Topology Design for Decentralized Federated Learning (INRIA, Technopole de Sophia Antipolis)


Job description

Context and benefits of the position

This PhD thesis is part of FedMalin, the Inria research initiative on Federated Learning.

The PhD candidate will join the NEO project-team, which is positioned at the intersection of Operations Research and Network Science.

By using the tools of Stochastic Operations Research, the team members model situations arising in several application domains that involve networking in one way or another.


The research activity will be supervised by:

  • Giovanni Neglia
  • Aurélien Bellet

Assigned mission

    # Topology Design for Decentralized Federated Learning


    ## Context
    The increasing amount of data generated by smartphones and IoT devices has motivated the development of Federated Learning (FL) [li20,kairouz21], a framework for on-device collaborative training of machine learning models.

    FL algorithms like FedAvg [mcmahan17] allow clients to train a common global model without sharing their personal data.

    FL reduces data collection costs and can help to mitigate data privacy issues, making it possible to train models on large datasets that would otherwise be inaccessible.

    FL is currently used by many big tech companies (e.g., Google, Apple, Facebook) for learning on their users' data, but the research community also envisions promising applications to learning across large data silos, such as hospitals that cannot share their patients' data [rieke20].


    In the classic FL setting, a server coordinates the training phase.

    At each training round, the server sends the current model to the clients, which individually train on their local datasets and send model updates to the server, which in turn aggregates them (often through a simple averaging operation).
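
    To make this aggregation step concrete, here is a minimal sketch of FedAvg-style server-side averaging, with aggregation weights proportional to the clients' local dataset sizes as in [mcmahan17]. The state-dict layout and the way clients report their updates are assumptions made for this illustration, not part of the job description.

    ```python
    import torch

    def fedavg_aggregate(client_states, client_sizes):
        """Weighted average of client models (FedAvg-style aggregation [mcmahan17]).

        client_states: list of state_dicts returned by the clients after local training
        client_sizes:  list of local dataset sizes, used as aggregation weights
        """
        total = float(sum(client_sizes))
        global_state = {}
        for key in client_states[0]:
            # each parameter tensor is averaged with weight n_k / n
            global_state[key] = sum(
                (n / total) * state[key].float()
                for state, n in zip(client_states, client_sizes)
            )
        return global_state
    ```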

    In contrast to this client-server approach, decentralized FL algorithms (also called P2P FL algorithms) work by having each client communicate directly with a subset of the clients (its neighbours): this process alternates between model updates and weighted averaging of the neighbours' models (consensus-based optimization).
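
    As a rough sketch of one such decentralized round (assuming an undirected communication graph and a row-stochastic matrix of averaging weights; `local_update` is a hypothetical placeholder for whatever local training rule is used):

    ```python
    def decentralized_round(models, neighbours, weights, local_update):
        """One round of decentralized FL: local training followed by weighted
        averaging of the neighbours' models (consensus step).

        models:     dict client -> model parameters (e.g., a numpy vector)
        neighbours: dict client -> list of neighbouring clients (communication graph)
        weights:    dict (i, j) -> averaging weight W_ij, with rows summing to 1
        """
        # 1) every client updates its model on its own local data
        updated = {i: local_update(i, x) for i, x in models.items()}
        # 2) every client mixes its model with its neighbours' models
        mixed = {}
        for i in updated:
            mixed[i] = weights[(i, i)] * updated[i]
            for j in neighbours[i]:
                mixed[i] = mixed[i] + weights[(i, j)] * updated[j]
        return mixed
    ```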

    Decentralized algorithms can take advantage of good pairwise connectivity, avoid the potential communication bottleneck at the server [marfoq20], and provide better privacy guarantees [cyffers22].


    The communication graph (i.e., the graph induced by the clients' pairwise communications) and the clients' local aggregation strategies play a fundamental role in determining the convergence speed of FL algorithms.

    In particular, the communication topology has two contrasting effects on training time.

    First, a more connected topology leads to faster convergence in terms of number of communication rounds [nedic18].

    Second, a more connected topology increases the duration of a communication round (e.g., because it may cause network congestion), motivating the use of degree-bounded topologies where every client sends and receives a small number of messages at each round [lian17].

    Most of the existing literature has focused on one aspect or the other.


    The classic literature on consensus-based optimization has quantified the effect of the communication topology on the number of rounds through worst-case convergence bounds expressed in terms of the spectral gap of the consensus matrix (i.e., the matrix of averaging weights); see [nedic18] and references therein.
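
    For concreteness, the spectral gap of a symmetric, doubly stochastic consensus matrix W can be computed directly from its eigenvalues; the sketch below (plain numpy, with a uniformly weighted ring as an assumed example) illustrates why sparse topologies such as rings have a small gap.

    ```python
    import numpy as np

    def spectral_gap(W):
        """Spectral gap 1 - max(|lambda_2|, ..., |lambda_n|) of a symmetric,
        doubly stochastic consensus matrix W (larger gap = faster consensus)."""
        eigenvalues = np.sort(np.abs(np.linalg.eigvalsh(W)))[::-1]
        return 1.0 - eigenvalues[1]

    # Example: a ring of n clients, each averaging uniformly with itself and its two neighbours
    n = 10
    W = np.zeros((n, n))
    for i in range(n):
        for j in (i - 1, i, i + 1):
            W[i, j % n] = 1 / 3
    print(spectral_gap(W))  # about 0.13: a ring is poorly connected
    ```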

    Later papers have highlighted the convergence rate's insensitivity to the spectral gap when the number of communication rounds is large and learning rates are small [lian17,koloskova21,pu20].
    Another line of work has shown that the effect of the topology is less important if local data distributions [neglia20] or average data distributions in each neighborhood [lebars23,dandi22] are close to the average data distribution over the whole population.

    In the extreme case of homogeneous local distributions, one may even prefer consensus matrices with poor spectral properties because they enable the use of larger learning rates [vogels22].
    A separate line of work has studied how to design the communication topology in order to minimize the duration of one round, taking into account the variability of computation times [neglia19] or the characteristics of Internet connections [marfoq20].
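
    As a toy illustration of this second objective, the sketch below builds a degree-bounded (ring) overlay by greedily following the fastest unused link, so that the duration of a round is driven by the slowest link actually used. The link-delay input and the greedy rule are assumptions made for illustration; they are not the designs studied in [neglia19] or [marfoq20].

    ```python
    def greedy_ring(nodes, delay):
        """Order the clients into a ring (every client has degree 2) by greedily
        chaining each client to the remaining client with the smallest link delay.

        nodes: list of client identifiers
        delay: function (i, j) -> communication delay of the link between i and j
        """
        ring = [nodes[0]]
        remaining = set(nodes[1:])
        while remaining:
            current = ring[-1]
            nxt = min(remaining, key=lambda j: delay(current, j))
            ring.append(nxt)
            remaining.remove(nxt)
        return ring  # each client communicates with its predecessor and successor
    ```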


    ## Research objectives


    The goal of this PhD is to propose algorithms to design the communication topology for decentralized federated learning so as to minimize the total training duration, taking into account how connectivity affects both the number of rounds required and the duration of a single round.
    Several settings will be considered: in particular, one may construct the topology in a pre-processing step (prior to learning), or dynamically while learning.

    Dynamic topology design can be a way to tackle online decentralized learning [asadi22,marfoq23], where the topology is adjusted and refined as clients collect more data.
    The candidate will also investigate how to practically quantify the similarity of local data distributions during training in order to exploit the advantage of having a neighborhood representative of the average population distribution [lebars23,dandi22].
    Finally, the candidate will also study to what extent existing results can be extended to asymmetric communication links and to other distributed optimization algorithms, such as push-sum algorithms [kempe03,benezit10].
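
    For reference, push-sum [kempe03,benezit10] handles asymmetric (directed) links by having each node track a value and a weight whose ratio converges to the global average; below is a minimal synchronous sketch with uniform splitting among out-neighbours, which is an assumption made for illustration.

    ```python
    def push_sum_round(values, weights, out_neighbours):
        """One synchronous push-sum round: each node splits its (value, weight) pair
        uniformly among itself and its out-neighbours; value/weight converges to the
        average of the initial values even when the communication links are asymmetric.
        """
        new_values = {i: 0.0 for i in values}
        new_weights = {i: 0.0 for i in weights}
        for i in values:
            targets = [i] + list(out_neighbours[i])
            for j in targets:
                new_values[j] += values[i] / len(targets)
                new_weights[j] += weights[i] / len(targets)
        estimates = {i: new_values[i] / new_weights[i] for i in values}
        return new_values, new_weights, estimates
    ```

    Starting from weights all equal to 1 and iterating this round, each client's estimate value/weight approaches the network-wide average.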

    ## References


    [asadi22] M. Asadi, A. Bellet, O.-A. Maillard, and M. Tommasi. Collaborative Algorithms for Online Personalized Mean Estimation. Transactions on Machine Learning Research, 2022.

    [benezit10] F. Benezit, V. Blondel, P. Thiran, J. Tsitsiklis, and M. Vetterli. Weighted gossip: distributed averaging using non-doubly stochastic matrices. In Proceedings of the 2010 IEEE International Symposium on Information Theory, Jun. 2010.

    [cyffers22] E. Cyffers and A. Bellet. Privacy Amplification by Decentralization. In Proceedings of the 25th International Conference on Artificial Intelligence and Statistics, PMLR, May 2022, pp. 5334–5353.

    [dandi22] Y. Dandi, A. Koloskova, M. Jaggi, and S. U. Stich. Data-heterogeneity-aware Mixing for Decentralized Learning. arXiv, Apr. 13, 2022.

    [kairouz21] P. Kairouz, et al. Advances and open problems in federated learning. Foundations and Trends in Machine Learning, 14(1-2), pp. 1-210, 2021.

    [kempe03] D. Kempe, A. Dobra, and J. Gehrke. Gossip-based computation of aggregate information. In Proceedings of the 44th Annual IEEE Symposium on Foundations of Computer Science, 2003, pp. 482–491.

    [koloskova21] A. Koloskova, N. Loizou, S. Boreiri, M. Jaggi, and S. U. Stich. A Unified Theory of Decentralized SGD with Changing Topology and Local Updates. arXiv, Mar. 02, 2021. doi: 10.48550/arXiv.2003.10422.

    [lebars23] B. Le Bars, A. Bellet, M. Tommasi, E. Lavoie, and A.-M. Kermarrec. Refined Convergence and Topology Learning for Decentralized SGD with Heterogeneous Data. AISTATS 2023.

    [li20] T. Li, A. Kumar Sahu, A. Talwalkar, and V. Smith. Federated learning: Challenges, methods, and future directions. IEEE Signal Processing Magazine, 37, 2020.

    [lian17] X. Lian, C. Zhang, H. Zhang, C.-J. Hsieh, W. Zhang, and J. Liu. Can decentralized algorithms outperform centralized algorithms? A case study for decentralized parallel stochastic gradient descent. In Proceedings of the 31st International Conference on Neural Information Processing Systems (NIPS'17), Curran Associates Inc., Dec. 2017, pp. 5336–5346.

    [lian18] X. Lian, W. Zhang, C. Zhang, and J. Liu. Asynchronous Decentralized Parallel Stochastic Gradient Descent. In Proceedings of the 35th International Conference on Machine Learning, PMLR, Jul. 2018, pp. 3043–3052.

    [marfoq20] O. Marfoq, C. Xu, G. Neglia, and R. Vidal. Throughput-Optimal Topology Design for Cross-Silo Federated Learning. In 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada, Dec. 2020.

    [marfoq23] O. Marfoq, G. Neglia, L. Kameni, and R. Vidal. Federated Learning for Data Streams. AISTATS 2023.

    [mcmahan17] B. McMahan, E. Moore, D. Ramage, S. Hampson, and B. Aguera y Arcas. Communication-efficient learning of deep networks from decentralized data. In Artificial Intelligence and Statistics, PMLR, 2017.

    [neglia19] G. Neglia, G. Calbi, D. Towsley, and G. Vardoyan. The Role of Network Topology for Distributed Machine Learning. In IEEE INFOCOM 2019 - IEEE Conference on Computer Communications, IEEE Press, 2019, pp. 2350–2358.

    [nedic18] A. Nedić, A. Olshevsky, and M. G. Rabbat. Network Topology and Communication-Computation Tradeoffs in Decentralized Optimization. Proceedings of the IEEE, 106(5), May 2018, pp. 953–976.

    [neglia20] G. Neglia, C. Xu, D. Towsley, and G. Calbi. Decentralized gradient methods: does topology matter? In 23rd International Conference on Artificial Intelligence and Statistics (AISTATS), Palermo, Italy, Jun. 2020.

    [pu20] S. Pu, A. Olshevsky, and I. Ch. Paschalidis. Asymptotic Network Independence in Distributed Stochastic Optimization for Machine Learning: Examining Distributed and Centralized Stochastic Gradient Descent. IEEE Signal Processing Magazine, 37(3), pp. 114–122.

    [rieke20] N. Rieke, J. Hancox, W. Li, et al. The future of digital health with federated learning. npj Digital Medicine, 3, 119, 2020.

    [tang18] H. Tang, X. Lian, M. Yan, C. Zhang, and J. Liu. D^2: Decentralized Training over Decentralized Data. In Proceedings of the 35th International Conference on Machine Learning, PMLR, Jul. 2018, pp. 4848–4856.

    [vogels22] T. Vogels, H. Hendrikx, and M. Jaggi. Beyond spectral gap: the role of the topology in decentralized learning. In Advances in Neural Information Processing Systems, A. H. Oh, A. Agarwal, D. Belgrave, and K. Cho, Eds., 2022.

    Main activities

    Research.

    Skills

    The candidate should have a solid mathematical background (in particular in optimization) and, in general, be keen on using mathematics to model real problems and derive insights.

    They should also be knowledgeable about machine learning and have good programming skills.

    Previous experience with PyTorch or TensorFlow is a plus.


    We expect the candidate to be fluent in English.

    Benefits

  • Subsidized meals

  • Partial reimbursement of public transport costs

  • Leave: 7 weeks of annual leave + 10 extra days off due to RTT (statutory reduction in working hours) + possibility of exceptional leave (sick children, moving home, etc.)

  • Possibility of teleworking and flexible organization of working hours

  • Professional equipment available (videoconferencing, loan of computer equipment, etc.)

  • Social, cultural and sports events and activities

  • Access to vocational training

  • Contribution to mutual insurance (subject to conditions)

    Remuneration

    Duration: 36 months
    Location: Sophia Antipolis, France
    Gross salary:

    1st year: 2,200 € per month

    2nd and 3rd year: 2,300 € per month

    Required Skill Profession

    Computer Occupations



    Unlock Your PhD Position Potential: Insight & Career Growth Guide


    Real-time PhD Position Jobs Trends

    Expertini's real-time analysis highlights the job market trends for PhD Position roles in Technopole de Sophia Antipolis, France: 157 PhD Position jobs are currently listed in France, of which 4 are in Technopole de Sophia Antipolis.
    Are You Looking for PhD Position F/M Topology Design for Decentralized Federated Learning Job?

    Great news! INRIA is currently hiring and seeking a PhD Position F/M Topology Design for Decentralized Federated Learning to join their team. Feel free to download the job details.

    Wait no longer! Are you also interested in exploring similar jobs? Search now.

    The Work Culture

    An organization's rules and standards set how people should be treated in the office and how different situations should be handled. The work culture at INRIA adheres to the cultural norms as outlined by Expertini.

    The fundamental ethical values are:

    1. Independence

    2. Loyalty

    3. Impartiality

    4. Integrity

    5. Accountability

    6. Respect for human rights

    7. Compliance with French laws and regulations

    What Is the Average Salary Range for PhD Position F/M Topology Design for Decentralized Federated Learning Positions?

    The average salary range for this position varies, but the pay scale is rated "Standard" in Technopole de Sophia Antipolis. Salary levels may differ depending on your industry, experience, and skills. It is essential to research and negotiate effectively. We advise reading the full job specification before applying in order to understand the salary package.

    What Are the Key Qualifications for PhD Position F/M Topology Design for Decentralized Federated Learning?

    Key qualifications for PhD Position F/M Topology Design for Decentralized Federated Learning typically include Computer Occupations, along with the qualifications and expertise listed in the job specification. Be sure to check the specific job listing for detailed requirements and qualifications.

    How Can I Improve My Chances of Getting Hired for PhD Position F/M Topology Design for Decentralized Federated Learning?

    To improve your chances of getting hired for PhD Position F/M Topology Design for Decentralized Federated Learning, consider enhancing your skills. Check your CV/Résumé Score with our free Tool. We have an in-built Resume Scoring tool that gives you the matching score for each job based on your CV/Résumé once it is uploaded. This can help you align your CV/Résumé according to the job requirements and enhance your skills if needed.

    Interview Tips for PhD Position F/M Topology Design for Decentralized Federated Learning Job Success

    INRIA interview tips for PhD Position F/M Topology Design for Decentralized Federated Learning

    Here are some tips to help you prepare for and ace your PhD Position F/M Topology Design for Decentralized Federated Learning job interview:

    Before the Interview:

    Research: Learn about INRIA's mission, values, and research activities, as well as the specific job requirements and other current openings.

    Practice: Prepare answers to common interview questions and rehearse using the STAR method (Situation, Task, Action, Result) to showcase your skills and experiences.

    Dress Professionally: Choose attire appropriate for the company culture.

    Prepare Questions: Show your interest by having thoughtful questions for the interviewer.

    Plan Your Commute: Allow ample time to arrive on time and avoid feeling rushed.

    During the Interview:

    Be Punctual: Arrive on time to demonstrate professionalism and respect.

    Make a Great First Impression: Greet the interviewer with a handshake, smile, and eye contact.

    Confidence and Enthusiasm: Project a positive attitude and show your genuine interest in the opportunity.

    Answer Thoughtfully: Listen carefully, take a moment to formulate clear and concise responses. Highlight relevant skills and experiences using the STAR method.

    Ask Prepared Questions: Demonstrate curiosity and engagement with the role and company.

    Follow Up: Send a thank-you email to the interviewer within 24 hours.

    Additional Tips:

    Be Yourself: Let your personality shine through while maintaining professionalism.

    Be Honest: Don't exaggerate your skills or experience.

    Be Positive: Focus on your strengths and accomplishments.

    Body Language: Maintain good posture, avoid fidgeting, and make eye contact.

    Turn Off Phone: Avoid distractions during the interview.

    Final Thought:

    To prepare for your PhD Position F/M Topology Design for Decentralized Federated Learning interview at INRIA, research the company, understand the job requirements, and practice common interview questions.

    Highlight your research skills, achievements, and analytical thinking. Be prepared to discuss your experience and your approach to working as part of a team. Additionally, review INRIA's research activities and be prepared to discuss how you can contribute to their success.

    By following these tips, you can increase your chances of making a positive impression and landing the job!

    How to Set Up Job Alerts for PhD Position F/M Topology Design for Decentralized Federated Learning Positions

    Setting up job alerts for PhD Position F/M Topology Design for Decentralized Federated Learning is easy with France Jobs Expertini. Simply visit our job alerts page here, enter your preferred job title and location, and choose how often you want to receive notifications. You'll get the latest job openings sent directly to your email for FREE!