PhD Position F/M: Towards discovering information from very heterogeneous data sources in a “data lake” environment (INRIA, Palaiseau)
Context and benefits of the position
The PhD will be funded by the ANR TopOL project and will start in October 2025.
The project will fund collaborations and visits between the participating labs.
Assigned mission
Context: Heterogeneous Data Lakes

Exploiting datasets requires identifying what each dataset contains and what it is about.
Users with Information Technology skills may do this using dataset schemata, documentation, or by querying the data.
In contrast, non-technical users (NTUs, in short) are limited in their capacity to discover interesting datasets.
This hinders their ability to develop useful or even critical applications.
The problem is compounded by the large number of datasets which NTUs may be facing, in particular when value lies in exploiting many datasets together, as opposed to one or a few at a time, and when datasets are of different data models, e.g., tables or relational databases, CSV files, hierarchical formats such as XML and JSON, PDF or text documents, etc.
Following our team’s experience in collaborating with French media journalists [1, 4] and ongoing collaboration with the International Consortium of Investigative Journalists (ICIJ), we will primarily draw inspiration from journalist NTU applications.
These include several high-profile journalistic investigations based on massive, heterogeneous digital data, e.g., Paradise Papers or Forever Pollutants.
The setup we consider is: how to help NTUs identify useful parts of very large sets of heterogeneous datasets, assemble and discover the information from these datasets.
For example, faced with a corpus of thousands or tens of thousands of files (text, spreadsheets, etc.), a journalist may want to know: which subsidies did the Region grant, and where geographically?
Or: which shipping companies have shipped on routes towards Yemen, and who contracted with them?
The tools we aim to develop generalize also beyond journalism, for instance, to enterprise data lakes containing documents and various internal datasets, scientific repositories with reports and experimental results, etc.
State of the art

Many techniques and systems target one dataset (or database), of one data model.
NTUs are used to working with documents, such as texts, PDFs, or structured documents in Office formats, on which Information Retrieval (IR) enables efficient keyword searches.
Large Language Models (LLMs, in short), and tools built on top of them, such as chatbots or Google’s NotebookLM, add unprecedented capacities to summarize and answer questions over documents provided as input.
However, because of possible hallucinations [9, 15], LLM answers still require manual verification before use in a setting with real-world consequences.
In particular, a recent study has shown a high error rate on the task of identifying the source of a news item, across eight major chatbots [8].
LLMs are also not reliable information sources (i) for real-world facts that occurred after their latest training input, and (ii) for little-known entities absent from the training set, e.g., a small French company active in a given region.
Finally, LLMs hosted outside of the user’s premises are not acceptable for users such as the ICIJ, for which dataset confidentiality during their investigation is crucial; locally deployed models are preferable for confidentiality, and smaller (frugal) ones also reduce the computational footprint.
While we consider that language models should not be taken as reliable sources of knowledge, they are crucial ingredients for matching (bridging) user questions with answer components from various datasets, thanks to the semantic embeddings we can compute for the questions and the data.
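As an illustration of this embedding-based matching, the sketch below ranks dataset descriptions against a user question. The bag-of-words “embedding”, the similarity measure, and the example descriptions are toy stand-ins for a real (ideally locally deployed) neural sentence encoder:

```python
from collections import Counter
from math import sqrt

def embed(text):
    """Toy bag-of-words 'embedding': a term-frequency vector.
    A real system would use a neural sentence encoder instead."""
    return Counter(text.lower().split())

def cosine(u, v):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(u[t] * v[t] for t in u if t in v)
    nu = sqrt(sum(c * c for c in u.values()))
    nv = sqrt(sum(c * c for c in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def rank_datasets(question, dataset_descriptions):
    """Rank dataset descriptions by similarity to the user's question."""
    q = embed(question)
    scored = [(cosine(q, embed(d)), d) for d in dataset_descriptions]
    return sorted(scored, reverse=True)

datasets = [
    "spreadsheet of regional subsidies granted to companies",
    "shipping manifests for routes in the Red Sea",
    "minutes of municipal council meetings",
]
ranking = rank_datasets("which subsidies did the region grant", datasets)
```

With a neural encoder, near-synonyms (“subsidies” / “grants”) would also match; the bag-of-words variant above only catches shared words, which is exactly the limitation embeddings address.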
Database systems allow users to inspect and use the data via queries.
NTUs find these unwieldy, especially if multiple systems must be used for multiple data models.
Natural language querying leverages trained language models to formulate structured database queries, typically SQL ones [10].
However, errors still persist in the translation, and SQL is not applicable beyond relational data.
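Because translation errors persist, generated queries are typically guarded before execution. A minimal sketch of such a guard, assuming the query string comes from some text-to-SQL model (the table and query here are invented for illustration, and the check is deliberately conservative, not a full SQL validator):

```python
import sqlite3

READ_ONLY_PREFIXES = ("select", "with")

def run_generated_sql(conn, sql):
    """Execute model-generated SQL only if it looks like a single
    read-only query; a hypothetical guard, not a full validator."""
    stripped = sql.strip().rstrip(";")
    if ";" in stripped:
        raise ValueError("multiple statements rejected")
    if not stripped.lower().startswith(READ_ONLY_PREFIXES):
        raise ValueError("only read-only queries are executed")
    return conn.execute(stripped).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE subsidy (region TEXT, amount INTEGER)")
conn.executemany("INSERT INTO subsidy VALUES (?, ?)",
                 [("Ile-de-France", 120), ("Bretagne", 80)])

# Pretend this string came from a text-to-SQL model:
generated = "SELECT region, amount FROM subsidy WHERE amount > 100"
rows = run_generated_sql(conn, generated)
```

Even with such guards, the returned rows still need user verification, since a syntactically valid query may not faithfully capture the question.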
Keyword search in databases returns sub-tree or sub-graph answers from a large data graph, which may model a relational database, an XML document, an RDF graph, etc., e.g., [2].
However, these techniques have not been scaled up to large sets of datasets.
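The flavor of such keyword search can be sketched as a breadth-first search that connects keyword-matching nodes in a data graph; the tiny graph below is invented, and real systems rely on indexing and answer scoring far beyond this:

```python
from collections import deque

# Tiny labeled data graph: entity -> neighboring entities.
graph = {
    "company:AcmeShipping": ["contract:C42", "route:Yemen"],
    "contract:C42": ["company:AcmeShipping", "org:TradeCo"],
    "route:Yemen": ["company:AcmeShipping"],
    "org:TradeCo": ["contract:C42"],
}

def matches(node, keyword):
    return keyword.lower() in node.lower()

def connect_keywords(graph, kw1, kw2):
    """Return a shortest path linking a node matching kw1 to one
    matching kw2 -- the simplest form of a keyword-search answer."""
    starts = [n for n in graph if matches(n, kw1)]
    frontier = deque((s, [s]) for s in starts)
    seen = set(starts)
    while frontier:
        node, path = frontier.popleft()
        if matches(node, kw2):
            return path
        for nb in graph[node]:
            if nb not in seen:
                seen.add(nb)
                frontier.append((nb, path + [nb]))
    return None

path = connect_keywords(graph, "yemen", "tradeco")
```

The returned path shows *how* the two keywords relate (through a company and a contract), which is precisely the connecting sub-tree that keyword search over data graphs surfaces; scaling this to many large, heterogeneous datasets is the open problem noted above.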
Challenges

Dataset summarization and schema inference have been used to extract, from a given dataset, e.g., an XML, JSON, or Property Graph (PG) one, a suitable schema [3, 6, 11, 14], which is a technical specification that experts or systems can use to learn how the data is structured; each technique is specific to one data model only.
Dataset abstraction [5] identifies, in a (semi-)structured dataset, nested entities, and binary relationships (only).
Generalizing it to large numbers of datasets, and to text-rich documents, is also a challenge.
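To make the schema-inference notion concrete, here is a minimal sketch for JSON records (invented data): each field is mapped to its set of observed value types, and marked optional when absent from some records. Techniques such as [3] handle massive collections, nesting, and human feedback, none of which this toy covers:

```python
import json

def infer_schema(records):
    """Infer a crude schema from JSON objects: for each field, the
    set of observed value types, and whether the field is optional."""
    schema = {}
    for rec in records:
        for key, value in rec.items():
            schema.setdefault(key, {"types": set(), "count": 0})
            schema[key]["types"].add(type(value).__name__)
            schema[key]["count"] += 1
    n = len(records)
    return {k: {"types": sorted(v["types"]),
                "optional": v["count"] < n}
            for k, v in schema.items()}

docs = [json.loads(s) for s in (
    '{"name": "Acme", "founded": 1999}',
    '{"name": "TradeCo", "founded": "2005", "city": "Aden"}',
)]
schema = infer_schema(docs)
```

Even on two records, the inferred schema exposes heterogeneity (the `founded` field is sometimes a number, sometimes a string), which is the kind of structural summary an NTU-facing tool would need to compute per data model.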
More recent data lakes hold very large sets (e.g., tens or hundreds of thousands) of datasets, each of which may be large [7].
In a data lake, one may search for a dataset using keywords or a question, or for a dataset which can be joined with another [13].
However, modeling, understanding, and exploring large, highly heterogeneous collections of datasets (other than tables) are still limited in data lakes.
References

[1] A. Anadiotis, O. Balalau, T. Bouganim, F. Chimienti, S. Horel, I. Manolescu, et al. Empowering investigative journalism with graph-based heterogeneous data management. IEEE Data Eng. Bull., 44, 2021.
[2] A. Anadiotis, I. Manolescu, and M. Mohanty. Integrating Connection Search in Graph Queries. In ICDE, 2023.
[3] M. Baazizi, C. Berti, D. Colazzo, G. Ghelli, and C. Sartiani. Human-in-the-loop schema inference for massive JSON datasets. In Extending Database Technology (EDBT), 2020.
[4] O. Balalau, S. Ebel, T. Galizzi, I. Manolescu, A. Deiana, E. Gautreau, A. Krempf, et al. Fact-checking Multidimensional Statistic Claims in French. In Truth and Trust Online, Oct. 2022.
[5] N. Barret, I. Manolescu, and P. Upadhyay. Computing Generic Abstractions from Application Datasets. In EDBT, volume 27, 2024.
[6] S. Cebiric, F. Goasdoué, H. Kondylakis, D. Kotzinos, I. Manolescu, et al. Summarizing Semantic Graphs: A Survey. The VLDB Journal, 28, 2019.
[7] M. P. Christensen, A. Leventidis, M. Lissandrini, L. D. Rocco, R. J. Miller, and K. Hose. Fantastic tables and where to find them: Table search in semantic data lakes. In EDBT, pages 397–410, 2025.
[8] K. Jaźwińska and A. Chandrasekar. AI search has a citation problem. March 2025.
[9] Z. Ji, N. Lee, R. Frieske, et al. Survey of hallucination in natural language generation. ACM Computing Surveys, 55, Mar. 2023.
[10] G. Koutrika. Natural language data interfaces: A data access odyssey (invited talk). In ICDT, 2024.
[11] H. Lbath, A. Bonifati, and R. Harmer. Schema inference for property graphs. In EDBT, 2021.
[12] P. S. H. Lewis, E. Perez, A. Piktus, et al. Retrieval-augmented generation for knowledge-intensive NLP tasks. In NeurIPS, 2020.
[13] N. Paton, J. Chen, and Z. Wu. Dataset discovery and exploration: A survey. ACM Comput. Surv., 56, 2024.
[14] K. Rabbani, M. Lissandrini, and K. Hose. Extraction of validating shapes from very large knowledge graphs. PVLDB, 16, 2023.
[15] M. Zhang, O. Press, W. Merrill, et al. How language model hallucinations can snowball. In ICML, 2024.
Main activities
The PhD will focus on natural language question answering over large corpora of highly heterogeneous data (relational databases, CSV/TSV files, JSON, RDF, XML, or Property Graphs, text or Office documents, etc.).
Skills
A successful candidate should have demonstrated academic excellence in Computer Science, with a particular interest in Data Management, Algorithms, and/or Natural Language Processing.
Good software development skills in large projects (C++ or Java) are also required.
Excellent communication skills and prior experience in research are a plus.
Benefits
Remuneration
Monthly gross salary: 2,200 euros