
Talk "Federated Fine-tuning of LLMs with Private Data"

Speaker: 
Dr. Marco Fisichella (L3S Research Center, Leibniz University Hannover)
Event date: 
Tuesday, 13 May, 2025 - 15:00 to 16:30
Venue: 
Aula Magna del DIAG
Contact: 
Roberto Navigli (navigli@diag.uniroma1.it)

Abstract:
Fine-tuning large language models (LLMs) on sensitive or private data poses significant privacy and security challenges. In this talk, I will explore how Federated Learning (FL) can be used to fine-tune LLMs without jeopardizing data privacy. I will discuss how the FL paradigm enables training advanced models on distributed data while keeping the data local, thus minimizing the risks associated with sharing sensitive information. A major focus will be on parameter-efficient fine-tuning techniques that enable models to be adapted effectively with minimal communication overhead. I will also discuss the challenges of integrating such techniques into FL systems and their impact on model performance, privacy, and computational efficiency. This approach is particularly relevant in sensitive domains such as healthcare, where data privacy is of paramount importance.
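The core FL idea in the abstract can be sketched in a few lines: each client updates a small set of trainable parameters on its private data, and only those parameters (never the raw data) are sent to the server, which averages them FedAvg-style. This is a minimal, hypothetical illustration in plain Python; the function names (`local_update`, `fedavg`) and the toy parameter vectors are illustrative assumptions, not the speaker's actual method.

```python
def local_update(params, grads, lr=0.1):
    # One local gradient step computed on the client's private data;
    # in a PEFT setting, `params` would be a small adapter, not the full LLM.
    return [p - lr * g for p, g in zip(params, grads)]

def fedavg(client_params, client_sizes):
    # Server-side weighted average of the clients' parameter updates,
    # weighted by each client's local dataset size (FedAvg).
    total = sum(client_sizes)
    dim = len(client_params[0])
    return [
        sum(w[i] * n for w, n in zip(client_params, client_sizes)) / total
        for i in range(dim)
    ]

# One toy round: two clients share a 3-parameter global adapter.
global_params = [0.0, 0.0, 0.0]
c1 = local_update(global_params, grads=[1.0, -2.0, 0.5])   # computed privately on client 1
c2 = local_update(global_params, grads=[3.0, 0.0, -0.5])   # computed privately on client 2
global_params = fedavg([c1, c2], client_sizes=[100, 300])
```

Because only the (small) adapter vectors cross the network, communication cost scales with the adapter size rather than the full model, which is what makes parameter-efficient methods attractive inside FL systems.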

Bio:
Dr. Marco Fisichella is a distinguished researcher in the field of AI, specializing in clustering, federated learning (FL), fairness, and security. His work focuses on building trustworthy AI systems, with contributions published in leading conferences and journals. As a member of the European Laboratory for Learning and Intelligent Systems (ELLIS), he collaborates with leading researchers to advance AI research in Europe.
Currently, Marco is involved in several impactful projects, including CAIMed, which focuses on AI and causal methods in medicine, and the FEDCOV project, which addresses privacy-preserving FL for COVID-19 data analysis. He also serves as Chief Scientist at the Trustworthy AI Lab at L3S Research Center, where he leads efforts in privacy, fairness, and interpretability in AI systems.
Previously, Marco worked as Director of Research and Development at the Otto Group, applying AI methods to real-world problems such as online fraud detection and cybersecurity. His transition back to academia as a group leader reflects his commitment to advancing the theoretical foundations of trustworthy AI while addressing practical applications.

© Università degli Studi di Roma "La Sapienza" - Piazzale Aldo Moro 5, 00185 Roma