Abstract:
"Federated Learning (FL) has gained significant attention due to its ability to preserve privacy by allowing organizations to collaborate on machine learning tasks without sharing their data. However, in cross-silo FL, where organizations in the same domain may be competitors, trust and cooperation issues arise. These competitive entities may hesitate to collaborate, leading to decreased client cooperation and effectiveness in the system. Moreover, the lack of a centralized server and the ""black-box"" nature of FL can further exacerbate trust and interpretability challenges, leading to biased decision-making and reduced transparency.
To address these issues, this work proposes a novel architecture aimed at improving trustworthiness and interpretability in cross-silo FL environments. The solution integrates principles of Trustworthy AI and introduces an explainable mediator mechanism. This mechanism uses the model-agnostic Shapley Values explainer, customized to interpret FL workflows, and offers human-interpretable explanations of decision-making processes. The proposed system is agnostic by design, allowing it to be adopted across a variety of organizations and use cases.
Initial results show that the proposed architecture enhances trust and cooperation in cross-silo FL by fostering system transparency and addressing the interpretability challenges inherent to FL. By improving client cooperation and reducing bias in decision-making, this research makes a valuable contribution to the field: it offers a practical solution to the key challenges of trust and transparency in cross-silo FL environments, supported by positive feedback from professionals who reviewed the project results.
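To illustrate the kind of attribution the explainable mediator could surface, the following is a minimal sketch (not the paper's implementation) of exact Shapley-value attribution over client coalitions. The function evaluate_coalition is a hypothetical placeholder for federated training on a subset of clients followed by held-out evaluation; the per-client utilities are illustrative toy values.

from itertools import combinations
from math import factorial

def evaluate_coalition(coalition):
    # Hypothetical utility of a model trained by this coalition of clients.
    # In a real system this would run federated training on the coalition
    # and return a held-out metric such as validation accuracy.
    toy_utility = {"A": 0.10, "B": 0.25, "C": 0.15}  # illustrative only
    return sum(toy_utility[c] for c in coalition)

def shapley_values(clients):
    # Exact Shapley value of each client: its marginal contribution to
    # every coalition of the other clients, weighted by the classic
    # Shapley factor |S|! * (n - |S| - 1)! / n!.
    n = len(clients)
    values = {c: 0.0 for c in clients}
    for c in clients:
        others = [x for x in clients if x != c]
        for k in range(n):
            for subset in combinations(others, k):
                s = frozenset(subset)
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                values[c] += weight * (evaluate_coalition(s | {c})
                                       - evaluate_coalition(s))
    return values

print(shapley_values(["A", "B", "C"]))
# With the additive toy utility above, each client's Shapley value equals
# (up to float rounding) its own contribution: A=0.10, B=0.25, C=0.15.

Exact computation enumerates all 2^n coalitions, which is feasible in cross-silo settings with few clients; larger federations would call for sampling-based Shapley approximations.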