Our paper on federated learning has been accepted at ICDE 2024

SiloFuse: Cross-silo Synthetic Data Generation with Latent Tabular Diffusion Models

by Aditya Shankar, Hans Brouwer, Rihan Hai, Lydia Chen

Abstract—Synthetic tabular data is crucial for sharing and augmenting data across silos, especially for enterprises with proprietary data. However, existing synthesizers, designed for centrally stored data, struggle with real-world scenarios where data is distributed across multiple silos with different features, necessitating on-premise data storage. We introduce SiloFuse, a novel generative framework for high-quality synthesis from feature-partitioned tabular data. To ensure privacy, SiloFuse utilizes a distributed latent tabular diffusion architecture with autoencoders that learn latent representations for each client. We employ stacked distributed training to improve communication efficiency, reducing communication to a single round. Under SiloFuse, we prove the impossibility of data reconstruction for vertically partitioned synthesis and quantify privacy risks through three attacks using our benchmark framework. Experimental results on nine datasets show that SiloFuse is competitive with centralized diffusion-based synthesizers and achieves 43.8 and 29.8 higher percentage points than GANs in resemblance and utility, respectively. Additionally, SiloFuse proves robust in handling heterogeneous feature partitions across varying numbers of clients.
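
To give a feel for the pipeline described in the abstract, here is a minimal, illustrative sketch of a SiloFuse-style workflow in PyTorch. It is not the paper's implementation: the two-client split, the toy MLP autoencoders, the tiny DDPM-style denoiser, and helper names such as train_autoencoder and denoiser are all assumptions made for the example. It only shows the high-level flow: each client trains an autoencoder locally, latents are shared once (stacked training), a latent diffusion model is trained centrally, and synthetic latents are decoded back into features by each client.

```python
# Illustrative sketch of a SiloFuse-style pipeline (not the authors' code).
# Assumptions: two clients hold disjoint feature columns of the same rows.
import torch, torch.nn as nn

torch.manual_seed(0)
n_rows, d1, d2, latent = 512, 6, 4, 3                 # feature split across two silos
x1, x2 = torch.randn(n_rows, d1), torch.randn(n_rows, d2)

def train_autoencoder(x, d_lat, epochs=200):
    # Local autoencoder: only this client ever sees its raw features.
    enc = nn.Sequential(nn.Linear(x.shape[1], 16), nn.ReLU(), nn.Linear(16, d_lat))
    dec = nn.Sequential(nn.Linear(d_lat, 16), nn.ReLU(), nn.Linear(16, x.shape[1]))
    opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-2)
    for _ in range(epochs):
        opt.zero_grad()
        nn.functional.mse_loss(dec(enc(x)), x).backward()
        opt.step()
    return enc, dec

# Step 1: each client trains its autoencoder locally on its own feature partition.
enc1, dec1 = train_autoencoder(x1, latent)
enc2, dec2 = train_autoencoder(x2, latent)

# Step 2: latents are communicated once and concatenated row-wise ("stacked" training).
with torch.no_grad():
    z = torch.cat([enc1(x1), enc2(x2)], dim=1)         # shape: (n_rows, 2 * latent)

# Step 3: a tiny DDPM-style denoiser is trained over the joint latent space.
T = 100
betas = torch.linspace(1e-4, 0.02, T)
alphas_bar = torch.cumprod(1 - betas, dim=0)
denoiser = nn.Sequential(nn.Linear(z.shape[1] + 1, 64), nn.ReLU(),
                         nn.Linear(64, z.shape[1]))
opt = torch.optim.Adam(denoiser.parameters(), lr=1e-3)
for _ in range(2000):
    t = torch.randint(0, T, (n_rows, 1))
    noise = torch.randn_like(z)
    z_t = alphas_bar[t].sqrt() * z + (1 - alphas_bar[t]).sqrt() * noise
    pred = denoiser(torch.cat([z_t, t / T], dim=1))    # condition crudely on t/T
    opt.zero_grad()
    nn.functional.mse_loss(pred, noise).backward()
    opt.step()

# Step 4: sample synthetic latents, split them, and let each client decode locally.
with torch.no_grad():
    z_s = torch.randn(n_rows, z.shape[1])
    for step in reversed(range(T)):
        t = torch.full((n_rows, 1), step)
        eps = denoiser(torch.cat([z_s, t / T], dim=1))
        a, ab = 1 - betas[step], alphas_bar[step]
        z_s = (z_s - betas[step] / (1 - ab).sqrt() * eps) / a.sqrt()
        if step > 0:
            z_s = z_s + betas[step].sqrt() * torch.randn_like(z_s)
    synth1, synth2 = dec1(z_s[:, :latent]), dec2(z_s[:, latent:])
print(synth1.shape, synth2.shape)                      # synthetic partitions per client
```

In this sketch, raw features never leave a client; only latent codes are exchanged, and only once, which is the intuition behind the single-round communication claim above.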

Rihan Hai
Assistant professor

My research focuses on data integration and related dataset discovery in large-scale data lakes.