Deep models are susceptible to various adverse factors during training, which can degrade their performance. While numerous methods have been proposed to address this issue, most rely on additional data for retraining or are restricted to specific architectures and scenarios. Inspired by image denoising, this paper introduces a novel task, parameter purification, which directly optimizes degraded models. We posit that performance degradation fundamentally stems from parameter contamination; parameter purification therefore aims to recover clean parameters from contaminated ones, much as image denoising recovers clean images from noisy ones. Unlike existing parameter learning approaches, we propose a novel parameter manifold purification method that achieves full-parameter, high-precision purification. Specifically, we regard a deep model's parameters as a manifold in high-dimensional space and introduce a partitioning strategy based on the model's minimal functional units, termed parameter clusters. In this framework, the entire parameter set forms a global manifold, while each parameter cluster corresponds to a local manifold embedded within it. To learn compact low-dimensional representations of each parameter cluster manifold, we propose an Implicit Manifold Auto-Encoder together with a parameter cluster discrepancy loss that incorporates both intra-cluster and inter-cluster weighting. A conditional diffusion model then denoises the learned low-dimensional manifolds, thereby achieving parameter purification. Extensive experiments on three representative parameter contamination scenarios demonstrate the effectiveness of the proposed approach: it significantly outperforms conventional techniques, yielding substantial improvements in model correctness, fairness, and security.
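The core intuition, that contaminated parameters drift off a low-dimensional manifold of clean parameters and can be recovered by mapping them back onto it, can be illustrated with a minimal toy sketch. This is not the paper's actual method: a PCA projection stands in for the Implicit Manifold Auto-Encoder and conditional diffusion model, and all dimensions and names (cluster size `d`, latent dimension `k`, the bank of clean clusters) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: 32 clean "parameter clusters" that lie on a low-dimensional
# manifold (here, a 4-D linear subspace of R^64). In the paper's setting,
# clusters are the model's minimal functional units; here they are vectors.
d, k, n = 64, 4, 32
basis = np.linalg.qr(rng.standard_normal((d, k)))[0]   # orthonormal manifold basis
clean_bank = rng.standard_normal((n, k)) @ basis.T     # bank of clean clusters

# Contaminate a held-out clean cluster with additive parameter noise.
target = rng.standard_normal(k) @ basis.T
noisy = target + 0.5 * rng.standard_normal(d)

# "Purify" by projecting onto the manifold learned from the clean bank
# (SVD/PCA stands in for the learned auto-encoder + diffusion denoiser).
U, _, _ = np.linalg.svd(clean_bank.T, full_matrices=False)
P = U[:, :k] @ U[:, :k].T                              # projector onto the manifold
purified = noisy @ P                                   # P is symmetric, so this is P @ noisy

err_before = float(np.linalg.norm(noisy - target))
err_after = float(np.linalg.norm(purified - target))
assert err_after < err_before  # off-manifold noise components are removed
print(f"error before purification: {err_before:.2f}, after: {err_after:.2f}")
```

Because the target lies on the manifold, projection keeps it intact while discarding the noise components orthogonal to the subspace, so the reconstruction error drops sharply; the paper's contribution is learning such a manifold nonlinearly, per cluster, at full-parameter scale.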
@inproceedings{hu2025parameter,
  title={Parameter Manifold Purification},
  author={Hu, Jiacong and Chen, Kejia and Feng, Zunlei and Song, Mingli},
}