Single-cell Spatial Transcriptomics Imputation via Style Transfer [paper]
We introduce SpaIM, a novel style transfer learning model that leverages scRNA-seq data to accurately impute unmeasured gene expression in spatial transcriptomics (ST) data. SpaIM separates scRNA-seq and ST data into data-agnostic content and data-specific styles, capturing their commonalities and unique differences, respectively. By integrating the strengths of scRNA-seq and ST, SpaIM addresses data sparsity and limited gene coverage, outperforming existing methods across 53 diverse ST datasets. It also enhances downstream analyses such as ligand-receptor interaction detection, spatial domain characterization, and differentially expressed gene identification.
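To give a rough intuition for the content-style separation described above, the following is a minimal, hypothetical PyTorch sketch. It is not the SpaIM implementation: all module names, layer sizes, and dimensions are assumptions made purely for illustration.

# Illustrative sketch only -- NOT the SpaIM architecture.
# Idea: a shared ("data-agnostic") content encoder plus per-modality
# ("data-specific") style encoders, with a decoder that reconstructs expression.
import torch
import torch.nn as nn

class ContentStyleAutoencoder(nn.Module):
    def __init__(self, n_genes: int, content_dim: int = 128, style_dim: int = 32):
        super().__init__()
        # Content encoder: captures what scRNA-seq and ST data share.
        self.content_encoder = nn.Sequential(
            nn.Linear(n_genes, 512), nn.ReLU(), nn.Linear(512, content_dim)
        )
        # Style encoders: one per modality, capturing modality-specific effects.
        self.style_encoder_sc = nn.Sequential(nn.Linear(n_genes, style_dim), nn.ReLU())
        self.style_encoder_st = nn.Sequential(nn.Linear(n_genes, style_dim), nn.ReLU())
        # Decoder: reconstructs expression from (content, style).
        self.decoder = nn.Sequential(
            nn.Linear(content_dim + style_dim, 512), nn.ReLU(), nn.Linear(512, n_genes)
        )

    def forward(self, x: torch.Tensor, modality: str) -> torch.Tensor:
        content = self.content_encoder(x)
        style = self.style_encoder_sc(x) if modality == "sc" else self.style_encoder_st(x)
        return self.decoder(torch.cat([content, style], dim=-1))

# Toy usage: reconstruct a batch of 8 ST spots over 2000 shared genes.
model = ContentStyleAutoencoder(n_genes=2000)
st_batch = torch.rand(8, 2000)
reconstruction = model(st_batch, modality="st")
print(reconstruction.shape)  # torch.Size([8, 2000])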
To get started with SpaIM, please follow the steps below to set up your environment:
git clone https://github.com/QSong-github/SpaIM
cd SpaIM
conda env create -f environment.yaml
conda activate SpaIM
All datasets used in this study are publicly available.
- Data sources and detailed information are provided in Supplementary_Table_1. After downloading the data, please refer to the processing steps outlined in Data Processing README.txt and execute the code in Data Processing.py to perform the analysis and obtain clustering results.
- All processed datasets can be downloaded from Zenodo and Synapse.
The datasets should be organized in the following structure:
|-- dataset
|-- Dataset1
|-- Dataset2
|-- ......
|-- Dataset52
|-- Dataset53
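If you want to verify that the expected folders are in place before training, a quick check might look like the sketch below (it assumes the ./dataset root shown above; this helper script is not part of the repository).

# Optional sanity check for the dataset layout (assumes ./dataset as the root).
from pathlib import Path

root = Path("dataset")
missing = [f"Dataset{i}" for i in range(1, 54) if not (root / f"Dataset{i}").is_dir()]
if missing:
    print(f"{len(missing)} dataset folders are missing, e.g. {missing[:5]}")
else:
    print("All 53 dataset folders found.")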
Train all 53 datasets with a single command:
chmod +x ./*
./run_SpaIM.sh
The trained models and metric results will be saved in per-dataset directories, for example:
./SpaIM_results/Dataset1/
Run the following command to perform inference:
cd test
python SpaIM_imputation.py
The inference results will be saved in './SpaIM_results/Dataset1/impute_sc_result_%d.pkl'.
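To take a quick look at an output file, a minimal sketch such as the following can be used. The index substituted for %d and the contents of the pickle (an array-like or DataFrame of imputed expression) are assumptions; adjust them to the actual results.

# Minimal inspection sketch (index 0 and the pickle contents are assumptions).
import pickle

result_path = "./SpaIM_results/Dataset1/impute_sc_result_0.pkl"
with open(result_path, "rb") as f:
    imputed = pickle.load(f)

print(type(imputed))
print(getattr(imputed, "shape", None))  # e.g. (n_cells, n_imputed_genes) if array-like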
If you find this project useful for your research, please cite:
Li, B., Tang, Z., Budhkar, A. et al. SpaIM: single-cell spatial transcriptomics imputation via style transfer. Nat Commun 16, 7861 (2025). https://doi.org/10.1038/s41467-025-63185-9
Our code is based on neural-style. Special thanks to the authors and contributors for their invaluable work.