Audio demos: ASA results on the ASA2 dataset. Six example scenes with 4, 5, 5, 4, 3, and 2 sources; each demo provides the audio mixture together with the per-source ground-truth (GT) and estimated (EST) audio.
We propose DeepASA, a one-for-all model for auditory scene analysis that performs multichannel-to-multichannel (M2M) source separation, dereverberation, sound event detection (SED), audio classification, and direction-of-arrival (DoA) estimation within a unified framework. DeepASA is designed for complex auditory scenes where multiple, often similar, sound sources overlap in time and move dynamically in space. To achieve robust and consistent inference across tasks, we introduce an object-oriented processing (OOP) strategy. This approach encapsulates diverse auditory features into object-centric representations and refines them through a chain-of-inference (CoI) mechanism. The pipeline comprises a dynamic temporal kernel-based feature extractor, a transformer-based aggregator, and an object separator that yields per-object features. These features feed into multiple task-specific decoders. Our object-centric formulation naturally resolves the parameter association ambiguity inherent in traditional track-wise processing. However, early-stage object separation can lead to failure in downstream ASA tasks. To address this, we implement temporal coherence matching (TCM) within the chain-of-inference, enabling multi-task fusion and iterative refinement of object features using estimated audio parameters. We evaluate DeepASA on representative spatial audio benchmark datasets, including ASA, MC-FUSS, and STARSS23. Experimental results show that our model achieves state-of-the-art performance across all evaluated tasks, demonstrating its effectiveness in both source separation and auditory parameter estimation under diverse spatial auditory scenes.
Overview of the DeepASA Framework for Auditory Scene Analysis (ASA)
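As a rough illustration of the pipeline summarized in the abstract, below is a minimal PyTorch sketch of the object-oriented processing flow: a feature extractor, a transformer-based aggregator, an object separator that yields per-object features, task-specific decoders, and an iterative refinement loop standing in for the chain-of-inference with temporal coherence matching. All module choices, layer sizes, and names (e.g. `DeepASASketch`, `n_objects`, the GRU-based `refine` cell) are illustrative assumptions under this sketch, not the authors' implementation.

```python
# Hypothetical sketch of the DeepASA pipeline described in the abstract.
# Every module, dimension, and the refinement loop below is an illustrative
# assumption; the actual architecture is described in the paper.
import torch
import torch.nn as nn


class DeepASASketch(nn.Module):
    """Object-oriented ASA pipeline: extractor -> aggregator -> object
    separator -> task-specific decoders, with an iterative refinement loop
    standing in for the chain-of-inference / temporal coherence matching."""

    def __init__(self, n_mics=4, n_feat=128, n_objects=5, n_classes=13,
                 n_layers=4, n_refine=2):
        super().__init__()
        self.n_objects = n_objects
        self.n_refine = n_refine
        # Feature extractor over the multichannel spectrogram
        # (a plain Conv2d stands in for the dynamic temporal kernels).
        self.extractor = nn.Conv2d(n_mics, n_feat, kernel_size=3, padding=1)
        # Transformer-based aggregator over time frames.
        enc_layer = nn.TransformerEncoderLayer(d_model=n_feat, nhead=8,
                                               batch_first=True)
        self.aggregator = nn.TransformerEncoder(enc_layer, num_layers=n_layers)
        # Object separator: learned queries attend to the aggregated features
        # and yield one embedding per auditory object.
        self.object_queries = nn.Parameter(torch.randn(n_objects, n_feat))
        self.separator = nn.MultiheadAttention(n_feat, num_heads=8,
                                               batch_first=True)
        # Task-specific decoders operating on per-object features.
        self.sed_head = nn.Linear(n_feat, n_classes)   # sound event activity
        self.doa_head = nn.Linear(n_feat, 3)           # xyz direction vector
        self.mask_head = nn.Linear(n_feat, n_feat)     # separation mask basis
        # Refinement cell standing in for temporal coherence matching (TCM):
        # object features are updated from the concatenated task estimates.
        self.refine = nn.GRUCell(n_classes + 3, n_feat)

    def forward(self, spec):
        # spec: (batch, n_mics, n_freq, n_frames) multichannel feature map
        b = spec.shape[0]
        feat = self.extractor(spec)                    # (b, n_feat, F, T)
        feat = feat.mean(dim=2).transpose(1, 2)        # (b, T, n_feat)
        feat = self.aggregator(feat)                   # (b, T, n_feat)

        queries = self.object_queries.unsqueeze(0).expand(b, -1, -1)
        obj, _ = self.separator(queries, feat, feat)   # (b, n_objects, n_feat)

        # Chain-of-inference: estimate parameters, then refine object features.
        for _ in range(self.n_refine):
            sed = self.sed_head(obj)                   # (b, n_objects, n_classes)
            doa = self.doa_head(obj)                   # (b, n_objects, 3)
            params = torch.cat([sed, doa], dim=-1)
            obj = self.refine(params.reshape(-1, params.shape[-1]),
                              obj.reshape(-1, obj.shape[-1])).view_as(obj)

        return {"sed": self.sed_head(obj), "doa": self.doa_head(obj),
                "mask_basis": self.mask_head(obj)}


# Example: a 4-mic scene with 64 frequency bins and 200 time frames.
model = DeepASASketch()
out = model(torch.randn(2, 4, 64, 200))
print({k: tuple(v.shape) for k, v in out.items()})
```

The single set of per-object embeddings feeding all decoders is what resolves the parameter association problem mentioned in the abstract: every task estimate for object k is derived from the same embedding, so no post-hoc matching across tasks is needed.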