Mamba as a Bridge: Where Vision Foundation Models Meet Vision Language Models for Domain-Generalized Semantic Segmentation

CVPR 2025 Highlight
1National University of Singapore, 2ASUS Intelligent Cloud Services (AICS)
An efficient fusion framework to collaborate arbitrary pairs of vision foundation models and vision language models for domain-generalized semantic segmentation, integrating the strengths of both without introducing significant computational overhead.

Abstract

Vision Foundation Models (VFMs) and Vision-Language Models (VLMs) have gained traction in Domain Generalized Semantic Segmentation (DGSS) due to their strong generalization capabilities. However, existing DGSS methods often rely exclusively on either VFMs or VLMs, overlooking their complementary strengths. VFMs (e.g., DINOv2) excel at capturing fine-grained features, while VLMs (e.g., CLIP) provide robust text alignment but struggle with coarse granularity. Despite their complementary strengths, effectively integrating VFMs and VLMs with attention mechanisms is challenging, as the increased patch tokens complicate long-sequence modeling.

To address this, we propose MFuser, a novel Mamba-based fusion framework that efficiently combines the strengths of VFMs and VLMs while maintaining linear scalability in sequence length. MFuser consists of two key components: MVFuser, which acts as a co-adapter to jointly fine-tune the two models by capturing both sequential and spatial dynamics; and MTEnhancer, a hybrid attention-Mamba module that refines text embeddings by incorporating image priors. Our approach achieves precise feature locality and strong text alignment without incurring significant computational overhead. Extensive experiments demonstrate that MFuser significantly outperforms state-of-the-art DGSS methods, achieving 68.20 mIoU on synthetic-to-real and 71.87 mIoU on real-to-real benchmarks.
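To make the MVFuser idea concrete, below is a minimal PyTorch-style sketch of a co-adapter that concatenates per-layer VFM and VLM patch tokens, refines them with parallel sequential and spatial branches, and adds the refinement back as a residual. The class and layer names are hypothetical, and the gated causal 1D convolution only stands in for an actual Mamba selective-scan block; this is an illustration of the described behaviour, not the released implementation.

# Illustrative sketch only -- not the authors' released code. Assumed behaviour:
# per-layer VFM and VLM patch tokens are concatenated, refined by parallel
# sequential and spatial branches, and the refinement is added back as a residual.
import torch
import torch.nn as nn


class MVFuserSketch(nn.Module):
    def __init__(self, d_vfm, d_vlm, d_hidden=256):
        super().__init__()
        d_in = d_vfm + d_vlm
        self.down = nn.Linear(d_in, d_hidden)            # bottleneck projection
        # Sequential branch: gated causal conv as a stand-in for a Mamba/SSM block.
        self.seq_conv = nn.Conv1d(d_hidden, 2 * d_hidden, kernel_size=4, padding=3)
        # Spatial branch: depthwise 3x3 conv over the 2D patch-token grid.
        self.spa_conv = nn.Conv2d(d_hidden, d_hidden, kernel_size=3, padding=1, groups=d_hidden)
        self.up = nn.Linear(d_hidden, d_in)              # back to the concatenated dim
        nn.init.zeros_(self.up.weight)                   # adapter starts as identity
        nn.init.zeros_(self.up.bias)

    def forward(self, f_vfm, f_vlm, hw):
        # f_vfm: (B, N, d_vfm), f_vlm: (B, N, d_vlm), N = H * W patch tokens
        B, N, _ = f_vfm.shape
        H, W = hw
        x = torch.cat([f_vfm, f_vlm], dim=-1)            # (B, N, d_vfm + d_vlm)
        h = self.down(x)                                 # (B, N, d_hidden)

        # Sequential dynamics over the token sequence (causal conv + gating).
        s = self.seq_conv(h.transpose(1, 2))[..., :N]    # (B, 2*d_hidden, N)
        s_val, s_gate = s.chunk(2, dim=1)
        s = (s_val * torch.sigmoid(s_gate)).transpose(1, 2)

        # Spatial dynamics over the 2D patch grid.
        p = h.transpose(1, 2).reshape(B, -1, H, W)
        p = self.spa_conv(p).flatten(2).transpose(1, 2)  # (B, N, d_hidden)

        delta = self.up(s + p)                           # fused refinement
        x = x + delta                                    # residual update
        return x.split([f_vfm.size(-1), f_vlm.size(-1)], dim=-1)

Zero-initializing the up-projection makes the adapter start as an identity mapping, so training begins from the frozen encoders' original features.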

Pipeline

Overview of our proposed MFuser framework.

MFuser feeds the input image through both the VFM and VLM visual encoders. Features from each encoder layer are concatenated and refined by MVFuser, which captures sequential and spatial dependencies in parallel. The refined features are added back to the original features and passed to the next layer. MTEnhancer strengthens the text embedding of each class by integrating visual features through a hybrid attention-Mamba mechanism. The enhanced text embeddings then serve as object queries for the Mask2Former decoder, alongside multi-scale visual features. During training, only the MVFusers, MTEnhancers, and the segmentation decoder are trainable, while the VFM and VLM remain frozen, preserving their generalization ability and enabling efficient training. Note that skip connections between the blocks of MTEnhancer are omitted for clarity.
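As a rough illustration of the MTEnhancer step described above, the hypothetical sketch below enhances per-class text embeddings with image priors via self-attention, cross-attention to visual tokens, and a sequential-mixing block (again, a gated convolution standing in for the actual Mamba layer). Module names and the pre-norm/residual layout are assumptions, not the paper's exact design.

# Hypothetical sketch of the MTEnhancer idea: class text embeddings attend to
# image features and are then mixed sequentially; the gated causal conv below
# is only a stand-in for the Mamba block used in the paper.
import torch
import torch.nn as nn


class MTEnhancerSketch(nn.Module):
    def __init__(self, d_model=256, n_heads=8):
        super().__init__()
        self.norm1 = nn.LayerNorm(d_model)
        self.self_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm2 = nn.LayerNorm(d_model)
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm3 = nn.LayerNorm(d_model)
        # Stand-in for the Mamba/SSM block: gated causal conv over the class axis.
        self.seq_conv = nn.Conv1d(d_model, 2 * d_model, kernel_size=4, padding=3)

    def forward(self, text_emb, img_feats):
        # text_emb: (B, K, d_model) -- one embedding per class
        # img_feats: (B, N, d_model) -- visual patch tokens (image priors)
        t = self.norm1(text_emb)
        text_emb = text_emb + self.self_attn(t, t, t, need_weights=False)[0]

        t = self.norm2(text_emb)
        text_emb = text_emb + self.cross_attn(t, img_feats, img_feats, need_weights=False)[0]

        t = self.norm3(text_emb).transpose(1, 2)          # (B, d_model, K)
        s = self.seq_conv(t)[..., : text_emb.size(1)]     # causal conv over classes
        val, gate = s.chunk(2, dim=1)
        return text_emb + (val * torch.sigmoid(gate)).transpose(1, 2)

In this sketch, the enhanced embeddings would then be passed to the segmentation decoder as object queries, alongside the multi-scale visual features, as described above.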

Qualitative and Quantitative Results

Qualitative Results

Below are qualitative comparisons between our method and existing methods.

Quantitative Results

Below are key experimental results under the synthetic-to-real setting (G → {C, B, M}) and the real-to-real setting (C → {B, M}).

BibTeX

@inproceedings{zhang2025mamba,
  title     = {Mamba as a Bridge: Where Vision Foundation Models Meet Vision Language Models for Domain-Generalized Semantic Segmentation},
  author    = {Zhang, Xin and Tan, Robby T.},
  booktitle = {CVPR},
  year      = {2025},
}