The incorporation of high-resolution visual input equips multimodal large language models (MLLMs) with enhanced visual perception capabilities for real-world tasks. However, most existing high-resolution MLLMs rely on a cropping-based approach to process images, which leads to fragmented visual encoding and a sharp increase in redundant tokens. To tackle these issues, we propose the FALCON model. FALCON introduces a novel visual register technique to simultaneously: 1) Eliminate redundant tokens at the stage of visual encoding. To directly address the visual redundancy present in the output of the vision encoder, we propose a Register-based Representation Compacting (ReCompact) mechanism. This mechanism introduces a set of learnable visual registers designed to adaptively aggregate essential information while discarding redundancy. It enables the encoder to produce a more compact visual representation with a minimal number of output tokens, eliminating the need for an additional compression module. 2) Ensure continuity in visual encoding. To address the potential encoding errors caused by fragmented visual inputs, we develop a Register Interactive Attention (ReAtten) module. This module enables effective and efficient information exchange across sub-images through interactions between their visual registers, ensuring the continuity of visual semantics throughout the encoding process. We conduct comprehensive experiments with FALCON on high-resolution benchmarks across a wide range of scenarios. FALCON demonstrates superior performance with a remarkable 9-fold reduction in visual tokens. FALCON is open-sourced and publicly available at https://github.com/JiuTian-VL/FALCON.
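
The snippet below is a minimal, illustrative PyTorch sketch of the ReCompact idea: a small set of learnable register tokens act as queries over the patch tokens of a sub-image, so that only the registers are kept as the compact visual representation. The class name, tensor shapes, and hyperparameters (`dim`, `num_heads`, `num_registers`) are assumptions for illustration and do not reflect the released implementation.

```python
# Illustrative sketch only -- not the official FALCON code.
import torch
import torch.nn as nn

class RegisterCompactingLayer(nn.Module):
    """Learnable visual registers aggregate patch-token information into a
    compact representation, so the encoder emits only a few output tokens."""
    def __init__(self, dim: int = 768, num_heads: int = 12, num_registers: int = 64):
        super().__init__()
        # Learnable register tokens (shared across images, expanded per batch).
        self.registers = nn.Parameter(torch.zeros(1, num_registers, dim))
        nn.init.trunc_normal_(self.registers, std=0.02)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, patch_tokens: torch.Tensor) -> torch.Tensor:
        # patch_tokens: (B, N, dim) from the vision backbone for one sub-image.
        B = patch_tokens.size(0)
        reg = self.registers.expand(B, -1, -1)
        # Registers act as queries; registers + patches form the context, so
        # essential patch content is absorbed into a few register tokens.
        ctx = torch.cat([reg, patch_tokens], dim=1)
        out, _ = self.attn(self.norm(reg), self.norm(ctx), self.norm(ctx))
        return reg + out  # (B, num_registers, dim): compact visual tokens

# Usage example (shapes assumed): 1024 patch tokens compacted to 64 registers.
layer = RegisterCompactingLayer()
compact = layer(torch.randn(2, 1024, 768))  # -> torch.Size([2, 64, 768])
```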
To address the visual redundancy and fragmentation in high-resolution MLLMs, we propose FALCON, which employs an innovative visual register technique to tackle both challenges simultaneously. A ReCompact mechanism adaptively aggregates essential visual information into visual registers, producing a compact, non-redundant representation, while a novel ReAtten module facilitates information exchange among sub-images via these registers, preserving visual continuity during encoding. Extensive experiments demonstrate FALCON's superiority in high-resolution understanding and validate the effectiveness of the proposed ReCompact and ReAtten.
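
For the cross-sub-image exchange, the following is a similarly hedged sketch of the ReAtten idea: the registers produced for each cropped sub-image attend to the registers of all other sub-images, so the fragmented crops share a consistent visual context. The module name and the assumed `(B, S, R, dim)` tensor layout (S sub-images, R registers each) are illustrative, not taken from the paper's code.

```python
# Illustrative sketch only -- not the official FALCON code.
import torch
import torch.nn as nn

class RegisterInteractiveAttention(nn.Module):
    """Self-attention applied only over the register tokens of all sub-images,
    letting crops exchange information and keep the encoding continuous."""
    def __init__(self, dim: int = 768, num_heads: int = 12):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, sub_image_registers: torch.Tensor) -> torch.Tensor:
        # sub_image_registers: (B, S, R, dim) -- S sub-images, R registers each.
        B, S, R, D = sub_image_registers.shape
        # Flatten all registers of one image into a single sequence so that
        # registers from different sub-images can attend to each other.
        regs = sub_image_registers.reshape(B, S * R, D)
        out, _ = self.attn(self.norm(regs), self.norm(regs), self.norm(regs))
        regs = regs + out
        return regs.reshape(B, S, R, D)

# Usage example (shapes assumed): 6 sub-images with 64 registers each.
reatten = RegisterInteractiveAttention()
mixed = reatten(torch.randn(2, 6, 64, 768))  # -> torch.Size([2, 6, 64, 768])
```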
@InProceedings{zhang2025falcon,
author={Zhang, Renshan and Shao, Rui and Chen, Gongwei and Zhang, Miao and Zhou, Kaiwen and Guan, Weili and Nie, Liqiang},
title={FALCON: Resolving Visual Redundancy and Fragmentation in High-resolution Multimodal Large Language Models via Visual Registers},
booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
month={October},
year={2025},
}