Latent Visual Reasoning: Multimodal reasoning in the joint semantic space



If visual and textual tokens are embedded in a joint semantic space, why not reason over both as well?

LVR enables autoregressive reasoning directly in the visual and textual semantic space. In the LVR phase, the LLM leverages the last hidden states to approximate the question-relevant semantics within the visual inputs. During the text generation phase, the model predicts the next text token in sequence. Jointly reasoning over both modalities leads to better multimodal reasoning performance.
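To make the interleaving concrete, below is a minimal sketch of LVR-style decoding under assumed interfaces: `model(inputs_embeds=...)` is assumed to expose a `.last_hidden_state` and an `.lm_head` projection, `embed_token` is a hypothetical helper that maps a token id to its text embedding, and the fixed-step stopping criterion stands in for whatever criterion the model actually uses. This is an illustration of the idea, not the authors' implementation.

```python
import torch

@torch.no_grad()
def generate_with_lvr(model, input_embeds, embed_token, lvr_start_id, lvr_end_id,
                      eos_id, max_new_tokens=256, max_lvr_steps=64):
    """Interleave latent visual reasoning with text decoding (assumed interface)."""
    embeds = input_embeds          # (1, T, H): image + prompt embeddings
    out, in_lvr, steps = [], False, 0
    for _ in range(max_new_tokens):
        h = model(inputs_embeds=embeds).last_hidden_state[:, -1:, :]  # (1, 1, H)
        if in_lvr:
            # LVR phase: feed the last hidden state back as the next input,
            # approximating query-relevant visual semantics in latent space.
            embeds = torch.cat([embeds, h], dim=1)
            steps += 1
            if steps >= max_lvr_steps:                 # stopping criterion (assumed)
                in_lvr = False
                embeds = torch.cat([embeds, embed_token(lvr_end_id)], dim=1)
            continue
        # Text phase: project to the vocabulary and decode greedily.
        next_id = model.lm_head(h).argmax(dim=-1).item()
        if next_id == eos_id:
            break
        out.append(next_id)
        in_lvr, steps = (next_id == lvr_start_id), 0
        embeds = torch.cat([embeds, embed_token(next_id)], dim=1)
    return out
```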

Main figure for the project

Brief illustrative video introduction to LVR.

Abstract

Can Multimodal LLMs truly perform multimodal reasoning? We introduce Latent Visual Reasoning (LVR), a new paradigm that enables autoregressive reasoning directly in the visual embedding space. A visual encoder first projects images into visual tokens within a joint semantic space shared with the language model. The language model is then trained to generate latent states that reconstruct key visual tokens critical for answering the query, constituting the process of latent visual reasoning. By interleaving LVR with standard text generation, our model achieves substantial gains on perception-intensive visual question answering tasks. In addition, we adapt the GRPO algorithm to conduct reinforcement learning on latent reasoning, further balancing LVR and textual generation. We show that LVR substantially improves fine-grained visual understanding and perception, achieving 71.67% on MMVP compared to 66.67% with Qwen2.5-VL.

Model Architecture

The LVR model is purely autoregressive.

The overall framework closely follows a standard MLLM. Images are encoded into tokens by a visual encoder and mapped into a joint semantic space with text embeddings. During the SFT stage, bounding boxes are provided to identify query-relevant visual tokens, which supervise the last hidden states in the LVR process. Here, only the LLM’s last hidden states are passed forward for latent reasoning, optimized with a Mean Squared Error loss. The LVR process is wrapped with special tokens that indicate reasoning mode. Once all query-relevant visual tokens are consumed, the model exits LVR and resumes standard text generation with cross-entropy loss. During RL training, the model self-evolves the LVR process learned in SFT, while only the text generation part is supervised, using our adapted GRPOlatent. At inference, the model triggers LVR upon generating the special token, propagates hidden states to reconstruct visual semantics, and resumes text generation when a stopping criterion is met.
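The SFT objective described above can be sketched as an MSE term on the latent-reasoning positions plus the usual next-token cross-entropy on text positions. Tensor names, the `lvr_mask` construction, and the `lambda_lvr` weighting are assumptions for illustration; label shifting is omitted for brevity.

```python
import torch
import torch.nn.functional as F

def lvr_sft_loss(last_hidden, logits, target_visual_tokens, text_labels,
                 lvr_mask, lambda_lvr=1.0):
    """
    last_hidden:          (B, T, H)  LLM last hidden states
    logits:               (B, T, V)  LM head outputs
    target_visual_tokens: (B, T, H)  query-relevant visual embeddings aligned
                                     to LVR positions (zeros elsewhere)
    text_labels:          (B, T)     next-token ids, -100 at non-text positions
    lvr_mask:             (B, T)     1 at latent-reasoning positions, else 0
    """
    # MSE between predicted hidden states and target visual semantics,
    # averaged only over latent-reasoning positions.
    mse = ((last_hidden - target_visual_tokens) ** 2).mean(dim=-1)
    lvr_loss = (mse * lvr_mask).sum() / lvr_mask.sum().clamp(min=1)

    # Standard cross-entropy on the text-generation positions.
    ce_loss = F.cross_entropy(logits.flatten(0, 1), text_labels.flatten(),
                              ignore_index=-100)
    return ce_loss + lambda_lvr * lvr_loss
```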

GRPOlatent for RL on latent reasoning models

One key challenge in applying RL to latent reasoning models is that the token distribution over latent reasoning positions is not semantically meaningful. To address this, we propose GRPOlatent, which computes log probabilities only for textual tokens during the generation phase. Since text generation is conditioned on the LVR process, the latent reasoning steps are indirectly optimized through the policy gradient signal. The reward function combines an accuracy reward on VQA tasks with a format reward that encourages the activation of latent reasoning (<|lvr_start|>...<|lvr_end|>), making the latter serve not only as a response-format constraint but also as a mechanism to promote latent reasoning behavior.
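The sketch below illustrates the masking idea behind GRPOlatent: the clipped policy-gradient term is computed only over textual positions (latent steps have no meaningful token distribution and are masked out), and the reward adds a format bonus when the response actually activates latent reasoning. The function names, reward weighting, and clipping constant are assumptions, not the paper's exact formulation.

```python
import re
import torch

def grpo_latent_loss(logprobs, old_logprobs, advantages, text_mask, clip_eps=0.2):
    # logprobs, old_logprobs: (B, T) per-token log-probs of sampled responses
    # advantages:             (B,)   group-normalized rewards (GRPO-style)
    # text_mask:              (B, T) 1 for textual tokens, 0 for latent steps
    ratio = torch.exp(logprobs - old_logprobs)
    adv = advantages.unsqueeze(-1)
    unclipped = ratio * adv
    clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * adv
    per_token = -torch.min(unclipped, clipped) * text_mask
    return per_token.sum() / text_mask.sum().clamp(min=1)

def reward(response_text, predicted_answer, gold_answer):
    # Accuracy reward on the VQA answer plus a format reward that checks
    # whether latent reasoning was triggered; the 0.9/0.1 split is an assumption.
    acc = float(predicted_answer.strip() == gold_answer.strip())
    fmt = float(bool(re.search(r"<\|lvr_start\|>.*<\|lvr_end\|>", response_text, re.S)))
    return 0.9 * acc + 0.1 * fmt
```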

Experiment Results


LVR achieves state-of-the-art performance, surpassing open-source baselines built on the same MLLMs. It shows strong detail understanding on $V^*$ and robust perception on MMVP, highlighting that reconstructing visual semantics is more effective than relying on external visual-editing tools (as in “Think with Images”) for fine-grained understanding. Moreover, CoT-based approaches such as PAPO and Vision-R1 degrade on $V^*$, suggesting that textual-space CoT may cause cross-modal interference, whereas LVR avoids this by reasoning jointly across modalities.


BibTeX


        To Add Soon.