Towards Pixel-Level VLM Perception
via Simple Points Prediction




Anonymous.
Under Review.



Abstract

We present SimpleSeg, a strikingly simple yet highly effective approach that endows Multimodal Large Language Models (MLLMs) with native pixel-level perception. Our method reframes segmentation as a plain sequence-generation problem: the model directly predicts a sequence of points (textual coordinates) delineating object boundaries, entirely within its language space. To achieve high fidelity, we introduce a two-stage SFT→RL training pipeline, in which reinforcement learning with an IoU-based reward refines the point sequences to accurately match ground-truth contours. We find that the standard MLLM architecture possesses a strong, inherent capacity for low-level perception that can be unlocked without any specialized components. On segmentation benchmarks, SimpleSeg achieves performance that is comparable to, and often surpasses, methods relying on complex, task-specific designs. This work demonstrates that precise spatial understanding can emerge from simple point prediction, challenging the prevailing reliance on auxiliary components and paving the way for more unified and capable VLMs.

Method Overview
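
The abstract frames segmentation as predicting a textual point sequence, refined in a second RL stage with an IoU-based reward. Below is a minimal sketch of how such a polygon-IoU reward could be computed; the function name and the use of the shapely library are assumptions for illustration, not the paper's actual implementation.

# Hypothetical sketch of a polygon-IoU reward for the RL stage.
# The shapely dependency and the function name are illustrative assumptions.
from shapely.geometry import Polygon

def iou_reward(pred_points, gt_points):
    """IoU between a predicted and a ground-truth polygon.

    Both inputs are lists of (x, y) pairs in normalized [0, 1] coordinates,
    i.e. the parsed form of the model's textual point sequence.
    """
    if len(pred_points) < 3 or len(gt_points) < 3:
        return 0.0
    pred = Polygon(pred_points).buffer(0)  # buffer(0) repairs self-intersections
    gt = Polygon(gt_points).buffer(0)
    if pred.is_empty or gt.is_empty:
        return 0.0
    union = pred.union(gt).area
    return pred.intersection(gt).area / union if union > 0 else 0.0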

Key Benefits

  • Simplicity: SimpleSeg requires no specialized modules and adheres to the standard MLLM architecture. It can therefore be seamlessly and efficiently integrated as a new core pre-training task for foundation models, much like visual grounding.
  • Task Generality: By framing segmentation as a text-generation problem, our approach is inherently flexible. The model can be easily adapted to a wide range of vision-language tasks that require precise spatial localization.
  • Interpretable Output: The model generates explicit, human-readable coordinate sequences instead of dense pixel masks. This transparency simplifies debugging and makes the output directly usable for downstream applications like interactive editing or tool use (see the parsing sketch below).
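
As an illustration of this interpretability, the sketch below parses a SimpleSeg-style response into normalized points and rasterizes them into a binary mask. The regular expression, function names, and the use of NumPy/OpenCV are assumptions for illustration, not the released implementation.

# Hypothetical sketch: turning SimpleSeg's textual output into a binary mask.
# The parsing regex and the NumPy/OpenCV rasterization are illustrative assumptions.
import re
import numpy as np
import cv2

def parse_points(response: str):
    """Extract normalized (x, y) pairs such as [0.295, 0.204] from the model text."""
    pairs = re.findall(r"\[\s*(\d*\.\d+)\s*,\s*(\d*\.\d+)\s*\]", response)
    return [(float(x), float(y)) for x, y in pairs]

def points_to_mask(points, height: int, width: int) -> np.ndarray:
    """Rasterize normalized polygon vertices into an (H, W) binary mask."""
    polygon = np.array(
        [[round(x * width), round(y * height)] for x, y in points], dtype=np.int32
    )
    mask = np.zeros((height, width), dtype=np.uint8)
    cv2.fillPoly(mask, [polygon], 1)
    return mask

response = "Sure, you can find it at [[[0.295, 0.204], [0.298, 0.225], [0.358, 0.206]]]."
mask = points_to_mask(parse_points(response), height=480, width=640)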

Visual Results

Complex Object Segmentation

User: Show me the precise polygon of the octopus in the image.

octopus

Input Image

SimpleSeg: Sure, you can find it at [[[0.295, 0.204], [0.298, 0.225], ..., [0.456, 0.273], [0.412, 0.231], [0.358, 0.206]]].

octopus-result

Segmentation Result

Fine Detail Recognition

User: Help me locate the polygon of the bracelet in the image.

bracelet

Input Image

SimpleSeg: [[[0.737, 0.324], [0.725, 0.316], [0.706, 0.316], ..., [0.739, 0.358], [0.744, 0.346], [0.743, 0.333]]].

bracelet-result

Segmentation Result

Geometric Contour Extraction

User: Outline the curve in the image.

curve

Input Image

SimpleSeg: Sure, [[[0.927, 0.399], [0.916, 0.388], [0.904, 0.401], ..., [0.897, 0.534], [0.904, 0.534], [0.916, 0.466], [0.916, 0.438]]].

curve-result

Segmentation Result

Image Domain Generalization

User: Please output the polygon coordinates of Aniya in the image.

curve

Input Image

SimpleSeg: [[[0.533, 0.260], [0.525, 0.280], [0.483, 0.290], ..., [0.583, 0.310], [0.556, 0.306], [0.540, 0.286], [0.538, 0.268]]].

curve-result

Segmentation Result

Input and Task Flexibility

User: Show me the contour of the object at point [0.621, 0.592].

curve

Input Image

SimpleSeg: [[[0.700, 0.535], [0.689, 0.543], [0.678, 0.558], [0.674, 0.572], ..., [0.748, 0.543], [0.730, 0.531], [0.717, 0.529]]].

curve-result

Segmentation Result

User: Give the polygon vertices of the stuff within the box region [0.372, 0.162, 0.784, 0.605].

curve

Input Image

SimpleSeg: The polygon vertices within that region are [[[0.773, 0.166], [0.765, 0.165], ..., [0.782, 0.185], [0.780, 0.173]]].

curve-result

Segmentation Result

Citation

@article{simpleseg,
  title={Towards Pixel-Level VLM Perception via Simple Points Prediction},
  author={Anonymous},
  journal={arXiv preprint arXiv:2025.xxxxx},
  year={2025}
}