PlanQA: A Benchmark for Spatial Reasoning in LLMs using Structured Representations

Fedor Rodionov1, Abdelrahman Eldesokey1, Michael Birsak1, John Femiani2, Bernard Ghanem1, Peter Wonka1
1KAUST, 2Miami University
{first.last}@kaust.edu.sa, femianjc@miamioh.edu

Abstract

We introduce PlanQA, a diagnostic benchmark for evaluating geometric and spatial reasoning in large language models (LLMs). PlanQA is grounded in structured representations of indoor scenes, such as kitchens, living rooms, and bedrooms, encoded in a symbolic format (e.g., JSON or XML layouts). The benchmark includes diverse question types that test not only metric and topological reasoning (e.g., distance, visibility, shortest paths) but also interior design constraints such as affordance, clearance, balance, and usability. Our results across a variety of frontier open-source and commercial LLMs show that while models may succeed on shallow queries, they often fail to simulate physical constraints, preserve spatial coherence, or generalize under layout perturbation. PlanQA uncovers a clear blind spot in today's LLMs: they do not consistently reason about real-world layouts. We hope that this benchmark inspires new work on language models that can accurately infer and manipulate spatial and geometric properties in practical settings.
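
To make the input format concrete, here is a minimal Python sketch of what a symbolic layout and one metric ("distance") question could look like. The schema (keys such as position and size, units in meters) and the center-to-center distance convention are illustrative assumptions for this sketch, not PlanQA's actual layout format.

```python
import json
import math

# Hypothetical kitchen layout in a symbolic, JSON-like format.
# NOTE: the schema below ("id", "position", "size") is an illustrative
# assumption, not the benchmark's actual encoding.
layout = {
    "room": {"type": "kitchen", "width": 4.0, "depth": 3.0},
    "objects": [
        {"id": "sink",   "position": [0.6, 0.3], "size": [0.8, 0.6]},
        {"id": "stove",  "position": [2.4, 0.3], "size": [0.6, 0.6]},
        {"id": "fridge", "position": [3.4, 2.4], "size": [0.7, 0.7]},
    ],
}

def center_distance(layout, id_a, id_b):
    """Euclidean distance between object centers (one possible convention
    for a 'Distance' question; the benchmark may use another)."""
    objs = {o["id"]: o for o in layout["objects"]}
    ax, ay = objs[id_a]["position"]
    bx, by = objs[id_b]["position"]
    return math.hypot(ax - bx, ay - by)

# Example question: "How far apart are the sink and the stove?"
prompt = json.dumps(layout)  # the symbolic scene text an LLM would be given
print(prompt)
print(f"Reference answer: {center_distance(layout, 'sink', 'stove'):.2f} m")
```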

Model Performance Analysis

Question-level accuracy (%) by category for each model across room types (K = kitchen, L = living room, B = bedroom).
Model groups: Reasoning (Qwen3-32B, DeepSeek-R1), Big Models (DeepSeek-V3, LLaMA 3.3-70B, Gemma 2-27B, Phi-4, GPT-4.1), Small Models (LLaMA 3.1-8B, Gemma 2-9B, Phi 3.5-mini, GPT-4o-mini).
Note: High truncation rate due to token limits.

| Room | Category | Qwen3-32B | DeepSeek-R1 | DeepSeek-V3 | LLaMA 3.3-70B | Gemma 2-27B | Phi-4 | GPT-4.1 | LLaMA 3.1-8B | Gemma 2-9B | Phi 3.5-mini | GPT-4o-mini |
|------|----------|-----------|-------------|-------------|---------------|-------------|-------|---------|--------------|------------|--------------|-------------|
|      | Params (N) | 32.8B | 671B | 671B | 70B | 27.2B | 14B | N/A | 8B | 9.2B | 3.8B | N/A |
|      | Temp.      | 0.6   | 0.6  | 0.3  | 0.0 | 0.5   | 0.0 | 1.0 | 0.6 | 0.5 | 0.7 | 1.0 |
| K | Distance | 97.2 | 99.5 | 98.5 | 97.2 | 94.7 | 90.7 | 100 | 82.3 | 74.2 | 35.3 | 97.3 |
| K | Area (counters) | 85.3 | 99.5 | 95.7 | 81.7 | 46.5 | 79.2 | 98.5 | 30.8 | 37.5 | 9.8 | 80.3 |
| K | Free Space | 83.7 | 88.0 | 52.5 | 37.0 | 23.5 | 42.0 | 83.5 | 9.5 | 13.5 | 6.3 | 20.3 |
| K | View Angle | 74.0 | 58.2 | 69.2 | 75.2 | 12.7 | 52.8 | 86.3 | 11.5 | 9.2 | 8.2 | 42.2 |
| K | Repositioning | 91.5 | 96.3 | 69.8 | 48.7 | 14.8 | 22.7 | 85.8 | 3.7 | 6.2 | 11.5 | 40.7 |
| K | Max Box | 33.3 | 30.7 | 9.2 | 8.8 | 3.0 | 3.0 | 57.2 | 1.7 | 1.2 | 1.7 | 3.7 |
| K | Fit/Placement | 92.8 | 92.5 | 71.0 | 66.2 | 72.0 | 82.0 | 89.8 | 68.7 | 71.8 | 68.8 | 70.7 |
| K | Path (Valid) | 13.2 | 6.3 | 39.3 | 30.3 | 23.2 | 26.8 | 73.0 | 10.5 | 17.5 | 14.8 | 35.7 |
| K | Path (Fréchet) | 15.0 | 6.3 | 32.8 | 30.2 | 26.0 | 26.0 | 55.8 | 10.2 | 19.0 | 9.3 | 36.7 |
| K | Missing Object | 87.3 | 88.7 | 44.3 | 56.2 | 52.3 | 52.2 | 79.8 | 27.5 | 39.7 | 14.3 | 58.0 |
| K | Obstruction | 84.0 | 95.2 | 32.7 | 6.0 | 3.3 | 9.7 | 93.5 | 1.2 | 2.7 | 11.3 | 14.2 |
| L | Distance | 98.7 | 99.8 | 99.5 | 98.2 | 96.3 | 98.8 | 99.8 | 87.0 | 81.8 | 58.8 | 98.5 |
| L | Area (sitting) | 96.7 | 99.5 | 84.5 | 98.0 | 83.8 | 88.7 | 99.5 | 32.5 | 41.3 | 12.8 | 85.7 |
| L | Free Space | 0.3 | 4.3 | 1.0 | 0.3 | 3.7 | 0.3 | 5.0 | 0.7 | 3.2 | 1.5 | 1.7 |
| L | View Angle | 81.5 | 86.0 | 70.0 | 76.8 | 14.3 | 50.3 | 96.0 | 14.8 | 8.3 | 10.5 | 42.2 |
| L | Repositioning | 80.5 | 93.0 | 45.0 | 33.5 | 12.8 | 19.2 | 71.5 | 6.5 | 5.7 | 6.3 | 29.3 |
| L | Max Box | 1.5 | 0.8 | 1.0 | 3.0 | 2.0 | 2.8 | 6.5 | 0.8 | 1.5 | 1.0 | 2.3 |
| L | Fit/Placement | 90.7 | 91.2 | 71.5 | 80.0 | 83.7 | 87.7 | 91.8 | 75.0 | 75.5 | 72.8 | 72.7 |
| L | Path (Valid) | 10.0 | 14.7 | 33.0 | 26.7 | 30.5 | 26.7 | 53.2 | 7.3 | 21.7 | 16.7 | 33.5 |
| L | Path (Fréchet) | 13.2 | 14.2 | 23.7 | 17.8 | 19.2 | 14.7 | 48.0 | 2.5 | 14.2 | 6.8 | 25.2 |
| L | Missing Object | 73.0 | 76.2 | 51.0 | 49.3 | 29.5 | 36.0 | 65.5 | 9.7 | 28.3 | 11.7 | 32.3 |
| L | Obstruction | 80.7 | 96.5 | 24.3 | 7.3 | 3.8 | 9.3 | 84.7 | 2.5 | 5.2 | 4.7 | 11.7 |
| B | Distance | 98.7 | 99.8 | 99.5 | 98.2 | 96.3 | 98.8 | 99.8 | 87.0 | 81.8 | 58.8 | 98.5 |
| B | Area (storage) | 98.7 | 99.8 | 94.0 | 97.0 | 86.3 | 88.3 | 99.3 | 30.3 | 66.3 | 47.7 | 88.2 |
| B | Free Space | 1.7 | 5.8 | 1.2 | 0.3 | 2.5 | 1.2 | 2.8 | 1.8 | 1.0 | 1.0 | 1.2 |
| B | View Angle | 76.0 | 79.8 | 70.0 | 78.3 | 10.8 | 57.0 | 94.2 | 15.3 | 10.2 | 7.7 | 43.0 |
| B | Repositioning | 78.7 | 94.3 | 53.8 | 36.0 | 11.2 | 15.7 | 73.7 | 4.3 | 9.0 | 6.5 | 31.5 |
| B | Max Box | 0.7 | 1.0 | 2.0 | 1.8 | 2.0 | 2.5 | 7.2 | 1.0 | 0.7 | 1.0 | 2.0 |
| B | Fit/Placement | 86.3 | 86.3 | 65.8 | 66.7 | 66.5 | 73.0 | 82.0 | 66.0 | 63.8 | 64.7 | 66.2 |
| B | Path (Valid) | 15.5 | 20.8 | 49.7 | 40.7 | 38.5 | 35.8 | 67.8 | 11.3 | 34.5 | 20.5 | 48.8 |
| B | Path (Fréchet) | 15.3 | 21.5 | 30.3 | 30.3 | 27.0 | 19.0 | 49.5 | 3.7 | 19.7 | 8.5 | 36.3 |
| B | Missing Object | 64.3 | 65.2 | 39.8 | 40.3 | 33.0 | 25.2 | 61.0 | 14.8 | 19.7 | 8.3 | 34.5 |
| B | Obstruction | 87.2 | 95.7 | 32.0 | 4.3 | 1.3 | 5.8 | 89.5 | 2.2 | 5.3 | 9.3 | 12.2 |
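
The Path (Fréchet) rows compare a model's predicted path against a reference path. As a rough illustration of how such a comparison can be made, the sketch below computes the discrete Fréchet distance between two polylines; it is a generic implementation under that assumption, not PlanQA's exact scoring protocol.

```python
from functools import lru_cache
import math

def discrete_frechet(p, q):
    """Discrete Fréchet distance between two polylines p and q, each a
    list of (x, y) points. Minimal dynamic-programming version; the
    benchmark's actual path-scoring rule may differ."""
    @lru_cache(maxsize=None)
    def d(i, j):
        dist = math.dist(p[i], q[j])
        if i == 0 and j == 0:
            return dist
        if i == 0:
            return max(d(0, j - 1), dist)
        if j == 0:
            return max(d(i - 1, 0), dist)
        return max(min(d(i - 1, j), d(i - 1, j - 1), d(i, j - 1)), dist)
    return d(len(p) - 1, len(q) - 1)

# Example: a reference path vs. a hypothetical predicted path through a room.
reference = [(0.0, 0.0), (1.0, 0.0), (2.0, 1.0)]
predicted = [(0.0, 0.2), (1.1, 0.1), (2.0, 0.9)]
print(f"Discrete Fréchet distance: {discrete_frechet(reference, predicted):.2f} m")
```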

BibTeX

@article{rodionov2025planqa,
  author    = {Rodionov, Fedor and Eldesokey, Abdelrahman and Birsak, Michael and Femiani, John and Ghanem, Bernard and Wonka, Peter},
  title     = {PlanQA: A Benchmark for Spatial Reasoning in LLMs using Structured Representations},
  journal   = {arXiv preprint},
  year      = {2025},
}