Large Language Models (LLMs) can be seen as compressed knowledge bases, but it remains unclear what knowledge they truly contain and how far their knowledge boundaries extend. Existing benchmarks are mostly static and provide limited support for systematic knowledge probing. In this paper, we propose an interactive agentic framework to systematically extract and quantify the knowledge of LLMs. Our method includes four adaptive exploration policies to probe knowledge at different granularities. To ensure the quality of extracted knowledge, we introduce a three-stage knowledge processing pipeline that combines vector-based filtering to remove exact duplicates, LLM-based adjudication to resolve ambiguous semantic overlaps, and domain-relevance auditing to retain valid knowledge units. Through extensive experiments, we find that recursive taxonomy is the most effective exploration strategy. We also observe a clear knowledge scaling law, where larger models consistently extract more knowledge. In addition, we identify a Pass@1-versus-Pass@k trade-off: domain-specialized models achieve higher initial accuracy but degrade rapidly, while general-purpose models maintain stable performance during extended extraction. Finally, our results show that differences in training data composition lead to distinct and measurable knowledge profiles across model families.
We design four adaptive exploration policies to overcome the model's "comfort zone" effect (a sketch of the recursive-taxonomy policy follows the table):
| Policy | Description |
|---|---|
| P2: Sequential Probing | Iterative "What else?" baseline for associative knowledge retrieval |
| P3: Self-Reflection | Critic-actor loop with self-audit to refine and expand responses |
| P4: Recursive Taxonomy | Hierarchical decomposition with configurable branching factor |
| P5: Multi-Perspective | N expert personas working in parallel for diverse viewpoints |
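Below is a minimal Python sketch of the recursive-taxonomy policy (P4), the best-performing one in our experiments. It assumes a hypothetical `ask_llm(prompt) -> str` helper and illustrative prompts; the names and prompt wording are not the repository's actual API, and the real policy may decompose and parse responses differently.

```python
# Minimal sketch of the Recursive Taxonomy policy (P4): hierarchical
# decomposition of a domain with a configurable branching factor.
# `ask_llm` is a hypothetical helper that sends a prompt to the model
# and returns its text response; plug in your own LLM client.

def ask_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM client here")


def recursive_taxonomy(topic: str, branching: int = 5, depth: int = 3) -> list[str]:
    """Recursively split `topic` into subtopics, then harvest knowledge
    units at the leaves. Returns a flat list of extracted statements."""
    if depth == 0:
        # Leaf: ask for concrete knowledge units about this narrow subtopic.
        answer = ask_llm(f"List distinct facts you know about: {topic}")
        return [line.strip("- ").strip() for line in answer.splitlines() if line.strip()]

    # Internal node: decompose the topic into `branching` subtopics.
    raw = ask_llm(
        f"Decompose the topic '{topic}' into {branching} non-overlapping subtopics, one per line."
    )
    subtopics = [s.strip("- ").strip() for s in raw.splitlines() if s.strip()][:branching]

    units: list[str] = []
    for sub in subtopics:
        units.extend(recursive_taxonomy(sub, branching, depth - 1))
    return units
```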
Three-stage pipeline ensuring semantic uniqueness and validity:
1. **Vector-based filtering**: removes exact duplicates among candidate knowledge units
2. **LLM-based adjudication**: resolves ambiguous semantic overlaps between near-duplicate candidates
3. **Domain-relevance auditing**: retains only valid knowledge units relevant to the target domain
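A minimal sketch of how these three stages could be wired together, assuming hypothetical `embed` and `ask_llm` callables and illustrative similarity thresholds (0.99 for duplicates, 0.9 for ambiguous overlaps); the actual pipeline's prompts, thresholds, and embedding model may differ.

```python
# Sketch of the three-stage knowledge-processing pipeline. `embed` and
# `ask_llm` are hypothetical helpers (embedding model / LLM judge); the
# 0.9 and 0.99 thresholds are illustrative, not the paper's values.
from typing import Callable
import numpy as np


def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def process(
    candidates: list[str],
    kept: list[str],
    domain: str,
    embed: Callable[[str], np.ndarray],
    ask_llm: Callable[[str], str],
) -> list[str]:
    kept_vecs = [embed(k) for k in kept]
    for unit in candidates:
        v = embed(unit)
        sims = [cosine(v, kv) for kv in kept_vecs]
        # Stage 1: vector-based filtering drops (near-)exact duplicates.
        if sims and max(sims) > 0.99:
            continue
        # Stage 2: LLM adjudication resolves ambiguous semantic overlaps.
        if sims and max(sims) > 0.9:
            near = kept[int(np.argmax(sims))]
            verdict = ask_llm(f"Do these state the same fact? A: {unit} B: {near} Answer yes/no.")
            if verdict.strip().lower().startswith("yes"):
                continue
        # Stage 3: domain-relevance audit keeps only valid, on-domain units.
        verdict = ask_llm(f"Is this a valid fact about {domain}? '{unit}' Answer yes/no.")
        if verdict.strip().lower().startswith("yes"):
            kept.append(unit)
            kept_vecs.append(v)
    return kept
```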
Automatic stopping when knowledge growth saturates: extraction ends once additional probing rounds yield diminishing numbers of new unique knowledge units.
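The exact saturation criterion is not spelled out here; the sketch below shows one plausible rule, stated purely as an assumption, that stops extraction once the last few rounds add fewer new unique knowledge units than a small threshold on average.

```python
# Hypothetical saturation check (assumption, not the paper's exact criterion):
# stop once the last `window` extraction rounds add fewer than `min_new`
# unique knowledge units on average.

def should_stop(new_units_per_round: list[int], window: int = 3, min_new: int = 2) -> bool:
    if len(new_units_per_round) < window:
        return False
    recent = new_units_per_round[-window:]
    return sum(recent) / window < min_new


# Example: growth has clearly saturated after the early productive rounds.
print(should_stop([40, 25, 12, 3, 1, 0]))  # True
```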
- **Pareto Knowledge Frontier.** Recursive taxonomy (P4) dominates the efficiency frontier across all domains.
- **Specialization Trade-off.** Specialized models achieve higher initial accuracy but degrade rapidly, while general models maintain stable performance over extended extraction.
- **Cross-Series Comparison.** Different model families exhibit distinct knowledge profiles at similar parameter counts.
@misc{yang2026probing,
      title={Probing the Knowledge Boundary: An Interactive Agentic Framework for Deep Knowledge Extraction},
      author={Yuheng Yang and Siqi Zhu and Tao Feng and Ge Liu and Jiaxuan You},
      year={2026},
      eprint={2602.00959},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2602.00959}
}