Probing the Knowledge Boundary

An Interactive Agentic Framework for Deep Knowledge Extraction

Yuheng Yang1,2, Siqi Zhu1, Tao Feng1, Ge Liu1, Jiaxuan You1

1University of Illinois at Urbana-Champaign    2Westlake University

Abstract

Large Language Models (LLMs) can be seen as compressed knowledge bases, but it remains unclear what knowledge they truly contain and how far their knowledge boundaries extend. Existing benchmarks are mostly static and provide limited support for systematic knowledge probing. In this paper, we propose an interactive agentic framework to systematically extract and quantify the knowledge of LLMs. Our method includes four adaptive exploration policies to probe knowledge at different granularities. To ensure the quality of extracted knowledge, we introduce a three-stage knowledge processing pipeline that combines vector-based filtering to remove exact duplicates, LLM-based adjudication to resolve ambiguous semantic overlaps, and domain-relevance auditing to retain valid knowledge units. Through extensive experiments, we find that recursive taxonomy is the most effective exploration strategy. We also observe a clear knowledge scaling law, where larger models consistently extract more knowledge. In addition, we identify a Pass@1-versus-Pass@k trade-off: domain-specialized models achieve higher initial accuracy but degrade rapidly, while general-purpose models maintain stable performance during extended extraction. Finally, our results show that differences in training data composition lead to distinct and measurable knowledge profiles across model families.

Method Overview

Framework Overview

Exploration Policies

We design four adaptive strategies to overcome the model's "comfort zone" effect, i.e., its tendency to keep returning the same easily recalled knowledge:

  • P2 (Sequential Probing): iterative "What else?" baseline for associative knowledge retrieval
  • P3 (Self-Reflection): critic-actor loop with self-audit to refine and expand responses
  • P4 (Recursive Taxonomy): hierarchical decomposition with a configurable branching factor
  • P5 (Multi-Perspective): N expert personas working in parallel to elicit diverse viewpoints
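Recursive taxonomy (P4), the strategy the abstract reports as most effective, can be sketched as a simple depth-bounded recursion. This is an illustrative sketch, not the paper's implementation: `ask` is an assumed LLM wrapper that takes a prompt and returns a list of strings, and the prompt wording is invented.

```python
def recursive_taxonomy(ask, topic, branching=3, depth=2):
    """Recursively decompose `topic` and harvest leaf-level knowledge.

    `ask(prompt) -> list[str]` is an assumed LLM wrapper; `branching`
    and `depth` are the configurable branching factor and recursion depth.
    """
    if depth == 0:
        # Leaf: extract atomic knowledge units for this subtopic.
        return ask(f"List atomic knowledge units about {topic}.")
    units = []
    # Internal node: split into subtopics and recurse into each.
    for sub in ask(f"Name {branching} subtopics of {topic}.")[:branching]:
        units.extend(recursive_taxonomy(ask, sub, branching, depth - 1))
    return units
```

With branching factor b and depth d, the recursion issues on the order of b^d leaf queries, which is what lets it escape the associative ruts of flat "What else?" probing.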

Knowledge Processor

Three-stage pipeline ensuring semantic uniqueness and validity:

  • Stage 1 - Vector Filtering: Cosine similarity > 0.92 triggers immediate merge
  • Stage 2 - LLM Adjudication: Ambiguous pairs (0.70-0.92) judged by LLM
  • Stage 3 - Domain Auditing: Bloom's Taxonomy-based relevance filtering
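Stages 1 and 2 reduce to a thresholded routing decision on embedding similarity. A minimal sketch using the thresholds above (the `cosine` helper and `route` function are illustrative, not the paper's code):

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def route(similarity, merge_above=0.92, judge_above=0.70):
    """Route a candidate pair through Stages 1-2.

    > 0.92       -> merge as an exact duplicate (Stage 1)
    0.70 to 0.92 -> send to the LLM adjudicator (Stage 2)
    otherwise    -> keep both knowledge units
    """
    if similarity > merge_above:
        return "merge"
    if similarity >= judge_above:
        return "llm_adjudicate"
    return "keep"
```

A pair of units is processed as `route(cosine(emb_a, emb_b))`; only the ambiguous middle band ever incurs an LLM call, which keeps adjudication cost proportional to the number of borderline pairs rather than all pairs.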

Saturation Detection

Automatic stopping when knowledge growth saturates:

  • Growth rate drops below 1% per turn
  • Extraction efficiency falls below 10%
  • Novel discoveries fewer than 3 per turn
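The criteria above can be checked once per extraction turn. A hedged sketch, treating any single criterion as sufficient to stop (the exact conjunction and the counter names are assumptions, not stated in the text):

```python
def is_saturated(novel_units, total_before, attempts,
                 min_growth=0.01, min_efficiency=0.10, min_novel=3):
    """Return True when knowledge extraction should stop.

    novel_units  - new unique units accepted this turn
    total_before - size of the knowledge pool before this turn
    attempts     - candidate units proposed this turn (pre-dedup)
    """
    growth = novel_units / total_before if total_before else 1.0
    efficiency = novel_units / attempts if attempts else 0.0
    return (growth < min_growth             # growth below 1% per turn
            or efficiency < min_efficiency  # extraction efficiency below 10%
            or novel_units < min_novel)     # fewer than 3 novel discoveries
```

For example, a turn that yields 2 novel units against a pool of 1,000 stops on both the growth and novelty criteria, while a turn yielding 50 novel units from 100 attempts against a pool of 100 continues.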

Results

Citation

@misc{yang2026probing,
  title={Probing the Knowledge Boundary: An Interactive Agentic Framework for Deep Knowledge Extraction},
  author={Yuheng Yang and Siqi Zhu and Tao Feng and Ge Liu and Jiaxuan You},
  year={2026},
  eprint={2602.00959},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
  url={https://arxiv.org/abs/2602.00959}
}