This paper shows LLMs can build accurate, readable personality structures from very little data. Using only 20 Big Five answers per person, the models predict many other questionnaire items.

The evaluation checks whether the full pattern of relationships between scales is captured, not just individual item scores. The model-generated patterns closely match the human ones, and they come out stronger than the human patterns. The authors call that strengthening structural amplification. Models that amplify more also predict people better.

Reasoning traces show a two-step routine: first compress the 20 answers into a short personality summary, then generate item responses from that summary. The summaries lock onto the big factors well but struggle to weigh specific items within each factor. The summary alone almost recreates the structure, and adding it to the raw scores improves accuracy.

This makes the models look like low-noise respondents that filter out human reporting noise with one stable response style.

----
Paper – arxiv.org/abs/2511.03235
Paper Title: "From Five Dimensions to Many: LLMs as Precise and Interpretable Psychological Profilers"
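For intuition, here's a minimal Python sketch (not the paper's code) of what that structural comparison could look like: build the inter-scale correlation matrix for human responses and for model-generated responses, correlate their off-diagonal entries to check how closely the patterns match, and compare average correlation strength to quantify amplification. The data, scale count, and noise levels below are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

n_people, n_scales = 200, 12

# Synthetic placeholder data with a shared latent factor so the scales correlate;
# this is only here so the sketch runs end to end.
latent = rng.normal(size=(n_people, 1))
human_scores = latent + rng.normal(scale=1.0, size=(n_people, n_scales))  # noisier "human" responses
model_scores = latent + rng.normal(scale=0.5, size=(n_people, n_scales))  # lower-noise "model" responses

def off_diagonal(corr):
    """Flatten the off-diagonal entries of a square correlation matrix."""
    mask = ~np.eye(corr.shape[0], dtype=bool)
    return corr[mask]

human_corr = np.corrcoef(human_scores, rowvar=False)  # inter-scale correlations, humans
model_corr = np.corrcoef(model_scores, rowvar=False)  # inter-scale correlations, model

# Structural similarity: do the two correlation patterns line up?
similarity = np.corrcoef(off_diagonal(human_corr), off_diagonal(model_corr))[0, 1]

# Amplification: are the model's inter-scale correlations stronger on average?
amplification = np.abs(off_diagonal(model_corr)).mean() / np.abs(off_diagonal(human_corr)).mean()

print(f"structural similarity r = {similarity:.2f}")
print(f"amplification ratio     = {amplification:.2f}  (> 1 means stronger than human)")
```

In this toy setup the "model" differs from the "human" data only by having less item-level noise, which is one way to picture the low-noise-respondent interpretation: the shared structure survives, and the correlations come out stronger.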

Nov 8, 2025 · 4:31 AM UTC

Replying to @rohanpaul_ai
guess my therapist is getting replaced by a finetuned gpt, just gotta feed it my 20 most unhinged personality traits
Replying to @rohanpaul_ai
Rohan, that's quite a fascinating finding! Imagine the possibilities if we can truly understand personality with so little data, right?
Replying to @rohanpaul_ai
Yes. They built VERY accurate models of the users' minds. Aren't you SO glad that now google and campaign managers and ad execs can query your mind directly? What could possibly go wrong?