LREC-COLING 2024 (main conference)

Enhancing Large Language Models through Transforming Reasoning Problems into Classification Tasks

Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)

DOI:10.63317/33x3ygmnofhc

Abstract

In this paper, we introduce a novel approach for enhancing the reasoning capabilities of large language models (LLMs) on constraint satisfaction problems (CSPs) by converting reasoning problems into classification tasks. Our method leverages the LLM’s ability to decide when to call a function from a set of logical-linguistic primitives, each of which can interact with a local “scratchpad” memory and a logical inference engine. Invoking these primitives in the correct order writes the constraints to the scratchpad memory and enables the logical engine to verifiably solve the problem. We additionally propose a formal framework for exploring the “linguistic” hardness of CSP reasoning problems for LLMs. Our experimental results demonstrate that under our proposed method, tasks with significant computational hardness can be converted to a form that is easier for LLMs to solve, yielding a 40% improvement over baselines. This opens up new avenues for future research into hybrid cognitive models that integrate symbolic and neural approaches.
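The scratchpad-plus-primitives architecture described in the abstract can be illustrated with a minimal sketch. All names here (`Scratchpad`, `declare_var`, `add_constraint`) are hypothetical and not taken from the paper: primitive calls accumulate constraints in a scratchpad, and a tiny brute-force engine then verifiably solves the resulting CSP. In the paper's setting, the LLM would classify which primitive to invoke next; here the calls are made directly.

```python
from itertools import product

class Scratchpad:
    """Illustrative scratchpad: holds variable domains and constraints
    written by primitive calls, plus a brute-force inference engine."""

    def __init__(self):
        self.domains = {}        # variable name -> list of candidate values
        self.constraints = []    # predicates over a full assignment

    # Logical-linguistic primitives (hypothetical names): an LLM would
    # decide, as a classification step, which of these to call next.
    def declare_var(self, name, domain):
        self.domains[name] = list(domain)

    def add_constraint(self, pred):
        self.constraints.append(pred)

    def solve(self):
        """Exhaustively search the joint domain for a satisfying assignment."""
        names = list(self.domains)
        for values in product(*(self.domains[n] for n in names)):
            assignment = dict(zip(names, values))
            if all(c(assignment) for c in self.constraints):
                return assignment
        return None  # unsatisfiable

# Toy CSP: x, y in {1, 2, 3}, with x < y and x + y == 4
pad = Scratchpad()
pad.declare_var("x", range(1, 4))
pad.declare_var("y", range(1, 4))
pad.add_constraint(lambda a: a["x"] < a["y"])
pad.add_constraint(lambda a: a["x"] + a["y"] == 4)
print(pad.solve())  # → {'x': 1, 'y': 3}
```

Because the engine, not the LLM, performs the search, the final answer is verifiable by construction; the LLM's job reduces to emitting the right sequence of primitive calls.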

Details

Paper ID
lrec2024-main-0532
Pages
pp. 6007-6016
BibKey
raheja-etal-2024-enhancing
Editor
N/A
Publisher
European Language Resources Association (ELRA) and ICCL
ISSN
2522-2686
ISBN
979-10-95546-34-4
Conference
Joint International Conference on Computational Linguistics, Language Resources and Evaluation
Location
Turin, Italy
Date
20–25 May 2024

Authors

  • Tarun Raheja
  • Raunak Sinha
  • Advit Deepak
  • Will Healy
  • Jayanth Srinivasa
  • Myungjin Lee
  • Ramana Kompella
