Learning Deductive Reasoning from Synthetic Corpus based on Formal Logic

The accompanying paper also identifies which aspects of deductive reasoning ability deduction corpora can enhance in LMs and which they cannot.

GitHub Link

The GitHub link is https://github.com/hitachi-nlp/fld

Introduction

"FLD (Formal Logic Deduction) is a project aimed at teaching language models deductive reasoning through synthetic examples rooted in formal logic theory. It offers a solid foundation for learning logical reasoning and presents a challenging benchmark for logical deduction. Even well-tuned T5 language models struggle to solve half of the problems from the complex FLD corpora. The project provides tools like a trainable language model-based prover and a flexible corpus generator, available on the huggingface hub. The creators invite inquiries through GitHub or email for this research outlined in the paper 'Learning Deductive Reasoning from Synthetic Corpus based on Formal Logic'." Furthermore, we identify the aspects of deductive reasoning ability on which deduction corpora can enhance LMs and those on which they cannot.

Content

Formal Logic Deduction (FLD) is a project to teach language models deductive reasoning using synthetically generated examples based on formal logic theory. If a GitHub pull request or issue is not appropriate for any reason, feel free to email terufumi.morishita.wp[at]hitachi.com.
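To make the idea of synthetically generated deduction examples concrete, here is a minimal sketch of how such an example might be constructed. This is not the project's actual corpus generator; the example format (facts, hypothesis, proof) and the single modus-ponens rule are illustrative assumptions only, meant to show the general shape of a formal-logic deduction instance.

```python
import random

# Toy sketch of synthetic deduction-example generation (illustrative only,
# NOT the FLD project's actual generator or data format).

def modus_ponens(a: str, b: str):
    """From 'a' and 'a -> b', conclude 'b'."""
    return (a, f"{a} -> {b}"), b

def generate_example(depth: int, rng: random.Random) -> dict:
    """Build one example as a linear chain of modus-ponens steps."""
    atoms = [f"P{i}" for i in range(depth + 1)]
    facts = [atoms[0]]                      # the starting premise
    proof = []
    for i in range(depth):
        premises, conclusion = modus_ponens(atoms[i], atoms[i + 1])
        facts.append(premises[1])           # add the implication as a fact
        proof.append(f"{premises[0]} & {premises[1]} => {conclusion}")
    rng.shuffle(facts)                      # present facts in random order
    return {"facts": facts, "hypothesis": atoms[-1], "proof": proof}

example = generate_example(depth=3, rng=random.Random(0))
print(example["hypothesis"])   # P3
print(example["proof"][-1])    # P2 & P2 -> P3 => P3
```

A real generator along these lines would sample varied predicates, argument structures, and distractor facts, and serialize the result into natural-language-like statements for the language model to consume.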

Alternatives & Similar Tools

LongLLaMA: handles very long text contexts, up to 256,000 tokens

LongLLaMA is a large language model designed to handle very long text contexts, up to 256,000 tokens. It's based on OpenLLaMA and uses a technique called Focused Transformer (FoT) for training. The repository provides a smaller 3B version of LongLLaMA for free use. It can also be used as a replacement for LLaMA models with shorter contexts.