Reasoning Using Large Language Models
Abstract
It is well known that LLMs have a weakness when it comes to facts. Reasoning, however, involves more than facts alone. Suppose you supply all the relevant facts to an LLM: will the model infer the desired answer from them?
- Can the inferences that LLMs make be considered reasoning?
- If LLMs are truly reasoning, how capable and reliable are they?
- Can their reasoning be made more reliable?
As businesses increasingly push to deploy LLM applications to production, these are important considerations.
This is a recording of a webinar from October 13, 2023.