Apple study exposes major issues in ‘reasoning’ capabilities of LLMs


A recent study conducted by six Apple engineers reveals that the mathematical reasoning capabilities of advanced large language models (LLMs) are surprisingly fragile. Even minor modifications to standard benchmark problems can significantly undermine their ability to produce accurate results.

Kyle Orland for Ars Technica:

The fragility highlighted in these new results helps support previous research suggesting that LLMs' use of probabilistic pattern matching is missing the formal understanding of underlying concepts needed for truly reliable mathematical reasoning capabilities. “Current LLMs are not capable of genuine logical reasoning,” the researchers hypothesize based on these results. “Instead, they attempt to replicate the reasoning steps observed in their training data.”

The results of this new GSM-Symbolic paper aren’t completely new in the world of AI research. Other recent papers have similarly suggested that LLMs don’t actually perform formal reasoning and instead mimic it with probabilistic pattern-matching of the closest similar data seen in their vast training sets.

Still, the new research highlights just how fragile this kind of mimicry can be when the prompt in question pushes it in a direction that doesn’t precisely match any training data. It also highlights the inherent limitations in trying to perform high-level reasoning without any underlying model of the logic or world behind it.
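To make the perturbation idea concrete, here is a minimal sketch of a GSM-Symbolic-style template: the wording of a grade-school math problem is held fixed while names and numbers are drawn symbolically, so each instance shares the same logical structure but a different surface form. The template, names, and values below are hypothetical illustrations, not taken from the paper itself.

```python
import random

# A GSM-Symbolic-style template (hypothetical example): the problem's
# logical structure is fixed, while the name and numbers vary per instance.
TEMPLATE = ("{name} picks {x} apples on Monday and {y} apples on Tuesday. "
            "How many apples does {name} have in total?")

def make_variant(rng: random.Random) -> tuple[str, int]:
    """Instantiate the template and recompute the ground-truth answer,
    so every variant can be graded exactly."""
    name = rng.choice(["Sophie", "Liam", "Ava", "Noah"])
    x, y = rng.randint(2, 20), rng.randint(2, 20)
    question = TEMPLATE.format(name=name, x=x, y=y)
    return question, x + y

if __name__ == "__main__":
    rng = random.Random(0)
    for _ in range(3):
        question, answer = make_variant(rng)
        print(question, "->", answer)
```

A model that genuinely reasoned about the problem would score the same on every variant; the study's finding is that accuracy drops when the surface details shift away from what the model saw in training.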


MacDailyNews Take: As architect and urban planner Arthur Erickson said:

Illusion is needed to disguise the emptiness within.

MacDailyNews Note: The study, “GSM-Symbolic: Understanding the Limitations of Mathematical Reasoning in Large Language Models,” is available as a preprint on arXiv.



