
Might Symbolic AI Unlock Human-like Intelligence?

PhreeNews
Published: November 29, 2025 | Last updated: November 29, 2025, 1:18 pm

Will computers ever match or surpass human-level intelligence, and, if so, how? When the Association for the Advancement of Artificial Intelligence (AAAI), based in Washington DC, asked its members earlier this year whether neural networks, the current star of artificial-intelligence systems, will alone be enough to reach this goal, the overwhelming majority said no. Instead, most said, a heavy dose of an older kind of AI will be needed to get these systems up to par: symbolic AI.

Sometimes referred to as ‘good old-fashioned AI’, symbolic AI is based on formal rules and an encoding of the logical relationships between concepts. Arithmetic is symbolic, for example, as are ‘if–then’ statements and computer coding languages such as Python, along with flow charts or Venn diagrams that map how, say, cats, mammals and animals are conceptually related. Decades ago, symbolic systems were an early front-runner in the AI effort. However, in the early 2010s, they were vastly outpaced by more-flexible neural networks. These machine-learning models excel at learning from huge amounts of data, and underlie large language models (LLMs), as well as chatbots such as ChatGPT.
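To make that flavour concrete, here is a minimal, hypothetical Python sketch (not taken from any system described in this article) of the kind of explicit ‘is-a’ relationships and if–then rules that symbolic AI works with:

# Hypothetical example of symbolic knowledge: explicit 'is-a' links and an if-then rule.
IS_A = {
    "cat": "mammal",
    "mammal": "animal",
}

def is_kind_of(concept: str, category: str) -> bool:
    """Follow 'is-a' links up the hierarchy: cat -> mammal -> animal."""
    while concept in IS_A:
        concept = IS_A[concept]
        if concept == category:
            return True
    return False

def breathes(thing: str) -> bool:
    # IF the thing is an animal THEN it breathes (a toy if-then rule).
    return is_kind_of(thing, "animal")

print(is_kind_of("cat", "animal"))  # True
print(breathes("cat"))              # True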

Now, however, the computer-science community is pushing hard for a better and bolder melding of the old and the new. ‘Neurosymbolic AI’ has become the hottest buzzword in town. Brandon Colelough, a computer scientist at the University of Maryland in College Park, has charted the meteoric rise of the concept in academic papers. These reveal a spike of interest in neurosymbolic AI that started in around 2021 and shows no sign of slowing down.


Plenty of researchers are heralding the trend as an escape from what they see as an unhealthy monopoly of neural networks in AI research, and expect the shift to deliver smarter and more reliable AI.

A better melding of these two methods could lead to artificial general intelligence (AGI): AI that can reason and generalize its knowledge from one situation to another as well as humans do. It could also be useful for high-risk applications, such as military or medical decision-making, says Colelough. Because symbolic AI is transparent and understandable to humans, he says, it doesn’t suffer from the ‘black box’ syndrome that can make neural networks hard to trust.

There are already good examples of neurosymbolic AI, including Google DeepMind’s AlphaGeometry, a system reported last year that can reliably solve maths Olympiad problems: questions aimed at gifted secondary-school students. But understanding how best to combine neural networks and symbolic AI into an all-purpose system is a formidable challenge.

“You’re really architecting this sort of two-headed beast,” says computer scientist William Regli, also at the University of Maryland.

Confrontation

In 2019, computer scientist Richard Sutton posted a short essay entitled ‘The bitter lesson’ on his blog (see go.nature.com/4paxykf). In it, he argued that, since the 1950s, people have repeatedly assumed that the best way to make intelligent computers is to feed them with all of the insights that humans have arrived at about the rules of the world, in fields from physics to social behaviour. The bitter pill to swallow, wrote Sutton, is that time and time again, symbolic methods have been outdone by systems that use a ton of raw data and scaled-up computational power to leverage ‘search and learning’. Early chess-playing computers, for example, that were trained on human-devised strategies were outperformed by those that were simply fed lots of game data.

This lesson has been widely quoted by proponents of neural networks to support the idea that making these systems ever-bigger is the best path to AGI. But many researchers argue that the essay overstates its case and downplays the crucial part that symbolic systems can and do play in AI. For example, the best chess program today, Stockfish, pairs a neural network with a symbolic tree of allowable moves.

Neural nets and symbolic algorithms each have pros and cons. Neural networks are made up of layers of nodes with weighted connections that are adjusted during training to recognize patterns and learn from data. They are fast and creative, but they are also prone to making things up and can’t reliably answer questions beyond the scope of their training data.

Symbolic systems, meanwhile, struggle to encapsulate ‘messy’ concepts, such as human language, that involve huge rule databases that are difficult to build and slow to search. But their workings are transparent, and they are good at reasoning, using logic to apply their general knowledge to fresh situations.

When put to use in the real world, neural networks that lack symbolic knowledge make basic errors: image generators might draw people with six fingers on each hand because they haven’t learnt the general concept that hands typically have five; video generators struggle to make a ball bounce around a scene because they haven’t learnt that gravity pulls things downwards. Some researchers blame such errors on a lack of data or computing power, but others say that the errors illustrate neural networks’ fundamental inability to generalize knowledge and reason logically.

Many argue that adding symbolism to neural nets might be the best, or even the only, way to inject logical reasoning into AI. The global technology firm IBM, for example, is backing neurosymbolic systems as a path to AGI. But others remain sceptical: Yann LeCun, one of the fathers of modern AI and chief AI scientist at tech giant Meta, has said that neurosymbolic approaches are “incompatible” with neural-network learning.

Sutton, who is at the University of Alberta in Edmonton, Canada, and won the 2024 Turing Award, the equivalent of the Nobel prize for computer science, holds firm to his original argument: “The bitter lesson still applies to today’s AI,” he told Nature. This means, he says, that “adding a symbolic, more manually crafted component might be a mistake.”

Gary Marcus, an AI entrepreneur, author and cognitive scientist based in Vancouver, Canada, and one of the most vocal advocates of neurosymbolic AI, tends to frame this difference of opinion as a philosophical battle that is now being settled in his favour.

Others, such as roboticist Leslie Kaelbling at the Massachusetts Institute of Technology (MIT) in Cambridge, say that arguments over which view is right are a distraction, and that people should just get on with whatever works. “I’m a magpie. I’ll do anything that makes my robots better.”

Mix and match

Beyond the fact that neurosymbolic AI aims to meld the benefits of neural nets with the benefits of symbolism, its definition is blurry. Neurosymbolic AI encompasses “a very large universe,” says Marcus, “of which we’ve explored only a tiny bit.”

There are several broad approaches, which people have tried to categorize in various ways. One option highlighted by many is using symbolic systems to improve neural nets. AlphaGeometry is arguably one of the most sophisticated examples of this strategy: it trains a neural net on a synthetic data set of maths problems produced using a symbolic computer language, making the solutions easier to check and ensuring fewer errors. It combines the two elegantly, says Colelough. In another example, ‘logic tensor networks’ provide a way to encode symbolic logic for neural networks. Statements can be assigned a fuzzy-truth value: a number somewhere between 1 (true) and 0 (false). This provides a framework of rules to help the system reason.
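As a rough illustration of the fuzzy-truth idea, the sketch below implements logical operators over truth values between 0 and 1 using one common choice of formulas (the product rules). It is only a toy, not the actual logic-tensor-networks machinery, and the statement scores are made up:

# Toy fuzzy-logic operators: truth values are numbers between 0.0 (false) and 1.0 (true).
def fuzzy_not(a: float) -> float:
    return 1.0 - a

def fuzzy_and(a: float, b: float) -> float:
    return a * b                      # product "t-norm", one common choice

def fuzzy_or(a: float, b: float) -> float:
    return a + b - a * b              # the matching "t-conorm"

def implies(a: float, b: float) -> float:
    return fuzzy_or(fuzzy_not(a), b)  # a -> b rewritten as (not a) or b

# Made-up statement scores, as might be produced by a neural network:
is_cat = 0.9      # "the object is a cat"
is_mammal = 0.95  # "the object is a mammal"

# How well do the scores satisfy the rule "cats are mammals"? Close to 1.0 means nearly satisfied.
print(implies(is_cat, is_mammal))  # 0.955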

Another broad approach does what some would say is the reverse, using neural nets to finesse symbolic algorithms. One problem with symbolic knowledge databases is that they are often so large that they take a very long time to search: the ‘tree’ of all possible moves in a game of Go, for example, contains about 10¹⁷⁰ positions, which is unfeasibly large to crunch through. Neural networks can be trained to predict the most promising subset of moves, allowing the system to cut down how much of the ‘tree’ it has to search, and thus speeding up the time it takes to choose the best move. That’s what Google’s AlphaGo did when it famously outperformed a Go grandmaster.
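The sketch below illustrates the general idea of neural-guided pruning, not AlphaGo’s actual algorithm: a placeholder policy_scores function stands in for a trained network that rates how promising each move looks, and a simple recursive search expands only the highest-rated ones:

# Toy neural-guided search: expand only the moves a (stand-in) policy network rates highest.
from typing import Callable, Dict, List

def guided_search(state,
                  legal_moves: Callable[[object], List[object]],
                  apply_move: Callable[[object, object], object],
                  policy_scores: Callable[[object], Dict[object, float]],  # "neural" part
                  evaluate: Callable[[object], float],
                  depth: int,
                  top_k: int = 3) -> float:
    """Negamax-style search that looks at only the top_k most promising moves per position."""
    moves = legal_moves(state)
    if depth == 0 or not moves:
        return evaluate(state)
    scores = policy_scores(state)  # the network's guess at how promising each move is
    moves = sorted(moves, key=lambda m: scores.get(m, 0.0), reverse=True)
    best = float("-inf")
    for move in moves[:top_k]:     # symbolic tree search, but heavily pruned
        value = -guided_search(apply_move(state, move), legal_moves, apply_move,
                               policy_scores, evaluate, depth - 1, top_k)
        best = max(best, value)
    return best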

An alternative idea is to insert symbolics into the middle of an LLM’s workflow, in the same way that consulting a calculator might help a person to solve a maths puzzle. Using rules-based systems during crucial reasoning steps can help to keep LLMs from going off-track, many argue. Projects including the Program-Aided Language (PAL) model, for example, use an LLM to convert natural-language tasks into Python code, use that symbolic code to solve the problem, and then interpret that solution back into natural language with an LLM.
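A stripped-down version of that kind of pipeline might look like the sketch below; call_llm is a placeholder for whichever LLM API is used, and the prompts are much simpler than the real PAL project’s:

# Sketch of a PAL-style loop: neural step, symbolic step, neural step.
def call_llm(prompt: str) -> str:
    """Placeholder: send the prompt to whatever language model you use and return its reply."""
    raise NotImplementedError

def solve_with_pal(question: str) -> str:
    # 1. Neural: ask the LLM to translate the word problem into Python code.
    code = call_llm("Write Python that computes the answer and stores it in `answer`:\n" + question)

    # 2. Symbolic: run the generated code, so Python (not the LLM) does the arithmetic.
    namespace: dict = {}
    exec(code, namespace)  # in practice, run untrusted generated code in a sandbox
    answer = namespace["answer"]

    # 3. Neural: have the LLM phrase the exact computed result as a natural-language reply.
    return call_llm(f"Question: {question}\nComputed answer: {answer}\nReply in one sentence.")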

Jiayuan Mao, an AI researcher who has just completed her PhD at MIT with Kaelbling and is on her way to the University of Pennsylvania in Philadelphia, has had success in using neurosymbolic AI to make robot training more efficient. Her strategy is to use a neural network to recognize objects (such as a red rubber ball or a green glass cube) in a visual field and then use a symbolic algorithm to reason through relational questions about those objects (such as ‘is the rubber object behind the green object?’). A pure neural network would need 700,000 examples in its training data to achieve 99% accuracy on this task, she says. But by adding symbolic methods, she needs just 10% of that amount. “Even if you use 1%, you can still get 92% accuracy, which is quite impressive,” she says. A similar neurosymbolic system she created trounced a neural-network-based system at guiding a robot that encountered unfamiliar objects while washing dishes or making tea.
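The division of labour Mao describes can be sketched roughly as follows (this is an illustration, not her system; the ‘perception’ output is hard-coded here where a vision network would normally supply it):

# Made-up "perception" output; in practice a neural network would produce these attributes.
detected_objects = [
    {"material": "rubber", "colour": "red",   "position": (2.0, 5.0)},
    {"material": "glass",  "colour": "green", "position": (3.0, 2.0)},
]

def find(objects, **attributes):
    """Symbolic lookup: return the objects matching every requested attribute value."""
    return [o for o in objects if all(o.get(k) == v for k, v in attributes.items())]

def is_behind(a, b) -> bool:
    """Toy spatial rule: 'behind' means a larger depth coordinate."""
    return a["position"][1] > b["position"][1]

# "Is the rubber object behind the green object?"
rubber = find(detected_objects, material="rubber")[0]
green = find(detected_objects, colour="green")[0]
print(is_behind(rubber, green))  # True, given these made-up coordinates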

Lost in translation

One of the big challenges for symbolic AI is how to encode often slippery human knowledge within a language of logic and rules. One of the earliest attempts was a project called Cyc, started by computer scientist Doug Lenat in 1984 and later overseen by his AI firm Cycorp, based in Austin, Texas. The intent was to explicitly articulate common-sense facts and rules of thumb, such as ‘a daughter is a child’, ‘people love their children’ and ‘seeing someone you love makes you smile’. The project’s language, CycL, uses symbols (for logical operators such as IF, AND, OR and NOT) to express logical relationships so that an inference engine can easily draw conclusions, such as ‘seeing your child would make you smile’.
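The sketch below shows that same chain of reasoning written as plain Python rather than CycL (the name ‘Alice’ and the exact rule format are made up for illustration; a real inference engine such as Cyc’s is far richer):

# Toy forward-chaining inference: keep applying rules until no new facts appear.
facts = {"Alice is your daughter"}

rules = [
    ({"Alice is your daughter"}, "Alice is your child"),           # a daughter is a child
    ({"Alice is your child"},    "you love Alice"),                # people love their children
    ({"you love Alice"},         "seeing Alice makes you smile"),  # seeing someone you love makes you smile
]

def forward_chain(facts, rules):
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(sorted(forward_chain(facts, rules)))
# ['Alice is your child', 'Alice is your daughter', 'seeing Alice makes you smile', 'you love Alice']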

Cyc, which now holds more than 25 million axioms, has been used in a variety of AI efforts and has inspired other initiatives, such as Google’s Knowledge Graph, which holds more than 500 billion facts. Today, ‘knowledge engineers’ use a similar strategy to gather human-generated facts and relationships, build specialised databases and integrate them with AI.

Symbolic databases can help an AI to generalize knowledge from one situation and apply it in another, says Kaelbling, which is a powerful way to make reasoning more efficient. But there is a trade-off in accuracy when dealing with subjects for which there are many exceptions to the ‘rules’: not all people love their children, for example, and seeing something you love doesn’t always make you smile. Symbolics should be incorporated only when it is helpful to do so, she says. “Cyc was trying to turn common sense into math. That’s almost certainly a bad idea,” Kaelbling says.

In 2023, Marcus posted a paper with Lenat laying out what LLMs can learn from Cyc. As part of that work, the duo asked GPT-3, an early LLM of the kind that underpins ChatGPT, to write CycL statements that encode the logical relationships in the sentence “Did you touch a blue object located in the capital of France on September 25th, 2022?” The response “at first amazed the Cyc team”, they report, because it generated what seemed to be the right kind of statements in the right kind of language. But on closer inspection, GPT-3 made many crucial errors, they write, such as concluding that “the thing that is touching the blue object is the date”.

“It looks like it’s good, it looks like it should work, but it’s completely rubbish,” says Colelough. This shows that it is pointless to simply ram together a symbolic engine and a neural net, he says. “Then you might as well just not have the neurosymbolic system.”

What’s needed, Colelough says, is much more research on AI ‘metacognition’: how AI monitors and conducts its own thinking. That could enable AI ‘conductors’ to oversee a more sophisticated integration of the two paradigms, rather than having different engines simply take turns. Colelough says AlphaGeometry does this well, but in a limited context. If a flexible conductor that works for any domain of knowledge could be developed, “that would be AGI for me”, Colelough says.

There is much more work to do. New hardware and chip architectures might be needed to run neurosymbolic AI efficiently. In time, other forms of AI, perhaps based on neural networks, symbolic AI, both or neither, might become more exciting, such as quantum AI, a fledgling field that seeks to exploit the properties of the quantum world to improve AI.

For Mao, the ultimate goal is to leverage neural networks’ learning abilities to create rules, categories and paths of reasoning that humans aren’t yet aware of. “The hope is that eventually we can have systems that also invent their own symbolic representation and symbolic algorithms, so that they can really go beyond what a human knows,” she says. That would be like a computer discovering an as-yet-unknown mathematical or physical concept, perhaps analogous to π or the property of mass, and then encoding the new concept to help to expand knowledge. “We need to study how computers can teach humans, not how humans can teach machines.”

This article is reproduced with permission and was first published on November 25, 2025.
