• LRM moving from FOM to TCS [Lean Prover] (Was: Is Causal AI simply ReLU (Geoffrey E. Hinton 2010))

    From Mild Shock@[email protected] to comp.theory on Fri Feb 27 10:21:16 2026
    From Newsgroup: comp.theory


    Hi,

    Large Reasoning Models (LRM) seem to move from
    Foundations of Mathematics (FOM) to Theoretical
    Computer Science (TCS). FOM typically gives you
    "white science" mathematics, with sets and infinity,
    and if you are lucky a little recursion theory.
    Fun fact: TCS is even more "white".

    An interesting paper in this regard:

    Lean Meets Theoretical Computer Science:
    Scalable Synthesis of Theorem Proving Challenges
    in Formal-Informal Pairs
    Terry Jingchen Zhang et al. - 2025
    https://arxiv.org/abs/2508.15878v1

    One swallow does not make a summer?

    But it's probably a necessary step. The above
    paper uses Busy Beaver and Integer Constraints
    as examples. Which logical frameworks even apply?
    Is it enough to have a "total function" theory
    layer, or does TCS need more? TCS can be heavy
    on all sorts of discrete and non-discrete
    mathematics.
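
    For illustration, here is a minimal Lean 4 sketch of such a
    formal-informal pair over an integer constraint (my own toy
    example, assuming plain Lean 4 without mathlib, and not the
    paper's actual benchmark format):

    -- Informal: "There is a natural number greater than 5
    -- whose square is 36."
    example : ∃ n : Nat, 5 < n ∧ n * n = 36 :=
      ⟨6, by decide⟩

    -- Informal: "No natural number below 3 has square 2."
    -- Bounded constraints like these fall to decision
    -- procedures; unbounded ones need genuine proof, which
    -- is where a "total function" theory layer gets tested.
    example : ∀ n : Nat, n < 3 → n * n ≠ 2 := by decide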

    Bye

    Mild Shock schrieb:
    Hi,

    Geoffrey E. Hinton, the Nobel Prize winner
    for AI, was already beating the drum
    for ReLU in 2010:

    Rectified Linear Units Improve Restricted Boltzmann Machines
    Vinod Nair & Geoffrey E. Hinton - 2010
    https://www.cs.toronto.edu/~fritz/absps/reluICML.pdf

    Because ANNs (Artificial Neural Networks) were originally
    designed with other functions, e.g. with the logistic function:

    "An artificial neuron is a mathematical function conceived
    as a model of a biological neuron in a neural network."
    https://en.wikipedia.org/wiki/Artificial_neuron
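
    For illustration, a minimal Python sketch (assuming NumPy) of
    such an artificial neuron, with the logistic function and ReLU
    as interchangeable activations:

    import numpy as np

    def logistic(x):
        # Classic squashing activation: 1 / (1 + e^(-x))
        return 1.0 / (1.0 + np.exp(-x))

    def relu(x):
        # Rectified linear unit, as in Nair & Hinton 2010
        return np.maximum(0.0, x)

    def neuron(w, x, b, activation):
        # An artificial neuron: activation of weighted sum plus bias
        return activation(np.dot(w, x) + b)

    w, x = np.array([0.5, -0.3]), np.array([1.0, 2.0])
    print(neuron(w, x, 0.1, logistic))  # 0.5
    print(neuron(w, x, 0.1, relu))      # 0.0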

    If you populate additive factor graphs with log P,
    you basically get multiplicative factor graphs.
    So an ANN can express belief networks, right?
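
    A quick numeric check of that log P remark (a Python sketch,
    assuming NumPy): summing log-potentials is the same as
    multiplying potentials, so an additive network in the log
    domain mirrors a multiplicative factor graph:

    import numpy as np

    # Two local potentials of a toy factor graph
    p1, p2 = 0.8, 0.25

    # Multiplicative combination, as in belief networks
    product = p1 * p2

    # Additive combination in the log domain
    log_sum = np.log(p1) + np.log(p2)

    # Summing logs multiplies the potentials
    assert np.isclose(np.exp(log_sum), product)
    print(product, np.exp(log_sum))  # 0.2 0.2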

    Bye

    P.S.: What is all the hype about Causal AI, and
    the Ladder of Causation à la Judea Pearl?

    Causal AI – the next gen AI
    Prof. Sotirios A. Tsaftaris - 2025
    https://www.youtube.com/watch?v=IelslFzdsYw

    --- Synchronet 3.21c-Linux NewsLink 1.2
  • From wij@[email protected] to comp.theory on Tue Mar 3 08:39:18 2026
    From Newsgroup: comp.theory

    On Fri, 2026-02-27 at 10:21 +0100, Mild Shock wrote:

    [...]
    P!=NP
    https://sourceforge.net/projects/cscall/files/MisFiles/PNP-proof-en.txt/download
    Figure out what P!=NP means, otherwise whatever
    understanding (you said) is superficial.
    --- Synchronet 3.21d-Linux NewsLink 1.2
  • From Mild Shock@[email protected] to comp.theory on Fri Mar 6 01:08:56 2026
    From Newsgroup: comp.theory

    Hi,

    Our Chinese Nostradamus claims there are at least two
    methods by which humans deal with trauma aka contradiction,
    soul repair or split brain:

    Great Books #5: The Odyssey
    https://www.youtube.com/watch?v=gXlcR7uHHdA

    Brain plasticity, or neuroplasticity, is the brain's
    remarkable ability to reorganize its structure, functions,
    and neural connections throughout life in response to
    learning, experience, or injury. It enables adaptation,
    memory formation, and recovery from damage by creating
    new synaptic connections and pathways. This dynamic
    process continues into adulthood, allowing for, but
    also influenced by, factors like exercise, sleep,
    and mental stimulation. Does this adaptation or
    maladaptation need consistency, or, asked differently,
    what do structure and function mean?

    Bye

    Mild Shock schrieb:

    [...]


    --- Synchronet 3.21d-Linux NewsLink 1.2
  • From Mild Shock@[email protected] to comp.theory on Fri Mar 6 02:23:57 2026
    From Newsgroup: comp.theory

    Hi,

    Now I am reading Otto Ludwig Binswanger (born 14 October
    1852 in Scherzingen, Münsterlingen; died 15 July 1929 in
    Kreuzlingen), a Swiss psychiatrist and neurologist who can
    be counted among the neuropsychiatry school.

    "Läßt man alle erkenntnistheoretischen Erwägungen beiseite, so
    wird man notwendigerweise zu der Anschauung gelangen, daß
    die psychischen Vorgänge zwar regelmäßig von einem physischen Kräftewechsel, d. h. von materiellen Hirnrindenprozessen, begleitet
    sind, daß aber jedes dieser Gebiete selbständig für sich
    besteht (psychophysischer Parallelismus)." https://archive.org/details/binswanger-siemerling.-lehrbuch-der-psychiatrie-3.-aufl.-1911/page/2/mode/2up

    On the same page he describes an Aristotle-style learning
    model. But the interesting thing is that around 12 months
    ago DeepSeek R1 caused a furore with compartmentalized
    learning and inferencing, more coarse-grained than mixture
    of experts and end-user sensing, i.e. GRPO:

    DeepSeekMath: Pushing the Limits of Mathematical
    Reasoning in Open Language Models
    https://arxiv.org/abs/2402.03300
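
    For illustration, a minimal Python sketch (assuming NumPy) of
    the group-relative advantage at the heart of GRPO as described
    in that paper; the example rewards and the epsilon are made up:

    import numpy as np

    def grpo_advantages(rewards):
        # GRPO normalizes each sampled completion's reward against
        # the group sampled for the same prompt, replacing the
        # critic (value network) of PPO with a group baseline.
        r = np.asarray(rewards, dtype=float)
        return (r - r.mean()) / (r.std() + 1e-8)

    # Four completions for one prompt, scored by a reward model
    print(grpo_advantages([1.0, 0.0, 0.5, 0.5]))
    # -> [ 1.414 -1.414  0.     0.   ] (approximately)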

    Have Fun!

    Bye

    Mild Shock schrieb:
    [...]
    --- Synchronet 3.21d-Linux NewsLink 1.2