Hi,
Geoffrey E. Hinton, the Nobel Prize winner
for his work on AI, was already beating the drum
for ReLU in 2010:
Rectified Linear Units Improve Restricted Boltzmann Machines
Vinod Nair & Geoffrey E. Hinton - 2010 https://www.cs.toronto.edu/~fritz/absps/reluICML.pdf
This matters because ANNs (Artificial Neural Networks) were
originally designed with other activations, e.g. the logistic function:
"An artificial neuron is a mathematical function conceived
as a model of a biological neuron in a neural network." https://en.wikipedia.org/wiki/Artificial_neuron
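The contrast between the two activations can be sketched in a few lines (a minimal illustration, not code from the paper):

```python
import math

def logistic(x):
    # Classic sigmoid activation: squashes any input into (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def relu(x):
    # Rectified Linear Unit: identity for positive input, zero otherwise
    return max(0.0, x)

# The logistic function saturates for large |x|, so its gradient
# vanishes there; ReLU keeps a constant gradient of 1 for x > 0.
for x in [-5.0, 0.0, 5.0]:
    print(x, round(logistic(x), 4), relu(x))
```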
If you populate additive factor graphs with log P,
you basically get multiplicative factor graphs.
So an ANN can express belief networks, right?
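The log P remark above can be made concrete with a tiny two-node belief network (a sketch under the usual Bayes-net factorization, not any specific library):

```python
import math

# A tiny belief network A -> B: joint P(a, b) = P(a) * P(b | a).
p_a = 0.3
p_b_given_a = 0.8

# Multiplicative form: product of factors.
joint = p_a * p_b_given_a

# Additive form: sum of log-factors, as in an additive factor graph.
log_joint = math.log(p_a) + math.log(p_b_given_a)

# Exponentiating the additive result recovers the multiplicative one
# (up to floating point), since log turns products into sums.
assert abs(math.exp(log_joint) - joint) < 1e-12
print(joint)
```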
Bye
P.S.: What is all the hype about Causal AI, and
the Ladder of Causation à la Judea Pearl?
Causal AI – the next gen AI
Prof. Sotirios A. Tsaftaris - 2025 https://www.youtube.com/watch?v=IelslFzdsYw
Hi,
P!=NP https://sourceforge.net/projects/cscall/files/MisFiles/PNP-proof-en.txt/download
Large Reasoning Models (LRMs) seem to be moving from
the Foundations of Mathematics (FOM) towards Theoretical
Computer Science (TCS). FOM typically gives
you "white science" mathematics, with sets and
infinity, and if you are lucky a little recursion
theory. Fun fact: TCS is even more "white".
An interesting paper in this regard:
Lean Meets Theoretical Computer Science:
Scalable Synthesis of Theorem Proving Challenges
in Formal-Informal Pairs
Terry Jingchen Zhang et al. - 2025
https://arxiv.org/abs/2508.15878v1
One swallow does not make a summer?
But it's probably a necessary step. The above
paper uses Busy Beaver and integer constraints
as examples. Which logical frameworks even
apply here? Is it enough to have a "total function"
theory layer, or does TCS need more? TCS can
be heavy on all sorts of discrete and
non-discrete mathematics.
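The Busy Beaver example can be made concrete: here is a minimal sketch (my own illustration, not code from the paper) simulating the known 2-state, 2-symbol champion machine, which halts after 6 steps leaving 4 ones on the tape:

```python
# Transition table for the 2-state, 2-symbol Busy Beaver champion:
# (state, symbol) -> (write, move, next_state); "H" is the halt state.
RULES = {
    ("A", 0): (1, +1, "B"),
    ("A", 1): (1, -1, "B"),
    ("B", 0): (1, -1, "A"),
    ("B", 1): (1, +1, "H"),
}

def run_busy_beaver():
    # Sparse tape: unwritten cells default to 0.
    tape, pos, state, steps = {}, 0, "A", 0
    while state != "H":
        write, move, state = RULES[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += move
        steps += 1
    return steps, sum(tape.values())

steps, ones = run_busy_beaver()
print(steps, ones)  # 6 4: S(2) = 6 steps, Sigma(2) = 4 ones
```

Note that totality is exactly what fails in general here: whether an arbitrary such machine halts is undecidable, which is why a "total function" theory layer alone seems too weak for TCS.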
Bye
--- Synchronet 3.21d-Linux NewsLink 1.2
Hi,
Our Chinese Nostradamus claims there are at least
two ways humans deal with trauma, a.k.a. contradiction:
soul repair or split brain:
Great Books #5: The Odyssey
https://www.youtube.com/watch?v=gXlcR7uHHdA
Brain plasticity, or neuroplasticity, is the brain's
remarkable ability to reorganize its structure, functions,
and neural connections throughout life in response to
learning, experience, or injury. It enables adaptation,
memory formation, and recovery from damage by creating
new synaptic connections and pathways. This dynamic
process continues into adulthood and is influenced by
factors like exercise, sleep, and mental stimulation.
Does this adaptation or maladaptation need consistency?
Or, asked differently: what do structure and function mean?
Bye