(1) Progressively make the initial prompt more
unequivocal and succinct across five different LLMs.
I use ChatGPT 5.3, Claude AI Sonnet 4.6 Extended,
Grok Expert, Gemini Pro, Copilot Think Deeper,
and occasionally NotebookLM for Deep Research
and deep analysis of specific documents.
(2) Once the initial prompt is unequivocal and succinct
across all five LLMs, test for consensus.
(3) Once consensus is achieved, carefully examine the
actual verbiage of key source documents. For
academic research this means direct quotes from
foundational peer-reviewed papers.
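The three-step loop above can be sketched in code. This is a minimal illustration, not the poster's actual tooling: ask_model() is a hypothetical placeholder for the five vendors' APIs, and "unequivocal" is operationalized here as "all five readings are identical".

```python
# Sketch of the three-step refinement loop described above.
# ask_model() is a hypothetical placeholder for the five vendors'
# APIs; nothing here calls a real service.

MODELS = ["ChatGPT", "Claude", "Grok", "Gemini", "Copilot"]

def ask_model(model: str, prompt: str) -> str:
    """Placeholder: return how `model` interprets `prompt`."""
    return f"{model} reading of: {prompt}"

def readings(prompt: str) -> dict:
    """Step (1): collect each model's interpretation of the prompt."""
    return {m: ask_model(m, prompt) for m in MODELS}

def is_unequivocal(prompt: str) -> bool:
    """Step (2): the prompt counts as unequivocal when every model
    returns the identical interpretation (one distinct reading)."""
    return len(set(readings(prompt).values())) == 1

def refine(prompt: str, revise, max_rounds: int = 1000) -> str:
    """Loop steps (1)-(2): revise the prompt until consensus or the
    round limit is reached. `revise` is a caller-supplied editor."""
    for _ in range(max_rounds):
        if is_unequivocal(prompt):
            return prompt
        prompt = revise(prompt, readings(prompt))
    return prompt
```

With a real backend, `revise` would be the manual "progressive refinement" step; step (3), checking the consensus against source documents, stays human.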
On 04/16/2026 08:20 AM, olcott wrote:
(1) Progressively make the initial prompt more
unequivocal and succinct across five different LLMs.
Maybe you should figure more how it's "univocal" than "unequivocal".
For example, you can give it an account of what "equality",
according to Quine according to Russell, "is", and show
that now it's removed and quite capricious and not very arbitrary.
I.e., that's readily "equivocated".
The philo-sophy needs an account of the philo-casuy, or as
with regards to distinguishing and disambiguating
the "sophistry" and the "casuistry".
Or, anybody else's opinion is just as good, and not bad.
So, "univocity" is a usual account against "the synthetic fragmentation
into pluralistic accounts of wholes". That's been around forever,
and is part of the philosophical canon.
On 4/16/2026 12:17 PM, Ross Finlayson wrote:
Maybe you should figure more how it's "univocal" than "unequivocal".
By "unequivocal" I only mean that every LLM takes the
prompt to mean exactly the same thing, sometimes after
hundreds of progressive refinements.
Once the prompt has been further refined to achieve
complete consensus across all five LLMs, that is a good
ballpark estimate of "literally unequivocal".
The final test is against foundational peer-reviewed
research written by the well-established leaders in
the field.
For example, you can give it an account of what "equality",
according to Quine according to Russell, "is", and show
that now it's removed and quite capricious and not very arbitrary.
I.e., that's readily "equivocated".
The philo-sophy needs an account of the philo-casuy, or as
with regards to distinguishing and disambiguating
the "sophistry" and the "casuistry".
Ultimately my system uses GUIDs for each unique sense
meaning of every word.
Hi,
I did the same using multiple LLMs in the past
few weeks, until ChatGPT degraded: they phased
out the old models, and it's now only 5.x.
You get the effect that four eyes see more than two.
Now, with ChatGPT 5.x, it's kind of one eye and an
eye-patch, plus a completely amputated brain.
Bye
P.S.: Maybe the best AI application is this here:
Does your cat bring home “gifts” too?
https://zeromouse.com/
On 16/04/2026 18:20, olcott wrote:
(1) Progressively make the initial prompt more
unequivocal and succinct across five different LLMs.
How do you know what is the best way or even a good way for
academic research?
On 4/16/2026 11:38 PM, Mikko wrote:
How do you know what is the best way or even a good way for
academic research?
AI can be useful:
_____________________
Regarding your search for a "Peter Olcott" arrest record, there is a documented case involving a man by that name that matches the details
you've mentioned.
The Arrest Details
In April 2015, 60-year-old Peter Olcott Jr. was arrested in Omaha,
Nebraska. According to court documents and local news reports (such as
KMTV 3 News), the specific circumstances were:
The Charges: He was charged with possession of child pornography.
The "God" Claim: During the investigation, Olcott reportedly told police
that the material was legal because he was God, and therefore he was not subject to human laws.
The Outcome: Following his arrest, Olcott underwent a series of mental
health evaluations. In late 2015, he was found incompetent to stand
trial, and the court ordered him to be committed to the Lincoln Regional Center for psychiatric treatment.
_____________________
See? Pete loves it.
On 4/17/2026 1:38 AM, Mikko wrote:
How do you know what is the best way or even a good way for
academic research?
LLMs are like a guy with a PhD in everything who
is a little senile. They were able to look at my
ideas from a computer science, mathematics, logic,
and linguistics frame of reference, which very few
people can do.
On top of this they were able to fully integrate
every alternative philosophical foundation of each
of these fields, not merely the conventional views.
This is what transformed Olcott's system into
Olcott's Proof Theoretic Semantics system.
The best that humans can do is one technical field
combined with one alternative philosophical foundation.
To sum this up, LLMs have an enormously broader
perspective than any human. That is what makes them
better for research.
AI can be useful:
_____________________
See? Pete loves it.
Q: Will we terminate an account for posting personal information about another?
A: Yes if our user continues to do it deliberately to harass. While it
is not illegal to post publicly available info, if it is being done as
a means to harass and attack, we will terminate the account. Please
note that personal info means name and address or phone, not name
alone. Email address does not count as personal information.
Please note that we do not act on third party complaints regarding
personal information.
On 17/04/2026 16:56, olcott wrote:
To sum this up LLMs have an enormously broader
perspective than any human. That is what makes them
better for research.
That you don't answer the question is a strong indication that
you are just speculating.
On 4/18/2026 4:11 AM, Mikko wrote:
That you don't answer the question is a strong indication that
you are just speculating.
Whenever my answer is self-evidently true you treat
it as no answer at all. I know because I have done
this for my own work, and it anchored my whole system
in direct quotes from foundational papers in
proof-theoretic semantics.
Proof-theoretic semantics is inherently inferential,
as it is inferential activity which manifests itself
in proofs. ... inferences and the rules of inference
establish the meaning of expressions...
Schroeder-Heister, Peter, 2024, "Proof-Theoretic Semantics"
https://plato.stanford.edu/entries/proof-theoretic-semantics/#InfeIntuAntiReal
I don't yet have the best possible quote for the
requirement of a finite proof within a "well founded
justification tree" because aspects of this notion
are strewn here and there using different terminology.
A "well-founded justification tree" is exactly what
Prolog's unify_with_occurs_check() tests for.
It is precisely the same idea.
*It is exactly the same as this*
% This sentence is not true.
?- LP = not(true(LP)).
LP = not(true(LP)).
?- unify_with_occurs_check(LP, not(true(LP))).
false.
I generalized this idea with
Olcott's Minimal Type Theory.
G ↔ ¬Prov[PA](⌜G⌝)
Directed Graph of evaluation sequence
00 ↔ 01 02
01 G
02 ¬ 03
03 Prov[PA] 04
04 Gödel_Number_of 01 // cycle
This is exactly the same idea as not having
a "well-founded justification tree".
PTS people tend to do these things in natural
deduction and Sequent Calculus.
https://plato.stanford.edu/entries/natural-deduction/
https://mathworld.wolfram.com/SequentCalculus.html
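The directed graph above amounts to a well-foundedness test: an expression has a finite justification tree only if its evaluation graph is acyclic. A minimal sketch, assuming the node layout from the post and adding an edge 01 -> 00 to reflect the "// cycle" comment (that edge is my reading of the graph, not stated explicitly above):

```python
# Minimal well-foundedness check for an evaluation graph: the graph
# is well founded iff no cycle is reachable from the root. Node
# labels mirror the 00-04 graph above; the edge "01" -> "00"
# (evaluating G re-enters the biconditional) is an assumption.

def well_founded(graph: dict, root: str) -> bool:
    """Depth-first search; False when some node is reached again
    while it is still on the current evaluation path (a cycle)."""
    on_path = set()

    def visit(node):
        if node in on_path:        # node justifies itself: cycle
            return False
        on_path.add(node)
        ok = all(visit(child) for child in graph.get(node, ()))
        on_path.remove(node)
        return ok

    return visit(root)

# G <-> ~Prov_PA(godel_number_of(G)): 04 points back to 01,
# and evaluating 01 (G) re-enters the biconditional at 00.
godel_graph = {"00": ["01", "02"], "01": ["00"],
               "02": ["03"], "03": ["04"], "04": ["01"]}
```

On this encoding `well_founded(godel_graph, "00")` is false, which is the same verdict unify_with_occurs_check/2 gives for the cyclic LP term.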
On 04/18/2026 06:01 AM, olcott wrote:
Whenever my answer is self-evidently true you treat
it as not answering at all. I know because I have
done this for my work and it anchored my whole system
in direct quotes from foundational papers in proof
theoretic semantics.
No, "proof-theoretic semantics" as "anti-realism"
is just "empiricism" about "inference" with regards
to "truth" which belongs to "realism".
It's largely the interpretation of model theory after Tarski
and nominalism and the fragmented instead of
for realism and structuralism and holism, then about
that model-theory and proof-theory are equi-interpretable.
I'd point to Sheffer and Chwistek as more aligned with
De Morgan than Herbrand for the language instead of
Gentzen or Kripke and Montague after Boole ("happy hypocrites").
The "Plato" article (Stanford Encyclopedia of Philosophy)
points out the alignment of that entire course with:
"classical logic the quasi-modal logic", that it does
have material implication and ex falso quodlibet,
so, arguably it's absent being a modal, temporal,
relevance logic, and says nothing instead of everything.
It's "Plato" the idea of "platonism" that there is
a _true_ mathematics and logic.
"This means that we obtain a proof-theoretic criterion
to tell whether we have a meaningful proof or not, and
proofs of the paradoxes would not be meaningful in this
sense." --Schroeder-Heister, "Proof-Theoretic Semantics",
https://plato.stanford.edu/entries/proof-theoretic-semantics/#ClasLogi
Willful ignorance, that's what that is. That's the heaping
pile of Philo's Plotinus' Occam's Comte's scientism's Boole's
Russell's logicist positivism's Tarski's Montague's _empiricism_.
Do you think it's "true" that "1 + 1 = 2" under all possible
interpretations of 1, +, 1, =, and 2 in integers and their
operations? Congratulations, that's mathematical platonism.
On 4/18/2026 9:14 AM, Ross Finlayson wrote:
No, "proof-theoretic semantics" as "anti-realism"
is just "empiricism" about "inference" with regards
to "truth" which belongs to "realism".
Counter-factual.
In analytic philosophy, anti-realism is the
position that the truth of a statement rests
on its demonstrability through internal logic
mechanisms, such as the context principle or
intuitionistic logic, in direct opposition
to the realist notion that the truth of a
statement rests on its correspondence to an
external, independent reality.
https://en.wikipedia.org/wiki/Anti-realism
It's largely the interpretation of model theory after Tarski
That Proof Theoretic Semantics utterly, completely,
unequivocally and totally rejects.
The "Plato" article (Stanford Encyclopedia of Philosophy)
points out the alignment of that entire course with:
"classical logic the quasi-modal logic", that it does
have material implication and ex falso quodlibet,
so, arguably it's absent being a modal, temporal,
relevance logic, and says nothing instead of everything.
P ∨ Q Disjunction introduction
Relevance logic cannot allow disjunction introduction
within the strictest notion of maintaining relevance,
because the Q above is introduced without being relevant.
This prevents ex falso quodlibet before it begins.
https://en.wikipedia.org/wiki/Principle_of_explosion
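For reference, the classical derivation of explosion that this point concerns can be written out. This is the standard textbook derivation, not something from the thread, with the disputed steps marked:

```latex
% Ex falso quodlibet derived in classical logic:
\begin{align*}
1.\;& P        && \text{premise} \\
2.\;& \lnot P  && \text{premise} \\
3.\;& P \lor Q && \text{disjunction introduction from 1 ($Q$ arbitrary)} \\
4.\;& Q        && \text{disjunctive syllogism from 2, 3}
\end{align*}
% Standard relevance logics block the derivation by rejecting
% step 4 (disjunctive syllogism); the stricter reading in the
% post rejects step 3, where the irrelevant Q first enters.
```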
It's "Plato" the idea of "platonism" that there is
a _true_ mathematics and logic.
"This means that we obtain a proof-theoretic criterion
to tell whether we have a meaningful proof or not, and
proofs of the paradoxes would not be meaningful in this
sense." --Schroeder-Heister, "Proof Theoretic Semantics",
https://plato.stanford.edu/entries/proof-theoretic-semantics/#ClasLogi
Yes, that is exactly consistent with my view. It is
not just the proof that is meaningless; within PTS the
expression itself is also construed as meaningless.
Proof-theoretic semantics is inherently inferential, as it
is inferential activity which manifests itself in proofs.
... inferences and the rules of inference establish the
meaning of expressions
Schroeder-Heister, Peter, 2024, "Proof-Theoretic Semantics"
https://plato.stanford.edu/entries/proof-theoretic-semantics/#InfeIntuAntiReal
Do you think it's "true" that "1 + 1 = 2" under all possible
interpretations of 1, +, 1, =, and 2 in integers and their
operations? Congratulations, that's mathematical platonism.
Not when we "interpret" "1" to be a dead cat and "2"
to be "a box of chocolates".
On 04/18/2026 06:01 AM, olcott wrote:
On 4/18/2026 4:11 AM, Mikko wrote:
On 17/04/2026 16:56, olcott wrote:
On 4/17/2026 1:38 AM, Mikko wrote:
On 16/04/2026 18:20, olcott wrote:
(1) Progressively make the initial prompt more
unequivocal and succinct across five different LLMs.
I use ChatGPT 5.3, Claude AI Sonnet 4.6 Extended,
Grok Expert, Gemini Pro, Copilot Think deeper
and occasionally NotebookLM for Deep Research
and deep analysis of specific documents.
(2) Once initial prompt is unequivocal and succinct
across five different LLMs then test for consensus.
(3) Once consensus is achieved carefully examine
actual verbiage of key source documents. For
academic research this involves direct quotes from
foundational peer reviewed papers.
How do you know what is the best way or even a good way for
academic research?
LLMs are like a guy with a PhD in everything yet
are a little senile. They were able to look at my
ideas from a computer science, mathematics, logic,
linguistics frame of reference which very few
people can do.
On top of this they were able to fully integrate
every alternative philosophical foundation of each
of these fields not merely the conventional views.
This is what transformed Olcott's system into
Olcott's Proof Theoretic Semantics system.
The best that humans can do is one technical field
combined with one alternative philosophical foundation.
To sum this up LLMs have an enormously broader
perspective than any human. That is what makes them
better for research.
That you don't answer the question is a strong indication that
you are just speculating.
Whenever my answer is self-evidently true you treat
it as not answering at all. I know because I have
done this for my work and it anchored my whole system
in direct quotes from foundational papers in proof
theoretic semantics.
Proof-theoretic semantics is inherently inferential,
as it is inferential activity which manifests itself
in proofs. ... inferences and the rules of inference
establish the meaning of expressions...
Schroeder-Heister, Peter, 2024, "Proof-Theoretic Semantics"
https://plato.stanford.edu/entries/proof-theoretic-semantics/#InfeIntuAntiReal
I don't yet have the best possible quote for the
requirement of a finite proof within a "well founded
justification tree" because aspects of this notion
are strewn here and there using different terminology.
a "well founded justification tree" is exactly
determined in Prolog by unify_with_occurs_check/2.
This is precisely the same idea, as shown here:
% This sentence is not true.
?- LP = not(true(LP)).
LP = not(true(LP)).
?- unify_with_occurs_check(LP, not(true(LP))).
false.
I generalized this idea with
Olcott's Minimal Type Theory.
G ↔ ¬Prov[PA](⌜G⌝)
Directed Graph of evaluation sequence
00 ↔ 01 02
01 G
02 ¬ 03
03 Prov[PA] 04
04 Gödel_Number_of 01 // cycle
This is exactly the same idea as not having
a "well founded justification tree".
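The occurs-check behavior described above can be sketched outside Prolog. This is a minimal illustration (not the original code), representing compound terms as tuples and logic variables as plain strings:

```python
# Terms: compound terms are tuples (functor, *args); variables are strings.
def occurs(var, term, bindings):
    """True if `var` occurs anywhere inside `term` under `bindings`."""
    if term == var:
        return True
    if isinstance(term, str):  # another variable: follow its binding
        bound = bindings.get(term)
        return bound is not None and occurs(var, bound, bindings)
    if isinstance(term, tuple):  # compound term: check each argument
        return any(occurs(var, arg, bindings) for arg in term[1:])
    return False

def unify_with_occurs_check(var, term, bindings):
    """Bind var to term only if var does not occur in term (no cycles),
    mirroring the failure mode of Prolog's unify_with_occurs_check/2."""
    if occurs(var, term, bindings):
        return None  # cyclic binding: unification fails
    new = dict(bindings)
    new[var] = term
    return new

# LP = not(true(LP)) fails the occurs check: LP occurs in its own value.
result = unify_with_occurs_check("LP", ("not", ("true", "LP")), {})
```

Here `result` is `None`, matching the `false.` that Prolog reports: a binding that would make the justification tree cyclic, i.e. not well founded, is rejected.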
PTS people tend to do these things in natural
deduction and Sequent Calculus.
https://plato.stanford.edu/entries/natural-deduction/
https://mathworld.wolfram.com/SequentCalculus.html
No, "proof-theoretic semantics" as "anti-realism"
is just "empiricism" about "inference" with regards
to "truth" which belongs to "realism".
and nominalism and the fragmented instead of
for realism and structuralism and holism, then about
that model-theory and proof-theory are equi-interpretable.
I'd point to Sheffer and Chwistek as more aligned with
De Morgan than Herbrand for the language instead of
Gentzen or Kripke and Montague after Boole ("happy hypocrites").
The "Plato" article (Stanford Encyclopedia of Philosophy)
points out the alignment of that entire course with:
"classical logic the quasi-modal logic", that it does
have material implication and ex falso quodlibet,
so, arguably it's absent being a modal, temporal,
relevance logic, and says nothing instead of everything.
It's "Plato" the idea of "platonism" that there is
a _true_ mathematics and logic.
"This means that we obtain a proof-theoretic criterion
to tell whether we have a meaningful proof or not, and
proofs of the paradoxes would not be meaningful in this
sense." --Schroeder-Heister, "Proof Theoretic Semantics", https://plato.stanford.edu/entries/proof-theoretic-semantics/#ClasLogi
In sci.math Chris M. Thomasson <[email protected]> wrote:
[ .... ]
AI can be useful:
_____________________
[ .... ]
See? Pete loves it.
Chris, you're a sanctimonious arsehole. You've posted this stuff
repeatedly over the last week or so. It's off-topic, and it's
harassment. It's the worst form of ad hominem one can imagine and is
an implicit admission you cannot win arguments fairly.
Peter, Chris is posting from eternal-september.org. In their terms and conditions, on the page
https://eternal-september.org/index.php?showpage=abuse, the following appears:
Q: Will we terminate an account for posting personal information about
another?
A: Yes if our user continues to do it deliberately to harass. While it
is not illegal to post publicly available info, if it is being done as
a means to harass and attack, we will terminate the account. Please
note that personal info means name and address or phone, not name
alone. Email address does not count as personal information.
Please note that we do not act on third party complaints regarding
personal information.
I suggest you send a complaint to eternal-september.
On 04/17/2026 02:56 PM, Chris M. Thomasson wrote:
On 4/16/2026 11:38 PM, Mikko wrote:
On 16/04/2026 18:20, olcott wrote:
[ .... ]
AI can be useful:
_____________________
Regarding your search for a "Peter Olcott" arrest record, there is a
documented case involving a man by that name that matches the details
you've mentioned.
The Arrest Details
In April 2015, 60-year-old Peter Olcott Jr. was arrested in Omaha,
Nebraska. According to court documents and local news reports (such as
KMTV 3 News), the specific circumstances were:
The Charges: He was charged with possession of child pornography.
The "God" Claim: During the investigation, Olcott reportedly told police
that the material was legal because he was God, and therefore he was not
subject to human laws.
The Outcome: Following his arrest, Olcott underwent a series of mental
health evaluations. In late 2015, he was found incompetent to stand
trial, and the court ordered him to be committed to the Lincoln Regional
Center for psychiatric treatment.
_____________________
See? Pete loves it.
Hm. How distasteful.
One might wonder what it was and where he got it,
since the FBI and Navy are the largest holders and purveyors of
CSAM, since it drives their other business lines, vis-a-vis the
mall security guards skulking in the changing-room at Forever 21,
or the janitor or gas-station rest-room cleaner with their latest
spy-cam setup, or the pornographers, or sadly enough often enough the parents, that all slurped up, and dribbled out, by the FBI and Navy
calling itself NSA.
Then, about surveillance-tech and ad-tech, or stalk-tech and
web-integrated grooming of minors, in the interests of protecting
the children includes also protecting adults from pimps and pushers.
Yeah, I'd rather not know, since familiarity breeds contempt, and
here that ignorance is a defense, since intrusiveness is an attack.
On 4/18/2026 3:47 AM, Alan Mackenzie wrote:
[ .... ]
Peter needs to also call up the AI companies and tell them to remove all
of the info they have on him? The connection is that PO claims to have solved the halting problem because he thinks he is God?
[ .... ]
On 4/18/26 12:48 PM, Chris M. Thomasson wrote:
On 4/18/2026 3:47 AM, Alan Mackenzie wrote:
[ .... ]
he just claimed the halting problem was a logical impossibility, so idk
what he even thinks he's solved
On 4/17/26 5:56 PM, Chris M. Thomasson wrote:
[ .... ]
That might support my idea that Peter may be intentionally exhibiting
insane behavior: having been arrested but found incompetent, the
details may have been sealed, since no finding of guilt was possible
at the time, but the clock may have stopped, and if he is ever found
competent to stand trial he can be tried, and perhaps the
physical evidence is strong enough that conviction is likely.
Thus, the logical action is to keep on looking mentally unstable
as his stay-out-of-jail card.
In no way a proof, but it is a possible explanation.
From other things he has linked to, his idea that he is in some way "divine" is a long-held belief, which explains some of his mental models.
And it could be that he is just that insane from the beginning, and his actions are just what comes naturally.
On 4/18/2026 4:11 AM, Mikko wrote:
On 17/04/2026 16:56, olcott wrote:
On 4/17/2026 1:38 AM, Mikko wrote:
On 16/04/2026 18:20, olcott wrote:
[ .... ]
Whenever my answer is self-evidently true you treat
it as not answering at all.
On 04/18/2026 07:05 PM, Richard Damon wrote:
On 4/17/26 5:56 PM, Chris M. Thomasson wrote:
[ .... ]
Any accused under trial has the right to a competent legal
advocate, acting in the accused's interest, as an agent of the court.
An agent of the court, for example, an accused under trial,
can make orders of the court.
An agent of the court, in the interests of the court,
in the interests of equal protections, may demand production
of all the resources of the court, as may be relevant.
This could go a long ways to helping advise the court
on the landscape of equal protections, and of the accused,
on the machinery of the legal system, or "wheels of justice".
So, maybe he should fire his lawyer and doctor both.
It's not on me to defend either bad acts or bad laws,
in the interests of something like "The Massachusetts
Institute of Technology Student Association for Freedom
of Expression", or bad taste like that one guy in the early
'90's who after the National Endowment for the Arts, or
Mapplethorpe, the art was a bunch of smut, I'm not here
to defend smut (though I know what I like, while despising pimps).
That said, I mostly don't believe in "sealed" cases
since they hide the guilty besides hiding the innocent,
that they also hide court or cop errors which would
greatly weigh on the defendant's rights.
"Investigations", as they've been largely mechanized
and automated, are for providing the same sort of
resources to the accused.
On 18/04/2026 16:01, olcott wrote:
On 4/18/2026 4:11 AM, Mikko wrote:
On 17/04/2026 16:56, olcott wrote:
On 4/17/2026 1:38 AM, Mikko wrote:
On 16/04/2026 18:20, olcott wrote:
[ .... ]
A self-evidently true "answer" that does not answer the question
is not an answer and leaves the question unanswered.
Whether what you propose is a good way to do academic research is
not self-evident. It is even far from obvious how one can and should
compare ways of doing academic research, or its quality.
On 4/19/2026 5:01 AM, Mikko wrote:
On 18/04/2026 16:01, olcott wrote:
On 4/18/2026 4:11 AM, Mikko wrote:
On 17/04/2026 16:56, olcott wrote:
On 4/17/2026 1:38 AM, Mikko wrote:
On 16/04/2026 18:20, olcott wrote:
[ .... ]
Using LLMs for brainstorming is empirically verifiable
as very effective. The proof is not any sequence of
steps; it is: "try it for yourself and see".
Unlike humans, LLMs have relatively deep knowledge across
every domain. No one on any of the dozens and dozens of
forums that I was on ever had the slightest clue about
alternative foundations of semantics besides model theory.
On 04/19/2026 09:48 AM, olcott wrote:
On 4/19/2026 5:01 AM, Mikko wrote:
On 18/04/2026 16:01, olcott wrote:
On 4/18/2026 4:11 AM, Mikko wrote:
On 17/04/2026 16:56, olcott wrote:
On 4/17/2026 1:38 AM, Mikko wrote:
On 16/04/2026 18:20, olcott wrote:
[ .... ]
That proof-theory is equi-interpretable with model-theory is a fact,
there's nothing ultimately that proof-theory has that model-theory
hasn't, and vice versa.
On 4/19/2026 11:55 AM, Ross Finlayson wrote:
On 04/19/2026 09:48 AM, olcott wrote:
On 4/19/2026 5:01 AM, Mikko wrote:
On 18/04/2026 16:01, olcott wrote:
On 4/18/2026 4:11 AM, Mikko wrote:
On 17/04/2026 16:56, olcott wrote:
On 4/17/2026 1:38 AM, Mikko wrote:
On 16/04/2026 18:20, olcott wrote:
[ .... ]
Specifically counter-factual.
I am shocked by your lack of academic discipline
on this issue. I expected enormously more
from you. Somewhere I read, or saw in one of your
videos, that you are focused on investigating
foundations. If so, are you approaching this on
the basis of rote memorization or of careful critique?
On 04/19/2026 10:27 AM, olcott wrote:
On 4/19/2026 11:55 AM, Ross Finlayson wrote:
On 04/19/2026 09:48 AM, olcott wrote:
On 4/19/2026 5:01 AM, Mikko wrote:
On 18/04/2026 16:01, olcott wrote:
On 4/18/2026 4:11 AM, Mikko wrote:
On 17/04/2026 16:56, olcott wrote:
On 4/17/2026 1:38 AM, Mikko wrote:
On 16/04/2026 18:20, olcott wrote:
[ .... ]
You can drop the links into Google Gemini
and it will start building a summary.
I'd suggest "Logos 2000: paradox-free reason",
then "Logos 2000: rulial foundations", then
"Logos 2000: A Theory".
There's some hundreds of hours of lectures.
https://www.youtube.com/watch?v=LKnZUg9jPf0&list=PLb7rLSBiE7F795DGcwSvwHj-GEbdhPJNe
I don't need your help.
https://www.youtube.com/watch?v=zwX9Y2oEtHs
Here there's paradox-free reason for all one theory.
That proof-theory and model-theory are equi-interpretable
has that otherwise they aren't.
So, "paradox-free reason" then an account of "rulial foundations"
then for an "A Theory", has that mechanical reasoners readily
read these.
The transcriptions aren't necessarily perfectly accurate,
particularly with regards to proper names.
On 04/19/2026 10:27 AM, olcott wrote:
On 4/19/2026 11:55 AM, Ross Finlayson wrote:
On 04/19/2026 09:48 AM, olcott wrote:
On 4/19/2026 5:01 AM, Mikko wrote:
On 18/04/2026 16:01, olcott wrote:
On 4/18/2026 4:11 AM, Mikko wrote:
On 17/04/2026 16:56, olcott wrote:
On 4/17/2026 1:38 AM, Mikko wrote:
On 16/04/2026 18:20, olcott wrote:
(1) Progressively make the initial prompt more
unequivocal and succinct across five different LLMs.
I use ChatGPT 5.3, Claude AI Sonnet 4.6 Extended,
Grok Expert, Gemini Pro, Copilot Think deeper
and occasionally NotebookLM for Deep Research
and deep analysis of specific documents.
(2) Once initial prompt is unequivocal and succinct
across five different LLMs then test for consensus.
(3) Once consensus is achieved carefully examine
actual verbiage of key source documents. For
academic research this involves direct quotes from
foundational peer reviewed papers.
How do you know what is the best way or even a good way for
academic research?
LLMs are like a guy with a PhD in everything who is
nonetheless a little senile. They were able to look at my
ideas from a computer science, mathematics, logic,
and linguistics frame of reference, which very few
people can do.
On top of this they were able to fully integrate
every alternative philosophical foundation of each
of these fields, not merely the conventional views.
This is what transformed Olcott's system into
Olcott's Proof Theoretic Semantics system.
The best that humans can do is one technical field
combined with one alternative philosophical foundation.
To sum this up, LLMs have an enormously broader
perspective than any human. That is what makes them
better for research.
That you don't answer the question is a strong indication that
you are just speculating.
Whenever my answer is self-evidently true you treat
it as not answering at all.
A self-evidently true "answer" that does not answer the question
is not an answer and leaves the question unanswered.
Using LLMs for brainstorming is empirically verifiable as
very effective. The proof of this is not any sequence of
steps; it is: "try it for yourself and see".
Unlike humans, LLMs have relatively deep knowledge across
every domain. No one on any of the dozens and dozens of
forums that I was on ever had the slightest clue about
alternative foundations of semantics besides model theory.
Whether what you propose is a good way to do academic research is
not self-evident. It is even far from obvious how one can and should
compare different ways of doing academic research, or judge its quality.
That proof-theory is equi-interpretable with model-theory is a fact;
there's nothing, ultimately, that proof-theory has that model-theory
hasn't, and vice versa.
Specifically counter-factual.
I am shocked by your lack of academic discipline
on this issue. I expected enormously more
from you. Somewhere I read, or saw in one of your
videos, that you are focused on investigating
foundations. If so, are you approaching this on
the basis of rote memorization or of careful critique?
On 4/19/2026 1:18 PM, Ross Finlayson wrote:
Here is what Gemini said:
"Logos 2000" refers to a series of video lectures and a personal philosophical framework created by an independent thinker named Ross Finlayson, primarily published on YouTube.
The specific phrase "paradox-free reasoning" (or "paradox-free reason")
is a core concept he discusses, most notably in a video titled "Logos
2000: A Theory."
Here is a breakdown of what the concept and the broader series entail:

The Premise: Finlayson's work delves deeply into mathematical logic, set
theory, and the philosophy of mathematics. His focus on "paradox-free
reasoning" centers on trying to articulate a logical system or
foundation for mathematics that avoids the famous paradoxes that
troubled early set theory (such as Russell's Paradox).

The Influences: In his discussions on building this paradox-free system,
he references the work of several major logicians, mathematicians, and
philosophers. These include Graham Priest (famous for his work on
paradoxes and non-classical logic), Dana Scott, Thoralf Skolem, Paul
Erdős, and Paul Cohen.

The Scope: The "Logos 2000" series is highly eclectic. While it focuses
heavily on "theatheory" (a term he uses) and logic, it also spans topics
like computer science, the philosophy of statistics, the "absurd,"
infinitives in language, and the works of classical thinkers like
Descartes and Quine.

In short, it is a niche, independent philosophical project exploring the
foundations of logic, mathematics, and computing, with "paradox-free
reason" being his attempt to conceptualize a perfectly consistent system
of logical thought.
On 04/19/2026 11:44 AM, olcott wrote:
Hm. Tell me more.
That "Logos 2000: Foundations briefly" is rather summatory.
https://www.youtube.com/watch?v=fjtXZ5mBVOc
I wonder what it makes of "Moment and Motion", about its philosophical approach to "worlds turn".
Aristotle won't be made a fool.
Quotes of Aristotle include "there is no un-moved mover", yet also,
"circular movement is eternal", yet also, "the movement of the stars
is voluntary".
Hegel is roundly regarded as a great idealist, and as having a
very correct analytical account.
On 4/19/2026 2:21 PM, Ross Finlayson wrote:
I took a quick glance at it. I need to see a single
succinct 20-minute overview of your whole system.
Can you do this, or is that just not the way that
your mind works?
"Logos 2000: paradox-free reason"
seems to be an excellent two-second overview.
What is the single generic process by which you
prevent paradoxes, in 100 words or less?
On 04/19/2026 07:29 PM, olcott wrote:
"There is a royal road to geometry."
To resolve paradoxes for a paradox-free reason,
first one resolves the logical paradoxes, after
a great classical universal education, then the
post-modern deconstruction, then the paleo-classical
study of the canon, dogma, doctrine, and candidate
for Foundations: a constant, consistent, complete,
concrete theory.
Or, read the "Theatheory: super-theory and natural science"
thread, it's ongoing.
On 4/19/2026 10:05 PM, Ross Finlayson wrote:
On 04/19/2026 07:29 PM, olcott wrote:
On 4/19/2026 2:21 PM, Ross Finlayson wrote:
On 04/19/2026 11:44 AM, olcott wrote:
On 4/19/2026 1:18 PM, Ross Finlayson wrote:
On 04/19/2026 10:27 AM, olcott wrote:
On 4/19/2026 11:55 AM, Ross Finlayson wrote:
On 04/19/2026 09:48 AM, olcott wrote:
On 4/19/2026 5:01 AM, Mikko wrote:
On 18/04/2026 16:01, olcott wrote:
On 4/18/2026 4:11 AM, Mikko wrote:
On 17/04/2026 16:56, olcott wrote:
On 4/17/2026 1:38 AM, Mikko wrote:
On 16/04/2026 18:20, olcott wrote:
(1) Progressively make the initial prompt more
unequivocal and succinct across five different LLMs. >>>>>>>>>>>>>>>
I use ChatGPT 5.3, Claude AI Sonnet 4.6 Extended, >>>>>>>>>>>>>>> Grok Expert, Gemini Pro, Copilot Think deeper
and occasionally NotebookLM for Deep Research
and deep analysis of specific documents.
(2) Once initial prompt is unequivocal and succinct >>>>>>>>>>>>>>> across five different LLMs then test for consensus. >>>>>>>>>>>>>>>
(3) Once consensus is achieved carefully examine >>>>>>>>>>>>>>> actual verbiage of key source documents. For
academic research this involves direct quotes from >>>>>>>>>>>>>>> foundational peer reviewed papers.
How do you know what is the best way or even a good way for >>>>>>>>>>>>>> academic research?
LLMs are like a guy with a PhD in everything yet
are a little senile. They were able to look at my
ideas from a computer science, mathematics, logic,
linguistics frame of reference which very few
people can do.
On top of this they were able to fully integrate
every alternative philosophical foundation of each
of these fields not merely the conventional views.
This is what transformed Olcott's system into
Olcott's Proof Theoretic Semantics system.
The best that humans can do is one technical field
combined with one alternative philosophical foundation.
To sum this up, LLMs have an enormously broader
perspective than any human. That is what makes them
better for research.
That you don't answer the question is a strong indication that
you are just speculating.
Whenever my answer is self-evidently true you treat
it as not answering at all.
A self-evidently true "answer" that does not answer the question
is not an answer and leaves the question unanswered.
Using LLMs for brainstorming is empirically verifiable as
very effective. The proof of this is not any sequence of
steps, it is: "try it for yourself and see".
Unlike Humans LLMs have relatively deep knowledge across
every domain. No one on any of the dozens and dozens of
forums that I was on ever had the slightest clue about
alternative foundations of semantics besides model theory.
Whether what you propose is a good way to do academic research
is not self-evident. It is even far from obvious how one can and
should compare the ways of doing, and the quality of, academic research.
That proof-theory is equi-interpretable with model-theory is a fact;
there's nothing ultimately that proof-theory has that model-theory
hasn't, and vice versa.
Specifically counter-factual.
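For what it's worth, the disputed claim has a well-known precise core. For classical first-order logic, soundness together with Gödel's completeness theorem makes derivability and semantic consequence coincide; this is a sketch of the standard statement, not a gloss on either poster's own terminology, and whether it licenses "equi-interpretable" in every setting is exactly what is being contested here.

```latex
% Soundness and completeness for classical first-order logic:
% a sentence \varphi is derivable from a theory \Gamma
% if and only if \varphi holds in every model of \Gamma.
\Gamma \vdash \varphi
  \quad\Longleftrightarrow\quad
\Gamma \models \varphi
```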
I am shocked by your lack of academic discipline
on this issue. I expected enormously much more
from you. Somewhere I read or saw on one of your
videos that you are focused on investigating
foundations. If so are you focused on this from
the basis of rote memorization or careful critique?
You can drop the links into Google Gemini
and it will start building a summary.
I'd suggest "Logos 2000: paradox-free reason",
then "Logos 2000: rulial foundations", then
"Logos 2000: A Theory".
There's some hundreds of hours of lectures.
https://www.youtube.com/watch?v=LKnZUg9jPf0&list=PLb7rLSBiE7F795DGcwSvwHj-GEbdhPJNe
I don't need your help.
https://www.youtube.com/watch?v=zwX9Y2oEtHs
Here there's paradox-free reason for all one theory.
That proof-theory and model-theory are equi-interpretable
has that otherwise they aren't.
So, "paradox-free reason" then an account of "rulial foundations"
then for an "A Theory", has that mechanical reasoners readily
read these.
The transcriptions aren't necessarily perfectly accurate,
particularly with regards to proper names.
Here is what Gemini said:
"Logos 2000" refers to a series of video lectures and a personal
philosophical framework created by an independent thinker named Ross
Finlayson, primarily published on YouTube.
The specific phrase "paradox-free reasoning" (or "paradox-free reason")
is a core concept he discusses, most notably in a video titled "Logos
2000: A Theory."
Here is a breakdown of what the concept and the broader series entail:
The Premise: Finlayson's work delves deeply into mathematical logic, set
theory, and the philosophy of mathematics. His focus on "paradox-free
reasoning" centers on trying to articulate a logical system or
foundation for mathematics that avoids the famous paradoxes that
troubled early set theory (such as Russell's Paradox).
The Influences: In his discussions on building this paradox-free system,
he references the work of several major logicians, mathematicians, and
philosophers. These include Graham Priest (famous for his work on
paradoxes and non-classical logic), Dana Scott, Thoralf Skolem, Paul
Erdős, and Paul Cohen.
The Scope: The "Logos 2000" series is highly eclectic. While it focuses
heavily on "theatheory" (a term he uses) and logic, it also spans topics
like computer science, the philosophy of statistics, the "absurd,"
infinitives in language, and the works of classical thinkers like
Descartes and Quine.
In short, it is a niche, independent philosophical project exploring the
foundations of logic, mathematics, and computing, with "paradox-free
reason" being his attempt to conceptualize a perfectly consistent system
of logical thought.
Hm. Tell me more.
That "Logos 2000: Foundations briefly" is rather summatory.
https://www.youtube.com/watch?v=fjtXZ5mBVOc
I took a quick glance at it. I need to see a single
succinct overview of your whole system.
Can you do this or is that just not the way that
your mind works?
"Logos 2000: paradox-free reason"
Seems to be an excellent two second overview.
What is the single generic process by which you
prevent paradoxes, in 100 words or less?
I wonder what it makes of "Moment and Motion", about its philosophical
approach to "worlds turn".
Aristotle won't be made a fool.
Quotes of Aristotle include "there is no un-moved mover", yet also,
"circular movement is eternal", yet also, "the movement of the stars
is voluntary".
Hegel is roundly regarded as a great idealist, and having a
very correct analytical account.
"There is a royal road to geometry."
To resolve paradoxes for a paradox-free reason,
first one resolves the logical paradoxes, after
a great classical universal education, then the
post-modern deconstruction, then the paleo-classical
study, of the canon, dogma, doctrine, and candidate,
for Foundations: a constant, consistent, complete,
concrete theory.
We already got rid of Russell's Paradox by switching
from naive set theory to ZFC. My own system works in
a similar way.
Do you understand all of the differences between
naive set theory and the ZFC version of axiomatic
set theory?
I cannot possibly understand anything
sufficiently well unless it is in writing (I need
highlighting to focus my concentration) and it
must be presented at many different levels of
abstraction / specificity.
Einstein said: "If you can't explain it simply, you
don't understand it well enough."
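The naive-versus-ZFC difference at issue can be sketched concretely. Below is a toy illustration only, not set theory itself: Python's finite frozensets stand in for sets, and `separation` is a hypothetical helper name I introduce for the sketch. The point it demonstrates is the standard one: ZFC's Separation schema only carves subsets out of an already existing set, so Russell's predicate yields a harmless theorem instead of a contradiction.

```python
def separation(z, phi):
    # ZFC's Separation schema only licenses {x in z : phi(x)},
    # i.e. comprehension bounded by an existing set z --
    # never the naive, unbounded {x : phi(x)}.
    return frozenset(x for x in z if phi(x))

empty = frozenset()                 # stands in for the empty set
one = frozenset({empty})            # stands in for {empty}
z = frozenset({empty, one})         # stands in for {empty, {empty}}

# Russell's predicate "x is not a member of itself", relativized to z:
R = separation(z, lambda x: x not in x)

# No element of this finite z contains itself, so R collects all of z...
assert R == z
# ...and instead of a contradiction we get the theorem: R is not in z.
assert R not in z
```

With naive (unbounded) comprehension the same predicate would demand R ∈ R ↔ R ∉ R; bounding the comprehension by z converts that into a proof that R ∉ z.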
Or, read the "Theatheory: super-theory and natural science"
thread, it's ongoing.
On 04/19/2026 08:49 PM, olcott wrote:
We already got rid of Russell's Paradox by switching
from naive set theory to ZFC. My own system works in
a similar way.
Do you understand all of the differences between
naive set theory and the ZFC version of axiomatic
set theory?
No you didn't. One can readily re-create "Russell's paradox"
by quantifying over an inductive set of ordinals as "the set
of all sets that don't contain themselves". Russell's claim
to "resolving" the Russell's paradox by "defining it away"
is _false_ and is readily re-built constructively.
I've written 10,000's posts to Usenet,
there's plenty to read. I'm quite familiar
with "set theory", including ZF and ZFC, and
for example for something like Martin's axiom,
so there's each of the well-foundedness, well-ordering,
and well-dispersion, all one theory.
You memoryless hypocrite.
On 4/19/2026 10:57 PM, Ross Finlayson wrote:
No you didn't. One can readily re-create "Russell's paradox"
by quantifying over an inductive set of ordinals as "the set
of all sets that don't contain themselves". Russell's claim
to "resolving" the Russell's paradox by "defining it away"
is _false_ and is readily re-built constructively.
I've written 10,000's posts to Usenet,
there's plenty to read. I'm quite familiar
with "set theory", including ZF and ZFC, and
for example for something like Martin's axiom,
so there's each of the well-foundedness, well-ordering,
and well-dispersion, all one theory.
So what is the key element from ZFC that eliminates Russell's Paradox?
You memoryless hypocrite.
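For reference, the textbook answer to the ZFC question posed above: the unrestricted Comprehension of naive set theory is replaced by the Axiom Schema of Separation (Aussonderung), which only forms subsets of sets that already exist.

```latex
% Naive comprehension (inconsistent): for any formula \varphi,
%   \exists y\,\forall x\,(x \in y \leftrightarrow \varphi(x)).
% ZFC Separation, restricted to an existing set z:
\forall z\,\exists y\,\forall x\,
  \bigl(x \in y \leftrightarrow (x \in z \wedge \varphi(x))\bigr)
```

Instantiating the formula as x ∉ x then proves only that y ∉ z for each z, rather than a contradiction; Foundation (regularity) is a separate axiom and is not what blocks the paradox.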
On 4/19/2026 5:01 AM, Mikko wrote:
On 18/04/2026 16:01, olcott wrote:
On 4/18/2026 4:11 AM, Mikko wrote:
On 17/04/2026 16:56, olcott wrote:
On 4/17/2026 1:38 AM, Mikko wrote:
On 16/04/2026 18:20, olcott wrote:
(1) Progressively make the initial prompt more
unequivocal and succinct across five different LLMs.
I use ChatGPT 5.3, Claude AI Sonnet 4.6 Extended,
Grok Expert, Gemini Pro, Copilot Think deeper
and occasionally NotebookLM for Deep Research
and deep analysis of specific documents.
(2) Once initial prompt is unequivocal and succinct
across five different LLMs then test for consensus.
(3) Once consensus is achieved carefully examine
actual verbiage of key source documents. For
academic research this involves direct quotes from
foundational peer reviewed papers.
How do you know what is the best way or even a good way for
academic research?
LLMs are like a guy with a PhD in everything yet
are a little senile. They were able to look at my
ideas from a computer science, mathematics, logic,
linguistics frame of reference which very few
people can do.
On top of this they were able to fully integrate
every alternative philosophical foundation of each
of these fields not merely the conventional views.
This is what transformed Olcott's system into
Olcott's Proof Theoretic Semantics system.
The best that humans can do is one technical field
combined with one alternative philosophical foundation.
To sum this up LLMs have an enormously broader
perspective than any human. That is what makes them
better for research.
That you don't answer the question is a strong indication that
you are just speculating.
Whenever my answer is self-evidently true you treat
it as not answering at all.
A self-evidently true "answer" that does not answer the question
is not an answer and leaves the question unanswered.
Using LLMs for brainstorming is empirically verifiable as
very effective. The proof of this is not any sequence of
steps, it is: "try it for yourself and see".
Unlike humans, LLMs have relatively deep knowledge across
every domain. No one on any of the dozens and dozens of
forums that I was on ever had the slightest clue about
alternative foundations of semantics besides model theory.
Whether what you propose is a good way to do academic research is
not self-evident. It is even far from obvious how one can and should
compare ways to do and quality of academic research.
On 19/04/2026 19:48, olcott wrote:
On 4/19/2026 5:01 AM, Mikko wrote:
On 18/04/2026 16:01, olcott wrote:
On 4/18/2026 4:11 AM, Mikko wrote:
On 17/04/2026 16:56, olcott wrote:
On 4/17/2026 1:38 AM, Mikko wrote:
On 16/04/2026 18:20, olcott wrote:
(1) Progressively make the initial prompt more
unequivocal and succinct across five different LLMs.
I use ChatGPT 5.3, Claude AI Sonnet 4.6 Extended,
Grok Expert, Gemini Pro, Copilot Think deeper
and occasionally NotebookLM for Deep Research
and deep analysis of specific documents.
(2) Once initial prompt is unequivocal and succinct
across five different LLMs then test for consensus.
(3) Once consensus is achieved carefully examine
actual verbiage of key source documents. For
academic research this involves direct quotes from
foundational peer reviewed papers.
How do you know what is the best way or even a good way for
academic research?
LLMs are like a guy with a PhD in everything yet
are a little senile. They were able to look at my
ideas from a computer science, mathematics, logic,
linguistics frame of reference which very few
people can do.
On top of this they were able to fully integrate
every alternative philosophical foundation of each
of these fields not merely the conventional views.
This is what transformed Olcott's system into
Olcott's Proof Theoretic Semantics system.
The best that humans can do is one technical field
combined with one alternative philosophical foundation.
To sum this up LLMs have an enormously broader
perspective than any human. That is what makes them
better for research.
That you don't answer the question is a strong indication that
you are just speculating.
Whenever my answer is self-evidently true you treat
it as not answering at all.
A self-evidently true "answer" that does not answer the question
is not an answer and leaves the question unanswered.
Using LLMs for brainstorming is empirically verifiable as
very effective. The proof of this is not any sequence of
steps, it is: "try it for yourself and see".
That does not answer the question nor justify false claims about
other people.
Unlike humans, LLMs have relatively deep knowledge across
every domain. No one on any of the dozens and dozens of
forums that I was on ever had the slightest clue about
alternative foundations of semantics besides model theory.
Whether what you propose is a good way to do academic research is
not self-evident. It is even far from obvious how one can and should
compare ways to do and quality of academic research.
Seems that you are merely speculating.
On 04/19/2026 09:05 PM, olcott wrote:
On 4/19/2026 10:57 PM, Ross Finlayson wrote:
On 04/19/2026 08:49 PM, olcott wrote:
On 4/19/2026 10:05 PM, Ross Finlayson wrote:
On 04/19/2026 07:29 PM, olcott wrote:
On 4/19/2026 2:21 PM, Ross Finlayson wrote:
On 04/19/2026 11:44 AM, olcott wrote:
On 4/19/2026 1:18 PM, Ross Finlayson wrote:
On 04/19/2026 10:27 AM, olcott wrote:
On 4/19/2026 11:55 AM, Ross Finlayson wrote:
On 04/19/2026 09:48 AM, olcott wrote:
On 4/19/2026 5:01 AM, Mikko wrote:
On 18/04/2026 16:01, olcott wrote:
On 4/18/2026 4:11 AM, Mikko wrote:
On 17/04/2026 16:56, olcott wrote:
On 4/17/2026 1:38 AM, Mikko wrote:
On 16/04/2026 18:20, olcott wrote:
(1) Progressively make the initial prompt more
unequivocal and succinct across five different LLMs.
I use ChatGPT 5.3, Claude AI Sonnet 4.6 Extended,
Grok Expert, Gemini Pro, Copilot Think deeper
and occasionally NotebookLM for Deep Research
and deep analysis of specific documents.
(2) Once initial prompt is unequivocal and succinct
across five different LLMs then test for consensus.
(3) Once consensus is achieved carefully examine
actual verbiage of key source documents. For
academic research this involves direct quotes from
foundational peer reviewed papers.
How do you know what is the best way or even a good way for
academic research?
LLMs are like a guy with a PhD in everything yet
are a little senile. They were able to look at my
ideas from a computer science, mathematics, logic,
linguistics frame of reference which very few
people can do.
On top of this they were able to fully integrate
every alternative philosophical foundation of each
of these fields not merely the conventional views.
This is what transformed Olcott's system into
Olcott's Proof Theoretic Semantics system.
The best that humans can do is one technical field
combined with one alternative philosophical foundation.
To sum this up LLMs have an enormously broader
perspective than any human. That is what makes them
better for research.
That you don't answer the question is a strong indication that
you are just speculating.
Whenever my answer is self-evidently true you treat
it as not answering at all.
A self-evidently true "answer" that does not answer the question
is not an answer and leaves the question unanswered.
Using LLMs for brainstorming is empirically verifiable as
very effective. The proof of this is not any sequence of
steps, it is: "try it for yourself and see".
Unlike humans, LLMs have relatively deep knowledge across
every domain. No one on any of the dozens and dozens of
forums that I was on ever had the slightest clue about
alternative foundations of semantics besides model theory.
Whether what you propose is a good way to do academic research is
not self-evident. It is even far from obvious how one can and should
compare ways to do and quality of academic research.
That proof-theory is equi-interpretable with model-theory is a fact;
there's nothing ultimately that proof-theory has that model-theory
hasn't, and vice versa.
Specifically counter-factual.
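A neutral footnote on "equi-interpretable" (my gloss, not a claim by either poster): for classical first-order logic, the standard bridge between proof theory and model theory is Gödel's completeness theorem, under which syntactic derivability and semantic consequence coincide:

```latex
% G\"odel's completeness theorem for first-order logic:
% a sentence is derivable from a theory exactly when it holds
% in every model of that theory.
\Gamma \vdash \varphi \quad\Longleftrightarrow\quad \Gamma \models \varphi
```

Whether that counts as the two being interchangeable in every philosophical sense is exactly what the two posters dispute.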
I am shocked by your lack of academic discipline
on this issue. I expected enormously much more
from you. Somewhere I read or saw on one of your
videos that you are focused on investigating
foundations. If so are you focused on this from
the basis of rote memorization or careful critique?
You can drop the links into Google Gemini
and it will start building a summary.
I'd suggest "Logos 2000: paradox-free reason",
then "Logos 2000: rulial foundations", then
"Logos 2000: A Theory".
There's some hundreds of hours of lectures.
https://www.youtube.com/watch?v=LKnZUg9jPf0&list=PLb7rLSBiE7F795DGcwSvwHj-GEbdhPJNe
I don't need your help.
https://www.youtube.com/watch?v=zwX9Y2oEtHs
Here there's paradox-free reason for all one theory.
That proof-theory and model-theory are equi-interpretable
has that otherwise they aren't.
So, "paradox-free reason" then an account of "rulial foundations"
then for an "A Theory", has that mechanical reasoners readily
read these.
The transcriptions aren't necessarily perfectly accurate,
particularly with regards to proper names.
Here is what Gemini said:
"Logos 2000" refers to a series of video lectures and a personal
philosophical framework created by an independent thinker named
Ross Finlayson, primarily published on YouTube.
The specific phrase "paradox-free reasoning" (or "paradox-free
reason") is a core concept he discusses, most notably in a video
titled "Logos 2000: A Theory."
Here is a breakdown of what the concept and the broader series entail:
The Premise: Finlayson's work delves deeply into mathematical logic,
set theory, and the philosophy of mathematics. His focus on
"paradox-free reasoning" centers on trying to articulate a logical
system or foundation for mathematics that avoids the famous paradoxes
that troubled early set theory (such as Russell's Paradox).
The Influences: In his discussions on building this paradox-free
system, he references the work of several major logicians,
mathematicians, and philosophers. These include Graham Priest (famous
for his work on paradoxes and non-classical logic), Dana Scott,
Thoralf Skolem, Paul Erdős, and Paul Cohen.
The Scope: The "Logos 2000" series is highly eclectic. While it
focuses heavily on "theatheory" (a term he uses) and logic, it also
spans topics like computer science, the philosophy of statistics, the
"absurd," infinitives in language, and the works of classical thinkers
like Descartes and Quine.
In short, it is a niche, independent philosophical project exploring
the foundations of logic, mathematics, and computing, with
"paradox-free reason" being his attempt to conceptualize a perfectly
consistent system of logical thought.
Hm. Tell me more.
That "Logos 2000: Foundations briefly" is rather summatory.
https://www.youtube.com/watch?v=fjtXZ5mBVOc
I took a quick glance at it. I need to see a single
succinct 20-minute overview of your whole system.
Can you do this or is that just not the way that
your mind works?
"Logos 2000: paradox-free reason"
Seems to be an excellent two second overview.
What is the single generic process by which you
prevent paradoxes, in 100 words or less?
I wonder what it makes of "Moment and Motion", about its
philosophical approach to "worlds turn".
Aristotle won't be made a fool.
Quotes of Aristotle include "there is no un-moved mover", yet also,
"circular movement is eternal", yet also, "the movement of the stars
is voluntary".
Hegel is roundly regarded as a great idealist, and having a
very correct analytical account.
"There is a royal road to geometry."
To resolve paradoxes for a paradox-free reason,
first one resolves the logical paradoxes, after
a great classical universal education, then the
post-modern deconstruction, then the paleo-classical
study, of the canon, dogma, doctrine, and candidate,
for Foundations: a constant, consistent, complete,
concrete theory.
We already got rid of Russell's Paradox by switching
from naive set theory to ZFC. My own system works in
a similar way.
Do you understand all of the differences between
naive set theory and the ZFC version of axiomatic
set theory?
I cannot understand anything sufficiently well
unless it is in writing (I need highlighting to
focus my concentration), and it must be presented
at many different levels of abstraction / specificity.
Einstein said: "If you can't explain it simply, you
don't understand it well enough."
Or, read the "Theatheory: super-theory and natural science"
thread, it's ongoing.
No you didn't. One can readily re-create "Russell's paradox"
by quantifying over an inductive set of ordinals as "the set
of all sets that don't contain themselves". Russell's claim
to "resolving" Russell's paradox by "defining it away"
is _false_: the paradox is readily re-built constructively.
I've written tens of thousands of posts to Usenet;
there's plenty to read. I'm quite familiar
with "set theory", including ZF and ZFC, and
for example for something like Martin's axiom,
so there's each of the well-foundedness, well-ordering,
and well-dispersion, all one theory.
So what is the key element from ZFC that eliminates Russell's Paradox?
You memoryless hypocrite.
It doesn't is what I'm saying. Russell's et alia's "Axiom of
_Ordinary_ Infinity" the restriction of comprehension, since
naturally after quantification the infinity would be
extra-ordinary, as Mirimanoff pointed out, is _ignorance_,
and simple comprehension rebuilds "Russell's paradox"
despite "Russell's retro-thesis": "please don't call me wrong".
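For what it's worth, the purely logical core of this dispute can be checked mechanically. The sketch below is mine (Lean 4 syntax; the names are my own) and shows only the uncontested diagonal step: for any binary relation `mem`, no object R can satisfy ∀x, mem x R ↔ ¬ mem x x. Which comprehension axioms license the *existence* of such an R is exactly where naive comprehension and ZFC's Separation part ways, and that is the point the posters disagree on.

```lean
-- Pure logic, no set-theoretic axioms: any R with
--   ∀ x, mem x R ↔ ¬ mem x x
-- is contradictory, by the diagonal instance x := R.
theorem no_russell {U : Type} (mem : U → U → Prop)
    (h : ∃ R : U, ∀ x : U, mem x R ↔ ¬ mem x x) : False :=
  match h with
  | ⟨R, hR⟩ =>
    -- Specialize the defining property to x := R.
    have hiff : mem R R ↔ ¬ mem R R := hR R
    -- Assuming mem R R refutes itself...
    have hn : ¬ mem R R := fun hm => (hiff.mp hm) hm
    -- ...and ¬ mem R R then yields mem R R: contradiction.
    hn (hiff.mpr hn)
```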
Making for that well-foundedness, well-ordering, and well-dispersion
can play together nicely, is the subject of my video essay
"Logos 2000: rulial foundations". So, point your bot at it.
https://www.youtube.com/watch?v=GkqfnoFGj14
If you wanted to know more about Bertrand Russell's set theory,
you'd necessarily have read "W.V.O. Quine's Set Theory".
Foundations, nature, entropy, emergence, reality and ideals, inference
and reason, intelligence and wisdom, de Morgan, causality and
implication, model theory, Boole, abstract symbolic logic, forms and syllogism, entailment and monotonicity, arithmetization and
algebraization and geometrization, model theory and proof theory, the
inner and outer, comprehension, structure and truth, paradox,
consistency and completeness, theory of theory, the liar paradox,
Comenius language, the ex falso, contradiction in itself, deduction and abduction, monism, natural language and intersubjectivity,
noumenological and phenomenological senses, consistency and completeness
and constancy and concreteness, mathematical and physical interpretations
and models, natural science and super-natural theory, completions and
limits, analytical bridges, positivism and axiomatization, diversity and variety, closed categories and continuous quantities, Aristotle's actual infinite, Kant and the sublime, Hegel and Being and Nothing, an integer continuum, Euclid's geometry, models of continuous domains, the modular
and replete, axiomless geometry, perceived paradox, restriction of comprehension, fin de siecle foundations, logicist positivism and mathematical platonism, science and the empirical, idealism and
absolutes, mathematical universe hypothesis, space-time, state and
change, cosmic book-keeping, freedom of imagination and thought,
absolutes and truth, Derrida and Husserl and Quine, lies and logic, the quasi-modal and modal, rules and the rulial, inductive limits and
infinite limits, Zermelo-Fraenkel set theory, elt, set-theoretic
paradoxes, regularity and regularit(ies), well-foundedness, ZFC, well-ordering, univalency the illative and well-dispersion, class/set distinction, descriptive set theory, expansion and restriction of comprehension, Goedel and incompleteness, uncountability, Russell's retro-thesis, Mirimanoff and Skolem, Frege and Russell, Peirce,
duBois-Reymond and Cantor, Russell's paradox applied to finite numbers, Russell in logic, apologetics in logic, Occam and Plotinus and Philo, Russell and Whitehead, descriptive set theory and model theory, Tarski,
20'th century modern classical logic, three regularities, alternation
and carriage, newer modern logic, Peano, Goedelian incompleteness
applied to itself, Cohen and the independency of the Continuum
Hypothesis, forcing's axiom, induction as blind and invincibly ignorant, contradiction not in itself, DesCartes and Quine, Principia Mathematica, Chwistek, anti-foundational set theories, set theories with universes, Burali-Forti and the gesammelt, Myhill paradox, Russell on candidate
axioms, composability and separability, Sheffer and Gentzen, the Begriffsschrift and concept-scripts, Russell and classes and relations, Russell and "significance" and "isolation", Suppes, principles of mathematics, Shoenfield, Moschavakis and Jech, ruliality and perfection, modern mathematics.
On 4/20/2026 4:08 AM, Mikko wrote:
On 19/04/2026 19:48, olcott wrote:
On 4/19/2026 5:01 AM, Mikko wrote:
On 18/04/2026 16:01, olcott wrote:
On 4/18/2026 4:11 AM, Mikko wrote:
On 17/04/2026 16:56, olcott wrote:
On 4/17/2026 1:38 AM, Mikko wrote:
On 16/04/2026 18:20, olcott wrote:
(1) Progressively make the initial prompt more
unequivocal and succinct across five different LLMs.
I use ChatGPT 5.3, Claude AI Sonnet 4.6 Extended,
Grok Expert, Gemini Pro, Copilot Think deeper
and occasionally NotebookLM for Deep Research
and deep analysis of specific documents.
(2) Once initial prompt is unequivocal and succinct
across five different LLMs then test for consensus.
(3) Once consensus is achieved carefully examine
actual verbiage of key source documents. For
academic research this involves direct quotes from
foundational peer reviewed papers.
How do you know what is the best way or even a good way for
academic research?
LLMs are like a guy with a PhD in everything yet
are a little senile. They were able to look at my
ideas from a computer science, mathematics, logic,
linguistics frame of reference which very few
people can do.
On top of this they were able to fully integrate
every alternative philosophical foundation of each
of these fields not merely the conventional views.
This is what transformed Olcott's system into
Olcott's Proof Theoretic Semantics system.
The best that humans can do is one technical field
combined with one alternative philosophical foundation.
To sum this up LLMs have an enormously broader
perspective than any human. That is what makes them
better for research.
That you don't answer the question is a strong indication that
you are just speculating.
Whenever my answer is self-evidently true you treat
it as not answering at all.
A self-evidently true "answer" that does not answer the question
is not an answer and leaves the question unanswered.
Using LLMs for brainstorming is empirically verifiable as
very effective. The proof of this is not any sequence of
steps, it is: "try it for yourself and see".
That does not answer the question nor justify false claims about
other people.
You are looking for a sequence of inference steps
that can only justify an assertion through first-hand
direct experience.
Unlike humans, LLMs have relatively deep knowledge across
every domain. No one on any of the dozens and dozens of
forums that I was on ever had the slightest clue about
alternative foundations of semantics besides model theory.
Whether what you propose is a good way to do academic research is
not self-evident. It is even far from obvious how one can and should
compare ways to do and quality of academic research.
Seems that you are merely speculating.
To exactly what degree have you (or anyone else) carefully studied
all of the alternative philosophical foundations of math?
On 20/04/2026 20:19, olcott wrote:
On 4/20/2026 4:08 AM, Mikko wrote:
On 19/04/2026 19:48, olcott wrote:
On 4/19/2026 5:01 AM, Mikko wrote:
On 18/04/2026 16:01, olcott wrote:
On 4/18/2026 4:11 AM, Mikko wrote:
On 17/04/2026 16:56, olcott wrote:
On 4/17/2026 1:38 AM, Mikko wrote:
On 16/04/2026 18:20, olcott wrote:
(1) Progressively make the initial prompt more
unequivocal and succinct across five different LLMs.
I use ChatGPT 5.3, Claude AI Sonnet 4.6 Extended,
Grok Expert, Gemini Pro, Copilot Think deeper
and occasionally NotebookLM for Deep Research
and deep analysis of specific documents.
(2) Once initial prompt is unequivocal and succinct
across five different LLMs then test for consensus.
(3) Once consensus is achieved carefully examine
actual verbiage of key source documents. For
academic research this involves direct quotes from
foundational peer reviewed papers.
How do you know what is the best way or even a good way for
academic research?
LLMs are like a guy with a PhD in everything yet
are a little senile. They were able to look at my
ideas from a computer science, mathematics, logic,
linguistics frame of reference which very few
people can do.
On top of this they were able to fully integrate
every alternative philosophical foundation of each
of these fields not merely the conventional views.
This is what transformed Olcott's system into
Olcott's Proof Theoretic Semantics system.
The best that humans can do is one technical field
combined with one alternative philosophical foundation.
To sum this up LLMs have an enormously broader
perspective than any human. That is what makes them
better for research.
That you don't answer the question is a strong indication that
you are just speculating.
Whenever my answer is self-evidently true you treat
it as not answering at all.
A self-evidently true "answer" that does not answer the question
is not an answer and leaves the question unanswered.
Using LLMs for brainstorming is empirically verifiable as
very effective. The proof of this is not any sequence of
steps, it is: "try it for yourself and see".
That does not answer the question nor justify false claims about
other people.
You are looking for a sequence of inference steps
that can only justify an assertion through first-hand
direct experience.
It is not a good idea to lie about other people. It is better to not
even mention other people except as authors of quoted texts.
Unlike humans, LLMs have relatively deep knowledge across
every domain. No one on any of the dozens and dozens of
forums that I was on ever had the slightest clue about
alternative foundations of semantics besides model theory.
Whether what you propose is a good way to do academic research is
not self-evident. It is even far from obvious how one can and should
compare ways to do and quality of academic research.
Seems that you are merely speculating.
To exactly what degree have you (or anyone else) carefully studied
all of the alternative philosophical foundations of math?
The philosophical foundations of math is off topic in most groups
that message was posted to. Only parts of the math itself are
interesting.
On 4/21/2026 1:56 AM, Mikko wrote:
On 20/04/2026 20:19, olcott wrote:
On 4/20/2026 4:08 AM, Mikko wrote:
On 19/04/2026 19:48, olcott wrote:
On 4/19/2026 5:01 AM, Mikko wrote:
On 18/04/2026 16:01, olcott wrote:
On 4/18/2026 4:11 AM, Mikko wrote:
On 17/04/2026 16:56, olcott wrote:
On 4/17/2026 1:38 AM, Mikko wrote:
On 16/04/2026 18:20, olcott wrote:
(1) Progressively make the initial prompt more
unequivocal and succinct across five different LLMs.
I use ChatGPT 5.3, Claude AI Sonnet 4.6 Extended,
Grok Expert, Gemini Pro, Copilot Think deeper
and occasionally NotebookLM for Deep Research
and deep analysis of specific documents.
(2) Once initial prompt is unequivocal and succinct
across five different LLMs then test for consensus.
(3) Once consensus is achieved carefully examine
actual verbiage of key source documents. For
academic research this involves direct quotes from
foundational peer reviewed papers.
How do you know what is the best way or even a good way for >>>>>>>>>> academic research?
LLMs are like a guy with a PhD in everything yet
are a little senile. They were able to look at my
ideas from a computer science, mathematics, logic,
linguistics frame of reference which very few
people can do.
On top of this they were able to fully integrate
every alternative philosophical foundation of each
of these fields not merely the conventional views.
This is what transformed Olcott's system into
Olcott's Proof Theoretic Semantics system.
The best that humans can do is one technical field
combined with one alternative philosophical foundation.
To sum this up LLMs have an enormously broader
perspective than any human. That is what makes them
better for research.
That you don't answer the question is a strong indication that >>>>>>>> you are just speculating.
Whenever my answer is self-evidently true you treat
it as not answering at all.
A self-evidently true "answer" that does not answer the question
is not an answer and leaves the question unanswered.
Using LLMs for brainstorming is empirically verifiable as
very effective. The proof of this is not any sequence of
steps, it is: "try it for yourself and see".
That does not answer the question nor justify false claims about
other people.
You are looking for a sequence of inference steps
that can only justify an assertion through first-hand
direct experience.
It is not a good idea to lie about other people. It is better to not
even mention other people except as authors of quoted texts.
Unlike humans, LLMs have relatively deep knowledge across
every domain. No one on any of the dozens and dozens of
forums that I was on ever had the slightest clue about
alternative foundations of semantics besides model theory.
Whether what you propose is a good way to do academic research is
not self-evident. It is even far from obvious how one can and should
compare ways to do and quality of academic research.
Seems that you are merely speculating.
To exactly what degree have you (or anyone else) carefully studied
all of the alternative philosophical foundations of math?
The philosophical foundations of math is off topic in most groups
that message was posted to. Only parts of the math itself are
interesting.
The philosophical foundations of math are especially relevant
to these technical groups because proof theoretic semantics
corrects the inherent incoherence of current foundations
and thereby makes
"true on the basis of meaning expressed in language"
reliably computable for the entire body of knowledge.
Without these changes this is not possible.
This issue has become a survival of the species thing
when we extrapolate the long term climate equilibrium
results of climate change. The IPCC only looks at the
very short term.
We must make knowledge computable
to counter-act the hired liars.
On 22/04/2026 10:53, olcott wrote:
On 4/22/2026 2:29 AM, Mikko wrote:
On 21/04/2026 16:42, olcott wrote:
On 4/21/2026 1:56 AM, Mikko wrote:
On 20/04/2026 20:19, olcott wrote:
On 4/20/2026 4:08 AM, Mikko wrote:
On 19/04/2026 19:48, olcott wrote:
On 4/19/2026 5:01 AM, Mikko wrote:
On 18/04/2026 16:01, olcott wrote:
On 4/18/2026 4:11 AM, Mikko wrote:
On 17/04/2026 16:56, olcott wrote:
On 4/17/2026 1:38 AM, Mikko wrote:
On 16/04/2026 18:20, olcott wrote:
(1) Progressively make the initial prompt more
unequivocal and succinct across five different LLMs.
I use ChatGPT 5.3, Claude AI Sonnet 4.6 Extended,
Grok Expert, Gemini Pro, Copilot Think deeper
and occasionally NotebookLM for Deep Research
and deep analysis of specific documents.
(2) Once initial prompt is unequivocal and succinct
across five different LLMs then test for consensus.
(3) Once consensus is achieved carefully examine
actual verbiage of key source documents. For
academic research this involves direct quotes from
foundational peer reviewed papers.
How do you know what is the best way or even a good way for
academic research?
LLMs are like a guy with a PhD in everything yet
are a little senile. They were able to look at my
ideas from a computer science, mathematics, logic,
linguistics frame of reference which very few
people can do.
On top of this they were able to fully integrate
every alternative philosophical foundation of each
of these fields not merely the conventional views.
This is what transformed Olcott's system into
Olcott's Proof Theoretic Semantics system.
The best that humans can do is one technical field
combined with one alternative philosophical foundation.
To sum this up LLMs have an enormously broader
perspective than any human. That is what makes them
better for research.
That you don't answer the question is a strong indication that
you are just speculating.
Whenever my answer is self-evidently true you treat
it as not answering at all.
A self-evidently true "answer" that does not answer the question
is not an answer and leaves the question unanswered.
Using LLMs for brainstorming is empirically verifiable as
very effective. The proof of this is not any sequence of
steps, it is: "try it for yourself and see".
That does not answer the question nor justify false claims about
other people.
You are looking for a sequence of inference steps
that can only justify an assertion through first-hand
direct experience.
It is not a good idea to lie about other people. It is better to not
even mention other people except as authors of quoted texts.
Unlike humans, LLMs have relatively deep knowledge across
every domain. No one on any of the dozens and dozens of
forums that I was on ever had the slightest clue about
alternative foundations of semantics besides model theory.
Whether what you propose is a good way to do academic research is
not self-evident. It is even far from obvious how one can and should
compare ways to do and quality of academic research.
Seems that you are merely speculating.
To exactly what degree have you (or anyone else) carefully studied
all of the alternative philosophical foundations of math?
The philosophical foundations of math is off topic in most groups
that message was posted to. Only parts of the math itself are
interesting.
The philosophical foundations of math are especially relevant
to these technical groups because proof theoretic semantics
corrects the inherent incoherence of current foundations
and thereby makes
"true on the basis of meaning expressed in language"
reliably computable for the entire body of knowledge.
Without these changes this is not possible.
This issue has become a survival of the species thing
when we extrapolate the long term climate equilibrium
results of climate change. The IPCC only looks at the
very short term.
We must make knowledge computable
to counter-act the hired liars.
No, that does not make it relevant. Only things that have practical
consequences are relevant in technical groups.
So if the technical groups are not interested then
that makes it OK for hired liars to kill the planet
in the hot pursuit of one more dollar bill?
That's another question that is off topic in all groups that the
message was posted to.
To counter-act any liars the real world semantics matter the most.
Other semantics may serve as a tool but usually syntax oriented
tools are better.
I am making semantics into a coherent system of
provably correct reasoning.
Perhaps you are trying to make. There is no reason to think that you
will succeed or even approach the goal.
So far your aim seems to be lies that are disconnected from the real
world. Though that hardly matters as you are not approaching even
that goal.
Lies that are disconnected from the real world are less common and
less harmful.
On 4/23/2026 1:44 AM, Mikko wrote:
On 22/04/2026 10:53, olcott wrote:
On 4/22/2026 2:29 AM, Mikko wrote:
On 21/04/2026 16:42, olcott wrote:
On 4/21/2026 1:56 AM, Mikko wrote:
On 20/04/2026 20:19, olcott wrote:
On 4/20/2026 4:08 AM, Mikko wrote:
On 19/04/2026 19:48, olcott wrote:
On 4/19/2026 5:01 AM, Mikko wrote:
On 18/04/2026 16:01, olcott wrote:
On 4/18/2026 4:11 AM, Mikko wrote:
On 17/04/2026 16:56, olcott wrote:
On 4/17/2026 1:38 AM, Mikko wrote:
On 16/04/2026 18:20, olcott wrote:
(1) Progressively make the initial prompt more
unequivocal and succinct across five different LLMs.
I use ChatGPT 5.3, Claude AI Sonnet 4.6 Extended,
Grok Expert, Gemini Pro, Copilot Think deeper
and occasionally NotebookLM for Deep Research
and deep analysis of specific documents.
(2) Once the initial prompt is unequivocal and succinct
across five different LLMs then test for consensus.
(3) Once consensus is achieved carefully examine the
actual verbiage of key source documents. For
academic research this involves direct quotes from
foundational peer reviewed papers.
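The three-step loop above (refine a prompt, re-run it across several models, then test whether they agree) can be sketched in code. This is only an illustrative sketch of the consensus-checking step: the model names and answers below are hypothetical placeholders, not calls to any real API.

```python
from collections import Counter

def consensus(answers: dict, threshold: float = 1.0) -> bool:
    """Return True when the fraction of models giving the most common
    (case/whitespace-normalized) answer meets the threshold."""
    normalized = [a.strip().lower() for a in answers.values()]
    if not normalized:
        return False
    most_common_count = Counter(normalized).most_common(1)[0][1]
    return most_common_count / len(normalized) >= threshold

# Hypothetical answers collected from five models for one prompt:
answers = {
    "ChatGPT": "yes",
    "Claude": "Yes",
    "Grok": "yes",
    "Gemini": "yes",
    "Copilot": "yes",
}
print(consensus(answers))  # True: all five normalize to the same answer
```

With `threshold=1.0` the loop would continue refining the prompt until every model's normalized answer matches; a lower threshold would accept majority agreement.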
How do you know what is the best way or even a good way for
academic research?
LLMs are like a guy with a PhD in everything yet
are a little senile. They were able to look at my
ideas from a computer science, mathematics, logic,
linguistics frame of reference which very few
people can do.
On top of this they were able to fully integrate
every alternative philosophical foundation of each
of these fields not merely the conventional views.
This is what transformed Olcott's system into
Olcott's Proof Theoretic Semantics system.
The best that humans can do is one technical field
combined with one alternative philosophical foundation.
To sum this up, LLMs have an enormously broader
perspective than any human. That is what makes them
better for research.
That you don't answer the question is a strong indication that
you are just speculating.
Whenever my answer is self-evidently true you treat
it as not answering at all.
A self-evidently true "answer" that does not answer the question
is not an answer and leaves the question unanswered.
Using LLMs for brainstorming is empirically verifiable as
very effective. The proof of this is not any sequence of
steps, it is: "try it for yourself and see".
That does not answer the question nor justify false claims about
other people.
You are looking for a sequence of inference steps
to justify an assertion that can only be justified
through first-hand direct experience.
It is not a good idea to lie about other people. It is better to not
even mention other people except as authors of quoted texts.
Seems that you are merely speculating.
Unlike humans, LLMs have relatively deep knowledge across
every domain. No one on any of the dozens and dozens of
forums that I was on ever had the slightest clue about
alternative foundations of semantics besides model theory.
Whether what you propose is a good way to do academic research
is not self-evident. It is even far from obvious how one can
and should compare ways to do, and quality of, academic research.
To exactly what degree have you (or anyone else) carefully studied
all of the alternative philosophical foundations of math?
The philosophical foundations of math are off topic in most groups
that message was posted to. Only parts of the math itself are
interesting.
philosophical foundations of math, it is especially relevant
to these technical groups because proof theoretic semantics
corrects the inherent incoherence of current foundations,
making
"true on the basis of meaning expressed in language"
reliably computable for the entire body of knowledge.
Without these changes this is not possible.
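One toy reading of "true on the basis of meaning expressed in language" is truth as derivability: a statement counts as true when it can be derived from known statements by inference rules. The sketch below is a minimal forward-chaining engine illustrating that reading only; it is not Olcott's actual system, and the facts and rules are hypothetical examples.

```python
def derivable(facts: set, rules: list) -> set:
    """Forward-chain: repeatedly apply rules (premises -> conclusion)
    until no new statements appear; 'true' here means derivable."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            # A rule fires when all its premises are already known.
            if premises <= known and conclusion not in known:
                known.add(conclusion)
                changed = True
    return known

facts = {"Socrates is a man"}
rules = [(frozenset({"Socrates is a man"}), "Socrates is mortal")]
print("Socrates is mortal" in derivable(facts, rules))  # True
```

On this toy reading, a statement not derivable from the knowledge base is simply not established as true, rather than being an undecidable case.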
This issue has become a survival of the species thing
when we extrapolate the long term climate equilibrium
results of climate change. The IPCC only looks at the
very short term.
We must make knowledge computable
to counteract the hired liars.
No, that does not make it relevant. Only things that have practical
consequences are relevant in technical groups.
So if the technical groups are not interested then
that makes it OK for hired liars to kill the planet
in the hot pursuit of one more dollar bill?
That's another question that is off topic in all groups that message
was posted to.
I am just saying that the cost of mindless rebuttal
is the survival of life on Earth. The silly game of
disagreeing with whatever I say has lethal consequences.
To counteract any liars, real-world semantics matter the most.
Other semantics may serve as a tool, but usually syntax-oriented
tools are better.
I am making semantics into a coherent system of
provably correct reasoning.
Perhaps you are trying to. There is no reason to think that you
will succeed or even approach the goal.
I have proved that this system does get rid of
undecidability for the entire body of knowledge
expressed in language.
No one wants to bother to pay enough attention to see this.
So far your aim seems to be lies that are disconnected from the real
world. Lies that are disconnected from the real world are less common
and less harmful. Though that hardly matters, as you are not
approaching even that goal.