On 4/16/2026 12:17 PM, Ross Finlayson wrote:
> On 04/16/2026 08:20 AM, olcott wrote:
>> (1) Progressively make the initial prompt more
>> unequivocal and succinct across five different LLMs.
>> I use ChatGPT 5.3, Claude AI Sonnet 4.6 Extended,
>> Grok Expert, Gemini Pro, Copilot Think Deeper,
>> and occasionally NotebookLM for Deep Research
>> and deep analysis of specific documents.
>> (2) Once the initial prompt is unequivocal and succinct
>> across five different LLMs, then test for consensus.
>> (3) Once consensus is achieved, carefully examine the
>> actual verbiage of key source documents. For
>> academic research this involves direct quotes from
>> foundational peer-reviewed papers.
> Maybe you should figure more how it's "univocal" than "unequivocal".
By "unequivocal" I only mean that every LLM takes the
prompt to mean exactly the same thing, after as many
as hundreds and hundreds of progressive refinements.
Once the prompt has been further refined to achieve
complete consensus across all five LLMs, that is a good
ballpark approximation of "literally unequivocal".
The final test is against foundational peer-reviewed
research written by the well-established leaders in
the field.
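The refine-until-consensus workflow described above can be sketched as a simple loop. This is a hypothetical illustration, not olcott's actual tooling: `ask_llm` is a stub standing in for real vendor API calls, and `refine` is a caller-supplied function that tightens the prompt wording each round.

```python
# Hypothetical sketch of the refine-until-consensus workflow.
# ask_llm() is a stub; a real version would call each vendor's API
# and return that model's paraphrase of what the prompt means.

def ask_llm(model: str, prompt: str) -> str:
    # Stub: every model returns its reading of the prompt.
    return f"interpretation of: {prompt.strip()}"

MODELS = ["ChatGPT", "Claude", "Grok", "Gemini", "Copilot"]

def consensus(prompt: str) -> bool:
    """True when all five models give the same reading of the prompt."""
    readings = {ask_llm(m, prompt) for m in MODELS}
    return len(readings) == 1

def refine_until_consensus(prompt: str, refine, max_rounds: int = 500) -> str:
    """Progressively refine the prompt until every model agrees."""
    for _ in range(max_rounds):
        if consensus(prompt):
            return prompt
        prompt = refine(prompt)  # tighten the wording and try again
    raise RuntimeError("no consensus reached within max_rounds")
```

With real API calls the set of readings would differ between models at first, and the loop would iterate through the "hundreds and hundreds of progressive refinements" the text describes.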
> For example, you can give it an account of what "equality",
> according to Quine according to Russell, "is", and show
> that now it's removed and quite capricious and not very arbitrary.
> I.e., that's readily "equivocated".
> The philo-sophy needs an account of the philo-casuy, or as
> with regards to distinguishing and disambiguating
> the "sophistry" and the "casuistry".
Ultimately my system uses a GUID for each unique
sense meaning of every word.
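A GUID-per-word-sense inventory can be sketched as follows. This is a minimal illustration under assumed names (`SenseRegistry`, `sense_id`, the glosses), not the actual system being described:

```python
import uuid

class SenseRegistry:
    """Hypothetical inventory: each distinct sense of a word gets its own GUID."""

    def __init__(self):
        self._senses = {}  # (word, gloss) -> GUID

    def sense_id(self, word: str, gloss: str) -> uuid.UUID:
        """Return a stable GUID for one sense of a word, minting it on first use."""
        key = (word, gloss)
        if key not in self._senses:
            self._senses[key] = uuid.uuid4()
        return self._senses[key]

reg = SenseRegistry()
bank_river = reg.sense_id("bank", "side of a river")
bank_money = reg.sense_id("bank", "financial institution")
```

Distinct glosses get distinct IDs, so "bank" the landform and "bank" the institution can never be conflated. If reproducible IDs were needed across runs, `uuid.uuid5` with a fixed namespace over `f"{word}:{gloss}"` would give deterministic GUIDs instead of random ones.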
> Or, anybody else's opinion is just as good, and not bad.
> So, "univocity" is a usual account against "the synthetic fragmentation
> into pluralistic accounts of wholes". That's been around forever,
> and is part of the philosophical canon.
Hi,
I did the same using multiple LLMs in the past
few weeks, until ChatGPT degraded: they phased
out the old models, and it's now only 5.x.
You get the effect that four eyes see more than two.
Now, with ChatGPT 5.x, it's kind of one eye and one
eye patch, plus a completely amputated brain.
Bye
P.S.: Maybe the best AI application is this here:
Does your cat bring home “gifts” too?
https://zeromouse.com/