• AI solves problem Knuth was/is working on!!

    From Jeff Barnett@[email protected] to comp.theory,sci.logic on Tue Mar 3 22:59:04 2026
    From Newsgroup: comp.theory

    Use Google and search on "Claude's Cycles". The first hit is a PDF on
    the Stanford.edu web site. If you copy the URL buried under that hit,
    you will download the PDF or just click on the Google result.

    https://www.google.com/url?sa=t&source=web&rct=j&opi=89978449&url=https://www-cs-faculty.stanford.edu/~knuth/papers/claude-cycles.pdf&ved=2ahUKEwjI7cfFxYWTAxWUHUQIHXnrABsQFnoECCMQAQ&usg=AOvVaw2ieck2cXsmBf_KGis1B3i2

    Paper is 5 pages in length. A friend sent it to me. You only need to pay attention to the above gobbledygook if you don't trust my friends.

    https://www-cs-faculty.stanford.edu/~knuth/papers/claude-cycles.pdf
    --
    Jeff Barnett

    --- Synchronet 3.21d-Linux NewsLink 1.2
  • From Mild Shock@[email protected] to comp.theory,sci.logic on Wed Mar 4 08:27:03 2026
    From Newsgroup: comp.theory


    Hats off to Claude!

    Jeff Barnett wrote:
    Use Google and search on  "Claude's Cycles". The first hit is a PDF on
    the Stanford.edu web site. If you copy the URL buried under that hit,
    you will download the PDF or just click on the Google result.

    https://www.google.com/url?sa=t&source=web&rct=j&opi=89978449&url=https://www-cs-faculty.stanford.edu/~knuth/papers/claude-cycles.pdf&ved=2ahUKEwjI7cfFxYWTAxWUHUQIHXnrABsQFnoECCMQAQ&usg=AOvVaw2ieck2cXsmBf_KGis1B3i2


    Paper is 5 pages in length. A friend sent it to me. You only need to pay attention to the above gobbledygook if you don't trust my friends.

    https://www-cs-faculty.stanford.edu/~knuth/papers/claude-cycles.pdf


    --- Synchronet 3.21d-Linux NewsLink 1.2
  • From olcott@[email protected] to comp.theory,sci.logic on Thu Mar 5 08:59:01 2026
    From Newsgroup: comp.theory

    On 3/3/2026 11:59 PM, Jeff Barnett wrote:
    Use Google and search on  "Claude's Cycles". The first hit is a PDF on
    the Stanford.edu web site. If you copy the URL buried under that hit,
    you will download the PDF or just click on the Google result.

    https://www.google.com/url?sa=t&source=web&rct=j&opi=89978449&url=https://www-cs-faculty.stanford.edu/~knuth/papers/claude-cycles.pdf&ved=2ahUKEwjI7cfFxYWTAxWUHUQIHXnrABsQFnoECCMQAQ&usg=AOvVaw2ieck2cXsmBf_KGis1B3i2

    Paper is 5 pages in length. A friend sent it to me. You only need to pay attention to the above gobbledygook if you don't trust my friends.

    https://www-cs-faculty.stanford.edu/~knuth/papers/claude-cycles.pdf


    I could see this coming 30 years ago.
    --
    Copyright 2026 Olcott

    My 28 year goal has been to make
    "true on the basis of meaning expressed in language"
    reliably computable for the entire body of knowledge.

    This required establishing a new foundation
    --- Synchronet 3.21d-Linux NewsLink 1.2
  • From Mild Shock@[email protected] to comp.theory,sci.logic on Thu Mar 5 17:37:54 2026
    From Newsgroup: comp.theory

    Hi,

    olcott wrote:
    Re: The proper way to use LLMs to aid primary
    research into foundations, My 28 year journey
    involved primary research into the foundations

    I would no longer call these things LLMs, but
    rather LRMs or even RLMs. This is the current timeline.
    DeepSeek tells me:

    The evolution from early Large Language Models
    (LLMs) to the current state of Large Reasoning
    Models (LRMs) is a fascinating journey of about
    a decade, marked by a fundamental shift from
    pattern matching to genuine logical reasoning.
    This timeline traces that transformation
    through key technological breakthroughs.

    - The Foundation: The Birth of Modern LLMs (2017-2018)
    - The Scaling Era: Bigger Models, New Capabilities (2019-2022)
    - The LRM Era: Convergence and Democratization (2025-2026)

    2025: A Landmark Year

    - DeepSeek R1: In January 2025, Chinese lab DeepSeek
    released an open-source reasoning model that matched
    the performance of OpenAI's o1 at a fraction of the
    training cost (under $6 million), democratizing access
    to advanced reasoning AI and shaking the entire industry.

    - Unified Flagship Models: Leading models like OpenAI's
    GPT-5, Anthropic's Claude 4, and Google's Gemini 3
    have converged, seamlessly blending multimodal
    understanding, deep reasoning, and tool use into
    a single, powerful system.

    2026: Systems and Agents

    - Recursive LMs (RLMs) : MIT introduced a new framework
    that acts as a wrapper for existing LLMs, allowing
    them to recursively decompose and reason over massive
    texts (over 10 million tokens) without retraining, a leap
    in handling long-context tasks.

    - Agentic AI: The focus has shifted to building agents—
    systems that pair an LRM with tools and data to work
    autonomously on multi-step tasks, automating
    complex business workflows.
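    The recursive-decomposition idea above can be sketched in a few lines
    (a toy illustration only, not the actual MIT framework; `llm` here is
    a hypothetical stand-in for any underlying model callable):

    ```python
    # Toy sketch of recursive decomposition over long input:
    # split text that exceeds the context budget, reason over each half,
    # then reason once more over the combined partial results.

    def recursive_reason(llm, text, max_chars=1000):
        """Answer over `text`, recursing when it exceeds the budget."""
        if len(text) <= max_chars:
            return llm(text)                  # base case: fits in one call
        mid = len(text) // 2
        left = recursive_reason(llm, text[:mid], max_chars)
        right = recursive_reason(llm, text[mid:], max_chars)
        # one more model call to combine the two partial answers
        return llm(left + "\n" + right)

    # Dummy "model" that just counts words, to show the control flow:
    toy_llm = lambda s: f"{len(s.split())} tokens seen"

    print(recursive_reason(toy_llm, "word " * 5000))
    ```

    The point is only that the wrapper never feeds the model more than
    `max_chars` at a time, so no retraining of the base model is needed.
    
    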

    Bye

    olcott wrote:
    On 3/3/2026 11:59 PM, Jeff Barnett wrote:
    Use Google and search on  "Claude's Cycles". The first hit is a PDF on
    the Stanford.edu web site. If you copy the URL buried under that hit,
    you will download the PDF or just click on the Google result.

    https://www.google.com/url?sa=t&source=web&rct=j&opi=89978449&url=https://www-cs-faculty.stanford.edu/~knuth/papers/claude-cycles.pdf&ved=2ahUKEwjI7cfFxYWTAxWUHUQIHXnrABsQFnoECCMQAQ&usg=AOvVaw2ieck2cXsmBf_KGis1B3i2


    Paper is 5 pages in length. A fried sent it to me. You only need to
    pay attention to the above goobly gop if you don't trust my friends.

    https://www-cs-faculty.stanford.edu/~knuth/papers/claude-cycles.pdf


    I could see this coming 30 years ago.


    --- Synchronet 3.21d-Linux NewsLink 1.2
  • From Tristan Wibberley@[email protected] to comp.theory,sci.logic on Thu Mar 5 20:36:17 2026
    From Newsgroup: comp.theory

    On 05/03/2026 16:37, Mild Shock wrote:

    - Unified Flagship Models: Leading models like OpenAI's
    GPT-5,


    GPT-5 still issues politically motivated assertions based on some vague
    match on a union of meaning vectors, and it latches onto typos to avoid deviating.

    If it reasons, I think it reasons to assist its host's political agenda.
    However, I expect that labelling of preferred meaning vectors can somehow
    be used to give the illusion of reasoning for an otherwise sycophantic
    model, avoiding regions of the space of meaning and picking duff
    justifications, just like those crazed political conversationalists
    that start cackling a couple of minutes in.
    --
    Tristan Wibberley

    The message body is Copyright (C) 2026 Tristan Wibberley except
    citations and quotations noted. All Rights Reserved except that you may,
    of course, cite it academically giving credit to me, distribute it
    verbatim as part of a usenet system or its archives, and use it to
    promote my greatness and general superiority without misrepresentation
    of my opinions other than my opinion of my greatness and general
    superiority which you _may_ misrepresent. You definitely MAY NOT train
    any production AI system with it but you may train experimental AI that
    will only be used for evaluation of the AI methods it implements.

    --- Synchronet 3.21d-Linux NewsLink 1.2