• word choice survey

    From dart200@[email protected] to comp.theory on Thu Nov 6 15:07:03 2025
    From Newsgroup: comp.theory

    i've been using the words context-dependent, context-sensitive, and context-aware interchangeably ...

    which one do you guys think is best for the context-based computation
    i'm trying to formalize?
    --
    a burnt out swe investigating why our tooling doesn't involve
    basic semantic proofs like halting analysis

    please excuse my pseudo-pyscript,

    ~ nick

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Jeff Barnett@[email protected] to comp.theory on Fri Nov 7 01:45:40 2025
    From Newsgroup: comp.theory

    On 11/6/2025 4:07 PM, dart200 wrote:
    i've been using the words context-dependent, context-sensitive, and context-aware interchangeably ...

    which one do you guys think is best for the context-based computation
    i'm trying to formalize?
    Context-sensitive is a formally defined adjective describing languages
    that can be recognized (yes, no) by linear bounded automata. In the
    Chomsky hierarchy, this class of languages sits between those that can
    be recognized by nondeterministic pushdown automata and those that
    require Turing machines.

    Informal technical conversation has used context-dependent as a
    virtual synonym for many, many decades. Though I've heard and used the
    term context-aware for a fair while, I could imagine objections from a
    pedant because it seems to blatantly involve an "aware" agent. Are you
    talking about a language, a meta language, a machine, a class of
    machines, etc.?

    BTW: I believe that it has only recently been settled whether the
    classes of deterministic and nondeterministic linear bounded automata
    recognize the same set of languages or not. I have forgotten the
    outcome, but if they are different, so might be the definition of
    context-sensitive languages.
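
    A concrete touchstone for the class described above: the textbook
    example of a language that is context-sensitive but not context-free
    is { a^n b^n c^n : n >= 1 }. A minimal membership check, sketched in
    Python (illustrative only, not a linear bounded automaton construction):

    ```python
    def in_anbncn(s: str) -> bool:
        """Membership test for {a^n b^n c^n : n >= 1}, the classic
        context-sensitive (non-context-free) language.

        A linear bounded automaton can decide this within the space of
        the input itself; here we just compare against the canonical
        form, which suffices for membership.
        """
        n = len(s) // 3
        return n >= 1 and len(s) == 3 * n and s == "a" * n + "b" * n + "c" * n

    assert in_anbncn("abc")
    assert in_anbncn("aaabbbccc")
    assert not in_anbncn("aabbc")   # block counts disagree
    assert not in_anbncn("abcabc")  # wrong block order
    ```

    A real LBA would decide this in place on the tape cells occupied by
    the input; the sketch only shows what membership in the class looks like.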
    --
    Jeff Barnett

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Tristan Wibberley@[email protected] to comp.theory on Fri Nov 7 15:40:55 2025
    From Newsgroup: comp.theory

    On 07/11/2025 08:45, Jeff Barnett wrote:
    On 11/6/2025 4:07 PM, dart200 wrote:
    i've been using the words context-dependent, context-sensitive, and
    context-aware interchangeably ...

    which one do you guys think is best for the context-based computation
    i'm trying to formalize?

    "dynamically-closed", maybe.

    --
    Tristan Wibberley

    The message body is Copyright (C) 2025 Tristan Wibberley except
    citations and quotations noted. All Rights Reserved except that you may,
    of course, cite it academically giving credit to me, distribute it
    verbatim as part of a usenet system or its archives, and use it to
    promote my greatness and general superiority without misrepresentation
    of my opinions other than my opinion of my greatness and general
    superiority which you _may_ misrepresent. You definitely MAY NOT train
    any production AI system with it but you may train experimental AI that
    will only be used for evaluation of the AI methods it implements.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@[email protected] to comp.theory on Fri Nov 7 10:09:44 2025
    From Newsgroup: comp.theory

    On 11/6/2025 5:07 PM, dart200 wrote:
    i've been using the words context-dependent, context-sensitive, and context-aware interchangeably ...

    which one do you guys think is best for the context-based computation
    i'm trying to formalize?


    The first two are similar; the third one is
    different.

    I refer to the first two as a sequence of
    steps that depend on their execution context.

    When I refer to execution context I mean the
    entire state of the machine prior to the
    execution of the sequence of steps.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Tristan Wibberley@[email protected] to comp.theory on Fri Nov 7 16:14:47 2025
    From Newsgroup: comp.theory

    On 07/11/2025 16:09, olcott wrote:
    When I refer to execution context I mean the
    entire state of the machine prior to the
    execution of the sequence of steps.

    Be aware, "context of execution" is a fairly well-constrained term
    referring to a long-known concept (related to the modern "green
    threads" and Microsoft "Fibers"); see the man pages for POSIX
    "makecontext".

    --
    Tristan Wibberley


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@[email protected] to comp.theory on Fri Nov 7 10:40:54 2025
    From Newsgroup: comp.theory

    On 11/7/2025 10:14 AM, Tristan Wibberley wrote:
    On 07/11/2025 16:09, olcott wrote:
    When I refer to execution context I mean the
    entire state of the machine prior to the
    execution of the sequence of steps.

    Be aware, "context of execution" is a fairly well-constrained term referring to a long-known concept (related to the modern "green threads" and Microsoft "Fibers"); see the man pages for POSIX "makecontext".


    https://en.wikipedia.org/wiki/Execution_(computing)#Context_of_execution
    gives the meaning of the term as used when context switching is referenced.

    It is essentially the full state of the machine at a specific
    point in its execution. I wrote the context-switching cooperative
    multitasking operating system of x86utm. Its context is its set
    of virtual machine registers and its own private stack.

    Line 1445 of https://github.com/plolcott/x86utm/blob/master/x86utm.cpp
    begins SaveState() followed by LoadState(), which does the context
    switching.
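
    The save/load cycle described above can be sketched compactly as a toy
    cooperative round-robin scheduler in Python, with generators standing
    in for each task's private stack and register set (names here are
    illustrative, not x86utm's):

    ```python
    from collections import deque

    def task(name, steps):
        # Each generator keeps its own private stack frame; suspending
        # at `yield` is the analog of SaveState(), resuming is LoadState().
        for i in range(steps):
            yield f"{name} step {i}"

    def run(tasks):
        """Cooperative round-robin context switching over generator tasks."""
        ready = deque(tasks)
        trace = []
        while ready:
            current = ready.popleft()        # load the next saved context
            try:
                trace.append(next(current))  # run until it yields control
                ready.append(current)        # save it at the back of the queue
            except StopIteration:
                pass                         # task reached its final state
        return trace

    print(run([task("A", 2), task("B", 1)]))
    # → ['A step 0', 'B step 0', 'A step 1']
    ```

    Cooperative means each task decides when to give up the CPU, which is
    exactly the x86utm-style multitasking being described.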


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Tristan Wibberley@[email protected] to comp.theory on Fri Nov 7 19:53:48 2025
    From Newsgroup: comp.theory

    On 07/11/2025 16:40, olcott wrote:
    On 11/7/2025 10:14 AM, Tristan Wibberley wrote:

    Be aware, "context of execution" is a term referring, pretty
    well-constrained, to a long-known concept (related to the modern
    "green-threads" and Microsoft "Fibers"), see man pages of posix
    "make_context".


    https://en.wikipedia.org/wiki/Execution_(computing)#Context_of_execution
    Is the meaning of the term when context switching is referenced.

    That description is incomplete; it has been written from an OS
    abstraction perspective, which is far from the whole story. It's very
    suspicious, because Wikipedia used to get such things right.


    It is essentially the full state of the machine at a specific
    point in its execution.

    No. Please see the man page for POSIX makecontext. It's clear that
    your sources are not definitive. You should be in a position to expect
    that a range of meanings is both available and important.


    --
    Tristan Wibberley


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@[email protected] to comp.theory on Fri Nov 7 14:00:36 2025
    From Newsgroup: comp.theory

    On 11/7/2025 1:53 PM, Tristan Wibberley wrote:
    On 07/11/2025 16:40, olcott wrote:
    On 11/7/2025 10:14 AM, Tristan Wibberley wrote:

    Be aware, "context of execution" is a fairly well-constrained term
    referring to a long-known concept (related to the modern "green
    threads" and Microsoft "Fibers"); see the man pages for POSIX
    "makecontext".


    https://en.wikipedia.org/wiki/Execution_(computing)#Context_of_execution
    gives the meaning of the term as used when context switching is referenced.

    That description is incomplete; it has been written from an OS
    abstraction perspective, which is far from the whole story. It's very
    suspicious, because Wikipedia used to get such things right.


    It is essentially the full state of the machine at a specific
    point in its execution.

    No. Please see the man page for POSIX makecontext. It's clear that
    your sources are not definitive. You should be in a position to expect
    that a range of meanings is both available and important.


    In my specific case D simulated by H specifies a different
    sequence of steps than D executed from main because they
    are executed in different contexts.

    D simulated by H requires H to simulate itself simulating
    D such that the simulated D never reaches its final halt
    state. D simulated by H1 halts.
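
    For readers following along, the H/D construction under discussion has
    the shape of the standard diagonal program; a Python sketch with a
    deliberately unimplemented, hypothetical H (as the standard argument
    goes, no total halt decider can exist):

    ```python
    def H(program, argument):
        """Hypothetical halt decider: True iff program(argument) halts.

        Left unimplemented on purpose: Turing's diagonal argument shows
        that no total implementation of this signature can exist.
        """
        raise NotImplementedError("no total halt decider exists")

    def D(p):
        # The diagonal case: D does the opposite of whatever H predicts.
        if H(p, p):
            while True:      # H said "halts", so loop forever
                pass
        # H said "loops", so halt immediately

    # Asking H about D applied to itself is the contradictory instance:
    # neither H(D, D) == True nor H(D, D) == False can be consistent.
    ```

    The thread's dispute is precisely about whether "D simulated by H" and
    "D executed directly" are the same instance of this construction.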



    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@[email protected] to comp.theory on Fri Nov 7 15:01:35 2025
    From Newsgroup: comp.theory

    On 11/7/2025 1:53 PM, Tristan Wibberley wrote:
    On 07/11/2025 16:40, olcott wrote:
    On 11/7/2025 10:14 AM, Tristan Wibberley wrote:

    Be aware, "context of execution" is a fairly well-constrained term
    referring to a long-known concept (related to the modern "green
    threads" and Microsoft "Fibers"); see the man pages for POSIX
    "makecontext".


    https://en.wikipedia.org/wiki/Execution_(computing)#Context_of_execution
    gives the meaning of the term as used when context switching is referenced.

    That description is incomplete; it has been written from an OS
    abstraction perspective, which is far from the whole story. It's very
    suspicious, because Wikipedia used to get such things right.


    It is essentially the full state of the machine at a specific
    point in its execution.

    No. Please see the man page for POSIX makecontext. It's clear that
    your sources are not definitive. You should be in a position to expect
    that a range of meanings is both available and important.


    That is the second time that posting has failed on
    Eternal September today.

    I was using the term "execution context" as
    meaning the same thing as "process context"
    in context switching.

    I did this because that is the compositional meaning
    of the terms "execution" and "context".



    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From dbush@[email protected] to comp.theory on Fri Nov 7 16:01:58 2025
    From Newsgroup: comp.theory

    On 11/7/2025 3:57 PM, olcott wrote:
    On 11/7/2025 1:53 PM, Tristan Wibberley wrote:
    On 07/11/2025 16:40, olcott wrote:
    On 11/7/2025 10:14 AM, Tristan Wibberley wrote:

    Be aware, "context of execution" is a term referring, pretty
    well-constrained, to a long-known concept (related to the modern
    "green-threads" and Microsoft "Fibers"), see man pages of posix
    "make_context".


    https://en.wikipedia.org/wiki/Execution_(computing)#Context_of_execution
    gives the meaning of the term as used when context switching is referenced.

    That description is incomplete; it has been written from an OS
    abstraction perspective, which is far from the whole story. It's very
    suspicious, because Wikipedia used to get such things right.


    It is essentially the full state of the machine at a specific
    point in its execution.

    No. Please see the man page for POSIX makecontext. It's clear that
    your sources are not definitive. You should be in a position to expect
    that a range of meanings is both available and important.


    In my specific case D simulated by H specifies a different
    sequence of steps than D executed from main because they
    are executed in different contexts.

    False, as the instructions being simulated and the state of those
    instructions are exactly the same for algorithm H and algorithm H1 up
    to the point that algorithm H aborts. The directly executed algorithm
    H performing the simulation is not part of the simulation, and
    therefore neither is the state of the directly executed algorithm H.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Tristan Wibberley@[email protected] to comp.theory on Fri Nov 7 21:19:34 2025
    From Newsgroup: comp.theory

    On 07/11/2025 21:01, dbush wrote:
    On 11/7/2025 3:57 PM, olcott wrote:

    In my specific case D simulated by H specifies a different
    sequence of steps than D executed from main because they
    are executed in different contexts.

    False, as the instruction being simulated and the state of those
    instructions are exactly the same for algorithm H and algorithm H1 up to
    the point that algorithm H aborts.  The directly executed algorithm H performing the simulation is not part of the simulation and therefore
    neither is the state of the directly executed algorithm H.

    In his case, which satisfies his stated constraint, directly executed is
    on a [ordinary system administrator term] real machine and simulated is
    on a [ordinary system administrator term] virtual machine. The
    constraint allows them to be different and to have reflection and I
    believe Olcott has such computers.

    --
    Tristan Wibberley


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@[email protected] to comp.theory on Fri Nov 7 15:24:14 2025
    From Newsgroup: comp.theory

    On 11/7/2025 3:19 PM, Tristan Wibberley wrote:
    On 07/11/2025 21:01, dbush wrote:
    On 11/7/2025 3:57 PM, olcott wrote:

    In my specific case D simulated by H specifies a different
    sequence of steps than D executed from main because they
    are executed in different contexts.

    False, as the instruction being simulated and the state of those
    instructions are exactly the same for algorithm H and algorithm H1 up to
    the point that algorithm H aborts.  The directly executed algorithm H
    performing the simulation is not part of the simulation and therefore
    neither is the state of the directly executed algorithm H.

    In his case, which satisfies his stated constraint, directly executed is
    on a [ordinary system administrator term] real machine and simulated is
    on a [ordinary system administrator term] virtual machine. The
    constraint allows them to be different and to have reflection and I
    believe Olcott has such computers.


    I welcome you as an honest and seemingly extremely
    competent reviewer.


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From dbush@[email protected] to comp.theory on Fri Nov 7 16:56:47 2025
    From Newsgroup: comp.theory

    On 11/7/2025 4:19 PM, Tristan Wibberley wrote:
    On 07/11/2025 21:01, dbush wrote:
    On 11/7/2025 3:57 PM, olcott wrote:

    In my specific case D simulated by H specifies a different
    sequence of steps than D executed from main because they
    are executed in different contexts.

    False, as the instruction being simulated and the state of those
    instructions are exactly the same for algorithm H and algorithm H1 up to
    the point that algorithm H aborts.  The directly executed algorithm H
    performing the simulation is not part of the simulation and therefore
    neither is the state of the directly executed algorithm H.

    In his case, which satisfies his stated constraint, directly executed is
    on a [ordinary system administrator term] real machine and simulated is
    on a [ordinary system administrator term] virtual machine. The
    constraint allows them to be different and to have reflection and I
    believe Olcott has such computers.

    If reflection is part of his constraint, then it is outside the realm
    of Turing machines and the halting problem. By definition, a correct
    simulation exactly replicates the behavior of the machine being
    simulated.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Chris M. Thomasson@[email protected] to comp.theory on Fri Nov 7 14:32:25 2025
    From Newsgroup: comp.theory

    On 11/6/2025 3:07 PM, dart200 wrote:
    i've been using the words context-dependent, context-sensitive, and context-aware interchangeably ...

    which one do you guys think is best for the context-based computation
    i'm trying to formalize?


    Try to avoid telling others to kill themselves... ?
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Chris M. Thomasson@[email protected] to comp.theory on Fri Nov 7 14:40:48 2025
    From Newsgroup: comp.theory

    On 11/7/2025 11:53 AM, Tristan Wibberley wrote:
    On 07/11/2025 16:40, olcott wrote:
    On 11/7/2025 10:14 AM, Tristan Wibberley wrote:

    Be aware, "context of execution" is a term referring, pretty
    well-constrained, to a long-known concept (related to the modern
    "green-threads" and Microsoft "Fibers"), see man pages of posix
    "makecontext".
    [...]

    I think Olcott can use fibers/green-threads quite easily to model DD.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@[email protected] to comp.theory on Fri Nov 7 16:53:18 2025
    From Newsgroup: comp.theory

    On 11/7/2025 1:53 PM, Tristan Wibberley wrote:
    On 07/11/2025 16:40, olcott wrote:
    On 11/7/2025 10:14 AM, Tristan Wibberley wrote:

    Be aware, "context of execution" is a fairly well-constrained term
    referring to a long-known concept (related to the modern "green
    threads" and Microsoft "Fibers"); see the man pages for POSIX
    "makecontext".


    https://en.wikipedia.org/wiki/Execution_(computing)#Context_of_execution
    gives the meaning of the term as used when context switching is referenced.

    That description is incomplete; it has been written from an OS
    abstraction perspective, which is far from the whole story. It's very
    suspicious, because Wikipedia used to get such things right.


    The execution context of a task contains the code that’s currently
    running (the instruction pointer) and everything that aids in its
    execution on a CPU core (CPU flags, key registers, variables, open
    files, connections, etc.);

    it must be loaded back into the processor before the code resumes execution

    https://medium.com/@vikas.taank_40391/understanding-context-switching-b1e2ec5d216e
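
    The list quoted above maps directly onto the structure a context
    switch saves and restores; a minimal Python sketch (field names and
    the toy CPU are illustrative, not any real OS's layout):

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class ExecutionContext:
        """Illustrative snapshot of the state a context switch preserves."""
        instruction_pointer: int
        cpu_flags: int
        registers: dict
        open_files: list = field(default_factory=list)

    def save_state(cpu: dict) -> ExecutionContext:
        # Copy the running task's state out of the (toy) CPU.
        return ExecutionContext(cpu["ip"], cpu["flags"], dict(cpu["regs"]))

    def load_state(cpu: dict, ctx: ExecutionContext) -> None:
        # Load a saved context back in before the task resumes.
        cpu["ip"] = ctx.instruction_pointer
        cpu["flags"] = ctx.cpu_flags
        cpu["regs"] = dict(ctx.registers)

    cpu = {"ip": 0x1000, "flags": 0x2, "regs": {"eax": 7}}
    saved = save_state(cpu)
    cpu.update(ip=0x2000, flags=0x0, regs={"eax": 0})  # another task runs
    load_state(cpu, saved)                             # resume the first task
    assert cpu["ip"] == 0x1000 and cpu["regs"]["eax"] == 7
    ```

    This is the "must be loaded back into the processor before the code
    resumes execution" step from the quoted article, made concrete.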



    It is essentially the full state of the machine at a specific
    point in its execution.

    No. Please see the man page for POSIX makecontext. It's clear that
    your sources are not definitive. You should be in a position to expect
    that a range of meanings is both available and important.



    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Tristan Wibberley@[email protected] to comp.theory on Fri Nov 7 23:15:20 2025
    From Newsgroup: comp.theory

    On 07/11/2025 21:56, dbush wrote:
    On 11/7/2025 4:19 PM, Tristan Wibberley wrote:

    In his case, which satisfies his stated constraint, directly executed is
    on a [ordinary system administrator term] real machine and simulated is
    on a [ordinary system administrator term] virtual machine. The
    constraint allows them to be different and to have reflection and I
    believe Olcott has such computers.

    If reflection is part of his constraint,

    No it's not, it's merely not excluded so it's a valid solution.


    ... it is outside the realm of
    Turing machines and the halting problem.

    It /is/ interesting in the vicinity of the realm of the halting problem.


    By definition, a correct
    simulation exactly replicates the behavior of the machine being simulated.

    It does, it's simulating (emulating) a subtly different machine than the
    one running the simulation (emulation).


    --
    Tristan Wibberley


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Tristan Wibberley@[email protected] to comp.theory on Fri Nov 7 23:18:46 2025
    From Newsgroup: comp.theory

    On 07/11/2025 22:32, Chris M. Thomasson wrote:
    On 11/6/2025 3:07 PM, dart200 wrote:
    i've been using the words context-dependent, context-sensitive, and
    context-aware interchangeably ...

    which one do you guys think is best for the context-based computation
    i'm trying to formalize?


    Try to avoid telling others to kill themselves... ?


    I'm reading dart200's post and Chris M. Thomasson's. I have a dictionary
    handy. I just can't understand how the conversation is going.

    --
    Tristan Wibberley


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Chris M. Thomasson@[email protected] to comp.theory on Fri Nov 7 15:24:14 2025
    From Newsgroup: comp.theory

    On 11/7/2025 3:18 PM, Tristan Wibberley wrote:
    On 07/11/2025 22:32, Chris M. Thomasson wrote:
    On 11/6/2025 3:07 PM, dart200 wrote:
    i've been using the words context-dependent, context-sensitive, and
    context-aware interchangeably ...

    which one do you guys think is best for the context-based computation
    i'm trying to formalize?


    Try to avoid telling others to kill themselves... ?


    I'm reading dart200's post and Chris M. Thomasson's. I have a dictionary handy. I just can't understand how the conversation is going.

    Afaict, dart is rather unstable. He told me to kill myself multiple
    times. Sigh.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Chris M. Thomasson@[email protected] to comp.theory on Fri Nov 7 15:47:01 2025
    From Newsgroup: comp.theory

    On 11/7/2025 3:15 PM, Tristan Wibberley wrote:
    On 07/11/2025 21:56, dbush wrote:
    On 11/7/2025 4:19 PM, Tristan Wibberley wrote:

    In his case, which satisfies his stated constraint, directly executed is
    on a [ordinary system administrator term] real machine and simulated is
    on a [ordinary system administrator term] virtual machine. The
    constraint allows them to be different and to have reflection and I
    believe Olcott has such computers.

    If reflection is part of his constraint,

    No it's not, it's merely not excluded so it's a valid solution.


    ... it is outside the realm of
    Turing machines and the halting problem.

    It /is/ interesting in the vicinity of the realm of the halting problem.


    By definition, a correct
    simulation exactly replicates the behavior of the machine being simulated.

    It does, it's simulating (emulating) a subtly different machine than the
    one running the simulation (emulation).

    Just don't think you can abort something that you think takes too long
    and call it non-halting.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@[email protected] to comp.theory on Fri Nov 7 18:02:52 2025
    From Newsgroup: comp.theory

    On 11/7/2025 5:15 PM, Tristan Wibberley wrote:
    On 07/11/2025 21:56, dbush wrote:
    On 11/7/2025 4:19 PM, Tristan Wibberley wrote:

    In his case, which satisfies his stated constraint, directly executed is
    on a [ordinary system administrator term] real machine and simulated is
    on a [ordinary system administrator term] virtual machine. The
    constraint allows them to be different and to have reflection and I
    believe Olcott has such computers.

    If reflection is part of his constraint,

    No it's not, it's merely not excluded so it's a valid solution.


    ... it is outside the realm of
    Turing machines and the halting problem.

    It /is/ interesting in the vicinity of the realm of the halting problem.


    By definition, a correct
    simulation exactly replicates the behavior of the machine being simulated.

    It does, it's simulating (emulating) a subtly different machine than the
    one running the simulation (emulation).



    OK, I have up-rated you to excellent reviewer, not
    because you are agreeing with my position, but
    because you understand my position on the basis
    of a deep understanding of the whole area of the
    subject matter.


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From dart200@[email protected] to comp.theory on Fri Nov 7 16:08:02 2025
    From Newsgroup: comp.theory

    On 11/7/25 3:24 PM, Chris M. Thomasson wrote:
    On 11/7/2025 3:18 PM, Tristan Wibberley wrote:
    On 07/11/2025 22:32, Chris M. Thomasson wrote:
    On 11/6/2025 3:07 PM, dart200 wrote:
    i've been using the words context-dependent, context-sensitive, and
    context-aware interchangeably ...

    which one do you guys think is best for the context-based computation
    i'm trying to formalize?


    Try to avoid telling others to kill themselves... ?


    I'm reading dart200's post and Chris M. Thomasson's. I have a dictionary
    handy. I just can't understand how the conversation is going.

    Afaict, dart is rather unstable. He told me to kill myself multiple
    times. Sigh.

    exactly make the world a better place by removing urself from it
    --
    a burnt out swe investigating into why our tooling doesn't involve
    basic semantic proofs like halting analysis

    please excuse my pseudo-pyscript,

    ~ nick

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Chris M. Thomasson@[email protected] to comp.theory on Fri Nov 7 16:10:14 2025
    From Newsgroup: comp.theory

    On 11/7/2025 4:08 PM, dart200 wrote:
    On 11/7/25 3:24 PM, Chris M. Thomasson wrote:
    On 11/7/2025 3:18 PM, Tristan Wibberley wrote:
    On 07/11/2025 22:32, Chris M. Thomasson wrote:
    On 11/6/2025 3:07 PM, dart200 wrote:
    i've been using the words context-dependent, context-sensitive, and
    context-aware interchangeably ...

    which one do you guys think is best for the context-based computation
    i'm trying to formalize?


    Try to avoid telling others to kill themselves... ?


    I'm reading dart200's post and Chris M. Thomasson's. I have a dictionary
    handy. I just can't understand how the conversation is going.

    Afaict, dart is rather unstable. He told me to kill myself multiple
    times. Sigh.

    exactly make the world a better place by removing urself from it


    You are a special one for sure. Sigh. ;^o
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From dart200@[email protected] to comp.theory on Fri Nov 7 16:14:30 2025
    From Newsgroup: comp.theory

    On 11/7/25 4:10 PM, Chris M. Thomasson wrote:
    On 11/7/2025 4:08 PM, dart200 wrote:
    On 11/7/25 3:24 PM, Chris M. Thomasson wrote:
    On 11/7/2025 3:18 PM, Tristan Wibberley wrote:
    On 07/11/2025 22:32, Chris M. Thomasson wrote:
    On 11/6/2025 3:07 PM, dart200 wrote:
    i've been using the words context-dependent, context-sensitive, and
    context-aware interchangeably ...

    which one do you guys think is best for the context-based computation
    i'm trying to formalize?


    Try to avoid telling others to kill themselves... ?


    I'm reading dart200's post and Chris M. Thomasson's. I have a
    dictionary
    handy. I just can't understand how the conversation is going.

    Afaict, dart is rather unstable. He told me to kill myself multiple
    times. Sigh.

    exactly make the world a better place by removing urself from it


    You are a special one for sure. Sigh. ;^o

    all u do is intentionally troll, it's a waste of anyone's time and
    awareness, so yeah fuck off and die
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From dart200@[email protected] to comp.theory on Fri Nov 7 16:15:09 2025
    From Newsgroup: comp.theory

    On 11/7/25 12:45 AM, Jeff Barnett wrote:
    On 11/6/2025 4:07 PM, dart200 wrote:
    i've been using the words context-dependent, context-sensitive, and
    context-aware interchangeably ...

    which one do you guys think is best for the context-based computation
    i'm trying to formalize?
    Context-dependent is an adjective formally defined describing languages
    that can be recognized (yes, no) by linear bounded automata. In the
    Chomsky hierarchy, this class of languages sits between those that can
    be recognized by nondeterministic pushdown automata and those that
    require Turing machines.

    Informal technical conversation has been using context-sensitive as a
    virtual synonym for many, many decades. Though I've heard and used the
    term context-aware for a fair while, I could imagine objections from a
    pedant because it seems to blatantly involve an "aware" agent. Are you
    talking about a language, a meta language, a machine, a class of
    machines, etc?

    i mean computing machines are ultimately predicated on our ability to
    manually run thru the computations and accept that as justification for
    their existence

    richard d had a tough time with that


    BTW: I believe that it is only recently settled whether the classes of
    deterministic and nondeterministic linear automata resolve the same set
    of languages or not. I have forgotten, but if they are different, so
    might be the definition of context-sensitive languages.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@[email protected] to comp.theory on Fri Nov 7 18:25:33 2025
    From Newsgroup: comp.theory

    On 11/7/2025 6:08 PM, dart200 wrote:
    On 11/7/25 3:24 PM, Chris M. Thomasson wrote:
    On 11/7/2025 3:18 PM, Tristan Wibberley wrote:
    On 07/11/2025 22:32, Chris M. Thomasson wrote:
    On 11/6/2025 3:07 PM, dart200 wrote:
    i've been using the words context-dependent, context-sensitive, and
    context-aware interchangeably ...

    which one do you guys think is best for the context-based computation
    i'm trying to formalize?


    Try to avoid telling others to kill themselves... ?


    I'm reading dart200's post and Chris M. Thomasson's. I have a dictionary
    handy. I just can't understand how the conversation is going.

    Afaict, dart is rather unstable. He told me to kill myself multiple
    times. Sigh.

    exactly make the world a better place by removing urself from it


    I wouldn't say that makes you unstable.
    Maybe a little too harsh when you can do
    what I did.

    Make a message filter so that you never
    see his messages again and they are all
    deleted from your feed. You can do that
    easily in Thunderbird.

    Chris, dbush and Flibble are the only
    people that I *plonked* in the last
    five years.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From dart200@[email protected] to comp.theory on Fri Nov 7 16:33:36 2025
    From Newsgroup: comp.theory

    On 11/7/25 4:25 PM, olcott wrote:
    On 11/7/2025 6:08 PM, dart200 wrote:
    On 11/7/25 3:24 PM, Chris M. Thomasson wrote:
    On 11/7/2025 3:18 PM, Tristan Wibberley wrote:
    On 07/11/2025 22:32, Chris M. Thomasson wrote:
    On 11/6/2025 3:07 PM, dart200 wrote:
    i've been using the words context-dependent, context-sensitive, and
    context-aware interchangeably ...

    which one do you guys think is best for the context-based computation
    i'm trying to formalize?


    Try to avoid telling others to kill themselves... ?


    I'm reading dart200's post and Chris M. Thomasson's. I have a
    dictionary
    handy. I just can't understand how the conversation is going.

    Afaict, dart is rather unstable. He told me to kill myself multiple
    times. Sigh.

    exactly make the world a better place by removing urself from it


    I wouldn't say that makes you unstable.
    Maybe a little too harsh when you can do
    what I did.

    i don't block random people on the internet,

    i just want him to choose his words more wisely,

    or not say anything,

    either one works for me tbh

    reading all the stupid things people say is viscerally intolerable to
    me, because this stuff actually matters to me

    at least others try half-way, chris just low-effort shitposts, so i'm
    going to say viscerally intolerable things back


    Make a message filter so that you never
    see his messages again and they are all
    deleted from your feed. You can do that
    easily in Thunderbird.

    Chris, dbush and Flibble are the only
    people that I *plonked* in the last
    five years.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From dart200@[email protected] to comp.theory on Fri Nov 7 16:34:55 2025
    From Newsgroup: comp.theory

    On 11/7/25 7:40 AM, Tristan Wibberley wrote:
    On 07/11/2025 08:45, Jeff Barnett wrote:
    On 11/6/2025 4:07 PM, dart200 wrote:
    i've been using the words context-dependent, context-sensitive, and
    context-aware interchangeably ...

    which one do you guys think is best for the context-based computation
    i'm trying to formalize?

    "dynamically-closed", maybe.

    idk what that means, could you explain how context-based computation is
    "dynamically-closed" vs something that might be "statically-closed"?


    --
    Tristan Wibberley


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@[email protected] to comp.theory on Fri Nov 7 18:45:40 2025
    From Newsgroup: comp.theory

    On 11/7/2025 6:33 PM, dart200 wrote:
    On 11/7/25 4:25 PM, olcott wrote:
    On 11/7/2025 6:08 PM, dart200 wrote:
    On 11/7/25 3:24 PM, Chris M. Thomasson wrote:
    On 11/7/2025 3:18 PM, Tristan Wibberley wrote:
    On 07/11/2025 22:32, Chris M. Thomasson wrote:
    On 11/6/2025 3:07 PM, dart200 wrote:
    i've been using the words context-dependent, context-sensitive, and
    context-aware interchangeably ...

    which one do you guys think is best for the context-based
    computation
    i'm trying to formalize?


    Try to avoid telling others to kill themselves... ?


    I'm reading dart200's post and Chris M. Thomasson's. I have a
    dictionary
    handy. I just can't understand how the conversation is going.

    Afaict, dart is rather unstable. He told me to kill myself multiple
    times. Sigh.

    exactly make the world a better place by removing urself from it


    I wouldn't say that makes you unstable.
    Maybe a little too harsh when you can do
    what I did.

    i don't block random people on the internet,

    i just want him to choose his words more wisely,

    or not say anything,


    The three Trolls Chris, dbush and Flibble
    are best blocked. Flibble actually did
    have some very good things to say for a while.
    Chris is not a troll on the comp.lang.c groups.

    I almost never block anyone.

    either one works for me tbh

    reading all the stupid things people say is viscerally intolerable to
    me, because this stuff actually matters to me

    at least others try half-way, chris just low-effort shitposts, so i'm
    going to say viscerally intolerable things back


    Make a message filter so that you never
    see his messages again and they are all
    deleted from your feed. You can do that
    easily in Thunderbird.

    Chris, dbush and Flibble are the only
    people that I *plonked* in the last
    five years.


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From dart200@[email protected] to comp.theory on Fri Nov 7 16:50:47 2025
    From Newsgroup: comp.theory

    On 11/7/25 4:45 PM, olcott wrote:
    On 11/7/2025 6:33 PM, dart200 wrote:
    On 11/7/25 4:25 PM, olcott wrote:
    On 11/7/2025 6:08 PM, dart200 wrote:
    On 11/7/25 3:24 PM, Chris M. Thomasson wrote:
    On 11/7/2025 3:18 PM, Tristan Wibberley wrote:
    On 07/11/2025 22:32, Chris M. Thomasson wrote:
    On 11/6/2025 3:07 PM, dart200 wrote:
    i've been using the words context-dependent, context-sensitive, and
    context-aware interchangeably ...

    which one do you guys think is best for the context-based
    computation
    i'm trying to formalize?


    Try to avoid telling others to kill themselves... ?


    I'm reading dart200's post and Chris M. Thomasson's. I have a
    dictionary
    handy. I just can't understand how the conversation is going.

    Afaict, dart is rather unstable. He told me to kill myself
    multiple times. Sigh.

    exactly make the world a better place by removing urself from it


    I wouldn't say that makes you unstable.
    Maybe a little too harsh when you can do
    what I did.

    i don't block random people on the internet,

    i just want him to choose his words more wisely,

    or not say anything,


    The three Trolls Chris, dbush and Flibble
    are best blocked. Flibble actually did
    have some very good things to say for a while.
    Chris is not a troll on the comp.lang.c groups.

    I almost never block anyone.

    i'm shooting for never

    unfortunately i made a pretty big blunder the last few months by
    blocking my parents cause i was just tired of listening to them worry
    about shit they can't control anyways

    not cut them out of my life or anything, i'm tryin to see them every
    other week or so ... but blocked them from calling/texting/facebooking me

    they can still email


    either one works for me tbh

    reading all the stupid things people say is viscerally intolerable to
    me, because this stuff actually matters to me

    at least others try half-way, chris just low-effort shitposts, so i'm
    going to say viscerally intolerable things back


    Make a message filter so that you never
    see his messages again and they are all
    deleted from your feed. You can do that
    easily in Thunderbird.

    Chris, dbush and Flibble are the only
    people that I *plonked* in the last
    five years.




    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Richard Heathfield@[email protected] to comp.theory on Sat Nov 8 04:57:44 2025
    From Newsgroup: comp.theory

    On 07/11/2025 20:57, olcott wrote:

    <snip>

    In my specific case D simulated by H specifies a different
    sequence of steps than D executed from main because they
    are executed in different contexts.

    If you give the 'decider' licence to choose an execution context,
    you can write a universal decider easily:

    int H(int (*d)())
    {
        return 1; /* in H's context, all programs halt */
    }

    But the decider is *not* granted that licence. You don't get to
    choose a context. The best you can do is pick a domain. If your
    domain is too restricted, you can hardly call it universal, and
    the domain of programs that know they're being simulated and
    require that simulation to ensure that it never reaches its halt
    state is *ridiculously* non-universal.

    D simulated by H requires H to simulate itself simulating
    D such that the simulated D never reaches its final halt
    state. D simulated by H1 halts.

    There is *nothing* in D imposing such a requirement on H. Indeed,
    there is nothing in D requiring H to simulate anything. The only
    requirement that *would* be imposed on H (if it could exist)
    would be to determine whether D halts.

    H *CAN'T* do this, because at the moment H must report a result,
    whether D halts has yet to be determined.

    H can no more determine whether D halts than G can guess my
    number when I don't choose the number until after G has guessed.

    <snip>
    --
    Richard Heathfield
    Email: rjh at cpax dot org dot uk
    "Usenet is a strange place" - dmr 29 July 1999
    Sig line 4 vacant - apply within
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Tristan Wibberley@[email protected] to comp.theory on Sat Nov 8 08:56:41 2025
    From Newsgroup: comp.theory

    On 08/11/2025 00:34, dart200 wrote:
    On 11/7/25 7:40 AM, Tristan Wibberley wrote:
    On 11/6/2025 4:07 PM, dart200 wrote:
    i've been using the words context-dependent, context-sensitive, and
    context-aware interchangeably ...

    which one do you guys think is best for the context-based computation
    i'm trying to formalize?

    "dynamically-closed", maybe.

    idk what that means, could you explain how context-based computation is
    "dynamically-closed" vs something that might be "statically-closed"?

    It's CompSci technical terminology see "dynamic closure" in
    documentation comparing Scheme and other LISPs, for example.


    --
    Tristan Wibberley


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Tristan Wibberley@[email protected] to comp.theory on Sat Nov 8 09:05:06 2025
    From Newsgroup: comp.theory

    On 08/11/2025 00:02, olcott wrote:

    OK I have up-rated you to excellent reviewer, not
    because you are agreeing with my position. I am doing
    this because you understand my position on the basis
    of a deep understanding of the whole area of the
    subject matter.

    I don't think that what you're replying to (referenced in the message's
    headers) is good evidence of that understanding, but thanks anyway.

    Perhaps it's evidence that I understood the assignment. I remembered
    what my GCSE maths teacher said: "Read the question carefully and answer
    the question that's actually asked". Technically you didn't ask a
    question, but I think the principle translates.

    --
    Tristan Wibberley


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Tristan Wibberley@[email protected] to comp.theory on Sat Nov 8 09:27:45 2025
    From Newsgroup: comp.theory

    On 08/11/2025 04:57, Richard Heathfield wrote:
    On 07/11/2025 20:57, olcott wrote:

    <snip>

    In my specific case D simulated by H specifies a different
    sequence of steps than D executed from main because they
    are executed in different contexts.

    If you give the 'decider' licence to choose an execution context, you
    can write a universal decider easily:

    int H(int (*d)())
    {
      return 1; /* in H's context, all programs halt */
    }

    But the decider is *not* granted that licence.

    Olcott's situation doesn't require that licence in order to be solved. D
    calls H, so H may be defined thus:

    int H(int (*d)())
    {
        int deciding_simulate_with_heapsize(size_t);
        void *object = malloc(1);
        if (object == NULL) return 0;
        free(object);
        return deciding_simulate_with_heapsize(0);
    }


    You don't get to choose a
    context.

    The situation stipulated C, which requires dynamic closure to have its
    allocation consequences defined - that is: you /must/ choose a context.


    The best you can do is pick a domain. If your domain is too
    restricted, you can hardly call it universal

    Perhaps, but the Olcott situation statement we're replying about didn't
    stipulate that H be universal.


    There is *nothing* in D imposing such a requirement on H. Indeed, there
    is nothing in D requiring H to simulate anything. The only requirement
    that *would* be imposed on H (if it could exist) would be to determine
    whether D halts.

    It /permits/ H to be non-universal. I think you have assumed that you
    may not solve the situation described with a non-universal decider. It
    seems to me that's not stipulated in the situation statement.


    H can no more determine whether D halts than G can guess my number when
    I don't choose the number until after G has guessed.

    H, in the stipulated situation may /choose/ whether D halts because D is
    given to the person that writes H itself which is constructed solely to
    solve the situation described, thus it is a straightforward procedure to
    decide D using the C programming language.


    --
    Tristan Wibberley


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Tristan Wibberley@[email protected] to comp.theory on Sat Nov 8 09:33:17 2025
    From Newsgroup: comp.theory

    On 08/11/2025 00:33, dart200 wrote:

    i don't block random people on the internet,

    That wouldn't have been random, it wouldn't even have been arbitrary.

    i just want him to choose his words more wisely,

    see above.


    --
    Tristan Wibberley


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Richard Heathfield@[email protected] to comp.theory on Sat Nov 8 11:49:53 2025
    From Newsgroup: comp.theory

    On 08/11/2025 09:27, Tristan Wibberley wrote:
    On 08/11/2025 04:57, Richard Heathfield wrote:
    On 07/11/2025 20:57, olcott wrote:

    <snip>

    In my specific case D simulated by H specifies a different
    sequence of steps than D executed from main because they
    are executed in different contexts.

    If you give the 'decider' licence to choose an execution context, you
    can write a universal decider easily:

    int H(int (*d)())
    {
      return 1; /* in H's context, all programs halt */
    }

    But the decider is *not* granted that licence.

    Olcott's situation doesn't require that licence in order to be solved.

    Olcott's situation has no solution.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Tristan Wibberley@[email protected] to comp.theory on Sat Nov 8 12:42:43 2025
    From Newsgroup: comp.theory

    On 08/11/2025 11:49, Richard Heathfield wrote:

    Olcott's situation has no solution.


    The personality disorder? Sure.
    The computation problem, however...


    --
    Tristan Wibberley


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@[email protected] to comp.theory on Sat Nov 8 07:31:14 2025
    From Newsgroup: comp.theory

    On 11/8/2025 5:49 AM, Richard Heathfield wrote:
    On 08/11/2025 09:27, Tristan Wibberley wrote:
    On 08/11/2025 04:57, Richard Heathfield wrote:
    On 07/11/2025 20:57, olcott wrote:

    <snip>

    In my specific case D simulated by H specifies a different
    sequence of steps than D executed from main because they
    are executed in different contexts.

    If you give the 'decider' licence to choose an execution context, you
    can write a universal decider easily:

    int H(int (*d)())
    {
       return 1; /* in H's context, all programs halt */
    }

    But the decider is *not* granted that licence.

    Olcott's situation doesn't require that licence in order to be solved.

    Olcott's situation has no solution.


    D simulated by H cannot possibly reach its own
    simulated "return" statement final halt state
    thus the input to H(D) specifies a non-halting
    sequence of configurations.

    Since H is a simulating termination analyzer H
    aborts its simulation of D as soon as H has
    correctly matched this correct non-halting behavior
    pattern.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Tristan Wibberley@[email protected] to comp.theory on Sat Nov 8 13:48:01 2025
    From Newsgroup: comp.theory

    On 08/11/2025 09:27, Tristan Wibberley wrote:

    D calls H so H may be defined thusly;

    Hang on, maybe I've got confused. I didn't sleep well last night at all.
    Ohh memory is soo bad on bad food and little sleep.

    --
    Tristan Wibberley


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@[email protected] to comp.theory on Sat Nov 8 17:26:39 2025
    From Newsgroup: comp.theory

    On 2025-11-08, Tristan Wibberley <[email protected]> wrote:
    On 08/11/2025 11:49, Richard Heathfield wrote:

    Olcott's situation has no solution.


    The personality disorder? Sure.
    The computation problem, however...

    Of those two, one responds to medication.
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @[email protected]
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@[email protected] to comp.theory on Sat Nov 8 18:10:06 2025
    From Newsgroup: comp.theory

    On 2025-11-08, olcott <[email protected]> wrote:
    On 11/8/2025 5:49 AM, Richard Heathfield wrote:
    On 08/11/2025 09:27, Tristan Wibberley wrote:
    On 08/11/2025 04:57, Richard Heathfield wrote:
    On 07/11/2025 20:57, olcott wrote:

    <snip>

    In my specific case D simulated by H specifies a different
    sequence of steps than D executed from main because they
    are executed in different contexts.

    If you give the 'decider' licence to choose an execution context, you
    can write a universal decider easily:

    int H(int (*d)())
    {
       return 1; /* in H's context, all programs halt */
    }

    But the decider is *not* granted that licence.

    Olcott's situation doesn't require that licence in order to be solved.

    Olcott's situation has no solution.


    D simulated by H cannot possibly reach its own
    simulated "return" statement final halt state
    thus the input to H(D) specifies a non-halting
    sequence of configurations.

    This is true of the above H, which returns 1 (accept).

    It is not true of any H that returns 0 for D,
    no matter how that 0 is calculated.

    You only think this because you wrongly reject the idea that the
    simulation is not finished when it is aborted by H.

    /Neglecting to simulate/ D's termination is not the same thing
    as D not having one.

    The neglected simulation can easily be continued by
    an agent other than H.

    One way we can do that very clearly is to have a decider
    API which takes the simulation/interpreter state object as a parameter:

    int H(void (*p)(void), interp *s);

    This s value is initialized by the caller like this:

    s = interp_create(D);

    and then:

    H(D, s)

    is called.

    When H returns 0, the caller takes the unfinished simulation s
    and steps it:

    while (!interp_step(s)) { }

    if this loop terminates, the 0 result was wrong.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@[email protected] to comp.theory on Sat Nov 8 12:50:08 2025
    From Newsgroup: comp.theory

    On 11/8/2025 12:10 PM, Kaz Kylheku wrote:
    On 2025-11-08, olcott <[email protected]> wrote:
    On 11/8/2025 5:49 AM, Richard Heathfield wrote:
    On 08/11/2025 09:27, Tristan Wibberley wrote:
    On 08/11/2025 04:57, Richard Heathfield wrote:
    On 07/11/2025 20:57, olcott wrote:

    <snip>

    In my specific case D simulated by H specifies a different
    sequence of steps than D executed from main because they
    are executed in different contexts.

    If you give the 'decider' licence to choose an execution context, you
    can write a universal decider easily:
    int H(int (*d)())
    {
       return 1; /* in H's context, all programs halt */
    }

    But the decider is *not* granted that licence.

    Olcott's situation doesn't require that licence in order to be solved.

    Olcott's situation has no solution.


    D simulated by H cannot possibly reach its own
    simulated "return" statement final halt state
    thus the input to H(D) specifies a non-halting
    sequence of configurations.

    This is true of the above H, which returns 1 (accept).

    It is not true of any H that returns 0 for D,
    no matter how that 0 is calculated.


    D simulated by H cannot possibly reach its own
    simulated "return" statement

    Why is it that no one can pay attention to
    D simulated by H

    and keep confusing it with D executed from main?

    You only think this because you wrongly reject the idea that the
    simulation is not finished when it is aborted by H.

    /Neglecting to simulate/ D's termination is not the same thing
    as D not having one.

    The neglected simulation can easily be continued by
    an agent other than H.

    One way we can do that very clearly is to have a decider
    API which takes the simulation/interpreter state object as a parameter:

    int H(void (*p)(void), interp *s);

    This s value is initialized by the caller like this:

    s = interp_create(D);

    and then:

    H(D, s)

    is called.

    When H returns 0, the caller takes the unfinished simulation s
    and steps it:

    while (!interp_step(s)) { }

    If this loop terminates, the 0 result was wrong.

    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@[email protected] to comp.theory on Sat Nov 8 13:17:22 2025
    From Newsgroup: comp.theory

    On 11/8/2025 12:10 PM, Kaz Kylheku wrote:
    On 2025-11-08, olcott <[email protected]> wrote:
    On 11/8/2025 5:49 AM, Richard Heathfield wrote:
    On 08/11/2025 09:27, Tristan Wibberley wrote:
    On 08/11/2025 04:57, Richard Heathfield wrote:
    On 07/11/2025 20:57, olcott wrote:

    <snip>

    In my specific case D simulated by H specifies a different
    sequence of steps than D executed from main because they
    are executed in different contexts.

    If you give the 'decider' licence to choose an execution context, you
    can write a universal decider easily:
    int H(int (*d)())
    {
       return 1; /* in H's context, all programs halt */
    }

    But the decider is *not* granted that licence.

    Olcott's situation doesn't require that licence in order to be solved.

    Olcott's situation has no solution.


    D simulated by H cannot possibly reach its own
    simulated "return" statement final halt state
    thus the input to H(D) specifies a non-halting
    sequence of configurations.

    This is true of the above H, which returns 1 (accept).

    It is not true of any H that returns 0 for D,
    no matter how that 0 is calculated.

    You only think this because you wrongly reject the idea that the
    simulation is not finished when it is aborted by H.

    /Neglecting to simulate/ D's termination is not the same thing
    as D not having one.


    D simulated by H cannot possibly have an
    execution trace that receives a return value from H.

    The neglected simulation can easily be continued by
    an agent other than H.

    One way we can do that very clearly is to have a decider
    API which takes the simulation/interpreter state object as a parameter:

    int H(void (*p)(void), interp *s);

    This s value is initialized by the caller like this:

    s = interp_create(D);

    and then:

    H(D, s)

    is called.

    When H returns 0, the caller takes the unfinished simulation s
    and steps it:

    while (!interp_step(s)) { }

    If this loop terminates, the 0 result was wrong.

    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Chris M. Thomasson@[email protected] to comp.theory on Sat Nov 8 11:50:29 2025
    From Newsgroup: comp.theory

    On 11/7/2025 4:33 PM, dart200 wrote:
    On 11/7/25 4:25 PM, olcott wrote:
    On 11/7/2025 6:08 PM, dart200 wrote:
    On 11/7/25 3:24 PM, Chris M. Thomasson wrote:
    On 11/7/2025 3:18 PM, Tristan Wibberley wrote:
    On 07/11/2025 22:32, Chris M. Thomasson wrote:
    On 11/6/2025 3:07 PM, dart200 wrote:
    i've been using the words context-dependent, context-sensitive, and
    context-aware interchangeably ...

    which one do you guys think is best for the context-based
    computation
    i'm trying to formalize?


    Try to avoid telling others to kill themselves... ?


    I'm reading dart200's post and Chris M. Thomasson's. I have a
    dictionary
    handy. I just can't understand how the conversation is going.

    Afaict, dart is rather unstable. He told me to kill myself multiple
    times. Sigh.

    exactly make the world a better place by removing urself from it


    I wouldn't say that makes you unstable.
    Maybe a little too harsh when you can do
    what I did.

    i don't block random people on the internet,

    i just want him to choose his words more wisely,

    or not say anything,

    either one works for me tbh

    reading all the stupid things people say is viscerally intolerable to
    me, because this stuff actually matters to me

    at least others try half-way, chris just low-effort shitposts, so i'm
    going to say viscerally intolerable things back


    Make a message filter so that you never
    see his messages again and they are all
    deleted from your feed. You can do that
    easily in Thunderbird.

    Chris, dbush and Flibble are the only
    people that I *plonked* in the last
    five years.



    We tried to help Olcott. Ended up being called every name in the book.
    Compared to Hitler and shit. Well, what else can we do? Heck, at least
    my fuzzer makes sure to execute all the paths of DD. That's better than
    Olcott's HHH? Kaz has even coded up an interesting thing in Olcott's UTM
    thing. He (olcott) basically shits on it. So, well, I am at a loss.

    Olcott tried to tell me that BASIC cannot handle recursion, so I showed
    him how to create a recursive stack from scratch, in BASIC.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Chris M. Thomasson@[email protected] to comp.theory on Sat Nov 8 11:52:34 2025
    From Newsgroup: comp.theory

    On 11/7/2025 4:33 PM, dart200 wrote:
    On 11/7/25 4:25 PM, olcott wrote:
    On 11/7/2025 6:08 PM, dart200 wrote:
    On 11/7/25 3:24 PM, Chris M. Thomasson wrote:
    On 11/7/2025 3:18 PM, Tristan Wibberley wrote:
    On 07/11/2025 22:32, Chris M. Thomasson wrote:
    On 11/6/2025 3:07 PM, dart200 wrote:
    i've been using the words context-dependent, context-sensitive, and
    context-aware interchangeably ...

    which one do you guys think is best for the context-based
    computation
    i'm trying to formalize?


    Try to avoid telling others to kill themselves... ?


    I'm reading dart200's post and Chris M. Thomasson's. I have a
    dictionary
    handy. I just can't understand how the conversation is going.

    Afaict, dart is rather unstable. He told me to kill myself multiple
    times. Sigh.

    exactly make the world a better place by removing urself from it


    I wouldn't say that makes you unstable.
    Maybe a little too harsh when you can do
    what I did.

    i don't block random people on the internet,

    i just want him to choose his words more wisely,

    or not say anything,

    either one works for me tbh

    reading all the stupid things people say is viscerally intolerable to
    me, because this stuff actually matters to me

    at least others try half-way, chris just low-effort shitposts, so i'm
    going to say viscerally intolerable things back


    Make a message filter so that you never
    see his messages again and they are all
    deleted from your feed. You can do that
    easily in Thunderbird.

    Chris, dbush and Flibble are the only
    people that I *plonked* in the last
    five years.



    Btw, looking forward to seeing your solution to the halting problem
    using reflection: Get to work! :^)
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@[email protected] to comp.theory on Sat Nov 8 19:55:02 2025
    From Newsgroup: comp.theory

    On 2025-11-08, olcott <[email protected]> wrote:
    On 11/8/2025 12:10 PM, Kaz Kylheku wrote:
    On 2025-11-08, olcott <[email protected]> wrote:
    On 11/8/2025 5:49 AM, Richard Heathfield wrote:
    On 08/11/2025 09:27, Tristan Wibberley wrote:
    On 08/11/2025 04:57, Richard Heathfield wrote:
    On 07/11/2025 20:57, olcott wrote:

    <snip>

    In my specific case D simulated by H specifies a different
    sequence of steps than D executed from main because they
    are executed in different contexts.

    If you give the 'decider' licence to choose an execution context, you
    can write a universal decider easily:
    int H(int (*d)())
    {
       return 1; /* in H's context, all programs halt */
    }

    But the decider is *not* granted that licence.

    Olcott's situation doesn't require that licence in order to be solved.
    Olcott's situation has no solution.


    D simulated by H cannot possibly reach its own
    simulated "return" statement final halt state
    thus the input to H(D) specifies a non-halting
    sequence of configurations.

    This is true of the above H, which returns 1 (accept).

    It is not true of any H that returns 0 for D,
    no matter how that 0 is calculated.


    D simulated by H cannot possibly reach its own
    simulated "return" statement

    D specifies a computation that reaches termination.

    H only neglects to simulate far enough to show this.

    Why is it that no one can pay attention to
    D simulated by H

    Because it's not important. H must return the unfinished simulation
    along with its estimated halting status.

    Then the unfinished simulation can be inspected to
    check that it really doesn't terminate.

    and keep confusing it with D executed from main?

    The simulation of D incompletely conducted by H shows
    that that is the same D.

    There is only a single D, with a single specification of behavior.

    Any claim of the existence of some other behavior is
    incorrect; that behavior is incorrect.

    For instance, the incomplete simulation of D up to
    where H decides to stop and return 0---that is not
    the behavior specified by D.

    Only the complete simulation coincides with the behavior of D. When we
    complete the unfinished simulation, D is found to terminate.

    No "D executed from main" is involved.

    People do not see more than one D, because they have healthy minds.
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @[email protected]
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@[email protected] to comp.theory on Sat Nov 8 19:58:23 2025
    From Newsgroup: comp.theory

    On 2025-11-08, olcott <[email protected]> wrote:
    On 11/8/2025 12:10 PM, Kaz Kylheku wrote:
    On 2025-11-08, olcott <[email protected]> wrote:
    On 11/8/2025 5:49 AM, Richard Heathfield wrote:
    On 08/11/2025 09:27, Tristan Wibberley wrote:
    On 08/11/2025 04:57, Richard Heathfield wrote:
    On 07/11/2025 20:57, olcott wrote:

    <snip>

    In my specific case D simulated by H specifies a different
    sequence of steps than D executed from main because they
    are executed in different contexts.

    If you give the 'decider' licence to choose an execution context, you
    can write a universal decider easily:
    int H(int (*d)())
    {
       return 1; /* in H's context, all programs halt */
    }

    But the decider is *not* granted that licence.

    Olcott's situation doesn't require that licence in order to be solved.
    Olcott's situation has no solution.


    D simulated by H cannot possibly reach its own
    simulated "return" statement final halt state
    thus the input to H(D) specifies a non-halting
    sequence of configurations.

    This is true of the above H, which returns 1 (accept).

    It is not true of any H that returns 0 for D,
    no matter how that 0 is calculated.

    You only think this because you wrongly reject the idea that the
    simulation is not finished when it is aborted by H.

    /Neglecting to simulate/ D's termination is not the same thing
    as D not having one.


    D simulated by H cannot possibly have an

    "D simulated by H" is literally not a thing. D is simulated by
    a simulator, which doesn't care whether it is driven by
    events from H, or elsewhere.

    All correct simulations of D show halting.

    Simulations must be /complete/ to be correct.

    int H(void (*p)(void), interp *s);

    From now on, you must only discuss the above API for simulating
    deciders, or any other variant of your choice in which two arguments
    are represented: the procedure to be analyzed, and a freshly
    instantiated simulation pointing at that procedure.
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @[email protected]
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@[email protected] to comp.theory on Sat Nov 8 14:39:48 2025
    From Newsgroup: comp.theory

    On 11/8/2025 1:55 PM, Kaz Kylheku wrote:
    On 2025-11-08, olcott <[email protected]> wrote:
    On 11/8/2025 12:10 PM, Kaz Kylheku wrote:
    On 2025-11-08, olcott <[email protected]> wrote:
    On 11/8/2025 5:49 AM, Richard Heathfield wrote:
    On 08/11/2025 09:27, Tristan Wibberley wrote:
    On 08/11/2025 04:57, Richard Heathfield wrote:
    On 07/11/2025 20:57, olcott wrote:

    <snip>

    In my specific case D simulated by H specifies a different
    sequence of steps than D executed from main because they
    are executed in different contexts.

    If you give the 'decider' licence to choose an execution context, you
    can write a universal decider easily:
    int H(int (*d)())
    {
       return 1; /* in H's context, all programs halt */
    }

    But the decider is *not* granted that licence.

    Olcott's situation doesn't require that licence in order to be solved.
    Olcott's situation has no solution.


    D simulated by H cannot possibly reach its own
    simulated "return" statement final halt state
    thus the input to H(D) specifies a non-halting
    sequence of configurations.

    This is true of the above H, which returns 1 (accept).

    It is not true of any H that returns 0 for D,
    no matter how that 0 is calculated.


    D simulated by H cannot possibly reach its own
    simulated "return" statement

    D specifies a computation that reaches termination.


    The input to H(D) specifies a sequence of steps that
    cannot possibly even receive a return value from H.

    Are you just a liar or not very good at execution traces?

    On 11/4/2025 8:43 PM, Kaz Kylheku wrote:
    On 2025-11-05, olcott <[email protected]> wrote:

    The whole point is that D simulated by H
    cannot possibly reach its own simulated
    "return" statement no matter what H does.

    Yes; this doesn't happen while H is running.

    So while H does /something/, no matter what H does,
    that D simulation won't reach the return statement.

    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2