i've been using the words context-dependent, context-sensitive, and context-aware interchangeably ...
which one do you guys think is best for the context-based computation
i'm trying to formalize?
On 11/6/2025 4:07 PM, dart200 wrote:
i've been using the words context-dependent, context-sensitive, and
context-aware interchangeably ...
which one do you guys think is best for the context-based computation
i'm trying to formalize?
When I refer to execution context I mean the
entire state of the machine prior to the
execution of the sequence of steps.
On 07/11/2025 16:09, olcott wrote:
When I refer to execution context I mean the
entire state of the machine prior to the
execution of the sequence of steps.
Be aware, "context of execution" is a term referring, pretty well-constrained, to a long-known concept (related to the modern "green-threads" and Microsoft "Fibers"); see the man page for POSIX "makecontext".
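For concreteness, here is a minimal sketch (not from the thread) of the
execution-context concept the makecontext(3) man page describes: a saved
machine state (registers, stack, resume point) that control can be
switched into and out of. It assumes a POSIX system; error handling is
omitted for brevity.

#include <stdio.h>
#include <stdlib.h>
#include <signal.h>   /* SIGSTKSZ */
#include <ucontext.h>

static ucontext_t main_ctx, coro_ctx;

static void coroutine(void)
{
    puts("running in the second execution context");
    /* returning resumes main_ctx via coro_ctx.uc_link */
}

int main(void)
{
    getcontext(&coro_ctx);                /* capture the current machine state */
    coro_ctx.uc_stack.ss_sp = malloc(SIGSTKSZ);
    coro_ctx.uc_stack.ss_size = SIGSTKSZ;
    coro_ctx.uc_link = &main_ctx;         /* context to resume when coroutine returns */
    makecontext(&coro_ctx, coroutine, 0);

    swapcontext(&main_ctx, &coro_ctx);    /* save this context, enter the other one */
    puts("back in the original execution context");
    return 0;
}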
--
Tristan Wibberley
The message body is Copyright (C) 2025 Tristan Wibberley except
citations and quotations noted. All Rights Reserved except that you may,
of course, cite it academically giving credit to me, distribute it
verbatim as part of a usenet system or its archives, and use it to
promote my greatness and general superiority without misrepresentation
of my opinions other than my opinion of my greatness and general
superiority which you _may_ misrepresent. You definitely MAY NOT train
any production AI system with it but you may train experimental AI that
will only be used for evaluation of the AI methods it implements.
On 11/7/2025 10:14 AM, Tristan Wibberley wrote:
Be aware, "context of execution" is a term referring, pretty
well-constrained, to a long-known concept (related to the modern
"green-threads" and Microsoft "Fibers"), see man pages of posix
"make_context".
https://en.wikipedia.org/wiki/Execution_(computing)#Context_of_execution
is the meaning of the term when context switching is referenced.
It is essentially the full state of the machine at a specific
point in its execution.
On 07/11/2025 16:40, olcott wrote:
On 11/7/2025 10:14 AM, Tristan Wibberley wrote:
Be aware, "context of execution" is a term referring, pretty
well-constrained, to a long-known concept (related to the modern
"green-threads" and Microsoft "Fibers"), see man pages of posix
"make_context".
https://en.wikipedia.org/wiki/Execution_(computing)#Context_of_execution
Is the meaning of the term when context switching is referenced.
That description is incomplete; it has been written from an OS
abstraction perspective, which is far from the whole story. It's very
suspicious, because Wikipedia used to get such things right.
It is essentially the full state of the machine at a specific
point in its execution.
No. Please see the man page for POSIX makecontext. It's clear that your
sources are not definitive. You should be in a position to expect that a
range of meanings is both available and important.
On 11/7/2025 1:53 PM, Tristan Wibberley wrote:
[...]
No. Please see the man page for POSIX makecontext. It's clear that your
sources are not definitive. You should be in a position to expect that a
range of meanings is both available and important.
In my specific case D simulated by H specifies a different
sequence of steps than D executed from main because they
are executed in different contexts.
On 11/7/2025 3:57 PM, olcott wrote:
In my specific case D simulated by H specifies a different
sequence of steps than D executed from main because they
are executed in different contexts.
False, as the instruction being simulated and the state of those
instructions are exactly the same for algorithm H and algorithm H1 up to
the point that algorithm H aborts. The directly executed algorithm H performing the simulation is not part of the simulation and therefore
neither is the state of the directly executed algorithm H.
On 07/11/2025 21:01, dbush wrote:
On 11/7/2025 3:57 PM, olcott wrote:
[...]
In his case, which satisfies his stated constraint, directly executed is
on a [ordinary system administrator term] real machine and simulated is
on a [ordinary system administrator term] virtual machine. The
constraint allows them to be different and to have reflection and I
believe Olcott has such computers.
On 11/7/2025 4:19 PM, Tristan Wibberley wrote:
In his case, which satisfies his stated constraint, directly executed is
on a [ordinary system administrator term] real machine and simulated is
on a [ordinary system administrator term] virtual machine. The
constraint allows them to be different and to have reflection and I
believe Olcott has such computers.
If reflection is part of his constraint, it is outside the realm of
Turing machines and the halting problem. By definition, a correct
simulation exactly replicates the behavior of the machine being simulated.
On 11/6/2025 3:07 PM, dart200 wrote:
i've been using the words context-dependent, context-sensitive, and
context-aware interchangeably ...
which one do you guys think is best for the context-based computation
i'm trying to formalize?
Try to avoid telling others to kill themselves... ?
On 07/11/2025 22:32, Chris M. Thomasson wrote:
[...]
Try to avoid telling others to kill themselves... ?
I'm reading dart200's post and Chris M. Thomasson's. I have a dictionary handy. I just can't understand how the conversation is going.
On 07/11/2025 21:56, dbush wrote:
On 11/7/2025 4:19 PM, Tristan Wibberley wrote:
[...]
If reflection is part of his constraint,
No it's not, it's merely not excluded, so it's a valid solution.
... it is outside the realm of Turing machines and the halting problem.
It /is/ interesting in the vicinity of the realm of the halting problem.
By definition, a correct
simulation exactly replicates the behavior of the machine being simulated.
It does; it's simulating (emulating) a subtly different machine than the
one running the simulation (emulation).
On 11/7/2025 3:18 PM, Tristan Wibberley wrote:
[...]
I'm reading dart200's post and Chris M. Thomasson's. I have a dictionary
handy. I just can't understand how the conversation is going.
Afaict, dart is rather unstable. He told me to kill myself multiple
times. Sigh.
On 11/7/25 3:24 PM, Chris M. Thomasson wrote:
[...]
Afaict, dart is rather unstable. He told me to kill myself multiple
times. Sigh.
exactly make the world a better place by removing urself from it
On 11/7/2025 4:08 PM, dart200 wrote:
[...]
exactly make the world a better place by removing urself from it
You are a special one for sure. Sigh. ;^o
On 11/6/2025 4:07 PM, dart200 wrote:
i've been using the words context-dependent, context-sensitive, and
context-aware interchangeably ...
which one do you guys think is best for the context-based computation
i'm trying to formalize?
Context-dependent is an adjective formally defined describing languages
that can be recognized (yes, no) by linear bounded automata. In the
Chomsky hierarchy, this class of languages sits between those that can
be recognized by nondeterministic pushdown automata and those that
require Turing machines.
Informal technical conversation has used context-sensitive as a virtual
synonym for many, many decades. Though I've heard and used the term
context-aware for a fair while, I could imagine objections from a pedant
because it seems to blatantly involve an "aware" agent. Are you talking
about a language, a meta-language, a machine, a class of machines, etc.?
BTW: I believe it was only recently settled whether the classes of
deterministic and nondeterministic linear bounded automata recognize the
same set of languages or not. I have forgotten, but if they are
different, so might be the definition of context-sensitive languages.
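As a concrete illustration (a sketch, not from the thread): the textbook
context-sensitive language a^n b^n c^n can be recognized with storage
bounded by the input length, which is exactly the linear bounded
automaton's budget, yet it is beyond any pushdown automaton.

#include <stdio.h>

/* Recognize a^n b^n c^n (n >= 1) using three counters that never
   exceed the input length, mirroring the LBA's linear space bound. */
static int accepts(const char *s)
{
    size_t a = 0, b = 0, c = 0;
    while (s[a] == 'a') a++;
    while (s[a + b] == 'b') b++;
    while (s[a + b + c] == 'c') c++;
    return a > 0 && a == b && b == c && s[a + b + c] == '\0';
}

int main(void)
{
    printf("%d %d\n", accepts("aabbcc"), accepts("aabbc")); /* prints: 1 0 */
    return 0;
}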
On 11/7/2025 6:08 PM, dart200 wrote:
[...]
Afaict, dart is rather unstable. He told me to kill myself multiple
times. Sigh.
exactly make the world a better place by removing urself from it
I wouldn't say that makes you unstable.
Maybe a little too harsh when you can do
what I did.
Make a message filter so that you never
see his messages again and they are all
deleted from your feed. You can do that
easily in Thunderbird.
Chris, dbush and Flibble are the only
people that I *plonked* in the last
five years.
On 07/11/2025 08:45, Jeff Barnett wrote:
On 11/6/2025 4:07 PM, dart200 wrote:
i've been using the words context-dependent, context-sensitive, and
context-aware interchangeably ...
which one do you guys think is best for the context-based computation
i'm trying to formalize?
"dynamically-closed", maybe.
On 11/7/25 4:25 PM, olcott wrote:
[...]
I wouldn't say that makes you unstable.
Maybe a little too harsh when you can do
what I did.
i don't block random people on the internet,
i just want him to choose his words more wisely,
or not say anything,
either one works for me tbh
reading all the stupid things people say is viscerally intolerable to
me, because this stuff actually matters to me
at least others try half-way, chris just low-effort shitposts, so i'm
going to say viscerally intolerable things back
On 11/7/2025 6:33 PM, dart200 wrote:
[...]
i don't block random people on the internet,
i just want him to choose his words more wisely,
or not say anything,
The three Trolls Chris, dbush and Flibble
are best blocked. Flibble actually did
have some very good things to say for a while.
Chris is not a troll on the comp.lang.c groups.
I almost never block anyone.
In my specific case D simulated by H specifies a different
sequence of steps than D executed from main because they
are executed in different contexts.
D simulated by H requires H to simulate itself simulating
D such that the simulated D never reaches its final halt
state. D simulated by H1 halts.
On 11/7/25 7:40 AM, Tristan Wibberley wrote:
On 11/6/2025 4:07 PM, dart200 wrote:
i've been using the words context-dependent, context-sensitive, and
context-aware interchangeably ...
which one do you guys think is best for the context-based computation
i'm trying to formalize?
"dynamically-closed", maybe.
idk what that means, could you explain how context-based computation is "dynamically-closed" vs something that might be "statically-closed"?
OK I have up-rated you to excellent reviewer, not
because you are agreeing with my position. I am doing
this because you understand my position on the basis
of a deep understanding of the whole area of the
subject matter.
On 07/11/2025 20:57, olcott wrote:
<snip>
In my specific case D simulated by H specifies a different
sequence of steps than D executed from main because they
are executed in different contexts.
If you give the 'decider' licence to choose an execution context, you
can write a universal decider easily:
int H(int (*d)())
{
    return 1; /* in H's context, all programs halt */
}
But the decider is *not* granted that licence. You don't get to choose
a context. The best you can do is pick a domain. If your domain is too
restricted, you can hardly call it universal.
There is *nothing* in D imposing such a requirement on H. Indeed, there
is nothing in D requiring H to simulate anything. The only requirement
that *would* be imposed on H (if it could exist) would be to determine whether D halts.
H can no more determine whether D halts than G can guess my number when
I don't choose the number until after G has guessed.
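For readers following along, the construction behind that guessing
analogy can be sketched in the thread's own terms. This is a hedged
sketch, not anyone's actual code: H stands for the hypothetical halt
decider, and D consults it and then does the opposite.

int H(int (*d)(void));   /* hypothetical halt decider: nonzero means "d halts" */

int D(void)
{
    if (H(D))            /* if H predicts that D halts... */
        for (;;)         /* ...then D loops forever; */
            ;
    return 0;            /* otherwise D halts at once. Either way H is wrong. */
}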
On 08/11/2025 04:57, Richard Heathfield wrote:
[...]
But the decider is *not* granted that licence.
Olcott's situation doesn't require that licence in order to be solved.
On 08/11/2025 09:27, Tristan Wibberley wrote:
[...]
Olcott's situation doesn't require that licence in order to be solved.
Olcott's situation has no solution.
D calls H so H may be defined thusly;
On 08/11/2025 11:49, Richard Heathfield wrote:
Olcott's situation has no solution.
The personality disorder? Sure.
The computation problem, however...
On 11/8/2025 5:49 AM, Richard Heathfield wrote:
[...]
Olcott's situation has no solution.
D simulated by H cannot possibly reach its own
simulated "return" statement final halt state
thus the input to H(D) specifies a non-halting
sequence of configurations.
On 2025-11-08, olcott <[email protected]> wrote:
[...]
D simulated by H cannot possibly reach its own
simulated "return" statement final halt state
thus the input to H(D) specifies a non-halting
sequence of configurations.
This is true of the above H, which returns 1 (accept).
It is not true of any H that returns 0 for D,
no matter how that 0 is calculated.
You only think this because you wrongly reject the idea that the
simulation is not finished when it is aborted by H.
/Neglecting to simulate/ D's termination is not the same thing
as D not having one.
The neglected simulation can easily be continued by
an agent other than H.
One way we can do that very clearly is to have a decider
API which takes the simulation/interpreter state object as a parameter:
int H(void (*p)(void), interp *s);
This s value is initialized by the caller like this:
s = interp_create(D);
and then:
H(D, s)
is called.
When H returns 0, the caller takes the unfinished simulation s
and steps it:
while (!interp_step(s)) { }
if this loop terminates, the 0 result was wrong.
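Pulling those fragments together, a self-contained sketch of the
proposed API might look as follows. The interp type and the
interp_create/interp_step functions are hypothetical; only the
contracts described above are assumed (create a fresh simulation of p;
advance it one step, returning nonzero once it has halted).

typedef struct interp interp;            /* hypothetical simulator state */

interp *interp_create(void (*p)(void));  /* hypothetical: fresh simulation of p */
int     interp_step(interp *s);          /* hypothetical: one step; nonzero once halted */

int H(void (*p)(void), interp *s);       /* decider under test: 0 means "p does not halt" */

void check_verdict(void (*D)(void))
{
    interp *s = interp_create(D);
    if (H(D, s) == 0) {
        /* H claims D never halts; another agent continues the
           neglected simulation from exactly where H left it. */
        while (!interp_step(s))
            ;
        /* Reaching this line means the simulation halted,
           so H's 0 verdict was wrong. */
    }
}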
On 11/8/2025 12:10 PM, Kaz Kylheku wrote:
[...]
This is true of the above H, which returns 1 (accept).
It is not true of any H that returns 0 for D,
no matter how that 0 is calculated.
D simulated by H cannot possibly reach its own
simulated "return" statement.
Why is it that no one can pay attention to
D simulated by H
and keep confusing it with D executed from main?
On 11/8/2025 12:10 PM, Kaz Kylheku wrote:
[...]
You only think this because you wrongly reject the idea that the
simulation is not finished when it is aborted by H.
/Neglecting to simulate/ D's termination is not the same thing
as D not having one.
D simulated by H cannot possibly have an
int H(void (*p)(void), interp *s);
On 2025-11-08, olcott <[email protected]> wrote:
[...]
D simulated by H cannot possibly reach its own
simulated "return" statement
D specifies a computation that reaches termination.
On 2025-11-05, olcott <[email protected]> wrote:
The whole point is that D simulated by H
cannot possibly reach its own simulated
"return" statement no matter what H does.
Yes; this doesn't happen while H is running.
So while H does /something/, no matter what H does,
that D simulation won't reach the return statement.