• WebPL is already outdated

    From Mild Shock@[email protected] to comp.lang.prolog on Sun Aug 17 18:37:07 2025
    From Newsgroup: comp.lang.prolog

    Hi,

WebPL is already outdated, I guess. It doesn't
show the versions of the other Prolog systems
it is using. Meanwhile, I had these results for
the primes example in the WebPL playground:

    /* Trealla Prolog WASM */
    (23568.9ms)

    When I run the example here:

    https://php.energy/trealla.html

    I get better results:

    /* trealla-js 0.27.1 */

    ?- time(test).
    % Time elapsed 9.907s, 11263917 Inferences, 1.137 MLips

    Bye
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@[email protected] to comp.lang.prolog on Mon Aug 18 14:52:50 2025
    From Newsgroup: comp.lang.prolog

    Hi,

Heap/Stack Prolog systems could solve some Prolog
string problems, especially in connection with an FFI, but I am
not showing that here. This is more about a general design
limitation of the common take of WAM resp. ZIP. The new WebPL
Prolog describes itself as a merged Heap/Stack architecture Prolog
system, and its accompanying paper references an academic work
by Xining Li (1999):

A new term representation method for Prolog
Xining Li, 1999
https://www.sciencedirect.com/science/article/pii/S0743106697000629

Besides the fact that Program Sharing (PS), as it is called in the
paper, is nothing new, WebPL also shows a more modern take, in that
it already uses compound data types from Rust. Can we
replicate some of the performance advantages of a PS system
versus the more traditional WAM- resp. ZIP-based systems? Here
is a simple test in the WebPL Playground, for WebPL without GC:

    /* WebPL NoGC */
    ?- test2(10).
    (1795.6ms)

    ?- test2(30).
    (1785.5ms)

    ?- test2(90).
(1765.6ms)

Then SWI-Prolog WASM, as found in SWI-Tinker:

    /* SWI-Prolog WASM */
    ?- test2(10).
    (1239.3ms)

    ?- test2(30).
    (2276.1ms)

    ?- test2(90).
    (5372.3ms)

    https://webpl.whenderson.dev/

    Bye

    The test case:

    data(10, [10, 9, 8, 7, 6, 5, 4, 3, 2, 1]).

data(30, [30, 29, 28, 27, 26, 25, 24, 23,
    22, 21, 20, 19, 18, 17, 16, 15, 14, 13,
    12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1]).

data(90, [90, 89, 88, 87, 86, 85, 84, 83,
    82, 81, 80, 79, 78, 77, 76, 75, 74, 73,
    72, 71, 70, 69, 68, 67, 66, 65, 64, 63,
    62, 61, 60, 59, 58, 57, 56, 55, 54, 53,
    52, 51, 50, 49, 48, 47, 46, 45, 44, 43,
    42, 41, 40, 39, 38, 37, 36, 35, 34, 33,
    32, 31, 30, 29, 28, 27, 26, 25, 24, 23,
    22, 21, 20, 19, 18, 17, 16, 15, 14, 13,
    12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1]).

    test(N) :- between(1,1000,_), data(N,_), fail.
    test(_).

    test2(N) :- between(1,1000,_), test(N), fail.
    test2(_).

    between(Lo, Lo, R) :- !, Lo = R.
    between(Lo, _, Lo).
    between(Lo, Hi, X) :- Lo2 is Lo+1, between(Lo2, Hi, X).



  • From Mild Shock@[email protected] to comp.lang.prolog on Mon Aug 18 15:06:39 2025
    From Newsgroup: comp.lang.prolog

    Hi,

OK, let's run the test case on the desktop,
and not on the web. What do we get? It's almost
constant for Trealla Prolog as well: in WebPL
it was perfectly constant, but here it's only
almost constant:

    /* Trealla Prolog 2.82.14 */

    ?- time(test2(10)).
    % Time elapsed 0.188s, 3004002 Inferences, 16.014 MLips
    true.

    ?- time(test2(30)).
    % Time elapsed 0.210s, 3004002 Inferences, 14.321 MLips
    true.

    ?- time(test2(90)).
    % Time elapsed 0.228s, 3004002 Inferences, 13.147 MLips
    true.

Scryer Prolog fails the test horribly, which
is amazing, since it is a Rust Prolog system just
like WebPL. But it is too traditional in
following the stupid WAM design:

    /* Scryer Prolog 0.9.4-599 */

    ?- time(test2(10)).
    % CPU time: 0.714s, 7_049_076 inferences
    true.

    ?- time(test2(30)).
    % CPU time: 1.284s, 7_049_099 inferences
    true.

    ?- time(test2(90)).
    % CPU time: 2.984s, 7_049_099 inferences
    true.

    Bye

  • From Mild Shock@[email protected] to comp.lang.prolog on Mon Aug 18 15:42:38 2025
    From Newsgroup: comp.lang.prolog

    Hi,

Smarter Partial Strings would use Program
Sharing. Take the invention from Scryer
Prolog and think about it from a Program
Sharing perspective:

    p --> "abc", q

With Partial Strings, this translates to:

    p(C, B) :- C = "abc"||A, q(A, B).

Unfortunately, straightforward Program
Sharing of the partial string doesn't
work anymore, since it is not ground:

    p(C, B) :- C = [a,b,c|A], q(A, B).

    But we could translate the DCG also to:

    p(C, B) :- '$append'([a,b,c],A,C), q(A, B).

Where '$append'/3 is a mode (+,-,-) specialization
of append/3, which could be natively implemented.
The mode (+,-,-) will be more clever
than the failed program sharing. The program
sharing can share the string "abc", since with
'$append'/3, the DCG is basically:

    p(C, B) :- '$append'("abc",A,C), q(A, B).

Now '$append'/3 would copy the string
if A is unbound; this is usually the "DCG used for
text generation" mode. But if A is bound,
'$append'/3 would not do any copying, but
would actually match the prefix. So it gives
a much better DCG for parsing, since this is
the "DCG used for text parsing" mode.

    Bye

  • From Mild Shock@[email protected] to comp.lang.prolog on Mon Aug 18 15:49:54 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    In Dogelog Player I don't need to introduce
    '$append'/3, since in this code there is
    anyway an attempt to do static shunting:

    p(C, B) :- C = [a,b,c|A], q(A, B).

    It is handled as if it were:

    p([a,b,c|A], B) :- q(A, B).

This means [a,b,c|A] is program shared (PS)
anyway, and a matching happens so that we can
ultimately omit the creation of a real Prolog
variable for A. It gets a special placeholder
that is not trailed. Maybe I will find a test case
to illustrate this form of program sharing,

which I have tentatively termed static shunting,
whereas the shunting from the WebPL paper I would
rather call dynamic shunting. Unfortunately, WebPL
does not support DCG parsing; (-->)/2
clauses don't work. So it will take me more time
to test whether there is something in WebPL
concerning this type of program sharing as well,
or whether it was botched.

    Bye

  • From Mild Shock@[email protected] to comp.lang.prolog on Sun Aug 31 23:56:56 2025
    From Newsgroup: comp.lang.prolog

    Hi,

Wow! I didn't know that lousy Microsoft
Copilot-certified laptops are that fast:

    %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
    % Dogelog Player 2.1.1 for Java

    % AMD Ryzen 5 4500U
    % ?- time(test).
    % % Zeit 756 ms, GC 1 ms, Lips 9950390, Uhr 23.08.2025 02:45
    % true.

    % AMD Ryzen AI 7 350
    % ?- time(test).
    % % Zeit 378 ms, GC 1 ms, Lips 19900780, Uhr 28.08.2025 17:44
    % true.

What happened to the death of Moore's Law?
Somehow memory speed, CPU-RAM and GPU-RAM,
tripled. Possibly due to some Artificial
Intelligence demand. And the bloody thing
also has an NPU (Neural Processing Unit),
nicely visible.

    Bye

About the RAM speed: the L1, L2 and L3
caches are bigger, so it's harder to poison
the CPU. Also, the CPU shows a revival of
Hyper-Threading Technology (HTT), which
AMD gives a different name: they call it
Simultaneous Multithreading (SMT).

    https://www.cpubenchmark.net/compare/3702vs6397/AMD-Ryzen-5-4500U-vs-AMD-Ryzen-AI-7-350

    BTW: Still ticking along with the primes.pl example:

test :-
    len(L, 1000),
    primes(L, _).

primes([], 1).
primes([J|L], J) :-
    primes(L, I),
    K is I+1,
    search(L, K, J).

search(L, I, J) :-
    mem(X, L),
    I mod X =:= 0, !,
    K is I+1,
    search(L, K, J).
search(_, I, I).

mem(X, [X|_]).
mem(X, [_|Y]) :-
    mem(X, Y).

len([], 0) :- !.
len([_|L], N) :-
    N > 0,
    M is N-1,
    len(L, M).

  • From Mild Shock@[email protected] to comp.lang.prolog on Mon Sep 1 00:45:00 2025
    From Newsgroup: comp.lang.prolog

    Hi,

2025 will be the last year we hear of Python.
This is just a tears-in-your-eyes eulogy:

Python: The Documentary | An origin story
https://www.youtube.com/watch?v=GfH4QL4VqJ0

The Zen of Python is very different
from the Zen of Copilot+. The bloody
Copilot+ laptop doesn't use Python
in its Artificial Intelligence:

AI Content Extraction
- Python involved? ❌ None at runtime;
  model runs in ONNX + DirectML on the NPU

AI Image Search
- Python involved? ❌ None at runtime;
  on-device image features, fully compiled

AI Phi Silica
- Python involved? ❌ None at runtime;
  lightweight Phi model packaged as ONNX

AI Semantic Analysis
- Python involved? ❌ None at runtime;
  text understanding done via compiled
  ONNX operators

    Bye

  • From Mild Shock@[email protected] to comp.lang.prolog on Fri Sep 5 00:36:17 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    Swiss AI Apertus
    Model ID: apertus-70b-instruct
    Parameters: 70 billion
    License: Apache 2.0
    Training: 15T tokens across 1,000+ languages
    Availability: Free during Swiss AI Weeks (September 2025)

    https://platform.publicai.co/docs

    Bye

    P.S.: A chat interface is here:

    Try Apertus
    https://publicai.co/

  • From Mild Shock@[email protected] to comp.lang.prolog on Fri Sep 5 01:03:55 2025
    From Newsgroup: comp.lang.prolog

    Hi,

Don't try this: don't ask Apertus how
many holes an Emmentaler cheese has.

And absolutely don't try this: next, ask
it to please answer in Schwitzerdütsch.

    Bye

P.S.: ChatGPT can do it.

  • From Mild Shock@[email protected] to comp.lang.prolog on Fri Sep 19 10:01:22 2025
    From Newsgroup: comp.lang.prolog

    Hi,

For the LP (Linear Programming) part, it
might be interesting to recall that SWI-Prolog
has a corresponding library:

A.55 library(simplex): Solve linear programming problems
https://eu.swi-prolog.org/pldoc/man?section=simplex
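
A minimal usage sketch of library(simplex), following the API from
the SWI-Prolog documentation (gen_state/1, constraint/3, maximize/3
and variable_value/3); the objective and constraints here are just
made-up illustration values:

/* Maximize x + y subject to x >= 1, y >= 1, x + y =< 10.
   The state S0..S3 is threaded explicitly; in DCG notation
   the constraint/1 goals would thread it implicitly. */
:- use_module(library(simplex)).

example(X, Y) :-
    gen_state(S0),
    constraint([x] >= 1, S0, S1),
    constraint([y] >= 1, S1, S2),
    constraint([x, y] =< 10, S2, S3),
    maximize([x, y], S3, S),
    variable_value(S, x, X),
    variable_value(S, y, Y).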

To model the constraint store, it doesn't need
any native Prolog system support, since it uses
DCGs for state threading. Linear programming was
for a long time the pinnacle of mathematical problem
solving. But some Artificial Intelligence methods
typically go beyond the linear case and might also
tackle non-linear problems etc., making heavy
use of an NPU (Neural Processing Unit). In May 2025
the first AI laptops arrived with >40 TOPS NPUs,
spearheaded by Microsoft, branding it Copilot+.

    Bye
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@[email protected] to comp.lang.prolog on Fri Sep 19 10:10:51 2025
    From Newsgroup: comp.lang.prolog

    Hi,

It seems the LP (Linear programming)
library by SWI-Prolog has also been
ported to Scryer Prolog, using the same DCG
design as demonstrated in SWI-Prolog:

    Module simplex
    https://www.scryer.pl/simplex

What it requires from the Prolog system,
and is not covered by the ISO core standard,
are rational numbers, i.e. rdiv/2 etc., and if
you feed it floating point numbers,

judging from the source code, it might bark
that it has no CLP(R) available to solve it. CLP(R)
could maybe be a good candidate for Copilot+
machines, but I am currently not aware

of a Copilot+ Prolog system, so to speak:

    About Microsoft Copilot+ PCs https://www.wired.com/story/what-is-copilot-plus-pc/

The DCG design could make it easy for a
solver to somehow hand a problem to an NPU,
making it transparent for the end-user.

    Bye

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@[email protected] to comp.lang.prolog on Fri Sep 19 14:38:28 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    Thank god it was only coffee and not orange juice:

    Ozzy Pours The Perfect O.J.
    https://m.youtube.com/watch?v=ojQUYq21G-o


    Bye

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@[email protected] to comp.lang.prolog on Fri Sep 19 16:08:29 2025
    From Newsgroup: comp.lang.prolog

    Hi,

Since some idiots blocked me on Scryer Prolog issues,
I raise the issue here. Basically unify_with_occurs_check/2
probably uses a different implementation of unification

than the one used for (=)/2, because it doesn't scale. I find:

    /* Scryer Prolog */
    ?- test3(25).
    % CPU time: 0.001s, 57 inferences
    true.

    ?- test4(25).
    % CPU time: 2.133s, 57 inferences
    true.

The expectation would be that unify_with_occurs_check/2
just scales like it does in SWI-Prolog. In
SWI-Prolog I find:

    /* SWI-Prolog 9.3.30 */
    ?- test3(25).
    % -1 inferences, 0.000 CPU in 0.000 seconds (0% CPU, Infinite Lips)
    true.

    ?- test4(25).
    % -1 inferences, 0.000 CPU in 0.000 seconds (0% CPU, Infinite Lips)
    true.

    The test case was simply a hydra variant. Actually the
    last hydra modification posted by @kuniaki, which I
    am currently ticking along now:

    hydra(0, _) :- !.
    hydra(N, h(X, X)):- N>0, N0 is N-1, hydra(N0, X).

    hydra(0, A, A) :- !.
    hydra(N, h(X, X), A):- N>0, N0 is N-1, hydra(N0, X, A).

test3(N) :- hydra(N, X), hydra(N, Y, Y),
    time(X = Y).

test4(N) :- hydra(N, X), hydra(N, Y, Y),
    time(unify_with_occurs_check(X, Y)).

    But of course there is a cut (!) in the first rules.
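A rough illustration of why hydra terms are such a stress test (my own JavaScript sketch, not how any of the systems above implement unification): hydra(N) has 2^N paths but only N+1 distinct nodes, so a traversal that ignores sharing goes exponential, while a pair-memoized one stays linear.

```javascript
// hydra(N): h(X, X) nested N deep, both arguments sharing a single node.
function hydra(n) {
  let t = "a";                          // ground leaf in place of the variable
  for (let i = 0; i < n; i++) t = { f: "h", args: [t, t] };
  return t;
}

// Sharing-aware structural equality: each (x, y) node pair is visited at
// most once, so hydra terms cost O(N) instead of O(2^N).
function equal(x, y, seen = new Map()) {
  if (x === y) return true;
  if (typeof x !== "object" || typeof y !== "object") return x === y;
  let s = seen.get(x);
  if (s === undefined) { s = new Set(); seen.set(x, s); }
  else if (s.has(y)) return true;       // pair already under comparison
  s.add(y);
  if (x.f !== y.f || x.args.length !== y.args.length) return false;
  return x.args.every((a, i) => equal(a, y.args[i], seen));
}
```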

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@[email protected] to comp.lang.prolog on Fri Sep 19 16:18:25 2025
    From Newsgroup: comp.lang.prolog

    Hi,

Not sure whether it's a language issue or an
algorithmic issue. But I was working hard to
bring unlimited stacks to Dogelog Player,

removing the use of native stacks and introducing
some agenda data structures for certain primitive
built-ins. Now, amazingly, I get in Rust:

    /* Scryer Prolog 0.9.4-656 */
    ?- between(7,10,K), N is 4^K, test2(N), fail; true.
    % CPU time: 0.001s, 56 inferences
    % CPU time: 0.004s, 56 inferences
    % CPU time: 0.019s, 56 inferences
    % CPU time: 0.132s, 56 inferences
    true.

On the other hand, JavaScript shows me:

    /* Dogelog Player 2.1.1 / Node.js v24.6.0 */
    ?- between(7,10,K), N is 4^K, test2(N), fail; true.
    % Zeit 1 ms, GC 0 ms, Lips 15000, Uhr 19.09.2025 09:17
    % Zeit 4 ms, GC 0 ms, Lips 3750, Uhr 19.09.2025 09:17
    % Zeit 21 ms, GC 0 ms, Lips 714, Uhr 19.09.2025 09:17
    % Zeit 57 ms, GC 0 ms, Lips 263, Uhr 19.09.2025 09:17
    true.

Stunning! The test case is the same hydra as
in my previous post, now benchmarking the predicate (==)/2:

    test2(N) :- hydra(N, X), hydra(N, Y, Y), time(X == Y).

But I have to redo the tests with more iterations
to flatten the erratic behaviour of time measurement,
garbage collection and who knows what, to get a

better picture. But I have observed since yesterday that
JavaScript easily beats Rust, when using the Bart
Demoen folklore trick inside JavaScript. One of

the big brakes was not the stack; there is practically
no difference between using a native stack and an
artificial stack based on Array(). It's more that

the slowdown was Map(), and it could be removed
by using the Bart Demoen folklore trick, as referenced
by SWI-Prolog in the source code of unify().
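The Map()-versus-in-place trade-off can be sketched like this (illustrative JavaScript of my own, not Dogelog Player code, and whether it matches the Bart Demoen trick exactly is my interpretation): a side Map pays hashing on every dereference hop, whereas writing the binding into the variable itself is a plain field access, with a trail to undo the temporary mutation afterwards.

```javascript
// Variable with an in-place binding slot.
class Var { constructor() { this.ref = null; } }

// Side-table dereference: every hop pays Map hashing.
function derefMap(map, t) {
  while (t instanceof Var && map.has(t)) t = map.get(t);
  return t;
}

// In-place dereference: a plain field read per hop.
function deref(t) {
  while (t instanceof Var && t.ref !== null) t = t.ref;
  return t;
}

// Temporary in-place binding with a trail, so the mutation can be undone
// and the term handed back untainted.
function bind(trail, v, t) { v.ref = t; trail.push(v); }
function undoTo(trail, mark) { while (trail.length > mark) trail.pop().ref = null; }
```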

    Bye

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@[email protected] to comp.lang.prolog on Fri Sep 19 18:22:23 2025
    From Newsgroup: comp.lang.prolog

    Hi,

I like this expert system description by bauhaus911:

    I used Claude code to help me create a Prolog
    program of a little expert system to manage a
    kitchen that needed to produce different dishes
    with different appliances and to be able to
    maximize revenue. -- bauhaus911

    Instead of maximizing revenue you could also
    maximize energy boost. So instead of having
    a couple of morons on SWI-Prolog discourse,

    like those that have parked their brain in the
    nowhere and are going full throttle Donald
Trump / Kash Patel Nazi, the system could

    indeed recommend Orange Juice instead of
    coffee. For the following brain benefits:

    - Vitamin C powerhouse: ~50–60 mg per 100 ml,
    giving a solid immune boost.

    - Quick energy: natural sugars (glucose + fructose)
    give your brain and body fast fuel.

    - Hydration: mostly water, which helps maintain
    energy and focus.

    Have Fun! LoL

    Bye


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@[email protected] to comp.lang.prolog on Fri Sep 19 18:38:59 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    You deleted like 10 posts of mine in the last
    48 hours, which tried to explain why patching
    is against "discourse".

    Even Torbjörn Lager agreed. I don't think
    you can continue your forum in this style.
And then, after you deleted a dozen posts

of mine, I am not allowed to delete my posts?

    You are simply completely crazy!!!

    Bye

    I got the following nonsense from you:

    Jan, we’ve asked you to be less combative with
    people here, but you continue to be extremely
    aggressive towards other users of the site.
    You have very helpful things to add, but when
    you then go back and delete everything you post,
    it obviates that helpfulness.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@[email protected] to comp.lang.prolog on Fri Sep 19 18:42:23 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    I will consult a Lawyer of mine.
    Maybe I can ask for a complete
    tear down of all my content.

    Bye

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@[email protected] to comp.lang.prolog on Thu Sep 25 01:50:01 2025
    From Newsgroup: comp.lang.prolog

    Hi,

Scryer Prolog's unify_with_occurs_check/2 might have
been fixed. I can now test the following:

    /* Scryer Prolog 0.9.4-660 */

    % ?- bench, bench, bench.
    % [...]
    % % CPU time: 0.148s, 57 inferences
    % % CPU time: 0.126s, 57 inferences
    % % CPU time: 0.214s, 58 inferences
    % % CPU time: 0.213s, 58 inferences
    % true.

    % ?- bench2, bench2, bench2.
    % [...]
    % % CPU time: 0.036s, 58 inferences
    % % CPU time: 0.042s, 58 inferences
    % % CPU time: 0.018s, 59 inferences
    % % CPU time: 0.096s, 56 inferences
    % true.

This was the test case; it includes
unify_with_occurs_check/2:

    hydra(0, _) :- !.
    hydra(N, h(X, X)) :- N > 0, N0 is N-1, hydra(N0, X).

    hydra(0, A, A) :- !.
    hydra(N, h(X, X), A) :- N > 0, N0 is N-1, hydra(N0, X, A).

bench :-
    hydra(1048576, X), hydra(1048576, Y, Y),
    time(X = Y),
    time(unify_with_occurs_check(X, Y)),
    time(X == Y),
    time(compare(_, X, Y)), fail; true.

bench2 :-
    hydra(1048576, X), hydra(1048576, Y, Y),
    time(copy_term(X-Y,_)),
    time(term_variables(X-Y,_)),
    time(\+ ground(X-Y)),
    time(acyclic_term(X-Y)),
    fail; true.

    Bye

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@[email protected] to comp.lang.prolog on Thu Sep 25 01:59:16 2025
    From Newsgroup: comp.lang.prolog

    Hi,

The fascinating result was that Jaffar unification
beats Scryer Prolog even on the JavaScript target.
Not to speak of the Java target, which also beat it.

But I rejected Jaffar unification, because it
temporarily modifies my frozen terms, which might
impede some future program sharing across

preemptive threads. So I rolled back pointer-based
Jaffar unification and went back to Map-based
union-find. Overall the Map and a slightly bigger

stack incur a factor 3x slowdown. So for Java I get now:

    /* Dogelog Player 2.1.1 for Java */

    % ?- bench, bench, bench.
    % [...]
    % % Zeit 469 ms, GC 0 ms, Lips 42, Uhr 24.09.2025 20:00
    % % Zeit 318 ms, GC 0 ms, Lips 62, Uhr 24.09.2025 20:00
    % % Zeit 329 ms, GC 0 ms, Lips 60, Uhr 24.09.2025 20:00
    % % Zeit 378 ms, GC 0 ms, Lips 52, Uhr 24.09.2025 20:00
    % true.

    % ?- bench2, bench2, bench2.
    % [...]
    % % Zeit 847 ms, GC 0 ms, Lips 23, Uhr 25.09.2025 01:04
    % % Zeit 506 ms, GC 0 ms, Lips 39, Uhr 25.09.2025 01:04
    % % Zeit 186 ms, GC 0 ms, Lips 118, Uhr 25.09.2025 01:04
    % % Zeit 418 ms, GC 0 ms, Lips 35, Uhr 25.09.2025 01:04
    % true.

In the binary predicates (bench) the factor 3x is pretty
much seen. But in the unary predicates (bench2) the
factor is much higher, something like 10x-20x. And JavaScript

doesn't help. But this might be the price to pay for
a "non-intrusive" algorithm. Another name I have for my
current take is "non-tainting" algorithms.

I should keep a closer eye on what can be done "non-
intrusively", or maybe devise an algorithm that is a
mixture of "non-intrusive" and "intrusive".
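The "non-intrusive" Map-based union-find can be sketched as follows (an illustrative JavaScript sketch of mine, not the actual Dogelog Player code). The point is that all links live in a side Map, so the frozen terms themselves are never touched:

```javascript
// find: follow Map links to the representative, then compress the path,
// all bookkeeping confined to the side Map.
function find(map, x) {
  let r = x;
  while (map.has(r)) r = map.get(r);
  while (map.has(x) && map.get(x) !== r) {
    const next = map.get(x);
    map.set(x, r);                       // path compression
    x = next;
  }
  return r;
}

// union: link one representative to the other.
function union(map, x, y) {
  const rx = find(map, x), ry = find(map, y);
  if (rx !== ry) map.set(rx, ry);
}
```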

    Bye

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@[email protected] to comp.lang.prolog on Thu Sep 25 02:06:44 2025
    From Newsgroup: comp.lang.prolog

    Hi,

I also tried to measure Trealla Prolog. But the
measurements are strange, always 0.001 secs or
something. My suspicion is that Trealla Prolog

might apply "frozenness" to cyclic terms. A form
of hash consing, which gives Trealla Prolog
enough information to turn certain operations

practically into no-ops. I don't know yet how
to prove my suspicion, and don't know how to
deduce it from the source code.

That there are two kinds of "frozen" terms,
acyclic and cyclic, emerged a few days ago in
formerly Jekejeke Prolog. I can represent it

inside the terms as null versus Variable[], in
the variable spine. But I was not yet able to
bring this feature to Dogelog Player, because

copy_term/2 does not yet attempt a "frozenness"
analysis. Frozen Prolog terms are only produced
during transpilation, consult or assert,

but not yet during copy_term/2 in Dogelog Player.
    Bye

    Mild Shock schrieb:
    Hi,

    The facinating result was the Jaffar Unification
    beats Scryer Prolog even on the target JavaScript.
    Not to speak of the Java target, which also beat it.

    But I rejected Jaffar Unification, because it
    temporarily modifies my frozen terms, which might
    impede some future program sharing across

    premptive threads. So I rolled back Pointer based
    Jaffar Unification, and went back to Map Based
    Union Find. Overall the Map and a slightly bigger

    stack incures a factor 3x slowdown. So for Java I get now:

    /* Dogelog Player 2.1.1 for Java */

    % ?- bench, bench, bench.
    % [...]
    % % Zeit 469 ms, GC 0 ms, Lips 42, Uhr 24.09.2025 20:00
    % % Zeit 318 ms, GC 0 ms, Lips 62, Uhr 24.09.2025 20:00
    % % Zeit 329 ms, GC 0 ms, Lips 60, Uhr 24.09.2025 20:00
    % % Zeit 378 ms, GC 0 ms, Lips 52, Uhr 24.09.2025 20:00
    % true.

    % ?- bench2, bench2, bench2.
    % [...]
    % % Zeit 847 ms, GC 0 ms, Lips 23, Uhr 25.09.2025 01:04
    % % Zeit 506 ms, GC 0 ms, Lips 39, Uhr 25.09.2025 01:04
    % % Zeit 186 ms, GC 0 ms, Lips 118, Uhr 25.09.2025 01:04
    % % Zeit 418 ms, GC 0 ms, Lips 35, Uhr 25.09.2025 01:04
    % true.

    In the binary predicates (bench) the factor 3x is pretty
    much seen. But in the unary predicates (bench2) the
    factor is much higher , something 10x - 20x. And JavaScript

    doesn't help. But this might be the price to pay for
    a "non-intrusive" algorithm. Another name I have for my
    current take is "non-tainting" algorithms.

    Should put a closer eye what could be done "non-intrusive",
    or maybe device an algorithm that is a mixture of "non-
    intrusive" and "intrucive".

    Bye

    Mild Shock schrieb:
    Hi,

    Scryer Prologs unify_with_occurs_check/2 might have
    been fixed. I can now test the following:

    /* Scryer Prolog 0.9.4-660 */

    % ?- bench, bench, bench.
    % [...]
    %    % CPU time: 0.148s, 57 inferences
    %    % CPU time: 0.126s, 57 inferences
    %    % CPU time: 0.214s, 58 inferences
    %    % CPU time: 0.213s, 58 inferences
    %    true.

    % ?- bench2, bench2, bench2.
    % [...]
    %    % CPU time: 0.036s, 58 inferences
    %    % CPU time: 0.042s, 58 inferences
    %    % CPU time: 0.018s, 59 inferences
    %    % CPU time: 0.096s, 56 inferences
    %    true.

    This was the test case; it includes
    unify_with_occurs_check/2:

    hydra(0, _) :- !.
    hydra(N, h(X, X)) :- N > 0, N0 is N-1, hydra(N0, X).

    hydra(0, A, A) :- !.
    hydra(N, h(X, X), A) :- N > 0, N0 is N-1, hydra(N0, X, A).

    bench :-
        hydra(1048576, X), hydra(1048576, Y, Y),
        time(X = Y),
        time(unify_with_occurs_check(X, Y)),
        time(X == Y),
        time(compare(_, X, Y)), fail; true.

    bench2 :-
        hydra(1048576, X), hydra(1048576, Y, Y),
        time(copy_term(X-Y,_)),
        time(term_variables(X-Y,_)),
        time(\+ ground(X-Y)),
        time(acyclic_term(X-Y)),
        fail; true.
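    For context on why these timings can differ so wildly:
    the hydra term is a fully shared DAG, since h(X, X)
    uses the same subterm twice, so a traversal that
    memoizes visited nodes is linear in N, while one that
    re-walks shared subterms is exponential. A small
    Python toy of the shape (my illustration, not any
    system's actual code):

```python
# The hydra term built by hydra(N, X) has 2**(N+1)-1 positions but
# only N+1 distinct nodes, because h(X, X) shares both arguments.
# A traversal that caches visited nodes is linear; a naive one is
# exponential, which is what the timings above probe.

def hydra(n):
    t = "leaf"
    for _ in range(n):
        t = ("h", t, t)          # both arguments are the SAME object
    return t

def size_naive(t):
    """Walk every position: exponential in the hydra depth."""
    if not isinstance(t, tuple):
        return 1
    return 1 + size_naive(t[1]) + size_naive(t[2])

def count_nodes(t):
    """Visit each distinct node once: linear in the hydra depth."""
    seen, stack, nodes = set(), [t], 0
    while stack:
        u = stack.pop()
        if id(u) in seen:
            continue
        seen.add(id(u))
        nodes += 1
        if isinstance(u, tuple):
            stack.extend(u[1:])   # skip the functor, push the arguments
    return nodes

print(size_naive(hydra(20)))        # 2097151 positions at depth 20 already
print(count_nodes(hydra(100_000)))  # 100001 nodes, returns instantly
```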

    Bye

    Mild Shock schrieb:
    Hi,

    Since some idiots blocked me on the Scryer Prolog
    issue tracker, I raise the issue here. Basically
    unify_with_occurs_check/2 probably uses a different
    implementation of unification than the one used
    for (=)/2, because it doesn't scale. I find:

    /* Scryer Prolog */
    ?- test3(25).
        % CPU time: 0.001s, 57 inferences
        true.

    ?- test4(25).
        % CPU time: 2.133s, 57 inferences
        true.

    The expectation would be that unify_with_occurs_check/2
    just scales like it does in SWI-Prolog, where I find:

    /* SWI-Prolog 9.3.30 */
    ?- test3(25).
    % -1 inferences, 0.000 CPU in 0.000 seconds (0% CPU, Infinite Lips)
    true.

    ?- test4(25).
    % -1 inferences, 0.000 CPU in 0.000 seconds (0% CPU, Infinite Lips)
    true.

    The test case was simply a hydra variant, actually
    the last hydra modification posted by @kuniaki,
    which I am currently ticking along with now:

    hydra(0, _) :- !.
    hydra(N, h(X, X)):- N>0, N0 is N-1, hydra(N0, X).

    hydra(0, A, A) :- !.
    hydra(N, h(X, X), A):- N>0, N0 is N-1, hydra(N0, X, A).

    test3(N) :- hydra(N, X), hydra(N, Y, Y),
        time(X = Y).

    test4(N) :- hydra(N, X), hydra(N, Y, Y),
        time(unify_with_occurs_check(X, Y)).

    But of course there is a cut (!) in the first rules.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@[email protected] to comp.lang.prolog on Thu Sep 25 02:21:16 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    If there were subcategories "acyclic" and "cyclic"
    inside the "frozen" category, one could indeed safely
    use a hybrid algorithm that is non-intrusive for
    frozen terms and intrusive for non-frozen terms.
    Actually, calling it hybrid is a little overkill: it
    would just stop at frozen terms. Given the
    subcategories, the built-in acyclic_term/1 could also
    stop and draw its result from the subcategory.

    This works already in formerly Jekejeke Prolog, but
    not yet in Dogelog Player. That the rollback also
    gave a 10x-20x slowdown for the unary predicates is
    a little annoying. I must find a compromise.
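    The stop-at-frozen-terms idea can be sketched like
    this. A minimal Python toy under my own hypothetical
    naming (freeze and FROZEN_ACYCLIC are not the
    Jekejeke/Dogelog API): frozen terms carry a
    precomputed acyclicity flag, so a traversal such as
    acyclic_term/1 never has to descend into them.

```python
# Sketch of the "stop at frozen terms" idea (illustration only):
# frozen terms get a precomputed acyclic/cyclic flag at freeze time,
# so later traversals consult the flag instead of descending.

FROZEN_ACYCLIC = {}   # id(term) -> bool, keyed by object identity

def freeze(t):
    """Mark a term frozen, caching whether it is acyclic."""
    FROZEN_ACYCLIC[id(t)] = _acyclic(t, set())
    return t

def _acyclic(t, path):
    if id(t) in FROZEN_ACYCLIC:          # frozen: answer is cached
        return FROZEN_ACYCLIC[id(t)]
    if not isinstance(t, list):          # atoms: trivially acyclic
        return True
    if id(t) in path:                    # back-edge: a cycle
        return False
    path.add(id(t))
    ok = all(_acyclic(a, path) for a in t[1:])
    path.discard(id(t))
    return ok

def acyclic_term(t):
    return _acyclic(t, set())

frozen = freeze(["h", "a", "b"])
live = ["f", frozen, "c"]
print(acyclic_term(live))      # True, without re-walking `frozen`
cyc = ["g", None]; cyc[1] = cyc
print(acyclic_term(cyc))       # False
```

    With an acyclic/cyclic subcategory stored per frozen
    term, the check stops at the frozen boundary instead
    of traversing what was already analyzed at consult
    or assert time.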

    Bye

    Mild Shock schrieb:
    Hi,

    I also tried to measure Trealla Prolog, but the
    measurements are strange, always 0.001 secs or
    something. My suspicion is that Trealla Prolog
    might apply "frozenness" to cyclic terms: a form
    of hash consing, which gives Trealla Prolog
    enough information to turn certain operations
    practically into no-ops. I don't yet know how to
    prove my suspicion, and don't know how to
    deduce it from the source code.
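    A hash-consing scheme along those suspected lines
    could look like this minimal Python sketch (my guess
    at the general technique, not Trealla's actual code):
    structurally equal terms are interned into one
    object, so deep equality collapses into a pointer
    comparison.

```python
# Toy hash-consing sketch: building a term first checks an intern
# table, so structurally equal terms become the SAME object and
# deep comparisons degenerate into identity checks (near no-ops).

_intern = {}

def cons(functor, *args):
    # Children are already interned, so their ids identify them.
    key = (functor, tuple(id(a) for a in args))
    if key not in _intern:
        _intern[key] = (functor, *args)
    return _intern[key]

a = cons("h", cons("leaf"), cons("leaf"))
b = cons("h", cons("leaf"), cons("leaf"))
print(a is b)   # True: equality without any traversal
```

    If a system interns terms this way, operations like
    (==)/2 on huge shared structures can finish in
    constant time, which would explain timings that are
    always around 0.001 secs.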

    That there are two kinds of "frozen" terms,
    acyclic and cyclic, emerged a few days ago in
    formerly Jekejeke Prolog. I can represent it
    inside the terms as null versus Variable[] in
    the variable spine. But I was not yet able to
    bring this feature to Dogelog Player, because
    copy_term/2 does not yet attempt a "frozenness"
    analysis: frozen Prolog terms are only produced
    during transpilation, consult or assert, but not
    yet during copy_term/2 in Dogelog Player.

    Bye

    Mild Shock schrieb:
    Hi,

    The fascinating result was that Jaffar Unification
    beats Scryer Prolog even on the target JavaScript,
    not to speak of the Java target, which also beat it.
