Hi,
WebPL already seems outdated, I guess. It doesn't
show the versions of the other Prolog systems
it uses. I had this result for the primes
example in the WebPL playground:
/* Trealla Prolog WASM */
(23568.9ms)
When I run the example here:
https://php.energy/trealla.html
I get better results:
/* trealla-js 0.27.1 */
?- time(test).
% Time elapsed 9.907s, 11263917 Inferences, 1.137 MLips
Bye
Hi,
Heap/Stack Prolog systems could solve some Prolog
string problems, especially in connection with an FFI, but that
is not what I am showing here; this is more about a general design
limitation of the common WAM or ZIP approach. The new WebPL Prolog
describes itself as a merged Heap/Stack architecture Prolog system,
and its accompanying paper references an academic work by Xining Li (1999):
A new term representation method for Prolog
Xining Li, 1999
https://www.sciencedirect.com/science/article/pii/S0743106697000629
Although Program Sharing (PS), as the paper calls it,
is nothing new, WebPL also shows a more modern take, in that
it already uses compound data types from Rust. Can we
replicate some of the performance advantages of a PS system
over the more traditional WAM or ZIP based systems? Here
is a simple test in the WebPL Playground, for WebPL without GC:
/* WebPL NoGC */
?- test2(10).
(1795.6ms)
?- test2(30).
(1785.5ms)
?- test2(90).
(1765.6ms)
Then SWI-Prolog WASM as found in SWI-Tinker:
/* SWI-Prolog WASM */
?- test2(10).
(1239.3ms)
?- test2(30).
(2276.1ms)
?- test2(90).
(5372.3ms)
https://webpl.whenderson.dev/
Bye
The test case:
data(10, [10, 9, 8, 7, 6, 5, 4, 3, 2, 1]).
data(30, [30, 29, 28, 27, 26, 25, 24, 23,
22, 21, 20, 19, 18, 17, 16, 15, 14, 13,
12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1]).
data(90, [90, 89, 88, 87, 86, 85, 84, 83,
82, 81, 80, 79, 78, 77, 76, 75, 74, 73,
72, 71, 70, 69, 68, 67, 66, 65, 64, 63,
62, 61, 60, 59, 58, 57, 56, 55, 54, 53,
52, 51, 50, 49, 48, 47, 46, 45, 44, 43,
42, 41, 40, 39, 38, 37, 36, 35, 34, 33,
32, 31, 30, 29, 28, 27, 26, 25, 24, 23,
22, 21, 20, 19, 18, 17, 16, 15, 14, 13,
12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1]).
test(N) :- between(1,1000,_), data(N,_), fail.
test(_).
test2(N) :- between(1,1000,_), test(N), fail.
test2(_).
between(Lo, Lo, R) :- !, Lo = R.
between(Lo, _, Lo).
between(Lo, Hi, X) :- Lo2 is Lo+1, between(Lo2, Hi, X).
Hi,
OK, let's run the test case on the desktop
and not on the web. What do we get? It is almost
constant for Trealla Prolog as well; in
WebPL it was perfectly constant, but here
it is only almost constant:
/* Trealla Prolog 2.82.14 */
?- time(test2(10)).
% Time elapsed 0.188s, 3004002 Inferences, 16.014 MLips
true.
?- time(test2(30)).
% Time elapsed 0.210s, 3004002 Inferences, 14.321 MLips
true.
?- time(test2(90)).
% Time elapsed 0.228s, 3004002 Inferences, 13.147 MLips
true.
Scryer Prolog fails the test horribly, which
is amazing, since it is a Rust Prolog system just
like WebPL. But it is too traditional in
following the stupid WAM design:
/* Scryer Prolog 0.9.4-599 */
?- time(test2(10)).
% CPU time: 0.714s, 7_049_076 inferences
true.
?- time(test2(30)).
% CPU time: 1.284s, 7_049_099 inferences
true.
?- time(test2(90)).
% CPU time: 2.984s, 7_049_099 inferences
true.
Bye
Hi,
Smarter partial strings would use Program
Sharing. Take this Scryer Prolog invention and
think about it from a Program Sharing perspective:
p --> "abc", q.
With partial strings this translates to:
p(C, B) :- C = "abc"||A, q(A, B).
Unfortunately, straightforward Program
Sharing of the partial string no longer
works, since it is not ground:
p(C, B) :- C = [a,b,c|A], q(A, B).
But we could also translate the DCG to:
p(C, B) :- '$append'([a,b,c], A, C), q(A, B).
where '$append'/3 is a mode (+,-,-) specialization
of append/3 that could be implemented natively.
The mode (+,-,-) version is more clever
than the failed Program Sharing, since it
can still share the string "abc"; with
'$append'/3, the DCG is basically:
p(C, B) :- '$append'("abc", A, C), q(A, B).
Now '$append'/3 would copy the string if C
is unbound; this is the usual "DCG used for
text generation" mode. But if C is bound,
'$append'/3 would not copy anything, it
would actually match the prefix. That gives
a much better DCG for parsing, i.e. the
"DCG used for text parsing" mode.
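As an illustration only, here is a minimal sketch of such an '$append'/3 in plain Prolog (the name and the mode specialization are hypothetical; in plain Prolog both modes collapse into ordinary unification, and only a native implementation could really share rather than copy):

```prolog
% Hypothetical '$append'(+Prefix, -Rest, ?List), a sketch of a
% mode (+,-,-) specialization of append/3. When List is bound
% (parsing), the head unification matches the prefix; when List
% is unbound (generation), it builds the list cells.
'$append'([], Rest, Rest).
'$append'([X|Xs], Rest, [X|Ys]) :-
    '$append'(Xs, Rest, Ys).

% Parsing mode:    ?- '$append'([a,b,c], A, [a,b,c,d]).  gives A = [d].
% Generation mode: ?- '$append'([a,b,c], A, C).          gives C = [a,b,c|A].
```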
Bye
Hi,
Whoa! I didn't know that lousy Microsoft
Copilot certified laptops are that fast:
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Dogelog Player 2.1.1 for Java
% AMD Ryzen 5 4500U
% ?- time(test).
% % Zeit 756 ms, GC 1 ms, Lips 9950390, Uhr 23.08.2025 02:45
% true.
% AMD Ryzen AI 7 350
% ?- time(test).
% % Zeit 378 ms, GC 1 ms, Lips 19900780, Uhr 28.08.2025 17:44
% true.
What happened to the death of Moore's Law?
Somehow memory speed, both CPU-RAM and GPU-RAM,
tripled, possibly due to Artificial
Intelligence demand. And the bloody thing
also has an NPU (Neural Processing Unit),
nicely visible.
Bye
About the RAM speed: the L1, L2 and L3
caches are bigger, so it is harder to poison
the CPU. The CPU also shows a revival of
Hyper-Threading Technology (HTT), which
AMD calls by a different name:
Simultaneous Multithreading (SMT).
https://www.cpubenchmark.net/compare/3702vs6397/AMD-Ryzen-5-4500U-vs-AMD-Ryzen-AI-7-350
BTW: Still ticking along with the primes.pl example:
test :-
    len(L, 1000),
    primes(L, _).

primes([], 1).
primes([J|L], J) :-
    primes(L, I),
    K is I+1,
    search(L, K, J).

search(L, I, J) :-
    mem(X, L),
    I mod X =:= 0, !,
    K is I+1,
    search(L, K, J).
search(_, I, I).

mem(X, [X|_]).
mem(X, [_|Y]) :-
    mem(X, Y).

len([], 0) :- !.
len([_|L], N) :-
    N > 0,
    M is N-1,
    len(L, M).
Hi,
2025 will be the last year we hear of Python.
This is just a tears-in-your-eyes eulogy:
Python: The Documentary | An origin story
https://www.youtube.com/watch?v=GfH4QL4VqJ0
The Zen of Python is very different
from the Zen of Copilot+. The bloody
Copilot+ laptop doesn't use Python
in its Artificial Intelligence features:
AI Content Extraction
- Python involved? ❌ None at runtime;
  model runs in ONNX + DirectML on the NPU
AI Image Search
- Python involved? ❌ None at runtime;
  on-device image feature, fully compiled
AI Phi Silica
- Python involved? ❌ None at runtime;
  lightweight Phi model packaged as ONNX
AI Semantic Analysis
- Python involved? ❌ None at runtime;
  text understanding done via compiled ONNX operators
Bye
Hi,
Swiss AI Apertus
Model ID: apertus-70b-instruct
Parameters: 70 billion
License: Apache 2.0
Training: 15T tokens across 1,000+ languages
Availability: Free during Swiss AI Weeks (September 2025)
https://platform.publicai.co/docs
Bye
P.S.: A chat interface is here:
Try Apertus
https://publicai.co/
Hi,
For the LP (Linear Programming) part, it
might be interesting to recall that SWI-Prolog
has a corresponding library:
A.55 library(simplex): Solve linear programming problems
https://eu.swi-prolog.org/pldoc/man?section=simplex
To model the constraint store, it doesn't need
any native Prolog system support, since it uses
DCGs for state threading. Linear programming was
for a long time the pinnacle of mathematical problem
solving. But some Artificial Intelligence methods
typically go beyond the linear case and might also
tackle non-linear problems etc., making heavy
use of an NPU (Neural Processing Unit). In May 2025
the first AI laptops with >40 TOPS NPUs arrived,
spearheaded by Microsoft branding them Copilot+.
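As a small, hedged sketch of how the library is used (predicate names follow the SWI-Prolog manual; the example problem and the exact result representation are my own assumptions): maximize 5x + 4y subject to x + y =< 5 and x =< 3.

```prolog
:- use_module(library(simplex)).

% Maximize 5x + 4y subject to x + y =< 5 and x =< 3,
% threading the constraint store through the state arguments.
small_lp(S) :-
    gen_state(S0),
    constraint([1*x, 1*y] =< 5, S0, S1),
    constraint([1*x] =< 3, S1, S2),
    maximize([5*x, 4*y], S2, S).

% ?- small_lp(S), variable_value(S, x, X), variable_value(S, y, Y).
% The optimum should be at x = 3, y = 2 (objective value 23).
```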
Bye
Hi,
It seems the LP (Linear Programming)
library from SWI-Prolog has also been
ported to Scryer Prolog, using the same DCG
design as demonstrated in SWI-Prolog:
Module simplex
https://www.scryer.pl/simplex
What it requires from the Prolog system,
and what is not covered by the ISO core standard,
are rational numbers, i.e. rdiv/2 etc. And if
you feed it floating point numbers,
judging from the source code, it might complain
that it has no CLP(R) available to solve the
problem. CLP(R) could be a good candidate for Copilot+
machines, but I am currently not aware
of any Copilot+ Prolog system, so to speak:
About Microsoft Copilot+ PCs
https://www.wired.com/story/what-is-copilot-plus-pc/
The DCG design could make it easy for a
solver to hand a problem over to an NPU,
making that transparent for the end-user.
Bye
Hi,
Since some idiots blocked me on the Scryer Prolog issue
tracker, I raise the issue here. Basically, unify_with_occurs_check/2
probably uses a different implementation of unification
than the one used for (=)/2, because it doesn't scale. I find:
/* Scryer Prolog */
?- test3(25).
% CPU time: 0.001s, 57 inferences
true.
?- test4(25).
% CPU time: 2.133s, 57 inferences
true.
The expectation would be that unify_with_occurs_check/2
simply scales, like it does in SWI-Prolog. In
SWI-Prolog I find:
/* SWI-Prolog 9.3.30 */
?- test3(25).
% -1 inferences, 0.000 CPU in 0.000 seconds (0% CPU, Infinite Lips)
true.
?- test4(25).
% -1 inferences, 0.000 CPU in 0.000 seconds (0% CPU, Infinite Lips)
true.
The test case is simply a hydra variant, actually the
last hydra modification posted by @kuniaki, which I
am currently ticking along with now. Note that hydra(N)
has only N+1 distinct subterms when sharing is kept, but
2^N leaves when viewed as a tree, so an occurs check
that ignores sharing degrades exponentially:
hydra(0, _) :- !.
hydra(N, h(X, X)):- N>0, N0 is N-1, hydra(N0, X).
hydra(0, A, A) :- !.
hydra(N, h(X, X), A):- N>0, N0 is N-1, hydra(N0, X, A).
test3(N) :- hydra(N, X), hydra(N, Y, Y),
time(X = Y).
test4(N) :- hydra(N, X), hydra(N, Y, Y),
time(unify_with_occurs_check(X, Y)).
But of course there is a cut (!) in the first rules.
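To see the blow-up, here is a quick sketch (tree_size/2 is a hypothetical helper of mine, reusing the hydra/2 above) that counts h/2 cells of the term viewed as a tree, i.e. without detecting sharing:

```prolog
hydra(0, _) :- !.
hydra(N, h(X, X)) :- N > 0, N0 is N-1, hydra(N0, X).

% Count h/2 cells as a tree, ignoring sharing (hypothetical helper).
tree_size(X, 0) :- var(X), !.
tree_size(h(A, B), N) :-
    tree_size(A, NA),
    tree_size(B, NB),
    N is NA + NB + 1.

% ?- hydra(20, X), tree_size(X, N).
% N = 1048575, i.e. 2^20 - 1 visited cells, although only
% 20 h/2 cells exist in memory when sharing is preserved.
```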
I used Claude code to help me create a Prolog
program of a little expert system to manage a
kitchen that needed to produce different dishes
with different appliances and to be able to
maximize revenue. -- bauhaus911
Hi,
Thank god it was only coffee and not orange juice:
Ozzy Pours The Perfect O.J.
https://m.youtube.com/watch?v=ojQUYq21G-o
Bye
Hi,
I like the expert system description by bauhaus911:
I used Claude code to help me create a Prolog
program of a little expert system to manage a
kitchen that needed to produce different dishes
with different appliances and to be able to
maximize revenue. -- bauhaus911
Instead of maximizing revenue, you could also
maximize energy boost. So instead of having
a couple of morons on the SWI-Prolog Discourse,
like those that have parked their brains in the
middle of nowhere and are going full throttle Donald
Trump / Kash Patel Nazi, the system could
indeed recommend orange juice instead of
coffee, for the following brain benefits:
- Vitamin C powerhouse: ~50–60 mg per 100 ml,
giving a solid immune boost.
- Quick energy: natural sugars (glucose + fructose)
give your brain and body fast fuel.
- Hydration: mostly water, which helps maintain
energy and focus.
Have Fun! LoL
Bye
Hi,
You deleted something like 10 posts of mine in the last
48 hours, posts which tried to explain why patching
is against "discourse".
Even Torbjörn Lager agreed. I don't think
you can continue your forum in this style.
And after you deleted a dozen posts
of mine, I am not allowed to delete my own posts?
You are simply completely crazy!!!
Bye
I got the following nonsense from you:
Jan, we've asked you to be less combative with people here, but you continue to be extremely
aggressive towards other users of the site.
You have very helpful things to add, but when you then go back and delete everything you post,
it obviates that helpfulness.
Hi,
Scryer Prolog's unify_with_occurs_check/2 might have
been fixed. I can now test the following:
/* Scryer Prolog 0.9.4-660 */
% ?- bench, bench, bench.
% [...]
% % CPU time: 0.148s, 57 inferences
% % CPU time: 0.126s, 57 inferences
% % CPU time: 0.214s, 58 inferences
% % CPU time: 0.213s, 58 inferences
% true.
% ?- bench2, bench2, bench2.
% [...]
% % CPU time: 0.036s, 58 inferences
% % CPU time: 0.042s, 58 inferences
% % CPU time: 0.018s, 59 inferences
% % CPU time: 0.096s, 56 inferences
% true.
This was the test case, it includes
unify_with_occurs_check/2:
hydra(0, _) :- !.
hydra(N, h(X, X)) :- N > 0, N0 is N-1, hydra(N0, X).
hydra(0, A, A) :- !.
hydra(N, h(X, X), A) :- N > 0, N0 is N-1, hydra(N0, X, A).
bench :-
    hydra(1048576, X), hydra(1048576, Y, Y),
    time(X = Y),
    time(unify_with_occurs_check(X, Y)),
    time(X == Y),
    time(compare(_, X, Y)), fail; true.
bench2 :-
    hydra(1048576, X), hydra(1048576, Y, Y),
    time(copy_term(X-Y, _)),
    time(term_variables(X-Y, _)),
    time(\+ ground(X-Y)),
    time(acyclic_term(X-Y)),
    fail; true.
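What bench and bench2 stress is that every one of these operations must exploit sharing, and for the cyclic Y also detect cycles, to finish in reasonable time on a term of depth 2^20. One standard trick is memoizing visited node pairs; here is a toy Python sketch of that idea (an illustration, not any system's actual implementation), using mutable lists so that the cyclic hydra can be tied:

```python
def cyclic_hydra(n):
    """Build the cyclic hydra of hydra(N, Y, Y): an h/2 chain of depth n
    whose innermost arguments point back to the root."""
    root = ["h", None, None]
    node = root
    for _ in range(n - 1):
        child = ["h", None, None]
        node[1] = node[2] = child
        node = child
    node[1] = node[2] = root   # tie the knot
    return root

def equal_terms(s, t, seen=None):
    """Structural equality with a memo on visited pairs: runs in time
    proportional to the DAG size and terminates on cyclic terms."""
    if s is t:
        return True
    seen = set() if seen is None else seen
    if (id(s), id(t)) in seen:
        return True            # this pair is already assumed equal
    seen.add((id(s), id(t)))
    if isinstance(s, list) and isinstance(t, list):
        return (len(s) == len(t) and s[0] == t[0] and
                all(equal_terms(a, b, seen) for a, b in zip(s[1:], t[1:])))
    return s == t

print(equal_terms(cyclic_hydra(8), cyclic_hydra(8)))  # True
```

Without the seen set the comparison of two cyclic hydras would not terminate at all; with it, each distinct pair of nodes is visited once.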
Bye
Hi,
The fascinating result was that Jaffar unification
beats Scryer Prolog even on the JavaScript target,
not to speak of the Java target, which also beat it.
But I rejected Jaffar unification, because it
temporarily modifies my frozen terms, which might
impede some future program sharing across
preemptive threads. So I rolled back pointer-based
Jaffar unification, and went back to map-based
union-find. Overall the map and a slightly bigger
stack incur a factor 3x slowdown. So for Java I now get:
/* Dogelog Player 2.1.1 for Java */
% ?- bench, bench, bench.
% [...]
% % Zeit 469 ms, GC 0 ms, Lips 42, Uhr 24.09.2025 20:00
% % Zeit 318 ms, GC 0 ms, Lips 62, Uhr 24.09.2025 20:00
% % Zeit 329 ms, GC 0 ms, Lips 60, Uhr 24.09.2025 20:00
% % Zeit 378 ms, GC 0 ms, Lips 52, Uhr 24.09.2025 20:00
% true.
% ?- bench2, bench2, bench2.
% [...]
% % Zeit 847 ms, GC 0 ms, Lips 23, Uhr 25.09.2025 01:04
% % Zeit 506 ms, GC 0 ms, Lips 39, Uhr 25.09.2025 01:04
% % Zeit 186 ms, GC 0 ms, Lips 118, Uhr 25.09.2025 01:04
% % Zeit 418 ms, GC 0 ms, Lips 35, Uhr 25.09.2025 01:04
% true.
In the binary predicates (bench) the factor 3x is pretty
much what is seen. But in the unary predicates (bench2) the
factor is much higher, something like 10x - 20x, and JavaScript
doesn't help. But this might be the price to pay for
a "non-intrusive" algorithm. Another name I have for my
current take is "non-tainting" algorithms.
I should take a closer look at what can be done "non-intrusively",
or maybe devise an algorithm that is a mixture of
"non-intrusive" and "intrusive".
Bye
Hi,
I also tried to measure Trealla Prolog. But the
measurements are strange, always 0.001 secs or
something. My suspicion is that Trealla Prolog
might apply "frozenness" to cyclic terms: a form
of hash consing, which gives Trealla Prolog
enough information to turn certain operations
practically into no-ops. I don't yet know how
to prove my suspicion, nor how to
deduce it from the source code.
That there are two kinds of "frozen" terms,
acyclic and cyclic, emerged a few days ago in
formerly Jekejeke Prolog. I can represent it
inside the terms as null versus Variable[], in
the variable spine. But I was not yet able to
bring this feature to Dogelog Player, because
copy_term/2 does not yet attempt a "frozenness"
analysis. Frozen Prolog terms are only produced
during transpilation, consult or assert,
but not yet during copy_term/2 in Dogelog Player.
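The suspected mechanism, hash consing, can be pictured like this: every constructed subterm is interned in a table keyed by functor and argument identities, so structurally equal terms become the identical object and operations like equality collapse to pointer comparisons. A toy Python sketch of the technique (an assumption about what Trealla might do, not its actual code):

```python
_table = {}

def cons(functor, *args):
    """Hash-consing constructor: structurally equal terms are interned
    to the identical object, so equality becomes an 'is' check."""
    key = (functor, tuple(id(a) for a in args))
    if key not in _table:
        _table[key] = (functor, *args)
    return _table[key]

a = cons("h", cons("a"), cons("a"))
b = cons("h", cons("a"), cons("a"))
print(a is b)  # True: equality is now a pointer comparison
```

If a system builds its frozen terms this way, timing X == Y or acyclic_term/1 on them would indeed report near-zero times, which would match the suspiciously flat 0.001 sec measurements.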
Bye