• USA is still Shitting its Pants [Artificial Intelligence]

    From Mild Shock@[email protected] to comp.lang.prolog on Wed Jan 22 18:41:16 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    How it started (released January 20, 2025):

    Despite trailing the U.S., China continues to make
    significant strides in AI development. A notable example
    is the Chinese startup DeepSeek, which released an AI
    model named R1. This model demonstrates reasoning
    capabilities comparable to leading U.S. models and is
    available as open-source software, allowing free use and
    commercialization. However, R1 faces issues with censorship,
    likely influenced by Chinese regulations. https://en.wikipedia.org/wiki/DeepSeek#Release_history

    How it's going (released January 21, 2025):

    In a strategic move to further solidify its leadership,
    President Donald Trump announced the Stargate initiative.
    This $500 billion investment, a collaboration between OpenAI,
    Oracle, and SoftBank, aims to develop extensive AI
    infrastructure across the U.S., including data centers
    and energy facilities. The project is expected to create
    over 100,000 jobs and enhance the nation's AI capabilities. https://en.wikipedia.org/wiki/The_Stargate_Project

    With SoftBank they basically call Japan for help!

    LoL

    Bye
    --- Synchronet 3.20c-Linux NewsLink 1.2
  • From Mild Shock@[email protected] to comp.lang.prolog on Mon Sep 29 22:32:20 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    Is there a silver lining to AI democratization?
    With MedGemma I could analyse my own broken ribs,
    if only I had a body scanner. I am currently
    exploring options for LLM models that I could
    run on my new AI-soaked AMD Ryzen AI 7 350 laptop.
    While Qualcomm spearheaded the on-device LLM
    players with their LM Studio, there is FastFlowLM,
    which supports Ryzen and would utilize the NPU.
    Running a distilled DeepSeek, for example,
    would amount to:

    flm run deepseek-r1:8b

    And yes, there is MedGemma:

    MedGemma:4B (Multimodal) Running Exclusively on AMD Ryzen™ AI NPU https://www.youtube.com/watch?v=KWzXZEOcgK4
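
    As a sketch of how one might script against such a
    local runtime from Python: the code below assumes the
    runtime exposes an OpenAI-style chat-completions
    endpoint on localhost. The URL, port, and endpoint
    path are assumptions of mine, not documented
    FastFlowLM facts; check the FastFlowLM docs for the
    real server interface.

```python
# Sketch: querying a locally served model, assuming an
# OpenAI-style chat-completions HTTP endpoint. The URL and
# port below are placeholders, not FastFlowLM documentation.
import json
import urllib.request

def build_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat-completions payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }

def ask_local_model(prompt: str,
                    model: str = "deepseek-r1:8b",
                    url: str = "http://localhost:8000/v1/chat/completions") -> str:
    payload = json.dumps(build_chat_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        answer = json.loads(resp.read())
    # OpenAI-style responses carry the text under choices[0]
    return answer["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask_local_model("Summarize what an NPU is in one sentence."))
```

    The payload builder is separated out so the request
    shape can be inspected without a server running.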

    Bye


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@[email protected] to comp.lang.prolog on Mon Sep 29 22:57:54 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    I hope it doesn't turn my laptop into a
    frying pan. The thingy had a few hiccups
    recently, like using 100% CPU while doing
    nothing. Maybe IntelliJ's fork/join framework
    was overdoing it. But NPUs are physically
    designed for efficient AI math: more
    computations per watt, less heat generated.
    Let's see. But what's the technology behind
    FastFlowLM? It might be a result of:

    GAIA: An Open-Source Project from AMD for
    Running Local LLMs on Ryzen™ AI. GAIA seems
    to be an important piece of the Ryzen AI story.

    Initially GAIA wanted to provide a unified
    software stack for Ryzen AI NPUs, but AMD
    shifted focus to DirectML integration on
    Windows, and GAIA was absorbed into AMD's
    ROCm ecosystem. On the other hand XDNA (2024),
    AMD's commercial NPU architecture, goes
    full circle back to Niklaus Wirth:

    Hades: fast hardware synthesis tools and a reconfigurable coprocessor https://www.research-collection.ethz.ch/entities/publication/23b3a0e4-e5e7-44fe-9b5d-ab43e21859b2

    It has FPGA-inspired reconfigurable fabric!

    Bye

    P.S.: Shit, I should have such a little toy
    compiler as well, somewhere in the notes I took
    during a lecture. Using an array with a for loop
    to model a hardware bus is really funny.
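
    For what it's worth, that lecture-notes idea can be
    sketched in a few lines of Python (my own
    reconstruction, not Wirth's actual exercise): the bus
    is just an array of wires, and a for loop over clock
    cycles lets the currently granted device drive it
    while every other device samples it.

```python
# Toy model of a shared hardware bus: the bus is an array of
# bit "wires", and a for loop over cycles plays the clock.
# A reconstruction of the lecture idea, not Wirth's exercise.

BUS_WIDTH = 8

def simulate(devices, cycles):
    """devices: list of (drive, receive) callables per device.
    Each cycle exactly one device drives the bus (trivial
    round-robin arbiter); all the others read the wires."""
    bus = [0] * BUS_WIDTH
    trace = []
    for cycle in range(cycles):          # the "clock"
        granted = cycle % len(devices)   # round-robin grant
        drive, _ = devices[granted]
        bus = drive(cycle)               # granted device drives
        for i, (_, receive) in enumerate(devices):
            if i != granted:
                receive(bus)             # everyone else samples
        trace.append(list(bus))
    return trace

def make_device(pattern):
    """A device that drives a fixed bit pattern, XORed with
    the cycle parity, and records everything it sees."""
    seen = []
    def drive(cycle):
        return [b ^ (cycle & 1) for b in pattern]
    def receive(bus):
        seen.append(list(bus))
    return (drive, receive), seen

if __name__ == "__main__":
    dev_a, seen_a = make_device([1, 0, 1, 0, 1, 0, 1, 0])
    dev_b, seen_b = make_device([1, 1, 1, 1, 0, 0, 0, 0])
    print(simulate([dev_a, dev_b], cycles=4))
```

    The for loop really is the whole trick: one iteration,
    one clock edge, one bus transaction.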


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@[email protected] to comp.lang.prolog on Tue Sep 30 08:40:37 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    Was Linus Torvalds cautious or clueless?

    "I think AI is really interesting and I think it
    is going to change the world. At the same time,
    I hate the hype cycle so much that I really don't
    want to go there. So, my approach to AI right now
    is I will basically ignore it because I think
    the whole tech industry around AI is in a
    very bad position, and it's 90% marketing and
    10% reality. And, in 5 years, things will change
    and at that point, we will see what of the AI
    is getting used for real workloads".

    https://www.tweaktown.com/news/101381/linux-creator-linus-torvalds-ai-is-useless-its-90-marketing-while-he-ignores-for-now/index.html

    I think his fallacy is to judge AI as mere hype.
    His 2030 timeline might already have received a
    sucker punch from Copilot+ now in late 2025.
    And even before, in 2024, when he made his
    statement, AI was already not hype at all:

    2009–2012 (Deep Learning Wave): GPUs began being
    used for deep learning research, helped by early
    frameworks like Theano (with Caffe following in
    2013). This was when convolutional
    networks for vision really took off.

    2012–2015 (Big Data + Deep Learning): Data centers
    started leveraging clusters of GPUs for large-scale
    training, using distributed frameworks like
    TensorFlow (2015) and PyTorch (2016). Text analysis
    and recommendation systems were already benefiting from this.

    2015–2020 (Specialized Accelerators): Companies
    like Google (TPU), Nvidia (A100), and Qualcomm
    (Hexagon DSP) developed purpose-built hardware
    for AI inference and training. Large-scale NLP
    models like BERT were trained in these environments.

    2020–2024 (Commercial AI Explosion): On-device AI,
    cloud AI services, Copilot+, Claude integrations —
    all of these are the practical realization of what
    had been quietly powering research and enterprise
    workloads for over a decade.

    Bye


    --- Synchronet 3.21a-Linux NewsLink 1.2