
    From Mike Powell@1:2320/105 to All on Wed Oct 8 08:56:22 2025
    OpenAI bans Chinese, North Korean hacker accounts using ChatGPT to launch surveillance

    Date:
    Wed, 08 Oct 2025 12:38:00 +0000

    Description:
    Malicious actors have been trying to trick ChatGPT into aiding surveillance,
    phishing, and malware work, and are now being banned.

    FULL STORY

    OpenAI has banned Chinese, North Korean, and other accounts which were reportedly using ChatGPT to launch surveillance campaigns, develop phishing techniques and malware, and engage in other malicious practices.

    In a new report, OpenAI said it observed individuals reportedly affiliated with Chinese government entities, or state-linked organizations, using its Large Language Model (LLM) to help write proposals for surveillance systems and profiling technologies.

    These included tools for monitoring individuals and analyzing behavioral patterns.

    Exploring phishing

    "Some of the accounts that we banned appeared to be attempting to use ChatGPT
    to develop tools for large-scale monitoring: analyzing datasets, often
    gathered from Western or Chinese social media platforms," the report reads.

    These users typically asked ChatGPT to help design such tools or generate promotional materials about them, but not to implement the monitoring.

    The prompts were framed in a way that avoided triggering safety filters, and were often phrased as academic or technical inquiries.

    While the returned content did not directly enable surveillance, OpenAI said the outputs were used to refine documentation and planning for such systems.

    The North Koreans, on the other hand, used ChatGPT to explore phishing techniques, credential theft, and macOS malware development.

    OpenAI said it observed these accounts testing prompts related to social engineering, password harvesting, and debugging malicious code, especially targeting Apple systems.

    The model refused direct requests for malicious code, OpenAI said, but
    stressed that the threat actors still tried to bypass safeguards by
    rephrasing prompts, or asking for general technical help.

    Just like any other tool, LLMs are being used by both financially motivated and state-sponsored threat actors for all sorts of malicious activity.

    This AI misuse is evolving, with threat actors increasingly integrating AI
    into existing workflows to improve their efficiency.

    While developers such as OpenAI work hard to minimize risk and ensure their products cannot be abused in this way, many prompts fall between legitimate and malicious use. This gray-zone activity, the report suggests, requires nuanced detection strategies.

    Via The Register

    ======================================================================
    Link to news story: https://www.techradar.com/pro/security/openai-bans-chinese-north-korean-hacker-accounts-using-chatgpt-to-launch-surveillance

    $$
    --- SBBSecho 3.28-Linux
    * Origin: capitolcityonline.net * Telnet/SSH:2022/HTTP (1:2320/105)