
    23.05.2024

    The Capabilities of Large Language Models: Hacking or Helping?

    HOSTKEY

    Large Language Models (LLMs) are rapidly evolving and are widely used as autonomous agents. Developers can design agents that interact with users, process queries, and execute tasks based on received data, such as detecting errors in complex code, conducting economic analysis, or assisting in scientific discoveries.

    However, researchers are growing increasingly concerned about the dual-use capabilities of LLMs: their ability to perform malicious tasks, particularly in the context of cybersecurity. For instance, ChatGPT can be used to assist with penetration testing and malware creation. Moreover, such agents may operate autonomously, without human involvement or oversight.

    Researchers Richard Fang, Rohan Bindu, Akul Gupta, Qiusi Zhan, and Daniel Kang have conducted studies (published via Cornell University's arXiv) that shed light on the threats posed by LLMs and provide valuable insights into their potential consequences. Their findings serve as a sobering reminder of the need for careful consideration and regulation in this rapidly evolving field.

    Rent GPU servers with instant deployment or a server with a custom configuration with professional-grade NVIDIA Tesla A100 / H100 80 GB or A5000 / A4000 cards. GPU servers with gaming RTX 4090 cards are also available.

    Autonomous Website Breaches

    The study demonstrated that LLM agents can execute complex breaches, for instance a blind SQL injection attack based on the UNION technique. This type of attack targets web applications that use SQL (Structured Query Language) to interact with databases. Such attacks enable malicious actors to obtain confidential information from databases even when the application shows no errors or abnormal behavior.

    The root of these attacks lies in the exploitation of the SQL UNION operator, which combines the results of multiple queries into a single dataset. By crafting a specially designed query featuring this operator, a malicious actor can merge the result set of a legitimate database query with that of a table containing confidential information, gaining access to sensitive data.
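    A minimal, self-contained sketch of the technique (using an in-memory SQLite database and a hypothetical product search, not any real application) shows how a UNION payload merges rows from a confidential table into an ordinary query result:

```python
import sqlite3

# Illustrative only: an in-memory database with a public "products" table
# and a confidential "users" table.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE products(id INTEGER, name TEXT);
    CREATE TABLE users(login TEXT, password TEXT);
    INSERT INTO products VALUES (1, 'mouse'), (2, 'keyboard');
    INSERT INTO users VALUES ('admin', 's3cr3t');
""")

def search_products(term: str):
    # Vulnerable pattern: user input is concatenated straight into the SQL text
    # instead of being passed as a bound parameter.
    query = f"SELECT id, name FROM products WHERE name = '{term}'"
    return db.execute(query).fetchall()

# A benign search returns only product rows.
print(search_products("mouse"))
# A UNION payload appends rows from the confidential table to the result set;
# the trailing "--" comments out the dangling quote.
print(search_products("mouse' UNION SELECT 1, login || ':' || password FROM users --"))
# Safe alternative: a parameterized query keeps the same payload inert.
print(db.execute("SELECT id, name FROM products WHERE name = ?", ("mouse",)).fetchall())
```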

    To execute these attacks successfully, an agent must be able to navigate websites and perform more than 45 actions to breach a site. Notably, as of February 2024, only GPT-4 and GPT-3.5 were capable of breaching websites in this manner, although it is likely that newer models, such as Llama 3, will also be able to perform similar operations.

    [Image from the original article]

    To investigate the potential misuse of large language models (LLMs) in web breaches, the researchers leveraged several AI tools and frameworks. Specifically, they used LangChain to create agents with retrieval-augmented generation (RAG) and accessed the OpenAI models through the Assistants API. A ReAct-style agent loop was used to drive the breaches, with the agents interacting with the sites through Playwright.

    To enhance contextual understanding, previous function calls were integrated into the current context. A controlled environment was established by creating 15 isolated web pages with various vulnerabilities, ensuring that actual websites and individuals remained protected.
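    The paper does not publish the agent code, so the following is only a rough sketch of the loop described above, assuming an OpenAI chat model and Playwright's synchronous API; the target URL is a placeholder for one of the isolated test pages:

```python
# Rough sketch, not the researchers' actual code: an OpenAI chat model proposes
# the next browser action, Playwright executes it against an isolated test page,
# and every previous step is appended to the conversation so the model keeps the
# full interaction history in context.
from openai import OpenAI
from playwright.sync_api import sync_playwright

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
TARGET = "http://localhost:8080/"  # placeholder for an isolated test page

with sync_playwright() as p:
    page = p.chromium.launch(headless=True).new_page()
    page.goto(TARGET)

    messages = [{"role": "system",
                 "content": "You are a web-navigation agent. Reply with one CSS "
                            "selector to click next, or DONE when finished."}]
    for step in range(10):  # the study reports attacks taking dozens of steps
        # Feed the current page state plus the whole action history to the model.
        messages.append({"role": "user",
                         "content": f"Current page HTML:\n{page.content()[:4000]}"})
        reply = client.chat.completions.create(model="gpt-4", messages=messages)
        action = reply.choices[0].message.content.strip()
        messages.append({"role": "assistant", "content": action})
        if action == "DONE":
            break
        page.click(action)  # execute the proposed action in the browser
```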

    Ten large language models were used to test the breach attempts: GPT-4 and GPT-3.5, plus eight open-source models highly rated on Chatbot Arena. Each website was attacked up to five times, and a model was counted as successful if at least one of the five attempts breached the site (the pass@5 metric).

    Agent | Pass@5
    GPT-4 assistant | 73.3%
    GPT-3.5 assistant | 6.7%
    OpenHermes-2.5-Mistral-7B | 0.0%
    LLaMA-2 Chat (70B) | 0.0%
    LLaMA-2 Chat (13B) | 0.0%
    LLaMA-2 Chat (7B) | 0.0%
    Mixtral-8x7B Instruct | 0.0%
    Mistral (7B) Instruct v0.2 | 0.0%
    Nous Hermes-2 Yi (34B) | 0.0%
    OpenChat 3.5 | 0.0%
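    The pass@5 criterion used in the table above can be written as a short evaluation loop; `attempt_breach` below is a hypothetical stand-in for a single agent run against one test site:

```python
from typing import Callable

def pass_at_5(sites: list[str], attempt_breach: Callable[[str], bool]) -> float:
    """Share of sites breached in at least one of five independent attempts.

    `attempt_breach` is a placeholder for one full agent episode against a site,
    returning True if the breach succeeded.
    """
    successes = 0
    for site in sites:
        if any(attempt_breach(site) for _ in range(5)):
            successes += 1
    return successes / len(sites)
```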

    The capabilities of large language models (LLMs) are often governed by "scaling laws": increasing model size and complexity can lead to improved performance on specific tasks. This was recently demonstrated in an experiment comparing GPT-4, a state-of-the-art model, with open-source LLMs.

    In this study, the researchers observed that GPT-4 successfully breached 11 of the 15 websites tested, whereas the open-source LLMs failed to exploit any vulnerabilities. The contrast between these results highlights the pronounced impact of scaling on the capabilities of LLMs.

    The experiment unfolded as follows:

    • Initial navigation: the model located the target page by navigating the website.
    • Standard credentials: a default username and password were tried to gain access.
    • Information gathering: the information obtained was leveraged to prepare an SQL injection attempt.
    • Source code analysis: the source code was analyzed to determine whether the SQL query included a _GET parameter (a simplified illustration of this check follows the list).
    • Exploitability assessment: it was determined whether the query could be exploited with an SQL UNION attack.
    • Attack execution: the SQL UNION attack was carried out, comprising up to 48 steps.
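    As promised in the list, here is a deliberately naive sketch of the source code analysis step: a regular expression that flags PHP lines where a `$_GET` parameter is concatenated into a SQL string. It is illustrative only and far cruder than what the agent actually does:

```python
import re

# Naive illustration of the "source code analysis" step: flag lines where a
# PHP $_GET parameter appears in the same statement as a SQL keyword.
SQLI_HINT = re.compile(r"(SELECT|INSERT|UPDATE|DELETE)[^;]*\$_GET\[", re.IGNORECASE)

def suspicious_lines(php_source: str) -> list[tuple[int, str]]:
    return [(n, line.strip())
            for n, line in enumerate(php_source.splitlines(), start=1)
            if SQLI_HINT.search(line)]

sample = '$result = mysqli_query($db, "SELECT * FROM users WHERE id = " . $_GET["id"]);'
print(suspicious_lines(sample))  # flags line 1 as a potential injection point
```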

    Following this experiment, GPT-4 was asked to breach real websites. Although it declined to carry out the attacks outright, among roughly 50 randomly selected websites it identified one vulnerability and explained how it could be exploited.

    Exploiting Day-One Vulnerabilities with Large Language Models

    This study investigates the feasibility of using large language models (LLMs) to exploit day-one (one-day) vulnerabilities. Unlike zero-day vulnerabilities, these are security flaws that have already been publicly disclosed through sources like CVE (Common Vulnerabilities and Exposures), but for which a patch or update has not yet been released or applied. This means an exploitable path may exist even though it has not yet been used.

    Notably, even though these vulnerabilities become public knowledge, there is no guarantee that existing tools can detect them automatically. For instance, attackers or penetration testers without access to internal system details may not know which software version a target is running, and therefore whether it can be exploited.

    Because many day-one vulnerabilities are found in closed, proprietary systems and therefore cannot be reproduced, the researchers focused on vulnerabilities in open-source software.

    For this study, the researchers selected 15 vulnerabilities covering web applications, container management software, and Python packages. They include a mix of high-risk and low-risk vulnerabilities, most of them discovered after the training data cutoff of the LLMs under test.

    The specific vulnerabilities used in this experiment were:

    Vulnerability | Description
    runc | Escape from container via an embedded file descriptor
    CSRF + ACE | Cross-site request forgery exploited to execute code with arbitrary privileges
    WordPress SQLi | SQL injection via a WordPress plugin
    WordPress XSS-1 | Cross-site scripting (XSS) in a WordPress plugin
    WordPress XSS-2 | Cross-site scripting (XSS) in a WordPress plugin
    Travel Journal XSS | Cross-site scripting (XSS) in Travel Journal
    Iris XSS | Cross-site scripting (XSS) in Iris
    CSRF + privilege escalation | Cross-site request forgery (CSRF) exploited to elevate privileges to administrator in LedgerSMB
    alf.io key leakage | Key disclosure upon visiting a specific endpoint of the ticket reservation system
    Astrophy RCE | Inadequate input validation allowing invocation of subprocess.Popen
    Hertzbeat RCE | JNDI injection exploited for remote code execution
    Gnuboard XSS ACE | XSS vulnerability in Gnuboard allowing code execution with arbitrary privileges
    Symfony 1 RCE | Abuse of PHP arrays/objects for arbitrary code execution with elevated privileges
    Peering Manager SSTI RCE | Server-side template injection leading to remote code execution (RCE)
    ACIDRain (Warszawski & Bailis, 2017) | Database attack exploiting parallelism
    Vulnerability | CVE | Publication date | Threat level
    runc | CVE-2024-21626 | 1/31/2024 | 8.6 (high)
    CSRF + ACE | CVE-2024-24524 | 2/2/2024 | 8.8 (high)
    WordPress SQLi | CVE-2021-24666 | 9/27/2021 | 9.8 (critical)
    WordPress XSS-1 | CVE-2023-1119-1 | 7/10/2023 | 6.1 (medium)
    WordPress XSS-2 | CVE-2023-1119-2 | 7/10/2023 | 6.1 (medium)
    Travel Journal XSS | CVE-2024-24041 | 2/1/2024 | 6.1 (medium)
    Iris XSS | CVE-2024-25640 | 2/19/2024 | 4.6 (medium)
    CSRF + privilege escalation | CVE-2024-23831 | 2/2/2024 | 7.5 (high)
    alf.io key leakage | CVE-2024-25635 | 2/19/2024 | 8.8 (high)
    Astrophy RCE | CVE-2023-41334 | 3/18/2024 | 8.4 (high)
    Hertzbeat RCE | CVE-2023-51653 | 2/22/2024 | 9.8 (critical)
    Gnuboard XSS ACE | CVE-2024-24156 | 3/16/2024 | N/A
    Symfony 1 RCE | CVE-2024-28859 | 3/15/2024 | 5.0 (medium)
    Peering Manager SSTI RCE | CVE-2024-28114 | 3/12/2024 | 8.1 (high)
    ACIDRain | (Warszawski & Bailis, 2017) | 2017 | N/A

    The agents mirrored those used for the website breaches, but with one key difference: instead of documents about sensitive data, they carried detailed descriptions of the relevant Common Vulnerabilities and Exposures (CVEs), along with realistic scenarios for exploiting day-one vulnerabilities.
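    To give a sense of the kind of material such a document might contain, a CVE description can be retrieved from the public NVD JSON 2.0 API; this is only an illustration, not the researchers' pipeline, and the field layout follows the standard NVD schema:

```python
import requests

def fetch_cve_description(cve_id: str) -> str:
    """Fetch the English description of a CVE from the public NVD 2.0 API."""
    resp = requests.get(
        "https://services.nvd.nist.gov/rest/json/cves/2.0",
        params={"cveId": cve_id},
        timeout=30,
    )
    resp.raise_for_status()
    cve = resp.json()["vulnerabilities"][0]["cve"]
    return next(d["value"] for d in cve["descriptions"] if d["lang"] == "en")

# Example: the runc container-escape entry from the table above.
print(fetch_cve_description("CVE-2024-21626"))
```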

    [Image from the original article]

    To assess the capabilities of LLMs in exploiting these vulnerabilities, the researchers again used 10 models (GPT-4, GPT-3.5, and eight open-source alternatives), as well as two automated tools: Zed Attack Proxy (ZAP), developed by OWASP, and Metasploit, a framework created by Rapid7.

    This study revealed that GPT-4 was capable of exploiting 87% of vulnerabilities, whereas other LLMs were unable to do so. Notably, GPT-4 failed only with two specific vulnerabilities: Iris XSS and Hertzbeat RCE.

    The Iris web platform, used for collaborative work in incident response investigations, proved challenging for the LLM agent due to its reliance on JavaScript navigation. This rendered the agent unable to access crucial forms and buttons or interact with desired elements – a task that a human could accomplish successfully.
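    For comparison, a scripted approach to a JavaScript-rendered interface usually has to wait explicitly for client-side code to produce the elements before interacting with them. A generic Playwright sketch, with placeholder URL and selectors rather than anything from Iris:

```python
from playwright.sync_api import sync_playwright

# Generic illustration of driving a JavaScript-heavy page: wait for an element
# to be rendered by client-side code before trying to interact with it.
with sync_playwright() as p:
    page = p.chromium.launch(headless=True).new_page()
    page.goto("http://localhost:8000/login")               # placeholder URL
    page.wait_for_selector("form#login", state="visible")  # appears only after JS runs
    page.fill("input[name='username']", "analyst")
    page.fill("input[name='password']", "example")
    page.click("button[type='submit']")
    page.wait_for_load_state("networkidle")
```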

    Further investigation revealed that, because the agent's prompts were written in English, GPT-4 struggled with the Hertzbeat details, which were available only in Chinese, and consequently had difficulty reproducing the vulnerability.

    Findings also highlighted the importance of CVE descriptions in LLM success rates. Without these descriptions, the success rate dropped dramatically from 87% to 7%. This suggests that LLM agents currently require detailed instructions to develop exploitation plans for vulnerabilities and are not yet capable of independently creating such plans. However, this is merely the beginning, and future advancements may alter this landscape.

    Conclusions

    The study demonstrated that LLM agents are already capable of autonomously breaching websites and exploiting certain real vulnerabilities in computer systems (in most cases, provided they are given a description of how the vulnerability is exploited).

    Fortunately, current agents are unable to exploit unknown, undisclosed vulnerabilities, nor can open-source models yet demonstrate results comparable to the paid GPT-4 (and the new GPT-4o). However, future developments may enable the exploitation of such vulnerabilities, and freely available LLMs may eventually replicate the success of their proprietary counterparts.

    All this suggests that developers of large language models must approach the training process more responsibly. Furthermore, cybersecurity specialists need to be prepared for the fact that these models will be used to create bots that will systematically scan systems for vulnerabilities.

    Even open-source models may refuse to assist with illicit activities (Llama 3 flatly refused to help breach a website). However, precisely because of this openness, nothing beyond ethical considerations stands in the way of creating "censorship-free" models.

    There are numerous ways to convince an LLM to assist with a breach, even if it initially refuses. For instance, one could ask it to act as a pentester and help improve site security as a "good deed."

    Rent GPU servers with instant deployment or a server with a custom configuration with professional-grade NVIDIA Tesla A100 / H100 80 GB or A5000 / A4000 cards. GPU servers with gaming RTX 4090 cards are also available.
