Is ChatGPT making cybercrime accessible to the masses? UK security agency weighs in
The UK’s National Cyber Security Centre has assessed the potential security implications of ChatGPT and concludes that the risk of less technically skilled cyberattackers using it to develop sophisticated malware is currently low.
The UK’s National Cyber Security Centre (NCSC) has published a blog post assessing the potential security implications of large language models (LLMs), including ChatGPT, for cybersecurity. In it, the NCSC’s experts recognise the potential risks of LLMs but conclude that there is currently a low risk of less technically skilled cyberattackers using them to develop sophisticated malware.
The experts believe these tools are unlikely to lower the barrier to entry for less technically skilled hackers. Instead, LLMs are more likely to help already skilled attackers save time and carry out more sophisticated attacks. For example, an attacker struggling to escalate privileges or locate data could query an LLM and receive an answer similar to a search engine result, but with more context.
Another concern highlighted in the assessment is privacy: queries submitted by corporate users may be stored by the LLM provider and made visible to the provider or its partners. The NCSC therefore emphasises the importance of thoroughly understanding the terms of use and privacy policies before using LLMs.