
ChatGPT helps hackers write malicious code, steal data

Cyber-security company Check Point Research (CPR) is witnessing attempts by Russian cybercriminals to bypass OpenAI’s restrictions in order to use ChatGPT for malicious purposes… writes Nishant Arora

Any technology has two sides to it, and artificial intelligence (AI)-driven ChatGPT (built on a Generative Pre-trained Transformer model) is no exception. While it has become a rage on social media for answering like a human, hackers have jumped on the bandwagon, misusing its capabilities to write malicious code and hack your devices.

Currently free for the public to use as part of a feedback exercise (a paid subscription is coming soon) from its developer, the Microsoft-backed OpenAI, ChatGPT has opened a Pandora’s box, as its uses are limitless — both good and bad.

In underground hacking forums, hackers are discussing how to circumvent controls on IP addresses, payment cards and phone numbers — all of which are needed to gain access to ChatGPT from Russia.

CPR shared screenshots of what it saw and warned of hackers’ fast-growing interest in using ChatGPT to scale malicious activity.

“Right now, we are seeing Russian hackers already discussing and checking how to get past the geofencing to use ChatGPT for their malicious purposes. We believe these hackers are most likely trying to implement and test ChatGPT into their day-to-day criminal operations,” warned Sergey Shykevich, Threat Intelligence Group Manager at Check Point.

Cybercriminals are increasingly interested in ChatGPT because the AI technology behind it can make a hacker more cost-efficient.

Just as ChatGPT can be used for good to assist developers in writing code, it can also be used for malicious purposes.

On December 29, a thread named “ChatGPT – Benefits of Malware” appeared on a popular underground hacking forum.

The publisher of the thread disclosed that he was experimenting with ChatGPT to recreate malware strains and techniques described in research publications and write-ups on common malware.

On December 21, a threat actor posted a Python script, which he emphasized was the “first script he ever created”.

When another cybercriminal commented that the style of the code resembles OpenAI code, the hacker confirmed that OpenAI gave him a “nice (helping) hand to finish the script with a nice scope”.

This could mean that would-be cybercriminals with little to no development skills could leverage ChatGPT to develop malicious tools and become fully-fledged cybercriminals with technical capabilities.

Another threat is that ChatGPT can be used to spread misinformation and fake news. OpenAI, however, is already alert to this risk.

Its researchers have collaborated with Georgetown University’s Center for Security and Emerging Technology and the Stanford Internet Observatory in the US to investigate how large language models might be misused for disinformation purposes.

As generative language models improve, they open up new possibilities in fields as diverse as healthcare, law, education and science.

“But, as with any new technology, it is worth considering how they can be misused against the backdrop of recurring online influence operations — covert or deceptive efforts to influence the opinions of a target audience,” said a recent report based on a workshop that brought together 30 disinformation researchers, machine learning experts and policy analysts.

“We believe that it is critical to analyse the threat of AI-enabled influence operations and outline steps that can be taken before language models are used for influence operations at scale,” the report noted.
