The sensational ChatGPT from OpenAI has been all over the news, forums and memes around the world. Even Elon Musk tweeted about it being scarily good. But what if it turns out to be not a boon but a challenge for cybersecurity specialists, with criminals starting to use it for malicious purposes? What if it’s already happening?
The AI-enabled chatbot can converse with users, answer follow-up questions and admit to mistakes. So, it’s not improbable that cybercriminals will use it to craft believable phishing campaigns or other hacking techniques.
Based on BlackBerry research, we compiled the key insights about ChatGPT into this article, so you can learn what leading cybersecurity specialists think about this emerging issue and what challenges may lie ahead.
What is ChatGPT?
ChatGPT is a chatbot built on a state-of-the-art language model. It uses a deep learning architecture known as the Transformer, along with other technologies, to interact with humans in natural language.
ChatGPT is designed to generate human-like responses to a wide variety of prompts. As such, it can be used for many tasks, including answering questions, conversing with people in natural language, generating text, and even helping create music and images.
What are the dangers of ChatGPT?
Most IT professionals agree that ChatGPT-enabled cyberattacks are imminent. According to BlackBerry research, 78% of experts predict that the first attack credited to ChatGPT will occur within two years, 51% expect it in less than a year, and some think it will happen in the next few months.
While three-quarters of respondents believe ChatGPT will be used mainly for good, 71% think nation-states may already be leveraging ChatGPT for malicious purposes anyway.
Here are the top five ways respondents think threat actors may harness the AI chatbot:
● Craft more believable phishing emails (53%)
● Help less experienced hackers improve their technical knowledge and develop their skills (49%)
● Spread misinformation/disinformation (49%)
● Create new malware (48%)
● Increase the sophistication of threats/attacks (46%)
“I believe these concerns are valid, based on what we’re already seeing. It’s been well documented that people with malicious intent are testing the waters, and over the course of this year, we expect to see hackers get a much better handle on how to use AI-enabled chatbots successfully for nefarious purposes,” says Shishir Singh, Chief Technology Officer of BlackBerry’s Cybersecurity Business Unit.
Will ChatGPT arm cybercriminals or help fight them?
The chatbot can be a useful tool for cybersecurity professionals. At the same time, it can become a powerful weapon in the hands of cybercriminals, allowing even beginners to write ransomware.
Let's look at examples of how ChatGPT can help in the fight for network security or, on the contrary, complicate it.
ChatGPT and Reddit
The Reddit subforum r/cybersecurity provides some eye-opening comments about using the bot for writing:
● Risk management framework policies
● Remediation tips for pentest reports
● Basic PowerShell scripts
Although some gave it poor marks as a coder, more than a few Reddit posters shared concerns that the bot’s ability to generate convincing prose could make security teams’ jobs even harder than they already are.
ChatGPT and phishing emails
Threat researcher Jonathan Todd wondered whether ChatGPT could accelerate the production of effective social-engineering content, a key tactic cyber threat actors use to fool humans and build trust. He worked with the bot to create code that analyzes a Reddit user’s posts and comments and builds a rapid attack profile. He then instructed the AI to write phishing hooks (emails or messages) based on what it knew about the person.
This research suggests that the bot could make automated, high-fidelity phishing campaigns possible at scale.
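On the defensive side, security teams often triage suspicious messages with simple heuristics before deeper analysis. The following minimal Python sketch illustrates the idea; the cue list and the two-cue threshold are our own illustrative assumptions, not rules from the BlackBerry research, and a real filter would be far more sophisticated:

```python
import re

# Illustrative phishing cues. This list and the threshold below are
# assumptions for demonstration, not a production rule set.
INDICATORS = {
    "urgency": re.compile(r"\b(urgent|immediately|within 24 hours|act now)\b", re.I),
    "credential_request": re.compile(
        r"\b(verify your (account|password)|login details|confirm your identity)\b", re.I
    ),
    "generic_greeting": re.compile(r"\bdear (customer|user|sir/madam)\b", re.I),
    "suspicious_link": re.compile(r"https?://\S*\b(login|verify|secure)\b", re.I),
}


def phishing_cues(message: str) -> list:
    """Return the names of all phishing cues found in the message."""
    return [name for name, pattern in INDICATORS.items() if pattern.search(message)]


def looks_suspicious(message: str, threshold: int = 2) -> bool:
    """Flag a message when it triggers at least `threshold` cues."""
    return len(phishing_cues(message)) >= threshold
```

The catch is that keyword heuristics like these are exactly what a fluent, AI-generated phishing email can evade, which is why respondents expect convincing machine-written prose to strain existing defenses.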
ChatGPT and Malware
Researchers at Check Point recently came across a thread on the dark web titled “ChatGPT – Benefits of Malware,” where someone claimed to have developed basic infostealer code with the bot. The researchers tested the claim and confirmed it was true. This suggests that ChatGPT could give script kiddies and other “newbies” a boost in creating malicious code, requiring less technical skill than they would otherwise need.
Perspectives for ChatGPT and Cybersecurity
If you are still wondering what the future holds for ChatGPT and cybersecurity, you are not alone. We can expect the bot to get smarter and more powerful, as users figure out how to structure their queries for maximum results.
The longer the bot is in operation — and the more cyber-related queries and content it encounters — the more adept it will likely become.
This makes cybersecurity an even more pressing question for any company, big or small. Email communication, networks, IT infrastructure — everything requires protection from malware and threat actors and continuous monitoring to prevent attacks.
The BlackBerry UES ecosystem can provide such protection. The solution secures the organization's network by blocking access to malicious websites and applications, manages network activity, and controls employee access to information. The system also uses user behavior analysis to detect unusual activity that may indicate the presence of cyber threats.