What is ChatGPT
Unless you’ve been living under a rock, you’ve heard of the breakthrough technology that is ChatGPT. But ChatGPT itself is just the tip of the iceberg. What lies underneath is GPT-3 (Generative Pre-trained Transformer 3), a large language model of unprecedented scale and computing capability.
The arms race for the best AI is in full force. Google has already announced Google Bard, a tool it hopes will challenge OpenAI with the ability to search the live internet, addressing one of ChatGPT’s pain points. Chatsonic is another challenger: an AI tool built on top of ChatGPT that inherits the might of its sibling but adds access to Google’s search engine. It makes for an interesting battle that will surely develop rapidly into some remarkable solutions in the years to come.
However, as things stand, GPT-3 is firmly on the throne.
To even begin to grasp the might of GPT-3, let’s look at some data. According to Sigmoid, GPT-3 has more than 175 billion machine learning parameters, dwarfing Microsoft’s Turing NLG, which had ‘just’ 17 billion. As time goes on, ChatGPT will only become more powerful: its creator, OpenAI, also uses reinforcement learning from human feedback, employing trainers specifically tasked with conversing with the model and rating its responses. That feedback rolls into an already enormous pool of training data, creating a mighty product for us to use.
ChatGPT in Cybersecurity
You’ll often find that the barrier to entering the cybersecurity world can be pretty high. There’s so much knowledge to absorb before you can even start your journey to becoming a cybersecurity expert that, for most people, it isn’t worth it.
However, that changes with ChatGPT. With its ability to instantly generate code, it lets even curious enthusiasts give cybersecurity a shot. This could well result in a dramatic rise in cyberattacks across the globe, as the pool of potential hackers grows like never before thanks to the simplicity of a tool like ChatGPT. Suddenly, the barrier to entry has dropped. No more dark terminals, lengthy books, and frustration: now you just fire up the good ol’ AI and you’re good to go, right?
Well, not so fast.
While it’s true that ChatGPT is indeed capable of writing malware, the quality reportedly isn’t up to standard. That’s good news, but it’s not all roses; there are plenty of ways clever hackers could use ChatGPT, even if their prompts don’t look ominous on the surface.
BlackBerry conducted a survey that returned some alarming results. Of the 1,500 respondents, more than half (51%) predicted there would be a successful cyberattack credited to ChatGPT within the coming year. While large-scale attacks are unlikely to materialize immediately, smaller-scale ones might go off the rails, and there’s a good reason why.
It’s the world’s most common, and most frowned upon, hacking method: the phishing attack. Why has it made its way into a ChatGPT article, you ask? The answer is quite simple, yet scary.
Phishing attacks could run riot in the upcoming months.
For those who don’t know, a phishing attack tricks a person into giving up sensitive data by pretending to be someone else. It could be an email that looks just like a legitimate company’s, with slight changes an end user wouldn’t notice, or it could be a full-fledged clone of an existing website, where the victim enters their credentials thinking it’s the real site, thus handing over the sensitive info.
With ChatGPT able to generate code for building websites, cloning existing sites and writing convincing emails has never been easier. This is why you must be extra careful these days: always double-check the URL of the website you’re visiting and make sure the emails you exchange come from the right sources.
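The URL double-check can even be partially automated. Below is a minimal sketch, using only Python’s standard library, of how a personal allowlist of trusted domains could flag lookalike addresses of the kind phishing sites rely on; the domain list and similarity cutoff here are illustrative assumptions, not a vetted defense.

```python
from urllib.parse import urlparse
import difflib

# Hypothetical allowlist: domains this user actually does business with.
TRUSTED_DOMAINS = {"paypal.com", "github.com", "google.com"}

def check_url(url: str) -> str:
    """Classify a URL as trusted, a likely lookalike, or unknown."""
    host = urlparse(url).hostname or ""
    # Strip a leading "www." so "www.paypal.com" matches "paypal.com".
    if host.startswith("www."):
        host = host[4:]
    if host in TRUSTED_DOMAINS:
        return "trusted"
    # Flag domains suspiciously similar to a trusted one,
    # e.g. "paypa1.com" (digit one) posing as "paypal.com".
    close = difflib.get_close_matches(host, TRUSTED_DOMAINS, n=1, cutoff=0.8)
    if close:
        return f"possible lookalike of {close[0]}"
    return "unknown"

print(check_url("https://www.paypal.com/login"))  # trusted
print(check_url("https://paypa1.com/login"))      # possible lookalike of paypal.com
print(check_url("https://example.org"))           # unknown
```

A real defense would go further (punycode/homoglyph handling, certificate checks, reputation feeds), but the habit the sketch encodes is the important part: compare the exact hostname, not the page’s appearance.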
It’s not only visuals, either; ChatGPT lets hackers easily generate convincing emails in any language they want. This used to be a big barrier for many non-English-speaking hackers, as people would quickly spot broken grammar, but the game has changed and no one is off-limits.
The age of artificial intelligence has arrived, and it’s not going away anytime soon. We must adapt rather than try to get around it. The reality is that machine learning models will only get more powerful as they rapidly gather more data and build on an already impressive foundation.
It’s not just the cybersecurity world that’s in danger. ChatGPT can also be put to other criminal uses: some authors have already found ways of getting the program to explain how to build an explosive or hand out practical shoplifting tips.
While we can’t help you protect your physical goods, we can certainly do something about your digital security. Bright lets you create a safe environment for your apps by finding vulnerabilities early in the SDLC, so you can react quickly and remediate on time. Just as ChatGPT simplifies cyberattacks, we at Bright simplify protection; you may find that our dev-centric solution is the very thing that protects your applications from malicious intent.