AI for Cybersecurity and Cybercrime: How Artificial Intelligence Is Battling Itself

Gaurav Belani
Published 09/06/2023

The rise of ChatGPT and other generative artificial intelligence technologies does not just threaten human jobs; it is also amplifying cyber threats. AI can help cybercriminals generate malware rapidly, automate attacks, and make scams and social engineering more convincing through deepfakes and human-sounding AI-powered voice synthesis. The cyber threat landscape is becoming more dangerous, and AI plays a big role in it.

Nevertheless, it is somewhat reassuring to know that cybersecurity experts are making good use of AI technologies to improve defensive and preventative solutions. Today, AI is fighting AI in a battle between good and bad intentions.

Leveraging AI for cybersecurity


Most security solution providers already use AI for cybersecurity. Artificial intelligence automates repetitive tasks such as data collection and analysis, system management, attack surface monitoring, and vulnerability detection. Additionally, AI broadens situational awareness to enable better decision-making. AI-powered cybersecurity systems can present context for the security information they display, along with response suggestions.

Notably, AI makes cybersecurity systems more effective in the following areas:

  • Detection of malicious activities – Artificial intelligence can analyze networks to establish baselines of normal activity and flag events that deviate from them as anomalous or potentially harmful (a minimal sketch of this approach follows the list). AI is a key technology in solutions like user and entity behavior analytics (UEBA), which can detect threats continuously and in real time.
  • Malware detection – AI does not supplant threat intelligence or signature-based malware identification. Instead, it examines factors such as file characteristics, code patterns, and behavior to determine whether a file or script introduced into the system is safe or malicious.
  • Handling of zero-day attacks – Because AI can detect malicious activity and malware behaviorally rather than by signature alone, it helps cybersecurity systems deal with zero-days, threats that are still unknown.
  • Threat intelligence – AI is also useful in significantly improving threat intelligence as it can automatically gather security-related information from various sources, including the dark web. Cybersecurity solutions that integrate AI can identify emerging threats, correlate indicators of compromise, and present actionable insights.
  • Threat management – Another important benefit of artificial intelligence is its ability to ease the workload of human cybersecurity analysts. It helps address alert fatigue brought about by the deluge of security alerts and event information, which usually includes excessive amounts of false positives. AI can correlate data across multiple sources to accurately determine threats and prioritize the most urgent alerts, so they can be addressed in a timely manner.
  • Security analytics – AI can go through heaps of security logs and incident data to identify trends, detect malicious activities, and examine other metrics that may be missed if organizations solely rely on human security analysts.
  • Proactive threat hunting – AI can automate the process of finding vulnerabilities and potential threats. With machine learning algorithms, it is possible to continuously monitor network traffic, logs, and other data and apply cybersecurity rules and decisions so that threats are found and resolved before they cause problems.
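
The anomaly detection described above can be illustrated with a short, hypothetical sketch. The feature names and numbers below are made up, and scikit-learn's IsolationForest stands in for whatever model a real product would use; the point is only to show the pattern of learning a baseline from normal activity and then flagging events that fall outside it.

    # Minimal sketch of baseline-then-flag anomaly detection.
    # Features, values, and thresholds here are hypothetical.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(42)

    # "Normal" activity recorded during a baselining period:
    # column 0 = KB transferred per session, column 1 = login hour
    normal_activity = np.column_stack([
        rng.normal(500, 50, 1000),
        rng.normal(10, 2, 1000),
    ])

    # Learn a baseline of regular behavior
    model = IsolationForest(contamination=0.01, random_state=0).fit(normal_activity)

    # New events: two look routine, one is a huge transfer at 3 a.m.
    new_events = np.array([
        [510, 9],
        [480, 11],
        [4800, 3],
    ])
    print(model.predict(new_events))  # 1 = looks normal, -1 = flagged as anomalous

A production UEBA system uses far richer features, models, and thresholds, but the baseline-and-deviation logic is the same.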

Moreover, some cybersecurity solutions employ AI chatbots to help human security analysts evaluate situations and choose appropriate responses. These chatbots are built with natural language processing (NLP) to address queries and concerns much as a human cybersecurity expert would. Cybersecurity chatbots allow organizations with minimal in-house expertise to get the most out of the security tools they have and adopt best practices in more meaningful ways.

Abusing AI for cybercrimes


Threat actors are relentless and resourceful, so it should not come as a surprise that they are also taking advantage of artificial intelligence for malicious purposes. There are no recent major attacks that can serve as clear examples, but security experts suggest that threat actors are already using AI in the following ways:

  • Rapid malware generation – Generative AI technology is capable of writing code, which means it can create malicious software quickly and easily. This allows threat actors to unleash a deluge of malware on random victims whose cyber defenses may lack advanced zero-day threat detection capabilities.
  • Automated spear phishing – AI makes it easy to produce targeted, highly convincing phishing messages. With the abundance of publicly available data about potential targets, cybercriminals can develop and launch automated attacks at scale.
  • AI-enhanced botnets – Security solutions have become considerably better at spotting and stopping botnets. That’s why threat actors have started using AI to enhance the ability of their botnets to evade detection and coordinate attacks. AI also augments the command and control infrastructure of botnets. With artificial intelligence, botnets are able to analyze network behavior and reconfigure their attack patterns to adapt to cyber defenses.
  • Fake videos and audio – AI is the underlying technology for deepfakes, fabricated videos that show people saying or doing things they never did. AI is also a key technology in the many text-to-speech and speech synthesis systems that can imitate the voices of real people. These fake videos and audio clips serve as tools for scammers as they prey on people who are still unfamiliar with how convincingly current technology can fake speech and video.

Artificial intelligence battles


Cybersecurity providers use AI to improve threat intelligence, perform behavioral analysis that detects threats without relying entirely on threat signatures, automate various tasks, and proactively hunt threats. Conversely, threat actors use AI to launch zero-day attacks at unprecedented speed and formulate new attack approaches. It is unclear which side is winning this race; there is still no comprehensive documentation of actual attacks proving that one side has gotten the better of the other.

However, threat actors have an ace up their sleeves: adversarial attacks on cybersecurity AI systems. They can use adversarial machine learning techniques to manipulate or negatively influence the AI systems used for cyber defense. By introducing malicious inputs and exploiting vulnerabilities in algorithms, they can trigger false positives or false negatives, and carefully crafted inputs can even bypass security controls outright.
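
To make the idea of adversarial inputs concrete, here is a minimal, hypothetical sketch. It trains a toy logistic-regression "malware score" on two made-up features and then nudges a malicious sample against the model's gradient (a fast-gradient-sign-style step) until it scores as benign. Real attacks target far more complex models, but the principle is the same.

    # Toy evasion example against a hypothetical two-feature "malware classifier".
    # Everything here is synthetic; it only illustrates the adversarial idea.
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic training data: feature 0 = file entropy, feature 1 = suspicious API calls (scaled)
    X = np.vstack([
        rng.normal([0.3, 0.2], 0.1, size=(200, 2)),   # benign samples
        rng.normal([0.8, 0.7], 0.1, size=(200, 2)),   # malicious samples
    ])
    y = np.concatenate([np.zeros(200), np.ones(200)])

    # Train a plain logistic regression with gradient descent
    w, b = np.zeros(2), 0.0
    for _ in range(2000):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        w -= 0.5 * (X.T @ (p - y)) / len(y)
        b -= 0.5 * np.mean(p - y)

    def score(x):
        return 1.0 / (1.0 + np.exp(-(x @ w + b)))

    x = np.array([0.8, 0.7])                   # correctly flagged as malicious
    print("original score:", score(x))         # close to 1.0

    # Nudge the features against the gradient of the score (FGSM-style step)
    grad_x = score(x) * (1 - score(x)) * w
    x_adv = x - 0.35 * np.sign(grad_x)
    print("adversarial score:", score(x_adv))  # pushed below 0.5, i.e., "benign"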

Additionally, threat actors can engage in data poisoning and machine learning model manipulation. If they manage to gain access to the training data used by cybersecurity providers, they can compromise the integrity and performance of AI-powered cybersecurity platforms. This can render these supposedly advanced solutions inaccurate and unreliable.
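
A hypothetical sketch of label-flipping data poisoning shows why this is so damaging. The data and the 60 percent flip rate below are made up for illustration; the takeaway is that a model trained on tampered labels quietly loses the ability to recognize the very threats it was built to catch.

    # Toy label-flipping poisoning attack on a hypothetical two-feature classifier.
    # Synthetic data only; illustrates how tampered training labels degrade accuracy.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)

    def make_data(n):
        benign = rng.normal([0.3, 0.2], 0.1, size=(n, 2))
        malicious = rng.normal([0.8, 0.7], 0.1, size=(n, 2))
        return np.vstack([benign, malicious]), np.concatenate([np.zeros(n), np.ones(n)])

    X_train, y_train = make_data(500)
    X_test, y_test = make_data(200)

    # Model trained on trustworthy labels
    clean = LogisticRegression().fit(X_train, y_train)
    print("clean accuracy:", clean.score(X_test, y_test))        # near 1.0

    # An attacker with access to the training set flips 60% of "malicious" labels to "benign"
    y_poisoned = y_train.copy()
    malicious_idx = np.where(y_train == 1)[0]
    flipped = rng.choice(malicious_idx, size=int(0.6 * len(malicious_idx)), replace=False)
    y_poisoned[flipped] = 0

    poisoned = LogisticRegression().fit(X_train, y_poisoned)
    print("poisoned accuracy:", poisoned.score(X_test, y_test))  # drops sharply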

Cybersecurity providers cannot mount the same attacks (data poisoning and model manipulation) against threat actors; doing so is impractical, and there are no specific targets to aim at in the first place. The most they can do is secure their own systems with great urgency and meticulousness.

AI provides a long list of benefits for cybersecurity, but cunning cybercriminals can find ways to defeat AI systems. However, this does not mean that threat actors are winning. The growing demand for AI-powered cybersecurity solutions suggests that these solutions are working. Also, there are no signs that cybercriminals are set to overrun the global IT infrastructure anytime soon.

With the help of cybersecurity laws and frameworks as well as government and nonprofit collaborations aimed at addressing cyber threats, there is formidable pushback against AI-powered threats. The AI vs AI showdown between cybersecurity providers and threat actors is far from over, but the good news is that the cyber defense side is working hard to fend off the attacks.


Disclaimer

Disclaimer: The author is completely responsible for the content of this article. The opinions expressed are their own and do not represent IEEE’s position nor that of the Computer Society nor its Leadership.