Many enterprises already employ artificial intelligence (AI) and bots to search for threats and high-risk environments in their own networks and within internal software. But unfortunately, says John Briar, COO and co-founder of BotRx, cyber criminals are seeing the value of AI-powered bots too.
Cyber criminals have appropriated these technologies, creating a new generation of bad bots that can find system vulnerabilities faster and then exploit them in real time. These bots rove the web searching for weak, unpatched systems and exploitable vulnerabilities, aiming to compromise users and their accounts even on well-protected systems, as well as to steal data, disrupt services, and spread fake “facts.”
Cyber threats posed by bad bots
Facebook reports that it removed more than seven million pieces of harmful Covid-19 misinformation between April and June 2020 alone. Although substantial, the scale comes as no surprise, given that bots can disseminate misinformation through social media at magnitudes that humans could never achieve. And disinformation campaigns aren’t just limited to “fake news” on social media.
Disinformation campaigns can also be used to influence stock prices, or even degrade a brand’s reputation and damage consumer confidence. Bad bots can masquerade as customers or the company itself, creating negative reviews or spreading disinformation about a company and its leadership.
Besides disinformation, one of the biggest threats to businesses posed by bad bots is content scraping, where bots download or “scrape” content from a website for malicious purposes. Web crawlers and content scraping bots trawl the internet looking for valuable business data to steal for profit. As a result, product details, pricing, promotions, and API (application programming interface) data can all be captured by bots and used for competitive gain.
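To make the threat concrete, the sketch below shows how little code a basic scraping bot needs to harvest a competitor’s pricing. The URL and CSS selectors are hypothetical placeholders, and the spoofed browser User-Agent is what lets the bot blend in with legitimate traffic.

```python
# A minimal sketch of how a scraping bot harvests pricing data.
# The URL and CSS selectors below are hypothetical placeholders.
import requests
from bs4 import BeautifulSoup

HEADERS = {"User-Agent": "Mozilla/5.0"}  # masquerade as an ordinary browser

def scrape_prices(url):
    html = requests.get(url, headers=HEADERS, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    products = []
    for card in soup.select(".product-card"):        # hypothetical selector
        name = card.select_one(".product-name")
        price = card.select_one(".product-price")
        if name and price:
            products.append((name.get_text(strip=True),
                             price.get_text(strip=True)))
    return products

if __name__ == "__main__":
    for name, price in scrape_prices("https://example.com/catalogue"):
        print(name, price)
```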
Add to that, fraudsters are also employing automated bots to launch credential stuffing attacks. Credential stuffing uses login and password pairs stolen in a previous data breach to break into a user’s other accounts, counting on the fact that many people reuse the same usernames and passwords across all of their online accounts. Once fraudsters find a login match for a specific website, they sell these verified credential pairs to other cybercriminals who launch follow-on attacks, or use the account access to commit a variety of fraudulent activities.
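One practical countermeasure, sketched below, is to reject passwords that already appear in known breach corpora, since those are exactly the pairs stuffing bots replay. This example uses the public Pwned Passwords k-anonymity range API; the login or registration flow it would plug into is assumed.

```python
# Reject passwords already exposed in known breaches, using the
# Pwned Passwords k-anonymity API: only the first five characters of the
# password's SHA-1 hash ever leave the server.
import hashlib
import requests

def password_is_breached(password: str) -> bool:
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}",
                        timeout=10)
    resp.raise_for_status()
    # Each response line is "SUFFIX:COUNT"; a match means the password has
    # appeared in breach data of the kind used to build stuffing lists.
    return any(line.split(":")[0] == suffix for line in resp.text.splitlines())

if __name__ == "__main__":
    print(password_is_breached("password123"))  # True: widely breached
```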
How can businesses fight bad bots?
Because of the open nature of the internet, nothing is stopping bots from gaining access to websites and applications. In fact, bots accounted for nearly half of the web traffic going to the world’s 1.8 billion websites in 2020. Although good bots can perform beneficial tasks like facilitating business processes, indexing data, and aggregating content, these same automated processes are also being used for malicious purposes.
Given the sheer volume of fraudulent traffic, standard cybersecurity technologies that don’t specialise in bots aren’t effective against them, making bad bot detection and mitigation a must-have for today’s enterprises.
Unfortunately, there is no magic bullet solution to the problem of bad bots, and organisations must evaluate which combination of solutions best fits their needs. One of the biggest challenges for enterprises is combating the dynamic nature of automated bot attacks: attackers change tactics so regularly that it is difficult to define stable behavioural rules and signatures.
From this point of view, artificial intelligence (AI) and machine learning (ML) based solutions are a better match for automated bot attacks, but even the very best AI and ML solutions can be outsmarted by financially motivated fraudsters who patiently gather intelligence to plan and execute attacks over time.
In addition, AI is only as good as the information it is fed, and while effective at identifying data anomalies, AI still requires manual intervention to classify those irregularities as genuine threats or false positives.
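As an illustration of this approach, the sketch below scores per-client traffic features with an unsupervised anomaly detector. The feature choices and contamination rate are assumptions for demonstration only, and, as noted above, the clients it flags would still need human review.

```python
# A minimal sketch of ML-based bot detection: score per-client traffic
# features with an unsupervised anomaly detector. The features and the
# contamination rate are illustrative assumptions, not a production recipe.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-client features: [requests/min, distinct URLs/min,
# error rate, mean inter-request gap in seconds]
traffic = np.array([
    [12,   8, 0.02, 4.9],     # typical human browsing
    [15,  10, 0.01, 4.1],
    [9,    6, 0.03, 6.2],
    [480, 460, 0.35, 0.12],   # high-rate, scraper-like client
])

model = IsolationForest(contamination=0.25, random_state=0).fit(traffic)
scores = model.decision_function(traffic)   # lower = more anomalous
flags = model.predict(traffic)              # -1 = anomaly, 1 = normal

for row, score, flag in zip(traffic, scores, flags):
    label = "review" if flag == -1 else "ok"    # anomalies go to an analyst
    print(f"{label:>6}  score={score:+.3f}  features={row.tolist()}")
```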
When it comes to firewalls and intrusion prevention systems, signatures and rules can be rendered ineffective because they cannot keep pace with the quickly changing attack patterns and small-footprint attacks that are common today.
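To see why static rules age so badly, consider a deliberately simple signature check like the one below: a bot that rotates or spoofs its User-Agent string, as in the scraper sketch above, walks straight past it. The blocklist patterns are illustrative.

```python
# A deliberately brittle static signature: block requests whose User-Agent
# matches a known-bad pattern. A bot that spoofs a browser User-Agent
# bypasses the check entirely.
import re

BAD_AGENT_SIGNATURES = [re.compile(p, re.I)
                        for p in (r"curl/", r"python-requests", r"scrapy")]

def blocked(user_agent: str) -> bool:
    return any(sig.search(user_agent) for sig in BAD_AGENT_SIGNATURES)

print(blocked("python-requests/2.31"))                  # True: caught
print(blocked("Mozilla/5.0 (Windows NT 10.0; Win64)"))  # False: spoofed bot slips through
```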
Likewise, big data analytics, where analysts examine large amounts of data in order to detect irregularities within a network, often can’t keep up with rapidly changing attack vectors or easily identify small-volume, low-frequency attacks. Even with the most up-to-date threat intelligence, the intelligence is “after the fact,” which allows early attackers to go undetected.
New solutions like Moving Target Defense (MTD) look to shift the tactical advantage back to the defenders. The concept of MTD, created by the US Department of Homeland Security, is unique in its proactive approach to stopping malicious bot attacks. MTD makes the attributes of an organisation’s network dynamic rather than static, obfuscating the attack surface. This reduces the window of opportunity for fraudsters, makes it extremely difficult for them to infiltrate a network, and allows companies to be on the front foot.
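A toy illustration of the idea, under assumed session handling: the field names a client must use to submit a login form are derived per session, so a bot replaying a hard-coded request shape fails, while a real browser, which reads the served page, is unaffected. Deriving the aliases with an HMAC means the server never has to store them. Commercial MTD products randomise far more attributes than this sketch does.

```python
# A toy illustration of the Moving Target Defense idea: the names a client
# must use to submit a login form are randomised per session, so a bot
# replaying a hard-coded request shape fails. The field names and session
# handling here are hypothetical.
import hmac
import hashlib
import secrets

SERVER_KEY = secrets.token_bytes(32)  # rotated periodically in practice

def field_alias(session_id: str, logical_name: str) -> str:
    """Derive a per-session alias for a logical form field."""
    digest = hmac.new(SERVER_KEY, f"{session_id}:{logical_name}".encode(),
                      hashlib.sha256).hexdigest()
    return "f_" + digest[:12]

def render_form(session_id: str) -> dict:
    # The HTML served to this session would use these moving field names.
    return {name: field_alias(session_id, name)
            for name in ("username", "password")}

def decode_submission(session_id: str, posted: dict) -> dict:
    # Map the moving names back to logical fields; unknown names are dropped.
    reverse = {v: k for k, v in render_form(session_id).items()}
    return {reverse[k]: v for k, v in posted.items() if k in reverse}

if __name__ == "__main__":
    sid = secrets.token_hex(8)
    form = render_form(sid)
    print("served form fields:", form)
    posted = {form["username"]: "alice", form["password"]: "hunter2"}
    print("decoded on server:", decode_submission(sid, posted))
```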
Conclusion
Cybercriminals are leveraging AI and are only getting smarter about how to make bad bots behave like humans. While traditional cyber defence methods still have their uses, to outpace fraudsters and their bot armies, businesses must utilise the same advanced technologies, such as AI and machine learning, as well as look to new solutions like MTD that shift the balance of power from attacker to defender.
The author is John Briar, COO and co-founder, BotRx.