
DarkLight.ai says its knowledge-driven AI can help companies ward off cyberattacks

By Julie Emory
6/10/2022

The issue of cybersecurity is — unfortunately — alive and well.

Artificial intelligence isn’t new to the cybersecurity industry, but its prominence is growing rapidly. In a 2021 report examining the cost of data breaches, IBM concluded that AI was the most effective cost-mitigation measure available to companies investing in security. That’s good news for companies like Seattle-based startup DarkLight.ai, which is attracting attention (and investment) with its symbolic-AI-driven approach to detecting known threat actors.

Founded in 2014, the cybersecurity startup landed its first beta clients in May. One of these clients, the company says, is a top-five defense contractor; the other is a medium-size security provider based in Arizona. All this follows DarkLight.ai’s $5.1 million funding round last year, which brought its total funding to date to more than $11 million.

Rather than relying on neural networks and other machine-learning techniques, DarkLight.ai takes an approach that is currently less fashionable in the tech world: symbolic AI, a method that encodes expert knowledge so the system can reason the way a human analyst would, rather than inferring its behavior from reams of training data. DarkLight.ai seeks to understand, and defend against, the behaviors of known threat actors targeting a client’s security infrastructure.

The company’s technology applies its AI analytics to playbooks, the standard procedures companies store and follow when threats arise, in order to describe the tactics of known threat actors such as state-sponsored hackers and other advanced persistent threats.
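
To make the idea concrete, here is a minimal, hypothetical sketch of what a machine-readable playbook might look like: a mapping from a threat actor’s known tactics to the responses an analyst would otherwise trigger by hand. The structure, field names, and tactics below are invented for illustration; they are not DarkLight.ai’s format.

```python
# Hypothetical sketch of a machine-readable "playbook": a mapping from a known
# threat actor's observed tactics to the automated responses an analyst would
# otherwise trigger by hand. Structure, names, and tactics are invented.
from dataclasses import dataclass, field


@dataclass
class Playbook:
    actor: str
    campaign: str
    # Each step pairs an observable tactic with the scripted response.
    steps: list[tuple[str, str]] = field(default_factory=list)

    def respond(self, observed_tactic: str) -> str | None:
        """Return the scripted response for an observed tactic, if any."""
        for tactic, response in self.steps:
            if tactic == observed_tactic:
                return response
        return None


playbook = Playbook(
    actor="Example APT",
    campaign="example-campaign-1",
    steps=[
        ("credential phishing", "reset exposed credentials; block sender domain"),
        ("lateral movement over SMB", "isolate host; alert the SOC"),
    ],
)

print(playbook.respond("credential phishing"))
```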

DarkLight.ai pieces together the different parts of the puzzle, analyzing network traffic and phishing attempts to learn how to detect and block threat actors from a company’s network. Its AI aggregates all of this behavior across different layers of defense to assess who is on the network and prevent a breach.
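
As a rough illustration of that aggregation step, the sketch below combines detections from several defense layers into a single per-actor score. The layers, weights, and threshold are assumptions made for the example, not details of DarkLight.ai’s system.

```python
# Hypothetical sketch of aggregating detections from several defense layers
# into a single per-actor assessment. Layers, weights, and the alert threshold
# are assumptions made for the example.
from collections import defaultdict

# Each detection: (suspected actor, defense layer, confidence from 0 to 1)
detections = [
    ("Example APT", "perimeter", 0.4),  # e.g. traffic to a suspicious domain
    ("Example APT", "endpoint", 0.7),   # e.g. known tooling seen on a host
    ("Unattributed", "network", 0.2),
]

LAYER_WEIGHT = {"perimeter": 0.5, "network": 0.8, "endpoint": 1.0}
ALERT_THRESHOLD = 0.8

scores: dict[str, float] = defaultdict(float)
for actor, layer, confidence in detections:
    scores[actor] += LAYER_WEIGHT[layer] * confidence

for actor, score in scores.items():
    if score >= ALERT_THRESHOLD:
        print(f"Escalate: {actor} appears active on the network (score {score:.2f})")
```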

The company’s datasets come from the National Vulnerability Database (NVD), which is managed by the National Institute of Standards and Technology. The Common Vulnerabilities and Exposures (CVE) catalog included in the NVD lists publicly known vulnerabilities, and that is the raw material DarkLight.ai uses to curate its AI-powered sensors.
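
The NVD exposes this data through a public REST API; the snippet below is a minimal sketch of pulling and flattening a few CVE records from it. The endpoint and field names follow NVD’s v2.0 API, but the keyword filter and record shape are illustrative assumptions, not DarkLight.ai’s actual pipeline.

```python
# Illustrative only: pull a few CVE records from NIST's public NVD API (v2.0)
# and keep the fields a downstream analytic might index.
import requests

NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"


def fetch_cves(keyword: str, limit: int = 20) -> list[dict]:
    """Return a simplified view of CVE records matching a keyword search."""
    resp = requests.get(
        NVD_API,
        params={"keywordSearch": keyword, "resultsPerPage": limit},
        timeout=30,
    )
    resp.raise_for_status()
    records = []
    for item in resp.json().get("vulnerabilities", []):
        cve = item["cve"]
        description = next(
            (d["value"] for d in cve.get("descriptions", []) if d["lang"] == "en"), ""
        )
        records.append(
            {"id": cve["id"], "published": cve.get("published"), "summary": description}
        )
    return records


if __name__ == "__main__":
    for record in fetch_cves("remote code execution", limit=5):
        print(record["id"], "-", record["summary"][:80])
```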

The technology that inspired DarkLight.ai was originally created eight years ago at the Pacific Northwest National Laboratory, with the aim of applying AI to large-scale datasets to meet national cybersecurity standards.

Dan Wachtler, a cybersecurity industry veteran, came across DarkLight after he sold his company Root9B, a cyber threat-hunting firm, to Deloitte in 2015.

“I became very much enamored with the cybersecurity challenge,” said Wachtler, now CEO of DarkLight.ai. “The cybersecurity problem is alive and well, unfortunately, and I… [wanted to] add as much as I could to help solve a pretty big issue.”

Data breaches cost organizations an average of $4.24 million per incident, according to the same IBM report, and that figure is only anticipated to rise as threat vectors (the pathways cybercriminals use to gain access, such as users, networks, or mobile devices) multiply with the shift to work-from-home and the continued expansion of digital workplace platforms.

DarkLight.ai says it takes a “defense-in-depth” approach: firewalls and other layered security measures are meant to keep an adversary from gaining access to a client’s network in the first place, and its sensors are designed to spot any threat actor who slips through. “We would do a very good job of detecting once they’re in, based on your sensor coverage,” Wachtler said.

Beyond merely identifying a threat actor’s IP address, DarkLight.ai conducts active reconnaissance through its playbooks. Wachtler says DarkLight.ai has six playbooks for one particular Chinese threat actor, in order to respond to all six known campaigns carried out by the group.

“What our system does is we capture those playbooks… and [translate them to] machine speed and scale,” said Wachtler.

Ultimately, DarkLight.ai seeks to make the decisions that a human deeply versed in security threats would make. That extends to understanding and aggregating intelligence on threat actors like APT19 by using knowledge graphs, which go beyond the capabilities of a relational database and further reinforce DarkLight.ai’s ability to automate actions previously taken by human analysts.
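
For a sense of what that means in practice, here is a minimal sketch of threat-actor knowledge stored as a graph of subject-relation-object facts, the kind of linked structure a relational table handles awkwardly. The facts attached to “APT19” below are placeholders rather than curated threat intelligence, and the choice of graph library (networkx) is an assumption for the example.

```python
# Minimal sketch: threat-actor knowledge stored as subject-relation-object
# facts in a graph. The facts attached to "APT19" are placeholders for
# illustration, not curated threat intelligence.
import networkx as nx

kg = nx.MultiDiGraph()
facts = [
    ("APT19", "uses", "Spearphishing Attachment"),
    ("APT19", "linked_to", "Example Campaign A"),
    ("Spearphishing Attachment", "mitigated_by", "Attachment Sandboxing"),
    ("Example Campaign A", "targets", "Defense Contractors"),
]
for subject, relation, obj in facts:
    kg.add_edge(subject, obj, relation=relation)


def related(node: str, relation: str) -> list[str]:
    """Follow outgoing edges of one relation type from a node."""
    return [
        target
        for _, target, data in kg.out_edges(node, data=True)
        if data["relation"] == relation
    ]


# Example query: which techniques does the actor use, and how are they mitigated?
for technique in related("APT19", "uses"):
    print(technique, "->", related(technique, "mitigated_by"))
```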

DarkLight.ai, however, does not claim that its technology can stop attacks on zero-day vulnerabilities, weaknesses in a system or device that have yet to be discovered by its developer or anyone else interested in keeping it secure. The company’s playbooks are derived from known information and therefore cannot predict vulnerabilities that have not yet been exploited.

Wachtler anticipates DarkLight.ai will pursue further funding rounds within the next six months to a year, and he is excited to see the company’s knowledge-driven AI go to market.

“The percentage of stuff that still filters up to where a human has to make a decision is not scalable today,” Wachtler said. “That’s what we’re trying to solve.”