Researchers at the Korea Advanced Institute of Science & Technology have pre-trained a large language model on documents acquired from the Dark Web to help fight global cybercrime.
Here are some key points:
DarkBERT is a large language model (LLM) designed to handle the Dark Web's enormous lexical and structural variation.
The researchers report that it outperforms other pre-trained language models on tasks such as ransomware leak site detection, noteworthy thread detection, and threat keyword inference.
Automating such analyses could reduce the workload of cybersecurity specialists.
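The article does not describe how these analyses would be automated in practice, so the following is only a minimal, hypothetical sketch of what a downstream task like noteworthy thread detection could look like: fine-tuning a BERT-style encoder with a binary classification head using the Hugging Face transformers library. The base model name, example texts, and labels are placeholders, not the DarkBERT authors' actual data or setup.

```python
# Hypothetical sketch: fine-tuning a BERT-style encoder as a binary
# "noteworthy thread" classifier. Model name and example data are
# placeholders, not the DarkBERT authors' actual configuration.
import torch
from torch.optim import AdamW
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "roberta-base"  # stand-in encoder; DarkBERT itself is not assumed here

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

# Toy training examples: forum thread text paired with a 0/1 "noteworthy" label.
texts = [
    "New database dump posted, samples attached",
    "General discussion about forum rules",
]
labels = torch.tensor([1, 0])

batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

optimizer = AdamW(model.parameters(), lr=2e-5)
model.train()
for _ in range(3):  # a few toy epochs on the toy batch
    outputs = model(**batch, labels=labels)  # passing labels returns a loss
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()

# Inference: score a new thread for "noteworthiness".
model.eval()
with torch.no_grad():
    probe = tokenizer(["Fresh credential leak, contact for access"],
                      return_tensors="pt")
    probs = torch.softmax(model(**probe).logits, dim=-1)
print(probs)
```

In a real pipeline the classifier would be trained on a much larger labeled corpus and its flagged threads routed to analysts for review, which is where the workload reduction would come from.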