There are a number of great sites dedicated to ransomware threat feeds. The most valuable observables they provide are the download/dropper sites and the C2 sites.
These lists of observables can help Incident Response teams limit the spread of an infection through their local environments.
Unfortunately, malware authors frequently slip in under the radar, and individual users often try to rectify the problem on their own: they visit the payment site and pay the ransom, which keeps IT teams in the dark. Regardless of which side of the pay-or-don't-pay debate you're on, hiding the ransom payment makes it hard for teams to build countermeasures, or even to realize they have a problem.
Using a spare Raspberry Pi, we've started mapping out ransomware domains. Our project operationalizes data from Harry71, Ahmia and VisiTOR; their excellent work in mapping Tor makes this feed possible. Finally, as we stumble upon malware samples and perform analysis, the results of that analysis are fed into the tool.
After enumerating the .onion sites, we combine the data with known Web2Tor gateways commonly used by malware authors and compile a suggested notification or block list.
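The combination step above can be sketched roughly as follows. The gateway suffixes and the helper function are illustrative, not the project's actual list or code:

```python
# Sketch: expand an enumerated .onion domain with common Web2Tor
# gateway suffixes so defenders can also catch clearnet access.
# This gateway list is a small illustrative sample only.
WEB2TOR_GATEWAYS = ['onion.to', 'onion.link', 'onion.cab']

def expand_onion(onion_domain):
    """Return the bare .onion domain plus its Web2Tor gateway variants."""
    label = onion_domain[:-len('.onion')]  # strip the .onion suffix
    variants = [onion_domain]
    variants += ['%s.%s' % (label, gw) for gw in WEB2TOR_GATEWAYS]
    return variants

blocklist = []
for domain in ['abcdef1234567890.onion']:  # placeholder enumerated domain
    blocklist.extend(expand_onion(domain))
```

Each enumerated payment site thus yields several candidate indicators: the hidden service itself plus one entry per gateway.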
Because our research is largely automated, there may be occasional legitimate .onion sites on the list. We do our very best to screen and remove these quickly.
Our goal is to combine this data into actionable indicators of warning that IT/IR teams can use in their IDS or SIEM. Ideally you would never see these observables in your environment, but if they hit, it is important to act on them immediately.
For example, here is a snippet of a feed generated on December 25, 2016:
# Ransomware Payment Sites on TOR.
# List provided with no warranty by DeepEndResearch.
# Commercial use with permission only.
# There may be false positives in this list. It should be used as an Indicator of Warning list only.
# This file is updated daily.
Our feed is updated daily and posted here:
We make several attempts, within 24-48 hours, to remove sites that are no longer operational.
One way you might operationalize this data in a Splunk environment:
Convert the feed to a CSV file (run this as a daily cron job on your Splunk search head):
#!/usr/bin/python
import requests

if __name__ == '__main__':
    # Fetch the raw feed (verify=False skips TLS certificate validation).
    feed_file = requests.get('https://files.deependresearch.org/feeds/ransomware/ransomware-payment-sites.txt', verify=False).content
    outfile = 'domain,notes\n'
    for line in feed_file.splitlines():
        # Skip comment lines and anything that isn't a domain.
        if line.startswith('#') or '.' not in line:
            continue
        outfile += '%s,DeepEndResearch Suspected Ransomware Payment Site\n' % line
    with open('ransomware_payment_site.csv', 'w') as fh:
        fh.write(outfile)

Then set a query using the inputlookup option at a schedule that works for your environment.
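The daily cron job mentioned above might look like the entry below. The script path, schedule, and Splunk lookups directory are illustrative; adjust them for your installation:

```shell
# Run the feed-to-CSV conversion daily at 06:00, then copy the result
# into Splunk's search-app lookups directory (paths are examples only).
0 6 * * * /usr/bin/python /opt/scripts/ransomware_feed_to_csv.py && cp /opt/scripts/ransomware_payment_site.csv /opt/splunk/etc/apps/search/lookups/
```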
We hope that you find this feed useful. Please feel free to comment or offer us suggestions!