OpenCanary 3.0 – Evolution not Revolution

USA versus Germany versus Switzerland

The OpenCanary trio has been running for some time but, looking at the dashboards, I wanted more clarity across the three honeypots. I also wanted to widen the geographical spread and reduce some of the operating cost.

Firstly, it seems the resources consumed by OpenCanary (or would that be its popularity?) were driving up costs in Google Cloud Platform. That instance was replaced by a non-US instance in Oracle Cloud, which fulfilled the requirement to see geographical differences and pay less.

Secondly, the slightly messy OOC (Original OpenCanary) was retired and replaced. The location remains the same (US), but the scripting behind the instance was cleaned up (mostly the malware-handling scripts).

Thirdly, the Swiss-based instance was tidied up and configured to match the new instances.

All instances have the same set of ports open and available:

The process to create and realign the OpenCanary instances was largely:

  • clean up opencanary.conf and simplify the node name to the country hosting it
  • amend opencanary.conf to shout "OpenCanary" all over (for discovery)
  • install a folder-watcher service for SMB file drops (copy the file, send it to VirusTotal, report in Slack)
  • change the login skins for the HTTP services to mention OpenCanary – including a disclaimer
Login warning, OpenCanary-style
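
The first two steps above boil down to a handful of opencanary.conf settings. The fragment below is a sketch with illustrative values (not my actual file) – the node name, banners and skin are the sort of thing that gets changed:

```json
{
  "device.node_id": "switzerland",
  "ssh.enabled": true,
  "ssh.version": "SSH-2.0-OpenCanary",
  "ftp.enabled": true,
  "ftp.banner": "OpenCanary FTP ready.",
  "http.enabled": true,
  "http.skin": "basicLogin"
}
```

Setting device.node_id to the hosting country makes the per-country dashboards trivial, and putting "OpenCanary" into the SSH version and FTP banner is the "shout all over" part.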

The result is three clean honeypots. Welcome, Digger, Sentinel and Armada.
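
The folder-watcher for SMB drops could be as simple as a polling script. This is a minimal sketch, not my production service – the share path, quarantine directory and Slack webhook URL are placeholders, and rather than uploading to VirusTotal it hashes the file and reports the VT lookup link (uploading needs an API key):

```python
#!/usr/bin/env python3
"""Sketch of a folder-watcher for SMB file drops: copy aside, hash, notify."""
import hashlib
import json
import shutil
import time
from pathlib import Path
from urllib import request

WATCH_DIR = Path("/srv/smbshare")            # placeholder: the SMB share
QUARANTINE = Path("/var/opencanary/drops")   # placeholder: copy destination
SLACK_WEBHOOK = "https://hooks.slack.com/services/XXX"  # placeholder URL


def sha256_of(path: Path) -> str:
    """Hash the dropped file so it can be looked up on VirusTotal."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()


def slack_payload(path: Path, digest: str) -> bytes:
    """Build the JSON body for a Slack incoming-webhook notification."""
    text = (f"New SMB drop: {path.name}\n"
            f"sha256: {digest}\n"
            f"VT lookup: https://www.virustotal.com/gui/file/{digest}")
    return json.dumps({"text": text}).encode()


def handle_drop(path: Path) -> str:
    """Copy the file aside, hash it, and ping Slack; returns the digest."""
    QUARANTINE.mkdir(parents=True, exist_ok=True)
    shutil.copy2(path, QUARANTINE / path.name)
    digest = sha256_of(path)
    req = request.Request(SLACK_WEBHOOK, data=slack_payload(path, digest),
                          headers={"Content-Type": "application/json"})
    request.urlopen(req)  # fire-and-forget notification
    return digest


def watch(poll_seconds: int = 5) -> None:
    """Poll the share and handle any new files (never returns)."""
    seen = set()
    while True:  # simple polling loop; inotify/watchdog would also work
        for f in WATCH_DIR.glob("*"):
            if f.is_file() and f.name not in seen:
                seen.add(f.name)
                handle_drop(f)
        time.sleep(poll_seconds)
```

Run as a systemd service calling watch(); the Slack message carries the hash so the VirusTotal verdict is one click away.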

I will update the instructions to be more logical, more thorough and usable by all.

How Do They Look?

The dashboards for the three competing honeypots are now easier on the eye:

The newly-installed instances are less popular than the established instance. So it’s not just about being open on the Internet, it’s about being known to be open.

The popular ports are the usual suspects; SSH being chased by Redis today (that Redis data is clearly valuable).

OpenCanary Building Blocks

The following are the main components that I use for my OpenCanary instances. They hook together via Tailscale, report into NewRelic, ping a Slack channel when files are dropped on them and of course feed Splunk so that the dashboards above can be consumed.
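
One way to feed Splunk from the instances is the HTTP Event Collector. The sketch below shows the shape of that, with a placeholder host and token (not my real setup – how the events actually travel from OpenCanary to Splunk is a deployment choice):

```python
import json
from urllib import request

HEC_URL = "https://splunk.example.net:8088/services/collector/event"  # placeholder
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"                    # placeholder


def hec_payload(event: dict, host: str) -> bytes:
    """Wrap a log record in Splunk's HEC event envelope."""
    return json.dumps({
        "host": host,                # e.g. the honeypot's node name
        "sourcetype": "opencanary",  # assumed sourcetype
        "event": event,
    }).encode()


def send_event(event: dict, host: str) -> None:
    """POST one event to the HTTP Event Collector."""
    req = request.Request(
        HEC_URL,
        data=hec_payload(event, host),
        headers={
            "Authorization": f"Splunk {HEC_TOKEN}",
            "Content-Type": "application/json",
        },
    )
    request.urlopen(req)


# example record shaped like an OpenCanary SSH login attempt
sample = {"logtype": 4002, "src_host": "203.0.113.7", "dst_port": 22}
```

With the node name in the host field, splitting the dashboards by honeypot (and therefore by country) is a one-line search.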

In no particular order, those building blocks are:

  • OpenCanary. The open-source part of Thinkst that provides the Python-based honeypot
  • Ubuntu. My OpenCanary hosts run Ubuntu Server
  • Tailscale. Underpinning the ecosystem with private connectivity
  • Slack. It’s a bit <meh> but so is Teams. Nice for lightweight and infrequent notifications
  • Splunk. Making sense of 1.3 million records per instance per month
  • NewRelic. Even a hobby setup needs monitoring….
  • Oracle Cloud. Free tier is massively useful, thanks!
  • ChatGPT. AI helped with some HTML, scripting and PHP….