Monitoring bandwidth on an ASUS router


  Recently Cox told me that I was about to go over my data cap of 1280 GB per month. I immediately had ideas about where my bandwidth was going, but I wanted a definitive answer.

  My wireless router at the time was an ASUS GT-AXE11000. I have since switched to my new 2.5Gbit Linux router.

  I tried the built-in traffic monitoring features of my ASUS router, but found them severely limited and the data incomplete. Foreshadowing: this problem comes up again later.

  My next thought was ntop, which I had used decades ago. At some point it morphed into ntopng. The problem with ntopng is that to use it these days, in any useful way, you need a license that costs something like $400. I want an answer, but not that badly.

  Next I tried this solution from Jeff Geerling, Monitoring my ASUS RT-AX86U Router with Prometheus and Grafana. This method works great for what it is. The problem is that it isn't actually what I am looking for. It just gives you a Grafana dashboard with basic stats like CPU usage, memory, and network bandwidth used per network interface. Whereas what I want is not stats about the router itself, but stats about the bandwidth usage of the clients behind the router.

  Next I found Elastiflow, but it was archived Nov 7, 2021. It morphed into ElastiFlow, a commercial product of the same name. I ran across many other solutions, both great and small. The end result was always that either it didn't do what I wanted, or it was a commercial product. I tried the commercial products too, and learned their free versions were effectively worthless.

  Then I found mention of someone trying to take the old Elastiflow and hack it into something new. Sadly, that was someone's pet project for a minute, and then they abandoned it. In the end, after running in circles for a few hours, I realized the best path forward was probably to just take the old Elastiflow and get it working again.

  I use Kubernetes on my NAS server to run various local services via helm charts, and one of the things I found in all my research was a helm chart for the old Elastiflow. I made some minor tweaks to it and managed to get it running on my Kubernetes 1.27.4 cluster. From that I made this git repository, helm-elastiflow. The old Elastiflow is basically ELK / ElasticStack with a little of the Elastiflow software mixed into Logstash.

  So at this point I had the old Elastiflow running, but it had no data in it. This led me to softflowd, which serves as a source of flow data for Elastiflow to process into a dashboard. You could compile it from source, but it is installable via Entware. Once you have it installed and running, you are probably going to want to use a user script to run it on boot of the router. The command to use would be something like softflowd -i br0 -m 131070 -n 192.168.1.40:7475 -T full -6 -d -D -P tcp -s 1 -b -v 10, where br0 is the internal interface of your router, 192.168.1.40 is the IP that Elastiflow is exposed on, and 7475 is the exposed TCP IPFIX port. IPFIX is one format of flow data. Note that unless you get into load balancers, the helm chart sets up NodePorts by default, and they can change.
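  As a minimal sketch, and assuming a firmware with user script support (such as Asuswrt-Merlin, which runs /jffs/scripts/services-start at boot) plus Entware installed under /opt, the boot script could look like this. The interface, IP, and port are the example values from above and need to match your setup:

#!/bin/sh
# /jffs/scripts/services-start -- run once at boot by the firmware.
# Export flows from the LAN bridge (br0) to the Elastiflow collector
# at 192.168.1.40:7475 over TCP as IPFIX (-v 10).
/opt/sbin/softflowd -i br0 -m 131070 -n 192.168.1.40:7475 \
    -T full -6 -d -D -P tcp -s 1 -b -v 10 &

  Remember to make the script executable with chmod a+rx /jffs/scripts/services-start. The /opt/sbin path is where Entware typically installs binaries; adjust it if softflowd landed elsewhere.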

  Once I actually got the data flowing into Elastiflow, I could go to the Kibana dashboard and see source, destination, protocol, and how much bandwidth was used.

  Now for the bad news: even this solution, as is, missed about 50% of my used bandwidth. At first I thought Cox was lying to me on their usage page, but the total bandwidth used on the public network interface of the router matched their numbers within a margin of error. Maybe you can figure out the issue and resolve it; if so, I would love to hear it. My best suspects are either a softflowd bug or something more inherent to the router's firmware.
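  For reference, this is roughly how to read the raw counters on the router to compare against the ISP's numbers. The interface name eth0 is an assumption here; the WAN interface varies by model:

#!/bin/sh
# Print total bytes in/out on the WAN interface since boot.
# eth0 is a guess -- check `ip link` or `nvram get wan0_ifname` on ASUS firmware.
WAN=eth0
echo "RX bytes: $(cat /sys/class/net/$WAN/statistics/rx_bytes)"
echo "TX bytes: $(cat /sys/class/net/$WAN/statistics/tx_bytes)"

  Note these counters reset at boot (and can wrap on 32-bit platforms), so they are only useful for rough comparisons over a known window.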

  Note: these screenshots are from the old Elastiflow's README.md.

[Screenshot: Elastiflow Overview] [Screenshot: Elastiflow Top-N]

  From here on I am going to assume you already understand Kubernetes, Helm, and ElasticStack to some extent.

Useful commands:

# Making the helm chart tarball
helm package helm-elastiflow
# Installing the helm chart from the tarball into Kubernetes
helm upgrade --install elastiflow --create-namespace -n elasticstack elastic-stack-1.5.0.tgz
# Showing you all the details of the flow TCP port, aka the IPFIX nodeport
kubectl get service elastiflow-logstash-tcp -n elasticstack -o yaml
# Showing you all the details of the Kibana service, including the nodeport needed to access it via your web browser
kubectl get service -n elasticstack elastiflow-kibana -o yaml
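# If you just want the port number, jsonpath can extract it directly
# (service names assume the helm chart above; adjust if yours differ)
kubectl get service -n elasticstack elastiflow-kibana -o jsonpath='{.spec.ports[0].nodePort}'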

Example Kibana URL (a node's IP plus the Kibana nodeport from above):

http://192.168.1.40:1265/

  One of the things you need to do once you get access to Kibana is load this json file, which contains the Elastiflow dashboards, via Kibana's Saved Objects import.
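  You can do the import through the Kibana UI (Stack Management, then Saved Objects, then Import), or with something like the sketch below against the saved objects API. The host and port are the example values from above, the file name is a placeholder for whatever you downloaded, and note that recent Kibana versions expect NDJSON rather than plain JSON on this endpoint:

# Import the dashboards through Kibana's saved objects API.
# 192.168.1.40:1265 is the example nodeport URL from above;
# elastiflow.kibana.json is a placeholder for the file linked in this post.
curl -X POST 'http://192.168.1.40:1265/api/saved_objects/_import?overwrite=true' \
    -H 'kbn-xsrf: true' \
    --form file=@elastiflow.kibana.json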

  I went further and used a combination of ingress-nginx + cert-manager + MetalLB + oauth2-proxy to make https://kibana.cygnusx-1.org/. If you wanted to go for bonus points, you could add in external-dns. I have since torn this setup down.

  Using ingress-nginx and MetalLB gives you a full Kubernetes load balancer, aka a reverse proxy. With cert-manager you can auto-generate Let's Encrypt SSL certificates.

  The reason for oauth2-proxy is that Kibana has no authentication out of the box. This is natural, since Elastic, the company behind ElasticStack, wants you to pay for that feature as part of X-Pack.

  The reason to use external-dns would be to auto-magically update your DNS entry if the public IP address of your router changed.
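  As a rough sketch of how those pieces tie together, an Ingress for Kibana could look something like the following. The ClusterIssuer name, the oauth2-proxy URLs, and the Kibana service port are assumptions for illustration, not taken from my actual setup:

# Hypothetical Ingress wiring cert-manager, oauth2-proxy, and ingress-nginx
# in front of the Kibana service; adjust all names for your cluster.
kubectl apply -n elasticstack -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kibana
  annotations:
    # cert-manager issues the Let's Encrypt certificate into kibana-tls
    cert-manager.io/cluster-issuer: letsencrypt
    # ingress-nginx sends every request to oauth2-proxy for authentication
    nginx.ingress.kubernetes.io/auth-url: "https://oauth2-proxy.example.com/oauth2/auth"
    nginx.ingress.kubernetes.io/auth-signin: "https://oauth2-proxy.example.com/oauth2/start?rd=$scheme://$host$request_uri"
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - kibana.cygnusx-1.org
      secretName: kibana-tls
  rules:
    - host: kibana.cygnusx-1.org
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: elastiflow-kibana
                port:
                  number: 5601
EOF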

Diagram:

Internet
\
 DNS record, like kibana.cygnusx-1.org and 98.179.66.95
 \
  ASUS Router port forwarding on port 443
  \
   MetalLB ip address like 192.168.1.30
   \
    oauth2-proxy
    \
     ingress-nginx
     \
      kibana service