IoT with InfluxDB, Telegraph and Grafana on the Raspberry Pi 3


On the internet I stumbled across a beautiful Grafana dashboard and, of course, wanted to try it out myself.

So here I will describe how I installed:

  • InfluxDB as the database
  • Telegraf as the collector
  • Grafana for visualization

1. Install InfluxDB

All the data is collected in InfluxDB. The current version can be found on the InfluxData downloads page.

 wget https://dl.influxdata.com/influxdb/releases/influxdb_1.2.2_armhf.deb
 sudo dpkg -i influxdb_1.2.2_armhf.deb
 sudo systemctl enable influxdb
 sudo systemctl start influxdb  

This installs InfluxDB version 1.2.2, enables automatic start at boot, and starts the database.
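To check that the service is actually up, you can query InfluxDB's /ping health endpoint (available in InfluxDB 1.x); a sketch of the check, assuming the default port 8086:

```shell
# Query the InfluxDB health endpoint; a healthy 1.x instance
# responds with "HTTP/1.1 204 No Content".
curl -sI http://localhost:8086/ping | head -n 1
```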

pi@raspberrypi:/etc/telegraf $ influx
 Connected to http://localhost:8086 version 1.2.2
 InfluxDB shell version: 1.2.2
 > CREATE USER admin WITH PASSWORD 'yourpassword' WITH ALL PRIVILEGES
 > CREATE DATABASE telegraf
 > exit 

This creates an admin user and the database we will use later.

 sudo vi /etc/influxdb/influxdb.conf
 [http]
  enabled = true
  bind-address = ":8086"
  auth-enabled = true  

With this change in influxdb.conf we enable authentication. These three parameters under [http] should be checked and adjusted accordingly.

 sudo systemctl restart influxdb
 influx -username admin -password yourpassword
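After the restart you can also verify the credentials non-interactively with the influx CLI's -execute flag (replace yourpassword with the password you actually set):

```shell
# Run a single authenticated query instead of opening the shell.
influx -username admin -password 'yourpassword' -execute 'SHOW DATABASES'
```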

2. Install Telegraf

Telegraf is the collector and fetches all data from the access points via SNMP. Of course, SNMP and the corresponding community string must be configured in the UniFi Controller. The current version can be found on the InfluxData downloads page.
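Before touching the Telegraf config, it is worth checking with snmpwalk (from the snmp package) that an access point actually answers; ip-ap-1 and public below are placeholders for your AP address and community string:

```shell
# Walk the system subtree of one access point via SNMP v1.
# Replace ip-ap-1 and public with your AP address and community string.
snmpwalk -v1 -c public ip-ap-1 system
```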

 wget https://dl.influxdata.com/telegraf/releases/telegraf_1.2.1_armhf.deb
 sudo dpkg -i telegraf_1.2.1_armhf.deb

This downloads Telegraf and installs it.

 wget https://github.com/WaterByWind/grafana-dashboards/archive/master.zip
 unzip master.zip
 sudo cp grafana-dashboards-master/UniFi-UAP/mibs/* /usr/share/snmp/mibs
 sudo apt-get install snmpd

For Telegraf to interpret the data, the MIBs and snmpd must be installed. The MIBs are already included in the GitHub repository of the dashboard and only need to be copied.
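To confirm that the copied MIBs are found, you can resolve one of the UBNT object names with snmptranslate (also part of the Net-SNMP tools); if the MIB files are in place, the numeric OID is printed, otherwise you get an "Unknown object identifier" error:

```shell
# Resolve a UBNT MIB name to its numeric OID.
# -m ALL loads all MIB files found in the MIB search path.
snmptranslate -m ALL -On UBNT-UniFi-MIB::unifiApSystemVersion
```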

Next, Telegraf must be configured.

Here is my entire telegraf.conf. The input and output sections are the important ones. You have to adjust the values specific to your setup, in particular the agents list, the SNMP community string, and the InfluxDB credentials. If you already run a Telegraf instance, you can extend your existing config with the relevant parts, at minimum the complete input section.

# Telegraf Configuration
 #
 # Telegraf is entirely plugin driven. All metrics are gathered from the
 # declared inputs, and sent to the declared outputs.
 #
 # Plugins must be declared in here to be active.
 # To deactivate a plugin, comment out the name and any variables.
 #
 # Use 'telegraf -config telegraf.conf -test' to see what metrics a config
 # file would generate.
 #
 # Environment variables can be used anywhere in this config file, simply prepend
 # them with $. For strings the variable must be within quotes (ie, "$STR_VAR"),
 # for numbers and booleans they should be plain (ie, $INT_VAR, $BOOL_VAR)
 # Global tags can be specified here in key="value" format.
 [global_tags]
  # dc = "us-east-1" # will tag all metrics with dc=us-east-1
  # rack = "1a"
  ## Environment variables can be used as tags, and throughout the config file
  # user = "$USER"
 # Configuration for telegraf agent
 [agent]
  ## Default data collection interval for all inputs
  interval = "10s"
  ## Rounds collection interval to 'interval'
  ## ie, if interval="10s" then always collect on :00, :10, :20, etc.
  round_interval = true
  ## Telegraf will send metrics to outputs in batches of at most
  ## metric_batch_size metrics.
  ## This controls the size of writes that Telegraf sends to output plugins.
  metric_batch_size = 1000
  ## For failed writes, telegraf will cache metric_buffer_limit metrics for each
  ## output, and will flush this buffer on a successful write. Oldest metrics
  ## are dropped first when this buffer fills.
  ## This buffer only fills when writes fail to output plugin(s).
  metric_buffer_limit = 10000
  ## Collection jitter is used to jitter the collection by a random amount.
  ## Each plugin will sleep for a random time within jitter before collecting.
  ## This can be used to avoid many plugins querying things like sysfs at the
  ## same time, which can have a measurable effect on the system.
  collection_jitter = "0s"
  ## Default flushing interval for all outputs. You shouldn't set this below
  ## interval. Maximum flush_interval will be flush_interval + flush_jitter
  flush_interval = "10s"
  ## Jitter the flush interval by a random amount. This is primarily to avoid
  ## large write spikes for users running a large number of telegraf instances.
  ## ie, a jitter of 5s and interval 10s means flushes will happen every 10-15s
  flush_jitter = "0s"
  ## By default, precision will be set to the same timestamp order as the
  ## collection interval, with the maximum being 1s.
  ## Precision will NOT be used for service inputs, such as logparser and statsd.
  ## Valid values are "ns", "us" (or "µs"), "ms", "s".
  precision = ""
  ## Logging configuration:
  ## Run telegraf with debug log messages.
  # debug = true
  ## Run telegraf in quiet mode (error log messages only).
  quiet = false
  ## Specify the log file name. The empty string means to log to stderr.
  logfile = "/var/log/telegraf/telegraf.log"
  ## Override default hostname, if empty use os.Hostname()
  hostname = ""
  ## If set to true, do not set the "host" tag in the telegraf agent.
  omit_hostname = false
 ###############################################################################
 #              OUTPUT PLUGINS                  #
 ###############################################################################
 # Configuration for influxdb server to send metrics to
 [[outputs.influxdb]]
  ## The full HTTP or UDP endpoint URL for your InfluxDB instance.
  ## Multiple urls can be specified as part of the same cluster,
  ## this means that only ONE of the urls will be written to each interval.
  urls = ["http://localhost:8086"] # required
  ## The target database for metrics (telegraf will create it if not exists).
  database = "telegraf" # required
  ## Retention policy to write to. Empty string writes to the default rp.
  retention_policy = ""
  ## Write consistency (clusters only), can be: "any", "one", "quorum", "all"
  write_consistency = "any"
  ## Write timeout (for the InfluxDB client), formatted as a string.
  ## If not provided, will default to 5s. 0s means no timeout (not recommended).
  timeout = "5s"
  username = "admin"
  password = "password"
  ## Set the user agent for HTTP POSTs (can be useful for log differentiation)
  user_agent = "telegraf"
 ###############################################################################
 #              PROCESSOR PLUGINS                #
 ###############################################################################
 # # Print all metrics that pass through this filter.
 # [[processors.printer]]
 ###############################################################################
 #              AGGREGATOR PLUGINS                #
 ###############################################################################
 # # Keep the aggregate min/max of each metric passing through.
 # [[aggregators.minmax]]
 #  ## General Aggregator Arguments:
 #  ## The period on which to flush & clear the aggregator.
 #  period = "30s"
 #  ## If true, the original metric will be dropped by the
 #  ## aggregator and will not get sent to the output plugins.
 #  drop_original = false
 # Telegraf Configuration for UniFi UAP monitoring via SNMP
 # These input configurations are required for use with the dashboard
 # Edit the list of monitored hosts ("agents")
 #  and SNMP community string ("community") as appropriate.
 ###############################################################################
 #              INPUT PLUGINS                  #
 ###############################################################################
 ##
 ## Retrieves details via SNMP from remote agents
 ##
 ##
 ## UniFi APs (Gen 2/Gen 3)
 ##
  [[inputs.snmp]]
   # List of agents to poll
   agents = [ "ip-ap-1", "ip-ap-2" ]
   # Polling interval
   interval = "60s"
   # Timeout for each SNMP query.
   timeout = "10s"
   # Number of retries to attempt within timeout.
   retries = 3
   # SNMP version, UAP only supports v1
   version = 1
   # SNMP community string.
   community = "public" 
   # The GETBULK max-repetitions parameter
   max_repetitions = 10
   # Measurement name
   name = "snmp.UAP"
   ##
   ## System Details
   ##
   # System name (hostname)
   [[inputs.snmp.field]]
    is_tag = true
    name = "sysName"
    oid = "RFC1213-MIB::sysName.0"
   # System vendor OID
   [[inputs.snmp.field]]
    name = "sysObjectID"
    oid = "RFC1213-MIB::sysObjectID.0"
   # System description
   [[inputs.snmp.field]]
    name = "sysDescr"
    oid = "RFC1213-MIB::sysDescr.0"
   # System contact
   [[inputs.snmp.field]]
    name = "sysContact"
    oid = "RFC1213-MIB::sysContact.0"
   # System location
   [[inputs.snmp.field]]
    name = "sysLocation"
    oid = "RFC1213-MIB::sysLocation.0"
   # System uptime
   [[inputs.snmp.field]]
    name = "sysUpTime"
    oid = "RFC1213-MIB::sysUpTime.0"
   # UAP model
   [[inputs.snmp.field]]
    name = "unifiApSystemModel"
    oid = "UBNT-UniFi-MIB::unifiApSystemModel"
   # UAP firmware version
   [[inputs.snmp.field]]
    name = "unifiApSystemVersion"
    oid = "UBNT-UniFi-MIB::unifiApSystemVersion"
   ##
   ## Host Resources
   ##
   # Total memory
   [[inputs.snmp.field]]
    name = "memTotal"
    oid = "FROGFOOT-RESOURCES-MIB::memTotal.0"
   # Free memory
   [[inputs.snmp.field]]
    name = "memFree"
    oid = "FROGFOOT-RESOURCES-MIB::memFree.0"
   # Buffer memory
   [[inputs.snmp.field]]
    name = "memBuffer"
    oid = "FROGFOOT-RESOURCES-MIB::memBuffer.0"
   # Cache memory
   [[inputs.snmp.field]]
    name = "memCache"
    oid = "FROGFOOT-RESOURCES-MIB::memCache.0"
   # Per-interface traffic, errors, drops
   [[inputs.snmp.table]]
    oid = "IF-MIB::ifTable"
    [[inputs.snmp.table.field]]
     is_tag = true
     oid = "IF-MIB::ifDescr"
   ##
   ## Interface Details & Metrics
   ##
   # Wireless interfaces
   [[inputs.snmp.table]]
    oid = "UBNT-UniFi-MIB::unifiRadioTable"
    [[inputs.snmp.table.field]]
     is_tag = true
     oid = "UBNT-UniFi-MIB::unifiRadioName"
    [[inputs.snmp.table.field]]
     is_tag = true
     oid = "UBNT-UniFi-MIB::unifiRadioRadio"
   # BSS instances
   [[inputs.snmp.table]]
    oid = "UBNT-UniFi-MIB::unifiVapTable"
    [[inputs.snmp.table.field]]
     is_tag = true
     oid = "UBNT-UniFi-MIB::unifiVapName"
    [[inputs.snmp.table.field]]
     is_tag = true
     oid = "UBNT-UniFi-MIB::unifiVapRadio"
   # Ethernet interfaces
   [[inputs.snmp.table]]
    oid = "UBNT-UniFi-MIB::unifiIfTable"
    [[inputs.snmp.table.field]]
     is_tag = true
     oid = "UBNT-UniFi-MIB::unifiIfName"
   ##
   ## System Performance
   ##
   # System load averages
   [[inputs.snmp.table]]
    oid = "FROGFOOT-RESOURCES-MIB::loadTable"
    [[inputs.snmp.table.field]]
     is_tag = true
     oid = "FROGFOOT-RESOURCES-MIB::loadDescr"
   ##
   ## SNMP metrics
   ##
   # Number of SNMP messages received
   [[inputs.snmp.field]]
    name = "snmpInPkts"
    oid = "SNMPv2-MIB::snmpInPkts.0"
   # Number of SNMP Get-Request received
   [[inputs.snmp.field]]
    name = "snmpInGetRequests"
    oid = "SNMPv2-MIB::snmpInGetRequests.0"
   # Number of SNMP Get-Next received
   [[inputs.snmp.field]]
    name = "snmpInGetNexts"
    oid = "SNMPv2-MIB::snmpInGetNexts.0"
   # Number of SNMP objects requested
   [[inputs.snmp.field]]
    name = "snmpInTotalReqVars"
    oid = "SNMPv2-MIB::snmpInTotalReqVars.0"
   # Number of SNMP Get-Response received
   [[inputs.snmp.field]]
    name = "snmpInGetResponses"
    oid = "SNMPv2-MIB::snmpInGetResponses.0"
   # Number of SNMP messages sent
   [[inputs.snmp.field]]
    name = "snmpOutPkts"
    oid = "SNMPv2-MIB::snmpOutPkts.0"
   # Number of SNMP Get-Request sent
   [[inputs.snmp.field]]
    name = "snmpOutGetRequests"
    oid = "SNMPv2-MIB::snmpOutGetRequests.0"
   # Number of SNMP Get-Next sent
   [[inputs.snmp.field]]
    name = "snmpOutGetNexts"
    oid = "SNMPv2-MIB::snmpOutGetNexts.0"
   # Number of SNMP Get-Response sent
   [[inputs.snmp.field]]
    name = "snmpOutGetResponses"
    oid = "SNMPv2-MIB::snmpOutGetResponses.0"

The following command tests whether the SNMP data collection works so far.

 sudo -u telegraf telegraf --config /etc/telegraf/telegraf.conf --config-directory /etc/telegraf/telegraf.d/ --input-filter snmp -test

Whether the data is actually being written to InfluxDB can be checked with the following queries:

pi@raspberrypi:~ $ influx -username admin -password yourpassword
 Connected to http://localhost:8086 version 1.2.2
 InfluxDB shell version: 1.2.2
 > use telegraf
 Using database telegraf
 > show measurements
 name: measurements
 name
 ----
 ifTable
 loadTable
 snmp.UAP
 unifiIfTable
 unifiRadioTable
 unifiVapTable

If these measurements appear, the data is being written to InfluxDB.
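To peek at the actual data points, you can run an InfluxQL query directly; the measurement name snmp.UAP matches the name set in the Telegraf input section above:

```shell
# Show the three most recent rows of the UAP measurement.
influx -username admin -password 'yourpassword' -database telegraf \
  -execute 'SELECT * FROM "snmp.UAP" ORDER BY time DESC LIMIT 3'
```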

3. Install Grafana

We are almost done; only Grafana still needs to be installed and configured. The current version can be found on the grafana-on-raspberry GitHub releases page.

 wget https://github.com/fg2it/grafana-on-raspberry/releases/download/v4.2.0/grafana_4.2.0_armhf.deb
 sudo dpkg -i grafana_4.2.0_armhf.deb
 sudo systemctl enable grafana-server
 sudo systemctl start grafana-server

This installs Grafana, enables it at boot, and starts it.

You can now reach Grafana at http://<raspberry-ip>:3000 and log in with admin / admin. You should then change your password at http://<raspberry-ip>:3000/profile/password. After that, you have to add your data source.

[Screenshot: configuring the InfluxDB data source in Grafana]
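If you prefer scripting over clicking, the data source can also be created through Grafana's HTTP API (the /api/datasources endpoint exists in Grafana 4.x); this sketch uses the default admin:admin login and the placeholder credentials from this article:

```shell
# Create the InfluxDB data source via the Grafana HTTP API.
# Adjust host names, the login, and the passwords for your setup.
curl -s -X POST http://admin:admin@localhost:3000/api/datasources \
  -H 'Content-Type: application/json' \
  -d '{
        "name": "influxdb",
        "type": "influxdb",
        "url": "http://localhost:8086",
        "access": "proxy",
        "database": "telegraf",
        "user": "admin",
        "password": "yourpassword",
        "isDefault": true
      }'
```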

Now you can import the dashboard and view your data.

[Screenshot: the imported UniFi dashboard]

Credits and References:

https://www.bjoerns-techblog.de/2017/05/installation-von-influxdb-telegraf-und-grafana-auf-dem-raspberry-pi-3/

https://bentek.fr/influxdb-grafana-raspberry-pi/

https://hodgkins.io/windows-metric-dashboards-with-influxdb-and-grafana

 
