Nagios Plugins for Linux v25 (Gentoo Release)

Version 25 of the Nagios Plugins for Linux is available for download.

The Nagios Plugins for Linux is a free and open source (license) set of plugins for monitoring the major system parameters of Linux servers.

The complete source code is available at GitHub (nagios-plugins-linux-25.tar.xz). You can find the full documentation at https://github.com/madrisan/nagios-plugins-linux.

What’s new in this release

Fixes

  • Fix compilation when libcurl headers are not installed.
  • Fix warning message about obsolete AC_PROG_RANLIB macro usage.
  • sysfsparser library: fix debug messages in sysfsparser_thermal_get_temperature().
  • check_memory plugin: minor code cleanup and typo fixes.

Enhancements

  • Add perfdata to mem_available and mem_used (feature requested by @sbraz).
  • Add a build option, --disable-libcurl, for disabling the linking of libcurl, which is required to build check_docker (feature requested by @sbraz); see the example after this list.
  • Packages: add support for Fedora 30 and Debian 10 (Buster).
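
For instance, to build the plugins without linking libcurl (and thus without check_docker), pass the new switch at configure time; the sequence below assumes a standard autotools build from the release tarball:

./configure --disable-libcurl
make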

Changes

  • Update the external jsmn library.
  • Move some functions to the new library perfdata.
  • Drop support for building Fedora 24-27 and Debian 6 (Squeeze) packages.

Test framework

New unit test for lib/perfdata.c.

Gentoo Package

The plugins are now available in the Gentoo tree. They can be installed by running:

emerge -av net-analyzer/nagios-plugins-linux-madrisan

The curl USE flag is required to build check_docker.
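
If curl is not among your global USE flags, a common way to enable it for this package only is a package.use entry (this assumes /etc/portage/package.use is laid out as a directory, the default on recent systems; the file name is arbitrary):

# echo 'net-analyzer/nagios-plugins-linux-madrisan curl' >> /etc/portage/package.use/nagios-plugins-linux
# emerge -av net-analyzer/nagios-plugins-linux-madrisan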

Deploy OpenStack in a few minutes

In this post, we provide a procedure for installing an all-in-one OpenStack Newton Open Virtual Appliance (OVA) on Linux (Fedora 27) with the KVM hypervisor.


We will make use of the image created by Matt Dorn for the really great book “Preparing for the Certified OpenStack Administrator Exam”, published by Packt Publishing in August 2017.

In his book, Matt describes how to import the OVA image into VirtualBox. Unfortunately, VirtualBox (as packaged by Oracle) cannot be installed on Fedora 27 without disabling the Secure Boot feature, because the vboxdrv kernel module, which is not cryptographically signed with a Fedora key, would otherwise not be loaded into memory:

# modprobe -v vboxdrv
insmod /lib/modules/4.14.11-300.fc27.x86_64/misc/vboxdrv.ko 
modprobe: ERROR: could not insert 'vboxdrv': Required key not available

Switching to the KVM hypervisor solves this long-standing issue.

Pre-requirements

Make sure VT-x or AMD-V virtualization is enabled in your computer’s BIOS. To check whether you have proper CPU support, run the command:

$ egrep '^flags.*(vmx|svm)' /proc/cpuinfo

If nothing is printed, your system does not support the relevant extensions, and you should check in the BIOS (or UEFI) setup whether you can enable them.

You also need root access to the Linux system (direct access or via the sudo command).

System requirements

As stated by the authors of the OpenStack appliance, the hardware requirements are:

  • 2 GHz or faster 64-bit (x64) processor with Intel VT-x or AMD-V support
  • 6 GB available RAM
  • 10 GB available hard disk space

Qemu and KVM configuration

Fedora uses the libvirt family of tools as its virtualization solution. By default, libvirt on Fedora will use Qemu to run guest instances. Qemu can emulate a host machine entirely in software or, given a CPU with hardware support, can use KVM to provide fast full virtualization.

You need to install the following group of packages:

# dnf install @virtualization

and then start the libvirtd service:

# systemctl start libvirtd

To start libvirtd on boot, run the following command:

# systemctl enable libvirtd

Verify that the kvm kernel modules were properly loaded:

$ lsmod | grep ^kvm
kvm_intel 229376 3
kvm 696320 1 kvm_intel

OVA image conversion

Download the .ova image from GitHub.

The .ova format is nothing more than a tar archive containing an .ovf file and a .vmdk file: the VM configuration and the disk image, respectively.

So, you can simply extract the files:

$ tar xvf coa-aio-newton.ova

and do the actual image conversion to the qcow2 format, a file format for disk image files used by QEMU:

$ qemu-img convert -O qcow2 coa-aio-newton_2_1-disk001.vmdk coa-aio-newton_2_1-disk001.qcow2

The new image will be named coa-aio-newton_2_1-disk001.qcow2.
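
As a quick sanity check, you can inspect the converted image with the qemu-img tool itself (the reported sizes depend on the appliance version):

$ qemu-img info coa-aio-newton_2_1-disk001.qcow2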

Network configuration

The interface virbr0 should now be available on your system. We need to create one more bridge for the OVA image to be fully functional:

# cd /etc/sysconfig/network-scripts/
# cat <<EOF | sudo tee ifcfg-virbr1
DEVICE="virbr1"
BOOTPROTO="static"
IPADDR="192.168.56.2"
NETMASK="255.255.255.0"
GATEWAY="192.168.56.1"
ONBOOT="yes"
TYPE="Bridge"
NM_CONTROLLED="yes"
EOF
# ./ifup virbr1

You can check the network configuration by entering the command:

$ ip addr show dev virbr1

Start up the virtual machine

Now it’s time to start the appliance. Launch virt-manager as your regular user and add a new virtual machine from an existing disk image.

[Screenshot: virt-manager, new virtual machine wizard]

Select the Linux OS type and the Ubuntu distribution. Then browse the local disk for the .qcow2 image coa-aio-newton_2_1-disk001.qcow2.

[Screenshot: virt-manager, OS type and version selection]

Configure 6 GB of memory and one CPU for this virtual machine.

[Screenshot: virt-manager, memory and CPU configuration]

Select “Network Interfaces” from the menu and configure the interface virbr1 as shown in the screenshot:

[Screenshot: virt-manager, bridge interface configuration]

And finally start the instance. You will know it’s up and running once you see the Ubuntu logon prompt.

[Screenshot: the appliance console with the Ubuntu logon prompt]

You can log in at the console or, better, SSH into the appliance from a terminal client on your host operating system:

$ ssh openstack@192.168.56.56

You can also start testing your OpenStack environment through the Horizon dashboard.

[Screenshot: the OpenStack Horizon dashboard]

Congratulations! You can now start experimenting with the OpenStack technologies.

AI Can Be Made Legally Accountable for Its Decisions

In the near future, when Artificial Intelligence (intelligence displayed by machines) starts to spread and autonomously make important decisions that impact our daily lives, the issue of its accountability under the law will arise.

AI systems are expected to justify their decisions without revealing all their internal secrets, in order to protect the commercial advantage of the AI providers. Moreover, mapping inputs and intermediate representations in AI systems to human-interpretable concepts is a challenging problem, because these systems tend to work as black boxes.

For these reasons, explanation systems should be considered distinct from the AI systems themselves.

This paper, written by researchers at Harvard University, highlights some interesting aspects of this debate and shows that the problem is by no means straightforward.

Nagios Plugins for Linux v22

I’m delighted to announce the immediate, free availability of version 22 of the Nagios Plugins for Linux. The Nagios Plugins for Linux is a free and open source (license) set of plugins for monitoring the major system parameters of Linux servers.

As usual, the complete source code is available at GitHub (nagios-plugins-linux-22.tar.xz). You can find the full documentation at https://github.com/madrisan/nagios-plugins-linux.

What’s new in this release

Fixes

vminfo lib: add the following items to the /proc/vmstat parser:

  • vm_pgalloc_dma32
  • vm_pgrefill_dma32
  • vm_pgscan_direct_dma32
  • vm_pgscan_kswapd_dma32
  • vm_pgsteal_dma32
  • vm_pgsteal_direct_dma
  • vm_pgsteal_direct_dma32

The DMA32 memory zone is only available on 64-bit Linux (the low ~4 GB of memory).
This patch can slightly modify the values of the memory counters reported by check_memory.

Enhancements

Fix several warnings reported by Codacy and Code Climate.

Nagios Plugins for Linux v21

The Nagios Plugins for Linux version 21 is now available.
The source is available at GitHub: nagios-plugins-linux-21.tar.xz

You can find the full documentation at https://github.com/madrisan/nagios-plugins-linux.

What’s new in this release

Enhancements

  • check_paging: the command-line option --swapping-only has been added for displaying only the swap reads and writes (see the example after this list). The help message has been updated and improved by adding some lines that explain which kernel variable(s) are selected when a user specifies warning and/or critical thresholds.
  • The Docker-based framework for packaging the Nagios Plugins for Linux supports two new extra distributions:
    • Debian 9 (Stretch)
    • Fedora 26
  • The test framework has been reworked and enriched in new modules:
    • tests/tsclock_thresholds
    • tests/tscswch
    • tests/tsintr
    • tests/tslibmeminfo_conversions
    • tests/tslibmeminfo_interface
    • tests/tslibmeminfo_procparser
    • tests/tslibmessages
    • tests/tslibvminfo
    • tests/tsload_normalize
    • tests/tsload_thresholds
    • tests/tspaging
    • tests/tstemperature
    • tests/tsuptime

The result of each test execution is now displayed in color.

  • The code of several plugins has been polished and modularized to allow testing. The glibc function secure_getenv() (or __secure_getenv() on Ubuntu, and maybe other distributions) is now used instead of getenv() in the test code, to improve security.
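
By way of illustration, the new check_paging option can be combined with the usual thresholds; this sketch assumes the conventional -w/-c threshold switches used across this plugin collection, and the threshold values are arbitrary:

./check_paging --swapping-only -w 250 -c 500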

SaltStack SDB Interface

The SDB (Simple Data Base) interface is designed to store and retrieve data that, unlike pillars and grains, is not necessarily minion-specific. It is a generic database interface for SaltStack.

We will show how we can make use of SDB for storing and retrieving passwords.

SDB Configuration

The SDB interface requires a profile to be set up in the master configuration file.

/etc/salt/master.d/passwords.conf

The configuration stanza includes the name/ID that the profile will be referred to as, a driver setting, and any other arguments that are necessary for the SDB module that will be used.

pwd:
    driver: json
    data: /srv/salt/common/pwd.json

We will store the data in JSON format and make use of the sdb execution module to get, set and delete values from this file. These three methods can be implemented as follows in the Python script /srv/salt/_sdb/json.py:

'''
SDB module for JSON

Like all sdb modules, the JSON module requires a configuration profile
to be configured in either the minion or, as in our implementation,
in the master configuration file (/etc/salt/master.d/passwords.conf).
This profile requires very little:

    .. code-block:: yaml

      pwd:
        driver: json
        data: /srv/salt/common/pwd.json

The ``driver`` refers to the json module and ``data`` is the path
to the JSON file that contains the data.

This file should be saved as salt/_sdb/json.py

.. code-block:: yaml

    user: sdb://pwd/user1

CLI Example:

    .. code-block:: bash

        sudo salt-run sdb.get sdb://pwd/user1
'''
from __future__ import absolute_import
from salt.exceptions import CommandExecutionError
import salt.utils
import json

__func_alias__ = {
    'set_': 'set'
}

def _read_json(profile):
    '''
    Return the content of a JSON file
    '''
    try:
        with salt.utils.fopen(profile['data'], 'r') as fp_:
            return json.load(fp_)
    except IOError as exc:
        raise CommandExecutionError(exc)
    except KeyError as exc:
        raise CommandExecutionError(
            '{0} needs to be configured'.format(exc))
    except ValueError as exc:
        raise CommandExecutionError(
            'There was an error with the JSON data: {0}'.format(exc))

def get(key, profile=None):
    '''
    Get a value from a JSON file
    '''
    json_data = _read_json(profile)
    return json_data.get(key, {})

def set_(key, value, profile=None):
    '''
    Set a key/value pair in a JSON file
    '''
    json_data = _read_json(profile)
    json_data[key] = value

    try:
        with salt.utils.fopen(profile['data'], 'w') as fp_:
            json.dump(json_data, fp_, indent=2, sort_keys=True)
    except IOError as exc:
        raise CommandExecutionError(exc)

    return get(key, profile)
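
# Note: the delete method mentioned earlier is not part of the original
# snippet; the sketch below shows one possible implementation, following
# the same conventions as get() and set_().
def delete(key, profile=None):
    '''
    Remove a key/value pair from a JSON file
    '''
    json_data = _read_json(profile)
    if key not in json_data:
        return False
    del json_data[key]

    try:
        with salt.utils.fopen(profile['data'], 'w') as fp_:
            json.dump(json_data, fp_, indent=2, sort_keys=True)
    except IOError as exc:
        raise CommandExecutionError(exc)

    return True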

You can now store the hashed passwords in the JSON data file:

{
  "user1": "$5$tEpxpTHeP...0128tglwMKE.X9b88fO4x0",
  "user2": "$5$n4XiZajqf...P3BrvFM5hYq.UazR4dHxl8"
}
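
One way to generate such SHA-256 crypt hashes (the ones with the $5$ prefix) is the Python crypt module; the password below is of course just a placeholder:

$ python3 -c "import crypt; print(crypt.crypt('a-secret-password', crypt.mksalt(crypt.METHOD_SHA256)))"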

You can then query SDB to get the hashed strings:

$ sudo salt-run sdb.get sdb://pwd/user1
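
Values can be written from the command line as well, via the sdb.set runner (the second argument below stands in for a real hashed password):

$ sudo salt-run sdb.set sdb://pwd/user3 '<hashed-password>'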

Of course, the SDB query can also be embedded in a pillar .sls file:

users:
  user1:
    fullname: User One
    uid: 2000
    gid: 1000
    password: {{ salt['sdb.get']('sdb://pwd/user1') }}

SaltStack Execution Modules

SaltStack (or Salt, for short) is a Python-based open-source configuration management software and remote execution engine. It supports the “Infrastructure as Code” approach (the process of managing and provisioning computer data centers through machine-readable definition files, rather than physical hardware configuration or interactive configuration tools) to deployment and cloud management.

In this short article, I’ll describe how you can extend the Salt functionality by adding a new execution module.

Just to clarify: you don’t need to be able to write Python or other code to use Salt in the normal use case. Adding extensions to Salt is an “advanced”, but quite interesting, topic.

A bit of theory

A Salt execution module is a Python (2.6+) or Cython module (with a few specificities) placed in a directory called _modules at the root of the Salt file server, usually /srv/salt.

An execution module usually defines a __virtual__() function, which determines whether the requirements for the module are met, and the __virtualname__ string variable, which is used by the documentation build system to know the virtual name of a module without calling its __virtual__ function.

The example developed step by step below should clarify this point.

A large and consistent corpus of libraries and functions is packaged in the Salt framework. You can speed up and greatly simplify the development of your modules by making use of these resources.

# Some examples of import:
import salt.utils
import salt.utils.itertools
import salt.utils.url
import salt.fileserver
from salt.utils.odict import OrderedDict

See the official documentation for more information, or… wait for a future article.

As an example, we can write a simple module returning some information about the CPU architecture of a Linux host.

Step by step development

We start by importing some Python and Salt libraries and by defining the __virtualname__ variable and a __virtual__() function.

# Import Python libs
import logging
import os  # os.linesep is used by the lscpu parser below
# Import Salt libs
from salt.exceptions import CommandExecutionError

__virtualname__ = 'cpuinfo'

def __virtual__():
    '''
    Only run on Linux systems
    '''
    if __grains__['kernel'] != 'Linux':
        return (False,
            'The {0} execution module cannot be loaded: '
            'only available on Linux systems.'.format(
            __virtualname__))
    return __virtualname__

As you can see, __virtual__() returns False, along with an explanatory message, when the operating system is not Linux. This means that the module cpuinfo will only be available for Linux minions, and hidden otherwise.

Salt comes with an interface to derive information about the underlying system. This is called the grains interface, because it presents salt with grains of information. Grains are collected for the operating system, domain name, IP address, kernel, OS type, memory, and many other system properties. The __grains__ dictionary contains the grains data generated by the minion that is currently being worked with.
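
For instance, the kernel grain tested by __virtual__() can be queried directly from the master:

salt '*' grains.item kernel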

It’s time now to implement the logic of our module.

log = logging.getLogger(__name__)

def _verify_run(out):
    '''
    Crash to the log if command execution was not
    successful.
    '''
    if out.get('retcode', 0) and out['stderr']:
        log.debug('Return code: {0}'.format(
            out.get('retcode')))
        log.debug('Error output\n{0}'.format(
            out.get('stderr', 'N/A')))
        raise CommandExecutionError(out['stderr'])

def _lscpu():
    '''
    Get available CPU information.
    '''
    try:
        out = __salt__['cmd.run_all']("lscpu")
    except Exception as exc:
        raise CommandExecutionError(
            'Failed to execute lscpu: {0}'.format(exc))

    _verify_run(out)

    data = dict()
    for descr, value in [elm.split(":", 1) \
        for elm in out['stdout'].split(os.linesep)]:
            data[descr.strip()] = value.strip()

    cpus = data.get('CPU(s)')
    sockets = data.get('Socket(s)')
    cores = data.get('Core(s) per socket')
    return (cpus, sockets, cores)

Note that the functions _lscpu() and _verify_run() have their names starting with an underscore (Python weak “internal use” indicator) and thus, by convention, will not be exported by Salt to the public interface.

The Salt function cmd.run_all is used here to execute an external binary (lscpu) and capture its standard output and error.

The function _verify_run() aims to catch any system error and, when necessary, abort the module execution. This code snippet shows the usage of Python exceptions in Salt: we raise a CommandExecutionError exception, declared in the Salt library salt.exceptions, if a system error has occurred.

To complete our module, we implement a function which simply calls _lscpu() and parses the user command-line arguments (if any), or the module extra arguments when our module is called by another script. A CommandExecutionError exception is raised for any invalid argument passed to our function.

def lscpu(*args):
    (cpus, sockets, cores) = _lscpu()
    infos = {
        'cores': cores,
        'logicals': cpus,
        'sockets': sockets
    }

    if not args:
        return infos

    try:
        ret = dict((arg, infos[arg]) for arg in args)
    except KeyError:
        raise CommandExecutionError(
            'Invalid flag passed to {0}.lscpu'.format(
            __virtualname__))
    return ret

This function lscpu() is public and will be available on all the Linux minions managed by Salt. Any public method that you define in a module can be invoked by prefixing its name with the corresponding virtual module (cpuinfo in our case):

salt '*' cpuinfo.lscpu

or, if you just need the number of logical CPUs:

salt '*' cpuinfo.lscpu logicals
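
The exact rendering depends on the configured outputter; with the default one, the answer for a four-CPU minion would look something like this (illustrative values):

minion1:
    ----------
    logicals:
        4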

We have extended Salt.

The final module

When we put all of the preceding code together, we end up with the following code:

'''
SaltStack module returning some information about the CPU
architecture.  This module parses the output of the command lscpu.
'''
# Import Python libs
import logging
import os  # os.linesep is used by the lscpu parser
# Import salt libs
from salt.exceptions import CommandExecutionError

__virtualname__ = 'cpuinfo'

def __virtual__():
    '''
    Only run on Linux systems
    '''
    if __grains__['kernel'] != 'Linux':
        return (False,
            'The {0} execution module cannot be loaded: '
            'only available on Linux systems.'.format(
            __virtualname__))
    return __virtualname__

log = logging.getLogger(__name__)

def _verify_run(out):
    '''
    Crash to the log if command execution was not
    successful.
    '''
    if out.get('retcode', 0) and out['stderr']:
        log.debug('Return code: {0}'.format(
            out.get('retcode')))
        log.debug('Error output\n{0}'.format(
            out.get('stderr', 'N/A')))
        raise CommandExecutionError(out['stderr'])

def _lscpu():
    '''
    Get available CPU information.
    '''
    try:
        out = __salt__['cmd.run_all']("lscpu")
    except Exception as exc:
        raise CommandExecutionError(
            'Failed to execute lscpu: {0}'.format(exc))
    _verify_run(out)

    data = dict()
    for descr, value in [elm.split(":", 1) \
        for elm in out['stdout'].split(os.linesep)]:
        data[descr.strip()] = value.strip()

    cpus = data.get('CPU(s)')
    sockets = data.get('Socket(s)')
    cores = data.get('Core(s) per socket')

    return (cpus, sockets, cores)

def lscpu(*args):
    '''
    Return the number of cores, logical CPUs, and CPU sockets
    by parsing the output of the lscpu command.

    CLI Example:

        .. code-block:: bash

            salt '*' cpuinfo.lscpu
            salt '*' cpuinfo.lscpu logicals
    '''
    (cpus, sockets, cores) = _lscpu()
    infos = {
        'cores': cores,
        'logicals': cpus,
        'sockets': sockets
    }
    if not args:
        return infos
    try:
        ret = dict((arg, infos[arg]) for arg in args)
    except KeyError:
        raise CommandExecutionError(
            'Invalid flag passed to {0}.lscpu'.format(
            __virtualname__))
    return ret

You can find other examples on this GitHub page.

PyXymon

A Python module for writing Xymon external scripts

I’m very happy to announce the immediate availability of PyXymon, release 3.

PyXymon is a simple Python module that helps you write Xymon external scripts in Python. It provides some methods for rendering the messages you want to display in the Xymon web page and for sending them to the Xymon server.

PyXymon reads the required information from the Xymon environment variables, so you do not need to add any extra configuration file.

This project is hosted at GitHub.

Overcoming catastrophic forgetting in neural networks

Another pretty good step forward in Deep Neural Networks from DeepMind.

They took inspiration from neuroscience-based theories about the consolidation of previously acquired skills and memories in mammalian and human brains: connections between neurons are less likely to be overwritten if they have been important in previously learnt tasks. This mechanism is known as “synaptic consolidation”.

The result is a neural network model that can learn several tasks without overwriting what was previously learnt, a known limitation of the current neural network approach known as “catastrophic forgetting”.

The new algorithm has been called “Elastic Weight Consolidation” (EWC).
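
In short (see the paper for the actual derivation), EWC adds a quadratic penalty that anchors the parameters that were important for an old task A while a new task B is being learnt, minimizing approximately

L(θ) = L_B(θ) + Σ_i (λ/2) F_i (θ_i − θ*_A,i)²

where L_B is the loss for task B, F_i is the Fisher information of parameter i (a measure of its importance to task A), θ*_A are the parameters learnt for task A, and λ sets how important the old task is relative to the new one.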

All the details can be read in their latest PNAS paper.

New Security Threats from the IoT

Bruce Schneier on New Security Threats from the Internet of Things

An article that’s worth reading.
The main point, which seems to me the most original part of the talk, is the following equivalence:

IoT (Internet of Things)
==
world-size distributed robot across the Internet.

«Through the sensors, we’re giving the Internet eyes and ears. Through the actuators, we’re giving the Internet hands and feet. Through the processing — mostly in the cloud — we’re giving the Internet a brain. Together, we’re creating an Internet that senses, thinks, and acts. This is the classic definition of a robot, and I contend that we’re building a world-sized robot without even realizing it».

The Internet will be a computerized, networked, and interconnected world that we’ll live in, made of devices often sold at low margin by companies that simply don’t have the expertise to make them secure: computer security is becoming everything security.

Emergent properties may also arise: unexpected behaviors that stem from the interaction between the components of a complex system and their environment. In some contexts, emergent properties can be beneficial, as when users adapt products to support tasks that designers never intended. They can also be harmful if they undermine important safety requirements.

This is definitely an interesting topic.