Nagios Plugins for Linux v20

I’m pleased to announce the immediate, free availability of the Nagios Plugins for Linux version 20.
Full details about what’s included can be found in the release notes.
As usual, you can download the sources from GitHub.
Bug reports, feature requests, and ideas for improvements are welcome!

Security fixes

Some insecure data handling issues discovered by Coverity in the new test framework have been fixed.


The Clang Static Analyzer can now be executed by running the command

make -C tests check-clang-checker

in the project root directory. All the warnings spotted by this analyzer have been fixed.

A new Docker-based framework for packaging the Nagios Plugins for Linux (rpm and deb packages) is now available. The following Linux distributions are supported:

CentOS/RHEL 5, 6, 7
Debian 6, 7, 8
Fedora 24/25/rawhide.

The error messages displayed when the “count” or “delay” arguments are too large have been improved.

Nagios Plugins for Linux v19

The release 19 of the Nagios Plugins for Linux is now available for download!

You can download the tarball from GitHub.

As usual, bug reports, feature requests, and ideas for improvements are welcome!



Recent versions of multipath no longer open a multipathd socket file in the file system, but instead use an abstract namespace socket. Thanks to Chris Procter “chr15p” for reporting the issue and creating a pull request.


Fixed the performance data output.



Fixed the long-standing gcc compiler warning “dereferencing type-punned pointer might break strict-aliasing rules”. This was a false positive, but the code has been modified to silence the warning.

A larger query buffer is now allocated, so that this plugin works on systems with many mapped disks.

Test Suite

A framework for testing the code (make check) has been added and some tests are now available.

Compatibility issues


By default the abstract namespace socket “/org/kernel/linux/storage/multipathd” is now selected at build time.
If you need to monitor old distributions (RHEL5 and RHEL6, for instance), you need to configure this package as follows:

./configure --with-socketfile=/var/run/multipathd.sock


Andrew Ng: advice to students

What is one piece of advice you would like to give to students?

When deciding how to spend your time, I recommend you take into account two criteria:
  • Whether what you’re doing can change the world;
  • How much you’ll learn.

Even today, this is how I decide how to spend my time.

Our society today is incredibly good at giving individuals the opportunities to change the world. With digital technology and modern communications, ideas and products can spread faster than ever before. With the right ideas and strong execution, any person can quickly help a lot of others on our planet.

So, ask yourself: If what you’re working on succeeds beyond your wildest dreams, would you have significantly helped other people? If not, then keep searching for something else to work on. Otherwise you’re not living up to your full potential.

Second, especially when you’re young, don’t underestimate the value of investing in your own future education.

My definition of “young” is anyone less than 100 years old.

Anything you learn will pay off for decades to come. But it won’t be easy. Once you’re out of school, investing time in learning has relatively few short-term rewards. There’s no teacher standing over your shoulder to give you a grade or motivate you to keep studying. But if you can inspire yourself or make it fun to keep reading, keep playing with ideas, keep talking to people that you can learn from, then over a span of years you can become incredibly talented in your areas of study.

For myself, I love reading. I have >1000 books on my kindle, and spend a lot of time in the evenings and weekends reading. My reading diet includes academic research papers, books on business strategy, the innovation process, products, biographies of people I admire, and more. I sometimes take MOOCs. I also love talking to people who can teach me new things, whether an old friend or a new acquaintance.

The process of learning will also help you decide what to work on. When you’ve seen enough examples of what others are doing to change the world, you’ll also get more and more ideas for how you can change the world yourself.

To summarize: Keep investing in your own learning, even when it’s hard. And keep searching for a way to contribute to something that helps humanity!

Andrew Ng, Chief Scientist at Baidu; Chairman/Co-Founder of Coursera;  Stanford faculty.

Nagios Plugins for Linux 18 released

Here it is: version 18 of the Nagios Plugins for Linux.

It’s mainly a bugfix release, with a fix for an issue recently pointed out by Paul Dunkler: some of the plugins did not terminate with the correct return code when reaching a warning or critical threshold.
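The return code is how a monitoring plugin reports its state: by convention 0 = OK, 1 = WARNING, 2 = CRITICAL, 3 = UNKNOWN, and the monitoring server evaluates that exit code rather than the printed text. Here is a minimal sketch of the expected behavior, written in Python for illustration (the actual plugins are written in C, and the helper names below are hypothetical):

```python
import sys

# Standard monitoring-plugin exit codes (Nagios plugin conventions)
STATE_OK, STATE_WARNING, STATE_CRITICAL, STATE_UNKNOWN = 0, 1, 2, 3
STATE_NAMES = ["OK", "WARNING", "CRITICAL", "UNKNOWN"]

def check_value(value, warning, critical):
    """Return the plugin state for a value checked against two thresholds."""
    if value >= critical:
        return STATE_CRITICAL
    if value >= warning:
        return STATE_WARNING
    return STATE_OK

def run_check(value, warning, critical):
    """Print the status line, then exit with the matching return code --
    the exit code, not the message, is what the server acts on."""
    state = check_value(value, warning, critical)
    print("CHECK %s | value=%s" % (STATE_NAMES[state], value))
    sys.exit(state)
```

The bug class fixed in this release is exactly the case where a plugin prints the right WARNING/CRITICAL message but exits with the wrong code.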

The check_memory plugin no longer reports unreclaimable slab memory as cached, since that memory cannot be freed even under memory pressure.
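The distinction comes from /proc/meminfo: reclaimable slab (SReclaimable) behaves like page cache and can be dropped under pressure, while SUnreclaim cannot. A minimal sketch of this accounting, parsing /proc/meminfo-style input (the sample values and helper names are illustrative, not the plugin’s actual code):

```python
# Sample /proc/meminfo fragment; real files contain many more fields.
SAMPLE_MEMINFO = """\
Cached:           524288 kB
SReclaimable:      65536 kB
SUnreclaim:        32768 kB
"""

def parse_meminfo(text):
    """Return a dict of /proc/meminfo fields, values in kB."""
    info = {}
    for line in text.splitlines():
        key, _, rest = line.partition(":")
        if rest:
            info[key.strip()] = int(rest.split()[0])
    return info

def cached_kb(info):
    """Cached memory: page cache plus reclaimable slab.
    SUnreclaim is deliberately excluded -- it cannot be freed."""
    return info.get("Cached", 0) + info.get("SReclaimable", 0)
```

With the sample above, `cached_kb(parse_meminfo(SAMPLE_MEMINFO))` counts Cached + SReclaimable but leaves the 32768 kB of SUnreclaim out of the cached figure.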

The check_cpu plugin, when executed with the ‘-i | --cpuinfo’ switch, now correctly detects the CPU 64-bit op-mode on 64-bit architectures.

A minor memory resource leak reported by the Coverity Scan tool has also been fixed.

You can download the source code (.xz compressed tarball) here and visit the GitHub project web page for more information.

As usual, bug reports, feature requests, and ideas for improvements are welcome!

Gradient Boosting

Gradient boosting ensemble technique for regression

Gradient boosting is a machine learning technique for regression and classification problems, which produces a prediction model in the form of an ensemble of weak prediction models, typically decision trees. It builds the model in a stage-wise fashion like other boosting methods do, and it generalizes them by allowing optimization of an arbitrary differentiable loss function. (source: Wikipedia)
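The stage-wise idea in the definition above can be sketched in a few lines: for squared loss, each new weak learner is fitted to the current residuals (the negative gradient), and its prediction is added with a small learning rate. This is a minimal illustrative implementation using one-dimensional decision stumps as the weak learners, not a production library:

```python
import numpy as np

def fit_stump(x, residuals):
    """Best depth-1 regression tree (stump) on 1-D input, squared loss."""
    best_sse, best_stump = np.inf, None
    for t in np.unique(x):
        left, right = residuals[x <= t], residuals[x > t]
        if len(left) == 0 or len(right) == 0:
            continue
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if sse < best_sse:
            best_sse, best_stump = sse, (t, left.mean(), right.mean())
    return best_stump

def gradient_boost(x, y, n_stages=100, lr=0.1):
    """Stage-wise boosting: each stump fits the current residuals,
    i.e. the negative gradient of the squared loss."""
    base = y.mean()
    pred = np.full_like(y, base, dtype=float)
    stumps = []
    for _ in range(n_stages):
        stump = fit_stump(x, y - pred)
        if stump is None:
            break
        t, left_val, right_val = stump
        pred += lr * np.where(x <= t, left_val, right_val)
        stumps.append(stump)
    return base, lr, stumps

def predict(model, x):
    """Sum the base prediction and all scaled stump contributions."""
    base, lr, stumps = model
    pred = np.full(len(x), base, dtype=float)
    for t, left_val, right_val in stumps:
        pred += lr * np.where(x <= t, left_val, right_val)
    return pred
```

Swapping the residual computation for the gradient of another differentiable loss is exactly the generalization the Wikipedia definition refers to.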

This is a great video tutorial from Alexander Ihler, Associate Professor at Information & Computer Science, UC Irvine.

You can find other interesting data science tutorials by Alexander Ihler on his YouTube channel:


Introduction to Deep Learning with Python

Alec Radford, Head of Research at indico Data Solutions, speaking on deep learning with Python and the Theano library.

An amazing data science YouTube tutorial with emphasis on high performance computing, natural language processing using recurrent neural nets, and large scale learning with GPUs.

This tutorial provides an excellent example of how deep learning can be practically applied to real-world problems.

SlideShare presentation is available here:

Neuroscientists simulate tiny part of rat brain

82 scientists and engineers simulate 37 million synapses in massive Blue Brain Project

The Blue Brain Project, the simulation core of the European Human Brain Project, has released a draft digital reconstruction of the neocortical microcircuitry of a piece of the rat-brain neocortex — about a third of a cubic millimeter of brain tissue containing about 30,000 neurons connected by nearly 40 million synapses.

The electrical behavior of the virtual brain tissue was simulated on supercomputers and found to match a range of previous observations made in experiments on the brain, validating its biological accuracy and providing new insights into the functioning of the neocortex. The project has published the full set of experimental data and the digital reconstruction, allowing other researchers to use them.

Although the resulting data collection is one of the most comprehensive to date on a part of the brain, it remains far from sufficient to reconstruct a complete map of the microcircuitry, admits Henry Markram. “We can’t and don’t have to measure everything. The brain is a well-ordered structure, so once you begin to understand the order at the microscopic level, you can start to predict much of the missing data.”

The Open Source Software (in C, C++, Java, Python) produced and used by the Blue Brain Project is available on GitHub.

Source: Neuroscientists simulate tiny part of rat brain in a supercomputer

Let’s Encrypt – Arriving Q4 2015

Encrypt the entire web

Let’s Encrypt is a new, free, automated, and open certificate authority service provided by Internet Security Research Group (ISRG), a California public benefit corporation.

The project aims to make encrypted connections in the World Wide Web the default case, by providing free X.509 certificates for Transport Layer Security encryption (TLS).

They’ll be working towards general availability over the next couple of months by issuing certificates to domains participating in the beta program.
You can request that your domain be included in the beta program by clicking here.

Getting Started with Storm

“Getting Started with Storm”, by Jonathan Leibiusky, Gabriel Eisbruch & Dario Simonassi.
Continuous Streaming Computation with Twitter’s Cluster Technology.

Even as big data is turning the world upside down, the next phase of the revolution is already taking shape: real-time data analysis. This hands-on guide introduces you to Storm, a distributed, JVM-based system for processing streaming data. Through simple tutorials, sample Java code, and a complete real-world scenario, you’ll learn how to build fast, fault-tolerant solutions that process results as soon as the data arrives.

Discover how easy it is to set up Storm clusters for solving various problems, including continuous data computation, distributed remote procedure calls, and data stream processing.

Apache Storm project site

Note that this book is based on Storm 0.7.1, while the latest version is currently 0.9.5, so it is somewhat outdated and should be supplemented with the online documentation.

SF Big Analytics – Update on Speech Recognition and HPC at Baidu Research

Baidu hosted SF Analytics Meetup at their Sunnyvale office on August 19th, 2015 – Updates on Speech Recognition, Deep Learning and HPC.

SF Big Analytics Part 1. Deep Learning by Chief Scientist Andrew Ng

SF Big Analytics Part 2. Bryan Catanzaro, Senior Researcher: “Why is HPC So Important to AI?”

SF Big Analytics Part 3. Awni Hannun, Senior Researcher: “Update on Deep Speech”