Building a machine monitoring tool from scratch using Python

(Image: a Glances machine monitoring dashboard)

A monitoring tool allows users to see the status of a machine at a specific point in time. The status can include, but is not limited to, CPU usage, network latency, memory usage, and disk usage.

Getting the statistics


In order to do this, we can use a library that retrieves information from the machine. psutil (process and system utilities), a Python library for retrieving information on running processes and system utilization (CPU, memory, disks, network), would be a perfect fit. However, since we want to build the agent from scratch, we will create our own library to achieve this.

CPU and System Load


First, we will check the number of both physical and logical CPUs and then check the system load. For this, we will use the Python os module, which provides functions to access the CPU count and the system load, as shown below.

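A minimal sketch of this step might look like the following, assuming a macOS host; the stats dictionary is an illustrative name, and the physical CPU count comes from sysctl, since the os module only exposes the logical count and the load averages.

import os
import subprocess

stats = {}

# Logical CPU count and 1/5/15-minute load averages from the os module
stats["logical_cpus"] = os.cpu_count()
stats["load_avg_1m"], stats["load_avg_5m"], stats["load_avg_15m"] = os.getloadavg()

# Physical CPU count via sysctl (macOS-specific assumption)
physical = subprocess.run(["sysctl", "-n", "hw.physicalcpu"],
                          capture_output=True, text=True).stdout.strip()
stats["physical_cpus"] = int(physical)

print(stats)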

Memory (RAM) usage

For memory usage, we will start by showing the total memory, followed by used memory. For this, we will utilize the operating system commands sysctl and vm_stat to get information about the RAM. We then parse these results and add them to our statistics dictionary, as shown below.

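A rough sketch of the sysctl/vm_stat approach might look like this; the parsing details and the definition of "used" memory as active plus wired pages are assumptions, and the stats dictionary name is illustrative.

import re
import subprocess

stats = {}

# Total physical memory in bytes, from sysctl (macOS)
total_bytes = int(subprocess.run(["sysctl", "-n", "hw.memsize"],
                                 capture_output=True, text=True).stdout.strip())
stats["memory_total_gb"] = round(total_bytes / 1024 ** 3, 2)

# Parse vm_stat output: the page size plus the count of each page category
vm_stat = subprocess.run(["vm_stat"], capture_output=True, text=True).stdout
page_size = int(re.search(r"page size of (\d+) bytes", vm_stat).group(1))
pages = {m.group(1): int(m.group(2))
         for m in re.finditer(r"(Pages [\w ]+):\s+(\d+)\.", vm_stat)}

# Approximate used memory as active + wired pages
used_bytes = (pages["Pages active"] + pages["Pages wired down"]) * page_size
stats["memory_used_gb"] = round(used_bytes / 1024 ** 3, 2)

print(stats)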

Disk Usage

Here we will get the total disk size, check the used disk space, and finally check the free disk space, then add all of this to the dictionary of statistics.

import shutil
import subprocess
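
A minimal sketch of this step, using shutil.disk_usage on the root filesystem, might look like the following; the stats dictionary and the gigabyte conversion are illustrative choices.

import shutil

stats = {}

# shutil.disk_usage returns total, used, and free space in bytes for the given path
disk = shutil.disk_usage("/")
stats["disk_total_gb"] = round(disk.total / 1024 ** 3, 2)
stats["disk_used_gb"] = round(disk.used / 1024 ** 3, 2)
stats["disk_free_gb"] = round(disk.free / 1024 ** 3, 2)

print(stats)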

Network Latency

Network latency is a measure of how long it takes a data packet to travel from one designated point to another. With the ping command, the reported round-trip time can be taken as the network latency. We will use the ping command to determine the network latency of our machine.

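One way to do this is to shell out to ping and parse the average round-trip time from its summary line; the target host, packet count, and parsing below are assumptions, and the stats dictionary name is illustrative.

import re
import subprocess

stats = {}

# Ping a well-known host a few times; the -c flag limits the number of packets
result = subprocess.run(["ping", "-c", "3", "8.8.8.8"],
                        capture_output=True, text=True)

# ping prints a summary such as: round-trip min/avg/max/stddev = 13.5/14.2/15.1/0.6 ms
match = re.search(r"= [\d.]+/([\d.]+)/", result.stdout)
stats["network_latency_ms"] = float(match.group(1)) if match else None

print(stats)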

All of the above has been combined into one file named monitor.py.
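
A compact skeleton of how the pieces might fit together is sketched below; the function names and the JSON output format are illustrative choices, not a definitive layout.

import json
import os
import re
import shutil
import subprocess


def cpu_stats():
    # Logical CPU count and 1-minute load average via the os module
    load_1m, _, _ = os.getloadavg()
    return {"logical_cpus": os.cpu_count(), "load_avg_1m": load_1m}


def memory_stats():
    # Total RAM from sysctl (macOS); used RAM would be parsed from vm_stat as above
    total = int(subprocess.run(["sysctl", "-n", "hw.memsize"],
                               capture_output=True, text=True).stdout.strip())
    return {"memory_total_gb": round(total / 1024 ** 3, 2)}


def disk_stats():
    # Disk totals for the root filesystem
    disk = shutil.disk_usage("/")
    return {"disk_used_gb": round(disk.used / 1024 ** 3, 2),
            "disk_free_gb": round(disk.free / 1024 ** 3, 2)}


def network_latency():
    # Average round-trip time parsed from the ping summary line
    out = subprocess.run(["ping", "-c", "3", "8.8.8.8"],
                         capture_output=True, text=True).stdout
    match = re.search(r"= [\d.]+/([\d.]+)/", out)
    return {"network_latency_ms": float(match.group(1)) if match else None}


if __name__ == "__main__":
    stats = {**cpu_stats(), **memory_stats(), **disk_stats(), **network_latency()}
    print(json.dumps(stats, indent=2))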


(Screenshot: the output from my machine)

Running the Agent

Now that we can collect these statistics, we need a way to ensure that the collection script is executed on a schedule, for example every 5 minutes (or any custom number of minutes). For this, we will use crontab to run the monitoring script; the entry below runs it every 2 minutes and writes its output to a log file.

*/2 * * * * location_to_python3/python3 ~/monitor.py > /tmp/monitor.log 2>&1
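
To install the entry, open your user crontab with crontab -e and paste the line above (the five cron fields are minute, hour, day of month, month, and day of week). You can then confirm it is running, for example:

# List the installed entries to confirm the line was saved
crontab -l

# After a couple of minutes, inspect the collected output
cat /tmp/monitor.log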

Remember to move the monitoring script to the home directory.

And that’s it! Thank you for reading.

The above script has only been tested on macOS, and a few small modifications might be needed for it to work on Linux and Windows.

If you found Esir’s blog useful, check out our other blog posts for more essential insights!

Are you a developer interested in growing your software engineering career? Apply to join the Andela Talent Network today.
