Outputting DS18B20 temperatures on a LCD1602 – Raspberry Pi Temperature Monitoring Part 4

This is part of a tutorial series. If you feel a bit lost, I suggest following the tutorial in order:

Just interested in the code? – https://github.com/albertherd/DS18B20Reader

This article assumes that you’ve connected one or more DS18B20 sensors to your Raspberry Pi and configured your Raspberry Pi to work with an LCD1602. If you have not, read the above-mentioned links.

Okay – this should be a quick post – most of the heavy lifting is done. Remember the LCD1602 library that we used in the second tutorial? We’ll be using that to simply take the temperature we captured in the third tutorial and display it.

We’ll need to make the following changes:

  1. Import the LCD1602 library.
  2. Initialize the LCD1602 on application startup.
  3. Read and display information on the LCD1602.
  4. Cleanup resources before exiting – this is important since we’ll need to turn off the backlight after usage.

Import the LCD1602 library

Get a copy of this repository – either by cloning it or simply copying the lcd1602.c and lcd1602.h files into your solution. We’ll also be adding them to our CMake file – it should look something like the following. I’ve left the solution on Debug in this case.

set(SOURCE main.c sensor.c lcd1602.c main.h sensor.h lcd1602.h)
set(CMAKE_BUILD_TYPE Debug)
add_executable(DS18B20Reader ${SOURCE})

Now it’s simply a matter of including lcd1602.h in the solution. Next!

Initialize the LCD1602 on application startup

We’ll need to initialize the library and open a connection to the display at the right address – you can find your device address in the second tutorial (using i2cdetect). To simplify things, I hard-coded the value, which is 0x27 in my case. Call this when initializing the application.

void InitializeLCD()
{
    int rc;
    rc = lcd1602Init(1, LCDADDRESS);
    if (rc)
    {
        printf("Initialization failed; aborting...\n");
        return;
    }
}
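For reference, the LCDADDRESS, LCD1602LINES and LCD1602CHARACTERS constants used throughout these snippets aren’t shown above; they would live in a header and could be defined along these lines – the address being whatever i2cdetect reported for your module (0x27 in my case):

// Assumed values based on the prose: a 16x2 display sitting at I2C address 0x27.
#define LCDADDRESS        0x27
#define LCD1602LINES      2
#define LCD1602CHARACTERS 16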

Read and display information on the LCD1602

We’ll be modifying our main loop to output content to the console (not that important) and to the LCD1602 screen. The main adjustment we’ve made in the main loop is that we’ve broken our output down into two functions – outputting to the console (not important) and outputting to the LCD1602. Let’s see the main loop:

void ReadTemperatureLoop(SensorList *sensorList)
{
    while(!sigintFlag)
    {
        for(int i = 0; i < sensorList->SensorCount; i++)
        {
            float temperature = ReadTemperature(sensorList->Sensors[i]);
            PrintTemperatueToLCD1602(sensorList->Sensors[i], i % LCD1602LINES, temperature);
            LogTemperature(sensorList->Sensors[i], temperature);
        }       
    }
}
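For reference, the sigintFlag used in the loop above is simply a flag that gets set when the user presses Ctrl+C, so the loop can exit cleanly and the cleanup code further down gets a chance to run. A minimal sketch of how it might be wired up (the actual repository may differ slightly):

#include <signal.h>

volatile sig_atomic_t sigintFlag = 0;

// Signal handler: just record that SIGINT was received; the main loop checks the flag.
void OnSigInt(int signum)
{
    sigintFlag = 1;
}

// During application startup:
// signal(SIGINT, OnSigInt);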

Let’s now have a look at the important method – PrintTemperatueToLCD1602. Keeping in mind that the LCD1602 has two lines, we’ll be receiving the calculated line number as a parameter. Taking the sensor index modulo LCD1602LINES makes sure that the value is always either 0 or 1.

We’ll also need to remember that each line holds up to 16 characters, so we’re truncating anything longer than 16 characters (the buffer is 16 + 1 to leave room for the null terminator). We’ll then just pass the (potentially truncated) string to the LCD1602 and voilà!

void PrintTemperatueToLCD1602(Sensor *sensor, int lineToPrintDataOn, float temperature)
{
    char temperatureString[LCD1602CHARACTERS + 1];
    snprintf(temperatureString, LCD1602CHARACTERS + 1, "%s : %.2fC", sensor->SensorName, temperature);

    lcd1602SetCursor(0, lineToPrintDataOn);
    lcd1602WriteString(temperatureString);
}

Cleanup resources before exiting

After we’re done, it’s just a matter of cleaning up resources. As previously mentioned, this is important since we’ll need to turn off the backlight after usage. In the cleanup method, we’re just calling the lcd1602Shutdown method.

void Cleanup(SensorList *sensorList)
{
    printf("Exiting...\n");
    FreeSensors(sensorList);
    lcd1602Shutdown();
}

Fetch a complete copy of the code from GitHub – https://github.com/albertherd/DS18B20Reader

Let’s run the application! In a terminal with git and cmake installed, run the following commands:

git clone https://github.com/albertherd/DS18B20Reader
cd ./DS18B20Reader
cmake . && make && ./DS18B20Reader "Sensor"

With some luck, your LCD1602 should display something like the below. In my case I have two sensors so I’ve fired up the application using the following syntax:

./DS18B20Reader "Sensor1" "Sensor2"

LCD1602OutputCropped

Until the next one!

Using C to monitor temperatures through your DS18B20 thermal sensor – Raspberry Pi Temperature Monitoring Part 3

This is part of a tutorial series. If you feel a bit lost, I suggest following the tutorial in order:

Just interested in the code? – https://github.com/albertherd/DS18B20Reader

Since you made it here, great! Your Raspberry Pi should have one or more DS18B20 thermal sensors connected, like the image below.

sensor2.png

Now that we have our DS18B20 thermal sensor connected to our Raspberry Pi, it’s time to do some programming to read out the temperature! Our application will need to be able to do the following tasks:

  1. Discover all the DS18B20 sensors (in my case, I’ve connected 2 but this application should handle an arbitrary number of sensors).
  2. Assign a friendly name so we’ll know which sensor is which and store them in a list.
  3. Retrieve and parse the information from the device.
  4. Do whatever necessary with the gathered information.

1) Discover all the DS18B20 sensors

First and foremost, our code cannot just assume that devices exist on the system – we’ll need to go and discover them. Since the DS18B20 makes use of the 1-Wire protocol, devices will live under the /sys/bus/w1/devices/ directory. Therefore, our code will need to look for entries under this directory whose names start with 28, since all DS18B20 device names start with 28. Let’s start by finding out how many devices are connected.

typedef struct Sensor
{
    char *SensorName;
    FILE *SensorFile;    
} Sensor;
 
typedef struct SensorList
{
    Sensor **Sensors;
    int SensorCount;
} SensorList;

DIR *dir;
struct dirent *dirEntry;

SensorList *sensorList = malloc(sizeof(SensorList));
sensorList->SensorCount = 0;

if(!(dir = opendir("/sys/bus/w1/devices/")))
    return sensorList;

while((dirEntry = readdir(dir)))
{        
    if(strncmp(dirEntry->d_name, "28", 2) == 0)
    {
        sensorList->SensorCount++;
    }
}

2) Assign a friendly name so we’ll know which sensor is which and store them in a list

Now that we’ve discovered the devices connected to the system, it’s time to save a reference to each one and, optionally, a friendly name as well. The logic from step 1 mostly applies; note that we rewind the directory stream before iterating over it again.

sensorList->Sensors = malloc(sizeof(Sensor*) * sensorList->SensorCount);
Sensor **currentSensor = sensorList->Sensors;

rewinddir(dir); // rewind the directory stream so we can iterate over the devices again

int sensorNamesAllocated = 0;
while((dirEntry = readdir(dir)))
{        
    if(strncmp(dirEntry->d_name, "28", 2) == 0)
    {   
        char *sensorName;
        if(sensorNamesCount > sensorNamesAllocated)
        {
            sensorName = strdup(*sensorNames);
            sensorNames++;
            sensorNamesAllocated++; // keep track of how many supplied names we've used up
        }
        else
        {
            sensorName = strdup("Sensor");
        }

        char sensorFilePath[64];     
        sprintf(sensorFilePath, "%s%s%s",  "/sys/bus/w1/devices/", dirEntry->d_name, "/w1_slave");
        *currentSensor = GetSensor(sensorFilePath, sensorName);
        currentSensor++;
    }
}

Sensor *GetSensor(char *sensorId, char *sensorName)
{
    Sensor *sensor = malloc(sizeof(Sensor));
    sensor->SensorFile = fopen(sensorId, "r");
    sensor->SensorName = sensorName;
    return sensor;
}    
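One thing worth pointing out: sensorNames and sensorNamesCount in the snippet above come from the command-line arguments – that’s why the application is later launched as ./DS18B20Reader "Sensor1" "Sensor2". A minimal, hypothetical sketch of how they might be wired up in main (the actual repository may structure this differently):

int main(int argc, char **argv)
{
    // Everything after the program name is treated as a friendly sensor name.
    char **sensorNames = argv + 1;
    int sensorNamesCount = argc - 1;

    // ... the discovery and naming code from steps 1 and 2 would run here ...
    return 0;
}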

3) Read the temperature from the device

This is the most exciting part – we actually get to read the temperatures! Using the sensor information we got from steps 1 and 2, we can get the device, open it as a file, extract the readings and parse them accordingly. Using the FILE API makes it very easy to do so – grab all the contents and store them in a buffer. As mentioned in the first tutorial, the content of the file looks as follows –

0b 01 55 05 7f 7e 81 66 bf : crc=bf YES
0b 01 55 05 7f 7e 81 66 bf t=16687

We’re only interested in the t= component, so some string manipulation and float conversion will take the 16687 and convert it into 16.687°C. We’re also doing some range checking, since the DS18B20 is rated between -55°C and +125°C.

   
long deviceFileSize;
char *buffer;

FILE *deviceFile = sensor->SensorFile;
fseek(deviceFile, 0, SEEK_END);
deviceFileSize = ftell(deviceFile);
fseek(deviceFile, 0, SEEK_SET);

buffer = calloc(deviceFileSize + 1, sizeof(char)); // +1 guarantees a null terminator for strstr below

fread(buffer, sizeof(char), deviceFileSize, deviceFile);
char *temperatureComponent = strstr(buffer, "t=");
if(!temperatureComponent)
{
    free(buffer);
    return -1;
}

temperatureComponent +=2; //move pointer 2 spaces to compensate for t=

float temperatureFloat = atof(temperatureComponent);
temperatureFloat = temperatureFloat / 1000;

if(temperatureFloat < -55)
    temperatureFloat = -55;
else if(temperatureFloat > 125)
    temperatureFloat = 125;

free(buffer);
return temperatureFloat;

4) Do whatever necessary with the gathered information

We now have the information at hand – great! We can do many sorts of things with it, such as sending an email, activating some other device or whatever is necessary. For demo purposes, we’re simply going to output the contents to the console just to see it working. This is in a loop, so we’ll keep reading the temperature until the application exits.

while(1)
{
    for(int i = 0; i < sensorList->SensorCount; i++)
    {
        time_t currentTime = time(NULL);
        char dateTimeStringBuffer[32];
        strftime(dateTimeStringBuffer, 32, "%Y-%m-%d %H:%M:%S", localtime(&currentTime));

        float temperature = ReadTemperature(sensorList->Sensors[i]);
        printf("%s - %s - %.2fC\n", dateTimeStringBuffer, sensorList->Sensors[i]->SensorName, temperature);
    }
}

To try out the code as a whole solution, grab a copy from GitHub – https://github.com/albertherd/DS18B20Reader

In a terminal with git and cmake installed, run the following commands:

git clone https://github.com/albertherd/DS18B20Reader
cd ./DS18B20Reader
cmake . && make && ./DS18B20Reader "Sensor"

If the output looks like the below, congratulations!

ds18b20 tutorial sample output.png

In the next tutorial, we’ll pick up from here and we’ll start outputting the content on an LCD1602! Until the next one.

Connecting a LCD1602 with an I2C module to your Raspberry Pi – Raspberry Pi Temperature Monitoring Part 2

The LCD1602 is a very popular LCD that can be connected to various devices, such as the Raspberry Pi. On its own, the LCD1602 is quite tricky to wire up since it requires 16 pins to be connected. It can also be purchased with an I2C module, which reduces the number of pins needed to just 4.

For this tutorial, we’ll be working with a LCD1602 with an I2C module. I got mine from AliExpress for around $2.50. Make sure to grab a set of jumper cables as you’ll need them to connect the LCD to the Raspberry Pi. I got mine from AliExpress as well for around $1.50.

IMG_20190102_110607.jpg

img_20190109_230900

Let’s start by wiring it up. We have 4 pins to connect – GND (ground), VCC (power, 5V), SDA (data line) and SCL (clock line). GND and VCC can be connected to any GND and 5V pin. SDA and SCL should be connected to pins BCM 2 and BCM 3 respectively.

lcd1602_i2c_raspberrypi

img_20190109_231321

If you’re following the Raspberry Pi Temperature Monitoring Part 1 and connected the DS18B20 temperature sensors, you should now have the following configuration.

lcd1602_i2c_ds18b20_raspberrypi

img_20190109_232649

Great! We’re done from the hardware’s side – let’s start configuring our Raspberry Pi to communicate with our LCD.

Firstly, let’s enable I2C from the Raspberry Pi config. Fire up raspi-config to get started: sudo raspi-config

Now navigate to Interfacing Options => I2C => Enable I2C

raspi-config-interfacing-options

raspi-config-interfacing-options-i2c

Now that we’ve enabled I2C communication, it’s time to start development! We’ll need to get some tools before we start working though, so fire up a shell and input:

sudo apt-get install i2c-tools

Once that’s done, the LCD is ready to be programmed! Let’s make sure that the LCD is properly connected and working. In a shell, type:

i2cdetect -y 1

The output should be something like the below. Note the number output by the command; it will be needed later on when we try out our demo code. In this case, the address is 27 (i.e. 0x27).

i2cdetect

Great! Now it’s time to test out our display and see if it works! We’ll be using a GitHub library – https://github.com/albertherd/LCD1602, forked from https://github.com/bitbank2/LCD1602. We’ll be using my fork since the original repository has an unresolved issue with clearing the display.

After you’ve cloned the repository into your working directory, it’s time to use the address (0x27 in my case) obtained earlier. Open main.c, find the call to lcd1602Init and change the second parameter. This is how it looks in my case:

lcd1602Init(1, 0x27);

Now it’s time to compile and run our code. If all goes well, we should be getting some text on the screen. You can change the text to whatever you’d like by changing the following lines in main.c.

lcd1602WriteString("BitBank LCD1602");
lcd1602SetCursor(0,1);
lcd1602WriteString("ENTER to quit");

Build and run using the following commands:

make
make -f make_demo
./demo

The screen should look like the below:

IMG_20190110_231937.jpg

Great! Now we’ve successfully connected our LCD1602 to our Raspberry Pi and we’re able to output content on it!

In the next part of this tutorial series, we’ll start by capturing the temperature using the sensor in our first part of the tutorial and outputting it! Stay tuned.

Connecting a DS18B20 thermal sensor to your Raspberry Pi – Raspberry Pi Temperature Monitoring Part 1

A project that I’ve been working on during the Christmas holidays was to hook up some thermal probes to my Raspberry Pi, just to play around. This tutorial simply follows the steps that I took to achieve this.

You’ll need:

  • Raspberry Pi, any flavor as long as it has GPIO headers available. I had a Raspberry Pi 2, so I used that.
  • You’ll also need the usual suspects – a USB to MicroUSB cable to hook it up to power, an HDMI cable to connect it to a display for initial configuration and an Ethernet connection to manage it through SSH. I highly recommend configuring SSH rather than using the device itself. This tutorial assumes you’re using SSH.
  • A DS18B20 sensor – I’d suggest getting one which includes a pluggable terminal to avoid soldering – just wire it up and you’re good to go. I got mine from AliExpress.
  • Also make sure your kit has 3 jumper cables. They are typically included. Just to be sure, I also got a set of female to female jumper cables from AliExpress though I did not use them for the DS18B20 sensor.

All right, let’s wire it up! The DS18B20 sensor requires three pins – data, VCC (3.3V), and ground. Connect the wires as below. Data is yellow, VCC is red and ground is black.

IMG_20190102_105252

Connect the 3 pins using the jumper cables as shown below.
sensor1.png

IMG_20190102_105726

We’ll also need to instruct the Raspberry Pi that we’re going to connect the DS18B20 sensor. This sensor makes use of the 1-Wire protocol, so let’s activate it:

  • Connect to the Raspberry Pi using SSH
  • Let’s start by editing the config file that the Raspberry Pi parses every time it boots up: sudo nano /boot/config.txt
  • Go to the end of the document and input the following. Specifying gpiopin=4 is actually optional since, by convention, 1-Wire devices are expected on GPIO pin 4 on the Raspberry Pi.
    # Enable OneWire Protocol
    dtoverlay=w1-gpio,gpiopin=4
  • Time to reboot the Raspberry Pi: sudo reboot
  • Once the Raspberry Pi reboots and you re-connect using SSH, it’s time to get data from the sensor! Let’s find the 1-Wire devices connected to the system. Let’s start by browsing to the appropriate directory: cd /sys/bus/w1/devices
  • Great! Let’s now see the devices attached to the Raspberry Pi: ls
  • This will list the devices attached using the 1-Wire protocol. You should have a device called 28-xxxxxxxxxxxx (where x stands for your unique 12-digit serial number). Let’s now browse the device. Mine is 28-02199245e07b, so let’s use it as an example: cd 28-02199245e07b
  • Once you access the device, there should be a file called w1_slave. Let’s see the contents of the file. cat w1_slave
  • The file should look like this:
    0b 01 55 05 7f 7e 81 66 bf : crc=bf YES
    0b 01 55 05 7f 7e 81 66 bf t=16687
  • If the file looks like the above, great! The temperature component is t=16687; the temperature in this case is 16.687 °C – see the snippet below for how this conversion looks in code.
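To make that conversion concrete, here’s a tiny standalone C snippet that takes the t=16687 reading from above and turns it into degrees Celsius – essentially what we’ll do properly in the next part of this series:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    // The second line of w1_slave, exactly as read from the sensor above.
    const char *line = "0b 01 55 05 7f 7e 81 66 bf t=16687";

    // Find the "t=" marker and convert the millidegree value to Celsius.
    const char *t = strstr(line, "t=");
    if (t != NULL)
        printf("%.3f C\n", atof(t + 2) / 1000); // prints 16.687 C

    return 0;
}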

We can also take this to the next level and add another thermal probe! Attach it as shown below.
sensor2
IMG_20190102_110134

This will require re-editing the /boot/config.txt. Let’s do it!

  • Re-open /boot/config.txt – sudo nano /boot/config.txt
  • Go to the end and add the following. I chose pin 24 because it’s easy to wire, being close to a 3.3V pin and a ground pin: dtoverlay=w1-gpio,gpiopin=24
  • Close and save, reboot again (sudo reboot), then cd /sys/bus/w1/devices
  • You should now see two devices named 28-xxxxxxxxxxxx

Of course, at this stage we did get the temperature, but it’s not really usable. We can get access to this information programmatically – this is what we’ll be doing in the next part of this tutorial. We’ll also be eventually showing the information on a separate LCD screen! Stay tuned!

Group files into folder structure by date using PowerShell

Lately I was sorting out around 5000 files’ worth of pictures and videos taken from a mobile phone, to upload them to a cloud drive. To make this task more manageable, I wanted to separate these files out, firstly by file type and then by date. This makes the folders far smaller and therefore makes it easier to upload the files to the cloud folder by folder.

Grouping that many files manually would be very tedious and error-prone. This task is a perfect candidate to be scripted; this is exactly what I did. Since this may be helpful to other people going through such a task, I decided to share the script.

All the script requires is a source path and a target path – everything else is handled by the script. It makes a few assumptions – for example, it only ever copies files and never deletes anything. Customising the script is easy and only requires basic programming knowledge.

Check it out here or below: https://github.com/albertherd/GroupFilesPowershell/blob/master/GroupFiles.ps1

Param(
    [Parameter(Mandatory=$true)]
    [string]$sourcePath,
    [Parameter(Mandatory=$true)]
    [string]$targetPath
)

$global:fileTypeLookup = @{};
$folderDateTimeFormat = "MM-yyyy"

function Copy-FilesIntoFoldersByMonthAndType{
    param()

    $files = Get-ChildItem -Recurse -File -Path $sourcePath
    $filesProcessed = 0

    foreach($file in $files){
        $folder = Get-DirectoryForFile $file
        Copy-Item $file.FullName -Destination $folder

        $filesProcessed++
        Write-Progress -Activity "Grouping files" -Status "$($filesProcessed) out of $($files.Count) grouped" -PercentComplete (($filesProcessed / $files.Count) * 100)
    }
}
function Get-DirectoryForFile{
    param($file)

    $monthYearDirLookup = Get-FilePathDictionary $file
    $modifiedTimeMonthYearInternal = $file.LastWriteTime.ToString("MMyyyy")

    if($monthYearDirLookup.ContainsKey($modifiedTimeMonthYearInternal)){
        return $monthYearDirLookup[$modifiedTimeMonthYearInternal]
    }

    $extensionWithoutDot = $file.Extension.Substring(1, $file.Extension.Length - 1)
    $dateFolderFileName = $file.LastWriteTime.ToString($folderDateTimeFormat)
    $newPath = $targetPath + "\" + $extensionWithoutDot + "\" + $dateFolderFileName
    $path = New-Item -ItemType Directory $newPath -Force

    $monthYearDirLookup[$modifiedTimeMonthYearInternal] = $path.FullName
    return $path.FullName
}

function Get-FilePathDictionary{
    param($file)

    if($global:fileTypeLookup.ContainsKey($file.Extension)){
        return $global:fileTypeLookup[$file.Extension]
    }

    $global:fileTypeLookup.Add($file.Extension, @{})
    return $global:fileTypeLookup[$file.Extension]
}

Copy-FilesIntoFoldersByMonthAndType

In my case, it generated the below folder structure:

FolderStucture

Far more manageable than one flat 38.8 GB folder, for sure!

Until the next one.

Performance differences when using AVX instructions

Download source code from here

Recent news on exploits on both Meltdown and Spectre got me thinking and researching a bit more in depth on Assembly. I ended up reading on the differences and performance gains when using SIMD instructions versus naive implementations. Let’s briefly discuss what SIMD is.

SIMD (Single Instruction, Multiple Data) is the process of piping vector data through a single instruction, effectively speeding up calculations significantly. Given that a single SIMD instruction can process a larger amount of data in parallel, SIMD provides a significant performance boost when used. Real-life applications of SIMD are varied, ranging from image processing and audio processing to graphics generation.

Let’s investigate the real performance gains when using SIMD instructions – in this case we’ll be using AVX (Advanced Vector Extensions), which provides newer SIMD instructions. We’ll be using several SIMD instructions: VADDPS, VSUBPS, VMULPS and VDIVPS, which add, subtract, multiply and divide single-precision numbers (floats) respectively.

In reality, we will not be writing any assembly at all; we’ll be using intrinsics, which ship directly with any decent C/C++ compiler. For our example we’ll be using the MSVC compiler, but any decent compiler will do. The Intel Intrinsics Guide provides a very good way to look up any intrinsic functions one may need, removing the need to write assembly – just C code.

There are two benchmarks for each arithmetic operation: one is done naively and one is done using intrinsics, thus using the corresponding AVX instruction. Each operation is performed 200,000,000 times to make sure that there is enough work for a meaningful benchmark.
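The timing code itself isn’t shown in this post; a minimal sketch of how each run could be measured (using the standard clock() – the actual benchmark may well use a different timer) might look like this:

#include <stdio.h>
#include <time.h>

// Runs one benchmark function and prints the elapsed time in milliseconds.
// The function pointer signature matches the benchmark functions below.
void RunBenchmark(const char *name, void (*benchmark)(int), int iterations)
{
    clock_t start = clock();
    benchmark(iterations);
    clock_t end = clock();
    printf("%s: %.0f ms\n", name, (double)(end - start) * 1000.0 / CLOCKS_PER_SEC);
}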

Here’s an example of how the multiplication is implemented naively:

void DoNaiveMultiplication(int iterations)
{
    float z[8];

    for (int i = 0; i < iterations; i++)
    {
        z[0] = x[0] * y[0];
        z[1] = x[1] * y[1];
        z[2] = x[2] * y[2];
        z[3] = x[3] * y[3];
        z[4] = x[4] * y[4];
        z[5] = x[5] * y[5];
        z[6] = x[6] * y[6];
        z[7] = x[7] * y[7];
    }
}

Here's an example of how the multiplication is implemented in AVX:

void DoAvxMultiplication(int iterations)
{
    // x and y are float arrays, so they can be passed to the load intrinsic directly
    __m256 x256 = _mm256_loadu_ps(x);
    __m256 y256 = _mm256_loadu_ps(y);
    __m256 result;

    for (int i = 0; i < iterations; i++)
    {
        result = _mm256_mul_ps(x256, y256);
    }
}
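For completeness, the addition benchmark would look almost identical – only the intrinsic changes from _mm256_mul_ps to _mm256_add_ps. Here’s a self-contained sketch (the real benchmark reuses shared x and y arrays rather than defining them here):

#include <immintrin.h>

// Eight single-precision values per operand: one AVX register's worth.
static float x[8] = { 1, 2, 3, 4, 5, 6, 7, 8 };
static float y[8] = { 8, 7, 6, 5, 4, 3, 2, 1 };

void DoAvxAddition(int iterations)
{
    __m256 x256 = _mm256_loadu_ps(x);
    __m256 y256 = _mm256_loadu_ps(y);
    __m256 result;

    for (int i = 0; i < iterations; i++)
    {
        // VADDPS: adds all eight float pairs in a single instruction.
        result = _mm256_add_ps(x256, y256);
    }
}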

Finally, let's take a look at how the results look:

naivevsavx

 

AVXPerformanceGains
Performance gains when using AVX

From the graph above, one can see that when optimizing from naive to AVX, there are the following gains:

  • Addition: 217% faster – from 1141ms to 359ms
  • Subtraction: 209% faster – from 1110ms to 359ms
  • Multiplication: 221% faster – from 1156ms to 360ms
  • Division: 300% faster – from 2687ms to 672ms

Of course, the benchmarks show best-case scenarios, so real-life mileage may vary. These benchmarks can be downloaded and tested out from here. Kindly note that you’ll need either an Intel CPU from 2011 onwards (Sandy Bridge) or an AMD processor from 2011 onwards (Bulldozer) in order to run the benchmarks.

 

 

I hate it when my laptop’s fan switches on – here’s how I solved it (Part 1)!

I’ve made it a point to buy my laptop equipped with an Intel U-based CPU – this is to make sure that my laptop is as light, power efficient and quiet as possible. My HP Spectre X360 does all of this; well, almost. It’s light (around 1.3 kg) and power efficient (8-10+ hours of battery), but it is not the quietest laptop on the planet.

When the laptop has a relatively moderate task to process, it ramps up the CPU clock to full (3.5 GHz). That’s great, except for the fact that high clocks generate a lot of heat. When the threshold temperature is constantly exceeded (in my laptop’s case, around 50°C), the fan needs to kick in in order to manage thermals.

There’s nothing wrong with that; the laptop functions perfectly. What I’d like is to do all these tasks whilst the laptop remains cool and only requires passive cooling. How can this be achieved? By lowering the maximum CPU clock, of course!

What I ended up doing is setting the maximum CPU state to 45% (around 1.6 GHz) instead of 100%. This means that tasks run slightly slower, but the laptop runs much cooler. Even better, most of the time the performance cost is not felt, since the tasks do not actually max out the CPU; a lower CPU clock is sufficient!

For now, I’ve solved it naively – setting this as a fixed value is not the most efficient. There are times when my laptop is running well below the threshold temperature where the fan needs to kick in. A more intelligent solution is to adjust the clocks on the fly so that the laptop maintains a target temperature, much like how NVIDIA’s GPU Boost works.

This is very easy to set up – this can be accessed through the Windows Power Options. Here’s a step by step guide.

Power Options
1) Right click the battery icon – select Power Options

 

Change Plan Settings
2) Select your desired power plan and select Change plan settings

 

Change Advanced Power Settings
3) Select Change Advanced Power Settings

 

Max Processor State
4) Scroll down, open Processor power management, open Maximum processor state, and type your maximum value (e.g. 45%)

That’s it! Next time, we’ll see how we can do all this programmatically, through WinAPI.

Until the next one.

Exception Filtering in C#

What do you do when you have a piece of code that can fail, and when it fails, you need to log to a database? You wrap your code in a try-catch block and chuck a Log call in the catch block. That’s all good! What if I told you that there is a better way to do it?

try
{
    // Code that might fail
}
catch(Exception ex)
{
    // Handle
    // Log to database
}

What’s the problem with the typical approach?

When your code enters a catch block, the stack unwinds. This refers to the process where the stack goes backwards / upwards in order to arrive at the stack frame where the original call is located. Wikipedia explains this in a bit more detail. What this means is that we might lose information about the original stack location. If a catch block is entered just to log to the database and the exception is then re-thrown, we’re losing vital information needed to discover where the issue exists; this is especially true in release / live environments.

What’s the way forward?

C# 6 offers the Exception Filtering concept; here’s how to use it.

try
{
    //Code
}
catch (FancyException fe) when (fe.ErrorCode > 0)
{
    //Handle
}

The above catch block won’t be executed if the ErrorCode property of the exception is not greater than zero. Brilliant – we can now introduce logic without interfering with the catch mechanism and without unwinding the stack!

A more advanced example

Let’s now go and see a more advanced example. The application below accepts input from the console – when the input length is zero, an exception with code 0 is raised; otherwise, an exception with code 1 is raised. Any time an exception is raised, the application logs it. However, the exception is only caught if the ErrorCode is greater than 0. The complete application is on GitHub.


class Program
{
    static void Main(string[] args)
    {
        while (true)
        {
            new FancyRepository().GetCatchErrorGreaterThanZero(Console.ReadLine());
        }
    }
}

public class FancyRepository
{
    public string GetCatchErrorGreaterThanZero(string value)
    {
        try
        {
            return GetInternal(value);
        }
        catch (FancyException fe) when (LogToDatabase(fe.ErrorCode) || fe.ErrorCode > 0)
        {
            throw;
        }
    }

    private string GetInternal(string value)
    {
        if (!value.Any())
           throw new FancyException(0);

        throw new FancyException(1);
    }

    private bool LogToDatabase(int errorCode)
    {
        Console.WriteLine($"Exception with code {errorCode} has been logged");
        return false;
    }
}

 

1st Scenario – Triggering the filter

In the first scenario, when the exception is thrown by the GetInternal method, the filter successfully executes and prevents the code from entering the catch statement. This can be seen from the fact that Visual Studio breaks on the throw new FancyException(0); line rather than on the throw; line. This means that the stack has not been unwound; this can be proven by the fact that we can still investigate the randomNumber value. The call stack is fully preserved – we can go through each frame and investigate the data in each stack frame.

1

2nd Scenario – Triggering the catch

In the second scenario, when the exception is thrown by the GetInternal method, the filter does not handle it since the ErrorCode is greater than 0. This means that the catch statement is executed and the error is re-thrown. In the debugger, we can see this because Visual Studio breaks on the throw; line rather than the throw new FancyException(1); line. This means that we’ve lost a stack frame; it is impossible to investigate the randomNumber value, since the stack has been unwound to the GetCatchErrorGreaterThanZero call.

2catch

What’s happening under the hood?

As one can assume, the underlying code being generated must differ at an IL level, since the stack is not being unwound. And one would assume right – the when keyword is being translated into the filter instruction.

Let’s take two try-catch blocks, and see their equivalent IL.


try
{
    throw new Exception();
}
catch(Exception ex)
{

}

Generates

 .try
 {
     IL_0003: nop
     IL_0004: newobj instance void [mscorlib]System.Exception::.ctor()
     IL_0009: throw
 } // end .try
 catch [mscorlib]System.Exception
 {
     IL_000a: stloc.1
     IL_000b: nop
     IL_000c: nop
     IL_000d: leave.s IL_000f
 } // end handler

The next one is just like the previous, but it introduces a filter to check on some value on whether it’s equivalent to 1.


try
{
    throw new Exception();
}
catch(Exception ex) when(value == 1)
{

}

Generates

.try
 {
     IL_0010: nop
     IL_0011: newobj instance void [mscorlib]System.Exception::.ctor()
     IL_0016: throw
 } // end .try
 filter
 {
     IL_0017: isinst [mscorlib]System.Exception
     IL_001c: dup
     IL_001d: brtrue.s IL_0023
     IL_001f: pop
     IL_0020: ldc.i4.0
     IL_0021: br.s IL_002d
     IL_0023: stloc.2
     IL_0024: ldloc.0
     IL_0025: ldc.i4.1
     IL_0026: ceq
     IL_0028: stloc.3
     IL_0029: ldloc.3
     IL_002a: ldc.i4.0
     IL_002b: cgt.un
     IL_002d: endfilter
 } // end filter
 { // handler
     IL_002f: pop
     IL_0030: nop
     IL_0031: nop
     IL_0032: leave.s IL_0034
 } // end handler

Although the second example generates more IL (which is partly due to the value checking), the catch handler is only entered if the filter passes! Interestingly enough, the filter keyword is not available in C# directly; it is only available through the use of the when keyword.

Credits

This blog post would have been impossible if readers of my blog did not provide me with the necessary feedback. I understand that the first version of this post was outright wrong. I’ve taken feedback received from my readers and changed it so now it delivers the intended message. I thank all the below people.

Rachel Farrell – Introduced me to the fact that the when keyword generates the filter IL rather than just being syntactic sugar.

Ben Camilleri – Pointed out that when catching the exception, the statement should be throw; instead of throw ex; to maintain the StackTrace property properly.

Cedric Mamo – Pointed out that the logic was flawed and provided the appropriate solution in order to successfully demonstrate it using Visual Studio.

Until the next one!

Code never lies, Documentation sometimes does!

Lately, I was working on a Windows Service using C#. Having never done such using C#, I’d thought that I’d go through the documentation that Microsoft provides. I went through it in quite a breeze; my service was running in no time.

I then added some initialisation code, which means that service startup is not instant. No problem with that; in fact, the documentation has a section dedicated to this. The user’s code can provide status updates on what state the initialisation is in. Unfortunately, C# does not provide this functionality out of the box; you’ll have to call the native API to do so (through the SetServiceStatus call).

As I was going through the C# documentation of the struct that it accepts, I noticed that it does not match up with the documentation for the native API. The C# documentation says that it accepts long (64-bit) fields, whilst the native API says that it accepts DWORD (32-bit) fields. This got me thinking; is the C# documentation wrong?

I whipped up two applications: one in C++ and one in C#. I checked the size, in bytes, of the SERVICE_STATUS that SetServiceStatus expects. The answer was 28 bytes, which makes sense given that it consists of 7 DWORDs (32-bit each) – 7 * 4 = 28 bytes.

size_t sizeOfServiceStatus = sizeof(SERVICE_STATUS);
cout << "Size: " << sizeOfServiceStatus << endl;

The C# application consists of copying and pasting the example in Microsoft’s documentation. After checking the ServiceStatus struct’s size, it showed 56! Again, this was not surprising, since it consists of 6 longs (64-bit each) plus the ServiceState enum (which defaults to int, 32-bit) plus an additional 32 bits of padding – (6 * 8) + 4 + 4 = 56. Therefore, the resulting struct is 56 bytes instead of 28 bytes!

int size = Marshal.SizeOf(typeof(ServiceStatus));
Console.WriteLine("Size: " + size);

Unfortunately, this will still appear to work in code, but obviously the behaviour of this function is undefined since the data being fed in is totally out of alignment. To make matters worse, pinvoke.net reports this the same way Microsoft does, which threw me off in the beginning as well.

Naturally, fixing this issue is trivial; it’s just a matter of converting all longs to uint (since DWORD is an unsigned integer). Therefore, the example should look like the following:

public enum ServiceState
{
    SERVICE_STOPPED = 0x00000001,
    SERVICE_START_PENDING = 0x00000002,
    SERVICE_STOP_PENDING = 0x00000003,
    SERVICE_RUNNING = 0x00000004,
    SERVICE_CONTINUE_PENDING = 0x00000005,
    SERVICE_PAUSE_PENDING = 0x00000006,
    SERVICE_PAUSED = 0x00000007,
}

[StructLayout(LayoutKind.Sequential)]
public struct ServiceStatus
{
    public uint dwServiceType;
    public ServiceState dwCurrentState;
    public uint dwControlsAccepted;
    public uint dwWin32ExitCode;
    public uint dwServiceSpecificExitCode;
    public uint dwCheckPoint;
    public uint dwWaitHint;
};

 

On the usage of ‘out’ parameters

The other day, I was discussing with a colleague whether or not the usage of out parameters is OK. If I’m honest, I immediately cringed, as I am not really a fan of said keyword. But first, let’s briefly discuss how the ‘out’ parameter keyword works.

In C#, the ‘out’ keyword is used to allow a method to return multiple values. This means that the method can return data using the ‘return’ statement and modify values using the ‘out’ keyword. Why did I say modify instead of return when referring to the ‘out’ parameter? Simple: what ‘out’ does is receive a pointer to said data structure, which is then dereferenced and assigned to when a new value is set. This means that the ‘out’ keyword introduces the concept of pointers.

OK, the previous paragraph may not make much sense if you do not have any experience with unmanaged languages and pointers. And that’s exactly the main problem with the ‘out’ parameter. It’s introducing pointers without the user’s awareness.

Let’s now talk about the pattern and architecture of said ‘out’ parameter. As we said earlier, the ‘out’ keyword is used in a method to allow it to return multiple values. An ‘out’ parameter gives a guarantee that the value will get initialised by the callee and the callee does not expect the value passed as the out parameter to be initialised.  Let’s see an example:

User GetUser(int id, out string errorMessage)
{
 // User is fetched from database
 // ErrorMessage is set if there was an error fetching the user
}

This may be used as such

string errorMessage;
User user = GetUser(1, out errorMessage);

By the way, C# 7 now allows the out variable to be declared inline, looking something like this:

User user = GetUser(1, out string errorMessage);

This can easily be refactored, so that the message can be returned encapsulated within the same object. It may look something like the below:

class UserWithError
{
 User user {get; set;}
 string ErrorMessage {get; set;}
}

UserWithError GetUser(int id)
{
 // User is fetched from database
 // ErrorMessage is set if there was an error fetching the user
}

Let’s quickly go through the problems that the ‘out’ keyword exposes. Firstly, it’s not easy to discard the value. With a return value, we can easily call GetUser and ignore the result. But with the out parameter, we have to pass a string to capture the error message, even if we do not need the actual error message. Secondly, declaration is a bit more cumbersome since it needs to happen on more than one line (since we need to declare the out parameter). Although this was fixed in C# 7, there are a lot of code-bases which are not running C# 7. Lastly, this prevents the method from being declared as async.

By the way, ‘out’ raises a Code Quality warning, as defined by Microsoft’s design patterns.

Last thing I want to mention is the use of the ‘out’ keyword in the Try pattern, which returns a bool as a return type, and sets a value using the ‘out’ keyword. This is the only globally accepted pattern which makes use of the ‘out’ keyword.

int amount;
if(Int32.TryParse(amountAsString, out amount))
{
//amountAsString was indeed an integer
}

Long story short, if you want a method to return multiple values, wrap them in a class; don’t use the ‘out’ keyword.