Connecting an LCD1602 with an I2C module to your Raspberry Pi – Raspberry Pi Temperature Monitoring Part 2

The LCD1602 is a very popular LCD that can be connected to various devices, such as the Raspberry Pi. On its own, the LCD1602 is quite tricky to wire up since it requires 16 pins to be connected. The LCD1602 can also be purchased with an I2C module, which reduces the number of pins needed to just 4.

For this tutorial, we’ll be working with an LCD1602 with an I2C module. I got mine from AliExpress for around $2.50. Make sure to grab a set of jumper cables as you’ll need them to connect the LCD to the Raspberry Pi. I got mine from AliExpress as well for around $1.50.


Let’s start by wiring it up. We have 4 pins to connect – GND (ground), VCC (power, 5V), SDA (data line) and SCL (clock line). GND and VCC can be connected to any equivalent GND and 5V pin. SDA and SCL should be connected to BCM pins 2 and 3 respectively.

[Diagram and photo: LCD1602 I2C module wired to the Raspberry Pi]

If you followed Raspberry Pi Temperature Monitoring Part 1 and connected the DS18B20 temperature sensors, you should now have the following configuration.

[Diagram and photo: LCD1602 and DS18B20 sensors wired to the Raspberry Pi]

Great! We’re done on the hardware side – let’s start configuring our Raspberry Pi to communicate with our LCD.

Firstly, let’s enable I2C from the Raspberry Pi configuration. Fire up raspi-config to get started: sudo raspi-config

Now navigate to Interfacing Options => I2C => Enable I2C

[Screenshots: raspi-config Interfacing Options menu and I2C option]

Now that we’ve enabled I2C communication, it’s time to start development! We’ll need to get some tools before we start working though, so fire up a shell and input:

sudo apt-get install i2c-tools

Once that’s done, the LCD is ready to be programmed! Let’s make sure that the LCD is properly connected and working. In a shell, type:

i2cdetect -y 1

The output should be something like the below. Note the number output by the command – it will be needed later on when we try the demo code. In this case, the address is 27 (hexadecimal, i.e. 0x27).

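Here is an illustrative example of what the output might look like when the LCD’s I2C backpack responds at address 27 – your exact output will differ:

     0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f
00:          -- -- -- -- -- -- -- -- -- -- -- -- --
10: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
20: -- -- -- -- -- -- -- 27 -- -- -- -- -- -- -- --
30: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
40: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
50: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
60: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
70: -- -- -- -- -- -- -- --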

Great! Now it’s time to test out our display and see if it works! We’ll be using a GitHub library – https://github.com/albertherd/LCD1602, which has been forked from https://github.com/bitbank2/LCD1602. We’ll be using my fork since the original repository has an unresolved issue with clearing the display.

After you’ve cloned the repository into your working directory, it’s time to use the address (27 in my case) obtained earlier. Open main.c, find the call to lcd1602Init and change the second parameter. This is how it looks in my case:

lcd1602Init(1, 0x27);

Now it’s time to compile and run our code. If all goes well, we should be getting some text on the screen. You can change the text to whatever you’d like by changing the following lines in main.c.

lcd1602WriteString("BitBank LCD1602");
lcd1602SetCursor(0,1);
lcd1602WriteString("ENTER to quit");

Build and run using the following commands:

make
make -f make_demo
./demo

The screen should look like the below:

[Photo: the LCD1602 displaying the demo text]

Great! Now we’ve successfully connected our LCD1602 to our Raspberry Pi and we’re able to output content on it!

In the next part of this tutorial series, we’ll capture the temperature using the sensors from the first part of the tutorial and output it to the LCD. Stay tuned!

Connecting a DS18B20 thermal sensor to your Raspberry Pi – Raspberry Pi Temperature Monitoring Part 1

A project that I worked on during the Christmas holidays was hooking up some thermal probes to my Raspberry Pi, just to play around. This tutorial simply follows the steps that I took to achieve this.

You’ll need:

  • Raspberry Pi, any flavor as long as it has GPIO headers available. I had a Raspberry Pi 2, so I used that.
  • You’ll also need the usual suspects – a USB to micro-USB cable to hook it up to power, HDMI to connect it to a display for initial configuration, and an Ethernet connection to manage it through SSH. I highly recommend configuring SSH rather than using the device itself. This tutorial assumes you’re using SSH.
  • A DS18B20 sensor – I’d suggest getting one which includes a pluggable terminal to avoid soldering – just wire it up and you’re good to go. I got mine from AliExpress.
  • Also make sure your kit has 3 jumper cables; they are typically included. Just to be sure, I also got a set of female-to-female jumper cables from AliExpress, though I did not use them for the DS18B20 sensor.

All right, let’s wire it up! The DS18B20 sensor requires three pins – data, VCC (3.3V), and ground. Connect the wires as below. Data is yellow, VCC is red and ground is black.

[Photo: DS18B20 sensor wiring – yellow data, red VCC, black ground]

Connect the 3 pins using the jumper cables as shown below.
[Diagram and photo: DS18B20 connected to the Raspberry Pi GPIO header]

We’ll also need to instruct the Raspberry Pi that we’re going to connect the DS18B20 sensor. This sensor makes use of the 1-Wire protocol, so let’s activate it:

  • Connect to the Raspberry Pi using SSH
  • Let’s start by editing the config file that the Raspberry Pi parses every time it boots up: sudo nano /boot/config.txt
  • Go to the end of the document and input the following. Specifying gpiopin=4 is actually optional since, by convention, 1-Wire devices are expected on GPIO pin 4 on the Raspberry Pi.
    # Enable OneWire Protocol
    dtoverlay=w1-gpio,gpiopin=4
  • Time to reboot the Raspberry Pi: sudo reboot
  • Once the Raspberry Pi reboots and you re-connect using SSH, it’s time to get data from the sensor! Let’s find the 1-Wire devices connected to the system, starting by browsing to the appropriate directory. cd /sys/bus/w1/devices
  • Great! Let’s now see the devices attached to the Raspberry Pi. ls
  • This will list the devices attached using the 1-Wire protocol. You should have a device called 28-xxxxxxxxxxxx (where x stands for your unique 12-digit serial number). Let’s now browse the device. Mine is 28-02199245e07b, so let’s use it as an example. cd 28-02199245e07b
  • Once you access the device, there should be a file called w1_slave. Let’s see the contents of the file. cat w1_slave
  • The file should look like this:
    0b 01 55 05 7f 7e 81 66 bf : crc=bf YES
    0b 01 55 05 7f 7e 81 66 bf t=16687
  • If the file looks like the above, great! The temperature component is t=16687; the temperature in this case is 16.687 °C. A short example of parsing this value programmatically is sketched right after this list.
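
To make that concrete, here is a minimal sketch of how the reading could be parsed programmatically – for example with .NET Core running on the Pi. The device path uses the serial number from this post (substitute your own), and this is only an illustration, not the code we’ll build in the next part:

using System;
using System.IO;

class Ds18b20Example
{
    static void Main()
    {
        // w1_slave file of the sensor used in this post; substitute your own 28-xxxxxxxxxxxx device.
        string[] lines = File.ReadAllLines("/sys/bus/w1/devices/28-02199245e07b/w1_slave");

        // The first line ends in YES when the CRC check passed, NO otherwise.
        if (!lines[0].TrimEnd().EndsWith("YES"))
        {
            Console.WriteLine("CRC check failed - try reading again");
            return;
        }

        // The second line contains t=<temperature in thousandths of a degree Celsius>.
        int tIndex = lines[1].LastIndexOf("t=");
        double celsius = int.Parse(lines[1].Substring(tIndex + 2)) / 1000.0;
        Console.WriteLine($"Temperature: {celsius} °C"); // e.g. t=16687 -> 16.687 °C
    }
}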

We can also take this to the next level and add another thermal probe! Attach it as shown below.
[Diagram and photo: second DS18B20 sensor attached to the Raspberry Pi]

This will require re-editing /boot/config.txt. Let’s do it!

  • Re-open /boot/config.txt – sudo nano /boot/config.txt
  • Go to the end and add the following. I chose pin 24 because it’s easy to wire, since it’s close to a 3.3V pin and a ground pin. dtoverlay=w1-gpio,gpiopin=24
  • Close and save, reboot once more (sudo reboot), then cd /sys/bus/w1/devices
  • You should now see two devices named 28-xxxxxxxxxxxx

Of course, at this stage we did get the temperature, but it’s not in a really usable form. We can access this information programmatically – this is what we’ll be doing in the next part of this tutorial. We’ll also eventually show the information on a separate LCD screen! Stay tuned!

Group files into folder structure by date using PowerShell

Lately I was sorting out around 5000 files’ worth of pictures and videos taken from a mobile phone, to upload them to a cloud drive. To make this task more manageable, I wanted to separate these files, firstly by file type and then by date. This makes the folders far smaller and therefore easier to upload to the cloud, folder by folder.

Grouping several thousand files manually would be very tedious and error-prone. This task is a perfect candidate to be scripted, and this is exactly what I did. Since it may be helpful to other people facing the same task, I decided to share the script.

All the script requires is a source path and a target path – anything else is handled by the script. It makes a few assumptions, such as that it only ever copies files into the new structure and never deletes anything. Customising the script is easy and only requires basic programming knowledge.

Check it out here or below: https://github.com/albertherd/GroupFilesPowershell/blob/master/GroupFiles.ps1

Param(
    [Parameter(Mandatory=$true)]
    [string]$sourcePath,
    [Parameter(Mandatory=$true)]
    [string]$targetPath
)

$global:fileTypeLookup = @{};
$folderDateTimeFormat = "MM-yyyy"

function Copy-FilesIntoFoldersByMonthAndType{
    param()

    $files = Get-ChildItem -Recurse -File -Path $sourcePath
    $filesProcessed = 0

    foreach($file in $files){
        $folder = Get-DirectoryForFile $file
        Copy-Item $file.FullName -Destination $folder

        $filesProcessed++
        Write-Progress -Activity "Grouping files" -Status "$($filesProcessed) out of $($files.Count) grouped" -PercentComplete (($filesProcessed / $files.Count) * 100)
    }
}
function Get-DirectoryForFile{
    param($file)

    $monthYearDirLookup = Get-FilePathDictionary $file
    $modifiedTimeMonthYearInternal = $file.LastWriteTime.ToString("MMyyyy")

    if($monthYearDirLookup.ContainsKey($modifiedTimeMonthYearInternal)){
        return $monthYearDirLookup[$modifiedTimeMonthYearInternal]
    }

    $extensionWithoutDot = $file.Extension.Substring(1, $file.Extension.Length - 1)
    $dateFolderFileName = $file.LastWriteTime.ToString($folderDateTimeFormat)
    $newPath = $targetPath + "\" + $extensionWithoutDot + "\" + $dateFolderFileName
    $path = New-Item -ItemType Directory $newPath -Force

    $monthYearDirLookup[$modifiedTimeMonthYearInternal] = $path.FullName
    return $path.FullName
}

function Get-FilePathDictionary{
    param($file)

    if($global:fileTypeLookup.ContainsKey($file.Extension)){
        return $global:fileTypeLookup[$file.Extension]
    }

    $global:fileTypeLookup.Add($file.Extension, @{})
    return $global:fileTypeLookup[$file.Extension]
}

Copy-FilesIntoFoldersByMonthAndType

In my case, it generated the below folder structure:

[Screenshot: the generated folder structure]

Far more manageable than one flat 38.8 GB folder, for sure!

Until the next one.

Performance differences when using AVX instructions

Download source code from here

Recent news on the Meltdown and Spectre exploits got me thinking and researching a bit more in depth on assembly. I ended up reading about the differences and performance gains when using SIMD instructions versus naive implementations. Let’s briefly discuss what SIMD is.

SIMD (single instruction, multiple data) is the process of piping vector data through a single instruction, effectively speeding up calculations significantly. Given that a SIMD instruction can process a larger amount of data in parallel, it provides a significant performance boost when used. Real-life applications of SIMD are varied, ranging from image processing and audio processing to graphics generation.

Let’s investigate the real performance gains when using SIMD instructions – in this case we’ll be using AVX (Advanced Vector Extensions), which provides newer SIMD instructions. We’ll be using several SIMD instructions, namely VADDPS, VSUBPS, VMULPS and VDIVPS, which add, subtract, multiply and divide single-precision numbers (floats) respectively.

In reality, we will not be writing any assembly at all – we’ll be using intrinsics, which ship directly with any decent C/C++ compiler. For our example, we’ll be using the MSVC compiler, but any decent compiler will do. The Intel Intrinsics Guide provides a very good platform to look up any intrinsic functions one may need, thus removing the need to write assembly – just C code.

There are two benchmarks for each arithmetic operation: one implemented naively and one implemented using intrinsics, thus using the corresponding AVX instruction. Each operation is performed 200,000,000 times to make sure the benchmark runs long enough to produce meaningful timings.

Here’s an example of how the multiplication is implemented naively:

void DoNaiveMultiplication(int iterations)
{
    float z[8];

    for (int i = 0; i < iterations; i++)
    {
        z[0] = x[0] * y[0];
        z[1] = x[1] * y[1];
        z[2] = x[2] * y[2];
        z[3] = x[3] * y[3];
        z[4] = x[4] * y[4];
        z[5] = x[5] * y[5];
        z[6] = x[6] * y[6];
        z[7] = x[7] * y[7];
    }
}

Here's an example of how the multiplication is implemented in AVX:

void DoAvxMultiplication(int iterations)
{
    // Load the eight floats from x and y into 256-bit AVX registers (unaligned load).
    __m256 x256 = _mm256_loadu_ps(x);
    __m256 y256 = _mm256_loadu_ps(y);
    __m256 result;

    for (int i = 0; i < iterations; i++)
    {
        // A single instruction multiplies all eight pairs of floats at once.
        result = _mm256_mul_ps(x256, y256);
    }
}

Finally, let's take a look at how the results look:

[Chart: performance gains when using AVX – naive vs AVX timings]

From the graph above, one can see that when optimizing from naive to AVX, there are the following gains:

  • Addition: 217% faster – from 1141ms to 359ms
  • Subtraction: 209% faster – from 1110ms to 359ms
  • Multiplication: 221% faster – from 1156ms to 360ms
  • Division: 300% faster – from 2687ms to 672ms

Of course, the benchmarks show best-case scenarios, so real-life mileage may vary. These benchmarks can be downloaded and tested out from here. Kindly note that you’ll need either an Intel CPU from 2011 onwards (Sandy Bridge) or an AMD processor from 2011 onwards (Bulldozer) in order to be able to run the benchmarks.

I hate it when my laptop’s fan switches on – here’s how I solved it (Part 1)!

I’ve made it a point to buy my laptops equipped with Intel U-based CPUs – this is to make sure that my laptop is as light, power efficient and quiet as possible. My HP Spectre X360 does all of this; well, almost. It’s light (around 1.3kg) and power efficient (8-10+ hours of battery), but it is not the quietest laptop on the planet.

When the laptop has a relatively moderate task to process, it ramps the CPU up to its full clock speed (3.5 GHz). That’s great, except for the fact that high clocks generate a lot of heat. When the threshold temperature is constantly exceeded (in my laptop’s case, around 50°C), the fan needs to kick in in order to manage thermals.

There’s nothing wrong with that; the laptop functions perfectly. What I’d like is to do all these tasks whilst the laptop remains cool and requires only passive cooling. How can this be achieved? By lowering the maximum CPU clock, of course!

What I ended up doing is setting the maximum processor state to 45% (around 1.6 GHz) instead of 100%. This means that tasks run slightly slower, but the laptop runs way cooler. Even better, most of the time the performance cost is not felt, since the tasks do not actually max out the CPU; a lower CPU clock is sufficient!

For now, I’ve solved it naively – setting this as a fixed value is not the most efficient approach. There are times when my laptop is running well below the threshold temperature at which the fan needs to kick in. A more intelligent solution would be to adjust the clock speed on the fly so that the laptop maintains a target temperature, much like how NVIDIA’s GPU Boost works.

This is very easy to set up – it can be done through the Windows Power Options. Here’s a step-by-step guide.

1) Right click the battery icon – select Power Options

2) Select your desired power plan and select Change plan settings

3) Select Change Advanced Power Settings

4) Scroll down, open Processor power management, open Maximum processor state, and type your maximum value (e.g. 45%)

That’s it! Next time, we’ll see how we can do all this programmatically, through WinAPI.

Until the next one.

Exception Filtering in C#

What do you do when you have a piece of code that can fail, and when it fails, you need to log to a database? You wrap your code in a try-catch block and chuck a Log call in the catch block. That’s all good! What if I told you that there is a better way to do it?

try
{
    // Code that might fail
}
catch(Exception ex)
{
    // Handle
    // Log to database
}

What’s the problem with the typical approach?

When your code enters a catch block, the stack unwinds. This refers to the process of the stack going backwards / upwards in order to arrive at the stack frame where the original call is located. Wikipedia explains this in a bit more detail. What this means is that we might lose information about the original stack location. If a catch block is entered just to log to the database and the exception is then re-thrown, we lose vital information for discovering where the issue lies; this is especially true in release / live environments.

What’s the way forward?

C# 6 offers the Exception Filtering concept; here’s how to use it.

try
{
    //Code
}
catch (FancyException fe) when (fe.ErrorCode > 0)
{
    //Handle
}

The above catch block won’t be executed if the ErrorCode property of the exception is not greater than zero. Brilliant – we can now introduce logic without interfering with the catch mechanism and without unwinding the stack!

A more advanced example

Let’s now look at a more advanced example. The application below accepts input from the console – when the input length is zero, an exception with code 0 is raised, otherwise an exception with code 1 is raised. Any time an exception is raised, the application logs it; however, the exception is only caught if the ErrorCode is greater than 0. The complete application is on GitHub.


class Program
{
    static void Main(string[] args)
    {
        while (true)
        {
            new FancyRepository().GetCatchErrorGreaterThanZero(Console.ReadLine());
        }
    }
}

public class FancyRepository
{
    public string GetCatchErrorGreaterThanZero(string value)
    {
        try
        {
            return GetInternal(value);
        }
        catch (FancyException fe) when (LogToDatabase(fe.ErrorCode) || fe.ErrorCode > 0)
        {
            throw;
        }
    }

    private string GetInternal(string value)
    {
        if (!value.Any())
           throw new FancyException(0);

        throw new FancyException(1);
    }

    private bool LogToDatabase(int errorCode)
    {
        Console.WriteLine($"Exception with code {errorCode} has been logged");
        return false;
    }
}

 

1st Scenario – Triggering the filter

In the first scenario, when the exception is thrown by the GetInternal method, the filter evaluates to false and prevents the code from entering the catch block. This can be seen from the fact that Visual Studio breaks on the throw new FancyException(0); line rather than on the throw; line. This means that the stack has not been unwound; as proof, we can still inspect the value that was passed into the method. The call stack is fully preserved – we can go through each frame and inspect the data in each one.


2nd Scenario – Triggering the catch

In the second scenario, when the exception is thrown by the GetInternal method, the filter evaluates to true because the ErrorCode is greater than 0. This means that the catch block is executed and the exception is re-thrown. In the debugger, we can see this because Visual Studio breaks on the throw; line rather than on the throw new FancyException(1); line. This means that we’ve lost a stack frame; it is impossible to inspect the original input value, since the stack has been unwound to the GetCatchErrorGreaterThanZero call.


What’s happening under the hood?

As one can assume, the underlying code being generated must differ at the IL level, since the stack is not being unwound. And one would assume right – the when keyword is translated into an IL filter clause.

Let’s take two try-catch blocks, and see their equivalent IL.


try
{
    throw new Exception();
}
catch(Exception ex)
{

}

Generates

 .try
 {
     IL_0003: nop
     IL_0004: newobj instance void [mscorlib]System.Exception::.ctor()
     IL_0009: throw
 } // end .try
 catch [mscorlib]System.Exception
 {
     IL_000a: stloc.1
     IL_000b: nop
     IL_000c: nop
     IL_000d: leave.s IL_000f
 } // end handler

The next one is just like the previous, but it introduces a filter that checks whether some value is equal to 1.


try
{
    throw new Exception();
}
catch(Exception ex) when(value == 1)
{

}

Generates

.try
 {
     IL_0010: nop
     IL_0011: newobj instance void [mscorlib]System.Exception::.ctor()
     IL_0016: throw
 } // end .try
 filter
 {
     IL_0017: isinst [mscorlib]System.Exception
     IL_001c: dup
     IL_001d: brtrue.s IL_0023
     IL_001f: pop
     IL_0020: ldc.i4.0
     IL_0021: br.s IL_002d
     IL_0023: stloc.2
     IL_0024: ldloc.0
     IL_0025: ldc.i4.1
     IL_0026: ceq
     IL_0028: stloc.3
     IL_0029: ldloc.3
     IL_002a: ldc.i4.0
     IL_002b: cgt.un
     IL_002d: endfilter
 } // end filter
 { // handler
     IL_002f: pop
     IL_0030: nop
     IL_0031: nop
     IL_0032: leave.s IL_0034
 } // end handler

Although the second example generates more IL (which is partly due to the value checking), it does not enter the catch block when the filter evaluates to false! Interestingly enough, the filter construct is not available in C# directly – it is only accessible through the when keyword.

Credits

This blog post would have been impossible without the feedback provided by my readers. The first version of this post was outright wrong; I’ve incorporated the feedback received and changed it so that it now delivers the intended message. I thank all the people below.

Rachel Farrell – Introduced me to the fact that the when keyword generates the filter IL rather than just being syntactic sugar.

Ben Camilleri – Pointed out that when catching the exception, the statement should be throw; instead of throw ex; to maintain the StackTrace property properly.

Cedric Mamo – Pointed out that the logic was flawed and provided the appropriate solution in order to successfully demonstrate it using Visual Studio.

Until the next one!

Code never lies, Documentation sometimes does!

Lately, I was working on a Windows Service in C#. Having never written one in C#, I thought I’d go through the documentation that Microsoft provides. I went through it in quite a breeze; my service was running in no time.

I then added some initialisation code, which means that service startup is not instant. No problem with that; in fact, the documentation has a section dedicated to exactly this. The user’s code can report the state that the initialisation is in. Unfortunately, C# does not provide this functionality out of the box; you’ll have to call the native API to do so (through the SetServiceStatus call).

As I was going through the C# documentation of the struct that it accepts, I noticed that it does not match up with the documentation for the native API. The C# documentation says that the fields are long (64-bit), whilst the native API says that they are DWORD (32-bit). This got me thinking: is the C# documentation wrong?

I whipped up two applications: one in C++ and one in C#. In the C++ one, I checked the size, in bytes, of the SERVICE_STATUS structure that SetServiceStatus expects. The answer was 28 bytes, which makes sense given that it consists of 7 DWORDs (32-bit each): 7 * 4 = 28 bytes.

size_t sizeOfServiceStatus = sizeof(SERVICE_STATUS);
cout << "Size: " << sizeOfServiceStatus << endl;

The C# application consists of copying and pasting the example from Microsoft’s documentation. Checking the ServiceStatus struct’s size showed 56! Again, this was not surprising, since it consists of 6 longs (64-bit each), plus the ServiceState enum (which defaults to int, 32 bits), plus an additional 32 bits of padding: (6 * 8) + 4 + 4 = 56. The resulting struct is therefore 56 bytes instead of 28 bytes!

int size = Marshal.SizeOf(typeof(ServiceStatus));
Console.WriteLine("Size: " + size);

Unfortunately, this will still appear to work in code, but the behaviour of the function is effectively undefined, since the data being fed in is completely misaligned. To make matters worse, pinvoke.net reports the same definition as Microsoft does, which threw me off in the beginning as well.

Naturally, fixing this issue is trivial; it’s just a matter of converting all the longs to uint (since a DWORD is an unsigned 32-bit integer). The example should therefore look like the following:

public enum ServiceState
{
    SERVICE_STOPPED = 0x00000001,
    SERVICE_START_PENDING = 0x00000002,
    SERVICE_STOP_PENDING = 0x00000003,
    SERVICE_RUNNING = 0x00000004,
    SERVICE_CONTINUE_PENDING = 0x00000005,
    SERVICE_PAUSE_PENDING = 0x00000006,
    SERVICE_PAUSED = 0x00000007,
}

[StructLayout(LayoutKind.Sequential)]
public struct ServiceStatus
{
    public uint dwServiceType;
    public ServiceState dwCurrentState;
    public uint dwControlsAccepted;
    public uint dwWin32ExitCode;
    public uint dwServiceSpecificExitCode;
    public uint dwCheckPoint;
    public uint dwWaitHint;
}
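
To tie this together, here is roughly how the corrected struct could be fed to SetServiceStatus from a ServiceBase-derived service – a sketch following the same shape as Microsoft’s sample; the MyService class name is just an example:

using System;
using System.Runtime.InteropServices;
using System.ServiceProcess;

public class MyService : ServiceBase
{
    [DllImport("advapi32.dll", SetLastError = true)]
    private static extern bool SetServiceStatus(IntPtr handle, ref ServiceStatus serviceStatus);

    protected override void OnStart(string[] args)
    {
        // Tell the Service Control Manager that start-up is in progress and
        // may take up to 100 seconds (dwWaitHint is in milliseconds).
        ServiceStatus serviceStatus = new ServiceStatus();
        serviceStatus.dwCurrentState = ServiceState.SERVICE_START_PENDING;
        serviceStatus.dwWaitHint = 100000;
        SetServiceStatus(this.ServiceHandle, ref serviceStatus);

        // ... lengthy initialisation goes here ...

        // Report that the service has finished starting up.
        serviceStatus.dwCurrentState = ServiceState.SERVICE_RUNNING;
        SetServiceStatus(this.ServiceHandle, ref serviceStatus);
    }
}

With the struct now 28 bytes, the data passed to the native call is correctly aligned.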

 

On the usage of ‘out’ parameters

The other day, I was discussing with a colleague whether or not the usage of out parameters is OK. If I’m honest, I immediately cringed, as I am not really a fan of said keyword. But first, let’s briefly discuss how the ‘out’ keyword works.

In C#, the ‘out’ keyword is used to allow a method to return multiple values. This means that the method can return data using the ‘return’ statement and also modify values using the ‘out’ keyword. Why did I say modify instead of return when referring to the out parameter? Simply because what ‘out’ does is receive a pointer to said variable, which is then dereferenced and written to when a new value is assigned. This means that the ‘out’ keyword introduces the concept of pointers.

OK, the previous paragraph may not make much sense if you do not have any experience with unmanaged languages and pointers. And that’s exactly the main problem with the ‘out’ parameter: it introduces pointers without the user’s awareness.

Let’s now talk about the pattern and architecture of said ‘out’ parameter. As we said earlier, the ‘out’ keyword is used in a method to allow it to return multiple values. An ‘out’ parameter guarantees that the value will be initialised by the callee, and the callee does not expect the value passed in to be initialised. Let’s see an example:

User GetUser(int id, out string errorMessage)
{
 // User is fetched from database
 // ErrorMessage is set if there was an error fetching the user
}

This may be used as such:

string errorMessage;
User user = GetUser(1, out errorMessage);

By the way, C# 7 now allows the out variable to be declared inline, looking something like this:

User user = GetUser(1, out string errorMessage);

This can easily be refactored, so that the message can be returned encapsulated within the same object. It may look something like the below:

class UserWithError
{
    public User User { get; set; }
    public string ErrorMessage { get; set; }
}

UserWithError GetUser(int id)
{
 // User is fetched from database
 // ErrorMessage is set if there was an error fetching the user
}

Let’s quickly go through the problems that the ‘out’ keyword exposes. Firstly, it’s not easy to discard the returned value: with a normal return value we can simply call GetUser and ignore the result, but with an out parameter we still have to pass a string to capture the error message, even if we don’t need it. Secondly, declaration is a bit more cumbersome, since it needs to happen on more than one line (we need to declare the out variable first). Although this was fixed in C# 7, there are a lot of code-bases which are not on C# 7 yet. Lastly, out parameters cannot be used in async methods.

By the way, using ‘out’ parameters raises a Code Analysis warning, as defined by Microsoft’s design guidelines.

The last thing I want to mention is the use of the ‘out’ keyword in the Try pattern, where a method returns a bool and sets a value through an out parameter. This is the only widely accepted pattern which makes use of the ‘out’ keyword.

int amount;
if (Int32.TryParse(amountAsString, out amount))
{
    // amountAsString was indeed an integer
}

Long story short: if you want a method to return multiple values, wrap them in a class; don’t use the ‘out’ keyword.

On the usage of bool in method parameters

The number of times that I encountered a piece of code like the following is quite alarming:

Transaction transaction =
    TransactionFactory.CreateTransaction(
        true,
        true,
        false,
        true);

What do those true, true, false, true mean? We’ll need to view the method signature each time we want to understand what those four booleans represent! This means that anyone trying to skim through your code will have a bad time. Side note: if you’re writing method calls with parameters like these, you seriously need to consider re-thinking such calls.

Let’s try to improve the readability of that code a bit:

Transaction transaction =
    TransactionFactory.CreateTransaction(
        true /* postInSage */,
        true /* isPaidInFull */,
        false /* recurringTransaction */,
        true /* sendEmailReceipt */);

What did we do here? We added a comment next to each boolean so that, when reading the code, the reader can quickly identify what each boolean signifies. Neat – we’ve improved the readability a lot! Microsoft developers seem to like doing it this way; a quick look at the .NET Framework source will show you some good examples, such as here, here and here.

But what happens if the order of the booleans changes? Apart from breaking functionality, the comments will not update themselves to reflect the new API call. As they say: comments sometimes lie, code never does.

Instead of documenting the parameter names with comments, C# offers the facility of naming your arguments. This means that you can choose to ignore the order of the parameters and simply prefix each value with the name of the parameter. Let’s apply it to our example.

Transaction transaction =
    TransactionFactory.CreateTransaction(
        postInSage: true,
        isPaidInFull: true,
        recurringTransaction: false,
        sendEmailReceipt: true);

That’s looking great! We can even improve it a bit by defaulting all boolean parameters to false, so that we only pass the booleans which should be true.

Now, the method signature will look like this:

CreateTransaction(
    bool postInSage = false,
    bool isPaidInFull = false,
    bool recurringTransaction = false,
    bool sendEmailReceipt = false) 

The method call will look like this:

Transaction transaction =
    TransactionFactory.CreateTransaction(
        postInSage: true,
        isPaidInFull: true,
        sendEmailReceipt: true); 

We can also take a totally different approach: eliminate the use of boolean parameters altogether and introduce enums, specifically enum flags. This means that when we call the CreateTransaction method, we’ll simply pass the required flags. In case you forgot, here’s a quick refresher on how flags work. The call will look something like this:

Transaction transaction =
    TransactionFactory.CreateTransaction(
        TransactionFlags.PostInSage |
        TransactionFlags.IsPaidInFull |
        TransactionFlags.SendEmailReceipt);

Not bad! When you read that piece of code, you can easily identify the options that should be taken into consideration when creating the transaction. We ended up eliminating the need for booleans in favour of flags.
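
For reference, the post does not define TransactionFlags, but a flags enum backing that call could be declared along these lines – each member gets its own power-of-two value so that members can be combined with the | operator:

[Flags]
public enum TransactionFlags
{
    None = 0,
    PostInSage = 1,
    IsPaidInFull = 2,
    RecurringTransaction = 4,
    SendEmailReceipt = 8
}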

Does this mean that booleans should never be used as parameters? Of course not. I just wanted to shed some light on the fact that there are approaches that make writing and consuming such APIs far more readable.

Handling uncaught exceptions in a SharePoint environment through customErrors in web.config

Handling exceptions in a SharePoint environment is quite straightforward; it’s an out-of-the-box feature. Any unhandled exception is caught by the SharePoint environment and a pretty “Sorry, something went wrong” screen is shown.

Generic error when an unhandled exception is caught

That’s all great! Things can get a bit tougher if exceptions occur outside the SharePoint context. How can this happen? In my case, it happens when custom code is loaded early in the request pipeline (through HttpModules) and a DLL fails to load. At that point, there is no SharePoint context to catch the exception, so it remains unhandled. By default, unhandled exceptions generate the Yellow Screen of Death.
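
To make the scenario concrete, here is an illustrative sketch (not taken from the actual solution) of the kind of custom HttpModule that can fail before SharePoint gets involved – the assembly name is made up:

using System.Reflection;
using System.Web;

public class CustomStartupModule : IHttpModule
{
    public void Init(HttpApplication context)
    {
        // If this load fails, the exception is thrown while the ASP.NET pipeline
        // is still being built - before any SharePoint context exists - so nothing
        // catches it and the Yellow Screen of Death is shown.
        Assembly.Load("My.Custom.Dependency");
    }

    public void Dispose()
    {
    }
}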

The ASP.NET Yellow Screen of Death

That’s not something we should be displaying, right? Of course not! First and foremost, we’re showing the actual code to the public – that’s never good. Even if we manage to turn off the details in such exceptions, it’s still terrible practice to show such a screen to the public.

Alright then, let’s use a custom page to show our error! This is where it gets a bit trickier. Sadly enough, we cannot safely and reliably meddle with the Global.asax file to globally catch errors and handle them nicely. We need to think of something else.

Whilst I was trying to tackle this issue, my obvious guess was to amend the web.config – more specifically, the customErrors section. My first attempt at solving this was by creating a simple custom error page, putting it in my IIS folder and configuring the customErrors section to load this page. My web.config looked something like this:

<customErrors defaultRedirect="errorPage.html" mode="On" />

After trying that out, I was still stuck with the same Yellow Screen of Death. What was happening? It turns out that when an error is thrown, a redirect to my custom page does indeed happen. But when this page tries to load, it attempts to re-load all my HttpModules, which fail and throw a new exception all over again. This means that this approach could never work.

In order to solve this, I had to take some additional steps. The solution is to create a new site in IIS that is designed specifically to handle such error cases. Let’s go through the steps to carry this out.

These steps assume that you are in possession of a functional SharePoint on-premise solution.

1) Create a friendly error page to display to the user when something goes wrong

In order to keep this as simple as possible, we’re just going to display a header saying that something went wrong – no fancy images or CSS. You could be a bit more creative if you want, but for the sake of this blog post, this is fine.

<html>
<body>
<h1>Sorry, something went wrong</h1>
</body>
</html>

2) Create a new IIS Site to host our newly created HTML

This site will simply be used to host the newly created HTML. We can opt to create this site either in the same application pool or in a new one; it does not really matter in this case. I’ll just follow the default configuration provided by IIS.

Adding a new site in IIS

3) Map your newly created IIS site to a subdomain.

Since we still need to use the same external port, we need to make sure that our newly created IIS site is accessible from the outside. We need to create bindings in IIS to make this possible. If you’re running HTTPS (please do), you’ll need a wildcard certificate and IIS 7.5 or higher.

Editing site bindings for our new IIS site

4) Assign permissions to Everyone on the newly created Site.

Creating the site is not enough; we need to allow everyone to access it. We’ll use one of the out-of-the-box special identities provided – the “Everyone” identity – so that anyone will be able to access it.

Step 1: Right click your site -> Edit Permissions

Step 2: In the Security tab, press Edit

Step 3: In this new window, hit Add. In the Object name, type “Everyone”

Step 4: Assign Read permission to “Everyone”

5) Configure the customErrors tag in your SharePoint site to point to the newly created Site

All we need now is to point the customErrors tag to the newly created site.


<customErrors defaultRedirect="https://error.mysharepointsite.com" mode="On" />

 

All done! Keep in mind that this page is used in very special circumstances: when exceptions occur before the SharePoint context has kicked in. All other typical exceptions are handled by the default out-of-the-box SharePoint behaviour.